diff --git a/.gitignore b/.gitignore
index 2398ab2..62c8935 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,10 +1 @@
-
-navidrome/
-music/
-.idea/
-
-utils/__pycache__/
-
-*.pyc
-
-geckodriver.log
+.idea/
\ No newline at end of file
diff --git a/database/album.json b/database/album.json
deleted file mode 100644
index 738f013..0000000
--- a/database/album.json
+++ /dev/null
@@ -1,5 +0,0 @@
-{
-    "version": 2,
-    "keys": [],
-    "data": {}
-}
\ No newline at end of file
diff --git a/docker-compose.yml b/docker-compose.yml
index c70fc6b..c81a5f1 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -1,15 +1,127 @@
-version: '2'
+networks:
+  laravel:
+
 services:
-  web:
-    build: .
+
+  nginx:
+    build:
+      context: ./nginx/dockerfiles
+      dockerfile: nginx.dockerfile
+      args:
+        - UID=${UID:-1000}
+        - GID=${GID:-1000}
+    ports:
+      - "80:80"
+    volumes:
+      - ./php/src:/var/www/html:delegated
+    depends_on:
+      - php
+      - redis
+      - mysql
+      - mailhog
+    networks:
+      - laravel
+
+  php:
+    build:
+      context: ./php/dockerfiles
+      dockerfile: php.dockerfile
+      args:
+        - UID=${UID:-1000}
+        - GID=${GID:-1000}
+    ports:
+      - "9000:9000"
+    volumes:
+      - ./php/src:/var/www/html:delegated
+      - ./shared:/var/
+    networks:
+      - laravel
+
+  python:
+    build: ./python
     command: python3 app.py
     ports:
-      - "80:5000"
+      - "5000:5000"
     volumes:
-      - .:/code:Z
-      - ./user:/home/app:Z
+      - ./python:/code:Z
+      - ./php/src:/var/www/html:delegated
+      - ./shared:/var/
     depends_on:
       - redis
+    networks:
+      - laravel
+
 redis:
-    image: redis
+    image: redis:alpine
+    restart: unless-stopped
+    ports:
+      - "6379:6379"
+    networks:
+      - laravel
+
+  composer:
+    build:
+      context: ./php/dockerfiles
+      dockerfile: php.dockerfile
+      args:
+        - UID=${UID:-1000}
+        - GID=${GID:-1000}
+    volumes:
+      - ./php/src:/var/www/html
+    depends_on:
+      - php
+    entrypoint: [ 'composer', '--ignore-platform-reqs' ]
+    networks:
+      - laravel
+
+  npm:
+    image: node:current-alpine
+    volumes:
+      - ./php/src:/var/www/html
+    ports:
+      - "3000:3000"
+      - "3001:3001"
+      - "5173:5173"
+    working_dir: /var/www/html
+    entrypoint: [ 'npm' ]
+    networks:
+      - laravel
+
+  mysql:
+    image: mariadb:10.6
+    restart: unless-stopped
+    tty: true
+    ports:
+      - "3306:3306"
+    environment:
+      MYSQL_DATABASE: homestead
+      MYSQL_USER: homestead
+      MYSQL_PASSWORD: secret
+      MYSQL_ROOT_PASSWORD: secret
+      SERVICE_TAGS: dev
+      SERVICE_NAME: mysql
+    networks:
+      - laravel
+
+  artisan:
+    build:
+      context: ./php/dockerfiles
+      dockerfile: php.dockerfile
+      args:
+        - UID=${UID:-1000}
+        - GID=${GID:-1000}
+    volumes:
+      - ./php/src:/var/www/html:delegated
+    depends_on:
+      - mysql
+    entrypoint: [ 'php', '/var/www/html/artisan' ]
+    networks:
+      - laravel
+
+  mailhog:
+    image: mailhog/mailhog:latest
+    ports:
+      - "1025:1025"
+      - "8025:8025"
+    networks:
+      - laravel
diff --git a/navidrome/cache/images/01/dc/01dc1df46675b3a4cc4149dfff3a2bad485d141e b/navidrome/cache/images/01/dc/01dc1df46675b3a4cc4149dfff3a2bad485d141e
deleted file mode 100644
index 4bc10d7..0000000
Binary files a/navidrome/cache/images/01/dc/01dc1df46675b3a4cc4149dfff3a2bad485d141e and /dev/null differ
diff --git a/navidrome/cache/images/02/61/02610867f561bbd07db3f2393e4bce269c12a7db b/navidrome/cache/images/02/61/02610867f561bbd07db3f2393e4bce269c12a7db
deleted file mode 100644
index 4765a6e..0000000
Binary files a/navidrome/cache/images/02/61/02610867f561bbd07db3f2393e4bce269c12a7db and /dev/null differ
diff --git a/navidrome/cache/images/0d/93/0d9321a102d93f4288e4e8da225feff516afb35f b/navidrome/cache/images/0d/93/0d9321a102d93f4288e4e8da225feff516afb35f
deleted file mode 100644
index 1d37da7..0000000
Binary files a/navidrome/cache/images/0d/93/0d9321a102d93f4288e4e8da225feff516afb35f and /dev/null differ
diff --git a/navidrome/cache/images/16/7b/167b5bdae701ba598f2a1e0862596f9bd570a914 b/navidrome/cache/images/16/7b/167b5bdae701ba598f2a1e0862596f9bd570a914
deleted file mode 100644
index 4765a6e..0000000
Binary files a/navidrome/cache/images/16/7b/167b5bdae701ba598f2a1e0862596f9bd570a914 and /dev/null differ
diff --git a/navidrome/cache/images/1a/15/1a15fc1e2e17a098e09e94664a0f1bdde3182994 b/navidrome/cache/images/1a/15/1a15fc1e2e17a098e09e94664a0f1bdde3182994
deleted file mode 100644
index 3c3323a..0000000
Binary files a/navidrome/cache/images/1a/15/1a15fc1e2e17a098e09e94664a0f1bdde3182994 and /dev/null differ
diff --git a/navidrome/cache/images/1d/15/1d1525455687037b510e9c58c6aac1aec3a01bb9 b/navidrome/cache/images/1d/15/1d1525455687037b510e9c58c6aac1aec3a01bb9
deleted file mode 100644
index 4765a6e..0000000
Binary files a/navidrome/cache/images/1d/15/1d1525455687037b510e9c58c6aac1aec3a01bb9 and /dev/null differ
diff --git a/navidrome/cache/images/21/03/2103bf6ec1f2008c8a43a60cff2968cd4e067cb7 b/navidrome/cache/images/21/03/2103bf6ec1f2008c8a43a60cff2968cd4e067cb7
deleted file mode 100644
index 4765a6e..0000000
Binary files a/navidrome/cache/images/21/03/2103bf6ec1f2008c8a43a60cff2968cd4e067cb7 and /dev/null differ
diff --git a/navidrome/cache/images/21/55/215516c5761dae0017f7ffe74d08fd09ab14cd7e b/navidrome/cache/images/21/55/215516c5761dae0017f7ffe74d08fd09ab14cd7e
deleted file mode 100644
index 575f7c8..0000000
Binary files a/navidrome/cache/images/21/55/215516c5761dae0017f7ffe74d08fd09ab14cd7e and /dev/null differ
diff --git a/navidrome/cache/images/2f/83/2f8366f1a3158fda017099693211fe24730ec5f4 b/navidrome/cache/images/2f/83/2f8366f1a3158fda017099693211fe24730ec5f4
deleted file mode 100644
index 95149ad..0000000
Binary files a/navidrome/cache/images/2f/83/2f8366f1a3158fda017099693211fe24730ec5f4 and /dev/null differ
diff --git a/navidrome/cache/images/43/cf/43cfc7d35115bc141f5cbaffdf95600136b78f0a b/navidrome/cache/images/43/cf/43cfc7d35115bc141f5cbaffdf95600136b78f0a
deleted file mode 100644
index 1305a68..0000000
Binary files a/navidrome/cache/images/43/cf/43cfc7d35115bc141f5cbaffdf95600136b78f0a and /dev/null differ
diff --git a/navidrome/cache/images/4a/33/4a330de9b563091132d58918ba898576dac97583 b/navidrome/cache/images/4a/33/4a330de9b563091132d58918ba898576dac97583
deleted file mode 100644
index f1d778f..0000000
Binary files a/navidrome/cache/images/4a/33/4a330de9b563091132d58918ba898576dac97583 and /dev/null differ
diff --git a/navidrome/cache/images/52/71/52710b6543b5e778467426e38dd117843da128d8 b/navidrome/cache/images/52/71/52710b6543b5e778467426e38dd117843da128d8
deleted file mode 100644
index 1d37da7..0000000
Binary files a/navidrome/cache/images/52/71/52710b6543b5e778467426e38dd117843da128d8 and /dev/null differ
diff --git a/navidrome/cache/images/5f/1e/5f1e942acb89b3456ab644ce17150d71800f8139 b/navidrome/cache/images/5f/1e/5f1e942acb89b3456ab644ce17150d71800f8139
deleted file mode 100644
index 4765a6e..0000000
Binary files a/navidrome/cache/images/5f/1e/5f1e942acb89b3456ab644ce17150d71800f8139 and /dev/null differ
diff --git a/navidrome/cache/images/60/fe/60feabfb3ad5b0a71eaff6b364aacd17f3510295 b/navidrome/cache/images/60/fe/60feabfb3ad5b0a71eaff6b364aacd17f3510295
deleted file mode 100644
index 4765a6e..0000000
Binary files a/navidrome/cache/images/60/fe/60feabfb3ad5b0a71eaff6b364aacd17f3510295 and /dev/null differ
diff --git a/navidrome/cache/images/68/9e/689e94d05db5d6dd42acd6e667870927d8387d78 b/navidrome/cache/images/68/9e/689e94d05db5d6dd42acd6e667870927d8387d78
deleted file mode 100644
index 3ebd4fd..0000000
Binary files a/navidrome/cache/images/68/9e/689e94d05db5d6dd42acd6e667870927d8387d78 and /dev/null differ
diff --git a/navidrome/cache/images/6a/46/6a4663184a296bd9902cd68e08e138db6c280dc4 b/navidrome/cache/images/6a/46/6a4663184a296bd9902cd68e08e138db6c280dc4
deleted file mode 100644
index 4765a6e..0000000
Binary files a/navidrome/cache/images/6a/46/6a4663184a296bd9902cd68e08e138db6c280dc4 and /dev/null differ
diff --git a/navidrome/cache/images/73/8e/738e414f3ee2cc537b7ca8bfa6e95757065adc63 b/navidrome/cache/images/73/8e/738e414f3ee2cc537b7ca8bfa6e95757065adc63
deleted file mode 100644
index 1305a68..0000000
Binary files a/navidrome/cache/images/73/8e/738e414f3ee2cc537b7ca8bfa6e95757065adc63 and /dev/null differ
diff --git a/navidrome/cache/images/75/e0/75e0eb2d3c3d10e88e04ddd9011d3d104cfd1be7 b/navidrome/cache/images/75/e0/75e0eb2d3c3d10e88e04ddd9011d3d104cfd1be7
deleted file mode 100644
index 3e1b34d..0000000
Binary files a/navidrome/cache/images/75/e0/75e0eb2d3c3d10e88e04ddd9011d3d104cfd1be7 and /dev/null differ
diff --git a/navidrome/cache/images/80/51/805161f6790c00faffaf61a522492d0de18ed62c b/navidrome/cache/images/80/51/805161f6790c00faffaf61a522492d0de18ed62c
deleted file mode 100644
index 0312b7a..0000000
Binary files a/navidrome/cache/images/80/51/805161f6790c00faffaf61a522492d0de18ed62c and /dev/null differ
diff --git a/navidrome/cache/images/82/41/8241ab6f33f4e74fb99efb813f6767519a31cc40 b/navidrome/cache/images/82/41/8241ab6f33f4e74fb99efb813f6767519a31cc40
deleted file mode 100644
index 4bc10d7..0000000
Binary files a/navidrome/cache/images/82/41/8241ab6f33f4e74fb99efb813f6767519a31cc40 and /dev/null differ
diff --git a/navidrome/cache/images/87/f9/87f94b422331baa1d81e0d277b699ad46cf11187 b/navidrome/cache/images/87/f9/87f94b422331baa1d81e0d277b699ad46cf11187
deleted file mode 100644
index 95149ad..0000000
Binary files a/navidrome/cache/images/87/f9/87f94b422331baa1d81e0d277b699ad46cf11187 and /dev/null differ
diff --git a/navidrome/cache/images/88/c7/88c7004eded52ac85da6e662bef03244701bde06 b/navidrome/cache/images/88/c7/88c7004eded52ac85da6e662bef03244701bde06
deleted file mode 100644
index 4765a6e..0000000
Binary files a/navidrome/cache/images/88/c7/88c7004eded52ac85da6e662bef03244701bde06 and /dev/null differ
diff --git a/navidrome/cache/images/95/2f/952f4586139ba03d8a59f932964e4362ece6a481 b/navidrome/cache/images/95/2f/952f4586139ba03d8a59f932964e4362ece6a481
deleted file mode 100644
index 4765a6e..0000000
Binary files a/navidrome/cache/images/95/2f/952f4586139ba03d8a59f932964e4362ece6a481 and /dev/null differ
diff --git a/navidrome/cache/images/99/95/9995206040bbb762c38a79be73cb06fe00ce1cf1 b/navidrome/cache/images/99/95/9995206040bbb762c38a79be73cb06fe00ce1cf1
deleted file mode 100644
index 4765a6e..0000000
Binary files a/navidrome/cache/images/99/95/9995206040bbb762c38a79be73cb06fe00ce1cf1 and /dev/null differ
diff --git a/navidrome/cache/images/9b/15/9b15c7269d5b068386dab8cf8ad34c50930fee8b b/navidrome/cache/images/9b/15/9b15c7269d5b068386dab8cf8ad34c50930fee8b
deleted file mode 100644
index 049d9e7..0000000
Binary files a/navidrome/cache/images/9b/15/9b15c7269d5b068386dab8cf8ad34c50930fee8b and /dev/null differ
diff --git a/navidrome/cache/images/9e/9a/9e9a48cc36949efbec7067be051c93d88b8eee51 b/navidrome/cache/images/9e/9a/9e9a48cc36949efbec7067be051c93d88b8eee51
deleted file mode 100644
index d56400d..0000000
Binary files a/navidrome/cache/images/9e/9a/9e9a48cc36949efbec7067be051c93d88b8eee51 and /dev/null differ
diff --git a/navidrome/cache/images/b1/62/b16257a8253cb16c1a30a54acee4e0394d8a60be b/navidrome/cache/images/b1/62/b16257a8253cb16c1a30a54acee4e0394d8a60be
deleted file mode 100644
index 3c3323a..0000000
Binary files a/navidrome/cache/images/b1/62/b16257a8253cb16c1a30a54acee4e0394d8a60be and /dev/null differ
diff --git a/navidrome/cache/images/b9/c8/b9c8abd62d15bac48b738e9cd98bcc4c847bd90f b/navidrome/cache/images/b9/c8/b9c8abd62d15bac48b738e9cd98bcc4c847bd90f
deleted file mode 100644
index d56400d..0000000
Binary files a/navidrome/cache/images/b9/c8/b9c8abd62d15bac48b738e9cd98bcc4c847bd90f and /dev/null differ
diff --git a/navidrome/cache/images/c9/ca/c9ca1e99a28e93e9b5f952430017c833ebbca732 b/navidrome/cache/images/c9/ca/c9ca1e99a28e93e9b5f952430017c833ebbca732
deleted file mode 100644
index bf320f4..0000000
Binary files a/navidrome/cache/images/c9/ca/c9ca1e99a28e93e9b5f952430017c833ebbca732 and /dev/null differ
diff --git a/navidrome/cache/images/ce/5e/ce5ec4ddaa6f903a901ac6d5b2d8421f5b3457bf b/navidrome/cache/images/ce/5e/ce5ec4ddaa6f903a901ac6d5b2d8421f5b3457bf
deleted file mode 100644
index 2f538e1..0000000
Binary files a/navidrome/cache/images/ce/5e/ce5ec4ddaa6f903a901ac6d5b2d8421f5b3457bf and /dev/null differ
diff --git a/navidrome/cache/images/cf/8b/cf8bd7a4a7d89237642713e87c96e9e0b510552f b/navidrome/cache/images/cf/8b/cf8bd7a4a7d89237642713e87c96e9e0b510552f
deleted file mode 100644
index 575f7c8..0000000
Binary files a/navidrome/cache/images/cf/8b/cf8bd7a4a7d89237642713e87c96e9e0b510552f and /dev/null differ
diff --git a/navidrome/cache/images/d1/7d/d17d22b6ccf5639901cdec7323faa049771cab6a b/navidrome/cache/images/d1/7d/d17d22b6ccf5639901cdec7323faa049771cab6a
deleted file mode 100644
index 0e7faa3..0000000
Binary files a/navidrome/cache/images/d1/7d/d17d22b6ccf5639901cdec7323faa049771cab6a and /dev/null differ
diff --git a/navidrome/cache/images/d3/d5/d3d51bac720faefa0d4d005287aefcbbecc70352 b/navidrome/cache/images/d3/d5/d3d51bac720faefa0d4d005287aefcbbecc70352
deleted file mode 100644
index 0312b7a..0000000
Binary files a/navidrome/cache/images/d3/d5/d3d51bac720faefa0d4d005287aefcbbecc70352 and /dev/null differ
diff --git a/navidrome/cache/images/d4/4f/d44fd9cd59e53482be9cb5c9dee952669bec2d07 b/navidrome/cache/images/d4/4f/d44fd9cd59e53482be9cb5c9dee952669bec2d07
deleted file mode 100644
index 3ebd4fd..0000000
Binary files a/navidrome/cache/images/d4/4f/d44fd9cd59e53482be9cb5c9dee952669bec2d07 and /dev/null differ
diff --git a/navidrome/cache/images/d5/c3/d5c3d9464ccb4f69981bd652b4386355ea887969 b/navidrome/cache/images/d5/c3/d5c3d9464ccb4f69981bd652b4386355ea887969
deleted file mode 100644
index 3e1b34d..0000000
Binary files a/navidrome/cache/images/d5/c3/d5c3d9464ccb4f69981bd652b4386355ea887969 and /dev/null differ
diff --git a/navidrome/cache/images/db/60/db60bf5ca1bcd80704c3b358da175e24a9501cb4 b/navidrome/cache/images/db/60/db60bf5ca1bcd80704c3b358da175e24a9501cb4
deleted file mode 100644
index 0e7faa3..0000000
Binary files a/navidrome/cache/images/db/60/db60bf5ca1bcd80704c3b358da175e24a9501cb4 and /dev/null differ
diff --git a/navidrome/cache/images/e7/28/e7288cdadb7fe821f89e96399cc180e222525161 b/navidrome/cache/images/e7/28/e7288cdadb7fe821f89e96399cc180e222525161
deleted file mode 100644
index 2f538e1..0000000
Binary files a/navidrome/cache/images/e7/28/e7288cdadb7fe821f89e96399cc180e222525161 and /dev/null differ
diff --git a/navidrome/cache/images/f1/80/f180b48373bcafa1deff21859159692d83025431 b/navidrome/cache/images/f1/80/f180b48373bcafa1deff21859159692d83025431
deleted file mode 100644
index f1d778f..0000000
Binary files a/navidrome/cache/images/f1/80/f180b48373bcafa1deff21859159692d83025431 and /dev/null differ
diff --git a/navidrome/cache/images/f9/85/f985ed095bf95de987c9138ea012ce4d0d9f5481 b/navidrome/cache/images/f9/85/f985ed095bf95de987c9138ea012ce4d0d9f5481
deleted file mode 100644
index bf320f4..0000000
Binary files a/navidrome/cache/images/f9/85/f985ed095bf95de987c9138ea012ce4d0d9f5481 and /dev/null differ
diff --git a/navidrome/cache/images/fd/26/fd26b9a68e5b68c1c302bc6622c560dedd6a27c4 b/navidrome/cache/images/fd/26/fd26b9a68e5b68c1c302bc6622c560dedd6a27c4
deleted file mode 100644
index 049d9e7..0000000
Binary files a/navidrome/cache/images/fd/26/fd26b9a68e5b68c1c302bc6622c560dedd6a27c4 and /dev/null differ
diff --git a/navidrome/navidrome.db b/navidrome/navidrome.db
deleted file mode 100644
index ef17497..0000000
Binary files a/navidrome/navidrome.db and /dev/null differ
diff --git a/nginx/conf/default.conf b/nginx/conf/default.conf
new file mode 100644
index 0000000..282633d
--- /dev/null
+++ b/nginx/conf/default.conf
@@ -0,0 +1,20 @@
+server {
+    listen 80;
+    index index.php index.html;
+    server_name _;
+    root /var/www/html/public;
+
+    location / {
+        try_files $uri $uri/ /index.php?$query_string;
+    }
+
+    location ~ \.php$ {
+        try_files $uri =404;
+        fastcgi_split_path_info ^(.+\.php)(/.+)$;
+        fastcgi_pass php:9000;
+        fastcgi_index index.php;
+        include fastcgi_params;
+        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
+        fastcgi_param PATH_INFO $fastcgi_path_info;
+    }
+}
diff --git a/nginx/dockerfiles/nginx.dockerfile b/nginx/dockerfiles/nginx.dockerfile
new file mode 100644
index 0000000..977e2f0
--- /dev/null
+++ b/nginx/dockerfiles/nginx.dockerfile
@@ -0,0 +1,18 @@
+FROM nginx:stable-alpine
+
+ARG UID
+ARG GID
+
+ENV UID=${UID}
+ENV GID=${GID}
+
+# MacOS staff group's gid is 20, so is the dialout group in alpine linux. We're not using it, let's just remove it.
+RUN delgroup dialout
+
+RUN addgroup -g ${GID} --system laravel
+RUN adduser -G laravel --system -D -s /bin/sh -u ${UID} laravel
+RUN sed -i "s/user nginx/user laravel/g" /etc/nginx/nginx.conf
+
+ADD ./nginx/default.conf /etc/nginx/conf.d/
+
+RUN mkdir -p /var/www/html
\ No newline at end of file
diff --git a/php/dockerfiles/php.dockerfile b/php/dockerfiles/php.dockerfile
new file mode 100644
index 0000000..2b7d29d
--- /dev/null
+++ b/php/dockerfiles/php.dockerfile
@@ -0,0 +1,34 @@
+FROM php:8-fpm-alpine
+
+ARG UID
+ARG GID
+
+ENV UID=${UID}
+ENV GID=${GID}
+
+RUN mkdir -p /var/www/html
+
+WORKDIR /var/www/html
+
+COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
+
+# MacOS staff group's gid is 20, so is the dialout group in alpine linux. We're not using it, let's just remove it.
+RUN delgroup dialout
+
+RUN addgroup -g ${GID} --system laravel
+RUN adduser -G laravel --system -D -s /bin/sh -u ${UID} laravel
+
+RUN sed -i "s/user = www-data/user = laravel/g" /usr/local/etc/php-fpm.d/www.conf
+RUN sed -i "s/group = www-data/group = laravel/g" /usr/local/etc/php-fpm.d/www.conf
+RUN echo "php_admin_flag[log_errors] = on" >> /usr/local/etc/php-fpm.d/www.conf
+
+RUN docker-php-ext-install pdo pdo_mysql
+
+RUN mkdir -p /usr/src/php/ext/redis \
+    && curl -L https://github.com/phpredis/phpredis/archive/5.3.4.tar.gz | tar xvz -C /usr/src/php/ext/redis --strip 1 \
+    && echo 'redis' >> /usr/src/php-available-exts \
+    && docker-php-ext-install redis
+
+USER laravel
+
+CMD ["php-fpm", "-y", "/usr/local/etc/php-fpm.conf", "-R"]
diff --git a/php/dockerfiles/php.root.dockerfile b/php/dockerfiles/php.root.dockerfile
new file mode 100644
index 0000000..51205ee
--- /dev/null
+++ b/php/dockerfiles/php.root.dockerfile
@@ -0,0 +1,22 @@
+FROM php:8-fpm-alpine
+
+RUN mkdir -p /var/www/html
+
+WORKDIR /var/www/html
+
+COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
+
+RUN sed -i "s/user = www-data/user = root/g" /usr/local/etc/php-fpm.d/www.conf
+RUN sed -i "s/group = www-data/group = root/g" /usr/local/etc/php-fpm.d/www.conf
+RUN echo "php_admin_flag[log_errors] = on" >> /usr/local/etc/php-fpm.d/www.conf
+
+RUN docker-php-ext-install pdo pdo_mysql
+
+RUN mkdir -p /usr/src/php/ext/redis \
+    && curl -L https://github.com/phpredis/phpredis/archive/5.3.4.tar.gz | tar xvz -C /usr/src/php/ext/redis --strip 1 \
+    && echo 'redis' >> /usr/src/php-available-exts \
+    && docker-php-ext-install redis
+
+USER root
+
+CMD ["php-fpm", "-y", "/usr/local/etc/php-fpm.conf", "-R"]
diff --git a/python/.gitignore b/python/.gitignore
new file mode 100644
index 0000000..2398ab2
--- /dev/null
+++ b/python/.gitignore
@@ -0,0 +1,10 @@
+
+navidrome/
+music/
+.idea/
+
+utils/__pycache__/
+
+*.pyc
+
+geckodriver.log
diff --git a/__init__.py b/python/__init__.py
similarity index 100%
rename from __init__.py
rename to python/__init__.py
diff --git a/app.py b/python/app.py
similarity index 95%
rename from app.py
rename to python/app.py
index db37368..e3f88cb 100644
--- a/app.py
+++ b/python/app.py
@@ -24,9 +24,6 @@ def process_downloads():
             print(album)
             Album.write(album['id'], {'downloading': True})
             download_album(album)
-        else:
-            # Sometimes the records are not being removed
-            Album.purge()
 
 
 cron = BackgroundScheduler({'apscheduler.job_defaults.max_instances': 2}, daemon=True)
diff --git a/python/bin/Activate.ps1 b/python/bin/Activate.ps1
new file mode 100644
index 0000000..b49d77b
--- /dev/null
+++ b/python/bin/Activate.ps1
@@ -0,0 +1,247 @@
+<#
+.Synopsis
+Activate a Python virtual environment for the current PowerShell session.
+
+.Description
+Pushes the python executable for a virtual environment to the front of the
+$Env:PATH environment variable and sets the prompt to signify that you are
+in a Python virtual environment. Makes use of the command line switches as
+well as the `pyvenv.cfg` file values present in the virtual environment.
+
+.Parameter VenvDir
+Path to the directory that contains the virtual environment to activate. The
+default value for this is the parent of the directory that the Activate.ps1
+script is located within.
+
+.Parameter Prompt
+The prompt prefix to display when this virtual environment is activated. By
+default, this prompt is the name of the virtual environment folder (VenvDir)
+surrounded by parentheses and followed by a single space (ie. '(.venv) ').
+
+.Example
+Activate.ps1
+Activates the Python virtual environment that contains the Activate.ps1 script.
+
+.Example
+Activate.ps1 -Verbose
+Activates the Python virtual environment that contains the Activate.ps1 script,
+and shows extra information about the activation as it executes.
+
+.Example
+Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv
+Activates the Python virtual environment located in the specified location.
+
+.Example
+Activate.ps1 -Prompt "MyPython"
+Activates the Python virtual environment that contains the Activate.ps1 script,
+and prefixes the current prompt with the specified string (surrounded in
+parentheses) while the virtual environment is active.
+
+.Notes
+On Windows, it may be required to enable this Activate.ps1 script by setting the
+execution policy for the user. You can do this by issuing the following PowerShell
+command:
+
+PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
+
+For more information on Execution Policies:
+https://go.microsoft.com/fwlink/?LinkID=135170
+
+#>
+Param(
+    [Parameter(Mandatory = $false)]
+    [String]
+    $VenvDir,
+    [Parameter(Mandatory = $false)]
+    [String]
+    $Prompt
+)
+
+<# Function declarations --------------------------------------------------- #>
+
+<#
+.Synopsis
+Remove all shell session elements added by the Activate script, including the
+addition of the virtual environment's Python executable from the beginning of
+the PATH variable.
+
+.Parameter NonDestructive
+If present, do not remove this function from the global namespace for the
+session.
+
+#>
+function global:deactivate ([switch]$NonDestructive) {
+    # Revert to original values
+
+    # The prior prompt:
+    if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) {
+        Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt
+        Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT
+    }
+
+    # The prior PYTHONHOME:
+    if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) {
+        Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME
+        Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME
+    }
+
+    # The prior PATH:
+    if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) {
+        Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH
+        Remove-Item -Path Env:_OLD_VIRTUAL_PATH
+    }
+
+    # Just remove the VIRTUAL_ENV altogether:
+    if (Test-Path -Path Env:VIRTUAL_ENV) {
+        Remove-Item -Path env:VIRTUAL_ENV
+    }
+
+    # Just remove VIRTUAL_ENV_PROMPT altogether.
+    if (Test-Path -Path Env:VIRTUAL_ENV_PROMPT) {
+        Remove-Item -Path env:VIRTUAL_ENV_PROMPT
+    }
+
+    # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether:
+    if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) {
+        Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force
+    }
+
+    # Leave deactivate function in the global namespace if requested:
+    if (-not $NonDestructive) {
+        Remove-Item -Path function:deactivate
+    }
+}
+
+<#
+.Description
+Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the
+given folder, and returns them in a map.
+
+For each line in the pyvenv.cfg file, if that line can be parsed into exactly
+two strings separated by `=` (with any amount of whitespace surrounding the =)
+then it is considered a `key = value` line. The left hand string is the key,
+the right hand is the value.
+
+If the value starts with a `'` or a `"` then the first and last character is
+stripped from the value before being captured.
+
+.Parameter ConfigDir
+Path to the directory that contains the `pyvenv.cfg` file.
+#>
+function Get-PyVenvConfig(
+    [String]
+    $ConfigDir
+) {
+    Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg"
+
+    # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue).
+    $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue
+
+    # An empty map will be returned if no config file is found.
+    $pyvenvConfig = @{ }
+
+    if ($pyvenvConfigPath) {
+
+        Write-Verbose "File exists, parse `key = value` lines"
+        $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath
+
+        $pyvenvConfigContent | ForEach-Object {
+            $keyval = $PSItem -split "\s*=\s*", 2
+            if ($keyval[0] -and $keyval[1]) {
+                $val = $keyval[1]
+
+                # Remove extraneous quotations around a string value.
+                if ("'""".Contains($val.Substring(0, 1))) {
+                    $val = $val.Substring(1, $val.Length - 2)
+                }
+
+                $pyvenvConfig[$keyval[0]] = $val
+                Write-Verbose "Adding Key: '$($keyval[0])'='$val'"
+            }
+        }
+    }
+    return $pyvenvConfig
+}
+
+
+<# Begin Activate script --------------------------------------------------- #>
+
+# Determine the containing directory of this script
+$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition
+$VenvExecDir = Get-Item -Path $VenvExecPath
+
+Write-Verbose "Activation script is located in path: '$VenvExecPath'"
+Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)"
+Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)"
+
+# Set values required in priority: CmdLine, ConfigFile, Default
+# First, get the location of the virtual environment, it might not be
+# VenvExecDir if specified on the command line.
+if ($VenvDir) {
+    Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values"
+}
+else {
+    Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir."
+    $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/")
+    Write-Verbose "VenvDir=$VenvDir"
+}
+
+# Next, read the `pyvenv.cfg` file to determine any required value such
+# as `prompt`.
+$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir
+
+# Next, set the prompt from the command line, or the config file, or
+# just use the name of the virtual environment folder.
+if ($Prompt) {
+    Write-Verbose "Prompt specified as argument, using '$Prompt'"
+}
+else {
+    Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value"
+    if ($pyvenvCfg -and $pyvenvCfg['prompt']) {
+        Write-Verbose "  Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'"
+        $Prompt = $pyvenvCfg['prompt'];
+    }
+    else {
+        Write-Verbose "  Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virtual environment)"
+        Write-Verbose "  Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'"
+        $Prompt = Split-Path -Path $venvDir -Leaf
+    }
+}
+
+Write-Verbose "Prompt = '$Prompt'"
+Write-Verbose "VenvDir='$VenvDir'"
+
+# Deactivate any currently active virtual environment, but leave the
+# deactivate function in place.
+deactivate -nondestructive
+
+# Now set the environment variable VIRTUAL_ENV, used by many tools to determine
+# that there is an activated venv.
+$env:VIRTUAL_ENV = $VenvDir
+
+if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) {
+
+    Write-Verbose "Setting prompt to '$Prompt'"
+
+    # Set the prompt to include the env name
+    # Make sure _OLD_VIRTUAL_PROMPT is global
+    function global:_OLD_VIRTUAL_PROMPT { "" }
+    Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT
+    New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt
+
+    function global:prompt {
+        Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) "
+        _OLD_VIRTUAL_PROMPT
+    }
+    $env:VIRTUAL_ENV_PROMPT = $Prompt
+}
+
+# Clear PYTHONHOME
+if (Test-Path -Path Env:PYTHONHOME) {
+    Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME
+    Remove-Item -Path Env:PYTHONHOME
+}
+
+# Add the venv to the PATH
+Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH
+$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH"
diff --git a/python/bin/activate b/python/bin/activate
new file mode 100644
index 0000000..59b4813
--- /dev/null
+++ b/python/bin/activate
@@ -0,0 +1,69 @@
+# This file must be used with "source bin/activate" *from bash*
+# you cannot run it directly
+
+deactivate () {
+    # reset old environment variables
+    if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
+        PATH="${_OLD_VIRTUAL_PATH:-}"
+        export PATH
+        unset _OLD_VIRTUAL_PATH
+    fi
+    if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
+        PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
+        export PYTHONHOME
+        unset _OLD_VIRTUAL_PYTHONHOME
+    fi
+
+    # This should detect bash and zsh, which have a hash command that must
+    # be called to get it to forget past commands. Without forgetting
+    # past commands the $PATH changes we made may not be respected
+    if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
+        hash -r 2> /dev/null
+    fi
+
+    if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
+        PS1="${_OLD_VIRTUAL_PS1:-}"
+        export PS1
+        unset _OLD_VIRTUAL_PS1
+    fi
+
+    unset VIRTUAL_ENV
+    unset VIRTUAL_ENV_PROMPT
+    if [ ! "${1:-}" = "nondestructive" ] ; then
+        # Self destruct!
+        unset -f deactivate
+    fi
+}
+
+# unset irrelevant variables
+deactivate nondestructive
+
+VIRTUAL_ENV="/home/spaulding/Apps/getDiscography"
+export VIRTUAL_ENV
+
+_OLD_VIRTUAL_PATH="$PATH"
+PATH="$VIRTUAL_ENV/bin:$PATH"
+export PATH
+
+# unset PYTHONHOME if set
+# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
+# could use `if (set -u; : $PYTHONHOME) ;` in bash
+if [ -n "${PYTHONHOME:-}" ] ; then
+    _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
+    unset PYTHONHOME
+fi
+
+if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
+    _OLD_VIRTUAL_PS1="${PS1:-}"
+    PS1="(getDiscography) ${PS1:-}"
+    export PS1
+    VIRTUAL_ENV_PROMPT="(getDiscography) "
+    export VIRTUAL_ENV_PROMPT
+fi
+
+# This should detect bash and zsh, which have a hash command that must
+# be called to get it to forget past commands. Without forgetting
+# past commands the $PATH changes we made may not be respected
+if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
+    hash -r 2> /dev/null
+fi
diff --git a/python/bin/activate.csh b/python/bin/activate.csh
new file mode 100644
index 0000000..81cccce
--- /dev/null
+++ b/python/bin/activate.csh
@@ -0,0 +1,26 @@
+# This file must be used with "source bin/activate.csh" *from csh*.
+# You cannot run it directly.
+# Created by Davide Di Blasi <davidedb@gmail.com>.
+# Ported to Python 3.3 venv by Andrew Svetlov <andrew.svetlov@gmail.com>
+
+alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; unsetenv VIRTUAL_ENV_PROMPT; test "\!:*" != "nondestructive" && unalias deactivate'
+
+# Unset irrelevant variables.
+deactivate nondestructive
+
+setenv VIRTUAL_ENV "/home/spaulding/Apps/getDiscography"
+
+set _OLD_VIRTUAL_PATH="$PATH"
+setenv PATH "$VIRTUAL_ENV/bin:$PATH"
+
+
+set _OLD_VIRTUAL_PROMPT="$prompt"
+
+if (! "$?VIRTUAL_ENV_DISABLE_PROMPT") then
+    set prompt = "(getDiscography) $prompt"
+    setenv VIRTUAL_ENV_PROMPT "(getDiscography) "
+endif
+
+alias pydoc python -m pydoc
+
+rehash
diff --git a/python/bin/activate.fish b/python/bin/activate.fish
new file mode 100644
index 0000000..0be19fa
--- /dev/null
+++ b/python/bin/activate.fish
@@ -0,0 +1,69 @@
+# This file must be used with "source <venv>/bin/activate.fish" *from fish*
+# (https://fishshell.com/); you cannot run it directly.
+
+function deactivate -d "Exit virtual environment and return to normal shell environment"
+    # reset old environment variables
+    if test -n "$_OLD_VIRTUAL_PATH"
+        set -gx PATH $_OLD_VIRTUAL_PATH
+        set -e _OLD_VIRTUAL_PATH
+    end
+    if test -n "$_OLD_VIRTUAL_PYTHONHOME"
+        set -gx PYTHONHOME $_OLD_VIRTUAL_PYTHONHOME
+        set -e _OLD_VIRTUAL_PYTHONHOME
+    end
+
+    if test -n "$_OLD_FISH_PROMPT_OVERRIDE"
+        set -e _OLD_FISH_PROMPT_OVERRIDE
+        # prevents error when using nested fish instances (Issue #93858)
+        if functions -q _old_fish_prompt
+            functions -e fish_prompt
+            functions -c _old_fish_prompt fish_prompt
+            functions -e _old_fish_prompt
+        end
+    end
+
+    set -e VIRTUAL_ENV
+    set -e VIRTUAL_ENV_PROMPT
+    if test "$argv[1]" != "nondestructive"
+        # Self-destruct!
+        functions -e deactivate
+    end
+end
+
+# Unset irrelevant variables.
+deactivate nondestructive
+
+set -gx VIRTUAL_ENV "/home/spaulding/Apps/getDiscography"
+
+set -gx _OLD_VIRTUAL_PATH $PATH
+set -gx PATH "$VIRTUAL_ENV/bin" $PATH
+
+# Unset PYTHONHOME if set.
+if set -q PYTHONHOME
+    set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME
+    set -e PYTHONHOME
+end
+
+if test -z "$VIRTUAL_ENV_DISABLE_PROMPT"
+    # fish uses a function instead of an env var to generate the prompt.
+
+    # Save the current fish_prompt function as the function _old_fish_prompt.
+    functions -c fish_prompt _old_fish_prompt
+
+    # With the original prompt function renamed, we can override with our own.
+    function fish_prompt
+        # Save the return status of the last command.
+        set -l old_status $status
+
+        # Output the venv prompt; color taken from the blue of the Python logo.
+        printf "%s%s%s" (set_color 4B8BBE) "(getDiscography) " (set_color normal)
+
+        # Restore the return status of the previous command.
+        echo "exit $old_status" | .
+        # Output the original/"old" prompt.
+        _old_fish_prompt
+    end
+
+    set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV"
+    set -gx VIRTUAL_ENV_PROMPT "(getDiscography) "
+end
diff --git a/python/bin/flask b/python/bin/flask
new file mode 100755
index 0000000..f18332d
--- /dev/null
+++ b/python/bin/flask
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from flask.cli import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff --git a/python/bin/mid3cp b/python/bin/mid3cp
new file mode 100755
index 0000000..6520e0d
--- /dev/null
+++ b/python/bin/mid3cp
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from mutagen._tools.mid3cp import entry_point
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(entry_point())
diff --git a/python/bin/mid3iconv b/python/bin/mid3iconv
new file mode 100755
index 0000000..af9cc54
--- /dev/null
+++ b/python/bin/mid3iconv
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from mutagen._tools.mid3iconv import entry_point
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(entry_point())
diff --git a/python/bin/mid3v2 b/python/bin/mid3v2
new file mode 100755
index 0000000..1904b29
--- /dev/null
+++ b/python/bin/mid3v2
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from mutagen._tools.mid3v2 import entry_point
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(entry_point())
diff --git a/python/bin/moggsplit b/python/bin/moggsplit
new file mode 100755
index 0000000..40ba8e6
--- /dev/null
+++ b/python/bin/moggsplit
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from mutagen._tools.moggsplit import entry_point
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(entry_point())
diff --git a/python/bin/mutagen-inspect b/python/bin/mutagen-inspect
new file mode 100755
index 0000000..f15cf89
--- /dev/null
+++ b/python/bin/mutagen-inspect
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from mutagen._tools.mutagen_inspect import entry_point
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(entry_point())
diff --git a/python/bin/mutagen-pony b/python/bin/mutagen-pony
new file mode 100755
index 0000000..68fe50e
--- /dev/null
+++ b/python/bin/mutagen-pony
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from mutagen._tools.mutagen_pony import entry_point
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(entry_point())
diff --git a/python/bin/normalizer b/python/bin/normalizer
new file mode 100755
index 0000000..f28fe40
--- /dev/null
+++ b/python/bin/normalizer
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from charset_normalizer.cli.normalizer import cli_detect
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(cli_detect())
diff --git a/python/bin/pip b/python/bin/pip
new file mode 100755
index 0000000..624c2c2
--- /dev/null
+++ b/python/bin/pip
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from pip._internal.cli.main import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff --git a/python/bin/pip3 b/python/bin/pip3
new file mode 100755
index 0000000..624c2c2
--- /dev/null
+++ b/python/bin/pip3
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from pip._internal.cli.main import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff --git a/python/bin/pip3.10 b/python/bin/pip3.10
new file mode 100755
index 0000000..624c2c2
--- /dev/null
+++ b/python/bin/pip3.10
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from pip._internal.cli.main import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff --git a/python/bin/pysondb b/python/bin/pysondb
new file mode 100755
index 0000000..ae8fdfd
--- /dev/null
+++ b/python/bin/pysondb
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from pysondb.cli import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff --git a/python/bin/python b/python/bin/python
new file mode 120000
index 0000000..b8a0adb
--- /dev/null
+++ b/python/bin/python
@@ -0,0 +1 @@
+python3
\ No newline at end of file
diff --git a/python/bin/python3 b/python/bin/python3
new file mode 120000
index 0000000..ae65fda
--- /dev/null
+++ b/python/bin/python3
@@ -0,0 +1 @@
+/usr/bin/python3
\ No newline at end of file
diff --git a/python/bin/python3.10 b/python/bin/python3.10
new file mode 120000
index 0000000..b8a0adb
--- /dev/null
+++ b/python/bin/python3.10
@@ -0,0 +1 @@
+python3
\ No newline at end of file
diff --git a/python/bin/tldextract b/python/bin/tldextract
new file mode 100755
index 0000000..8f35f2d
--- /dev/null
+++ b/python/bin/tldextract
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from tldextract.cli import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff --git a/python/bin/yt-dlp b/python/bin/yt-dlp
new file mode 100755
index 0000000..80b0c9f
--- /dev/null
+++ b/python/bin/yt-dlp
@@ -0,0 +1,8 @@
+#!/home/spaulding/Apps/getDiscography/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from yt_dlp import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
diff --git a/const.py b/python/const.py
similarity index 100%
rename from const.py
rename to python/const.py
diff --git a/database.py b/python/database.py
similarity index 100%
rename from database.py
rename to python/database.py
diff --git a/python/database/album.json b/python/database/album.json
new file mode 100644
index 0000000..65fae68
--- /dev/null
+++ b/python/database/album.json
@@ -0,0 +1,12 @@
+{
+    "version": 2,
+    "keys": [
+        "album",
+        "artist",
+        "cover",
+        "downloaded",
+        "downloading",
+        "link"
+    ],
+    "data": {}
+}
\ No newline at end of file
diff --git a/Dockerfile b/python/dockerfiles/Dockerfile
similarity index 78%
rename from Dockerfile
rename to python/dockerfiles/Dockerfile
index 5edf5fb..aab2881 100644
--- a/Dockerfile
+++ b/python/dockerfiles/Dockerfile
@@ -18,15 +18,15 @@ RUN export PATH=$PATH:/usr/local/bin/geckodriver
 RUN pip3 install -r requirements.txt
 
 ENV FLASK_APP=app
-## Set user and group
-#ARG user=app
-#ARG group=app
-#ARG uid=1000
-#ARG gid=1000
-#RUN groupadd -g ${gid} ${group}
-#RUN useradd -u ${uid} -g ${group} -s /bin/sh -m ${user}
-#
-## Switch to user
-#USER ${uid}:${gid}
+# Set user and group
+ARG user=app
+ARG group=app
+ARG uid=1000
+ARG gid=1000
+RUN groupadd -g ${gid} ${group}
+RUN useradd -u ${uid} -g ${group} -s /bin/sh -m ${user}
+
+# Switch to user
+USER ${uid}:${gid}
 
 CMD ["python3","-u","app.py"]
diff --git a/drivers/geckodriver-0.33.0/.cargo/config b/python/drivers/geckodriver-0.33.0/.cargo/config
similarity index 100%
rename from drivers/geckodriver-0.33.0/.cargo/config
rename to python/drivers/geckodriver-0.33.0/.cargo/config
diff --git a/drivers/geckodriver-0.33.0/CHANGES.md b/python/drivers/geckodriver-0.33.0/CHANGES.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/CHANGES.md
rename to python/drivers/geckodriver-0.33.0/CHANGES.md
diff --git a/drivers/geckodriver-0.33.0/CONTRIBUTING.md b/python/drivers/geckodriver-0.33.0/CONTRIBUTING.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/CONTRIBUTING.md
rename to python/drivers/geckodriver-0.33.0/CONTRIBUTING.md
diff --git a/drivers/geckodriver-0.33.0/Cargo.lock b/python/drivers/geckodriver-0.33.0/Cargo.lock
similarity index 100%
rename from drivers/geckodriver-0.33.0/Cargo.lock
rename to python/drivers/geckodriver-0.33.0/Cargo.lock
diff --git a/drivers/geckodriver-0.33.0/Cargo.toml b/python/drivers/geckodriver-0.33.0/Cargo.toml
similarity index 100%
rename from drivers/geckodriver-0.33.0/Cargo.toml
rename to python/drivers/geckodriver-0.33.0/Cargo.toml
diff --git a/drivers/geckodriver-0.33.0/ISSUE_TEMPLATE.md b/python/drivers/geckodriver-0.33.0/ISSUE_TEMPLATE.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/ISSUE_TEMPLATE.md
rename to python/drivers/geckodriver-0.33.0/ISSUE_TEMPLATE.md
diff --git a/drivers/geckodriver-0.33.0/LICENSE b/python/drivers/geckodriver-0.33.0/LICENSE
similarity index 100%
rename from drivers/geckodriver-0.33.0/LICENSE
rename to python/drivers/geckodriver-0.33.0/LICENSE
diff --git a/drivers/geckodriver-0.33.0/README.md b/python/drivers/geckodriver-0.33.0/README.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/README.md
rename to python/drivers/geckodriver-0.33.0/README.md
diff --git a/drivers/geckodriver-0.33.0/build.rs b/python/drivers/geckodriver-0.33.0/build.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/build.rs
rename to python/drivers/geckodriver-0.33.0/build.rs
diff --git a/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-linux32.tar.gz b/python/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-linux32.tar.gz
similarity index 100%
rename from drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-linux32.tar.gz
rename to python/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-linux32.tar.gz
diff --git a/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-linux64.tar.gz b/python/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-linux64.tar.gz
similarity index 100%
rename from drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-linux64.tar.gz
rename to python/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-linux64.tar.gz
diff --git a/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-macos.tar.gz b/python/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-macos.tar.gz
similarity index 100%
rename from drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-macos.tar.gz
rename to python/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-macos.tar.gz
diff --git a/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-win32.zip b/python/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-win32.zip
similarity index 100%
rename from drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-win32.zip
rename to python/drivers/geckodriver-0.33.0/dist/geckodriver-v0.33.0-win32.zip
diff --git a/drivers/geckodriver-0.33.0/dist/geckodriver.exe b/python/drivers/geckodriver-0.33.0/dist/geckodriver.exe
similarity index 100%
rename from drivers/geckodriver-0.33.0/dist/geckodriver.exe
rename to python/drivers/geckodriver-0.33.0/dist/geckodriver.exe
diff --git a/drivers/geckodriver-0.33.0/doc/ARM.md b/python/drivers/geckodriver-0.33.0/doc/ARM.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/ARM.md
rename to python/drivers/geckodriver-0.33.0/doc/ARM.md
diff --git a/drivers/geckodriver-0.33.0/doc/Bugs.md b/python/drivers/geckodriver-0.33.0/doc/Bugs.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Bugs.md
rename to python/drivers/geckodriver-0.33.0/doc/Bugs.md
diff --git a/drivers/geckodriver-0.33.0/doc/Building.md b/python/drivers/geckodriver-0.33.0/doc/Building.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Building.md
rename to python/drivers/geckodriver-0.33.0/doc/Building.md
diff --git a/drivers/geckodriver-0.33.0/doc/Capabilities.md b/python/drivers/geckodriver-0.33.0/doc/Capabilities.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Capabilities.md
rename to python/drivers/geckodriver-0.33.0/doc/Capabilities.md
diff --git a/drivers/geckodriver-0.33.0/doc/CrashReports.md b/python/drivers/geckodriver-0.33.0/doc/CrashReports.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/CrashReports.md
rename to python/drivers/geckodriver-0.33.0/doc/CrashReports.md
diff --git a/drivers/geckodriver-0.33.0/doc/Flags.md b/python/drivers/geckodriver-0.33.0/doc/Flags.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Flags.md
rename to python/drivers/geckodriver-0.33.0/doc/Flags.md
diff --git a/drivers/geckodriver-0.33.0/doc/Notarization.md b/python/drivers/geckodriver-0.33.0/doc/Notarization.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Notarization.md
rename to python/drivers/geckodriver-0.33.0/doc/Notarization.md
diff --git a/drivers/geckodriver-0.33.0/doc/Patches.md b/python/drivers/geckodriver-0.33.0/doc/Patches.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Patches.md
rename to python/drivers/geckodriver-0.33.0/doc/Patches.md
diff --git a/drivers/geckodriver-0.33.0/doc/Profiles.md b/python/drivers/geckodriver-0.33.0/doc/Profiles.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Profiles.md
rename to python/drivers/geckodriver-0.33.0/doc/Profiles.md
diff --git a/drivers/geckodriver-0.33.0/doc/Releasing.md b/python/drivers/geckodriver-0.33.0/doc/Releasing.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Releasing.md
rename to python/drivers/geckodriver-0.33.0/doc/Releasing.md
diff --git a/drivers/geckodriver-0.33.0/doc/Support.md b/python/drivers/geckodriver-0.33.0/doc/Support.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Support.md
rename to python/drivers/geckodriver-0.33.0/doc/Support.md
diff --git a/drivers/geckodriver-0.33.0/doc/Testing.md b/python/drivers/geckodriver-0.33.0/doc/Testing.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Testing.md
rename to python/drivers/geckodriver-0.33.0/doc/Testing.md
diff --git a/drivers/geckodriver-0.33.0/doc/TraceLogs.md b/python/drivers/geckodriver-0.33.0/doc/TraceLogs.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/TraceLogs.md
rename to python/drivers/geckodriver-0.33.0/doc/TraceLogs.md
diff --git a/drivers/geckodriver-0.33.0/doc/Usage.md b/python/drivers/geckodriver-0.33.0/doc/Usage.md
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/Usage.md
rename to python/drivers/geckodriver-0.33.0/doc/Usage.md
diff --git a/drivers/geckodriver-0.33.0/doc/index.rst b/python/drivers/geckodriver-0.33.0/doc/index.rst
similarity index 100%
rename from drivers/geckodriver-0.33.0/doc/index.rst
rename to python/drivers/geckodriver-0.33.0/doc/index.rst
diff --git a/drivers/geckodriver-0.33.0/marionette/Cargo.toml b/python/drivers/geckodriver-0.33.0/marionette/Cargo.toml
similarity index 100%
rename from drivers/geckodriver-0.33.0/marionette/Cargo.toml
rename to python/drivers/geckodriver-0.33.0/marionette/Cargo.toml
diff --git a/drivers/geckodriver-0.33.0/marionette/src/common.rs b/python/drivers/geckodriver-0.33.0/marionette/src/common.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/marionette/src/common.rs
rename to python/drivers/geckodriver-0.33.0/marionette/src/common.rs
diff --git a/drivers/geckodriver-0.33.0/marionette/src/error.rs b/python/drivers/geckodriver-0.33.0/marionette/src/error.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/marionette/src/error.rs
rename to python/drivers/geckodriver-0.33.0/marionette/src/error.rs
diff --git a/drivers/geckodriver-0.33.0/marionette/src/lib.rs b/python/drivers/geckodriver-0.33.0/marionette/src/lib.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/marionette/src/lib.rs
rename to python/drivers/geckodriver-0.33.0/marionette/src/lib.rs
diff --git a/drivers/geckodriver-0.33.0/marionette/src/marionette.rs b/python/drivers/geckodriver-0.33.0/marionette/src/marionette.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/marionette/src/marionette.rs
rename to python/drivers/geckodriver-0.33.0/marionette/src/marionette.rs
diff --git a/drivers/geckodriver-0.33.0/marionette/src/message.rs b/python/drivers/geckodriver-0.33.0/marionette/src/message.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/marionette/src/message.rs
rename to python/drivers/geckodriver-0.33.0/marionette/src/message.rs
diff --git a/drivers/geckodriver-0.33.0/marionette/src/result.rs b/python/drivers/geckodriver-0.33.0/marionette/src/result.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/marionette/src/result.rs
rename to python/drivers/geckodriver-0.33.0/marionette/src/result.rs
diff --git a/drivers/geckodriver-0.33.0/marionette/src/test.rs b/python/drivers/geckodriver-0.33.0/marionette/src/test.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/marionette/src/test.rs
rename to python/drivers/geckodriver-0.33.0/marionette/src/test.rs
diff --git a/drivers/geckodriver-0.33.0/marionette/src/webdriver.rs b/python/drivers/geckodriver-0.33.0/marionette/src/webdriver.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/marionette/src/webdriver.rs
rename to python/drivers/geckodriver-0.33.0/marionette/src/webdriver.rs
diff --git a/drivers/geckodriver-0.33.0/src/android.rs b/python/drivers/geckodriver-0.33.0/src/android.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/android.rs
rename to python/drivers/geckodriver-0.33.0/src/android.rs
diff --git a/drivers/geckodriver-0.33.0/src/browser.rs b/python/drivers/geckodriver-0.33.0/src/browser.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/browser.rs
rename to python/drivers/geckodriver-0.33.0/src/browser.rs
diff --git a/drivers/geckodriver-0.33.0/src/build.rs b/python/drivers/geckodriver-0.33.0/src/build.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/build.rs
rename to python/drivers/geckodriver-0.33.0/src/build.rs
diff --git a/drivers/geckodriver-0.33.0/src/capabilities.rs b/python/drivers/geckodriver-0.33.0/src/capabilities.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/capabilities.rs
rename to python/drivers/geckodriver-0.33.0/src/capabilities.rs
diff --git a/drivers/geckodriver-0.33.0/src/command.rs b/python/drivers/geckodriver-0.33.0/src/command.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/command.rs
rename to python/drivers/geckodriver-0.33.0/src/command.rs
diff --git a/drivers/geckodriver-0.33.0/src/logging.rs b/python/drivers/geckodriver-0.33.0/src/logging.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/logging.rs
rename to python/drivers/geckodriver-0.33.0/src/logging.rs
diff --git a/drivers/geckodriver-0.33.0/src/main.rs b/python/drivers/geckodriver-0.33.0/src/main.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/main.rs
rename to python/drivers/geckodriver-0.33.0/src/main.rs
diff --git a/drivers/geckodriver-0.33.0/src/marionette.rs b/python/drivers/geckodriver-0.33.0/src/marionette.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/marionette.rs
rename to python/drivers/geckodriver-0.33.0/src/marionette.rs
diff --git a/drivers/geckodriver-0.33.0/src/prefs.rs b/python/drivers/geckodriver-0.33.0/src/prefs.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/prefs.rs
rename to python/drivers/geckodriver-0.33.0/src/prefs.rs
diff --git a/drivers/geckodriver-0.33.0/src/test.rs b/python/drivers/geckodriver-0.33.0/src/test.rs
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/test.rs
rename to python/drivers/geckodriver-0.33.0/src/test.rs
diff --git a/drivers/geckodriver-0.33.0/src/tests/profile.zip b/python/drivers/geckodriver-0.33.0/src/tests/profile.zip
similarity index 100%
rename from drivers/geckodriver-0.33.0/src/tests/profile.zip
rename to python/drivers/geckodriver-0.33.0/src/tests/profile.zip
diff --git a/geckodriver-install.sh b/python/geckodriver-install.sh
similarity index 100%
rename from geckodriver-install.sh
rename to python/geckodriver-install.sh
diff --git a/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/INSTALLER b/python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/INSTALLER
similarity index 100%
rename from lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/INSTALLER
rename to python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/INSTALLER
diff --git a/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/LICENSE.txt b/python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/LICENSE.txt
similarity index 100%
rename from lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/LICENSE.txt
rename to python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/LICENSE.txt
diff --git a/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/METADATA b/python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/METADATA
similarity index 100%
rename from lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/METADATA
rename to python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/METADATA
diff --git a/python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/RECORD b/python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/RECORD
new file mode 100644
index 0000000..7b56928
--- /dev/null
+++ b/python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/RECORD
@@ -0,0 +1,84 @@
+APScheduler-3.10.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+APScheduler-3.10.1.dist-info/LICENSE.txt,sha256=YWP3mH37ONa8MgzitwsvArhivEESZRbVUu8c1DJH51g,1130
+APScheduler-3.10.1.dist-info/METADATA,sha256=nShBYOJMsJ9iwrKP_x4rAVN87_NmBOwpdPPn85sY9G4,5676
+APScheduler-3.10.1.dist-info/RECORD,,
+APScheduler-3.10.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+APScheduler-3.10.1.dist-info/WHEEL,sha256=2wepM1nk4DS4eFpYrW1TTqPcoGNfHhhO_i5m4cOimbo,92
+APScheduler-3.10.1.dist-info/entry_points.txt,sha256=KMxTUp2QykDNL6w-WBU5xrk8ebroCPEBN0eZtyL3x2w,1147
+APScheduler-3.10.1.dist-info/top_level.txt,sha256=O3oMCWxG-AHkecUoO6Ze7-yYjWrttL95uHO8-RFdYvE,12
+apscheduler/__init__.py,sha256=qFEK2ysRBcLiYmm3deyJJ1avUOugaM_nCGHMD42WMBw,380
+apscheduler/__pycache__/__init__.cpython-310.pyc,,
+apscheduler/__pycache__/events.cpython-310.pyc,,
+apscheduler/__pycache__/job.cpython-310.pyc,,
+apscheduler/__pycache__/util.cpython-310.pyc,,
+apscheduler/events.py,sha256=KRMTDQUS6d2uVnrQvPoz3ZPV5V9XKsCAZLsgx913FFo,3593
+apscheduler/executors/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+apscheduler/executors/__pycache__/__init__.cpython-310.pyc,,
+apscheduler/executors/__pycache__/asyncio.cpython-310.pyc,,
+apscheduler/executors/__pycache__/base.cpython-310.pyc,,
+apscheduler/executors/__pycache__/base_py3.cpython-310.pyc,,
+apscheduler/executors/__pycache__/debug.cpython-310.pyc,,
+apscheduler/executors/__pycache__/gevent.cpython-310.pyc,,
+apscheduler/executors/__pycache__/pool.cpython-310.pyc,,
+apscheduler/executors/__pycache__/tornado.cpython-310.pyc,,
+apscheduler/executors/__pycache__/twisted.cpython-310.pyc,,
+apscheduler/executors/asyncio.py,sha256=9m4wvRHSSYplllxAQyxWkPVcFdyFG5aZbHt5nfWKIAc,1859
+apscheduler/executors/base.py,sha256=hogiMc_t-huw6BMod0HEeY2FhRNmAAUyNNuBHvIX31M,5336
+apscheduler/executors/base_py3.py,sha256=8WOpTeX1NA-spdbEQ1oJMh5T2O_t2UdsaSnAh-iEWe0,1831
+apscheduler/executors/debug.py,sha256=15_ogSBzl8RRCfBYDnkIV2uMH8cLk1KImYmBa_NVGpc,573
+apscheduler/executors/gevent.py,sha256=aulrNmoefyBgrOkH9awRhFiXIDnSCnZ4U0o0_JXIXgc,777
+apscheduler/executors/pool.py,sha256=h4cYgKMRhjpNHmkhlogHLbmT4O_q6HePXVLmiJIHC3c,2484
+apscheduler/executors/tornado.py,sha256=DU75VaQ9R6nBuy8lbPUvDKUgsuJcZqwAvURC5vg3r6w,1780
+apscheduler/executors/twisted.py,sha256=bRoU0C4BoVcS6_BjKD5wfUs0IJpGkmLsRAcMH2rJJss,778
+apscheduler/job.py,sha256=JCRERBpfWLuomPiNNHX-jrluEwfHkdscEmz4i0Y8rao,11216
+apscheduler/jobstores/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+apscheduler/jobstores/__pycache__/__init__.cpython-310.pyc,,
+apscheduler/jobstores/__pycache__/base.cpython-310.pyc,,
+apscheduler/jobstores/__pycache__/memory.cpython-310.pyc,,
+apscheduler/jobstores/__pycache__/mongodb.cpython-310.pyc,,
+apscheduler/jobstores/__pycache__/redis.cpython-310.pyc,,
+apscheduler/jobstores/__pycache__/rethinkdb.cpython-310.pyc,, +apscheduler/jobstores/__pycache__/sqlalchemy.cpython-310.pyc,, +apscheduler/jobstores/__pycache__/zookeeper.cpython-310.pyc,, +apscheduler/jobstores/base.py,sha256=DXzSW9XscueHZHMvy1qFiG-vYqUl_MMv0n0uBSZWXGo,4523 +apscheduler/jobstores/memory.py,sha256=ZxWiKsqfsCHFvac-6X9BztuhnuSxlOYi1dhT6g-pjQo,3655 +apscheduler/jobstores/mongodb.py,sha256=r9t2neNuzfPuf_omDm0KdkLGPZXLksiH-U3j13MIBlM,5347 +apscheduler/jobstores/redis.py,sha256=kjQDIzPXz-Yq976U9HK3aMkcCI_QRLKgTADQWKewtik,5483 +apscheduler/jobstores/rethinkdb.py,sha256=k1rSLYJqejuhQxJY3pXwHAQYcpZ1QFJsoQ8n0oEu5MM,5863 +apscheduler/jobstores/sqlalchemy.py,sha256=LIA9iSGMvuPTVqGHdztgQs4YFmYN1xqXvpJauYNK470,6529 +apscheduler/jobstores/zookeeper.py,sha256=avGLXaJGjHD0F7uG6rLJ2gg_TXNqXDEM4PqOu56f-Xg,6363 +apscheduler/schedulers/__init__.py,sha256=jM63xA_K7GSToBenhsz-SCcqfhk1pdEVb6ajwoO5Kqg,406 +apscheduler/schedulers/__pycache__/__init__.cpython-310.pyc,, +apscheduler/schedulers/__pycache__/asyncio.cpython-310.pyc,, +apscheduler/schedulers/__pycache__/background.cpython-310.pyc,, +apscheduler/schedulers/__pycache__/base.cpython-310.pyc,, +apscheduler/schedulers/__pycache__/blocking.cpython-310.pyc,, +apscheduler/schedulers/__pycache__/gevent.cpython-310.pyc,, +apscheduler/schedulers/__pycache__/qt.cpython-310.pyc,, +apscheduler/schedulers/__pycache__/tornado.cpython-310.pyc,, +apscheduler/schedulers/__pycache__/twisted.cpython-310.pyc,, +apscheduler/schedulers/asyncio.py,sha256=iJO6QUo1oW16giOU_nW8WMu2b9NTWT4Tg2gY586G08w,1994 +apscheduler/schedulers/background.py,sha256=751p-f5Di6pY4x6UXlZggpxQ5k2ObJ_Q5wSeWmKHS8o,1566 +apscheduler/schedulers/base.py,sha256=M8WWEKjG-VfyL_UF1Wgbjk01yxa45t_GXfKyvtY0RMs,43228 +apscheduler/schedulers/blocking.py,sha256=8nubfJ4PoUnAkEY6WRQG4COzG4SxGyW9PjuVPhDAbsk,985 +apscheduler/schedulers/gevent.py,sha256=csPBvV75FGcboXXsdex6fCD7J54QgBddYNdWj62ZO9g,1031 +apscheduler/schedulers/qt.py,sha256=aooX3slyDwLglojae5t2tz6NlqfceZYCeXAIS0LQVCk,1613 +apscheduler/schedulers/tornado.py,sha256=D9Vaq3Ee9EFiXa1jDy9tedI048gR_YT_LAFUWqO_uEw,1926 +apscheduler/schedulers/twisted.py,sha256=D5EBjjMRtMBxy0_aAURcULAI8Ky2IvCTr9tK9sO1rYk,1844 +apscheduler/triggers/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +apscheduler/triggers/__pycache__/__init__.cpython-310.pyc,, +apscheduler/triggers/__pycache__/base.cpython-310.pyc,, +apscheduler/triggers/__pycache__/combining.cpython-310.pyc,, +apscheduler/triggers/__pycache__/date.cpython-310.pyc,, +apscheduler/triggers/__pycache__/interval.cpython-310.pyc,, +apscheduler/triggers/base.py,sha256=BvBJdOnIeVClXPXeInzYK25cN64jAc4a9IiEQucSiVk,1355 +apscheduler/triggers/combining.py,sha256=klaSoBp1kyrPX5D3gBpNTlsGKjks5QeKPW5JN_MVs30,3449 +apscheduler/triggers/cron/__init__.py,sha256=D39BQ63qWyk6XZcSuWth46ELQ3VIFpYjUHh7Kj65Z9M,9251 +apscheduler/triggers/cron/__pycache__/__init__.cpython-310.pyc,, +apscheduler/triggers/cron/__pycache__/expressions.cpython-310.pyc,, +apscheduler/triggers/cron/__pycache__/fields.cpython-310.pyc,, +apscheduler/triggers/cron/expressions.py,sha256=hu1kq0mKvivIw7U0D0Nnrbuk3q01dCuhZ7SHRPw6qhI,9184 +apscheduler/triggers/cron/fields.py,sha256=NWPClh1NgSOpTlJ3sm1TXM_ViC2qJGKWkd_vg0xsw7o,3510 +apscheduler/triggers/date.py,sha256=RrfB1PNO9G9e91p1BOf-y_TseVHQQR-KJPhNdPpAHcU,1705 +apscheduler/triggers/interval.py,sha256=ABjcZFaGYAAgdAaUQIuLr9_dLszIifu88qaXrJmdxQ4,4377 +apscheduler/util.py,sha256=zaDgtfj1TzSZp7TGyC_57Gq96hrcrhzP41pwZ0xbBxA,13846 diff --git 
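The RECORD hunk above is the standard wheel install manifest (PEP 376/427): one CSV row per installed file giving its path, a `sha256=` digest in unpadded urlsafe base64, and its size in bytes. The RECORD file itself and the `__pycache__/*.pyc` entries carry empty hash and size fields, since their contents are not pinned. Below is a minimal sketch of verifying files on disk against such a manifest, assuming the vendored layout recorded in this diff; the function and variable names are illustrative:

```python
# Sketch: verify an installed tree against a wheel RECORD manifest.
import base64
import csv
import hashlib
from pathlib import Path

SITE = Path("python/lib/python3.10/site-packages")  # layout from this diff

def urlsafe_b64_nopad(digest: bytes) -> str:
    # RECORD hashes are urlsafe base64 with the trailing '=' padding stripped.
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def verify_record(dist_info: Path) -> list[str]:
    """Return paths whose on-disk hash or size no longer matches RECORD."""
    mismatches = []
    with open(dist_info / "RECORD", newline="") as f:
        for path, hash_spec, size in csv.reader(f):
            if not hash_spec:  # RECORD itself and *.pyc rows have no hash/size
                continue
            algo, _, expected = hash_spec.partition("=")
            data = (SITE / path).read_bytes()
            actual = urlsafe_b64_nopad(hashlib.new(algo, data).digest())
            if actual != expected or int(size) != len(data):
                mismatches.append(path)
    return mismatches

print(verify_record(SITE / "APScheduler-3.10.1.dist-info"))
```

Since every rename in this diff is a 100%-similarity move, every hashed row should still verify against the relocated tree.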
a/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/REQUESTED b/python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/WHEEL b/python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/WHEEL rename to python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/entry_points.txt b/python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/entry_points.txt similarity index 100% rename from lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/entry_points.txt rename to python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/entry_points.txt diff --git a/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/top_level.txt b/python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/APScheduler-3.10.1.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/INSTALLER b/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/Brotli-1.0.9.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/LICENSE b/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/LICENSE similarity index 100% rename from lib/python3.11/site-packages/Brotli-1.0.9.dist-info/LICENSE rename to python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/LICENSE diff --git a/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/METADATA b/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/METADATA new file mode 100644 index 0000000..0e60b84 --- /dev/null +++ b/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/METADATA @@ -0,0 +1,37 @@ +Metadata-Version: 2.1 +Name: Brotli +Version: 1.0.9 +Summary: Python bindings for the Brotli compression library +Home-page: https://github.com/google/brotli +Author: The Brotli Authors +License: MIT +Platform: Posix +Platform: MacOS X +Platform: Windows +Classifier: Development Status :: 4 - Beta +Classifier: Environment :: Console +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Operating System :: MacOS :: MacOS X +Classifier: Operating System :: Microsoft :: Windows +Classifier: Operating System :: POSIX :: Linux +Classifier: Programming Language :: C +Classifier: Programming Language :: C++ +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 2.7 +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.3 +Classifier: Programming Language :: Python :: 3.4 +Classifier: Programming Language :: Python :: 3.5 +Classifier: Programming Language :: Unix Shell +Classifier: Topic :: Software Development :: Libraries +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: System :: 
Archiving +Classifier: Topic :: System :: Archiving :: Compression +Classifier: Topic :: Text Processing :: Fonts +Classifier: Topic :: Utilities +License-File: LICENSE + +UNKNOWN + diff --git a/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/RECORD b/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/RECORD new file mode 100644 index 0000000..8eaa403 --- /dev/null +++ b/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/RECORD @@ -0,0 +1,10 @@ +Brotli-1.0.9.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +Brotli-1.0.9.dist-info/LICENSE,sha256=PRgACONpIqTo2uwRw0x68mT-1ZYtB5JK6pKMOOhmPJQ,1084 +Brotli-1.0.9.dist-info/METADATA,sha256=F-hhct3u9qHLnoc4-lY5EPl6q2CvNkaqe3XHiNPQs9E,1366 +Brotli-1.0.9.dist-info/RECORD,, +Brotli-1.0.9.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +Brotli-1.0.9.dist-info/WHEEL,sha256=N9Vl37IHX_8j8HTcRdLPw1BZ19cMYWLY0ZJat5peo4A,225 +Brotli-1.0.9.dist-info/top_level.txt,sha256=gsS54HrhO3ZveFxeMrKo_7qH4Sm4TbQ7jGLVBEqJ4NI,15 +__pycache__/brotli.cpython-310.pyc,, +_brotli.cpython-310-x86_64-linux-gnu.so,sha256=sNXEfmInKcZvYZVPKgdz987v3rpmdDZDdu8FCvnY798,6726712 +brotli.py,sha256=iwoQnPvrANtaJYTJ6mSjOQUENH7Z8RgAEbn-HxMR-kU,1857 diff --git a/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/REQUESTED b/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/Brotli-1.0.9.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/REQUESTED diff --git a/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/WHEEL b/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/WHEEL new file mode 100644 index 0000000..8a18a6e --- /dev/null +++ b/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/WHEEL @@ -0,0 +1,8 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.36.2) +Root-Is-Purelib: false +Tag: cp310-cp310-manylinux_2_5_x86_64 +Tag: cp310-cp310-manylinux1_x86_64 +Tag: cp310-cp310-manylinux_2_12_x86_64 +Tag: cp310-cp310-manylinux2010_x86_64 + diff --git a/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/top_level.txt b/python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/Brotli-1.0.9.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/AES.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/AES.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/AES.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/AES.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/AES.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/AES.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/AES.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/AES.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/ARC2.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/ARC2.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/ARC2.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/ARC2.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/ARC2.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/ARC2.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/ARC2.pyi rename to 
python/lib/python3.10/site-packages/Cryptodome/Cipher/ARC2.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/ARC4.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/ARC4.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/ARC4.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/ARC4.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/ARC4.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/ARC4.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/ARC4.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/ARC4.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/Blowfish.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/Blowfish.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/Blowfish.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/Blowfish.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/Blowfish.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/Blowfish.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/Blowfish.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/Blowfish.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/CAST.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/CAST.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/CAST.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/CAST.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/CAST.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/CAST.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/CAST.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/CAST.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/ChaCha20.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/ChaCha20.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/ChaCha20.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/ChaCha20.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/DES.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/DES.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/DES.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/DES.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/DES.pyi 
b/python/lib/python3.10/site-packages/Cryptodome/Cipher/DES.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/DES.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/DES.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/DES3.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/DES3.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/DES3.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/DES3.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/DES3.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/DES3.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/DES3.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/DES3.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_OAEP.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/PKCS1_OAEP.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_OAEP.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/PKCS1_OAEP.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_OAEP.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/PKCS1_OAEP.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_OAEP.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/PKCS1_OAEP.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_v1_5.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/PKCS1_v1_5.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_v1_5.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/PKCS1_v1_5.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_v1_5.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/PKCS1_v1_5.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_v1_5.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/PKCS1_v1_5.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/Salsa20.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/Salsa20.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/Salsa20.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/Salsa20.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/Salsa20.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/Salsa20.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/Salsa20.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/Salsa20.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_ARC4.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_ARC4.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_ARC4.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_ARC4.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_EKSBlowfish.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_EKSBlowfish.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_EKSBlowfish.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_EKSBlowfish.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_EKSBlowfish.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_EKSBlowfish.pyi similarity index 100% rename from 
lib/python3.11/site-packages/Cryptodome/Cipher/_EKSBlowfish.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_EKSBlowfish.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_Salsa20.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_Salsa20.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_Salsa20.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_Salsa20.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/__init__.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/__init__.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/__init__.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/__init__.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_chacha20.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_chacha20.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_chacha20.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_chacha20.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cbc.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_cbc.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cbc.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_cbc.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cbc.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_cbc.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cbc.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_cbc.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ccm.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ccm.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ccm.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ccm.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ccm.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ccm.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ccm.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ccm.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cfb.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_cfb.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cfb.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_cfb.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cfb.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_cfb.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cfb.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_cfb.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ctr.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ctr.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ctr.py 
rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ctr.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ctr.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ctr.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ctr.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ctr.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_eax.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_eax.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_eax.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_eax.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_eax.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_eax.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_eax.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_eax.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ecb.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ecb.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ecb.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ecb.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ecb.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ecb.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ecb.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ecb.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_gcm.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_gcm.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_gcm.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_gcm.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_gcm.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_gcm.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_gcm.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_gcm.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ocb.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ocb.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ocb.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ocb.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ocb.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ocb.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ocb.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ocb.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ofb.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ofb.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ofb.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ofb.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ofb.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ofb.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ofb.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_ofb.pyi diff --git 
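Worth noting amid the rename stream: the Brotli `WHEEL` hunk earlier declares `cp310` manylinux tags, and many of the Cryptodome extension modules being moved here are `.abi3.so` stable-ABI builds, so this vendored tree is tied to a specific interpreter family. A minimal sketch for reading those tags and comparing the interpreter part against the running Python follows; real installers also rank platform and manylinux compatibility, which this deliberately skips, so treat it as illustrative only:

```python
# Sketch: read the Tag headers from a wheel's WHEEL file (RFC 822-style headers)
# and check whether the interpreter-ABI prefix matches the running CPython.
import sys
from email.parser import Parser
from pathlib import Path

def wheel_tags(dist_info: Path) -> list[str]:
    msg = Parser().parsestr((dist_info / "WHEEL").read_text())
    return msg.get_all("Tag") or []  # WHEEL may declare several Tag lines

def interpreter_matches(tags: list[str]) -> bool:
    current = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return any(t.split("-")[0] == current for t in tags)

dist_info = Path("python/lib/python3.10/site-packages/Brotli-1.0.9.dist-info")
print(wheel_tags(dist_info))        # e.g. ['cp310-cp310-manylinux_2_5_x86_64', ...]
print(interpreter_matches(wheel_tags(dist_info)))  # True only on CPython 3.10
```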
a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_openpgp.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_openpgp.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_openpgp.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_openpgp.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_openpgp.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_openpgp.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_openpgp.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_openpgp.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_siv.py b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_siv.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_siv.py rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_siv.py diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_siv.pyi b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_siv.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_mode_siv.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_mode_siv.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_pkcs1_decode.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_pkcs1_decode.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_pkcs1_decode.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_pkcs1_decode.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_aes.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_aes.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_aes.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_aes.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_aesni.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_aesni.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_aesni.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_aesni.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_arc2.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_arc2.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_arc2.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_arc2.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_blowfish.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_blowfish.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_blowfish.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_blowfish.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cast.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_cast.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cast.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_cast.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cbc.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_cbc.abi3.so similarity index 100% rename from 
lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cbc.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_cbc.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cfb.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_cfb.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cfb.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_cfb.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ctr.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_ctr.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ctr.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_ctr.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_des.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_des.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_des.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_des.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_des3.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_des3.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_des3.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_des3.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ecb.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_ecb.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ecb.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_ecb.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_eksblowfish.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_eksblowfish.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_eksblowfish.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_eksblowfish.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ocb.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_ocb.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ocb.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_ocb.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ofb.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_ofb.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ofb.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Cipher/_raw_ofb.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2b.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/BLAKE2b.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2b.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/BLAKE2b.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2b.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/BLAKE2b.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2b.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/BLAKE2b.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2s.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/BLAKE2s.py 
similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2s.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/BLAKE2s.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2s.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/BLAKE2s.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2s.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/BLAKE2s.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/CMAC.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/CMAC.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/CMAC.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/CMAC.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/CMAC.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/CMAC.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/CMAC.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/CMAC.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/HMAC.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/HMAC.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/HMAC.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/HMAC.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/HMAC.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/HMAC.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/HMAC.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/HMAC.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/KMAC128.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/KMAC128.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/KMAC128.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/KMAC128.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/KMAC128.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/KMAC128.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/KMAC128.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/KMAC128.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/KMAC256.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/KMAC256.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/KMAC256.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/KMAC256.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/KMAC256.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/KMAC256.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/KMAC256.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/KMAC256.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/KangarooTwelve.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/KangarooTwelve.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/KangarooTwelve.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/KangarooTwelve.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/KangarooTwelve.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/KangarooTwelve.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/KangarooTwelve.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/KangarooTwelve.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/MD2.py 
b/python/lib/python3.10/site-packages/Cryptodome/Hash/MD2.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/MD2.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/MD2.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/MD2.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/MD2.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/MD2.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/MD2.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/MD4.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/MD4.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/MD4.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/MD4.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/MD4.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/MD4.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/MD4.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/MD4.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/MD5.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/MD5.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/MD5.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/MD5.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/MD5.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/MD5.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/MD5.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/MD5.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/Poly1305.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/Poly1305.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/Poly1305.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/Poly1305.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/Poly1305.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/Poly1305.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/Poly1305.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/Poly1305.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/RIPEMD.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/RIPEMD.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/RIPEMD.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/RIPEMD.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD160.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/RIPEMD160.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD160.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/RIPEMD160.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD160.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/RIPEMD160.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD160.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/RIPEMD160.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA.py 
b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA1.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA1.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA1.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA1.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA1.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA1.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA1.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA1.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA224.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA224.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA224.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA224.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA224.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA224.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA224.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA224.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA256.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA256.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA256.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA256.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA256.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA256.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA256.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA256.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA384.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA384.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA384.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA384.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA384.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA384.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA384.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA384.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_224.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_224.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA3_224.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_224.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_224.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_224.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA3_224.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_224.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_256.py 
b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_256.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA3_256.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_256.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_256.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_256.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA3_256.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_256.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_384.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_384.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA3_384.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_384.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_384.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_384.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA3_384.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_384.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_512.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_512.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA3_512.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_512.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_512.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_512.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA3_512.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA3_512.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA512.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA512.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA512.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA512.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHA512.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHA512.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHA512.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHA512.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE128.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHAKE128.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHAKE128.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHAKE128.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE128.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHAKE128.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHAKE128.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHAKE128.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE256.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHAKE256.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHAKE256.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHAKE256.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE256.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/SHAKE256.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/SHAKE256.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/SHAKE256.pyi diff 
--git a/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash128.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/TupleHash128.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/TupleHash128.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/TupleHash128.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash128.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/TupleHash128.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/TupleHash128.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/TupleHash128.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash256.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/TupleHash256.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/TupleHash256.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/TupleHash256.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash256.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/TupleHash256.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/TupleHash256.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/TupleHash256.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_BLAKE2b.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_BLAKE2b.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_BLAKE2b.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_BLAKE2b.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_BLAKE2s.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_BLAKE2s.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_BLAKE2s.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_BLAKE2s.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_MD2.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_MD2.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_MD2.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_MD2.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_MD4.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_MD4.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_MD4.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_MD4.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_MD5.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_MD5.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_MD5.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_MD5.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_RIPEMD160.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_RIPEMD160.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_RIPEMD160.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_RIPEMD160.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_SHA1.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_SHA1.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_SHA1.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_SHA1.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_SHA224.abi3.so 
b/python/lib/python3.10/site-packages/Cryptodome/Hash/_SHA224.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_SHA224.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_SHA224.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_SHA256.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_SHA256.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_SHA256.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_SHA256.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_SHA384.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_SHA384.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_SHA384.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_SHA384.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_SHA512.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_SHA512.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_SHA512.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_SHA512.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/__init__.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/__init__.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/__init__.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/__init__.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_ghash_clmul.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_ghash_clmul.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_ghash_clmul.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_ghash_clmul.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_ghash_portable.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_ghash_portable.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_ghash_portable.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_ghash_portable.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_keccak.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_keccak.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_keccak.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_keccak.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/_poly1305.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Hash/_poly1305.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/_poly1305.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Hash/_poly1305.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE128.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/cSHAKE128.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE128.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/cSHAKE128.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE128.pyi 
b/python/lib/python3.10/site-packages/Cryptodome/Hash/cSHAKE128.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE128.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/cSHAKE128.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE256.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/cSHAKE256.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE256.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/cSHAKE256.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE256.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/cSHAKE256.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE256.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/cSHAKE256.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/keccak.py b/python/lib/python3.10/site-packages/Cryptodome/Hash/keccak.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/keccak.py rename to python/lib/python3.10/site-packages/Cryptodome/Hash/keccak.py diff --git a/lib/python3.11/site-packages/Cryptodome/Hash/keccak.pyi b/python/lib/python3.10/site-packages/Cryptodome/Hash/keccak.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Hash/keccak.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Hash/keccak.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/IO/PEM.py b/python/lib/python3.10/site-packages/Cryptodome/IO/PEM.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/IO/PEM.py rename to python/lib/python3.10/site-packages/Cryptodome/IO/PEM.py diff --git a/lib/python3.11/site-packages/Cryptodome/IO/PEM.pyi b/python/lib/python3.10/site-packages/Cryptodome/IO/PEM.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/IO/PEM.pyi rename to python/lib/python3.10/site-packages/Cryptodome/IO/PEM.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/IO/PKCS8.py b/python/lib/python3.10/site-packages/Cryptodome/IO/PKCS8.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/IO/PKCS8.py rename to python/lib/python3.10/site-packages/Cryptodome/IO/PKCS8.py diff --git a/lib/python3.11/site-packages/Cryptodome/IO/PKCS8.pyi b/python/lib/python3.10/site-packages/Cryptodome/IO/PKCS8.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/IO/PKCS8.pyi rename to python/lib/python3.10/site-packages/Cryptodome/IO/PKCS8.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/IO/_PBES.py b/python/lib/python3.10/site-packages/Cryptodome/IO/_PBES.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/IO/_PBES.py rename to python/lib/python3.10/site-packages/Cryptodome/IO/_PBES.py diff --git a/lib/python3.11/site-packages/Cryptodome/IO/_PBES.pyi b/python/lib/python3.10/site-packages/Cryptodome/IO/_PBES.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/IO/_PBES.pyi rename to python/lib/python3.10/site-packages/Cryptodome/IO/_PBES.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/IO/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/IO/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/IO/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/IO/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/Math/Numbers.py b/python/lib/python3.10/site-packages/Cryptodome/Math/Numbers.py 
similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/Numbers.py rename to python/lib/python3.10/site-packages/Cryptodome/Math/Numbers.py diff --git a/lib/python3.11/site-packages/Cryptodome/Math/Numbers.pyi b/python/lib/python3.10/site-packages/Cryptodome/Math/Numbers.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/Numbers.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Math/Numbers.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Math/Primality.py b/python/lib/python3.10/site-packages/Cryptodome/Math/Primality.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/Primality.py rename to python/lib/python3.10/site-packages/Cryptodome/Math/Primality.py diff --git a/lib/python3.11/site-packages/Cryptodome/Math/Primality.pyi b/python/lib/python3.10/site-packages/Cryptodome/Math/Primality.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/Primality.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Math/Primality.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Math/_IntegerBase.py b/python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerBase.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/_IntegerBase.py rename to python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerBase.py diff --git a/lib/python3.11/site-packages/Cryptodome/Math/_IntegerBase.pyi b/python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerBase.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/_IntegerBase.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerBase.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Math/_IntegerCustom.py b/python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerCustom.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/_IntegerCustom.py rename to python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerCustom.py diff --git a/lib/python3.11/site-packages/Cryptodome/Math/_IntegerCustom.pyi b/python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerCustom.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/_IntegerCustom.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerCustom.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Math/_IntegerGMP.py b/python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerGMP.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/_IntegerGMP.py rename to python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerGMP.py diff --git a/lib/python3.11/site-packages/Cryptodome/Math/_IntegerGMP.pyi b/python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerGMP.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/_IntegerGMP.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerGMP.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Math/_IntegerNative.py b/python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerNative.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/_IntegerNative.py rename to python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerNative.py diff --git a/lib/python3.11/site-packages/Cryptodome/Math/_IntegerNative.pyi b/python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerNative.pyi similarity index 100% rename from 
lib/python3.11/site-packages/Cryptodome/Math/_IntegerNative.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Math/_IntegerNative.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Math/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/Math/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/Math/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/Math/_modexp.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Math/_modexp.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Math/_modexp.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Math/_modexp.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Protocol/KDF.py b/python/lib/python3.10/site-packages/Cryptodome/Protocol/KDF.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Protocol/KDF.py rename to python/lib/python3.10/site-packages/Cryptodome/Protocol/KDF.py diff --git a/lib/python3.11/site-packages/Cryptodome/Protocol/KDF.pyi b/python/lib/python3.10/site-packages/Cryptodome/Protocol/KDF.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Protocol/KDF.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Protocol/KDF.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Protocol/SecretSharing.py b/python/lib/python3.10/site-packages/Cryptodome/Protocol/SecretSharing.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Protocol/SecretSharing.py rename to python/lib/python3.10/site-packages/Cryptodome/Protocol/SecretSharing.py diff --git a/lib/python3.11/site-packages/Cryptodome/Protocol/SecretSharing.pyi b/python/lib/python3.10/site-packages/Cryptodome/Protocol/SecretSharing.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Protocol/SecretSharing.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Protocol/SecretSharing.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Protocol/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/Protocol/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Protocol/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/Protocol/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/Protocol/__init__.pyi b/python/lib/python3.10/site-packages/Cryptodome/Protocol/__init__.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Protocol/__init__.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Protocol/__init__.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Protocol/_scrypt.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Protocol/_scrypt.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Protocol/_scrypt.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Protocol/_scrypt.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/DSA.py b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/DSA.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/DSA.py rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/DSA.py diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/DSA.pyi b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/DSA.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/DSA.pyi rename to 
python/lib/python3.10/site-packages/Cryptodome/PublicKey/DSA.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/ECC.py b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/ECC.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/ECC.py rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/ECC.py diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/ECC.pyi b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/ECC.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/ECC.pyi rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/ECC.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/ElGamal.py b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/ElGamal.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/ElGamal.py rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/ElGamal.py diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/ElGamal.pyi b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/ElGamal.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/ElGamal.pyi rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/ElGamal.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/RSA.py b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/RSA.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/RSA.py rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/RSA.py diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/RSA.pyi b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/RSA.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/RSA.pyi rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/RSA.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/__init__.pyi b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/__init__.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/__init__.pyi rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/__init__.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/_ec_ws.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/_ec_ws.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/_ec_ws.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/_ec_ws.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/_ed25519.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/_ed25519.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/_ed25519.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/_ed25519.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/_ed448.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/_ed448.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/_ed448.abi3.so rename to 
python/lib/python3.10/site-packages/Cryptodome/PublicKey/_ed448.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/_openssh.py b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/_openssh.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/_openssh.py rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/_openssh.py diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/_openssh.pyi b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/_openssh.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/_openssh.pyi rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/_openssh.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/PublicKey/_x25519.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/PublicKey/_x25519.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/PublicKey/_x25519.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/PublicKey/_x25519.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Random/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/Random/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Random/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/Random/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/Random/__init__.pyi b/python/lib/python3.10/site-packages/Cryptodome/Random/__init__.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Random/__init__.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Random/__init__.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Random/random.py b/python/lib/python3.10/site-packages/Cryptodome/Random/random.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Random/random.py rename to python/lib/python3.10/site-packages/Cryptodome/Random/random.py diff --git a/lib/python3.11/site-packages/Cryptodome/Random/random.pyi b/python/lib/python3.10/site-packages/Cryptodome/Random/random.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Random/random.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Random/random.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/common.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/common.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/common.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/common.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_AES.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_AES.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_AES.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_AES.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ARC2.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_ARC2.py similarity index 100% rename from 
lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ARC2.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_ARC2.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ARC4.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_ARC4.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ARC4.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_ARC4.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_Blowfish.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_Blowfish.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_Blowfish.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_Blowfish.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CAST.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_CAST.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CAST.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_CAST.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CBC.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_CBC.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CBC.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_CBC.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CCM.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_CCM.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CCM.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_CCM.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CFB.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_CFB.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CFB.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_CFB.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CTR.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_CTR.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CTR.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_CTR.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20_Poly1305.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20_Poly1305.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20_Poly1305.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20_Poly1305.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_DES.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_DES.py similarity index 100% rename 
from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_DES.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_DES.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_DES3.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_DES3.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_DES3.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_DES3.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_EAX.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_EAX.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_EAX.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_EAX.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_GCM.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_GCM.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_GCM.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_GCM.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OCB.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_OCB.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OCB.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_OCB.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OFB.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_OFB.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OFB.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_OFB.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OpenPGP.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_OpenPGP.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OpenPGP.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_OpenPGP.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_SIV.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_SIV.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_SIV.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_SIV.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_Salsa20.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_Salsa20.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_Salsa20.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_Salsa20.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_15.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_15.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_15.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_15.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_oaep.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_oaep.py similarity index 100% rename from 
lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_oaep.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_oaep.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/common.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/common.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/common.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/common.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_BLAKE2.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_BLAKE2.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_BLAKE2.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_BLAKE2.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_CMAC.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_CMAC.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_CMAC.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_CMAC.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_HMAC.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_HMAC.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_HMAC.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_HMAC.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_KMAC.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_KMAC.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_KMAC.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_KMAC.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_KangarooTwelve.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_KangarooTwelve.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_KangarooTwelve.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_KangarooTwelve.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD2.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_MD2.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD2.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_MD2.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD4.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_MD4.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD4.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_MD4.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD5.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_MD5.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD5.py rename to 
python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_MD5.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_Poly1305.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_Poly1305.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_Poly1305.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_Poly1305.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_RIPEMD160.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_RIPEMD160.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_RIPEMD160.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_RIPEMD160.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA1.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA1.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA1.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA1.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA224.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA224.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA224.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA224.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA256.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA256.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA256.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA256.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA384.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA384.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA384.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA384.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_224.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_224.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_224.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_224.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_256.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_256.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_256.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_256.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_384.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_384.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_384.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_384.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_512.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_512.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_512.py rename to 
python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_512.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA512.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA512.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA512.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHA512.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHAKE.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHAKE.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHAKE.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_SHAKE.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_TupleHash.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_TupleHash.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_TupleHash.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_TupleHash.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_cSHAKE.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_cSHAKE.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_cSHAKE.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_cSHAKE.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_keccak.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_keccak.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_keccak.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Hash/test_keccak.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/IO/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/IO/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/IO/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/test_PBES.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/IO/test_PBES.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/IO/test_PBES.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/IO/test_PBES.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/test_PKCS8.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/IO/test_PKCS8.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/IO/test_PKCS8.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/IO/test_PKCS8.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Math/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Math/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Math/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_Numbers.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Math/test_Numbers.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_Numbers.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Math/test_Numbers.py diff --git 
a/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_Primality.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Math/test_Primality.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_Primality.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Math/test_Primality.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_modexp.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Math/test_modexp.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_modexp.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Math/test_modexp.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Protocol/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Protocol/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_KDF.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Protocol/test_KDF.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_KDF.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Protocol/test_KDF.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_SecretSharing.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Protocol/test_SecretSharing.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_SecretSharing.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Protocol/test_SecretSharing.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_rfc1751.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Protocol/test_rfc1751.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_rfc1751.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Protocol/test_rfc1751.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_DSA.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_DSA.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_DSA.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_DSA.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_25519.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_25519.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_25519.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_25519.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_448.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_448.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_448.py rename to 
python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_448.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_NIST.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_NIST.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_NIST.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_NIST.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ElGamal.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_ElGamal.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ElGamal.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_ElGamal.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_RSA.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_RSA.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_RSA.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_RSA.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_DSA.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_import_DSA.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_DSA.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_import_DSA.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_ECC.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_import_ECC.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_ECC.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_import_ECC.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_RSA.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_import_RSA.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_RSA.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/PublicKey/test_import_RSA.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Random/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Random/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Random/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Random/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Random/test_random.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Random/test_random.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Random/test_random.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Random/test_random.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Signature/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Signature/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_dss.py 
b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Signature/test_dss.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_dss.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Signature/test_dss.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_eddsa.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Signature/test_eddsa.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_eddsa.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Signature/test_eddsa.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_pkcs1_15.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Signature/test_pkcs1_15.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_pkcs1_15.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Signature/test_pkcs1_15.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_pss.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Signature/test_pss.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_pss.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Signature/test_pss.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Util/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_Counter.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_Counter.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_Counter.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_Counter.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_Padding.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_Padding.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_Padding.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_Padding.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_asn1.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_asn1.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_asn1.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_asn1.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_number.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_number.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_number.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_number.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_rfc1751.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_rfc1751.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_rfc1751.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_rfc1751.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_strxor.py 
b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_strxor.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_strxor.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/Util/test_strxor.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/__main__.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/__main__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/__main__.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/__main__.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/loader.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/loader.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/loader.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/loader.py diff --git a/lib/python3.11/site-packages/Cryptodome/SelfTest/st_common.py b/python/lib/python3.10/site-packages/Cryptodome/SelfTest/st_common.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/SelfTest/st_common.py rename to python/lib/python3.10/site-packages/Cryptodome/SelfTest/st_common.py diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/DSS.py b/python/lib/python3.10/site-packages/Cryptodome/Signature/DSS.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/DSS.py rename to python/lib/python3.10/site-packages/Cryptodome/Signature/DSS.py diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/DSS.pyi b/python/lib/python3.10/site-packages/Cryptodome/Signature/DSS.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/DSS.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Signature/DSS.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_PSS.py b/python/lib/python3.10/site-packages/Cryptodome/Signature/PKCS1_PSS.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_PSS.py rename to python/lib/python3.10/site-packages/Cryptodome/Signature/PKCS1_PSS.py diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_PSS.pyi b/python/lib/python3.10/site-packages/Cryptodome/Signature/PKCS1_PSS.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_PSS.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Signature/PKCS1_PSS.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_v1_5.py b/python/lib/python3.10/site-packages/Cryptodome/Signature/PKCS1_v1_5.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_v1_5.py rename to python/lib/python3.10/site-packages/Cryptodome/Signature/PKCS1_v1_5.py diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_v1_5.pyi b/python/lib/python3.10/site-packages/Cryptodome/Signature/PKCS1_v1_5.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_v1_5.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Signature/PKCS1_v1_5.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/__init__.py 
b/python/lib/python3.10/site-packages/Cryptodome/Signature/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/Signature/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/eddsa.py b/python/lib/python3.10/site-packages/Cryptodome/Signature/eddsa.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/eddsa.py rename to python/lib/python3.10/site-packages/Cryptodome/Signature/eddsa.py diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/eddsa.pyi b/python/lib/python3.10/site-packages/Cryptodome/Signature/eddsa.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/eddsa.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Signature/eddsa.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/pkcs1_15.py b/python/lib/python3.10/site-packages/Cryptodome/Signature/pkcs1_15.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/pkcs1_15.py rename to python/lib/python3.10/site-packages/Cryptodome/Signature/pkcs1_15.py diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/pkcs1_15.pyi b/python/lib/python3.10/site-packages/Cryptodome/Signature/pkcs1_15.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/pkcs1_15.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Signature/pkcs1_15.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/pss.py b/python/lib/python3.10/site-packages/Cryptodome/Signature/pss.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/pss.py rename to python/lib/python3.10/site-packages/Cryptodome/Signature/pss.py diff --git a/lib/python3.11/site-packages/Cryptodome/Signature/pss.pyi b/python/lib/python3.10/site-packages/Cryptodome/Signature/pss.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Signature/pss.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Signature/pss.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Util/Counter.py b/python/lib/python3.10/site-packages/Cryptodome/Util/Counter.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/Counter.py rename to python/lib/python3.10/site-packages/Cryptodome/Util/Counter.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/Counter.pyi b/python/lib/python3.10/site-packages/Cryptodome/Util/Counter.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/Counter.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Util/Counter.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Util/Padding.py b/python/lib/python3.10/site-packages/Cryptodome/Util/Padding.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/Padding.py rename to python/lib/python3.10/site-packages/Cryptodome/Util/Padding.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/Padding.pyi b/python/lib/python3.10/site-packages/Cryptodome/Util/Padding.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/Padding.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Util/Padding.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Util/RFC1751.py b/python/lib/python3.10/site-packages/Cryptodome/Util/RFC1751.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/RFC1751.py rename to 
python/lib/python3.10/site-packages/Cryptodome/Util/RFC1751.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/RFC1751.pyi b/python/lib/python3.10/site-packages/Cryptodome/Util/RFC1751.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/RFC1751.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Util/RFC1751.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Util/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/Util/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/Util/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/_cpu_features.py b/python/lib/python3.10/site-packages/Cryptodome/Util/_cpu_features.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/_cpu_features.py rename to python/lib/python3.10/site-packages/Cryptodome/Util/_cpu_features.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/_cpu_features.pyi b/python/lib/python3.10/site-packages/Cryptodome/Util/_cpu_features.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/_cpu_features.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Util/_cpu_features.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Util/_cpuid_c.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Util/_cpuid_c.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/_cpuid_c.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Util/_cpuid_c.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Util/_file_system.py b/python/lib/python3.10/site-packages/Cryptodome/Util/_file_system.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/_file_system.py rename to python/lib/python3.10/site-packages/Cryptodome/Util/_file_system.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/_file_system.pyi b/python/lib/python3.10/site-packages/Cryptodome/Util/_file_system.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/_file_system.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Util/_file_system.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Util/_raw_api.py b/python/lib/python3.10/site-packages/Cryptodome/Util/_raw_api.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/_raw_api.py rename to python/lib/python3.10/site-packages/Cryptodome/Util/_raw_api.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/_raw_api.pyi b/python/lib/python3.10/site-packages/Cryptodome/Util/_raw_api.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/_raw_api.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Util/_raw_api.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Util/_strxor.abi3.so b/python/lib/python3.10/site-packages/Cryptodome/Util/_strxor.abi3.so similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/_strxor.abi3.so rename to python/lib/python3.10/site-packages/Cryptodome/Util/_strxor.abi3.so diff --git a/lib/python3.11/site-packages/Cryptodome/Util/asn1.py b/python/lib/python3.10/site-packages/Cryptodome/Util/asn1.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/asn1.py rename to python/lib/python3.10/site-packages/Cryptodome/Util/asn1.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/asn1.pyi 
b/python/lib/python3.10/site-packages/Cryptodome/Util/asn1.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/asn1.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Util/asn1.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Util/number.py b/python/lib/python3.10/site-packages/Cryptodome/Util/number.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/number.py rename to python/lib/python3.10/site-packages/Cryptodome/Util/number.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/number.pyi b/python/lib/python3.10/site-packages/Cryptodome/Util/number.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/number.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Util/number.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Util/py3compat.py b/python/lib/python3.10/site-packages/Cryptodome/Util/py3compat.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/py3compat.py rename to python/lib/python3.10/site-packages/Cryptodome/Util/py3compat.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/py3compat.pyi b/python/lib/python3.10/site-packages/Cryptodome/Util/py3compat.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/py3compat.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Util/py3compat.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/Util/strxor.py b/python/lib/python3.10/site-packages/Cryptodome/Util/strxor.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/strxor.py rename to python/lib/python3.10/site-packages/Cryptodome/Util/strxor.py diff --git a/lib/python3.11/site-packages/Cryptodome/Util/strxor.pyi b/python/lib/python3.10/site-packages/Cryptodome/Util/strxor.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/Util/strxor.pyi rename to python/lib/python3.10/site-packages/Cryptodome/Util/strxor.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/__init__.py b/python/lib/python3.10/site-packages/Cryptodome/__init__.py similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/__init__.py rename to python/lib/python3.10/site-packages/Cryptodome/__init__.py diff --git a/lib/python3.11/site-packages/Cryptodome/__init__.pyi b/python/lib/python3.10/site-packages/Cryptodome/__init__.pyi similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/__init__.pyi rename to python/lib/python3.10/site-packages/Cryptodome/__init__.pyi diff --git a/lib/python3.11/site-packages/Cryptodome/py.typed b/python/lib/python3.10/site-packages/Cryptodome/py.typed similarity index 100% rename from lib/python3.11/site-packages/Cryptodome/py.typed rename to python/lib/python3.10/site-packages/Cryptodome/py.typed diff --git a/lib/python3.11/site-packages/Flask-2.2.5.dist-info/INSTALLER b/python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/Flask-2.2.5.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/Flask-2.2.5.dist-info/LICENSE.rst b/python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/LICENSE.rst similarity index 100% rename from lib/python3.11/site-packages/Flask-2.2.5.dist-info/LICENSE.rst rename to python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/LICENSE.rst diff --git 
a/lib/python3.11/site-packages/Flask-2.2.5.dist-info/METADATA b/python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/METADATA
similarity index 100%
rename from lib/python3.11/site-packages/Flask-2.2.5.dist-info/METADATA
rename to python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/METADATA
diff --git a/python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/RECORD b/python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/RECORD
new file mode 100644
index 0000000..2238824
--- /dev/null
+++ b/python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/RECORD
@@ -0,0 +1,54 @@
+../../../bin/flask,sha256=j5_uZoZtL2Oy-uOdUqnhGvBIO4MiDDm_QplSYEcKt90,239
+Flask-2.2.5.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+Flask-2.2.5.dist-info/LICENSE.rst,sha256=SJqOEQhQntmKN7uYPhHg9-HTHwvY-Zp5yESOf_N9B-o,1475
+Flask-2.2.5.dist-info/METADATA,sha256=rZTjr5v4M7HB-zC-w2Y0ZU96OYSGBb-Hm15jlLJhs3g,3889
+Flask-2.2.5.dist-info/RECORD,,
+Flask-2.2.5.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+Flask-2.2.5.dist-info/WHEEL,sha256=pkctZYzUS4AYVn6dJ-7367OJZivF2e8RA9b_ZBjif18,92
+Flask-2.2.5.dist-info/entry_points.txt,sha256=s3MqQpduU25y4dq3ftBYD6bMVdVnbMpZP-sUNw0zw0k,41
+Flask-2.2.5.dist-info/top_level.txt,sha256=dvi65F6AeGWVU0TBpYiC04yM60-FX1gJFkK31IKQr5c,6
+flask/__init__.py,sha256=GJgAILDWhW_DQljuoJ4pk9zBUy70zPPu-VZ6kLyiVI4,2890
+flask/__main__.py,sha256=bYt9eEaoRQWdejEHFD8REx9jxVEdZptECFsV7F49Ink,30
+flask/__pycache__/__init__.cpython-310.pyc,,
+flask/__pycache__/__main__.cpython-310.pyc,,
+flask/__pycache__/app.cpython-310.pyc,,
+flask/__pycache__/blueprints.cpython-310.pyc,,
+flask/__pycache__/cli.cpython-310.pyc,,
+flask/__pycache__/config.cpython-310.pyc,,
+flask/__pycache__/ctx.cpython-310.pyc,,
+flask/__pycache__/debughelpers.cpython-310.pyc,,
+flask/__pycache__/globals.cpython-310.pyc,,
+flask/__pycache__/helpers.cpython-310.pyc,,
+flask/__pycache__/logging.cpython-310.pyc,,
+flask/__pycache__/scaffold.cpython-310.pyc,,
+flask/__pycache__/sessions.cpython-310.pyc,,
+flask/__pycache__/signals.cpython-310.pyc,,
+flask/__pycache__/templating.cpython-310.pyc,,
+flask/__pycache__/testing.cpython-310.pyc,,
+flask/__pycache__/typing.cpython-310.pyc,,
+flask/__pycache__/views.cpython-310.pyc,,
+flask/__pycache__/wrappers.cpython-310.pyc,,
+flask/app.py,sha256=ue4tEeDnr3m-eSEwz7OJ1_wafSYl3fl6eo-NLFgNNJQ,99141
+flask/blueprints.py,sha256=fenhKP_Sh5eU6qtWeHacg1GVeun4pQzK2vq8sNDd1hY,27266
+flask/cli.py,sha256=pLmnWObe_G4_ZAFQdh7kgwqPMxRXm4oUhaUSBpJMeq4,33532
+flask/config.py,sha256=Ubo_juzSYsAKqD2vD3vm6mjsPo3EOJDdSEzYq8lKTJI,12585
+flask/ctx.py,sha256=bGEQQuF2_cHqZ3ZNMeMeEG8HOLJkDlL88u2BBxCrRao,14829
+flask/debughelpers.py,sha256=_RvAL3TW5lqMJeCVWtTU6rSDJC7jnRaBL6OEkVmooyU,5511
+flask/globals.py,sha256=EX0XdX73BTWdVF0UHDSNet2ER3kI6sKveo3_o5IOs98,3187
+flask/helpers.py,sha256=XTHRgLlyxeEzR988q63-4OY8RswTscR-5exFxK10CLU,25280
+flask/json/__init__.py,sha256=TOwldHT3_kFaXHlORKi9yCWt7dbPNB0ovdHHQWlSRzY,11175
+flask/json/__pycache__/__init__.cpython-310.pyc,,
+flask/json/__pycache__/provider.cpython-310.pyc,,
+flask/json/__pycache__/tag.cpython-310.pyc,,
+flask/json/provider.py,sha256=jXCNypf11PN4ngQjEt6LnSdCWQ1yHIAkNLHlXQlCB-A,10674
+flask/json/tag.py,sha256=fys3HBLssWHuMAIJuTcf2K0bCtosePBKXIWASZEEjnU,8857
+flask/logging.py,sha256=WYng0bLTRS_CJrocGcCLJpibHf1lygHE_pg-KoUIQ4w,2293
+flask/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+flask/scaffold.py,sha256=EKx-Tr5BXLzeKKvq3ZAi_2oUQVZuC4OJSJTocyDXsSo,35958
+flask/sessions.py,sha256=adWCRnJYETJcjjhlcvUgZR5S0DMqKQctS0nzkY9g9Us,15927
+flask/signals.py,sha256=H7QwDciK-dtBxinjKpexpglP0E6k0MJILiFWTItfmqU,2136
+flask/templating.py,sha256=1P4OzvSnA2fsJTYgQT3G4owVKsuOz8XddCiR6jMHGJ0,7419
+flask/testing.py,sha256=JtHRQY7mIH39SM4S51svAr8e7Xk87dqMb30Z6Dyv9TA,10706
+flask/typing.py,sha256=KgxegTF9v9WvuongeF8LooIvpZPauzGrq9ZXf3gBlYc,2969
+flask/views.py,sha256=LulttWL4owVFlgwrJi8GCNM4inC3xbs2IBlY31bdCS4,6765
+flask/wrappers.py,sha256=el3tn1LgSUV0eNGgYMjKICT5I7qGJgbpIhvci4nrwQ8,5702
diff --git a/lib/python3.11/site-packages/Flask-2.2.5.dist-info/REQUESTED b/python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/REQUESTED
similarity index 100%
rename from lib/python3.11/site-packages/Flask-2.2.5.dist-info/REQUESTED
rename to python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/REQUESTED
diff --git a/lib/python3.11/site-packages/Flask-2.2.5.dist-info/WHEEL b/python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/WHEEL
similarity index 100%
rename from lib/python3.11/site-packages/Flask-2.2.5.dist-info/WHEEL
rename to python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/WHEEL
diff --git a/lib/python3.11/site-packages/Flask-2.2.5.dist-info/entry_points.txt b/python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/entry_points.txt
similarity index 100%
rename from lib/python3.11/site-packages/Flask-2.2.5.dist-info/entry_points.txt
rename to python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/entry_points.txt
diff --git a/lib/python3.11/site-packages/Flask-2.2.5.dist-info/top_level.txt b/python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/top_level.txt
similarity index 100%
rename from lib/python3.11/site-packages/Flask-2.2.5.dist-info/top_level.txt
rename to python/lib/python3.10/site-packages/Flask-2.2.5.dist-info/top_level.txt
diff --git a/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/INSTALLER b/python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/INSTALLER
similarity index 100%
rename from lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/INSTALLER
rename to python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/INSTALLER
diff --git a/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/LICENSE.rst b/python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/LICENSE.rst
similarity index 100%
rename from lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/LICENSE.rst
rename to python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/LICENSE.rst
diff --git a/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/METADATA b/python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/METADATA
similarity index 100%
rename from lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/METADATA
rename to python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/METADATA
diff --git a/python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/RECORD b/python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/RECORD
new file mode 100644
index 0000000..c65e7ba
--- /dev/null
+++ b/python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/RECORD
@@ -0,0 +1,59 @@
+Jinja2-3.1.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+Jinja2-3.1.2.dist-info/LICENSE.rst,sha256=O0nc7kEF6ze6wQ-vG-JgQI_oXSUrjp3y4JefweCUQ3s,1475
+Jinja2-3.1.2.dist-info/METADATA,sha256=PZ6v2SIidMNixR7MRUX9f7ZWsPwtXanknqiZUmRbh4U,3539
+Jinja2-3.1.2.dist-info/RECORD,,
+Jinja2-3.1.2.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+Jinja2-3.1.2.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92
+Jinja2-3.1.2.dist-info/entry_points.txt,sha256=zRd62fbqIyfUpsRtU7EVIFyiu1tPwfgO7EvPErnxgTE,59
+Jinja2-3.1.2.dist-info/top_level.txt,sha256=PkeVWtLb3-CqjWi1fO29OCbj55EhX_chhKrCdrVe_zs,7
+jinja2/__init__.py,sha256=8vGduD8ytwgD6GDSqpYc2m3aU-T7PKOAddvVXgGr_Fs,1927
+jinja2/__pycache__/__init__.cpython-310.pyc,,
+jinja2/__pycache__/_identifier.cpython-310.pyc,,
+jinja2/__pycache__/async_utils.cpython-310.pyc,,
+jinja2/__pycache__/bccache.cpython-310.pyc,,
+jinja2/__pycache__/compiler.cpython-310.pyc,,
+jinja2/__pycache__/constants.cpython-310.pyc,,
+jinja2/__pycache__/debug.cpython-310.pyc,,
+jinja2/__pycache__/defaults.cpython-310.pyc,,
+jinja2/__pycache__/environment.cpython-310.pyc,,
+jinja2/__pycache__/exceptions.cpython-310.pyc,,
+jinja2/__pycache__/ext.cpython-310.pyc,,
+jinja2/__pycache__/filters.cpython-310.pyc,,
+jinja2/__pycache__/idtracking.cpython-310.pyc,,
+jinja2/__pycache__/lexer.cpython-310.pyc,,
+jinja2/__pycache__/loaders.cpython-310.pyc,,
+jinja2/__pycache__/meta.cpython-310.pyc,,
+jinja2/__pycache__/nativetypes.cpython-310.pyc,,
+jinja2/__pycache__/nodes.cpython-310.pyc,,
+jinja2/__pycache__/optimizer.cpython-310.pyc,,
+jinja2/__pycache__/parser.cpython-310.pyc,,
+jinja2/__pycache__/runtime.cpython-310.pyc,,
+jinja2/__pycache__/sandbox.cpython-310.pyc,,
+jinja2/__pycache__/tests.cpython-310.pyc,,
+jinja2/__pycache__/utils.cpython-310.pyc,,
+jinja2/__pycache__/visitor.cpython-310.pyc,,
+jinja2/_identifier.py,sha256=_zYctNKzRqlk_murTNlzrju1FFJL7Va_Ijqqd7ii2lU,1958
+jinja2/async_utils.py,sha256=dHlbTeaxFPtAOQEYOGYh_PHcDT0rsDaUJAFDl_0XtTg,2472
+jinja2/bccache.py,sha256=mhz5xtLxCcHRAa56azOhphIAe19u1we0ojifNMClDio,14061
+jinja2/compiler.py,sha256=Gs-N8ThJ7OWK4-reKoO8Wh1ZXz95MVphBKNVf75qBr8,72172
+jinja2/constants.py,sha256=GMoFydBF_kdpaRKPoM5cl5MviquVRLVyZtfp5-16jg0,1433
+jinja2/debug.py,sha256=iWJ432RadxJNnaMOPrjIDInz50UEgni3_HKuFXi2vuQ,6299
+jinja2/defaults.py,sha256=boBcSw78h-lp20YbaXSJsqkAI2uN_mD_TtCydpeq5wU,1267
+jinja2/environment.py,sha256=6uHIcc7ZblqOMdx_uYNKqRnnwAF0_nzbyeMP9FFtuh4,61349
+jinja2/exceptions.py,sha256=ioHeHrWwCWNaXX1inHmHVblvc4haO7AXsjCp3GfWvx0,5071
+jinja2/ext.py,sha256=ivr3P7LKbddiXDVez20EflcO3q2aHQwz9P_PgWGHVqE,31502
+jinja2/filters.py,sha256=9js1V-h2RlyW90IhLiBGLM2U-k6SCy2F4BUUMgB3K9Q,53509
+jinja2/idtracking.py,sha256=GfNmadir4oDALVxzn3DL9YInhJDr69ebXeA2ygfuCGA,10704
+jinja2/lexer.py,sha256=DW2nX9zk-6MWp65YR2bqqj0xqCvLtD-u9NWT8AnFRxQ,29726
+jinja2/loaders.py,sha256=BfptfvTVpClUd-leMkHczdyPNYFzp_n7PKOJ98iyHOg,23207
+jinja2/meta.py,sha256=GNPEvifmSaU3CMxlbheBOZjeZ277HThOPUTf1RkppKQ,4396
+jinja2/nativetypes.py,sha256=DXgORDPRmVWgy034H0xL8eF7qYoK3DrMxs-935d0Fzk,4226
+jinja2/nodes.py,sha256=i34GPRAZexXMT6bwuf5SEyvdmS-bRCy9KMjwN5O6pjk,34550
+jinja2/optimizer.py,sha256=tHkMwXxfZkbfA1KmLcqmBMSaz7RLIvvItrJcPoXTyD8,1650
+jinja2/parser.py,sha256=nHd-DFHbiygvfaPtm9rcQXJChZG7DPsWfiEsqfwKerY,39595
+jinja2/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+jinja2/runtime.py,sha256=5CmD5BjbEJxSiDNTFBeKCaq8qU4aYD2v6q2EluyExms,33476
+jinja2/sandbox.py,sha256=Y0xZeXQnH6EX5VjaV2YixESxoepnRbW_3UeQosaBU3M,14584
+jinja2/tests.py,sha256=Am5Z6Lmfr2XaH_npIfJJ8MdXtWsbLjMULZJulTAj30E,5905
+jinja2/utils.py,sha256=u9jXESxGn8ATZNVolwmkjUVu4SA-tLgV0W7PcSfPfdQ,23965
+jinja2/visitor.py,sha256=MH14C6yq24G_KVtWzjwaI7Wg14PCJIYlWW1kpkxYak0,3568
diff --git a/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/REQUESTED b/python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/REQUESTED
similarity index 100%
rename from lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/REQUESTED
rename to python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/REQUESTED
diff --git a/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/WHEEL b/python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/WHEEL
similarity index 100%
rename from lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/WHEEL
rename to python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/WHEEL
diff --git a/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/entry_points.txt b/python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/entry_points.txt
similarity index 100%
rename from lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/entry_points.txt
rename to python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/entry_points.txt
diff --git a/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/top_level.txt b/python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/top_level.txt
similarity index 100%
rename from lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/top_level.txt
rename to python/lib/python3.10/site-packages/Jinja2-3.1.2.dist-info/top_level.txt
diff --git a/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/INSTALLER b/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/INSTALLER
similarity index 100%
rename from lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/INSTALLER
rename to python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/INSTALLER
diff --git a/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/LICENSE.rst b/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/LICENSE.rst
similarity index 100%
rename from lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/LICENSE.rst
rename to python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/LICENSE.rst
diff --git a/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/METADATA b/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/METADATA
similarity index 100%
rename from lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/METADATA
rename to python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/METADATA
diff --git a/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/RECORD b/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/RECORD
new file mode 100644
index 0000000..7686087
--- /dev/null
+++ b/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/RECORD
@@ -0,0 +1,15 @@
+MarkupSafe-2.1.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+MarkupSafe-2.1.3.dist-info/LICENSE.rst,sha256=SJqOEQhQntmKN7uYPhHg9-HTHwvY-Zp5yESOf_N9B-o,1475
+MarkupSafe-2.1.3.dist-info/METADATA,sha256=Wvvh4Tz-YtW24YagYdqrrrBdm9m-DjTdqJWhxlbU6-0,3003
+MarkupSafe-2.1.3.dist-info/RECORD,,
+MarkupSafe-2.1.3.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+MarkupSafe-2.1.3.dist-info/WHEEL,sha256=iZaXX0Td62Nww8bojl0E84uJHjT41csHPKZmbUBbJPs,152
+MarkupSafe-2.1.3.dist-info/top_level.txt,sha256=qy0Plje5IJuvsCBjejJyhDCjEAdcDLK_2agVcex8Z6U,11
+markupsafe/__init__.py,sha256=xIItqrn1Bwi7FxPJO9rCVQBG0Evewue1Tl4BV0l9xEs,10338
+markupsafe/__pycache__/__init__.cpython-310.pyc,,
+markupsafe/__pycache__/_native.cpython-310.pyc,,
+markupsafe/_native.py,sha256=GR86Qvo_GcgKmKreA1WmYN9ud17OFwkww8E-fiW-57s,1713
+markupsafe/_speedups.c,sha256=X2XvQVtIdcK4Usz70BvkzoOfjTCmQlDkkjYSn-swE0g,7083
+markupsafe/_speedups.cpython-310-x86_64-linux-gnu.so,sha256=huh9xBZy3L1q1ar3y-f44Ozfa25Rg6xiomsq8MThk_Y,44240
+markupsafe/_speedups.pyi,sha256=vfMCsOgbAXRNLUXkyuyonG8uEWKYU4PDqNuMaDELAYw,229 +markupsafe/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 diff --git a/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/REQUESTED b/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/REQUESTED diff --git a/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/WHEEL b/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/WHEEL new file mode 100644 index 0000000..2d1b4b8 --- /dev/null +++ b/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/WHEEL @@ -0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.40.0) +Root-Is-Purelib: false +Tag: cp310-cp310-manylinux_2_17_x86_64 +Tag: cp310-cp310-manylinux2014_x86_64 + diff --git a/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/top_level.txt b/python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/MarkupSafe-2.1.3.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/INSTALLER b/python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/LICENSE.rst b/python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/LICENSE.rst similarity index 100% rename from lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/LICENSE.rst rename to python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/LICENSE.rst diff --git a/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/METADATA b/python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/METADATA rename to python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/RECORD b/python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/RECORD new file mode 100644 index 0000000..73a2a7e --- /dev/null +++ b/python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/RECORD @@ -0,0 +1,99 @@ +Werkzeug-2.2.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +Werkzeug-2.2.3.dist-info/LICENSE.rst,sha256=O0nc7kEF6ze6wQ-vG-JgQI_oXSUrjp3y4JefweCUQ3s,1475 +Werkzeug-2.2.3.dist-info/METADATA,sha256=TIyameVEp5p52N9E1mTWWabY6g1sB0Dm25vznZQeXPQ,4416 +Werkzeug-2.2.3.dist-info/RECORD,, +Werkzeug-2.2.3.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +Werkzeug-2.2.3.dist-info/WHEEL,sha256=2wepM1nk4DS4eFpYrW1TTqPcoGNfHhhO_i5m4cOimbo,92 +Werkzeug-2.2.3.dist-info/top_level.txt,sha256=QRyj2VjwJoQkrwjwFIOlB8Xg3r9un0NtqVHQF-15xaw,9 +werkzeug/__init__.py,sha256=Hr0lQweC21HXPVBemSpBJUIzrbq2mn8h70J1h30QcqY,188 +werkzeug/__pycache__/__init__.cpython-310.pyc,, +werkzeug/__pycache__/_internal.cpython-310.pyc,, +werkzeug/__pycache__/_reloader.cpython-310.pyc,, +werkzeug/__pycache__/datastructures.cpython-310.pyc,, +werkzeug/__pycache__/exceptions.cpython-310.pyc,, 
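Aside on the cp310 tags in the MarkupSafe WHEEL file above, and the .cpython-310-x86_64-linux-gnu.so suffixes throughout these RECORDs: compiled extension modules are bound to one interpreter ABI, which is why the vendored tree is being renamed from python3.11 to python3.10 rather than simply re-pointed. A quick stdlib check of what the running interpreter will actually load (the values in comments are what a CPython 3.10 Linux x86_64 build would print; they vary per machine):

    import sys
    import sysconfig

    # The tag CPython stamps on .pyc files and expects in wheel/extension names.
    print(sys.implementation.cache_tag)             # e.g. cpython-310
    # The only filename suffix this interpreter imports extension modules under.
    print(sysconfig.get_config_var('EXT_SUFFIX'))   # e.g. .cpython-310-x86_64-linux-gnu.so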
+werkzeug/__pycache__/formparser.cpython-310.pyc,, +werkzeug/__pycache__/http.cpython-310.pyc,, +werkzeug/__pycache__/local.cpython-310.pyc,, +werkzeug/__pycache__/security.cpython-310.pyc,, +werkzeug/__pycache__/serving.cpython-310.pyc,, +werkzeug/__pycache__/test.cpython-310.pyc,, +werkzeug/__pycache__/testapp.cpython-310.pyc,, +werkzeug/__pycache__/urls.cpython-310.pyc,, +werkzeug/__pycache__/user_agent.cpython-310.pyc,, +werkzeug/__pycache__/utils.cpython-310.pyc,, +werkzeug/__pycache__/wsgi.cpython-310.pyc,, +werkzeug/_internal.py,sha256=4lwshe63pFlCo0h2IMcmvhbugA50QXQvfLD5VoY5c4Q,16271 +werkzeug/_reloader.py,sha256=hiP0z4bi6p_8UIJOtq7K0BV2dqCik5yztWLsDXeI_WE,14285 +werkzeug/datastructures.py,sha256=v2WYfs1rb1OuQgXyLripHQFwgodrfTNCd5P5f8n3ueA,97081 +werkzeug/datastructures.pyi,sha256=HRzDLc7A6qnwluhNqn6AT76CsLZIkAbVVqxn0AbfV-s,34506 +werkzeug/debug/__init__.py,sha256=wfJ2OmljsO5C0e0sXJpTUiG6bwGU6uHtFDDDMfJfQJk,18877 +werkzeug/debug/__pycache__/__init__.cpython-310.pyc,, +werkzeug/debug/__pycache__/console.cpython-310.pyc,, +werkzeug/debug/__pycache__/repr.cpython-310.pyc,, +werkzeug/debug/__pycache__/tbtools.cpython-310.pyc,, +werkzeug/debug/console.py,sha256=dechqiCtHfs0AQZWZofUC1S97tCuvwDgT0gdha5KwWM,6208 +werkzeug/debug/repr.py,sha256=vF3TLnYBohYr8V6Gz13PTJspQs42uv3gUJSzSbmHJBo,9472 +werkzeug/debug/shared/ICON_LICENSE.md,sha256=DhA6Y1gUl5Jwfg0NFN9Rj4VWITt8tUx0IvdGf0ux9-s,222 +werkzeug/debug/shared/console.png,sha256=bxax6RXXlvOij_KeqvSNX0ojJf83YbnZ7my-3Gx9w2A,507 +werkzeug/debug/shared/debugger.js,sha256=tg42SZs1SVmYWZ-_Fj5ELK5-FLHnGNQrei0K2By8Bw8,10521 +werkzeug/debug/shared/less.png,sha256=-4-kNRaXJSONVLahrQKUxMwXGm9R4OnZ9SxDGpHlIR4,191 +werkzeug/debug/shared/more.png,sha256=GngN7CioHQoV58rH6ojnkYi8c_qED2Aka5FO5UXrReY,200 +werkzeug/debug/shared/style.css,sha256=-xSxzUEZGw_IqlDR5iZxitNl8LQUjBM-_Y4UAvXVH8g,6078 +werkzeug/debug/tbtools.py,sha256=6iohJovtBSFRAcgX7_aRY4r3e19PLj3FavYB3RM4CmA,13263 +werkzeug/exceptions.py,sha256=8-KOXguQkOLoBUdN-7x_WyHT92TcAmjTWNwG4t8QYIg,26527 +werkzeug/formparser.py,sha256=DBRbbAnzspYUBzgfxPaZC7MjGAK_m5QTvdWoyvrhw4o,16516 +werkzeug/http.py,sha256=NqJjYCt8tKn2XOEKPApq4L3q8zb8YFq3GFOe5gsonI4,42776 +werkzeug/local.py,sha256=v-HEqr4bLpLHl4upCj97MOfUyCjW10Tp6mcNaFRFyew,22288 +werkzeug/middleware/__init__.py,sha256=qfqgdT5npwG9ses3-FXQJf3aB95JYP1zchetH_T3PUw,500 +werkzeug/middleware/__pycache__/__init__.cpython-310.pyc,, +werkzeug/middleware/__pycache__/dispatcher.cpython-310.pyc,, +werkzeug/middleware/__pycache__/http_proxy.cpython-310.pyc,, +werkzeug/middleware/__pycache__/lint.cpython-310.pyc,, +werkzeug/middleware/__pycache__/profiler.cpython-310.pyc,, +werkzeug/middleware/__pycache__/proxy_fix.cpython-310.pyc,, +werkzeug/middleware/__pycache__/shared_data.cpython-310.pyc,, +werkzeug/middleware/dispatcher.py,sha256=Fh_w-KyWnTSYF-Lfv5dimQ7THSS7afPAZMmvc4zF1gg,2580 +werkzeug/middleware/http_proxy.py,sha256=HE8VyhS7CR-E1O6_9b68huv8FLgGGR1DLYqkS3Xcp3Q,7558 +werkzeug/middleware/lint.py,sha256=1w_UVKkAwq5wjjtCcDCDZwhAhWzPSZ0aDyUmbjAEeXw,13952 +werkzeug/middleware/profiler.py,sha256=7pWYDYPC774S0-HYLkG3Uge58PGUMX7tWp_Cor3etvo,4883 +werkzeug/middleware/proxy_fix.py,sha256=l7LC_LDu0Yd4SvUxS5SFigAJMzcIOGm6LNKl9IXJBSU,6974 +werkzeug/middleware/shared_data.py,sha256=fXjrEkuqxUVLG1DLrOdQLc96QQdjftCBZ1oM5oK89h4,9528 +werkzeug/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +werkzeug/routing/__init__.py,sha256=HpvahY7WwkLdV4Cq3Bsc3GrqNon4u6t8-vhbb9E5o00,4819 +werkzeug/routing/__pycache__/__init__.cpython-310.pyc,, 
+werkzeug/routing/__pycache__/converters.cpython-310.pyc,, +werkzeug/routing/__pycache__/exceptions.cpython-310.pyc,, +werkzeug/routing/__pycache__/map.cpython-310.pyc,, +werkzeug/routing/__pycache__/matcher.cpython-310.pyc,, +werkzeug/routing/__pycache__/rules.cpython-310.pyc,, +werkzeug/routing/converters.py,sha256=05bkekg64vLC6mqqK4ddBh589WH9yBsjtW8IJhdUBvw,6968 +werkzeug/routing/exceptions.py,sha256=RklUDL9ajOv2fTcRNj4pb18Bs4Y-GKk4rIeTSfsqkks,4737 +werkzeug/routing/map.py,sha256=XN4ZjzEF1SfMxtdov89SDE-1_U78KVnnobTfnHzqbmE,36757 +werkzeug/routing/matcher.py,sha256=6VvQYCCOjyj1JKUZKuAiVA_U1nXtvvJ70pSbBUdL_1k,7509 +werkzeug/routing/rules.py,sha256=3YsPpI9ZGcNmFiV2Go2Td1DvZ9ZdaMMnvGP1o17aMfc,31836 +werkzeug/sansio/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +werkzeug/sansio/__pycache__/__init__.cpython-310.pyc,, +werkzeug/sansio/__pycache__/http.cpython-310.pyc,, +werkzeug/sansio/__pycache__/multipart.cpython-310.pyc,, +werkzeug/sansio/__pycache__/request.cpython-310.pyc,, +werkzeug/sansio/__pycache__/response.cpython-310.pyc,, +werkzeug/sansio/__pycache__/utils.cpython-310.pyc,, +werkzeug/sansio/http.py,sha256=k3nREBfU-r8fXCfSTKQenys25q9bzUOvdY-OVGrqztA,5107 +werkzeug/sansio/multipart.py,sha256=vMZ85cvLD55clUTcTin2DtBv2GQRGh0_fExklnXKHoI,10055 +werkzeug/sansio/request.py,sha256=SiGcx2cz-l81TlCCrKrT2fePqC64hN8fSg5Ig6J6vRs,20175 +werkzeug/sansio/response.py,sha256=UTl-teQDDjovrZMkjj3ZQsHw-JtiFak5JfKEk1_vBYU,26026 +werkzeug/sansio/utils.py,sha256=EjbqdHdT-JZWgjUQaaWSgBUIRprXUkrsMQQqJlJHpVU,4847 +werkzeug/security.py,sha256=7TVI0L62emBHAh-1RHB_KlwGYcE08pPCyU674Ho4aNE,4653 +werkzeug/serving.py,sha256=XCiHFbMCFCgecKycgajhF4rFsGoemeN0xW1eTQqNt-g,37558 +werkzeug/test.py,sha256=uMahfM81RqEN3d3Sp4SkN36Pi8oZpV6dTgFY0cW1_2c,48126 +werkzeug/testapp.py,sha256=RJhT_2JweNiMKe304N3bF1zaIeMpRx-CIMERdeydfTY,9404 +werkzeug/urls.py,sha256=Q9Si-eVh7yxk3rwkzrwGRm146FXVXgg9lBP3k0HUfVM,36600 +werkzeug/user_agent.py,sha256=WclZhpvgLurMF45hsioSbS75H1Zb4iMQGKN3_yZ2oKo,1420 +werkzeug/utils.py,sha256=BDX5_7OCMVgl-ib84bCEdBG5MVvrxaSlfdg7Cxh4ND0,25174 +werkzeug/wrappers/__init__.py,sha256=kGyK7rOud3qCxll_jFyW15YarJhj1xtdf3ocx9ZheB8,120 +werkzeug/wrappers/__pycache__/__init__.cpython-310.pyc,, +werkzeug/wrappers/__pycache__/request.cpython-310.pyc,, +werkzeug/wrappers/__pycache__/response.cpython-310.pyc,, +werkzeug/wrappers/request.py,sha256=XmpTThXytTdznbPJnIsfdoIAvdi-THzTJQ9DsvARhn4,24026 +werkzeug/wrappers/response.py,sha256=ii1IaN2eUfoB-tBqbn_46fCB_SVVL8Fu4qd6cu0AlEY,34963 +werkzeug/wsgi.py,sha256=-VKI2iwCgLb-VToIZeBpdutkTETxy9HkIwgcFC5orkU,36060 diff --git a/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/REQUESTED b/python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/WHEEL b/python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/WHEEL rename to python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/top_level.txt b/python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/top_level.txt rename to 
python/lib/python3.10/site-packages/Werkzeug-2.2.3.dist-info/top_level.txt diff --git a/python/lib/python3.10/site-packages/_brotli.cpython-310-x86_64-linux-gnu.so b/python/lib/python3.10/site-packages/_brotli.cpython-310-x86_64-linux-gnu.so new file mode 100755 index 0000000..3dacfef Binary files /dev/null and b/python/lib/python3.10/site-packages/_brotli.cpython-310-x86_64-linux-gnu.so differ diff --git a/python/lib/python3.10/site-packages/_distutils_hack/__init__.py b/python/lib/python3.10/site-packages/_distutils_hack/__init__.py new file mode 100644 index 0000000..f707416 --- /dev/null +++ b/python/lib/python3.10/site-packages/_distutils_hack/__init__.py @@ -0,0 +1,132 @@
+import sys
+import os
+import re
+import importlib
+import warnings
+
+
+is_pypy = '__pypy__' in sys.builtin_module_names
+
+
+warnings.filterwarnings('ignore',
+                        r'.+ distutils\b.+ deprecated',
+                        DeprecationWarning)
+
+
+def warn_distutils_present():
+    if 'distutils' not in sys.modules:
+        return
+    if is_pypy and sys.version_info < (3, 7):
+        # PyPy for 3.6 unconditionally imports distutils, so bypass the warning
+        # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250
+        return
+    warnings.warn(
+        "Distutils was imported before Setuptools, but importing Setuptools "
+        "also replaces the `distutils` module in `sys.modules`. This may lead "
+        "to undesirable behaviors or errors. To avoid these issues, avoid "
+        "using distutils directly, ensure that setuptools is installed in the "
+        "traditional way (e.g. not an editable install), and/or make sure "
+        "that setuptools is always imported before distutils.")
+
+
+def clear_distutils():
+    if 'distutils' not in sys.modules:
+        return
+    warnings.warn("Setuptools is replacing distutils.")
+    mods = [name for name in sys.modules if re.match(r'distutils\b', name)]
+    for name in mods:
+        del sys.modules[name]
+
+
+def enabled():
+    """
+    Allow selection of distutils by environment variable.
+    """
+    which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'stdlib')
+    return which == 'local'
+
+
+def ensure_local_distutils():
+    clear_distutils()
+
+    # With the DistutilsMetaFinder in place,
+    # perform an import to cause distutils to be
+    # loaded from setuptools._distutils. Ref #2906.
+    add_shim()
+    importlib.import_module('distutils')
+    remove_shim()
+
+    # check that submodules load as expected
+    core = importlib.import_module('distutils.core')
+    assert '_distutils' in core.__file__, core.__file__
+
+
+def do_override():
+    """
+    Ensure that the local copy of distutils is preferred over stdlib.
+
+    See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401
+    for more motivation.
+    """
+    if enabled():
+        warn_distutils_present()
+        ensure_local_distutils()
+
+
+class DistutilsMetaFinder:
+    def find_spec(self, fullname, path, target=None):
+        if path is not None:
+            return
+
+        method_name = 'spec_for_{fullname}'.format(**locals())
+        method = getattr(self, method_name, lambda: None)
+        return method()
+
+    def spec_for_distutils(self):
+        import importlib.abc
+        import importlib.util
+
+        class DistutilsLoader(importlib.abc.Loader):
+
+            def create_module(self, spec):
+                return importlib.import_module('setuptools._distutils')
+
+            def exec_module(self, module):
+                pass
+
+        return importlib.util.spec_from_loader('distutils', DistutilsLoader())
+
+    def spec_for_pip(self):
+        """
+        Ensure stdlib distutils when running under pip.
+        See pypa/pip#8761 for rationale.
+        """
+        if self.pip_imported_during_build():
+            return
+        clear_distutils()
+        self.spec_for_distutils = lambda: None
+
+    @staticmethod
+    def pip_imported_during_build():
+        """
+        Detect if pip is being imported in a build script. Ref #2355.
+        """
+        import traceback
+        return any(
+            frame.f_globals['__file__'].endswith('setup.py')
+            for frame, line in traceback.walk_stack(None)
+        )
+
+
+DISTUTILS_FINDER = DistutilsMetaFinder()
+
+
+def add_shim():
+    sys.meta_path.insert(0, DISTUTILS_FINDER)
+
+
+def remove_shim():
+    try:
+        sys.meta_path.remove(DISTUTILS_FINDER)
+    except ValueError:
+        pass
diff --git a/lib/python3.11/site-packages/_distutils_hack/override.py b/python/lib/python3.10/site-packages/_distutils_hack/override.py similarity index 100% rename from lib/python3.11/site-packages/_distutils_hack/override.py rename to python/lib/python3.10/site-packages/_distutils_hack/override.py diff --git a/lib/python3.11/site-packages/apscheduler/__init__.py b/python/lib/python3.10/site-packages/apscheduler/__init__.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/__init__.py rename to python/lib/python3.10/site-packages/apscheduler/__init__.py diff --git a/lib/python3.11/site-packages/apscheduler/events.py b/python/lib/python3.10/site-packages/apscheduler/events.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/events.py rename to python/lib/python3.10/site-packages/apscheduler/events.py diff --git a/lib/python3.11/site-packages/apscheduler/executors/__init__.py b/python/lib/python3.10/site-packages/apscheduler/executors/__init__.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/executors/__init__.py rename to python/lib/python3.10/site-packages/apscheduler/executors/__init__.py diff --git a/lib/python3.11/site-packages/apscheduler/executors/asyncio.py b/python/lib/python3.10/site-packages/apscheduler/executors/asyncio.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/executors/asyncio.py rename to python/lib/python3.10/site-packages/apscheduler/executors/asyncio.py diff --git a/lib/python3.11/site-packages/apscheduler/executors/base.py b/python/lib/python3.10/site-packages/apscheduler/executors/base.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/executors/base.py rename to python/lib/python3.10/site-packages/apscheduler/executors/base.py diff --git a/lib/python3.11/site-packages/apscheduler/executors/base_py3.py b/python/lib/python3.10/site-packages/apscheduler/executors/base_py3.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/executors/base_py3.py rename to python/lib/python3.10/site-packages/apscheduler/executors/base_py3.py diff --git a/lib/python3.11/site-packages/apscheduler/executors/debug.py b/python/lib/python3.10/site-packages/apscheduler/executors/debug.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/executors/debug.py rename to python/lib/python3.10/site-packages/apscheduler/executors/debug.py diff --git a/lib/python3.11/site-packages/apscheduler/executors/gevent.py b/python/lib/python3.10/site-packages/apscheduler/executors/gevent.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/executors/gevent.py rename to python/lib/python3.10/site-packages/apscheduler/executors/gevent.py diff --git a/lib/python3.11/site-packages/apscheduler/executors/pool.py b/python/lib/python3.10/site-packages/apscheduler/executors/pool.py similarity index 100% rename from
lib/python3.11/site-packages/apscheduler/executors/pool.py rename to python/lib/python3.10/site-packages/apscheduler/executors/pool.py diff --git a/lib/python3.11/site-packages/apscheduler/executors/tornado.py b/python/lib/python3.10/site-packages/apscheduler/executors/tornado.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/executors/tornado.py rename to python/lib/python3.10/site-packages/apscheduler/executors/tornado.py diff --git a/lib/python3.11/site-packages/apscheduler/executors/twisted.py b/python/lib/python3.10/site-packages/apscheduler/executors/twisted.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/executors/twisted.py rename to python/lib/python3.10/site-packages/apscheduler/executors/twisted.py diff --git a/lib/python3.11/site-packages/apscheduler/job.py b/python/lib/python3.10/site-packages/apscheduler/job.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/job.py rename to python/lib/python3.10/site-packages/apscheduler/job.py diff --git a/lib/python3.11/site-packages/apscheduler/jobstores/__init__.py b/python/lib/python3.10/site-packages/apscheduler/jobstores/__init__.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/jobstores/__init__.py rename to python/lib/python3.10/site-packages/apscheduler/jobstores/__init__.py diff --git a/lib/python3.11/site-packages/apscheduler/jobstores/base.py b/python/lib/python3.10/site-packages/apscheduler/jobstores/base.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/jobstores/base.py rename to python/lib/python3.10/site-packages/apscheduler/jobstores/base.py diff --git a/lib/python3.11/site-packages/apscheduler/jobstores/memory.py b/python/lib/python3.10/site-packages/apscheduler/jobstores/memory.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/jobstores/memory.py rename to python/lib/python3.10/site-packages/apscheduler/jobstores/memory.py diff --git a/lib/python3.11/site-packages/apscheduler/jobstores/mongodb.py b/python/lib/python3.10/site-packages/apscheduler/jobstores/mongodb.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/jobstores/mongodb.py rename to python/lib/python3.10/site-packages/apscheduler/jobstores/mongodb.py diff --git a/lib/python3.11/site-packages/apscheduler/jobstores/redis.py b/python/lib/python3.10/site-packages/apscheduler/jobstores/redis.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/jobstores/redis.py rename to python/lib/python3.10/site-packages/apscheduler/jobstores/redis.py diff --git a/lib/python3.11/site-packages/apscheduler/jobstores/rethinkdb.py b/python/lib/python3.10/site-packages/apscheduler/jobstores/rethinkdb.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/jobstores/rethinkdb.py rename to python/lib/python3.10/site-packages/apscheduler/jobstores/rethinkdb.py diff --git a/lib/python3.11/site-packages/apscheduler/jobstores/sqlalchemy.py b/python/lib/python3.10/site-packages/apscheduler/jobstores/sqlalchemy.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/jobstores/sqlalchemy.py rename to python/lib/python3.10/site-packages/apscheduler/jobstores/sqlalchemy.py diff --git a/lib/python3.11/site-packages/apscheduler/jobstores/zookeeper.py b/python/lib/python3.10/site-packages/apscheduler/jobstores/zookeeper.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/jobstores/zookeeper.py rename to 
python/lib/python3.10/site-packages/apscheduler/jobstores/zookeeper.py diff --git a/lib/python3.11/site-packages/apscheduler/schedulers/__init__.py b/python/lib/python3.10/site-packages/apscheduler/schedulers/__init__.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/schedulers/__init__.py rename to python/lib/python3.10/site-packages/apscheduler/schedulers/__init__.py diff --git a/lib/python3.11/site-packages/apscheduler/schedulers/asyncio.py b/python/lib/python3.10/site-packages/apscheduler/schedulers/asyncio.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/schedulers/asyncio.py rename to python/lib/python3.10/site-packages/apscheduler/schedulers/asyncio.py diff --git a/lib/python3.11/site-packages/apscheduler/schedulers/background.py b/python/lib/python3.10/site-packages/apscheduler/schedulers/background.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/schedulers/background.py rename to python/lib/python3.10/site-packages/apscheduler/schedulers/background.py diff --git a/lib/python3.11/site-packages/apscheduler/schedulers/base.py b/python/lib/python3.10/site-packages/apscheduler/schedulers/base.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/schedulers/base.py rename to python/lib/python3.10/site-packages/apscheduler/schedulers/base.py diff --git a/lib/python3.11/site-packages/apscheduler/schedulers/blocking.py b/python/lib/python3.10/site-packages/apscheduler/schedulers/blocking.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/schedulers/blocking.py rename to python/lib/python3.10/site-packages/apscheduler/schedulers/blocking.py diff --git a/lib/python3.11/site-packages/apscheduler/schedulers/gevent.py b/python/lib/python3.10/site-packages/apscheduler/schedulers/gevent.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/schedulers/gevent.py rename to python/lib/python3.10/site-packages/apscheduler/schedulers/gevent.py diff --git a/lib/python3.11/site-packages/apscheduler/schedulers/qt.py b/python/lib/python3.10/site-packages/apscheduler/schedulers/qt.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/schedulers/qt.py rename to python/lib/python3.10/site-packages/apscheduler/schedulers/qt.py diff --git a/lib/python3.11/site-packages/apscheduler/schedulers/tornado.py b/python/lib/python3.10/site-packages/apscheduler/schedulers/tornado.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/schedulers/tornado.py rename to python/lib/python3.10/site-packages/apscheduler/schedulers/tornado.py diff --git a/lib/python3.11/site-packages/apscheduler/schedulers/twisted.py b/python/lib/python3.10/site-packages/apscheduler/schedulers/twisted.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/schedulers/twisted.py rename to python/lib/python3.10/site-packages/apscheduler/schedulers/twisted.py diff --git a/lib/python3.11/site-packages/apscheduler/triggers/__init__.py b/python/lib/python3.10/site-packages/apscheduler/triggers/__init__.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/triggers/__init__.py rename to python/lib/python3.10/site-packages/apscheduler/triggers/__init__.py diff --git a/lib/python3.11/site-packages/apscheduler/triggers/base.py b/python/lib/python3.10/site-packages/apscheduler/triggers/base.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/triggers/base.py rename to 
python/lib/python3.10/site-packages/apscheduler/triggers/base.py diff --git a/lib/python3.11/site-packages/apscheduler/triggers/combining.py b/python/lib/python3.10/site-packages/apscheduler/triggers/combining.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/triggers/combining.py rename to python/lib/python3.10/site-packages/apscheduler/triggers/combining.py diff --git a/lib/python3.11/site-packages/apscheduler/triggers/cron/__init__.py b/python/lib/python3.10/site-packages/apscheduler/triggers/cron/__init__.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/triggers/cron/__init__.py rename to python/lib/python3.10/site-packages/apscheduler/triggers/cron/__init__.py diff --git a/lib/python3.11/site-packages/apscheduler/triggers/cron/expressions.py b/python/lib/python3.10/site-packages/apscheduler/triggers/cron/expressions.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/triggers/cron/expressions.py rename to python/lib/python3.10/site-packages/apscheduler/triggers/cron/expressions.py diff --git a/lib/python3.11/site-packages/apscheduler/triggers/cron/fields.py b/python/lib/python3.10/site-packages/apscheduler/triggers/cron/fields.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/triggers/cron/fields.py rename to python/lib/python3.10/site-packages/apscheduler/triggers/cron/fields.py diff --git a/lib/python3.11/site-packages/apscheduler/triggers/date.py b/python/lib/python3.10/site-packages/apscheduler/triggers/date.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/triggers/date.py rename to python/lib/python3.10/site-packages/apscheduler/triggers/date.py diff --git a/lib/python3.11/site-packages/apscheduler/triggers/interval.py b/python/lib/python3.10/site-packages/apscheduler/triggers/interval.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/triggers/interval.py rename to python/lib/python3.10/site-packages/apscheduler/triggers/interval.py diff --git a/lib/python3.11/site-packages/apscheduler/util.py b/python/lib/python3.10/site-packages/apscheduler/util.py similarity index 100% rename from lib/python3.11/site-packages/apscheduler/util.py rename to python/lib/python3.10/site-packages/apscheduler/util.py diff --git a/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/INSTALLER b/python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/LICENSE b/python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/LICENSE similarity index 100% rename from lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/LICENSE rename to python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/LICENSE diff --git a/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/METADATA b/python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/METADATA rename to python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/RECORD b/python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/RECORD new file mode 100644 index 0000000..874ed0c 
--- /dev/null +++ b/python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/RECORD @@ -0,0 +1,11 @@ +async_timeout-4.0.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +async_timeout-4.0.2.dist-info/LICENSE,sha256=4Y17uPUT4sRrtYXJS1hb0wcg3TzLId2weG9y0WZY-Sw,568 +async_timeout-4.0.2.dist-info/METADATA,sha256=2pfMxxBst5vQ7SfMy5TDaDU0cRgCSQa7wcD5eI-Ew-8,4193 +async_timeout-4.0.2.dist-info/RECORD,, +async_timeout-4.0.2.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +async_timeout-4.0.2.dist-info/WHEEL,sha256=ewwEueio1C2XeHTvT17n8dZUJgOvyCWCt0WVNLClP9o,92 +async_timeout-4.0.2.dist-info/top_level.txt,sha256=9oM4e7Twq8iD_7_Q3Mz0E6GPIB6vJvRFo-UBwUQtBDU,14 +async_timeout-4.0.2.dist-info/zip-safe,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1 +async_timeout/__init__.py,sha256=N-JUI_VExhHnO0emkF_-h08dl4HBgOje16N4Ci-W-go,7487 +async_timeout/__pycache__/__init__.cpython-310.pyc,, +async_timeout/py.typed,sha256=tyozzRT1fziXETDxokmuyt6jhOmtjUbnVNJdZcG7ik0,12 diff --git a/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/REQUESTED b/python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/WHEEL b/python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/WHEEL rename to python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/top_level.txt b/python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/zip-safe b/python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/zip-safe similarity index 100% rename from lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/zip-safe rename to python/lib/python3.10/site-packages/async_timeout-4.0.2.dist-info/zip-safe diff --git a/lib/python3.11/site-packages/async_timeout/__init__.py b/python/lib/python3.10/site-packages/async_timeout/__init__.py similarity index 100% rename from lib/python3.11/site-packages/async_timeout/__init__.py rename to python/lib/python3.10/site-packages/async_timeout/__init__.py diff --git a/lib/python3.11/site-packages/async_timeout/py.typed b/python/lib/python3.10/site-packages/async_timeout/py.typed similarity index 100% rename from lib/python3.11/site-packages/async_timeout/py.typed rename to python/lib/python3.10/site-packages/async_timeout/py.typed diff --git a/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/INSTALLER b/python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/METADATA b/python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/METADATA similarity index 100% rename from 
lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/METADATA rename to python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/RECORD b/python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/RECORD new file mode 100644 index 0000000..ada1395 --- /dev/null +++ b/python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/RECORD @@ -0,0 +1,72 @@ +beautifulsoup4-4.12.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +beautifulsoup4-4.12.2.dist-info/METADATA,sha256=M6TF9wpbgywQQvtehBohLTEr2f8e7cw909PZ3Xsk3N4,3556 +beautifulsoup4-4.12.2.dist-info/RECORD,, +beautifulsoup4-4.12.2.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +beautifulsoup4-4.12.2.dist-info/WHEEL,sha256=Fd6mP6ydyRguakwUJ05oBE7fh2IPxgtDN9IwHJ9OqJQ,87 +beautifulsoup4-4.12.2.dist-info/licenses/AUTHORS,sha256=uSIdbrBb1sobdXl7VrlUvuvim2dN9kF3MH4Edn0WKGE,2176 +beautifulsoup4-4.12.2.dist-info/licenses/LICENSE,sha256=VbTY1LHlvIbRDvrJG3TIe8t3UmsPW57a-LnNKtxzl7I,1441 +bs4/__init__.py,sha256=4QO9qbbubMeEQw46YDLWEYS1yAebwKYR5l_Se9E_Gxo,33822 +bs4/__pycache__/__init__.cpython-310.pyc,, +bs4/__pycache__/css.cpython-310.pyc,, +bs4/__pycache__/dammit.cpython-310.pyc,, +bs4/__pycache__/diagnose.cpython-310.pyc,, +bs4/__pycache__/element.cpython-310.pyc,, +bs4/__pycache__/formatter.cpython-310.pyc,, +bs4/builder/__init__.py,sha256=KGBl_FgX1KV1wBIshW4EXlWjP3KLcRiF2opZ-zVcyAc,24393 +bs4/builder/__pycache__/__init__.cpython-310.pyc,, +bs4/builder/__pycache__/_html5lib.cpython-310.pyc,, +bs4/builder/__pycache__/_htmlparser.cpython-310.pyc,, +bs4/builder/__pycache__/_lxml.cpython-310.pyc,, +bs4/builder/_html5lib.py,sha256=LnhimXrUdKujKoHHbmzwNk8OBb11YfTRFXUwhZjwqow,19078 +bs4/builder/_htmlparser.py,sha256=2j4Kj0dFi86vD-OblQRaFFCsRXuWb1VdBGJVPxKKEUc,14919 +bs4/builder/_lxml.py,sha256=ik6BFGnxAzV2-21S_Wc-7ZeA174muSA_ZhmpnAe3g0E,14904 +bs4/css.py,sha256=gqGaHRrKeCRF3gDqxzeU0uclOCeSsTpuW9gUaSnJeWc,10077 +bs4/dammit.py,sha256=G0cQfsEqfwJ-FIQMkXgCJwSHMn7t9vPepCrud6fZEKk,41158 +bs4/diagnose.py,sha256=uAwdDugL_67tB-BIwDIFLFbiuzGxP2wQzJJ4_bGYUrA,7195 +bs4/element.py,sha256=R-HP8gtZPFJ71Rl4ieIBct1I9VTErTAD9FW64Jtg6Sc,92716 +bs4/formatter.py,sha256=fE8Xf9SrHvTZcv_zDpgtOGWk3OIWENPoeKcwhuMJnDs,7184 +bs4/tests/__init__.py,sha256=usdUEP_PwnDfhCdx9rQw9HLWRyc4k9goB6ErZT9aAc0,48391 +bs4/tests/__pycache__/__init__.cpython-310.pyc,, +bs4/tests/__pycache__/test_builder.cpython-310.pyc,, +bs4/tests/__pycache__/test_builder_registry.cpython-310.pyc,, +bs4/tests/__pycache__/test_css.cpython-310.pyc,, +bs4/tests/__pycache__/test_dammit.cpython-310.pyc,, +bs4/tests/__pycache__/test_docs.cpython-310.pyc,, +bs4/tests/__pycache__/test_element.cpython-310.pyc,, +bs4/tests/__pycache__/test_formatter.cpython-310.pyc,, +bs4/tests/__pycache__/test_fuzz.cpython-310.pyc,, +bs4/tests/__pycache__/test_html5lib.cpython-310.pyc,, +bs4/tests/__pycache__/test_htmlparser.cpython-310.pyc,, +bs4/tests/__pycache__/test_lxml.cpython-310.pyc,, +bs4/tests/__pycache__/test_navigablestring.cpython-310.pyc,, +bs4/tests/__pycache__/test_pageelement.cpython-310.pyc,, +bs4/tests/__pycache__/test_soup.cpython-310.pyc,, +bs4/tests/__pycache__/test_tag.cpython-310.pyc,, +bs4/tests/__pycache__/test_tree.cpython-310.pyc,, +bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase,sha256=Uv_dx4a43TSfoNkjU-jHW2nSXkqHFg4XdAw7SWVObUk,23 
+bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4999465949331456.testcase,sha256=OEyVA0Ej4FxswOElrUNt0In4s4YhrmtaxE_NHGZvGtg,30 +bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase,sha256=3d8z65o4p7Rur-RmCHoOjzqaYQ8EAtjmiBYTHNyAdl4,19469 +bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase,sha256=2bq3S8KxZgk8EajLReHD8m4_0Lj_nrkyJAxB_z_U0D0,5 +bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5843991618256896.testcase,sha256=MZDu31LPLfgu6jP9IZkrlwNes3f_sL8WFP5BChkUKdY,35 +bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5984173902397440.testcase,sha256=w58r-s6besG5JwPXpnz37W2YTj9-_qxFbk6hiEnKeIQ,51495 +bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6124268085182464.testcase,sha256=q8rkdMECEXKcqVhOf5zWHkSBTQeOPt0JiLg2TZiPCuk,10380 +bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6241471367348224.testcase,sha256=QfzoOxKwNuqG-4xIrea6MOQLXhfAAOQJ0r9u-J6kSNs,19 +bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6450958476902400.testcase,sha256=EItOpSdeD4ewK-qgJ9vtxennwn_huguzXgctrUT7fqE,3546 +bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6600557255327744.testcase,sha256=a2aJTG4FceGSJXsjtxoS8S4jk_8rZsS3aznLkeO2_dY,124 +bs4/tests/fuzz/crash-0d306a50c8ed8bcd0785b67000fcd5dea1d33f08.testcase,sha256=jRFRtCKlP3-3EDLc_iVRTcE6JNymv0rYcVM6qRaPrxI,2607 +bs4/tests/test_builder.py,sha256=nc2JE5EMrEf-p24qhf2R8qAV5PpFiOuNpYCmtmCjlTI,1115 +bs4/tests/test_builder_registry.py,sha256=7WLj2prjSHGphebnrjQuI6JYr03Uy_c9_CkaFSQ9HRo,5114 +bs4/tests/test_css.py,sha256=jCcgIWem3lyPa5AjhAk9S6fWI07hk1rg0v8coD7bEtI,17279 +bs4/tests/test_dammit.py,sha256=MbSmRN6VEP0Rm56-w6Ja0TW8eC-8ZxOJ-wXWVf_hRi8,15451 +bs4/tests/test_docs.py,sha256=xoAxnUfoQ7aRqGImwW_9BJDU8WNMZHIuvWqVepvWXt8,1127 +bs4/tests/test_element.py,sha256=92oRSRoGk8gIXAbAGHErKzocx2MK32TqcQdUJ-dGQMo,2377 +bs4/tests/test_formatter.py,sha256=eTzj91Lmhv90z-WiHjK3sBJZm0hRk0crFY1TZaXstCY,4148 +bs4/tests/test_fuzz.py,sha256=wXfic-J9-sv4C3upnTeZju_PIa9NxktOD_zw3Ek0u9w,3637 +bs4/tests/test_html5lib.py,sha256=2-ipm-_MaPt37WTxEd5DodUTNhS4EbLFKPRaO6XSCW4,8322 +bs4/tests/test_htmlparser.py,sha256=wnngcIlzjEwH21JFfu_mgt6JdpLt0ncJfLcGT7HeGw0,6256 +bs4/tests/test_lxml.py,sha256=nQCmLt7bWk0id7xMumZw--PzEe1xF9PTQn3lvHyNC6I,7635 +bs4/tests/test_navigablestring.py,sha256=RGSgziNf7cZnYdEPsoqL1B2I68TUJp1JmEQVxbh_ryA,5081 +bs4/tests/test_pageelement.py,sha256=VdGjUxx3RhjqmNsJ92ao6VZC_YD7T8mdLkDZjosOYeE,14274 +bs4/tests/test_soup.py,sha256=JmnAPLE1_GXm0wmwEUN7icdvBz9HDch-qoU2mT_TDrs,19877 +bs4/tests/test_tag.py,sha256=f19uie7QehvgvhIqNWfjDRR4TKa-ftm_RRoo6LXZyqk,9016 +bs4/tests/test_tree.py,sha256=n9nTQOzJb3-ZnZ6AkmMdZQ5TYcTUPnqHoVgal0mYXfg,48129 diff --git a/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/REQUESTED b/python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/WHEEL b/python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/WHEEL rename to python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/AUTHORS 
b/python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/AUTHORS similarity index 100% rename from lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/AUTHORS rename to python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/AUTHORS diff --git a/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/LICENSE b/python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/LICENSE similarity index 100% rename from lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/LICENSE rename to python/lib/python3.10/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/LICENSE diff --git a/lib/python3.11/site-packages/brotli.py b/python/lib/python3.10/site-packages/brotli.py similarity index 100% rename from lib/python3.11/site-packages/brotli.py rename to python/lib/python3.10/site-packages/brotli.py diff --git a/lib/python3.11/site-packages/certifi-2023.5.7.dist-info/INSTALLER b/python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/certifi-2023.5.7.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/INSTALLER diff --git a/python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/METADATA b/python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/METADATA new file mode 100644 index 0000000..c77c7aa --- /dev/null +++ b/python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/METADATA @@ -0,0 +1,24 @@ +Metadata-Version: 2.1 +Name: bs4 +Version: 0.0.1 +Summary: Screen-scraping library +Home-page: https://pypi.python.org/pypi/beautifulsoup4 +Author: Leonard Richardson +Author-email: leonardr@segfault.org +License: MIT +Download-URL: http://www.crummy.com/software/BeautifulSoup/bs4/download/ +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 2 +Classifier: Programming Language :: Python :: 3 +Classifier: Topic :: Text Processing :: Markup :: HTML +Classifier: Topic :: Text Processing :: Markup :: XML +Classifier: Topic :: Text Processing :: Markup :: SGML +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Requires-Dist: beautifulsoup4 + +Use `beautifulsoup4 `_ instead. 
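Aside on the bs4 0.0.1 METADATA above: it describes an empty placeholder distribution whose only effect is its Requires-Dist: beautifulsoup4 line, so installing "bs4" pulls in the real beautifulsoup4 package. With the tree installed, the stdlib can confirm this (expected output noted in comments):

    from importlib.metadata import metadata, requires

    md = metadata('bs4')
    print(md['Name'], md['Version'])   # expected: bs4 0.0.1
    print(requires('bs4'))             # expected: ['beautifulsoup4']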
+ diff --git a/python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/RECORD b/python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/RECORD new file mode 100644 index 0000000..674a799 --- /dev/null +++ b/python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/RECORD @@ -0,0 +1,6 @@ +bs4-0.0.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +bs4-0.0.1.dist-info/METADATA,sha256=bVRbr82MkIf0VeeUP0gfl3SouLVKkecr32puiylQwCs,938 +bs4-0.0.1.dist-info/RECORD,, +bs4-0.0.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +bs4-0.0.1.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +bs4-0.0.1.dist-info/top_level.txt,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1 diff --git a/lib/python3.11/site-packages/certifi-2023.5.7.dist-info/REQUESTED b/python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/certifi-2023.5.7.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/click-8.1.3.dist-info/WHEEL b/python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/click-8.1.3.dist-info/WHEEL rename to python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/top_level.txt b/python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/top_level.txt rename to python/lib/python3.10/site-packages/bs4-0.0.1.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/bs4/__init__.py b/python/lib/python3.10/site-packages/bs4/__init__.py similarity index 100% rename from lib/python3.11/site-packages/bs4/__init__.py rename to python/lib/python3.10/site-packages/bs4/__init__.py diff --git a/lib/python3.11/site-packages/bs4/builder/__init__.py b/python/lib/python3.10/site-packages/bs4/builder/__init__.py similarity index 100% rename from lib/python3.11/site-packages/bs4/builder/__init__.py rename to python/lib/python3.10/site-packages/bs4/builder/__init__.py diff --git a/lib/python3.11/site-packages/bs4/builder/_html5lib.py b/python/lib/python3.10/site-packages/bs4/builder/_html5lib.py similarity index 100% rename from lib/python3.11/site-packages/bs4/builder/_html5lib.py rename to python/lib/python3.10/site-packages/bs4/builder/_html5lib.py diff --git a/lib/python3.11/site-packages/bs4/builder/_htmlparser.py b/python/lib/python3.10/site-packages/bs4/builder/_htmlparser.py similarity index 100% rename from lib/python3.11/site-packages/bs4/builder/_htmlparser.py rename to python/lib/python3.10/site-packages/bs4/builder/_htmlparser.py diff --git a/lib/python3.11/site-packages/bs4/builder/_lxml.py b/python/lib/python3.10/site-packages/bs4/builder/_lxml.py similarity index 100% rename from lib/python3.11/site-packages/bs4/builder/_lxml.py rename to python/lib/python3.10/site-packages/bs4/builder/_lxml.py diff --git a/lib/python3.11/site-packages/bs4/css.py b/python/lib/python3.10/site-packages/bs4/css.py similarity index 100% rename from lib/python3.11/site-packages/bs4/css.py rename to python/lib/python3.10/site-packages/bs4/css.py diff --git a/lib/python3.11/site-packages/bs4/dammit.py b/python/lib/python3.10/site-packages/bs4/dammit.py similarity index 100% rename from lib/python3.11/site-packages/bs4/dammit.py rename to 
python/lib/python3.10/site-packages/bs4/dammit.py diff --git a/lib/python3.11/site-packages/bs4/diagnose.py b/python/lib/python3.10/site-packages/bs4/diagnose.py similarity index 100% rename from lib/python3.11/site-packages/bs4/diagnose.py rename to python/lib/python3.10/site-packages/bs4/diagnose.py diff --git a/lib/python3.11/site-packages/bs4/element.py b/python/lib/python3.10/site-packages/bs4/element.py similarity index 100% rename from lib/python3.11/site-packages/bs4/element.py rename to python/lib/python3.10/site-packages/bs4/element.py diff --git a/lib/python3.11/site-packages/bs4/formatter.py b/python/lib/python3.10/site-packages/bs4/formatter.py similarity index 100% rename from lib/python3.11/site-packages/bs4/formatter.py rename to python/lib/python3.10/site-packages/bs4/formatter.py diff --git a/lib/python3.11/site-packages/bs4/tests/__init__.py b/python/lib/python3.10/site-packages/bs4/tests/__init__.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/__init__.py rename to python/lib/python3.10/site-packages/bs4/tests/__init__.py diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4999465949331456.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4999465949331456.testcase similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4999465949331456.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4999465949331456.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5843991618256896.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5843991618256896.testcase similarity index 100% rename from 
lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5843991618256896.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5843991618256896.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5984173902397440.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5984173902397440.testcase similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5984173902397440.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5984173902397440.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6124268085182464.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6124268085182464.testcase similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6124268085182464.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6124268085182464.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6241471367348224.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6241471367348224.testcase similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6241471367348224.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6241471367348224.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6450958476902400.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6450958476902400.testcase similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6450958476902400.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6450958476902400.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6600557255327744.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6600557255327744.testcase similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6600557255327744.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-6600557255327744.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/fuzz/crash-0d306a50c8ed8bcd0785b67000fcd5dea1d33f08.testcase b/python/lib/python3.10/site-packages/bs4/tests/fuzz/crash-0d306a50c8ed8bcd0785b67000fcd5dea1d33f08.testcase similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/fuzz/crash-0d306a50c8ed8bcd0785b67000fcd5dea1d33f08.testcase rename to python/lib/python3.10/site-packages/bs4/tests/fuzz/crash-0d306a50c8ed8bcd0785b67000fcd5dea1d33f08.testcase diff --git a/lib/python3.11/site-packages/bs4/tests/test_builder.py b/python/lib/python3.10/site-packages/bs4/tests/test_builder.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_builder.py rename to 
python/lib/python3.10/site-packages/bs4/tests/test_builder.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_builder_registry.py b/python/lib/python3.10/site-packages/bs4/tests/test_builder_registry.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_builder_registry.py rename to python/lib/python3.10/site-packages/bs4/tests/test_builder_registry.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_css.py b/python/lib/python3.10/site-packages/bs4/tests/test_css.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_css.py rename to python/lib/python3.10/site-packages/bs4/tests/test_css.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_dammit.py b/python/lib/python3.10/site-packages/bs4/tests/test_dammit.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_dammit.py rename to python/lib/python3.10/site-packages/bs4/tests/test_dammit.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_docs.py b/python/lib/python3.10/site-packages/bs4/tests/test_docs.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_docs.py rename to python/lib/python3.10/site-packages/bs4/tests/test_docs.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_element.py b/python/lib/python3.10/site-packages/bs4/tests/test_element.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_element.py rename to python/lib/python3.10/site-packages/bs4/tests/test_element.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_formatter.py b/python/lib/python3.10/site-packages/bs4/tests/test_formatter.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_formatter.py rename to python/lib/python3.10/site-packages/bs4/tests/test_formatter.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_fuzz.py b/python/lib/python3.10/site-packages/bs4/tests/test_fuzz.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_fuzz.py rename to python/lib/python3.10/site-packages/bs4/tests/test_fuzz.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_html5lib.py b/python/lib/python3.10/site-packages/bs4/tests/test_html5lib.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_html5lib.py rename to python/lib/python3.10/site-packages/bs4/tests/test_html5lib.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_htmlparser.py b/python/lib/python3.10/site-packages/bs4/tests/test_htmlparser.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_htmlparser.py rename to python/lib/python3.10/site-packages/bs4/tests/test_htmlparser.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_lxml.py b/python/lib/python3.10/site-packages/bs4/tests/test_lxml.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_lxml.py rename to python/lib/python3.10/site-packages/bs4/tests/test_lxml.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_navigablestring.py b/python/lib/python3.10/site-packages/bs4/tests/test_navigablestring.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_navigablestring.py rename to python/lib/python3.10/site-packages/bs4/tests/test_navigablestring.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_pageelement.py b/python/lib/python3.10/site-packages/bs4/tests/test_pageelement.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_pageelement.py 
rename to python/lib/python3.10/site-packages/bs4/tests/test_pageelement.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_soup.py b/python/lib/python3.10/site-packages/bs4/tests/test_soup.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_soup.py rename to python/lib/python3.10/site-packages/bs4/tests/test_soup.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_tag.py b/python/lib/python3.10/site-packages/bs4/tests/test_tag.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_tag.py rename to python/lib/python3.10/site-packages/bs4/tests/test_tag.py diff --git a/lib/python3.11/site-packages/bs4/tests/test_tree.py b/python/lib/python3.10/site-packages/bs4/tests/test_tree.py similarity index 100% rename from lib/python3.11/site-packages/bs4/tests/test_tree.py rename to python/lib/python3.10/site-packages/bs4/tests/test_tree.py diff --git a/lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/INSTALLER b/python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/certifi-2023.5.7.dist-info/LICENSE b/python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/LICENSE similarity index 100% rename from lib/python3.11/site-packages/certifi-2023.5.7.dist-info/LICENSE rename to python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/LICENSE diff --git a/lib/python3.11/site-packages/certifi-2023.5.7.dist-info/METADATA b/python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/certifi-2023.5.7.dist-info/METADATA rename to python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/RECORD b/python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/RECORD new file mode 100644 index 0000000..3de83b1 --- /dev/null +++ b/python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/RECORD @@ -0,0 +1,15 @@ +certifi-2023.5.7.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +certifi-2023.5.7.dist-info/LICENSE,sha256=oC9sY4-fuE0G93ZMOrCF2K9-2luTwWbaVDEkeQd8b7A,1052 +certifi-2023.5.7.dist-info/METADATA,sha256=fpDUR-Vqju0gWBVplvFKNISMUn1UqTzr6698ttTzSLo,2190 +certifi-2023.5.7.dist-info/RECORD,, +certifi-2023.5.7.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +certifi-2023.5.7.dist-info/WHEEL,sha256=ewwEueio1C2XeHTvT17n8dZUJgOvyCWCt0WVNLClP9o,92 +certifi-2023.5.7.dist-info/top_level.txt,sha256=KMu4vUCfsjLrkPbSNdgdekS-pVJzBAJFO__nI8NF6-U,8 +certifi/__init__.py,sha256=q5ePznlfOw-XYIOV6RTnh45yS9haN-Nb1d__4QXc3g0,94 +certifi/__main__.py,sha256=xBBoj905TUWBLRGANOcf7oi6e-3dMP4cEoG9OyMs11g,243 +certifi/__pycache__/__init__.cpython-310.pyc,, +certifi/__pycache__/__main__.cpython-310.pyc,, +certifi/__pycache__/core.cpython-310.pyc,, +certifi/cacert.pem,sha256=swFTXcpJHZgU6ij6oyCsehnQ9dlCN5lvoKO1qTZDJRQ,278952 +certifi/core.py,sha256=lhewz0zFb2b4ULsQurElmloYwQoecjWzPqY67P8T7iM,4219 +certifi/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 diff --git a/lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/REQUESTED b/python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/REQUESTED similarity index 100% rename from 
lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/certifi-2023.5.7.dist-info/WHEEL b/python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/certifi-2023.5.7.dist-info/WHEEL rename to python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/certifi-2023.5.7.dist-info/top_level.txt b/python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/certifi-2023.5.7.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/certifi-2023.5.7.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/certifi/__init__.py b/python/lib/python3.10/site-packages/certifi/__init__.py similarity index 100% rename from lib/python3.11/site-packages/certifi/__init__.py rename to python/lib/python3.10/site-packages/certifi/__init__.py diff --git a/lib/python3.11/site-packages/certifi/__main__.py b/python/lib/python3.10/site-packages/certifi/__main__.py similarity index 100% rename from lib/python3.11/site-packages/certifi/__main__.py rename to python/lib/python3.10/site-packages/certifi/__main__.py diff --git a/lib/python3.11/site-packages/certifi/cacert.pem b/python/lib/python3.10/site-packages/certifi/cacert.pem similarity index 100% rename from lib/python3.11/site-packages/certifi/cacert.pem rename to python/lib/python3.10/site-packages/certifi/cacert.pem diff --git a/lib/python3.11/site-packages/certifi/core.py b/python/lib/python3.10/site-packages/certifi/core.py similarity index 100% rename from lib/python3.11/site-packages/certifi/core.py rename to python/lib/python3.10/site-packages/certifi/core.py diff --git a/lib/python3.11/site-packages/certifi/py.typed b/python/lib/python3.10/site-packages/certifi/py.typed similarity index 100% rename from lib/python3.11/site-packages/certifi/py.typed rename to python/lib/python3.10/site-packages/certifi/py.typed diff --git a/lib/python3.11/site-packages/click-8.1.3.dist-info/INSTALLER b/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/click-8.1.3.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/LICENSE b/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/LICENSE similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/LICENSE rename to python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/LICENSE diff --git a/lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/METADATA b/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/METADATA rename to python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/RECORD b/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/RECORD new file mode 100644 index 0000000..4cc2e9d --- /dev/null +++ b/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/RECORD @@ -0,0 +1,36 @@ 
+../../../bin/normalizer,sha256=pgfkRCNOLuifhDuF1o1fzKY6SANkqlwBL5YNE3oQ8Xk,275 +charset_normalizer-3.1.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +charset_normalizer-3.1.0.dist-info/LICENSE,sha256=6zGgxaT7Cbik4yBV0lweX5w1iidS_vPNcgIT0cz-4kE,1070 +charset_normalizer-3.1.0.dist-info/METADATA,sha256=8lfcrrmtfEq--eZqh8FJzEjptLCEoGXySKruxIms44I,30983 +charset_normalizer-3.1.0.dist-info/RECORD,, +charset_normalizer-3.1.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +charset_normalizer-3.1.0.dist-info/WHEEL,sha256=nKSwEH5fkxvG0Vdj1Hx7vbuU-SGQ9Nxl4yFFsCilvhs,152 +charset_normalizer-3.1.0.dist-info/entry_points.txt,sha256=uYo8aIGLWv8YgWfSna5HnfY_En4pkF1w4bgawNAXzP0,76 +charset_normalizer-3.1.0.dist-info/top_level.txt,sha256=7ASyzePr8_xuZWJsnqJjIBtyV8vhEo0wBCv1MPRRi3Q,19 +charset_normalizer/__init__.py,sha256=aAb_F9Zb23pb4NO6TfIfqLXLvf1PjnLBBOuPvQwPA18,1549 +charset_normalizer/__pycache__/__init__.cpython-310.pyc,, +charset_normalizer/__pycache__/api.cpython-310.pyc,, +charset_normalizer/__pycache__/cd.cpython-310.pyc,, +charset_normalizer/__pycache__/constant.cpython-310.pyc,, +charset_normalizer/__pycache__/legacy.cpython-310.pyc,, +charset_normalizer/__pycache__/md.cpython-310.pyc,, +charset_normalizer/__pycache__/models.cpython-310.pyc,, +charset_normalizer/__pycache__/utils.cpython-310.pyc,, +charset_normalizer/__pycache__/version.cpython-310.pyc,, +charset_normalizer/api.py,sha256=Vh44rFXztkxCjW7gF2waq8TyRL3mXKX8RwNGB99bhb4,18624 +charset_normalizer/assets/__init__.py,sha256=wpRfujN7GJuEE5wHHo3wEDVoJ5ovzRIxsImyimCBfGU,20069 +charset_normalizer/assets/__pycache__/__init__.cpython-310.pyc,, +charset_normalizer/cd.py,sha256=mZuiTSKq4XpweSDD2H4T4R3Axtaa-QS0tpEWdpMuAzQ,12554 +charset_normalizer/cli/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +charset_normalizer/cli/__pycache__/__init__.cpython-310.pyc,, +charset_normalizer/cli/__pycache__/normalizer.cpython-310.pyc,, +charset_normalizer/cli/normalizer.py,sha256=2F-xURZJzo063Ye-2RLJ2wcmURpbKeAzKwpiws65dAs,9744 +charset_normalizer/constant.py,sha256=PmCeoKXqq3ZbCtCUpKHwwFBIv9DXMT_an1yd24q28mA,19101 +charset_normalizer/legacy.py,sha256=T-QuVMsMeDiQEk8WSszMrzVJg_14AMeSkmHdRYhdl1k,2071 +charset_normalizer/md.cpython-310-x86_64-linux-gnu.so,sha256=VRykbQIynSswzdrRbruysjgWzNW9fAXVUn06wgATyhc,17496 +charset_normalizer/md.py,sha256=MXPKP_oLbsubulEL_1rxcYKSd5FeEfyEfNNm5O6ADpc,18258 +charset_normalizer/md__mypyc.cpython-310-x86_64-linux-gnu.so,sha256=r7AOFco7yQOyP83eUuE0S1s6UENwSuxbZDmRJzWrEHQ,424312 +charset_normalizer/models.py,sha256=mC11wo84l00u2o03TRNX7M5ItBAbPUKKXgJSFxA35GY,11492 +charset_normalizer/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +charset_normalizer/utils.py,sha256=tKLpquPYQdaRdFRwBo5gPOi06ov8UCJy5X1Pti0Q78U,11544 +charset_normalizer/version.py,sha256=bekbdpF_D3BtF-PhbPnA9PNaZaI8kKIgl3LTCD5FmYk,79 diff --git a/lib/python3.11/site-packages/click-8.1.3.dist-info/REQUESTED b/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/click-8.1.3.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/REQUESTED diff --git a/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/WHEEL b/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/WHEEL new file mode 100644 index 0000000..a9630e4 --- /dev/null +++ b/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/WHEEL @@ 
-0,0 +1,6 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.38.4) +Root-Is-Purelib: false +Tag: cp310-cp310-manylinux_2_17_x86_64 +Tag: cp310-cp310-manylinux2014_x86_64 + diff --git a/lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/entry_points.txt b/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/entry_points.txt similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/entry_points.txt rename to python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/entry_points.txt diff --git a/lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/top_level.txt b/python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer-3.1.0.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/charset_normalizer-3.1.0.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/charset_normalizer/__init__.py b/python/lib/python3.10/site-packages/charset_normalizer/__init__.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/__init__.py rename to python/lib/python3.10/site-packages/charset_normalizer/__init__.py diff --git a/lib/python3.11/site-packages/charset_normalizer/api.py b/python/lib/python3.10/site-packages/charset_normalizer/api.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/api.py rename to python/lib/python3.10/site-packages/charset_normalizer/api.py diff --git a/lib/python3.11/site-packages/charset_normalizer/assets/__init__.py b/python/lib/python3.10/site-packages/charset_normalizer/assets/__init__.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/assets/__init__.py rename to python/lib/python3.10/site-packages/charset_normalizer/assets/__init__.py diff --git a/lib/python3.11/site-packages/charset_normalizer/cd.py b/python/lib/python3.10/site-packages/charset_normalizer/cd.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/cd.py rename to python/lib/python3.10/site-packages/charset_normalizer/cd.py diff --git a/lib/python3.11/site-packages/charset_normalizer/cli/__init__.py b/python/lib/python3.10/site-packages/charset_normalizer/cli/__init__.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/cli/__init__.py rename to python/lib/python3.10/site-packages/charset_normalizer/cli/__init__.py diff --git a/lib/python3.11/site-packages/charset_normalizer/cli/normalizer.py b/python/lib/python3.10/site-packages/charset_normalizer/cli/normalizer.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/cli/normalizer.py rename to python/lib/python3.10/site-packages/charset_normalizer/cli/normalizer.py diff --git a/lib/python3.11/site-packages/charset_normalizer/constant.py b/python/lib/python3.10/site-packages/charset_normalizer/constant.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/constant.py rename to python/lib/python3.10/site-packages/charset_normalizer/constant.py diff --git a/lib/python3.11/site-packages/charset_normalizer/legacy.py b/python/lib/python3.10/site-packages/charset_normalizer/legacy.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/legacy.py rename to python/lib/python3.10/site-packages/charset_normalizer/legacy.py diff --git 
a/python/lib/python3.10/site-packages/charset_normalizer/md.cpython-310-x86_64-linux-gnu.so b/python/lib/python3.10/site-packages/charset_normalizer/md.cpython-310-x86_64-linux-gnu.so new file mode 100755 index 0000000..0e3becc Binary files /dev/null and b/python/lib/python3.10/site-packages/charset_normalizer/md.cpython-310-x86_64-linux-gnu.so differ diff --git a/lib/python3.11/site-packages/charset_normalizer/md.py b/python/lib/python3.10/site-packages/charset_normalizer/md.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/md.py rename to python/lib/python3.10/site-packages/charset_normalizer/md.py diff --git a/python/lib/python3.10/site-packages/charset_normalizer/md__mypyc.cpython-310-x86_64-linux-gnu.so b/python/lib/python3.10/site-packages/charset_normalizer/md__mypyc.cpython-310-x86_64-linux-gnu.so new file mode 100755 index 0000000..441d8d7 Binary files /dev/null and b/python/lib/python3.10/site-packages/charset_normalizer/md__mypyc.cpython-310-x86_64-linux-gnu.so differ diff --git a/lib/python3.11/site-packages/charset_normalizer/models.py b/python/lib/python3.10/site-packages/charset_normalizer/models.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/models.py rename to python/lib/python3.10/site-packages/charset_normalizer/models.py diff --git a/lib/python3.11/site-packages/charset_normalizer/py.typed b/python/lib/python3.10/site-packages/charset_normalizer/py.typed similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/py.typed rename to python/lib/python3.10/site-packages/charset_normalizer/py.typed diff --git a/lib/python3.11/site-packages/charset_normalizer/utils.py b/python/lib/python3.10/site-packages/charset_normalizer/utils.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/utils.py rename to python/lib/python3.10/site-packages/charset_normalizer/utils.py diff --git a/lib/python3.11/site-packages/charset_normalizer/version.py b/python/lib/python3.10/site-packages/charset_normalizer/version.py similarity index 100% rename from lib/python3.11/site-packages/charset_normalizer/version.py rename to python/lib/python3.10/site-packages/charset_normalizer/version.py diff --git a/lib/python3.11/site-packages/filelock-3.12.2.dist-info/INSTALLER b/python/lib/python3.10/site-packages/click-8.1.3.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/filelock-3.12.2.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/click-8.1.3.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/click-8.1.3.dist-info/LICENSE.rst b/python/lib/python3.10/site-packages/click-8.1.3.dist-info/LICENSE.rst similarity index 100% rename from lib/python3.11/site-packages/click-8.1.3.dist-info/LICENSE.rst rename to python/lib/python3.10/site-packages/click-8.1.3.dist-info/LICENSE.rst diff --git a/lib/python3.11/site-packages/click-8.1.3.dist-info/METADATA b/python/lib/python3.10/site-packages/click-8.1.3.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/click-8.1.3.dist-info/METADATA rename to python/lib/python3.10/site-packages/click-8.1.3.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/click-8.1.3.dist-info/RECORD b/python/lib/python3.10/site-packages/click-8.1.3.dist-info/RECORD new file mode 100644 index 0000000..1ca66a0 --- /dev/null +++ b/python/lib/python3.10/site-packages/click-8.1.3.dist-info/RECORD @@ -0,0 +1,40 @@ 
+click-8.1.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +click-8.1.3.dist-info/LICENSE.rst,sha256=morRBqOU6FO_4h9C9OctWSgZoigF2ZG18ydQKSkrZY0,1475 +click-8.1.3.dist-info/METADATA,sha256=tFJIX5lOjx7c5LjZbdTPFVDJSgyv9F74XY0XCPp_gnc,3247 +click-8.1.3.dist-info/RECORD,, +click-8.1.3.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +click-8.1.3.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +click-8.1.3.dist-info/top_level.txt,sha256=J1ZQogalYS4pphY_lPECoNMfw0HzTSrZglC4Yfwo4xA,6 +click/__init__.py,sha256=rQBLutqg-z6m8nOzivIfigDn_emijB_dKv9BZ2FNi5s,3138 +click/__pycache__/__init__.cpython-310.pyc,, +click/__pycache__/_compat.cpython-310.pyc,, +click/__pycache__/_termui_impl.cpython-310.pyc,, +click/__pycache__/_textwrap.cpython-310.pyc,, +click/__pycache__/_winconsole.cpython-310.pyc,, +click/__pycache__/core.cpython-310.pyc,, +click/__pycache__/decorators.cpython-310.pyc,, +click/__pycache__/exceptions.cpython-310.pyc,, +click/__pycache__/formatting.cpython-310.pyc,, +click/__pycache__/globals.cpython-310.pyc,, +click/__pycache__/parser.cpython-310.pyc,, +click/__pycache__/shell_completion.cpython-310.pyc,, +click/__pycache__/termui.cpython-310.pyc,, +click/__pycache__/testing.cpython-310.pyc,, +click/__pycache__/types.cpython-310.pyc,, +click/__pycache__/utils.cpython-310.pyc,, +click/_compat.py,sha256=JIHLYs7Jzz4KT9t-ds4o4jBzLjnwCiJQKqur-5iwCKI,18810 +click/_termui_impl.py,sha256=qK6Cfy4mRFxvxE8dya8RBhLpSC8HjF-lvBc6aNrPdwg,23451 +click/_textwrap.py,sha256=10fQ64OcBUMuK7mFvh8363_uoOxPlRItZBmKzRJDgoY,1353 +click/_winconsole.py,sha256=5ju3jQkcZD0W27WEMGqmEP4y_crUVzPCqsX_FYb7BO0,7860 +click/core.py,sha256=mz87bYEKzIoNYEa56BFAiOJnvt1Y0L-i7wD4_ZecieE,112782 +click/decorators.py,sha256=yo3zvzgUm5q7h5CXjyV6q3h_PJAiUaem178zXwdWUFI,16350 +click/exceptions.py,sha256=7gDaLGuFZBeCNwY9ERMsF2-Z3R9Fvq09Zc6IZSKjseo,9167 +click/formatting.py,sha256=Frf0-5W33-loyY_i9qrwXR8-STnW3m5gvyxLVUdyxyk,9706 +click/globals.py,sha256=TP-qM88STzc7f127h35TD_v920FgfOD2EwzqA0oE8XU,1961 +click/parser.py,sha256=cAEt1uQR8gq3-S9ysqbVU-fdAZNvilxw4ReJ_T1OQMk,19044 +click/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +click/shell_completion.py,sha256=qOp_BeC9esEOSZKyu5G7RIxEUaLsXUX-mTb7hB1r4QY,18018 +click/termui.py,sha256=ACBQVOvFCTSqtD5VREeCAdRtlHd-Imla-Lte4wSfMjA,28355 +click/testing.py,sha256=ptpMYgRY7dVfE3UDgkgwayu9ePw98sQI3D7zZXiCpj4,16063 +click/types.py,sha256=rEb1aZSQKq3ciCMmjpG2Uva9vk498XRL7ThrcK2GRss,35805 +click/utils.py,sha256=33D6E7poH_nrKB-xr-UyDEXnxOcCiQqxuRLtrqeVv6o,18682 diff --git a/lib/python3.11/site-packages/filelock-3.12.2.dist-info/REQUESTED b/python/lib/python3.10/site-packages/click-8.1.3.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/filelock-3.12.2.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/click-8.1.3.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/WHEEL b/python/lib/python3.10/site-packages/click-8.1.3.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/WHEEL rename to python/lib/python3.10/site-packages/click-8.1.3.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/click-8.1.3.dist-info/top_level.txt b/python/lib/python3.10/site-packages/click-8.1.3.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/click-8.1.3.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/click-8.1.3.dist-info/top_level.txt 
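The RECORD files recreated in this diff follow the standard installed-wheel layout: one CSV row per installed file, carrying the file's path, an optional "sha256=" digest (urlsafe base64 with the trailing "=" padding stripped), and the file size in bytes. Rows such as "click-8.1.3.dist-info/RECORD,," and the __pycache__/*.pyc entries legitimately leave the hash and size fields empty, since those files are (re)written at install time and cannot hash themselves. A minimal sketch of how such a file can be checked against the installed tree; the verify_record helper below is illustrative and not part of this change:

    import base64
    import csv
    import hashlib
    from pathlib import Path

    def verify_record(site_packages, record_path):
        """Return RECORD entries whose on-disk sha256 no longer matches."""
        mismatched = []
        root = Path(site_packages)
        with open(record_path, newline="") as fh:
            for path, digest, _size in csv.reader(fh):
                if not digest.startswith("sha256="):
                    # RECORD itself and *.pyc files are listed without a hash.
                    continue
                expected = digest.partition("=")[2]
                actual = base64.urlsafe_b64encode(
                    hashlib.sha256((root / path).read_bytes()).digest()
                ).decode().rstrip("=")
                if actual != expected:
                    mismatched.append(path)
        return mismatched

For example, verify_record("python/lib/python3.10/site-packages", "python/lib/python3.10/site-packages/click-8.1.3.dist-info/RECORD") should return an empty list immediately after an install, which is a quick way to confirm a vendored tree like this one was committed intact.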
diff --git a/lib/python3.11/site-packages/click/__init__.py b/python/lib/python3.10/site-packages/click/__init__.py similarity index 100% rename from lib/python3.11/site-packages/click/__init__.py rename to python/lib/python3.10/site-packages/click/__init__.py diff --git a/lib/python3.11/site-packages/click/_compat.py b/python/lib/python3.10/site-packages/click/_compat.py similarity index 100% rename from lib/python3.11/site-packages/click/_compat.py rename to python/lib/python3.10/site-packages/click/_compat.py diff --git a/lib/python3.11/site-packages/click/_termui_impl.py b/python/lib/python3.10/site-packages/click/_termui_impl.py similarity index 100% rename from lib/python3.11/site-packages/click/_termui_impl.py rename to python/lib/python3.10/site-packages/click/_termui_impl.py diff --git a/lib/python3.11/site-packages/click/_textwrap.py b/python/lib/python3.10/site-packages/click/_textwrap.py similarity index 100% rename from lib/python3.11/site-packages/click/_textwrap.py rename to python/lib/python3.10/site-packages/click/_textwrap.py diff --git a/lib/python3.11/site-packages/click/_winconsole.py b/python/lib/python3.10/site-packages/click/_winconsole.py similarity index 100% rename from lib/python3.11/site-packages/click/_winconsole.py rename to python/lib/python3.10/site-packages/click/_winconsole.py diff --git a/lib/python3.11/site-packages/click/core.py b/python/lib/python3.10/site-packages/click/core.py similarity index 100% rename from lib/python3.11/site-packages/click/core.py rename to python/lib/python3.10/site-packages/click/core.py diff --git a/lib/python3.11/site-packages/click/decorators.py b/python/lib/python3.10/site-packages/click/decorators.py similarity index 100% rename from lib/python3.11/site-packages/click/decorators.py rename to python/lib/python3.10/site-packages/click/decorators.py diff --git a/lib/python3.11/site-packages/click/exceptions.py b/python/lib/python3.10/site-packages/click/exceptions.py similarity index 100% rename from lib/python3.11/site-packages/click/exceptions.py rename to python/lib/python3.10/site-packages/click/exceptions.py diff --git a/lib/python3.11/site-packages/click/formatting.py b/python/lib/python3.10/site-packages/click/formatting.py similarity index 100% rename from lib/python3.11/site-packages/click/formatting.py rename to python/lib/python3.10/site-packages/click/formatting.py diff --git a/lib/python3.11/site-packages/click/globals.py b/python/lib/python3.10/site-packages/click/globals.py similarity index 100% rename from lib/python3.11/site-packages/click/globals.py rename to python/lib/python3.10/site-packages/click/globals.py diff --git a/lib/python3.11/site-packages/click/parser.py b/python/lib/python3.10/site-packages/click/parser.py similarity index 100% rename from lib/python3.11/site-packages/click/parser.py rename to python/lib/python3.10/site-packages/click/parser.py diff --git a/lib/python3.11/site-packages/click/py.typed b/python/lib/python3.10/site-packages/click/py.typed similarity index 100% rename from lib/python3.11/site-packages/click/py.typed rename to python/lib/python3.10/site-packages/click/py.typed diff --git a/lib/python3.11/site-packages/click/shell_completion.py b/python/lib/python3.10/site-packages/click/shell_completion.py similarity index 100% rename from lib/python3.11/site-packages/click/shell_completion.py rename to python/lib/python3.10/site-packages/click/shell_completion.py diff --git a/lib/python3.11/site-packages/click/termui.py b/python/lib/python3.10/site-packages/click/termui.py 
similarity index 100% rename from lib/python3.11/site-packages/click/termui.py rename to python/lib/python3.10/site-packages/click/termui.py diff --git a/lib/python3.11/site-packages/click/testing.py b/python/lib/python3.10/site-packages/click/testing.py similarity index 100% rename from lib/python3.11/site-packages/click/testing.py rename to python/lib/python3.10/site-packages/click/testing.py diff --git a/lib/python3.11/site-packages/click/types.py b/python/lib/python3.10/site-packages/click/types.py similarity index 100% rename from lib/python3.11/site-packages/click/types.py rename to python/lib/python3.10/site-packages/click/types.py diff --git a/lib/python3.11/site-packages/click/utils.py b/python/lib/python3.10/site-packages/click/utils.py similarity index 100% rename from lib/python3.11/site-packages/click/utils.py rename to python/lib/python3.10/site-packages/click/utils.py diff --git a/python/lib/python3.10/site-packages/distutils-precedence.pth b/python/lib/python3.10/site-packages/distutils-precedence.pth new file mode 100644 index 0000000..6de4198 --- /dev/null +++ b/python/lib/python3.10/site-packages/distutils-precedence.pth @@ -0,0 +1 @@ +import os; var = 'SETUPTOOLS_USE_DISTUTILS'; enabled = os.environ.get(var, 'stdlib') == 'local'; enabled and __import__('_distutils_hack').add_shim(); diff --git a/lib/python3.11/site-packages/idna-3.4.dist-info/INSTALLER b/python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/idna-3.4.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/filelock-3.12.2.dist-info/METADATA b/python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/filelock-3.12.2.dist-info/METADATA rename to python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/RECORD b/python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/RECORD new file mode 100644 index 0000000..9e4a806 --- /dev/null +++ b/python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/RECORD @@ -0,0 +1,23 @@ +filelock-3.12.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +filelock-3.12.2.dist-info/METADATA,sha256=XziDNuweWluDKuB0HgIgQPYZr2D1UfRluRfZ7RNypsw,2724 +filelock-3.12.2.dist-info/RECORD,, +filelock-3.12.2.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +filelock-3.12.2.dist-info/WHEEL,sha256=9QBuHhg6FNW7lppboF2vKVbCGTVzsFykgRQjjlajrhA,87 +filelock-3.12.2.dist-info/licenses/LICENSE,sha256=iNm062BXnBkew5HKBMFhMFctfu3EqG2qWL8oxuFMm80,1210 +filelock/__init__.py,sha256=nCvrEw6t391LA0d_TsybAbRF3HI4g6lYoHil-xghyJs,1230 +filelock/__pycache__/__init__.cpython-310.pyc,, +filelock/__pycache__/_api.cpython-310.pyc,, +filelock/__pycache__/_error.cpython-310.pyc,, +filelock/__pycache__/_soft.cpython-310.pyc,, +filelock/__pycache__/_unix.cpython-310.pyc,, +filelock/__pycache__/_util.cpython-310.pyc,, +filelock/__pycache__/_windows.cpython-310.pyc,, +filelock/__pycache__/version.cpython-310.pyc,, +filelock/_api.py,sha256=iUUv2QVWTX4g3v2LSH2m8iF-ZlyP1UkezsfCSvXgil0,10125 +filelock/_error.py,sha256=-5jMcjTu60YAvAO1UbqDD1GIEjVkwr8xCFwDBtMeYDg,787 +filelock/_soft.py,sha256=FlmkORe37IXz0voO2JPmdDjk2W5BH5B5LSDqnQ7ZOTU,1638 +filelock/_unix.py,sha256=T-g81COqIF-yEJKKyxax_8joejxw7JVYWDPrpy2Cq2I,2062 
+filelock/_util.py,sha256=Y3CMudAij-xLOWdIMxWhWEaOTCI_BICW0spcv_LFp4Y,1410 +filelock/_windows.py,sha256=3wpFAtTliqodzqLXk8h1EX_T_zyd32t_roJqKVr0pm0,2100 +filelock/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +filelock/version.py,sha256=Vk4x7NmWnlU1UDYhJpyZCmorJtPQLx9a4YEOiepQZgM,162 diff --git a/lib/python3.11/site-packages/idna-3.4.dist-info/REQUESTED b/python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/idna-3.4.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/filelock-3.12.2.dist-info/WHEEL b/python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/filelock-3.12.2.dist-info/WHEEL rename to python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/filelock-3.12.2.dist-info/licenses/LICENSE b/python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/licenses/LICENSE similarity index 100% rename from lib/python3.11/site-packages/filelock-3.12.2.dist-info/licenses/LICENSE rename to python/lib/python3.10/site-packages/filelock-3.12.2.dist-info/licenses/LICENSE diff --git a/lib/python3.11/site-packages/filelock/__init__.py b/python/lib/python3.10/site-packages/filelock/__init__.py similarity index 100% rename from lib/python3.11/site-packages/filelock/__init__.py rename to python/lib/python3.10/site-packages/filelock/__init__.py diff --git a/lib/python3.11/site-packages/filelock/_api.py b/python/lib/python3.10/site-packages/filelock/_api.py similarity index 100% rename from lib/python3.11/site-packages/filelock/_api.py rename to python/lib/python3.10/site-packages/filelock/_api.py diff --git a/lib/python3.11/site-packages/filelock/_error.py b/python/lib/python3.10/site-packages/filelock/_error.py similarity index 100% rename from lib/python3.11/site-packages/filelock/_error.py rename to python/lib/python3.10/site-packages/filelock/_error.py diff --git a/lib/python3.11/site-packages/filelock/_soft.py b/python/lib/python3.10/site-packages/filelock/_soft.py similarity index 100% rename from lib/python3.11/site-packages/filelock/_soft.py rename to python/lib/python3.10/site-packages/filelock/_soft.py diff --git a/lib/python3.11/site-packages/filelock/_unix.py b/python/lib/python3.10/site-packages/filelock/_unix.py similarity index 100% rename from lib/python3.11/site-packages/filelock/_unix.py rename to python/lib/python3.10/site-packages/filelock/_unix.py diff --git a/lib/python3.11/site-packages/filelock/_util.py b/python/lib/python3.10/site-packages/filelock/_util.py similarity index 100% rename from lib/python3.11/site-packages/filelock/_util.py rename to python/lib/python3.10/site-packages/filelock/_util.py diff --git a/lib/python3.11/site-packages/filelock/_windows.py b/python/lib/python3.10/site-packages/filelock/_windows.py similarity index 100% rename from lib/python3.11/site-packages/filelock/_windows.py rename to python/lib/python3.10/site-packages/filelock/_windows.py diff --git a/lib/python3.11/site-packages/filelock/py.typed b/python/lib/python3.10/site-packages/filelock/py.typed similarity index 100% rename from lib/python3.11/site-packages/filelock/py.typed rename to python/lib/python3.10/site-packages/filelock/py.typed diff --git a/lib/python3.11/site-packages/filelock/version.py b/python/lib/python3.10/site-packages/filelock/version.py similarity index 100% 
rename from lib/python3.11/site-packages/filelock/version.py rename to python/lib/python3.10/site-packages/filelock/version.py diff --git a/lib/python3.11/site-packages/flask/__init__.py b/python/lib/python3.10/site-packages/flask/__init__.py similarity index 100% rename from lib/python3.11/site-packages/flask/__init__.py rename to python/lib/python3.10/site-packages/flask/__init__.py diff --git a/lib/python3.11/site-packages/flask/__main__.py b/python/lib/python3.10/site-packages/flask/__main__.py similarity index 100% rename from lib/python3.11/site-packages/flask/__main__.py rename to python/lib/python3.10/site-packages/flask/__main__.py diff --git a/lib/python3.11/site-packages/flask/app.py b/python/lib/python3.10/site-packages/flask/app.py similarity index 100% rename from lib/python3.11/site-packages/flask/app.py rename to python/lib/python3.10/site-packages/flask/app.py diff --git a/lib/python3.11/site-packages/flask/blueprints.py b/python/lib/python3.10/site-packages/flask/blueprints.py similarity index 100% rename from lib/python3.11/site-packages/flask/blueprints.py rename to python/lib/python3.10/site-packages/flask/blueprints.py diff --git a/lib/python3.11/site-packages/flask/cli.py b/python/lib/python3.10/site-packages/flask/cli.py similarity index 100% rename from lib/python3.11/site-packages/flask/cli.py rename to python/lib/python3.10/site-packages/flask/cli.py diff --git a/lib/python3.11/site-packages/flask/config.py b/python/lib/python3.10/site-packages/flask/config.py similarity index 100% rename from lib/python3.11/site-packages/flask/config.py rename to python/lib/python3.10/site-packages/flask/config.py diff --git a/lib/python3.11/site-packages/flask/ctx.py b/python/lib/python3.10/site-packages/flask/ctx.py similarity index 100% rename from lib/python3.11/site-packages/flask/ctx.py rename to python/lib/python3.10/site-packages/flask/ctx.py diff --git a/lib/python3.11/site-packages/flask/debughelpers.py b/python/lib/python3.10/site-packages/flask/debughelpers.py similarity index 100% rename from lib/python3.11/site-packages/flask/debughelpers.py rename to python/lib/python3.10/site-packages/flask/debughelpers.py diff --git a/lib/python3.11/site-packages/flask/globals.py b/python/lib/python3.10/site-packages/flask/globals.py similarity index 100% rename from lib/python3.11/site-packages/flask/globals.py rename to python/lib/python3.10/site-packages/flask/globals.py diff --git a/lib/python3.11/site-packages/flask/helpers.py b/python/lib/python3.10/site-packages/flask/helpers.py similarity index 100% rename from lib/python3.11/site-packages/flask/helpers.py rename to python/lib/python3.10/site-packages/flask/helpers.py diff --git a/lib/python3.11/site-packages/flask/json/__init__.py b/python/lib/python3.10/site-packages/flask/json/__init__.py similarity index 100% rename from lib/python3.11/site-packages/flask/json/__init__.py rename to python/lib/python3.10/site-packages/flask/json/__init__.py diff --git a/lib/python3.11/site-packages/flask/json/provider.py b/python/lib/python3.10/site-packages/flask/json/provider.py similarity index 100% rename from lib/python3.11/site-packages/flask/json/provider.py rename to python/lib/python3.10/site-packages/flask/json/provider.py diff --git a/lib/python3.11/site-packages/flask/json/tag.py b/python/lib/python3.10/site-packages/flask/json/tag.py similarity index 100% rename from lib/python3.11/site-packages/flask/json/tag.py rename to python/lib/python3.10/site-packages/flask/json/tag.py diff --git 
a/lib/python3.11/site-packages/flask/logging.py b/python/lib/python3.10/site-packages/flask/logging.py similarity index 100% rename from lib/python3.11/site-packages/flask/logging.py rename to python/lib/python3.10/site-packages/flask/logging.py diff --git a/lib/python3.11/site-packages/flask/py.typed b/python/lib/python3.10/site-packages/flask/py.typed similarity index 100% rename from lib/python3.11/site-packages/flask/py.typed rename to python/lib/python3.10/site-packages/flask/py.typed diff --git a/lib/python3.11/site-packages/flask/scaffold.py b/python/lib/python3.10/site-packages/flask/scaffold.py similarity index 100% rename from lib/python3.11/site-packages/flask/scaffold.py rename to python/lib/python3.10/site-packages/flask/scaffold.py diff --git a/lib/python3.11/site-packages/flask/sessions.py b/python/lib/python3.10/site-packages/flask/sessions.py similarity index 100% rename from lib/python3.11/site-packages/flask/sessions.py rename to python/lib/python3.10/site-packages/flask/sessions.py diff --git a/lib/python3.11/site-packages/flask/signals.py b/python/lib/python3.10/site-packages/flask/signals.py similarity index 100% rename from lib/python3.11/site-packages/flask/signals.py rename to python/lib/python3.10/site-packages/flask/signals.py diff --git a/lib/python3.11/site-packages/flask/templating.py b/python/lib/python3.10/site-packages/flask/templating.py similarity index 100% rename from lib/python3.11/site-packages/flask/templating.py rename to python/lib/python3.10/site-packages/flask/templating.py diff --git a/lib/python3.11/site-packages/flask/testing.py b/python/lib/python3.10/site-packages/flask/testing.py similarity index 100% rename from lib/python3.11/site-packages/flask/testing.py rename to python/lib/python3.10/site-packages/flask/testing.py diff --git a/lib/python3.11/site-packages/flask/typing.py b/python/lib/python3.10/site-packages/flask/typing.py similarity index 100% rename from lib/python3.11/site-packages/flask/typing.py rename to python/lib/python3.10/site-packages/flask/typing.py diff --git a/lib/python3.11/site-packages/flask/views.py b/python/lib/python3.10/site-packages/flask/views.py similarity index 100% rename from lib/python3.11/site-packages/flask/views.py rename to python/lib/python3.10/site-packages/flask/views.py diff --git a/lib/python3.11/site-packages/flask/wrappers.py b/python/lib/python3.10/site-packages/flask/wrappers.py similarity index 100% rename from lib/python3.11/site-packages/flask/wrappers.py rename to python/lib/python3.10/site-packages/flask/wrappers.py diff --git a/lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/INSTALLER b/python/lib/python3.10/site-packages/idna-3.4.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/idna-3.4.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/idna-3.4.dist-info/LICENSE.md b/python/lib/python3.10/site-packages/idna-3.4.dist-info/LICENSE.md similarity index 100% rename from lib/python3.11/site-packages/idna-3.4.dist-info/LICENSE.md rename to python/lib/python3.10/site-packages/idna-3.4.dist-info/LICENSE.md diff --git a/lib/python3.11/site-packages/idna-3.4.dist-info/METADATA b/python/lib/python3.10/site-packages/idna-3.4.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/idna-3.4.dist-info/METADATA rename to python/lib/python3.10/site-packages/idna-3.4.dist-info/METADATA diff --git 
a/python/lib/python3.10/site-packages/idna-3.4.dist-info/RECORD b/python/lib/python3.10/site-packages/idna-3.4.dist-info/RECORD new file mode 100644 index 0000000..31b84fb --- /dev/null +++ b/python/lib/python3.10/site-packages/idna-3.4.dist-info/RECORD @@ -0,0 +1,23 @@ +idna-3.4.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +idna-3.4.dist-info/LICENSE.md,sha256=otbk2UC9JNvnuWRc3hmpeSzFHbeuDVrNMBrIYMqj6DY,1523 +idna-3.4.dist-info/METADATA,sha256=8aLSf9MFS7oB26pZh2hprg7eJp0UJSc-3rpf_evp4DA,9830 +idna-3.4.dist-info/RECORD,, +idna-3.4.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +idna-3.4.dist-info/WHEEL,sha256=4TfKIB_xu-04bc2iKz6_zFt-gEFEEDU_31HGhqzOCE8,81 +idna/__init__.py,sha256=KJQN1eQBr8iIK5SKrJ47lXvxG0BJ7Lm38W4zT0v_8lk,849 +idna/__pycache__/__init__.cpython-310.pyc,, +idna/__pycache__/codec.cpython-310.pyc,, +idna/__pycache__/compat.cpython-310.pyc,, +idna/__pycache__/core.cpython-310.pyc,, +idna/__pycache__/idnadata.cpython-310.pyc,, +idna/__pycache__/intranges.cpython-310.pyc,, +idna/__pycache__/package_data.cpython-310.pyc,, +idna/__pycache__/uts46data.cpython-310.pyc,, +idna/codec.py,sha256=6ly5odKfqrytKT9_7UrlGklHnf1DSK2r9C6cSM4sa28,3374 +idna/compat.py,sha256=0_sOEUMT4CVw9doD3vyRhX80X19PwqFoUBs7gWsFME4,321 +idna/core.py,sha256=1JxchwKzkxBSn7R_oCE12oBu3eVux0VzdxolmIad24M,12950 +idna/idnadata.py,sha256=xUjqKqiJV8Ho_XzBpAtv5JFoVPSupK-SUXvtjygUHqw,44375 +idna/intranges.py,sha256=YBr4fRYuWH7kTKS2tXlFjM24ZF1Pdvcir-aywniInqg,1881 +idna/package_data.py,sha256=C_jHJzmX8PI4xq0jpzmcTMxpb5lDsq4o5VyxQzlVrZE,21 +idna/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +idna/uts46data.py,sha256=zvjZU24s58_uAS850Mcd0NnD0X7_gCMAMjzWNIeUJdc,206539 diff --git a/lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/REQUESTED b/python/lib/python3.10/site-packages/idna-3.4.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/idna-3.4.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/idna-3.4.dist-info/WHEEL b/python/lib/python3.10/site-packages/idna-3.4.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/idna-3.4.dist-info/WHEEL rename to python/lib/python3.10/site-packages/idna-3.4.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/idna/__init__.py b/python/lib/python3.10/site-packages/idna/__init__.py similarity index 100% rename from lib/python3.11/site-packages/idna/__init__.py rename to python/lib/python3.10/site-packages/idna/__init__.py diff --git a/lib/python3.11/site-packages/idna/codec.py b/python/lib/python3.10/site-packages/idna/codec.py similarity index 100% rename from lib/python3.11/site-packages/idna/codec.py rename to python/lib/python3.10/site-packages/idna/codec.py diff --git a/lib/python3.11/site-packages/idna/compat.py b/python/lib/python3.10/site-packages/idna/compat.py similarity index 100% rename from lib/python3.11/site-packages/idna/compat.py rename to python/lib/python3.10/site-packages/idna/compat.py diff --git a/lib/python3.11/site-packages/idna/core.py b/python/lib/python3.10/site-packages/idna/core.py similarity index 100% rename from lib/python3.11/site-packages/idna/core.py rename to python/lib/python3.10/site-packages/idna/core.py diff --git a/lib/python3.11/site-packages/idna/idnadata.py b/python/lib/python3.10/site-packages/idna/idnadata.py similarity index 100% rename from lib/python3.11/site-packages/idna/idnadata.py rename to 
python/lib/python3.10/site-packages/idna/idnadata.py diff --git a/lib/python3.11/site-packages/idna/intranges.py b/python/lib/python3.10/site-packages/idna/intranges.py similarity index 100% rename from lib/python3.11/site-packages/idna/intranges.py rename to python/lib/python3.10/site-packages/idna/intranges.py diff --git a/lib/python3.11/site-packages/idna/package_data.py b/python/lib/python3.10/site-packages/idna/package_data.py similarity index 100% rename from lib/python3.11/site-packages/idna/package_data.py rename to python/lib/python3.10/site-packages/idna/package_data.py diff --git a/lib/python3.11/site-packages/idna/py.typed b/python/lib/python3.10/site-packages/idna/py.typed similarity index 100% rename from lib/python3.11/site-packages/idna/py.typed rename to python/lib/python3.10/site-packages/idna/py.typed diff --git a/lib/python3.11/site-packages/idna/uts46data.py b/python/lib/python3.10/site-packages/idna/uts46data.py similarity index 100% rename from lib/python3.11/site-packages/idna/uts46data.py rename to python/lib/python3.10/site-packages/idna/uts46data.py diff --git a/lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/INSTALLER b/python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/LICENSE b/python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/LICENSE similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/LICENSE rename to python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/LICENSE diff --git a/lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/METADATA b/python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/METADATA rename to python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/RECORD b/python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/RECORD new file mode 100644 index 0000000..e963762 --- /dev/null +++ b/python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/RECORD @@ -0,0 +1,26 @@ +importlib_metadata-6.6.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +importlib_metadata-6.6.0.dist-info/LICENSE,sha256=z8d0m5b2O9McPEK1xHG_dWgUBT6EfBDz6wA0F7xSPTA,11358 +importlib_metadata-6.6.0.dist-info/METADATA,sha256=PkZEyOdZ2YQVAGc1nwMGJ2o6cs7hWeBL2Ya5NoyUGbA,4958 +importlib_metadata-6.6.0.dist-info/RECORD,, +importlib_metadata-6.6.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +importlib_metadata-6.6.0.dist-info/WHEEL,sha256=pkctZYzUS4AYVn6dJ-7367OJZivF2e8RA9b_ZBjif18,92 +importlib_metadata-6.6.0.dist-info/top_level.txt,sha256=CO3fD9yylANiXkrMo4qHLV_mqXL2sC5JFKgt1yWAT-A,19 +importlib_metadata/__init__.py,sha256=e_TtkQm2IsKEBa4H9j502DqMUZ-k6KQa53o9VX47ofs,29949 +importlib_metadata/__pycache__/__init__.cpython-310.pyc,, +importlib_metadata/__pycache__/_adapters.cpython-310.pyc,, +importlib_metadata/__pycache__/_collections.cpython-310.pyc,, +importlib_metadata/__pycache__/_compat.cpython-310.pyc,, +importlib_metadata/__pycache__/_functools.cpython-310.pyc,, 
+importlib_metadata/__pycache__/_itertools.cpython-310.pyc,, +importlib_metadata/__pycache__/_meta.cpython-310.pyc,, +importlib_metadata/__pycache__/_py39compat.cpython-310.pyc,, +importlib_metadata/__pycache__/_text.cpython-310.pyc,, +importlib_metadata/_adapters.py,sha256=i8S6Ib1OQjcILA-l4gkzktMZe18TaeUNI49PLRp6OBU,2454 +importlib_metadata/_collections.py,sha256=CJ0OTCHIjWA0ZIVS4voORAsn2R4R2cQBEtPsZEJpASY,743 +importlib_metadata/_compat.py,sha256=VZUVAQntwXR1lZUYgZjOnXsV_B5mRV5FjNjwVP_EMSo,2096 +importlib_metadata/_functools.py,sha256=PsY2-4rrKX4RVeRC1oGp1lB1pmC9eKN88_f-bD9uOoA,2895 +importlib_metadata/_itertools.py,sha256=cvr_2v8BRbxcIl5x5ldfqdHjhI8Yi8s8yk50G_nm6jQ,2068 +importlib_metadata/_meta.py,sha256=I2AuaUMr5a6cTdZleV9WpyqUCSooqqV-zSzr1qn7FMw,1615 +importlib_metadata/_py39compat.py,sha256=2Tk5twb_VgLCY-1NEAQjdZp_S9OFMC-pUzP2isuaPsQ,1098 +importlib_metadata/_text.py,sha256=HCsFksZpJLeTP3NEk_ngrAeXVRRtTrtyh9eOABoRP4A,2166 +importlib_metadata/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 diff --git a/lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/REQUESTED b/python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/WHEEL b/python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/WHEEL rename to python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/top_level.txt b/python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata-6.6.0.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/importlib_metadata-6.6.0.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/importlib_metadata/__init__.py b/python/lib/python3.10/site-packages/importlib_metadata/__init__.py similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata/__init__.py rename to python/lib/python3.10/site-packages/importlib_metadata/__init__.py diff --git a/lib/python3.11/site-packages/importlib_metadata/_adapters.py b/python/lib/python3.10/site-packages/importlib_metadata/_adapters.py similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata/_adapters.py rename to python/lib/python3.10/site-packages/importlib_metadata/_adapters.py diff --git a/lib/python3.11/site-packages/importlib_metadata/_collections.py b/python/lib/python3.10/site-packages/importlib_metadata/_collections.py similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata/_collections.py rename to python/lib/python3.10/site-packages/importlib_metadata/_collections.py diff --git a/lib/python3.11/site-packages/importlib_metadata/_compat.py b/python/lib/python3.10/site-packages/importlib_metadata/_compat.py similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata/_compat.py rename to python/lib/python3.10/site-packages/importlib_metadata/_compat.py diff --git a/lib/python3.11/site-packages/importlib_metadata/_functools.py b/python/lib/python3.10/site-packages/importlib_metadata/_functools.py similarity index 
100% rename from lib/python3.11/site-packages/importlib_metadata/_functools.py rename to python/lib/python3.10/site-packages/importlib_metadata/_functools.py diff --git a/lib/python3.11/site-packages/importlib_metadata/_itertools.py b/python/lib/python3.10/site-packages/importlib_metadata/_itertools.py similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata/_itertools.py rename to python/lib/python3.10/site-packages/importlib_metadata/_itertools.py diff --git a/lib/python3.11/site-packages/importlib_metadata/_meta.py b/python/lib/python3.10/site-packages/importlib_metadata/_meta.py similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata/_meta.py rename to python/lib/python3.10/site-packages/importlib_metadata/_meta.py diff --git a/lib/python3.11/site-packages/importlib_metadata/_py39compat.py b/python/lib/python3.10/site-packages/importlib_metadata/_py39compat.py similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata/_py39compat.py rename to python/lib/python3.10/site-packages/importlib_metadata/_py39compat.py diff --git a/lib/python3.11/site-packages/importlib_metadata/_text.py b/python/lib/python3.10/site-packages/importlib_metadata/_text.py similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata/_text.py rename to python/lib/python3.10/site-packages/importlib_metadata/_text.py diff --git a/lib/python3.11/site-packages/importlib_metadata/py.typed b/python/lib/python3.10/site-packages/importlib_metadata/py.typed similarity index 100% rename from lib/python3.11/site-packages/importlib_metadata/py.typed rename to python/lib/python3.10/site-packages/importlib_metadata/py.typed diff --git a/lib/python3.11/site-packages/mutagen-1.46.0.dist-info/INSTALLER b/python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/mutagen-1.46.0.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/LICENSE.rst b/python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/LICENSE.rst similarity index 100% rename from lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/LICENSE.rst rename to python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/LICENSE.rst diff --git a/lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/METADATA b/python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/METADATA rename to python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/RECORD b/python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/RECORD new file mode 100644 index 0000000..156b2ca --- /dev/null +++ b/python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/RECORD @@ -0,0 +1,24 @@ +itsdangerous-2.1.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +itsdangerous-2.1.2.dist-info/LICENSE.rst,sha256=Y68JiRtr6K0aQlLtQ68PTvun_JSOIoNnvtfzxa4LCdc,1475 +itsdangerous-2.1.2.dist-info/METADATA,sha256=ThrHIJQ_6XlfbDMCAVe_hawT7IXiIxnTBIDrwxxtucQ,2928 +itsdangerous-2.1.2.dist-info/RECORD,, +itsdangerous-2.1.2.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 
+itsdangerous-2.1.2.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +itsdangerous-2.1.2.dist-info/top_level.txt,sha256=gKN1OKLk81i7fbWWildJA88EQ9NhnGMSvZqhfz9ICjk,13 +itsdangerous/__init__.py,sha256=n4mkyjlIVn23pgsgCIw0MJKPdcHIetyeRpe5Fwsn8qg,876 +itsdangerous/__pycache__/__init__.cpython-310.pyc,, +itsdangerous/__pycache__/_json.cpython-310.pyc,, +itsdangerous/__pycache__/encoding.cpython-310.pyc,, +itsdangerous/__pycache__/exc.cpython-310.pyc,, +itsdangerous/__pycache__/serializer.cpython-310.pyc,, +itsdangerous/__pycache__/signer.cpython-310.pyc,, +itsdangerous/__pycache__/timed.cpython-310.pyc,, +itsdangerous/__pycache__/url_safe.cpython-310.pyc,, +itsdangerous/_json.py,sha256=wIhs_7-_XZolmyr-JvKNiy_LgAcfevYR0qhCVdlIhg8,450 +itsdangerous/encoding.py,sha256=pgh86snHC76dPLNCnPlrjR5SaYL_M8H-gWRiiLNbhCU,1419 +itsdangerous/exc.py,sha256=VFxmP2lMoSJFqxNMzWonqs35ROII4-fvCBfG0v1Tkbs,3206 +itsdangerous/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +itsdangerous/serializer.py,sha256=zgZ1-U705jHDpt62x_pmLJdryEKDNAbt5UkJtnkcCSw,11144 +itsdangerous/signer.py,sha256=QUH0iX0in-OTptMAXKU5zWMwmOCXn1fsDsubXiGdFN4,9367 +itsdangerous/timed.py,sha256=5CBWLds4Nm8-3bFVC8RxNzFjx6PSwjch8wuZ5cwcHFI,8174 +itsdangerous/url_safe.py,sha256=5bC4jSKOjWNRkWrFseifWVXUnHnPgwOLROjiOwb-eeo,2402 diff --git a/lib/python3.11/site-packages/mutagen-1.46.0.dist-info/REQUESTED b/python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/mutagen-1.46.0.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/mutagen-1.46.0.dist-info/WHEEL b/python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/mutagen-1.46.0.dist-info/WHEEL rename to python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/top_level.txt b/python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/itsdangerous-2.1.2.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/itsdangerous-2.1.2.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/itsdangerous/__init__.py b/python/lib/python3.10/site-packages/itsdangerous/__init__.py similarity index 100% rename from lib/python3.11/site-packages/itsdangerous/__init__.py rename to python/lib/python3.10/site-packages/itsdangerous/__init__.py diff --git a/lib/python3.11/site-packages/itsdangerous/_json.py b/python/lib/python3.10/site-packages/itsdangerous/_json.py similarity index 100% rename from lib/python3.11/site-packages/itsdangerous/_json.py rename to python/lib/python3.10/site-packages/itsdangerous/_json.py diff --git a/lib/python3.11/site-packages/itsdangerous/encoding.py b/python/lib/python3.10/site-packages/itsdangerous/encoding.py similarity index 100% rename from lib/python3.11/site-packages/itsdangerous/encoding.py rename to python/lib/python3.10/site-packages/itsdangerous/encoding.py diff --git a/lib/python3.11/site-packages/itsdangerous/exc.py b/python/lib/python3.10/site-packages/itsdangerous/exc.py similarity index 100% rename from lib/python3.11/site-packages/itsdangerous/exc.py rename to python/lib/python3.10/site-packages/itsdangerous/exc.py diff --git a/lib/python3.11/site-packages/itsdangerous/py.typed 
b/python/lib/python3.10/site-packages/itsdangerous/py.typed similarity index 100% rename from lib/python3.11/site-packages/itsdangerous/py.typed rename to python/lib/python3.10/site-packages/itsdangerous/py.typed diff --git a/lib/python3.11/site-packages/itsdangerous/serializer.py b/python/lib/python3.10/site-packages/itsdangerous/serializer.py similarity index 100% rename from lib/python3.11/site-packages/itsdangerous/serializer.py rename to python/lib/python3.10/site-packages/itsdangerous/serializer.py diff --git a/lib/python3.11/site-packages/itsdangerous/signer.py b/python/lib/python3.10/site-packages/itsdangerous/signer.py similarity index 100% rename from lib/python3.11/site-packages/itsdangerous/signer.py rename to python/lib/python3.10/site-packages/itsdangerous/signer.py diff --git a/lib/python3.11/site-packages/itsdangerous/timed.py b/python/lib/python3.10/site-packages/itsdangerous/timed.py similarity index 100% rename from lib/python3.11/site-packages/itsdangerous/timed.py rename to python/lib/python3.10/site-packages/itsdangerous/timed.py diff --git a/lib/python3.11/site-packages/itsdangerous/url_safe.py b/python/lib/python3.10/site-packages/itsdangerous/url_safe.py similarity index 100% rename from lib/python3.11/site-packages/itsdangerous/url_safe.py rename to python/lib/python3.10/site-packages/itsdangerous/url_safe.py diff --git a/lib/python3.11/site-packages/jinja2/__init__.py b/python/lib/python3.10/site-packages/jinja2/__init__.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/__init__.py rename to python/lib/python3.10/site-packages/jinja2/__init__.py diff --git a/lib/python3.11/site-packages/jinja2/_identifier.py b/python/lib/python3.10/site-packages/jinja2/_identifier.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/_identifier.py rename to python/lib/python3.10/site-packages/jinja2/_identifier.py diff --git a/lib/python3.11/site-packages/jinja2/async_utils.py b/python/lib/python3.10/site-packages/jinja2/async_utils.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/async_utils.py rename to python/lib/python3.10/site-packages/jinja2/async_utils.py diff --git a/lib/python3.11/site-packages/jinja2/bccache.py b/python/lib/python3.10/site-packages/jinja2/bccache.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/bccache.py rename to python/lib/python3.10/site-packages/jinja2/bccache.py diff --git a/lib/python3.11/site-packages/jinja2/compiler.py b/python/lib/python3.10/site-packages/jinja2/compiler.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/compiler.py rename to python/lib/python3.10/site-packages/jinja2/compiler.py diff --git a/lib/python3.11/site-packages/jinja2/constants.py b/python/lib/python3.10/site-packages/jinja2/constants.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/constants.py rename to python/lib/python3.10/site-packages/jinja2/constants.py diff --git a/lib/python3.11/site-packages/jinja2/debug.py b/python/lib/python3.10/site-packages/jinja2/debug.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/debug.py rename to python/lib/python3.10/site-packages/jinja2/debug.py diff --git a/lib/python3.11/site-packages/jinja2/defaults.py b/python/lib/python3.10/site-packages/jinja2/defaults.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/defaults.py rename to python/lib/python3.10/site-packages/jinja2/defaults.py diff --git a/lib/python3.11/site-packages/jinja2/environment.py 
b/python/lib/python3.10/site-packages/jinja2/environment.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/environment.py rename to python/lib/python3.10/site-packages/jinja2/environment.py diff --git a/lib/python3.11/site-packages/jinja2/exceptions.py b/python/lib/python3.10/site-packages/jinja2/exceptions.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/exceptions.py rename to python/lib/python3.10/site-packages/jinja2/exceptions.py diff --git a/lib/python3.11/site-packages/jinja2/ext.py b/python/lib/python3.10/site-packages/jinja2/ext.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/ext.py rename to python/lib/python3.10/site-packages/jinja2/ext.py diff --git a/lib/python3.11/site-packages/jinja2/filters.py b/python/lib/python3.10/site-packages/jinja2/filters.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/filters.py rename to python/lib/python3.10/site-packages/jinja2/filters.py diff --git a/lib/python3.11/site-packages/jinja2/idtracking.py b/python/lib/python3.10/site-packages/jinja2/idtracking.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/idtracking.py rename to python/lib/python3.10/site-packages/jinja2/idtracking.py diff --git a/lib/python3.11/site-packages/jinja2/lexer.py b/python/lib/python3.10/site-packages/jinja2/lexer.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/lexer.py rename to python/lib/python3.10/site-packages/jinja2/lexer.py diff --git a/lib/python3.11/site-packages/jinja2/loaders.py b/python/lib/python3.10/site-packages/jinja2/loaders.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/loaders.py rename to python/lib/python3.10/site-packages/jinja2/loaders.py diff --git a/lib/python3.11/site-packages/jinja2/meta.py b/python/lib/python3.10/site-packages/jinja2/meta.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/meta.py rename to python/lib/python3.10/site-packages/jinja2/meta.py diff --git a/lib/python3.11/site-packages/jinja2/nativetypes.py b/python/lib/python3.10/site-packages/jinja2/nativetypes.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/nativetypes.py rename to python/lib/python3.10/site-packages/jinja2/nativetypes.py diff --git a/lib/python3.11/site-packages/jinja2/nodes.py b/python/lib/python3.10/site-packages/jinja2/nodes.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/nodes.py rename to python/lib/python3.10/site-packages/jinja2/nodes.py diff --git a/lib/python3.11/site-packages/jinja2/optimizer.py b/python/lib/python3.10/site-packages/jinja2/optimizer.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/optimizer.py rename to python/lib/python3.10/site-packages/jinja2/optimizer.py diff --git a/lib/python3.11/site-packages/jinja2/parser.py b/python/lib/python3.10/site-packages/jinja2/parser.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/parser.py rename to python/lib/python3.10/site-packages/jinja2/parser.py diff --git a/lib/python3.11/site-packages/jinja2/py.typed b/python/lib/python3.10/site-packages/jinja2/py.typed similarity index 100% rename from lib/python3.11/site-packages/jinja2/py.typed rename to python/lib/python3.10/site-packages/jinja2/py.typed diff --git a/lib/python3.11/site-packages/jinja2/runtime.py b/python/lib/python3.10/site-packages/jinja2/runtime.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/runtime.py rename to 
python/lib/python3.10/site-packages/jinja2/runtime.py diff --git a/lib/python3.11/site-packages/jinja2/sandbox.py b/python/lib/python3.10/site-packages/jinja2/sandbox.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/sandbox.py rename to python/lib/python3.10/site-packages/jinja2/sandbox.py diff --git a/lib/python3.11/site-packages/jinja2/tests.py b/python/lib/python3.10/site-packages/jinja2/tests.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/tests.py rename to python/lib/python3.10/site-packages/jinja2/tests.py diff --git a/lib/python3.11/site-packages/jinja2/utils.py b/python/lib/python3.10/site-packages/jinja2/utils.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/utils.py rename to python/lib/python3.10/site-packages/jinja2/utils.py diff --git a/lib/python3.11/site-packages/jinja2/visitor.py b/python/lib/python3.10/site-packages/jinja2/visitor.py similarity index 100% rename from lib/python3.11/site-packages/jinja2/visitor.py rename to python/lib/python3.10/site-packages/jinja2/visitor.py diff --git a/lib/python3.11/site-packages/markupsafe/__init__.py b/python/lib/python3.10/site-packages/markupsafe/__init__.py similarity index 100% rename from lib/python3.11/site-packages/markupsafe/__init__.py rename to python/lib/python3.10/site-packages/markupsafe/__init__.py diff --git a/lib/python3.11/site-packages/markupsafe/_native.py b/python/lib/python3.10/site-packages/markupsafe/_native.py similarity index 100% rename from lib/python3.11/site-packages/markupsafe/_native.py rename to python/lib/python3.10/site-packages/markupsafe/_native.py diff --git a/lib/python3.11/site-packages/markupsafe/_speedups.c b/python/lib/python3.10/site-packages/markupsafe/_speedups.c similarity index 100% rename from lib/python3.11/site-packages/markupsafe/_speedups.c rename to python/lib/python3.10/site-packages/markupsafe/_speedups.c diff --git a/python/lib/python3.10/site-packages/markupsafe/_speedups.cpython-310-x86_64-linux-gnu.so b/python/lib/python3.10/site-packages/markupsafe/_speedups.cpython-310-x86_64-linux-gnu.so new file mode 100755 index 0000000..b95b9d1 Binary files /dev/null and b/python/lib/python3.10/site-packages/markupsafe/_speedups.cpython-310-x86_64-linux-gnu.so differ diff --git a/lib/python3.11/site-packages/markupsafe/_speedups.pyi b/python/lib/python3.10/site-packages/markupsafe/_speedups.pyi similarity index 100% rename from lib/python3.11/site-packages/markupsafe/_speedups.pyi rename to python/lib/python3.10/site-packages/markupsafe/_speedups.pyi diff --git a/lib/python3.11/site-packages/markupsafe/py.typed b/python/lib/python3.10/site-packages/markupsafe/py.typed similarity index 100% rename from lib/python3.11/site-packages/markupsafe/py.typed rename to python/lib/python3.10/site-packages/markupsafe/py.typed diff --git a/lib/python3.11/site-packages/mutagen-1.46.0.dist-info/COPYING b/python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/COPYING similarity index 100% rename from lib/python3.11/site-packages/mutagen-1.46.0.dist-info/COPYING rename to python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/COPYING diff --git a/lib/python3.11/site-packages/pip-22.3.1.dist-info/INSTALLER b/python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/pip-22.3.1.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/INSTALLER diff --git 
a/lib/python3.11/site-packages/mutagen-1.46.0.dist-info/METADATA b/python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/mutagen-1.46.0.dist-info/METADATA rename to python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/RECORD b/python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/RECORD new file mode 100644 index 0000000..e1a2903 --- /dev/null +++ b/python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/RECORD @@ -0,0 +1,135 @@ +../../../bin/mid3cp,sha256=8F-CD7cYj468pDk28u8rkkuZovP2MGFLuOogwrHLn7Q,265 +../../../bin/mid3iconv,sha256=cLSjr8H98V5_LkeceuEhTOavWhc9Hp7lOl4G9OFXVAk,268 +../../../bin/mid3v2,sha256=16eenWaOPDFlGPkc3paWAvJtr83lh35wELIDAIYqVqg,265 +../../../bin/moggsplit,sha256=2KJZNYiRtiwVIeOQIHRErq0tz5h-VGsnAfE20ElZmwQ,268 +../../../bin/mutagen-inspect,sha256=RjJYyWbq9yi3YTHKNG14OW628tdvlQzprQ5ZpggiSN0,274 +../../../bin/mutagen-pony,sha256=g8jycSLUCF2eNP3m5-WdvExr5EVOC9F8fMcZnxoSYL0,271 +../../../share/man/man1/mid3cp.1,sha256=b3BLq2KQGmgZO5XyiWRkraQmT-hSN6FLplinjRtopg0,1739 +../../../share/man/man1/mid3iconv.1,sha256=BxoSX8eFILegJPhGPgYj1Y7BddlLq2qe4yyGTKZfxtc,1551 +../../../share/man/man1/mid3v2.1,sha256=p1mx1TDKGq5sCqiXJXB53gi0yZaoLrcKOC0MATrhvq0,5163 +../../../share/man/man1/moggsplit.1,sha256=9vTrMHLW4t-AsygPSus0loe8QAfWV4jvMGQQ9WlA0iU,1692 +../../../share/man/man1/mutagen-inspect.1,sha256=af3pH-nZiBWt4PMDPaaOq1DlIVnRU3KdYE6f9pYlzLw,1133 +../../../share/man/man1/mutagen-pony.1,sha256=QEcL1kPq4ad2y6CnjWqugrsikLKQL-7v-D8O349oClY,1120 +mutagen-1.46.0.dist-info/COPYING,sha256=gXf5dRMhNSbfLPYYTY_5hsZ1r7UU1OaKQEAQUhuIBkM,18092 +mutagen-1.46.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +mutagen-1.46.0.dist-info/METADATA,sha256=jcWKb6dck5Gel1NEaWFzNvD4Oo6ZhqEDKZ4_hC5KC04,1767 +mutagen-1.46.0.dist-info/RECORD,, +mutagen-1.46.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +mutagen-1.46.0.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +mutagen-1.46.0.dist-info/entry_points.txt,sha256=-pJdkKwhUfdHjdi0IoOFxoGLZ38t2AnHphs2O6qjHeM,319 +mutagen-1.46.0.dist-info/top_level.txt,sha256=Y8wYPwfNHGLFJ8BcuQfIzyEQ4UPDWMNx40eEGDprkUQ,8 +mutagen/__init__.py,sha256=JtToLbNb53MipuOpfYjsB1rHoj7jF2n60LrcOBmHaeQ,1031 +mutagen/__pycache__/__init__.cpython-310.pyc,, +mutagen/__pycache__/_constants.cpython-310.pyc,, +mutagen/__pycache__/_file.cpython-310.pyc,, +mutagen/__pycache__/_iff.cpython-310.pyc,, +mutagen/__pycache__/_riff.cpython-310.pyc,, +mutagen/__pycache__/_tags.cpython-310.pyc,, +mutagen/__pycache__/_util.cpython-310.pyc,, +mutagen/__pycache__/_vorbis.cpython-310.pyc,, +mutagen/__pycache__/aac.cpython-310.pyc,, +mutagen/__pycache__/ac3.cpython-310.pyc,, +mutagen/__pycache__/aiff.cpython-310.pyc,, +mutagen/__pycache__/apev2.cpython-310.pyc,, +mutagen/__pycache__/dsdiff.cpython-310.pyc,, +mutagen/__pycache__/dsf.cpython-310.pyc,, +mutagen/__pycache__/easyid3.cpython-310.pyc,, +mutagen/__pycache__/easymp4.cpython-310.pyc,, +mutagen/__pycache__/flac.cpython-310.pyc,, +mutagen/__pycache__/m4a.cpython-310.pyc,, +mutagen/__pycache__/monkeysaudio.cpython-310.pyc,, +mutagen/__pycache__/musepack.cpython-310.pyc,, +mutagen/__pycache__/ogg.cpython-310.pyc,, +mutagen/__pycache__/oggflac.cpython-310.pyc,, +mutagen/__pycache__/oggopus.cpython-310.pyc,, +mutagen/__pycache__/oggspeex.cpython-310.pyc,, +mutagen/__pycache__/oggtheora.cpython-310.pyc,, 
+mutagen/__pycache__/oggvorbis.cpython-310.pyc,, +mutagen/__pycache__/optimfrog.cpython-310.pyc,, +mutagen/__pycache__/smf.cpython-310.pyc,, +mutagen/__pycache__/tak.cpython-310.pyc,, +mutagen/__pycache__/trueaudio.cpython-310.pyc,, +mutagen/__pycache__/wave.cpython-310.pyc,, +mutagen/__pycache__/wavpack.cpython-310.pyc,, +mutagen/_constants.py,sha256=rrwtX-wSIzXGmDgTSkKasxaPwjS5eBl95cWjpkE0k5Q,3619 +mutagen/_file.py,sha256=oKt0h8fZ0-IUzwatGORasVdu2WQm_PWXHCE1xB9tse0,8846 +mutagen/_iff.py,sha256=hVKndRWVHbdC1zi7imcynnAcAZQmpWFynDb4eQB8Jy8,12097 +mutagen/_riff.py,sha256=5u8PIDDTGYN2ntCmLRGpUV_4fhps5O9PxEiFswy55QI,1863 +mutagen/_tags.py,sha256=GJk9RRKjtw0jlvNHago6PkvZQvSXC5bxQr7DXF0kyaQ,3996 +mutagen/_tools/__init__.py,sha256=Tc6qWNk2Xwi2ixB2QAWVsH_pz0RIPwk2BzDj6IOAatI,284 +mutagen/_tools/__pycache__/__init__.cpython-310.pyc,, +mutagen/_tools/__pycache__/_util.cpython-310.pyc,, +mutagen/_tools/__pycache__/mid3cp.cpython-310.pyc,, +mutagen/_tools/__pycache__/mid3iconv.cpython-310.pyc,, +mutagen/_tools/__pycache__/mid3v2.cpython-310.pyc,, +mutagen/_tools/__pycache__/moggsplit.cpython-310.pyc,, +mutagen/_tools/__pycache__/mutagen_inspect.cpython-310.pyc,, +mutagen/_tools/__pycache__/mutagen_pony.cpython-310.pyc,, +mutagen/_tools/_util.py,sha256=P9nEIYqOnQalB8V_giMygKZ2BHV9mrNAouzD8R6A37o,2404 +mutagen/_tools/mid3cp.py,sha256=7B-UhZD9XkaZn3xqv6TZhOtDTw4cXMDSsrjaqJ6FPNQ,4038 +mutagen/_tools/mid3iconv.py,sha256=EUASOBhMK8nhi3I6R6bo8w8b_lOVbzebNvXtpkQZJuA,5143 +mutagen/_tools/mid3v2.py,sha256=ejc56EPow1UwI0OoqAEqeQ9EBmAwgHTsUUYHJmFxNKw,18297 +mutagen/_tools/moggsplit.py,sha256=2okq_eHdGs4F6ia6YQzsS3NxCb511VdiTn1tFERLqyQ,2540 +mutagen/_tools/mutagen_inspect.py,sha256=VeO_MdDExmOy7BRpZ_ekYQwD87fslg-ftzBuMJk6w7U,1199 +mutagen/_tools/mutagen_pony.py,sha256=4RjM4yfUJ82S5pBLsqdFx_nudWB6CCfYdLUcWm3ZYSE,3269 +mutagen/_util.py,sha256=BF8cUk_sWgOVMdbKNyO1hNOfWZa0v4G_qltM6x6wbXA,29464 +mutagen/_vorbis.py,sha256=cISGXsYfcjzDdlMvnhSEz2bJnBnSewiojrtn4rdZDIk,9488 +mutagen/aac.py,sha256=JYtGSBspUg4Pti28DRwcWL2ADITtQGLLjAyIieQg1Yo,11741 +mutagen/ac3.py,sha256=yYdtT3eAJ0oZt1dgMc4dQfv5RtK6MYYF-VKoYlmaon0,10427 +mutagen/aiff.py,sha256=lcMAOElWq-vIJmcXh5yRkCiLshfQcT3eDOGOxfq2FDw,6370 +mutagen/apev2.py,sha256=UCTqKShl2MIs4zlR2aBbWNguMIU5VZX-kQhpSy7ydEw,20747 +mutagen/asf/__init__.py,sha256=RLgJZeEf8Lg7KxUQ9WjQDrBfT2fJ7dYUy_aLBuFEeMc,9848 +mutagen/asf/__pycache__/__init__.cpython-310.pyc,, +mutagen/asf/__pycache__/_attrs.cpython-310.pyc,, +mutagen/asf/__pycache__/_objects.cpython-310.pyc,, +mutagen/asf/__pycache__/_util.cpython-310.pyc,, +mutagen/asf/_attrs.py,sha256=VTjmr9rPTwX836kPtH6et44xK8LmtATav62BZP8iuJY,9790 +mutagen/asf/_objects.py,sha256=2I1rGZipwZt4wlwJ99QkOoi_FSaeXoyflek52FQ7na0,15280 +mutagen/asf/_util.py,sha256=yQaS17X7nXFyAhQBlJpvu4Flc59O2DDmobuaE1L8_Mg,10756 +mutagen/dsdiff.py,sha256=brpxNh9FbCSCdtHypqCmux5n14rWbRYGe9-BCqUfg1w,7965 +mutagen/dsf.py,sha256=tUgc9wrGXcdIYyLUnTfyQKbuSaCs-NPkM0WeQrKed4E,9962 +mutagen/easyid3.py,sha256=jqnkKvHCrrJbHl1JfZLZSniKqRbSCepDGJR8RKjmFvw,15976 +mutagen/easymp4.py,sha256=rmETfoKd1O1HB9fzMxdSOLUxllbX6E7W5lzbVo-V3Y0,8696 +mutagen/flac.py,sha256=Vh6etXwhpRYHddd-OQi9CNNtqvqv9Z4VWqVw-p72eCM,31792 +mutagen/id3/__init__.py,sha256=gh7m94_BiUBBwez6NMJBG2U9DOV2FoY4ZVLAJ8JJSUg,4593 +mutagen/id3/__pycache__/__init__.cpython-310.pyc,, +mutagen/id3/__pycache__/_file.cpython-310.pyc,, +mutagen/id3/__pycache__/_frames.cpython-310.pyc,, +mutagen/id3/__pycache__/_id3v1.cpython-310.pyc,, +mutagen/id3/__pycache__/_specs.cpython-310.pyc,, +mutagen/id3/__pycache__/_tags.cpython-310.pyc,, 
+mutagen/id3/__pycache__/_util.cpython-310.pyc,, +mutagen/id3/_file.py,sha256=0tqxb_3xWz50DTtVA4QyWOffWImuFPFu8GlN3H81pHw,12297 +mutagen/id3/_frames.py,sha256=7LQuCK0T4sT5qeDJRtKgygRwNlxye_YXmUfzosQCx2o,48637 +mutagen/id3/_id3v1.py,sha256=A9di3mF6FQmfo-e4pdQCDhdWyE7woiLdjN-l0XUZMrI,6637 +mutagen/id3/_specs.py,sha256=7pSGma8B7SoJxIzBa152lMkM2WfWqvCfv0OQCU2takc,25151 +mutagen/id3/_tags.py,sha256=hWcLfeuQGGCbQhkT-WK0hxEfbV8DXlM9VVoxqdE-Uok,20803 +mutagen/id3/_util.py,sha256=yHh7tqSfs3SuXnCEbvRJblnaD-PcKMtqG2_9TFK7kp4,4399 +mutagen/m4a.py,sha256=HaO4v_fUea9HGHqJCDzK7ZCZC2e2UwX4kun20Mg2gFQ,2007 +mutagen/monkeysaudio.py,sha256=2H0dLvbKL5Zb5Qp6-QK4LTXF_c3KdqwX0MbEf3hRrOw,3328 +mutagen/mp3/__init__.py,sha256=PHpOP6gWINNJ9m36JL8vZ_fTDPdbO4Lwob6gNnW_oMQ,14895 +mutagen/mp3/__pycache__/__init__.cpython-310.pyc,, +mutagen/mp3/__pycache__/_util.cpython-310.pyc,, +mutagen/mp3/_util.py,sha256=SfeBo7xNPpPkMUH_phBmZusNxPyrp-qcXaxWbBB8w-w,15888 +mutagen/mp4/__init__.py,sha256=mraq-eq5f-rfPdj10dIuVId2cTLdAjRufHw3KVLZalk,41160 +mutagen/mp4/__pycache__/__init__.cpython-310.pyc,, +mutagen/mp4/__pycache__/_as_entry.cpython-310.pyc,, +mutagen/mp4/__pycache__/_atom.cpython-310.pyc,, +mutagen/mp4/__pycache__/_util.cpython-310.pyc,, +mutagen/mp4/_as_entry.py,sha256=1OGSVceIMiDNDmgOGw9jHVM7M_0Pk00DsQD_ci6u5p8,16906 +mutagen/mp4/_atom.py,sha256=h8Mi_Yw_5SmJRJs1x0YucyMQ1YkWAfYuGYrCRrKMR-g,6309 +mutagen/mp4/_util.py,sha256=VOQ9P1N65v91jPHjMBx5AXrEYRzGx4puNDylZx2RIY0,641 +mutagen/musepack.py,sha256=sYqdMy6l_somP3LHkvRfnIKfum6MtDBIyWqBc1xOT6o,9821 +mutagen/ogg.py,sha256=sH8SNZLfX6sdLHU3G7uJzUdi5ka8pqf6tGOerc75UTg,20226 +mutagen/oggflac.py,sha256=gprJMcJS-hGOO4Kqi7xqTT2QKVnFlDrkIkqiK0bMY8s,5278 +mutagen/oggopus.py,sha256=RQq1lJprziAz3HozaaFRoBPaDlp9M5ycK2hCikBRMMs,5334 +mutagen/oggspeex.py,sha256=GxTnqlNAcWfGsj7E7AhnqVP3ztie80xoMo89JaTg2Qk,5211 +mutagen/oggtheora.py,sha256=pgz0us99iYCc48W8JwNTmo2BcDcgfl1bLL29QUKNt0A,5465 +mutagen/oggvorbis.py,sha256=cuQEq0TIJVwlIkzkM9pF8wgzTiCUmWyHgXMPud4rzPU,5828 +mutagen/optimfrog.py,sha256=IUcQhA_9S5vMxBlIqx5JSwpJk_FixozJiEk9aaHCFOw,3229 +mutagen/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +mutagen/smf.py,sha256=_Eku5r7fCEFLU1YNjxERIfQI_SgbZifhMNdJF1X5etM,5640 +mutagen/tak.py,sha256=gNJRFu8HXULFmGaWyGSJC3fkWnrN6cOapmHH7bXKEt0,7038 +mutagen/trueaudio.py,sha256=1OUqVvNpPZy19eX7aGeWBsUxAhbuk1PFWLugaZxSRLk,2586 +mutagen/wave.py,sha256=JYnqoih_tV6ALPC46dDzUnqI1KReThbw9Hvy1eSXrg4,5794 +mutagen/wavpack.py,sha256=4Hkuce093CzXPBzjS0zz9mnwCKLnik22JLEwRQPQPxM,4402 diff --git a/lib/python3.11/site-packages/pip-22.3.1.dist-info/REQUESTED b/python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/pip-22.3.1.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/tzlocal-5.0.1.dist-info/WHEEL b/python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/tzlocal-5.0.1.dist-info/WHEEL rename to python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/mutagen-1.46.0.dist-info/entry_points.txt b/python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/entry_points.txt similarity index 100% rename from lib/python3.11/site-packages/mutagen-1.46.0.dist-info/entry_points.txt rename to python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/entry_points.txt diff --git 
a/lib/python3.11/site-packages/mutagen-1.46.0.dist-info/top_level.txt b/python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/mutagen-1.46.0.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/mutagen-1.46.0.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/mutagen/__init__.py b/python/lib/python3.10/site-packages/mutagen/__init__.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/__init__.py rename to python/lib/python3.10/site-packages/mutagen/__init__.py diff --git a/lib/python3.11/site-packages/mutagen/_constants.py b/python/lib/python3.10/site-packages/mutagen/_constants.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_constants.py rename to python/lib/python3.10/site-packages/mutagen/_constants.py diff --git a/lib/python3.11/site-packages/mutagen/_file.py b/python/lib/python3.10/site-packages/mutagen/_file.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_file.py rename to python/lib/python3.10/site-packages/mutagen/_file.py diff --git a/lib/python3.11/site-packages/mutagen/_iff.py b/python/lib/python3.10/site-packages/mutagen/_iff.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_iff.py rename to python/lib/python3.10/site-packages/mutagen/_iff.py diff --git a/lib/python3.11/site-packages/mutagen/_riff.py b/python/lib/python3.10/site-packages/mutagen/_riff.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_riff.py rename to python/lib/python3.10/site-packages/mutagen/_riff.py diff --git a/lib/python3.11/site-packages/mutagen/_tags.py b/python/lib/python3.10/site-packages/mutagen/_tags.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_tags.py rename to python/lib/python3.10/site-packages/mutagen/_tags.py diff --git a/lib/python3.11/site-packages/mutagen/_tools/__init__.py b/python/lib/python3.10/site-packages/mutagen/_tools/__init__.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_tools/__init__.py rename to python/lib/python3.10/site-packages/mutagen/_tools/__init__.py diff --git a/lib/python3.11/site-packages/mutagen/_tools/_util.py b/python/lib/python3.10/site-packages/mutagen/_tools/_util.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_tools/_util.py rename to python/lib/python3.10/site-packages/mutagen/_tools/_util.py diff --git a/lib/python3.11/site-packages/mutagen/_tools/mid3cp.py b/python/lib/python3.10/site-packages/mutagen/_tools/mid3cp.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_tools/mid3cp.py rename to python/lib/python3.10/site-packages/mutagen/_tools/mid3cp.py diff --git a/lib/python3.11/site-packages/mutagen/_tools/mid3iconv.py b/python/lib/python3.10/site-packages/mutagen/_tools/mid3iconv.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_tools/mid3iconv.py rename to python/lib/python3.10/site-packages/mutagen/_tools/mid3iconv.py diff --git a/lib/python3.11/site-packages/mutagen/_tools/mid3v2.py b/python/lib/python3.10/site-packages/mutagen/_tools/mid3v2.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_tools/mid3v2.py rename to python/lib/python3.10/site-packages/mutagen/_tools/mid3v2.py diff --git a/lib/python3.11/site-packages/mutagen/_tools/moggsplit.py b/python/lib/python3.10/site-packages/mutagen/_tools/moggsplit.py similarity index 100% rename from 
lib/python3.11/site-packages/mutagen/_tools/moggsplit.py rename to python/lib/python3.10/site-packages/mutagen/_tools/moggsplit.py diff --git a/lib/python3.11/site-packages/mutagen/_tools/mutagen_inspect.py b/python/lib/python3.10/site-packages/mutagen/_tools/mutagen_inspect.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_tools/mutagen_inspect.py rename to python/lib/python3.10/site-packages/mutagen/_tools/mutagen_inspect.py diff --git a/lib/python3.11/site-packages/mutagen/_tools/mutagen_pony.py b/python/lib/python3.10/site-packages/mutagen/_tools/mutagen_pony.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_tools/mutagen_pony.py rename to python/lib/python3.10/site-packages/mutagen/_tools/mutagen_pony.py diff --git a/lib/python3.11/site-packages/mutagen/_util.py b/python/lib/python3.10/site-packages/mutagen/_util.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_util.py rename to python/lib/python3.10/site-packages/mutagen/_util.py diff --git a/lib/python3.11/site-packages/mutagen/_vorbis.py b/python/lib/python3.10/site-packages/mutagen/_vorbis.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/_vorbis.py rename to python/lib/python3.10/site-packages/mutagen/_vorbis.py diff --git a/lib/python3.11/site-packages/mutagen/aac.py b/python/lib/python3.10/site-packages/mutagen/aac.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/aac.py rename to python/lib/python3.10/site-packages/mutagen/aac.py diff --git a/lib/python3.11/site-packages/mutagen/ac3.py b/python/lib/python3.10/site-packages/mutagen/ac3.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/ac3.py rename to python/lib/python3.10/site-packages/mutagen/ac3.py diff --git a/lib/python3.11/site-packages/mutagen/aiff.py b/python/lib/python3.10/site-packages/mutagen/aiff.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/aiff.py rename to python/lib/python3.10/site-packages/mutagen/aiff.py diff --git a/lib/python3.11/site-packages/mutagen/apev2.py b/python/lib/python3.10/site-packages/mutagen/apev2.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/apev2.py rename to python/lib/python3.10/site-packages/mutagen/apev2.py diff --git a/lib/python3.11/site-packages/mutagen/asf/__init__.py b/python/lib/python3.10/site-packages/mutagen/asf/__init__.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/asf/__init__.py rename to python/lib/python3.10/site-packages/mutagen/asf/__init__.py diff --git a/lib/python3.11/site-packages/mutagen/asf/_attrs.py b/python/lib/python3.10/site-packages/mutagen/asf/_attrs.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/asf/_attrs.py rename to python/lib/python3.10/site-packages/mutagen/asf/_attrs.py diff --git a/lib/python3.11/site-packages/mutagen/asf/_objects.py b/python/lib/python3.10/site-packages/mutagen/asf/_objects.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/asf/_objects.py rename to python/lib/python3.10/site-packages/mutagen/asf/_objects.py diff --git a/lib/python3.11/site-packages/mutagen/asf/_util.py b/python/lib/python3.10/site-packages/mutagen/asf/_util.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/asf/_util.py rename to python/lib/python3.10/site-packages/mutagen/asf/_util.py diff --git a/lib/python3.11/site-packages/mutagen/dsdiff.py b/python/lib/python3.10/site-packages/mutagen/dsdiff.py similarity index 
100% rename from lib/python3.11/site-packages/mutagen/dsdiff.py rename to python/lib/python3.10/site-packages/mutagen/dsdiff.py diff --git a/lib/python3.11/site-packages/mutagen/dsf.py b/python/lib/python3.10/site-packages/mutagen/dsf.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/dsf.py rename to python/lib/python3.10/site-packages/mutagen/dsf.py diff --git a/lib/python3.11/site-packages/mutagen/easyid3.py b/python/lib/python3.10/site-packages/mutagen/easyid3.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/easyid3.py rename to python/lib/python3.10/site-packages/mutagen/easyid3.py diff --git a/lib/python3.11/site-packages/mutagen/easymp4.py b/python/lib/python3.10/site-packages/mutagen/easymp4.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/easymp4.py rename to python/lib/python3.10/site-packages/mutagen/easymp4.py diff --git a/lib/python3.11/site-packages/mutagen/flac.py b/python/lib/python3.10/site-packages/mutagen/flac.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/flac.py rename to python/lib/python3.10/site-packages/mutagen/flac.py diff --git a/lib/python3.11/site-packages/mutagen/id3/__init__.py b/python/lib/python3.10/site-packages/mutagen/id3/__init__.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/id3/__init__.py rename to python/lib/python3.10/site-packages/mutagen/id3/__init__.py diff --git a/lib/python3.11/site-packages/mutagen/id3/_file.py b/python/lib/python3.10/site-packages/mutagen/id3/_file.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/id3/_file.py rename to python/lib/python3.10/site-packages/mutagen/id3/_file.py diff --git a/lib/python3.11/site-packages/mutagen/id3/_frames.py b/python/lib/python3.10/site-packages/mutagen/id3/_frames.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/id3/_frames.py rename to python/lib/python3.10/site-packages/mutagen/id3/_frames.py diff --git a/lib/python3.11/site-packages/mutagen/id3/_id3v1.py b/python/lib/python3.10/site-packages/mutagen/id3/_id3v1.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/id3/_id3v1.py rename to python/lib/python3.10/site-packages/mutagen/id3/_id3v1.py diff --git a/lib/python3.11/site-packages/mutagen/id3/_specs.py b/python/lib/python3.10/site-packages/mutagen/id3/_specs.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/id3/_specs.py rename to python/lib/python3.10/site-packages/mutagen/id3/_specs.py diff --git a/lib/python3.11/site-packages/mutagen/id3/_tags.py b/python/lib/python3.10/site-packages/mutagen/id3/_tags.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/id3/_tags.py rename to python/lib/python3.10/site-packages/mutagen/id3/_tags.py diff --git a/lib/python3.11/site-packages/mutagen/id3/_util.py b/python/lib/python3.10/site-packages/mutagen/id3/_util.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/id3/_util.py rename to python/lib/python3.10/site-packages/mutagen/id3/_util.py diff --git a/lib/python3.11/site-packages/mutagen/m4a.py b/python/lib/python3.10/site-packages/mutagen/m4a.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/m4a.py rename to python/lib/python3.10/site-packages/mutagen/m4a.py diff --git a/lib/python3.11/site-packages/mutagen/monkeysaudio.py b/python/lib/python3.10/site-packages/mutagen/monkeysaudio.py similarity index 100% rename from 
lib/python3.11/site-packages/mutagen/monkeysaudio.py rename to python/lib/python3.10/site-packages/mutagen/monkeysaudio.py diff --git a/lib/python3.11/site-packages/mutagen/mp3/__init__.py b/python/lib/python3.10/site-packages/mutagen/mp3/__init__.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/mp3/__init__.py rename to python/lib/python3.10/site-packages/mutagen/mp3/__init__.py diff --git a/lib/python3.11/site-packages/mutagen/mp3/_util.py b/python/lib/python3.10/site-packages/mutagen/mp3/_util.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/mp3/_util.py rename to python/lib/python3.10/site-packages/mutagen/mp3/_util.py diff --git a/lib/python3.11/site-packages/mutagen/mp4/__init__.py b/python/lib/python3.10/site-packages/mutagen/mp4/__init__.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/mp4/__init__.py rename to python/lib/python3.10/site-packages/mutagen/mp4/__init__.py diff --git a/lib/python3.11/site-packages/mutagen/mp4/_as_entry.py b/python/lib/python3.10/site-packages/mutagen/mp4/_as_entry.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/mp4/_as_entry.py rename to python/lib/python3.10/site-packages/mutagen/mp4/_as_entry.py diff --git a/lib/python3.11/site-packages/mutagen/mp4/_atom.py b/python/lib/python3.10/site-packages/mutagen/mp4/_atom.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/mp4/_atom.py rename to python/lib/python3.10/site-packages/mutagen/mp4/_atom.py diff --git a/lib/python3.11/site-packages/mutagen/mp4/_util.py b/python/lib/python3.10/site-packages/mutagen/mp4/_util.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/mp4/_util.py rename to python/lib/python3.10/site-packages/mutagen/mp4/_util.py diff --git a/lib/python3.11/site-packages/mutagen/musepack.py b/python/lib/python3.10/site-packages/mutagen/musepack.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/musepack.py rename to python/lib/python3.10/site-packages/mutagen/musepack.py diff --git a/lib/python3.11/site-packages/mutagen/ogg.py b/python/lib/python3.10/site-packages/mutagen/ogg.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/ogg.py rename to python/lib/python3.10/site-packages/mutagen/ogg.py diff --git a/lib/python3.11/site-packages/mutagen/oggflac.py b/python/lib/python3.10/site-packages/mutagen/oggflac.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/oggflac.py rename to python/lib/python3.10/site-packages/mutagen/oggflac.py diff --git a/lib/python3.11/site-packages/mutagen/oggopus.py b/python/lib/python3.10/site-packages/mutagen/oggopus.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/oggopus.py rename to python/lib/python3.10/site-packages/mutagen/oggopus.py diff --git a/lib/python3.11/site-packages/mutagen/oggspeex.py b/python/lib/python3.10/site-packages/mutagen/oggspeex.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/oggspeex.py rename to python/lib/python3.10/site-packages/mutagen/oggspeex.py diff --git a/lib/python3.11/site-packages/mutagen/oggtheora.py b/python/lib/python3.10/site-packages/mutagen/oggtheora.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/oggtheora.py rename to python/lib/python3.10/site-packages/mutagen/oggtheora.py diff --git a/lib/python3.11/site-packages/mutagen/oggvorbis.py b/python/lib/python3.10/site-packages/mutagen/oggvorbis.py similarity index 100% rename from 
lib/python3.11/site-packages/mutagen/oggvorbis.py rename to python/lib/python3.10/site-packages/mutagen/oggvorbis.py diff --git a/lib/python3.11/site-packages/mutagen/optimfrog.py b/python/lib/python3.10/site-packages/mutagen/optimfrog.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/optimfrog.py rename to python/lib/python3.10/site-packages/mutagen/optimfrog.py diff --git a/lib/python3.11/site-packages/mutagen/py.typed b/python/lib/python3.10/site-packages/mutagen/py.typed similarity index 100% rename from lib/python3.11/site-packages/mutagen/py.typed rename to python/lib/python3.10/site-packages/mutagen/py.typed diff --git a/lib/python3.11/site-packages/mutagen/smf.py b/python/lib/python3.10/site-packages/mutagen/smf.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/smf.py rename to python/lib/python3.10/site-packages/mutagen/smf.py diff --git a/lib/python3.11/site-packages/mutagen/tak.py b/python/lib/python3.10/site-packages/mutagen/tak.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/tak.py rename to python/lib/python3.10/site-packages/mutagen/tak.py diff --git a/lib/python3.11/site-packages/mutagen/trueaudio.py b/python/lib/python3.10/site-packages/mutagen/trueaudio.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/trueaudio.py rename to python/lib/python3.10/site-packages/mutagen/trueaudio.py diff --git a/lib/python3.11/site-packages/mutagen/wave.py b/python/lib/python3.10/site-packages/mutagen/wave.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/wave.py rename to python/lib/python3.10/site-packages/mutagen/wave.py diff --git a/lib/python3.11/site-packages/mutagen/wavpack.py b/python/lib/python3.10/site-packages/mutagen/wavpack.py similarity index 100% rename from lib/python3.11/site-packages/mutagen/wavpack.py rename to python/lib/python3.10/site-packages/mutagen/wavpack.py diff --git a/lib/python3.11/site-packages/pycryptodomex-3.18.0.dist-info/INSTALLER b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/INSTALLER similarity index 100% rename from lib/python3.11/site-packages/pycryptodomex-3.18.0.dist-info/INSTALLER rename to python/lib/python3.10/site-packages/pip-22.0.2.dist-info/INSTALLER diff --git a/lib/python3.11/site-packages/pip-22.3.1.dist-info/LICENSE.txt b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/LICENSE.txt similarity index 100% rename from lib/python3.11/site-packages/pip-22.3.1.dist-info/LICENSE.txt rename to python/lib/python3.10/site-packages/pip-22.0.2.dist-info/LICENSE.txt diff --git a/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/METADATA b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/METADATA new file mode 100644 index 0000000..29392cd --- /dev/null +++ b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/METADATA @@ -0,0 +1,92 @@ +Metadata-Version: 2.1 +Name: pip +Version: 22.0.2 +Summary: The PyPA recommended tool for installing Python packages. 
+Home-page: https://pip.pypa.io/ +Author: The pip developers +Author-email: distutils-sig@python.org +License: MIT +Project-URL: Documentation, https://pip.pypa.io +Project-URL: Source, https://github.com/pypa/pip +Project-URL: Changelog, https://pip.pypa.io/en/stable/news/ +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Topic :: Software Development :: Build Tools +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Requires-Python: >=3.7 +License-File: LICENSE.txt + +pip - The Python Package Installer +================================== + +.. image:: https://img.shields.io/pypi/v/pip.svg + :target: https://pypi.org/project/pip/ + +.. image:: https://readthedocs.org/projects/pip/badge/?version=latest + :target: https://pip.pypa.io/en/latest + +pip is the `package installer`_ for Python. You can use pip to install packages from the `Python Package Index`_ and other indexes. + +Please take a look at our documentation for how to install and use pip: + +* `Installation`_ +* `Usage`_ + +We release updates regularly, with a new version every 3 months. Find more details in our documentation: + +* `Release notes`_ +* `Release process`_ + +In pip 20.3, we've `made a big improvement to the heart of pip`_; `learn more`_. We want your input, so `sign up for our user experience research studies`_ to help us do it right. + +**Note**: pip 21.0, in January 2021, removed Python 2 support, per pip's `Python 2 support policy`_. Please migrate to Python 3. + +If you find bugs, need help, or want to talk to the developers, please use our mailing lists or chat rooms: + +* `Issue tracking`_ +* `Discourse channel`_ +* `User IRC`_ + +If you want to get involved head over to GitHub to get the source code, look at our development documentation and feel free to jump on the developer mailing lists and chat rooms: + +* `GitHub page`_ +* `Development documentation`_ +* `Development mailing list`_ +* `Development IRC`_ + +Code of Conduct +--------------- + +Everyone interacting in the pip project's codebases, issue trackers, chat +rooms, and mailing lists is expected to follow the `PSF Code of Conduct`_. + +.. _package installer: https://packaging.python.org/guides/tool-recommendations/ +.. _Python Package Index: https://pypi.org +.. _Installation: https://pip.pypa.io/en/stable/installation/ +.. _Usage: https://pip.pypa.io/en/stable/ +.. _Release notes: https://pip.pypa.io/en/stable/news.html +.. _Release process: https://pip.pypa.io/en/latest/development/release-process/ +.. _GitHub page: https://github.com/pypa/pip +.. _Development documentation: https://pip.pypa.io/en/latest/development +.. _made a big improvement to the heart of pip: https://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html +.. _learn more: https://pip.pypa.io/en/latest/user_guide/#changes-to-the-pip-dependency-resolver-in-20-3-2020 +.. _sign up for our user experience research studies: https://pyfound.blogspot.com/2020/03/new-pip-resolver-to-roll-out-this-year.html +.. 
_Python 2 support policy: https://pip.pypa.io/en/latest/development/release-process/#python-2-support +.. _Issue tracking: https://github.com/pypa/pip/issues +.. _Discourse channel: https://discuss.python.org/c/packaging +.. _Development mailing list: https://mail.python.org/mailman3/lists/distutils-sig.python.org/ +.. _User IRC: https://kiwiirc.com/nextclient/#ircs://irc.libera.chat:+6697/pypa +.. _Development IRC: https://kiwiirc.com/nextclient/#ircs://irc.libera.chat:+6697/pypa-dev +.. _PSF Code of Conduct: https://github.com/pypa/.github/blob/main/CODE_OF_CONDUCT.md + + diff --git a/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/RECORD b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/RECORD new file mode 100644 index 0000000..683a938 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/RECORD @@ -0,0 +1,1037 @@ +../../../bin/pip,sha256=mOhgxQUxbvpXOyHoT1DAzpxDdW7TWwQ0egrY4VJkgDw,252 +../../../bin/pip3,sha256=mOhgxQUxbvpXOyHoT1DAzpxDdW7TWwQ0egrY4VJkgDw,252 +../../../bin/pip3.10,sha256=mOhgxQUxbvpXOyHoT1DAzpxDdW7TWwQ0egrY4VJkgDw,252 +pip-22.0.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +pip-22.0.2.dist-info/LICENSE.txt,sha256=Y0MApmnUmurmWxLGxIySTFGkzfPR_whtw0VtyLyqIQQ,1093 +pip-22.0.2.dist-info/METADATA,sha256=Yixa0LKkyzjT2N5JQO5qYDgZcmTs6Z6dg4UbwBNyT2A,4166 +pip-22.0.2.dist-info/RECORD,, +pip-22.0.2.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip-22.0.2.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +pip-22.0.2.dist-info/entry_points.txt,sha256=vUvIlB_ga0fFQuWvFEq6uJKftMG_HNuoe4kgXkb5rNY,126 +pip-22.0.2.dist-info/top_level.txt,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +pip/__init__.py,sha256=PZBF-ESk5Q0DZxQd4HHmTU_wX8y1ynzxBCRdu_fxHSI,357 +pip/__main__.py,sha256=mXwWDftNLMKfwVqKFWGE_uuBZvGSIiUELhLkeysIuZc,1198 +pip/__pycache__/__init__.cpython-310.pyc,, +pip/__pycache__/__main__.cpython-310.pyc,, +pip/_internal/__init__.py,sha256=nnFCuxrPMgALrIDxSoy-H6Zj4W4UY60D-uL1aJyq0pc,573 +pip/_internal/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/__pycache__/build_env.cpython-310.pyc,, +pip/_internal/__pycache__/cache.cpython-310.pyc,, +pip/_internal/__pycache__/configuration.cpython-310.pyc,, +pip/_internal/__pycache__/exceptions.cpython-310.pyc,, +pip/_internal/__pycache__/main.cpython-310.pyc,, +pip/_internal/__pycache__/pyproject.cpython-310.pyc,, +pip/_internal/__pycache__/self_outdated_check.cpython-310.pyc,, +pip/_internal/__pycache__/wheel_builder.cpython-310.pyc,, +pip/_internal/build_env.py,sha256=QAsnxJFvj74jS2cZUcxk7zXLvrtAYiRL0EkSPkpSJTo,9739 +pip/_internal/cache.py,sha256=71eaYwrls34HJ6gzbmmYiotiKhPNFTM_tqYJXD5nf3s,9441 +pip/_internal/cli/__init__.py,sha256=FkHBgpxxb-_gd6r1FjnNhfMOzAUYyXoXKJ6abijfcFU,132 +pip/_internal/cli/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/cli/__pycache__/autocompletion.cpython-310.pyc,, +pip/_internal/cli/__pycache__/base_command.cpython-310.pyc,, +pip/_internal/cli/__pycache__/cmdoptions.cpython-310.pyc,, +pip/_internal/cli/__pycache__/command_context.cpython-310.pyc,, +pip/_internal/cli/__pycache__/main.cpython-310.pyc,, +pip/_internal/cli/__pycache__/main_parser.cpython-310.pyc,, +pip/_internal/cli/__pycache__/parser.cpython-310.pyc,, +pip/_internal/cli/__pycache__/progress_bars.cpython-310.pyc,, +pip/_internal/cli/__pycache__/req_command.cpython-310.pyc,, +pip/_internal/cli/__pycache__/spinners.cpython-310.pyc,,
+pip/_internal/cli/__pycache__/status_codes.cpython-310.pyc,, +pip/_internal/cli/autocompletion.py,sha256=wY2JPZY2Eji1vhR7bVo-yCBPJ9LCy6P80iOAhZD1Vi8,6676 +pip/_internal/cli/base_command.py,sha256=6IVFmOjObv0ILip28QcgP8glhXHiGRvU_9kO35Hr7Z0,8037 +pip/_internal/cli/cmdoptions.py,sha256=GT2G2YKBj-851qGseugn2Veq7fJe3FA30gWdcziPQvo,28525 +pip/_internal/cli/command_context.py,sha256=a1pBBvvGLDiZ1Kw64_4tT6HmRTwYDoYy8JFgG5Czn7s,760 +pip/_internal/cli/main.py,sha256=ioJ8IVlb2K1qLOxR-tXkee9lURhYV89CDM71MKag7YY,2472 +pip/_internal/cli/main_parser.py,sha256=Q9TnytfuC5Z2JSjBFWVGtEdYLFy7rukNIb04movHdAo,2614 +pip/_internal/cli/parser.py,sha256=CDXTuFr2UD8ozOlZYf1KDziQdo9-X_IaYOiUcyJQwrA,10788 +pip/_internal/cli/progress_bars.py,sha256=_52w11WoZrvDSR3oItLWvLrEZFUKAfLf4Y6I6WtOnIU,10339 +pip/_internal/cli/req_command.py,sha256=VwqonOy18QwZsRsVjHhp-6w15fG9x3Ltwoa8yJqQno8,18669 +pip/_internal/cli/spinners.py,sha256=TFhjxtOnLeNJ5YmRvQm4eKPgPbJNkZiqO8jOXuxRaYU,5076 +pip/_internal/cli/status_codes.py,sha256=sEFHUaUJbqv8iArL3HAtcztWZmGOFX01hTesSytDEh0,116 +pip/_internal/commands/__init__.py,sha256=Vc1HjsLEtyCh7506OozPHPKXe2Hk-z9cFkFF3BMj1lM,3736 +pip/_internal/commands/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/commands/__pycache__/cache.cpython-310.pyc,, +pip/_internal/commands/__pycache__/check.cpython-310.pyc,, +pip/_internal/commands/__pycache__/completion.cpython-310.pyc,, +pip/_internal/commands/__pycache__/configuration.cpython-310.pyc,, +pip/_internal/commands/__pycache__/debug.cpython-310.pyc,, +pip/_internal/commands/__pycache__/download.cpython-310.pyc,, +pip/_internal/commands/__pycache__/freeze.cpython-310.pyc,, +pip/_internal/commands/__pycache__/hash.cpython-310.pyc,, +pip/_internal/commands/__pycache__/help.cpython-310.pyc,, +pip/_internal/commands/__pycache__/index.cpython-310.pyc,, +pip/_internal/commands/__pycache__/install.cpython-310.pyc,, +pip/_internal/commands/__pycache__/list.cpython-310.pyc,, +pip/_internal/commands/__pycache__/search.cpython-310.pyc,, +pip/_internal/commands/__pycache__/show.cpython-310.pyc,, +pip/_internal/commands/__pycache__/uninstall.cpython-310.pyc,, +pip/_internal/commands/__pycache__/wheel.cpython-310.pyc,, +pip/_internal/commands/cache.py,sha256=p9gvc6W_xgxE2zO0o8NXqO1gGJEinEK42qEC-a7Cnuk,7524 +pip/_internal/commands/check.py,sha256=0gjXR7j36xJT5cs2heYU_dfOfpnFfzX8OoPNNoKhqdM,1685 +pip/_internal/commands/completion.py,sha256=kTG_I1VR3N5kGC4Ma9pQTSoY9Q1URCrNyseHSQ-rCL4,2958 +pip/_internal/commands/configuration.py,sha256=arE8vLstjBg-Ar1krXF-bBmT1qBtnL7Fpk-NVh38a0U,8944 +pip/_internal/commands/debug.py,sha256=krET-y45CnQzXwKR1qA3M_tJE4LE2vnQtm3yfGyDSnE,6629 +pip/_internal/commands/download.py,sha256=gVIAEOcpWolhRj9hl89Qzn52G2b_pcZ8naXhxaXobdo,4942 +pip/_internal/commands/freeze.py,sha256=PaJJB9mT_3vHeZ3mbFL_m1fzTYL-_Or3kDtXwTdZZ-A,2968 +pip/_internal/commands/hash.py,sha256=EVVOuvGtoPEdFi8SNnmdqlCQrhCxV-kJsdwtdcCnXGQ,1703 +pip/_internal/commands/help.py,sha256=gcc6QDkcgHMOuAn5UxaZwAStsRBrnGSn_yxjS57JIoM,1132 +pip/_internal/commands/index.py,sha256=8pYkICUJlccjm3E83b7UuZ5DtOfLh1N7ZHXAgkajjHo,4849 +pip/_internal/commands/install.py,sha256=YVygBF6vfrNi0jmdNBCM6bcoWb7vaALEGG1--8Mmf88,27893 +pip/_internal/commands/list.py,sha256=aKt1PP7enTiNLD_1qDXXaIKQ2QvLmUDfoQU6SYxJ8Ek,12318 +pip/_internal/commands/search.py,sha256=sbBZiARRc050QquOKcCvOr2K3XLsoYebLKZGRi__iUI,5697 +pip/_internal/commands/show.py,sha256=2VicM3jF0YWgn4O1jG_QF5oxOT0ln57VDu1NE6hqWcM,5859 +pip/_internal/commands/uninstall.py,sha256=DNTYAGJNljMO_YYBxrpcwj0FEl7lo_P55_98O6g2TNk,3526 
+pip/_internal/commands/wheel.py,sha256=7HAjLclZxIzBrX6JmhmGBVxH5xrjaBYCtSdpQi1pWCE,6206 +pip/_internal/configuration.py,sha256=qmCX3uuVM73PQeAuWQHic22bhops8s31B8k02nFAoiQ,13171 +pip/_internal/distributions/__init__.py,sha256=Hq6kt6gXBgjNit5hTTWLAzeCNOKoB-N0pGYSqehrli8,858 +pip/_internal/distributions/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/distributions/__pycache__/base.cpython-310.pyc,, +pip/_internal/distributions/__pycache__/installed.cpython-310.pyc,, +pip/_internal/distributions/__pycache__/sdist.cpython-310.pyc,, +pip/_internal/distributions/__pycache__/wheel.cpython-310.pyc,, +pip/_internal/distributions/base.py,sha256=3FUYD8Gb4YuSu3pggC_FRctZBDbpm5ZK89tPksIUjoE,1172 +pip/_internal/distributions/installed.py,sha256=HzfNRu3smoOm54m8H2iK6LHzBx6_DEnka4OPEsizbXg,680 +pip/_internal/distributions/sdist.py,sha256=0nJvU1RhZtbwaeYtLbzSwYrbGRcY6IgNsWdEhAHROK8,5499 +pip/_internal/distributions/wheel.py,sha256=-NgzdIs-w_hcer_U81yzgpVTljJRg5m79xufqvbjv0s,1115 +pip/_internal/exceptions.py,sha256=U-dV1ixkSz6NAU6Aw9dosKi2EzZ5D3BA7ilYZuTLKeU,20912 +pip/_internal/index/__init__.py,sha256=vpt-JeTZefh8a-FC22ZeBSXFVbuBcXSGiILhQZJaNpQ,30 +pip/_internal/index/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/index/__pycache__/collector.cpython-310.pyc,, +pip/_internal/index/__pycache__/package_finder.cpython-310.pyc,, +pip/_internal/index/__pycache__/sources.cpython-310.pyc,, +pip/_internal/index/collector.py,sha256=8kXlmlnZ-qAknyxd0duCn5mxFHX-zr468ykutk8WOwo,21392 +pip/_internal/index/package_finder.py,sha256=9UVg-7582nYNEWa0cIIl8otzPm4mlfyrQVuozAcssLo,36783 +pip/_internal/index/sources.py,sha256=SVyPitv08-Qalh2_Bk5diAJ9GAA_d-a93koouQodAG0,6557 +pip/_internal/locations/__init__.py,sha256=ergvPwlfNTmQYFmaRYbj--ZwTN5izgTL9KE5d0FB7-8,17362 +pip/_internal/locations/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/locations/__pycache__/_distutils.cpython-310.pyc,, +pip/_internal/locations/__pycache__/_sysconfig.cpython-310.pyc,, +pip/_internal/locations/__pycache__/base.cpython-310.pyc,, +pip/_internal/locations/_distutils.py,sha256=Sk7tw8ZP1DWMYJ8MibABsa8IME2Ejv1PKeGlYQCBTZc,5871 +pip/_internal/locations/_sysconfig.py,sha256=LQNKTJKyjVqxXaPntlBwdUqTG1xwYf6GVCKMbyRJx5M,7918 +pip/_internal/locations/base.py,sha256=x5D1ONktmPJd8nnUTh-ELsAJ7fiXA-k-0a_vhfi2_Us,1579 +pip/_internal/main.py,sha256=r-UnUe8HLo5XFJz8inTcOOTiu_sxNhgHb6VwlGUllOI,340 +pip/_internal/metadata/__init__.py,sha256=iGoDbe_iTXQTIAEVy9f7dm-VQfZANO8kkwFr1CpqxqI,2036 +pip/_internal/metadata/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/metadata/__pycache__/base.cpython-310.pyc,, +pip/_internal/metadata/__pycache__/pkg_resources.cpython-310.pyc,, +pip/_internal/metadata/base.py,sha256=SCRPtShrtPy0lfFxuaFTgJJHsRXToGFToQUAZoBBbeA,19429 +pip/_internal/metadata/pkg_resources.py,sha256=wAnEtrcgH9YtV996MfoBjR2hGLHvi3uxk0vUOHbqBak,9456 +pip/_internal/models/__init__.py,sha256=3DHUd_qxpPozfzouoqa9g9ts1Czr5qaHfFxbnxriepM,63 +pip/_internal/models/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/models/__pycache__/candidate.cpython-310.pyc,, +pip/_internal/models/__pycache__/direct_url.cpython-310.pyc,, +pip/_internal/models/__pycache__/format_control.cpython-310.pyc,, +pip/_internal/models/__pycache__/index.cpython-310.pyc,, +pip/_internal/models/__pycache__/link.cpython-310.pyc,, +pip/_internal/models/__pycache__/scheme.cpython-310.pyc,, +pip/_internal/models/__pycache__/search_scope.cpython-310.pyc,, +pip/_internal/models/__pycache__/selection_prefs.cpython-310.pyc,, 
+pip/_internal/models/__pycache__/target_python.cpython-310.pyc,, +pip/_internal/models/__pycache__/wheel.cpython-310.pyc,, +pip/_internal/models/candidate.py,sha256=6pcABsaR7CfIHlbJbr2_kMkVJFL_yrYjTx6SVWUnCPQ,990 +pip/_internal/models/direct_url.py,sha256=7XtGQSLLDQb5ZywI2EMnnLcddtf5CJLx44lMtTHPxFw,6350 +pip/_internal/models/format_control.py,sha256=DJpMYjxeYKKQdwNcML2_F0vtAh-qnKTYe-CpTxQe-4g,2520 +pip/_internal/models/index.py,sha256=tYnL8oxGi4aSNWur0mG8DAP7rC6yuha_MwJO8xw0crI,1030 +pip/_internal/models/link.py,sha256=hoT_qsOBAgLBm9GKqpBrNF_mrEXeGXQE-aH_RX2cGgg,9817 +pip/_internal/models/scheme.py,sha256=3EFQp_ICu_shH1-TBqhl0QAusKCPDFOlgHFeN4XowWs,738 +pip/_internal/models/search_scope.py,sha256=LwloG0PJAmtI1hFXIypsD95kWE9xfR5hf_a2v1Vw7sk,4520 +pip/_internal/models/selection_prefs.py,sha256=KZdi66gsR-_RUXUr9uejssk3rmTHrQVJWeNA2sV-VSY,1907 +pip/_internal/models/target_python.py,sha256=qKpZox7J8NAaPmDs5C_aniwfPDxzvpkrCKqfwndG87k,3858 +pip/_internal/models/wheel.py,sha256=wlyz23BcZ40nBLX3rXKtrV6tmc8-8RxHyV-hq5zJ74Q,3525 +pip/_internal/network/__init__.py,sha256=jf6Tt5nV_7zkARBrKojIXItgejvoegVJVKUbhAa5Ioc,50 +pip/_internal/network/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/network/__pycache__/auth.cpython-310.pyc,, +pip/_internal/network/__pycache__/cache.cpython-310.pyc,, +pip/_internal/network/__pycache__/download.cpython-310.pyc,, +pip/_internal/network/__pycache__/lazy_wheel.cpython-310.pyc,, +pip/_internal/network/__pycache__/session.cpython-310.pyc,, +pip/_internal/network/__pycache__/utils.cpython-310.pyc,, +pip/_internal/network/__pycache__/xmlrpc.cpython-310.pyc,, +pip/_internal/network/auth.py,sha256=a3C7Xaa8kTJjXkdi_wrUjqaySc8Z9Yz7U6QIbXfzMyc,12190 +pip/_internal/network/cache.py,sha256=FJ3uTUo3wgf2KHmeZ3ltN9x3tQoy_0X6qNsRtNXsuL0,2131 +pip/_internal/network/download.py,sha256=12Ef_L7MlhNUN_0-n_3DggozWJER8c9J0us16cbvkKA,6062 +pip/_internal/network/lazy_wheel.py,sha256=1b8ZJ1w4bSBzpGzGwJR_CL2yQ6AFIwWQkS1vbPPw2XU,7627 +pip/_internal/network/session.py,sha256=38IKGKC64MTVUIH5XOR1hr2pOCzp39RccykdmGAvqRU,16729 +pip/_internal/network/utils.py,sha256=igLlTu_-q0LmL8FdJKq-Uj7AT_owrQ-T9FfyarkhK5U,4059 +pip/_internal/network/xmlrpc.py,sha256=AzQgG4GgS152_cqmGr_Oz2MIXsCal-xfsis7fA7nmU0,1791 +pip/_internal/operations/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_internal/operations/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/operations/__pycache__/check.cpython-310.pyc,, +pip/_internal/operations/__pycache__/freeze.cpython-310.pyc,, +pip/_internal/operations/__pycache__/prepare.cpython-310.pyc,, +pip/_internal/operations/build/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_internal/operations/build/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/operations/build/__pycache__/metadata.cpython-310.pyc,, +pip/_internal/operations/build/__pycache__/metadata_editable.cpython-310.pyc,, +pip/_internal/operations/build/__pycache__/metadata_legacy.cpython-310.pyc,, +pip/_internal/operations/build/__pycache__/wheel.cpython-310.pyc,, +pip/_internal/operations/build/__pycache__/wheel_editable.cpython-310.pyc,, +pip/_internal/operations/build/__pycache__/wheel_legacy.cpython-310.pyc,, +pip/_internal/operations/build/metadata.py,sha256=ES_uRmAvhrNm_nDTpZxshBfUsvnXtkj-g_4rZrH9Rww,1404 +pip/_internal/operations/build/metadata_editable.py,sha256=_Rai0VZjxoeJUkjkuICrq45LtjwFoDOveosMYH43rKc,1456 +pip/_internal/operations/build/metadata_legacy.py,sha256=o-eU21As175hDC7dluM1fJJ_FqokTIShyWpjKaIpHZw,2198 
+pip/_internal/operations/build/wheel.py,sha256=AO9XnTGhTgHtZmU8Dkbfo1OGr41rBuSDjIgAa4zUKgE,1063 +pip/_internal/operations/build/wheel_editable.py,sha256=TVETY-L_M_dSEKBhTIcQOP75zKVXw8tuq1U354Mm30A,1405 +pip/_internal/operations/build/wheel_legacy.py,sha256=C9j6rukgQI1n_JeQLoZGuDdfUwzCXShyIdPTp6edbMQ,3064 +pip/_internal/operations/check.py,sha256=ca4O9CkPt9Em9sLCf3H0iVt1GIcW7M8C0U5XooaBuT4,5109 +pip/_internal/operations/freeze.py,sha256=ZiYw5GlUpLVx4VJHz4S1AP2JFNyvH0iq5kpcYj2ovyw,9770 +pip/_internal/operations/install/__init__.py,sha256=mX7hyD2GNBO2mFGokDQ30r_GXv7Y_PLdtxcUv144e-s,51 +pip/_internal/operations/install/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/operations/install/__pycache__/editable_legacy.cpython-310.pyc,, +pip/_internal/operations/install/__pycache__/legacy.cpython-310.pyc,, +pip/_internal/operations/install/__pycache__/wheel.cpython-310.pyc,, +pip/_internal/operations/install/editable_legacy.py,sha256=ee4kfJHNuzTdKItbfAsNOSEwq_vD7DRPGkBdK48yBhU,1354 +pip/_internal/operations/install/legacy.py,sha256=x7BG8kBm0K3JO6AR4sBl0zh2LOrfUaz7EdNt-keHBv4,4091 +pip/_internal/operations/install/wheel.py,sha256=QuQyCZE-XjuJjDYRixo40oUt2ucFhNmSrCbcXY7A9aE,27412 +pip/_internal/operations/prepare.py,sha256=LJP97jsuiCAaTGVIRrcINvxc1ntVsB45MoRbyMIukg4,24145 +pip/_internal/pyproject.py,sha256=Wm2ljdT6spC-tSdf1LBRaMYSJaXr1xUxV3OwdHCW9jc,6722 +pip/_internal/req/__init__.py,sha256=A7mUvT1KAcCYP3H7gUOTx2GRMlgoDur3H68Q0OJqM5A,2793 +pip/_internal/req/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/req/__pycache__/constructors.cpython-310.pyc,, +pip/_internal/req/__pycache__/req_file.cpython-310.pyc,, +pip/_internal/req/__pycache__/req_install.cpython-310.pyc,, +pip/_internal/req/__pycache__/req_set.cpython-310.pyc,, +pip/_internal/req/__pycache__/req_tracker.cpython-310.pyc,, +pip/_internal/req/__pycache__/req_uninstall.cpython-310.pyc,, +pip/_internal/req/constructors.py,sha256=fXmtNI_J77JFP_HRvYcQW-1nKw3AiUu6Q3b1Nm8aMm0,16094 +pip/_internal/req/req_file.py,sha256=5N8OTouPCof-305StC2YK9HBxQMw-xO46skRoBPbkZo,17421 +pip/_internal/req/req_install.py,sha256=jU1HQBT_DnXZean7jY8wPNMhb6_CzdKHcilHFY_o-Fc,32524 +pip/_internal/req/req_set.py,sha256=kHYiLvkKRx21WaLTwOI-54Ng0SSzZZ9SE7FD0PsfvYA,7584 +pip/_internal/req/req_tracker.py,sha256=jK7JDu-Wt73X-gqozrFtgJVlUlnQo0P4IQ4x4_gPlfM,4117 +pip/_internal/req/req_uninstall.py,sha256=K2BHYRRJAfkSpFqcPzc9XfX2EvbhaRtQIPRFmMtUdfo,23814 +pip/_internal/resolution/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_internal/resolution/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/resolution/__pycache__/base.cpython-310.pyc,, +pip/_internal/resolution/base.py,sha256=qlmh325SBVfvG6Me9gc5Nsh5sdwHBwzHBq6aEXtKsLA,583 +pip/_internal/resolution/legacy/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_internal/resolution/legacy/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/resolution/legacy/__pycache__/resolver.cpython-310.pyc,, +pip/_internal/resolution/legacy/resolver.py,sha256=b7bf5qL1ROg73sl8dhTvLdD1w5XF8xybBAF6eF_kz7c,18288 +pip/_internal/resolution/resolvelib/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_internal/resolution/resolvelib/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/resolution/resolvelib/__pycache__/base.cpython-310.pyc,, +pip/_internal/resolution/resolvelib/__pycache__/candidates.cpython-310.pyc,, +pip/_internal/resolution/resolvelib/__pycache__/factory.cpython-310.pyc,, +pip/_internal/resolution/resolvelib/__pycache__/found_candidates.cpython-310.pyc,, 
+pip/_internal/resolution/resolvelib/__pycache__/provider.cpython-310.pyc,, +pip/_internal/resolution/resolvelib/__pycache__/reporter.cpython-310.pyc,, +pip/_internal/resolution/resolvelib/__pycache__/requirements.cpython-310.pyc,, +pip/_internal/resolution/resolvelib/__pycache__/resolver.cpython-310.pyc,, +pip/_internal/resolution/resolvelib/base.py,sha256=u1O4fkvCO4mhmu5i32xrDv9AX5NgUci_eYVyBDQhTIM,5220 +pip/_internal/resolution/resolvelib/candidates.py,sha256=KR5jxZRSahByOABXbwrX-zNoawa7Gm9Iss-HrvrcvNw,18357 +pip/_internal/resolution/resolvelib/factory.py,sha256=0bbxnUSSjaeTmtIEgeeKtEqhEFfNhv3xpq7j9IaMq2c,28298 +pip/_internal/resolution/resolvelib/found_candidates.py,sha256=hvL3Hoa9VaYo-qEOZkBi2Iqw251UDxPz-uMHVaWmLpE,5705 +pip/_internal/resolution/resolvelib/provider.py,sha256=LzQQyzMVaZYAwLgKInbq-it6mbQL1gX0hGohz5Cr5wg,9915 +pip/_internal/resolution/resolvelib/reporter.py,sha256=3ZVVYrs5PqvLFJkGLcuXoMK5mTInFzl31xjUpDBpZZk,2526 +pip/_internal/resolution/resolvelib/requirements.py,sha256=B1ndvKPSuyyyTEXt9sKhbwminViSWnBrJa7qO2ln4Z0,5455 +pip/_internal/resolution/resolvelib/resolver.py,sha256=ucoVKHtwH6gkZjcfIVJbUiOIHLqJxeYlrKTMIJciYwM,11335 +pip/_internal/self_outdated_check.py,sha256=GKSatNlt2cz_CMGxu72FbUzuPaXpWOnIVKOOYIk0gvY,6849 +pip/_internal/utils/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_internal/utils/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/utils/__pycache__/_log.cpython-310.pyc,, +pip/_internal/utils/__pycache__/appdirs.cpython-310.pyc,, +pip/_internal/utils/__pycache__/compat.cpython-310.pyc,, +pip/_internal/utils/__pycache__/compatibility_tags.cpython-310.pyc,, +pip/_internal/utils/__pycache__/datetime.cpython-310.pyc,, +pip/_internal/utils/__pycache__/deprecation.cpython-310.pyc,, +pip/_internal/utils/__pycache__/direct_url_helpers.cpython-310.pyc,, +pip/_internal/utils/__pycache__/distutils_args.cpython-310.pyc,, +pip/_internal/utils/__pycache__/egg_link.cpython-310.pyc,, +pip/_internal/utils/__pycache__/encoding.cpython-310.pyc,, +pip/_internal/utils/__pycache__/entrypoints.cpython-310.pyc,, +pip/_internal/utils/__pycache__/filesystem.cpython-310.pyc,, +pip/_internal/utils/__pycache__/filetypes.cpython-310.pyc,, +pip/_internal/utils/__pycache__/glibc.cpython-310.pyc,, +pip/_internal/utils/__pycache__/hashes.cpython-310.pyc,, +pip/_internal/utils/__pycache__/inject_securetransport.cpython-310.pyc,, +pip/_internal/utils/__pycache__/logging.cpython-310.pyc,, +pip/_internal/utils/__pycache__/misc.cpython-310.pyc,, +pip/_internal/utils/__pycache__/models.cpython-310.pyc,, +pip/_internal/utils/__pycache__/packaging.cpython-310.pyc,, +pip/_internal/utils/__pycache__/setuptools_build.cpython-310.pyc,, +pip/_internal/utils/__pycache__/subprocess.cpython-310.pyc,, +pip/_internal/utils/__pycache__/temp_dir.cpython-310.pyc,, +pip/_internal/utils/__pycache__/unpacking.cpython-310.pyc,, +pip/_internal/utils/__pycache__/urls.cpython-310.pyc,, +pip/_internal/utils/__pycache__/virtualenv.cpython-310.pyc,, +pip/_internal/utils/__pycache__/wheel.cpython-310.pyc,, +pip/_internal/utils/_log.py,sha256=-jHLOE_THaZz5BFcCnoSL9EYAtJ0nXem49s9of4jvKw,1015 +pip/_internal/utils/appdirs.py,sha256=swgcTKOm3daLeXTW6v5BUS2Ti2RvEnGRQYH_yDXklAo,1665 +pip/_internal/utils/compat.py,sha256=ACyBfLgj3_XG-iA5omEDrXqDM0cQKzi8h8HRBInzG6Q,1884 +pip/_internal/utils/compatibility_tags.py,sha256=ydin8QG8BHqYRsPY4OL6cmb44CbqXl1T0xxS97VhHkk,5377 +pip/_internal/utils/datetime.py,sha256=m21Y3wAtQc-ji6Veb6k_M5g6A0ZyFI4egchTdnwh-pQ,242 
+pip/_internal/utils/deprecation.py,sha256=NKo8VqLioJ4nnXXGmW4KdasxF90EFHkZaHeX1fT08C8,3627 +pip/_internal/utils/direct_url_helpers.py,sha256=6F1tc2rcKaCZmgfVwsE6ObIe_Pux23mUVYA-2D9wCFc,3206 +pip/_internal/utils/distutils_args.py,sha256=mcAscyp80vTt3xAGTipnpgc83V-_wCvydNELVXLq7JI,1249 +pip/_internal/utils/egg_link.py,sha256=5MVlpz5LirT4iLQq86OYzjXaYF0D4Qk1dprEI7ThST4,2203 +pip/_internal/utils/encoding.py,sha256=bdZ3YgUpaOEBI5MP4-DEXiQarCW3V0rxw1kRz-TaU1Q,1169 +pip/_internal/utils/entrypoints.py,sha256=aPvCnQVi9Hdk35Kloww_D5ibjUpqxgqcJP8O9VuMZek,1055 +pip/_internal/utils/filesystem.py,sha256=rrl-rY1w8TYyKYndUyZlE9ffkQyA4-jI9x_59zXkn5s,5893 +pip/_internal/utils/filetypes.py,sha256=i8XAQ0eFCog26Fw9yV0Yb1ygAqKYB1w9Cz9n0fj8gZU,716 +pip/_internal/utils/glibc.py,sha256=tDfwVYnJCOC0BNVpItpy8CGLP9BjkxFHdl0mTS0J7fc,3110 +pip/_internal/utils/hashes.py,sha256=anpZfFGIT6HcIj2td9NHtE8AWg6GeAIhwpP8GPvZE0E,4811 +pip/_internal/utils/inject_securetransport.py,sha256=o-QRVMGiENrTJxw3fAhA7uxpdEdw6M41TjHYtSVRrcg,795 +pip/_internal/utils/logging.py,sha256=Rvght-fDXL70VWib1cpgZ3iU-kXODV98bNeLUlbqVto,11522 +pip/_internal/utils/misc.py,sha256=MdUB12BMhj73sEmskEutmPyWFaJB7asoPCfLzs_YeT0,19359 +pip/_internal/utils/models.py,sha256=5GoYU586SrxURMvDn_jBMJInitviJg4O5-iOU-6I0WY,1193 +pip/_internal/utils/packaging.py,sha256=5Wm6_x7lKrlqVjPI5MBN_RurcRHwVYoQ7Ksrs84de7s,2108 +pip/_internal/utils/setuptools_build.py,sha256=vNH9hQB9wT6d-h1hVQhBKw91jNeT42meHpVeii-urOI,5652 +pip/_internal/utils/subprocess.py,sha256=vIWGpet5ARBmZ2Qn4NEHNgzCOduqbPIuByZmhhmr6mM,9182 +pip/_internal/utils/temp_dir.py,sha256=zob3PYMVevONkheOMUp_4jDofrEY3HIu5DHK78cSspI,7662 +pip/_internal/utils/unpacking.py,sha256=HUFlMEyCa9dPwdLh6sWeh95DeKytV8rsOyKShEw9y6g,8906 +pip/_internal/utils/urls.py,sha256=AhaesUGl-9it6uvG6fsFPOr9ynFpGaTMk4t5XTX7Z_Q,1759 +pip/_internal/utils/virtualenv.py,sha256=4_48qMzCwB_F5jIK5BC_ua7uiAMVifmQWU9NdaGUoVA,3459 +pip/_internal/utils/wheel.py,sha256=lXOgZyTlOm5HmK8tw5iw0A3_5A6wRzsXHOaQkIvvloU,4549 +pip/_internal/vcs/__init__.py,sha256=UAqvzpbi0VbZo3Ub6skEeZAw-ooIZR-zX_WpCbxyCoU,596 +pip/_internal/vcs/__pycache__/__init__.cpython-310.pyc,, +pip/_internal/vcs/__pycache__/bazaar.cpython-310.pyc,, +pip/_internal/vcs/__pycache__/git.cpython-310.pyc,, +pip/_internal/vcs/__pycache__/mercurial.cpython-310.pyc,, +pip/_internal/vcs/__pycache__/subversion.cpython-310.pyc,, +pip/_internal/vcs/__pycache__/versioncontrol.cpython-310.pyc,, +pip/_internal/vcs/bazaar.py,sha256=IGb5ca1xSZfgegRD2_JeyoZPrQQHs7lEYEIgpVsKpoU,3047 +pip/_internal/vcs/git.py,sha256=mjhwudCx9WlLNkxZ6_kOKmueF0rLoU2i1xeASKF6yiQ,18116 +pip/_internal/vcs/mercurial.py,sha256=Bzbd518Jsx-EJI0IhIobiQqiRsUv5TWYnrmRIFWE0Gw,5238 +pip/_internal/vcs/subversion.py,sha256=TEMRdwECvMcXakZX0pTNUep79kmBYkWDkWFkrYmcmac,11718 +pip/_internal/vcs/versioncontrol.py,sha256=KUOc-hN51em9jrqxKwUR3JnkgSE-xSOqMiiJcSaL6B8,22811 +pip/_internal/wheel_builder.py,sha256=65rOA8FSYt3c3HyqEw17uujjlCgqmoKEIv6rv9xN2NM,12307 +pip/_vendor/__init__.py,sha256=xjcBX0EP50pkaMdCssrsBXoZgo2hTtYxlcH1CIyA3T4,4708 +pip/_vendor/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/__pycache__/distro.cpython-310.pyc,, +pip/_vendor/__pycache__/six.cpython-310.pyc,, +pip/_vendor/__pycache__/typing_extensions.cpython-310.pyc,, +pip/_vendor/cachecontrol/__init__.py,sha256=1j_YQfjmiix6YyouLrftC6NzksAm8e8xGSjMKMRPIkM,465 +pip/_vendor/cachecontrol/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/cachecontrol/__pycache__/_cmd.cpython-310.pyc,, +pip/_vendor/cachecontrol/__pycache__/adapter.cpython-310.pyc,, 
+pip/_vendor/cachecontrol/__pycache__/cache.cpython-310.pyc,, +pip/_vendor/cachecontrol/__pycache__/compat.cpython-310.pyc,, +pip/_vendor/cachecontrol/__pycache__/controller.cpython-310.pyc,, +pip/_vendor/cachecontrol/__pycache__/filewrapper.cpython-310.pyc,, +pip/_vendor/cachecontrol/__pycache__/heuristics.cpython-310.pyc,, +pip/_vendor/cachecontrol/__pycache__/serialize.cpython-310.pyc,, +pip/_vendor/cachecontrol/__pycache__/wrapper.cpython-310.pyc,, +pip/_vendor/cachecontrol/_cmd.py,sha256=lxUXqfNTVx84zf6tcWbkLZHA6WVBRtJRpfeA9ZqhaAY,1379 +pip/_vendor/cachecontrol/adapter.py,sha256=ew9OYEQHEOjvGl06ZsuX8W3DAvHWsQKHwWAxISyGug8,5033 +pip/_vendor/cachecontrol/cache.py,sha256=eMS9Bn9JWQkHiIYA5GPRBqKVU95uS-yXkxrzpoafRig,917 +pip/_vendor/cachecontrol/caches/__init__.py,sha256=gGFOtIH8QDRvkP4YAfGIh-u9YYcGZVxwLM1-6e1mPNI,170 +pip/_vendor/cachecontrol/caches/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/cachecontrol/caches/__pycache__/file_cache.cpython-310.pyc,, +pip/_vendor/cachecontrol/caches/__pycache__/redis_cache.cpython-310.pyc,, +pip/_vendor/cachecontrol/caches/file_cache.py,sha256=P2KHcNXiqxEW7fCq5KC-NYHGSk0nNR9NIKuN-vBTn-E,4251 +pip/_vendor/cachecontrol/caches/redis_cache.py,sha256=tu_YBV7EV8vdBRGazUErkoRqYYjSBmNcB8dZ7BNomqk,940 +pip/_vendor/cachecontrol/compat.py,sha256=LNx7vqBndYdHU8YuJt53ab_8rzMGTXVrvMb7CZJkxG0,778 +pip/_vendor/cachecontrol/controller.py,sha256=9DSEiV58Gx7Ce69fLCrRcpN-_sHzXTY4ol9bEviatR0,15625 +pip/_vendor/cachecontrol/filewrapper.py,sha256=X4BAQOO26GNOR7nH_fhTzAfeuct2rBQcx_15MyFBpcs,3946 +pip/_vendor/cachecontrol/heuristics.py,sha256=8kAyuZLSCyEIgQr6vbUwfhpqg9ows4mM0IV6DWazevI,4154 +pip/_vendor/cachecontrol/serialize.py,sha256=dlySaeA5U7Q5eHvjiObgo1M8j8_huVjfWjid7Aq-r8c,6783 +pip/_vendor/cachecontrol/wrapper.py,sha256=X3-KMZ20Ho3VtqyVaXclpeQpFzokR5NE8tZSfvKVaB8,774 +pip/_vendor/certifi/__init__.py,sha256=xWdRgntT3j1V95zkRipGOg_A1UfEju2FcpujhysZLRI,62 +pip/_vendor/certifi/__main__.py,sha256=1k3Cr95vCxxGRGDljrW3wMdpZdL3Nhf0u1n-k2qdsCY,255 +pip/_vendor/certifi/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/certifi/__pycache__/__main__.cpython-310.pyc,, +pip/_vendor/certifi/__pycache__/core.cpython-310.pyc,, +pip/_vendor/certifi/cacert.pem,sha256=-og4Keu4zSpgL5shwfhd4kz0eUnVILzrGCi0zRy2kGw,265969 +pip/_vendor/certifi/core.py,sha256=CcwptmiI-3M50jIdO0HT6Fh6W_wqGsf8QcX9yfzvyuc,2791 +pip/_vendor/chardet/__init__.py,sha256=mWZaWmvZkhwfBEAT9O1Y6nRTfKzhT7FHhQTTAujbqUA,3271 +pip/_vendor/chardet/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/big5freq.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/big5prober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/chardistribution.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/charsetgroupprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/charsetprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/codingstatemachine.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/compat.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/cp949prober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/enums.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/escprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/escsm.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/eucjpprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/euckrfreq.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/euckrprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/euctwfreq.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/euctwprober.cpython-310.pyc,, 
+pip/_vendor/chardet/__pycache__/gb2312freq.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/gb2312prober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/hebrewprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/jisfreq.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/jpcntx.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/langbulgarianmodel.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/langgreekmodel.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/langhebrewmodel.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/langhungarianmodel.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/langrussianmodel.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/langthaimodel.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/langturkishmodel.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/latin1prober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/mbcharsetprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/mbcsgroupprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/mbcssm.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/sbcharsetprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/sbcsgroupprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/sjisprober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/universaldetector.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/utf8prober.cpython-310.pyc,, +pip/_vendor/chardet/__pycache__/version.cpython-310.pyc,, +pip/_vendor/chardet/big5freq.py,sha256=D_zK5GyzoVsRes0HkLJziltFQX0bKCLOrFe9_xDvO_8,31254 +pip/_vendor/chardet/big5prober.py,sha256=kBxHbdetBpPe7xrlb-e990iot64g_eGSLd32lB7_h3M,1757 +pip/_vendor/chardet/chardistribution.py,sha256=3woWS62KrGooKyqz4zQSnjFbJpa6V7g02daAibTwcl8,9411 +pip/_vendor/chardet/charsetgroupprober.py,sha256=GZLReHP6FRRn43hvSOoGCxYamErKzyp6RgOQxVeC3kg,3839 +pip/_vendor/chardet/charsetprober.py,sha256=KSmwJErjypyj0bRZmC5F5eM7c8YQgLYIjZXintZNstg,5110 +pip/_vendor/chardet/cli/__init__.py,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1 +pip/_vendor/chardet/cli/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/chardet/cli/__pycache__/chardetect.cpython-310.pyc,, +pip/_vendor/chardet/cli/chardetect.py,sha256=XK5zqjUG2a4-y6eLHZ8ThYcp6WWUrdlmELxNypcc2SE,2747 +pip/_vendor/chardet/codingstatemachine.py,sha256=VYp_6cyyki5sHgXDSZnXW4q1oelHc3cu9AyQTX7uug8,3590 +pip/_vendor/chardet/compat.py,sha256=40zr6wICZwknxyuLGGcIOPyve8DTebBCbbvttvnmp5Q,1200 +pip/_vendor/chardet/cp949prober.py,sha256=TZ434QX8zzBsnUvL_8wm4AQVTZ2ZkqEEQL_lNw9f9ow,1855 +pip/_vendor/chardet/enums.py,sha256=Aimwdb9as1dJKZaFNUH2OhWIVBVd6ZkJJ_WK5sNY8cU,1661 +pip/_vendor/chardet/escprober.py,sha256=kkyqVg1Yw3DIOAMJ2bdlyQgUFQhuHAW8dUGskToNWSc,3950 +pip/_vendor/chardet/escsm.py,sha256=RuXlgNvTIDarndvllNCk5WZBIpdCxQ0kcd9EAuxUh84,10510 +pip/_vendor/chardet/eucjpprober.py,sha256=iD8Jdp0ISRjgjiVN7f0e8xGeQJ5GM2oeZ1dA8nbSeUw,3749 +pip/_vendor/chardet/euckrfreq.py,sha256=-7GdmvgWez4-eO4SuXpa7tBiDi5vRXQ8WvdFAzVaSfo,13546 +pip/_vendor/chardet/euckrprober.py,sha256=MqFMTQXxW4HbzIpZ9lKDHB3GN8SP4yiHenTmf8g_PxY,1748 +pip/_vendor/chardet/euctwfreq.py,sha256=No1WyduFOgB5VITUA7PLyC5oJRNzRyMbBxaKI1l16MA,31621 +pip/_vendor/chardet/euctwprober.py,sha256=13p6EP4yRaxqnP4iHtxHOJ6R2zxHq1_m8hTRjzVZ95c,1747 +pip/_vendor/chardet/gb2312freq.py,sha256=JX8lsweKLmnCwmk8UHEQsLgkr_rP_kEbvivC4qPOrlc,20715 +pip/_vendor/chardet/gb2312prober.py,sha256=gGvIWi9WhDjE-xQXHvNIyrnLvEbMAYgyUSZ65HUfylw,1754 +pip/_vendor/chardet/hebrewprober.py,sha256=c3SZ-K7hvyzGY6JRAZxJgwJ_sUS9k0WYkvMY00YBYFo,13838 
+pip/_vendor/chardet/jisfreq.py,sha256=vpmJv2Bu0J8gnMVRPHMFefTRvo_ha1mryLig8CBwgOg,25777 +pip/_vendor/chardet/jpcntx.py,sha256=PYlNqRUQT8LM3cT5FmHGP0iiscFlTWED92MALvBungo,19643 +pip/_vendor/chardet/langbulgarianmodel.py,sha256=rk9CJpuxO0bObboJcv6gNgWuosYZmd8qEEds5y7DS_Y,105697 +pip/_vendor/chardet/langgreekmodel.py,sha256=S-uNQ1ihC75yhBvSux24gLFZv3QyctMwC6OxLJdX-bw,99571 +pip/_vendor/chardet/langhebrewmodel.py,sha256=DzPP6TPGG_-PV7tqspu_d8duueqm7uN-5eQ0aHUw1Gg,98776 +pip/_vendor/chardet/langhungarianmodel.py,sha256=RtJH7DZdsmaHqyK46Kkmnk5wQHiJwJPPJSqqIlpeZRc,102498 +pip/_vendor/chardet/langrussianmodel.py,sha256=THqJOhSxiTQcHboDNSc5yofc2koXXQFHFyjtyuntUfM,131180 +pip/_vendor/chardet/langthaimodel.py,sha256=R1wXHnUMtejpw0JnH_JO8XdYasME6wjVqp1zP7TKLgg,103312 +pip/_vendor/chardet/langturkishmodel.py,sha256=rfwanTptTwSycE4-P-QasPmzd-XVYgevytzjlEzBBu8,95946 +pip/_vendor/chardet/latin1prober.py,sha256=S2IoORhFk39FEFOlSFWtgVybRiP6h7BlLldHVclNkU8,5370 +pip/_vendor/chardet/mbcharsetprober.py,sha256=AR95eFH9vuqSfvLQZN-L5ijea25NOBCoXqw8s5O9xLQ,3413 +pip/_vendor/chardet/mbcsgroupprober.py,sha256=h6TRnnYq2OxG1WdD5JOyxcdVpn7dG0q-vB8nWr5mbh4,2012 +pip/_vendor/chardet/mbcssm.py,sha256=SY32wVIF3HzcjY3BaEspy9metbNSKxIIB0RKPn7tjpI,25481 +pip/_vendor/chardet/metadata/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_vendor/chardet/metadata/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/chardet/metadata/__pycache__/languages.cpython-310.pyc,, +pip/_vendor/chardet/metadata/languages.py,sha256=41tLq3eLSrBEbEVVQpVGFq9K7o1ln9b1HpY1l0hCUQo,19474 +pip/_vendor/chardet/sbcharsetprober.py,sha256=nmyMyuxzG87DN6K3Rk2MUzJLMLR69MrWpdnHzOwVUwQ,6136 +pip/_vendor/chardet/sbcsgroupprober.py,sha256=hqefQuXmiFyDBArOjujH6hd6WFXlOD1kWCsxDhjx5Vc,4309 +pip/_vendor/chardet/sjisprober.py,sha256=IIt-lZj0WJqK4rmUZzKZP4GJlE8KUEtFYVuY96ek5MQ,3774 +pip/_vendor/chardet/universaldetector.py,sha256=DpZTXCX0nUHXxkQ9sr4GZxGB_hveZ6hWt3uM94cgWKs,12503 +pip/_vendor/chardet/utf8prober.py,sha256=IdD8v3zWOsB8OLiyPi-y_fqwipRFxV9Nc1eKBLSuIEw,2766 +pip/_vendor/chardet/version.py,sha256=A4CILFAd8MRVG1HoXPp45iK9RLlWyV73a1EtwE8Tvn8,242 +pip/_vendor/colorama/__init__.py,sha256=pCdErryzLSzDW5P-rRPBlPLqbBtIRNJB6cMgoeJns5k,239 +pip/_vendor/colorama/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/colorama/__pycache__/ansi.cpython-310.pyc,, +pip/_vendor/colorama/__pycache__/ansitowin32.cpython-310.pyc,, +pip/_vendor/colorama/__pycache__/initialise.cpython-310.pyc,, +pip/_vendor/colorama/__pycache__/win32.cpython-310.pyc,, +pip/_vendor/colorama/__pycache__/winterm.cpython-310.pyc,, +pip/_vendor/colorama/ansi.py,sha256=Top4EeEuaQdBWdteKMEcGOTeKeF19Q-Wo_6_Cj5kOzQ,2522 +pip/_vendor/colorama/ansitowin32.py,sha256=yV7CEmCb19MjnJKODZEEvMH_fnbJhwnpzo4sxZuGXmA,10517 +pip/_vendor/colorama/initialise.py,sha256=PprovDNxMTrvoNHFcL2NZjpH2XzDc8BLxLxiErfUl4k,1915 +pip/_vendor/colorama/win32.py,sha256=bJ8Il9jwaBN5BJ8bmN6FoYZ1QYuMKv2j8fGrXh7TJjw,5404 +pip/_vendor/colorama/winterm.py,sha256=2y_2b7Zsv34feAsP67mLOVc-Bgq51mdYGo571VprlrM,6438 +pip/_vendor/distlib/__init__.py,sha256=y-rKDBB99QJ3N1PJGAXQo89ou615aAeBjV2brBxKgM8,581 +pip/_vendor/distlib/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/compat.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/database.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/index.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/locators.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/manifest.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/markers.cpython-310.pyc,, 
+pip/_vendor/distlib/__pycache__/metadata.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/resources.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/scripts.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/util.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/version.cpython-310.pyc,, +pip/_vendor/distlib/__pycache__/wheel.cpython-310.pyc,, +pip/_vendor/distlib/compat.py,sha256=tfoMrj6tujk7G4UC2owL6ArgDuCKabgBxuJRGZSmpko,41259 +pip/_vendor/distlib/database.py,sha256=hBO2dgvDF7W3BqX8Ecns6p_RPerCaIbNKbdUOuJ1a14,51456 +pip/_vendor/distlib/index.py,sha256=UfcimNW19AB7IKWam4VaJbXuCBvArKfSxhV16EwavzE,20739 +pip/_vendor/distlib/locators.py,sha256=4D2hEcHePNuW4mXEZ3Cuw12eW-vbO-4WuAlbf4h5K7w,51963 +pip/_vendor/distlib/manifest.py,sha256=nQEhYmgoreaBZzyFzwYsXxJARu3fo4EkunU163U16iE,14811 +pip/_vendor/distlib/markers.py,sha256=TpHHHLgkzyT7YHbwj-2i6weRaq-Ivy2-MUnrDkjau-U,5058 +pip/_vendor/distlib/metadata.py,sha256=vatoxFdmBr6ie-sTVXVNPOPG3uwMDWJTnEECnm7xDCw,39109 +pip/_vendor/distlib/resources.py,sha256=LwbPksc0A1JMbi6XnuPdMBUn83X7BPuFNWqPGEKI698,10820 +pip/_vendor/distlib/scripts.py,sha256=tjSwENINeV91ROZxec5zTSMRg2jEeKc4enyCHDzNvEE,17720 +pip/_vendor/distlib/util.py,sha256=31dPXn3Rfat0xZLeVoFpuniyhe6vsbl9_QN-qd9Lhlk,66262 +pip/_vendor/distlib/version.py,sha256=WG__LyAa2GwmA6qSoEJtvJE8REA1LZpbSizy8WvhJLk,23513 +pip/_vendor/distlib/wheel.py,sha256=pj5VVCjqZMcHvgizORWwAFPS7hOk61CZ59dxP8laQ4E,42943 +pip/_vendor/distro.py,sha256=O1EeHMq1-xAO373JI2_6pYEtd09yEkxtmrYkdY-9S-w,48414 +pip/_vendor/html5lib/__init__.py,sha256=BYzcKCqeEii52xDrqBFruhnmtmkiuHXFyFh-cglQ8mk,1160 +pip/_vendor/html5lib/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/html5lib/__pycache__/_ihatexml.cpython-310.pyc,, +pip/_vendor/html5lib/__pycache__/_inputstream.cpython-310.pyc,, +pip/_vendor/html5lib/__pycache__/_tokenizer.cpython-310.pyc,, +pip/_vendor/html5lib/__pycache__/_utils.cpython-310.pyc,, +pip/_vendor/html5lib/__pycache__/constants.cpython-310.pyc,, +pip/_vendor/html5lib/__pycache__/html5parser.cpython-310.pyc,, +pip/_vendor/html5lib/__pycache__/serializer.cpython-310.pyc,, +pip/_vendor/html5lib/_ihatexml.py,sha256=ifOwF7pXqmyThIXc3boWc96s4MDezqRrRVp7FwDYUFs,16728 +pip/_vendor/html5lib/_inputstream.py,sha256=jErNASMlkgs7MpOM9Ve_VdLDJyFFweAjLuhVutZz33U,32353 +pip/_vendor/html5lib/_tokenizer.py,sha256=04mgA2sNTniutl2fxFv-ei5bns4iRaPxVXXHh_HrV_4,77040 +pip/_vendor/html5lib/_trie/__init__.py,sha256=nqfgO910329BEVJ5T4psVwQtjd2iJyEXQ2-X8c1YxwU,109 +pip/_vendor/html5lib/_trie/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/html5lib/_trie/__pycache__/_base.cpython-310.pyc,, +pip/_vendor/html5lib/_trie/__pycache__/py.cpython-310.pyc,, +pip/_vendor/html5lib/_trie/_base.py,sha256=CaybYyMro8uERQYjby2tTeSUatnWDfWroUN9N7ety5w,1013 +pip/_vendor/html5lib/_trie/py.py,sha256=wXmQLrZRf4MyWNyg0m3h81m9InhLR7GJ002mIIZh-8o,1775 +pip/_vendor/html5lib/_utils.py,sha256=Dx9AKntksRjFT1veBj7I362pf5OgIaT0zglwq43RnfU,4931 +pip/_vendor/html5lib/constants.py,sha256=Ll-yzLU_jcjyAI_h57zkqZ7aQWE5t5xA4y_jQgoUUhw,83464 +pip/_vendor/html5lib/filters/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_vendor/html5lib/filters/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/html5lib/filters/__pycache__/alphabeticalattributes.cpython-310.pyc,, +pip/_vendor/html5lib/filters/__pycache__/base.cpython-310.pyc,, +pip/_vendor/html5lib/filters/__pycache__/inject_meta_charset.cpython-310.pyc,, +pip/_vendor/html5lib/filters/__pycache__/lint.cpython-310.pyc,, +pip/_vendor/html5lib/filters/__pycache__/optionaltags.cpython-310.pyc,, 
+pip/_vendor/html5lib/filters/__pycache__/sanitizer.cpython-310.pyc,, +pip/_vendor/html5lib/filters/__pycache__/whitespace.cpython-310.pyc,, +pip/_vendor/html5lib/filters/alphabeticalattributes.py,sha256=lViZc2JMCclXi_5gduvmdzrRxtO5Xo9ONnbHBVCsykU,919 +pip/_vendor/html5lib/filters/base.py,sha256=z-IU9ZAYjpsVsqmVt7kuWC63jR11hDMr6CVrvuao8W0,286 +pip/_vendor/html5lib/filters/inject_meta_charset.py,sha256=egDXUEHXmAG9504xz0K6ALDgYkvUrC2q15YUVeNlVQg,2945 +pip/_vendor/html5lib/filters/lint.py,sha256=jk6q56xY0ojiYfvpdP-OZSm9eTqcAdRqhCoPItemPYA,3643 +pip/_vendor/html5lib/filters/optionaltags.py,sha256=8lWT75J0aBOHmPgfmqTHSfPpPMp01T84NKu0CRedxcE,10588 +pip/_vendor/html5lib/filters/sanitizer.py,sha256=m6oGmkBhkGAnn2nV6D4hE78SCZ6WEnK9rKdZB3uXBIc,26897 +pip/_vendor/html5lib/filters/whitespace.py,sha256=8eWqZxd4UC4zlFGW6iyY6f-2uuT8pOCSALc3IZt7_t4,1214 +pip/_vendor/html5lib/html5parser.py,sha256=anr-aXre_ImfrkQ35c_rftKXxC80vJCREKe06Tq15HA,117186 +pip/_vendor/html5lib/serializer.py,sha256=_PpvcZF07cwE7xr9uKkZqh5f4UEaI8ltCU2xPJzaTpk,15759 +pip/_vendor/html5lib/treeadapters/__init__.py,sha256=A0rY5gXIe4bJOiSGRO_j_tFhngRBO8QZPzPtPw5dFzo,679 +pip/_vendor/html5lib/treeadapters/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/html5lib/treeadapters/__pycache__/genshi.cpython-310.pyc,, +pip/_vendor/html5lib/treeadapters/__pycache__/sax.cpython-310.pyc,, +pip/_vendor/html5lib/treeadapters/genshi.py,sha256=CH27pAsDKmu4ZGkAUrwty7u0KauGLCZRLPMzaO3M5vo,1715 +pip/_vendor/html5lib/treeadapters/sax.py,sha256=BKS8woQTnKiqeffHsxChUqL4q2ZR_wb5fc9MJ3zQC8s,1776 +pip/_vendor/html5lib/treebuilders/__init__.py,sha256=AysSJyvPfikCMMsTVvaxwkgDieELD5dfR8FJIAuq7hY,3592 +pip/_vendor/html5lib/treebuilders/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/html5lib/treebuilders/__pycache__/base.cpython-310.pyc,, +pip/_vendor/html5lib/treebuilders/__pycache__/dom.cpython-310.pyc,, +pip/_vendor/html5lib/treebuilders/__pycache__/etree.cpython-310.pyc,, +pip/_vendor/html5lib/treebuilders/__pycache__/etree_lxml.cpython-310.pyc,, +pip/_vendor/html5lib/treebuilders/base.py,sha256=z-o51vt9r_l2IDG5IioTOKGzZne4Fy3_Fc-7ztrOh4I,14565 +pip/_vendor/html5lib/treebuilders/dom.py,sha256=22whb0C71zXIsai5mamg6qzBEiigcBIvaDy4Asw3at0,8925 +pip/_vendor/html5lib/treebuilders/etree.py,sha256=w5ZFpKk6bAxnrwD2_BrF5EVC7vzz0L3LMi9Sxrbc_8w,12836 +pip/_vendor/html5lib/treebuilders/etree_lxml.py,sha256=9gqDjs-IxsPhBYa5cpvv2FZ1KZlG83Giusy2lFmvIkE,14766 +pip/_vendor/html5lib/treewalkers/__init__.py,sha256=OBPtc1TU5mGyy18QDMxKEyYEz0wxFUUNj5v0-XgmYhY,5719 +pip/_vendor/html5lib/treewalkers/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/html5lib/treewalkers/__pycache__/base.cpython-310.pyc,, +pip/_vendor/html5lib/treewalkers/__pycache__/dom.cpython-310.pyc,, +pip/_vendor/html5lib/treewalkers/__pycache__/etree.cpython-310.pyc,, +pip/_vendor/html5lib/treewalkers/__pycache__/etree_lxml.cpython-310.pyc,, +pip/_vendor/html5lib/treewalkers/__pycache__/genshi.cpython-310.pyc,, +pip/_vendor/html5lib/treewalkers/base.py,sha256=ouiOsuSzvI0KgzdWP8PlxIaSNs9falhbiinAEc_UIJY,7476 +pip/_vendor/html5lib/treewalkers/dom.py,sha256=EHyFR8D8lYNnyDU9lx_IKigVJRyecUGua0mOi7HBukc,1413 +pip/_vendor/html5lib/treewalkers/etree.py,sha256=xo1L5m9VtkfpFJK0pFmkLVajhqYYVisVZn3k9kYpPkI,4551 +pip/_vendor/html5lib/treewalkers/etree_lxml.py,sha256=_b0LAVWLcVu9WaU_-w3D8f0IRSpCbjf667V-3NRdhTw,6357 +pip/_vendor/html5lib/treewalkers/genshi.py,sha256=4D2PECZ5n3ZN3qu3jMl9yY7B81jnQApBQSVlfaIuYbA,2309 +pip/_vendor/idna/__init__.py,sha256=KJQN1eQBr8iIK5SKrJ47lXvxG0BJ7Lm38W4zT0v_8lk,849 
+pip/_vendor/idna/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/idna/__pycache__/codec.cpython-310.pyc,, +pip/_vendor/idna/__pycache__/compat.cpython-310.pyc,, +pip/_vendor/idna/__pycache__/core.cpython-310.pyc,, +pip/_vendor/idna/__pycache__/idnadata.cpython-310.pyc,, +pip/_vendor/idna/__pycache__/intranges.cpython-310.pyc,, +pip/_vendor/idna/__pycache__/package_data.cpython-310.pyc,, +pip/_vendor/idna/__pycache__/uts46data.cpython-310.pyc,, +pip/_vendor/idna/codec.py,sha256=6ly5odKfqrytKT9_7UrlGklHnf1DSK2r9C6cSM4sa28,3374 +pip/_vendor/idna/compat.py,sha256=0_sOEUMT4CVw9doD3vyRhX80X19PwqFoUBs7gWsFME4,321 +pip/_vendor/idna/core.py,sha256=RFIkY-HhFZaDoBEFjGwyGd_vWI04uOAQjnzueMWqwOU,12795 +pip/_vendor/idna/idnadata.py,sha256=fzMzkCea2xieVxcrjngJ-2pLsKQNejPCZFlBajIuQdw,44025 +pip/_vendor/idna/intranges.py,sha256=YBr4fRYuWH7kTKS2tXlFjM24ZF1Pdvcir-aywniInqg,1881 +pip/_vendor/idna/package_data.py,sha256=szxQhV0ZD0nKJ84Kuobw3l8q4_KeCyXjFRdpwIpKZmw,21 +pip/_vendor/idna/uts46data.py,sha256=o-D7V-a0fOLZNd7tvxof6MYfUd0TBZzE2bLR5XO67xU,204400 +pip/_vendor/msgpack/__init__.py,sha256=2gJwcsTIaAtCM0GMi2rU-_Y6kILeeQuqRkrQ22jSANc,1118 +pip/_vendor/msgpack/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/msgpack/__pycache__/_version.cpython-310.pyc,, +pip/_vendor/msgpack/__pycache__/exceptions.cpython-310.pyc,, +pip/_vendor/msgpack/__pycache__/ext.cpython-310.pyc,, +pip/_vendor/msgpack/__pycache__/fallback.cpython-310.pyc,, +pip/_vendor/msgpack/_version.py,sha256=JpTcnRd3YUioA24NDtDZbLW0Nhl2yA-N1Rq2lLDBB-g,20 +pip/_vendor/msgpack/exceptions.py,sha256=dCTWei8dpkrMsQDcjQk74ATl9HsIBH0ybt8zOPNqMYc,1081 +pip/_vendor/msgpack/ext.py,sha256=4l356Y4sVEcvCla2dh_cL57vh4GMhZfa3kuWHFHYz6A,6088 +pip/_vendor/msgpack/fallback.py,sha256=L5jriXysURbf6rPbbHbvXgvoFrKZiryIBmujMTcrf3A,34475 +pip/_vendor/packaging/__about__.py,sha256=ugASIO2w1oUyH8_COqQ2X_s0rDhjbhQC3yJocD03h2c,661 +pip/_vendor/packaging/__init__.py,sha256=b9Kk5MF7KxhhLgcDmiUWukN-LatWFxPdNug0joPhHSk,497 +pip/_vendor/packaging/__pycache__/__about__.cpython-310.pyc,, +pip/_vendor/packaging/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/packaging/__pycache__/_manylinux.cpython-310.pyc,, +pip/_vendor/packaging/__pycache__/_musllinux.cpython-310.pyc,, +pip/_vendor/packaging/__pycache__/_structures.cpython-310.pyc,, +pip/_vendor/packaging/__pycache__/markers.cpython-310.pyc,, +pip/_vendor/packaging/__pycache__/requirements.cpython-310.pyc,, +pip/_vendor/packaging/__pycache__/specifiers.cpython-310.pyc,, +pip/_vendor/packaging/__pycache__/tags.cpython-310.pyc,, +pip/_vendor/packaging/__pycache__/utils.cpython-310.pyc,, +pip/_vendor/packaging/__pycache__/version.cpython-310.pyc,, +pip/_vendor/packaging/_manylinux.py,sha256=XcbiXB-qcjv3bcohp6N98TMpOP4_j3m-iOA8ptK2GWY,11488 +pip/_vendor/packaging/_musllinux.py,sha256=_KGgY_qc7vhMGpoqss25n2hiLCNKRtvz9mCrS7gkqyc,4378 +pip/_vendor/packaging/_structures.py,sha256=q3eVNmbWJGG_S0Dit_S3Ao8qQqz_5PYTXFAKBZe5yr4,1431 +pip/_vendor/packaging/markers.py,sha256=AJBOcY8Oq0kYc570KuuPTkvuqjAlhufaE2c9sCUbm64,8487 +pip/_vendor/packaging/requirements.py,sha256=NtDlPBtojpn1IUC85iMjPNsUmufjpSlwnNA-Xb4m5NA,4676 +pip/_vendor/packaging/specifiers.py,sha256=LRQ0kFsHrl5qfcFNEEJrIFYsnIHQUJXY9fIsakTrrqE,30110 +pip/_vendor/packaging/tags.py,sha256=lmsnGNiJ8C4D_Pf9PbM0qgbZvD9kmB9lpZBQUZa3R_Y,15699 +pip/_vendor/packaging/utils.py,sha256=dJjeat3BS-TYn1RrUFVwufUMasbtzLfYRoy_HXENeFQ,4200 +pip/_vendor/packaging/version.py,sha256=_fLRNrFrxYcHVfyo8vk9j8s6JM8N_xsSxVFr6RJyco8,14665 
+pip/_vendor/pep517/__init__.py,sha256=Y1bATL2qbFNN6M_DQa4yyrwqjpIiL-j9T6kBmR0DS14,130 +pip/_vendor/pep517/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/pep517/__pycache__/build.cpython-310.pyc,, +pip/_vendor/pep517/__pycache__/check.cpython-310.pyc,, +pip/_vendor/pep517/__pycache__/colorlog.cpython-310.pyc,, +pip/_vendor/pep517/__pycache__/compat.cpython-310.pyc,, +pip/_vendor/pep517/__pycache__/dirtools.cpython-310.pyc,, +pip/_vendor/pep517/__pycache__/envbuild.cpython-310.pyc,, +pip/_vendor/pep517/__pycache__/meta.cpython-310.pyc,, +pip/_vendor/pep517/__pycache__/wrappers.cpython-310.pyc,, +pip/_vendor/pep517/build.py,sha256=2bar6EdjwIz2Dlfy94qdxn3oA9mVnnny40mfoT5f-qI,3457 +pip/_vendor/pep517/check.py,sha256=bCORq1WrHjhpTONa-zpAqG0EB9rHNuhO1ORu6DsDuL8,6084 +pip/_vendor/pep517/colorlog.py,sha256=Tk9AuYm_cLF3BKTBoSTJt9bRryn0aFojIQOwbfVUTxQ,4098 +pip/_vendor/pep517/compat.py,sha256=NmLImE5oiDT3gbEhJ4w7xeoMFcpAPrGu_NltBytSJUY,1253 +pip/_vendor/pep517/dirtools.py,sha256=2mkAkAL0mRz_elYFjRKuekTJVipH1zTn4tbf1EDev84,1129 +pip/_vendor/pep517/envbuild.py,sha256=zFde--rmzjXMLXcm7SA_3hDtgk5VCTA8hjpk88RbF6E,6100 +pip/_vendor/pep517/in_process/__init__.py,sha256=MyWoAi8JHdcBv7yXuWpUSVADbx6LSB9rZh7kTIgdA8Y,563 +pip/_vendor/pep517/in_process/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/pep517/in_process/__pycache__/_in_process.cpython-310.pyc,, +pip/_vendor/pep517/in_process/_in_process.py,sha256=D3waguyNSGcwosociD5USfcycYr2RCzCjYtxX5UHQmQ,11201 +pip/_vendor/pep517/meta.py,sha256=8mnM5lDnT4zXQpBTliJbRGfesH7iioHwozbDxALPS9Y,2463 +pip/_vendor/pep517/wrappers.py,sha256=impq7Cz_LL1iDF1iiOzYWB4MaEu6O6Gps7TJ5qsJz1Q,13429 +pip/_vendor/pkg_resources/__init__.py,sha256=NnpQ3g6BCHzpMgOR_OLBmYtniY4oOzdKpwqghfq_6ug,108287 +pip/_vendor/pkg_resources/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/pkg_resources/__pycache__/py31compat.cpython-310.pyc,, +pip/_vendor/pkg_resources/py31compat.py,sha256=CRk8fkiPRDLsbi5pZcKsHI__Pbmh_94L8mr9Qy9Ab2U,562 +pip/_vendor/platformdirs/__init__.py,sha256=Aizpxewwd4nY63Gqw-Od1Rso9Ah4bSoc6rkx-GBRu2Y,12676 +pip/_vendor/platformdirs/__main__.py,sha256=ZmsnTxEOxtTvwa-Y_Vfab_JN3X4XCVeN8X0yyy9-qnc,1176 +pip/_vendor/platformdirs/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/platformdirs/__pycache__/__main__.cpython-310.pyc,, +pip/_vendor/platformdirs/__pycache__/android.cpython-310.pyc,, +pip/_vendor/platformdirs/__pycache__/api.cpython-310.pyc,, +pip/_vendor/platformdirs/__pycache__/macos.cpython-310.pyc,, +pip/_vendor/platformdirs/__pycache__/unix.cpython-310.pyc,, +pip/_vendor/platformdirs/__pycache__/version.cpython-310.pyc,, +pip/_vendor/platformdirs/__pycache__/windows.cpython-310.pyc,, +pip/_vendor/platformdirs/android.py,sha256=xhlD4NmrKCARe5lgnpBGYo4lOYxEOBOByNDNYy91gEE,4012 +pip/_vendor/platformdirs/api.py,sha256=MXKHXOL3eh_-trSok-JUTjAR_zjmmKF3rjREVABjP8s,4910 +pip/_vendor/platformdirs/macos.py,sha256=-3UXQewbT0yMhMdkzRXfXGAntmLIH7Qt4a9Hlf8I5_Y,2655 +pip/_vendor/platformdirs/unix.py,sha256=b4aVYTz0qZ50HntwOXo8r6tp82jAa3qTjxw-WlnC2yc,6910 +pip/_vendor/platformdirs/version.py,sha256=bXzLJCe23FNQRQrf7ZRWKejxWnct_wft7dxdkMGT33E,80 +pip/_vendor/platformdirs/windows.py,sha256=ISruopR5UGBePC0BxCxXevkZYfjJsIZc49YWU5iYfQ4,6439 +pip/_vendor/progress/__init__.py,sha256=1HejNZtv2ouUNQeStUDAtZrtwkz_3FmYKQ476hJ7zOs,5294 +pip/_vendor/progress/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/progress/__pycache__/bar.cpython-310.pyc,, +pip/_vendor/progress/__pycache__/colors.cpython-310.pyc,, +pip/_vendor/progress/__pycache__/counter.cpython-310.pyc,, 
+pip/_vendor/progress/__pycache__/spinner.cpython-310.pyc,, +pip/_vendor/progress/bar.py,sha256=GbedY0oZ-Q1duXjmvVLO0tSf-uTSH7hJ3zzyI91Esws,2942 +pip/_vendor/progress/colors.py,sha256=cCYXQnYFYVmQKKmYEbQ_lj6SPSFzdw4FN98F2x2kR-U,2655 +pip/_vendor/progress/counter.py,sha256=zYt9DWH0_05s8Q9TrJwHVud-WwsyyaR3PwYtk5hxwwQ,1613 +pip/_vendor/progress/spinner.py,sha256=u5ElzW94XEiLGH-aAlr54VJtKfeK745xr6UfGvvflzU,1461 +pip/_vendor/pygments/__init__.py,sha256=CAmA9UthykwxvtutUcH0IxqtiyQcSg6CmYdM-jKlcRY,3002 +pip/_vendor/pygments/__main__.py,sha256=X7rGLMUC54EXgO14FZ9goKXZDmhPzKXTsUglmb_McIU,353 +pip/_vendor/pygments/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/__main__.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/cmdline.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/console.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/filter.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/formatter.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/lexer.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/modeline.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/plugin.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/regexopt.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/scanner.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/sphinxext.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/style.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/token.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/unistring.cpython-310.pyc,, +pip/_vendor/pygments/__pycache__/util.cpython-310.pyc,, +pip/_vendor/pygments/cmdline.py,sha256=XpsyWgErcSqHC7rXiYKLF3Y61Uy8SR2DNQDDhZGuezg,23408 +pip/_vendor/pygments/console.py,sha256=QZXBUAkyl4dPLQ1e6XHjQu3mmXBWvuGQwsQT2q1mtCY,1697 +pip/_vendor/pygments/filter.py,sha256=35iMZiB1rcuogxokm92kViB2DPXPp_wWoxWuMmwvvzY,1938 +pip/_vendor/pygments/filters/__init__.py,sha256=-veOimzCyYGEARru2Dfo6ofSYcZ8tGsIVuMprtaZQ24,40292 +pip/_vendor/pygments/filters/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/pygments/formatter.py,sha256=zSBbX2U_OOriy7SJvSTK6OAxjuXtROWxQlNpJEJZjBA,2917 +pip/_vendor/pygments/formatters/__init__.py,sha256=fjkYDy5-F998XczKi0ymHFayr5ObIRLHF8cgp9k8kpA,5119 +pip/_vendor/pygments/formatters/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/_mapping.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/bbcode.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/groff.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/html.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/img.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/irc.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/latex.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/other.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/pangomarkup.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/rtf.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/svg.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/terminal.cpython-310.pyc,, +pip/_vendor/pygments/formatters/__pycache__/terminal256.cpython-310.pyc,, +pip/_vendor/pygments/formatters/_mapping.py,sha256=3A1rYSjYN9MLduCFWy2_mYhllPVpwlw55anRYnPXX8w,6516 +pip/_vendor/pygments/formatters/bbcode.py,sha256=cSKMOioUnE4TzvCCsK4IbJ6G78W07ZwHtkz4V1Wte0U,3314 +pip/_vendor/pygments/formatters/groff.py,sha256=ULgMKvGeLswX0KZn3IBp0p0U3rruiSHBtpl6O5qbqLs,5005 +pip/_vendor/pygments/formatters/html.py,sha256=0jM7Jc4xA4tsjmPq35uklm_En_OVdcNb0__SEXp2pDQ,35330 
+pip/_vendor/pygments/formatters/img.py,sha256=r4iag_jCfyv_LhIt-1fRDeVEEoAfVJzkD9nZChIwiS8,21819 +pip/_vendor/pygments/formatters/irc.py,sha256=gi_IeIZeNaTfTMtvseLigZdS6lNicN7r7O7rnI6myo0,5871 +pip/_vendor/pygments/formatters/latex.py,sha256=qZUerrHt2Nn2aB4gJcdqj99qBkIxl_1v1ukYsf230Gk,18930 +pip/_vendor/pygments/formatters/other.py,sha256=Q01LtkqPZ8m_EYdgMVzXPUGjHoL00lXI3By97wzytYU,5073 +pip/_vendor/pygments/formatters/pangomarkup.py,sha256=ZpjALTSuGFwviJd5kOYwr-1NgqxCX3XRJrjXC7x1UbQ,2212 +pip/_vendor/pygments/formatters/rtf.py,sha256=qh7-z_wbUsTY6z7fZUGrYECYBlWB0wEdBwIZVEVybL0,5014 +pip/_vendor/pygments/formatters/svg.py,sha256=T7Jj004I3JUPOr48aAhQ368K2qWCciUyMQ2tdU-LB-4,7335 +pip/_vendor/pygments/formatters/terminal.py,sha256=cRD5hitINOkYlGZo9ma252vpJYPSGNgLivrsm6zGyec,4674 +pip/_vendor/pygments/formatters/terminal256.py,sha256=Bvz9zZL3UWc94TDm1GhKMI4x0BTit0XplhyRL0zmtkw,11753 +pip/_vendor/pygments/lexer.py,sha256=ECXWlEsbRnKs_njozZns6BGQ4riTMzct_BzAr3zV6dY,31937 +pip/_vendor/pygments/lexers/__init__.py,sha256=6Ds0GVBP3jrIU02wmjRdpoL4eFGhwT2IVD1zf3cV5_Y,11307 +pip/_vendor/pygments/lexers/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/pygments/lexers/__pycache__/_mapping.cpython-310.pyc,, +pip/_vendor/pygments/lexers/__pycache__/python.cpython-310.pyc,, +pip/_vendor/pygments/lexers/_mapping.py,sha256=jAxmvh5wvNkD-p3Fh6E7hY_B0sGbcxWRfseT6iq7ex4,70032 +pip/_vendor/pygments/lexers/python.py,sha256=LXnk43Lcngqn9xj6eRqdk2f73oF4kHZWiwgHMM_RlVM,52776 +pip/_vendor/pygments/modeline.py,sha256=37fen3cf1moCz4vMVJqX41eAQCmj8pzUchikgPcHp-U,986 +pip/_vendor/pygments/plugin.py,sha256=zGSig3S7QX-3o6RDxd4_Uvice_t25l_BN9aQQ9k8vmU,1727 +pip/_vendor/pygments/regexopt.py,sha256=mj8Fgu3sT0d5PZwRwDLexEvVOQbuHeosubQnqVwgiqs,3072 +pip/_vendor/pygments/scanner.py,sha256=nGoHy-Npk2ylUd4bws_CJN1hK785Xqo8e0teRmNX2jo,3091 +pip/_vendor/pygments/sphinxext.py,sha256=FZ2puvLe2Bztqtj6UJvQd7D8TvtOZ1GsfRJObvH59tE,4630 +pip/_vendor/pygments/style.py,sha256=lGyan5bU42q1kGMfFqafwL3g1j5EurTvfkv8vdP7NzQ,6257 +pip/_vendor/pygments/styles/__init__.py,sha256=Qx2zq6ufbDNE2cTp51M-s9zW-sDE-KLIqFw31qr3Bhg,3252 +pip/_vendor/pygments/styles/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/pygments/token.py,sha256=lNPgeaQTzu2DEUi6n_lxAIU7uy4DVj8LMI3nSVnTjks,6143 +pip/_vendor/pygments/unistring.py,sha256=Xs0FzOzE0l0iWRoTlcgi-Q_kAMdF5Gt5FL_goGKJc98,63188 +pip/_vendor/pygments/util.py,sha256=s9n8BQXIxG3lIwCPWv5-ci8yhaqq5JbEVK9v8Z-8_3I,9123 +pip/_vendor/pyparsing/__init__.py,sha256=jXheGTFT1b6r_4WxuOE0uVUqiouLJ3WHzOScpLieRgQ,9107 +pip/_vendor/pyparsing/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/pyparsing/__pycache__/actions.cpython-310.pyc,, +pip/_vendor/pyparsing/__pycache__/common.cpython-310.pyc,, +pip/_vendor/pyparsing/__pycache__/core.cpython-310.pyc,, +pip/_vendor/pyparsing/__pycache__/exceptions.cpython-310.pyc,, +pip/_vendor/pyparsing/__pycache__/helpers.cpython-310.pyc,, +pip/_vendor/pyparsing/__pycache__/results.cpython-310.pyc,, +pip/_vendor/pyparsing/__pycache__/testing.cpython-310.pyc,, +pip/_vendor/pyparsing/__pycache__/unicode.cpython-310.pyc,, +pip/_vendor/pyparsing/__pycache__/util.cpython-310.pyc,, +pip/_vendor/pyparsing/actions.py,sha256=60v7mETOBzc01YPH_qQD5isavgcSJpAfIKpzgjM3vaU,6429 +pip/_vendor/pyparsing/common.py,sha256=lFL97ooIeR75CmW5hjURZqwDCTgruqltcTCZ-ulLO2Q,12936 +pip/_vendor/pyparsing/core.py,sha256=GtQsD06HlwKPc7M8K8hyOuOW-cRnd87AxAHq-ad5lEk,212248 +pip/_vendor/pyparsing/diagram/__init__.py,sha256=h0gsUwmo5N3shgvfXVQTtqvTpUAv-ZdQjSQ6IUJmsxY,22165 
+pip/_vendor/pyparsing/diagram/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/pyparsing/exceptions.py,sha256=H4D9gqMavqmAFSsdrU_J6bO-jA-T-A7yvtXWZpooIUA,9030 +pip/_vendor/pyparsing/helpers.py,sha256=kqpIZFG-y0fQ3g_TmloYllo9we6YCYiewZMXIK0y5wc,38299 +pip/_vendor/pyparsing/results.py,sha256=4D-oURF1cLeL7k0d3zMqUuWH_gTjop_OrZwik9O0HXU,25339 +pip/_vendor/pyparsing/testing.py,sha256=szs8AKZREZMhL0y0vsMfaTVAnpqPHetg6VKJBNmc4QY,13388 +pip/_vendor/pyparsing/unicode.py,sha256=IR-ioeGY29cZ49tG8Ts7ITPWWNP5G2DcZs58oa8zn44,10381 +pip/_vendor/pyparsing/util.py,sha256=kq772O5YSeXOSdP-M31EWpbH_ayj7BMHImBYo9xPD5M,6805 +pip/_vendor/requests/__init__.py,sha256=6IUFQM6K9V2NIu4fe4LtUsN21-TFbw_w3EfPpdUN-qc,5130 +pip/_vendor/requests/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/__version__.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/_internal_utils.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/adapters.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/api.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/auth.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/certs.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/compat.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/cookies.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/exceptions.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/help.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/hooks.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/models.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/packages.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/sessions.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/status_codes.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/structures.cpython-310.pyc,, +pip/_vendor/requests/__pycache__/utils.cpython-310.pyc,, +pip/_vendor/requests/__version__.py,sha256=q8miOQaomOv3S74lK4eQs1zZ5jwcnOusyEU-M2idhts,441 +pip/_vendor/requests/_internal_utils.py,sha256=Zx3PnEUccyfsB-ie11nZVAW8qClJy0gx1qNME7rgT18,1096 +pip/_vendor/requests/adapters.py,sha256=WazYJQ_b2LHhNDb_y0hscNlWVsSe5ca5I3pymPrer5w,21861 +pip/_vendor/requests/api.py,sha256=hjuoP79IAEmX6Dysrw8t032cLfwLHxbI_wM4gC5G9t0,6402 +pip/_vendor/requests/auth.py,sha256=OMoJIVKyRLy9THr91y8rxysZuclwPB-K1Xg1zBomUhQ,10207 +pip/_vendor/requests/certs.py,sha256=nXRVq9DtGmv_1AYbwjTu9UrgAcdJv05ZvkNeaoLOZxY,465 +pip/_vendor/requests/compat.py,sha256=N1281mkcTluMjKqCSLf88LR6HNOygEhS1TbR9LLsoVY,2114 +pip/_vendor/requests/cookies.py,sha256=Y-bKX6TvW3FnYlE6Au0SXtVVWcaNdFvuAwQxw-G0iTI,18430 +pip/_vendor/requests/exceptions.py,sha256=VcpBXOL-9JYhNbK8OZxCIImBgpQSXJlUelDPf1f-pmM,3446 +pip/_vendor/requests/help.py,sha256=dyhe3lcmHXnFCzDiZVjcGmVvvO_jtsfAm-AC542ndw8,3972 +pip/_vendor/requests/hooks.py,sha256=QReGyy0bRcr5rkwCuObNakbYsc7EkiKeBwG4qHekr2Q,757 +pip/_vendor/requests/models.py,sha256=7pzscX_47qxx7-zEaBWGxMoB33Vdf6HLoUKZh1ktEvM,35116 +pip/_vendor/requests/packages.py,sha256=njJmVifY4aSctuW3PP5EFRCxjEwMRDO6J_feG2dKWsI,695 +pip/_vendor/requests/sessions.py,sha256=Zu-Y9YPlwTIsyFx1hvIrc3ziyeFpuFPqcOuSuz8BNWs,29835 +pip/_vendor/requests/status_codes.py,sha256=gT79Pbs_cQjBgp-fvrUgg1dn2DQO32bDj4TInjnMPSc,4188 +pip/_vendor/requests/structures.py,sha256=msAtr9mq1JxHd-JRyiILfdFlpbJwvvFuP3rfUQT_QxE,3005 +pip/_vendor/requests/utils.py,sha256=siud-FQ6xgKFbL49DRvAb3PMQMMHoeCL_TCmuHh9AUU,33301 +pip/_vendor/resolvelib/__init__.py,sha256=UL-B2BDI0_TRIqkfGwLHKLxY-LjBlomz7941wDqzB1I,537 +pip/_vendor/resolvelib/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/resolvelib/__pycache__/providers.cpython-310.pyc,, 
+pip/_vendor/resolvelib/__pycache__/reporters.cpython-310.pyc,, +pip/_vendor/resolvelib/__pycache__/resolvers.cpython-310.pyc,, +pip/_vendor/resolvelib/__pycache__/structs.cpython-310.pyc,, +pip/_vendor/resolvelib/compat/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_vendor/resolvelib/compat/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/resolvelib/compat/__pycache__/collections_abc.cpython-310.pyc,, +pip/_vendor/resolvelib/compat/collections_abc.py,sha256=uy8xUZ-NDEw916tugUXm8HgwCGiMO0f-RcdnpkfXfOs,156 +pip/_vendor/resolvelib/providers.py,sha256=roVmFBItQJ0TkhNua65h8LdNny7rmeqVEXZu90QiP4o,5872 +pip/_vendor/resolvelib/reporters.py,sha256=fW91NKf-lK8XN7i6Yd_rczL5QeOT3sc6AKhpaTEnP3E,1583 +pip/_vendor/resolvelib/resolvers.py,sha256=2wYzVGBGerbmcIpH8cFmgSKgLSETz8jmwBMGjCBMHG4,17592 +pip/_vendor/resolvelib/structs.py,sha256=IVIYof6sA_N4ZEiE1C1UhzTX495brCNnyCdgq6CYq28,4794 +pip/_vendor/rich/__init__.py,sha256=wF1th4JGBCVC02xfaw8j6P2MrFcJaQJL72scKtEmDYQ,5804 +pip/_vendor/rich/__main__.py,sha256=vd1PP-o7_1un-ThdgMU9LHV-D8z56yz_-fryczn38eE,8810 +pip/_vendor/rich/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/__main__.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_cell_widths.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_emoji_codes.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_emoji_replace.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_extension.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_inspect.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_log_render.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_loop.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_lru_cache.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_palettes.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_pick.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_ratio.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_spinners.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_stack.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_timer.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_windows.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/_wrap.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/abc.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/align.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/ansi.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/bar.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/box.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/cells.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/color.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/color_triplet.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/columns.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/console.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/constrain.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/containers.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/control.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/default_styles.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/diagnose.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/emoji.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/errors.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/file_proxy.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/filesize.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/highlighter.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/json.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/jupyter.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/layout.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/live.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/live_render.cpython-310.pyc,, 
+pip/_vendor/rich/__pycache__/logging.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/markup.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/measure.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/padding.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/pager.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/palette.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/panel.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/pretty.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/progress.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/progress_bar.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/prompt.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/protocol.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/region.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/repr.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/rule.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/scope.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/screen.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/segment.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/spinner.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/status.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/style.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/styled.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/syntax.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/table.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/tabulate.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/terminal_theme.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/text.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/theme.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/themes.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/traceback.cpython-310.pyc,, +pip/_vendor/rich/__pycache__/tree.cpython-310.pyc,, +pip/_vendor/rich/_cell_widths.py,sha256=2n4EiJi3X9sqIq0O16kUZ_zy6UYMd3xFfChlKfnW1Hc,10096 +pip/_vendor/rich/_emoji_codes.py,sha256=hu1VL9nbVdppJrVoijVshRlcRRe_v3dju3Mmd2sKZdY,140235 +pip/_vendor/rich/_emoji_replace.py,sha256=n-kcetsEUx2ZUmhQrfeMNc-teeGhpuSQ5F8VPBsyvDo,1064 +pip/_vendor/rich/_extension.py,sha256=Xt47QacCKwYruzjDi-gOBq724JReDj9Cm9xUi5fr-34,265 +pip/_vendor/rich/_inspect.py,sha256=vq6BjewwEvddjcBTr_lCcjYQBsKi92aTNpcXyaA5ERA,7444 +pip/_vendor/rich/_log_render.py,sha256=1ByI0PA1ZpxZY3CGJOK54hjlq4X-Bz_boIjIqCd8Kns,3225 +pip/_vendor/rich/_loop.py,sha256=hV_6CLdoPm0va22Wpw4zKqM0RYsz3TZxXj0PoS-9eDQ,1236 +pip/_vendor/rich/_lru_cache.py,sha256=M7H1ZQF32o6SxrpOur9zTIhEHlNXT9XnrcdhruUmG5I,1246 +pip/_vendor/rich/_palettes.py,sha256=cdev1JQKZ0JvlguV9ipHgznTdnvlIzUFDBb0It2PzjI,7063 +pip/_vendor/rich/_pick.py,sha256=evDt8QN4lF5CiwrUIXlOJCntitBCOsI3ZLPEIAVRLJU,423 +pip/_vendor/rich/_ratio.py,sha256=2lLSliL025Y-YMfdfGbutkQDevhcyDqc-DtUYW9mU70,5472 +pip/_vendor/rich/_spinners.py,sha256=huT1biTlwyp9Lm8S7bLfVzg1psUaIH5xHDwTaWEHVh0,26521 +pip/_vendor/rich/_stack.py,sha256=-C8OK7rxn3sIUdVwxZBBpeHhIzX0eI-VM3MemYfaXm0,351 +pip/_vendor/rich/_timer.py,sha256=zelxbT6oPFZnNrwWPpc1ktUeAT-Vc4fuFcRZLQGLtMI,417 +pip/_vendor/rich/_windows.py,sha256=nBO71icHMIHlzT7hg6fkoIdh1mT-5MvDdPDwunkshyw,2065 +pip/_vendor/rich/_wrap.py,sha256=OtnSxnERkuNlSM1d_MYtNg8KIYTcTBk3peg16dCZH_U,1804 +pip/_vendor/rich/abc.py,sha256=ON-E-ZqSSheZ88VrKX2M3PXpFbGEUUZPMa_Af0l-4f0,890 +pip/_vendor/rich/align.py,sha256=2zRHV8SzR5eP-vQkSDgjmgsBLBluCBwykgejAW6oRD0,10425 +pip/_vendor/rich/ansi.py,sha256=QaVVkfvVL6C3OsuWI9iQ-iJFkMsMohjYlxgMLnVTEPo,6676 +pip/_vendor/rich/bar.py,sha256=a7UD303BccRCrEhGjfMElpv5RFYIinaAhAuqYqhUvmw,3264 +pip/_vendor/rich/box.py,sha256=o0ywz1iW0WjGLPrRVDAZPh1CVPEgAOaWsn8Bf3sf43g,9069 
+pip/_vendor/rich/cells.py,sha256=NadN20gFxE8Aj-2S3Drn7qgn-ZpsRZcNnTNtweRL7rA,4285 +pip/_vendor/rich/color.py,sha256=SD3yTf3t8japb-jOv8GYCMCDqyzpipzXS_0rAXhSlU4,17285 +pip/_vendor/rich/color_triplet.py,sha256=3lhQkdJbvWPoLDO-AnYImAWmJvV5dlgYNCVZ97ORaN4,1054 +pip/_vendor/rich/columns.py,sha256=HUX0KcMm9dsKNi11fTbiM_h2iDtl8ySCaVcxlalEzq8,7131 +pip/_vendor/rich/console.py,sha256=bioCy8012eZ8PIOBxMyyqxYPltKk2pGEG9jmwylNCQk,81236 +pip/_vendor/rich/constrain.py,sha256=1VIPuC8AgtKWrcncQrjBdYqA3JVWysu6jZo1rrh7c7Q,1288 +pip/_vendor/rich/containers.py,sha256=aKgm5UDHn5Nmui6IJaKdsZhbHClh_X7D-_Wg8Ehrr7s,5497 +pip/_vendor/rich/control.py,sha256=qxg6Yjd78XuF0VxthlT8O4dpvpACYwKkBfm2S4-IvHA,5298 +pip/_vendor/rich/default_styles.py,sha256=At42PcWzmnYWcx5fUOKyOUpI8HK5m4ItZqxkgHToaMs,7614 +pip/_vendor/rich/diagnose.py,sha256=4L8SZfbqjIRotzJ39QzD9-d4I80FyV1mNKHryg1eArE,183 +pip/_vendor/rich/emoji.py,sha256=omTF9asaAnsM4yLY94eR_9dgRRSm1lHUszX20D1yYCQ,2501 +pip/_vendor/rich/errors.py,sha256=5pP3Kc5d4QJ_c0KFsxrfyhjiPVe7J1zOqSFbFAzcV-Y,642 +pip/_vendor/rich/file_proxy.py,sha256=fHeReSO3VJ7IbH_9ri-OrPYbFC3UYOzeTNjngiiWOcY,1613 +pip/_vendor/rich/filesize.py,sha256=oQJnM5_7ygkpzt3GtNq5l3F6gmB7YahBA5vpdQVKLwI,2511 +pip/_vendor/rich/highlighter.py,sha256=AdhjC0meTYswZ_xKgka0cRYdNjLABLUzHAbyF3QpPWo,4894 +pip/_vendor/rich/json.py,sha256=RCm4lXBXrjvXHpqrWPH8wdGP0jEo4IohLmkddlhRY18,5051 +pip/_vendor/rich/jupyter.py,sha256=4sxNAwJs4g3dYfWy_enPw9fp0Tdn-82tV4T9uh9vAOM,3025 +pip/_vendor/rich/layout.py,sha256=b64KMDP2EPiC103P-v-_VZKGY13oWiiGS418P_KRRlc,14048 +pip/_vendor/rich/live.py,sha256=OKxMaFU5sFfuR--cJftGYjSvg1VPQri1U_DNZUjCsvI,13711 +pip/_vendor/rich/live_render.py,sha256=zElm3PrfSIvjOce28zETHMIUf9pFYSUA5o0AflgUP64,3667 +pip/_vendor/rich/logging.py,sha256=YNcCSK6pCo2Wg6JKqScAe6VgFqebHBnS5nDnBO4gXAA,10868 +pip/_vendor/rich/markup.py,sha256=hsVW_k1TIvj5OPPQ12ihAii9HSVa8N1TStvA5B2GGpo,8058 +pip/_vendor/rich/measure.py,sha256=Z74XvzIgLZm0xH-QIo1uX5d4oahavHe8D8MKyxLNqPQ,5258 +pip/_vendor/rich/padding.py,sha256=kTFGsdGe0os7tXLnHKpwTI90CXEvrceeZGCshmJy5zw,4970 +pip/_vendor/rich/pager.py,sha256=VK_2EfH0JduZWdyV-KZma06bvi_V5PWmHG6W7BoiaTg,838 +pip/_vendor/rich/palette.py,sha256=lInvR1ODDT2f3UZMfL1grq7dY_pDdKHw4bdUgOGaM4Y,3396 +pip/_vendor/rich/panel.py,sha256=O6ORyIhDcOLSEasTjpcDvmhvIcppPGCeQoXpoycIUT8,8637 +pip/_vendor/rich/pretty.py,sha256=HAB68BpYysaL1EXeV4X5Tt-U2hDlcLpbFz06fkojWWE,32572 +pip/_vendor/rich/progress.py,sha256=jcgi7aMnQ_YjSpAmQkalwtNsgVn9i56SeZGprr7tuOk,35926 +pip/_vendor/rich/progress_bar.py,sha256=ELiBaxJOgsRYKpNIrot7BC0bFXvmf8cTd6nxI02BbK0,7762 +pip/_vendor/rich/prompt.py,sha256=gKVd13YWv6jedzwcRPZGUINBjC-xcJhJ_xz_NvMW80c,11307 +pip/_vendor/rich/protocol.py,sha256=Vx6n4fEoSDhzSup8t3KH0iK2RWyssIOks5E0S1qw1GA,1401 +pip/_vendor/rich/region.py,sha256=rNT9xZrVZTYIXZC0NYn41CJQwYNbR-KecPOxTgQvB8Y,166 +pip/_vendor/rich/repr.py,sha256=1A0U0_ibG_bZbw71pUBIctO9Az-CQUuyOTbiKcJOwyw,4309 +pip/_vendor/rich/rule.py,sha256=cPK6NYo4kzh-vM_8a-rXajXplsbaHa6ahErYvGSsrJ0,4197 +pip/_vendor/rich/scope.py,sha256=HX13XsJfqzQHpPfw4Jn9JmJjCsRj9uhHxXQEqjkwyLA,2842 +pip/_vendor/rich/screen.py,sha256=YoeReESUhx74grqb0mSSb9lghhysWmFHYhsbMVQjXO8,1591 +pip/_vendor/rich/segment.py,sha256=MBBAWaHyqCQFCfiNbrTW4BGaFR1uU31XktJ1S3Taqb4,23916 +pip/_vendor/rich/spinner.py,sha256=V6dW0jIk5IO0_2MyxyftQf5VjCHI0T2cRhJ4F31hPIQ,4312 +pip/_vendor/rich/status.py,sha256=gJsIXIZeSo3urOyxRUjs6VrhX5CZrA0NxIQ-dxhCnwo,4425 +pip/_vendor/rich/style.py,sha256=AD1I7atfclsFCtGeL8ronH1Jj-02WLp9ZQ2VYqmpBjM,26469 
+pip/_vendor/rich/styled.py,sha256=eZNnzGrI4ki_54pgY3Oj0T-x3lxdXTYh4_ryDB24wBU,1258 +pip/_vendor/rich/syntax.py,sha256=pJAD08ywowg5xVwTGCqUOMpDYskjoMoDYEV-hryEX5s,26994 +pip/_vendor/rich/table.py,sha256=oQAEBaV4zMUPyg_tSA93_GrCirdIf-osolxf9wb3pEo,36757 +pip/_vendor/rich/tabulate.py,sha256=nl0oeNbiXectEgTHyj3K7eN4NZMISpaogpOdZyEOGbs,1700 +pip/_vendor/rich/terminal_theme.py,sha256=E0nI_ycFpvflamt-KVCY4J52LmUjRi1Y6ICB-Ef3gMo,1459 +pip/_vendor/rich/text.py,sha256=auX3LpY-I6PBiNyxB3o3LyMEx7lna2cx9IbNQJDwtw8,44424 +pip/_vendor/rich/theme.py,sha256=GKNtQhDBZKAzDaY0vQVQQFzbc0uWfFe6CJXA-syT7zQ,3627 +pip/_vendor/rich/themes.py,sha256=0xgTLozfabebYtcJtDdC5QkX5IVUEaviqDUJJh4YVFk,102 +pip/_vendor/rich/traceback.py,sha256=hAU3IR295eFuup_px2NU4aCEWu7KQs1qpZbnqoHCtR0,25935 +pip/_vendor/rich/tree.py,sha256=JxyWbc27ZuwoLQnd7I-rSsRsqI9lzaVKlfTLJXla9U0,9122 +pip/_vendor/six.py,sha256=TOOfQi7nFGfMrIvtdr6wX4wyHH8M7aknmuLfo2cBBrM,34549 +pip/_vendor/tenacity/__init__.py,sha256=GLLsTFD4Bd5VDgTR6mU_FxyOsrxc48qONorVaRebeD4,18257 +pip/_vendor/tenacity/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/tenacity/__pycache__/_asyncio.cpython-310.pyc,, +pip/_vendor/tenacity/__pycache__/_utils.cpython-310.pyc,, +pip/_vendor/tenacity/__pycache__/after.cpython-310.pyc,, +pip/_vendor/tenacity/__pycache__/before.cpython-310.pyc,, +pip/_vendor/tenacity/__pycache__/before_sleep.cpython-310.pyc,, +pip/_vendor/tenacity/__pycache__/nap.cpython-310.pyc,, +pip/_vendor/tenacity/__pycache__/retry.cpython-310.pyc,, +pip/_vendor/tenacity/__pycache__/stop.cpython-310.pyc,, +pip/_vendor/tenacity/__pycache__/tornadoweb.cpython-310.pyc,, +pip/_vendor/tenacity/__pycache__/wait.cpython-310.pyc,, +pip/_vendor/tenacity/_asyncio.py,sha256=HEb0BVJEeBJE9P-m9XBxh1KcaF96BwoeqkJCL5sbVcQ,3314 +pip/_vendor/tenacity/_utils.py,sha256=-y68scDcyoqvTJuJJ0GTfjdSCljEYlbCYvgk7nM4NdM,1944 +pip/_vendor/tenacity/after.py,sha256=dlmyxxFy2uqpLXDr838DiEd7jgv2AGthsWHGYcGYsaI,1496 +pip/_vendor/tenacity/before.py,sha256=7XtvRmO0dRWUp8SVn24OvIiGFj8-4OP5muQRUiWgLh0,1376 +pip/_vendor/tenacity/before_sleep.py,sha256=ThyDvqKU5yle_IvYQz_b6Tp6UjUS0PhVp6zgqYl9U6Y,1908 +pip/_vendor/tenacity/nap.py,sha256=fRWvnz1aIzbIq9Ap3gAkAZgDH6oo5zxMrU6ZOVByq0I,1383 +pip/_vendor/tenacity/retry.py,sha256=62R71W59bQjuNyFKsDM7hE2aEkEPtwNBRA0tnsEvgSk,6645 +pip/_vendor/tenacity/stop.py,sha256=sKHmHaoSaW6sKu3dTxUVKr1-stVkY7lw4Y9yjZU30zQ,2790 +pip/_vendor/tenacity/tornadoweb.py,sha256=E8lWO2nwe6dJgoB-N2HhQprYLDLB_UdSgFnv-EN6wKE,2145 +pip/_vendor/tenacity/wait.py,sha256=e_Saa6I2tsNLpCL1t9897wN2fGb0XQMQlE4bU2t9V2w,6691 +pip/_vendor/tomli/__init__.py,sha256=z1Elt0nLAqU5Y0DOn9p__8QnLWavlEOpRyQikdYgKro,230 +pip/_vendor/tomli/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/tomli/__pycache__/_parser.cpython-310.pyc,, +pip/_vendor/tomli/__pycache__/_re.cpython-310.pyc,, +pip/_vendor/tomli/_parser.py,sha256=50BD4o9YbzFAGAYyZLqZC8F81DQ7iWWyJnrHNwBKa6A,22415 +pip/_vendor/tomli/_re.py,sha256=5GPfgXKteg7wRFCF-DzlkAPI2ilHbkMK2-JC49F-AJQ,2681 +pip/_vendor/typing_extensions.py,sha256=1uqi_RSlI7gos4eJB_NEV3d5wQwzTUQHd3_jrkbTo8Q,87149 +pip/_vendor/urllib3/__init__.py,sha256=j3yzHIbmW7CS-IKQJ9-PPQf_YKO8EOAey_rMW0UR7us,2763 +pip/_vendor/urllib3/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/urllib3/__pycache__/_collections.cpython-310.pyc,, +pip/_vendor/urllib3/__pycache__/_version.cpython-310.pyc,, +pip/_vendor/urllib3/__pycache__/connection.cpython-310.pyc,, +pip/_vendor/urllib3/__pycache__/connectionpool.cpython-310.pyc,, +pip/_vendor/urllib3/__pycache__/exceptions.cpython-310.pyc,, 
+pip/_vendor/urllib3/__pycache__/fields.cpython-310.pyc,, +pip/_vendor/urllib3/__pycache__/filepost.cpython-310.pyc,, +pip/_vendor/urllib3/__pycache__/poolmanager.cpython-310.pyc,, +pip/_vendor/urllib3/__pycache__/request.cpython-310.pyc,, +pip/_vendor/urllib3/__pycache__/response.cpython-310.pyc,, +pip/_vendor/urllib3/_collections.py,sha256=pyASJJhW7wdOpqJj9QJA8FyGRfr8E8uUUhqUvhF0728,11372 +pip/_vendor/urllib3/_version.py,sha256=_NdMUQaeBvFHAX2z3zAIX2Wum58A6rVtY1f7ByHsQ4g,63 +pip/_vendor/urllib3/connection.py,sha256=6zokyboYYKm9VkyrQvVVLgxMyCZK7n9Vmg_2ZK6pbhc,20076 +pip/_vendor/urllib3/connectionpool.py,sha256=eQ1jWJ2dDdRADuCj9Yx7RCpzY2iM8P32jGHbjYBkAIk,39308 +pip/_vendor/urllib3/contrib/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_vendor/urllib3/contrib/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/urllib3/contrib/__pycache__/_appengine_environ.cpython-310.pyc,, +pip/_vendor/urllib3/contrib/__pycache__/appengine.cpython-310.pyc,, +pip/_vendor/urllib3/contrib/__pycache__/ntlmpool.cpython-310.pyc,, +pip/_vendor/urllib3/contrib/__pycache__/pyopenssl.cpython-310.pyc,, +pip/_vendor/urllib3/contrib/__pycache__/securetransport.cpython-310.pyc,, +pip/_vendor/urllib3/contrib/__pycache__/socks.cpython-310.pyc,, +pip/_vendor/urllib3/contrib/_appengine_environ.py,sha256=bDbyOEhW2CKLJcQqAKAyrEHN-aklsyHFKq6vF8ZFsmk,957 +pip/_vendor/urllib3/contrib/_securetransport/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_vendor/urllib3/contrib/_securetransport/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/urllib3/contrib/_securetransport/__pycache__/bindings.cpython-310.pyc,, +pip/_vendor/urllib3/contrib/_securetransport/__pycache__/low_level.cpython-310.pyc,, +pip/_vendor/urllib3/contrib/_securetransport/bindings.py,sha256=4Xk64qIkPBt09A5q-RIFUuDhNc9mXilVapm7WnYnzRw,17632 +pip/_vendor/urllib3/contrib/_securetransport/low_level.py,sha256=B2JBB2_NRP02xK6DCa1Pa9IuxrPwxzDzZbixQkb7U9M,13922 +pip/_vendor/urllib3/contrib/appengine.py,sha256=lfzpHFmJiO82shClLEm3QB62SYgHWnjpZOH_2JhU5Tc,11034 +pip/_vendor/urllib3/contrib/ntlmpool.py,sha256=ej9gGvfAb2Gt00lafFp45SIoRz-QwrQ4WChm6gQmAlM,4538 +pip/_vendor/urllib3/contrib/pyopenssl.py,sha256=DD4pInv_3OEEGffEFynBoirc8ldR789sLmGSKukzA0E,16900 +pip/_vendor/urllib3/contrib/securetransport.py,sha256=4qUKo7PUV-vVIqXmr2BD-sH7qplB918jiD5eNsRI9vU,34449 +pip/_vendor/urllib3/contrib/socks.py,sha256=aRi9eWXo9ZEb95XUxef4Z21CFlnnjbEiAo9HOseoMt4,7097 +pip/_vendor/urllib3/exceptions.py,sha256=0Mnno3KHTNfXRfY7638NufOPkUb6mXOm-Lqj-4x2w8A,8217 +pip/_vendor/urllib3/fields.py,sha256=kvLDCg_JmH1lLjUUEY_FLS8UhY7hBvDPuVETbY8mdrM,8579 +pip/_vendor/urllib3/filepost.py,sha256=5b_qqgRHVlL7uLtdAYBzBh-GHmU5AfJVt_2N0XS3PeY,2440 +pip/_vendor/urllib3/packages/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_vendor/urllib3/packages/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/urllib3/packages/__pycache__/six.cpython-310.pyc,, +pip/_vendor/urllib3/packages/backports/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pip/_vendor/urllib3/packages/backports/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/urllib3/packages/backports/__pycache__/makefile.cpython-310.pyc,, +pip/_vendor/urllib3/packages/backports/makefile.py,sha256=nbzt3i0agPVP07jqqgjhaYjMmuAi_W5E0EywZivVO8E,1417 +pip/_vendor/urllib3/packages/six.py,sha256=1LVW7ljqRirFlfExjwl-v1B7vSAUNTmzGMs-qays2zg,34666 +pip/_vendor/urllib3/poolmanager.py,sha256=xfVcBtEBc8Xwa8jURSqdS7QmXvUuMHhjL1sjFOY-rUk,20001 
+pip/_vendor/urllib3/request.py,sha256=ZFSIqX0C6WizixecChZ3_okyu7BEv0lZu1VT0s6h4SM,5985 +pip/_vendor/urllib3/response.py,sha256=hGhGBh7TkEkh_IQg5C1W_xuPNrgIKv5BUXPyE-q0LuE,28203 +pip/_vendor/urllib3/util/__init__.py,sha256=JEmSmmqqLyaw8P51gUImZh8Gwg9i1zSe-DoqAitn2nc,1155 +pip/_vendor/urllib3/util/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/connection.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/proxy.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/queue.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/request.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/response.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/retry.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/ssl_.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/ssl_match_hostname.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/ssltransport.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/timeout.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/url.cpython-310.pyc,, +pip/_vendor/urllib3/util/__pycache__/wait.cpython-310.pyc,, +pip/_vendor/urllib3/util/connection.py,sha256=5Lx2B1PW29KxBn2T0xkN1CBgRBa3gGVJBKoQoRogEVk,4901 +pip/_vendor/urllib3/util/proxy.py,sha256=zUvPPCJrp6dOF0N4GAVbOcl6o-4uXKSrGiTkkr5vUS4,1605 +pip/_vendor/urllib3/util/queue.py,sha256=nRgX8_eX-_VkvxoX096QWoz8Ps0QHUAExILCY_7PncM,498 +pip/_vendor/urllib3/util/request.py,sha256=NnzaEKQ1Pauw5MFMV6HmgEMHITf0Aua9fQuzi2uZzGc,4123 +pip/_vendor/urllib3/util/response.py,sha256=GJpg3Egi9qaJXRwBh5wv-MNuRWan5BIu40oReoxWP28,3510 +pip/_vendor/urllib3/util/retry.py,sha256=eUKOZ16Ya_Tu3_sXF5KVhLJmHQF7YXOCX-MWRoZVzqs,22011 +pip/_vendor/urllib3/util/ssl_.py,sha256=X4-AqW91aYPhPx6-xbf66yHFQKbqqfC_5Zt4WkLX1Hc,17177 +pip/_vendor/urllib3/util/ssl_match_hostname.py,sha256=w01jCYuwvQ038p9mhc1P1gF8IiTN1qHakThpoukOlbw,5751 +pip/_vendor/urllib3/util/ssltransport.py,sha256=NA-u5rMTrDFDFC8QzRKUEKMG0561hOD4qBTr3Z4pv6E,6895 +pip/_vendor/urllib3/util/timeout.py,sha256=QSbBUNOB9yh6AnDn61SrLQ0hg5oz0I9-uXEG91AJuIg,10003 +pip/_vendor/urllib3/util/url.py,sha256=QVEzcbHipbXyCWwH6R4K4TR-N8T4LM55WEMwNUTBmLE,14047 +pip/_vendor/urllib3/util/wait.py,sha256=3MUKRSAUJDB2tgco7qRUskW0zXGAWYvRRE4Q1_6xlLs,5404 +pip/_vendor/vendor.txt,sha256=H-9fScoah7nx4K8O4Uft0l5iH2P_mVo4RqyuMVOTJEc,496 +pip/_vendor/webencodings/__init__.py,sha256=qOBJIuPy_4ByYH6W_bNgJF-qYQ2DoU-dKsDu5yRWCXg,10579 +pip/_vendor/webencodings/__pycache__/__init__.cpython-310.pyc,, +pip/_vendor/webencodings/__pycache__/labels.cpython-310.pyc,, +pip/_vendor/webencodings/__pycache__/mklabels.cpython-310.pyc,, +pip/_vendor/webencodings/__pycache__/tests.cpython-310.pyc,, +pip/_vendor/webencodings/__pycache__/x_user_defined.cpython-310.pyc,, +pip/_vendor/webencodings/labels.py,sha256=4AO_KxTddqGtrL9ns7kAPjb0CcN6xsCIxbK37HY9r3E,8979 +pip/_vendor/webencodings/mklabels.py,sha256=GYIeywnpaLnP0GSic8LFWgd0UVvO_l1Nc6YoF-87R_4,1305 +pip/_vendor/webencodings/tests.py,sha256=OtGLyjhNY1fvkW1GvLJ_FV9ZoqC9Anyjr7q3kxTbzNs,6563 +pip/_vendor/webencodings/x_user_defined.py,sha256=yOqWSdmpytGfUgh_Z6JYgDNhoc-BAHyyeeT15Fr42tM,4307 +pip/py.typed,sha256=EBVvvPRTn_eIpz5e5QztSCdrMX7Qwd7VP93RSoIlZ2I,286 diff --git a/lib/python3.11/site-packages/pycryptodomex-3.18.0.dist-info/REQUESTED b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/REQUESTED similarity index 100% rename from lib/python3.11/site-packages/pycryptodomex-3.18.0.dist-info/REQUESTED rename to python/lib/python3.10/site-packages/pip-22.0.2.dist-info/REQUESTED diff --git 
a/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/WHEEL b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/WHEEL new file mode 100644 index 0000000..becc9a6 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/entry_points.txt b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/entry_points.txt new file mode 100644 index 0000000..c4ad521 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/entry_points.txt @@ -0,0 +1,5 @@ +[console_scripts] +pip = pip._internal.cli.main:main +pip3 = pip._internal.cli.main:main +pip3.10 = pip._internal.cli.main:main + diff --git a/lib/python3.11/site-packages/pip-22.3.1.dist-info/top_level.txt b/python/lib/python3.10/site-packages/pip-22.0.2.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/pip-22.3.1.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/pip-22.0.2.dist-info/top_level.txt diff --git a/python/lib/python3.10/site-packages/pip/__init__.py b/python/lib/python3.10/site-packages/pip/__init__.py new file mode 100644 index 0000000..8a50472 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/__init__.py @@ -0,0 +1,13 @@ +from typing import List, Optional + +__version__ = "22.0.2" + + +def main(args: Optional[List[str]] = None) -> int: + """This is an internal API only meant for use by pip's own console scripts. + + For additional details, see https://github.com/pypa/pip/issues/7498. + """ + from pip._internal.utils.entrypoints import _wrapper + + return _wrapper(args) diff --git a/lib/python3.11/site-packages/pip/__main__.py b/python/lib/python3.10/site-packages/pip/__main__.py similarity index 100% rename from lib/python3.11/site-packages/pip/__main__.py rename to python/lib/python3.10/site-packages/pip/__main__.py diff --git a/lib/site-packages/pip/_internal/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/__init__.py similarity index 100% rename from lib/site-packages/pip/_internal/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/__init__.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/build_env.py b/python/lib/python3.10/site-packages/pip/_internal/build_env.py new file mode 100644 index 0000000..daeb7fb --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/build_env.py @@ -0,0 +1,296 @@ +"""Build Environment used for isolation during sdist building +""" + +import contextlib +import logging +import os +import pathlib +import sys +import textwrap +import zipfile +from collections import OrderedDict +from sysconfig import get_paths +from types import TracebackType +from typing import TYPE_CHECKING, Iterable, Iterator, List, Optional, Set, Tuple, Type + +from pip._vendor.certifi import where +from pip._vendor.packaging.requirements import Requirement +from pip._vendor.packaging.version import Version + +from pip import __file__ as pip_location +from pip._internal.cli.spinners import open_spinner +from pip._internal.locations import get_platlib, get_prefixed_libs, get_purelib +from pip._internal.metadata import get_environment +from pip._internal.utils.subprocess import call_subprocess +from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds + +if TYPE_CHECKING: + from pip._internal.index.package_finder import PackageFinder + +logger = 
logging.getLogger(__name__) + + +class _Prefix: + def __init__(self, path: str) -> None: + self.path = path + self.setup = False + self.bin_dir = get_paths( + "nt" if os.name == "nt" else "posix_prefix", + vars={"base": path, "platbase": path}, + )["scripts"] + self.lib_dirs = get_prefixed_libs(path) + + +@contextlib.contextmanager +def _create_standalone_pip() -> Iterator[str]: + """Create a "standalone pip" zip file. + + The zip file's content is identical to the currently-running pip. + It will be used to install requirements into the build environment. + """ + source = pathlib.Path(pip_location).resolve().parent + + # Return the current instance if `source` is not a directory. We can't build + # a zip from this, and it likely means the instance is already standalone. + if not source.is_dir(): + yield str(source) + return + + with TempDirectory(kind="standalone-pip") as tmp_dir: + pip_zip = os.path.join(tmp_dir.path, "__env_pip__.zip") + kwargs = {} + if sys.version_info >= (3, 8): + kwargs["strict_timestamps"] = False + with zipfile.ZipFile(pip_zip, "w", **kwargs) as zf: + for child in source.rglob("*"): + zf.write(child, child.relative_to(source.parent).as_posix()) + yield os.path.join(pip_zip, "pip") + + +class BuildEnvironment: + """Creates and manages an isolated environment to install build deps""" + + def __init__(self) -> None: + temp_dir = TempDirectory(kind=tempdir_kinds.BUILD_ENV, globally_managed=True) + + self._prefixes = OrderedDict( + (name, _Prefix(os.path.join(temp_dir.path, name))) + for name in ("normal", "overlay") + ) + + self._bin_dirs: List[str] = [] + self._lib_dirs: List[str] = [] + for prefix in reversed(list(self._prefixes.values())): + self._bin_dirs.append(prefix.bin_dir) + self._lib_dirs.extend(prefix.lib_dirs) + + # Customize site to: + # - ensure .pth files are honored + # - prevent access to system site packages + system_sites = { + os.path.normcase(site) for site in (get_purelib(), get_platlib()) + } + self._site_dir = os.path.join(temp_dir.path, "site") + if not os.path.exists(self._site_dir): + os.mkdir(self._site_dir) + with open( + os.path.join(self._site_dir, "sitecustomize.py"), "w", encoding="utf-8" + ) as fp: + fp.write( + textwrap.dedent( + """ + import os, site, sys + + # First, drop system-sites related paths. + original_sys_path = sys.path[:] + known_paths = set() + for path in {system_sites!r}: + site.addsitedir(path, known_paths=known_paths) + system_paths = set( + os.path.normcase(path) + for path in sys.path[len(original_sys_path):] + ) + original_sys_path = [ + path for path in original_sys_path + if os.path.normcase(path) not in system_paths + ] + sys.path = original_sys_path + + # Second, add lib directories. + # ensuring .pth file are processed. 
+ for path in {lib_dirs!r}: + assert not path in sys.path + site.addsitedir(path) + """ + ).format(system_sites=system_sites, lib_dirs=self._lib_dirs) + ) + + def __enter__(self) -> None: + self._save_env = { + name: os.environ.get(name, None) + for name in ("PATH", "PYTHONNOUSERSITE", "PYTHONPATH") + } + + path = self._bin_dirs[:] + old_path = self._save_env["PATH"] + if old_path: + path.extend(old_path.split(os.pathsep)) + + pythonpath = [self._site_dir] + + os.environ.update( + { + "PATH": os.pathsep.join(path), + "PYTHONNOUSERSITE": "1", + "PYTHONPATH": os.pathsep.join(pythonpath), + } + ) + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + for varname, old_value in self._save_env.items(): + if old_value is None: + os.environ.pop(varname, None) + else: + os.environ[varname] = old_value + + def check_requirements( + self, reqs: Iterable[str] + ) -> Tuple[Set[Tuple[str, str]], Set[str]]: + """Return 2 sets: + - conflicting requirements: set of (installed, wanted) reqs tuples + - missing requirements: set of reqs + """ + missing = set() + conflicting = set() + if reqs: + env = get_environment(self._lib_dirs) + for req_str in reqs: + req = Requirement(req_str) + dist = env.get_distribution(req.name) + if not dist: + missing.add(req_str) + continue + if isinstance(dist.version, Version): + installed_req_str = f"{req.name}=={dist.version}" + else: + installed_req_str = f"{req.name}==={dist.version}" + if dist.version not in req.specifier: + conflicting.add((installed_req_str, req_str)) + # FIXME: Consider direct URL? + return conflicting, missing + + def install_requirements( + self, + finder: "PackageFinder", + requirements: Iterable[str], + prefix_as_string: str, + *, + kind: str, + ) -> None: + prefix = self._prefixes[prefix_as_string] + assert not prefix.setup + prefix.setup = True + if not requirements: + return + with contextlib.ExitStack() as ctx: + pip_runnable = ctx.enter_context(_create_standalone_pip()) + self._install_requirements( + pip_runnable, + finder, + requirements, + prefix, + kind=kind, + ) + + @staticmethod + def _install_requirements( + pip_runnable: str, + finder: "PackageFinder", + requirements: Iterable[str], + prefix: _Prefix, + *, + kind: str, + ) -> None: + args: List[str] = [ + sys.executable, + pip_runnable, + "install", + "--ignore-installed", + "--no-user", + "--prefix", + prefix.path, + "--no-warn-script-location", + ] + if logger.getEffectiveLevel() <= logging.DEBUG: + args.append("-v") + for format_control in ("no_binary", "only_binary"): + formats = getattr(finder.format_control, format_control) + args.extend( + ( + "--" + format_control.replace("_", "-"), + ",".join(sorted(formats or {":none:"})), + ) + ) + + index_urls = finder.index_urls + if index_urls: + args.extend(["-i", index_urls[0]]) + for extra_index in index_urls[1:]: + args.extend(["--extra-index-url", extra_index]) + else: + args.append("--no-index") + for link in finder.find_links: + args.extend(["--find-links", link]) + + for host in finder.trusted_hosts: + args.extend(["--trusted-host", host]) + if finder.allow_all_prereleases: + args.append("--pre") + if finder.prefer_binary: + args.append("--prefer-binary") + args.append("--") + args.extend(requirements) + extra_environ = {"_PIP_STANDALONE_CERT": where()} + with open_spinner(f"Installing {kind}") as spinner: + call_subprocess( + args, + command_desc=f"pip subprocess to install {kind}", + spinner=spinner, + extra_environ=extra_environ, 
+ ) + + +class NoOpBuildEnvironment(BuildEnvironment): + """A no-op drop-in replacement for BuildEnvironment""" + + def __init__(self) -> None: + pass + + def __enter__(self) -> None: + pass + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + pass + + def cleanup(self) -> None: + pass + + def install_requirements( + self, + finder: "PackageFinder", + requirements: Iterable[str], + prefix_as_string: str, + *, + kind: str, + ) -> None: + raise NotImplementedError() diff --git a/python/lib/python3.10/site-packages/pip/_internal/cache.py b/python/lib/python3.10/site-packages/pip/_internal/cache.py new file mode 100644 index 0000000..1d6df22 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/cache.py @@ -0,0 +1,264 @@ +"""Cache Management +""" + +import hashlib +import json +import logging +import os +from typing import Any, Dict, List, Optional, Set + +from pip._vendor.packaging.tags import Tag, interpreter_name, interpreter_version +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.exceptions import InvalidWheelFilename +from pip._internal.models.format_control import FormatControl +from pip._internal.models.link import Link +from pip._internal.models.wheel import Wheel +from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds +from pip._internal.utils.urls import path_to_url + +logger = logging.getLogger(__name__) + + +def _hash_dict(d: Dict[str, str]) -> str: + """Return a stable sha224 of a dictionary.""" + s = json.dumps(d, sort_keys=True, separators=(",", ":"), ensure_ascii=True) + return hashlib.sha224(s.encode("ascii")).hexdigest() + + +class Cache: + """An abstract class - provides cache directories for data from links + + + :param cache_dir: The root of the cache. + :param format_control: An object of FormatControl class to limit + binaries being read from the cache. + :param allowed_formats: which formats of files the cache should store. + ('binary' and 'source' are the only allowed values) + """ + + def __init__( + self, cache_dir: str, format_control: FormatControl, allowed_formats: Set[str] + ) -> None: + super().__init__() + assert not cache_dir or os.path.isabs(cache_dir) + self.cache_dir = cache_dir or None + self.format_control = format_control + self.allowed_formats = allowed_formats + + _valid_formats = {"source", "binary"} + assert self.allowed_formats.union(_valid_formats) == _valid_formats + + def _get_cache_path_parts(self, link: Link) -> List[str]: + """Get parts of part that must be os.path.joined with cache_dir""" + + # We want to generate an url to use as our cache key, we don't want to + # just re-use the URL because it might have other items in the fragment + # and we don't care about those. + key_parts = {"url": link.url_without_fragment} + if link.hash_name is not None and link.hash is not None: + key_parts[link.hash_name] = link.hash + if link.subdirectory_fragment: + key_parts["subdirectory"] = link.subdirectory_fragment + + # Include interpreter name, major and minor version in cache key + # to cope with ill-behaved sdists that build a different wheel + # depending on the python version their setup.py is being run on, + # and don't encode the difference in compatibility tags. 
+ # https://github.com/pypa/pip/issues/7296 + key_parts["interpreter_name"] = interpreter_name() + key_parts["interpreter_version"] = interpreter_version() + + # Encode our key url with sha224, we'll use this because it has similar + # security properties to sha256, but with a shorter total output (and + # thus less secure). However the differences don't make a lot of + # difference for our use case here. + hashed = _hash_dict(key_parts) + + # We want to nest the directories some to prevent having a ton of top + # level directories where we might run out of sub directories on some + # FS. + parts = [hashed[:2], hashed[2:4], hashed[4:6], hashed[6:]] + + return parts + + def _get_candidates(self, link: Link, canonical_package_name: str) -> List[Any]: + can_not_cache = not self.cache_dir or not canonical_package_name or not link + if can_not_cache: + return [] + + formats = self.format_control.get_allowed_formats(canonical_package_name) + if not self.allowed_formats.intersection(formats): + return [] + + candidates = [] + path = self.get_path_for_link(link) + if os.path.isdir(path): + for candidate in os.listdir(path): + candidates.append((candidate, path)) + return candidates + + def get_path_for_link(self, link: Link) -> str: + """Return a directory to store cached items in for link.""" + raise NotImplementedError() + + def get( + self, + link: Link, + package_name: Optional[str], + supported_tags: List[Tag], + ) -> Link: + """Returns a link to a cached item if it exists, otherwise returns the + passed link. + """ + raise NotImplementedError() + + +class SimpleWheelCache(Cache): + """A cache of wheels for future installs.""" + + def __init__(self, cache_dir: str, format_control: FormatControl) -> None: + super().__init__(cache_dir, format_control, {"binary"}) + + def get_path_for_link(self, link: Link) -> str: + """Return a directory to store cached wheels for link + + Because there are M wheels for any one sdist, we provide a directory + to cache them in, and then consult that directory when looking up + cache hits. + + We only insert things into the cache if they have plausible version + numbers, so that we don't contaminate the cache with things that were + not unique. E.g. ./package might have dozens of installs done for it + and build a version of 0.0...and if we built and cached a wheel, we'd + end up using the same wheel even if the source has been edited. + + :param link: The link of the sdist for which this will cache wheels. 
+ """ + parts = self._get_cache_path_parts(link) + assert self.cache_dir + # Store wheels within the root cache_dir + return os.path.join(self.cache_dir, "wheels", *parts) + + def get( + self, + link: Link, + package_name: Optional[str], + supported_tags: List[Tag], + ) -> Link: + candidates = [] + + if not package_name: + return link + + canonical_package_name = canonicalize_name(package_name) + for wheel_name, wheel_dir in self._get_candidates(link, canonical_package_name): + try: + wheel = Wheel(wheel_name) + except InvalidWheelFilename: + continue + if canonicalize_name(wheel.name) != canonical_package_name: + logger.debug( + "Ignoring cached wheel %s for %s as it " + "does not match the expected distribution name %s.", + wheel_name, + link, + package_name, + ) + continue + if not wheel.supported(supported_tags): + # Built for a different python/arch/etc + continue + candidates.append( + ( + wheel.support_index_min(supported_tags), + wheel_name, + wheel_dir, + ) + ) + + if not candidates: + return link + + _, wheel_name, wheel_dir = min(candidates) + return Link(path_to_url(os.path.join(wheel_dir, wheel_name))) + + +class EphemWheelCache(SimpleWheelCache): + """A SimpleWheelCache that creates it's own temporary cache directory""" + + def __init__(self, format_control: FormatControl) -> None: + self._temp_dir = TempDirectory( + kind=tempdir_kinds.EPHEM_WHEEL_CACHE, + globally_managed=True, + ) + + super().__init__(self._temp_dir.path, format_control) + + +class CacheEntry: + def __init__( + self, + link: Link, + persistent: bool, + ): + self.link = link + self.persistent = persistent + + +class WheelCache(Cache): + """Wraps EphemWheelCache and SimpleWheelCache into a single Cache + + This Cache allows for gracefully degradation, using the ephem wheel cache + when a certain link is not found in the simple wheel cache first. + """ + + def __init__(self, cache_dir: str, format_control: FormatControl) -> None: + super().__init__(cache_dir, format_control, {"binary"}) + self._wheel_cache = SimpleWheelCache(cache_dir, format_control) + self._ephem_cache = EphemWheelCache(format_control) + + def get_path_for_link(self, link: Link) -> str: + return self._wheel_cache.get_path_for_link(link) + + def get_ephem_path_for_link(self, link: Link) -> str: + return self._ephem_cache.get_path_for_link(link) + + def get( + self, + link: Link, + package_name: Optional[str], + supported_tags: List[Tag], + ) -> Link: + cache_entry = self.get_cache_entry(link, package_name, supported_tags) + if cache_entry is None: + return link + return cache_entry.link + + def get_cache_entry( + self, + link: Link, + package_name: Optional[str], + supported_tags: List[Tag], + ) -> Optional[CacheEntry]: + """Returns a CacheEntry with a link to a cached item if it exists or + None. The cache entry indicates if the item was found in the persistent + or ephemeral cache. 
+ """ + retval = self._wheel_cache.get( + link=link, + package_name=package_name, + supported_tags=supported_tags, + ) + if retval is not link: + return CacheEntry(retval, persistent=True) + + retval = self._ephem_cache.get( + link=link, + package_name=package_name, + supported_tags=supported_tags, + ) + if retval is not link: + return CacheEntry(retval, persistent=False) + + return None diff --git a/lib/python3.11/site-packages/pip/_internal/cli/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/cli/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/cli/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/cli/__init__.py diff --git a/lib/python3.11/site-packages/pip/_internal/cli/autocompletion.py b/python/lib/python3.10/site-packages/pip/_internal/cli/autocompletion.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/cli/autocompletion.py rename to python/lib/python3.10/site-packages/pip/_internal/cli/autocompletion.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/cli/base_command.py b/python/lib/python3.10/site-packages/pip/_internal/cli/base_command.py new file mode 100644 index 0000000..f5dc0fe --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/cli/base_command.py @@ -0,0 +1,220 @@ +"""Base Command class, and related routines""" + +import functools +import logging +import logging.config +import optparse +import os +import sys +import traceback +from optparse import Values +from typing import Any, Callable, List, Optional, Tuple + +from pip._internal.cli import cmdoptions +from pip._internal.cli.command_context import CommandContextMixIn +from pip._internal.cli.parser import ConfigOptionParser, UpdatingDefaultsHelpFormatter +from pip._internal.cli.status_codes import ( + ERROR, + PREVIOUS_BUILD_DIR_ERROR, + UNKNOWN_ERROR, + VIRTUALENV_NOT_FOUND, +) +from pip._internal.exceptions import ( + BadCommand, + CommandError, + DiagnosticPipError, + InstallationError, + NetworkConnectionError, + PreviousBuildDirError, + UninstallationError, +) +from pip._internal.utils.filesystem import check_path_owner +from pip._internal.utils.logging import BrokenStdoutLoggingError, setup_logging +from pip._internal.utils.misc import get_prog, normalize_path +from pip._internal.utils.temp_dir import TempDirectoryTypeRegistry as TempDirRegistry +from pip._internal.utils.temp_dir import global_tempdir_manager, tempdir_registry +from pip._internal.utils.virtualenv import running_under_virtualenv + +__all__ = ["Command"] + +logger = logging.getLogger(__name__) + + +class Command(CommandContextMixIn): + usage: str = "" + ignore_require_venv: bool = False + + def __init__(self, name: str, summary: str, isolated: bool = False) -> None: + super().__init__() + + self.name = name + self.summary = summary + self.parser = ConfigOptionParser( + usage=self.usage, + prog=f"{get_prog()} {name}", + formatter=UpdatingDefaultsHelpFormatter(), + add_help_option=False, + name=name, + description=self.__doc__, + isolated=isolated, + ) + + self.tempdir_registry: Optional[TempDirRegistry] = None + + # Commands should add options to this option group + optgroup_name = f"{self.name.capitalize()} Options" + self.cmd_opts = optparse.OptionGroup(self.parser, optgroup_name) + + # Add the general options + gen_opts = cmdoptions.make_option_group( + cmdoptions.general_group, + self.parser, + ) + self.parser.add_option_group(gen_opts) + + self.add_options() + + def add_options(self) -> None: + pass + + def 
handle_pip_version_check(self, options: Values) -> None: + """ + This is a no-op so that commands by default do not do the pip version + check. + """ + # Make sure we do the pip version check if the index_group options + # are present. + assert not hasattr(options, "no_index") + + def run(self, options: Values, args: List[str]) -> int: + raise NotImplementedError + + def parse_args(self, args: List[str]) -> Tuple[Values, List[str]]: + # factored out for testability + return self.parser.parse_args(args) + + def main(self, args: List[str]) -> int: + try: + with self.main_context(): + return self._main(args) + finally: + logging.shutdown() + + def _main(self, args: List[str]) -> int: + # We must initialize this before the tempdir manager, otherwise the + # configuration would not be accessible by the time we clean up the + # tempdir manager. + self.tempdir_registry = self.enter_context(tempdir_registry()) + # Intentionally set as early as possible so globally-managed temporary + # directories are available to the rest of the code. + self.enter_context(global_tempdir_manager()) + + options, args = self.parse_args(args) + + # Set verbosity so that it can be used elsewhere. + self.verbosity = options.verbose - options.quiet + + level_number = setup_logging( + verbosity=self.verbosity, + no_color=options.no_color, + user_log_file=options.log, + ) + + # TODO: Try to get these passing down from the command? + # without resorting to os.environ to hold these. + # This also affects isolated builds and it should. + + if options.no_input: + os.environ["PIP_NO_INPUT"] = "1" + + if options.exists_action: + os.environ["PIP_EXISTS_ACTION"] = " ".join(options.exists_action) + + if options.require_venv and not self.ignore_require_venv: + # If a venv is required check if it can really be found + if not running_under_virtualenv(): + logger.critical("Could not find an activated virtualenv (required).") + sys.exit(VIRTUALENV_NOT_FOUND) + + if options.cache_dir: + options.cache_dir = normalize_path(options.cache_dir) + if not check_path_owner(options.cache_dir): + logger.warning( + "The directory '%s' or its parent directory is not owned " + "or is not writable by the current user. The cache " + "has been disabled. Check the permissions and owner of " + "that directory. If executing pip with sudo, you should " + "use sudo's -H flag.", + options.cache_dir, + ) + options.cache_dir = None + + if "2020-resolver" in options.features_enabled: + logger.warning( + "--use-feature=2020-resolver no longer has any effect, " + "since it is now the default dependency resolver in pip. " + "This will become an error in pip 21.0." 
+ ) + + def intercepts_unhandled_exc( + run_func: Callable[..., int] + ) -> Callable[..., int]: + @functools.wraps(run_func) + def exc_logging_wrapper(*args: Any) -> int: + try: + status = run_func(*args) + assert isinstance(status, int) + return status + except DiagnosticPipError as exc: + logger.error("[present-diagnostic] %s", exc) + logger.debug("Exception information:", exc_info=True) + + return ERROR + except PreviousBuildDirError as exc: + logger.critical(str(exc)) + logger.debug("Exception information:", exc_info=True) + + return PREVIOUS_BUILD_DIR_ERROR + except ( + InstallationError, + UninstallationError, + BadCommand, + NetworkConnectionError, + ) as exc: + logger.critical(str(exc)) + logger.debug("Exception information:", exc_info=True) + + return ERROR + except CommandError as exc: + logger.critical("%s", exc) + logger.debug("Exception information:", exc_info=True) + + return ERROR + except BrokenStdoutLoggingError: + # Bypass our logger and write any remaining messages to + # stderr because stdout no longer works. + print("ERROR: Pipe to stdout was broken", file=sys.stderr) + if level_number <= logging.DEBUG: + traceback.print_exc(file=sys.stderr) + + return ERROR + except KeyboardInterrupt: + logger.critical("Operation cancelled by user") + logger.debug("Exception information:", exc_info=True) + + return ERROR + except BaseException: + logger.critical("Exception:", exc_info=True) + + return UNKNOWN_ERROR + + return exc_logging_wrapper + + try: + if not options.debug_mode: + run = intercepts_unhandled_exc(self.run) + else: + run = self.run + return run(options, args) + finally: + self.handle_pip_version_check(options) diff --git a/python/lib/python3.10/site-packages/pip/_internal/cli/cmdoptions.py b/python/lib/python3.10/site-packages/pip/_internal/cli/cmdoptions.py new file mode 100644 index 0000000..b7e54f7 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/cli/cmdoptions.py @@ -0,0 +1,1018 @@ +""" +shared options and groups + +The principle here is to define options once, but *not* instantiate them +globally. One reason being that options with action='append' can carry state +between parses. pip parses general options twice internally, and shouldn't +pass on state. To be consistent, all options will follow this design. +""" + +# The following comment should be removed at some point in the future. +# mypy: strict-optional=False + +import logging +import os +import textwrap +from functools import partial +from optparse import SUPPRESS_HELP, Option, OptionGroup, OptionParser, Values +from textwrap import dedent +from typing import Any, Callable, Dict, Optional, Tuple + +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.cli.parser import ConfigOptionParser +from pip._internal.cli.progress_bars import BAR_TYPES +from pip._internal.exceptions import CommandError +from pip._internal.locations import USER_CACHE_DIR, get_src_prefix +from pip._internal.models.format_control import FormatControl +from pip._internal.models.index import PyPI +from pip._internal.models.target_python import TargetPython +from pip._internal.utils.hashes import STRONG_HASHES +from pip._internal.utils.misc import strtobool + +logger = logging.getLogger(__name__) + + +def raise_option_error(parser: OptionParser, option: Option, msg: str) -> None: + """ + Raise an option parsing error using parser.error(). + + Args: + parser: an OptionParser instance. + option: an Option instance. + msg: the error text. 
+ """ + msg = f"{option} error: {msg}" + msg = textwrap.fill(" ".join(msg.split())) + parser.error(msg) + + +def make_option_group(group: Dict[str, Any], parser: ConfigOptionParser) -> OptionGroup: + """ + Return an OptionGroup object + group -- assumed to be dict with 'name' and 'options' keys + parser -- an optparse Parser + """ + option_group = OptionGroup(parser, group["name"]) + for option in group["options"]: + option_group.add_option(option()) + return option_group + + +def check_install_build_global( + options: Values, check_options: Optional[Values] = None +) -> None: + """Disable wheels if per-setup.py call options are set. + + :param options: The OptionParser options to update. + :param check_options: The options to check, if not supplied defaults to + options. + """ + if check_options is None: + check_options = options + + def getname(n: str) -> Optional[Any]: + return getattr(check_options, n, None) + + names = ["build_options", "global_options", "install_options"] + if any(map(getname, names)): + control = options.format_control + control.disallow_binaries() + logger.warning( + "Disabling all use of wheels due to the use of --build-option " + "/ --global-option / --install-option.", + ) + + +def check_dist_restriction(options: Values, check_target: bool = False) -> None: + """Function for determining if custom platform options are allowed. + + :param options: The OptionParser options. + :param check_target: Whether or not to check if --target is being used. + """ + dist_restriction_set = any( + [ + options.python_version, + options.platforms, + options.abis, + options.implementation, + ] + ) + + binary_only = FormatControl(set(), {":all:"}) + sdist_dependencies_allowed = ( + options.format_control != binary_only and not options.ignore_dependencies + ) + + # Installations or downloads using dist restrictions must not combine + # source distributions and dist-specific wheels, as they are not + # guaranteed to be locally compatible. + if dist_restriction_set and sdist_dependencies_allowed: + raise CommandError( + "When restricting platform and interpreter constraints using " + "--python-version, --platform, --abi, or --implementation, " + "either --no-deps must be set, or --only-binary=:all: must be " + "set and --no-binary must not be set (or must be set to " + ":none:)." + ) + + if check_target: + if dist_restriction_set and not options.target_dir: + raise CommandError( + "Can not use any platform or abi specific options unless " + "installing via '--target'" + ) + + +def _path_option_check(option: Option, opt: str, value: str) -> str: + return os.path.expanduser(value) + + +def _package_name_option_check(option: Option, opt: str, value: str) -> str: + return canonicalize_name(value) + + +class PipOption(Option): + TYPES = Option.TYPES + ("path", "package_name") + TYPE_CHECKER = Option.TYPE_CHECKER.copy() + TYPE_CHECKER["package_name"] = _package_name_option_check + TYPE_CHECKER["path"] = _path_option_check + + +########### +# options # +########### + +help_: Callable[..., Option] = partial( + Option, + "-h", + "--help", + dest="help", + action="help", + help="Show help.", +) + +debug_mode: Callable[..., Option] = partial( + Option, + "--debug", + dest="debug_mode", + action="store_true", + default=False, + help=( + "Let unhandled exceptions propagate outside the main subroutine, " + "instead of logging them to stderr." 
+ ), +) + +isolated_mode: Callable[..., Option] = partial( + Option, + "--isolated", + dest="isolated_mode", + action="store_true", + default=False, + help=( + "Run pip in an isolated mode, ignoring environment variables and user " + "configuration." + ), +) + +require_virtualenv: Callable[..., Option] = partial( + Option, + "--require-virtualenv", + "--require-venv", + dest="require_venv", + action="store_true", + default=False, + help=( + "Allow pip to only run in a virtual environment; " + "exit with an error otherwise." + ), +) + +verbose: Callable[..., Option] = partial( + Option, + "-v", + "--verbose", + dest="verbose", + action="count", + default=0, + help="Give more output. Option is additive, and can be used up to 3 times.", +) + +no_color: Callable[..., Option] = partial( + Option, + "--no-color", + dest="no_color", + action="store_true", + default=False, + help="Suppress colored output.", +) + +version: Callable[..., Option] = partial( + Option, + "-V", + "--version", + dest="version", + action="store_true", + help="Show version and exit.", +) + +quiet: Callable[..., Option] = partial( + Option, + "-q", + "--quiet", + dest="quiet", + action="count", + default=0, + help=( + "Give less output. Option is additive, and can be used up to 3" + " times (corresponding to WARNING, ERROR, and CRITICAL logging" + " levels)." + ), +) + +progress_bar: Callable[..., Option] = partial( + Option, + "--progress-bar", + dest="progress_bar", + type="choice", + choices=list(BAR_TYPES.keys()), + default="on", + help=( + "Specify type of progress to be displayed [" + + "|".join(BAR_TYPES.keys()) + + "] (default: %default)" + ), +) + +log: Callable[..., Option] = partial( + PipOption, + "--log", + "--log-file", + "--local-log", + dest="log", + metavar="path", + type="path", + help="Path to a verbose appending log.", +) + +no_input: Callable[..., Option] = partial( + Option, + # Don't ask for input + "--no-input", + dest="no_input", + action="store_true", + default=False, + help="Disable prompting for input.", +) + +proxy: Callable[..., Option] = partial( + Option, + "--proxy", + dest="proxy", + type="str", + default="", + help="Specify a proxy in the form [user:passwd@]proxy.server:port.", +) + +retries: Callable[..., Option] = partial( + Option, + "--retries", + dest="retries", + type="int", + default=5, + help="Maximum number of retries each connection should attempt " + "(default %default times).", +) + +timeout: Callable[..., Option] = partial( + Option, + "--timeout", + "--default-timeout", + metavar="sec", + dest="timeout", + type="float", + default=15, + help="Set the socket timeout (default %default seconds).", +) + + +def exists_action() -> Option: + return Option( + # Option when path already exist + "--exists-action", + dest="exists_action", + type="choice", + choices=["s", "i", "w", "b", "a"], + default=[], + action="append", + metavar="action", + help="Default action when a path already exists: " + "(s)witch, (i)gnore, (w)ipe, (b)ackup, (a)bort.", + ) + + +cert: Callable[..., Option] = partial( + PipOption, + "--cert", + dest="cert", + type="path", + metavar="path", + help=( + "Path to PEM-encoded CA certificate bundle. " + "If provided, overrides the default. " + "See 'SSL Certificate Verification' in pip documentation " + "for more information." 
), +) + +client_cert: Callable[..., Option] = partial( + PipOption, + "--client-cert", + dest="client_cert", + type="path", + default=None, + metavar="path", + help="Path to SSL client certificate, a single file containing the " + "private key and the certificate in PEM format.", +) + +index_url: Callable[..., Option] = partial( + Option, + "-i", + "--index-url", + "--pypi-url", + dest="index_url", + metavar="URL", + default=PyPI.simple_url, + help="Base URL of the Python Package Index (default %default). " + "This should point to a repository compliant with PEP 503 " + "(the simple repository API) or a local directory laid out " + "in the same format.", +) + + +def extra_index_url() -> Option: + return Option( + "--extra-index-url", + dest="extra_index_urls", + metavar="URL", + action="append", + default=[], + help="Extra URLs of package indexes to use in addition to " + "--index-url. Should follow the same rules as " + "--index-url.", + ) + + +no_index: Callable[..., Option] = partial( + Option, + "--no-index", + dest="no_index", + action="store_true", + default=False, + help="Ignore package index (only looking at --find-links URLs instead).", +) + + +def find_links() -> Option: + return Option( + "-f", + "--find-links", + dest="find_links", + action="append", + default=[], + metavar="url", + help="If a URL or path to an html file, then parse for links to " + "archives such as sdist (.tar.gz) or wheel (.whl) files. " + "If a local path or file:// URL that's a directory, " + "then look for archives in the directory listing. " + "Links to VCS project URLs are not supported.", + ) + + +def trusted_host() -> Option: + return Option( + "--trusted-host", + dest="trusted_hosts", + action="append", + metavar="HOSTNAME", + default=[], + help="Mark this host or host:port pair as trusted, even though it " + "does not have valid or any HTTPS.", + ) + + +def constraints() -> Option: + return Option( + "-c", + "--constraint", + dest="constraints", + action="append", + default=[], + metavar="file", + help="Constrain versions using the given constraints file. " + "This option can be used multiple times.", + ) + + +def requirements() -> Option: + return Option( + "-r", + "--requirement", + dest="requirements", + action="append", + default=[], + metavar="file", + help="Install from the given requirements file. " + "This option can be used multiple times.", + ) + + +def editable() -> Option: + return Option( + "-e", + "--editable", + dest="editables", + action="append", + default=[], + metavar="path/url", + help=( + "Install a project in editable mode (i.e. setuptools " + '"develop mode") from a local project path or a VCS url.' + ), + ) + + +def _handle_src(option: Option, opt_str: str, value: str, parser: OptionParser) -> None: + value = os.path.abspath(value) + setattr(parser.values, option.dest, value) + + +src: Callable[..., Option] = partial( + PipOption, + "--src", + "--source", + "--source-dir", + "--source-directory", + dest="src_dir", + type="path", + metavar="dir", + default=get_src_prefix(), + action="callback", + callback=_handle_src, + help="Directory to check out editable projects into. " + 'The default in a virtualenv is "<venv path>/src". 
' + 'The default for global installs is "<current dir>/src".', +) + + +def _get_format_control(values: Values, option: Option) -> Any: + """Get a format_control object.""" + return getattr(values, option.dest) + + +def _handle_no_binary( + option: Option, opt_str: str, value: str, parser: OptionParser +) -> None: + existing = _get_format_control(parser.values, option) + FormatControl.handle_mutual_excludes( + value, + existing.no_binary, + existing.only_binary, + ) + + +def _handle_only_binary( + option: Option, opt_str: str, value: str, parser: OptionParser +) -> None: + existing = _get_format_control(parser.values, option) + FormatControl.handle_mutual_excludes( + value, + existing.only_binary, + existing.no_binary, + ) + + +def no_binary() -> Option: + format_control = FormatControl(set(), set()) + return Option( + "--no-binary", + dest="format_control", + action="callback", + callback=_handle_no_binary, + type="str", + default=format_control, + help="Do not use binary packages. Can be supplied multiple times, and " + 'each time adds to the existing value. Accepts either ":all:" to ' + 'disable all binary packages, ":none:" to empty the set (notice ' + "the colons), or one or more package names with commas between " + "them (no colons). Note that some packages are tricky to compile " + "and may fail to install when this option is used on them.", + ) + + +def only_binary() -> Option: + format_control = FormatControl(set(), set()) + return Option( + "--only-binary", + dest="format_control", + action="callback", + callback=_handle_only_binary, + type="str", + default=format_control, + help="Do not use source packages. Can be supplied multiple times, and " + 'each time adds to the existing value. Accepts either ":all:" to ' + 'disable all source packages, ":none:" to empty the set, or one ' + "or more package names with commas between them. Packages " + "without binary distributions will fail to install when this " + "option is used on them.", + ) + + +platforms: Callable[..., Option] = partial( + Option, + "--platform", + dest="platforms", + metavar="platform", + action="append", + default=None, + help=( + "Only use wheels compatible with <platform>. Defaults to the " + "platform of the running system. Use this option multiple times to " + "specify multiple platforms supported by the target interpreter." + ), +) + + +# This was made a separate function for unit-testing purposes. +def _convert_python_version(value: str) -> Tuple[Tuple[int, ...], Optional[str]]: + """ + Convert a version string like "3", "37", or "3.7.3" into a tuple of ints. + + :return: A 2-tuple (version_info, error_msg), where `error_msg` is + non-None if and only if there was a parsing error. + """ + if not value: + # The empty string is the same as not providing a value. + return (None, None) + + parts = value.split(".") + if len(parts) > 3: + return ((), "at most three version parts are allowed") + + if len(parts) == 1: + # Then we are in the case of "3" or "37". + value = parts[0] + if len(value) > 1: + parts = [value[0], value[1:]] + + try: + version_info = tuple(int(part) for part in parts) + except ValueError: + return ((), "each version part must be an integer") + + return (version_info, None) + + +def _handle_python_version( + option: Option, opt_str: str, value: str, parser: OptionParser +) -> None: + """ + Handle a provided --python-version value. 
+ """ + version_info, error_msg = _convert_python_version(value) + if error_msg is not None: + msg = "invalid --python-version value: {!r}: {}".format( + value, + error_msg, + ) + raise_option_error(parser, option=option, msg=msg) + + parser.values.python_version = version_info + + +python_version: Callable[..., Option] = partial( + Option, + "--python-version", + dest="python_version", + metavar="python_version", + action="callback", + callback=_handle_python_version, + type="str", + default=None, + help=dedent( + """\ + The Python interpreter version to use for wheel and "Requires-Python" + compatibility checks. Defaults to a version derived from the running + interpreter. The version can be specified using up to three dot-separated + integers (e.g. "3" for 3.0.0, "3.7" for 3.7.0, or "3.7.3"). A major-minor + version can also be given as a string without dots (e.g. "37" for 3.7.0). + """ + ), +) + + +implementation: Callable[..., Option] = partial( + Option, + "--implementation", + dest="implementation", + metavar="implementation", + default=None, + help=( + "Only use wheels compatible with Python " + "implementation , e.g. 'pp', 'jy', 'cp', " + " or 'ip'. If not specified, then the current " + "interpreter implementation is used. Use 'py' to force " + "implementation-agnostic wheels." + ), +) + + +abis: Callable[..., Option] = partial( + Option, + "--abi", + dest="abis", + metavar="abi", + action="append", + default=None, + help=( + "Only use wheels compatible with Python abi , e.g. 'pypy_41'. " + "If not specified, then the current interpreter abi tag is used. " + "Use this option multiple times to specify multiple abis supported " + "by the target interpreter. Generally you will need to specify " + "--implementation, --platform, and --python-version when using this " + "option." + ), +) + + +def add_target_python_options(cmd_opts: OptionGroup) -> None: + cmd_opts.add_option(platforms()) + cmd_opts.add_option(python_version()) + cmd_opts.add_option(implementation()) + cmd_opts.add_option(abis()) + + +def make_target_python(options: Values) -> TargetPython: + target_python = TargetPython( + platforms=options.platforms, + py_version_info=options.python_version, + abis=options.abis, + implementation=options.implementation, + ) + + return target_python + + +def prefer_binary() -> Option: + return Option( + "--prefer-binary", + dest="prefer_binary", + action="store_true", + default=False, + help="Prefer older binary packages over newer source packages.", + ) + + +cache_dir: Callable[..., Option] = partial( + PipOption, + "--cache-dir", + dest="cache_dir", + default=USER_CACHE_DIR, + metavar="dir", + type="path", + help="Store the cache data in .", +) + + +def _handle_no_cache_dir( + option: Option, opt: str, value: str, parser: OptionParser +) -> None: + """ + Process a value provided for the --no-cache-dir option. + + This is an optparse.Option callback for the --no-cache-dir option. + """ + # The value argument will be None if --no-cache-dir is passed via the + # command-line, since the option doesn't accept arguments. However, + # the value can be non-None if the option is triggered e.g. by an + # environment variable, like PIP_NO_CACHE_DIR=true. + if value is not None: + # Then parse the string value to get argument error-checking. 
+ try: + strtobool(value) + except ValueError as exc: + raise_option_error(parser, option=option, msg=str(exc)) + + # Originally, setting PIP_NO_CACHE_DIR to a value that strtobool() + # converted to 0 (like "false" or "no") caused cache_dir to be disabled + # rather than enabled (logic would say the latter). Thus, we disable + # the cache directory not just on values that parse to True, but (for + # backwards compatibility reasons) also on values that parse to False. + # In other words, always set it to False if the option is provided in + # some (valid) form. + parser.values.cache_dir = False + + +no_cache: Callable[..., Option] = partial( + Option, + "--no-cache-dir", + dest="cache_dir", + action="callback", + callback=_handle_no_cache_dir, + help="Disable the cache.", +) + +no_deps: Callable[..., Option] = partial( + Option, + "--no-deps", + "--no-dependencies", + dest="ignore_dependencies", + action="store_true", + default=False, + help="Don't install package dependencies.", +) + +ignore_requires_python: Callable[..., Option] = partial( + Option, + "--ignore-requires-python", + dest="ignore_requires_python", + action="store_true", + help="Ignore the Requires-Python information.", +) + +no_build_isolation: Callable[..., Option] = partial( + Option, + "--no-build-isolation", + dest="build_isolation", + action="store_false", + default=True, + help="Disable isolation when building a modern source distribution. " + "Build dependencies specified by PEP 518 must be already installed " + "if this option is used.", +) + + +def _handle_no_use_pep517( + option: Option, opt: str, value: str, parser: OptionParser +) -> None: + """ + Process a value provided for the --no-use-pep517 option. + + This is an optparse.Option callback for the no_use_pep517 option. + """ + # Since --no-use-pep517 doesn't accept arguments, the value argument + # will be None if --no-use-pep517 is passed via the command-line. + # However, the value can be non-None if the option is triggered e.g. + # by an environment variable, for example "PIP_NO_USE_PEP517=true". + if value is not None: + msg = """A value was passed for --no-use-pep517, + probably using either the PIP_NO_USE_PEP517 environment variable + or the "no-use-pep517" config file option. Use an appropriate value + of the PIP_USE_PEP517 environment variable or the "use-pep517" + config file option instead. + """ + raise_option_error(parser, option=option, msg=msg) + + # Otherwise, --no-use-pep517 was passed via the command-line. + parser.values.use_pep517 = False + + +use_pep517: Any = partial( + Option, + "--use-pep517", + dest="use_pep517", + action="store_true", + default=None, + help="Use PEP 517 for building source distributions " + "(use --no-use-pep517 to force legacy behaviour).", +) + +no_use_pep517: Any = partial( + Option, + "--no-use-pep517", + dest="use_pep517", + action="callback", + callback=_handle_no_use_pep517, + default=None, + help=SUPPRESS_HELP, +) + +install_options: Callable[..., Option] = partial( + Option, + "--install-option", + dest="install_options", + action="append", + metavar="options", + help="Extra arguments to be supplied to the setup.py install " + 'command (use like --install-option="--install-scripts=/usr/local/' + 'bin"). Use multiple --install-option options to pass multiple ' + "options to setup.py install. 
If you are using an option with a " + "directory path, be sure to use absolute path.", +) + +build_options: Callable[..., Option] = partial( + Option, + "--build-option", + dest="build_options", + metavar="options", + action="append", + help="Extra arguments to be supplied to 'setup.py bdist_wheel'.", +) + +global_options: Callable[..., Option] = partial( + Option, + "--global-option", + dest="global_options", + action="append", + metavar="options", + help="Extra global options to be supplied to the setup.py " + "call before the install or bdist_wheel command.", +) + +no_clean: Callable[..., Option] = partial( + Option, + "--no-clean", + action="store_true", + default=False, + help="Don't clean up build directories.", +) + +pre: Callable[..., Option] = partial( + Option, + "--pre", + action="store_true", + default=False, + help="Include pre-release and development versions. By default, " + "pip only finds stable versions.", +) + +disable_pip_version_check: Callable[..., Option] = partial( + Option, + "--disable-pip-version-check", + dest="disable_pip_version_check", + action="store_true", + default=True, + help="Don't periodically check PyPI to determine whether a new version " + "of pip is available for download. Implied with --no-index.", +) + + +def _handle_merge_hash( + option: Option, opt_str: str, value: str, parser: OptionParser +) -> None: + """Given a value spelled "algo:digest", append the digest to a list + pointed to in a dict by the algo name.""" + if not parser.values.hashes: + parser.values.hashes = {} + try: + algo, digest = value.split(":", 1) + except ValueError: + parser.error( + "Arguments to {} must be a hash name " # noqa + "followed by a value, like --hash=sha256:" + "abcde...".format(opt_str) + ) + if algo not in STRONG_HASHES: + parser.error( + "Allowed hash algorithms for {} are {}.".format( # noqa + opt_str, ", ".join(STRONG_HASHES) + ) + ) + parser.values.hashes.setdefault(algo, []).append(digest) + + +hash: Callable[..., Option] = partial( + Option, + "--hash", + # Hash values eventually end up in InstallRequirement.hashes due to + # __dict__ copying in process_line(). + dest="hashes", + action="callback", + callback=_handle_merge_hash, + type="string", + help="Verify that the package's archive matches this " + "hash before installing. Example: --hash=sha256:abcdef...", +) + + +require_hashes: Callable[..., Option] = partial( + Option, + "--require-hashes", + dest="require_hashes", + action="store_true", + default=False, + help="Require a hash to check each requirement against, for " + "repeatable installs. 
This option is implied when any package in a " + "requirements file has a --hash option.", +) + + +list_path: Callable[..., Option] = partial( + PipOption, + "--path", + dest="path", + type="path", + action="append", + help="Restrict to the specified installation path for listing " + "packages (can be used multiple times).", +) + + +def check_list_path_option(options: Values) -> None: + if options.path and (options.user or options.local): + raise CommandError("Cannot combine '--path' with '--user' or '--local'") + + +list_exclude: Callable[..., Option] = partial( + PipOption, + "--exclude", + dest="excludes", + action="append", + metavar="package", + type="package_name", + help="Exclude specified package from the output", +) + + +no_python_version_warning: Callable[..., Option] = partial( + Option, + "--no-python-version-warning", + dest="no_python_version_warning", + action="store_true", + default=False, + help="Silence deprecation warnings for upcoming unsupported Pythons.", +) + + +use_new_feature: Callable[..., Option] = partial( + Option, + "--use-feature", + dest="features_enabled", + metavar="feature", + action="append", + default=[], + choices=["2020-resolver", "fast-deps", "in-tree-build"], + help="Enable new functionality, that may be backward incompatible.", +) + +use_deprecated_feature: Callable[..., Option] = partial( + Option, + "--use-deprecated", + dest="deprecated_features_enabled", + metavar="feature", + action="append", + default=[], + choices=[ + "legacy-resolver", + "out-of-tree-build", + "backtrack-on-build-failures", + "html5lib", + ], + help=("Enable deprecated functionality, that will be removed in the future."), +) + + +########## +# groups # +########## + +general_group: Dict[str, Any] = { + "name": "General Options", + "options": [ + help_, + debug_mode, + isolated_mode, + require_virtualenv, + verbose, + version, + quiet, + log, + no_input, + proxy, + retries, + timeout, + exists_action, + trusted_host, + cert, + client_cert, + cache_dir, + no_cache, + disable_pip_version_check, + no_color, + no_python_version_warning, + use_new_feature, + use_deprecated_feature, + ], +} + +index_group: Dict[str, Any] = { + "name": "Package Index Options", + "options": [ + index_url, + extra_index_url, + no_index, + find_links, + ], +} diff --git a/python/lib/python3.10/site-packages/pip/_internal/cli/command_context.py b/python/lib/python3.10/site-packages/pip/_internal/cli/command_context.py new file mode 100644 index 0000000..ed68322 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/cli/command_context.py @@ -0,0 +1,27 @@ +from contextlib import ExitStack, contextmanager +from typing import ContextManager, Iterator, TypeVar + +_T = TypeVar("_T", covariant=True) + + +class CommandContextMixIn: + def __init__(self) -> None: + super().__init__() + self._in_main_context = False + self._main_context = ExitStack() + + @contextmanager + def main_context(self) -> Iterator[None]: + assert not self._in_main_context + + self._in_main_context = True + try: + with self._main_context: + yield + finally: + self._in_main_context = False + + def enter_context(self, context_provider: ContextManager[_T]) -> _T: + assert self._in_main_context + + return self._main_context.enter_context(context_provider) diff --git a/lib/python3.11/site-packages/pip/_internal/cli/main.py b/python/lib/python3.10/site-packages/pip/_internal/cli/main.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/cli/main.py rename to 
python/lib/python3.10/site-packages/pip/_internal/cli/main.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/cli/main_parser.py b/python/lib/python3.10/site-packages/pip/_internal/cli/main_parser.py new file mode 100644 index 0000000..3666ab0 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/cli/main_parser.py @@ -0,0 +1,87 @@ +"""A single place for constructing and exposing the main parser +""" + +import os +import sys +from typing import List, Tuple + +from pip._internal.cli import cmdoptions +from pip._internal.cli.parser import ConfigOptionParser, UpdatingDefaultsHelpFormatter +from pip._internal.commands import commands_dict, get_similar_commands +from pip._internal.exceptions import CommandError +from pip._internal.utils.misc import get_pip_version, get_prog + +__all__ = ["create_main_parser", "parse_command"] + + +def create_main_parser() -> ConfigOptionParser: + """Creates and returns the main parser for pip's CLI""" + + parser = ConfigOptionParser( + usage="\n%prog [options]", + add_help_option=False, + formatter=UpdatingDefaultsHelpFormatter(), + name="global", + prog=get_prog(), + ) + parser.disable_interspersed_args() + + parser.version = get_pip_version() + + # add the general options + gen_opts = cmdoptions.make_option_group(cmdoptions.general_group, parser) + parser.add_option_group(gen_opts) + + # so the help formatter knows + parser.main = True # type: ignore + + # create command listing for description + description = [""] + [ + f"{name:27} {command_info.summary}" + for name, command_info in commands_dict.items() + ] + parser.description = "\n".join(description) + + return parser + + +def parse_command(args: List[str]) -> Tuple[str, List[str]]: + parser = create_main_parser() + + # Note: parser calls disable_interspersed_args(), so the result of this + # call is to split the initial args into the general options before the + # subcommand and everything else. 
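+    # The same split can be reproduced with a bare OptionParser; a minimal
+    # illustrative sketch (plain optparse, not pip's ConfigOptionParser --
+    # see also the concrete pip example below):
+    #
+    #   from optparse import OptionParser
+    #   p = OptionParser()
+    #   p.add_option("--timeout", type="int")
+    #   p.disable_interspersed_args()
+    #   opts, rest = p.parse_args(["--timeout=5", "install", "--user"])
+    #   # opts.timeout == 5; rest == ['install', '--user']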
+ # For example: + # args: ['--timeout=5', 'install', '--user', 'INITools'] + # general_options: ['--timeout==5'] + # args_else: ['install', '--user', 'INITools'] + general_options, args_else = parser.parse_args(args) + + # --version + if general_options.version: + sys.stdout.write(parser.version) + sys.stdout.write(os.linesep) + sys.exit() + + # pip || pip help -> print_help() + if not args_else or (args_else[0] == "help" and len(args_else) == 1): + parser.print_help() + sys.exit() + + # the subcommand name + cmd_name = args_else[0] + + if cmd_name not in commands_dict: + guess = get_similar_commands(cmd_name) + + msg = [f'unknown command "{cmd_name}"'] + if guess: + msg.append(f'maybe you meant "{guess}"') + + raise CommandError(" - ".join(msg)) + + # all the args without the subcommand + cmd_args = args[:] + cmd_args.remove(cmd_name) + + return cmd_name, cmd_args diff --git a/python/lib/python3.10/site-packages/pip/_internal/cli/parser.py b/python/lib/python3.10/site-packages/pip/_internal/cli/parser.py new file mode 100644 index 0000000..a1c99a8 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/cli/parser.py @@ -0,0 +1,292 @@ +"""Base option parser setup""" + +import logging +import optparse +import shutil +import sys +import textwrap +from contextlib import suppress +from typing import Any, Dict, Iterator, List, Tuple + +from pip._internal.cli.status_codes import UNKNOWN_ERROR +from pip._internal.configuration import Configuration, ConfigurationError +from pip._internal.utils.misc import redact_auth_from_url, strtobool + +logger = logging.getLogger(__name__) + + +class PrettyHelpFormatter(optparse.IndentedHelpFormatter): + """A prettier/less verbose help formatter for optparse.""" + + def __init__(self, *args: Any, **kwargs: Any) -> None: + # help position must be aligned with __init__.parseopts.description + kwargs["max_help_position"] = 30 + kwargs["indent_increment"] = 1 + kwargs["width"] = shutil.get_terminal_size()[0] - 2 + super().__init__(*args, **kwargs) + + def format_option_strings(self, option: optparse.Option) -> str: + return self._format_option_strings(option) + + def _format_option_strings( + self, option: optparse.Option, mvarfmt: str = " <{}>", optsep: str = ", " + ) -> str: + """ + Return a comma-separated list of option strings and metavars. + + :param option: tuple of (short opt, long opt), e.g: ('-f', '--format') + :param mvarfmt: metavar format string + :param optsep: separator + """ + opts = [] + + if option._short_opts: + opts.append(option._short_opts[0]) + if option._long_opts: + opts.append(option._long_opts[0]) + if len(opts) > 1: + opts.insert(1, optsep) + + if option.takes_value(): + assert option.dest is not None + metavar = option.metavar or option.dest.lower() + opts.append(mvarfmt.format(metavar.lower())) + + return "".join(opts) + + def format_heading(self, heading: str) -> str: + if heading == "Options": + return "" + return heading + ":\n" + + def format_usage(self, usage: str) -> str: + """ + Ensure there is only one newline between usage and the first heading + if there is no description. 
+ """ + msg = "\nUsage: {}\n".format(self.indent_lines(textwrap.dedent(usage), " ")) + return msg + + def format_description(self, description: str) -> str: + # leave full control over description to us + if description: + if hasattr(self.parser, "main"): + label = "Commands" + else: + label = "Description" + # some doc strings have initial newlines, some don't + description = description.lstrip("\n") + # some doc strings have final newlines and spaces, some don't + description = description.rstrip() + # dedent, then reindent + description = self.indent_lines(textwrap.dedent(description), " ") + description = f"{label}:\n{description}\n" + return description + else: + return "" + + def format_epilog(self, epilog: str) -> str: + # leave full control over epilog to us + if epilog: + return epilog + else: + return "" + + def indent_lines(self, text: str, indent: str) -> str: + new_lines = [indent + line for line in text.split("\n")] + return "\n".join(new_lines) + + +class UpdatingDefaultsHelpFormatter(PrettyHelpFormatter): + """Custom help formatter for use in ConfigOptionParser. + + This is updates the defaults before expanding them, allowing + them to show up correctly in the help listing. + + Also redact auth from url type options + """ + + def expand_default(self, option: optparse.Option) -> str: + default_values = None + if self.parser is not None: + assert isinstance(self.parser, ConfigOptionParser) + self.parser._update_defaults(self.parser.defaults) + assert option.dest is not None + default_values = self.parser.defaults.get(option.dest) + help_text = super().expand_default(option) + + if default_values and option.metavar == "URL": + if isinstance(default_values, str): + default_values = [default_values] + + # If its not a list, we should abort and just return the help text + if not isinstance(default_values, list): + default_values = [] + + for val in default_values: + help_text = help_text.replace(val, redact_auth_from_url(val)) + + return help_text + + +class CustomOptionParser(optparse.OptionParser): + def insert_option_group( + self, idx: int, *args: Any, **kwargs: Any + ) -> optparse.OptionGroup: + """Insert an OptionGroup at a given position.""" + group = self.add_option_group(*args, **kwargs) + + self.option_groups.pop() + self.option_groups.insert(idx, group) + + return group + + @property + def option_list_all(self) -> List[optparse.Option]: + """Get a list of all options, including those in option groups.""" + res = self.option_list[:] + for i in self.option_groups: + res.extend(i.option_list) + + return res + + +class ConfigOptionParser(CustomOptionParser): + """Custom option parser which updates its defaults by checking the + configuration files and environmental variables""" + + def __init__( + self, + *args: Any, + name: str, + isolated: bool = False, + **kwargs: Any, + ) -> None: + self.name = name + self.config = Configuration(isolated) + + assert self.name + super().__init__(*args, **kwargs) + + def check_default(self, option: optparse.Option, key: str, val: Any) -> Any: + try: + return option.check_value(key, val) + except optparse.OptionValueError as exc: + print(f"An error occurred during configuration: {exc}") + sys.exit(3) + + def _get_ordered_configuration_items(self) -> Iterator[Tuple[str, Any]]: + # Configuration gives keys in an unordered manner. Order them. 
+ override_order = ["global", self.name, ":env:"] + + # Pool the options into different groups + section_items: Dict[str, List[Tuple[str, Any]]] = { + name: [] for name in override_order + } + for section_key, val in self.config.items(): + # ignore empty values + if not val: + logger.debug( + "Ignoring configuration key '%s' as it's value is empty.", + section_key, + ) + continue + + section, key = section_key.split(".", 1) + if section in override_order: + section_items[section].append((key, val)) + + # Yield each group in their override order + for section in override_order: + for key, val in section_items[section]: + yield key, val + + def _update_defaults(self, defaults: Dict[str, Any]) -> Dict[str, Any]: + """Updates the given defaults with values from the config files and + the environ. Does a little special handling for certain types of + options (lists).""" + + # Accumulate complex default state. + self.values = optparse.Values(self.defaults) + late_eval = set() + # Then set the options with those values + for key, val in self._get_ordered_configuration_items(): + # '--' because configuration supports only long names + option = self.get_option("--" + key) + + # Ignore options not present in this parser. E.g. non-globals put + # in [global] by users that want them to apply to all applicable + # commands. + if option is None: + continue + + assert option.dest is not None + + if option.action in ("store_true", "store_false"): + try: + val = strtobool(val) + except ValueError: + self.error( + "{} is not a valid value for {} option, " # noqa + "please specify a boolean value like yes/no, " + "true/false or 1/0 instead.".format(val, key) + ) + elif option.action == "count": + with suppress(ValueError): + val = strtobool(val) + with suppress(ValueError): + val = int(val) + if not isinstance(val, int) or val < 0: + self.error( + "{} is not a valid value for {} option, " # noqa + "please instead specify either a non-negative integer " + "or a boolean value like yes/no or false/true " + "which is equivalent to 1/0.".format(val, key) + ) + elif option.action == "append": + val = val.split() + val = [self.check_default(option, key, v) for v in val] + elif option.action == "callback": + assert option.callback is not None + late_eval.add(option.dest) + opt_str = option.get_opt_string() + val = option.convert_value(opt_str, val) + # From take_action + args = option.callback_args or () + kwargs = option.callback_kwargs or {} + option.callback(option, opt_str, val, self, *args, **kwargs) + else: + val = self.check_default(option, key, val) + + defaults[option.dest] = val + + for key in late_eval: + defaults[key] = getattr(self.values, key) + self.values = None + return defaults + + def get_default_values(self) -> optparse.Values: + """Overriding to make updating the defaults after instantiation of + the option parser possible, _update_defaults() does the dirty work.""" + if not self.process_default_values: + # Old, pre-Optik 1.5 behaviour. 
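+            # (i.e. hand back the raw defaults without consulting pip's
+            # configuration files or environment variables)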
+ return optparse.Values(self.defaults) + + # Load the configuration, or error out in case of an error + try: + self.config.load() + except ConfigurationError as err: + self.exit(UNKNOWN_ERROR, str(err)) + + defaults = self._update_defaults(self.defaults.copy()) # ours + for option in self._get_all_options(): + assert option.dest is not None + default = defaults.get(option.dest) + if isinstance(default, str): + opt_str = option.get_opt_string() + defaults[option.dest] = option.check_value(opt_str, default) + return optparse.Values(defaults) + + def error(self, msg: str) -> None: + self.print_usage(sys.stderr) + self.exit(UNKNOWN_ERROR, f"{msg}\n") diff --git a/python/lib/python3.10/site-packages/pip/_internal/cli/progress_bars.py b/python/lib/python3.10/site-packages/pip/_internal/cli/progress_bars.py new file mode 100644 index 0000000..ffa1964 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/cli/progress_bars.py @@ -0,0 +1,321 @@ +import functools +import itertools +import sys +from signal import SIGINT, default_int_handler, signal +from typing import Any, Callable, Iterator, Optional, Tuple + +from pip._vendor.progress.bar import Bar, FillingCirclesBar, IncrementalBar +from pip._vendor.progress.spinner import Spinner +from pip._vendor.rich.progress import ( + BarColumn, + DownloadColumn, + FileSizeColumn, + Progress, + ProgressColumn, + SpinnerColumn, + TextColumn, + TimeElapsedColumn, + TimeRemainingColumn, + TransferSpeedColumn, +) + +from pip._internal.utils.compat import WINDOWS +from pip._internal.utils.logging import get_indentation +from pip._internal.utils.misc import format_size + +try: + from pip._vendor import colorama +# Lots of different errors can come from this, including SystemError and +# ImportError. +except Exception: + colorama = None + +DownloadProgressRenderer = Callable[[Iterator[bytes]], Iterator[bytes]] + + +def _select_progress_class(preferred: Bar, fallback: Bar) -> Bar: + encoding = getattr(preferred.file, "encoding", None) + + # If we don't know what encoding this file is in, then we'll just assume + # that it doesn't support unicode and use the ASCII bar. + if not encoding: + return fallback + + # Collect all of the possible characters we want to use with the preferred + # bar. + characters = [ + getattr(preferred, "empty_fill", ""), + getattr(preferred, "fill", ""), + ] + characters += list(getattr(preferred, "phases", [])) + + # Try to decode the characters we're using for the bar using the encoding + # of the given file, if this works then we'll assume that we can use the + # fancier bar and if not we'll fall back to the plaintext bar. + try: + "".join(characters).encode(encoding) + except UnicodeEncodeError: + return fallback + else: + return preferred + + +_BaseBar: Any = _select_progress_class(IncrementalBar, Bar) + + +class InterruptibleMixin: + """ + Helper to ensure that self.finish() gets called on keyboard interrupt. + + This allows downloads to be interrupted without leaving temporary state + (like hidden cursors) behind. + + This class is similar to the progress library's existing SigIntMixin + helper, but as of version 1.2, that helper has the following problems: + + 1. It calls sys.exit(). + 2. It discards the existing SIGINT handler completely. + 3. It leaves its own handler in place even after an uninterrupted finish, + which will have unexpected delayed effects if the user triggers an + unrelated keyboard interrupt some time after a progress-displaying + download has already completed, for example. 
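+
+    This mixin avoids all three: it saves the previous handler in
+    __init__() and restores it in finish().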
+ """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + """ + Save the original SIGINT handler for later. + """ + # https://github.com/python/mypy/issues/5887 + super().__init__(*args, **kwargs) # type: ignore + + self.original_handler = signal(SIGINT, self.handle_sigint) + + # If signal() returns None, the previous handler was not installed from + # Python, and we cannot restore it. This probably should not happen, + # but if it does, we must restore something sensible instead, at least. + # The least bad option should be Python's default SIGINT handler, which + # just raises KeyboardInterrupt. + if self.original_handler is None: + self.original_handler = default_int_handler + + def finish(self) -> None: + """ + Restore the original SIGINT handler after finishing. + + This should happen regardless of whether the progress display finishes + normally, or gets interrupted. + """ + super().finish() # type: ignore + signal(SIGINT, self.original_handler) + + def handle_sigint(self, signum, frame): # type: ignore + """ + Call self.finish() before delegating to the original SIGINT handler. + + This handler should only be in place while the progress display is + active. + """ + self.finish() + self.original_handler(signum, frame) + + +class SilentBar(Bar): + def update(self) -> None: + pass + + +class BlueEmojiBar(IncrementalBar): + + suffix = "%(percent)d%%" + bar_prefix = " " + bar_suffix = " " + phases = ("\U0001F539", "\U0001F537", "\U0001F535") + + +class DownloadProgressMixin: + def __init__(self, *args: Any, **kwargs: Any) -> None: + # https://github.com/python/mypy/issues/5887 + super().__init__(*args, **kwargs) # type: ignore + self.message: str = (" " * (get_indentation() + 2)) + self.message + + @property + def downloaded(self) -> str: + return format_size(self.index) # type: ignore + + @property + def download_speed(self) -> str: + # Avoid zero division errors... + if self.avg == 0.0: # type: ignore + return "..." + return format_size(1 / self.avg) + "/s" # type: ignore + + @property + def pretty_eta(self) -> str: + if self.eta: # type: ignore + return f"eta {self.eta_td}" # type: ignore + return "" + + def iter(self, it): # type: ignore + for x in it: + yield x + # B305 is incorrectly raised here + # https://github.com/PyCQA/flake8-bugbear/issues/59 + self.next(len(x)) # noqa: B305 + self.finish() + + +class WindowsMixin: + def __init__(self, *args: Any, **kwargs: Any) -> None: + # The Windows terminal does not support the hide/show cursor ANSI codes + # even with colorama. So we'll ensure that hide_cursor is False on + # Windows. + # This call needs to go before the super() call, so that hide_cursor + # is set in time. The base progress bar class writes the "hide cursor" + # code to the terminal in its init, so if we don't set this soon + # enough, we get a "hide" with no corresponding "show"... + if WINDOWS and self.hide_cursor: # type: ignore + self.hide_cursor = False + + # https://github.com/python/mypy/issues/5887 + super().__init__(*args, **kwargs) # type: ignore + + # Check if we are running on Windows and we have the colorama module, + # if we do then wrap our file with it. + if WINDOWS and colorama: + self.file = colorama.AnsiToWin32(self.file) # type: ignore + # The progress code expects to be able to call self.file.isatty() + # but the colorama.AnsiToWin32() object doesn't have that, so we'll + # add it. 
+ self.file.isatty = lambda: self.file.wrapped.isatty() + # The progress code expects to be able to call self.file.flush() + # but the colorama.AnsiToWin32() object doesn't have that, so we'll + # add it. + self.file.flush = lambda: self.file.wrapped.flush() + + +class BaseDownloadProgressBar(WindowsMixin, InterruptibleMixin, DownloadProgressMixin): + + file = sys.stdout + message = "%(percent)d%%" + suffix = "%(downloaded)s %(download_speed)s %(pretty_eta)s" + + +class DefaultDownloadProgressBar(BaseDownloadProgressBar, _BaseBar): + pass + + +class DownloadSilentBar(BaseDownloadProgressBar, SilentBar): + pass + + +class DownloadBar(BaseDownloadProgressBar, Bar): + pass + + +class DownloadFillingCirclesBar(BaseDownloadProgressBar, FillingCirclesBar): + pass + + +class DownloadBlueEmojiProgressBar(BaseDownloadProgressBar, BlueEmojiBar): + pass + + +class DownloadProgressSpinner( + WindowsMixin, InterruptibleMixin, DownloadProgressMixin, Spinner +): + + file = sys.stdout + suffix = "%(downloaded)s %(download_speed)s" + + def next_phase(self) -> str: + if not hasattr(self, "_phaser"): + self._phaser = itertools.cycle(self.phases) + return next(self._phaser) + + def update(self) -> None: + message = self.message % self + phase = self.next_phase() + suffix = self.suffix % self + line = "".join( + [ + message, + " " if message else "", + phase, + " " if suffix else "", + suffix, + ] + ) + + self.writeln(line) + + +BAR_TYPES = { + "off": (DownloadSilentBar, DownloadSilentBar), + "on": (DefaultDownloadProgressBar, DownloadProgressSpinner), + "ascii": (DownloadBar, DownloadProgressSpinner), + "pretty": (DownloadFillingCirclesBar, DownloadProgressSpinner), + "emoji": (DownloadBlueEmojiProgressBar, DownloadProgressSpinner), +} + + +def _legacy_progress_bar( + progress_bar: str, max: Optional[int] +) -> DownloadProgressRenderer: + if max is None or max == 0: + return BAR_TYPES[progress_bar][1]().iter # type: ignore + else: + return BAR_TYPES[progress_bar][0](max=max).iter + + +# +# Modern replacement, for our legacy progress bars. +# +def _rich_progress_bar( + iterable: Iterator[bytes], + *, + bar_type: str, + size: int, +) -> Iterator[bytes]: + assert bar_type == "on", "This should only be used in the default mode." + + if not size: + total = float("inf") + columns: Tuple[ProgressColumn, ...] = ( + TextColumn("[progress.description]{task.description}"), + SpinnerColumn("line", speed=1.5), + FileSizeColumn(), + TransferSpeedColumn(), + TimeElapsedColumn(), + ) + else: + total = size + columns = ( + TextColumn("[progress.description]{task.description}"), + BarColumn(), + DownloadColumn(), + TransferSpeedColumn(), + TextColumn("eta"), + TimeRemainingColumn(), + ) + + progress = Progress(*columns, refresh_per_second=30) + task_id = progress.add_task(" " * (get_indentation() + 2), total=total) + with progress: + for chunk in iterable: + yield chunk + progress.update(task_id, advance=len(chunk)) + + +def get_download_progress_renderer( + *, bar_type: str, size: Optional[int] = None +) -> DownloadProgressRenderer: + """Get an object that can be used to render the download progress. + + Returns a callable, that takes an iterable to "wrap". 
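+
+    With bar_type "on" this is the rich-based renderer, "off" is a no-op
+    pass-through, and any other value falls back to the legacy bars.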
+ """ + if bar_type == "on": + return functools.partial(_rich_progress_bar, bar_type=bar_type, size=size) + elif bar_type == "off": + return iter # no-op, when passed an iterator + else: + return _legacy_progress_bar(bar_type, size) diff --git a/python/lib/python3.10/site-packages/pip/_internal/cli/req_command.py b/python/lib/python3.10/site-packages/pip/_internal/cli/req_command.py new file mode 100644 index 0000000..5d4d1f0 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/cli/req_command.py @@ -0,0 +1,506 @@ +"""Contains the Command base classes that depend on PipSession. + +The classes in this module are in a separate module so the commands not +needing download / PackageFinder capability don't unnecessarily import the +PackageFinder machinery and all its vendored dependencies, etc. +""" + +import logging +import os +import sys +from functools import partial +from optparse import Values +from typing import Any, List, Optional, Tuple + +from pip._internal.cache import WheelCache +from pip._internal.cli import cmdoptions +from pip._internal.cli.base_command import Command +from pip._internal.cli.command_context import CommandContextMixIn +from pip._internal.exceptions import CommandError, PreviousBuildDirError +from pip._internal.index.collector import LinkCollector +from pip._internal.index.package_finder import PackageFinder +from pip._internal.models.selection_prefs import SelectionPreferences +from pip._internal.models.target_python import TargetPython +from pip._internal.network.session import PipSession +from pip._internal.operations.prepare import RequirementPreparer +from pip._internal.req.constructors import ( + install_req_from_editable, + install_req_from_line, + install_req_from_parsed_requirement, + install_req_from_req_string, +) +from pip._internal.req.req_file import parse_requirements +from pip._internal.req.req_install import InstallRequirement +from pip._internal.req.req_tracker import RequirementTracker +from pip._internal.resolution.base import BaseResolver +from pip._internal.self_outdated_check import pip_self_version_check +from pip._internal.utils.deprecation import deprecated +from pip._internal.utils.temp_dir import ( + TempDirectory, + TempDirectoryTypeRegistry, + tempdir_kinds, +) +from pip._internal.utils.virtualenv import running_under_virtualenv + +logger = logging.getLogger(__name__) + + +class SessionCommandMixin(CommandContextMixIn): + + """ + A class mixin for command classes needing _build_session(). 
+ """ + + def __init__(self) -> None: + super().__init__() + self._session: Optional[PipSession] = None + + @classmethod + def _get_index_urls(cls, options: Values) -> Optional[List[str]]: + """Return a list of index urls from user-provided options.""" + index_urls = [] + if not getattr(options, "no_index", False): + url = getattr(options, "index_url", None) + if url: + index_urls.append(url) + urls = getattr(options, "extra_index_urls", None) + if urls: + index_urls.extend(urls) + # Return None rather than an empty list + return index_urls or None + + def get_default_session(self, options: Values) -> PipSession: + """Get a default-managed session.""" + if self._session is None: + self._session = self.enter_context(self._build_session(options)) + # there's no type annotation on requests.Session, so it's + # automatically ContextManager[Any] and self._session becomes Any, + # then https://github.com/python/mypy/issues/7696 kicks in + assert self._session is not None + return self._session + + def _build_session( + self, + options: Values, + retries: Optional[int] = None, + timeout: Optional[int] = None, + ) -> PipSession: + assert not options.cache_dir or os.path.isabs(options.cache_dir) + session = PipSession( + cache=( + os.path.join(options.cache_dir, "http") if options.cache_dir else None + ), + retries=retries if retries is not None else options.retries, + trusted_hosts=options.trusted_hosts, + index_urls=self._get_index_urls(options), + ) + + # Handle custom ca-bundles from the user + if options.cert: + session.verify = options.cert + + # Handle SSL client certificate + if options.client_cert: + session.cert = options.client_cert + + # Handle timeouts + if options.timeout or timeout: + session.timeout = timeout if timeout is not None else options.timeout + + # Handle configured proxies + if options.proxy: + session.proxies = { + "http": options.proxy, + "https": options.proxy, + } + + # Determine if we can prompt the user for authentication or not + session.auth.prompting = not options.no_input + + return session + + +class IndexGroupCommand(Command, SessionCommandMixin): + + """ + Abstract base class for commands with the index_group options. + + This also corresponds to the commands that permit the pip version check. + """ + + def handle_pip_version_check(self, options: Values) -> None: + """ + Do the pip version check if not disabled. + + This overrides the default behavior of not doing the check. + """ + # Make sure the index_group options are present. + assert hasattr(options, "no_index") + + if options.disable_pip_version_check or options.no_index: + return + + # Otherwise, check if we're using the latest version of pip available. + session = self._build_session( + options, retries=0, timeout=min(5, options.timeout) + ) + with session: + pip_self_version_check(session, options) + + +KEEPABLE_TEMPDIR_TYPES = [ + tempdir_kinds.BUILD_ENV, + tempdir_kinds.EPHEM_WHEEL_CACHE, + tempdir_kinds.REQ_BUILD, +] + + +def warn_if_run_as_root() -> None: + """Output a warning for sudo users on Unix. + + In a virtual environment, sudo pip still writes to virtualenv. + On Windows, users may run pip as Administrator without issues. + This warning only applies to Unix root users outside of virtualenv. + """ + if running_under_virtualenv(): + return + if not hasattr(os, "getuid"): + return + # On Windows, there are no "system managed" Python packages. Installing as + # Administrator via pip is the correct way of updating system environments. 
+ # + # We choose sys.platform over utils.compat.WINDOWS here to enable Mypy platform + # checks: https://mypy.readthedocs.io/en/stable/common_issues.html + if sys.platform == "win32" or sys.platform == "cygwin": + return + + if os.getuid() != 0: + return + + logger.warning( + "Running pip as the 'root' user can result in broken permissions and " + "conflicting behaviour with the system package manager. " + "It is recommended to use a virtual environment instead: " + "https://pip.pypa.io/warnings/venv" + ) + + +def with_cleanup(func: Any) -> Any: + """Decorator for common logic related to managing temporary + directories. + """ + + def configure_tempdir_registry(registry: TempDirectoryTypeRegistry) -> None: + for t in KEEPABLE_TEMPDIR_TYPES: + registry.set_delete(t, False) + + def wrapper( + self: RequirementCommand, options: Values, args: List[Any] + ) -> Optional[int]: + assert self.tempdir_registry is not None + if options.no_clean: + configure_tempdir_registry(self.tempdir_registry) + + try: + return func(self, options, args) + except PreviousBuildDirError: + # This kind of conflict can occur when the user passes an explicit + # build directory with a pre-existing folder. In that case we do + # not want to accidentally remove it. + configure_tempdir_registry(self.tempdir_registry) + raise + + return wrapper + + +class RequirementCommand(IndexGroupCommand): + def __init__(self, *args: Any, **kw: Any) -> None: + super().__init__(*args, **kw) + + self.cmd_opts.add_option(cmdoptions.no_clean()) + + @staticmethod + def determine_resolver_variant(options: Values) -> str: + """Determines which resolver should be used, based on the given options.""" + if "legacy-resolver" in options.deprecated_features_enabled: + return "legacy" + + return "2020-resolver" + + @staticmethod + def determine_build_failure_suppression(options: Values) -> bool: + """Determines whether build failures should be suppressed and backtracked on.""" + if "backtrack-on-build-failures" not in options.deprecated_features_enabled: + return False + + if "legacy-resolver" in options.deprecated_features_enabled: + raise CommandError("Cannot backtrack with legacy resolver.") + + deprecated( + reason=( + "Backtracking on build failures can mask issues related to how " + "a package generates metadata or builds a wheel. This flag will " + "be removed in pip 22.2." + ), + gone_in=None, + replacement=( + "avoiding known-bad versions by explicitly telling pip to ignore them " + "(either directly as requirements, or via a constraints file)" + ), + feature_flag=None, + issue=10655, + ) + return True + + @classmethod + def make_requirement_preparer( + cls, + temp_build_dir: TempDirectory, + options: Values, + req_tracker: RequirementTracker, + session: PipSession, + finder: PackageFinder, + use_user_site: bool, + download_dir: Optional[str] = None, + verbosity: int = 0, + ) -> RequirementPreparer: + """ + Create a RequirementPreparer instance for the given parameters. + """ + temp_build_dir_path = temp_build_dir.path + assert temp_build_dir_path is not None + + resolver_variant = cls.determine_resolver_variant(options) + if resolver_variant == "2020-resolver": + lazy_wheel = "fast-deps" in options.features_enabled + if lazy_wheel: + logger.warning( + "pip is using lazily downloaded wheels using HTTP " + "range requests to obtain dependency information. " + "This experimental feature is enabled through " + "--use-feature=fast-deps and it is not ready for " + "production." 
+ ) + else: + lazy_wheel = False + if "fast-deps" in options.features_enabled: + logger.warning( + "fast-deps has no effect when used with the legacy resolver." + ) + + in_tree_build = "out-of-tree-build" not in options.deprecated_features_enabled + if "in-tree-build" in options.features_enabled: + deprecated( + reason="In-tree builds are now the default.", + replacement="to remove the --use-feature=in-tree-build flag", + gone_in="22.1", + ) + if "out-of-tree-build" in options.deprecated_features_enabled: + deprecated( + reason="Out-of-tree builds are deprecated.", + replacement=None, + gone_in="22.1", + ) + + if options.progress_bar not in {"on", "off"}: + deprecated( + reason="Custom progress bar styles are deprecated", + replacement="to use the default progress bar style.", + gone_in="22.1", + ) + + return RequirementPreparer( + build_dir=temp_build_dir_path, + src_dir=options.src_dir, + download_dir=download_dir, + build_isolation=options.build_isolation, + req_tracker=req_tracker, + session=session, + progress_bar=options.progress_bar, + finder=finder, + require_hashes=options.require_hashes, + use_user_site=use_user_site, + lazy_wheel=lazy_wheel, + verbosity=verbosity, + in_tree_build=in_tree_build, + ) + + @classmethod + def make_resolver( + cls, + preparer: RequirementPreparer, + finder: PackageFinder, + options: Values, + wheel_cache: Optional[WheelCache] = None, + use_user_site: bool = False, + ignore_installed: bool = True, + ignore_requires_python: bool = False, + force_reinstall: bool = False, + upgrade_strategy: str = "to-satisfy-only", + use_pep517: Optional[bool] = None, + py_version_info: Optional[Tuple[int, ...]] = None, + ) -> BaseResolver: + """ + Create a Resolver instance for the given parameters. + """ + make_install_req = partial( + install_req_from_req_string, + isolated=options.isolated_mode, + use_pep517=use_pep517, + ) + suppress_build_failures = cls.determine_build_failure_suppression(options) + resolver_variant = cls.determine_resolver_variant(options) + # The long import name and duplicated invocation is needed to convince + # Mypy into correctly typechecking. Otherwise it would complain the + # "Resolver" class being redefined. + if resolver_variant == "2020-resolver": + import pip._internal.resolution.resolvelib.resolver + + return pip._internal.resolution.resolvelib.resolver.Resolver( + preparer=preparer, + finder=finder, + wheel_cache=wheel_cache, + make_install_req=make_install_req, + use_user_site=use_user_site, + ignore_dependencies=options.ignore_dependencies, + ignore_installed=ignore_installed, + ignore_requires_python=ignore_requires_python, + force_reinstall=force_reinstall, + upgrade_strategy=upgrade_strategy, + py_version_info=py_version_info, + suppress_build_failures=suppress_build_failures, + ) + import pip._internal.resolution.legacy.resolver + + return pip._internal.resolution.legacy.resolver.Resolver( + preparer=preparer, + finder=finder, + wheel_cache=wheel_cache, + make_install_req=make_install_req, + use_user_site=use_user_site, + ignore_dependencies=options.ignore_dependencies, + ignore_installed=ignore_installed, + ignore_requires_python=ignore_requires_python, + force_reinstall=force_reinstall, + upgrade_strategy=upgrade_strategy, + py_version_info=py_version_info, + ) + + def get_requirements( + self, + args: List[str], + options: Values, + finder: PackageFinder, + session: PipSession, + ) -> List[InstallRequirement]: + """ + Parse command-line arguments into the corresponding requirements. 
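+
+        Sources are collected in order: constraint files, positional
+        specifiers, --editable entries, then -r requirement files; a hash
+        option on any requirement turns require_hashes on.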
+ """ + requirements: List[InstallRequirement] = [] + for filename in options.constraints: + for parsed_req in parse_requirements( + filename, + constraint=True, + finder=finder, + options=options, + session=session, + ): + req_to_add = install_req_from_parsed_requirement( + parsed_req, + isolated=options.isolated_mode, + user_supplied=False, + ) + requirements.append(req_to_add) + + for req in args: + req_to_add = install_req_from_line( + req, + None, + isolated=options.isolated_mode, + use_pep517=options.use_pep517, + user_supplied=True, + ) + requirements.append(req_to_add) + + for req in options.editables: + req_to_add = install_req_from_editable( + req, + user_supplied=True, + isolated=options.isolated_mode, + use_pep517=options.use_pep517, + ) + requirements.append(req_to_add) + + # NOTE: options.require_hashes may be set if --require-hashes is True + for filename in options.requirements: + for parsed_req in parse_requirements( + filename, finder=finder, options=options, session=session + ): + req_to_add = install_req_from_parsed_requirement( + parsed_req, + isolated=options.isolated_mode, + use_pep517=options.use_pep517, + user_supplied=True, + ) + requirements.append(req_to_add) + + # If any requirement has hash options, enable hash checking. + if any(req.has_hash_options for req in requirements): + options.require_hashes = True + + if not (args or options.editables or options.requirements): + opts = {"name": self.name} + if options.find_links: + raise CommandError( + "You must give at least one requirement to {name} " + '(maybe you meant "pip {name} {links}"?)'.format( + **dict(opts, links=" ".join(options.find_links)) + ) + ) + else: + raise CommandError( + "You must give at least one requirement to {name} " + '(see "pip help {name}")'.format(**opts) + ) + + return requirements + + @staticmethod + def trace_basic_info(finder: PackageFinder) -> None: + """ + Trace basic information about the provided objects. + """ + # Display where finder is looking for packages + search_scope = finder.search_scope + locations = search_scope.get_formatted_locations() + if locations: + logger.info(locations) + + def _build_package_finder( + self, + options: Values, + session: PipSession, + target_python: Optional[TargetPython] = None, + ignore_requires_python: Optional[bool] = None, + ) -> PackageFinder: + """ + Create a package finder appropriate to this requirement command. + + :param ignore_requires_python: Whether to ignore incompatible + "Requires-Python" values in links. Defaults to False. 
+ """ + link_collector = LinkCollector.create(session, options=options) + selection_prefs = SelectionPreferences( + allow_yanked=True, + format_control=options.format_control, + allow_all_prereleases=options.pre, + prefer_binary=options.prefer_binary, + ignore_requires_python=ignore_requires_python, + ) + + return PackageFinder.create( + link_collector=link_collector, + selection_prefs=selection_prefs, + target_python=target_python, + use_deprecated_html5lib="html5lib" in options.deprecated_features_enabled, + ) diff --git a/python/lib/python3.10/site-packages/pip/_internal/cli/spinners.py b/python/lib/python3.10/site-packages/pip/_internal/cli/spinners.py new file mode 100644 index 0000000..1e313e1 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/cli/spinners.py @@ -0,0 +1,157 @@ +import contextlib +import itertools +import logging +import sys +import time +from typing import IO, Iterator + +from pip._vendor.progress import HIDE_CURSOR, SHOW_CURSOR + +from pip._internal.utils.compat import WINDOWS +from pip._internal.utils.logging import get_indentation + +logger = logging.getLogger(__name__) + + +class SpinnerInterface: + def spin(self) -> None: + raise NotImplementedError() + + def finish(self, final_status: str) -> None: + raise NotImplementedError() + + +class InteractiveSpinner(SpinnerInterface): + def __init__( + self, + message: str, + file: IO[str] = None, + spin_chars: str = "-\\|/", + # Empirically, 8 updates/second looks nice + min_update_interval_seconds: float = 0.125, + ): + self._message = message + if file is None: + file = sys.stdout + self._file = file + self._rate_limiter = RateLimiter(min_update_interval_seconds) + self._finished = False + + self._spin_cycle = itertools.cycle(spin_chars) + + self._file.write(" " * get_indentation() + self._message + " ... ") + self._width = 0 + + def _write(self, status: str) -> None: + assert not self._finished + # Erase what we wrote before by backspacing to the beginning, writing + # spaces to overwrite the old text, and then backspacing again + backup = "\b" * self._width + self._file.write(backup + " " * self._width + backup) + # Now we have a blank slate to add our status + self._file.write(status) + self._width = len(status) + self._file.flush() + self._rate_limiter.reset() + + def spin(self) -> None: + if self._finished: + return + if not self._rate_limiter.ready(): + return + self._write(next(self._spin_cycle)) + + def finish(self, final_status: str) -> None: + if self._finished: + return + self._write(final_status) + self._file.write("\n") + self._file.flush() + self._finished = True + + +# Used for dumb terminals, non-interactive installs (no tty), etc. +# We still print updates occasionally (once every 60 seconds by default) to +# act as a keep-alive for systems like Travis-CI that take lack-of-output as +# an indication that a task has frozen. 
+class NonInteractiveSpinner(SpinnerInterface): + def __init__(self, message: str, min_update_interval_seconds: float = 60.0) -> None: + self._message = message + self._finished = False + self._rate_limiter = RateLimiter(min_update_interval_seconds) + self._update("started") + + def _update(self, status: str) -> None: + assert not self._finished + self._rate_limiter.reset() + logger.info("%s: %s", self._message, status) + + def spin(self) -> None: + if self._finished: + return + if not self._rate_limiter.ready(): + return + self._update("still running...") + + def finish(self, final_status: str) -> None: + if self._finished: + return + self._update(f"finished with status '{final_status}'") + self._finished = True + + +class RateLimiter: + def __init__(self, min_update_interval_seconds: float) -> None: + self._min_update_interval_seconds = min_update_interval_seconds + self._last_update: float = 0 + + def ready(self) -> bool: + now = time.time() + delta = now - self._last_update + return delta >= self._min_update_interval_seconds + + def reset(self) -> None: + self._last_update = time.time() + + +@contextlib.contextmanager +def open_spinner(message: str) -> Iterator[SpinnerInterface]: + # Interactive spinner goes directly to sys.stdout rather than being routed + # through the logging system, but it acts like it has level INFO, + # i.e. it's only displayed if we're at level INFO or better. + # Non-interactive spinner goes through the logging system, so it is always + # in sync with logging configuration. + if sys.stdout.isatty() and logger.getEffectiveLevel() <= logging.INFO: + spinner: SpinnerInterface = InteractiveSpinner(message) + else: + spinner = NonInteractiveSpinner(message) + try: + with hidden_cursor(sys.stdout): + yield spinner + except KeyboardInterrupt: + spinner.finish("canceled") + raise + except Exception: + spinner.finish("error") + raise + else: + spinner.finish("done") + + +@contextlib.contextmanager +def hidden_cursor(file: IO[str]) -> Iterator[None]: + # The Windows terminal does not support the hide/show cursor ANSI codes, + # even via colorama. So don't even try. + if WINDOWS: + yield + # We don't want to clutter the output with control characters if we're + # writing to a file, or if the user is running with --quiet. + # See https://github.com/pypa/pip/issues/3418 + elif not file.isatty() or logger.getEffectiveLevel() > logging.INFO: + yield + else: + file.write(HIDE_CURSOR) + try: + yield + finally: + file.write(SHOW_CURSOR) diff --git a/lib/python3.11/site-packages/pip/_internal/cli/status_codes.py b/python/lib/python3.10/site-packages/pip/_internal/cli/status_codes.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/cli/status_codes.py rename to python/lib/python3.10/site-packages/pip/_internal/cli/status_codes.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/commands/__init__.py new file mode 100644 index 0000000..c72f24f --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/__init__.py @@ -0,0 +1,127 @@ +""" +Package containing all pip commands +""" + +import importlib +from collections import namedtuple +from typing import Any, Dict, Optional + +from pip._internal.cli.base_command import Command + +CommandInfo = namedtuple("CommandInfo", "module_path, class_name, summary") + +# This dictionary does a bunch of heavy lifting for help output: +# - Enables avoiding additional (costly) imports for presenting `--help`. 
+# - The ordering matters for help display. +# +# Even though the module path starts with the same "pip._internal.commands" +# prefix, the full path makes testing easier (specifically when modifying +# `commands_dict` in test setup / teardown). +commands_dict: Dict[str, CommandInfo] = { + "install": CommandInfo( + "pip._internal.commands.install", + "InstallCommand", + "Install packages.", + ), + "download": CommandInfo( + "pip._internal.commands.download", + "DownloadCommand", + "Download packages.", + ), + "uninstall": CommandInfo( + "pip._internal.commands.uninstall", + "UninstallCommand", + "Uninstall packages.", + ), + "freeze": CommandInfo( + "pip._internal.commands.freeze", + "FreezeCommand", + "Output installed packages in requirements format.", + ), + "list": CommandInfo( + "pip._internal.commands.list", + "ListCommand", + "List installed packages.", + ), + "show": CommandInfo( + "pip._internal.commands.show", + "ShowCommand", + "Show information about installed packages.", + ), + "check": CommandInfo( + "pip._internal.commands.check", + "CheckCommand", + "Verify installed packages have compatible dependencies.", + ), + "config": CommandInfo( + "pip._internal.commands.configuration", + "ConfigurationCommand", + "Manage local and global configuration.", + ), + "search": CommandInfo( + "pip._internal.commands.search", + "SearchCommand", + "Search PyPI for packages.", + ), + "cache": CommandInfo( + "pip._internal.commands.cache", + "CacheCommand", + "Inspect and manage pip's wheel cache.", + ), + "index": CommandInfo( + "pip._internal.commands.index", + "IndexCommand", + "Inspect information available from package indexes.", + ), + "wheel": CommandInfo( + "pip._internal.commands.wheel", + "WheelCommand", + "Build wheels from your requirements.", + ), + "hash": CommandInfo( + "pip._internal.commands.hash", + "HashCommand", + "Compute hashes of package archives.", + ), + "completion": CommandInfo( + "pip._internal.commands.completion", + "CompletionCommand", + "A helper command used for command completion.", + ), + "debug": CommandInfo( + "pip._internal.commands.debug", + "DebugCommand", + "Show information useful for debugging.", + ), + "help": CommandInfo( + "pip._internal.commands.help", + "HelpCommand", + "Show help for commands.", + ), +} + + +def create_command(name: str, **kwargs: Any) -> Command: + """ + Create an instance of the Command class with the given name. 
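+
+    e.g. ``create_command("install")`` lazily imports
+    pip._internal.commands.install and returns an ``InstallCommand``
+    constructed with that name and summary.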
+ """ + module_path, class_name, summary = commands_dict[name] + module = importlib.import_module(module_path) + command_class = getattr(module, class_name) + command = command_class(name=name, summary=summary, **kwargs) + + return command + + +def get_similar_commands(name: str) -> Optional[str]: + """Command name auto-correct.""" + from difflib import get_close_matches + + name = name.lower() + + close_commands = get_close_matches(name, commands_dict.keys()) + + if close_commands: + return close_commands[0] + else: + return None diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/cache.py b/python/lib/python3.10/site-packages/pip/_internal/commands/cache.py new file mode 100644 index 0000000..f1a489d --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/cache.py @@ -0,0 +1,223 @@ +import os +import textwrap +from optparse import Values +from typing import Any, List + +import pip._internal.utils.filesystem as filesystem +from pip._internal.cli.base_command import Command +from pip._internal.cli.status_codes import ERROR, SUCCESS +from pip._internal.exceptions import CommandError, PipError +from pip._internal.utils.logging import getLogger + +logger = getLogger(__name__) + + +class CacheCommand(Command): + """ + Inspect and manage pip's wheel cache. + + Subcommands: + + - dir: Show the cache directory. + - info: Show information about the cache. + - list: List filenames of packages stored in the cache. + - remove: Remove one or more package from the cache. + - purge: Remove all items from the cache. + + ```` can be a glob expression or a package name. + """ + + ignore_require_venv = True + usage = """ + %prog dir + %prog info + %prog list [] [--format=[human, abspath]] + %prog remove + %prog purge + """ + + def add_options(self) -> None: + + self.cmd_opts.add_option( + "--format", + action="store", + dest="list_format", + default="human", + choices=("human", "abspath"), + help="Select the output format among: human (default) or abspath", + ) + + self.parser.insert_option_group(0, self.cmd_opts) + + def run(self, options: Values, args: List[str]) -> int: + handlers = { + "dir": self.get_cache_dir, + "info": self.get_cache_info, + "list": self.list_cache_items, + "remove": self.remove_cache_items, + "purge": self.purge_cache, + } + + if not options.cache_dir: + logger.error("pip cache commands can not function since cache is disabled.") + return ERROR + + # Determine action + if not args or args[0] not in handlers: + logger.error( + "Need an action (%s) to perform.", + ", ".join(sorted(handlers)), + ) + return ERROR + + action = args[0] + + # Error handling happens here, not in the action-handlers. 
+ try: + handlers[action](options, args[1:]) + except PipError as e: + logger.error(e.args[0]) + return ERROR + + return SUCCESS + + def get_cache_dir(self, options: Values, args: List[Any]) -> None: + if args: + raise CommandError("Too many arguments") + + logger.info(options.cache_dir) + + def get_cache_info(self, options: Values, args: List[Any]) -> None: + if args: + raise CommandError("Too many arguments") + + num_http_files = len(self._find_http_files(options)) + num_packages = len(self._find_wheels(options, "*")) + + http_cache_location = self._cache_dir(options, "http") + wheels_cache_location = self._cache_dir(options, "wheels") + http_cache_size = filesystem.format_directory_size(http_cache_location) + wheels_cache_size = filesystem.format_directory_size(wheels_cache_location) + + message = ( + textwrap.dedent( + """ + Package index page cache location: {http_cache_location} + Package index page cache size: {http_cache_size} + Number of HTTP files: {num_http_files} + Wheels location: {wheels_cache_location} + Wheels size: {wheels_cache_size} + Number of wheels: {package_count} + """ + ) + .format( + http_cache_location=http_cache_location, + http_cache_size=http_cache_size, + num_http_files=num_http_files, + wheels_cache_location=wheels_cache_location, + package_count=num_packages, + wheels_cache_size=wheels_cache_size, + ) + .strip() + ) + + logger.info(message) + + def list_cache_items(self, options: Values, args: List[Any]) -> None: + if len(args) > 1: + raise CommandError("Too many arguments") + + if args: + pattern = args[0] + else: + pattern = "*" + + files = self._find_wheels(options, pattern) + if options.list_format == "human": + self.format_for_human(files) + else: + self.format_for_abspath(files) + + def format_for_human(self, files: List[str]) -> None: + if not files: + logger.info("Nothing cached.") + return + + results = [] + for filename in files: + wheel = os.path.basename(filename) + size = filesystem.format_file_size(filename) + results.append(f" - {wheel} ({size})") + logger.info("Cache contents:\n") + logger.info("\n".join(sorted(results))) + + def format_for_abspath(self, files: List[str]) -> None: + if not files: + return + + results = [] + for filename in files: + results.append(filename) + + logger.info("\n".join(sorted(results))) + + def remove_cache_items(self, options: Values, args: List[Any]) -> None: + if len(args) > 1: + raise CommandError("Too many arguments") + + if not args: + raise CommandError("Please provide a pattern") + + files = self._find_wheels(options, args[0]) + + no_matching_msg = "No matching packages" + if args[0] == "*": + # Only fetch http files if no specific pattern given + files += self._find_http_files(options) + else: + # Add the pattern to the log message + no_matching_msg += ' for pattern "{}"'.format(args[0]) + + if not files: + logger.warning(no_matching_msg) + + for filename in files: + os.unlink(filename) + logger.verbose("Removed %s", filename) + logger.info("Files removed: %s", len(files)) + + def purge_cache(self, options: Values, args: List[Any]) -> None: + if args: + raise CommandError("Too many arguments") + + return self.remove_cache_items(options, ["*"]) + + def _cache_dir(self, options: Values, subdir: str) -> str: + return os.path.join(options.cache_dir, subdir) + + def _find_http_files(self, options: Values) -> List[str]: + http_dir = self._cache_dir(options, "http") + return filesystem.find_files(http_dir, "*") + + def _find_wheels(self, options: Values, pattern: str) -> List[str]: + wheel_dir = 
self._cache_dir(options, "wheels") + + # The wheel filename format, as specified in PEP 427, is: + # {distribution}-{version}(-{build})?-{python}-{abi}-{platform}.whl + # + # Additionally, non-alphanumeric values in the distribution are + # normalized to underscores (_), meaning hyphens can never occur + # before `-{version}`. + # + # Given that information: + # - If the pattern we're given contains a hyphen (-), the user is + # providing at least the version. Thus, we can just append `*.whl` + # to match the rest of it. + # - If the pattern we're given doesn't contain a hyphen (-), the + # user is only providing the name. Thus, we append `-*.whl` to + # match the hyphen before the version, followed by anything else. + # + # PEP 427: https://www.python.org/dev/peps/pep-0427/ + pattern = pattern + ("*.whl" if "-" in pattern else "-*.whl") + + return filesystem.find_files(wheel_dir, pattern) diff --git a/lib/python3.11/site-packages/pip/_internal/commands/check.py b/python/lib/python3.10/site-packages/pip/_internal/commands/check.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/commands/check.py rename to python/lib/python3.10/site-packages/pip/_internal/commands/check.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/completion.py b/python/lib/python3.10/site-packages/pip/_internal/commands/completion.py new file mode 100644 index 0000000..c0fb4ca --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/completion.py @@ -0,0 +1,96 @@ +import sys +import textwrap +from optparse import Values +from typing import List + +from pip._internal.cli.base_command import Command +from pip._internal.cli.status_codes import SUCCESS +from pip._internal.utils.misc import get_prog + +BASE_COMPLETION = """ +# pip {shell} completion start{script}# pip {shell} completion end +""" + +COMPLETION_SCRIPTS = { + "bash": """ + _pip_completion() + {{ + COMPREPLY=( $( COMP_WORDS="${{COMP_WORDS[*]}}" \\ + COMP_CWORD=$COMP_CWORD \\ + PIP_AUTO_COMPLETE=1 $1 2>/dev/null ) ) + }} + complete -o default -F _pip_completion {prog} + """, + "zsh": """ + function _pip_completion {{ + local words cword + read -Ac words + read -cn cword + reply=( $( COMP_WORDS="$words[*]" \\ + COMP_CWORD=$(( cword-1 )) \\ + PIP_AUTO_COMPLETE=1 $words[1] 2>/dev/null )) + }} + compctl -K _pip_completion {prog} + """, + "fish": """ + function __fish_complete_pip + set -lx COMP_WORDS (commandline -o) "" + set -lx COMP_CWORD ( \\ + math (contains -i -- (commandline -t) $COMP_WORDS)-1 \\ + ) + set -lx PIP_AUTO_COMPLETE 1 + string split \\ -- (eval $COMP_WORDS[1]) + end + complete -fa "(__fish_complete_pip)" -c {prog} + """, +} + + +class CompletionCommand(Command): + """A helper command to be used for command completion.""" + + ignore_require_venv = True + + def add_options(self) -> None: + self.cmd_opts.add_option( + "--bash", + "-b", + action="store_const", + const="bash", + dest="shell", + help="Emit completion code for bash", + ) + self.cmd_opts.add_option( + "--zsh", + "-z", + action="store_const", + const="zsh", + dest="shell", + help="Emit completion code for zsh", + ) + self.cmd_opts.add_option( + "--fish", + "-f", + action="store_const", + const="fish", + dest="shell", + help="Emit completion code for fish", + ) + + self.parser.insert_option_group(0, self.cmd_opts) + + def run(self, options: Values, args: List[str]) -> int: + """Prints the completion code of the given shell""" + shells = COMPLETION_SCRIPTS.keys() + shell_options = ["--" + shell for shell in 
sorted(shells)]
+        if options.shell in shells:
+            script = textwrap.dedent(
+                COMPLETION_SCRIPTS.get(options.shell, "").format(prog=get_prog())
+            )
+            print(BASE_COMPLETION.format(script=script, shell=options.shell))
+            return SUCCESS
+        else:
+            sys.stderr.write(
+                "ERROR: You must pass {}\n".format(" or ".join(shell_options))
+            )
+            return SUCCESS
diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/configuration.py b/python/lib/python3.10/site-packages/pip/_internal/commands/configuration.py
new file mode 100644
index 0000000..c6c74ed
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_internal/commands/configuration.py
@@ -0,0 +1,266 @@
+import logging
+import os
+import subprocess
+from optparse import Values
+from typing import Any, List, Optional
+
+from pip._internal.cli.base_command import Command
+from pip._internal.cli.status_codes import ERROR, SUCCESS
+from pip._internal.configuration import (
+    Configuration,
+    Kind,
+    get_configuration_files,
+    kinds,
+)
+from pip._internal.exceptions import PipError
+from pip._internal.utils.logging import indent_log
+from pip._internal.utils.misc import get_prog, write_output
+
+logger = logging.getLogger(__name__)
+
+
+class ConfigurationCommand(Command):
+    """
+    Manage local and global configuration.
+
+    Subcommands:
+
+    - list: List the active configuration (or from the file specified)
+    - edit: Edit the configuration file in an editor
+    - get: Get the value associated with name
+    - set: Set the name=value
+    - unset: Unset the value associated with name
+    - debug: List the configuration files and values defined under them
+
+    If none of --user, --global and --site are passed, a virtual
+    environment configuration file is used if one is active and the file
+    exists. Otherwise, all modifications happen to the user file by
+    default.
+    """
+
+    ignore_require_venv = True
+    usage = """
+        %prog [<file-option>] list
+        %prog [<file-option>] [--editor <editor-path>] edit
+
+        %prog [<file-option>] get name
+        %prog [<file-option>] set name value
+        %prog [<file-option>] unset name
+        %prog [<file-option>] debug
+    """
+
+    def add_options(self) -> None:
+        self.cmd_opts.add_option(
+            "--editor",
+            dest="editor",
+            action="store",
+            default=None,
+            help=(
+                "Editor to use to edit the file. Uses VISUAL or EDITOR "
+                "environment variables if not provided."
+            ),
+        )
+
+        self.cmd_opts.add_option(
+            "--global",
+            dest="global_file",
+            action="store_true",
+            default=False,
+            help="Use the system-wide configuration file only",
+        )
+
+        self.cmd_opts.add_option(
+            "--user",
+            dest="user_file",
+            action="store_true",
+            default=False,
+            help="Use the user configuration file only",
+        )
+
+        self.cmd_opts.add_option(
+            "--site",
+            dest="site_file",
+            action="store_true",
+            default=False,
+            help="Use the current environment configuration file only",
+        )
+
+        self.parser.insert_option_group(0, self.cmd_opts)
+
+    def run(self, options: Values, args: List[str]) -> int:
+        handlers = {
+            "list": self.list_values,
+            "edit": self.open_in_editor,
+            "get": self.get_name,
+            "set": self.set_name_value,
+            "unset": self.unset_name,
+            "debug": self.list_config_values,
+        }
+
+        # Determine action
+        if not args or args[0] not in handlers:
+            logger.error(
+                "Need an action (%s) to perform.",
+                ", ".join(sorted(handlers)),
+            )
+            return ERROR
+
+        action = args[0]
+
+        # Determine which configuration files are to be loaded
+        # Depends on whether the command is modifying.
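+        # ("get", "set", "unset" and "edit" operate on one concrete file, so
+        # need_value is True for them; "list" and "debug" read everything.)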
+ try: + load_only = self._determine_file( + options, need_value=(action in ["get", "set", "unset", "edit"]) + ) + except PipError as e: + logger.error(e.args[0]) + return ERROR + + # Load a new configuration + self.configuration = Configuration( + isolated=options.isolated_mode, load_only=load_only + ) + self.configuration.load() + + # Error handling happens here, not in the action-handlers. + try: + handlers[action](options, args[1:]) + except PipError as e: + logger.error(e.args[0]) + return ERROR + + return SUCCESS + + def _determine_file(self, options: Values, need_value: bool) -> Optional[Kind]: + file_options = [ + key + for key, value in ( + (kinds.USER, options.user_file), + (kinds.GLOBAL, options.global_file), + (kinds.SITE, options.site_file), + ) + if value + ] + + if not file_options: + if not need_value: + return None + # Default to user, unless there's a site file. + elif any( + os.path.exists(site_config_file) + for site_config_file in get_configuration_files()[kinds.SITE] + ): + return kinds.SITE + else: + return kinds.USER + elif len(file_options) == 1: + return file_options[0] + + raise PipError( + "Need exactly one file to operate upon " + "(--user, --site, --global) to perform." + ) + + def list_values(self, options: Values, args: List[str]) -> None: + self._get_n_args(args, "list", n=0) + + for key, value in sorted(self.configuration.items()): + write_output("%s=%r", key, value) + + def get_name(self, options: Values, args: List[str]) -> None: + key = self._get_n_args(args, "get [name]", n=1) + value = self.configuration.get_value(key) + + write_output("%s", value) + + def set_name_value(self, options: Values, args: List[str]) -> None: + key, value = self._get_n_args(args, "set [name] [value]", n=2) + self.configuration.set_value(key, value) + + self._save_configuration() + + def unset_name(self, options: Values, args: List[str]) -> None: + key = self._get_n_args(args, "unset [name]", n=1) + self.configuration.unset_value(key) + + self._save_configuration() + + def list_config_values(self, options: Values, args: List[str]) -> None: + """List config key-value pairs across different config files""" + self._get_n_args(args, "debug", n=0) + + self.print_env_var_values() + # Iterate over config files and print if they exist, and the + # key-value pairs present in them if they do + for variant, files in sorted(self.configuration.iter_config_files()): + write_output("%s:", variant) + for fname in files: + with indent_log(): + file_exists = os.path.exists(fname) + write_output("%s, exists: %r", fname, file_exists) + if file_exists: + self.print_config_file_values(variant) + + def print_config_file_values(self, variant: Kind) -> None: + """Get key-value pairs from the file of a variant""" + for name, value in self.configuration.get_values_in_config(variant).items(): + with indent_log(): + write_output("%s: %s", name, value) + + def print_env_var_values(self) -> None: + """Get key-values pairs present as environment variables""" + write_output("%s:", "env_var") + with indent_log(): + for key, value in sorted(self.configuration.get_environ_vars()): + env_var = f"PIP_{key.upper()}" + write_output("%s=%r", env_var, value) + + def open_in_editor(self, options: Values, args: List[str]) -> None: + editor = self._determine_editor(options) + + fname = self.configuration.get_file_to_edit() + if fname is None: + raise PipError("Could not determine appropriate file.") + + try: + subprocess.check_call([editor, fname]) + except subprocess.CalledProcessError as e: + raise PipError( + "Editor 
Subprocess exited with exit code {}".format(e.returncode) + ) + + def _get_n_args(self, args: List[str], example: str, n: int) -> Any: + """Helper to make sure the command got the right number of arguments""" + if len(args) != n: + msg = ( + "Got unexpected number of arguments, expected {}. " + '(example: "{} config {}")' + ).format(n, get_prog(), example) + raise PipError(msg) + + if n == 1: + return args[0] + else: + return args + + def _save_configuration(self) -> None: + # We successfully ran a modifying command. Need to save the + # configuration. + try: + self.configuration.save() + except Exception: + logger.exception( + "Unable to save configuration. Please report this as a bug." + ) + raise PipError("Internal Error.") + + def _determine_editor(self, options: Values) -> str: + if options.editor is not None: + return options.editor + elif "VISUAL" in os.environ: + return os.environ["VISUAL"] + elif "EDITOR" in os.environ: + return os.environ["EDITOR"] + else: + raise PipError("Could not determine editor to use.") diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/debug.py b/python/lib/python3.10/site-packages/pip/_internal/commands/debug.py new file mode 100644 index 0000000..d3f1f28 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/debug.py @@ -0,0 +1,202 @@ +import locale +import logging +import os +import sys +from optparse import Values +from types import ModuleType +from typing import Any, Dict, List, Optional + +import pip._vendor +from pip._vendor.certifi import where +from pip._vendor.packaging.version import parse as parse_version + +from pip import __file__ as pip_location +from pip._internal.cli import cmdoptions +from pip._internal.cli.base_command import Command +from pip._internal.cli.cmdoptions import make_target_python +from pip._internal.cli.status_codes import SUCCESS +from pip._internal.configuration import Configuration +from pip._internal.metadata import get_environment +from pip._internal.utils.logging import indent_log +from pip._internal.utils.misc import get_pip_version + +logger = logging.getLogger(__name__) + + +def show_value(name: str, value: Any) -> None: + logger.info("%s: %s", name, value) + + +def show_sys_implementation() -> None: + logger.info("sys.implementation:") + implementation_name = sys.implementation.name + with indent_log(): + show_value("name", implementation_name) + + +def create_vendor_txt_map() -> Dict[str, str]: + vendor_txt_path = os.path.join( + os.path.dirname(pip_location), "_vendor", "vendor.txt" + ) + + with open(vendor_txt_path) as f: + # Purge non version specifying lines. + # Also, remove any space prefix or suffixes (including comments). + lines = [ + line.strip().split(" ", 1)[0] for line in f.readlines() if "==" in line + ] + + # Transform into "module" -> version dict. + return dict(line.split("==", 1) for line in lines) # type: ignore + + +def get_module_from_module_name(module_name: str) -> ModuleType: + # Module name can be uppercase in vendor.txt for some reason... + module_name = module_name.lower() + # PATCH: setuptools is actually only pkg_resources. 
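+    # A hedged illustration (not upstream commentary): pip vendors only the
+    # pkg_resources half of setuptools, so a vendor.txt line of the form
+    # "setuptools==<version>" is resolved as
+    #   get_module_from_module_name("setuptools")
+    #     -> __import__("pip._vendor.pkg_resources")
+    #     -> pip._vendor.pkg_resources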
+ if module_name == "setuptools": + module_name = "pkg_resources" + + __import__(f"pip._vendor.{module_name}", globals(), locals(), level=0) + return getattr(pip._vendor, module_name) + + +def get_vendor_version_from_module(module_name: str) -> Optional[str]: + module = get_module_from_module_name(module_name) + version = getattr(module, "__version__", None) + + if not version: + # Try to find version in debundled module info. + env = get_environment([os.path.dirname(module.__file__)]) + dist = env.get_distribution(module_name) + if dist: + version = str(dist.version) + + return version + + +def show_actual_vendor_versions(vendor_txt_versions: Dict[str, str]) -> None: + """Log the actual version and print extra info if there is + a conflict or if the actual version could not be imported. + """ + for module_name, expected_version in vendor_txt_versions.items(): + extra_message = "" + actual_version = get_vendor_version_from_module(module_name) + if not actual_version: + extra_message = ( + " (Unable to locate actual module version, using" + " vendor.txt specified version)" + ) + actual_version = expected_version + elif parse_version(actual_version) != parse_version(expected_version): + extra_message = ( + " (CONFLICT: vendor.txt suggests version should" + " be {})".format(expected_version) + ) + logger.info("%s==%s%s", module_name, actual_version, extra_message) + + +def show_vendor_versions() -> None: + logger.info("vendored library versions:") + + vendor_txt_versions = create_vendor_txt_map() + with indent_log(): + show_actual_vendor_versions(vendor_txt_versions) + + +def show_tags(options: Values) -> None: + tag_limit = 10 + + target_python = make_target_python(options) + tags = target_python.get_tags() + + # Display the target options that were explicitly provided. + formatted_target = target_python.format_given() + suffix = "" + if formatted_target: + suffix = f" (target: {formatted_target})" + + msg = "Compatible tags: {}{}".format(len(tags), suffix) + logger.info(msg) + + if options.verbose < 1 and len(tags) > tag_limit: + tags_limited = True + tags = tags[:tag_limit] + else: + tags_limited = False + + with indent_log(): + for tag in tags: + logger.info(str(tag)) + + if tags_limited: + msg = ( + "...\n[First {tag_limit} tags shown. Pass --verbose to show all.]" + ).format(tag_limit=tag_limit) + logger.info(msg) + + +def ca_bundle_info(config: Configuration) -> str: + levels = set() + for key, _ in config.items(): + levels.add(key.split(".")[0]) + + if not levels: + return "Not specified" + + levels_that_override_global = ["install", "wheel", "download"] + global_overriding_level = [ + level for level in levels if level in levels_that_override_global + ] + if not global_overriding_level: + return "global" + + if "global" in levels: + levels.remove("global") + return ", ".join(levels) + + +class DebugCommand(Command): + """ + Display debug information. + """ + + usage = """ + %prog """ + ignore_require_venv = True + + def add_options(self) -> None: + cmdoptions.add_target_python_options(self.cmd_opts) + self.parser.insert_option_group(0, self.cmd_opts) + self.parser.config.load() + + def run(self, options: Values, args: List[str]) -> int: + logger.warning( + "This command is only meant for debugging. " + "Do not use this with automation for parsing and getting these " + "details, since the output and options of this command may " + "change without notice." 
+ ) + show_value("pip version", get_pip_version()) + show_value("sys.version", sys.version) + show_value("sys.executable", sys.executable) + show_value("sys.getdefaultencoding", sys.getdefaultencoding()) + show_value("sys.getfilesystemencoding", sys.getfilesystemencoding()) + show_value( + "locale.getpreferredencoding", + locale.getpreferredencoding(), + ) + show_value("sys.platform", sys.platform) + show_sys_implementation() + + show_value("'cert' config value", ca_bundle_info(self.parser.config)) + show_value("REQUESTS_CA_BUNDLE", os.environ.get("REQUESTS_CA_BUNDLE")) + show_value("CURL_CA_BUNDLE", os.environ.get("CURL_CA_BUNDLE")) + show_value("pip._vendor.certifi.where()", where()) + show_value("pip._vendor.DEBUNDLED", pip._vendor.DEBUNDLED) + + show_vendor_versions() + + show_tags(options) + + return SUCCESS diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/download.py b/python/lib/python3.10/site-packages/pip/_internal/commands/download.py new file mode 100644 index 0000000..233b7e9 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/download.py @@ -0,0 +1,140 @@ +import logging +import os +from optparse import Values +from typing import List + +from pip._internal.cli import cmdoptions +from pip._internal.cli.cmdoptions import make_target_python +from pip._internal.cli.req_command import RequirementCommand, with_cleanup +from pip._internal.cli.status_codes import SUCCESS +from pip._internal.req.req_tracker import get_requirement_tracker +from pip._internal.utils.misc import ensure_dir, normalize_path, write_output +from pip._internal.utils.temp_dir import TempDirectory + +logger = logging.getLogger(__name__) + + +class DownloadCommand(RequirementCommand): + """ + Download packages from: + + - PyPI (and other indexes) using requirement specifiers. + - VCS project urls. + - Local project directories. + - Local or remote source archives. + + pip also supports downloading from "requirements files", which provide + an easy way to specify a whole environment to be downloaded. + """ + + usage = """ + %prog [options] [package-index-options] ... + %prog [options] -r [package-index-options] ... + %prog [options] ... + %prog [options] ... 
+ %prog [options] ...""" + + def add_options(self) -> None: + self.cmd_opts.add_option(cmdoptions.constraints()) + self.cmd_opts.add_option(cmdoptions.requirements()) + self.cmd_opts.add_option(cmdoptions.no_deps()) + self.cmd_opts.add_option(cmdoptions.global_options()) + self.cmd_opts.add_option(cmdoptions.no_binary()) + self.cmd_opts.add_option(cmdoptions.only_binary()) + self.cmd_opts.add_option(cmdoptions.prefer_binary()) + self.cmd_opts.add_option(cmdoptions.src()) + self.cmd_opts.add_option(cmdoptions.pre()) + self.cmd_opts.add_option(cmdoptions.require_hashes()) + self.cmd_opts.add_option(cmdoptions.progress_bar()) + self.cmd_opts.add_option(cmdoptions.no_build_isolation()) + self.cmd_opts.add_option(cmdoptions.use_pep517()) + self.cmd_opts.add_option(cmdoptions.no_use_pep517()) + self.cmd_opts.add_option(cmdoptions.ignore_requires_python()) + + self.cmd_opts.add_option( + "-d", + "--dest", + "--destination-dir", + "--destination-directory", + dest="download_dir", + metavar="dir", + default=os.curdir, + help="Download packages into .", + ) + + cmdoptions.add_target_python_options(self.cmd_opts) + + index_opts = cmdoptions.make_option_group( + cmdoptions.index_group, + self.parser, + ) + + self.parser.insert_option_group(0, index_opts) + self.parser.insert_option_group(0, self.cmd_opts) + + @with_cleanup + def run(self, options: Values, args: List[str]) -> int: + + options.ignore_installed = True + # editable doesn't really make sense for `pip download`, but the bowels + # of the RequirementSet code require that property. + options.editables = [] + + cmdoptions.check_dist_restriction(options) + + options.download_dir = normalize_path(options.download_dir) + ensure_dir(options.download_dir) + + session = self.get_default_session(options) + + target_python = make_target_python(options) + finder = self._build_package_finder( + options=options, + session=session, + target_python=target_python, + ignore_requires_python=options.ignore_requires_python, + ) + + req_tracker = self.enter_context(get_requirement_tracker()) + + directory = TempDirectory( + delete=not options.no_clean, + kind="download", + globally_managed=True, + ) + + reqs = self.get_requirements(args, options, finder, session) + + preparer = self.make_requirement_preparer( + temp_build_dir=directory, + options=options, + req_tracker=req_tracker, + session=session, + finder=finder, + download_dir=options.download_dir, + use_user_site=False, + verbosity=self.verbosity, + ) + + resolver = self.make_resolver( + preparer=preparer, + finder=finder, + options=options, + ignore_requires_python=options.ignore_requires_python, + py_version_info=options.python_version, + ) + + self.trace_basic_info(finder) + + requirement_set = resolver.resolve(reqs, check_supported_wheels=True) + + downloaded: List[str] = [] + for req in requirement_set.requirements.values(): + if req.satisfied_by is None: + assert req.name is not None + preparer.save_linked_requirement(req) + downloaded.append(req.name) + if downloaded: + write_output("Successfully downloaded %s", " ".join(downloaded)) + + return SUCCESS diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/freeze.py b/python/lib/python3.10/site-packages/pip/_internal/commands/freeze.py new file mode 100644 index 0000000..6e9cc76 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/freeze.py @@ -0,0 +1,97 @@ +import sys +from optparse import Values +from typing import List + +from pip._internal.cli import cmdoptions +from pip._internal.cli.base_command 
import Command +from pip._internal.cli.status_codes import SUCCESS +from pip._internal.operations.freeze import freeze +from pip._internal.utils.compat import stdlib_pkgs + +DEV_PKGS = {"pip", "setuptools", "distribute", "wheel", "pkg-resources"} + + +class FreezeCommand(Command): + """ + Output installed packages in requirements format. + + packages are listed in a case-insensitive sorted order. + """ + + usage = """ + %prog [options]""" + log_streams = ("ext://sys.stderr", "ext://sys.stderr") + + def add_options(self) -> None: + self.cmd_opts.add_option( + "-r", + "--requirement", + dest="requirements", + action="append", + default=[], + metavar="file", + help=( + "Use the order in the given requirements file and its " + "comments when generating output. This option can be " + "used multiple times." + ), + ) + self.cmd_opts.add_option( + "-l", + "--local", + dest="local", + action="store_true", + default=False, + help=( + "If in a virtualenv that has global access, do not output " + "globally-installed packages." + ), + ) + self.cmd_opts.add_option( + "--user", + dest="user", + action="store_true", + default=False, + help="Only output packages installed in user-site.", + ) + self.cmd_opts.add_option(cmdoptions.list_path()) + self.cmd_opts.add_option( + "--all", + dest="freeze_all", + action="store_true", + help=( + "Do not skip these packages in the output:" + " {}".format(", ".join(DEV_PKGS)) + ), + ) + self.cmd_opts.add_option( + "--exclude-editable", + dest="exclude_editable", + action="store_true", + help="Exclude editable package from output.", + ) + self.cmd_opts.add_option(cmdoptions.list_exclude()) + + self.parser.insert_option_group(0, self.cmd_opts) + + def run(self, options: Values, args: List[str]) -> int: + skip = set(stdlib_pkgs) + if not options.freeze_all: + skip.update(DEV_PKGS) + + if options.excludes: + skip.update(options.excludes) + + cmdoptions.check_list_path_option(options) + + for line in freeze( + requirement=options.requirements, + local_only=options.local, + user_only=options.user, + paths=options.path, + isolated=options.isolated_mode, + skip=skip, + exclude_editable=options.exclude_editable, + ): + sys.stdout.write(line + "\n") + return SUCCESS diff --git a/lib/python3.11/site-packages/pip/_internal/commands/hash.py b/python/lib/python3.10/site-packages/pip/_internal/commands/hash.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/commands/hash.py rename to python/lib/python3.10/site-packages/pip/_internal/commands/hash.py diff --git a/lib/python3.11/site-packages/pip/_internal/commands/help.py b/python/lib/python3.10/site-packages/pip/_internal/commands/help.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/commands/help.py rename to python/lib/python3.10/site-packages/pip/_internal/commands/help.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/index.py b/python/lib/python3.10/site-packages/pip/_internal/commands/index.py new file mode 100644 index 0000000..9d8aae3 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/index.py @@ -0,0 +1,139 @@ +import logging +from optparse import Values +from typing import Any, Iterable, List, Optional, Union + +from pip._vendor.packaging.version import LegacyVersion, Version + +from pip._internal.cli import cmdoptions +from pip._internal.cli.req_command import IndexGroupCommand +from pip._internal.cli.status_codes import ERROR, SUCCESS +from pip._internal.commands.search import print_dist_installation_info +from 
pip._internal.exceptions import CommandError, DistributionNotFound, PipError +from pip._internal.index.collector import LinkCollector +from pip._internal.index.package_finder import PackageFinder +from pip._internal.models.selection_prefs import SelectionPreferences +from pip._internal.models.target_python import TargetPython +from pip._internal.network.session import PipSession +from pip._internal.utils.misc import write_output + +logger = logging.getLogger(__name__) + + +class IndexCommand(IndexGroupCommand): + """ + Inspect information available from package indexes. + """ + + usage = """ + %prog versions + """ + + def add_options(self) -> None: + cmdoptions.add_target_python_options(self.cmd_opts) + + self.cmd_opts.add_option(cmdoptions.ignore_requires_python()) + self.cmd_opts.add_option(cmdoptions.pre()) + self.cmd_opts.add_option(cmdoptions.no_binary()) + self.cmd_opts.add_option(cmdoptions.only_binary()) + + index_opts = cmdoptions.make_option_group( + cmdoptions.index_group, + self.parser, + ) + + self.parser.insert_option_group(0, index_opts) + self.parser.insert_option_group(0, self.cmd_opts) + + def run(self, options: Values, args: List[str]) -> int: + handlers = { + "versions": self.get_available_package_versions, + } + + logger.warning( + "pip index is currently an experimental command. " + "It may be removed/changed in a future release " + "without prior warning." + ) + + # Determine action + if not args or args[0] not in handlers: + logger.error( + "Need an action (%s) to perform.", + ", ".join(sorted(handlers)), + ) + return ERROR + + action = args[0] + + # Error handling happens here, not in the action-handlers. + try: + handlers[action](options, args[1:]) + except PipError as e: + logger.error(e.args[0]) + return ERROR + + return SUCCESS + + def _build_package_finder( + self, + options: Values, + session: PipSession, + target_python: Optional[TargetPython] = None, + ignore_requires_python: Optional[bool] = None, + ) -> PackageFinder: + """ + Create a package finder appropriate to the index command. + """ + link_collector = LinkCollector.create(session, options=options) + + # Pass allow_yanked=False to ignore yanked versions. 
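+        # Put differently (a sketch of the effect, not upstream commentary):
+        # yanked releases (PEP 592) never appear in `pip index versions`
+        # output, and pre-releases only appear when --pre is given; the same
+        # pre-release filter is applied again in
+        # get_available_package_versions() below.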
+ selection_prefs = SelectionPreferences( + allow_yanked=False, + allow_all_prereleases=options.pre, + ignore_requires_python=ignore_requires_python, + ) + + return PackageFinder.create( + link_collector=link_collector, + selection_prefs=selection_prefs, + target_python=target_python, + use_deprecated_html5lib="html5lib" in options.deprecated_features_enabled, + ) + + def get_available_package_versions(self, options: Values, args: List[Any]) -> None: + if len(args) != 1: + raise CommandError("You need to specify exactly one argument") + + target_python = cmdoptions.make_target_python(options) + query = args[0] + + with self._build_session(options) as session: + finder = self._build_package_finder( + options=options, + session=session, + target_python=target_python, + ignore_requires_python=options.ignore_requires_python, + ) + + versions: Iterable[Union[LegacyVersion, Version]] = ( + candidate.version for candidate in finder.find_all_candidates(query) + ) + + if not options.pre: + # Remove prereleases + versions = ( + version for version in versions if not version.is_prerelease + ) + versions = set(versions) + + if not versions: + raise DistributionNotFound( + "No matching distribution found for {}".format(query) + ) + + formatted_versions = [str(ver) for ver in sorted(versions, reverse=True)] + latest = formatted_versions[0] + + write_output("{} ({})".format(query, latest)) + write_output("Available versions: {}".format(", ".join(formatted_versions))) + print_dist_installation_info(query, latest) diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/install.py b/python/lib/python3.10/site-packages/pip/_internal/commands/install.py new file mode 100644 index 0000000..34e4c2f --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/install.py @@ -0,0 +1,771 @@ +import errno +import operator +import os +import shutil +import site +from optparse import SUPPRESS_HELP, Values +from typing import Iterable, List, Optional + +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.cache import WheelCache +from pip._internal.cli import cmdoptions +from pip._internal.cli.cmdoptions import make_target_python +from pip._internal.cli.req_command import ( + RequirementCommand, + warn_if_run_as_root, + with_cleanup, +) +from pip._internal.cli.status_codes import ERROR, SUCCESS +from pip._internal.exceptions import CommandError, InstallationError +from pip._internal.locations import get_scheme +from pip._internal.metadata import get_environment +from pip._internal.models.format_control import FormatControl +from pip._internal.operations.check import ConflictDetails, check_install_conflicts +from pip._internal.req import install_given_reqs +from pip._internal.req.req_install import InstallRequirement +from pip._internal.req.req_tracker import get_requirement_tracker +from pip._internal.utils.compat import WINDOWS +from pip._internal.utils.distutils_args import parse_distutils_args +from pip._internal.utils.filesystem import test_writable_dir +from pip._internal.utils.logging import getLogger +from pip._internal.utils.misc import ( + ensure_dir, + get_pip_version, + protect_pip_from_modification_on_windows, + write_output, +) +from pip._internal.utils.temp_dir import TempDirectory +from pip._internal.utils.virtualenv import ( + running_under_virtualenv, + virtualenv_no_global, +) +from pip._internal.wheel_builder import ( + BinaryAllowedPredicate, + build, + should_build_for_install_command, +) + +logger = getLogger(__name__) + + +def 
get_check_binary_allowed(format_control: FormatControl) -> BinaryAllowedPredicate: + def check_binary_allowed(req: InstallRequirement) -> bool: + canonical_name = canonicalize_name(req.name or "") + allowed_formats = format_control.get_allowed_formats(canonical_name) + return "binary" in allowed_formats + + return check_binary_allowed + + +class InstallCommand(RequirementCommand): + """ + Install packages from: + + - PyPI (and other indexes) using requirement specifiers. + - VCS project urls. + - Local project directories. + - Local or remote source archives. + + pip also supports installing from "requirements files", which provide + an easy way to specify a whole environment to be installed. + """ + + usage = """ + %prog [options] [package-index-options] ... + %prog [options] -r [package-index-options] ... + %prog [options] [-e] ... + %prog [options] [-e] ... + %prog [options] ...""" + + def add_options(self) -> None: + self.cmd_opts.add_option(cmdoptions.requirements()) + self.cmd_opts.add_option(cmdoptions.constraints()) + self.cmd_opts.add_option(cmdoptions.no_deps()) + self.cmd_opts.add_option(cmdoptions.pre()) + + self.cmd_opts.add_option(cmdoptions.editable()) + self.cmd_opts.add_option( + "-t", + "--target", + dest="target_dir", + metavar="dir", + default=None, + help=( + "Install packages into . " + "By default this will not replace existing files/folders in " + ". Use --upgrade to replace existing packages in " + "with new versions." + ), + ) + cmdoptions.add_target_python_options(self.cmd_opts) + + self.cmd_opts.add_option( + "--user", + dest="use_user_site", + action="store_true", + help=( + "Install to the Python user install directory for your " + "platform. Typically ~/.local/, or %APPDATA%\\Python on " + "Windows. (See the Python documentation for site.USER_BASE " + "for full details.)" + ), + ) + self.cmd_opts.add_option( + "--no-user", + dest="use_user_site", + action="store_false", + help=SUPPRESS_HELP, + ) + self.cmd_opts.add_option( + "--root", + dest="root_path", + metavar="dir", + default=None, + help="Install everything relative to this alternate root directory.", + ) + self.cmd_opts.add_option( + "--prefix", + dest="prefix_path", + metavar="dir", + default=None, + help=( + "Installation prefix where lib, bin and other top-level " + "folders are placed" + ), + ) + + self.cmd_opts.add_option(cmdoptions.src()) + + self.cmd_opts.add_option( + "-U", + "--upgrade", + dest="upgrade", + action="store_true", + help=( + "Upgrade all specified packages to the newest available " + "version. The handling of dependencies depends on the " + "upgrade-strategy used." + ), + ) + + self.cmd_opts.add_option( + "--upgrade-strategy", + dest="upgrade_strategy", + default="only-if-needed", + choices=["only-if-needed", "eager"], + help=( + "Determines how dependency upgrading should be handled " + "[default: %default]. " + '"eager" - dependencies are upgraded regardless of ' + "whether the currently installed version satisfies the " + "requirements of the upgraded package(s). " + '"only-if-needed" - are upgraded only when they do not ' + "satisfy the requirements of the upgraded package(s)." + ), + ) + + self.cmd_opts.add_option( + "--force-reinstall", + dest="force_reinstall", + action="store_true", + help="Reinstall all packages even if they are already up-to-date.", + ) + + self.cmd_opts.add_option( + "-I", + "--ignore-installed", + dest="ignore_installed", + action="store_true", + help=( + "Ignore the installed packages, overwriting them. 
" + "This can break your system if the existing package " + "is of a different version or was installed " + "with a different package manager!" + ), + ) + + self.cmd_opts.add_option(cmdoptions.ignore_requires_python()) + self.cmd_opts.add_option(cmdoptions.no_build_isolation()) + self.cmd_opts.add_option(cmdoptions.use_pep517()) + self.cmd_opts.add_option(cmdoptions.no_use_pep517()) + + self.cmd_opts.add_option(cmdoptions.install_options()) + self.cmd_opts.add_option(cmdoptions.global_options()) + + self.cmd_opts.add_option( + "--compile", + action="store_true", + dest="compile", + default=True, + help="Compile Python source files to bytecode", + ) + + self.cmd_opts.add_option( + "--no-compile", + action="store_false", + dest="compile", + help="Do not compile Python source files to bytecode", + ) + + self.cmd_opts.add_option( + "--no-warn-script-location", + action="store_false", + dest="warn_script_location", + default=True, + help="Do not warn when installing scripts outside PATH", + ) + self.cmd_opts.add_option( + "--no-warn-conflicts", + action="store_false", + dest="warn_about_conflicts", + default=True, + help="Do not warn about broken dependencies", + ) + + self.cmd_opts.add_option(cmdoptions.no_binary()) + self.cmd_opts.add_option(cmdoptions.only_binary()) + self.cmd_opts.add_option(cmdoptions.prefer_binary()) + self.cmd_opts.add_option(cmdoptions.require_hashes()) + self.cmd_opts.add_option(cmdoptions.progress_bar()) + + index_opts = cmdoptions.make_option_group( + cmdoptions.index_group, + self.parser, + ) + + self.parser.insert_option_group(0, index_opts) + self.parser.insert_option_group(0, self.cmd_opts) + + @with_cleanup + def run(self, options: Values, args: List[str]) -> int: + if options.use_user_site and options.target_dir is not None: + raise CommandError("Can not combine '--user' and '--target'") + + cmdoptions.check_install_build_global(options) + upgrade_strategy = "to-satisfy-only" + if options.upgrade: + upgrade_strategy = options.upgrade_strategy + + cmdoptions.check_dist_restriction(options, check_target=True) + + install_options = options.install_options or [] + + logger.verbose("Using %s", get_pip_version()) + options.use_user_site = decide_user_install( + options.use_user_site, + prefix_path=options.prefix_path, + target_dir=options.target_dir, + root_path=options.root_path, + isolated_mode=options.isolated_mode, + ) + + target_temp_dir: Optional[TempDirectory] = None + target_temp_dir_path: Optional[str] = None + if options.target_dir: + options.ignore_installed = True + options.target_dir = os.path.abspath(options.target_dir) + if ( + # fmt: off + os.path.exists(options.target_dir) and + not os.path.isdir(options.target_dir) + # fmt: on + ): + raise CommandError( + "Target path exists but is not a directory, will not continue." 
+ ) + + # Create a target directory for using with the target option + target_temp_dir = TempDirectory(kind="target") + target_temp_dir_path = target_temp_dir.path + self.enter_context(target_temp_dir) + + global_options = options.global_options or [] + + session = self.get_default_session(options) + + target_python = make_target_python(options) + finder = self._build_package_finder( + options=options, + session=session, + target_python=target_python, + ignore_requires_python=options.ignore_requires_python, + ) + wheel_cache = WheelCache(options.cache_dir, options.format_control) + + req_tracker = self.enter_context(get_requirement_tracker()) + + directory = TempDirectory( + delete=not options.no_clean, + kind="install", + globally_managed=True, + ) + + try: + reqs = self.get_requirements(args, options, finder, session) + + # Only when installing is it permitted to use PEP 660. + # In other circumstances (pip wheel, pip download) we generate + # regular (i.e. non editable) metadata and wheels. + for req in reqs: + req.permit_editable_wheels = True + + reject_location_related_install_options(reqs, options.install_options) + + preparer = self.make_requirement_preparer( + temp_build_dir=directory, + options=options, + req_tracker=req_tracker, + session=session, + finder=finder, + use_user_site=options.use_user_site, + verbosity=self.verbosity, + ) + resolver = self.make_resolver( + preparer=preparer, + finder=finder, + options=options, + wheel_cache=wheel_cache, + use_user_site=options.use_user_site, + ignore_installed=options.ignore_installed, + ignore_requires_python=options.ignore_requires_python, + force_reinstall=options.force_reinstall, + upgrade_strategy=upgrade_strategy, + use_pep517=options.use_pep517, + ) + + self.trace_basic_info(finder) + + requirement_set = resolver.resolve( + reqs, check_supported_wheels=not options.target_dir + ) + + try: + pip_req = requirement_set.get_requirement("pip") + except KeyError: + modifying_pip = False + else: + # If we're not replacing an already installed pip, + # we're not modifying it. + modifying_pip = pip_req.satisfied_by is None + protect_pip_from_modification_on_windows(modifying_pip=modifying_pip) + + check_binary_allowed = get_check_binary_allowed(finder.format_control) + + reqs_to_build = [ + r + for r in requirement_set.requirements.values() + if should_build_for_install_command(r, check_binary_allowed) + ] + + _, build_failures = build( + reqs_to_build, + wheel_cache=wheel_cache, + verify=True, + build_options=[], + global_options=[], + ) + + # If we're using PEP 517, we cannot do a legacy setup.py install + # so we fail here. + pep517_build_failure_names: List[str] = [ + r.name for r in build_failures if r.use_pep517 # type: ignore + ] + if pep517_build_failure_names: + raise InstallationError( + "Could not build wheels for {}, which is required to " + "install pyproject.toml-based projects".format( + ", ".join(pep517_build_failure_names) + ) + ) + + # For now, we just warn about failures building legacy + # requirements, as we'll fall through to a setup.py install for + # those. + for r in build_failures: + if not r.use_pep517: + r.legacy_install_reason = 8368 + + to_install = resolver.get_installation_order(requirement_set) + + # Check for conflicts in the package set we're installing. 
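+            # Sketch of the resulting warning (package names and versions are
+            # illustrative only; the real text is assembled in
+            # _warn_about_conflicts()):
+            #   pkgA 1.0 requires pkgB>=2.0, but you have pkgB 1.5 which is
+            #   incompatible.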
+ conflicts: Optional[ConflictDetails] = None + should_warn_about_conflicts = ( + not options.ignore_dependencies and options.warn_about_conflicts + ) + if should_warn_about_conflicts: + conflicts = self._determine_conflicts(to_install) + + # Don't warn about script install locations if + # --target or --prefix has been specified + warn_script_location = options.warn_script_location + if options.target_dir or options.prefix_path: + warn_script_location = False + + installed = install_given_reqs( + to_install, + install_options, + global_options, + root=options.root_path, + home=target_temp_dir_path, + prefix=options.prefix_path, + warn_script_location=warn_script_location, + use_user_site=options.use_user_site, + pycompile=options.compile, + ) + + lib_locations = get_lib_location_guesses( + user=options.use_user_site, + home=target_temp_dir_path, + root=options.root_path, + prefix=options.prefix_path, + isolated=options.isolated_mode, + ) + env = get_environment(lib_locations) + + installed.sort(key=operator.attrgetter("name")) + items = [] + for result in installed: + item = result.name + try: + installed_dist = env.get_distribution(item) + if installed_dist is not None: + item = f"{item}-{installed_dist.version}" + except Exception: + pass + items.append(item) + + if conflicts is not None: + self._warn_about_conflicts( + conflicts, + resolver_variant=self.determine_resolver_variant(options), + ) + + installed_desc = " ".join(items) + if installed_desc: + write_output( + "Successfully installed %s", + installed_desc, + ) + except OSError as error: + show_traceback = self.verbosity >= 1 + + message = create_os_error_message( + error, + show_traceback, + options.use_user_site, + ) + logger.error(message, exc_info=show_traceback) # noqa + + return ERROR + + if options.target_dir: + assert target_temp_dir + self._handle_target_dir( + options.target_dir, target_temp_dir, options.upgrade + ) + + warn_if_run_as_root() + return SUCCESS + + def _handle_target_dir( + self, target_dir: str, target_temp_dir: TempDirectory, upgrade: bool + ) -> None: + ensure_dir(target_dir) + + # Checking both purelib and platlib directories for installed + # packages to be moved to target directory + lib_dir_list = [] + + # Checking both purelib and platlib directories for installed + # packages to be moved to target directory + scheme = get_scheme("", home=target_temp_dir.path) + purelib_dir = scheme.purelib + platlib_dir = scheme.platlib + data_dir = scheme.data + + if os.path.exists(purelib_dir): + lib_dir_list.append(purelib_dir) + if os.path.exists(platlib_dir) and platlib_dir != purelib_dir: + lib_dir_list.append(platlib_dir) + if os.path.exists(data_dir): + lib_dir_list.append(data_dir) + + for lib_dir in lib_dir_list: + for item in os.listdir(lib_dir): + if lib_dir == data_dir: + ddir = os.path.join(data_dir, item) + if any(s.startswith(ddir) for s in lib_dir_list[:-1]): + continue + target_item_dir = os.path.join(target_dir, item) + if os.path.exists(target_item_dir): + if not upgrade: + logger.warning( + "Target directory %s already exists. Specify " + "--upgrade to force replacement.", + target_item_dir, + ) + continue + if os.path.islink(target_item_dir): + logger.warning( + "Target directory %s already exists and is " + "a link. 
pip will not automatically replace " + "links, please remove if replacement is " + "desired.", + target_item_dir, + ) + continue + if os.path.isdir(target_item_dir): + shutil.rmtree(target_item_dir) + else: + os.remove(target_item_dir) + + shutil.move(os.path.join(lib_dir, item), target_item_dir) + + def _determine_conflicts( + self, to_install: List[InstallRequirement] + ) -> Optional[ConflictDetails]: + try: + return check_install_conflicts(to_install) + except Exception: + logger.exception( + "Error while checking for conflicts. Please file an issue on " + "pip's issue tracker: https://github.com/pypa/pip/issues/new" + ) + return None + + def _warn_about_conflicts( + self, conflict_details: ConflictDetails, resolver_variant: str + ) -> None: + package_set, (missing, conflicting) = conflict_details + if not missing and not conflicting: + return + + parts: List[str] = [] + if resolver_variant == "legacy": + parts.append( + "pip's legacy dependency resolver does not consider dependency " + "conflicts when selecting packages. This behaviour is the " + "source of the following dependency conflicts." + ) + else: + assert resolver_variant == "2020-resolver" + parts.append( + "pip's dependency resolver does not currently take into account " + "all the packages that are installed. This behaviour is the " + "source of the following dependency conflicts." + ) + + # NOTE: There is some duplication here, with commands/check.py + for project_name in missing: + version = package_set[project_name][0] + for dependency in missing[project_name]: + message = ( + "{name} {version} requires {requirement}, " + "which is not installed." + ).format( + name=project_name, + version=version, + requirement=dependency[1], + ) + parts.append(message) + + for project_name in conflicting: + version = package_set[project_name][0] + for dep_name, dep_version, req in conflicting[project_name]: + message = ( + "{name} {version} requires {requirement}, but {you} have " + "{dep_name} {dep_version} which is incompatible." + ).format( + name=project_name, + version=version, + requirement=req, + dep_name=dep_name, + dep_version=dep_version, + you=("you" if resolver_variant == "2020-resolver" else "you'll"), + ) + parts.append(message) + + logger.critical("\n".join(parts)) + + +def get_lib_location_guesses( + user: bool = False, + home: Optional[str] = None, + root: Optional[str] = None, + isolated: bool = False, + prefix: Optional[str] = None, +) -> List[str]: + scheme = get_scheme( + "", + user=user, + home=home, + root=root, + isolated=isolated, + prefix=prefix, + ) + return [scheme.purelib, scheme.platlib] + + +def site_packages_writable(root: Optional[str], isolated: bool) -> bool: + return all( + test_writable_dir(d) + for d in set(get_lib_location_guesses(root=root, isolated=isolated)) + ) + + +def decide_user_install( + use_user_site: Optional[bool], + prefix_path: Optional[str] = None, + target_dir: Optional[str] = None, + root_path: Optional[str] = None, + isolated_mode: bool = False, +) -> bool: + """Determine whether to do a user install based on the input options. + + If use_user_site is False, no additional checks are done. + If use_user_site is True, it is checked for compatibility with other + options. + If use_user_site is None, the default behaviour depends on the environment, + which is provided by the other arguments. + """ + # In some cases (config from tox), use_user_site can be set to an integer + # rather than a bool, which 'use_user_site is False' wouldn't catch. 
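+    # e.g. use_user_site == 0 coming from a tox config must take the explicit
+    # "no user install" branch below, hence the truthiness test instead of
+    # `use_user_site is False`.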
+ if (use_user_site is not None) and (not use_user_site): + logger.debug("Non-user install by explicit request") + return False + + if use_user_site: + if prefix_path: + raise CommandError( + "Can not combine '--user' and '--prefix' as they imply " + "different installation locations" + ) + if virtualenv_no_global(): + raise InstallationError( + "Can not perform a '--user' install. User site-packages " + "are not visible in this virtualenv." + ) + logger.debug("User install by explicit request") + return True + + # If we are here, user installs have not been explicitly requested/avoided + assert use_user_site is None + + # user install incompatible with --prefix/--target + if prefix_path or target_dir: + logger.debug("Non-user install due to --prefix or --target option") + return False + + # If user installs are not enabled, choose a non-user install + if not site.ENABLE_USER_SITE: + logger.debug("Non-user install because user site-packages disabled") + return False + + # If we have permission for a non-user install, do that, + # otherwise do a user install. + if site_packages_writable(root=root_path, isolated=isolated_mode): + logger.debug("Non-user install because site-packages writeable") + return False + + logger.info( + "Defaulting to user installation because normal site-packages " + "is not writeable" + ) + return True + + +def reject_location_related_install_options( + requirements: List[InstallRequirement], options: Optional[List[str]] +) -> None: + """If any location-changing --install-option arguments were passed for + requirements or on the command-line, then show a deprecation warning. + """ + + def format_options(option_names: Iterable[str]) -> List[str]: + return ["--{}".format(name.replace("_", "-")) for name in option_names] + + offenders = [] + + for requirement in requirements: + install_options = requirement.install_options + location_options = parse_distutils_args(install_options) + if location_options: + offenders.append( + "{!r} from {}".format( + format_options(location_options.keys()), requirement + ) + ) + + if options: + location_options = parse_distutils_args(options) + if location_options: + offenders.append( + "{!r} from command line".format(format_options(location_options.keys())) + ) + + if not offenders: + return + + raise CommandError( + "Location-changing options found in --install-option: {}." + " This is unsupported, use pip-level options like --user," + " --prefix, --root, and --target instead.".format("; ".join(offenders)) + ) + + +def create_os_error_message( + error: OSError, show_traceback: bool, using_user_site: bool +) -> str: + """Format an error message for an OSError + + It may occur anytime during the execution of the install command. 
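+    For example, a PermissionError (errno.EACCES) during a non-virtualenv,
+    non-user install typically yields a hint to retry with --user or to
+    check permissions, as assembled below.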
+ """ + parts = [] + + # Mention the error if we are not going to show a traceback + parts.append("Could not install packages due to an OSError") + if not show_traceback: + parts.append(": ") + parts.append(str(error)) + else: + parts.append(".") + + # Spilt the error indication from a helper message (if any) + parts[-1] += "\n" + + # Suggest useful actions to the user: + # (1) using user site-packages or (2) verifying the permissions + if error.errno == errno.EACCES: + user_option_part = "Consider using the `--user` option" + permissions_part = "Check the permissions" + + if not running_under_virtualenv() and not using_user_site: + parts.extend( + [ + user_option_part, + " or ", + permissions_part.lower(), + ] + ) + else: + parts.append(permissions_part) + parts.append(".\n") + + # Suggest the user to enable Long Paths if path length is + # more than 260 + if ( + WINDOWS + and error.errno == errno.ENOENT + and error.filename + and len(error.filename) > 260 + ): + parts.append( + "HINT: This error might have occurred since " + "this system does not have Windows Long Path " + "support enabled. You can find information on " + "how to enable this at " + "https://pip.pypa.io/warnings/enable-long-paths\n" + ) + + return "".join(parts).strip() + "\n" diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/list.py b/python/lib/python3.10/site-packages/pip/_internal/commands/list.py new file mode 100644 index 0000000..3a545e9 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/list.py @@ -0,0 +1,363 @@ +import json +import logging +from optparse import Values +from typing import TYPE_CHECKING, Iterator, List, Optional, Sequence, Tuple, cast + +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.cli import cmdoptions +from pip._internal.cli.req_command import IndexGroupCommand +from pip._internal.cli.status_codes import SUCCESS +from pip._internal.exceptions import CommandError +from pip._internal.index.collector import LinkCollector +from pip._internal.index.package_finder import PackageFinder +from pip._internal.metadata import BaseDistribution, get_environment +from pip._internal.models.selection_prefs import SelectionPreferences +from pip._internal.network.session import PipSession +from pip._internal.utils.compat import stdlib_pkgs +from pip._internal.utils.misc import tabulate, write_output + +if TYPE_CHECKING: + from pip._internal.metadata.base import DistributionVersion + + class _DistWithLatestInfo(BaseDistribution): + """Give the distribution object a couple of extra fields. + + These will be populated during ``get_outdated()``. This is dirty but + makes the rest of the code much cleaner. + """ + + latest_version: DistributionVersion + latest_filetype: str + + _ProcessedDists = Sequence[_DistWithLatestInfo] + + +from pip._vendor.packaging.version import parse + +logger = logging.getLogger(__name__) + + +class ListCommand(IndexGroupCommand): + """ + List installed packages, including editables. + + Packages are listed in a case-insensitive sorted order. 
+ """ + + ignore_require_venv = True + usage = """ + %prog [options]""" + + def add_options(self) -> None: + self.cmd_opts.add_option( + "-o", + "--outdated", + action="store_true", + default=False, + help="List outdated packages", + ) + self.cmd_opts.add_option( + "-u", + "--uptodate", + action="store_true", + default=False, + help="List uptodate packages", + ) + self.cmd_opts.add_option( + "-e", + "--editable", + action="store_true", + default=False, + help="List editable projects.", + ) + self.cmd_opts.add_option( + "-l", + "--local", + action="store_true", + default=False, + help=( + "If in a virtualenv that has global access, do not list " + "globally-installed packages." + ), + ) + self.cmd_opts.add_option( + "--user", + dest="user", + action="store_true", + default=False, + help="Only output packages installed in user-site.", + ) + self.cmd_opts.add_option(cmdoptions.list_path()) + self.cmd_opts.add_option( + "--pre", + action="store_true", + default=False, + help=( + "Include pre-release and development versions. By default, " + "pip only finds stable versions." + ), + ) + + self.cmd_opts.add_option( + "--format", + action="store", + dest="list_format", + default="columns", + choices=("columns", "freeze", "json"), + help="Select the output format among: columns (default), freeze, or json", + ) + + self.cmd_opts.add_option( + "--not-required", + action="store_true", + dest="not_required", + help="List packages that are not dependencies of installed packages.", + ) + + self.cmd_opts.add_option( + "--exclude-editable", + action="store_false", + dest="include_editable", + help="Exclude editable package from output.", + ) + self.cmd_opts.add_option( + "--include-editable", + action="store_true", + dest="include_editable", + help="Include editable package from output.", + default=True, + ) + self.cmd_opts.add_option(cmdoptions.list_exclude()) + index_opts = cmdoptions.make_option_group(cmdoptions.index_group, self.parser) + + self.parser.insert_option_group(0, index_opts) + self.parser.insert_option_group(0, self.cmd_opts) + + def _build_package_finder( + self, options: Values, session: PipSession + ) -> PackageFinder: + """ + Create a package finder appropriate to this list command. + """ + link_collector = LinkCollector.create(session, options=options) + + # Pass allow_yanked=False to ignore yanked versions. + selection_prefs = SelectionPreferences( + allow_yanked=False, + allow_all_prereleases=options.pre, + ) + + return PackageFinder.create( + link_collector=link_collector, + selection_prefs=selection_prefs, + use_deprecated_html5lib="html5lib" in options.deprecated_features_enabled, + ) + + def run(self, options: Values, args: List[str]) -> int: + if options.outdated and options.uptodate: + raise CommandError("Options --outdated and --uptodate cannot be combined.") + + cmdoptions.check_list_path_option(options) + + skip = set(stdlib_pkgs) + if options.excludes: + skip.update(canonicalize_name(n) for n in options.excludes) + + packages: "_ProcessedDists" = [ + cast("_DistWithLatestInfo", d) + for d in get_environment(options.path).iter_installed_distributions( + local_only=options.local, + user_only=options.user, + editables_only=options.editable, + include_editables=options.include_editable, + skip=skip, + ) + ] + + # get_not_required must be called firstly in order to find and + # filter out all dependencies correctly. Otherwise a package + # can't be identified as requirement because some parent packages + # could be filtered out before. 
+ if options.not_required: + packages = self.get_not_required(packages, options) + + if options.outdated: + packages = self.get_outdated(packages, options) + elif options.uptodate: + packages = self.get_uptodate(packages, options) + + self.output_package_listing(packages, options) + return SUCCESS + + def get_outdated( + self, packages: "_ProcessedDists", options: Values + ) -> "_ProcessedDists": + return [ + dist + for dist in self.iter_packages_latest_infos(packages, options) + if parse(str(dist.latest_version)) > parse(str(dist.version)) + ] + + def get_uptodate( + self, packages: "_ProcessedDists", options: Values + ) -> "_ProcessedDists": + return [ + dist + for dist in self.iter_packages_latest_infos(packages, options) + if parse(str(dist.latest_version)) == parse(str(dist.version)) + ] + + def get_not_required( + self, packages: "_ProcessedDists", options: Values + ) -> "_ProcessedDists": + dep_keys = { + canonicalize_name(dep.name) + for dist in packages + for dep in (dist.iter_dependencies() or ()) + } + + # Create a set to remove duplicate packages, and cast it to a list + # to keep the return type consistent with get_outdated and + # get_uptodate + return list({pkg for pkg in packages if pkg.canonical_name not in dep_keys}) + + def iter_packages_latest_infos( + self, packages: "_ProcessedDists", options: Values + ) -> Iterator["_DistWithLatestInfo"]: + with self._build_session(options) as session: + finder = self._build_package_finder(options, session) + + def latest_info( + dist: "_DistWithLatestInfo", + ) -> Optional["_DistWithLatestInfo"]: + all_candidates = finder.find_all_candidates(dist.canonical_name) + if not options.pre: + # Remove prereleases + all_candidates = [ + candidate + for candidate in all_candidates + if not candidate.version.is_prerelease + ] + + evaluator = finder.make_candidate_evaluator( + project_name=dist.canonical_name, + ) + best_candidate = evaluator.sort_best_candidate(all_candidates) + if best_candidate is None: + return None + + remote_version = best_candidate.version + if best_candidate.link.is_wheel: + typ = "wheel" + else: + typ = "sdist" + dist.latest_version = remote_version + dist.latest_filetype = typ + return dist + + for dist in map(latest_info, packages): + if dist is not None: + yield dist + + def output_package_listing( + self, packages: "_ProcessedDists", options: Values + ) -> None: + packages = sorted( + packages, + key=lambda dist: dist.canonical_name, + ) + if options.list_format == "columns" and packages: + data, header = format_for_columns(packages, options) + self.output_package_listing_columns(data, header) + elif options.list_format == "freeze": + for dist in packages: + if options.verbose >= 1: + write_output( + "%s==%s (%s)", dist.raw_name, dist.version, dist.location + ) + else: + write_output("%s==%s", dist.raw_name, dist.version) + elif options.list_format == "json": + write_output(format_for_json(packages, options)) + + def output_package_listing_columns( + self, data: List[List[str]], header: List[str] + ) -> None: + # insert the header first: we need to know the size of column names + if len(data) > 0: + data.insert(0, header) + + pkg_strings, sizes = tabulate(data) + + # Create and add a separator. 
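+        # e.g. for sizes == [7, 7] this inserts a row "------- -------"
+        # directly under the "Package Version" header (widths illustrative).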
+ if len(data) > 0: + pkg_strings.insert(1, " ".join(map(lambda x: "-" * x, sizes))) + + for val in pkg_strings: + write_output(val) + + +def format_for_columns( + pkgs: "_ProcessedDists", options: Values +) -> Tuple[List[List[str]], List[str]]: + """ + Convert the package data into something usable + by output_package_listing_columns. + """ + header = ["Package", "Version"] + + running_outdated = options.outdated + if running_outdated: + header.extend(["Latest", "Type"]) + + has_editables = any(x.editable for x in pkgs) + if has_editables: + header.append("Editable project location") + + if options.verbose >= 1: + header.append("Location") + if options.verbose >= 1: + header.append("Installer") + + data = [] + for proj in pkgs: + # if we're working on the 'outdated' list, separate out the + # latest_version and type + row = [proj.raw_name, str(proj.version)] + + if running_outdated: + row.append(str(proj.latest_version)) + row.append(proj.latest_filetype) + + if has_editables: + row.append(proj.editable_project_location or "") + + if options.verbose >= 1: + row.append(proj.location or "") + if options.verbose >= 1: + row.append(proj.installer) + + data.append(row) + + return data, header + + +def format_for_json(packages: "_ProcessedDists", options: Values) -> str: + data = [] + for dist in packages: + info = { + "name": dist.raw_name, + "version": str(dist.version), + } + if options.verbose >= 1: + info["location"] = dist.location or "" + info["installer"] = dist.installer + if options.outdated: + info["latest_version"] = str(dist.latest_version) + info["latest_filetype"] = dist.latest_filetype + editable_project_location = dist.editable_project_location + if editable_project_location: + info["editable_project_location"] = editable_project_location + data.append(info) + return json.dumps(data) diff --git a/lib/python3.11/site-packages/pip/_internal/commands/search.py b/python/lib/python3.10/site-packages/pip/_internal/commands/search.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/commands/search.py rename to python/lib/python3.10/site-packages/pip/_internal/commands/search.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/show.py b/python/lib/python3.10/site-packages/pip/_internal/commands/show.py new file mode 100644 index 0000000..d5540d6 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/show.py @@ -0,0 +1,178 @@ +import logging +from optparse import Values +from typing import Iterator, List, NamedTuple, Optional + +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.cli.base_command import Command +from pip._internal.cli.status_codes import ERROR, SUCCESS +from pip._internal.metadata import BaseDistribution, get_default_environment +from pip._internal.utils.misc import write_output + +logger = logging.getLogger(__name__) + + +class ShowCommand(Command): + """ + Show information about one or more installed packages. + + The output is in RFC-compliant mail header format. 
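+    A truncated, illustrative example of the output:
+
+        Name: example-package
+        Version: 1.0.0
+        Summary: ...
+        Requires: ...
+        Required-by: ...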
+ """ + + usage = """ + %prog [options] ...""" + ignore_require_venv = True + + def add_options(self) -> None: + self.cmd_opts.add_option( + "-f", + "--files", + dest="files", + action="store_true", + default=False, + help="Show the full list of installed files for each package.", + ) + + self.parser.insert_option_group(0, self.cmd_opts) + + def run(self, options: Values, args: List[str]) -> int: + if not args: + logger.warning("ERROR: Please provide a package name or names.") + return ERROR + query = args + + results = search_packages_info(query) + if not print_results( + results, list_files=options.files, verbose=options.verbose + ): + return ERROR + return SUCCESS + + +class _PackageInfo(NamedTuple): + name: str + version: str + location: str + requires: List[str] + required_by: List[str] + installer: str + metadata_version: str + classifiers: List[str] + summary: str + homepage: str + author: str + author_email: str + license: str + entry_points: List[str] + files: Optional[List[str]] + + +def search_packages_info(query: List[str]) -> Iterator[_PackageInfo]: + """ + Gather details from installed distributions. Print distribution name, + version, location, and installed files. Installed files requires a + pip generated 'installed-files.txt' in the distributions '.egg-info' + directory. + """ + env = get_default_environment() + + installed = {dist.canonical_name: dist for dist in env.iter_distributions()} + query_names = [canonicalize_name(name) for name in query] + missing = sorted( + [name for name, pkg in zip(query, query_names) if pkg not in installed] + ) + if missing: + logger.warning("Package(s) not found: %s", ", ".join(missing)) + + def _get_requiring_packages(current_dist: BaseDistribution) -> Iterator[str]: + return ( + dist.metadata["Name"] or "UNKNOWN" + for dist in installed.values() + if current_dist.canonical_name + in {canonicalize_name(d.name) for d in dist.iter_dependencies()} + ) + + for query_name in query_names: + try: + dist = installed[query_name] + except KeyError: + continue + + requires = sorted((req.name for req in dist.iter_dependencies()), key=str.lower) + required_by = sorted(_get_requiring_packages(dist), key=str.lower) + + try: + entry_points_text = dist.read_text("entry_points.txt") + entry_points = entry_points_text.splitlines(keepends=False) + except FileNotFoundError: + entry_points = [] + + files_iter = dist.iter_declared_entries() + if files_iter is None: + files: Optional[List[str]] = None + else: + files = sorted(files_iter) + + metadata = dist.metadata + + yield _PackageInfo( + name=dist.raw_name, + version=str(dist.version), + location=dist.location or "", + requires=requires, + required_by=required_by, + installer=dist.installer, + metadata_version=dist.metadata_version or "", + classifiers=metadata.get_all("Classifier", []), + summary=metadata.get("Summary", ""), + homepage=metadata.get("Home-page", ""), + author=metadata.get("Author", ""), + author_email=metadata.get("Author-email", ""), + license=metadata.get("License", ""), + entry_points=entry_points, + files=files, + ) + + +def print_results( + distributions: Iterator[_PackageInfo], + list_files: bool, + verbose: bool, +) -> bool: + """ + Print the information from installed distributions found. 
+ """ + results_printed = False + for i, dist in enumerate(distributions): + results_printed = True + if i > 0: + write_output("---") + + write_output("Name: %s", dist.name) + write_output("Version: %s", dist.version) + write_output("Summary: %s", dist.summary) + write_output("Home-page: %s", dist.homepage) + write_output("Author: %s", dist.author) + write_output("Author-email: %s", dist.author_email) + write_output("License: %s", dist.license) + write_output("Location: %s", dist.location) + write_output("Requires: %s", ", ".join(dist.requires)) + write_output("Required-by: %s", ", ".join(dist.required_by)) + + if verbose: + write_output("Metadata-Version: %s", dist.metadata_version) + write_output("Installer: %s", dist.installer) + write_output("Classifiers:") + for classifier in dist.classifiers: + write_output(" %s", classifier) + write_output("Entry-points:") + for entry in dist.entry_points: + write_output(" %s", entry.strip()) + if list_files: + write_output("Files:") + if dist.files is None: + write_output("Cannot locate RECORD or installed-files.txt") + else: + for line in dist.files: + write_output(" %s", line.strip()) + return results_printed diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/uninstall.py b/python/lib/python3.10/site-packages/pip/_internal/commands/uninstall.py new file mode 100644 index 0000000..bb9e8e6 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/commands/uninstall.py @@ -0,0 +1,105 @@ +import logging +from optparse import Values +from typing import List + +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.cli.base_command import Command +from pip._internal.cli.req_command import SessionCommandMixin, warn_if_run_as_root +from pip._internal.cli.status_codes import SUCCESS +from pip._internal.exceptions import InstallationError +from pip._internal.req import parse_requirements +from pip._internal.req.constructors import ( + install_req_from_line, + install_req_from_parsed_requirement, +) +from pip._internal.utils.misc import protect_pip_from_modification_on_windows + +logger = logging.getLogger(__name__) + + +class UninstallCommand(Command, SessionCommandMixin): + """ + Uninstall packages. + + pip is able to uninstall most installed packages. Known exceptions are: + + - Pure distutils packages installed with ``python setup.py install``, which + leave behind no metadata to determine what files were installed. + - Script wrappers installed by ``python setup.py develop``. + """ + + usage = """ + %prog [options] ... + %prog [options] -r ...""" + + def add_options(self) -> None: + self.cmd_opts.add_option( + "-r", + "--requirement", + dest="requirements", + action="append", + default=[], + metavar="file", + help=( + "Uninstall all the packages listed in the given requirements " + "file. This option can be used multiple times." 
+            ),
+        )
+        self.cmd_opts.add_option(
+            "-y",
+            "--yes",
+            dest="yes",
+            action="store_true",
+            help="Don't ask for confirmation of uninstall deletions.",
+        )
+
+        self.parser.insert_option_group(0, self.cmd_opts)
+
+    def run(self, options: Values, args: List[str]) -> int:
+        session = self.get_default_session(options)
+
+        reqs_to_uninstall = {}
+        for name in args:
+            req = install_req_from_line(
+                name,
+                isolated=options.isolated_mode,
+            )
+            if req.name:
+                reqs_to_uninstall[canonicalize_name(req.name)] = req
+            else:
+                logger.warning(
+                    "Invalid requirement: %r ignored -"
+                    " the uninstall command expects named"
+                    " requirements.",
+                    name,
+                )
+        for filename in options.requirements:
+            for parsed_req in parse_requirements(
+                filename, options=options, session=session
+            ):
+                req = install_req_from_parsed_requirement(
+                    parsed_req, isolated=options.isolated_mode
+                )
+                if req.name:
+                    reqs_to_uninstall[canonicalize_name(req.name)] = req
+        if not reqs_to_uninstall:
+            raise InstallationError(
+                f"You must give at least one requirement to {self.name} (see "
+                f'"pip help {self.name}")'
+            )
+
+        protect_pip_from_modification_on_windows(
+            modifying_pip="pip" in reqs_to_uninstall
+        )
+
+        for req in reqs_to_uninstall.values():
+            uninstall_pathset = req.uninstall(
+                auto_confirm=options.yes,
+                verbose=self.verbosity > 0,
+            )
+            if uninstall_pathset:
+                uninstall_pathset.commit()
+
+        warn_if_run_as_root()
+        return SUCCESS
diff --git a/python/lib/python3.10/site-packages/pip/_internal/commands/wheel.py b/python/lib/python3.10/site-packages/pip/_internal/commands/wheel.py
new file mode 100644
index 0000000..d5b20dc
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_internal/commands/wheel.py
@@ -0,0 +1,178 @@
+import logging
+import os
+import shutil
+from optparse import Values
+from typing import List
+
+from pip._internal.cache import WheelCache
+from pip._internal.cli import cmdoptions
+from pip._internal.cli.req_command import RequirementCommand, with_cleanup
+from pip._internal.cli.status_codes import SUCCESS
+from pip._internal.exceptions import CommandError
+from pip._internal.req.req_install import InstallRequirement
+from pip._internal.req.req_tracker import get_requirement_tracker
+from pip._internal.utils.misc import ensure_dir, normalize_path
+from pip._internal.utils.temp_dir import TempDirectory
+from pip._internal.wheel_builder import build, should_build_for_wheel_command
+
+logger = logging.getLogger(__name__)
+
+
+class WheelCommand(RequirementCommand):
+    """
+    Build Wheel archives for your requirements and dependencies.
+
+    Wheel is a built-package format, and offers the advantage of not
+    recompiling your software during every install. For more details, see the
+    wheel docs: https://wheel.readthedocs.io/en/latest/
+
+    Requirements: setuptools>=0.8, and wheel.
+
+    'pip wheel' uses the bdist_wheel setuptools extension from the wheel
+    package to build individual wheels.
+
+    """
+
+    usage = """
+      %prog [options] <requirement specifier> ...
+      %prog [options] -r <requirements file> ...
+      %prog [options] [-e] <vcs project url> ...
+      %prog [options] [-e] <local project path> ...
+      %prog [options] <archive url/path> ..."""

+    def add_options(self) -> None:
+
+        self.cmd_opts.add_option(
+            "-w",
+            "--wheel-dir",
+            dest="wheel_dir",
+            metavar="dir",
+            default=os.curdir,
+            help=(
+                "Build wheels into <dir>, where the default is the "
+                "current working directory."
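+                # Illustrative usage: "pip wheel -w ./wheelhouse -r requirements.txt"
+                # builds wheels for everything listed in requirements.txt
+                # into ./wheelhouse.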
+ ), + ) + self.cmd_opts.add_option(cmdoptions.no_binary()) + self.cmd_opts.add_option(cmdoptions.only_binary()) + self.cmd_opts.add_option(cmdoptions.prefer_binary()) + self.cmd_opts.add_option(cmdoptions.no_build_isolation()) + self.cmd_opts.add_option(cmdoptions.use_pep517()) + self.cmd_opts.add_option(cmdoptions.no_use_pep517()) + self.cmd_opts.add_option(cmdoptions.constraints()) + self.cmd_opts.add_option(cmdoptions.editable()) + self.cmd_opts.add_option(cmdoptions.requirements()) + self.cmd_opts.add_option(cmdoptions.src()) + self.cmd_opts.add_option(cmdoptions.ignore_requires_python()) + self.cmd_opts.add_option(cmdoptions.no_deps()) + self.cmd_opts.add_option(cmdoptions.progress_bar()) + + self.cmd_opts.add_option( + "--no-verify", + dest="no_verify", + action="store_true", + default=False, + help="Don't verify if built wheel is valid.", + ) + + self.cmd_opts.add_option(cmdoptions.build_options()) + self.cmd_opts.add_option(cmdoptions.global_options()) + + self.cmd_opts.add_option( + "--pre", + action="store_true", + default=False, + help=( + "Include pre-release and development versions. By default, " + "pip only finds stable versions." + ), + ) + + self.cmd_opts.add_option(cmdoptions.require_hashes()) + + index_opts = cmdoptions.make_option_group( + cmdoptions.index_group, + self.parser, + ) + + self.parser.insert_option_group(0, index_opts) + self.parser.insert_option_group(0, self.cmd_opts) + + @with_cleanup + def run(self, options: Values, args: List[str]) -> int: + cmdoptions.check_install_build_global(options) + + session = self.get_default_session(options) + + finder = self._build_package_finder(options, session) + wheel_cache = WheelCache(options.cache_dir, options.format_control) + + options.wheel_dir = normalize_path(options.wheel_dir) + ensure_dir(options.wheel_dir) + + req_tracker = self.enter_context(get_requirement_tracker()) + + directory = TempDirectory( + delete=not options.no_clean, + kind="wheel", + globally_managed=True, + ) + + reqs = self.get_requirements(args, options, finder, session) + + preparer = self.make_requirement_preparer( + temp_build_dir=directory, + options=options, + req_tracker=req_tracker, + session=session, + finder=finder, + download_dir=options.wheel_dir, + use_user_site=False, + verbosity=self.verbosity, + ) + + resolver = self.make_resolver( + preparer=preparer, + finder=finder, + options=options, + wheel_cache=wheel_cache, + ignore_requires_python=options.ignore_requires_python, + use_pep517=options.use_pep517, + ) + + self.trace_basic_info(finder) + + requirement_set = resolver.resolve(reqs, check_supported_wheels=True) + + reqs_to_build: List[InstallRequirement] = [] + for req in requirement_set.requirements.values(): + if req.is_wheel: + preparer.save_linked_requirement(req) + elif should_build_for_wheel_command(req): + reqs_to_build.append(req) + + # build wheels + build_successes, build_failures = build( + reqs_to_build, + wheel_cache=wheel_cache, + verify=(not options.no_verify), + build_options=options.build_options or [], + global_options=options.global_options or [], + ) + for req in build_successes: + assert req.link and req.link.is_wheel + assert req.local_file_path + # copy from cache to target directory + try: + shutil.copy(req.local_file_path, options.wheel_dir) + except OSError as e: + logger.warning( + "Building wheel for %s failed: %s", + req.name, + e, + ) + build_failures.append(req) + if len(build_failures) != 0: + raise CommandError("Failed to build one or more wheels") + + return SUCCESS diff --git 
a/python/lib/python3.10/site-packages/pip/_internal/configuration.py b/python/lib/python3.10/site-packages/pip/_internal/configuration.py
new file mode 100644
index 0000000..a8092d1
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_internal/configuration.py
@@ -0,0 +1,366 @@
+"""Configuration management setup
+
+Some terminology:
+- name
+  As written in config files.
+- value
+  Value associated with a name
+- key
+  Name combined with its section (section.name)
+- variant
+  A single word describing where the configuration key-value pair came from
+"""
+
+import configparser
+import locale
+import os
+import sys
+from typing import Any, Dict, Iterable, List, NewType, Optional, Tuple
+
+from pip._internal.exceptions import (
+    ConfigurationError,
+    ConfigurationFileCouldNotBeLoaded,
+)
+from pip._internal.utils import appdirs
+from pip._internal.utils.compat import WINDOWS
+from pip._internal.utils.logging import getLogger
+from pip._internal.utils.misc import ensure_dir, enum
+
+RawConfigParser = configparser.RawConfigParser  # Shorthand
+Kind = NewType("Kind", str)
+
+CONFIG_BASENAME = "pip.ini" if WINDOWS else "pip.conf"
+ENV_NAMES_IGNORED = "version", "help"
+
+# The kinds of configurations there are.
+kinds = enum(
+    USER="user",  # User Specific
+    GLOBAL="global",  # System Wide
+    SITE="site",  # [Virtual] Environment Specific
+    ENV="env",  # from PIP_CONFIG_FILE
+    ENV_VAR="env-var",  # from Environment Variables
+)
+OVERRIDE_ORDER = kinds.GLOBAL, kinds.USER, kinds.SITE, kinds.ENV, kinds.ENV_VAR
+VALID_LOAD_ONLY = kinds.USER, kinds.GLOBAL, kinds.SITE
+
+logger = getLogger(__name__)
+
+
+# NOTE: Maybe use the optionxform attribute to normalize key names.
+def _normalize_name(name: str) -> str:
+    """Make a name consistent regardless of source (environment or file)"""
+    name = name.lower().replace("_", "-")
+    if name.startswith("--"):
+        name = name[2:]  # only prefer long opts
+    return name
+
+
+def _disassemble_key(name: str) -> List[str]:
+    if "." not in name:
+        error_message = (
+            "Key does not contain dot separated section and key. "
+            "Perhaps you wanted to use 'global.{}' instead?"
+        ).format(name)
+        raise ConfigurationError(error_message)
+    return name.split(".", 1)
+
+
+def get_configuration_files() -> Dict[Kind, List[str]]:
+    global_config_files = [
+        os.path.join(path, CONFIG_BASENAME) for path in appdirs.site_config_dirs("pip")
+    ]
+
+    site_config_file = os.path.join(sys.prefix, CONFIG_BASENAME)
+    legacy_config_file = os.path.join(
+        os.path.expanduser("~"),
+        "pip" if WINDOWS else ".pip",
+        CONFIG_BASENAME,
+    )
+    new_config_file = os.path.join(appdirs.user_config_dir("pip"), CONFIG_BASENAME)
+    return {
+        kinds.GLOBAL: global_config_files,
+        kinds.SITE: [site_config_file],
+        kinds.USER: [legacy_config_file, new_config_file],
+    }
+
+
+class Configuration:
+    """Handles management of configuration.
+
+    Provides an interface to accessing and managing configuration files.
+
+    This class provides an API that takes "section.key-name" style keys and
+    stores the value associated with it as "key-name" under the section
+    "section".
+
+    This allows for a clean interface wherein both the section and the
+    key-name are preserved in an easy-to-manage form in the configuration
+    files, and the stored data remains simple.
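+
+    A usage sketch (the key and value shown are illustrative)::
+
+        configuration = Configuration(isolated=False)
+        configuration.load()
+        configuration.get_value("global.index-url")
+        # -> "https://pypi.org/simple", if set in any loaded file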
+ """ + + def __init__(self, isolated: bool, load_only: Optional[Kind] = None) -> None: + super().__init__() + + if load_only is not None and load_only not in VALID_LOAD_ONLY: + raise ConfigurationError( + "Got invalid value for load_only - should be one of {}".format( + ", ".join(map(repr, VALID_LOAD_ONLY)) + ) + ) + self.isolated = isolated + self.load_only = load_only + + # Because we keep track of where we got the data from + self._parsers: Dict[Kind, List[Tuple[str, RawConfigParser]]] = { + variant: [] for variant in OVERRIDE_ORDER + } + self._config: Dict[Kind, Dict[str, Any]] = { + variant: {} for variant in OVERRIDE_ORDER + } + self._modified_parsers: List[Tuple[str, RawConfigParser]] = [] + + def load(self) -> None: + """Loads configuration from configuration files and environment""" + self._load_config_files() + if not self.isolated: + self._load_environment_vars() + + def get_file_to_edit(self) -> Optional[str]: + """Returns the file with highest priority in configuration""" + assert self.load_only is not None, "Need to be specified a file to be editing" + + try: + return self._get_parser_to_modify()[0] + except IndexError: + return None + + def items(self) -> Iterable[Tuple[str, Any]]: + """Returns key-value pairs like dict.items() representing the loaded + configuration + """ + return self._dictionary.items() + + def get_value(self, key: str) -> Any: + """Get a value from the configuration.""" + try: + return self._dictionary[key] + except KeyError: + raise ConfigurationError(f"No such key - {key}") + + def set_value(self, key: str, value: Any) -> None: + """Modify a value in the configuration.""" + self._ensure_have_load_only() + + assert self.load_only + fname, parser = self._get_parser_to_modify() + + if parser is not None: + section, name = _disassemble_key(key) + + # Modify the parser and the configuration + if not parser.has_section(section): + parser.add_section(section) + parser.set(section, name, value) + + self._config[self.load_only][key] = value + self._mark_as_modified(fname, parser) + + def unset_value(self, key: str) -> None: + """Unset a value in the configuration.""" + self._ensure_have_load_only() + + assert self.load_only + if key not in self._config[self.load_only]: + raise ConfigurationError(f"No such key - {key}") + + fname, parser = self._get_parser_to_modify() + + if parser is not None: + section, name = _disassemble_key(key) + if not ( + parser.has_section(section) and parser.remove_option(section, name) + ): + # The option was not removed. + raise ConfigurationError( + "Fatal Internal error [id=1]. Please report as a bug." + ) + + # The section may be empty after the option was removed. + if not parser.items(section): + parser.remove_section(section) + self._mark_as_modified(fname, parser) + + del self._config[self.load_only][key] + + def save(self) -> None: + """Save the current in-memory state.""" + self._ensure_have_load_only() + + for fname, parser in self._modified_parsers: + logger.info("Writing to %s", fname) + + # Ensure directory exists. + ensure_dir(os.path.dirname(fname)) + + with open(fname, "w") as f: + parser.write(f) + + # + # Private routines + # + + def _ensure_have_load_only(self) -> None: + if self.load_only is None: + raise ConfigurationError("Needed a specific file to be modifying.") + logger.debug("Will be working with %s variant only", self.load_only) + + @property + def _dictionary(self) -> Dict[str, Any]: + """A dictionary representing the loaded configuration.""" + # NOTE: Dictionaries are not populated if not loaded. 
So, conditionals + # are not needed here. + retval = {} + + for variant in OVERRIDE_ORDER: + retval.update(self._config[variant]) + + return retval + + def _load_config_files(self) -> None: + """Loads configuration from configuration files""" + config_files = dict(self.iter_config_files()) + if config_files[kinds.ENV][0:1] == [os.devnull]: + logger.debug( + "Skipping loading configuration files due to " + "environment's PIP_CONFIG_FILE being os.devnull" + ) + return + + for variant, files in config_files.items(): + for fname in files: + # If there's specific variant set in `load_only`, load only + # that variant, not the others. + if self.load_only is not None and variant != self.load_only: + logger.debug("Skipping file '%s' (variant: %s)", fname, variant) + continue + + parser = self._load_file(variant, fname) + + # Keeping track of the parsers used + self._parsers[variant].append((fname, parser)) + + def _load_file(self, variant: Kind, fname: str) -> RawConfigParser: + logger.verbose("For variant '%s', will try loading '%s'", variant, fname) + parser = self._construct_parser(fname) + + for section in parser.sections(): + items = parser.items(section) + self._config[variant].update(self._normalized_keys(section, items)) + + return parser + + def _construct_parser(self, fname: str) -> RawConfigParser: + parser = configparser.RawConfigParser() + # If there is no such file, don't bother reading it but create the + # parser anyway, to hold the data. + # Doing this is useful when modifying and saving files, where we don't + # need to construct a parser. + if os.path.exists(fname): + locale_encoding = locale.getpreferredencoding(False) + try: + parser.read(fname, encoding=locale_encoding) + except UnicodeDecodeError: + # See https://github.com/pypa/pip/issues/4963 + raise ConfigurationFileCouldNotBeLoaded( + reason=f"contains invalid {locale_encoding} characters", + fname=fname, + ) + except configparser.Error as error: + # See https://github.com/pypa/pip/issues/4893 + raise ConfigurationFileCouldNotBeLoaded(error=error) + return parser + + def _load_environment_vars(self) -> None: + """Loads configuration from environment variables""" + self._config[kinds.ENV_VAR].update( + self._normalized_keys(":env:", self.get_environ_vars()) + ) + + def _normalized_keys( + self, section: str, items: Iterable[Tuple[str, Any]] + ) -> Dict[str, Any]: + """Normalizes items to construct a dictionary with normalized keys. + + This routine is where the names become keys and are made the same + regardless of source - configuration files or environment. + """ + normalized = {} + for name, val in items: + key = section + "." + _normalize_name(name) + normalized[key] = val + return normalized + + def get_environ_vars(self) -> Iterable[Tuple[str, str]]: + """Returns a generator with all environmental vars with prefix PIP_""" + for key, val in os.environ.items(): + if key.startswith("PIP_"): + name = key[4:].lower() + if name not in ENV_NAMES_IGNORED: + yield name, val + + # XXX: This is patched in the tests. + def iter_config_files(self) -> Iterable[Tuple[Kind, List[str]]]: + """Yields variant and configuration files associated with it. + + This should be treated like items of a dictionary. 
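+
+        For example, on a Linux system this may yield pairs such as
+        (kinds.ENV, []), (kinds.GLOBAL, ["/etc/xdg/pip/pip.conf"]),
+        (kinds.USER, [...]) and (kinds.SITE, [...]); the exact paths
+        are illustrative and depend on appdirs.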
+ """ + # SMELL: Move the conditions out of this function + + # environment variables have the lowest priority + config_file = os.environ.get("PIP_CONFIG_FILE", None) + if config_file is not None: + yield kinds.ENV, [config_file] + else: + yield kinds.ENV, [] + + config_files = get_configuration_files() + + # at the base we have any global configuration + yield kinds.GLOBAL, config_files[kinds.GLOBAL] + + # per-user configuration next + should_load_user_config = not self.isolated and not ( + config_file and os.path.exists(config_file) + ) + if should_load_user_config: + # The legacy config file is overridden by the new config file + yield kinds.USER, config_files[kinds.USER] + + # finally virtualenv configuration first trumping others + yield kinds.SITE, config_files[kinds.SITE] + + def get_values_in_config(self, variant: Kind) -> Dict[str, Any]: + """Get values present in a config file""" + return self._config[variant] + + def _get_parser_to_modify(self) -> Tuple[str, RawConfigParser]: + # Determine which parser to modify + assert self.load_only + parsers = self._parsers[self.load_only] + if not parsers: + # This should not happen if everything works correctly. + raise ConfigurationError( + "Fatal Internal error [id=2]. Please report as a bug." + ) + + # Use the highest priority parser. + return parsers[-1] + + # XXX: This is patched in the tests. + def _mark_as_modified(self, fname: str, parser: RawConfigParser) -> None: + file_parser_tuple = (fname, parser) + if file_parser_tuple not in self._modified_parsers: + self._modified_parsers.append(file_parser_tuple) + + def __repr__(self) -> str: + return f"{self.__class__.__name__}({self._dictionary!r})" diff --git a/lib/python3.11/site-packages/pip/_internal/distributions/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/distributions/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/distributions/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/distributions/__init__.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/distributions/base.py b/python/lib/python3.10/site-packages/pip/_internal/distributions/base.py new file mode 100644 index 0000000..149fff5 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/distributions/base.py @@ -0,0 +1,36 @@ +import abc + +from pip._internal.index.package_finder import PackageFinder +from pip._internal.metadata.base import BaseDistribution +from pip._internal.req import InstallRequirement + + +class AbstractDistribution(metaclass=abc.ABCMeta): + """A base class for handling installable artifacts. + + The requirements for anything installable are as follows: + + - we must be able to determine the requirement name + (or we can't correctly handle the non-upgrade case). + + - for packages with setup requirements, we must also be able + to determine their requirements without installing additional + packages (for the same reason as run-time dependencies) + + - we must be able to create a Distribution object exposing the + above metadata. 
+ """ + + def __init__(self, req: InstallRequirement) -> None: + super().__init__() + self.req = req + + @abc.abstractmethod + def get_metadata_distribution(self) -> BaseDistribution: + raise NotImplementedError() + + @abc.abstractmethod + def prepare_distribution_metadata( + self, finder: PackageFinder, build_isolation: bool + ) -> None: + raise NotImplementedError() diff --git a/python/lib/python3.10/site-packages/pip/_internal/distributions/installed.py b/python/lib/python3.10/site-packages/pip/_internal/distributions/installed.py new file mode 100644 index 0000000..be5962f --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/distributions/installed.py @@ -0,0 +1,20 @@ +from pip._internal.distributions.base import AbstractDistribution +from pip._internal.index.package_finder import PackageFinder +from pip._internal.metadata import BaseDistribution + + +class InstalledDistribution(AbstractDistribution): + """Represents an installed package. + + This does not need any preparation as the required information has already + been computed. + """ + + def get_metadata_distribution(self) -> BaseDistribution: + assert self.req.satisfied_by is not None, "not actually installed" + return self.req.satisfied_by + + def prepare_distribution_metadata( + self, finder: PackageFinder, build_isolation: bool + ) -> None: + pass diff --git a/python/lib/python3.10/site-packages/pip/_internal/distributions/sdist.py b/python/lib/python3.10/site-packages/pip/_internal/distributions/sdist.py new file mode 100644 index 0000000..bdaf403 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/distributions/sdist.py @@ -0,0 +1,127 @@ +import logging +from typing import Iterable, Set, Tuple + +from pip._internal.build_env import BuildEnvironment +from pip._internal.distributions.base import AbstractDistribution +from pip._internal.exceptions import InstallationError +from pip._internal.index.package_finder import PackageFinder +from pip._internal.metadata import BaseDistribution +from pip._internal.utils.subprocess import runner_with_spinner_message + +logger = logging.getLogger(__name__) + + +class SourceDistribution(AbstractDistribution): + """Represents a source distribution. + + The preparation step for these needs metadata for the packages to be + generated, either using PEP 517 or using the legacy `setup.py egg_info`. + """ + + def get_metadata_distribution(self) -> BaseDistribution: + return self.req.get_dist() + + def prepare_distribution_metadata( + self, finder: PackageFinder, build_isolation: bool + ) -> None: + # Load pyproject.toml, to determine whether PEP 517 is to be used + self.req.load_pyproject_toml() + + # Set up the build isolation, if this requirement should be isolated + should_isolate = self.req.use_pep517 and build_isolation + if should_isolate: + # Setup an isolated environment and install the build backend static + # requirements in it. + self._prepare_build_backend(finder) + # Check that if the requirement is editable, it either supports PEP 660 or + # has a setup.py or a setup.cfg. This cannot be done earlier because we need + # to setup the build backend to verify it supports build_editable, nor can + # it be done later, because we want to avoid installing build requirements + # needlessly. Doing it here also works around setuptools generating + # UNKNOWN.egg-info when running get_requires_for_build_wheel on a directory + # without setup.py nor setup.cfg. + self.req.isolated_editable_sanity_check() + # Install the dynamic build requirements. 
+ self._install_build_reqs(finder) + + self.req.prepare_metadata() + + def _prepare_build_backend(self, finder: PackageFinder) -> None: + # Isolate in a BuildEnvironment and install the build-time + # requirements. + pyproject_requires = self.req.pyproject_requires + assert pyproject_requires is not None + + self.req.build_env = BuildEnvironment() + self.req.build_env.install_requirements( + finder, pyproject_requires, "overlay", kind="build dependencies" + ) + conflicting, missing = self.req.build_env.check_requirements( + self.req.requirements_to_check + ) + if conflicting: + self._raise_conflicts("PEP 517/518 supported requirements", conflicting) + if missing: + logger.warning( + "Missing build requirements in pyproject.toml for %s.", + self.req, + ) + logger.warning( + "The project does not specify a build backend, and " + "pip cannot fall back to setuptools without %s.", + " and ".join(map(repr, sorted(missing))), + ) + + def _get_build_requires_wheel(self) -> Iterable[str]: + with self.req.build_env: + runner = runner_with_spinner_message("Getting requirements to build wheel") + backend = self.req.pep517_backend + assert backend is not None + with backend.subprocess_runner(runner): + return backend.get_requires_for_build_wheel() + + def _get_build_requires_editable(self) -> Iterable[str]: + with self.req.build_env: + runner = runner_with_spinner_message( + "Getting requirements to build editable" + ) + backend = self.req.pep517_backend + assert backend is not None + with backend.subprocess_runner(runner): + return backend.get_requires_for_build_editable() + + def _install_build_reqs(self, finder: PackageFinder) -> None: + # Install any extra build dependencies that the backend requests. + # This must be done in a second pass, as the pyproject.toml + # dependencies must be installed before we can call the backend. + if ( + self.req.editable + and self.req.permit_editable_wheels + and self.req.supports_pyproject_editable() + ): + build_reqs = self._get_build_requires_editable() + else: + build_reqs = self._get_build_requires_wheel() + conflicting, missing = self.req.build_env.check_requirements(build_reqs) + if conflicting: + self._raise_conflicts("the backend dependencies", conflicting) + self.req.build_env.install_requirements( + finder, missing, "normal", kind="backend dependencies" + ) + + def _raise_conflicts( + self, conflicting_with: str, conflicting_reqs: Set[Tuple[str, str]] + ) -> None: + format_string = ( + "Some build dependencies for {requirement} " + "conflict with {conflicting_with}: {description}." + ) + error_message = format_string.format( + requirement=self.req, + conflicting_with=conflicting_with, + description=", ".join( + f"{installed} is incompatible with {wanted}" + for installed, wanted in sorted(conflicting_reqs) + ), + ) + raise InstallationError(error_message) diff --git a/python/lib/python3.10/site-packages/pip/_internal/distributions/wheel.py b/python/lib/python3.10/site-packages/pip/_internal/distributions/wheel.py new file mode 100644 index 0000000..340b0f3 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/distributions/wheel.py @@ -0,0 +1,31 @@ +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.distributions.base import AbstractDistribution +from pip._internal.index.package_finder import PackageFinder +from pip._internal.metadata import ( + BaseDistribution, + FilesystemWheel, + get_wheel_distribution, +) + + +class WheelDistribution(AbstractDistribution): + """Represents a wheel distribution. 
+ + This does not need any preparation as wheels can be directly unpacked. + """ + + def get_metadata_distribution(self) -> BaseDistribution: + """Loads the metadata from the wheel file into memory and returns a + Distribution that uses it, not relying on the wheel file or + requirement. + """ + assert self.req.local_file_path, "Set as part of preparation during download" + assert self.req.name, "Wheels are never unnamed" + wheel = FilesystemWheel(self.req.local_file_path) + return get_wheel_distribution(wheel, canonicalize_name(self.req.name)) + + def prepare_distribution_metadata( + self, finder: PackageFinder, build_isolation: bool + ) -> None: + pass diff --git a/python/lib/python3.10/site-packages/pip/_internal/exceptions.py b/python/lib/python3.10/site-packages/pip/_internal/exceptions.py new file mode 100644 index 0000000..97b9612 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/exceptions.py @@ -0,0 +1,658 @@ +"""Exceptions used throughout package. + +This module MUST NOT try to import from anything within `pip._internal` to +operate. This is expected to be importable from any/all files within the +subpackage and, thus, should not depend on them. +""" + +import configparser +import re +from itertools import chain, groupby, repeat +from typing import TYPE_CHECKING, Dict, List, Optional, Union + +from pip._vendor.requests.models import Request, Response +from pip._vendor.rich.console import Console, ConsoleOptions, RenderResult +from pip._vendor.rich.markup import escape +from pip._vendor.rich.text import Text + +if TYPE_CHECKING: + from hashlib import _Hash + from typing import Literal + + from pip._internal.metadata import BaseDistribution + from pip._internal.req.req_install import InstallRequirement + + +# +# Scaffolding +# +def _is_kebab_case(s: str) -> bool: + return re.match(r"^[a-z]+(-[a-z]+)*$", s) is not None + + +def _prefix_with_indent( + s: Union[Text, str], + console: Console, + *, + prefix: str, + indent: str, +) -> Text: + if isinstance(s, Text): + text = s + else: + text = console.render_str(s) + + return console.render_str(prefix, overflow="ignore") + console.render_str( + f"\n{indent}", overflow="ignore" + ).join(text.split(allow_blank=True)) + + +class PipError(Exception): + """The base pip error.""" + + +class DiagnosticPipError(PipError): + """An error, that presents diagnostic information to the user. + + This contains a bunch of logic, to enable pretty presentation of our error + messages. Each error gets a unique reference. Each error can also include + additional context, a hint and/or a note -- which are presented with the + main error message in a consistent style. + + This is adapted from the error output styling in `sphinx-theme-builder`. + """ + + reference: str + + def __init__( + self, + *, + kind: 'Literal["error", "warning"]' = "error", + reference: Optional[str] = None, + message: Union[str, Text], + context: Optional[Union[str, Text]], + hint_stmt: Optional[Union[str, Text]], + note_stmt: Optional[Union[str, Text]] = None, + link: Optional[str] = None, + ) -> None: + # Ensure a proper reference is provided. + if reference is None: + assert hasattr(self, "reference"), "error reference not provided!" + reference = self.reference + assert _is_kebab_case(reference), "error reference must be kebab-case!" 
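+        # For example, references used later in this module include
+        # "subprocess-exited-with-error" and "metadata-generation-failed",
+        # both of which satisfy the kebab-case check above.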
+ + self.kind = kind + self.reference = reference + + self.message = message + self.context = context + + self.note_stmt = note_stmt + self.hint_stmt = hint_stmt + + self.link = link + + super().__init__(f"<{self.__class__.__name__}: {self.reference}>") + + def __repr__(self) -> str: + return ( + f"<{self.__class__.__name__}(" + f"reference={self.reference!r}, " + f"message={self.message!r}, " + f"context={self.context!r}, " + f"note_stmt={self.note_stmt!r}, " + f"hint_stmt={self.hint_stmt!r}" + ")>" + ) + + def __rich_console__( + self, + console: Console, + options: ConsoleOptions, + ) -> RenderResult: + colour = "red" if self.kind == "error" else "yellow" + + yield f"[{colour} bold]{self.kind}[/]: [bold]{self.reference}[/]" + yield "" + + if not options.ascii_only: + # Present the main message, with relevant context indented. + if self.context is not None: + yield _prefix_with_indent( + self.message, + console, + prefix=f"[{colour}]×[/] ", + indent=f"[{colour}]│[/] ", + ) + yield _prefix_with_indent( + self.context, + console, + prefix=f"[{colour}]╰─>[/] ", + indent=f"[{colour}] [/] ", + ) + else: + yield _prefix_with_indent( + self.message, + console, + prefix="[red]×[/] ", + indent=" ", + ) + else: + yield self.message + if self.context is not None: + yield "" + yield self.context + + if self.note_stmt is not None or self.hint_stmt is not None: + yield "" + + if self.note_stmt is not None: + yield _prefix_with_indent( + self.note_stmt, + console, + prefix="[magenta bold]note[/]: ", + indent=" ", + ) + if self.hint_stmt is not None: + yield _prefix_with_indent( + self.hint_stmt, + console, + prefix="[cyan bold]hint[/]: ", + indent=" ", + ) + + if self.link is not None: + yield "" + yield f"Link: {self.link}" + + +# +# Actual Errors +# +class ConfigurationError(PipError): + """General exception in configuration""" + + +class InstallationError(PipError): + """General exception during installation""" + + +class UninstallationError(PipError): + """General exception during uninstallation""" + + +class MissingPyProjectBuildRequires(DiagnosticPipError): + """Raised when pyproject.toml has `build-system`, but no `build-system.requires`.""" + + reference = "missing-pyproject-build-system-requires" + + def __init__(self, *, package: str) -> None: + super().__init__( + message=f"Can not process {escape(package)}", + context=Text( + "This package has an invalid pyproject.toml file.\n" + "The [build-system] table is missing the mandatory `requires` key." + ), + note_stmt="This is an issue with the package mentioned above, not pip.", + hint_stmt=Text("See PEP 518 for the detailed specification."), + ) + + +class InvalidPyProjectBuildRequires(DiagnosticPipError): + """Raised when pyproject.toml an invalid `build-system.requires`.""" + + reference = "invalid-pyproject-build-system-requires" + + def __init__(self, *, package: str, reason: str) -> None: + super().__init__( + message=f"Can not process {escape(package)}", + context=Text( + "This package has an invalid `build-system.requires` key in " + f"pyproject.toml.\n{reason}" + ), + note_stmt="This is an issue with the package mentioned above, not pip.", + hint_stmt=Text("See PEP 518 for the detailed specification."), + ) + + +class NoneMetadataError(PipError): + """Raised when accessing a Distribution's "METADATA" or "PKG-INFO". + + This signifies an inconsistency, when the Distribution claims to have + the metadata file (if not, raise ``FileNotFoundError`` instead), but is + not actually able to produce its content. 
This may be due to permission + errors. + """ + + def __init__( + self, + dist: "BaseDistribution", + metadata_name: str, + ) -> None: + """ + :param dist: A Distribution object. + :param metadata_name: The name of the metadata being accessed + (can be "METADATA" or "PKG-INFO"). + """ + self.dist = dist + self.metadata_name = metadata_name + + def __str__(self) -> str: + # Use `dist` in the error message because its stringification + # includes more information, like the version and location. + return "None {} metadata found for distribution: {}".format( + self.metadata_name, + self.dist, + ) + + +class UserInstallationInvalid(InstallationError): + """A --user install is requested on an environment without user site.""" + + def __str__(self) -> str: + return "User base directory is not specified" + + +class InvalidSchemeCombination(InstallationError): + def __str__(self) -> str: + before = ", ".join(str(a) for a in self.args[:-1]) + return f"Cannot set {before} and {self.args[-1]} together" + + +class DistributionNotFound(InstallationError): + """Raised when a distribution cannot be found to satisfy a requirement""" + + +class RequirementsFileParseError(InstallationError): + """Raised when a general error occurs parsing a requirements file line.""" + + +class BestVersionAlreadyInstalled(PipError): + """Raised when the most up-to-date version of a package is already + installed.""" + + +class BadCommand(PipError): + """Raised when virtualenv or a command is not found""" + + +class CommandError(PipError): + """Raised when there is an error in command-line arguments""" + + +class PreviousBuildDirError(PipError): + """Raised when there's a previous conflicting build directory""" + + +class NetworkConnectionError(PipError): + """HTTP connection error""" + + def __init__( + self, error_msg: str, response: Response = None, request: Request = None + ) -> None: + """ + Initialize NetworkConnectionError with `request` and `response` + objects. + """ + self.response = response + self.request = request + self.error_msg = error_msg + if ( + self.response is not None + and not self.request + and hasattr(response, "request") + ): + self.request = self.response.request + super().__init__(error_msg, response, request) + + def __str__(self) -> str: + return str(self.error_msg) + + +class InvalidWheelFilename(InstallationError): + """Invalid wheel filename.""" + + +class UnsupportedWheel(InstallationError): + """Unsupported wheel.""" + + +class InvalidWheel(InstallationError): + """Invalid (e.g. corrupt) wheel.""" + + def __init__(self, location: str, name: str): + self.location = location + self.name = name + + def __str__(self) -> str: + return f"Wheel '{self.name}' located at {self.location} is invalid." + + +class MetadataInconsistent(InstallationError): + """Built metadata contains inconsistent information. + + This is raised when the metadata contains values (e.g. name and version) + that do not match the information previously obtained from sdist filename + or user-supplied ``#egg=`` value. 
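+
+    For example, an sdist named pkg-1.0.tar.gz whose built metadata reports
+    version 2.0 would trigger this error.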
+ """ + + def __init__( + self, ireq: "InstallRequirement", field: str, f_val: str, m_val: str + ) -> None: + self.ireq = ireq + self.field = field + self.f_val = f_val + self.m_val = m_val + + def __str__(self) -> str: + template = ( + "Requested {} has inconsistent {}: " + "filename has {!r}, but metadata has {!r}" + ) + return template.format(self.ireq, self.field, self.f_val, self.m_val) + + +class LegacyInstallFailure(DiagnosticPipError): + """Error occurred while executing `setup.py install`""" + + reference = "legacy-install-failure" + + def __init__(self, package_details: str) -> None: + super().__init__( + message="Encountered error while trying to install package.", + context=package_details, + hint_stmt="See above for output from the failure.", + note_stmt="This is an issue with the package mentioned above, not pip.", + ) + + +class InstallationSubprocessError(DiagnosticPipError, InstallationError): + """A subprocess call failed.""" + + reference = "subprocess-exited-with-error" + + def __init__( + self, + *, + command_description: str, + exit_code: int, + output_lines: Optional[List[str]], + ) -> None: + if output_lines is None: + output_prompt = Text("See above for output.") + else: + output_prompt = ( + Text.from_markup(f"[red][{len(output_lines)} lines of output][/]\n") + + Text("".join(output_lines)) + + Text.from_markup(R"[red]\[end of output][/]") + ) + + super().__init__( + message=( + f"[green]{escape(command_description)}[/] did not run successfully.\n" + f"exit code: {exit_code}" + ), + context=output_prompt, + hint_stmt=None, + note_stmt=( + "This error originates from a subprocess, and is likely not a " + "problem with pip." + ), + ) + + self.command_description = command_description + self.exit_code = exit_code + + def __str__(self) -> str: + return f"{self.command_description} exited with {self.exit_code}" + + +class MetadataGenerationFailed(InstallationSubprocessError, InstallationError): + reference = "metadata-generation-failed" + + def __init__( + self, + *, + package_details: str, + ) -> None: + super(InstallationSubprocessError, self).__init__( + message="Encountered error while generating package metadata.", + context=escape(package_details), + hint_stmt="See above for details.", + note_stmt="This is an issue with the package mentioned above, not pip.", + ) + + def __str__(self) -> str: + return "metadata generation failed" + + +class HashErrors(InstallationError): + """Multiple HashError instances rolled into one for reporting""" + + def __init__(self) -> None: + self.errors: List["HashError"] = [] + + def append(self, error: "HashError") -> None: + self.errors.append(error) + + def __str__(self) -> str: + lines = [] + self.errors.sort(key=lambda e: e.order) + for cls, errors_of_cls in groupby(self.errors, lambda e: e.__class__): + lines.append(cls.head) + lines.extend(e.body() for e in errors_of_cls) + if lines: + return "\n".join(lines) + return "" + + def __bool__(self) -> bool: + return bool(self.errors) + + +class HashError(InstallationError): + """ + A failure to verify a package against known-good hashes + + :cvar order: An int sorting hash exception classes by difficulty of + recovery (lower being harder), so the user doesn't bother fretting + about unpinned packages when he has deeper issues, like VCS + dependencies, to deal with. Also keeps error reports in a + deterministic order. + :cvar head: A section heading for display above potentially many + exceptions of this kind + :ivar req: The InstallRequirement that triggered this error. 
This is + pasted on after the exception is instantiated, because it's not + typically available earlier. + + """ + + req: Optional["InstallRequirement"] = None + head = "" + order: int = -1 + + def body(self) -> str: + """Return a summary of me for display under the heading. + + This default implementation simply prints a description of the + triggering requirement. + + :param req: The InstallRequirement that provoked this error, with + its link already populated by the resolver's _populate_link(). + + """ + return f" {self._requirement_name()}" + + def __str__(self) -> str: + return f"{self.head}\n{self.body()}" + + def _requirement_name(self) -> str: + """Return a description of the requirement that triggered me. + + This default implementation returns long description of the req, with + line numbers + + """ + return str(self.req) if self.req else "unknown package" + + +class VcsHashUnsupported(HashError): + """A hash was provided for a version-control-system-based requirement, but + we don't have a method for hashing those.""" + + order = 0 + head = ( + "Can't verify hashes for these requirements because we don't " + "have a way to hash version control repositories:" + ) + + +class DirectoryUrlHashUnsupported(HashError): + """A hash was provided for a version-control-system-based requirement, but + we don't have a method for hashing those.""" + + order = 1 + head = ( + "Can't verify hashes for these file:// requirements because they " + "point to directories:" + ) + + +class HashMissing(HashError): + """A hash was needed for a requirement but is absent.""" + + order = 2 + head = ( + "Hashes are required in --require-hashes mode, but they are " + "missing from some requirements. Here is a list of those " + "requirements along with the hashes their downloaded archives " + "actually had. Add lines like these to your requirements files to " + "prevent tampering. (If you did not enable --require-hashes " + "manually, note that it turns on automatically when any package " + "has a hash.)" + ) + + def __init__(self, gotten_hash: str) -> None: + """ + :param gotten_hash: The hash of the (possibly malicious) archive we + just downloaded + """ + self.gotten_hash = gotten_hash + + def body(self) -> str: + # Dodge circular import. + from pip._internal.utils.hashes import FAVORITE_HASH + + package = None + if self.req: + # In the case of URL-based requirements, display the original URL + # seen in the requirements file rather than the package name, + # so the output can be directly copied into the requirements file. + package = ( + self.req.original_link + if self.req.original_link + # In case someone feeds something downright stupid + # to InstallRequirement's constructor. + else getattr(self.req, "req", None) + ) + return " {} --hash={}:{}".format( + package or "unknown package", FAVORITE_HASH, self.gotten_hash + ) + + +class HashUnpinned(HashError): + """A requirement had a hash specified but was not pinned to a specific + version.""" + + order = 3 + head = ( + "In --require-hashes mode, all requirements must have their " + "versions pinned with ==. These do not:" + ) + + +class HashMismatch(HashError): + """ + Distribution file hash values don't match. + + :ivar package_name: The name of the package that triggered the hash + mismatch. Feel free to write to this after the exception is raise to + improve its error message. + + """ + + order = 4 + head = ( + "THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS " + "FILE. 
If you have updated the package versions, please update " + "the hashes. Otherwise, examine the package contents carefully; " + "someone may have tampered with them." + ) + + def __init__(self, allowed: Dict[str, List[str]], gots: Dict[str, "_Hash"]) -> None: + """ + :param allowed: A dict of algorithm names pointing to lists of allowed + hex digests + :param gots: A dict of algorithm names pointing to hashes we + actually got from the files under suspicion + """ + self.allowed = allowed + self.gots = gots + + def body(self) -> str: + return " {}:\n{}".format(self._requirement_name(), self._hash_comparison()) + + def _hash_comparison(self) -> str: + """ + Return a comparison of actual and expected hash values. + + Example:: + + Expected sha256 abcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcde + or 123451234512345123451234512345123451234512345 + Got bcdefbcdefbcdefbcdefbcdefbcdefbcdefbcdefbcdef + + """ + + def hash_then_or(hash_name: str) -> "chain[str]": + # For now, all the decent hashes have 6-char names, so we can get + # away with hard-coding space literals. + return chain([hash_name], repeat(" or")) + + lines: List[str] = [] + for hash_name, expecteds in self.allowed.items(): + prefix = hash_then_or(hash_name) + lines.extend( + (" Expected {} {}".format(next(prefix), e)) for e in expecteds + ) + lines.append( + " Got {}\n".format(self.gots[hash_name].hexdigest()) + ) + return "\n".join(lines) + + +class UnsupportedPythonVersion(InstallationError): + """Unsupported python version according to Requires-Python package + metadata.""" + + +class ConfigurationFileCouldNotBeLoaded(ConfigurationError): + """When there are errors while loading a configuration file""" + + def __init__( + self, + reason: str = "could not be loaded", + fname: Optional[str] = None, + error: Optional[configparser.Error] = None, + ) -> None: + super().__init__(error) + self.reason = reason + self.fname = fname + self.error = error + + def __str__(self) -> str: + if self.fname is not None: + message_part = f" in {self.fname}." + else: + assert self.error is not None + message_part = f".\n{self.error}\n" + return f"Configuration file {self.reason}{message_part}" diff --git a/lib/python3.11/site-packages/pip/_internal/index/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/index/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/index/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/index/__init__.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/index/collector.py b/python/lib/python3.10/site-packages/pip/_internal/index/collector.py new file mode 100644 index 0000000..4ecbb33 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/index/collector.py @@ -0,0 +1,648 @@ +""" +The main purpose of this module is to expose LinkCollector.collect_sources(). 
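+
+Illustrative usage, assuming the LinkCollector API defined later in this
+module::
+
+    collector = LinkCollector.create(session, options=options)
+    sources = collector.collect_sources(
+        project_name="pip",
+        candidates_from_page=candidates_from_page,
+    )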
+""" + +import cgi +import collections +import functools +import itertools +import logging +import os +import re +import urllib.parse +import urllib.request +import xml.etree.ElementTree +from html.parser import HTMLParser +from optparse import Values +from typing import ( + TYPE_CHECKING, + Any, + Callable, + Dict, + Iterable, + List, + MutableMapping, + NamedTuple, + Optional, + Sequence, + Tuple, + Union, +) + +from pip._vendor import html5lib, requests +from pip._vendor.requests import Response +from pip._vendor.requests.exceptions import RetryError, SSLError + +from pip._internal.exceptions import NetworkConnectionError +from pip._internal.models.link import Link +from pip._internal.models.search_scope import SearchScope +from pip._internal.network.session import PipSession +from pip._internal.network.utils import raise_for_status +from pip._internal.utils.deprecation import deprecated +from pip._internal.utils.filetypes import is_archive_file +from pip._internal.utils.misc import pairwise, redact_auth_from_url +from pip._internal.vcs import vcs + +from .sources import CandidatesFromPage, LinkSource, build_source + +if TYPE_CHECKING: + from typing import Protocol +else: + Protocol = object + +logger = logging.getLogger(__name__) + +HTMLElement = xml.etree.ElementTree.Element +ResponseHeaders = MutableMapping[str, str] + + +def _match_vcs_scheme(url: str) -> Optional[str]: + """Look for VCS schemes in the URL. + + Returns the matched VCS scheme, or None if there's no match. + """ + for scheme in vcs.schemes: + if url.lower().startswith(scheme) and url[len(scheme)] in "+:": + return scheme + return None + + +class _NotHTML(Exception): + def __init__(self, content_type: str, request_desc: str) -> None: + super().__init__(content_type, request_desc) + self.content_type = content_type + self.request_desc = request_desc + + +def _ensure_html_header(response: Response) -> None: + """Check the Content-Type header to ensure the response contains HTML. + + Raises `_NotHTML` if the content type is not text/html. + """ + content_type = response.headers.get("Content-Type", "") + if not content_type.lower().startswith("text/html"): + raise _NotHTML(content_type, response.request.method) + + +class _NotHTTP(Exception): + pass + + +def _ensure_html_response(url: str, session: PipSession) -> None: + """Send a HEAD request to the URL, and ensure the response contains HTML. + + Raises `_NotHTTP` if the URL is not available for a HEAD request, or + `_NotHTML` if the content type is not text/html. + """ + scheme, netloc, path, query, fragment = urllib.parse.urlsplit(url) + if scheme not in {"http", "https"}: + raise _NotHTTP() + + resp = session.head(url, allow_redirects=True) + raise_for_status(resp) + + _ensure_html_header(resp) + + +def _get_html_response(url: str, session: PipSession) -> Response: + """Access an HTML page with GET, and return the response. + + This consists of three parts: + + 1. If the URL looks suspiciously like an archive, send a HEAD first to + check the Content-Type is HTML, to avoid downloading a large file. + Raise `_NotHTTP` if the content type cannot be determined, or + `_NotHTML` if it is not HTML. + 2. Actually perform the request. Raise HTTP exceptions on network failures. + 3. Check the Content-Type header to make sure we got HTML, and raise + `_NotHTML` otherwise. 
+ """ + if is_archive_file(Link(url).filename): + _ensure_html_response(url, session=session) + + logger.debug("Getting page %s", redact_auth_from_url(url)) + + resp = session.get( + url, + headers={ + "Accept": "text/html", + # We don't want to blindly returned cached data for + # /simple/, because authors generally expecting that + # twine upload && pip install will function, but if + # they've done a pip install in the last ~10 minutes + # it won't. Thus by setting this to zero we will not + # blindly use any cached data, however the benefit of + # using max-age=0 instead of no-cache, is that we will + # still support conditional requests, so we will still + # minimize traffic sent in cases where the page hasn't + # changed at all, we will just always incur the round + # trip for the conditional GET now instead of only + # once per 10 minutes. + # For more information, please see pypa/pip#5670. + "Cache-Control": "max-age=0", + }, + ) + raise_for_status(resp) + + # The check for archives above only works if the url ends with + # something that looks like an archive. However that is not a + # requirement of an url. Unless we issue a HEAD request on every + # url we cannot know ahead of time for sure if something is HTML + # or not. However we can check after we've downloaded it. + _ensure_html_header(resp) + + return resp + + +def _get_encoding_from_headers(headers: ResponseHeaders) -> Optional[str]: + """Determine if we have any encoding information in our headers.""" + if headers and "Content-Type" in headers: + content_type, params = cgi.parse_header(headers["Content-Type"]) + if "charset" in params: + return params["charset"] + return None + + +def _determine_base_url(document: HTMLElement, page_url: str) -> str: + """Determine the HTML document's base URL. + + This looks for a ```` tag in the HTML document. If present, its href + attribute denotes the base URL of anchor tags in the document. If there is + no such tag (or if it does not have a valid href attribute), the HTML + file's URL is used as the base URL. + + :param document: An HTML document representation. The current + implementation expects the result of ``html5lib.parse()``. + :param page_url: The URL of the HTML document. + + TODO: Remove when `html5lib` is dropped. + """ + for base in document.findall(".//base"): + href = base.get("href") + if href is not None: + return href + return page_url + + +def _clean_url_path_part(part: str) -> str: + """ + Clean a "part" of a URL path (i.e. after splitting on "@" characters). + """ + # We unquote prior to quoting to make sure nothing is double quoted. + return urllib.parse.quote(urllib.parse.unquote(part)) + + +def _clean_file_url_path(part: str) -> str: + """ + Clean the first part of a URL path that corresponds to a local + filesystem path (i.e. the first part after splitting on "@" characters). + """ + # We unquote prior to quoting to make sure nothing is double quoted. + # Also, on Windows the path part might contain a drive letter which + # should not be quoted. On Linux where drive letters do not + # exist, the colon should be quoted. We rely on urllib.request + # to do the right thing here. + return urllib.request.pathname2url(urllib.request.url2pathname(part)) + + +# percent-encoded: / +_reserved_chars_re = re.compile("(@|%2F)", re.IGNORECASE) + + +def _clean_url_path(path: str, is_local_path: bool) -> str: + """ + Clean the path portion of a URL. 
+ """ + if is_local_path: + clean_func = _clean_file_url_path + else: + clean_func = _clean_url_path_part + + # Split on the reserved characters prior to cleaning so that + # revision strings in VCS URLs are properly preserved. + parts = _reserved_chars_re.split(path) + + cleaned_parts = [] + for to_clean, reserved in pairwise(itertools.chain(parts, [""])): + cleaned_parts.append(clean_func(to_clean)) + # Normalize %xx escapes (e.g. %2f -> %2F) + cleaned_parts.append(reserved.upper()) + + return "".join(cleaned_parts) + + +def _clean_link(url: str) -> str: + """ + Make sure a link is fully quoted. + For example, if ' ' occurs in the URL, it will be replaced with "%20", + and without double-quoting other characters. + """ + # Split the URL into parts according to the general structure + # `scheme://netloc/path;parameters?query#fragment`. + result = urllib.parse.urlparse(url) + # If the netloc is empty, then the URL refers to a local filesystem path. + is_local_path = not result.netloc + path = _clean_url_path(result.path, is_local_path=is_local_path) + return urllib.parse.urlunparse(result._replace(path=path)) + + +def _create_link_from_element( + element_attribs: Dict[str, Optional[str]], + page_url: str, + base_url: str, +) -> Optional[Link]: + """ + Convert an anchor element's attributes in a simple repository page to a Link. + """ + href = element_attribs.get("href") + if not href: + return None + + url = _clean_link(urllib.parse.urljoin(base_url, href)) + pyrequire = element_attribs.get("data-requires-python") + yanked_reason = element_attribs.get("data-yanked") + + link = Link( + url, + comes_from=page_url, + requires_python=pyrequire, + yanked_reason=yanked_reason, + ) + + return link + + +class CacheablePageContent: + def __init__(self, page: "HTMLPage") -> None: + assert page.cache_link_parsing + self.page = page + + def __eq__(self, other: object) -> bool: + return isinstance(other, type(self)) and self.page.url == other.page.url + + def __hash__(self) -> int: + return hash(self.page.url) + + +class ParseLinks(Protocol): + def __call__( + self, page: "HTMLPage", use_deprecated_html5lib: bool + ) -> Iterable[Link]: + ... + + +def with_cached_html_pages(fn: ParseLinks) -> ParseLinks: + """ + Given a function that parses an Iterable[Link] from an HTMLPage, cache the + function's result (keyed by CacheablePageContent), unless the HTMLPage + `page` has `page.cache_link_parsing == False`. + """ + + @functools.lru_cache(maxsize=None) + def wrapper( + cacheable_page: CacheablePageContent, use_deprecated_html5lib: bool + ) -> List[Link]: + return list(fn(cacheable_page.page, use_deprecated_html5lib)) + + @functools.wraps(fn) + def wrapper_wrapper(page: "HTMLPage", use_deprecated_html5lib: bool) -> List[Link]: + if page.cache_link_parsing: + return wrapper(CacheablePageContent(page), use_deprecated_html5lib) + return list(fn(page, use_deprecated_html5lib)) + + return wrapper_wrapper + + +def _parse_links_html5lib(page: "HTMLPage") -> Iterable[Link]: + """ + Parse an HTML document, and yield its anchor elements as Link objects. + + TODO: Remove when `html5lib` is dropped. 
+    """
+    document = html5lib.parse(
+        page.content,
+        transport_encoding=page.encoding,
+        namespaceHTMLElements=False,
+    )
+
+    url = page.url
+    base_url = _determine_base_url(document, url)
+    for anchor in document.findall(".//a"):
+        link = _create_link_from_element(
+            anchor.attrib,
+            page_url=url,
+            base_url=base_url,
+        )
+        if link is None:
+            continue
+        yield link
+
+
+@with_cached_html_pages
+def parse_links(page: "HTMLPage", use_deprecated_html5lib: bool) -> Iterable[Link]:
+    """
+    Parse an HTML document, and yield its anchor elements as Link objects.
+    """
+    encoding = page.encoding or "utf-8"
+
+    # Check if the page starts with a valid doctype, to decide whether to use
+    # http.parser or (deprecated) html5lib for parsing -- unless explicitly
+    # requested to use html5lib.
+    if not use_deprecated_html5lib:
+        expected_doctype = "<!doctype html>".encode(encoding)
+        actual_start = page.content[: len(expected_doctype)]
+        if actual_start.decode(encoding).lower() != "<!doctype html>":
+            deprecated(
+                reason=(
+                    f"The HTML index page being used ({page.url}) is not a proper "
+                    "HTML 5 document. This is in violation of PEP 503 which requires "
+                    "these pages to be well-formed HTML 5 documents. Please reach out "
+                    "to the owners of this index page, and ask them to update this "
+                    "index page to a valid HTML 5 document."
+                ),
+                replacement=None,
+                gone_in="22.2",
+                issue=10825,
+            )
+            use_deprecated_html5lib = True
+
+    if use_deprecated_html5lib:
+        yield from _parse_links_html5lib(page)
+        return
+
+    parser = HTMLLinkParser()
+    parser.feed(page.content.decode(encoding))
+
+    url = page.url
+    base_url = parser.base_url or url
+    for anchor in parser.anchors:
+        link = _create_link_from_element(
+            anchor,
+            page_url=url,
+            base_url=base_url,
+        )
+        if link is None:
+            continue
+        yield link
+
+
+class HTMLPage:
+    """Represents one page, along with its URL"""
+
+    def __init__(
+        self,
+        content: bytes,
+        encoding: Optional[str],
+        url: str,
+        cache_link_parsing: bool = True,
+    ) -> None:
+        """
+        :param encoding: the encoding to decode the given content.
+        :param url: the URL from which the HTML was downloaded.
+        :param cache_link_parsing: whether links parsed from this page's URL
+                                   should be cached. PyPI index URLs should
+                                   have this set to False, for example.
+        """
+        self.content = content
+        self.encoding = encoding
+        self.url = url
+        self.cache_link_parsing = cache_link_parsing
+
+    def __str__(self) -> str:
+        return redact_auth_from_url(self.url)
+
+
+class HTMLLinkParser(HTMLParser):
+    """
+    HTMLParser that keeps the first base HREF and a list of all anchor
+    elements' attributes.
+    """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self._seen_decl = False
+        self.base_url: Optional[str] = None
+        self.anchors: List[Dict[str, Optional[str]]] = []
+
+    def handle_decl(self, decl: str) -> None:
+        if decl.lower() != "doctype html":
+            self._raise_error()
+        self._seen_decl = True
+
+    def handle_starttag(self, tag: str, attrs: List[Tuple[str, Optional[str]]]) -> None:
+        if not self._seen_decl:
+            self._raise_error()
+
+        if tag == "base" and self.base_url is None:
+            href = self.get_href(attrs)
+            if href is not None:
+                self.base_url = href
+        elif tag == "a":
+            self.anchors.append(dict(attrs))
+
+    def get_href(self, attrs: List[Tuple[str, Optional[str]]]) -> Optional[str]:
+        for name, value in attrs:
+            if name == "href":
+                return value
+        return None
+
+    def _raise_error(self) -> None:
+        raise ValueError(
+            "HTML doctype missing or incorrect. Expected <!DOCTYPE html>.\n\n"
+            "If you believe this error to be incorrect, try passing the "
+            "command line option --use-deprecated=html5lib and please leave "
+            "a comment on the pip issue at https://github.com/pypa/pip/issues/10825."
+        )
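+
+
+# A minimal usage sketch of HTMLLinkParser (illustrative only; the HTML
+# snippet and values are made up for demonstration):
+#
+#     parser = HTMLLinkParser()
+#     parser.feed(
+#         '<!DOCTYPE html><html><body>'
+#         '<a href="pkg-1.0.tar.gz" data-requires-python="&gt;=3.7">pkg</a>'
+#         '</body></html>'
+#     )
+#     parser.anchors
+#     # -> [{'href': 'pkg-1.0.tar.gz', 'data-requires-python': '>=3.7'}]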
+
+
+def _handle_get_page_fail(
+    link: Link,
+    reason: Union[str, Exception],
+    meth: Optional[Callable[..., None]] = None,
+) -> None:
+    if meth is None:
+        meth = logger.debug
+    meth("Could not fetch URL %s: %s - skipping", link, reason)
+
+
+def _make_html_page(response: Response, cache_link_parsing: bool = True) -> HTMLPage:
+    encoding = _get_encoding_from_headers(response.headers)
+    return HTMLPage(
+        response.content,
+        encoding=encoding,
+        url=response.url,
+        cache_link_parsing=cache_link_parsing,
+    )
+
+
+def _get_html_page(
+    link: Link, session: Optional[PipSession] = None
+) -> Optional["HTMLPage"]:
+    if session is None:
+        raise TypeError(
+            "_get_html_page() missing 1 required keyword argument: 'session'"
+        )
+
+    url = link.url.split("#", 1)[0]
+
+    # Check for VCS schemes that do not support lookup as web pages.
+    vcs_scheme = _match_vcs_scheme(url)
+    if vcs_scheme:
+        logger.warning(
+            "Cannot look at %s URL %s because it does not support lookup as web pages.",
+            vcs_scheme,
+            link,
+        )
+        return None
+
+    # Tack index.html onto file:// URLs that point to directories
+    scheme, _, path, _, _, _ = urllib.parse.urlparse(url)
+    if scheme == "file" and os.path.isdir(urllib.request.url2pathname(path)):
+        # add trailing slash if not present so urljoin doesn't trim
+        # final segment
+        if not url.endswith("/"):
+            url += "/"
+        url = urllib.parse.urljoin(url, "index.html")
+        logger.debug(" file: URL is directory, getting %s", url)
+
+    try:
+        resp = _get_html_response(url, session=session)
+    except _NotHTTP:
+        logger.warning(
+            "Skipping page %s because it looks like an archive, and cannot "
+            "be checked by a HTTP HEAD request.",
+            link,
+        )
+    except _NotHTML as exc:
+        logger.warning(
+            "Skipping page %s because the %s request got Content-Type: %s. "
+            "The only supported Content-Type is text/html.",
+            link,
+            exc.request_desc,
+            exc.content_type,
+        )
+    except NetworkConnectionError as exc:
+        _handle_get_page_fail(link, exc)
+    except RetryError as exc:
+        _handle_get_page_fail(link, exc)
+    except SSLError as exc:
+        reason = "There was a problem confirming the ssl certificate: "
+        reason += str(exc)
+        _handle_get_page_fail(link, reason, meth=logger.info)
+    except requests.ConnectionError as exc:
+        _handle_get_page_fail(link, f"connection error: {exc}")
+    except requests.Timeout:
+        _handle_get_page_fail(link, "timed out")
+    else:
+        return _make_html_page(resp, cache_link_parsing=link.cache_link_parsing)
+    return None
+
+
+class CollectedSources(NamedTuple):
+    find_links: Sequence[Optional[LinkSource]]
+    index_urls: Sequence[Optional[LinkSource]]
+
+
+class LinkCollector:
+
+    """
+    Responsible for collecting Link objects from all configured locations,
+    making network requests as needed.
+
+    The class's main method is its collect_sources() method.
+    """
+
+    def __init__(
+        self,
+        session: PipSession,
+        search_scope: SearchScope,
+    ) -> None:
+        self.search_scope = search_scope
+        self.session = session
+
+    @classmethod
+    def create(
+        cls,
+        session: PipSession,
+        options: Values,
+        suppress_no_index: bool = False,
+    ) -> "LinkCollector":
+        """
+        :param session: The Session to use to make requests.
+        :param suppress_no_index: Whether to ignore the --no-index option
+            when constructing the SearchScope object.
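+
+        A rough construction sketch (``session`` and ``options`` come from
+        pip's CLI plumbing; the names here are illustrative)::
+
+            link_collector = LinkCollector.create(session, options=options)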
+ """ + index_urls = [options.index_url] + options.extra_index_urls + if options.no_index and not suppress_no_index: + logger.debug( + "Ignoring indexes: %s", + ",".join(redact_auth_from_url(url) for url in index_urls), + ) + index_urls = [] + + # Make sure find_links is a list before passing to create(). + find_links = options.find_links or [] + + search_scope = SearchScope.create( + find_links=find_links, + index_urls=index_urls, + ) + link_collector = LinkCollector( + session=session, + search_scope=search_scope, + ) + return link_collector + + @property + def find_links(self) -> List[str]: + return self.search_scope.find_links + + def fetch_page(self, location: Link) -> Optional[HTMLPage]: + """ + Fetch an HTML page containing package links. + """ + return _get_html_page(location, session=self.session) + + def collect_sources( + self, + project_name: str, + candidates_from_page: CandidatesFromPage, + ) -> CollectedSources: + # The OrderedDict calls deduplicate sources by URL. + index_url_sources = collections.OrderedDict( + build_source( + loc, + candidates_from_page=candidates_from_page, + page_validator=self.session.is_secure_origin, + expand_dir=False, + cache_link_parsing=False, + ) + for loc in self.search_scope.get_index_urls_locations(project_name) + ).values() + find_links_sources = collections.OrderedDict( + build_source( + loc, + candidates_from_page=candidates_from_page, + page_validator=self.session.is_secure_origin, + expand_dir=True, + cache_link_parsing=True, + ) + for loc in self.find_links + ).values() + + if logger.isEnabledFor(logging.DEBUG): + lines = [ + f"* {s.link}" + for s in itertools.chain(find_links_sources, index_url_sources) + if s is not None and s.link is not None + ] + lines = [ + f"{len(lines)} location(s) to search " + f"for versions of {project_name}:" + ] + lines + logger.debug("\n".join(lines)) + + return CollectedSources( + find_links=list(find_links_sources), + index_urls=list(index_url_sources), + ) diff --git a/python/lib/python3.10/site-packages/pip/_internal/index/package_finder.py b/python/lib/python3.10/site-packages/pip/_internal/index/package_finder.py new file mode 100644 index 0000000..223d06d --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/index/package_finder.py @@ -0,0 +1,1004 @@ +"""Routines related to PyPI, indexes""" + +# The following comment should be removed at some point in the future. 
+# mypy: strict-optional=False + +import functools +import itertools +import logging +import re +from typing import FrozenSet, Iterable, List, Optional, Set, Tuple, Union + +from pip._vendor.packaging import specifiers +from pip._vendor.packaging.tags import Tag +from pip._vendor.packaging.utils import canonicalize_name +from pip._vendor.packaging.version import _BaseVersion +from pip._vendor.packaging.version import parse as parse_version + +from pip._internal.exceptions import ( + BestVersionAlreadyInstalled, + DistributionNotFound, + InvalidWheelFilename, + UnsupportedWheel, +) +from pip._internal.index.collector import LinkCollector, parse_links +from pip._internal.models.candidate import InstallationCandidate +from pip._internal.models.format_control import FormatControl +from pip._internal.models.link import Link +from pip._internal.models.search_scope import SearchScope +from pip._internal.models.selection_prefs import SelectionPreferences +from pip._internal.models.target_python import TargetPython +from pip._internal.models.wheel import Wheel +from pip._internal.req import InstallRequirement +from pip._internal.utils._log import getLogger +from pip._internal.utils.filetypes import WHEEL_EXTENSION +from pip._internal.utils.hashes import Hashes +from pip._internal.utils.logging import indent_log +from pip._internal.utils.misc import build_netloc +from pip._internal.utils.packaging import check_requires_python +from pip._internal.utils.unpacking import SUPPORTED_EXTENSIONS + +__all__ = ["FormatControl", "BestCandidateResult", "PackageFinder"] + + +logger = getLogger(__name__) + +BuildTag = Union[Tuple[()], Tuple[int, str]] +CandidateSortingKey = Tuple[int, int, int, _BaseVersion, Optional[int], BuildTag] + + +def _check_link_requires_python( + link: Link, + version_info: Tuple[int, int, int], + ignore_requires_python: bool = False, +) -> bool: + """ + Return whether the given Python version is compatible with a link's + "Requires-Python" value. + + :param version_info: A 3-tuple of ints representing the Python + major-minor-micro version to check. + :param ignore_requires_python: Whether to ignore the "Requires-Python" + value if the given Python version isn't compatible. + """ + try: + is_compatible = check_requires_python( + link.requires_python, + version_info=version_info, + ) + except specifiers.InvalidSpecifier: + logger.debug( + "Ignoring invalid Requires-Python (%r) for link: %s", + link.requires_python, + link, + ) + else: + if not is_compatible: + version = ".".join(map(str, version_info)) + if not ignore_requires_python: + logger.verbose( + "Link requires a different Python (%s not in: %r): %s", + version, + link.requires_python, + link, + ) + return False + + logger.debug( + "Ignoring failed Requires-Python check (%s not in: %r) for link: %s", + version, + link.requires_python, + link, + ) + + return True + + +class LinkEvaluator: + + """ + Responsible for evaluating links for a particular project. + """ + + _py_version_re = re.compile(r"-py([123]\.?[0-9]?)$") + + # Don't include an allow_yanked default value to make sure each call + # site considers whether yanked releases are allowed. This also causes + # that decision to be made explicit in the calling code, which helps + # people when reading the code. + def __init__( + self, + project_name: str, + canonical_name: str, + formats: FrozenSet[str], + target_python: TargetPython, + allow_yanked: bool, + ignore_requires_python: Optional[bool] = None, + ) -> None: + """ + :param project_name: The user supplied package name. 
+ :param canonical_name: The canonical package name. + :param formats: The formats allowed for this package. Should be a set + with 'binary' or 'source' or both in it. + :param target_python: The target Python interpreter to use when + evaluating link compatibility. This is used, for example, to + check wheel compatibility, as well as when checking the Python + version, e.g. the Python version embedded in a link filename + (or egg fragment) and against an HTML link's optional PEP 503 + "data-requires-python" attribute. + :param allow_yanked: Whether files marked as yanked (in the sense + of PEP 592) are permitted to be candidates for install. + :param ignore_requires_python: Whether to ignore incompatible + PEP 503 "data-requires-python" values in HTML links. Defaults + to False. + """ + if ignore_requires_python is None: + ignore_requires_python = False + + self._allow_yanked = allow_yanked + self._canonical_name = canonical_name + self._ignore_requires_python = ignore_requires_python + self._formats = formats + self._target_python = target_python + + self.project_name = project_name + + def evaluate_link(self, link: Link) -> Tuple[bool, Optional[str]]: + """ + Determine whether a link is a candidate for installation. + + :return: A tuple (is_candidate, result), where `result` is (1) a + version string if `is_candidate` is True, and (2) if + `is_candidate` is False, an optional string to log the reason + the link fails to qualify. + """ + version = None + if link.is_yanked and not self._allow_yanked: + reason = link.yanked_reason or "" + return (False, f"yanked for reason: {reason}") + + if link.egg_fragment: + egg_info = link.egg_fragment + ext = link.ext + else: + egg_info, ext = link.splitext() + if not ext: + return (False, "not a file") + if ext not in SUPPORTED_EXTENSIONS: + return (False, f"unsupported archive format: {ext}") + if "binary" not in self._formats and ext == WHEEL_EXTENSION: + reason = "No binaries permitted for {}".format(self.project_name) + return (False, reason) + if "macosx10" in link.path and ext == ".zip": + return (False, "macosx10 one") + if ext == WHEEL_EXTENSION: + try: + wheel = Wheel(link.filename) + except InvalidWheelFilename: + return (False, "invalid wheel filename") + if canonicalize_name(wheel.name) != self._canonical_name: + reason = "wrong project name (not {})".format(self.project_name) + return (False, reason) + + supported_tags = self._target_python.get_tags() + if not wheel.supported(supported_tags): + # Include the wheel's tags in the reason string to + # simplify troubleshooting compatibility issues. + file_tags = wheel.get_formatted_file_tags() + reason = ( + "none of the wheel's tags ({}) are compatible " + "(run pip debug --verbose to show compatible tags)".format( + ", ".join(file_tags) + ) + ) + return (False, reason) + + version = wheel.version + + # This should be up by the self.ok_binary check, but see issue 2700. 
+ if "source" not in self._formats and ext != WHEEL_EXTENSION: + reason = f"No sources permitted for {self.project_name}" + return (False, reason) + + if not version: + version = _extract_version_from_fragment( + egg_info, + self._canonical_name, + ) + if not version: + reason = f"Missing project version for {self.project_name}" + return (False, reason) + + match = self._py_version_re.search(version) + if match: + version = version[: match.start()] + py_version = match.group(1) + if py_version != self._target_python.py_version: + return (False, "Python version is incorrect") + + supports_python = _check_link_requires_python( + link, + version_info=self._target_python.py_version_info, + ignore_requires_python=self._ignore_requires_python, + ) + if not supports_python: + # Return None for the reason text to suppress calling + # _log_skipped_link(). + return (False, None) + + logger.debug("Found link %s, version: %s", link, version) + + return (True, version) + + +def filter_unallowed_hashes( + candidates: List[InstallationCandidate], + hashes: Hashes, + project_name: str, +) -> List[InstallationCandidate]: + """ + Filter out candidates whose hashes aren't allowed, and return a new + list of candidates. + + If at least one candidate has an allowed hash, then all candidates with + either an allowed hash or no hash specified are returned. Otherwise, + the given candidates are returned. + + Including the candidates with no hash specified when there is a match + allows a warning to be logged if there is a more preferred candidate + with no hash specified. Returning all candidates in the case of no + matches lets pip report the hash of the candidate that would otherwise + have been installed (e.g. permitting the user to more easily update + their requirements file with the desired hash). + """ + if not hashes: + logger.debug( + "Given no hashes to check %s links for project %r: " + "discarding no candidates", + len(candidates), + project_name, + ) + # Make sure we're not returning back the given value. + return list(candidates) + + matches_or_no_digest = [] + # Collect the non-matches for logging purposes. + non_matches = [] + match_count = 0 + for candidate in candidates: + link = candidate.link + if not link.has_hash: + pass + elif link.is_hash_allowed(hashes=hashes): + match_count += 1 + else: + non_matches.append(candidate) + continue + + matches_or_no_digest.append(candidate) + + if match_count: + filtered = matches_or_no_digest + else: + # Make sure we're not returning back the given value. + filtered = list(candidates) + + if len(filtered) == len(candidates): + discard_message = "discarding no candidates" + else: + discard_message = "discarding {} non-matches:\n {}".format( + len(non_matches), + "\n ".join(str(candidate.link) for candidate in non_matches), + ) + + logger.debug( + "Checked %s links for project %r against %s hashes " + "(%s matches, %s no digest): %s", + len(candidates), + project_name, + hashes.digest_count, + match_count, + len(matches_or_no_digest) - match_count, + discard_message, + ) + + return filtered + + +class CandidatePreferences: + + """ + Encapsulates some of the preferences for filtering and sorting + InstallationCandidate objects. + """ + + def __init__( + self, + prefer_binary: bool = False, + allow_all_prereleases: bool = False, + ) -> None: + """ + :param allow_all_prereleases: Whether to allow all pre-releases. 
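+        :param prefer_binary: Whether to sort wheel candidates above
+            otherwise-equal source candidates (see
+            ``CandidateEvaluator._sort_key``).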
+ """ + self.allow_all_prereleases = allow_all_prereleases + self.prefer_binary = prefer_binary + + +class BestCandidateResult: + """A collection of candidates, returned by `PackageFinder.find_best_candidate`. + + This class is only intended to be instantiated by CandidateEvaluator's + `compute_best_candidate()` method. + """ + + def __init__( + self, + candidates: List[InstallationCandidate], + applicable_candidates: List[InstallationCandidate], + best_candidate: Optional[InstallationCandidate], + ) -> None: + """ + :param candidates: A sequence of all available candidates found. + :param applicable_candidates: The applicable candidates. + :param best_candidate: The most preferred candidate found, or None + if no applicable candidates were found. + """ + assert set(applicable_candidates) <= set(candidates) + + if best_candidate is None: + assert not applicable_candidates + else: + assert best_candidate in applicable_candidates + + self._applicable_candidates = applicable_candidates + self._candidates = candidates + + self.best_candidate = best_candidate + + def iter_all(self) -> Iterable[InstallationCandidate]: + """Iterate through all candidates.""" + return iter(self._candidates) + + def iter_applicable(self) -> Iterable[InstallationCandidate]: + """Iterate through the applicable candidates.""" + return iter(self._applicable_candidates) + + +class CandidateEvaluator: + + """ + Responsible for filtering and sorting candidates for installation based + on what tags are valid. + """ + + @classmethod + def create( + cls, + project_name: str, + target_python: Optional[TargetPython] = None, + prefer_binary: bool = False, + allow_all_prereleases: bool = False, + specifier: Optional[specifiers.BaseSpecifier] = None, + hashes: Optional[Hashes] = None, + ) -> "CandidateEvaluator": + """Create a CandidateEvaluator object. + + :param target_python: The target Python interpreter to use when + checking compatibility. If None (the default), a TargetPython + object will be constructed from the running Python. + :param specifier: An optional object implementing `filter` + (e.g. `packaging.specifiers.SpecifierSet`) to filter applicable + versions. + :param hashes: An optional collection of allowed hashes. + """ + if target_python is None: + target_python = TargetPython() + if specifier is None: + specifier = specifiers.SpecifierSet() + + supported_tags = target_python.get_tags() + + return cls( + project_name=project_name, + supported_tags=supported_tags, + specifier=specifier, + prefer_binary=prefer_binary, + allow_all_prereleases=allow_all_prereleases, + hashes=hashes, + ) + + def __init__( + self, + project_name: str, + supported_tags: List[Tag], + specifier: specifiers.BaseSpecifier, + prefer_binary: bool = False, + allow_all_prereleases: bool = False, + hashes: Optional[Hashes] = None, + ) -> None: + """ + :param supported_tags: The PEP 425 tags supported by the target + Python in order of preference (most preferred first). + """ + self._allow_all_prereleases = allow_all_prereleases + self._hashes = hashes + self._prefer_binary = prefer_binary + self._project_name = project_name + self._specifier = specifier + self._supported_tags = supported_tags + # Since the index of the tag in the _supported_tags list is used + # as a priority, precompute a map from tag to index/priority to be + # used in wheel.find_most_preferred_tag. 
+ self._wheel_tag_preferences = { + tag: idx for idx, tag in enumerate(supported_tags) + } + + def get_applicable_candidates( + self, + candidates: List[InstallationCandidate], + ) -> List[InstallationCandidate]: + """ + Return the applicable candidates from a list of candidates. + """ + # Using None infers from the specifier instead. + allow_prereleases = self._allow_all_prereleases or None + specifier = self._specifier + versions = { + str(v) + for v in specifier.filter( + # We turn the version object into a str here because otherwise + # when we're debundled but setuptools isn't, Python will see + # packaging.version.Version and + # pkg_resources._vendor.packaging.version.Version as different + # types. This way we'll use a str as a common data interchange + # format. If we stop using the pkg_resources provided specifier + # and start using our own, we can drop the cast to str(). + (str(c.version) for c in candidates), + prereleases=allow_prereleases, + ) + } + + # Again, converting version to str to deal with debundling. + applicable_candidates = [c for c in candidates if str(c.version) in versions] + + filtered_applicable_candidates = filter_unallowed_hashes( + candidates=applicable_candidates, + hashes=self._hashes, + project_name=self._project_name, + ) + + return sorted(filtered_applicable_candidates, key=self._sort_key) + + def _sort_key(self, candidate: InstallationCandidate) -> CandidateSortingKey: + """ + Function to pass as the `key` argument to a call to sorted() to sort + InstallationCandidates by preference. + + Returns a tuple such that tuples sorting as greater using Python's + default comparison operator are more preferred. + + The preference is as follows: + + First and foremost, candidates with allowed (matching) hashes are + always preferred over candidates without matching hashes. This is + because e.g. if the only candidate with an allowed hash is yanked, + we still want to use that candidate. + + Second, excepting hash considerations, candidates that have been + yanked (in the sense of PEP 592) are always less preferred than + candidates that haven't been yanked. Then: + + If not finding wheels, they are sorted by version only. + If finding wheels, then the sort order is by version, then: + 1. existing installs + 2. wheels ordered via Wheel.support_index_min(self._supported_tags) + 3. source archives + If prefer_binary was set, then all wheels are sorted above sources. + + Note: it was considered to embed this logic into the Link + comparison operators, but then different sdist links + with the same version, would have to be considered equal + """ + valid_tags = self._supported_tags + support_num = len(valid_tags) + build_tag: BuildTag = () + binary_preference = 0 + link = candidate.link + if link.is_wheel: + # can raise InvalidWheelFilename + wheel = Wheel(link.filename) + try: + pri = -( + wheel.find_most_preferred_tag( + valid_tags, self._wheel_tag_preferences + ) + ) + except ValueError: + raise UnsupportedWheel( + "{} is not a supported wheel for this platform. It " + "can't be sorted.".format(wheel.filename) + ) + if self._prefer_binary: + binary_preference = 1 + if wheel.build_tag is not None: + match = re.match(r"^(\d+)(.*)$", wheel.build_tag) + build_tag_groups = match.groups() + build_tag = (int(build_tag_groups[0]), build_tag_groups[1]) + else: # sdist + pri = -(support_num) + has_allowed_hash = int(link.is_hash_allowed(self._hashes)) + yank_value = -1 * int(link.is_yanked) # -1 for yanked. 
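+        # The tuple below compares left-to-right: allowed-hash match first,
+        # then not-yanked, then binary preference, then version, then wheel
+        # tag priority, and finally the build tag.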
+ return ( + has_allowed_hash, + yank_value, + binary_preference, + candidate.version, + pri, + build_tag, + ) + + def sort_best_candidate( + self, + candidates: List[InstallationCandidate], + ) -> Optional[InstallationCandidate]: + """ + Return the best candidate per the instance's sort order, or None if + no candidate is acceptable. + """ + if not candidates: + return None + best_candidate = max(candidates, key=self._sort_key) + return best_candidate + + def compute_best_candidate( + self, + candidates: List[InstallationCandidate], + ) -> BestCandidateResult: + """ + Compute and return a `BestCandidateResult` instance. + """ + applicable_candidates = self.get_applicable_candidates(candidates) + + best_candidate = self.sort_best_candidate(applicable_candidates) + + return BestCandidateResult( + candidates, + applicable_candidates=applicable_candidates, + best_candidate=best_candidate, + ) + + +class PackageFinder: + """This finds packages. + + This is meant to match easy_install's technique for looking for + packages, by reading pages and looking for appropriate links. + """ + + def __init__( + self, + link_collector: LinkCollector, + target_python: TargetPython, + allow_yanked: bool, + use_deprecated_html5lib: bool, + format_control: Optional[FormatControl] = None, + candidate_prefs: Optional[CandidatePreferences] = None, + ignore_requires_python: Optional[bool] = None, + ) -> None: + """ + This constructor is primarily meant to be used by the create() class + method and from tests. + + :param format_control: A FormatControl object, used to control + the selection of source packages / binary packages when consulting + the index and links. + :param candidate_prefs: Options to use when creating a + CandidateEvaluator object. + """ + if candidate_prefs is None: + candidate_prefs = CandidatePreferences() + + format_control = format_control or FormatControl(set(), set()) + + self._allow_yanked = allow_yanked + self._candidate_prefs = candidate_prefs + self._ignore_requires_python = ignore_requires_python + self._link_collector = link_collector + self._target_python = target_python + self._use_deprecated_html5lib = use_deprecated_html5lib + + self.format_control = format_control + + # These are boring links that have already been logged somehow. + self._logged_links: Set[Link] = set() + + # Don't include an allow_yanked default value to make sure each call + # site considers whether yanked releases are allowed. This also causes + # that decision to be made explicit in the calling code, which helps + # people when reading the code. + @classmethod + def create( + cls, + link_collector: LinkCollector, + selection_prefs: SelectionPreferences, + target_python: Optional[TargetPython] = None, + *, + use_deprecated_html5lib: bool, + ) -> "PackageFinder": + """Create a PackageFinder. + + :param selection_prefs: The candidate selection preferences, as a + SelectionPreferences object. + :param target_python: The target Python interpreter to use when + checking compatibility. If None (the default), a TargetPython + object will be constructed from the running Python. 
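+
+        A hedged construction sketch (the ``link_collector`` object is
+        assumed to exist)::
+
+            finder = PackageFinder.create(
+                link_collector=link_collector,
+                selection_prefs=SelectionPreferences(allow_yanked=False),
+                use_deprecated_html5lib=False,
+            )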
+ """ + if target_python is None: + target_python = TargetPython() + + candidate_prefs = CandidatePreferences( + prefer_binary=selection_prefs.prefer_binary, + allow_all_prereleases=selection_prefs.allow_all_prereleases, + ) + + return cls( + candidate_prefs=candidate_prefs, + link_collector=link_collector, + target_python=target_python, + allow_yanked=selection_prefs.allow_yanked, + format_control=selection_prefs.format_control, + ignore_requires_python=selection_prefs.ignore_requires_python, + use_deprecated_html5lib=use_deprecated_html5lib, + ) + + @property + def target_python(self) -> TargetPython: + return self._target_python + + @property + def search_scope(self) -> SearchScope: + return self._link_collector.search_scope + + @search_scope.setter + def search_scope(self, search_scope: SearchScope) -> None: + self._link_collector.search_scope = search_scope + + @property + def find_links(self) -> List[str]: + return self._link_collector.find_links + + @property + def index_urls(self) -> List[str]: + return self.search_scope.index_urls + + @property + def trusted_hosts(self) -> Iterable[str]: + for host_port in self._link_collector.session.pip_trusted_origins: + yield build_netloc(*host_port) + + @property + def allow_all_prereleases(self) -> bool: + return self._candidate_prefs.allow_all_prereleases + + def set_allow_all_prereleases(self) -> None: + self._candidate_prefs.allow_all_prereleases = True + + @property + def prefer_binary(self) -> bool: + return self._candidate_prefs.prefer_binary + + def set_prefer_binary(self) -> None: + self._candidate_prefs.prefer_binary = True + + def make_link_evaluator(self, project_name: str) -> LinkEvaluator: + canonical_name = canonicalize_name(project_name) + formats = self.format_control.get_allowed_formats(canonical_name) + + return LinkEvaluator( + project_name=project_name, + canonical_name=canonical_name, + formats=formats, + target_python=self._target_python, + allow_yanked=self._allow_yanked, + ignore_requires_python=self._ignore_requires_python, + ) + + def _sort_links(self, links: Iterable[Link]) -> List[Link]: + """ + Returns elements of links in order, non-egg links first, egg links + second, while eliminating duplicates + """ + eggs, no_eggs = [], [] + seen: Set[Link] = set() + for link in links: + if link not in seen: + seen.add(link) + if link.egg_fragment: + eggs.append(link) + else: + no_eggs.append(link) + return no_eggs + eggs + + def _log_skipped_link(self, link: Link, reason: str) -> None: + if link not in self._logged_links: + # Put the link at the end so the reason is more visible and because + # the link string is usually very long. + logger.debug("Skipping link: %s: %s", reason, link) + self._logged_links.add(link) + + def get_install_candidate( + self, link_evaluator: LinkEvaluator, link: Link + ) -> Optional[InstallationCandidate]: + """ + If the link is a candidate for install, convert it to an + InstallationCandidate and return it. Otherwise, return None. + """ + is_candidate, result = link_evaluator.evaluate_link(link) + if not is_candidate: + if result: + self._log_skipped_link(link, reason=result) + return None + + return InstallationCandidate( + name=link_evaluator.project_name, + link=link, + version=result, + ) + + def evaluate_links( + self, link_evaluator: LinkEvaluator, links: Iterable[Link] + ) -> List[InstallationCandidate]: + """ + Convert links that are candidates to InstallationCandidate objects. 
+ """ + candidates = [] + for link in self._sort_links(links): + candidate = self.get_install_candidate(link_evaluator, link) + if candidate is not None: + candidates.append(candidate) + + return candidates + + def process_project_url( + self, project_url: Link, link_evaluator: LinkEvaluator + ) -> List[InstallationCandidate]: + logger.debug( + "Fetching project page and analyzing links: %s", + project_url, + ) + html_page = self._link_collector.fetch_page(project_url) + if html_page is None: + return [] + + page_links = list(parse_links(html_page, self._use_deprecated_html5lib)) + + with indent_log(): + package_links = self.evaluate_links( + link_evaluator, + links=page_links, + ) + + return package_links + + @functools.lru_cache(maxsize=None) + def find_all_candidates(self, project_name: str) -> List[InstallationCandidate]: + """Find all available InstallationCandidate for project_name + + This checks index_urls and find_links. + All versions found are returned as an InstallationCandidate list. + + See LinkEvaluator.evaluate_link() for details on which files + are accepted. + """ + link_evaluator = self.make_link_evaluator(project_name) + + collected_sources = self._link_collector.collect_sources( + project_name=project_name, + candidates_from_page=functools.partial( + self.process_project_url, + link_evaluator=link_evaluator, + ), + ) + + page_candidates_it = itertools.chain.from_iterable( + source.page_candidates() + for sources in collected_sources + for source in sources + if source is not None + ) + page_candidates = list(page_candidates_it) + + file_links_it = itertools.chain.from_iterable( + source.file_links() + for sources in collected_sources + for source in sources + if source is not None + ) + file_candidates = self.evaluate_links( + link_evaluator, + sorted(file_links_it, reverse=True), + ) + + if logger.isEnabledFor(logging.DEBUG) and file_candidates: + paths = [] + for candidate in file_candidates: + assert candidate.link.url # we need to have a URL + try: + paths.append(candidate.link.file_path) + except Exception: + paths.append(candidate.link.url) # it's not a local file + + logger.debug("Local files found: %s", ", ".join(paths)) + + # This is an intentional priority ordering + return file_candidates + page_candidates + + def make_candidate_evaluator( + self, + project_name: str, + specifier: Optional[specifiers.BaseSpecifier] = None, + hashes: Optional[Hashes] = None, + ) -> CandidateEvaluator: + """Create a CandidateEvaluator object to use.""" + candidate_prefs = self._candidate_prefs + return CandidateEvaluator.create( + project_name=project_name, + target_python=self._target_python, + prefer_binary=candidate_prefs.prefer_binary, + allow_all_prereleases=candidate_prefs.allow_all_prereleases, + specifier=specifier, + hashes=hashes, + ) + + @functools.lru_cache(maxsize=None) + def find_best_candidate( + self, + project_name: str, + specifier: Optional[specifiers.BaseSpecifier] = None, + hashes: Optional[Hashes] = None, + ) -> BestCandidateResult: + """Find matches for the given project and specifier. + + :param specifier: An optional object implementing `filter` + (e.g. `packaging.specifiers.SpecifierSet`) to filter applicable + versions. + + :return: A `BestCandidateResult` instance. 
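+
+        Illustrative call (``finder`` is assumed to be a PackageFinder)::
+
+            from pip._vendor.packaging.specifiers import SpecifierSet
+
+            result = finder.find_best_candidate("pip", SpecifierSet(">=21.0"))
+            best = result.best_candidate  # InstallationCandidate or None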
+        """
+        candidates = self.find_all_candidates(project_name)
+        candidate_evaluator = self.make_candidate_evaluator(
+            project_name=project_name,
+            specifier=specifier,
+            hashes=hashes,
+        )
+        return candidate_evaluator.compute_best_candidate(candidates)
+
+    def find_requirement(
+        self, req: InstallRequirement, upgrade: bool
+    ) -> Optional[InstallationCandidate]:
+        """Try to find a Link matching req.
+
+        Expects req, an InstallRequirement, and upgrade, a boolean.
+        Returns an InstallationCandidate if found, and raises
+        DistributionNotFound or BestVersionAlreadyInstalled otherwise.
+        """
+        hashes = req.hashes(trust_internet=False)
+        best_candidate_result = self.find_best_candidate(
+            req.name,
+            specifier=req.specifier,
+            hashes=hashes,
+        )
+        best_candidate = best_candidate_result.best_candidate
+
+        installed_version: Optional[_BaseVersion] = None
+        if req.satisfied_by is not None:
+            installed_version = req.satisfied_by.version
+
+        def _format_versions(cand_iter: Iterable[InstallationCandidate]) -> str:
+            # This repeated parse_version and str() conversion is needed to
+            # handle different vendoring sources from pip and pkg_resources.
+            # If we stop using the pkg_resources provided specifier and start
+            # using our own, we can drop the cast to str().
+            return (
+                ", ".join(
+                    sorted(
+                        {str(c.version) for c in cand_iter},
+                        key=parse_version,
+                    )
+                )
+                or "none"
+            )
+
+        if installed_version is None and best_candidate is None:
+            logger.critical(
+                "Could not find a version that satisfies the requirement %s "
+                "(from versions: %s)",
+                req,
+                _format_versions(best_candidate_result.iter_all()),
+            )
+
+            raise DistributionNotFound(
+                "No matching distribution found for {}".format(req)
+            )
+
+        best_installed = False
+        if installed_version and (
+            best_candidate is None or best_candidate.version <= installed_version
+        ):
+            best_installed = True
+
+        if not upgrade and installed_version is not None:
+            if best_installed:
+                logger.debug(
+                    "Existing installed version (%s) is most up-to-date and "
+                    "satisfies requirement",
+                    installed_version,
+                )
+            else:
+                logger.debug(
+                    "Existing installed version (%s) satisfies requirement "
+                    "(most up-to-date version is %s)",
+                    installed_version,
+                    best_candidate.version,
+                )
+            return None
+
+        if best_installed:
+            # We have an existing version, and it's the best version
+            logger.debug(
+                "Installed version (%s) is most up-to-date (past versions: %s)",
+                installed_version,
+                _format_versions(best_candidate_result.iter_applicable()),
+            )
+            raise BestVersionAlreadyInstalled
+
+        logger.debug(
+            "Using version %s (newest of versions: %s)",
+            best_candidate.version,
+            _format_versions(best_candidate_result.iter_applicable()),
+        )
+        return best_candidate
+
+
+def _find_name_version_sep(fragment: str, canonical_name: str) -> int:
+    """Find the separator's index based on the package's canonical name.
+
+    :param fragment: A <package>+<version> filename "fragment" (stem) or
+        egg fragment.
+    :param canonical_name: The package's canonical name.
+
+    This function is needed since the canonicalized name does not necessarily
+    have the same length as the egg info's name part. An example::
+
+        >>> fragment = 'foo__bar-1.0'
+        >>> canonical_name = 'foo-bar'
+        >>> _find_name_version_sep(fragment, canonical_name)
+        8
+    """
+    # Project name and version must be separated by one single dash. Find all
+    # occurrences of dashes; if the string in front of it matches the canonical
+    # name, this is the one separating the name and version parts.
+    for i, c in enumerate(fragment):
+        if c != "-":
+            continue
+        if canonicalize_name(fragment[:i]) == canonical_name:
+            return i
+    raise ValueError(f"{fragment} does not match {canonical_name}")
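+
+
+# Quick illustration, consistent with the doctest above (the second function
+# is defined just below):
+#
+#     _find_name_version_sep("foo__bar-1.0", "foo-bar")          # -> 8
+#     _extract_version_from_fragment("foo__bar-1.0", "foo-bar")  # -> "1.0"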
+
+
+def _extract_version_from_fragment(fragment: str, canonical_name: str) -> Optional[str]:
+    """Parse the version string from a <package>+<version> filename
+    "fragment" (stem) or egg fragment.
+
+    :param fragment: The string to parse. E.g. foo-2.1
+    :param canonical_name: The canonicalized name of the package this
+        belongs to.
+    """
+    try:
+        version_start = _find_name_version_sep(fragment, canonical_name) + 1
+    except ValueError:
+        return None
+    version = fragment[version_start:]
+    if not version:
+        return None
+    return version
diff --git a/lib/python3.11/site-packages/pip/_internal/index/sources.py b/python/lib/python3.10/site-packages/pip/_internal/index/sources.py
similarity index 100%
rename from lib/python3.11/site-packages/pip/_internal/index/sources.py
rename to python/lib/python3.10/site-packages/pip/_internal/index/sources.py
diff --git a/python/lib/python3.10/site-packages/pip/_internal/locations/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/locations/__init__.py
new file mode 100644
index 0000000..ac0c166
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_internal/locations/__init__.py
@@ -0,0 +1,520 @@
+import functools
+import logging
+import os
+import pathlib
+import sys
+import sysconfig
+from typing import Any, Dict, Iterator, List, Optional, Tuple
+
+from pip._internal.models.scheme import SCHEME_KEYS, Scheme
+from pip._internal.utils.compat import WINDOWS
+from pip._internal.utils.deprecation import deprecated
+from pip._internal.utils.virtualenv import running_under_virtualenv
+
+from . import _distutils, _sysconfig
+from .base import (
+    USER_CACHE_DIR,
+    get_major_minor_version,
+    get_src_prefix,
+    is_osx_framework,
+    site_packages,
+    user_site,
+)
+
+__all__ = [
+    "USER_CACHE_DIR",
+    "get_bin_prefix",
+    "get_bin_user",
+    "get_major_minor_version",
+    "get_platlib",
+    "get_prefixed_libs",
+    "get_purelib",
+    "get_scheme",
+    "get_src_prefix",
+    "site_packages",
+    "user_site",
+]
+
+
+logger = logging.getLogger(__name__)
+
+
+_PLATLIBDIR: str = getattr(sys, "platlibdir", "lib")
+
+_USE_SYSCONFIG_DEFAULT = sys.version_info >= (3, 10)
+
+
+def _should_use_sysconfig() -> bool:
+    """This function determines the value of _USE_SYSCONFIG.
+
+    By default, pip uses sysconfig on Python 3.10+.
+    But Python distributors can override this decision by setting:
+        sysconfig._PIP_USE_SYSCONFIG = True / False
+    Rationale in https://github.com/pypa/pip/issues/10647
+
+    This is a function for testability, but should be constant during any one
+    run.
+    """
+    return bool(getattr(sysconfig, "_PIP_USE_SYSCONFIG", _USE_SYSCONFIG_DEFAULT))
+
+
+_USE_SYSCONFIG = _should_use_sysconfig()
+
+# Be noisy about incompatibilities if this platform "should" be using
+# sysconfig, but is explicitly opting out and using distutils instead.
+if _USE_SYSCONFIG_DEFAULT and not _USE_SYSCONFIG:
+    _MISMATCH_LEVEL = logging.WARNING
+else:
+    _MISMATCH_LEVEL = logging.DEBUG
+
+
+def _looks_like_bpo_44860() -> bool:
+    """The resolution to bpo-44860 will change this incorrect platlib.
+
+    See <https://bugs.python.org/issue44860>.
+ """ + from distutils.command.install import INSTALL_SCHEMES # type: ignore + + try: + unix_user_platlib = INSTALL_SCHEMES["unix_user"]["platlib"] + except KeyError: + return False + return unix_user_platlib == "$usersite" + + +def _looks_like_red_hat_patched_platlib_purelib(scheme: Dict[str, str]) -> bool: + platlib = scheme["platlib"] + if "/$platlibdir/" in platlib: + platlib = platlib.replace("/$platlibdir/", f"/{_PLATLIBDIR}/") + if "/lib64/" not in platlib: + return False + unpatched = platlib.replace("/lib64/", "/lib/") + return unpatched.replace("$platbase/", "$base/") == scheme["purelib"] + + +@functools.lru_cache(maxsize=None) +def _looks_like_red_hat_lib() -> bool: + """Red Hat patches platlib in unix_prefix and unix_home, but not purelib. + + This is the only way I can see to tell a Red Hat-patched Python. + """ + from distutils.command.install import INSTALL_SCHEMES # type: ignore + + return all( + k in INSTALL_SCHEMES + and _looks_like_red_hat_patched_platlib_purelib(INSTALL_SCHEMES[k]) + for k in ("unix_prefix", "unix_home") + ) + + +@functools.lru_cache(maxsize=None) +def _looks_like_debian_scheme() -> bool: + """Debian adds two additional schemes.""" + from distutils.command.install import INSTALL_SCHEMES # type: ignore + + return "deb_system" in INSTALL_SCHEMES and "unix_local" in INSTALL_SCHEMES + + +@functools.lru_cache(maxsize=None) +def _looks_like_red_hat_scheme() -> bool: + """Red Hat patches ``sys.prefix`` and ``sys.exec_prefix``. + + Red Hat's ``00251-change-user-install-location.patch`` changes the install + command's ``prefix`` and ``exec_prefix`` to append ``"/local"``. This is + (fortunately?) done quite unconditionally, so we create a default command + object without any configuration to detect this. + """ + from distutils.command.install import install + from distutils.dist import Distribution + + cmd: Any = install(Distribution()) + cmd.finalize_options() + return ( + cmd.exec_prefix == f"{os.path.normpath(sys.exec_prefix)}/local" + and cmd.prefix == f"{os.path.normpath(sys.prefix)}/local" + ) + + +@functools.lru_cache(maxsize=None) +def _looks_like_slackware_scheme() -> bool: + """Slackware patches sysconfig but fails to patch distutils and site. + + Slackware changes sysconfig's user scheme to use ``"lib64"`` for the lib + path, but does not do the same to the site module. + """ + if user_site is None: # User-site not available. + return False + try: + paths = sysconfig.get_paths(scheme="posix_user", expand=False) + except KeyError: # User-site not available. + return False + return "/lib64/" in paths["purelib"] and "/lib64/" not in user_site + + +@functools.lru_cache(maxsize=None) +def _looks_like_msys2_mingw_scheme() -> bool: + """MSYS2 patches distutils and sysconfig to use a UNIX-like scheme. + + However, MSYS2 incorrectly patches sysconfig ``nt`` scheme. The fix is + likely going to be included in their 3.10 release, so we ignore the warning. + See msys2/MINGW-packages#9319. + + MSYS2 MINGW's patch uses lowercase ``"lib"`` instead of the usual uppercase, + and is missing the final ``"site-packages"``. + """ + paths = sysconfig.get_paths("nt", expand=False) + return all( + "Lib" not in p and "lib" in p and not p.endswith("site-packages") + for p in (paths[key] for key in ("platlib", "purelib")) + ) + + +def _fix_abiflags(parts: Tuple[str]) -> Iterator[str]: + ldversion = sysconfig.get_config_var("LDVERSION") + abiflags: str = getattr(sys, "abiflags", None) + + # LDVERSION does not end with sys.abiflags. Just return the path unchanged. 
+    if not ldversion or not abiflags or not ldversion.endswith(abiflags):
+        yield from parts
+        return
+
+    # Strip sys.abiflags from LDVERSION-based path components.
+    for part in parts:
+        if part.endswith(ldversion):
+            part = part[: (0 - len(abiflags))]
+        yield part
+
+
+@functools.lru_cache(maxsize=None)
+def _warn_mismatched(old: pathlib.Path, new: pathlib.Path, *, key: str) -> None:
+    issue_url = "https://github.com/pypa/pip/issues/10151"
+    message = (
+        "Value for %s does not match. Please report this to <%s>"
+        "\ndistutils: %s"
+        "\nsysconfig: %s"
+    )
+    logger.log(_MISMATCH_LEVEL, message, key, issue_url, old, new)
+
+
+def _warn_if_mismatch(old: pathlib.Path, new: pathlib.Path, *, key: str) -> bool:
+    if old == new:
+        return False
+    _warn_mismatched(old, new, key=key)
+    return True
+
+
+@functools.lru_cache(maxsize=None)
+def _log_context(
+    *,
+    user: bool = False,
+    home: Optional[str] = None,
+    root: Optional[str] = None,
+    prefix: Optional[str] = None,
+) -> None:
+    parts = [
+        "Additional context:",
+        "user = %r",
+        "home = %r",
+        "root = %r",
+        "prefix = %r",
+    ]
+
+    logger.log(_MISMATCH_LEVEL, "\n".join(parts), user, home, root, prefix)
+
+
+def get_scheme(
+    dist_name: str,
+    user: bool = False,
+    home: Optional[str] = None,
+    root: Optional[str] = None,
+    isolated: bool = False,
+    prefix: Optional[str] = None,
+) -> Scheme:
+    new = _sysconfig.get_scheme(
+        dist_name,
+        user=user,
+        home=home,
+        root=root,
+        isolated=isolated,
+        prefix=prefix,
+    )
+    if _USE_SYSCONFIG:
+        return new
+
+    old = _distutils.get_scheme(
+        dist_name,
+        user=user,
+        home=home,
+        root=root,
+        isolated=isolated,
+        prefix=prefix,
+    )
+
+    warning_contexts = []
+    for k in SCHEME_KEYS:
+        old_v = pathlib.Path(getattr(old, k))
+        new_v = pathlib.Path(getattr(new, k))
+
+        if old_v == new_v:
+            continue
+
+        # distutils incorrectly put PyPy packages under ``site-packages/python``
+        # in the ``posix_home`` scheme, but PyPy devs said they expect the
+        # directory name to be ``pypy`` instead. So we treat this as a bug fix
+        # and not warn about it. See bpo-43307 and python/cpython#24628.
+        skip_pypy_special_case = (
+            sys.implementation.name == "pypy"
+            and home is not None
+            and k in ("platlib", "purelib")
+            and old_v.parent == new_v.parent
+            and old_v.name.startswith("python")
+            and new_v.name.startswith("pypy")
+        )
+        if skip_pypy_special_case:
+            continue
+
+        # sysconfig's ``osx_framework_user`` does not include ``pythonX.Y`` in
+        # the ``include`` value, but distutils's ``headers`` does. We'll let
+        # CPython decide whether this is a bug or feature. See bpo-43948.
+        skip_osx_framework_user_special_case = (
+            user
+            and is_osx_framework()
+            and k == "headers"
+            and old_v.parent.parent == new_v.parent
+            and old_v.parent.name.startswith("python")
+        )
+        if skip_osx_framework_user_special_case:
+            continue
+
+        # On Red Hat and derived Linux distributions, distutils is patched to
+        # use "lib64" instead of "lib" for platlib.
+        if k == "platlib" and _looks_like_red_hat_lib():
+            continue
+
+        # On Python 3.9+, sysconfig's posix_user scheme sets platlib against
+        # sys.platlibdir, but distutils's unix_user incorrectly continues
+        # using the same $usersite for both platlib and purelib. This creates a
+        # mismatch when sys.platlibdir is not "lib".
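+        # (Concretely, assuming sys.platlibdir == "lib64": sysconfig would
+        # report ~/.local/lib64/pythonX.Y/site-packages for platlib, while
+        # the unpatched distutils user scheme still reports
+        # ~/.local/lib/pythonX.Y/site-packages.)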
+ skip_bpo_44860 = ( + user + and k == "platlib" + and not WINDOWS + and sys.version_info >= (3, 9) + and _PLATLIBDIR != "lib" + and _looks_like_bpo_44860() + ) + if skip_bpo_44860: + continue + + # Slackware incorrectly patches posix_user to use lib64 instead of lib, + # but not usersite to match the location. + skip_slackware_user_scheme = ( + user + and k in ("platlib", "purelib") + and not WINDOWS + and _looks_like_slackware_scheme() + ) + if skip_slackware_user_scheme: + continue + + # Both Debian and Red Hat patch Python to place the system site under + # /usr/local instead of /usr. Debian also places lib in dist-packages + # instead of site-packages, but the /usr/local check should cover it. + skip_linux_system_special_case = ( + not (user or home or prefix or running_under_virtualenv()) + and old_v.parts[1:3] == ("usr", "local") + and len(new_v.parts) > 1 + and new_v.parts[1] == "usr" + and (len(new_v.parts) < 3 or new_v.parts[2] != "local") + and (_looks_like_red_hat_scheme() or _looks_like_debian_scheme()) + ) + if skip_linux_system_special_case: + continue + + # On Python 3.7 and earlier, sysconfig does not include sys.abiflags in + # the "pythonX.Y" part of the path, but distutils does. + skip_sysconfig_abiflag_bug = ( + sys.version_info < (3, 8) + and not WINDOWS + and k in ("headers", "platlib", "purelib") + and tuple(_fix_abiflags(old_v.parts)) == new_v.parts + ) + if skip_sysconfig_abiflag_bug: + continue + + # MSYS2 MINGW's sysconfig patch does not include the "site-packages" + # part of the path. This is incorrect and will be fixed in MSYS. + skip_msys2_mingw_bug = ( + WINDOWS and k in ("platlib", "purelib") and _looks_like_msys2_mingw_scheme() + ) + if skip_msys2_mingw_bug: + continue + + # CPython's POSIX install script invokes pip (via ensurepip) against the + # interpreter located in the source tree, not the install site. This + # triggers special logic in sysconfig that's not present in distutils. + # https://github.com/python/cpython/blob/8c21941ddaf/Lib/sysconfig.py#L178-L194 + skip_cpython_build = ( + sysconfig.is_python_build(check_home=True) + and not WINDOWS + and k in ("headers", "include", "platinclude") + ) + if skip_cpython_build: + continue + + warning_contexts.append((old_v, new_v, f"scheme.{k}")) + + if not warning_contexts: + return old + + # Check if this path mismatch is caused by distutils config files. Those + # files will no longer work once we switch to sysconfig, so this raises a + # deprecation message for them. + default_old = _distutils.distutils_scheme( + dist_name, + user, + home, + root, + isolated, + prefix, + ignore_config_files=True, + ) + if any(default_old[k] != getattr(old, k) for k in SCHEME_KEYS): + deprecated( + reason=( + "Configuring installation scheme with distutils config files " + "is deprecated and will no longer work in the near future. If you " + "are using a Homebrew or Linuxbrew Python, please see discussion " + "at https://github.com/Homebrew/homebrew-core/issues/76621" + ), + replacement=None, + gone_in=None, + ) + return old + + # Post warnings about this mismatch so user can report them back. 
+ for old_v, new_v, key in warning_contexts: + _warn_mismatched(old_v, new_v, key=key) + _log_context(user=user, home=home, root=root, prefix=prefix) + + return old + + +def get_bin_prefix() -> str: + new = _sysconfig.get_bin_prefix() + if _USE_SYSCONFIG: + return new + + old = _distutils.get_bin_prefix() + if _warn_if_mismatch(pathlib.Path(old), pathlib.Path(new), key="bin_prefix"): + _log_context() + return old + + +def get_bin_user() -> str: + return _sysconfig.get_scheme("", user=True).scripts + + +def _looks_like_deb_system_dist_packages(value: str) -> bool: + """Check if the value is Debian's APT-controlled dist-packages. + + Debian's ``distutils.sysconfig.get_python_lib()`` implementation returns the + default package path controlled by APT, but does not patch ``sysconfig`` to + do the same. This is similar to the bug worked around in ``get_scheme()``, + but here the default is ``deb_system`` instead of ``unix_local``. Ultimately + we can't do anything about this Debian bug, and this detection allows us to + skip the warning when needed. + """ + if not _looks_like_debian_scheme(): + return False + if value == "/usr/lib/python3/dist-packages": + return True + return False + + +def get_purelib() -> str: + """Return the default pure-Python lib location.""" + new = _sysconfig.get_purelib() + if _USE_SYSCONFIG: + return new + + old = _distutils.get_purelib() + if _looks_like_deb_system_dist_packages(old): + return old + if _warn_if_mismatch(pathlib.Path(old), pathlib.Path(new), key="purelib"): + _log_context() + return old + + +def get_platlib() -> str: + """Return the default platform-shared lib location.""" + new = _sysconfig.get_platlib() + if _USE_SYSCONFIG: + return new + + old = _distutils.get_platlib() + if _looks_like_deb_system_dist_packages(old): + return old + if _warn_if_mismatch(pathlib.Path(old), pathlib.Path(new), key="platlib"): + _log_context() + return old + + +def _deduplicated(v1: str, v2: str) -> List[str]: + """Deduplicate values from a list.""" + if v1 == v2: + return [v1] + return [v1, v2] + + +def _looks_like_apple_library(path: str) -> bool: + """Apple patches sysconfig to *always* look under */Library/Python*.""" + if sys.platform[:6] != "darwin": + return False + return path == f"/Library/Python/{get_major_minor_version()}/site-packages" + + +def get_prefixed_libs(prefix: str) -> List[str]: + """Return the lib locations under ``prefix``.""" + new_pure, new_plat = _sysconfig.get_prefixed_libs(prefix) + if _USE_SYSCONFIG: + return _deduplicated(new_pure, new_plat) + + old_pure, old_plat = _distutils.get_prefixed_libs(prefix) + old_lib_paths = _deduplicated(old_pure, old_plat) + + # Apple's Python (shipped with Xcode and Command Line Tools) hard-code + # platlib and purelib to '/Library/Python/X.Y/site-packages'. This will + # cause serious build isolation bugs when Apple starts shipping 3.10 because + # pip will install build backends to the wrong location. This tells users + # who is at fault so Apple may notice it and fix the issue in time. + if all(_looks_like_apple_library(p) for p in old_lib_paths): + deprecated( + reason=( + "Python distributed by Apple's Command Line Tools incorrectly " + "patches sysconfig to always point to '/Library/Python'. This " + "will cause build isolation to operate incorrectly on Python " + "3.10 or later. Please help report this to Apple so they can " + "fix this. 
https://developer.apple.com/bug-reporting/" + ), + replacement=None, + gone_in=None, + ) + return old_lib_paths + + warned = [ + _warn_if_mismatch( + pathlib.Path(old_pure), + pathlib.Path(new_pure), + key="prefixed-purelib", + ), + _warn_if_mismatch( + pathlib.Path(old_plat), + pathlib.Path(new_plat), + key="prefixed-platlib", + ), + ] + if any(warned): + _log_context(prefix=prefix) + + return old_lib_paths diff --git a/python/lib/python3.10/site-packages/pip/_internal/locations/_distutils.py b/python/lib/python3.10/site-packages/pip/_internal/locations/_distutils.py new file mode 100644 index 0000000..2ec79e6 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/locations/_distutils.py @@ -0,0 +1,169 @@ +"""Locations where we look for configs, install stuff, etc""" + +# The following comment should be removed at some point in the future. +# mypy: strict-optional=False + +import logging +import os +import sys +from distutils.cmd import Command as DistutilsCommand +from distutils.command.install import SCHEME_KEYS +from distutils.command.install import install as distutils_install_command +from distutils.sysconfig import get_python_lib +from typing import Dict, List, Optional, Tuple, Union, cast + +from pip._internal.models.scheme import Scheme +from pip._internal.utils.compat import WINDOWS +from pip._internal.utils.virtualenv import running_under_virtualenv + +from .base import get_major_minor_version + +logger = logging.getLogger(__name__) + + +def distutils_scheme( + dist_name: str, + user: bool = False, + home: str = None, + root: str = None, + isolated: bool = False, + prefix: str = None, + *, + ignore_config_files: bool = False, +) -> Dict[str, str]: + """ + Return a distutils install scheme + """ + from distutils.dist import Distribution + + dist_args: Dict[str, Union[str, List[str]]] = {"name": dist_name} + if isolated: + dist_args["script_args"] = ["--no-user-cfg"] + + d = Distribution(dist_args) + if not ignore_config_files: + try: + d.parse_config_files() + except UnicodeDecodeError: + # Typeshed does not include find_config_files() for some reason. + paths = d.find_config_files() # type: ignore + logger.warning( + "Ignore distutils configs in %s due to encoding errors.", + ", ".join(os.path.basename(p) for p in paths), + ) + obj: Optional[DistutilsCommand] = None + obj = d.get_command_obj("install", create=True) + assert obj is not None + i = cast(distutils_install_command, obj) + # NOTE: setting user or home has the side-effect of creating the home dir + # or user base for installations during finalize_options() + # ideally, we'd prefer a scheme class that has no side-effects. + assert not (user and prefix), f"user={user} prefix={prefix}" + assert not (home and prefix), f"home={home} prefix={prefix}" + i.user = user or i.user + if user or home: + i.prefix = "" + i.prefix = prefix or i.prefix + i.home = home or i.home + i.root = root or i.root + i.finalize_options() + + scheme = {} + for key in SCHEME_KEYS: + scheme[key] = getattr(i, "install_" + key) + + # install_lib specified in setup.cfg should install *everything* + # into there (i.e. it takes precedence over both purelib and + # platlib). 
Note, i.install_lib is *always* set after + # finalize_options(); we only want to override here if the user + # has explicitly requested it hence going back to the config + if "install_lib" in d.get_option_dict("install"): + scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib)) + + if running_under_virtualenv(): + if home: + prefix = home + elif user: + prefix = i.install_userbase # type: ignore + else: + prefix = i.prefix + scheme["headers"] = os.path.join( + prefix, + "include", + "site", + f"python{get_major_minor_version()}", + dist_name, + ) + + if root is not None: + path_no_drive = os.path.splitdrive(os.path.abspath(scheme["headers"]))[1] + scheme["headers"] = os.path.join(root, path_no_drive[1:]) + + return scheme + + +def get_scheme( + dist_name: str, + user: bool = False, + home: Optional[str] = None, + root: Optional[str] = None, + isolated: bool = False, + prefix: Optional[str] = None, +) -> Scheme: + """ + Get the "scheme" corresponding to the input parameters. The distutils + documentation provides the context for the available schemes: + https://docs.python.org/3/install/index.html#alternate-installation + + :param dist_name: the name of the package to retrieve the scheme for, used + in the headers scheme path + :param user: indicates to use the "user" scheme + :param home: indicates to use the "home" scheme and provides the base + directory for the same + :param root: root under which other directories are re-based + :param isolated: equivalent to --no-user-cfg, i.e. do not consider + ~/.pydistutils.cfg (posix) or ~/pydistutils.cfg (non-posix) for + scheme paths + :param prefix: indicates to use the "prefix" scheme and provides the + base directory for the same + """ + scheme = distutils_scheme(dist_name, user, home, root, isolated, prefix) + return Scheme( + platlib=scheme["platlib"], + purelib=scheme["purelib"], + headers=scheme["headers"], + scripts=scheme["scripts"], + data=scheme["data"], + ) + + +def get_bin_prefix() -> str: + # XXX: In old virtualenv versions, sys.prefix can contain '..' components, + # so we need to call normpath to eliminate them. + prefix = os.path.normpath(sys.prefix) + if WINDOWS: + bin_py = os.path.join(prefix, "Scripts") + # buildout uses 'bin' on Windows too? + if not os.path.exists(bin_py): + bin_py = os.path.join(prefix, "bin") + return bin_py + # Forcing to use /usr/local/bin for standard macOS framework installs + # Also log to ~/Library/Logs/ for use with the Console.app log viewer + if sys.platform[:6] == "darwin" and prefix[:16] == "/System/Library/": + return "/usr/local/bin" + return os.path.join(prefix, "bin") + + +def get_purelib() -> str: + return get_python_lib(plat_specific=False) + + +def get_platlib() -> str: + return get_python_lib(plat_specific=True) + + +def get_prefixed_libs(prefix: str) -> Tuple[str, str]: + return ( + get_python_lib(plat_specific=False, prefix=prefix), + get_python_lib(plat_specific=True, prefix=prefix), + ) diff --git a/python/lib/python3.10/site-packages/pip/_internal/locations/_sysconfig.py b/python/lib/python3.10/site-packages/pip/_internal/locations/_sysconfig.py new file mode 100644 index 0000000..5e141aa --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/locations/_sysconfig.py @@ -0,0 +1,219 @@ +import distutils.util # FIXME: For change_root. 
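+# Editor's note: an illustrative sketch, not pip source. On POSIX,
+# change_root() re-bases an absolute path under a new root, which is how
+# get_scheme() below applies the --root option to every scheme path:
+#
+#     >>> distutils.util.change_root("/alt", "/usr/lib/python3.10/site-packages")
+#     '/alt/usr/lib/python3.10/site-packages'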
+import logging
+import os
+import sys
+import sysconfig
+import typing
+
+from pip._internal.exceptions import InvalidSchemeCombination, UserInstallationInvalid
+from pip._internal.models.scheme import SCHEME_KEYS, Scheme
+from pip._internal.utils.virtualenv import running_under_virtualenv
+
+from .base import get_major_minor_version, is_osx_framework
+
+logger = logging.getLogger(__name__)
+
+
+# Notes on _infer_* functions.
+# Unfortunately ``get_default_scheme()`` didn't exist before 3.10, so there's no
+# way to ask things like "what is the '_prefix' scheme on this platform". These
+# functions try to answer that with some heuristics while accounting for ad-hoc
+# platforms not covered by CPython's default sysconfig implementation. If the
+# ad-hoc implementation does not fully implement sysconfig, we'll fall back to
+# a POSIX scheme.
+
+_AVAILABLE_SCHEMES = set(sysconfig.get_scheme_names())
+
+_PREFERRED_SCHEME_API = getattr(sysconfig, "get_preferred_scheme", None)
+
+
+def _should_use_osx_framework_prefix() -> bool:
+    """Check for Apple's ``osx_framework_library`` scheme.
+
+    Python distributed by Apple's Command Line Tools has this special scheme
+    that's used when:
+
+    * This is a framework build.
+    * We are installing into the system prefix.
+
+    This does not account for ``pip install --prefix`` (also means we're not
+    installing to the system prefix), which should use ``posix_prefix``, but
+    logic here means ``_infer_prefix()`` outputs ``osx_framework_library``. But
+    since ``prefix`` is not available for ``sysconfig.get_default_scheme()``,
+    which is the stdlib replacement for ``_infer_prefix()``, presumably Apple
+    wouldn't be able to magically switch between ``osx_framework_library`` and
+    ``posix_prefix``. ``_infer_prefix()`` returning ``osx_framework_library``
+    means its behavior is consistent whether we use the stdlib implementation
+    or our own, and we deal with this special case in ``get_scheme()`` instead.
+    """
+    return (
+        "osx_framework_library" in _AVAILABLE_SCHEMES
+        and not running_under_virtualenv()
+        and is_osx_framework()
+    )
+
+
+def _infer_prefix() -> str:
+    """Try to find a prefix scheme for the current platform.
+
+    This tries:
+
+    * A special ``osx_framework_library`` for Python distributed by Apple's
+      Command Line Tools, when not running in a virtual environment.
+    * Implementation + OS, used by PyPy on Windows (``pypy_nt``).
+    * Implementation without OS, used by PyPy on POSIX (``pypy``).
+    * OS + "prefix", used by CPython on POSIX (``posix_prefix``).
+    * Just the OS name, used by CPython on Windows (``nt``).
+
+    If none of the above works, fall back to ``posix_prefix``.
+    """
+    if _PREFERRED_SCHEME_API:
+        return _PREFERRED_SCHEME_API("prefix")
+    if _should_use_osx_framework_prefix():
+        return "osx_framework_library"
+    implementation_suffixed = f"{sys.implementation.name}_{os.name}"
+    if implementation_suffixed in _AVAILABLE_SCHEMES:
+        return implementation_suffixed
+    if sys.implementation.name in _AVAILABLE_SCHEMES:
+        return sys.implementation.name
+    suffixed = f"{os.name}_prefix"
+    if suffixed in _AVAILABLE_SCHEMES:
+        return suffixed
+    if os.name in _AVAILABLE_SCHEMES:  # On Windows, prefix is just called "nt".
+ return os.name + return "posix_prefix" + + +def _infer_user() -> str: + """Try to find a user scheme for the current platform.""" + if _PREFERRED_SCHEME_API: + return _PREFERRED_SCHEME_API("user") + if is_osx_framework() and not running_under_virtualenv(): + suffixed = "osx_framework_user" + else: + suffixed = f"{os.name}_user" + if suffixed in _AVAILABLE_SCHEMES: + return suffixed + if "posix_user" not in _AVAILABLE_SCHEMES: # User scheme unavailable. + raise UserInstallationInvalid() + return "posix_user" + + +def _infer_home() -> str: + """Try to find a home for the current platform.""" + if _PREFERRED_SCHEME_API: + return _PREFERRED_SCHEME_API("home") + suffixed = f"{os.name}_home" + if suffixed in _AVAILABLE_SCHEMES: + return suffixed + return "posix_home" + + +# Update these keys if the user sets a custom home. +_HOME_KEYS = [ + "installed_base", + "base", + "installed_platbase", + "platbase", + "prefix", + "exec_prefix", +] +if sysconfig.get_config_var("userbase") is not None: + _HOME_KEYS.append("userbase") + + +def get_scheme( + dist_name: str, + user: bool = False, + home: typing.Optional[str] = None, + root: typing.Optional[str] = None, + isolated: bool = False, + prefix: typing.Optional[str] = None, +) -> Scheme: + """ + Get the "scheme" corresponding to the input parameters. + + :param dist_name: the name of the package to retrieve the scheme for, used + in the headers scheme path + :param user: indicates to use the "user" scheme + :param home: indicates to use the "home" scheme + :param root: root under which other directories are re-based + :param isolated: ignored, but kept for distutils compatibility (where + this controls whether the user-site pydistutils.cfg is honored) + :param prefix: indicates to use the "prefix" scheme and provides the + base directory for the same + """ + if user and prefix: + raise InvalidSchemeCombination("--user", "--prefix") + if home and prefix: + raise InvalidSchemeCombination("--home", "--prefix") + + if home is not None: + scheme_name = _infer_home() + elif user: + scheme_name = _infer_user() + else: + scheme_name = _infer_prefix() + + # Special case: When installing into a custom prefix, use posix_prefix + # instead of osx_framework_library. See _should_use_osx_framework_prefix() + # docstring for details. + if prefix is not None and scheme_name == "osx_framework_library": + scheme_name = "posix_prefix" + + if home is not None: + variables = {k: home for k in _HOME_KEYS} + elif prefix is not None: + variables = {k: prefix for k in _HOME_KEYS} + else: + variables = {} + + paths = sysconfig.get_paths(scheme=scheme_name, vars=variables) + + # Logic here is very arbitrary, we're doing it for compatibility, don't ask. + # 1. Pip historically uses a special header path in virtual environments. + # 2. If the distribution name is not known, distutils uses 'UNKNOWN'. We + # only do the same when not running in a virtual environment because + # pip's historical header path logic (see point 1) did not do this. 
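+    # Editor's note: an illustrative sketch, not pip source. In a virtual
+    # environment at /venv on CPython 3.10, the branch below produces
+    #
+    #     paths["include"] == "/venv/include/site/python3.10"
+    #
+    # and get_scheme() then appends the distribution name for the headers path.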
+ if running_under_virtualenv(): + if user: + base = variables.get("userbase", sys.prefix) + else: + base = variables.get("base", sys.prefix) + python_xy = f"python{get_major_minor_version()}" + paths["include"] = os.path.join(base, "include", "site", python_xy) + elif not dist_name: + dist_name = "UNKNOWN" + + scheme = Scheme( + platlib=paths["platlib"], + purelib=paths["purelib"], + headers=os.path.join(paths["include"], dist_name), + scripts=paths["scripts"], + data=paths["data"], + ) + if root is not None: + for key in SCHEME_KEYS: + value = distutils.util.change_root(root, getattr(scheme, key)) + setattr(scheme, key, value) + return scheme + + +def get_bin_prefix() -> str: + # Forcing to use /usr/local/bin for standard macOS framework installs. + if sys.platform[:6] == "darwin" and sys.prefix[:16] == "/System/Library/": + return "/usr/local/bin" + return sysconfig.get_paths()["scripts"] + + +def get_purelib() -> str: + return sysconfig.get_paths()["purelib"] + + +def get_platlib() -> str: + return sysconfig.get_paths()["platlib"] + + +def get_prefixed_libs(prefix: str) -> typing.Tuple[str, str]: + paths = sysconfig.get_paths(vars={"base": prefix, "platbase": prefix}) + return (paths["purelib"], paths["platlib"]) diff --git a/python/lib/python3.10/site-packages/pip/_internal/locations/base.py b/python/lib/python3.10/site-packages/pip/_internal/locations/base.py new file mode 100644 index 0000000..86dad4a --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/locations/base.py @@ -0,0 +1,52 @@ +import functools +import os +import site +import sys +import sysconfig +import typing + +from pip._internal.utils import appdirs +from pip._internal.utils.virtualenv import running_under_virtualenv + +# Application Directories +USER_CACHE_DIR = appdirs.user_cache_dir("pip") + +# FIXME doesn't account for venv linked to global site-packages +site_packages: typing.Optional[str] = sysconfig.get_path("purelib") + + +def get_major_minor_version() -> str: + """ + Return the major-minor version of the current Python as a string, e.g. + "3.7" or "3.10". + """ + return "{}.{}".format(*sys.version_info) + + +def get_src_prefix() -> str: + if running_under_virtualenv(): + src_prefix = os.path.join(sys.prefix, "src") + else: + # FIXME: keep src in cwd for now (it is not a temporary folder) + try: + src_prefix = os.path.join(os.getcwd(), "src") + except OSError: + # In case the current working directory has been renamed or deleted + sys.exit("The folder you are executing pip from can no longer be found.") + + # under macOS + virtualenv sys.prefix is not properly resolved + # it is something like /path/to/python/bin/.. + return os.path.abspath(src_prefix) + + +try: + # Use getusersitepackages if this is present, as it ensures that the + # value is initialised properly. 
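+    # Editor's note: an illustrative sketch, not pip source. On a typical
+    # Linux setup this resolves to something like
+    #
+    #     >>> site.getusersitepackages()
+    #     '/home/user/.local/lib/python3.10/site-packages'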
+    user_site: typing.Optional[str] = site.getusersitepackages()
+except AttributeError:
+    user_site = site.USER_SITE
+
+
+@functools.lru_cache(maxsize=None)
+def is_osx_framework() -> bool:
+    return bool(sysconfig.get_config_var("PYTHONFRAMEWORK"))
diff --git a/lib/python3.11/site-packages/pip/_internal/main.py b/python/lib/python3.10/site-packages/pip/_internal/main.py
similarity index 100%
rename from lib/python3.11/site-packages/pip/_internal/main.py
rename to python/lib/python3.10/site-packages/pip/_internal/main.py
diff --git a/python/lib/python3.10/site-packages/pip/_internal/metadata/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/metadata/__init__.py
new file mode 100644
index 0000000..cc037c1
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_internal/metadata/__init__.py
@@ -0,0 +1,62 @@
+from typing import List, Optional
+
+from .base import BaseDistribution, BaseEnvironment, FilesystemWheel, MemoryWheel, Wheel
+
+__all__ = [
+    "BaseDistribution",
+    "BaseEnvironment",
+    "FilesystemWheel",
+    "MemoryWheel",
+    "Wheel",
+    "get_default_environment",
+    "get_environment",
+    "get_wheel_distribution",
+]
+
+
+def get_default_environment() -> BaseEnvironment:
+    """Get the default representation for the current environment.
+
+    This returns an Environment instance from the chosen backend. The default
+    Environment instance should be built from ``sys.path`` and may use caching
+    to share instance state across calls.
+    """
+    from .pkg_resources import Environment
+
+    return Environment.default()
+
+
+def get_environment(paths: Optional[List[str]]) -> BaseEnvironment:
+    """Get a representation of the environment specified by ``paths``.
+
+    This returns an Environment instance from the chosen backend based on the
+    given import paths. The backend must build a fresh instance representing
+    the state of installed distributions when this function is called.
+    """
+    from .pkg_resources import Environment
+
+    return Environment.from_paths(paths)
+
+
+def get_directory_distribution(directory: str) -> BaseDistribution:
+    """Get the distribution metadata representation in the specified directory.
+
+    This returns a Distribution instance from the chosen backend based on
+    the given on-disk ``.dist-info`` directory.
+    """
+    from .pkg_resources import Distribution
+
+    return Distribution.from_directory(directory)
+
+
+def get_wheel_distribution(wheel: Wheel, canonical_name: str) -> BaseDistribution:
+    """Get the representation of the specified wheel's distribution metadata.
+
+    This returns a Distribution instance from the chosen backend based on
+    the given wheel's ``.dist-info`` directory.
+
+    :param canonical_name: Normalized project name of the given wheel.
+ """ + from .pkg_resources import Distribution + + return Distribution.from_wheel(wheel, canonical_name) diff --git a/python/lib/python3.10/site-packages/pip/_internal/metadata/base.py b/python/lib/python3.10/site-packages/pip/_internal/metadata/base.py new file mode 100644 index 0000000..1a5a781 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/metadata/base.py @@ -0,0 +1,546 @@ +import csv +import email.message +import json +import logging +import pathlib +import re +import zipfile +from typing import ( + IO, + TYPE_CHECKING, + Collection, + Container, + Iterable, + Iterator, + List, + Optional, + Tuple, + Union, +) + +from pip._vendor.packaging.requirements import Requirement +from pip._vendor.packaging.specifiers import InvalidSpecifier, SpecifierSet +from pip._vendor.packaging.utils import NormalizedName +from pip._vendor.packaging.version import LegacyVersion, Version + +from pip._internal.exceptions import NoneMetadataError +from pip._internal.locations import site_packages, user_site +from pip._internal.models.direct_url import ( + DIRECT_URL_METADATA_NAME, + DirectUrl, + DirectUrlValidationError, +) +from pip._internal.utils.compat import stdlib_pkgs # TODO: Move definition here. +from pip._internal.utils.egg_link import ( + egg_link_path_from_location, + egg_link_path_from_sys_path, +) +from pip._internal.utils.misc import is_local, normalize_path +from pip._internal.utils.urls import url_to_path + +if TYPE_CHECKING: + from typing import Protocol +else: + Protocol = object + +DistributionVersion = Union[LegacyVersion, Version] + +InfoPath = Union[str, pathlib.PurePosixPath] + +logger = logging.getLogger(__name__) + + +class BaseEntryPoint(Protocol): + @property + def name(self) -> str: + raise NotImplementedError() + + @property + def value(self) -> str: + raise NotImplementedError() + + @property + def group(self) -> str: + raise NotImplementedError() + + +def _convert_installed_files_path( + entry: Tuple[str, ...], + info: Tuple[str, ...], +) -> str: + """Convert a legacy installed-files.txt path into modern RECORD path. + + The legacy format stores paths relative to the info directory, while the + modern format stores paths relative to the package root, e.g. the + site-packages directory. + + :param entry: Path parts of the installed-files.txt entry. + :param info: Path parts of the egg-info directory relative to package root. + :returns: The converted entry. + + For best compatibility with symlinks, this does not use ``abspath()`` or + ``Path.resolve()``, but tries to work with path parts: + + 1. While ``entry`` starts with ``..``, remove the equal amounts of parts + from ``info``; if ``info`` is empty, start appending ``..`` instead. + 2. Join the two directly. + """ + while entry and entry[0] == "..": + if not info or info[-1] == "..": + info += ("..",) + else: + info = info[:-1] + entry = entry[1:] + return str(pathlib.Path(*info, *entry)) + + +class BaseDistribution(Protocol): + def __repr__(self) -> str: + return f"{self.raw_name} {self.version} ({self.location})" + + def __str__(self) -> str: + return f"{self.raw_name} {self.version}" + + @property + def location(self) -> Optional[str]: + """Where the distribution is loaded from. + + A string value is not necessarily a filesystem path, since distributions + can be loaded from other sources, e.g. arbitrary zip archives. ``None`` + means the distribution is created in-memory. + + Do not canonicalize this value with e.g. ``pathlib.Path.resolve()``. 
If
+        this is a symbolic link, we want to preserve the relative path between
+        it and files in the distribution.
+        """
+        raise NotImplementedError()
+
+    @property
+    def editable_project_location(self) -> Optional[str]:
+        """The project location for editable distributions.
+
+        This is the directory where pyproject.toml or setup.py is located.
+        None if the distribution is not installed in editable mode.
+        """
+        # TODO: this property is relatively costly to compute, memoize it?
+        direct_url = self.direct_url
+        if direct_url:
+            if direct_url.is_local_editable():
+                return url_to_path(direct_url.url)
+        else:
+            # Search for an .egg-link file by walking sys.path, as it was
+            # done before by dist_is_editable().
+            egg_link_path = egg_link_path_from_sys_path(self.raw_name)
+            if egg_link_path:
+                # TODO: get project location from second line of egg_link file
+                #       (https://github.com/pypa/pip/issues/10243)
+                return self.location
+        return None
+
+    @property
+    def installed_location(self) -> Optional[str]:
+        """The distribution's "installed" location.
+
+        This should generally be a ``site-packages`` directory. This is
+        usually ``dist.location``, except for legacy develop-installed packages,
+        where ``dist.location`` is the source code location, and this is where
+        the ``.egg-link`` file is.
+
+        The returned location is normalized (in particular, with symlinks removed).
+        """
+        egg_link = egg_link_path_from_location(self.raw_name)
+        if egg_link:
+            location = egg_link
+        elif self.location:
+            location = self.location
+        else:
+            return None
+        return normalize_path(location)
+
+    @property
+    def info_location(self) -> Optional[str]:
+        """Location of the .[egg|dist]-info directory or file.
+
+        Similarly to ``location``, a string value is not necessarily a
+        filesystem path. ``None`` means the distribution is created in-memory.
+
+        For a modern .dist-info installation on disk, this should be something
+        like ``{location}/{raw_name}-{version}.dist-info``.
+
+        Do not canonicalize this value with e.g. ``pathlib.Path.resolve()``. If
+        this is a symbolic link, we want to preserve the relative path between
+        it and other files in the distribution.
+        """
+        raise NotImplementedError()
+
+    @property
+    def installed_by_distutils(self) -> bool:
+        """Whether this distribution is installed with legacy distutils format.
+
+        A distribution installed with "raw" distutils not patched by setuptools
+        uses one single file at ``info_location`` to store metadata. We need to
+        treat this specially on uninstallation.
+        """
+        info_location = self.info_location
+        if not info_location:
+            return False
+        return pathlib.Path(info_location).is_file()
+
+    @property
+    def installed_as_egg(self) -> bool:
+        """Whether this distribution is installed as an egg.
+
+        This usually indicates the distribution was installed by (older versions
+        of) easy_install.
+        """
+        location = self.location
+        if not location:
+            return False
+        return location.endswith(".egg")
+
+    @property
+    def installed_with_setuptools_egg_info(self) -> bool:
+        """Whether this distribution is installed with the ``.egg-info`` format.
+
+        This usually indicates the distribution was installed with setuptools
+        with an old pip version or with ``single-version-externally-managed``.
+
+        Note that this ensures the metadata store is a directory. distutils can
+        also install an ``.egg-info``, but as a file, not a directory. This
+        property is *False* for that case. Also see ``installed_by_distutils``.
+ """ + info_location = self.info_location + if not info_location: + return False + if not info_location.endswith(".egg-info"): + return False + return pathlib.Path(info_location).is_dir() + + @property + def installed_with_dist_info(self) -> bool: + """Whether this distribution is installed with the "modern format". + + This indicates a "modern" installation, e.g. storing metadata in the + ``.dist-info`` directory. This applies to installations made by + setuptools (but through pip, not directly), or anything using the + standardized build backend interface (PEP 517). + """ + info_location = self.info_location + if not info_location: + return False + if not info_location.endswith(".dist-info"): + return False + return pathlib.Path(info_location).is_dir() + + @property + def canonical_name(self) -> NormalizedName: + raise NotImplementedError() + + @property + def version(self) -> DistributionVersion: + raise NotImplementedError() + + @property + def setuptools_filename(self) -> str: + """Convert a project name to its setuptools-compatible filename. + + This is a copy of ``pkg_resources.to_filename()`` for compatibility. + """ + return self.raw_name.replace("-", "_") + + @property + def direct_url(self) -> Optional[DirectUrl]: + """Obtain a DirectUrl from this distribution. + + Returns None if the distribution has no `direct_url.json` metadata, + or if `direct_url.json` is invalid. + """ + try: + content = self.read_text(DIRECT_URL_METADATA_NAME) + except FileNotFoundError: + return None + try: + return DirectUrl.from_json(content) + except ( + UnicodeDecodeError, + json.JSONDecodeError, + DirectUrlValidationError, + ) as e: + logger.warning( + "Error parsing %s for %s: %s", + DIRECT_URL_METADATA_NAME, + self.canonical_name, + e, + ) + return None + + @property + def installer(self) -> str: + try: + installer_text = self.read_text("INSTALLER") + except (OSError, ValueError, NoneMetadataError): + return "" # Fail silently if the installer file cannot be read. + for line in installer_text.splitlines(): + cleaned_line = line.strip() + if cleaned_line: + return cleaned_line + return "" + + @property + def editable(self) -> bool: + return bool(self.editable_project_location) + + @property + def local(self) -> bool: + """If distribution is installed in the current virtual environment. + + Always True if we're not in a virtualenv. + """ + if self.installed_location is None: + return False + return is_local(self.installed_location) + + @property + def in_usersite(self) -> bool: + if self.installed_location is None or user_site is None: + return False + return self.installed_location.startswith(normalize_path(user_site)) + + @property + def in_site_packages(self) -> bool: + if self.installed_location is None or site_packages is None: + return False + return self.installed_location.startswith(normalize_path(site_packages)) + + def is_file(self, path: InfoPath) -> bool: + """Check whether an entry in the info directory is a file.""" + raise NotImplementedError() + + def iterdir(self, path: InfoPath) -> Iterator[pathlib.PurePosixPath]: + """Iterate through a directory in the info directory. + + Each item yielded would be a path relative to the info directory. + + :raise FileNotFoundError: If ``name`` does not exist in the directory. + :raise NotADirectoryError: If ``name`` does not point to a directory. + """ + raise NotImplementedError() + + def read_text(self, path: InfoPath) -> str: + """Read a file in the info directory. + + :raise FileNotFoundError: If ``name`` does not exist in the directory. 
+        :raise NoneMetadataError: If ``name`` exists in the info directory, but
+            cannot be read.
+        """
+        raise NotImplementedError()
+
+    def iter_entry_points(self) -> Iterable[BaseEntryPoint]:
+        raise NotImplementedError()
+
+    @property
+    def metadata(self) -> email.message.Message:
+        """Metadata of distribution parsed from e.g. METADATA or PKG-INFO.
+
+        This should return an empty message if the metadata file is unavailable.
+
+        :raises NoneMetadataError: If the metadata file is available, but does
+            not contain valid metadata.
+        """
+        raise NotImplementedError()
+
+    @property
+    def metadata_version(self) -> Optional[str]:
+        """Value of "Metadata-Version:" in distribution metadata, if available."""
+        return self.metadata.get("Metadata-Version")
+
+    @property
+    def raw_name(self) -> str:
+        """Value of "Name:" in distribution metadata."""
+        # The metadata should NEVER be missing the Name: key, but if it somehow
+        # does, fall back to the known canonical name.
+        return self.metadata.get("Name", self.canonical_name)
+
+    @property
+    def requires_python(self) -> SpecifierSet:
+        """Value of "Requires-Python:" in distribution metadata.
+
+        If the key does not exist or contains an invalid value, an empty
+        SpecifierSet should be returned.
+        """
+        value = self.metadata.get("Requires-Python")
+        if value is None:
+            return SpecifierSet()
+        try:
+            # Convert to str to satisfy the type checker; this can be a Header object.
+            spec = SpecifierSet(str(value))
+        except InvalidSpecifier as e:
+            message = "Package %r has an invalid Requires-Python: %s"
+            logger.warning(message, self.raw_name, e)
+            return SpecifierSet()
+        return spec
+
+    def iter_dependencies(self, extras: Collection[str] = ()) -> Iterable[Requirement]:
+        """Dependencies of this distribution.
+
+        For modern .dist-info distributions, this is the collection of
+        "Requires-Dist:" entries in distribution metadata.
+        """
+        raise NotImplementedError()
+
+    def iter_provided_extras(self) -> Iterable[str]:
+        """Extras provided by this distribution.
+
+        For modern .dist-info distributions, this is the collection of
+        "Provides-Extra:" entries in distribution metadata.
+        """
+        raise NotImplementedError()
+
+    def _iter_declared_entries_from_record(self) -> Optional[Iterator[str]]:
+        try:
+            text = self.read_text("RECORD")
+        except FileNotFoundError:
+            return None
+        # This extra Path-str cast normalizes entries.
+        return (str(pathlib.Path(row[0])) for row in csv.reader(text.splitlines()))
+
+    def _iter_declared_entries_from_legacy(self) -> Optional[Iterator[str]]:
+        try:
+            text = self.read_text("installed-files.txt")
+        except FileNotFoundError:
+            return None
+        paths = (p for p in text.splitlines(keepends=False) if p)
+        root = self.location
+        info = self.info_location
+        if root is None or info is None:
+            return paths
+        try:
+            info_rel = pathlib.Path(info).relative_to(root)
+        except ValueError:  # info is not relative to root.
+            return paths
+        if not info_rel.parts:  # info *is* root.
+            return paths
+        return (
+            _convert_installed_files_path(pathlib.Path(p).parts, info_rel.parts)
+            for p in paths
+        )
+
+    def iter_declared_entries(self) -> Optional[Iterator[str]]:
+        """Iterate through file entries declared in this distribution.
+
+        For modern .dist-info distributions, these are the files listed in the
+        ``RECORD`` metadata file. For legacy setuptools distributions, this
+        comes from ``installed-files.txt``, with entries normalized to be
+        compatible with the format used by ``RECORD``.
+
+        :return: An iterator for listed entries, or None if the distribution
+            contains neither ``RECORD`` nor ``installed-files.txt``.
+        """
+        return (
+            self._iter_declared_entries_from_record()
+            or self._iter_declared_entries_from_legacy()
+        )
+
+
+class BaseEnvironment:
+    """An environment containing distributions to introspect."""
+
+    @classmethod
+    def default(cls) -> "BaseEnvironment":
+        raise NotImplementedError()
+
+    @classmethod
+    def from_paths(cls, paths: Optional[List[str]]) -> "BaseEnvironment":
+        raise NotImplementedError()
+
+    def get_distribution(self, name: str) -> Optional["BaseDistribution"]:
+        """Given a requirement name, return the installed distribution.
+
+        The name may not be normalized. The implementation must canonicalize
+        it for lookup.
+        """
+        raise NotImplementedError()
+
+    def _iter_distributions(self) -> Iterator["BaseDistribution"]:
+        """Iterate through installed distributions.
+
+        This function should be implemented by a subclass, but never called
+        directly. Use the public ``iter_distributions()`` instead, which
+        implements additional logic to make sure the distributions are valid.
+        """
+        raise NotImplementedError()
+
+    def iter_distributions(self) -> Iterator["BaseDistribution"]:
+        """Iterate through installed distributions."""
+        for dist in self._iter_distributions():
+            # Make sure the distribution actually comes from a valid Python
+            # packaging distribution. Pip's AdjacentTempDirectory leaves folders
+            # e.g. ``~atplotlib.dist-info`` if cleanup was interrupted. The
+            # valid project name pattern is taken from PEP 508.
+            project_name_valid = re.match(
+                r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$",
+                dist.canonical_name,
+                flags=re.IGNORECASE,
+            )
+            if not project_name_valid:
+                logger.warning(
+                    "Ignoring invalid distribution %s (%s)",
+                    dist.canonical_name,
+                    dist.location,
+                )
+                continue
+            yield dist
+
+    def iter_installed_distributions(
+        self,
+        local_only: bool = True,
+        skip: Container[str] = stdlib_pkgs,
+        include_editables: bool = True,
+        editables_only: bool = False,
+        user_only: bool = False,
+    ) -> Iterator[BaseDistribution]:
+        """Return an iterator of installed distributions.
+
+        :param local_only: If True (default), only return installations
+            local to the current virtualenv, if in a virtualenv.
+        :param skip: An iterable of canonicalized project names to ignore;
+            defaults to ``stdlib_pkgs``.
+        :param include_editables: If False, don't report editables.
+        :param editables_only: If True, only report editables.
+        :param user_only: If True, only report installations in the user
+            site directory.
+ """ + it = self.iter_distributions() + if local_only: + it = (d for d in it if d.local) + if not include_editables: + it = (d for d in it if not d.editable) + if editables_only: + it = (d for d in it if d.editable) + if user_only: + it = (d for d in it if d.in_usersite) + return (d for d in it if d.canonical_name not in skip) + + +class Wheel(Protocol): + location: str + + def as_zipfile(self) -> zipfile.ZipFile: + raise NotImplementedError() + + +class FilesystemWheel(Wheel): + def __init__(self, location: str) -> None: + self.location = location + + def as_zipfile(self) -> zipfile.ZipFile: + return zipfile.ZipFile(self.location, allowZip64=True) + + +class MemoryWheel(Wheel): + def __init__(self, location: str, stream: IO[bytes]) -> None: + self.location = location + self.stream = stream + + def as_zipfile(self) -> zipfile.ZipFile: + return zipfile.ZipFile(self.stream, allowZip64=True) diff --git a/python/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py b/python/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py new file mode 100644 index 0000000..d39f0ba --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py @@ -0,0 +1,256 @@ +import email.message +import email.parser +import logging +import os +import pathlib +import zipfile +from typing import Collection, Iterable, Iterator, List, Mapping, NamedTuple, Optional + +from pip._vendor import pkg_resources +from pip._vendor.packaging.requirements import Requirement +from pip._vendor.packaging.utils import NormalizedName, canonicalize_name +from pip._vendor.packaging.version import parse as parse_version + +from pip._internal.exceptions import InvalidWheel, NoneMetadataError, UnsupportedWheel +from pip._internal.utils.misc import display_path +from pip._internal.utils.wheel import parse_wheel, read_wheel_metadata_file + +from .base import ( + BaseDistribution, + BaseEntryPoint, + BaseEnvironment, + DistributionVersion, + InfoPath, + Wheel, +) + +logger = logging.getLogger(__name__) + + +class EntryPoint(NamedTuple): + name: str + value: str + group: str + + +class WheelMetadata: + """IMetadataProvider that reads metadata files from a dictionary. + + This also maps metadata decoding exceptions to our internal exception type. + """ + + def __init__(self, metadata: Mapping[str, bytes], wheel_name: str) -> None: + self._metadata = metadata + self._wheel_name = wheel_name + + def has_metadata(self, name: str) -> bool: + return name in self._metadata + + def get_metadata(self, name: str) -> str: + try: + return self._metadata[name].decode() + except UnicodeDecodeError as e: + # Augment the default error with the origin of the file. + raise UnsupportedWheel( + f"Error decoding metadata for {self._wheel_name}: {e} in {name} file" + ) + + def get_metadata_lines(self, name: str) -> Iterable[str]: + return pkg_resources.yield_lines(self.get_metadata(name)) + + def metadata_isdir(self, name: str) -> bool: + return False + + def metadata_listdir(self, name: str) -> List[str]: + return [] + + def run_script(self, script_name: str, namespace: str) -> None: + pass + + +class Distribution(BaseDistribution): + def __init__(self, dist: pkg_resources.Distribution) -> None: + self._dist = dist + + @classmethod + def from_directory(cls, directory: str) -> "Distribution": + dist_dir = directory.rstrip(os.sep) + + # Build a PathMetadata object, from path to metadata. 
:wink: + base_dir, dist_dir_name = os.path.split(dist_dir) + metadata = pkg_resources.PathMetadata(base_dir, dist_dir) + + # Determine the correct Distribution object type. + if dist_dir.endswith(".egg-info"): + dist_cls = pkg_resources.Distribution + dist_name = os.path.splitext(dist_dir_name)[0] + else: + assert dist_dir.endswith(".dist-info") + dist_cls = pkg_resources.DistInfoDistribution + dist_name = os.path.splitext(dist_dir_name)[0].split("-")[0] + + dist = dist_cls(base_dir, project_name=dist_name, metadata=metadata) + return cls(dist) + + @classmethod + def from_wheel(cls, wheel: Wheel, name: str) -> "Distribution": + """Load the distribution from a given wheel. + + :raises InvalidWheel: Whenever loading of the wheel causes a + :py:exc:`zipfile.BadZipFile` exception to be thrown. + :raises UnsupportedWheel: If the wheel is a valid zip, but malformed + internally. + """ + try: + with wheel.as_zipfile() as zf: + info_dir, _ = parse_wheel(zf, name) + metadata_text = { + path.split("/", 1)[-1]: read_wheel_metadata_file(zf, path) + for path in zf.namelist() + if path.startswith(f"{info_dir}/") + } + except zipfile.BadZipFile as e: + raise InvalidWheel(wheel.location, name) from e + except UnsupportedWheel as e: + raise UnsupportedWheel(f"{name} has an invalid wheel, {e}") + dist = pkg_resources.DistInfoDistribution( + location=wheel.location, + metadata=WheelMetadata(metadata_text, wheel.location), + project_name=name, + ) + return cls(dist) + + @property + def location(self) -> Optional[str]: + return self._dist.location + + @property + def info_location(self) -> Optional[str]: + return self._dist.egg_info + + @property + def installed_by_distutils(self) -> bool: + # A distutils-installed distribution is provided by FileMetadata. This + # provider has a "path" attribute not present anywhere else. Not the + # best introspection logic, but pip has been doing this for a long time. + try: + return bool(self._dist._provider.path) + except AttributeError: + return False + + @property + def canonical_name(self) -> NormalizedName: + return canonicalize_name(self._dist.project_name) + + @property + def version(self) -> DistributionVersion: + return parse_version(self._dist.version) + + def is_file(self, path: InfoPath) -> bool: + return self._dist.has_metadata(str(path)) + + def iterdir(self, path: InfoPath) -> Iterator[pathlib.PurePosixPath]: + name = str(path) + if not self._dist.has_metadata(name): + raise FileNotFoundError(name) + if not self._dist.isdir(name): + raise NotADirectoryError(name) + for child in self._dist.metadata_listdir(name): + yield pathlib.PurePosixPath(path, child) + + def read_text(self, path: InfoPath) -> str: + name = str(path) + if not self._dist.has_metadata(name): + raise FileNotFoundError(name) + content = self._dist.get_metadata(name) + if content is None: + raise NoneMetadataError(self, name) + return content + + def iter_entry_points(self) -> Iterable[BaseEntryPoint]: + for group, entries in self._dist.get_entry_map().items(): + for name, entry_point in entries.items(): + name, _, value = str(entry_point).partition("=") + yield EntryPoint(name=name.strip(), value=value.strip(), group=group) + + @property + def metadata(self) -> email.message.Message: + """ + :raises NoneMetadataError: if the distribution reports `has_metadata()` + True but `get_metadata()` returns None. 
+ """ + if isinstance(self._dist, pkg_resources.DistInfoDistribution): + metadata_name = "METADATA" + else: + metadata_name = "PKG-INFO" + try: + metadata = self.read_text(metadata_name) + except FileNotFoundError: + if self.location: + displaying_path = display_path(self.location) + else: + displaying_path = repr(self.location) + logger.warning("No metadata found in %s", displaying_path) + metadata = "" + feed_parser = email.parser.FeedParser() + feed_parser.feed(metadata) + return feed_parser.close() + + def iter_dependencies(self, extras: Collection[str] = ()) -> Iterable[Requirement]: + if extras: # pkg_resources raises on invalid extras, so we sanitize. + extras = frozenset(extras).intersection(self._dist.extras) + return self._dist.requires(extras) + + def iter_provided_extras(self) -> Iterable[str]: + return self._dist.extras + + +class Environment(BaseEnvironment): + def __init__(self, ws: pkg_resources.WorkingSet) -> None: + self._ws = ws + + @classmethod + def default(cls) -> BaseEnvironment: + return cls(pkg_resources.working_set) + + @classmethod + def from_paths(cls, paths: Optional[List[str]]) -> BaseEnvironment: + return cls(pkg_resources.WorkingSet(paths)) + + def _search_distribution(self, name: str) -> Optional[BaseDistribution]: + """Find a distribution matching the ``name`` in the environment. + + This searches from *all* distributions available in the environment, to + match the behavior of ``pkg_resources.get_distribution()``. + """ + canonical_name = canonicalize_name(name) + for dist in self.iter_distributions(): + if dist.canonical_name == canonical_name: + return dist + return None + + def get_distribution(self, name: str) -> Optional[BaseDistribution]: + # Search the distribution by looking through the working set. + dist = self._search_distribution(name) + if dist: + return dist + + # If distribution could not be found, call working_set.require to + # update the working set, and try to find the distribution again. + # This might happen for e.g. when you install a package twice, once + # using setup.py develop and again using setup.py install. Now when + # running pip uninstall twice, the package gets removed from the + # working set in the first uninstall, so we have to populate the + # working set again so that pip knows about it and the packages gets + # picked up and is successfully uninstalled the second time too. + try: + # We didn't pass in any version specifiers, so this can never + # raise pkg_resources.VersionConflict. 
+ self._ws.require(name) + except pkg_resources.DistributionNotFound: + return None + return self._search_distribution(name) + + def _iter_distributions(self) -> Iterator[BaseDistribution]: + for dist in self._ws: + yield Distribution(dist) diff --git a/lib/python3.11/site-packages/pip/_internal/models/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/models/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/models/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/models/__init__.py diff --git a/lib/python3.11/site-packages/pip/_internal/models/candidate.py b/python/lib/python3.10/site-packages/pip/_internal/models/candidate.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/models/candidate.py rename to python/lib/python3.10/site-packages/pip/_internal/models/candidate.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/models/direct_url.py b/python/lib/python3.10/site-packages/pip/_internal/models/direct_url.py new file mode 100644 index 0000000..92060d4 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/models/direct_url.py @@ -0,0 +1,220 @@ +""" PEP 610 """ +import json +import re +import urllib.parse +from typing import Any, Dict, Iterable, Optional, Type, TypeVar, Union + +__all__ = [ + "DirectUrl", + "DirectUrlValidationError", + "DirInfo", + "ArchiveInfo", + "VcsInfo", +] + +T = TypeVar("T") + +DIRECT_URL_METADATA_NAME = "direct_url.json" +ENV_VAR_RE = re.compile(r"^\$\{[A-Za-z0-9-_]+\}(:\$\{[A-Za-z0-9-_]+\})?$") + + +class DirectUrlValidationError(Exception): + pass + + +def _get( + d: Dict[str, Any], expected_type: Type[T], key: str, default: Optional[T] = None +) -> Optional[T]: + """Get value from dictionary and verify expected type.""" + if key not in d: + return default + value = d[key] + if not isinstance(value, expected_type): + raise DirectUrlValidationError( + "{!r} has unexpected type for {} (expected {})".format( + value, key, expected_type + ) + ) + return value + + +def _get_required( + d: Dict[str, Any], expected_type: Type[T], key: str, default: Optional[T] = None +) -> T: + value = _get(d, expected_type, key, default) + if value is None: + raise DirectUrlValidationError(f"{key} must have a value") + return value + + +def _exactly_one_of(infos: Iterable[Optional["InfoType"]]) -> "InfoType": + infos = [info for info in infos if info is not None] + if not infos: + raise DirectUrlValidationError( + "missing one of archive_info, dir_info, vcs_info" + ) + if len(infos) > 1: + raise DirectUrlValidationError( + "more than one of archive_info, dir_info, vcs_info" + ) + assert infos[0] is not None + return infos[0] + + +def _filter_none(**kwargs: Any) -> Dict[str, Any]: + """Make dict excluding None values.""" + return {k: v for k, v in kwargs.items() if v is not None} + + +class VcsInfo: + name = "vcs_info" + + def __init__( + self, + vcs: str, + commit_id: str, + requested_revision: Optional[str] = None, + resolved_revision: Optional[str] = None, + resolved_revision_type: Optional[str] = None, + ) -> None: + self.vcs = vcs + self.requested_revision = requested_revision + self.commit_id = commit_id + self.resolved_revision = resolved_revision + self.resolved_revision_type = resolved_revision_type + + @classmethod + def _from_dict(cls, d: Optional[Dict[str, Any]]) -> Optional["VcsInfo"]: + if d is None: + return None + return cls( + vcs=_get_required(d, str, "vcs"), + commit_id=_get_required(d, str, "commit_id"), + requested_revision=_get(d, 
str, "requested_revision"), + resolved_revision=_get(d, str, "resolved_revision"), + resolved_revision_type=_get(d, str, "resolved_revision_type"), + ) + + def _to_dict(self) -> Dict[str, Any]: + return _filter_none( + vcs=self.vcs, + requested_revision=self.requested_revision, + commit_id=self.commit_id, + resolved_revision=self.resolved_revision, + resolved_revision_type=self.resolved_revision_type, + ) + + +class ArchiveInfo: + name = "archive_info" + + def __init__( + self, + hash: Optional[str] = None, + ) -> None: + self.hash = hash + + @classmethod + def _from_dict(cls, d: Optional[Dict[str, Any]]) -> Optional["ArchiveInfo"]: + if d is None: + return None + return cls(hash=_get(d, str, "hash")) + + def _to_dict(self) -> Dict[str, Any]: + return _filter_none(hash=self.hash) + + +class DirInfo: + name = "dir_info" + + def __init__( + self, + editable: bool = False, + ) -> None: + self.editable = editable + + @classmethod + def _from_dict(cls, d: Optional[Dict[str, Any]]) -> Optional["DirInfo"]: + if d is None: + return None + return cls(editable=_get_required(d, bool, "editable", default=False)) + + def _to_dict(self) -> Dict[str, Any]: + return _filter_none(editable=self.editable or None) + + +InfoType = Union[ArchiveInfo, DirInfo, VcsInfo] + + +class DirectUrl: + def __init__( + self, + url: str, + info: InfoType, + subdirectory: Optional[str] = None, + ) -> None: + self.url = url + self.info = info + self.subdirectory = subdirectory + + def _remove_auth_from_netloc(self, netloc: str) -> str: + if "@" not in netloc: + return netloc + user_pass, netloc_no_user_pass = netloc.split("@", 1) + if ( + isinstance(self.info, VcsInfo) + and self.info.vcs == "git" + and user_pass == "git" + ): + return netloc + if ENV_VAR_RE.match(user_pass): + return netloc + return netloc_no_user_pass + + @property + def redacted_url(self) -> str: + """url with user:password part removed unless it is formed with + environment variables as specified in PEP 610, or it is ``git`` + in the case of a git URL. 
+ """ + purl = urllib.parse.urlsplit(self.url) + netloc = self._remove_auth_from_netloc(purl.netloc) + surl = urllib.parse.urlunsplit( + (purl.scheme, netloc, purl.path, purl.query, purl.fragment) + ) + return surl + + def validate(self) -> None: + self.from_dict(self.to_dict()) + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> "DirectUrl": + return DirectUrl( + url=_get_required(d, str, "url"), + subdirectory=_get(d, str, "subdirectory"), + info=_exactly_one_of( + [ + ArchiveInfo._from_dict(_get(d, dict, "archive_info")), + DirInfo._from_dict(_get(d, dict, "dir_info")), + VcsInfo._from_dict(_get(d, dict, "vcs_info")), + ] + ), + ) + + def to_dict(self) -> Dict[str, Any]: + res = _filter_none( + url=self.redacted_url, + subdirectory=self.subdirectory, + ) + res[self.info.name] = self.info._to_dict() + return res + + @classmethod + def from_json(cls, s: str) -> "DirectUrl": + return cls.from_dict(json.loads(s)) + + def to_json(self) -> str: + return json.dumps(self.to_dict(), sort_keys=True) + + def is_local_editable(self) -> bool: + return isinstance(self.info, DirInfo) and self.info.editable diff --git a/lib/python3.11/site-packages/pip/_internal/models/format_control.py b/python/lib/python3.10/site-packages/pip/_internal/models/format_control.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/models/format_control.py rename to python/lib/python3.10/site-packages/pip/_internal/models/format_control.py diff --git a/lib/python3.11/site-packages/pip/_internal/models/index.py b/python/lib/python3.10/site-packages/pip/_internal/models/index.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/models/index.py rename to python/lib/python3.10/site-packages/pip/_internal/models/index.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/models/link.py b/python/lib/python3.10/site-packages/pip/_internal/models/link.py new file mode 100644 index 0000000..6069b27 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/models/link.py @@ -0,0 +1,288 @@ +import functools +import logging +import os +import posixpath +import re +import urllib.parse +from typing import TYPE_CHECKING, Dict, List, NamedTuple, Optional, Tuple, Union + +from pip._internal.utils.filetypes import WHEEL_EXTENSION +from pip._internal.utils.hashes import Hashes +from pip._internal.utils.misc import ( + redact_auth_from_url, + split_auth_from_netloc, + splitext, +) +from pip._internal.utils.models import KeyBasedCompareMixin +from pip._internal.utils.urls import path_to_url, url_to_path + +if TYPE_CHECKING: + from pip._internal.index.collector import HTMLPage + +logger = logging.getLogger(__name__) + + +_SUPPORTED_HASHES = ("sha1", "sha224", "sha384", "sha256", "sha512", "md5") + + +class Link(KeyBasedCompareMixin): + """Represents a parsed link from a Package Index's simple URL""" + + __slots__ = [ + "_parsed_url", + "_url", + "comes_from", + "requires_python", + "yanked_reason", + "cache_link_parsing", + ] + + def __init__( + self, + url: str, + comes_from: Optional[Union[str, "HTMLPage"]] = None, + requires_python: Optional[str] = None, + yanked_reason: Optional[str] = None, + cache_link_parsing: bool = True, + ) -> None: + """ + :param url: url of the resource pointed to (href of the link) + :param comes_from: instance of HTMLPage where the link was found, + or string. + :param requires_python: String containing the `Requires-Python` + metadata field, specified in PEP 345. 
This may be specified by
+            a data-requires-python attribute in the HTML link tag, as
+            described in PEP 503.
+        :param yanked_reason: the reason the file has been yanked, if the
+            file has been yanked, or None if the file hasn't been yanked.
+            This is the value of the "data-yanked" attribute, if present, in
+            a simple repository HTML link. If the file has been yanked but
+            no reason was provided, this should be the empty string. See
+            PEP 592 for more information and the specification.
+        :param cache_link_parsing: A flag that is used elsewhere to determine
+                                   whether resources retrieved from this link
+                                   should be cached. PyPI index urls should
+                                   generally have this set to False, for
+                                   example.
+        """
+
+        # url can be a UNC windows share
+        if url.startswith("\\\\"):
+            url = path_to_url(url)
+
+        self._parsed_url = urllib.parse.urlsplit(url)
+        # Store the url as a private attribute to prevent accidentally
+        # trying to set a new value.
+        self._url = url
+
+        self.comes_from = comes_from
+        self.requires_python = requires_python if requires_python else None
+        self.yanked_reason = yanked_reason
+
+        super().__init__(key=url, defining_class=Link)
+
+        self.cache_link_parsing = cache_link_parsing
+
+    def __str__(self) -> str:
+        if self.requires_python:
+            rp = f" (requires-python:{self.requires_python})"
+        else:
+            rp = ""
+        if self.comes_from:
+            return "{} (from {}){}".format(
+                redact_auth_from_url(self._url), self.comes_from, rp
+            )
+        else:
+            return redact_auth_from_url(str(self._url))
+
+    def __repr__(self) -> str:
+        return f"<Link {self}>"
+
+    @property
+    def url(self) -> str:
+        return self._url
+
+    @property
+    def filename(self) -> str:
+        path = self.path.rstrip("/")
+        name = posixpath.basename(path)
+        if not name:
+            # Make sure we don't leak auth information if the netloc
+            # includes a username and password.
+            netloc, user_pass = split_auth_from_netloc(self.netloc)
+            return netloc
+
+        name = urllib.parse.unquote(name)
+        assert name, f"URL {self._url!r} produced no filename"
+        return name
+
+    @property
+    def file_path(self) -> str:
+        return url_to_path(self.url)
+
+    @property
+    def scheme(self) -> str:
+        return self._parsed_url.scheme
+
+    @property
+    def netloc(self) -> str:
+        """
+        This can contain auth information.
+ """ + return self._parsed_url.netloc + + @property + def path(self) -> str: + return urllib.parse.unquote(self._parsed_url.path) + + def splitext(self) -> Tuple[str, str]: + return splitext(posixpath.basename(self.path.rstrip("/"))) + + @property + def ext(self) -> str: + return self.splitext()[1] + + @property + def url_without_fragment(self) -> str: + scheme, netloc, path, query, fragment = self._parsed_url + return urllib.parse.urlunsplit((scheme, netloc, path, query, "")) + + _egg_fragment_re = re.compile(r"[#&]egg=([^&]*)") + + @property + def egg_fragment(self) -> Optional[str]: + match = self._egg_fragment_re.search(self._url) + if not match: + return None + return match.group(1) + + _subdirectory_fragment_re = re.compile(r"[#&]subdirectory=([^&]*)") + + @property + def subdirectory_fragment(self) -> Optional[str]: + match = self._subdirectory_fragment_re.search(self._url) + if not match: + return None + return match.group(1) + + _hash_re = re.compile( + r"({choices})=([a-f0-9]+)".format(choices="|".join(_SUPPORTED_HASHES)) + ) + + @property + def hash(self) -> Optional[str]: + match = self._hash_re.search(self._url) + if match: + return match.group(2) + return None + + @property + def hash_name(self) -> Optional[str]: + match = self._hash_re.search(self._url) + if match: + return match.group(1) + return None + + @property + def show_url(self) -> str: + return posixpath.basename(self._url.split("#", 1)[0].split("?", 1)[0]) + + @property + def is_file(self) -> bool: + return self.scheme == "file" + + def is_existing_dir(self) -> bool: + return self.is_file and os.path.isdir(self.file_path) + + @property + def is_wheel(self) -> bool: + return self.ext == WHEEL_EXTENSION + + @property + def is_vcs(self) -> bool: + from pip._internal.vcs import vcs + + return self.scheme in vcs.all_schemes + + @property + def is_yanked(self) -> bool: + return self.yanked_reason is not None + + @property + def has_hash(self) -> bool: + return self.hash_name is not None + + def is_hash_allowed(self, hashes: Optional[Hashes]) -> bool: + """ + Return True if the link has a hash and it is allowed. + """ + if hashes is None or not self.has_hash: + return False + # Assert non-None so mypy knows self.hash_name and self.hash are str. + assert self.hash_name is not None + assert self.hash is not None + + return hashes.is_hash_allowed(self.hash_name, hex_digest=self.hash) + + +class _CleanResult(NamedTuple): + """Convert link for equivalency check. + + This is used in the resolver to check whether two URL-specified requirements + likely point to the same distribution and can be considered equivalent. This + equivalency logic avoids comparing URLs literally, which can be too strict + (e.g. "a=1&b=2" vs "b=2&a=1") and produce conflicts unexpecting to users. + + Currently this does three things: + + 1. Drop the basic auth part. This is technically wrong since a server can + serve different content based on auth, but if it does that, it is even + impossible to guarantee two URLs without auth are equivalent, since + the user can input different auth information when prompted. So the + practical solution is to assume the auth doesn't affect the response. + 2. Parse the query to avoid the ordering issue. Note that ordering under the + same key in the query are NOT cleaned; i.e. "a=1&a=2" and "a=2&a=1" are + still considered different. + 3. Explicitly drop most of the fragment part, except ``subdirectory=`` and + hash values, since it should have no impact the downloaded content. 
Note + that this drops the "egg=" part historically used to denote the requested + project (and extras), which is wrong in the strictest sense, but too many + people are supplying it inconsistently to cause superfluous resolution + conflicts, so we choose to also ignore them. + """ + + parsed: urllib.parse.SplitResult + query: Dict[str, List[str]] + subdirectory: str + hashes: Dict[str, str] + + +def _clean_link(link: Link) -> _CleanResult: + parsed = link._parsed_url + netloc = parsed.netloc.rsplit("@", 1)[-1] + # According to RFC 8089, an empty host in file: means localhost. + if parsed.scheme == "file" and not netloc: + netloc = "localhost" + fragment = urllib.parse.parse_qs(parsed.fragment) + if "egg" in fragment: + logger.debug("Ignoring egg= fragment in %s", link) + try: + # If there are multiple subdirectory values, use the first one. + # This matches the behavior of Link.subdirectory_fragment. + subdirectory = fragment["subdirectory"][0] + except (IndexError, KeyError): + subdirectory = "" + # If there are multiple hash values under the same algorithm, use the + # first one. This matches the behavior of Link.hash_value. + hashes = {k: fragment[k][0] for k in _SUPPORTED_HASHES if k in fragment} + return _CleanResult( + parsed=parsed._replace(netloc=netloc, query="", fragment=""), + query=urllib.parse.parse_qs(parsed.query), + subdirectory=subdirectory, + hashes=hashes, + ) + + +@functools.lru_cache(maxsize=None) +def links_equivalent(link1: Link, link2: Link) -> bool: + return _clean_link(link1) == _clean_link(link2) diff --git a/lib/python3.11/site-packages/pip/_internal/models/scheme.py b/python/lib/python3.10/site-packages/pip/_internal/models/scheme.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/models/scheme.py rename to python/lib/python3.10/site-packages/pip/_internal/models/scheme.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/models/search_scope.py b/python/lib/python3.10/site-packages/pip/_internal/models/search_scope.py new file mode 100644 index 0000000..e4e54c2 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/models/search_scope.py @@ -0,0 +1,129 @@ +import itertools +import logging +import os +import posixpath +import urllib.parse +from typing import List + +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.models.index import PyPI +from pip._internal.utils.compat import has_tls +from pip._internal.utils.misc import normalize_path, redact_auth_from_url + +logger = logging.getLogger(__name__) + + +class SearchScope: + + """ + Encapsulates the locations that pip is configured to search. + """ + + __slots__ = ["find_links", "index_urls"] + + @classmethod + def create( + cls, + find_links: List[str], + index_urls: List[str], + ) -> "SearchScope": + """ + Create a SearchScope object after normalizing the `find_links`. + """ + # Build find_links. If an argument starts with ~, it may be + # a local file relative to a home directory. So try normalizing + # it and if it exists, use the normalized version. + # This is deliberately conservative - it might be fine just to + # blindly normalize anything starting with a ~... + built_find_links: List[str] = [] + for link in find_links: + if link.startswith("~"): + new_link = normalize_path(link) + if os.path.exists(new_link): + link = new_link + built_find_links.append(link) + + # If we don't have TLS enabled, then WARN if anyplace we're looking + # relies on TLS. 
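# A minimal sketch of what the has_tls() helper imported above (from
# pip._internal.utils.compat) boils down to; the real helper additionally
# consults the vendored urllib3's pyOpenSSL support, so treat this only as
# an illustration of the idea: TLS availability hinges on whether the
# interpreter's ssl machinery can be imported at all.
def _sketch_has_tls() -> bool:
    try:
        import _ssl  # noqa: F401  # the C extension backing the ssl module
    except ImportError:
        return False
    return True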
+ if not has_tls(): + for link in itertools.chain(index_urls, built_find_links): + parsed = urllib.parse.urlparse(link) + if parsed.scheme == "https": + logger.warning( + "pip is configured with locations that require " + "TLS/SSL, however the ssl module in Python is not " + "available." + ) + break + + return cls( + find_links=built_find_links, + index_urls=index_urls, + ) + + def __init__( + self, + find_links: List[str], + index_urls: List[str], + ) -> None: + self.find_links = find_links + self.index_urls = index_urls + + def get_formatted_locations(self) -> str: + lines = [] + redacted_index_urls = [] + if self.index_urls and self.index_urls != [PyPI.simple_url]: + for url in self.index_urls: + + redacted_index_url = redact_auth_from_url(url) + + # Parse the URL + purl = urllib.parse.urlsplit(redacted_index_url) + + # URL is generally invalid if scheme and netloc is missing + # there are issues with Python and URL parsing, so this test + # is a bit crude. See bpo-20271, bpo-23505. Python doesn't + # always parse invalid URLs correctly - it should raise + # exceptions for malformed URLs + if not purl.scheme and not purl.netloc: + logger.warning( + 'The index url "%s" seems invalid, please provide a scheme.', + redacted_index_url, + ) + + redacted_index_urls.append(redacted_index_url) + + lines.append( + "Looking in indexes: {}".format(", ".join(redacted_index_urls)) + ) + + if self.find_links: + lines.append( + "Looking in links: {}".format( + ", ".join(redact_auth_from_url(url) for url in self.find_links) + ) + ) + return "\n".join(lines) + + def get_index_urls_locations(self, project_name: str) -> List[str]: + """Returns the locations found via self.index_urls + + Checks the url_name on the main (first in the list) index and + use this url_name to produce all locations + """ + + def mkurl_pypi_url(url: str) -> str: + loc = posixpath.join( + url, urllib.parse.quote(canonicalize_name(project_name)) + ) + # For maximum compatibility with easy_install, ensure the path + # ends in a trailing slash. Although this isn't in the spec + # (and PyPI can handle it without the slash) some other index + # implementations might break if they relied on easy_install's + # behavior. + if not loc.endswith("/"): + loc = loc + "/" + return loc + + return [mkurl_pypi_url(url) for url in self.index_urls] diff --git a/lib/python3.11/site-packages/pip/_internal/models/selection_prefs.py b/python/lib/python3.10/site-packages/pip/_internal/models/selection_prefs.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/models/selection_prefs.py rename to python/lib/python3.10/site-packages/pip/_internal/models/selection_prefs.py diff --git a/lib/python3.11/site-packages/pip/_internal/models/target_python.py b/python/lib/python3.10/site-packages/pip/_internal/models/target_python.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/models/target_python.py rename to python/lib/python3.10/site-packages/pip/_internal/models/target_python.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/models/wheel.py b/python/lib/python3.10/site-packages/pip/_internal/models/wheel.py new file mode 100644 index 0000000..aaf218d --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/models/wheel.py @@ -0,0 +1,89 @@ +"""Represents a wheel file and provides access to the various parts of the +name that have meaning. 
+""" +import re +from typing import Dict, Iterable, List + +from pip._vendor.packaging.tags import Tag + +from pip._internal.exceptions import InvalidWheelFilename + + +class Wheel: + """A wheel file""" + + wheel_file_re = re.compile( + r"""^(?P(?P[^\s-]+?)-(?P[^\s-]*?)) + ((-(?P\d[^-]*?))?-(?P[^\s-]+?)-(?P[^\s-]+?)-(?P[^\s-]+?) + \.whl|\.dist-info)$""", + re.VERBOSE, + ) + + def __init__(self, filename: str) -> None: + """ + :raises InvalidWheelFilename: when the filename is invalid for a wheel + """ + wheel_info = self.wheel_file_re.match(filename) + if not wheel_info: + raise InvalidWheelFilename(f"{filename} is not a valid wheel filename.") + self.filename = filename + self.name = wheel_info.group("name").replace("_", "-") + # we'll assume "_" means "-" due to wheel naming scheme + # (https://github.com/pypa/pip/issues/1150) + self.version = wheel_info.group("ver").replace("_", "-") + self.build_tag = wheel_info.group("build") + self.pyversions = wheel_info.group("pyver").split(".") + self.abis = wheel_info.group("abi").split(".") + self.plats = wheel_info.group("plat").split(".") + + # All the tag combinations from this file + self.file_tags = { + Tag(x, y, z) for x in self.pyversions for y in self.abis for z in self.plats + } + + def get_formatted_file_tags(self) -> List[str]: + """Return the wheel's tags as a sorted list of strings.""" + return sorted(str(tag) for tag in self.file_tags) + + def support_index_min(self, tags: List[Tag]) -> int: + """Return the lowest index that one of the wheel's file_tag combinations + achieves in the given list of supported tags. + + For example, if there are 8 supported tags and one of the file tags + is first in the list, then return 0. + + :param tags: the PEP 425 tags to check the wheel against, in order + with most preferred first. + + :raises ValueError: If none of the wheel's file tags match one of + the supported tags. + """ + return min(tags.index(tag) for tag in self.file_tags if tag in tags) + + def find_most_preferred_tag( + self, tags: List[Tag], tag_to_priority: Dict[Tag, int] + ) -> int: + """Return the priority of the most preferred tag that one of the wheel's file + tag combinations achieves in the given list of supported tags using the given + tag_to_priority mapping, where lower priorities are more-preferred. + + This is used in place of support_index_min in some cases in order to avoid + an expensive linear scan of a large list of tags. + + :param tags: the PEP 425 tags to check the wheel against. + :param tag_to_priority: a mapping from tag to priority of that tag, where + lower is more preferred. + + :raises ValueError: If none of the wheel's file tags match one of + the supported tags. + """ + return min( + tag_to_priority[tag] for tag in self.file_tags if tag in tag_to_priority + ) + + def supported(self, tags: Iterable[Tag]) -> bool: + """Return whether the wheel is compatible with one of the given tags. + + :param tags: the PEP 425 tags to check the wheel against. 
+ """ + return not self.file_tags.isdisjoint(tags) diff --git a/lib/python3.11/site-packages/pip/_internal/network/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/network/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/network/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/network/__init__.py diff --git a/lib/python3.11/site-packages/pip/_internal/network/auth.py b/python/lib/python3.10/site-packages/pip/_internal/network/auth.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/network/auth.py rename to python/lib/python3.10/site-packages/pip/_internal/network/auth.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/network/cache.py b/python/lib/python3.10/site-packages/pip/_internal/network/cache.py new file mode 100644 index 0000000..9dba7ed --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/network/cache.py @@ -0,0 +1,69 @@ +"""HTTP cache implementation. +""" + +import os +from contextlib import contextmanager +from typing import Iterator, Optional + +from pip._vendor.cachecontrol.cache import BaseCache +from pip._vendor.cachecontrol.caches import FileCache +from pip._vendor.requests.models import Response + +from pip._internal.utils.filesystem import adjacent_tmp_file, replace +from pip._internal.utils.misc import ensure_dir + + +def is_from_cache(response: Response) -> bool: + return getattr(response, "from_cache", False) + + +@contextmanager +def suppressed_cache_errors() -> Iterator[None]: + """If we can't access the cache then we can just skip caching and process + requests as if caching wasn't enabled. + """ + try: + yield + except OSError: + pass + + +class SafeFileCache(BaseCache): + """ + A file based cache which is safe to use even when the target directory may + not be accessible or writable. + """ + + def __init__(self, directory: str) -> None: + assert directory is not None, "Cache directory must not be None." + super().__init__() + self.directory = directory + + def _get_cache_path(self, name: str) -> str: + # From cachecontrol.caches.file_cache.FileCache._fn, brought into our + # class for backwards-compatibility and to avoid using a non-public + # method. + hashed = FileCache.encode(name) + parts = list(hashed[:5]) + [hashed] + return os.path.join(self.directory, *parts) + + def get(self, key: str) -> Optional[bytes]: + path = self._get_cache_path(key) + with suppressed_cache_errors(): + with open(path, "rb") as f: + return f.read() + + def set(self, key: str, value: bytes, expires: Optional[int] = None) -> None: + path = self._get_cache_path(key) + with suppressed_cache_errors(): + ensure_dir(os.path.dirname(path)) + + with adjacent_tmp_file(path) as f: + f.write(value) + + replace(f.name, path) + + def delete(self, key: str) -> None: + path = self._get_cache_path(key) + with suppressed_cache_errors(): + os.remove(path) diff --git a/python/lib/python3.10/site-packages/pip/_internal/network/download.py b/python/lib/python3.10/site-packages/pip/_internal/network/download.py new file mode 100644 index 0000000..35bc970 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/network/download.py @@ -0,0 +1,185 @@ +"""Download files with progress indicators. 
+""" +import cgi +import logging +import mimetypes +import os +from typing import Iterable, Optional, Tuple + +from pip._vendor.requests.models import CONTENT_CHUNK_SIZE, Response + +from pip._internal.cli.progress_bars import get_download_progress_renderer +from pip._internal.exceptions import NetworkConnectionError +from pip._internal.models.index import PyPI +from pip._internal.models.link import Link +from pip._internal.network.cache import is_from_cache +from pip._internal.network.session import PipSession +from pip._internal.network.utils import HEADERS, raise_for_status, response_chunks +from pip._internal.utils.misc import format_size, redact_auth_from_url, splitext + +logger = logging.getLogger(__name__) + + +def _get_http_response_size(resp: Response) -> Optional[int]: + try: + return int(resp.headers["content-length"]) + except (ValueError, KeyError, TypeError): + return None + + +def _prepare_download( + resp: Response, + link: Link, + progress_bar: str, +) -> Iterable[bytes]: + total_length = _get_http_response_size(resp) + + if link.netloc == PyPI.file_storage_domain: + url = link.show_url + else: + url = link.url_without_fragment + + logged_url = redact_auth_from_url(url) + + if total_length: + logged_url = "{} ({})".format(logged_url, format_size(total_length)) + + if is_from_cache(resp): + logger.info("Using cached %s", logged_url) + else: + logger.info("Downloading %s", logged_url) + + if logger.getEffectiveLevel() > logging.INFO: + show_progress = False + elif is_from_cache(resp): + show_progress = False + elif not total_length: + show_progress = True + elif total_length > (40 * 1000): + show_progress = True + else: + show_progress = False + + chunks = response_chunks(resp, CONTENT_CHUNK_SIZE) + + if not show_progress: + return chunks + + renderer = get_download_progress_renderer(bar_type=progress_bar, size=total_length) + return renderer(chunks) + + +def sanitize_content_filename(filename: str) -> str: + """ + Sanitize the "filename" value from a Content-Disposition header. + """ + return os.path.basename(filename) + + +def parse_content_disposition(content_disposition: str, default_filename: str) -> str: + """ + Parse the "filename" value from a Content-Disposition header, and + return the default filename if the result is empty. + """ + _type, params = cgi.parse_header(content_disposition) + filename = params.get("filename") + if filename: + # We need to sanitize the filename to prevent directory traversal + # in case the filename contains ".." path parts. + filename = sanitize_content_filename(filename) + return filename or default_filename + + +def _get_http_response_filename(resp: Response, link: Link) -> str: + """Get an ideal filename from the given HTTP response, falling back to + the link filename if not provided. 
+ """ + filename = link.filename # fallback + # Have a look at the Content-Disposition header for a better guess + content_disposition = resp.headers.get("content-disposition") + if content_disposition: + filename = parse_content_disposition(content_disposition, filename) + ext: Optional[str] = splitext(filename)[1] + if not ext: + ext = mimetypes.guess_extension(resp.headers.get("content-type", "")) + if ext: + filename += ext + if not ext and link.url != resp.url: + ext = os.path.splitext(resp.url)[1] + if ext: + filename += ext + return filename + + +def _http_get_download(session: PipSession, link: Link) -> Response: + target_url = link.url.split("#", 1)[0] + resp = session.get(target_url, headers=HEADERS, stream=True) + raise_for_status(resp) + return resp + + +class Downloader: + def __init__( + self, + session: PipSession, + progress_bar: str, + ) -> None: + self._session = session + self._progress_bar = progress_bar + + def __call__(self, link: Link, location: str) -> Tuple[str, str]: + """Download the file given by link into location.""" + try: + resp = _http_get_download(self._session, link) + except NetworkConnectionError as e: + assert e.response is not None + logger.critical( + "HTTP error %s while getting %s", e.response.status_code, link + ) + raise + + filename = _get_http_response_filename(resp, link) + filepath = os.path.join(location, filename) + + chunks = _prepare_download(resp, link, self._progress_bar) + with open(filepath, "wb") as content_file: + for chunk in chunks: + content_file.write(chunk) + content_type = resp.headers.get("Content-Type", "") + return filepath, content_type + + +class BatchDownloader: + def __init__( + self, + session: PipSession, + progress_bar: str, + ) -> None: + self._session = session + self._progress_bar = progress_bar + + def __call__( + self, links: Iterable[Link], location: str + ) -> Iterable[Tuple[Link, Tuple[str, str]]]: + """Download the files given by links into location.""" + for link in links: + try: + resp = _http_get_download(self._session, link) + except NetworkConnectionError as e: + assert e.response is not None + logger.critical( + "HTTP error %s while getting %s", + e.response.status_code, + link, + ) + raise + + filename = _get_http_response_filename(resp, link) + filepath = os.path.join(location, filename) + + chunks = _prepare_download(resp, link, self._progress_bar) + with open(filepath, "wb") as content_file: + for chunk in chunks: + content_file.write(chunk) + content_type = resp.headers.get("Content-Type", "") + yield link, (filepath, content_type) diff --git a/python/lib/python3.10/site-packages/pip/_internal/network/lazy_wheel.py b/python/lib/python3.10/site-packages/pip/_internal/network/lazy_wheel.py new file mode 100644 index 0000000..c9e44d5 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/network/lazy_wheel.py @@ -0,0 +1,210 @@ +"""Lazy ZIP over HTTP""" + +__all__ = ["HTTPRangeRequestUnsupported", "dist_from_wheel_url"] + +from bisect import bisect_left, bisect_right +from contextlib import contextmanager +from tempfile import NamedTemporaryFile +from typing import Any, Dict, Iterator, List, Optional, Tuple +from zipfile import BadZipfile, ZipFile + +from pip._vendor.packaging.utils import canonicalize_name +from pip._vendor.requests.models import CONTENT_CHUNK_SIZE, Response + +from pip._internal.metadata import BaseDistribution, MemoryWheel, get_wheel_distribution +from pip._internal.network.session import PipSession +from pip._internal.network.utils import HEADERS, raise_for_status, 
response_chunks + + +class HTTPRangeRequestUnsupported(Exception): + pass + + +def dist_from_wheel_url(name: str, url: str, session: PipSession) -> BaseDistribution: + """Return a distribution object from the given wheel URL. + + This uses HTTP range requests to only fetch the portion of the wheel + containing metadata, just enough for the object to be constructed. + If such requests are not supported, HTTPRangeRequestUnsupported + is raised. + """ + with LazyZipOverHTTP(url, session) as zf: + # For read-only ZIP files, ZipFile only needs methods read, + # seek, seekable and tell, not the whole IO protocol. + wheel = MemoryWheel(zf.name, zf) # type: ignore + # After context manager exit, wheel.name + # is an invalid file by intention. + return get_wheel_distribution(wheel, canonicalize_name(name)) + + +class LazyZipOverHTTP: + """File-like object mapped to a ZIP file over HTTP. + + This uses HTTP range requests to lazily fetch the file's content, + which is supposed to be fed to ZipFile. If such requests are not + supported by the server, raise HTTPRangeRequestUnsupported + during initialization. + """ + + def __init__( + self, url: str, session: PipSession, chunk_size: int = CONTENT_CHUNK_SIZE + ) -> None: + head = session.head(url, headers=HEADERS) + raise_for_status(head) + assert head.status_code == 200 + self._session, self._url, self._chunk_size = session, url, chunk_size + self._length = int(head.headers["Content-Length"]) + self._file = NamedTemporaryFile() + self.truncate(self._length) + self._left: List[int] = [] + self._right: List[int] = [] + if "bytes" not in head.headers.get("Accept-Ranges", "none"): + raise HTTPRangeRequestUnsupported("range request is not supported") + self._check_zip() + + @property + def mode(self) -> str: + """Opening mode, which is always rb.""" + return "rb" + + @property + def name(self) -> str: + """Path to the underlying file.""" + return self._file.name + + def seekable(self) -> bool: + """Return whether random access is supported, which is True.""" + return True + + def close(self) -> None: + """Close the file.""" + self._file.close() + + @property + def closed(self) -> bool: + """Whether the file is closed.""" + return self._file.closed + + def read(self, size: int = -1) -> bytes: + """Read up to size bytes from the object and return them. + + As a convenience, if size is unspecified or -1, + all bytes until EOF are returned. Fewer than + size bytes may be returned if EOF is reached. + """ + download_size = max(size, self._chunk_size) + start, length = self.tell(), self._length + stop = length if size < 0 else min(start + download_size, length) + start = max(0, stop - download_size) + self._download(start, stop - 1) + return self._file.read(size) + + def readable(self) -> bool: + """Return whether the file is readable, which is True.""" + return True + + def seek(self, offset: int, whence: int = 0) -> int: + """Change stream position and return the new absolute position. + + Seek to offset, relative to the position indicated by whence: + * 0: Start of stream (the default). pos should be >= 0; + * 1: Current position - pos may be negative; + * 2: End of stream - pos usually negative. + """ + return self._file.seek(offset, whence) + + def tell(self) -> int: + """Return the current position.""" + return self._file.tell() + + def truncate(self, size: Optional[int] = None) -> int: + """Resize the stream to the given size in bytes. + + If size is unspecified, resize to the current position. + The current stream position isn't changed.
+ + Return the new file size. + """ + return self._file.truncate(size) + + def writable(self) -> bool: + """Return False.""" + return False + + def __enter__(self) -> "LazyZipOverHTTP": + self._file.__enter__() + return self + + def __exit__(self, *exc: Any) -> Optional[bool]: + return self._file.__exit__(*exc) + + @contextmanager + def _stay(self) -> Iterator[None]: + """Return a context manager keeping the position. + + At the end of the block, seek back to original position. + """ + pos = self.tell() + try: + yield + finally: + self.seek(pos) + + def _check_zip(self) -> None: + """Check and download until the file is a valid ZIP.""" + end = self._length - 1 + for start in reversed(range(0, end, self._chunk_size)): + self._download(start, end) + with self._stay(): + try: + # For read-only ZIP files, ZipFile only needs + # methods read, seek, seekable and tell. + ZipFile(self) # type: ignore + except BadZipfile: + pass + else: + break + + def _stream_response( + self, start: int, end: int, base_headers: Dict[str, str] = HEADERS + ) -> Response: + """Return HTTP response to a range request from start to end.""" + headers = base_headers.copy() + headers["Range"] = f"bytes={start}-{end}" + # TODO: Get range requests to be correctly cached + headers["Cache-Control"] = "no-cache" + return self._session.get(self._url, headers=headers, stream=True) + + def _merge( + self, start: int, end: int, left: int, right: int + ) -> Iterator[Tuple[int, int]]: + """Return an iterator of intervals to be fetched. + + Args: + start (int): Start of needed interval + end (int): End of needed interval + left (int): Index of first overlapping downloaded data + right (int): Index after last overlapping downloaded data + """ + lslice, rslice = self._left[left:right], self._right[left:right] + i = start = min([start] + lslice[:1]) + end = max([end] + rslice[-1:]) + for j, k in zip(lslice, rslice): + if j > i: + yield i, j - 1 + i = k + 1 + if i <= end: + yield i, end + self._left[left:right], self._right[left:right] = [start], [end] + + def _download(self, start: int, end: int) -> None: + """Download bytes from start to end inclusively.""" + with self._stay(): + left = bisect_left(self._right, start) + right = bisect_right(self._left, end) + for start, end in self._merge(start, end, left, right): + response = self._stream_response(start, end) + response.raise_for_status() + self.seek(start) + for chunk in response_chunks(response, self._chunk_size): + self._file.write(chunk) diff --git a/python/lib/python3.10/site-packages/pip/_internal/network/session.py b/python/lib/python3.10/site-packages/pip/_internal/network/session.py new file mode 100644 index 0000000..cbe743b --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/network/session.py @@ -0,0 +1,454 @@ +"""PipSession and supporting code, containing all pip-specific +network request configuration and behavior. 
+""" + +import email.utils +import io +import ipaddress +import json +import logging +import mimetypes +import os +import platform +import shutil +import subprocess +import sys +import urllib.parse +import warnings +from typing import Any, Dict, Iterator, List, Mapping, Optional, Sequence, Tuple, Union + +from pip._vendor import requests, urllib3 +from pip._vendor.cachecontrol import CacheControlAdapter +from pip._vendor.requests.adapters import BaseAdapter, HTTPAdapter +from pip._vendor.requests.models import PreparedRequest, Response +from pip._vendor.requests.structures import CaseInsensitiveDict +from pip._vendor.urllib3.connectionpool import ConnectionPool +from pip._vendor.urllib3.exceptions import InsecureRequestWarning + +from pip import __version__ +from pip._internal.metadata import get_default_environment +from pip._internal.models.link import Link +from pip._internal.network.auth import MultiDomainBasicAuth +from pip._internal.network.cache import SafeFileCache + +# Import ssl from compat so the initial import occurs in only one place. +from pip._internal.utils.compat import has_tls +from pip._internal.utils.glibc import libc_ver +from pip._internal.utils.misc import build_url_from_netloc, parse_netloc +from pip._internal.utils.urls import url_to_path + +logger = logging.getLogger(__name__) + +SecureOrigin = Tuple[str, str, Optional[Union[int, str]]] + + +# Ignore warning raised when using --trusted-host. +warnings.filterwarnings("ignore", category=InsecureRequestWarning) + + +SECURE_ORIGINS: List[SecureOrigin] = [ + # protocol, hostname, port + # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC) + ("https", "*", "*"), + ("*", "localhost", "*"), + ("*", "127.0.0.0/8", "*"), + ("*", "::1/128", "*"), + ("file", "*", None), + # ssh is always secure. + ("ssh", "*", "*"), +] + + +# These are environment variables present when running under various +# CI systems. For each variable, some CI systems that use the variable +# are indicated. The collection was chosen so that for each of a number +# of popular systems, at least one of the environment variables is used. +# This list is used to provide some indication of and lower bound for +# CI traffic to PyPI. Thus, it is okay if the list is not comprehensive. +# For more background, see: https://github.com/pypa/pip/issues/5499 +CI_ENVIRONMENT_VARIABLES = ( + # Azure Pipelines + "BUILD_BUILDID", + # Jenkins + "BUILD_ID", + # AppVeyor, CircleCI, Codeship, Gitlab CI, Shippable, Travis CI + "CI", + # Explicit environment variable. + "PIP_IS_CI", +) + + +def looks_like_ci() -> bool: + """ + Return whether it looks like pip is running under CI. + """ + # We don't use the method of checking for a tty (e.g. using isatty()) + # because some CI systems mimic a tty (e.g. Travis CI). Thus that + # method doesn't provide definitive information in either direction. + return any(name in os.environ for name in CI_ENVIRONMENT_VARIABLES) + + +def user_agent() -> str: + """ + Return a string representing the user agent. 
+ """ + data: Dict[str, Any] = { + "installer": {"name": "pip", "version": __version__}, + "python": platform.python_version(), + "implementation": { + "name": platform.python_implementation(), + }, + } + + if data["implementation"]["name"] == "CPython": + data["implementation"]["version"] = platform.python_version() + elif data["implementation"]["name"] == "PyPy": + pypy_version_info = sys.pypy_version_info # type: ignore + if pypy_version_info.releaselevel == "final": + pypy_version_info = pypy_version_info[:3] + data["implementation"]["version"] = ".".join( + [str(x) for x in pypy_version_info] + ) + elif data["implementation"]["name"] == "Jython": + # Complete Guess + data["implementation"]["version"] = platform.python_version() + elif data["implementation"]["name"] == "IronPython": + # Complete Guess + data["implementation"]["version"] = platform.python_version() + + if sys.platform.startswith("linux"): + from pip._vendor import distro + + linux_distribution = distro.name(), distro.version(), distro.codename() + distro_infos: Dict[str, Any] = dict( + filter( + lambda x: x[1], + zip(["name", "version", "id"], linux_distribution), + ) + ) + libc = dict( + filter( + lambda x: x[1], + zip(["lib", "version"], libc_ver()), + ) + ) + if libc: + distro_infos["libc"] = libc + if distro_infos: + data["distro"] = distro_infos + + if sys.platform.startswith("darwin") and platform.mac_ver()[0]: + data["distro"] = {"name": "macOS", "version": platform.mac_ver()[0]} + + if platform.system(): + data.setdefault("system", {})["name"] = platform.system() + + if platform.release(): + data.setdefault("system", {})["release"] = platform.release() + + if platform.machine(): + data["cpu"] = platform.machine() + + if has_tls(): + import _ssl as ssl + + data["openssl_version"] = ssl.OPENSSL_VERSION + + setuptools_dist = get_default_environment().get_distribution("setuptools") + if setuptools_dist is not None: + data["setuptools_version"] = str(setuptools_dist.version) + + if shutil.which("rustc") is not None: + # If for any reason `rustc --version` fails, silently ignore it + try: + rustc_output = subprocess.check_output( + ["rustc", "--version"], stderr=subprocess.STDOUT, timeout=0.5 + ) + except Exception: + pass + else: + if rustc_output.startswith(b"rustc "): + # The format of `rustc --version` is: + # `b'rustc 1.52.1 (9bc8c42bb 2021-05-09)\n'` + # We extract just the middle (1.52.1) part + data["rustc_version"] = rustc_output.split(b" ")[1].decode() + + # Use None rather than False so as not to give the impression that + # pip knows it is not being run under CI. Rather, it is a null or + # inconclusive result. Also, we include some value rather than no + # value to make it easier to know that the check has been run. 
+ data["ci"] = True if looks_like_ci() else None + + user_data = os.environ.get("PIP_USER_AGENT_USER_DATA") + if user_data is not None: + data["user_data"] = user_data + + return "{data[installer][name]}/{data[installer][version]} {json}".format( + data=data, + json=json.dumps(data, separators=(",", ":"), sort_keys=True), + ) + + +class LocalFSAdapter(BaseAdapter): + def send( + self, + request: PreparedRequest, + stream: bool = False, + timeout: Optional[Union[float, Tuple[float, float]]] = None, + verify: Union[bool, str] = True, + cert: Optional[Union[str, Tuple[str, str]]] = None, + proxies: Optional[Mapping[str, str]] = None, + ) -> Response: + pathname = url_to_path(request.url) + + resp = Response() + resp.status_code = 200 + resp.url = request.url + + try: + stats = os.stat(pathname) + except OSError as exc: + # format the exception raised as a io.BytesIO object, + # to return a better error message: + resp.status_code = 404 + resp.reason = type(exc).__name__ + resp.raw = io.BytesIO(f"{resp.reason}: {exc}".encode("utf8")) + else: + modified = email.utils.formatdate(stats.st_mtime, usegmt=True) + content_type = mimetypes.guess_type(pathname)[0] or "text/plain" + resp.headers = CaseInsensitiveDict( + { + "Content-Type": content_type, + "Content-Length": stats.st_size, + "Last-Modified": modified, + } + ) + + resp.raw = open(pathname, "rb") + resp.close = resp.raw.close + + return resp + + def close(self) -> None: + pass + + +class InsecureHTTPAdapter(HTTPAdapter): + def cert_verify( + self, + conn: ConnectionPool, + url: str, + verify: Union[bool, str], + cert: Optional[Union[str, Tuple[str, str]]], + ) -> None: + super().cert_verify(conn=conn, url=url, verify=False, cert=cert) + + +class InsecureCacheControlAdapter(CacheControlAdapter): + def cert_verify( + self, + conn: ConnectionPool, + url: str, + verify: Union[bool, str], + cert: Optional[Union[str, Tuple[str, str]]], + ) -> None: + super().cert_verify(conn=conn, url=url, verify=False, cert=cert) + + +class PipSession(requests.Session): + + timeout: Optional[int] = None + + def __init__( + self, + *args: Any, + retries: int = 0, + cache: Optional[str] = None, + trusted_hosts: Sequence[str] = (), + index_urls: Optional[List[str]] = None, + **kwargs: Any, + ) -> None: + """ + :param trusted_hosts: Domains not to emit warnings for when not using + HTTPS. + """ + super().__init__(*args, **kwargs) + + # Namespace the attribute with "pip_" just in case to prevent + # possible conflicts with the base class. + self.pip_trusted_origins: List[Tuple[str, Optional[int]]] = [] + + # Attach our User Agent to the request + self.headers["User-Agent"] = user_agent() + + # Attach our Authentication handler to the session + self.auth = MultiDomainBasicAuth(index_urls=index_urls) + + # Create our urllib3.Retry instance which will allow us to customize + # how we handle retries. + retries = urllib3.Retry( + # Set the total number of retries that a particular request can + # have. + total=retries, + # A 503 error from PyPI typically means that the Fastly -> Origin + # connection got interrupted in some way. A 503 error in general + # is typically considered a transient error so we'll go ahead and + # retry it. + # A 500 may indicate transient error in Amazon S3 + # A 520 or 527 - may indicate transient error in CloudFlare + status_forcelist=[500, 503, 520, 527], + # Add a small amount of back off between failed requests in + # order to prevent hammering the service. 
+ backoff_factor=0.25, + ) # type: ignore + + # Our Insecure HTTPAdapter disables HTTPS validation. It does not + # support caching so we'll use it for all http:// URLs. + # If caching is disabled, we will also use it for + # https:// hosts that we've marked as ignoring + # TLS errors for (trusted-hosts). + insecure_adapter = InsecureHTTPAdapter(max_retries=retries) + + # We want to _only_ cache responses on securely fetched origins or when + # the host is specified as trusted. We do this because + # we can't validate the response of an insecurely/untrusted fetched + # origin, and we don't want someone to be able to poison the cache and + # require manual eviction from the cache to fix it. + if cache: + secure_adapter = CacheControlAdapter( + cache=SafeFileCache(cache), + max_retries=retries, + ) + self._trusted_host_adapter = InsecureCacheControlAdapter( + cache=SafeFileCache(cache), + max_retries=retries, + ) + else: + secure_adapter = HTTPAdapter(max_retries=retries) + self._trusted_host_adapter = insecure_adapter + + self.mount("https://", secure_adapter) + self.mount("http://", insecure_adapter) + + # Enable file:// urls + self.mount("file://", LocalFSAdapter()) + + for host in trusted_hosts: + self.add_trusted_host(host, suppress_logging=True) + + def update_index_urls(self, new_index_urls: List[str]) -> None: + """ + :param new_index_urls: New index urls to update the authentication + handler with. + """ + self.auth.index_urls = new_index_urls + + def add_trusted_host( + self, host: str, source: Optional[str] = None, suppress_logging: bool = False + ) -> None: + """ + :param host: It is okay to provide a host that has previously been + added. + :param source: An optional source string, for logging where the host + string came from. + """ + if not suppress_logging: + msg = f"adding trusted host: {host!r}" + if source is not None: + msg += f" (from {source})" + logger.info(msg) + + host_port = parse_netloc(host) + if host_port not in self.pip_trusted_origins: + self.pip_trusted_origins.append(host_port) + + self.mount( + build_url_from_netloc(host, scheme="http") + "/", self._trusted_host_adapter + ) + self.mount(build_url_from_netloc(host) + "/", self._trusted_host_adapter) + if not host_port[1]: + self.mount( + build_url_from_netloc(host, scheme="http") + ":", + self._trusted_host_adapter, + ) + # Mount wildcard ports for the same host. + self.mount(build_url_from_netloc(host) + ":", self._trusted_host_adapter) + + def iter_secure_origins(self) -> Iterator[SecureOrigin]: + yield from SECURE_ORIGINS + for host, port in self.pip_trusted_origins: + yield ("*", host, "*" if port is None else port) + + def is_secure_origin(self, location: Link) -> bool: + # Determine if this url used a secure transport mechanism + parsed = urllib.parse.urlparse(str(location)) + origin_protocol, origin_host, origin_port = ( + parsed.scheme, + parsed.hostname, + parsed.port, + ) + + # The protocol to use to see if the protocol matches. + # Don't count the repository type as part of the protocol: in + # cases such as "git+ssh", only use "ssh". (I.e., Only verify against + # the last scheme.) + origin_protocol = origin_protocol.rsplit("+", 1)[-1] + + # Determine if our origin is a secure origin by looking through our + # hardcoded list of secure origins, as well as any additional ones + # configured on this PackageFinder instance. 
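# A minimal sketch of the matching performed in the loop that follows (the
# helper name here is hypothetical): each origin host is compared against a
# secure-origin pattern, treating "*" as a wildcard and CIDR patterns such
# as "127.0.0.0/8" as IP networks, falling back to a case-insensitive
# hostname comparison when the pair isn't a valid address/network.
import ipaddress

def _host_matches(origin_host: str, secure_host: str) -> bool:
    if secure_host == "*":
        return True
    try:
        return ipaddress.ip_address(origin_host) in ipaddress.ip_network(secure_host)
    except ValueError:
        # Not a valid address/network pair; compare as hostnames instead.
        return origin_host.lower() == secure_host.lower()

assert _host_matches("127.0.0.1", "127.0.0.0/8")
assert not _host_matches("example.com", "localhost")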
+ for secure_origin in self.iter_secure_origins(): + secure_protocol, secure_host, secure_port = secure_origin + if origin_protocol != secure_protocol and secure_protocol != "*": + continue + + try: + addr = ipaddress.ip_address(origin_host) + network = ipaddress.ip_network(secure_host) + except ValueError: + # We don't have both a valid address and a valid network, so + # we'll check this origin against hostnames. + if ( + origin_host + and origin_host.lower() != secure_host.lower() + and secure_host != "*" + ): + continue + else: + # We have a valid address and network, so see if the address + # is contained within the network. + if addr not in network: + continue + + # Check to see if the port matches. + if ( + origin_port != secure_port + and secure_port != "*" + and secure_port is not None + ): + continue + + # If we've gotten here, then this origin matches the current + # secure origin and we should return True + return True + + # If we've gotten to this point, then the origin isn't secure and we + # will not accept it as a valid location to search. We will, however, + # log a warning that we are ignoring it. + logger.warning( + "The repository located at %s is not a trusted or secure host and " + "is being ignored. If this repository is available via HTTPS we " + "recommend you use HTTPS instead, otherwise you may silence " + "this warning and allow it anyway with '--trusted-host %s'.", + origin_host, + origin_host, + ) + + return False + + def request(self, method: str, url: str, *args: Any, **kwargs: Any) -> Response: + # Allow setting a default timeout on a session + kwargs.setdefault("timeout", self.timeout) + + # Dispatch the actual request + return super().request(method, url, *args, **kwargs) diff --git a/python/lib/python3.10/site-packages/pip/_internal/network/utils.py b/python/lib/python3.10/site-packages/pip/_internal/network/utils.py new file mode 100644 index 0000000..094cf1b --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/network/utils.py @@ -0,0 +1,96 @@ +from typing import Dict, Iterator + +from pip._vendor.requests.models import CONTENT_CHUNK_SIZE, Response + +from pip._internal.exceptions import NetworkConnectionError + +# The following comments and HTTP headers were originally added by +# Donald Stufft in git commit 22c562429a61bb77172039e480873fb239dd8c03. +# +# We use Accept-Encoding: identity here because requests defaults to +# accepting compressed responses. This breaks in a variety of ways +# depending on how the server is configured. +# - Some servers will notice that the file isn't a compressible file +# and will leave the file alone and with an empty Content-Encoding +# - Some servers will notice that the file is already compressed and +# will leave the file alone, adding a Content-Encoding: gzip header +# - Some servers won't notice anything at all and will take a file +# that's already been compressed and compress it again, and set +# the Content-Encoding: gzip header +# By setting this to request only the identity encoding we're hoping +# to eliminate the third case. Hopefully there does not exist a server +# which, when given a file, will notice it is already compressed and that +# you're not asking for a compressed file and will then decompress it +# before sending, because if that's the case I don't think it'll ever be +# possible to make this work.
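# A hedged usage sketch of the header defined just below, using the
# stand-alone requests package for brevity (an assumption; pip itself uses
# its vendored copy): asking for the identity encoding keeps the bytes on
# the wire identical to the stored file, so a later hash check sees exactly
# what was uploaded.
import requests

def fetch_identity(url: str) -> bytes:
    resp = requests.get(url, headers={"Accept-Encoding": "identity"}, stream=True)
    resp.raise_for_status()
    return resp.raw.read()  # raw body bytes, no transparent decompression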
+HEADERS: Dict[str, str] = {"Accept-Encoding": "identity"} + + +def raise_for_status(resp: Response) -> None: + http_error_msg = "" + if isinstance(resp.reason, bytes): + # We attempt to decode utf-8 first because some servers + # choose to localize their reason strings. If the string + # isn't utf-8, we fall back to iso-8859-1 for all other + # encodings. + try: + reason = resp.reason.decode("utf-8") + except UnicodeDecodeError: + reason = resp.reason.decode("iso-8859-1") + else: + reason = resp.reason + + if 400 <= resp.status_code < 500: + http_error_msg = ( + f"{resp.status_code} Client Error: {reason} for url: {resp.url}" + ) + + elif 500 <= resp.status_code < 600: + http_error_msg = ( + f"{resp.status_code} Server Error: {reason} for url: {resp.url}" + ) + + if http_error_msg: + raise NetworkConnectionError(http_error_msg, response=resp) + + +def response_chunks( + response: Response, chunk_size: int = CONTENT_CHUNK_SIZE +) -> Iterator[bytes]: + """Given a requests Response, provide the data chunks.""" + try: + # Special case for urllib3. + for chunk in response.raw.stream( + chunk_size, + # We use decode_content=False here because we don't + # want urllib3 to mess with the raw bytes we get + # from the server. If we decompress inside of + # urllib3 then we cannot verify the checksum + # because the checksum will be of the compressed + # file. This breakage will only occur if the + # server adds a Content-Encoding header, which + # depends on how the server was configured: + # - Some servers will notice that the file isn't a + # compressible file and will leave the file alone + # and with an empty Content-Encoding + # - Some servers will notice that the file is + # already compressed and will leave the file + # alone and will add a Content-Encoding: gzip + # header + # - Some servers won't notice anything at all and + # will take a file that's already been compressed + # and compress it again and set the + # Content-Encoding: gzip header + # + # By setting this not to decode automatically we + # hope to eliminate problems with the second case. + decode_content=False, + ): + yield chunk + except AttributeError: + # Standard file-like object. 
+ while True: + chunk = response.raw.read(chunk_size) + if not chunk: + break + yield chunk diff --git a/lib/python3.11/site-packages/pip/_internal/network/xmlrpc.py b/python/lib/python3.10/site-packages/pip/_internal/network/xmlrpc.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/network/xmlrpc.py rename to python/lib/python3.10/site-packages/pip/_internal/network/xmlrpc.py diff --git a/lib/python3.11/site-packages/pip/_internal/operations/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/operations/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/operations/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/__init__.py diff --git a/lib/python3.11/site-packages/pip/_internal/operations/build/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/operations/build/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/operations/build/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/build/__init__.py diff --git a/lib/python3.11/site-packages/pip/_internal/operations/build/metadata.py b/python/lib/python3.10/site-packages/pip/_internal/operations/build/metadata.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/operations/build/metadata.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/build/metadata.py diff --git a/lib/python3.11/site-packages/pip/_internal/operations/build/metadata_editable.py b/python/lib/python3.10/site-packages/pip/_internal/operations/build/metadata_editable.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/operations/build/metadata_editable.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/build/metadata_editable.py diff --git a/lib/python3.11/site-packages/pip/_internal/operations/build/metadata_legacy.py b/python/lib/python3.10/site-packages/pip/_internal/operations/build/metadata_legacy.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/operations/build/metadata_legacy.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/build/metadata_legacy.py diff --git a/lib/python3.11/site-packages/pip/_internal/operations/build/wheel.py b/python/lib/python3.10/site-packages/pip/_internal/operations/build/wheel.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/operations/build/wheel.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/build/wheel.py diff --git a/lib/python3.11/site-packages/pip/_internal/operations/build/wheel_editable.py b/python/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_editable.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/operations/build/wheel_editable.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_editable.py diff --git a/lib/python3.11/site-packages/pip/_internal/operations/build/wheel_legacy.py b/python/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_legacy.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/operations/build/wheel_legacy.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/build/wheel_legacy.py diff --git a/lib/python3.11/site-packages/pip/_internal/operations/check.py b/python/lib/python3.10/site-packages/pip/_internal/operations/check.py similarity index 100% rename from 
lib/python3.11/site-packages/pip/_internal/operations/check.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/check.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/operations/freeze.py b/python/lib/python3.10/site-packages/pip/_internal/operations/freeze.py new file mode 100644 index 0000000..4565540 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/operations/freeze.py @@ -0,0 +1,254 @@ +import collections +import logging +import os +from typing import Container, Dict, Iterable, Iterator, List, NamedTuple, Optional, Set + +from pip._vendor.packaging.utils import canonicalize_name +from pip._vendor.packaging.version import Version + +from pip._internal.exceptions import BadCommand, InstallationError +from pip._internal.metadata import BaseDistribution, get_environment +from pip._internal.req.constructors import ( + install_req_from_editable, + install_req_from_line, +) +from pip._internal.req.req_file import COMMENT_RE +from pip._internal.utils.direct_url_helpers import direct_url_as_pep440_direct_reference + +logger = logging.getLogger(__name__) + + +class _EditableInfo(NamedTuple): + requirement: str + comments: List[str] + + +def freeze( + requirement: Optional[List[str]] = None, + local_only: bool = False, + user_only: bool = False, + paths: Optional[List[str]] = None, + isolated: bool = False, + exclude_editable: bool = False, + skip: Container[str] = (), +) -> Iterator[str]: + installations: Dict[str, FrozenRequirement] = {} + + dists = get_environment(paths).iter_installed_distributions( + local_only=local_only, + skip=(), + user_only=user_only, + ) + for dist in dists: + req = FrozenRequirement.from_dist(dist) + if exclude_editable and req.editable: + continue + installations[req.canonical_name] = req + + if requirement: + # the options that don't get turned into an InstallRequirement + # should only be emitted once, even if the same option is in multiple + # requirements files, so we need to keep track of what has been emitted + # so that we don't emit it again if it's seen again + emitted_options: Set[str] = set() + # keep track of which files a requirement is in so that we can + # give an accurate warning if a requirement appears multiple times. 
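# A minimal sketch of the bookkeeping set up just below: a defaultdict(list)
# lets every requirement name accumulate the files it appears in, which later
# feeds the "included multiple times" warning at the end of freeze().
import collections
seen = collections.defaultdict(list)
for name, path in [("requests", "a.txt"), ("requests", "b.txt")]:
    seen[name].append(path)
assert seen["requests"] == ["a.txt", "b.txt"]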
+ req_files: Dict[str, List[str]] = collections.defaultdict(list) + for req_file_path in requirement: + with open(req_file_path) as req_file: + for line in req_file: + if ( + not line.strip() + or line.strip().startswith("#") + or line.startswith( + ( + "-r", + "--requirement", + "-f", + "--find-links", + "-i", + "--index-url", + "--pre", + "--trusted-host", + "--process-dependency-links", + "--extra-index-url", + "--use-feature", + ) + ) + ): + line = line.rstrip() + if line not in emitted_options: + emitted_options.add(line) + yield line + continue + + if line.startswith("-e") or line.startswith("--editable"): + if line.startswith("-e"): + line = line[2:].strip() + else: + line = line[len("--editable") :].strip().lstrip("=") + line_req = install_req_from_editable( + line, + isolated=isolated, + ) + else: + line_req = install_req_from_line( + COMMENT_RE.sub("", line).strip(), + isolated=isolated, + ) + + if not line_req.name: + logger.info( + "Skipping line in requirement file [%s] because " + "it's not clear what it would install: %s", + req_file_path, + line.strip(), + ) + logger.info( + " (add #egg=PackageName to the URL to avoid" + " this warning)" + ) + else: + line_req_canonical_name = canonicalize_name(line_req.name) + if line_req_canonical_name not in installations: + # either it's not installed, or it is installed + # but has been processed already + if not req_files[line_req.name]: + logger.warning( + "Requirement file [%s] contains %s, but " + "package %r is not installed", + req_file_path, + COMMENT_RE.sub("", line).strip(), + line_req.name, + ) + else: + req_files[line_req.name].append(req_file_path) + else: + yield str(installations[line_req_canonical_name]).rstrip() + del installations[line_req_canonical_name] + req_files[line_req.name].append(req_file_path) + + # Warn about requirements that were included multiple times (in a + # single requirements file or in different requirements files). + for name, files in req_files.items(): + if len(files) > 1: + logger.warning( + "Requirement %s included multiple times [%s]", + name, + ", ".join(sorted(set(files))), + ) + + yield ("## The following requirements were added by pip freeze:") + for installation in sorted(installations.values(), key=lambda x: x.name.lower()): + if installation.canonical_name not in skip: + yield str(installation).rstrip() + + +def _format_as_name_version(dist: BaseDistribution) -> str: + if isinstance(dist.version, Version): + return f"{dist.raw_name}=={dist.version}" + return f"{dist.raw_name}==={dist.version}" + + +def _get_editable_info(dist: BaseDistribution) -> _EditableInfo: + """ + Compute and return values (req, comments) for use in + FrozenRequirement.from_dist(). 
+ """ + editable_project_location = dist.editable_project_location + assert editable_project_location + location = os.path.normcase(os.path.abspath(editable_project_location)) + + from pip._internal.vcs import RemoteNotFoundError, RemoteNotValidError, vcs + + vcs_backend = vcs.get_backend_for_dir(location) + + if vcs_backend is None: + display = _format_as_name_version(dist) + logger.debug( + 'No VCS found for editable requirement "%s" in: %r', + display, + location, + ) + return _EditableInfo( + requirement=location, + comments=[f"# Editable install with no version control ({display})"], + ) + + vcs_name = type(vcs_backend).__name__ + + try: + req = vcs_backend.get_src_requirement(location, dist.raw_name) + except RemoteNotFoundError: + display = _format_as_name_version(dist) + return _EditableInfo( + requirement=location, + comments=[f"# Editable {vcs_name} install with no remote ({display})"], + ) + except RemoteNotValidError as ex: + display = _format_as_name_version(dist) + return _EditableInfo( + requirement=location, + comments=[ + f"# Editable {vcs_name} install ({display}) with either a deleted " + f"local remote or invalid URI:", + f"# '{ex.url}'", + ], + ) + except BadCommand: + logger.warning( + "cannot determine version of editable source in %s " + "(%s command not found in path)", + location, + vcs_backend.name, + ) + return _EditableInfo(requirement=location, comments=[]) + except InstallationError as exc: + logger.warning("Error when trying to get requirement for VCS system %s", exc) + else: + return _EditableInfo(requirement=req, comments=[]) + + logger.warning("Could not determine repository location of %s", location) + + return _EditableInfo( + requirement=location, + comments=["## !! Could not determine repository location"], + ) + + +class FrozenRequirement: + def __init__( + self, + name: str, + req: str, + editable: bool, + comments: Iterable[str] = (), + ) -> None: + self.name = name + self.canonical_name = canonicalize_name(name) + self.req = req + self.editable = editable + self.comments = comments + + @classmethod + def from_dist(cls, dist: BaseDistribution) -> "FrozenRequirement": + editable = dist.editable + if editable: + req, comments = _get_editable_info(dist) + else: + comments = [] + direct_url = dist.direct_url + if direct_url: + # if PEP 610 metadata is present, use it + req = direct_url_as_pep440_direct_reference(direct_url, dist.raw_name) + else: + # name==version requirement + req = _format_as_name_version(dist) + + return cls(dist.raw_name, req, editable, comments=comments) + + def __str__(self) -> str: + req = self.req + if self.editable: + req = f"-e {req}" + return "\n".join(list(self.comments) + [str(req)]) + "\n" diff --git a/lib/python3.11/site-packages/pip/_internal/operations/install/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/operations/install/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/operations/install/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/install/__init__.py diff --git a/lib/python3.11/site-packages/pip/_internal/operations/install/editable_legacy.py b/python/lib/python3.10/site-packages/pip/_internal/operations/install/editable_legacy.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/operations/install/editable_legacy.py rename to python/lib/python3.10/site-packages/pip/_internal/operations/install/editable_legacy.py diff --git 
a/python/lib/python3.10/site-packages/pip/_internal/operations/install/legacy.py b/python/lib/python3.10/site-packages/pip/_internal/operations/install/legacy.py new file mode 100644 index 0000000..5b7ef90 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/operations/install/legacy.py @@ -0,0 +1,120 @@ +"""Legacy installation process, i.e. `setup.py install`. +""" + +import logging +import os +from distutils.util import change_root +from typing import List, Optional, Sequence + +from pip._internal.build_env import BuildEnvironment +from pip._internal.exceptions import InstallationError, LegacyInstallFailure +from pip._internal.models.scheme import Scheme +from pip._internal.utils.misc import ensure_dir +from pip._internal.utils.setuptools_build import make_setuptools_install_args +from pip._internal.utils.subprocess import runner_with_spinner_message +from pip._internal.utils.temp_dir import TempDirectory + +logger = logging.getLogger(__name__) + + +def write_installed_files_from_setuptools_record( + record_lines: List[str], + root: Optional[str], + req_description: str, +) -> None: + def prepend_root(path: str) -> str: + if root is None or not os.path.isabs(path): + return path + else: + return change_root(root, path) + + for line in record_lines: + directory = os.path.dirname(line) + if directory.endswith(".egg-info"): + egg_info_dir = prepend_root(directory) + break + else: + message = ( + "{} did not indicate that it installed an " + ".egg-info directory. Only setup.py projects " + "generating .egg-info directories are supported." + ).format(req_description) + raise InstallationError(message) + + new_lines = [] + for line in record_lines: + filename = line.strip() + if os.path.isdir(filename): + filename += os.path.sep + new_lines.append(os.path.relpath(prepend_root(filename), egg_info_dir)) + new_lines.sort() + ensure_dir(egg_info_dir) + inst_files_path = os.path.join(egg_info_dir, "installed-files.txt") + with open(inst_files_path, "w") as f: + f.write("\n".join(new_lines) + "\n") + + +def install( + install_options: List[str], + global_options: Sequence[str], + root: Optional[str], + home: Optional[str], + prefix: Optional[str], + use_user_site: bool, + pycompile: bool, + scheme: Scheme, + setup_py_path: str, + isolated: bool, + req_name: str, + build_env: BuildEnvironment, + unpacked_source_directory: str, + req_description: str, +) -> bool: + + header_dir = scheme.headers + + with TempDirectory(kind="record") as temp_dir: + try: + record_filename = os.path.join(temp_dir.path, "install-record.txt") + install_args = make_setuptools_install_args( + setup_py_path, + global_options=global_options, + install_options=install_options, + record_filename=record_filename, + root=root, + prefix=prefix, + header_dir=header_dir, + home=home, + use_user_site=use_user_site, + no_user_config=isolated, + pycompile=pycompile, + ) + + runner = runner_with_spinner_message( + f"Running setup.py install for {req_name}" + ) + with build_env: + runner( + cmd=install_args, + cwd=unpacked_source_directory, + ) + + if not os.path.exists(record_filename): + logger.debug("Record file %s not found", record_filename) + # Signal to the caller that we didn't install the new package + return False + + except Exception as e: + # Signal to the caller that we didn't install the new package + raise LegacyInstallFailure(package_details=req_name) from e + + # At this point, we have successfully installed the requirement. 
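# A small sketch (with hypothetical paths) of the post-processing performed by
# write_installed_files_from_setuptools_record, defined earlier in this file:
# the record names an .egg-info directory, and every installed path is then
# rewritten relative to that directory and sorted.
import os
record_lines = [
    "/srv/site-packages/demo-1.0.egg-info/PKG-INFO",
    "/srv/site-packages/demo/__init__.py",
]
egg_info_dir = next(
    os.path.dirname(line)
    for line in record_lines
    if os.path.dirname(line).endswith(".egg-info")
)
relative = sorted(os.path.relpath(line, egg_info_dir) for line in record_lines)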
+ + # We intentionally do not use any encoding to read the file because + # setuptools writes the file using distutils.file_util.write_file, + # which does not specify an encoding. + with open(record_filename) as f: + record_lines = f.read().splitlines() + + write_installed_files_from_setuptools_record(record_lines, root, req_description) + return True diff --git a/python/lib/python3.10/site-packages/pip/_internal/operations/install/wheel.py b/python/lib/python3.10/site-packages/pip/_internal/operations/install/wheel.py new file mode 100644 index 0000000..e191b13 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/operations/install/wheel.py @@ -0,0 +1,738 @@ +"""Support for installing and building the "wheel" binary package format. +""" + +import collections +import compileall +import contextlib +import csv +import importlib +import logging +import os.path +import re +import shutil +import sys +import warnings +from base64 import urlsafe_b64encode +from email.message import Message +from itertools import chain, filterfalse, starmap +from typing import ( + IO, + TYPE_CHECKING, + Any, + BinaryIO, + Callable, + Dict, + Iterable, + Iterator, + List, + NewType, + Optional, + Sequence, + Set, + Tuple, + Union, + cast, +) +from zipfile import ZipFile, ZipInfo + +from pip._vendor.distlib.scripts import ScriptMaker +from pip._vendor.distlib.util import get_export_entry +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.exceptions import InstallationError +from pip._internal.locations import get_major_minor_version +from pip._internal.metadata import ( + BaseDistribution, + FilesystemWheel, + get_wheel_distribution, +) +from pip._internal.models.direct_url import DIRECT_URL_METADATA_NAME, DirectUrl +from pip._internal.models.scheme import SCHEME_KEYS, Scheme +from pip._internal.utils.filesystem import adjacent_tmp_file, replace +from pip._internal.utils.misc import captured_stdout, ensure_dir, hash_file, partition +from pip._internal.utils.unpacking import ( + current_umask, + is_within_directory, + set_extracted_file_to_default_mode_plus_executable, + zip_item_is_executable, +) +from pip._internal.utils.wheel import parse_wheel + +if TYPE_CHECKING: + from typing import Protocol + + class File(Protocol): + src_record_path: "RecordPath" + dest_path: str + changed: bool + + def save(self) -> None: + pass + + +logger = logging.getLogger(__name__) + +RecordPath = NewType("RecordPath", str) +InstalledCSVRow = Tuple[RecordPath, str, Union[int, str]] + + +def rehash(path: str, blocksize: int = 1 << 20) -> Tuple[str, str]: + """Return (encoded_digest, length) for path using hashlib.sha256()""" + h, length = hash_file(path, blocksize) + digest = "sha256=" + urlsafe_b64encode(h.digest()).decode("latin1").rstrip("=") + return (digest, str(length)) + + +def csv_io_kwargs(mode: str) -> Dict[str, Any]: + """Return keyword arguments to properly open a CSV file + in the given mode. + """ + return {"mode": mode, "newline": "", "encoding": "utf-8"} + + +def fix_script(path: str) -> bool: + """Replace #!python with #!/path/to/python + Return True if file was changed. + """ + # XXX RECORD hashes will need to be updated + assert os.path.isfile(path) + + with open(path, "rb") as script: + firstline = script.readline() + if not firstline.startswith(b"#!python"): + return False + exename = sys.executable.encode(sys.getfilesystemencoding()) + firstline = b"#!" 
+ exename + os.linesep.encode("ascii") + rest = script.read() + with open(path, "wb") as script: + script.write(firstline) + script.write(rest) + return True + + +def wheel_root_is_purelib(metadata: Message) -> bool: + return metadata.get("Root-Is-Purelib", "").lower() == "true" + + +def get_entrypoints(dist: BaseDistribution) -> Tuple[Dict[str, str], Dict[str, str]]: + console_scripts = {} + gui_scripts = {} + for entry_point in dist.iter_entry_points(): + if entry_point.group == "console_scripts": + console_scripts[entry_point.name] = entry_point.value + elif entry_point.group == "gui_scripts": + gui_scripts[entry_point.name] = entry_point.value + return console_scripts, gui_scripts + + +def message_about_scripts_not_on_PATH(scripts: Sequence[str]) -> Optional[str]: + """Determine if any scripts are not on PATH and format a warning. + Returns a warning message if one or more scripts are not on PATH, + otherwise None. + """ + if not scripts: + return None + + # Group scripts by the path they were installed in + grouped_by_dir: Dict[str, Set[str]] = collections.defaultdict(set) + for destfile in scripts: + parent_dir = os.path.dirname(destfile) + script_name = os.path.basename(destfile) + grouped_by_dir[parent_dir].add(script_name) + + # We don't want to warn for directories that are on PATH. + not_warn_dirs = [ + os.path.normcase(i).rstrip(os.sep) + for i in os.environ.get("PATH", "").split(os.pathsep) + ] + # If an executable sits with sys.executable, we don't warn for it. + # This covers the case of venv invocations without activating the venv. + not_warn_dirs.append(os.path.normcase(os.path.dirname(sys.executable))) + warn_for: Dict[str, Set[str]] = { + parent_dir: scripts + for parent_dir, scripts in grouped_by_dir.items() + if os.path.normcase(parent_dir) not in not_warn_dirs + } + if not warn_for: + return None + + # Format a message + msg_lines = [] + for parent_dir, dir_scripts in warn_for.items(): + sorted_scripts: List[str] = sorted(dir_scripts) + if len(sorted_scripts) == 1: + start_text = "script {} is".format(sorted_scripts[0]) + else: + start_text = "scripts {} are".format( + ", ".join(sorted_scripts[:-1]) + " and " + sorted_scripts[-1] + ) + + msg_lines.append( + "The {} installed in '{}' which is not on PATH.".format( + start_text, parent_dir + ) + ) + + last_line_fmt = ( + "Consider adding {} to PATH or, if you prefer " + "to suppress this warning, use --no-warn-script-location." + ) + if len(msg_lines) == 1: + msg_lines.append(last_line_fmt.format("this directory")) + else: + msg_lines.append(last_line_fmt.format("these directories")) + + # Add a note if any directory starts with ~ + warn_for_tilde = any( + i[0] == "~" for i in os.environ.get("PATH", "").split(os.pathsep) if i + ) + if warn_for_tilde: + tilde_warning_msg = ( + "NOTE: The current PATH contains path(s) starting with `~`, " + "which may not be expanded by all applications." + ) + msg_lines.append(tilde_warning_msg) + + # Returns the formatted multiline message + return "\n".join(msg_lines) + + +def _normalized_outrows( + outrows: Iterable[InstalledCSVRow], +) -> List[Tuple[str, str, str]]: + """Normalize the given rows of a RECORD file. + + Items in each row are converted into str. Rows are then sorted to make + the value more predictable for tests. + + Each row is a 3-tuple (path, hash, size) and corresponds to a record of + a RECORD file (see PEP 376 and PEP 427 for details). For the rows + passed to this function, the size can be an integer as an int or string, + or the empty string. 
+ """ + # Normally, there should only be one row per path, in which case the + # second and third elements don't come into play when sorting. + # However, in cases in the wild where a path might happen to occur twice, + # we don't want the sort operation to trigger an error (but still want + # determinism). Since the third element can be an int or string, we + # coerce each element to a string to avoid a TypeError in this case. + # For additional background, see-- + # https://github.com/pypa/pip/issues/5868 + return sorted( + (record_path, hash_, str(size)) for record_path, hash_, size in outrows + ) + + +def _record_to_fs_path(record_path: RecordPath) -> str: + return record_path + + +def _fs_to_record_path(path: str, relative_to: Optional[str] = None) -> RecordPath: + if relative_to is not None: + # On Windows, do not handle relative paths if they belong to different + # logical disks + if ( + os.path.splitdrive(path)[0].lower() + == os.path.splitdrive(relative_to)[0].lower() + ): + path = os.path.relpath(path, relative_to) + path = path.replace(os.path.sep, "/") + return cast("RecordPath", path) + + +def get_csv_rows_for_installed( + old_csv_rows: List[List[str]], + installed: Dict[RecordPath, RecordPath], + changed: Set[RecordPath], + generated: List[str], + lib_dir: str, +) -> List[InstalledCSVRow]: + """ + :param installed: A map from archive RECORD path to installation RECORD + path. + """ + installed_rows: List[InstalledCSVRow] = [] + for row in old_csv_rows: + if len(row) > 3: + logger.warning("RECORD line has more than three elements: %s", row) + old_record_path = cast("RecordPath", row[0]) + new_record_path = installed.pop(old_record_path, old_record_path) + if new_record_path in changed: + digest, length = rehash(_record_to_fs_path(new_record_path)) + else: + digest = row[1] if len(row) > 1 else "" + length = row[2] if len(row) > 2 else "" + installed_rows.append((new_record_path, digest, length)) + for f in generated: + path = _fs_to_record_path(f, lib_dir) + digest, length = rehash(f) + installed_rows.append((path, digest, length)) + for installed_record_path in installed.values(): + installed_rows.append((installed_record_path, "", "")) + return installed_rows + + +def get_console_script_specs(console: Dict[str, str]) -> List[str]: + """ + Given the mapping from entrypoint name to callable, return the relevant + console script specs. + """ + # Don't mutate caller's version + console = console.copy() + + scripts_to_generate = [] + + # Special case pip and setuptools to generate versioned wrappers + # + # The issue is that some projects (specifically, pip and setuptools) use + # code in setup.py to create "versioned" entry points - pip2.7 on Python + # 2.7, pip3.3 on Python 3.3, etc. But these entry points are baked into + # the wheel metadata at build time, and so if the wheel is installed with + # a *different* version of Python the entry points will be wrong. The + # correct fix for this is to enhance the metadata to be able to describe + # such versioned entry points, but that won't happen till Metadata 2.0 is + # available. + # In the meantime, projects using versioned entry points will either have + # incorrect versioned entry points, or they will not be able to distribute + # "universal" wheels (i.e., they will need a wheel per Python version). + # + # Because setuptools and pip are bundled with _ensurepip and virtualenv, + # we need to use universal wheels. 
So, as a stopgap until Metadata 2.0, we + # override the versioned entry points in the wheel and generate the + # correct ones. This code is purely a short-term measure until Metadata 2.0 + # is available. + # + # To add the level of hack in this section of code, in order to support + # ensurepip this code will look for an ``ENSUREPIP_OPTIONS`` environment + # variable which will control which version scripts get installed. + # + # ENSUREPIP_OPTIONS=altinstall + # - Only pipX.Y and easy_install-X.Y will be generated and installed + # ENSUREPIP_OPTIONS=install + # - pipX.Y, pipX, easy_install-X.Y will be generated and installed. Note + # that this option is technically if ENSUREPIP_OPTIONS is set and is + # not altinstall + # DEFAULT + # - The default behavior is to install pip, pipX, pipX.Y, easy_install + # and easy_install-X.Y. + pip_script = console.pop("pip", None) + if pip_script: + if "ENSUREPIP_OPTIONS" not in os.environ: + scripts_to_generate.append("pip = " + pip_script) + + if os.environ.get("ENSUREPIP_OPTIONS", "") != "altinstall": + scripts_to_generate.append( + "pip{} = {}".format(sys.version_info[0], pip_script) + ) + + scripts_to_generate.append(f"pip{get_major_minor_version()} = {pip_script}") + # Delete any other versioned pip entry points + pip_ep = [k for k in console if re.match(r"pip(\d(\.\d)?)?$", k)] + for k in pip_ep: + del console[k] + easy_install_script = console.pop("easy_install", None) + if easy_install_script: + if "ENSUREPIP_OPTIONS" not in os.environ: + scripts_to_generate.append("easy_install = " + easy_install_script) + + scripts_to_generate.append( + "easy_install-{} = {}".format( + get_major_minor_version(), easy_install_script + ) + ) + # Delete any other versioned easy_install entry points + easy_install_ep = [ + k for k in console if re.match(r"easy_install(-\d\.\d)?$", k) + ] + for k in easy_install_ep: + del console[k] + + # Generate the console entry points specified in the wheel + scripts_to_generate.extend(starmap("{} = {}".format, console.items())) + + return scripts_to_generate + + +class ZipBackedFile: + def __init__( + self, src_record_path: RecordPath, dest_path: str, zip_file: ZipFile + ) -> None: + self.src_record_path = src_record_path + self.dest_path = dest_path + self._zip_file = zip_file + self.changed = False + + def _getinfo(self) -> ZipInfo: + return self._zip_file.getinfo(self.src_record_path) + + def save(self) -> None: + # directory creation is lazy and after file filtering + # to ensure we don't install empty dirs; empty dirs can't be + # uninstalled. + parent_dir = os.path.dirname(self.dest_path) + ensure_dir(parent_dir) + + # When we open the output file below, any existing file is truncated + # before we start writing the new contents. This is fine in most + # cases, but can cause a segfault if pip has loaded a shared + # object (e.g. from pyopenssl through its vendored urllib3) + # Since the shared object is mmap'd an attempt to call a + # symbol in it will then cause a segfault. Unlinking the file + # allows writing of new contents while allowing the process to + # continue to use the old copy. 
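The unlink-before-rewrite trick described above is easy to demonstrate outside pip. In this stand-alone POSIX sketch (a temporary file stands in for a loaded shared object), the open handle keeps the old inode alive, so whatever mapped the old copy keeps working while the path points at fresh contents:

import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "lib.so")  # stand-in for a loaded .so
with open(path, "w") as f:
    f.write("old contents")

reader = open(path)  # simulates the process still holding the old copy open
os.unlink(path)      # the name is gone, but the inode survives via `reader`
with open(path, "w") as f:  # same path, brand-new inode
    f.write("new contents")

print(reader.read())      # -> old contents (old mapping unharmed)
print(open(path).read())  # -> new contents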
+ if os.path.exists(self.dest_path): + os.unlink(self.dest_path) + + zipinfo = self._getinfo() + + with self._zip_file.open(zipinfo) as f: + with open(self.dest_path, "wb") as dest: + shutil.copyfileobj(f, dest) + + if zip_item_is_executable(zipinfo): + set_extracted_file_to_default_mode_plus_executable(self.dest_path) + + +class ScriptFile: + def __init__(self, file: "File") -> None: + self._file = file + self.src_record_path = self._file.src_record_path + self.dest_path = self._file.dest_path + self.changed = False + + def save(self) -> None: + self._file.save() + self.changed = fix_script(self.dest_path) + + +class MissingCallableSuffix(InstallationError): + def __init__(self, entry_point: str) -> None: + super().__init__( + "Invalid script entry point: {} - A callable " + "suffix is required. Cf https://packaging.python.org/" + "specifications/entry-points/#use-for-scripts for more " + "information.".format(entry_point) + ) + + +def _raise_for_invalid_entrypoint(specification: str) -> None: + entry = get_export_entry(specification) + if entry is not None and entry.suffix is None: + raise MissingCallableSuffix(str(entry)) + + +class PipScriptMaker(ScriptMaker): + def make(self, specification: str, options: Dict[str, Any] = None) -> List[str]: + _raise_for_invalid_entrypoint(specification) + return super().make(specification, options) + + +def _install_wheel( + name: str, + wheel_zip: ZipFile, + wheel_path: str, + scheme: Scheme, + pycompile: bool = True, + warn_script_location: bool = True, + direct_url: Optional[DirectUrl] = None, + requested: bool = False, +) -> None: + """Install a wheel. + + :param name: Name of the project to install + :param wheel_zip: open ZipFile for wheel being installed + :param scheme: Distutils scheme dictating the install directories + :param req_description: String used in place of the requirement, for + logging + :param pycompile: Whether to byte-compile installed Python files + :param warn_script_location: Whether to check that scripts are installed + into a directory on PATH + :raises UnsupportedWheel: + * when the directory holds an unpacked wheel with incompatible + Wheel-Version + * when the .dist-info dir does not match the wheel + """ + info_dir, metadata = parse_wheel(wheel_zip, name) + + if wheel_root_is_purelib(metadata): + lib_dir = scheme.purelib + else: + lib_dir = scheme.platlib + + # Record details of the files moved + # installed = files copied from the wheel to the destination + # changed = files changed while installing (scripts #! 
line typically) + # generated = files newly generated during the install (script wrappers) + installed: Dict[RecordPath, RecordPath] = {} + changed: Set[RecordPath] = set() + generated: List[str] = [] + + def record_installed( + srcfile: RecordPath, destfile: str, modified: bool = False + ) -> None: + """Map archive RECORD paths to installation RECORD paths.""" + newpath = _fs_to_record_path(destfile, lib_dir) + installed[srcfile] = newpath + if modified: + changed.add(_fs_to_record_path(destfile)) + + def is_dir_path(path: RecordPath) -> bool: + return path.endswith("/") + + def assert_no_path_traversal(dest_dir_path: str, target_path: str) -> None: + if not is_within_directory(dest_dir_path, target_path): + message = ( + "The wheel {!r} has a file {!r} trying to install" + " outside the target directory {!r}" + ) + raise InstallationError( + message.format(wheel_path, target_path, dest_dir_path) + ) + + def root_scheme_file_maker( + zip_file: ZipFile, dest: str + ) -> Callable[[RecordPath], "File"]: + def make_root_scheme_file(record_path: RecordPath) -> "File": + normed_path = os.path.normpath(record_path) + dest_path = os.path.join(dest, normed_path) + assert_no_path_traversal(dest, dest_path) + return ZipBackedFile(record_path, dest_path, zip_file) + + return make_root_scheme_file + + def data_scheme_file_maker( + zip_file: ZipFile, scheme: Scheme + ) -> Callable[[RecordPath], "File"]: + scheme_paths = {key: getattr(scheme, key) for key in SCHEME_KEYS} + + def make_data_scheme_file(record_path: RecordPath) -> "File": + normed_path = os.path.normpath(record_path) + try: + _, scheme_key, dest_subpath = normed_path.split(os.path.sep, 2) + except ValueError: + message = ( + "Unexpected file in {}: {!r}. .data directory contents" + " should be named like: '<scheme key>/<path>'." + ).format(wheel_path, record_path) + raise InstallationError(message) + + try: + scheme_path = scheme_paths[scheme_key] + except KeyError: + valid_scheme_keys = ", ".join(sorted(scheme_paths)) + message = ( + "Unknown scheme key used in {}: {} (for file {!r}). 
.data" + " directory contents should be in subdirectories named" + " with a valid scheme key ({})" + ).format(wheel_path, scheme_key, record_path, valid_scheme_keys) + raise InstallationError(message) + + dest_path = os.path.join(scheme_path, dest_subpath) + assert_no_path_traversal(scheme_path, dest_path) + return ZipBackedFile(record_path, dest_path, zip_file) + + return make_data_scheme_file + + def is_data_scheme_path(path: RecordPath) -> bool: + return path.split("/", 1)[0].endswith(".data") + + paths = cast(List[RecordPath], wheel_zip.namelist()) + file_paths = filterfalse(is_dir_path, paths) + root_scheme_paths, data_scheme_paths = partition(is_data_scheme_path, file_paths) + + make_root_scheme_file = root_scheme_file_maker(wheel_zip, lib_dir) + files: Iterator[File] = map(make_root_scheme_file, root_scheme_paths) + + def is_script_scheme_path(path: RecordPath) -> bool: + parts = path.split("/", 2) + return len(parts) > 2 and parts[0].endswith(".data") and parts[1] == "scripts" + + other_scheme_paths, script_scheme_paths = partition( + is_script_scheme_path, data_scheme_paths + ) + + make_data_scheme_file = data_scheme_file_maker(wheel_zip, scheme) + other_scheme_files = map(make_data_scheme_file, other_scheme_paths) + files = chain(files, other_scheme_files) + + # Get the defined entry points + distribution = get_wheel_distribution( + FilesystemWheel(wheel_path), + canonicalize_name(name), + ) + console, gui = get_entrypoints(distribution) + + def is_entrypoint_wrapper(file: "File") -> bool: + # EP, EP.exe and EP-script.py are scripts generated for + # entry point EP by setuptools + path = file.dest_path + name = os.path.basename(path) + if name.lower().endswith(".exe"): + matchname = name[:-4] + elif name.lower().endswith("-script.py"): + matchname = name[:-10] + elif name.lower().endswith(".pya"): + matchname = name[:-4] + else: + matchname = name + # Ignore setuptools-generated scripts + return matchname in console or matchname in gui + + script_scheme_files: Iterator[File] = map( + make_data_scheme_file, script_scheme_paths + ) + script_scheme_files = filterfalse(is_entrypoint_wrapper, script_scheme_files) + script_scheme_files = map(ScriptFile, script_scheme_files) + files = chain(files, script_scheme_files) + + for file in files: + file.save() + record_installed(file.src_record_path, file.dest_path, file.changed) + + def pyc_source_file_paths() -> Iterator[str]: + # We de-duplicate installation paths, since there can be overlap (e.g. + # file in .data maps to same location as file in wheel root). + # Sorting installation paths makes it easier to reproduce and debug + # issues related to permissions on existing files. 
+ for installed_path in sorted(set(installed.values())): + full_installed_path = os.path.join(lib_dir, installed_path) + if not os.path.isfile(full_installed_path): + continue + if not full_installed_path.endswith(".py"): + continue + yield full_installed_path + + def pyc_output_path(path: str) -> str: + """Return the path the pyc file would have been written to.""" + return importlib.util.cache_from_source(path) + + # Compile all of the pyc files for the installed files + if pycompile: + with captured_stdout() as stdout: + with warnings.catch_warnings(): + warnings.filterwarnings("ignore") + for path in pyc_source_file_paths(): + success = compileall.compile_file(path, force=True, quiet=True) + if success: + pyc_path = pyc_output_path(path) + assert os.path.exists(pyc_path) + pyc_record_path = cast( + "RecordPath", pyc_path.replace(os.path.sep, "/") + ) + record_installed(pyc_record_path, pyc_path) + logger.debug(stdout.getvalue()) + + maker = PipScriptMaker(None, scheme.scripts) + + # Ensure old scripts are overwritten. + # See https://github.com/pypa/pip/issues/1800 + maker.clobber = True + + # Ensure we don't generate any variants for scripts because this is almost + # never what somebody wants. + # See https://bitbucket.org/pypa/distlib/issue/35/ + maker.variants = {""} + + # This is required because otherwise distlib creates scripts that are not + # executable. + # See https://bitbucket.org/pypa/distlib/issue/32/ + maker.set_mode = True + + # Generate the console and GUI entry points specified in the wheel + scripts_to_generate = get_console_script_specs(console) + + gui_scripts_to_generate = list(starmap("{} = {}".format, gui.items())) + + generated_console_scripts = maker.make_multiple(scripts_to_generate) + generated.extend(generated_console_scripts) + + generated.extend(maker.make_multiple(gui_scripts_to_generate, {"gui": True})) + + if warn_script_location: + msg = message_about_scripts_not_on_PATH(generated_console_scripts) + if msg is not None: + logger.warning(msg) + + generated_file_mode = 0o666 & ~current_umask() + + @contextlib.contextmanager + def _generate_file(path: str, **kwargs: Any) -> Iterator[BinaryIO]: + with adjacent_tmp_file(path, **kwargs) as f: + yield f + os.chmod(f.name, generated_file_mode) + replace(f.name, path) + + dest_info_dir = os.path.join(lib_dir, info_dir) + + # Record pip as the installer + installer_path = os.path.join(dest_info_dir, "INSTALLER") + with _generate_file(installer_path) as installer_file: + installer_file.write(b"pip\n") + generated.append(installer_path) + + # Record the PEP 610 direct URL reference + if direct_url is not None: + direct_url_path = os.path.join(dest_info_dir, DIRECT_URL_METADATA_NAME) + with _generate_file(direct_url_path) as direct_url_file: + direct_url_file.write(direct_url.to_json().encode("utf-8")) + generated.append(direct_url_path) + + # Record the REQUESTED file + if requested: + requested_path = os.path.join(dest_info_dir, "REQUESTED") + with open(requested_path, "wb"): + pass + generated.append(requested_path) + + record_text = distribution.read_text("RECORD") + record_rows = list(csv.reader(record_text.splitlines())) + + rows = get_csv_rows_for_installed( + record_rows, + installed=installed, + changed=changed, + generated=generated, + lib_dir=lib_dir, + ) + + # Record details of all files installed + record_path = os.path.join(dest_info_dir, "RECORD") + + with _generate_file(record_path, **csv_io_kwargs("w")) as record_file: + # Explicitly cast to typing.IO[str] as a workaround for the mypy error: + # 
"writer" has incompatible type "BinaryIO"; expected "_Writer" + writer = csv.writer(cast("IO[str]", record_file)) + writer.writerows(_normalized_outrows(rows)) + + +@contextlib.contextmanager +def req_error_context(req_description: str) -> Iterator[None]: + try: + yield + except InstallationError as e: + message = "For req: {}. {}".format(req_description, e.args[0]) + raise InstallationError(message) from e + + +def install_wheel( + name: str, + wheel_path: str, + scheme: Scheme, + req_description: str, + pycompile: bool = True, + warn_script_location: bool = True, + direct_url: Optional[DirectUrl] = None, + requested: bool = False, +) -> None: + with ZipFile(wheel_path, allowZip64=True) as z: + with req_error_context(req_description): + _install_wheel( + name=name, + wheel_zip=z, + wheel_path=wheel_path, + scheme=scheme, + pycompile=pycompile, + warn_script_location=warn_script_location, + direct_url=direct_url, + requested=requested, + ) diff --git a/python/lib/python3.10/site-packages/pip/_internal/operations/prepare.py b/python/lib/python3.10/site-packages/pip/_internal/operations/prepare.py new file mode 100644 index 0000000..a726f03 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/operations/prepare.py @@ -0,0 +1,642 @@ +"""Prepares a distribution for installation +""" + +# The following comment should be removed at some point in the future. +# mypy: strict-optional=False + +import logging +import mimetypes +import os +import shutil +from typing import Dict, Iterable, List, Optional + +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.distributions import make_distribution_for_install_requirement +from pip._internal.distributions.installed import InstalledDistribution +from pip._internal.exceptions import ( + DirectoryUrlHashUnsupported, + HashMismatch, + HashUnpinned, + InstallationError, + NetworkConnectionError, + PreviousBuildDirError, + VcsHashUnsupported, +) +from pip._internal.index.package_finder import PackageFinder +from pip._internal.metadata import BaseDistribution +from pip._internal.models.link import Link +from pip._internal.models.wheel import Wheel +from pip._internal.network.download import BatchDownloader, Downloader +from pip._internal.network.lazy_wheel import ( + HTTPRangeRequestUnsupported, + dist_from_wheel_url, +) +from pip._internal.network.session import PipSession +from pip._internal.req.req_install import InstallRequirement +from pip._internal.req.req_tracker import RequirementTracker +from pip._internal.utils.filesystem import copy2_fixed +from pip._internal.utils.hashes import Hashes, MissingHashes +from pip._internal.utils.logging import indent_log +from pip._internal.utils.misc import display_path, hide_url, is_installable_dir, rmtree +from pip._internal.utils.temp_dir import TempDirectory +from pip._internal.utils.unpacking import unpack_file +from pip._internal.vcs import vcs + +logger = logging.getLogger(__name__) + + +def _get_prepared_distribution( + req: InstallRequirement, + req_tracker: RequirementTracker, + finder: PackageFinder, + build_isolation: bool, +) -> BaseDistribution: + """Prepare a distribution for installation.""" + abstract_dist = make_distribution_for_install_requirement(req) + with req_tracker.track(req): + abstract_dist.prepare_distribution_metadata(finder, build_isolation) + return abstract_dist.get_metadata_distribution() + + +def unpack_vcs_link(link: Link, location: str, verbosity: int) -> None: + vcs_backend = vcs.get_backend_for_scheme(link.scheme) + assert vcs_backend 
is not None + vcs_backend.unpack(location, url=hide_url(link.url), verbosity=verbosity) + + +class File: + def __init__(self, path: str, content_type: Optional[str]) -> None: + self.path = path + if content_type is None: + self.content_type = mimetypes.guess_type(path)[0] + else: + self.content_type = content_type + + +def get_http_url( + link: Link, + download: Downloader, + download_dir: Optional[str] = None, + hashes: Optional[Hashes] = None, +) -> File: + temp_dir = TempDirectory(kind="unpack", globally_managed=True) + # If a download dir is specified, is the file already downloaded there? + already_downloaded_path = None + if download_dir: + already_downloaded_path = _check_download_dir(link, download_dir, hashes) + + if already_downloaded_path: + from_path = already_downloaded_path + content_type = None + else: + # let's download to a tmp dir + from_path, content_type = download(link, temp_dir.path) + if hashes: + hashes.check_against_path(from_path) + + return File(from_path, content_type) + + +def _copy2_ignoring_special_files(src: str, dest: str) -> None: + """Copying special files is not supported, but as a convenience to users + we skip errors copying them. This supports tools that may create e.g. + socket files in the project source directory. + """ + try: + copy2_fixed(src, dest) + except shutil.SpecialFileError as e: + # SpecialFileError may be raised due to either the source or + # destination. If the destination was the cause then we would actually + # care, but since the destination directory is deleted prior to + # copy we ignore all of them assuming it is caused by the source. + logger.warning( + "Ignoring special file error '%s' encountered copying %s to %s.", + str(e), + src, + dest, + ) + + +def _copy_source_tree(source: str, target: str) -> None: + target_abspath = os.path.abspath(target) + target_basename = os.path.basename(target_abspath) + target_dirname = os.path.dirname(target_abspath) + + def ignore(d: str, names: List[str]) -> List[str]: + skipped: List[str] = [] + if d == source: + # Pulling in those directories can potentially be very slow, + # exclude the following directories if they appear in the top + # level dir (and only it). + # See discussion at https://github.com/pypa/pip/pull/6770 + skipped += [".tox", ".nox"] + if os.path.abspath(d) == target_dirname: + # Prevent an infinite recursion if the target is in source. + # This can happen when TMPDIR is set to ${PWD}/... + # and we copy PWD to TMPDIR. + skipped += [target_basename] + return skipped + + shutil.copytree( + source, + target, + ignore=ignore, + symlinks=True, + copy_function=_copy2_ignoring_special_files, + ) + + +def get_file_url( + link: Link, download_dir: Optional[str] = None, hashes: Optional[Hashes] = None +) -> File: + """Get file and optionally check its hash.""" + # If a download dir is specified, is the file already there and valid? + already_downloaded_path = None + if download_dir: + already_downloaded_path = _check_download_dir(link, download_dir, hashes) + + if already_downloaded_path: + from_path = already_downloaded_path + else: + from_path = link.file_path + + # If --require-hashes is off, `hashes` is either empty, the + # link's embedded hash, or MissingHashes; it is required to + # match. If --require-hashes is on, we are satisfied by any + # hash in `hashes` matching: a URL-based or an option-based + # one; no internet-sourced hash will be in `hashes`. 
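As a rough stand-alone illustration of what a check like hashes.check_against_path amounts to (this is not pip's Hashes implementation; the path and the set of pinned digests are whatever the caller supplies):

import hashlib

def check_sha256(path: str, allowed_hexdigests: set) -> None:
    # Hash the file in chunks and compare against the pinned digests,
    # raising on mismatch, much as pip raises HashMismatch.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    if digest.hexdigest() not in allowed_hexdigests:
        raise ValueError(f"hash mismatch for {path}: got {digest.hexdigest()}")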
+ if hashes: + hashes.check_against_path(from_path) + return File(from_path, None) + + +def unpack_url( + link: Link, + location: str, + download: Downloader, + verbosity: int, + download_dir: Optional[str] = None, + hashes: Optional[Hashes] = None, +) -> Optional[File]: + """Unpack link into location, downloading if required. + + :param hashes: A Hashes object, one of whose embedded hashes must match, + or HashMismatch will be raised. If the Hashes is empty, no matches are + required, and unhashable types of requirements (like VCS ones, which + would ordinarily raise HashUnsupported) are allowed. + """ + # non-editable vcs urls + if link.is_vcs: + unpack_vcs_link(link, location, verbosity=verbosity) + return None + + # Once out-of-tree-builds are no longer supported, could potentially + # replace the below condition with `assert not link.is_existing_dir` + # - unpack_url does not need to be called for in-tree-builds. + # + # As further cleanup, _copy_source_tree and accompanying tests can + # be removed. + # + # TODO when use-deprecated=out-of-tree-build is removed + if link.is_existing_dir(): + if os.path.isdir(location): + rmtree(location) + _copy_source_tree(link.file_path, location) + return None + + # file urls + if link.is_file: + file = get_file_url(link, download_dir, hashes=hashes) + + # http urls + else: + file = get_http_url( + link, + download, + download_dir, + hashes=hashes, + ) + + # unpack the archive to the build dir location. even when only downloading + # archives, they have to be unpacked to parse dependencies, except wheels + if not link.is_wheel: + unpack_file(file.path, location, file.content_type) + + return file + + +def _check_download_dir( + link: Link, download_dir: str, hashes: Optional[Hashes] +) -> Optional[str]: + """Check download_dir for previously downloaded file with correct hash + If a correct file is found return its path else None + """ + download_path = os.path.join(download_dir, link.filename) + + if not os.path.exists(download_path): + return None + + # If already downloaded, does its hash match? + logger.info("File was already downloaded %s", download_path) + if hashes: + try: + hashes.check_against_path(download_path) + except HashMismatch: + logger.warning( + "Previously-downloaded file %s has bad hash. Re-downloading.", + download_path, + ) + os.unlink(download_path) + return None + return download_path + + +class RequirementPreparer: + """Prepares a Requirement""" + + def __init__( + self, + build_dir: str, + download_dir: Optional[str], + src_dir: str, + build_isolation: bool, + req_tracker: RequirementTracker, + session: PipSession, + progress_bar: str, + finder: PackageFinder, + require_hashes: bool, + use_user_site: bool, + lazy_wheel: bool, + verbosity: int, + in_tree_build: bool, + ) -> None: + super().__init__() + + self.src_dir = src_dir + self.build_dir = build_dir + self.req_tracker = req_tracker + self._session = session + self._download = Downloader(session, progress_bar) + self._batch_download = BatchDownloader(session, progress_bar) + self.finder = finder + + # Where still-packed archives should be written to. If None, they are + # not saved, and are deleted immediately after unpacking. + self.download_dir = download_dir + + # Is build isolation allowed? + self.build_isolation = build_isolation + + # Should hash-checking be required? + self.require_hashes = require_hashes + + # Should install in user site-packages? + self.use_user_site = use_user_site + + # Should wheels be downloaded lazily? 
+ self.use_lazy_wheel = lazy_wheel + + # How verbose should underlying tooling be? + self.verbosity = verbosity + + # Should in-tree builds be used for local paths? + self.in_tree_build = in_tree_build + + # Memoized downloaded files, as mapping of url: path. + self._downloaded: Dict[str, str] = {} + + # Previous "header" printed for a link-based InstallRequirement + self._previous_requirement_header = ("", "") + + def _log_preparing_link(self, req: InstallRequirement) -> None: + """Provide context for the requirement being prepared.""" + if req.link.is_file and not req.original_link_is_in_wheel_cache: + message = "Processing %s" + information = str(display_path(req.link.file_path)) + else: + message = "Collecting %s" + information = str(req.req or req) + + if (message, information) != self._previous_requirement_header: + self._previous_requirement_header = (message, information) + logger.info(message, information) + + if req.original_link_is_in_wheel_cache: + with indent_log(): + logger.info("Using cached %s", req.link.filename) + + def _ensure_link_req_src_dir( + self, req: InstallRequirement, parallel_builds: bool + ) -> None: + """Ensure source_dir of a linked InstallRequirement.""" + # Since source_dir is only set for editable requirements. + if req.link.is_wheel: + # We don't need to unpack wheels, so no need for a source + # directory. + return + assert req.source_dir is None + if req.link.is_existing_dir() and self.in_tree_build: + # build local directories in-tree + req.source_dir = req.link.file_path + return + + # We always delete unpacked sdists after pip runs. + req.ensure_has_source_dir( + self.build_dir, + autodelete=True, + parallel_builds=parallel_builds, + ) + + # If a checkout exists, it's unwise to keep going. version + # inconsistencies are logged later, but do not fail the + # installation. + # FIXME: this won't upgrade when there's an existing + # package unpacked in `req.source_dir` + # TODO: this check is now probably dead code + if is_installable_dir(req.source_dir): + raise PreviousBuildDirError( + "pip can't proceed with requirements '{}' due to a" + "pre-existing build directory ({}). This is likely " + "due to a previous installation that failed . pip is " + "being responsible and not assuming it can delete this. " + "Please delete it and try again.".format(req, req.source_dir) + ) + + def _get_linked_req_hashes(self, req: InstallRequirement) -> Hashes: + # By the time this is called, the requirement's link should have + # been checked so we can tell what kind of requirements req is + # and raise some more informative errors than otherwise. + # (For example, we can raise VcsHashUnsupported for a VCS URL + # rather than HashMissing.) + if not self.require_hashes: + return req.hashes(trust_internet=True) + + # We could check these first 2 conditions inside unpack_url + # and save repetition of conditions, but then we would + # report less-useful error messages for unhashable + # requirements, complaining that there's no hash provided. + if req.link.is_vcs: + raise VcsHashUnsupported() + if req.link.is_existing_dir(): + raise DirectoryUrlHashUnsupported() + + # Unpinned packages are asking for trouble when a new version + # is uploaded. This isn't a security check, but it saves users + # a surprising hash mismatch in the future. + # file:/// URLs aren't pinnable, so don't complain about them + # not being pinned. 
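The policy spelled out above condenses to a small decision function. This is an illustrative restatement with plain booleans rather than pip's InstallRequirement/Link objects, and it omits the exception types' details:

def hash_policy(is_vcs: bool, is_local_dir: bool,
                is_direct_link: bool, is_pinned: bool) -> str:
    if is_vcs:
        return "reject: VCS requirements cannot be hash-checked"
    if is_local_dir:
        return "reject: local directories cannot be hash-checked"
    if not is_direct_link and not is_pinned:
        return "reject: name-based requirements must be pinned (name==version)"
    return "verify the artifact against the user-supplied hashes"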
+ if req.original_link is None and not req.is_pinned: + raise HashUnpinned() + + # If known-good hashes are missing for this requirement, + # shim it with a facade object that will provoke hash + # computation and then raise a HashMissing exception + # showing the user what the hash should be. + return req.hashes(trust_internet=False) or MissingHashes() + + def _fetch_metadata_using_lazy_wheel( + self, + link: Link, + ) -> Optional[BaseDistribution]: + """Fetch metadata using lazy wheel, if possible.""" + if not self.use_lazy_wheel: + return None + if self.require_hashes: + logger.debug("Lazy wheel is not used as hash checking is required") + return None + if link.is_file or not link.is_wheel: + logger.debug( + "Lazy wheel is not used as %r does not points to a remote wheel", + link, + ) + return None + + wheel = Wheel(link.filename) + name = canonicalize_name(wheel.name) + logger.info( + "Obtaining dependency information from %s %s", + name, + wheel.version, + ) + url = link.url.split("#", 1)[0] + try: + return dist_from_wheel_url(name, url, self._session) + except HTTPRangeRequestUnsupported: + logger.debug("%s does not support range requests", url) + return None + + def _complete_partial_requirements( + self, + partially_downloaded_reqs: Iterable[InstallRequirement], + parallel_builds: bool = False, + ) -> None: + """Download any requirements which were only fetched by metadata.""" + # Download to a temporary directory. These will be copied over as + # needed for downstream 'download', 'wheel', and 'install' commands. + temp_dir = TempDirectory(kind="unpack", globally_managed=True).path + + # Map each link to the requirement that owns it. This allows us to set + # `req.local_file_path` on the appropriate requirement after passing + # all the links at once into BatchDownloader. + links_to_fully_download: Dict[Link, InstallRequirement] = {} + for req in partially_downloaded_reqs: + assert req.link + links_to_fully_download[req.link] = req + + batch_download = self._batch_download( + links_to_fully_download.keys(), + temp_dir, + ) + for link, (filepath, _) in batch_download: + logger.debug("Downloading link %s to %s", link, filepath) + req = links_to_fully_download[link] + req.local_file_path = filepath + + # This step is necessary to ensure all lazy wheels are processed + # successfully by the 'download', 'wheel', and 'install' commands. 
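The lazy-wheel optimisation rests on HTTP range requests: only the tail of the remote wheel, where the ZIP central directory and the .dist-info metadata live, has to be fetched. A stand-alone stdlib illustration follows (hypothetical URL; pip's dist_from_wheel_url does considerably more, including lazy ZIP member reads):

import urllib.request

url = "https://files.example.com/demo-1.0-py3-none-any.whl"  # hypothetical
request = urllib.request.Request(url, headers={"Range": "bytes=-8192"})
with urllib.request.urlopen(request) as resp:
    # 206 Partial Content means the server honours ranges and the lazy path
    # works; a plain 200 means no range support, which is the situation the
    # HTTPRangeRequestUnsupported fallback above handles.
    print(resp.status, len(resp.read()))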
+ for req in partially_downloaded_reqs: + self._prepare_linked_requirement(req, parallel_builds) + + def prepare_linked_requirement( + self, req: InstallRequirement, parallel_builds: bool = False + ) -> BaseDistribution: + """Prepare a requirement to be obtained from req.link.""" + assert req.link + link = req.link + self._log_preparing_link(req) + with indent_log(): + # Check if the relevant file is already available + # in the download directory + file_path = None + if self.download_dir is not None and link.is_wheel: + hashes = self._get_linked_req_hashes(req) + file_path = _check_download_dir(req.link, self.download_dir, hashes) + + if file_path is not None: + # The file is already available, so mark it as downloaded + self._downloaded[req.link.url] = file_path + else: + # The file is not available, attempt to fetch only metadata + wheel_dist = self._fetch_metadata_using_lazy_wheel(link) + if wheel_dist is not None: + req.needs_more_preparation = True + return wheel_dist + + # None of the optimizations worked, fully prepare the requirement + return self._prepare_linked_requirement(req, parallel_builds) + + def prepare_linked_requirements_more( + self, reqs: Iterable[InstallRequirement], parallel_builds: bool = False + ) -> None: + """Prepare linked requirements more, if needed.""" + reqs = [req for req in reqs if req.needs_more_preparation] + for req in reqs: + # Determine if any of these requirements were already downloaded. + if self.download_dir is not None and req.link.is_wheel: + hashes = self._get_linked_req_hashes(req) + file_path = _check_download_dir(req.link, self.download_dir, hashes) + if file_path is not None: + self._downloaded[req.link.url] = file_path + req.needs_more_preparation = False + + # Prepare requirements we found were already downloaded for some + # reason. The other downloads will be completed separately. + partially_downloaded_reqs: List[InstallRequirement] = [] + for req in reqs: + if req.needs_more_preparation: + partially_downloaded_reqs.append(req) + else: + self._prepare_linked_requirement(req, parallel_builds) + + # TODO: separate this part out from RequirementPreparer when the v1 + # resolver can be removed! + self._complete_partial_requirements( + partially_downloaded_reqs, + parallel_builds=parallel_builds, + ) + + def _prepare_linked_requirement( + self, req: InstallRequirement, parallel_builds: bool + ) -> BaseDistribution: + assert req.link + link = req.link + + self._ensure_link_req_src_dir(req, parallel_builds) + hashes = self._get_linked_req_hashes(req) + + if link.is_existing_dir() and self.in_tree_build: + local_file = None + elif link.url not in self._downloaded: + try: + local_file = unpack_url( + link, + req.source_dir, + self._download, + self.verbosity, + self.download_dir, + hashes, + ) + except NetworkConnectionError as exc: + raise InstallationError( + "Could not install requirement {} because of HTTP " + "error {} for URL {}".format(req, exc, link) + ) + else: + file_path = self._downloaded[link.url] + if hashes: + hashes.check_against_path(file_path) + local_file = File(file_path, content_type=None) + + # For use in later processing, + # preserve the file path on the requirement. 
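Restated with plain data instead of pip's objects, the short-circuits in prepare_linked_requirement above order themselves like this (an illustrative summary, not pip's API):

def preparation_strategy(wheel_in_download_dir: bool, lazy_metadata_ok: bool) -> str:
    if wheel_in_download_dir:
        return "reuse the already-downloaded wheel"
    if lazy_metadata_ok:
        return "fetch metadata only; complete the download later in a batch"
    return "download/unpack now and prepare the distribution fully"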
+ if local_file: + req.local_file_path = local_file.path + + dist = _get_prepared_distribution( + req, + self.req_tracker, + self.finder, + self.build_isolation, + ) + return dist + + def save_linked_requirement(self, req: InstallRequirement) -> None: + assert self.download_dir is not None + assert req.link is not None + link = req.link + if link.is_vcs or (link.is_existing_dir() and req.editable): + # Make a .zip of the source_dir we already created. + req.archive(self.download_dir) + return + + if link.is_existing_dir(): + logger.debug( + "Not copying link to destination directory " + "since it is a directory: %s", + link, + ) + return + if req.local_file_path is None: + # No distribution was downloaded for this requirement. + return + + download_location = os.path.join(self.download_dir, link.filename) + if not os.path.exists(download_location): + shutil.copy(req.local_file_path, download_location) + download_path = display_path(download_location) + logger.info("Saved %s", download_path) + + def prepare_editable_requirement( + self, + req: InstallRequirement, + ) -> BaseDistribution: + """Prepare an editable requirement.""" + assert req.editable, "cannot prepare a non-editable req as editable" + + logger.info("Obtaining %s", req) + + with indent_log(): + if self.require_hashes: + raise InstallationError( + "The editable requirement {} cannot be installed when " + "requiring hashes, because there is no single file to " + "hash.".format(req) + ) + req.ensure_has_source_dir(self.src_dir) + req.update_editable() + + dist = _get_prepared_distribution( + req, + self.req_tracker, + self.finder, + self.build_isolation, + ) + + req.check_if_exists(self.use_user_site) + + return dist + + def prepare_installed_requirement( + self, + req: InstallRequirement, + skip_reason: str, + ) -> BaseDistribution: + """Prepare an already-installed requirement.""" + assert req.satisfied_by, "req should have been satisfied but isn't" + assert skip_reason is not None, ( + "did not get skip reason skipped but req.satisfied_by " + "is set to {}".format(req.satisfied_by) + ) + logger.info( + "Requirement %s: %s (%s)", skip_reason, req, req.satisfied_by.version + ) + with indent_log(): + if self.require_hashes: + logger.debug( + "Since it is already installed, we are trusting this " + "package without checking its hash. To ensure a " + "completely repeatable environment, install into an " + "empty virtualenv." 
+ ) + return InstalledDistribution(req).get_metadata_distribution() diff --git a/python/lib/python3.10/site-packages/pip/_internal/pyproject.py b/python/lib/python3.10/site-packages/pip/_internal/pyproject.py new file mode 100644 index 0000000..e183eaf --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/pyproject.py @@ -0,0 +1,168 @@ +import os +from collections import namedtuple +from typing import Any, List, Optional + +from pip._vendor import tomli +from pip._vendor.packaging.requirements import InvalidRequirement, Requirement + +from pip._internal.exceptions import ( + InstallationError, + InvalidPyProjectBuildRequires, + MissingPyProjectBuildRequires, +) + + +def _is_list_of_str(obj: Any) -> bool: + return isinstance(obj, list) and all(isinstance(item, str) for item in obj) + + +def make_pyproject_path(unpacked_source_directory: str) -> str: + return os.path.join(unpacked_source_directory, "pyproject.toml") + + +BuildSystemDetails = namedtuple( + "BuildSystemDetails", ["requires", "backend", "check", "backend_path"] +) + + +def load_pyproject_toml( + use_pep517: Optional[bool], pyproject_toml: str, setup_py: str, req_name: str +) -> Optional[BuildSystemDetails]: + """Load the pyproject.toml file. + + Parameters: + use_pep517 - Has the user requested PEP 517 processing? None + means the user hasn't explicitly specified. + pyproject_toml - Location of the project's pyproject.toml file + setup_py - Location of the project's setup.py file + req_name - The name of the requirement we're processing (for + error reporting) + + Returns: + None if we should use the legacy code path, otherwise a tuple + ( + requirements from pyproject.toml, + name of PEP 517 backend, + requirements we should check are installed after setting + up the build environment + directory paths to import the backend from (backend-path), + relative to the project root. + ) + """ + has_pyproject = os.path.isfile(pyproject_toml) + has_setup = os.path.isfile(setup_py) + + if not has_pyproject and not has_setup: + raise InstallationError( + f"{req_name} does not appear to be a Python project: " + f"neither 'setup.py' nor 'pyproject.toml' found." + ) + + if has_pyproject: + with open(pyproject_toml, encoding="utf-8") as f: + pp_toml = tomli.loads(f.read()) + build_system = pp_toml.get("build-system") + else: + build_system = None + + # The following cases must use PEP 517 + # We check for use_pep517 being non-None and falsey because that means + # the user explicitly requested --no-use-pep517. The value 0 as + # opposed to False can occur when the value is provided via an + # environment variable or config file option (due to the quirk of + # strtobool() returning an integer in pip's configuration code). + if has_pyproject and not has_setup: + if use_pep517 is not None and not use_pep517: + raise InstallationError( + "Disabling PEP 517 processing is invalid: " + "project does not have a setup.py" + ) + use_pep517 = True + elif build_system and "build-backend" in build_system: + if use_pep517 is not None and not use_pep517: + raise InstallationError( + "Disabling PEP 517 processing is invalid: " + "project specifies a build backend of {} " + "in pyproject.toml".format(build_system["build-backend"]) + ) + use_pep517 = True + + # If we haven't worked out whether to use PEP 517 yet, + # and the user hasn't explicitly stated a preference, + # we do so if the project has a pyproject.toml file. + elif use_pep517 is None: + use_pep517 = has_pyproject + + # At this point, we know whether we're going to use PEP 517. 
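The decision just made reduces to a small truth table. The sketch below mirrors the branches above (minus the InstallationError paths for an explicit --no-use-pep517 that cannot be honoured):

from typing import Optional

def resolve_use_pep517(use_pep517: Optional[bool], has_pyproject: bool,
                       has_setup: bool, has_build_backend: bool) -> bool:
    if has_pyproject and not has_setup:
        return True              # no setup.py: PEP 517 is the only route
    if has_build_backend:
        return True              # an explicit build-backend forces PEP 517
    if use_pep517 is not None:
        return bool(use_pep517)  # respect the user's explicit choice
    return has_pyproject         # default: opt in iff pyproject.toml exists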
+ assert use_pep517 is not None + + # If we're using the legacy code path, there is nothing further + # for us to do here. + if not use_pep517: + return None + + if build_system is None: + # Either the user has a pyproject.toml with no build-system + # section, or the user has no pyproject.toml, but has opted in + # explicitly via --use-pep517. + # In the absence of any explicit backend specification, we + # assume the setuptools backend that most closely emulates the + # traditional direct setup.py execution, and require wheel and + # a version of setuptools that supports that backend. + + build_system = { + "requires": ["setuptools>=40.8.0", "wheel"], + "build-backend": "setuptools.build_meta:__legacy__", + } + + # If we're using PEP 517, we have build system information (either + # from pyproject.toml, or defaulted by the code above). + # Note that at this point, we do not know if the user has actually + # specified a backend, though. + assert build_system is not None + + # Ensure that the build-system section in pyproject.toml conforms + # to PEP 518. + + # Specifying the build-system table but not the requires key is invalid + if "requires" not in build_system: + raise MissingPyProjectBuildRequires(package=req_name) + + # Error out if requires is not a list of strings + requires = build_system["requires"] + if not _is_list_of_str(requires): + raise InvalidPyProjectBuildRequires( + package=req_name, + reason="It is not a list of strings.", + ) + + # Each requirement must be valid as per PEP 508 + for requirement in requires: + try: + Requirement(requirement) + except InvalidRequirement as error: + raise InvalidPyProjectBuildRequires( + package=req_name, + reason=f"It contains an invalid requirement: {requirement!r}", + ) from error + + backend = build_system.get("build-backend") + backend_path = build_system.get("backend-path", []) + check: List[str] = [] + if backend is None: + # If the user didn't specify a backend, we assume they want to use + # the setuptools backend. But we can't be sure they have included + # a version of setuptools which supplies the backend, or wheel + # (which is needed by the backend) in their requirements. So we + # make a note to check that those requirements are present once + # we have set up the environment. + # This is quite a lot of work to check for a very specific case. But + # the problem is, that case is potentially quite common - projects that + # adopted PEP 518 early for the ability to specify requirements to + # execute setup.py, but never considered needing to mention the build + # tools themselves. The original PEP 518 code had a similar check (but + # implemented in a different way). 
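For context, this is the shape of input the function consumes. A minimal parse of a build-system table with the vendored tomli (hypothetical table contents; tomli is the basis of the stdlib tomllib added in Python 3.11):

from pip._vendor import tomli

build_system = tomli.loads(
    '[build-system]\n'
    'requires = ["setuptools>=61", "wheel"]\n'
    'build-backend = "setuptools.build_meta"\n'
)["build-system"]

print(build_system["requires"])       # -> ['setuptools>=61', 'wheel']
print(build_system["build-backend"])  # -> setuptools.build_meta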
+ backend = "setuptools.build_meta:__legacy__" + check = ["setuptools>=40.8.0", "wheel"] + + return BuildSystemDetails(requires, backend, check, backend_path) diff --git a/python/lib/python3.10/site-packages/pip/_internal/req/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/req/__init__.py new file mode 100644 index 0000000..70dea27 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/req/__init__.py @@ -0,0 +1,94 @@ +import collections +import logging +from typing import Iterator, List, Optional, Sequence, Tuple + +from pip._internal.utils.logging import indent_log + +from .req_file import parse_requirements +from .req_install import InstallRequirement +from .req_set import RequirementSet + +__all__ = [ + "RequirementSet", + "InstallRequirement", + "parse_requirements", + "install_given_reqs", +] + +logger = logging.getLogger(__name__) + + +class InstallationResult: + def __init__(self, name: str) -> None: + self.name = name + + def __repr__(self) -> str: + return f"InstallationResult(name={self.name!r})" + + +def _validate_requirements( + requirements: List[InstallRequirement], +) -> Iterator[Tuple[str, InstallRequirement]]: + for req in requirements: + assert req.name, f"invalid to-be-installed requirement: {req}" + yield req.name, req + + +def install_given_reqs( + requirements: List[InstallRequirement], + install_options: List[str], + global_options: Sequence[str], + root: Optional[str], + home: Optional[str], + prefix: Optional[str], + warn_script_location: bool, + use_user_site: bool, + pycompile: bool, +) -> List[InstallationResult]: + """ + Install everything in the given list. + + (to be called after having downloaded and unpacked the packages) + """ + to_install = collections.OrderedDict(_validate_requirements(requirements)) + + if to_install: + logger.info( + "Installing collected packages: %s", + ", ".join(to_install.keys()), + ) + + installed = [] + + with indent_log(): + for req_name, requirement in to_install.items(): + if requirement.should_reinstall: + logger.info("Attempting uninstall: %s", req_name) + with indent_log(): + uninstalled_pathset = requirement.uninstall(auto_confirm=True) + else: + uninstalled_pathset = None + + try: + requirement.install( + install_options, + global_options, + root=root, + home=home, + prefix=prefix, + warn_script_location=warn_script_location, + use_user_site=use_user_site, + pycompile=pycompile, + ) + except Exception: + # if install did not succeed, rollback previous uninstall + if uninstalled_pathset and not requirement.install_succeeded: + uninstalled_pathset.rollback() + raise + else: + if uninstalled_pathset and requirement.install_succeeded: + uninstalled_pathset.commit() + + installed.append(InstallationResult(req_name)) + + return installed diff --git a/python/lib/python3.10/site-packages/pip/_internal/req/constructors.py b/python/lib/python3.10/site-packages/pip/_internal/req/constructors.py new file mode 100644 index 0000000..25bfb39 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/req/constructors.py @@ -0,0 +1,490 @@ +"""Backing implementation for InstallRequirement's various constructors + +The idea here is that these formed a major chunk of InstallRequirement's size +so, moving them and support code dedicated to them outside of that class +helps creates for better understandability for the rest of the code. + +These are meant to be used elsewhere within pip to create instances of +InstallRequirement. 
+""" + +import logging +import os +import re +from typing import Any, Dict, Optional, Set, Tuple, Union + +from pip._vendor.packaging.markers import Marker +from pip._vendor.packaging.requirements import InvalidRequirement, Requirement +from pip._vendor.packaging.specifiers import Specifier + +from pip._internal.exceptions import InstallationError +from pip._internal.models.index import PyPI, TestPyPI +from pip._internal.models.link import Link +from pip._internal.models.wheel import Wheel +from pip._internal.req.req_file import ParsedRequirement +from pip._internal.req.req_install import InstallRequirement +from pip._internal.utils.filetypes import is_archive_file +from pip._internal.utils.misc import is_installable_dir +from pip._internal.utils.packaging import get_requirement +from pip._internal.utils.urls import path_to_url +from pip._internal.vcs import is_url, vcs + +__all__ = [ + "install_req_from_editable", + "install_req_from_line", + "parse_editable", +] + +logger = logging.getLogger(__name__) +operators = Specifier._operators.keys() + + +def _strip_extras(path: str) -> Tuple[str, Optional[str]]: + m = re.match(r"^(.+)(\[[^\]]+\])$", path) + extras = None + if m: + path_no_extras = m.group(1) + extras = m.group(2) + else: + path_no_extras = path + + return path_no_extras, extras + + +def convert_extras(extras: Optional[str]) -> Set[str]: + if not extras: + return set() + return get_requirement("placeholder" + extras.lower()).extras + + +def parse_editable(editable_req: str) -> Tuple[Optional[str], str, Set[str]]: + """Parses an editable requirement into: + - a requirement name + - an URL + - extras + - editable options + Accepted requirements: + svn+http://blahblah@rev#egg=Foobar[baz]&subdirectory=version_subdir + .[some_extra] + """ + + url = editable_req + + # If a file path is specified with extras, strip off the extras. + url_no_extras, extras = _strip_extras(url) + + if os.path.isdir(url_no_extras): + # Treating it as code that has already been checked out + url_no_extras = path_to_url(url_no_extras) + + if url_no_extras.lower().startswith("file:"): + package_name = Link(url_no_extras).egg_fragment + if extras: + return ( + package_name, + url_no_extras, + get_requirement("placeholder" + extras.lower()).extras, + ) + else: + return package_name, url_no_extras, set() + + for version_control in vcs: + if url.lower().startswith(f"{version_control}:"): + url = f"{version_control}+{url}" + break + + link = Link(url) + + if not link.is_vcs: + backends = ", ".join(vcs.all_schemes) + raise InstallationError( + f"{editable_req} is not a valid editable requirement. " + f"It should either be a path to a local project or a VCS URL " + f"(beginning with {backends})." + ) + + package_name = link.egg_fragment + if not package_name: + raise InstallationError( + "Could not detect requirement name for '{}', please specify one " + "with #egg=your_package_name".format(editable_req) + ) + return package_name, url, set() + + +def check_first_requirement_in_file(filename: str) -> None: + """Check if file is parsable as a requirements file. + + This is heavily based on ``pkg_resources.parse_requirements``, but + simplified to just check the first meaningful line. + + :raises InvalidRequirement: If the first meaningful line cannot be parsed + as an requirement. + """ + with open(filename, encoding="utf-8", errors="ignore") as f: + # Create a steppable iterator, so we can handle \-continuations. 
+ lines = ( + line + for line in (line.strip() for line in f) + if line and not line.startswith("#") # Skip blank lines/comments. + ) + + for line in lines: + # Drop comments -- a hash without a space may be in a URL. + if " #" in line: + line = line[: line.find(" #")] + # If there is a line continuation, drop it, and append the next line. + if line.endswith("\\"): + line = line[:-2].strip() + next(lines, "") + Requirement(line) + return + + +def deduce_helpful_msg(req: str) -> str: + """Returns helpful msg in case requirements file does not exist, + or cannot be parsed. + + :params req: Requirements file path + """ + if not os.path.exists(req): + return f" File '{req}' does not exist." + msg = " The path does exist. " + # Try to parse and check if it is a requirements file. + try: + check_first_requirement_in_file(req) + except InvalidRequirement: + logger.debug("Cannot parse '%s' as requirements file", req) + else: + msg += ( + f"The argument you provided " + f"({req}) appears to be a" + f" requirements file. If that is the" + f" case, use the '-r' flag to install" + f" the packages specified within it." + ) + return msg + + +class RequirementParts: + def __init__( + self, + requirement: Optional[Requirement], + link: Optional[Link], + markers: Optional[Marker], + extras: Set[str], + ): + self.requirement = requirement + self.link = link + self.markers = markers + self.extras = extras + + +def parse_req_from_editable(editable_req: str) -> RequirementParts: + name, url, extras_override = parse_editable(editable_req) + + if name is not None: + try: + req: Optional[Requirement] = Requirement(name) + except InvalidRequirement: + raise InstallationError(f"Invalid requirement: '{name}'") + else: + req = None + + link = Link(url) + + return RequirementParts(req, link, None, extras_override) + + +# ---- The actual constructors follow ---- + + +def install_req_from_editable( + editable_req: str, + comes_from: Optional[Union[InstallRequirement, str]] = None, + use_pep517: Optional[bool] = None, + isolated: bool = False, + options: Optional[Dict[str, Any]] = None, + constraint: bool = False, + user_supplied: bool = False, + permit_editable_wheels: bool = False, +) -> InstallRequirement: + + parts = parse_req_from_editable(editable_req) + + return InstallRequirement( + parts.requirement, + comes_from=comes_from, + user_supplied=user_supplied, + editable=True, + permit_editable_wheels=permit_editable_wheels, + link=parts.link, + constraint=constraint, + use_pep517=use_pep517, + isolated=isolated, + install_options=options.get("install_options", []) if options else [], + global_options=options.get("global_options", []) if options else [], + hash_options=options.get("hashes", {}) if options else {}, + extras=parts.extras, + ) + + +def _looks_like_path(name: str) -> bool: + """Checks whether the string "looks like" a path on the filesystem. + + This does not check whether the target actually exists, only judge from the + appearance. + + Returns true if any of the following conditions is true: + * a path separator is found (either os.path.sep or os.path.altsep); + * a dot is found (which represents the current directory). + """ + if os.path.sep in name: + return True + if os.path.altsep is not None and os.path.altsep in name: + return True + if name.startswith("."): + return True + return False + + +def _get_url_from_path(path: str, name: str) -> Optional[str]: + """ + First, it checks whether a provided path is an installable directory. If it + is, returns the path. 
+ + If false, check if the path is an archive file (such as a .whl). + The function checks if the path is a file. If false, if the path has + an @, it will treat it as a PEP 440 URL requirement and return the path. + """ + if _looks_like_path(name) and os.path.isdir(path): + if is_installable_dir(path): + return path_to_url(path) + # TODO: The is_installable_dir test here might not be necessary + # now that it is done in load_pyproject_toml too. + raise InstallationError( + f"Directory {name!r} is not installable. Neither 'setup.py' " + "nor 'pyproject.toml' found." + ) + if not is_archive_file(path): + return None + if os.path.isfile(path): + return path_to_url(path) + urlreq_parts = name.split("@", 1) + if len(urlreq_parts) >= 2 and not _looks_like_path(urlreq_parts[0]): + # If the path contains '@' and the part before it does not look + # like a path, try to treat it as a PEP 440 URL req instead. + return None + logger.warning( + "Requirement %r looks like a filename, but the file does not exist", + name, + ) + return path_to_url(path) + + +def parse_req_from_line(name: str, line_source: Optional[str]) -> RequirementParts: + if is_url(name): + marker_sep = "; " + else: + marker_sep = ";" + if marker_sep in name: + name, markers_as_string = name.split(marker_sep, 1) + markers_as_string = markers_as_string.strip() + if not markers_as_string: + markers = None + else: + markers = Marker(markers_as_string) + else: + markers = None + name = name.strip() + req_as_string = None + path = os.path.normpath(os.path.abspath(name)) + link = None + extras_as_string = None + + if is_url(name): + link = Link(name) + else: + p, extras_as_string = _strip_extras(path) + url = _get_url_from_path(p, name) + if url is not None: + link = Link(url) + + # it's a local file, dir, or url + if link: + # Handle relative file URLs + if link.scheme == "file" and re.search(r"\.\./", link.url): + link = Link(path_to_url(os.path.normpath(os.path.abspath(link.path)))) + # wheel file + if link.is_wheel: + wheel = Wheel(link.filename) # can raise InvalidWheelFilename + req_as_string = f"{wheel.name}=={wheel.version}" + else: + # set the req to the egg fragment. when it's not there, this + # will become an 'unnamed' requirement + req_as_string = link.egg_fragment + + # a requirement specifier + else: + req_as_string = name + + extras = convert_extras(extras_as_string) + + def with_source(text: str) -> str: + if not line_source: + return text + return f"{text} (from {line_source})" + + def _parse_req_string(req_as_string: str) -> Requirement: + try: + req = get_requirement(req_as_string) + except InvalidRequirement: + if os.path.sep in req_as_string: + add_msg = "It looks like a path." + add_msg += deduce_helpful_msg(req_as_string) + elif "=" in req_as_string and not any( + op in req_as_string for op in operators + ): + add_msg = "= is not a valid operator. Did you mean == ?" + else: + add_msg = "" + msg = with_source(f"Invalid requirement: {req_as_string!r}") + if add_msg: + msg += f"\nHint: {add_msg}" + raise InstallationError(msg) + else: + # Deprecate extras after specifiers: "name>=1.0[extras]" + # This currently works by accident because _strip_extras() parses + # any extras in the end of the string and those are saved in + # RequirementParts + for spec in req.specifier: + spec_str = str(spec) + if spec_str.endswith("]"): + msg = f"Extras after version '{spec_str}'." 
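+                    # Editor's note: e.g. "name>=1.0[extras]" ends up here --
+                    # the trailing "[extras]" survives inside the parsed
+                    # specifier string, which is what this check detects.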
+ raise InstallationError(msg) + return req + + if req_as_string is not None: + req: Optional[Requirement] = _parse_req_string(req_as_string) + else: + req = None + + return RequirementParts(req, link, markers, extras) + + +def install_req_from_line( + name: str, + comes_from: Optional[Union[str, InstallRequirement]] = None, + use_pep517: Optional[bool] = None, + isolated: bool = False, + options: Optional[Dict[str, Any]] = None, + constraint: bool = False, + line_source: Optional[str] = None, + user_supplied: bool = False, +) -> InstallRequirement: + """Creates an InstallRequirement from a name, which might be a + requirement, directory containing 'setup.py', filename, or URL. + + :param line_source: An optional string describing where the line is from, + for logging purposes in case of an error. + """ + parts = parse_req_from_line(name, line_source) + + return InstallRequirement( + parts.requirement, + comes_from, + link=parts.link, + markers=parts.markers, + use_pep517=use_pep517, + isolated=isolated, + install_options=options.get("install_options", []) if options else [], + global_options=options.get("global_options", []) if options else [], + hash_options=options.get("hashes", {}) if options else {}, + constraint=constraint, + extras=parts.extras, + user_supplied=user_supplied, + ) + + +def install_req_from_req_string( + req_string: str, + comes_from: Optional[InstallRequirement] = None, + isolated: bool = False, + use_pep517: Optional[bool] = None, + user_supplied: bool = False, +) -> InstallRequirement: + try: + req = get_requirement(req_string) + except InvalidRequirement: + raise InstallationError(f"Invalid requirement: '{req_string}'") + + domains_not_allowed = [ + PyPI.file_storage_domain, + TestPyPI.file_storage_domain, + ] + if ( + req.url + and comes_from + and comes_from.link + and comes_from.link.netloc in domains_not_allowed + ): + # Explicitly disallow pypi packages that depend on external urls + raise InstallationError( + "Packages installed from PyPI cannot depend on packages " + "which are not also hosted on PyPI.\n" + "{} depends on {} ".format(comes_from.name, req) + ) + + return InstallRequirement( + req, + comes_from, + isolated=isolated, + use_pep517=use_pep517, + user_supplied=user_supplied, + ) + + +def install_req_from_parsed_requirement( + parsed_req: ParsedRequirement, + isolated: bool = False, + use_pep517: Optional[bool] = None, + user_supplied: bool = False, +) -> InstallRequirement: + if parsed_req.is_editable: + req = install_req_from_editable( + parsed_req.requirement, + comes_from=parsed_req.comes_from, + use_pep517=use_pep517, + constraint=parsed_req.constraint, + isolated=isolated, + user_supplied=user_supplied, + ) + + else: + req = install_req_from_line( + parsed_req.requirement, + comes_from=parsed_req.comes_from, + use_pep517=use_pep517, + isolated=isolated, + options=parsed_req.options, + constraint=parsed_req.constraint, + line_source=parsed_req.line_source, + user_supplied=user_supplied, + ) + return req + + +def install_req_from_link_and_ireq( + link: Link, ireq: InstallRequirement +) -> InstallRequirement: + return InstallRequirement( + req=ireq.req, + comes_from=ireq.comes_from, + editable=ireq.editable, + link=link, + markers=ireq.markers, + use_pep517=ireq.use_pep517, + isolated=ireq.isolated, + install_options=ireq.install_options, + global_options=ireq.global_options, + hash_options=ireq.hash_options, + ) diff --git a/python/lib/python3.10/site-packages/pip/_internal/req/req_file.py 
b/python/lib/python3.10/site-packages/pip/_internal/req/req_file.py new file mode 100644 index 0000000..03ae504
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_internal/req/req_file.py
@@ -0,0 +1,536 @@
+"""
+Requirements file parsing
+"""
+
+import optparse
+import os
+import re
+import shlex
+import urllib.parse
+from optparse import Values
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Callable,
+    Dict,
+    Iterable,
+    Iterator,
+    List,
+    Optional,
+    Tuple,
+)
+
+from pip._internal.cli import cmdoptions
+from pip._internal.exceptions import InstallationError, RequirementsFileParseError
+from pip._internal.models.search_scope import SearchScope
+from pip._internal.network.session import PipSession
+from pip._internal.network.utils import raise_for_status
+from pip._internal.utils.encoding import auto_decode
+from pip._internal.utils.urls import get_url_scheme
+
+if TYPE_CHECKING:
+    # NoReturn introduced in 3.6.2; imported only for type checking to maintain
+    # pip compatibility with older patch versions of Python 3.6
+    from typing import NoReturn
+
+    from pip._internal.index.package_finder import PackageFinder
+
+__all__ = ["parse_requirements"]
+
+ReqFileLines = Iterable[Tuple[int, str]]
+
+LineParser = Callable[[str], Tuple[str, Values]]
+
+SCHEME_RE = re.compile(r"^(http|https|file):", re.I)
+COMMENT_RE = re.compile(r"(^|\s+)#.*$")
+
+# Matches environment variable-style values in '${MY_VARIABLE_1}' with the
+# variable name consisting of only uppercase letters, digits or the '_'
+# (underscore). This follows the POSIX standard defined in IEEE Std 1003.1,
+# 2013 Edition.
+ENV_VAR_RE = re.compile(r"(?P<var>\$\{(?P<name>[A-Z0-9_]+)\})")
+
+SUPPORTED_OPTIONS: List[Callable[..., optparse.Option]] = [
+    cmdoptions.index_url,
+    cmdoptions.extra_index_url,
+    cmdoptions.no_index,
+    cmdoptions.constraints,
+    cmdoptions.requirements,
+    cmdoptions.editable,
+    cmdoptions.find_links,
+    cmdoptions.no_binary,
+    cmdoptions.only_binary,
+    cmdoptions.prefer_binary,
+    cmdoptions.require_hashes,
+    cmdoptions.pre,
+    cmdoptions.trusted_host,
+    cmdoptions.use_new_feature,
+]
+
+# options to be passed to requirements
+SUPPORTED_OPTIONS_REQ: List[Callable[..., optparse.Option]] = [
+    cmdoptions.install_options,
+    cmdoptions.global_options,
+    cmdoptions.hash,
+]
+
+# the 'dest' string values
+SUPPORTED_OPTIONS_REQ_DEST = [str(o().dest) for o in SUPPORTED_OPTIONS_REQ]
+
+
+class ParsedRequirement:
+    def __init__(
+        self,
+        requirement: str,
+        is_editable: bool,
+        comes_from: str,
+        constraint: bool,
+        options: Optional[Dict[str, Any]] = None,
+        line_source: Optional[str] = None,
+    ) -> None:
+        self.requirement = requirement
+        self.is_editable = is_editable
+        self.comes_from = comes_from
+        self.options = options
+        self.constraint = constraint
+        self.line_source = line_source
+
+
+class ParsedLine:
+    def __init__(
+        self,
+        filename: str,
+        lineno: int,
+        args: str,
+        opts: Values,
+        constraint: bool,
+    ) -> None:
+        self.filename = filename
+        self.lineno = lineno
+        self.opts = opts
+        self.constraint = constraint
+
+        if args:
+            self.is_requirement = True
+            self.is_editable = False
+            self.requirement = args
+        elif opts.editables:
+            self.is_requirement = True
+            self.is_editable = True
+            # We don't support multiple -e on one line
+            self.requirement = opts.editables[0]
+        else:
+            self.is_requirement = False
+
+
+def parse_requirements(
+    filename: str,
+    session: PipSession,
+    finder: Optional["PackageFinder"] = None,
+    options: Optional[optparse.Values] = None,
+    constraint: bool = False,
+) -> 
Iterator[ParsedRequirement]: + """Parse a requirements file and yield ParsedRequirement instances. + + :param filename: Path or url of requirements file. + :param session: PipSession instance. + :param finder: Instance of pip.index.PackageFinder. + :param options: cli options. + :param constraint: If true, parsing a constraint file rather than + requirements file. + """ + line_parser = get_line_parser(finder) + parser = RequirementsFileParser(session, line_parser) + + for parsed_line in parser.parse(filename, constraint): + parsed_req = handle_line( + parsed_line, options=options, finder=finder, session=session + ) + if parsed_req is not None: + yield parsed_req + + +def preprocess(content: str) -> ReqFileLines: + """Split, filter, and join lines, and return a line iterator + + :param content: the content of the requirements file + """ + lines_enum: ReqFileLines = enumerate(content.splitlines(), start=1) + lines_enum = join_lines(lines_enum) + lines_enum = ignore_comments(lines_enum) + lines_enum = expand_env_variables(lines_enum) + return lines_enum + + +def handle_requirement_line( + line: ParsedLine, + options: Optional[optparse.Values] = None, +) -> ParsedRequirement: + + # preserve for the nested code path + line_comes_from = "{} {} (line {})".format( + "-c" if line.constraint else "-r", + line.filename, + line.lineno, + ) + + assert line.is_requirement + + if line.is_editable: + # For editable requirements, we don't support per-requirement + # options, so just return the parsed requirement. + return ParsedRequirement( + requirement=line.requirement, + is_editable=line.is_editable, + comes_from=line_comes_from, + constraint=line.constraint, + ) + else: + if options: + # Disable wheels if the user has specified build options + cmdoptions.check_install_build_global(options, line.opts) + + # get the options that apply to requirements + req_options = {} + for dest in SUPPORTED_OPTIONS_REQ_DEST: + if dest in line.opts.__dict__ and line.opts.__dict__[dest]: + req_options[dest] = line.opts.__dict__[dest] + + line_source = f"line {line.lineno} of {line.filename}" + return ParsedRequirement( + requirement=line.requirement, + is_editable=line.is_editable, + comes_from=line_comes_from, + constraint=line.constraint, + options=req_options, + line_source=line_source, + ) + + +def handle_option_line( + opts: Values, + filename: str, + lineno: int, + finder: Optional["PackageFinder"] = None, + options: Optional[optparse.Values] = None, + session: Optional[PipSession] = None, +) -> None: + + if options: + # percolate options upward + if opts.require_hashes: + options.require_hashes = opts.require_hashes + if opts.features_enabled: + options.features_enabled.extend( + f for f in opts.features_enabled if f not in options.features_enabled + ) + + # set finder options + if finder: + find_links = finder.find_links + index_urls = finder.index_urls + if opts.index_url: + index_urls = [opts.index_url] + if opts.no_index is True: + index_urls = [] + if opts.extra_index_urls: + index_urls.extend(opts.extra_index_urls) + if opts.find_links: + # FIXME: it would be nice to keep track of the source + # of the find_links: support a find-links local path + # relative to a requirements file. 
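+            # Editor's illustration: "--find-links ./wheels" inside
+            # /work/reqs/dev.txt is first tried as /work/reqs/wheels below,
+            # and only kept as-is when that path does not exist.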
+ value = opts.find_links[0] + req_dir = os.path.dirname(os.path.abspath(filename)) + relative_to_reqs_file = os.path.join(req_dir, value) + if os.path.exists(relative_to_reqs_file): + value = relative_to_reqs_file + find_links.append(value) + + if session: + # We need to update the auth urls in session + session.update_index_urls(index_urls) + + search_scope = SearchScope( + find_links=find_links, + index_urls=index_urls, + ) + finder.search_scope = search_scope + + if opts.pre: + finder.set_allow_all_prereleases() + + if opts.prefer_binary: + finder.set_prefer_binary() + + if session: + for host in opts.trusted_hosts or []: + source = f"line {lineno} of {filename}" + session.add_trusted_host(host, source=source) + + +def handle_line( + line: ParsedLine, + options: Optional[optparse.Values] = None, + finder: Optional["PackageFinder"] = None, + session: Optional[PipSession] = None, +) -> Optional[ParsedRequirement]: + """Handle a single parsed requirements line; This can result in + creating/yielding requirements, or updating the finder. + + :param line: The parsed line to be processed. + :param options: CLI options. + :param finder: The finder - updated by non-requirement lines. + :param session: The session - updated by non-requirement lines. + + Returns a ParsedRequirement object if the line is a requirement line, + otherwise returns None. + + For lines that contain requirements, the only options that have an effect + are from SUPPORTED_OPTIONS_REQ, and they are scoped to the + requirement. Other options from SUPPORTED_OPTIONS may be present, but are + ignored. + + For lines that do not contain requirements, the only options that have an + effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may + be present, but are ignored. These lines may contain multiple options + (although our docs imply only one is supported), and all our parsed and + affect the finder. 
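+
+    Example (editor's illustration, not upstream text): a line like
+    "requests --hash=sha256:<digest>" produces a ParsedRequirement whose
+    options carry the per-requirement hash, while a line like
+    "--extra-index-url https://example.org/simple" returns None and
+    mutates the finder's search scope instead.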
+ """ + + if line.is_requirement: + parsed_req = handle_requirement_line(line, options) + return parsed_req + else: + handle_option_line( + line.opts, + line.filename, + line.lineno, + finder, + options, + session, + ) + return None + + +class RequirementsFileParser: + def __init__( + self, + session: PipSession, + line_parser: LineParser, + ) -> None: + self._session = session + self._line_parser = line_parser + + def parse(self, filename: str, constraint: bool) -> Iterator[ParsedLine]: + """Parse a given file, yielding parsed lines.""" + yield from self._parse_and_recurse(filename, constraint) + + def _parse_and_recurse( + self, filename: str, constraint: bool + ) -> Iterator[ParsedLine]: + for line in self._parse_file(filename, constraint): + if not line.is_requirement and ( + line.opts.requirements or line.opts.constraints + ): + # parse a nested requirements file + if line.opts.requirements: + req_path = line.opts.requirements[0] + nested_constraint = False + else: + req_path = line.opts.constraints[0] + nested_constraint = True + + # original file is over http + if SCHEME_RE.search(filename): + # do a url join so relative paths work + req_path = urllib.parse.urljoin(filename, req_path) + # original file and nested file are paths + elif not SCHEME_RE.search(req_path): + # do a join so relative paths work + req_path = os.path.join( + os.path.dirname(filename), + req_path, + ) + + yield from self._parse_and_recurse(req_path, nested_constraint) + else: + yield line + + def _parse_file(self, filename: str, constraint: bool) -> Iterator[ParsedLine]: + _, content = get_file_content(filename, self._session) + + lines_enum = preprocess(content) + + for line_number, line in lines_enum: + try: + args_str, opts = self._line_parser(line) + except OptionParsingError as e: + # add offending line + msg = f"Invalid requirement: {line}\n{e.msg}" + raise RequirementsFileParseError(msg) + + yield ParsedLine( + filename, + line_number, + args_str, + opts, + constraint, + ) + + +def get_line_parser(finder: Optional["PackageFinder"]) -> LineParser: + def parse_line(line: str) -> Tuple[str, Values]: + # Build new parser for each line since it accumulates appendable + # options. + parser = build_parser() + defaults = parser.get_default_values() + defaults.index_url = None + if finder: + defaults.format_control = finder.format_control + + args_str, options_str = break_args_options(line) + + opts, _ = parser.parse_args(shlex.split(options_str), defaults) + + return args_str, opts + + return parse_line + + +def break_args_options(line: str) -> Tuple[str, str]: + """Break up the line into an args and options string. We only want to shlex + (and then optparse) the options, not the args. args can contain markers + which are corrupted by shlex. + """ + tokens = line.split(" ") + args = [] + options = tokens[:] + for token in tokens: + if token.startswith("-") or token.startswith("--"): + break + else: + args.append(token) + options.pop(0) + return " ".join(args), " ".join(options) + + +class OptionParsingError(Exception): + def __init__(self, msg: str) -> None: + self.msg = msg + + +def build_parser() -> optparse.OptionParser: + """ + Return a parser for parsing requirement lines + """ + parser = optparse.OptionParser(add_help_option=False) + + option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ + for option_factory in option_factories: + option = option_factory() + parser.add_option(option) + + # By default optparse sys.exits on parsing errors. We want to wrap + # that in our own exception. 
+    def parser_exit(self: Any, msg: str) -> "NoReturn":
+        raise OptionParsingError(msg)
+
+    # NOTE: mypy disallows assigning to a method
+    #       https://github.com/python/mypy/issues/2427
+    parser.exit = parser_exit  # type: ignore
+
+    return parser
+
+
+def join_lines(lines_enum: ReqFileLines) -> ReqFileLines:
+    """Joins a line ending in '\' with the previous line (except when following
+    comments).  The joined line takes on the index of the first line.
+    """
+    primary_line_number = None
+    new_line: List[str] = []
+    for line_number, line in lines_enum:
+        if not line.endswith("\\") or COMMENT_RE.match(line):
+            if COMMENT_RE.match(line):
+                # this ensures comments are always matched later
+                line = " " + line
+            if new_line:
+                new_line.append(line)
+                assert primary_line_number is not None
+                yield primary_line_number, "".join(new_line)
+                new_line = []
+            else:
+                yield line_number, line
+        else:
+            if not new_line:
+                primary_line_number = line_number
+            new_line.append(line.strip("\\"))
+
+    # last line contains \
+    if new_line:
+        assert primary_line_number is not None
+        yield primary_line_number, "".join(new_line)
+
+    # TODO: handle space after '\'.
+
+
+def ignore_comments(lines_enum: ReqFileLines) -> ReqFileLines:
+    """
+    Strips comments and filter empty lines.
+    """
+    for line_number, line in lines_enum:
+        line = COMMENT_RE.sub("", line)
+        line = line.strip()
+        if line:
+            yield line_number, line
+
+
+def expand_env_variables(lines_enum: ReqFileLines) -> ReqFileLines:
+    """Replace all environment variables that can be retrieved via `os.getenv`.
+
+    The only allowed format for environment variables defined in the
+    requirement file is `${MY_VARIABLE_1}` to ensure two things:
+
+    1. Strings that contain a `$` aren't accidentally (partially) expanded.
+    2. Ensure consistency across platforms for requirement files.
+
+    These points are the result of a discussion on the `github pull
+    request #3514 <https://github.com/pypa/pip/pull/3514>`_.
+
+    Valid characters in variable names follow the `POSIX standard
+    <http://pubs.opengroup.org/onlinepubs/9699919799/>`_ and are limited
+    to uppercase letter, digits and the `_` (underscore).
+    """
+    for line_number, line in lines_enum:
+        for env_var, var_name in ENV_VAR_RE.findall(line):
+            value = os.getenv(var_name)
+            if not value:
+                continue
+
+            line = line.replace(env_var, value)
+
+        yield line_number, line
+
+
+def get_file_content(url: str, session: PipSession) -> Tuple[str, str]:
+    """Gets the content of a file; it may be a filename, file: URL, or
+    http: URL.  Returns (location, content).  Content is unicode.
+    Respects # -*- coding: declarations on the retrieved files.
+
+    :param url:         File path or url.
+    :param session:     PipSession instance.
+    """
+    scheme = get_url_scheme(url)
+
+    # Pip has special support for file:// URLs (LocalFSAdapter).
+    if scheme in ["http", "https", "file"]:
+        resp = session.get(url)
+        raise_for_status(resp)
+        return resp.url, resp.text
+
+    # Assume this is a bare path.
+    try:
+        with open(url, "rb") as f:
+            content = auto_decode(f.read())
+    except OSError as exc:
+        raise InstallationError(f"Could not open requirements file: {exc}")
+    return url, content
diff --git a/python/lib/python3.10/site-packages/pip/_internal/req/req_install.py b/python/lib/python3.10/site-packages/pip/_internal/req/req_install.py
new file mode 100644 index 0000000..02dbda1
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_internal/req/req_install.py
@@ -0,0 +1,858 @@
+# The following comment should be removed at some point in the future.
+# mypy: strict-optional=False + +import functools +import logging +import os +import shutil +import sys +import uuid +import zipfile +from typing import Any, Collection, Dict, Iterable, List, Optional, Sequence, Union + +from pip._vendor.packaging.markers import Marker +from pip._vendor.packaging.requirements import Requirement +from pip._vendor.packaging.specifiers import SpecifierSet +from pip._vendor.packaging.utils import canonicalize_name +from pip._vendor.packaging.version import Version +from pip._vendor.packaging.version import parse as parse_version +from pip._vendor.pep517.wrappers import Pep517HookCaller + +from pip._internal.build_env import BuildEnvironment, NoOpBuildEnvironment +from pip._internal.exceptions import InstallationError, LegacyInstallFailure +from pip._internal.locations import get_scheme +from pip._internal.metadata import ( + BaseDistribution, + get_default_environment, + get_directory_distribution, +) +from pip._internal.models.link import Link +from pip._internal.operations.build.metadata import generate_metadata +from pip._internal.operations.build.metadata_editable import generate_editable_metadata +from pip._internal.operations.build.metadata_legacy import ( + generate_metadata as generate_metadata_legacy, +) +from pip._internal.operations.install.editable_legacy import ( + install_editable as install_editable_legacy, +) +from pip._internal.operations.install.legacy import install as install_legacy +from pip._internal.operations.install.wheel import install_wheel +from pip._internal.pyproject import load_pyproject_toml, make_pyproject_path +from pip._internal.req.req_uninstall import UninstallPathSet +from pip._internal.utils.deprecation import deprecated +from pip._internal.utils.direct_url_helpers import ( + direct_url_for_editable, + direct_url_from_link, +) +from pip._internal.utils.hashes import Hashes +from pip._internal.utils.misc import ( + ask_path_exists, + backup_dir, + display_path, + hide_url, + redact_auth_from_url, +) +from pip._internal.utils.packaging import safe_extra +from pip._internal.utils.subprocess import runner_with_spinner_message +from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds +from pip._internal.utils.virtualenv import running_under_virtualenv +from pip._internal.vcs import vcs + +logger = logging.getLogger(__name__) + + +class InstallRequirement: + """ + Represents something that may be installed later on, may have information + about where to fetch the relevant requirement and also contains logic for + installing the said requirement. + """ + + def __init__( + self, + req: Optional[Requirement], + comes_from: Optional[Union[str, "InstallRequirement"]], + editable: bool = False, + link: Optional[Link] = None, + markers: Optional[Marker] = None, + use_pep517: Optional[bool] = None, + isolated: bool = False, + install_options: Optional[List[str]] = None, + global_options: Optional[List[str]] = None, + hash_options: Optional[Dict[str, List[str]]] = None, + constraint: bool = False, + extras: Collection[str] = (), + user_supplied: bool = False, + permit_editable_wheels: bool = False, + ) -> None: + assert req is None or isinstance(req, Requirement), req + self.req = req + self.comes_from = comes_from + self.constraint = constraint + self.editable = editable + self.permit_editable_wheels = permit_editable_wheels + self.legacy_install_reason: Optional[int] = None + + # source_dir is the local directory where the linked requirement is + # located, or unpacked. 
In case unpacking is needed, creating and + # populating source_dir is done by the RequirementPreparer. Note this + # is not necessarily the directory where pyproject.toml or setup.py is + # located - that one is obtained via unpacked_source_directory. + self.source_dir: Optional[str] = None + if self.editable: + assert link + if link.is_file: + self.source_dir = os.path.normpath(os.path.abspath(link.file_path)) + + if link is None and req and req.url: + # PEP 508 URL requirement + link = Link(req.url) + self.link = self.original_link = link + self.original_link_is_in_wheel_cache = False + + # Path to any downloaded or already-existing package. + self.local_file_path: Optional[str] = None + if self.link and self.link.is_file: + self.local_file_path = self.link.file_path + + if extras: + self.extras = extras + elif req: + self.extras = {safe_extra(extra) for extra in req.extras} + else: + self.extras = set() + if markers is None and req: + markers = req.marker + self.markers = markers + + # This holds the Distribution object if this requirement is already installed. + self.satisfied_by: Optional[BaseDistribution] = None + # Whether the installation process should try to uninstall an existing + # distribution before installing this requirement. + self.should_reinstall = False + # Temporary build location + self._temp_build_dir: Optional[TempDirectory] = None + # Set to True after successful installation + self.install_succeeded: Optional[bool] = None + # Supplied options + self.install_options = install_options if install_options else [] + self.global_options = global_options if global_options else [] + self.hash_options = hash_options if hash_options else {} + # Set to True after successful preparation of this requirement + self.prepared = False + # User supplied requirement are explicitly requested for installation + # by the user via CLI arguments or requirements files, as opposed to, + # e.g. dependencies, extras or constraints. + self.user_supplied = user_supplied + + self.isolated = isolated + self.build_env: BuildEnvironment = NoOpBuildEnvironment() + + # For PEP 517, the directory where we request the project metadata + # gets stored. We need this to pass to build_wheel, so the backend + # can ensure that the wheel matches the metadata (see the PEP for + # details). + self.metadata_directory: Optional[str] = None + + # The static build requirements (from pyproject.toml) + self.pyproject_requires: Optional[List[str]] = None + + # Build requirements that we will check are available + self.requirements_to_check: List[str] = [] + + # The PEP 517 backend we should use to build the project + self.pep517_backend: Optional[Pep517HookCaller] = None + + # Are we using PEP 517 for this requirement? + # After pyproject.toml has been loaded, the only valid values are True + # and False. Before loading, None is valid (meaning "use the default"). + # Setting an explicit value before loading pyproject.toml is supported, + # but after loading this flag should be treated as read only. 
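+        # Editor's note: e.g. the --use-pep517 / --no-use-pep517 CLI flags
+        # seed this attribute; None is resolved by load_pyproject_toml().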
+ self.use_pep517 = use_pep517 + + # This requirement needs more preparation before it can be built + self.needs_more_preparation = False + + def __str__(self) -> str: + if self.req: + s = str(self.req) + if self.link: + s += " from {}".format(redact_auth_from_url(self.link.url)) + elif self.link: + s = redact_auth_from_url(self.link.url) + else: + s = "" + if self.satisfied_by is not None: + s += " in {}".format(display_path(self.satisfied_by.location)) + if self.comes_from: + if isinstance(self.comes_from, str): + comes_from: Optional[str] = self.comes_from + else: + comes_from = self.comes_from.from_path() + if comes_from: + s += f" (from {comes_from})" + return s + + def __repr__(self) -> str: + return "<{} object: {} editable={!r}>".format( + self.__class__.__name__, str(self), self.editable + ) + + def format_debug(self) -> str: + """An un-tested helper for getting state, for debugging.""" + attributes = vars(self) + names = sorted(attributes) + + state = ("{}={!r}".format(attr, attributes[attr]) for attr in sorted(names)) + return "<{name} object: {{{state}}}>".format( + name=self.__class__.__name__, + state=", ".join(state), + ) + + # Things that are valid for all kinds of requirements? + @property + def name(self) -> Optional[str]: + if self.req is None: + return None + return self.req.name + + @functools.lru_cache() # use cached_property in python 3.8+ + def supports_pyproject_editable(self) -> bool: + if not self.use_pep517: + return False + assert self.pep517_backend + with self.build_env: + runner = runner_with_spinner_message( + "Checking if build backend supports build_editable" + ) + with self.pep517_backend.subprocess_runner(runner): + return "build_editable" in self.pep517_backend._supported_features() + + @property + def specifier(self) -> SpecifierSet: + return self.req.specifier + + @property + def is_pinned(self) -> bool: + """Return whether I am pinned to an exact version. + + For example, some-package==1.2 is pinned; some-package>1.2 is not. + """ + specifiers = self.specifier + return len(specifiers) == 1 and next(iter(specifiers)).operator in {"==", "==="} + + def match_markers(self, extras_requested: Optional[Iterable[str]] = None) -> bool: + if not extras_requested: + # Provide an extra to safely evaluate the markers + # without matching any extra + extras_requested = ("",) + if self.markers is not None: + return any( + self.markers.evaluate({"extra": extra}) for extra in extras_requested + ) + else: + return True + + @property + def has_hash_options(self) -> bool: + """Return whether any known-good hashes are specified as options. + + These activate --require-hashes mode; hashes specified as part of a + URL do not. + + """ + return bool(self.hash_options) + + def hashes(self, trust_internet: bool = True) -> Hashes: + """Return a hash-comparer that considers my option- and URL-based + hashes to be known-good. + + Hashes in URLs--ones embedded in the requirements file, not ones + downloaded from an index server--are almost peers with ones from + flags. They satisfy --require-hashes (whether it was implicitly or + explicitly activated) but do not activate it. md5 and sha224 are not + allowed in flags, which should nudge people toward good algos. We + always OR all hashes together, even ones from URLs. + + :param trust_internet: Whether to trust URL-based (#md5=...) 
hashes + downloaded from the internet, as by populate_link() + + """ + good_hashes = self.hash_options.copy() + link = self.link if trust_internet else self.original_link + if link and link.hash: + good_hashes.setdefault(link.hash_name, []).append(link.hash) + return Hashes(good_hashes) + + def from_path(self) -> Optional[str]: + """Format a nice indicator to show where this "comes from" """ + if self.req is None: + return None + s = str(self.req) + if self.comes_from: + if isinstance(self.comes_from, str): + comes_from = self.comes_from + else: + comes_from = self.comes_from.from_path() + if comes_from: + s += "->" + comes_from + return s + + def ensure_build_location( + self, build_dir: str, autodelete: bool, parallel_builds: bool + ) -> str: + assert build_dir is not None + if self._temp_build_dir is not None: + assert self._temp_build_dir.path + return self._temp_build_dir.path + if self.req is None: + # Some systems have /tmp as a symlink which confuses custom + # builds (such as numpy). Thus, we ensure that the real path + # is returned. + self._temp_build_dir = TempDirectory( + kind=tempdir_kinds.REQ_BUILD, globally_managed=True + ) + + return self._temp_build_dir.path + + # This is the only remaining place where we manually determine the path + # for the temporary directory. It is only needed for editables where + # it is the value of the --src option. + + # When parallel builds are enabled, add a UUID to the build directory + # name so multiple builds do not interfere with each other. + dir_name: str = canonicalize_name(self.name) + if parallel_builds: + dir_name = f"{dir_name}_{uuid.uuid4().hex}" + + # FIXME: Is there a better place to create the build_dir? (hg and bzr + # need this) + if not os.path.exists(build_dir): + logger.debug("Creating directory %s", build_dir) + os.makedirs(build_dir) + actual_build_dir = os.path.join(build_dir, dir_name) + # `None` indicates that we respect the globally-configured deletion + # settings, which is what we actually want when auto-deleting. + delete_arg = None if autodelete else False + return TempDirectory( + path=actual_build_dir, + delete=delete_arg, + kind=tempdir_kinds.REQ_BUILD, + globally_managed=True, + ).path + + def _set_requirement(self) -> None: + """Set requirement after generating metadata.""" + assert self.req is None + assert self.metadata is not None + assert self.source_dir is not None + + # Construct a Requirement object from the generated metadata + if isinstance(parse_version(self.metadata["Version"]), Version): + op = "==" + else: + op = "===" + + self.req = Requirement( + "".join( + [ + self.metadata["Name"], + op, + self.metadata["Version"], + ] + ) + ) + + def warn_on_mismatching_name(self) -> None: + metadata_name = canonicalize_name(self.metadata["Name"]) + if canonicalize_name(self.req.name) == metadata_name: + # Everything is fine. + return + + # If we're here, there's a mismatch. Log a warning about it. + logger.warning( + "Generating metadata for package %s " + "produced metadata for project name %s. Fix your " + "#egg=%s fragments.", + self.name, + metadata_name, + self.name, + ) + self.req = Requirement(metadata_name) + + def check_if_exists(self, use_user_site: bool) -> None: + """Find an installed distribution that satisfies or conflicts + with this requirement, and set self.satisfied_by or + self.should_reinstall appropriately. 
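+
+        Example (editor's illustration): with "pkg==2.0" requested while
+        pkg 1.0 is installed, satisfied_by stays None and should_reinstall
+        becomes True; if pkg 2.0 is already present (and this is not an
+        editable install), satisfied_by is set instead.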
+ """ + if self.req is None: + return + existing_dist = get_default_environment().get_distribution(self.req.name) + if not existing_dist: + return + + version_compatible = self.req.specifier.contains( + existing_dist.version, + prereleases=True, + ) + if not version_compatible: + self.satisfied_by = None + if use_user_site: + if existing_dist.in_usersite: + self.should_reinstall = True + elif running_under_virtualenv() and existing_dist.in_site_packages: + raise InstallationError( + f"Will not install to the user site because it will " + f"lack sys.path precedence to {existing_dist.raw_name} " + f"in {existing_dist.location}" + ) + else: + self.should_reinstall = True + else: + if self.editable: + self.should_reinstall = True + # when installing editables, nothing pre-existing should ever + # satisfy + self.satisfied_by = None + else: + self.satisfied_by = existing_dist + + # Things valid for wheels + @property + def is_wheel(self) -> bool: + if not self.link: + return False + return self.link.is_wheel + + # Things valid for sdists + @property + def unpacked_source_directory(self) -> str: + return os.path.join( + self.source_dir, self.link and self.link.subdirectory_fragment or "" + ) + + @property + def setup_py_path(self) -> str: + assert self.source_dir, f"No source dir for {self}" + setup_py = os.path.join(self.unpacked_source_directory, "setup.py") + + return setup_py + + @property + def setup_cfg_path(self) -> str: + assert self.source_dir, f"No source dir for {self}" + setup_cfg = os.path.join(self.unpacked_source_directory, "setup.cfg") + + return setup_cfg + + @property + def pyproject_toml_path(self) -> str: + assert self.source_dir, f"No source dir for {self}" + return make_pyproject_path(self.unpacked_source_directory) + + def load_pyproject_toml(self) -> None: + """Load the pyproject.toml file. + + After calling this routine, all of the attributes related to PEP 517 + processing for this requirement have been set. In particular, the + use_pep517 attribute can be used to determine whether we should + follow the PEP 517 or legacy (setup.py) code path. + """ + pyproject_toml_data = load_pyproject_toml( + self.use_pep517, self.pyproject_toml_path, self.setup_py_path, str(self) + ) + + if pyproject_toml_data is None: + self.use_pep517 = False + return + + self.use_pep517 = True + requires, backend, check, backend_path = pyproject_toml_data + self.requirements_to_check = check + self.pyproject_requires = requires + self.pep517_backend = Pep517HookCaller( + self.unpacked_source_directory, + backend, + backend_path=backend_path, + ) + + def isolated_editable_sanity_check(self) -> None: + """Check that an editable requirement if valid for use with PEP 517/518. + + This verifies that an editable that has a pyproject.toml either supports PEP 660 + or as a setup.py or a setup.cfg + """ + if ( + self.editable + and self.use_pep517 + and not self.supports_pyproject_editable() + and not os.path.isfile(self.setup_py_path) + and not os.path.isfile(self.setup_cfg_path) + ): + raise InstallationError( + f"Project {self} has a 'pyproject.toml' and its build " + f"backend is missing the 'build_editable' hook. Since it does not " + f"have a 'setup.py' nor a 'setup.cfg', " + f"it cannot be installed in editable mode. " + f"Consider using a build backend that supports PEP 660." + ) + + def prepare_metadata(self) -> None: + """Ensure that project metadata is available. + + Under PEP 517 and PEP 660, call the backend hook to prepare the metadata. + Under legacy processing, call setup.py egg-info. 
+ """ + assert self.source_dir + details = self.name or f"from {self.link}" + + if self.use_pep517: + assert self.pep517_backend is not None + if ( + self.editable + and self.permit_editable_wheels + and self.supports_pyproject_editable() + ): + self.metadata_directory = generate_editable_metadata( + build_env=self.build_env, + backend=self.pep517_backend, + details=details, + ) + else: + self.metadata_directory = generate_metadata( + build_env=self.build_env, + backend=self.pep517_backend, + details=details, + ) + else: + self.metadata_directory = generate_metadata_legacy( + build_env=self.build_env, + setup_py_path=self.setup_py_path, + source_dir=self.unpacked_source_directory, + isolated=self.isolated, + details=details, + ) + + # Act on the newly generated metadata, based on the name and version. + if not self.name: + self._set_requirement() + else: + self.warn_on_mismatching_name() + + self.assert_source_matches_version() + + @property + def metadata(self) -> Any: + if not hasattr(self, "_metadata"): + self._metadata = self.get_dist().metadata + + return self._metadata + + def get_dist(self) -> BaseDistribution: + return get_directory_distribution(self.metadata_directory) + + def assert_source_matches_version(self) -> None: + assert self.source_dir + version = self.metadata["version"] + if self.req.specifier and version not in self.req.specifier: + logger.warning( + "Requested %s, but installing version %s", + self, + version, + ) + else: + logger.debug( + "Source in %s has version %s, which satisfies requirement %s", + display_path(self.source_dir), + version, + self, + ) + + # For both source distributions and editables + def ensure_has_source_dir( + self, + parent_dir: str, + autodelete: bool = False, + parallel_builds: bool = False, + ) -> None: + """Ensure that a source_dir is set. + + This will create a temporary build dir if the name of the requirement + isn't known yet. + + :param parent_dir: The ideal pip parent_dir for the source_dir. + Generally src_dir for editables and build_dir for sdists. + :return: self.source_dir + """ + if self.source_dir is None: + self.source_dir = self.ensure_build_location( + parent_dir, + autodelete=autodelete, + parallel_builds=parallel_builds, + ) + + # For editable installations + def update_editable(self) -> None: + if not self.link: + logger.debug( + "Cannot update repository at %s; repository location is unknown", + self.source_dir, + ) + return + assert self.editable + assert self.source_dir + if self.link.scheme == "file": + # Static paths don't get updated + return + vcs_backend = vcs.get_backend_for_scheme(self.link.scheme) + # Editable requirements are validated in Requirement constructors. + # So here, if it's neither a path nor a valid VCS URL, it's a bug. + assert vcs_backend, f"Unsupported VCS URL {self.link.url}" + hidden_url = hide_url(self.link.url) + vcs_backend.obtain(self.source_dir, url=hidden_url, verbosity=0) + + # Top-level Actions + def uninstall( + self, auto_confirm: bool = False, verbose: bool = False + ) -> Optional[UninstallPathSet]: + """ + Uninstall the distribution currently satisfying this requirement. + + Prompts before removing or modifying files unless + ``auto_confirm`` is True. + + Refuses to delete or modify files outside of ``sys.prefix`` - + thus uninstallation within a virtual environment can only + modify that virtual environment, even if the virtualenv is + linked to global site-packages. 
+ + """ + assert self.req + dist = get_default_environment().get_distribution(self.req.name) + if not dist: + logger.warning("Skipping %s as it is not installed.", self.name) + return None + logger.info("Found existing installation: %s", dist) + + uninstalled_pathset = UninstallPathSet.from_dist(dist) + uninstalled_pathset.remove(auto_confirm, verbose) + return uninstalled_pathset + + def _get_archive_name(self, path: str, parentdir: str, rootdir: str) -> str: + def _clean_zip_name(name: str, prefix: str) -> str: + assert name.startswith( + prefix + os.path.sep + ), f"name {name!r} doesn't start with prefix {prefix!r}" + name = name[len(prefix) + 1 :] + name = name.replace(os.path.sep, "/") + return name + + path = os.path.join(parentdir, path) + name = _clean_zip_name(path, rootdir) + return self.name + "/" + name + + def archive(self, build_dir: Optional[str]) -> None: + """Saves archive to provided build_dir. + + Used for saving downloaded VCS requirements as part of `pip download`. + """ + assert self.source_dir + if build_dir is None: + return + + create_archive = True + archive_name = "{}-{}.zip".format(self.name, self.metadata["version"]) + archive_path = os.path.join(build_dir, archive_name) + + if os.path.exists(archive_path): + response = ask_path_exists( + "The file {} exists. (i)gnore, (w)ipe, " + "(b)ackup, (a)bort ".format(display_path(archive_path)), + ("i", "w", "b", "a"), + ) + if response == "i": + create_archive = False + elif response == "w": + logger.warning("Deleting %s", display_path(archive_path)) + os.remove(archive_path) + elif response == "b": + dest_file = backup_dir(archive_path) + logger.warning( + "Backing up %s to %s", + display_path(archive_path), + display_path(dest_file), + ) + shutil.move(archive_path, dest_file) + elif response == "a": + sys.exit(-1) + + if not create_archive: + return + + zip_output = zipfile.ZipFile( + archive_path, + "w", + zipfile.ZIP_DEFLATED, + allowZip64=True, + ) + with zip_output: + dir = os.path.normcase(os.path.abspath(self.unpacked_source_directory)) + for dirpath, dirnames, filenames in os.walk(dir): + for dirname in dirnames: + dir_arcname = self._get_archive_name( + dirname, + parentdir=dirpath, + rootdir=dir, + ) + zipdir = zipfile.ZipInfo(dir_arcname + "/") + zipdir.external_attr = 0x1ED << 16 # 0o755 + zip_output.writestr(zipdir, "") + for filename in filenames: + file_arcname = self._get_archive_name( + filename, + parentdir=dirpath, + rootdir=dir, + ) + filename = os.path.join(dirpath, filename) + zip_output.write(filename, file_arcname) + + logger.info("Saved %s", display_path(archive_path)) + + def install( + self, + install_options: List[str], + global_options: Optional[Sequence[str]] = None, + root: Optional[str] = None, + home: Optional[str] = None, + prefix: Optional[str] = None, + warn_script_location: bool = True, + use_user_site: bool = False, + pycompile: bool = True, + ) -> None: + scheme = get_scheme( + self.name, + user=use_user_site, + home=home, + root=root, + isolated=self.isolated, + prefix=prefix, + ) + + global_options = global_options if global_options is not None else [] + if self.editable and not self.is_wheel: + install_editable_legacy( + install_options, + global_options, + prefix=prefix, + home=home, + use_user_site=use_user_site, + name=self.name, + setup_py_path=self.setup_py_path, + isolated=self.isolated, + build_env=self.build_env, + unpacked_source_directory=self.unpacked_source_directory, + ) + self.install_succeeded = True + return + + if self.is_wheel: + assert self.local_file_path + 
direct_url = None + if self.editable: + direct_url = direct_url_for_editable(self.unpacked_source_directory) + elif self.original_link: + direct_url = direct_url_from_link( + self.original_link, + self.source_dir, + self.original_link_is_in_wheel_cache, + ) + install_wheel( + self.name, + self.local_file_path, + scheme=scheme, + req_description=str(self.req), + pycompile=pycompile, + warn_script_location=warn_script_location, + direct_url=direct_url, + requested=self.user_supplied, + ) + self.install_succeeded = True + return + + # TODO: Why don't we do this for editable installs? + + # Extend the list of global and install options passed on to + # the setup.py call with the ones from the requirements file. + # Options specified in requirements file override those + # specified on the command line, since the last option given + # to setup.py is the one that is used. + global_options = list(global_options) + self.global_options + install_options = list(install_options) + self.install_options + + try: + success = install_legacy( + install_options=install_options, + global_options=global_options, + root=root, + home=home, + prefix=prefix, + use_user_site=use_user_site, + pycompile=pycompile, + scheme=scheme, + setup_py_path=self.setup_py_path, + isolated=self.isolated, + req_name=self.name, + build_env=self.build_env, + unpacked_source_directory=self.unpacked_source_directory, + req_description=str(self.req), + ) + except LegacyInstallFailure as exc: + self.install_succeeded = False + raise exc + except Exception: + self.install_succeeded = True + raise + + self.install_succeeded = success + + if success and self.legacy_install_reason == 8368: + deprecated( + reason=( + "{} was installed using the legacy 'setup.py install' " + "method, because a wheel could not be built for it.".format( + self.name + ) + ), + replacement="to fix the wheel build issue reported above", + gone_in=None, + issue=8368, + ) + + +def check_invalid_constraint_type(req: InstallRequirement) -> str: + + # Check for unsupported forms + problem = "" + if not req.name: + problem = "Unnamed requirements are not allowed as constraints" + elif req.editable: + problem = "Editable requirements are not allowed as constraints" + elif req.extras: + problem = "Constraints cannot have extras" + + if problem: + deprecated( + reason=( + "Constraints are only allowed to take the form of a package " + "name and a version specifier. Other forms were originally " + "permitted as an accident of the implementation, but were " + "undocumented. The new implementation of the resolver no " + "longer supports these forms." 
+ ), + replacement="replacing the constraint with a requirement", + # No plan yet for when the new resolver becomes default + gone_in=None, + issue=8210, + ) + + return problem diff --git a/python/lib/python3.10/site-packages/pip/_internal/req/req_set.py b/python/lib/python3.10/site-packages/pip/_internal/req/req_set.py new file mode 100644 index 0000000..6626c37 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/req/req_set.py @@ -0,0 +1,189 @@ +import logging +from collections import OrderedDict +from typing import Dict, Iterable, List, Optional, Tuple + +from pip._vendor.packaging.utils import canonicalize_name + +from pip._internal.exceptions import InstallationError +from pip._internal.models.wheel import Wheel +from pip._internal.req.req_install import InstallRequirement +from pip._internal.utils import compatibility_tags + +logger = logging.getLogger(__name__) + + +class RequirementSet: + def __init__(self, check_supported_wheels: bool = True) -> None: + """Create a RequirementSet.""" + + self.requirements: Dict[str, InstallRequirement] = OrderedDict() + self.check_supported_wheels = check_supported_wheels + + self.unnamed_requirements: List[InstallRequirement] = [] + + def __str__(self) -> str: + requirements = sorted( + (req for req in self.requirements.values() if not req.comes_from), + key=lambda req: canonicalize_name(req.name or ""), + ) + return " ".join(str(req.req) for req in requirements) + + def __repr__(self) -> str: + requirements = sorted( + self.requirements.values(), + key=lambda req: canonicalize_name(req.name or ""), + ) + + format_string = "<{classname} object; {count} requirement(s): {reqs}>" + return format_string.format( + classname=self.__class__.__name__, + count=len(requirements), + reqs=", ".join(str(req.req) for req in requirements), + ) + + def add_unnamed_requirement(self, install_req: InstallRequirement) -> None: + assert not install_req.name + self.unnamed_requirements.append(install_req) + + def add_named_requirement(self, install_req: InstallRequirement) -> None: + assert install_req.name + + project_name = canonicalize_name(install_req.name) + self.requirements[project_name] = install_req + + def add_requirement( + self, + install_req: InstallRequirement, + parent_req_name: Optional[str] = None, + extras_requested: Optional[Iterable[str]] = None, + ) -> Tuple[List[InstallRequirement], Optional[InstallRequirement]]: + """Add install_req as a requirement to install. + + :param parent_req_name: The name of the requirement that needed this + added. The name is used because when multiple unnamed requirements + resolve to the same name, we could otherwise end up with dependency + links that point outside the Requirements set. parent_req must + already be added. Note that None implies that this is a user + supplied requirement, vs an inferred one. + :param extras_requested: an iterable of extras used to evaluate the + environment markers. + :return: Additional requirements to scan. That is either [] if + the requirement is not applicable, or [install_req] if the + requirement is applicable and has just been added. + """ + # If the markers do not match, ignore this requirement. + if not install_req.match_markers(extras_requested): + logger.info( + "Ignoring %s: markers '%s' don't match your environment", + install_req.name, + install_req.markers, + ) + return [], None + + # If the wheel is not supported, raise an error. 
+ # Should check this after filtering out based on environment markers to + # allow specifying different wheels based on the environment/OS, in a + # single requirements file. + if install_req.link and install_req.link.is_wheel: + wheel = Wheel(install_req.link.filename) + tags = compatibility_tags.get_supported() + if self.check_supported_wheels and not wheel.supported(tags): + raise InstallationError( + "{} is not a supported wheel on this platform.".format( + wheel.filename + ) + ) + + # This next bit is really a sanity check. + assert ( + not install_req.user_supplied or parent_req_name is None + ), "a user supplied req shouldn't have a parent" + + # Unnamed requirements are scanned again and the requirement won't be + # added as a dependency until after scanning. + if not install_req.name: + self.add_unnamed_requirement(install_req) + return [install_req], None + + try: + existing_req: Optional[InstallRequirement] = self.get_requirement( + install_req.name + ) + except KeyError: + existing_req = None + + has_conflicting_requirement = ( + parent_req_name is None + and existing_req + and not existing_req.constraint + and existing_req.extras == install_req.extras + and existing_req.req + and install_req.req + and existing_req.req.specifier != install_req.req.specifier + ) + if has_conflicting_requirement: + raise InstallationError( + "Double requirement given: {} (already in {}, name={!r})".format( + install_req, existing_req, install_req.name + ) + ) + + # When no existing requirement exists, add the requirement as a + # dependency and it will be scanned again after. + if not existing_req: + self.add_named_requirement(install_req) + # We'd want to rescan this requirement later + return [install_req], install_req + + # Assume there's no need to scan, and that we've already + # encountered this for scanning. + if install_req.constraint or not existing_req.constraint: + return [], existing_req + + does_not_satisfy_constraint = install_req.link and not ( + existing_req.link and install_req.link.path == existing_req.link.path + ) + if does_not_satisfy_constraint: + raise InstallationError( + "Could not satisfy constraints for '{}': " + "installation from path or url cannot be " + "constrained to a version".format(install_req.name) + ) + # If we're now installing a constraint, mark the existing + # object for real installation. + existing_req.constraint = False + # If we're now installing a user supplied requirement, + # mark the existing object as such. + if install_req.user_supplied: + existing_req.user_supplied = True + existing_req.extras = tuple( + sorted(set(existing_req.extras) | set(install_req.extras)) + ) + logger.debug( + "Setting %s extras to: %s", + existing_req, + existing_req.extras, + ) + # Return the existing requirement for addition to the parent and + # scanning again. 
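+        # Editor's note: e.g. a pin from "-c constraints.txt" followed by the
+        # same project requested for real lands here -- the stored constraint
+        # is promoted in place to an installable requirement.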
+ return [existing_req], existing_req + + def has_requirement(self, name: str) -> bool: + project_name = canonicalize_name(name) + + return ( + project_name in self.requirements + and not self.requirements[project_name].constraint + ) + + def get_requirement(self, name: str) -> InstallRequirement: + project_name = canonicalize_name(name) + + if project_name in self.requirements: + return self.requirements[project_name] + + raise KeyError(f"No project with the name {name!r}") + + @property + def all_requirements(self) -> List[InstallRequirement]: + return self.unnamed_requirements + list(self.requirements.values()) diff --git a/python/lib/python3.10/site-packages/pip/_internal/req/req_tracker.py b/python/lib/python3.10/site-packages/pip/_internal/req/req_tracker.py new file mode 100644 index 0000000..24d3c53 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/req/req_tracker.py @@ -0,0 +1,124 @@ +import contextlib +import hashlib +import logging +import os +from types import TracebackType +from typing import Dict, Iterator, Optional, Set, Type, Union + +from pip._internal.models.link import Link +from pip._internal.req.req_install import InstallRequirement +from pip._internal.utils.temp_dir import TempDirectory + +logger = logging.getLogger(__name__) + + +@contextlib.contextmanager +def update_env_context_manager(**changes: str) -> Iterator[None]: + target = os.environ + + # Save values from the target and change them. + non_existent_marker = object() + saved_values: Dict[str, Union[object, str]] = {} + for name, new_value in changes.items(): + try: + saved_values[name] = target[name] + except KeyError: + saved_values[name] = non_existent_marker + target[name] = new_value + + try: + yield + finally: + # Restore original values in the target. + for name, original_value in saved_values.items(): + if original_value is non_existent_marker: + del target[name] + else: + assert isinstance(original_value, str) # for mypy + target[name] = original_value + + +@contextlib.contextmanager +def get_requirement_tracker() -> Iterator["RequirementTracker"]: + root = os.environ.get("PIP_REQ_TRACKER") + with contextlib.ExitStack() as ctx: + if root is None: + root = ctx.enter_context(TempDirectory(kind="req-tracker")).path + ctx.enter_context(update_env_context_manager(PIP_REQ_TRACKER=root)) + logger.debug("Initialized build tracking at %s", root) + + with RequirementTracker(root) as tracker: + yield tracker + + +class RequirementTracker: + def __init__(self, root: str) -> None: + self._root = root + self._entries: Set[InstallRequirement] = set() + logger.debug("Created build tracker: %s", self._root) + + def __enter__(self) -> "RequirementTracker": + logger.debug("Entered build tracker: %s", self._root) + return self + + def __exit__( + self, + exc_type: Optional[Type[BaseException]], + exc_val: Optional[BaseException], + exc_tb: Optional[TracebackType], + ) -> None: + self.cleanup() + + def _entry_path(self, link: Link) -> str: + hashed = hashlib.sha224(link.url_without_fragment.encode()).hexdigest() + return os.path.join(self._root, hashed) + + def add(self, req: InstallRequirement) -> None: + """Add an InstallRequirement to build tracking.""" + + assert req.link + # Get the file to write information about this requirement. + entry_path = self._entry_path(req.link) + + # Try reading from the file. If it exists and can be read from, a build + # is already in progress, so a LookupError is raised. 
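+        # (Each entry is one flat file under self._root, named by the sha224
+        # hex digest of the link URL; e.g. an illustrative link such as
+        # https://example.com/pkg-1.0.tar.gz maps to one such file.)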
+ try: + with open(entry_path) as fp: + contents = fp.read() + except FileNotFoundError: + pass + else: + message = "{} is already being built: {}".format(req.link, contents) + raise LookupError(message) + + # If we're here, req should really not be building already. + assert req not in self._entries + + # Start tracking this requirement. + with open(entry_path, "w", encoding="utf-8") as fp: + fp.write(str(req)) + self._entries.add(req) + + logger.debug("Added %s to build tracker %r", req, self._root) + + def remove(self, req: InstallRequirement) -> None: + """Remove an InstallRequirement from build tracking.""" + + assert req.link + # Delete the created file and the corresponding entries. + os.unlink(self._entry_path(req.link)) + self._entries.remove(req) + + logger.debug("Removed %s from build tracker %r", req, self._root) + + def cleanup(self) -> None: + for req in set(self._entries): + self.remove(req) + + logger.debug("Removed build tracker: %r", self._root) + + @contextlib.contextmanager + def track(self, req: InstallRequirement) -> Iterator[None]: + self.add(req) + yield + self.remove(req) diff --git a/python/lib/python3.10/site-packages/pip/_internal/req/req_uninstall.py b/python/lib/python3.10/site-packages/pip/_internal/req/req_uninstall.py new file mode 100644 index 0000000..472090a --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/req/req_uninstall.py @@ -0,0 +1,633 @@ +import functools +import os +import sys +import sysconfig +from importlib.util import cache_from_source +from typing import Any, Callable, Dict, Iterable, Iterator, List, Optional, Set, Tuple + +from pip._internal.exceptions import UninstallationError +from pip._internal.locations import get_bin_prefix, get_bin_user +from pip._internal.metadata import BaseDistribution +from pip._internal.utils.compat import WINDOWS +from pip._internal.utils.egg_link import egg_link_path_from_location +from pip._internal.utils.logging import getLogger, indent_log +from pip._internal.utils.misc import ask, is_local, normalize_path, renames, rmtree +from pip._internal.utils.temp_dir import AdjacentTempDirectory, TempDirectory + +logger = getLogger(__name__) + + +def _script_names(bin_dir: str, script_name: str, is_gui: bool) -> Iterator[str]: + """Create the fully qualified name of the files created by + {console,gui}_scripts for the given ``dist``. + Returns the list of file names + """ + exe_name = os.path.join(bin_dir, script_name) + yield exe_name + if not WINDOWS: + return + yield f"{exe_name}.exe" + yield f"{exe_name}.exe.manifest" + if is_gui: + yield f"{exe_name}-script.pyw" + else: + yield f"{exe_name}-script.py" + + +def _unique(fn: Callable[..., Iterator[Any]]) -> Callable[..., Iterator[Any]]: + @functools.wraps(fn) + def unique(*args: Any, **kw: Any) -> Iterator[Any]: + seen: Set[Any] = set() + for item in fn(*args, **kw): + if item not in seen: + seen.add(item) + yield item + + return unique + + +@_unique +def uninstallation_paths(dist: BaseDistribution) -> Iterator[str]: + """ + Yield all the uninstallation paths for dist based on RECORD-without-.py[co] + + Yield paths to all the files in RECORD. For each .py file in RECORD, add + the .pyc and .pyo in the same directory. + + UninstallPathSet.add() takes care of the __pycache__ .py[co]. + + If RECORD is not found, raises UninstallationError, + with possible information from the INSTALLER file. 
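+
+    For example, a RECORD entry ``pkg/mod.py`` additionally yields
+    ``pkg/mod.pyc`` and ``pkg/mod.pyo`` from the same directory.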
+
+    https://packaging.python.org/specifications/recording-installed-packages/
+    """
+    location = dist.location
+    assert location is not None, "not installed"
+
+    entries = dist.iter_declared_entries()
+    if entries is None:
+        msg = "Cannot uninstall {dist}, RECORD file not found.".format(dist=dist)
+        installer = dist.installer
+        if not installer or installer == "pip":
+            dep = "{}=={}".format(dist.raw_name, dist.version)
+            msg += (
+                " You might be able to recover from this via: "
+                "'pip install --force-reinstall --no-deps {}'.".format(dep)
+            )
+        else:
+            msg += " Hint: The package was installed by {}.".format(installer)
+        raise UninstallationError(msg)
+
+    for entry in entries:
+        path = os.path.join(location, entry)
+        yield path
+        if path.endswith(".py"):
+            dn, fn = os.path.split(path)
+            base = fn[:-3]
+            path = os.path.join(dn, base + ".pyc")
+            yield path
+            path = os.path.join(dn, base + ".pyo")
+            yield path
+
+
+def compact(paths: Iterable[str]) -> Set[str]:
+    """Compact a path set to contain the minimal number of paths
+    necessary to contain all paths in the set. If /a/path/ and
+    /a/path/to/a/file.txt are both in the set, leave only the
+    shorter path."""
+
+    sep = os.path.sep
+    short_paths: Set[str] = set()
+    for path in sorted(paths, key=len):
+        should_skip = any(
+            path.startswith(shortpath.rstrip("*"))
+            and path[len(shortpath.rstrip("*").rstrip(sep))] == sep
+            for shortpath in short_paths
+        )
+        if not should_skip:
+            short_paths.add(path)
+    return short_paths
+
+
+def compress_for_rename(paths: Iterable[str]) -> Set[str]:
+    """Returns a set containing the paths that need to be renamed.
+
+    This set may include directories when the original sequence of paths
+    included every file on disk.
+    """
+    case_map = {os.path.normcase(p): p for p in paths}
+    remaining = set(case_map)
+    unchecked = sorted({os.path.split(p)[0] for p in case_map.values()}, key=len)
+    wildcards: Set[str] = set()
+
+    def norm_join(*a: str) -> str:
+        return os.path.normcase(os.path.join(*a))
+
+    for root in unchecked:
+        if any(os.path.normcase(root).startswith(w) for w in wildcards):
+            # This directory has already been handled.
+            continue
+
+        all_files: Set[str] = set()
+        all_subdirs: Set[str] = set()
+        for dirname, subdirs, files in os.walk(root):
+            all_subdirs.update(norm_join(root, dirname, d) for d in subdirs)
+            all_files.update(norm_join(root, dirname, f) for f in files)
+
+        # If all the files we found are in our remaining set of files to
+        # remove, then remove them from the latter set and add a wildcard
+        # for the directory.
+        if not (all_files - remaining):
+            remaining.difference_update(all_files)
+            wildcards.add(root + os.sep)
+
+    return set(map(case_map.__getitem__, remaining)) | wildcards
+
+
+def compress_for_output_listing(paths: Iterable[str]) -> Tuple[Set[str], Set[str]]:
+    """Returns a tuple of 2 sets of which paths to display to the user
+
+    The first set contains paths that would be deleted. Files of a package
+    are not added and the top-level directory of the package has a '*' added
+    at the end - to signify that all its contents are removed.
+
+    The second set contains files that would have been skipped in the above
+    folders.
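+
+    For example (illustrative paths): ``pkg/__init__.py`` causes ``pkg`` to
+    be listed as ``pkg/*`` in the first set, while a stray ``pkg/extra.txt``
+    found on disk but absent from ``paths`` lands in the second set.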
+ """ + + will_remove = set(paths) + will_skip = set() + + # Determine folders and files + folders = set() + files = set() + for path in will_remove: + if path.endswith(".pyc"): + continue + if path.endswith("__init__.py") or ".dist-info" in path: + folders.add(os.path.dirname(path)) + files.add(path) + + # probably this one https://github.com/python/mypy/issues/390 + _normcased_files = set(map(os.path.normcase, files)) # type: ignore + + folders = compact(folders) + + # This walks the tree using os.walk to not miss extra folders + # that might get added. + for folder in folders: + for dirpath, _, dirfiles in os.walk(folder): + for fname in dirfiles: + if fname.endswith(".pyc"): + continue + + file_ = os.path.join(dirpath, fname) + if ( + os.path.isfile(file_) + and os.path.normcase(file_) not in _normcased_files + ): + # We are skipping this file. Add it to the set. + will_skip.add(file_) + + will_remove = files | {os.path.join(folder, "*") for folder in folders} + + return will_remove, will_skip + + +class StashedUninstallPathSet: + """A set of file rename operations to stash files while + tentatively uninstalling them.""" + + def __init__(self) -> None: + # Mapping from source file root to [Adjacent]TempDirectory + # for files under that directory. + self._save_dirs: Dict[str, TempDirectory] = {} + # (old path, new path) tuples for each move that may need + # to be undone. + self._moves: List[Tuple[str, str]] = [] + + def _get_directory_stash(self, path: str) -> str: + """Stashes a directory. + + Directories are stashed adjacent to their original location if + possible, or else moved/copied into the user's temp dir.""" + + try: + save_dir: TempDirectory = AdjacentTempDirectory(path) + except OSError: + save_dir = TempDirectory(kind="uninstall") + self._save_dirs[os.path.normcase(path)] = save_dir + + return save_dir.path + + def _get_file_stash(self, path: str) -> str: + """Stashes a file. + + If no root has been provided, one will be created for the directory + in the user's temp directory.""" + path = os.path.normcase(path) + head, old_head = os.path.dirname(path), None + save_dir = None + + while head != old_head: + try: + save_dir = self._save_dirs[head] + break + except KeyError: + pass + head, old_head = os.path.dirname(head), head + else: + # Did not find any suitable root + head = os.path.dirname(path) + save_dir = TempDirectory(kind="uninstall") + self._save_dirs[head] = save_dir + + relpath = os.path.relpath(path, head) + if relpath and relpath != os.path.curdir: + return os.path.join(save_dir.path, relpath) + return save_dir.path + + def stash(self, path: str) -> str: + """Stashes the directory or file and returns its new location. + Handle symlinks as files to avoid modifying the symlink targets. + """ + path_is_dir = os.path.isdir(path) and not os.path.islink(path) + if path_is_dir: + new_path = self._get_directory_stash(path) + else: + new_path = self._get_file_stash(path) + + self._moves.append((path, new_path)) + if path_is_dir and os.path.isdir(new_path): + # If we're moving a directory, we need to + # remove the destination first or else it will be + # moved to inside the existing directory. + # We just created new_path ourselves, so it will + # be removable. 
+ os.rmdir(new_path) + renames(path, new_path) + return new_path + + def commit(self) -> None: + """Commits the uninstall by removing stashed files.""" + for _, save_dir in self._save_dirs.items(): + save_dir.cleanup() + self._moves = [] + self._save_dirs = {} + + def rollback(self) -> None: + """Undoes the uninstall by moving stashed files back.""" + for p in self._moves: + logger.info("Moving to %s\n from %s", *p) + + for new_path, path in self._moves: + try: + logger.debug("Replacing %s from %s", new_path, path) + if os.path.isfile(new_path) or os.path.islink(new_path): + os.unlink(new_path) + elif os.path.isdir(new_path): + rmtree(new_path) + renames(path, new_path) + except OSError as ex: + logger.error("Failed to restore %s", new_path) + logger.debug("Exception: %s", ex) + + self.commit() + + @property + def can_rollback(self) -> bool: + return bool(self._moves) + + +class UninstallPathSet: + """A set of file paths to be removed in the uninstallation of a + requirement.""" + + def __init__(self, dist: BaseDistribution) -> None: + self._paths: Set[str] = set() + self._refuse: Set[str] = set() + self._pth: Dict[str, UninstallPthEntries] = {} + self._dist = dist + self._moved_paths = StashedUninstallPathSet() + + def _permitted(self, path: str) -> bool: + """ + Return True if the given path is one we are permitted to + remove/modify, False otherwise. + + """ + return is_local(path) + + def add(self, path: str) -> None: + head, tail = os.path.split(path) + + # we normalize the head to resolve parent directory symlinks, but not + # the tail, since we only want to uninstall symlinks, not their targets + path = os.path.join(normalize_path(head), os.path.normcase(tail)) + + if not os.path.exists(path): + return + if self._permitted(path): + self._paths.add(path) + else: + self._refuse.add(path) + + # __pycache__ files can show up after 'installed-files.txt' is created, + # due to imports + if os.path.splitext(path)[1] == ".py": + self.add(cache_from_source(path)) + + def add_pth(self, pth_file: str, entry: str) -> None: + pth_file = normalize_path(pth_file) + if self._permitted(pth_file): + if pth_file not in self._pth: + self._pth[pth_file] = UninstallPthEntries(pth_file) + self._pth[pth_file].add(entry) + else: + self._refuse.add(pth_file) + + def remove(self, auto_confirm: bool = False, verbose: bool = False) -> None: + """Remove paths in ``self._paths`` with confirmation (unless + ``auto_confirm`` is True).""" + + if not self._paths: + logger.info( + "Can't uninstall '%s'. 
No files were found to uninstall.", + self._dist.raw_name, + ) + return + + dist_name_version = f"{self._dist.raw_name}-{self._dist.version}" + logger.info("Uninstalling %s:", dist_name_version) + + with indent_log(): + if auto_confirm or self._allowed_to_proceed(verbose): + moved = self._moved_paths + + for_rename = compress_for_rename(self._paths) + + for path in sorted(compact(for_rename)): + moved.stash(path) + logger.verbose("Removing file or directory %s", path) + + for pth in self._pth.values(): + pth.remove() + + logger.info("Successfully uninstalled %s", dist_name_version) + + def _allowed_to_proceed(self, verbose: bool) -> bool: + """Display which files would be deleted and prompt for confirmation""" + + def _display(msg: str, paths: Iterable[str]) -> None: + if not paths: + return + + logger.info(msg) + with indent_log(): + for path in sorted(compact(paths)): + logger.info(path) + + if not verbose: + will_remove, will_skip = compress_for_output_listing(self._paths) + else: + # In verbose mode, display all the files that are going to be + # deleted. + will_remove = set(self._paths) + will_skip = set() + + _display("Would remove:", will_remove) + _display("Would not remove (might be manually added):", will_skip) + _display("Would not remove (outside of prefix):", self._refuse) + if verbose: + _display("Will actually move:", compress_for_rename(self._paths)) + + return ask("Proceed (Y/n)? ", ("y", "n", "")) != "n" + + def rollback(self) -> None: + """Rollback the changes previously made by remove().""" + if not self._moved_paths.can_rollback: + logger.error( + "Can't roll back %s; was not uninstalled", + self._dist.raw_name, + ) + return + logger.info("Rolling back uninstall of %s", self._dist.raw_name) + self._moved_paths.rollback() + for pth in self._pth.values(): + pth.rollback() + + def commit(self) -> None: + """Remove temporary save dir: rollback will no longer be possible.""" + self._moved_paths.commit() + + @classmethod + def from_dist(cls, dist: BaseDistribution) -> "UninstallPathSet": + dist_location = dist.location + info_location = dist.info_location + if dist_location is None: + logger.info( + "Not uninstalling %s since it is not installed", + dist.canonical_name, + ) + return cls(dist) + + normalized_dist_location = normalize_path(dist_location) + if not dist.local: + logger.info( + "Not uninstalling %s at %s, outside environment %s", + dist.canonical_name, + normalized_dist_location, + sys.prefix, + ) + return cls(dist) + + if normalized_dist_location in { + p + for p in {sysconfig.get_path("stdlib"), sysconfig.get_path("platstdlib")} + if p + }: + logger.info( + "Not uninstalling %s at %s, as it is in the standard library.", + dist.canonical_name, + normalized_dist_location, + ) + return cls(dist) + + paths_to_remove = cls(dist) + develop_egg_link = egg_link_path_from_location(dist.raw_name) + + # Distribution is installed with metadata in a "flat" .egg-info + # directory. This means it is not a modern .dist-info installation, an + # egg, or legacy editable. + setuptools_flat_installation = ( + dist.installed_with_setuptools_egg_info + and info_location is not None + and os.path.exists(info_location) + # If dist is editable and the location points to a ``.egg-info``, + # we are in fact in the legacy editable case. 
+ and not info_location.endswith(f"{dist.setuptools_filename}.egg-info") + ) + + # Uninstall cases order do matter as in the case of 2 installs of the + # same package, pip needs to uninstall the currently detected version + if setuptools_flat_installation: + if info_location is not None: + paths_to_remove.add(info_location) + installed_files = dist.iter_declared_entries() + if installed_files is not None: + for installed_file in installed_files: + paths_to_remove.add(os.path.join(dist_location, installed_file)) + # FIXME: need a test for this elif block + # occurs with --single-version-externally-managed/--record outside + # of pip + elif dist.is_file("top_level.txt"): + try: + namespace_packages = dist.read_text("namespace_packages.txt") + except FileNotFoundError: + namespaces = [] + else: + namespaces = namespace_packages.splitlines(keepends=False) + for top_level_pkg in [ + p + for p in dist.read_text("top_level.txt").splitlines() + if p and p not in namespaces + ]: + path = os.path.join(dist_location, top_level_pkg) + paths_to_remove.add(path) + paths_to_remove.add(f"{path}.py") + paths_to_remove.add(f"{path}.pyc") + paths_to_remove.add(f"{path}.pyo") + + elif dist.installed_by_distutils: + raise UninstallationError( + "Cannot uninstall {!r}. It is a distutils installed project " + "and thus we cannot accurately determine which files belong " + "to it which would lead to only a partial uninstall.".format( + dist.raw_name, + ) + ) + + elif dist.installed_as_egg: + # package installed by easy_install + # We cannot match on dist.egg_name because it can slightly vary + # i.e. setuptools-0.6c11-py2.6.egg vs setuptools-0.6rc11-py2.6.egg + paths_to_remove.add(dist_location) + easy_install_egg = os.path.split(dist_location)[1] + easy_install_pth = os.path.join( + os.path.dirname(dist_location), + "easy-install.pth", + ) + paths_to_remove.add_pth(easy_install_pth, "./" + easy_install_egg) + + elif dist.installed_with_dist_info: + for path in uninstallation_paths(dist): + paths_to_remove.add(path) + + elif develop_egg_link: + # PEP 660 modern editable is handled in the ``.dist-info`` case + # above, so this only covers the setuptools-style editable. 
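+            # A .egg-link file (setuptools "develop" install) holds the
+            # project directory path on its first line; it must match the
+            # detected install location checked below.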
+            with open(develop_egg_link) as fh:
+                link_pointer = os.path.normcase(fh.readline().strip())
+                assert link_pointer == dist_location, (
+                    f"Egg-link {link_pointer} does not match installed location of "
+                    f"{dist.raw_name} (at {dist_location})"
+                )
+            paths_to_remove.add(develop_egg_link)
+            easy_install_pth = os.path.join(
+                os.path.dirname(develop_egg_link), "easy-install.pth"
+            )
+            paths_to_remove.add_pth(easy_install_pth, dist_location)
+
+        else:
+            logger.debug(
+                "Not sure how to uninstall: %s - Check: %s",
+                dist,
+                dist_location,
+            )
+
+        if dist.in_usersite:
+            bin_dir = get_bin_user()
+        else:
+            bin_dir = get_bin_prefix()
+
+        # find scripts installed via the distutils scripts= argument
+        try:
+            for script in dist.iterdir("scripts"):
+                paths_to_remove.add(os.path.join(bin_dir, script.name))
+                if WINDOWS:
+                    paths_to_remove.add(os.path.join(bin_dir, f"{script.name}.bat"))
+        except (FileNotFoundError, NotADirectoryError):
+            pass
+
+        # find console_scripts and gui_scripts
+        def iter_scripts_to_remove(
+            dist: BaseDistribution,
+            bin_dir: str,
+        ) -> Iterator[str]:
+            for entry_point in dist.iter_entry_points():
+                if entry_point.group == "console_scripts":
+                    yield from _script_names(bin_dir, entry_point.name, False)
+                elif entry_point.group == "gui_scripts":
+                    yield from _script_names(bin_dir, entry_point.name, True)
+
+        for s in iter_scripts_to_remove(dist, bin_dir):
+            paths_to_remove.add(s)
+
+        return paths_to_remove
+
+
+class UninstallPthEntries:
+    def __init__(self, pth_file: str) -> None:
+        self.file = pth_file
+        self.entries: Set[str] = set()
+        self._saved_lines: Optional[List[bytes]] = None
+
+    def add(self, entry: str) -> None:
+        entry = os.path.normcase(entry)
+        # On Windows, os.path.normcase converts the entry to use
+        # backslashes. This is correct for entries that describe absolute
+        # paths outside of site-packages, but all the others use forward
+        # slashes.
+        # os.path.splitdrive is used instead of os.path.isabs because isabs
+        # treats non-absolute paths with drive letter markings like c:foo\bar
+        # as absolute paths. It also does not recognize UNC paths if they don't
+        # have more than "\\server\share". Valid examples: "\\server\share\" or
+        # "\\server\share\folder".
+ if WINDOWS and not os.path.splitdrive(entry)[0]: + entry = entry.replace("\\", "/") + self.entries.add(entry) + + def remove(self) -> None: + logger.verbose("Removing pth entries from %s:", self.file) + + # If the file doesn't exist, log a warning and return + if not os.path.isfile(self.file): + logger.warning("Cannot remove entries from nonexistent file %s", self.file) + return + with open(self.file, "rb") as fh: + # windows uses '\r\n' with py3k, but uses '\n' with py2.x + lines = fh.readlines() + self._saved_lines = lines + if any(b"\r\n" in line for line in lines): + endline = "\r\n" + else: + endline = "\n" + # handle missing trailing newline + if lines and not lines[-1].endswith(endline.encode("utf-8")): + lines[-1] = lines[-1] + endline.encode("utf-8") + for entry in self.entries: + try: + logger.verbose("Removing entry: %s", entry) + lines.remove((entry + endline).encode("utf-8")) + except ValueError: + pass + with open(self.file, "wb") as fh: + fh.writelines(lines) + + def rollback(self) -> bool: + if self._saved_lines is None: + logger.error("Cannot roll back changes to %s, none were made", self.file) + return False + logger.debug("Rolling %s back to previous state", self.file) + with open(self.file, "wb") as fh: + fh.writelines(self._saved_lines) + return True diff --git a/lib/python3.11/site-packages/pip/_internal/resolution/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/resolution/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/resolution/__init__.py diff --git a/lib/python3.11/site-packages/pip/_internal/resolution/base.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/base.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/resolution/base.py rename to python/lib/python3.10/site-packages/pip/_internal/resolution/base.py diff --git a/lib/python3.11/site-packages/pip/_internal/resolution/legacy/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/legacy/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/resolution/legacy/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/resolution/legacy/__init__.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/resolution/legacy/resolver.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/legacy/resolver.py new file mode 100644 index 0000000..8c149d4 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/resolution/legacy/resolver.py @@ -0,0 +1,467 @@ +"""Dependency Resolution + +The dependency resolution in pip is performed as follows: + +for top-level requirements: + a. only one spec allowed per project, regardless of conflicts or not. + otherwise a "double requirement" exception is raised + b. they override sub-dependency requirements. +for sub-dependencies + a. "first found, wins" (where the order is breadth first) +""" + +# The following comment should be removed at some point in the future. 
+# mypy: strict-optional=False + +import logging +import sys +from collections import defaultdict +from itertools import chain +from typing import DefaultDict, Iterable, List, Optional, Set, Tuple + +from pip._vendor.packaging import specifiers +from pip._vendor.packaging.requirements import Requirement + +from pip._internal.cache import WheelCache +from pip._internal.exceptions import ( + BestVersionAlreadyInstalled, + DistributionNotFound, + HashError, + HashErrors, + NoneMetadataError, + UnsupportedPythonVersion, +) +from pip._internal.index.package_finder import PackageFinder +from pip._internal.metadata import BaseDistribution +from pip._internal.models.link import Link +from pip._internal.operations.prepare import RequirementPreparer +from pip._internal.req.req_install import ( + InstallRequirement, + check_invalid_constraint_type, +) +from pip._internal.req.req_set import RequirementSet +from pip._internal.resolution.base import BaseResolver, InstallRequirementProvider +from pip._internal.utils.compatibility_tags import get_supported +from pip._internal.utils.logging import indent_log +from pip._internal.utils.misc import normalize_version_info +from pip._internal.utils.packaging import check_requires_python + +logger = logging.getLogger(__name__) + +DiscoveredDependencies = DefaultDict[str, List[InstallRequirement]] + + +def _check_dist_requires_python( + dist: BaseDistribution, + version_info: Tuple[int, int, int], + ignore_requires_python: bool = False, +) -> None: + """ + Check whether the given Python version is compatible with a distribution's + "Requires-Python" value. + + :param version_info: A 3-tuple of ints representing the Python + major-minor-micro version to check. + :param ignore_requires_python: Whether to ignore the "Requires-Python" + value if the given Python version isn't compatible. + + :raises UnsupportedPythonVersion: When the given Python version isn't + compatible. + """ + # This idiosyncratically converts the SpecifierSet to str and let + # check_requires_python then parse it again into SpecifierSet. But this + # is the legacy resolver so I'm just not going to bother refactoring. + try: + requires_python = str(dist.requires_python) + except FileNotFoundError as e: + raise NoneMetadataError(dist, str(e)) + try: + is_compatible = check_requires_python( + requires_python, + version_info=version_info, + ) + except specifiers.InvalidSpecifier as exc: + logger.warning( + "Package %r has an invalid Requires-Python: %s", dist.raw_name, exc + ) + return + + if is_compatible: + return + + version = ".".join(map(str, version_info)) + if ignore_requires_python: + logger.debug( + "Ignoring failed Requires-Python check for package %r: %s not in %r", + dist.raw_name, + version, + requires_python, + ) + return + + raise UnsupportedPythonVersion( + "Package {!r} requires a different Python: {} not in {!r}".format( + dist.raw_name, version, requires_python + ) + ) + + +class Resolver(BaseResolver): + """Resolves which packages need to be installed/uninstalled to perform \ + the requested operation without breaking the requirements of any package. 
+ """ + + _allowed_strategies = {"eager", "only-if-needed", "to-satisfy-only"} + + def __init__( + self, + preparer: RequirementPreparer, + finder: PackageFinder, + wheel_cache: Optional[WheelCache], + make_install_req: InstallRequirementProvider, + use_user_site: bool, + ignore_dependencies: bool, + ignore_installed: bool, + ignore_requires_python: bool, + force_reinstall: bool, + upgrade_strategy: str, + py_version_info: Optional[Tuple[int, ...]] = None, + ) -> None: + super().__init__() + assert upgrade_strategy in self._allowed_strategies + + if py_version_info is None: + py_version_info = sys.version_info[:3] + else: + py_version_info = normalize_version_info(py_version_info) + + self._py_version_info = py_version_info + + self.preparer = preparer + self.finder = finder + self.wheel_cache = wheel_cache + + self.upgrade_strategy = upgrade_strategy + self.force_reinstall = force_reinstall + self.ignore_dependencies = ignore_dependencies + self.ignore_installed = ignore_installed + self.ignore_requires_python = ignore_requires_python + self.use_user_site = use_user_site + self._make_install_req = make_install_req + + self._discovered_dependencies: DiscoveredDependencies = defaultdict(list) + + def resolve( + self, root_reqs: List[InstallRequirement], check_supported_wheels: bool + ) -> RequirementSet: + """Resolve what operations need to be done + + As a side-effect of this method, the packages (and their dependencies) + are downloaded, unpacked and prepared for installation. This + preparation is done by ``pip.operations.prepare``. + + Once PyPI has static dependency metadata available, it would be + possible to move the preparation to become a step separated from + dependency resolution. + """ + requirement_set = RequirementSet(check_supported_wheels=check_supported_wheels) + for req in root_reqs: + if req.constraint: + check_invalid_constraint_type(req) + requirement_set.add_requirement(req) + + # Actually prepare the files, and collect any exceptions. Most hash + # exceptions cannot be checked ahead of time, because + # _populate_link() needs to be called before we can make decisions + # based on link type. + discovered_reqs: List[InstallRequirement] = [] + hash_errors = HashErrors() + for req in chain(requirement_set.all_requirements, discovered_reqs): + try: + discovered_reqs.extend(self._resolve_one(requirement_set, req)) + except HashError as exc: + exc.req = req + hash_errors.append(exc) + + if hash_errors: + raise hash_errors + + return requirement_set + + def _is_upgrade_allowed(self, req: InstallRequirement) -> bool: + if self.upgrade_strategy == "to-satisfy-only": + return False + elif self.upgrade_strategy == "eager": + return True + else: + assert self.upgrade_strategy == "only-if-needed" + return req.user_supplied or req.constraint + + def _set_req_to_reinstall(self, req: InstallRequirement) -> None: + """ + Set a requirement to be installed. + """ + # Don't uninstall the conflict if doing a user install and the + # conflict is not a user install. + if not self.use_user_site or req.satisfied_by.in_usersite: + req.should_reinstall = True + req.satisfied_by = None + + def _check_skip_installed( + self, req_to_install: InstallRequirement + ) -> Optional[str]: + """Check if req_to_install should be skipped. + + This will check if the req is installed, and whether we should upgrade + or reinstall it, taking into account all the relevant user options. 
+
+        After calling this, req_to_install will only have satisfied_by set to
+        None if the req_to_install is to be upgraded/reinstalled etc. Any
+        other value will be a dist recording the current thing installed that
+        satisfies the requirement.
+
+        Note that for vcs urls and the like we can't assess skipping in this
+        routine - we simply identify that we need to pull the thing down,
+        then later on it is pulled down and introspected to assess upgrade/
+        reinstalls etc.
+
+        :return: A text reason for why it was skipped, or None.
+        """
+        if self.ignore_installed:
+            return None
+
+        req_to_install.check_if_exists(self.use_user_site)
+        if not req_to_install.satisfied_by:
+            return None
+
+        if self.force_reinstall:
+            self._set_req_to_reinstall(req_to_install)
+            return None
+
+        if not self._is_upgrade_allowed(req_to_install):
+            if self.upgrade_strategy == "only-if-needed":
+                return "already satisfied, skipping upgrade"
+            return "already satisfied"
+
+        # Check for the possibility of an upgrade. For link-based
+        # requirements we have to pull the tree down and inspect to assess
+        # the version number, so it's handled further down.
+        if not req_to_install.link:
+            try:
+                self.finder.find_requirement(req_to_install, upgrade=True)
+            except BestVersionAlreadyInstalled:
+                # Then the best version is installed.
+                return "already up-to-date"
+            except DistributionNotFound:
+                # No distribution found, so we squash the error. It will
+                # be raised later when we retry the install.
+                # Why don't we just raise here?
+                pass
+
+        self._set_req_to_reinstall(req_to_install)
+        return None
+
+    def _find_requirement_link(self, req: InstallRequirement) -> Optional[Link]:
+        upgrade = self._is_upgrade_allowed(req)
+        best_candidate = self.finder.find_requirement(req, upgrade)
+        if not best_candidate:
+            return None
+
+        # Log a warning per PEP 592 if necessary before returning.
+        link = best_candidate.link
+        if link.is_yanked:
+            reason = link.yanked_reason or "<none given>"
+            msg = (
+                # Mark this as a unicode string to prevent
+                # "UnicodeEncodeError: 'ascii' codec can't encode character"
+                # in Python 2 when the reason contains non-ascii characters.
+                "The candidate selected for download or install is a "
+                "yanked version: {candidate}\n"
+                "Reason for being yanked: {reason}"
+            ).format(candidate=best_candidate, reason=reason)
+            logger.warning(msg)
+
+        return link
+
+    def _populate_link(self, req: InstallRequirement) -> None:
+        """Ensure that if a link can be found for this, that it is found.
+
+        Note that req.link may still be None - if the requirement is already
+        installed and not needed to be upgraded based on the return value of
+        _is_upgrade_allowed().
+
+        If preparer.require_hashes is True, don't use the wheel cache, because
+        cached wheels, always built locally, have different hashes than the
+        files downloaded from the index server and thus throw false hash
+        mismatches. Furthermore, cached wheels at present have nondeterministic
+        contents due to file modification times.
+        """
+        if req.link is None:
+            req.link = self._find_requirement_link(req)
+
+        if self.wheel_cache is None or self.preparer.require_hashes:
+            return
+        cache_entry = self.wheel_cache.get_cache_entry(
+            link=req.link,
+            package_name=req.name,
+            supported_tags=get_supported(),
+        )
+        if cache_entry is not None:
+            logger.debug("Using cached wheel link: %s", cache_entry.link)
+            if req.link is req.original_link and cache_entry.persistent:
+                req.original_link_is_in_wheel_cache = True
+            req.link = cache_entry.link
+
+    def _get_dist_for(self, req: InstallRequirement) -> BaseDistribution:
+        """Takes an InstallRequirement and returns a single BaseDistribution \
+        representing a prepared variant of the same.
+        """
+        if req.editable:
+            return self.preparer.prepare_editable_requirement(req)
+
+        # satisfied_by is only evaluated by calling _check_skip_installed,
+        # so it must be None here.
+        assert req.satisfied_by is None
+        skip_reason = self._check_skip_installed(req)
+
+        if req.satisfied_by:
+            return self.preparer.prepare_installed_requirement(req, skip_reason)
+
+        # We eagerly populate the link, since that's our "legacy" behavior.
+        self._populate_link(req)
+        dist = self.preparer.prepare_linked_requirement(req)
+
+        # NOTE
+        # The following portion is for determining if a certain package is
+        # going to be re-installed/upgraded or not and reporting to the user.
+        # This should probably get cleaned up in a future refactor.
+
+        # req.req is only available after unpacking for URL pkgs; repeat
+        # check_if_exists to uninstall-on-upgrade (#14)
+        if not self.ignore_installed:
+            req.check_if_exists(self.use_user_site)
+
+        if req.satisfied_by:
+            should_modify = (
+                self.upgrade_strategy != "to-satisfy-only"
+                or self.force_reinstall
+                or self.ignore_installed
+                or req.link.scheme == "file"
+            )
+            if should_modify:
+                self._set_req_to_reinstall(req)
+            else:
+                logger.info(
+                    "Requirement already satisfied (use --upgrade to upgrade): %s",
+                    req,
+                )
+        return dist
+
+    def _resolve_one(
+        self,
+        requirement_set: RequirementSet,
+        req_to_install: InstallRequirement,
+    ) -> List[InstallRequirement]:
+        """Prepare a single requirement and collect its dependencies.
+
+        :return: A list of additional InstallRequirements to also install.
+        """
+        # Tell user what we are doing for this requirement:
+        # obtain (editable), skipping, processing (local url), collecting
+        # (remote url or package name)
+        if req_to_install.constraint or req_to_install.prepared:
+            return []
+
+        req_to_install.prepared = True
+
+        # Parse and return dependencies
+        dist = self._get_dist_for(req_to_install)
+        # This will raise UnsupportedPythonVersion if the given Python
+        # version isn't compatible with the distribution's Requires-Python.
+        _check_dist_requires_python(
+            dist,
+            version_info=self._py_version_info,
+            ignore_requires_python=self.ignore_requires_python,
+        )
+
+        more_reqs: List[InstallRequirement] = []
+
+        def add_req(subreq: Requirement, extras_requested: Iterable[str]) -> None:
+            # This idiosyncratically converts the Requirement to str and then
+            # lets make_install_req parse it again into a Requirement. But this
+            # is the legacy resolver so I'm just not going to bother refactoring.
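+            # e.g. a dependency spec such as "requests>=2.0" (illustrative)
+            # is round-tripped through str() into a new child
+            # InstallRequirement attributed to req_to_install.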
+            sub_install_req = self._make_install_req(str(subreq), req_to_install)
+            parent_req_name = req_to_install.name
+            to_scan_again, add_to_parent = requirement_set.add_requirement(
+                sub_install_req,
+                parent_req_name=parent_req_name,
+                extras_requested=extras_requested,
+            )
+            if parent_req_name and add_to_parent:
+                self._discovered_dependencies[parent_req_name].append(add_to_parent)
+            more_reqs.extend(to_scan_again)
+
+        with indent_log():
+            # We add req_to_install before its dependencies, so that we
+            # can refer to it when adding dependencies.
+            if not requirement_set.has_requirement(req_to_install.name):
+                # 'unnamed' requirements will get added here
+                # 'unnamed' requirements can only come from being directly
+                # provided by the user.
+                assert req_to_install.user_supplied
+                requirement_set.add_requirement(req_to_install, parent_req_name=None)
+
+            if not self.ignore_dependencies:
+                if req_to_install.extras:
+                    logger.debug(
+                        "Installing extra requirements: %r",
+                        ",".join(req_to_install.extras),
+                    )
+                missing_requested = sorted(
+                    set(req_to_install.extras) - set(dist.iter_provided_extras())
+                )
+                for missing in missing_requested:
+                    logger.warning(
+                        "%s %s does not provide the extra '%s'",
+                        dist.raw_name,
+                        dist.version,
+                        missing,
+                    )
+
+                available_requested = sorted(
+                    set(dist.iter_provided_extras()) & set(req_to_install.extras)
+                )
+                for subreq in dist.iter_dependencies(available_requested):
+                    add_req(subreq, extras_requested=available_requested)
+
+        return more_reqs
+
+    def get_installation_order(
+        self, req_set: RequirementSet
+    ) -> List[InstallRequirement]:
+        """Create the installation order.
+
+        The installation order is topological - requirements are installed
+        before the requiring thing. We break cycles at an arbitrary point,
+        and make no other guarantees.
+        """
+        # The current implementation, which we may change at any point,
+        # installs the user-specified things in the order given, except when
+        # dependencies must come earlier to achieve topological order.
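+        # Sketch of the effect (hypothetical names): with user order [A, B]
+        # and B depending on C, schedule() below emits [A, C, B]; C is
+        # hoisted before its dependent, everything else keeps the given order.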
+ order = [] + ordered_reqs: Set[InstallRequirement] = set() + + def schedule(req: InstallRequirement) -> None: + if req.satisfied_by or req in ordered_reqs: + return + if req.constraint: + return + ordered_reqs.add(req) + for dep in self._discovered_dependencies[req.name]: + schedule(dep) + order.append(req) + + for install_req in req_set.requirements.values(): + schedule(install_req) + return order diff --git a/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/__init__.py diff --git a/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/base.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/base.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/base.py rename to python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/base.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py new file mode 100644 index 0000000..9b8450e --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py @@ -0,0 +1,547 @@ +import logging +import sys +from typing import TYPE_CHECKING, Any, FrozenSet, Iterable, Optional, Tuple, Union, cast + +from pip._vendor.packaging.utils import NormalizedName, canonicalize_name +from pip._vendor.packaging.version import Version + +from pip._internal.exceptions import ( + HashError, + InstallationSubprocessError, + MetadataInconsistent, +) +from pip._internal.metadata import BaseDistribution +from pip._internal.models.link import Link, links_equivalent +from pip._internal.models.wheel import Wheel +from pip._internal.req.constructors import ( + install_req_from_editable, + install_req_from_line, +) +from pip._internal.req.req_install import InstallRequirement +from pip._internal.utils.misc import normalize_version_info + +from .base import Candidate, CandidateVersion, Requirement, format_name + +if TYPE_CHECKING: + from .factory import Factory + +logger = logging.getLogger(__name__) + +BaseCandidate = Union[ + "AlreadyInstalledCandidate", + "EditableCandidate", + "LinkCandidate", +] + +# Avoid conflicting with the PyPI package "Python". 
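+# This sentinel acts as the "project name" of the synthetic candidate that
+# represents the running interpreter (see RequiresPythonCandidate below).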
+REQUIRES_PYTHON_IDENTIFIER = cast(NormalizedName, "<Python from Requires-Python>")
+
+
+def as_base_candidate(candidate: Candidate) -> Optional[BaseCandidate]:
+    """The runtime version of BaseCandidate."""
+    base_candidate_classes = (
+        AlreadyInstalledCandidate,
+        EditableCandidate,
+        LinkCandidate,
+    )
+    if isinstance(candidate, base_candidate_classes):
+        return candidate
+    return None
+
+
+def make_install_req_from_link(
+    link: Link, template: InstallRequirement
+) -> InstallRequirement:
+    assert not template.editable, "template is editable"
+    if template.req:
+        line = str(template.req)
+    else:
+        line = link.url
+    ireq = install_req_from_line(
+        line,
+        user_supplied=template.user_supplied,
+        comes_from=template.comes_from,
+        use_pep517=template.use_pep517,
+        isolated=template.isolated,
+        constraint=template.constraint,
+        options=dict(
+            install_options=template.install_options,
+            global_options=template.global_options,
+            hashes=template.hash_options,
+        ),
+    )
+    ireq.original_link = template.original_link
+    ireq.link = link
+    return ireq
+
+
+def make_install_req_from_editable(
+    link: Link, template: InstallRequirement
+) -> InstallRequirement:
+    assert template.editable, "template not editable"
+    return install_req_from_editable(
+        link.url,
+        user_supplied=template.user_supplied,
+        comes_from=template.comes_from,
+        use_pep517=template.use_pep517,
+        isolated=template.isolated,
+        constraint=template.constraint,
+        permit_editable_wheels=template.permit_editable_wheels,
+        options=dict(
+            install_options=template.install_options,
+            global_options=template.global_options,
+            hashes=template.hash_options,
+        ),
+    )
+
+
+def _make_install_req_from_dist(
+    dist: BaseDistribution, template: InstallRequirement
+) -> InstallRequirement:
+    if template.req:
+        line = str(template.req)
+    elif template.link:
+        line = f"{dist.canonical_name} @ {template.link.url}"
+    else:
+        line = f"{dist.canonical_name}=={dist.version}"
+    ireq = install_req_from_line(
+        line,
+        user_supplied=template.user_supplied,
+        comes_from=template.comes_from,
+        use_pep517=template.use_pep517,
+        isolated=template.isolated,
+        constraint=template.constraint,
+        options=dict(
+            install_options=template.install_options,
+            global_options=template.global_options,
+            hashes=template.hash_options,
+        ),
+    )
+    ireq.satisfied_by = dist
+    return ireq
+
+
+class _InstallRequirementBackedCandidate(Candidate):
+    """A candidate backed by an ``InstallRequirement``.
+
+    This represents a package request whose target is not already in the
+    environment, and needs to be fetched and installed. The backing
+    ``InstallRequirement`` is responsible for most of the leg work; this
+    class exposes appropriate information to the resolver.
+
+    :param link: The link passed to the ``InstallRequirement``. The backing
+        ``InstallRequirement`` will use this link to fetch the distribution.
+    :param source_link: The link this candidate "originates" from. This is
+        different from ``link`` when the link is found in the wheel cache.
+        ``link`` would point to the wheel cache, while this points to the
+        found remote link (e.g. from pypi.org).
+ """ + + dist: BaseDistribution + is_installed = False + + def __init__( + self, + link: Link, + source_link: Link, + ireq: InstallRequirement, + factory: "Factory", + name: Optional[NormalizedName] = None, + version: Optional[CandidateVersion] = None, + ) -> None: + self._link = link + self._source_link = source_link + self._factory = factory + self._ireq = ireq + self._name = name + self._version = version + self.dist = self._prepare() + + def __str__(self) -> str: + return f"{self.name} {self.version}" + + def __repr__(self) -> str: + return "{class_name}({link!r})".format( + class_name=self.__class__.__name__, + link=str(self._link), + ) + + def __hash__(self) -> int: + return hash((self.__class__, self._link)) + + def __eq__(self, other: Any) -> bool: + if isinstance(other, self.__class__): + return links_equivalent(self._link, other._link) + return False + + @property + def source_link(self) -> Optional[Link]: + return self._source_link + + @property + def project_name(self) -> NormalizedName: + """The normalised name of the project the candidate refers to""" + if self._name is None: + self._name = self.dist.canonical_name + return self._name + + @property + def name(self) -> str: + return self.project_name + + @property + def version(self) -> CandidateVersion: + if self._version is None: + self._version = self.dist.version + return self._version + + def format_for_error(self) -> str: + return "{} {} (from {})".format( + self.name, + self.version, + self._link.file_path if self._link.is_file else self._link, + ) + + def _prepare_distribution(self) -> BaseDistribution: + raise NotImplementedError("Override in subclass") + + def _check_metadata_consistency(self, dist: BaseDistribution) -> None: + """Check for consistency of project name and version of dist.""" + if self._name is not None and self._name != dist.canonical_name: + raise MetadataInconsistent( + self._ireq, + "name", + self._name, + dist.canonical_name, + ) + if self._version is not None and self._version != dist.version: + raise MetadataInconsistent( + self._ireq, + "version", + str(self._version), + str(dist.version), + ) + + def _prepare(self) -> BaseDistribution: + try: + dist = self._prepare_distribution() + except HashError as e: + # Provide HashError the underlying ireq that caused it. This + # provides context for the resulting error message to show the + # offending line to the user. + e.req = self._ireq + raise + except InstallationSubprocessError as exc: + # The output has been presented already, so don't duplicate it. + exc.context = "See above for output." 
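+            # The bare raise below re-raises with the original traceback
+            # intact; only the attached context message is adjusted.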
+ raise + + self._check_metadata_consistency(dist) + return dist + + def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]: + requires = self.dist.iter_dependencies() if with_requires else () + for r in requires: + yield self._factory.make_requirement_from_spec(str(r), self._ireq) + yield self._factory.make_requires_python_requirement(self.dist.requires_python) + + def get_install_requirement(self) -> Optional[InstallRequirement]: + return self._ireq + + +class LinkCandidate(_InstallRequirementBackedCandidate): + is_editable = False + + def __init__( + self, + link: Link, + template: InstallRequirement, + factory: "Factory", + name: Optional[NormalizedName] = None, + version: Optional[CandidateVersion] = None, + ) -> None: + source_link = link + cache_entry = factory.get_wheel_cache_entry(link, name) + if cache_entry is not None: + logger.debug("Using cached wheel link: %s", cache_entry.link) + link = cache_entry.link + ireq = make_install_req_from_link(link, template) + assert ireq.link == link + if ireq.link.is_wheel and not ireq.link.is_file: + wheel = Wheel(ireq.link.filename) + wheel_name = canonicalize_name(wheel.name) + assert name == wheel_name, f"{name!r} != {wheel_name!r} for wheel" + # Version may not be present for PEP 508 direct URLs + if version is not None: + wheel_version = Version(wheel.version) + assert version == wheel_version, "{!r} != {!r} for wheel {}".format( + version, wheel_version, name + ) + + if ( + cache_entry is not None + and cache_entry.persistent + and template.link is template.original_link + ): + ireq.original_link_is_in_wheel_cache = True + + super().__init__( + link=link, + source_link=source_link, + ireq=ireq, + factory=factory, + name=name, + version=version, + ) + + def _prepare_distribution(self) -> BaseDistribution: + preparer = self._factory.preparer + return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True) + + +class EditableCandidate(_InstallRequirementBackedCandidate): + is_editable = True + + def __init__( + self, + link: Link, + template: InstallRequirement, + factory: "Factory", + name: Optional[NormalizedName] = None, + version: Optional[CandidateVersion] = None, + ) -> None: + super().__init__( + link=link, + source_link=link, + ireq=make_install_req_from_editable(link, template), + factory=factory, + name=name, + version=version, + ) + + def _prepare_distribution(self) -> BaseDistribution: + return self._factory.preparer.prepare_editable_requirement(self._ireq) + + +class AlreadyInstalledCandidate(Candidate): + is_installed = True + source_link = None + + def __init__( + self, + dist: BaseDistribution, + template: InstallRequirement, + factory: "Factory", + ) -> None: + self.dist = dist + self._ireq = _make_install_req_from_dist(dist, template) + self._factory = factory + + # This is just logging some messages, so we can do it eagerly. + # The returned dist would be exactly the same as self.dist because we + # set satisfied_by in _make_install_req_from_dist. + # TODO: Supply reason based on force_reinstall and upgrade_strategy. 
+ skip_reason = "already satisfied" + factory.preparer.prepare_installed_requirement(self._ireq, skip_reason) + + def __str__(self) -> str: + return str(self.dist) + + def __repr__(self) -> str: + return "{class_name}({distribution!r})".format( + class_name=self.__class__.__name__, + distribution=self.dist, + ) + + def __hash__(self) -> int: + return hash((self.__class__, self.name, self.version)) + + def __eq__(self, other: Any) -> bool: + if isinstance(other, self.__class__): + return self.name == other.name and self.version == other.version + return False + + @property + def project_name(self) -> NormalizedName: + return self.dist.canonical_name + + @property + def name(self) -> str: + return self.project_name + + @property + def version(self) -> CandidateVersion: + return self.dist.version + + @property + def is_editable(self) -> bool: + return self.dist.editable + + def format_for_error(self) -> str: + return f"{self.name} {self.version} (Installed)" + + def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]: + if not with_requires: + return + for r in self.dist.iter_dependencies(): + yield self._factory.make_requirement_from_spec(str(r), self._ireq) + + def get_install_requirement(self) -> Optional[InstallRequirement]: + return None + + +class ExtrasCandidate(Candidate): + """A candidate that has 'extras', indicating additional dependencies. + + Requirements can be for a project with dependencies, something like + foo[extra]. The extras don't affect the project/version being installed + directly, but indicate that we need additional dependencies. We model that + by having an artificial ExtrasCandidate that wraps the "base" candidate. + + The ExtrasCandidate differs from the base in the following ways: + + 1. It has a unique name, of the form foo[extra]. This causes the resolver + to treat it as a separate node in the dependency graph. + 2. When we're getting the candidate's dependencies, + a) We specify that we want the extra dependencies as well. + b) We add a dependency on the base candidate. + See below for why this is needed. + 3. We return None for the underlying InstallRequirement, as the base + candidate will provide it, and we don't want to end up with duplicates. + + The dependency on the base candidate is needed so that the resolver can't + decide that it should recommend foo[extra1] version 1.0 and foo[extra2] + version 2.0. Having those candidates depend on foo=1.0 and foo=2.0 + respectively forces the resolver to recognise that this is a conflict. 
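+
+    For example, foo[extra1] 1.0 and foo[extra2] 2.0 would carry the exact
+    base pins foo==1.0 and foo==2.0, making the underlying conflict visible
+    to the resolver.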
+ """ + + def __init__( + self, + base: BaseCandidate, + extras: FrozenSet[str], + ) -> None: + self.base = base + self.extras = extras + + def __str__(self) -> str: + name, rest = str(self.base).split(" ", 1) + return "{}[{}] {}".format(name, ",".join(self.extras), rest) + + def __repr__(self) -> str: + return "{class_name}(base={base!r}, extras={extras!r})".format( + class_name=self.__class__.__name__, + base=self.base, + extras=self.extras, + ) + + def __hash__(self) -> int: + return hash((self.base, self.extras)) + + def __eq__(self, other: Any) -> bool: + if isinstance(other, self.__class__): + return self.base == other.base and self.extras == other.extras + return False + + @property + def project_name(self) -> NormalizedName: + return self.base.project_name + + @property + def name(self) -> str: + """The normalised name of the project the candidate refers to""" + return format_name(self.base.project_name, self.extras) + + @property + def version(self) -> CandidateVersion: + return self.base.version + + def format_for_error(self) -> str: + return "{} [{}]".format( + self.base.format_for_error(), ", ".join(sorted(self.extras)) + ) + + @property + def is_installed(self) -> bool: + return self.base.is_installed + + @property + def is_editable(self) -> bool: + return self.base.is_editable + + @property + def source_link(self) -> Optional[Link]: + return self.base.source_link + + def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]: + factory = self.base._factory + + # Add a dependency on the exact base + # (See note 2b in the class docstring) + yield factory.make_requirement_from_candidate(self.base) + if not with_requires: + return + + # The user may have specified extras that the candidate doesn't + # support. We ignore any unsupported extras here. + valid_extras = self.extras.intersection(self.base.dist.iter_provided_extras()) + invalid_extras = self.extras.difference(self.base.dist.iter_provided_extras()) + for extra in sorted(invalid_extras): + logger.warning( + "%s %s does not provide the extra '%s'", + self.base.name, + self.version, + extra, + ) + + for r in self.base.dist.iter_dependencies(valid_extras): + requirement = factory.make_requirement_from_spec( + str(r), self.base._ireq, valid_extras + ) + if requirement: + yield requirement + + def get_install_requirement(self) -> Optional[InstallRequirement]: + # We don't return anything here, because we always + # depend on the base candidate, and we'll get the + # install requirement from that. + return None + + +class RequiresPythonCandidate(Candidate): + is_installed = False + source_link = None + + def __init__(self, py_version_info: Optional[Tuple[int, ...]]) -> None: + if py_version_info is not None: + version_info = normalize_version_info(py_version_info) + else: + version_info = sys.version_info[:3] + self._version = Version(".".join(str(c) for c in version_info)) + + # We don't need to implement __eq__() and __ne__() since there is always + # only one RequiresPythonCandidate in a resolution, i.e. the host Python. + # The built-in object.__eq__() and object.__ne__() do exactly what we want. 
+ + def __str__(self) -> str: + return f"Python {self._version}" + + @property + def project_name(self) -> NormalizedName: + return REQUIRES_PYTHON_IDENTIFIER + + @property + def name(self) -> str: + return REQUIRES_PYTHON_IDENTIFIER + + @property + def version(self) -> CandidateVersion: + return self._version + + def format_for_error(self) -> str: + return f"Python {self.version}" + + def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]: + return () + + def get_install_requirement(self) -> Optional[InstallRequirement]: + return None diff --git a/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/factory.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/factory.py new file mode 100644 index 0000000..261d8d5 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/factory.py @@ -0,0 +1,739 @@ +import contextlib +import functools +import logging +from typing import ( + TYPE_CHECKING, + Dict, + FrozenSet, + Iterable, + Iterator, + List, + Mapping, + NamedTuple, + Optional, + Sequence, + Set, + Tuple, + TypeVar, + cast, +) + +from pip._vendor.packaging.requirements import InvalidRequirement +from pip._vendor.packaging.specifiers import SpecifierSet +from pip._vendor.packaging.utils import NormalizedName, canonicalize_name +from pip._vendor.resolvelib import ResolutionImpossible + +from pip._internal.cache import CacheEntry, WheelCache +from pip._internal.exceptions import ( + DistributionNotFound, + InstallationError, + InstallationSubprocessError, + MetadataInconsistent, + UnsupportedPythonVersion, + UnsupportedWheel, +) +from pip._internal.index.package_finder import PackageFinder +from pip._internal.metadata import BaseDistribution, get_default_environment +from pip._internal.models.link import Link +from pip._internal.models.wheel import Wheel +from pip._internal.operations.prepare import RequirementPreparer +from pip._internal.req.constructors import install_req_from_link_and_ireq +from pip._internal.req.req_install import ( + InstallRequirement, + check_invalid_constraint_type, +) +from pip._internal.resolution.base import InstallRequirementProvider +from pip._internal.utils.compatibility_tags import get_supported +from pip._internal.utils.hashes import Hashes +from pip._internal.utils.packaging import get_requirement +from pip._internal.utils.virtualenv import running_under_virtualenv + +from .base import Candidate, CandidateVersion, Constraint, Requirement +from .candidates import ( + AlreadyInstalledCandidate, + BaseCandidate, + EditableCandidate, + ExtrasCandidate, + LinkCandidate, + RequiresPythonCandidate, + as_base_candidate, +) +from .found_candidates import FoundCandidates, IndexCandidateInfo +from .requirements import ( + ExplicitRequirement, + RequiresPythonRequirement, + SpecifierRequirement, + UnsatisfiableRequirement, +) + +if TYPE_CHECKING: + from typing import Protocol + + class ConflictCause(Protocol): + requirement: RequiresPythonRequirement + parent: Candidate + + +logger = logging.getLogger(__name__) + +C = TypeVar("C") +Cache = Dict[Link, C] + + +class CollectedRootRequirements(NamedTuple): + requirements: List[Requirement] + constraints: Dict[str, Constraint] + user_requested: Dict[str, int] + + +class Factory: + def __init__( + self, + finder: PackageFinder, + preparer: RequirementPreparer, + make_install_req: InstallRequirementProvider, + wheel_cache: Optional[WheelCache], + use_user_site: bool, + force_reinstall: bool, + ignore_installed: bool, + 
ignore_requires_python: bool, + suppress_build_failures: bool, + py_version_info: Optional[Tuple[int, ...]] = None, + ) -> None: + self._finder = finder + self.preparer = preparer + self._wheel_cache = wheel_cache + self._python_candidate = RequiresPythonCandidate(py_version_info) + self._make_install_req_from_spec = make_install_req + self._use_user_site = use_user_site + self._force_reinstall = force_reinstall + self._ignore_requires_python = ignore_requires_python + self._suppress_build_failures = suppress_build_failures + + self._build_failures: Cache[InstallationError] = {} + self._link_candidate_cache: Cache[LinkCandidate] = {} + self._editable_candidate_cache: Cache[EditableCandidate] = {} + self._installed_candidate_cache: Dict[str, AlreadyInstalledCandidate] = {} + self._extras_candidate_cache: Dict[ + Tuple[int, FrozenSet[str]], ExtrasCandidate + ] = {} + + if not ignore_installed: + env = get_default_environment() + self._installed_dists = { + dist.canonical_name: dist + for dist in env.iter_installed_distributions(local_only=False) + } + else: + self._installed_dists = {} + + @property + def force_reinstall(self) -> bool: + return self._force_reinstall + + def _fail_if_link_is_unsupported_wheel(self, link: Link) -> None: + if not link.is_wheel: + return + wheel = Wheel(link.filename) + if wheel.supported(self._finder.target_python.get_tags()): + return + msg = f"{link.filename} is not a supported wheel on this platform." + raise UnsupportedWheel(msg) + + def _make_extras_candidate( + self, base: BaseCandidate, extras: FrozenSet[str] + ) -> ExtrasCandidate: + cache_key = (id(base), extras) + try: + candidate = self._extras_candidate_cache[cache_key] + except KeyError: + candidate = ExtrasCandidate(base, extras) + self._extras_candidate_cache[cache_key] = candidate + return candidate + + def _make_candidate_from_dist( + self, + dist: BaseDistribution, + extras: FrozenSet[str], + template: InstallRequirement, + ) -> Candidate: + try: + base = self._installed_candidate_cache[dist.canonical_name] + except KeyError: + base = AlreadyInstalledCandidate(dist, template, factory=self) + self._installed_candidate_cache[dist.canonical_name] = base + if not extras: + return base + return self._make_extras_candidate(base, extras) + + def _make_candidate_from_link( + self, + link: Link, + extras: FrozenSet[str], + template: InstallRequirement, + name: Optional[NormalizedName], + version: Optional[CandidateVersion], + ) -> Optional[Candidate]: + # TODO: Check already installed candidate, and use it if the link and + # editable flag match. + + if link in self._build_failures: + # We already tried this candidate before, and it does not build. + # Don't bother trying again. 
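# ----------------------------------------------------------------------
# The _build_failures check above is a negative-result cache. A minimal
# sketch of the pattern with toy names (build_once is hypothetical, not
# a pip API): a failure is remembered per key, so the expensive build is
# attempted at most once.
from typing import Callable, Dict, Optional

_failures: Dict[str, Exception] = {}


def build_once(key: str, build: Callable[[str], str]) -> Optional[str]:
    if key in _failures:  # known-bad: skip immediately
        return None
    try:
        return build(key)
    except ValueError as exc:  # stand-in for pip's build errors
        _failures[key] = exc
        return None


calls = []


def flaky(key: str) -> str:
    calls.append(key)
    raise ValueError("boom")


assert build_once("x", flaky) is None
assert build_once("x", flaky) is None
assert calls == ["x"]  # the second call never re-runs the build
# ----------------------------------------------------------------------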
+ return None + + if template.editable: + if link not in self._editable_candidate_cache: + try: + self._editable_candidate_cache[link] = EditableCandidate( + link, + template, + factory=self, + name=name, + version=version, + ) + except MetadataInconsistent as e: + logger.info( + "Discarding [blue underline]%s[/]: [yellow]%s[reset]", + link, + e, + extra={"markup": True}, + ) + self._build_failures[link] = e + return None + except InstallationSubprocessError as e: + if not self._suppress_build_failures: + raise + logger.warning("Discarding %s due to build failure: %s", link, e) + self._build_failures[link] = e + return None + + base: BaseCandidate = self._editable_candidate_cache[link] + else: + if link not in self._link_candidate_cache: + try: + self._link_candidate_cache[link] = LinkCandidate( + link, + template, + factory=self, + name=name, + version=version, + ) + except MetadataInconsistent as e: + logger.info( + "Discarding [blue underline]%s[/]: [yellow]%s[reset]", + link, + e, + extra={"markup": True}, + ) + self._build_failures[link] = e + return None + except InstallationSubprocessError as e: + if not self._suppress_build_failures: + raise + logger.warning("Discarding %s due to build failure: %s", link, e) + self._build_failures[link] = e + return None + base = self._link_candidate_cache[link] + + if not extras: + return base + return self._make_extras_candidate(base, extras) + + def _iter_found_candidates( + self, + ireqs: Sequence[InstallRequirement], + specifier: SpecifierSet, + hashes: Hashes, + prefers_installed: bool, + incompatible_ids: Set[int], + ) -> Iterable[Candidate]: + if not ireqs: + return () + + # The InstallRequirement implementation requires us to give it a + # "template". Here we just choose the first requirement to represent + # all of them. + # Hopefully the Project model can correct this mismatch in the future. + template = ireqs[0] + assert template.req, "Candidates found on index must be PEP 508" + name = canonicalize_name(template.req.name) + + extras: FrozenSet[str] = frozenset() + for ireq in ireqs: + assert ireq.req, "Candidates found on index must be PEP 508" + specifier &= ireq.req.specifier + hashes &= ireq.hashes(trust_internet=False) + extras |= frozenset(ireq.extras) + + def _get_installed_candidate() -> Optional[Candidate]: + """Get the candidate for the currently-installed version.""" + # If --force-reinstall is set, we want the version from the index + # instead, so we "pretend" there is nothing installed. + if self._force_reinstall: + return None + try: + installed_dist = self._installed_dists[name] + except KeyError: + return None + # Don't use the installed distribution if its version does not fit + # the current dependency graph. + if not specifier.contains(installed_dist.version, prereleases=True): + return None + candidate = self._make_candidate_from_dist( + dist=installed_dist, + extras=extras, + template=template, + ) + # The candidate is a known incompatibility. Don't use it. + if id(candidate) in incompatible_ids: + return None + return candidate + + def iter_index_candidate_infos() -> Iterator[IndexCandidateInfo]: + result = self._finder.find_best_candidate( + project_name=name, + specifier=specifier, + hashes=hashes, + ) + icans = list(result.iter_applicable()) + + # PEP 592: Yanked releases are ignored unless the specifier + # explicitly pins a version (via '==' or '===') that can be + # solely satisfied by a yanked release. 
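# ----------------------------------------------------------------------
# The PEP 592 rule above, sketched with packaging's SpecifierSet
# (assuming the standalone `packaging` distribution is available; pip
# itself uses a vendored copy): a yanked release is only usable when
# every applicable candidate is yanked AND the specifier pins an exact
# version -- `===`, or `==` without a trailing `.*`.
from packaging.specifiers import SpecifierSet

for spec, expected in [
    ("==1.2.3", True),
    ("===1.2.3", True),
    ("==1.2.*", False),  # wildcard equality is not a pin
    (">=1.0", False),
]:
    pinned = any(
        s.operator == "==="
        or (s.operator == "==" and not s.version.endswith(".*"))
        for s in SpecifierSet(spec)
    )
    assert pinned is expected, spec
# ----------------------------------------------------------------------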
+ all_yanked = all(ican.link.is_yanked for ican in icans) + + def is_pinned(specifier: SpecifierSet) -> bool: + for sp in specifier: + if sp.operator == "===": + return True + if sp.operator != "==": + continue + if sp.version.endswith(".*"): + continue + return True + return False + + pinned = is_pinned(specifier) + + # PackageFinder returns earlier versions first, so we reverse. + for ican in reversed(icans): + if not (all_yanked and pinned) and ican.link.is_yanked: + continue + func = functools.partial( + self._make_candidate_from_link, + link=ican.link, + extras=extras, + template=template, + name=name, + version=ican.version, + ) + yield ican.version, func + + return FoundCandidates( + iter_index_candidate_infos, + _get_installed_candidate(), + prefers_installed, + incompatible_ids, + ) + + def _iter_explicit_candidates_from_base( + self, + base_requirements: Iterable[Requirement], + extras: FrozenSet[str], + ) -> Iterator[Candidate]: + """Produce explicit candidates from the base given an extra-ed package. + + :param base_requirements: Requirements known to the resolver. The + requirements are guaranteed to not have extras. + :param extras: The extras to inject into the explicit requirements' + candidates. + """ + for req in base_requirements: + lookup_cand, _ = req.get_candidate_lookup() + if lookup_cand is None: # Not explicit. + continue + # We've stripped extras from the identifier, and should always + # get a BaseCandidate here, unless there's a bug elsewhere. + base_cand = as_base_candidate(lookup_cand) + assert base_cand is not None, "no extras here" + yield self._make_extras_candidate(base_cand, extras) + + def _iter_candidates_from_constraints( + self, + identifier: str, + constraint: Constraint, + template: InstallRequirement, + ) -> Iterator[Candidate]: + """Produce explicit candidates from constraints. + + This creates "fake" InstallRequirement objects that are basically clones + of what "should" be the template, but with original_link set to link. + """ + for link in constraint.links: + self._fail_if_link_is_unsupported_wheel(link) + candidate = self._make_candidate_from_link( + link, + extras=frozenset(), + template=install_req_from_link_and_ireq(link, template), + name=canonicalize_name(identifier), + version=None, + ) + if candidate: + yield candidate + + def find_candidates( + self, + identifier: str, + requirements: Mapping[str, Iterable[Requirement]], + incompatibilities: Mapping[str, Iterator[Candidate]], + constraint: Constraint, + prefers_installed: bool, + ) -> Iterable[Candidate]: + # Collect basic lookup information from the requirements. + explicit_candidates: Set[Candidate] = set() + ireqs: List[InstallRequirement] = [] + for req in requirements[identifier]: + cand, ireq = req.get_candidate_lookup() + if cand is not None: + explicit_candidates.add(cand) + if ireq is not None: + ireqs.append(ireq) + + # If the current identifier contains extras, add explicit candidates + # from entries from extra-less identifier. + with contextlib.suppress(InvalidRequirement): + parsed_requirement = get_requirement(identifier) + explicit_candidates.update( + self._iter_explicit_candidates_from_base( + requirements.get(parsed_requirement.name, ()), + frozenset(parsed_requirement.extras), + ), + ) + + # Add explicit candidates from constraints. We only do this if there are + # known ireqs, which represent requirements not already explicit. 
If + # there are no ireqs, we're constraining already-explicit requirements, + # which is handled later when we return the explicit candidates. + if ireqs: + try: + explicit_candidates.update( + self._iter_candidates_from_constraints( + identifier, + constraint, + template=ireqs[0], + ), + ) + except UnsupportedWheel: + # If we're constrained to install a wheel incompatible with the + # target architecture, no candidates will ever be valid. + return () + + # Since we cache all the candidates, incompatibility identification + # can be made quicker by comparing only the id() values. + incompat_ids = {id(c) for c in incompatibilities.get(identifier, ())} + + # If none of the requirements want an explicit candidate, we can ask + # the finder for candidates. + if not explicit_candidates: + return self._iter_found_candidates( + ireqs, + constraint.specifier, + constraint.hashes, + prefers_installed, + incompat_ids, + ) + + return ( + c + for c in explicit_candidates + if id(c) not in incompat_ids + and constraint.is_satisfied_by(c) + and all(req.is_satisfied_by(c) for req in requirements[identifier]) + ) + + def _make_requirement_from_install_req( + self, ireq: InstallRequirement, requested_extras: Iterable[str] + ) -> Optional[Requirement]: + if not ireq.match_markers(requested_extras): + logger.info( + "Ignoring %s: markers '%s' don't match your environment", + ireq.name, + ireq.markers, + ) + return None + if not ireq.link: + return SpecifierRequirement(ireq) + self._fail_if_link_is_unsupported_wheel(ireq.link) + cand = self._make_candidate_from_link( + ireq.link, + extras=frozenset(ireq.extras), + template=ireq, + name=canonicalize_name(ireq.name) if ireq.name else None, + version=None, + ) + if cand is None: + # There's no way we can satisfy a URL requirement if the underlying + # candidate fails to build. An unnamed URL must be user-supplied, so + # we fail eagerly. If the URL is named, an unsatisfiable requirement + # can make the resolver do the right thing, either backtrack (and + # maybe find some other requirement that's buildable) or raise a + # ResolutionImpossible eventually. 
+ if not ireq.name: + raise self._build_failures[ireq.link] + return UnsatisfiableRequirement(canonicalize_name(ireq.name)) + return self.make_requirement_from_candidate(cand) + + def collect_root_requirements( + self, root_ireqs: List[InstallRequirement] + ) -> CollectedRootRequirements: + collected = CollectedRootRequirements([], {}, {}) + for i, ireq in enumerate(root_ireqs): + if ireq.constraint: + # Ensure we only accept valid constraints + problem = check_invalid_constraint_type(ireq) + if problem: + raise InstallationError(problem) + if not ireq.match_markers(): + continue + assert ireq.name, "Constraint must be named" + name = canonicalize_name(ireq.name) + if name in collected.constraints: + collected.constraints[name] &= ireq + else: + collected.constraints[name] = Constraint.from_ireq(ireq) + else: + req = self._make_requirement_from_install_req( + ireq, + requested_extras=(), + ) + if req is None: + continue + if ireq.user_supplied and req.name not in collected.user_requested: + collected.user_requested[req.name] = i + collected.requirements.append(req) + return collected + + def make_requirement_from_candidate( + self, candidate: Candidate + ) -> ExplicitRequirement: + return ExplicitRequirement(candidate) + + def make_requirement_from_spec( + self, + specifier: str, + comes_from: Optional[InstallRequirement], + requested_extras: Iterable[str] = (), + ) -> Optional[Requirement]: + ireq = self._make_install_req_from_spec(specifier, comes_from) + return self._make_requirement_from_install_req(ireq, requested_extras) + + def make_requires_python_requirement( + self, + specifier: SpecifierSet, + ) -> Optional[Requirement]: + if self._ignore_requires_python: + return None + # Don't bother creating a dependency for an empty Requires-Python. + if not str(specifier): + return None + return RequiresPythonRequirement(specifier, self._python_candidate) + + def get_wheel_cache_entry( + self, link: Link, name: Optional[str] + ) -> Optional[CacheEntry]: + """Look up the link in the wheel cache. + + If ``preparer.require_hashes`` is True, don't use the wheel cache, + because cached wheels, always built locally, have different hashes + than the files downloaded from the index server and thus throw false + hash mismatches. Furthermore, cached wheels at present have + nondeterministic contents due to file modification times. + """ + if self._wheel_cache is None or self.preparer.require_hashes: + return None + return self._wheel_cache.get_cache_entry( + link=link, + package_name=name, + supported_tags=get_supported(), + ) + + def get_dist_to_uninstall(self, candidate: Candidate) -> Optional[BaseDistribution]: + # TODO: Are there more cases this needs to return True? Editable? + dist = self._installed_dists.get(candidate.project_name) + if dist is None: # Not installed, no uninstallation required. + return None + + # We're installing into global site. The current installation must + # be uninstalled, no matter whether it's in the global or user site, + # because the user site installation has precedence over global. + if not self._use_user_site: + return dist + + # We're installing into user site. Remove the user site installation. + if dist.in_usersite: + return dist + + # We're installing into user site, but the installed incompatible + # package is in global site. We can't uninstall that, and would let + # the new user installation "shadow" it. But shadowing won't work + # in virtual environments, so we error out.
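# ----------------------------------------------------------------------
# The branch below is easier to see as a decision table. A hedged sketch
# of the same logic as a pure function (should_replace_sketch is a toy
# name; the real method returns the distribution, None, or raises):
from typing import Optional


def should_replace_sketch(
    use_user_site: bool,
    in_usersite: bool,
    in_site_packages: bool,
    in_virtualenv: bool,
) -> Optional[str]:
    if not use_user_site:
        return "uninstall"  # global install replaces whatever is there
    if in_usersite:
        return "uninstall"  # user-site install replaces the user-site copy
    if in_virtualenv and in_site_packages:
        return "error"      # shadowing does not work inside a virtualenv
    return None             # leave the global copy alone; shadow it


assert should_replace_sketch(False, False, True, False) == "uninstall"
assert should_replace_sketch(True, True, False, False) == "uninstall"
assert should_replace_sketch(True, False, True, True) == "error"
assert should_replace_sketch(True, False, True, False) is None
# ----------------------------------------------------------------------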
+ if running_under_virtualenv() and dist.in_site_packages: + message = ( + f"Will not install to the user site because it will lack " + f"sys.path precedence to {dist.raw_name} in {dist.location}" + ) + raise InstallationError(message) + return None + + def _report_requires_python_error( + self, causes: Sequence["ConflictCause"] + ) -> UnsupportedPythonVersion: + assert causes, "Requires-Python error reported with no cause" + + version = self._python_candidate.version + + if len(causes) == 1: + specifier = str(causes[0].requirement.specifier) + message = ( + f"Package {causes[0].parent.name!r} requires a different " + f"Python: {version} not in {specifier!r}" + ) + return UnsupportedPythonVersion(message) + + message = f"Packages require a different Python. {version} not in:" + for cause in causes: + package = cause.parent.format_for_error() + specifier = str(cause.requirement.specifier) + message += f"\n{specifier!r} (required by {package})" + return UnsupportedPythonVersion(message) + + def _report_single_requirement_conflict( + self, req: Requirement, parent: Optional[Candidate] + ) -> DistributionNotFound: + if parent is None: + req_disp = str(req) + else: + req_disp = f"{req} (from {parent.name})" + + cands = self._finder.find_all_candidates(req.project_name) + versions = [str(v) for v in sorted({c.version for c in cands})] + + logger.critical( + "Could not find a version that satisfies the requirement %s " + "(from versions: %s)", + req_disp, + ", ".join(versions) or "none", + ) + if str(req) == "requirements.txt": + logger.info( + "HINT: You are attempting to install a package literally " + 'named "requirements.txt" (which cannot exist). Consider ' + "using the '-r' flag to install the packages listed in " + "requirements.txt" + ) + + return DistributionNotFound(f"No matching distribution found for {req}") + + def get_installation_error( + self, + e: "ResolutionImpossible[Requirement, Candidate]", + constraints: Dict[str, Constraint], + ) -> InstallationError: + + assert e.causes, "Installation error reported with no cause" + + # If one of the things we can't solve is "we need Python X.Y", + # that is what we report. + requires_python_causes = [ + cause + for cause in e.causes + if isinstance(cause.requirement, RequiresPythonRequirement) + and not cause.requirement.is_satisfied_by(self._python_candidate) + ] + if requires_python_causes: + # The comprehension above makes sure all Requirement instances are + # RequiresPythonRequirement, so let's cast for convenience. + return self._report_requires_python_error( + cast("Sequence[ConflictCause]", requires_python_causes), + ) + + # Otherwise, we have a set of causes which can't all be satisfied + # at once. + + # The simplest case is when we have *one* cause that can't be + # satisfied. We just report that case. + if len(e.causes) == 1: + req, parent = e.causes[0] + if req.name not in constraints: + return self._report_single_requirement_conflict(req, parent) + + # OK, we now have a list of requirements that can't all be + # satisfied at once. 
+ + # A couple of formatting helpers + def text_join(parts: List[str]) -> str: + if len(parts) == 1: + return parts[0] + + return ", ".join(parts[:-1]) + " and " + parts[-1] + + def describe_trigger(parent: Candidate) -> str: + ireq = parent.get_install_requirement() + if not ireq or not ireq.comes_from: + return f"{parent.name}=={parent.version}" + if isinstance(ireq.comes_from, InstallRequirement): + return str(ireq.comes_from.name) + return str(ireq.comes_from) + + triggers = set() + for req, parent in e.causes: + if parent is None: + # This is a root requirement, so we can report it directly + trigger = req.format_for_error() + else: + trigger = describe_trigger(parent) + triggers.add(trigger) + + if triggers: + info = text_join(sorted(triggers)) + else: + info = "the requested packages" + + msg = ( + "Cannot install {} because these package versions " + "have conflicting dependencies.".format(info) + ) + logger.critical(msg) + msg = "\nThe conflict is caused by:" + + relevant_constraints = set() + for req, parent in e.causes: + if req.name in constraints: + relevant_constraints.add(req.name) + msg = msg + "\n " + if parent: + msg = msg + f"{parent.name} {parent.version} depends on " + else: + msg = msg + "The user requested " + msg = msg + req.format_for_error() + for key in relevant_constraints: + spec = constraints[key].specifier + msg += f"\n The user requested (constraint) {key}{spec}" + + msg = ( + msg + + "\n\n" + + "To fix this you could try to:\n" + + "1. loosen the range of package versions you've specified\n" + + "2. remove package versions to allow pip to attempt to solve " + + "the dependency conflict\n" + ) + + logger.info(msg) + + return DistributionNotFound( + "ResolutionImpossible: for help visit " + "https://pip.pypa.io/en/latest/topics/dependency-resolution/" + "#dealing-with-dependency-conflicts" + ) diff --git a/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py rename to python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/provider.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/provider.py new file mode 100644 index 0000000..e6ec959 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/provider.py @@ -0,0 +1,248 @@ +import collections +import math +from typing import ( + TYPE_CHECKING, + Dict, + Iterable, + Iterator, + Mapping, + Sequence, + TypeVar, + Union, +) + +from pip._vendor.resolvelib.providers import AbstractProvider + +from .base import Candidate, Constraint, Requirement +from .candidates import REQUIRES_PYTHON_IDENTIFIER +from .factory import Factory + +if TYPE_CHECKING: + from pip._vendor.resolvelib.providers import Preference + from pip._vendor.resolvelib.resolvers import RequirementInformation + + PreferenceInformation = RequirementInformation[Requirement, Candidate] + + _ProviderBase = AbstractProvider[Requirement, Candidate, str] +else: + _ProviderBase = AbstractProvider + +# Notes on the relationship between the provider, the factory, and the +# candidate and requirement classes. +# +# The provider is a direct implementation of the resolvelib class. Its role +# is to deliver the API that resolvelib expects.
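# ----------------------------------------------------------------------
# The resolvelib contract in miniature. A sketch under stated
# assumptions -- resolvelib >= 0.8 (the API generation vendored here) --
# with toy tuples instead of pip's Requirement/Candidate classes:
# requirements are (name, allowed_versions) pairs and candidates are
# (name, version) pairs. Names like ToyProvider and UNIVERSE are
# hypothetical, not pip's.
import resolvelib
from resolvelib.providers import AbstractProvider

# name -> {version: [dependency requirements]}
UNIVERSE = {
    "app": {1: [("lib", {1, 2})]},
    "lib": {1: [], 2: []},
}


class ToyProvider(AbstractProvider):
    def identify(self, requirement_or_candidate):
        return requirement_or_candidate[0]

    def get_preference(self, identifier, *args, **kwargs):
        return identifier  # naive: resolve in alphabetical order

    def find_matches(self, identifier, requirements, incompatibilities):
        versions = set(UNIVERSE[identifier])
        for _, allowed in requirements[identifier]:
            versions &= allowed
        versions -= {version for _, version in incompatibilities[identifier]}
        # Prefer newer versions, like pip does.
        return [(identifier, v) for v in sorted(versions, reverse=True)]

    def is_satisfied_by(self, requirement, candidate):
        return candidate[1] in requirement[1]

    def get_dependencies(self, candidate):
        return UNIVERSE[candidate[0]][candidate[1]]


resolver = resolvelib.Resolver(ToyProvider(), resolvelib.BaseReporter())
result = resolver.resolve([("app", {1})])
assert result.mapping == {"app": ("app", 1), "lib": ("lib", 2)}
# ----------------------------------------------------------------------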
+# +# Rather than work with completely abstract "requirement" and "candidate" +# concepts as resolvelib does, pip has concrete classes implementing these two +# ideas. The APIs of Requirement and Candidate objects are defined in the base +# classes, but essentially map fairly directly to the equivalent provider +# methods. In particular, `find_matches` and `is_satisfied_by` are +# requirement methods, and `get_dependencies` is a candidate method. +# +# The factory is the interface to pip's internal mechanisms. It is stateless, +# and is created by the resolver and held as a property of the provider. It is +# responsible for creating Requirement and Candidate objects, and provides +# services to those objects (access to pip's finder and preparer). + + +D = TypeVar("D") +V = TypeVar("V") + + +def _get_with_identifier( + mapping: Mapping[str, V], + identifier: str, + default: D, +) -> Union[D, V]: + """Get item from a package name lookup mapping with a resolver identifier. + + This extra logic is needed when the target mapping is keyed by package + name, which cannot be directly looked up with an identifier (which may + contain requested extras). Additional logic is added to also look up a value + by "cleaning up" the extras from the identifier. + """ + if identifier in mapping: + return mapping[identifier] + # HACK: Theoretically we should check whether this identifier is a valid + # "NAME[EXTRAS]" format, and parse out the name part with packaging or + # some regular expression. But since pip's resolver only spits out three + # kinds of identifiers: normalized PEP 503 names, normalized names plus + # extras, and Requires-Python, we can cheat a bit here. + name, open_bracket, _ = identifier.partition("[") + if open_bracket and name in mapping: + return mapping[name] + return default + + +class PipProvider(_ProviderBase): + """Pip's provider implementation for resolvelib. + + :params constraints: A mapping of constraints specified by the user. Keys + are canonicalized project names. + :params ignore_dependencies: Whether the user specified ``--no-deps``. + :params upgrade_strategy: The user-specified upgrade strategy. + :params user_requested: A set of canonicalized package names that the user + supplied for pip to install/upgrade. + """ + + def __init__( + self, + factory: Factory, + constraints: Dict[str, Constraint], + ignore_dependencies: bool, + upgrade_strategy: str, + user_requested: Dict[str, int], + ) -> None: + self._factory = factory + self._constraints = constraints + self._ignore_dependencies = ignore_dependencies + self._upgrade_strategy = upgrade_strategy + self._user_requested = user_requested + self._known_depths: Dict[str, float] = collections.defaultdict(lambda: math.inf) + + def identify(self, requirement_or_candidate: Union[Requirement, Candidate]) -> str: + return requirement_or_candidate.name + + def get_preference( # type: ignore + self, + identifier: str, + resolutions: Mapping[str, Candidate], + candidates: Mapping[str, Iterator[Candidate]], + information: Mapping[str, Iterable["PreferenceInformation"]], + backtrack_causes: Sequence["PreferenceInformation"], + ) -> "Preference": + """Produce a sort key for the given requirement based on preference. + + The lower the return value is, the more preferred this group of + arguments is. + + Currently pip considers the following in order: + + * Prefer if any of the known requirements is "direct", e.g. points to an + explicit URL. + * If equal, prefer if any requirement is "pinned", i.e. contains + the operator ``===`` or ``==``.
+ * If equal, calculate an approximate "depth" and resolve requirements + closer to the user-specified requirements first. + * Order user-specified requirements by the order they are specified. + * If equal, prefer "non-free" requirements, i.e. containing at least one + operator, such as ``>=`` or ``<``. + * If equal, order alphabetically for consistency (helps debuggability). + """ + lookups = (r.get_candidate_lookup() for r, _ in information[identifier]) + candidate, ireqs = zip(*lookups) + operators = [ + specifier.operator + for specifier_set in (ireq.specifier for ireq in ireqs if ireq) + for specifier in specifier_set + ] + + direct = candidate is not None + pinned = any(op[:2] == "==" for op in operators) + unfree = bool(operators) + + try: + requested_order: Union[int, float] = self._user_requested[identifier] + except KeyError: + requested_order = math.inf + parent_depths = ( + self._known_depths[parent.name] if parent is not None else 0.0 + for _, parent in information[identifier] + ) + inferred_depth = min(d for d in parent_depths) + 1.0 + else: + inferred_depth = 1.0 + self._known_depths[identifier] = inferred_depth + + requested_order = self._user_requested.get(identifier, math.inf) + + # Requires-Python has only one candidate and the check is basically + # free, so we always do it first to avoid needless work if it fails. + requires_python = identifier == REQUIRES_PYTHON_IDENTIFIER + + # HACK: Setuptools has a very long and solid backward compatibility + # track record, and extremely few projects would request a narrow, + # non-recent version range of it since that would break a lot of things. + # (Most projects specify it only to request an installer feature, + # which does not work, but that's another topic.) Intentionally + # delaying Setuptools helps reduce branches the resolver has to check. + # This serves as a temporary fix for issues like "apache-airflow[all]" + # while we work on "proper" branch pruning techniques. + delay_this = identifier == "setuptools" + + # Prefer the causes of backtracking on the assumption that the problem + # resolving the dependency tree is related to the failures that caused + # the backtracking. + backtrack_cause = self.is_backtrack_cause(identifier, backtrack_causes) + + return ( + not requires_python, + delay_this, + not direct, + not pinned, + not backtrack_cause, + inferred_depth, + requested_order, + not unfree, + identifier, + ) + + def find_matches( + self, + identifier: str, + requirements: Mapping[str, Iterator[Requirement]], + incompatibilities: Mapping[str, Iterator[Candidate]], + ) -> Iterable[Candidate]: + def _eligible_for_upgrade(identifier: str) -> bool: + """Are upgrades allowed for this project? + + This checks the upgrade strategy, and whether the project was one + that the user specified in the command line, in order to decide + whether we should upgrade if there's a newer version available. + + (Note that we don't need access to the `--upgrade` flag, because + an upgrade strategy of "to-satisfy-only" means that `--upgrade` + was not specified).
+ """ + if self._upgrade_strategy == "eager": + return True + elif self._upgrade_strategy == "only-if-needed": + user_order = _get_with_identifier( + self._user_requested, + identifier, + default=None, + ) + return user_order is not None + return False + + constraint = _get_with_identifier( + self._constraints, + identifier, + default=Constraint.empty(), + ) + return self._factory.find_candidates( + identifier=identifier, + requirements=requirements, + constraint=constraint, + prefers_installed=(not _eligible_for_upgrade(identifier)), + incompatibilities=incompatibilities, + ) + + def is_satisfied_by(self, requirement: Requirement, candidate: Candidate) -> bool: + return requirement.is_satisfied_by(candidate) + + def get_dependencies(self, candidate: Candidate) -> Sequence[Requirement]: + with_requires = not self._ignore_dependencies + return [r for r in candidate.iter_dependencies(with_requires) if r is not None] + + @staticmethod + def is_backtrack_cause( + identifier: str, backtrack_causes: Sequence["PreferenceInformation"] + ) -> bool: + for backtrack_cause in backtrack_causes: + if identifier == backtrack_cause.requirement.name: + return True + if backtrack_cause.parent and identifier == backtrack_cause.parent.name: + return True + return False diff --git a/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/reporter.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/reporter.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/reporter.py rename to python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/reporter.py diff --git a/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/requirements.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/requirements.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/requirements.py rename to python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/requirements.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/resolver.py b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/resolver.py new file mode 100644 index 0000000..618f1e1 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/resolver.py @@ -0,0 +1,292 @@ +import functools +import logging +import os +from typing import TYPE_CHECKING, Dict, List, Optional, Set, Tuple, cast + +from pip._vendor.packaging.utils import canonicalize_name +from pip._vendor.resolvelib import BaseReporter, ResolutionImpossible +from pip._vendor.resolvelib import Resolver as RLResolver +from pip._vendor.resolvelib.structs import DirectedGraph + +from pip._internal.cache import WheelCache +from pip._internal.index.package_finder import PackageFinder +from pip._internal.operations.prepare import RequirementPreparer +from pip._internal.req.req_install import InstallRequirement +from pip._internal.req.req_set import RequirementSet +from pip._internal.resolution.base import BaseResolver, InstallRequirementProvider +from pip._internal.resolution.resolvelib.provider import PipProvider +from pip._internal.resolution.resolvelib.reporter import ( + PipDebuggingReporter, + PipReporter, +) + +from .base import Candidate, Requirement +from .factory import Factory + +if TYPE_CHECKING: + from pip._vendor.resolvelib.resolvers import Result as RLResult + + Result = RLResult[Requirement, Candidate, str] + + +logger = 
logging.getLogger(__name__) + + +class Resolver(BaseResolver): + _allowed_strategies = {"eager", "only-if-needed", "to-satisfy-only"} + + def __init__( + self, + preparer: RequirementPreparer, + finder: PackageFinder, + wheel_cache: Optional[WheelCache], + make_install_req: InstallRequirementProvider, + use_user_site: bool, + ignore_dependencies: bool, + ignore_installed: bool, + ignore_requires_python: bool, + force_reinstall: bool, + upgrade_strategy: str, + suppress_build_failures: bool, + py_version_info: Optional[Tuple[int, ...]] = None, + ): + super().__init__() + assert upgrade_strategy in self._allowed_strategies + + self.factory = Factory( + finder=finder, + preparer=preparer, + make_install_req=make_install_req, + wheel_cache=wheel_cache, + use_user_site=use_user_site, + force_reinstall=force_reinstall, + ignore_installed=ignore_installed, + ignore_requires_python=ignore_requires_python, + suppress_build_failures=suppress_build_failures, + py_version_info=py_version_info, + ) + self.ignore_dependencies = ignore_dependencies + self.upgrade_strategy = upgrade_strategy + self._result: Optional[Result] = None + + def resolve( + self, root_reqs: List[InstallRequirement], check_supported_wheels: bool + ) -> RequirementSet: + collected = self.factory.collect_root_requirements(root_reqs) + provider = PipProvider( + factory=self.factory, + constraints=collected.constraints, + ignore_dependencies=self.ignore_dependencies, + upgrade_strategy=self.upgrade_strategy, + user_requested=collected.user_requested, + ) + if "PIP_RESOLVER_DEBUG" in os.environ: + reporter: BaseReporter = PipDebuggingReporter() + else: + reporter = PipReporter() + resolver: RLResolver[Requirement, Candidate, str] = RLResolver( + provider, + reporter, + ) + + try: + try_to_avoid_resolution_too_deep = 2000000 + result = self._result = resolver.resolve( + collected.requirements, max_rounds=try_to_avoid_resolution_too_deep + ) + + except ResolutionImpossible as e: + error = self.factory.get_installation_error( + cast("ResolutionImpossible[Requirement, Candidate]", e), + collected.constraints, + ) + raise error from e + + req_set = RequirementSet(check_supported_wheels=check_supported_wheels) + for candidate in result.mapping.values(): + ireq = candidate.get_install_requirement() + if ireq is None: + continue + + # Check if there is already an installation under the same name, + # and set a flag for later stages to uninstall it, if needed. + installed_dist = self.factory.get_dist_to_uninstall(candidate) + if installed_dist is None: + # There is no existing installation -- nothing to uninstall. + ireq.should_reinstall = False + elif self.factory.force_reinstall: + # The --force-reinstall flag is set -- reinstall. + ireq.should_reinstall = True + elif installed_dist.version != candidate.version: + # The installation is different in version -- reinstall. + ireq.should_reinstall = True + elif candidate.is_editable or installed_dist.editable: + # The incoming distribution is editable, or different in + # editable-ness to installation -- reinstall. + ireq.should_reinstall = True + elif candidate.source_link and candidate.source_link.is_file: + # The incoming distribution is under file:// + if candidate.source_link.is_wheel: + # is a local wheel -- do nothing. + logger.info( + "%s is already installed with the same version as the " + "provided wheel. 
Use --force-reinstall to force an " + "installation of the wheel.", + ireq.name, + ) + continue + + # is a local sdist or path -- reinstall + ireq.should_reinstall = True + else: + continue + + link = candidate.source_link + if link and link.is_yanked: + # The reason can contain non-ASCII characters, Unicode + # is required for Python 2. + msg = ( + "The candidate selected for download or install is a " + "yanked version: {name!r} candidate (version {version} " + "at {link})\nReason for being yanked: {reason}" + ).format( + name=candidate.name, + version=candidate.version, + link=link, + reason=link.yanked_reason or "<none given>", + ) + logger.warning(msg) + + req_set.add_named_requirement(ireq) + + reqs = req_set.all_requirements + self.factory.preparer.prepare_linked_requirements_more(reqs) + return req_set + + def get_installation_order( + self, req_set: RequirementSet + ) -> List[InstallRequirement]: + """Get order for installation of requirements in RequirementSet. + + The returned list contains a requirement before another that depends on + it. This helps ensure that the environment is kept consistent as they + get installed one-by-one. + + The current implementation creates a topological ordering of the + dependency graph, giving more weight to packages with fewer + or no dependencies, while breaking any cycles in the graph at + arbitrary points. We make no guarantees about where the cycle + would be broken, other than it *would* be broken. + """ + assert self._result is not None, "must call resolve() first" + + if not req_set.requirements: + # Nothing is left to install, so we do not need an order. + return [] + + graph = self._result.graph + weights = get_topological_weights( + graph, + expected_node_count=len(self._result.mapping) + 1, + ) + + sorted_items = sorted( + req_set.requirements.items(), + key=functools.partial(_req_set_item_sorter, weights=weights), + reverse=True, + ) + return [ireq for _, ireq in sorted_items] + + +def get_topological_weights( + graph: "DirectedGraph[Optional[str]]", expected_node_count: int +) -> Dict[Optional[str], int]: + """Assign weights to each node based on how "deep" they are. + + This implementation may change at any point in the future without prior + notice. + + We first simplify the dependency graph by pruning any leaves and giving them + the highest weight: a package without any dependencies should be installed + first. This is done again and again in the same way, giving ever less weight + to the newly found leaves. The loop stops when no leaves are left: all + remaining packages have at least one dependency left in the graph. + + Then we continue with the remaining graph, by taking the length of the + longest path to any node from the root, ignoring any paths that contain a single + node twice (i.e. cycles). This is done with a depth-first search through + the graph, while keeping track of the path to the node. + + Cycles in the graph would result in a node being revisited while also + being on its own path. In this case, take no action. This helps ensure we + don't get stuck in a cycle. + + When assigning weight, the longer path (i.e. larger length) is preferred. + """ + path: Set[Optional[str]] = set() + weights: Dict[Optional[str], int] = {} + + def visit(node: Optional[str]) -> None: + if node in path: + # We hit a cycle, so we'll break it here. + return + + # Time to visit the children!
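# ----------------------------------------------------------------------
# The leaf-pruning pass described in the docstring above, reduced to a
# sketch over a plain dict (leaf_prune_weights is a hypothetical helper,
# not pip's DirectedGraph API): leaves get the highest weight, are
# removed, and the process repeats until only cycles remain.
from typing import Dict, Set


def leaf_prune_weights(graph: Dict[str, Set[str]]) -> Dict[str, int]:
    graph = {node: set(children) for node, children in graph.items()}
    weights: Dict[str, int] = {}
    while graph:
        leaves = {node for node, children in graph.items() if not children}
        if not leaves:
            break  # only cycles are left; pip falls back to visit()
        weight = len(graph) - 1
        for leaf in leaves:
            weights[leaf] = weight
            del graph[leaf]
        for children in graph.values():
            children -= leaves
    return weights


# a -> b -> c: the dependency-less "c" is weighted heaviest and is
# therefore installed first once the ordering is reversed.
assert leaf_prune_weights({"a": {"b"}, "b": {"c"}, "c": set()}) == {
    "c": 2,
    "b": 1,
    "a": 0,
}
# ----------------------------------------------------------------------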
+ path.add(node) + for child in graph.iter_children(node): + visit(child) + path.remove(node) + + last_known_parent_count = weights.get(node, 0) + weights[node] = max(last_known_parent_count, len(path)) + + # Simplify the graph, pruning leaves that have no dependencies. + # This is needed for large graphs (say over 200 packages) because the + # `visit` function is exponentially slower then, taking minutes. + # See https://github.com/pypa/pip/issues/10557 + # We will loop until we explicitly break the loop. + while True: + leaves = set() + for key in graph: + if key is None: + continue + for _child in graph.iter_children(key): + # This means we have at least one child + break + else: + # No child. + leaves.add(key) + if not leaves: + # We are done simplifying. + break + # Calculate the weight for the leaves. + weight = len(graph) - 1 + for leaf in leaves: + weights[leaf] = weight + # Remove the leaves from the graph, making it simpler. + for leaf in leaves: + graph.remove(leaf) + + # Visit the remaining graph. + # `None` is guaranteed to be the root node by resolvelib. + visit(None) + + # Sanity checks + assert weights[None] == 0 + assert len(weights) == expected_node_count + + return weights + + +def _req_set_item_sorter( + item: Tuple[str, InstallRequirement], + weights: Dict[Optional[str], int], +) -> Tuple[int, str]: + """Key function used to sort install requirements for installation. + + Based on the "weight" mapping calculated in ``get_installation_order()``. + The canonical package name is returned as the second member as a tie- + breaker to ensure the result is predictable, which is useful in tests. + """ + name = canonicalize_name(item[0]) + return weights[name], name diff --git a/python/lib/python3.10/site-packages/pip/_internal/self_outdated_check.py b/python/lib/python3.10/site-packages/pip/_internal/self_outdated_check.py new file mode 100644 index 0000000..7300e0e --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/self_outdated_check.py @@ -0,0 +1,189 @@ +import datetime +import hashlib +import json +import logging +import optparse +import os.path +import sys +from typing import Any, Dict + +from pip._vendor.packaging.version import parse as parse_version + +from pip._internal.index.collector import LinkCollector +from pip._internal.index.package_finder import PackageFinder +from pip._internal.metadata import get_default_environment +from pip._internal.models.selection_prefs import SelectionPreferences +from pip._internal.network.session import PipSession +from pip._internal.utils.filesystem import adjacent_tmp_file, check_path_owner, replace +from pip._internal.utils.misc import ensure_dir + +SELFCHECK_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ" + + +logger = logging.getLogger(__name__) + + +def _get_statefile_name(key: str) -> str: + key_bytes = key.encode() + name = hashlib.sha224(key_bytes).hexdigest() + return name + + +class SelfCheckState: + def __init__(self, cache_dir: str) -> None: + self.state: Dict[str, Any] = {} + self.statefile_path = None + + # Try to load the existing state + if cache_dir: + self.statefile_path = os.path.join( + cache_dir, "selfcheck", _get_statefile_name(self.key) + ) + try: + with open(self.statefile_path, encoding="utf-8") as statefile: + self.state = json.load(statefile) + except (OSError, ValueError, KeyError): + # Explicitly suppressing exceptions, since we don't want to + # error out if the cache file is invalid. 
+ pass + + @property + def key(self) -> str: + return sys.prefix + + def save(self, pypi_version: str, current_time: datetime.datetime) -> None: + # If we do not have a path to cache in, don't bother saving. + if not self.statefile_path: + return + + # Check to make sure that we own the directory + if not check_path_owner(os.path.dirname(self.statefile_path)): + return + + # Now that we've ensured the directory is owned by this user, we'll go + # ahead and make sure that all our directories are created. + ensure_dir(os.path.dirname(self.statefile_path)) + + state = { + # Include the key so it's easy to tell which pip wrote the + # file. + "key": self.key, + "last_check": current_time.strftime(SELFCHECK_DATE_FMT), + "pypi_version": pypi_version, + } + + text = json.dumps(state, sort_keys=True, separators=(",", ":")) + + with adjacent_tmp_file(self.statefile_path) as f: + f.write(text.encode()) + + try: + # Since we have a prefix-specific state file, we can just + # overwrite whatever is there, no need to check. + replace(f.name, self.statefile_path) + except OSError: + # Best effort. + pass + + +def was_installed_by_pip(pkg: str) -> bool: + """Checks whether pkg was installed by pip + + This is used not to display the upgrade message when pip is in fact + installed by system package manager, such as dnf on Fedora. + """ + dist = get_default_environment().get_distribution(pkg) + return dist is not None and "pip" == dist.installer + + +def pip_self_version_check(session: PipSession, options: optparse.Values) -> None: + """Check for an update for pip. + + Limit the frequency of checks to once per week. State is stored either in + the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix + of the pip script path. + """ + installed_dist = get_default_environment().get_distribution("pip") + if not installed_dist: + return + + pip_version = installed_dist.version + pypi_version = None + + try: + state = SelfCheckState(cache_dir=options.cache_dir) + + current_time = datetime.datetime.utcnow() + # Determine if we need to refresh the state + if "last_check" in state.state and "pypi_version" in state.state: + last_check = datetime.datetime.strptime( + state.state["last_check"], SELFCHECK_DATE_FMT + ) + if (current_time - last_check).total_seconds() < 7 * 24 * 60 * 60: + pypi_version = state.state["pypi_version"] + + # Refresh the version if we need to or just see if we need to warn + if pypi_version is None: + # Lets use PackageFinder to see what the latest pip version is + link_collector = LinkCollector.create( + session, + options=options, + suppress_no_index=True, + ) + + # Pass allow_yanked=False so we don't suggest upgrading to a + # yanked version. 
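# ----------------------------------------------------------------------
# The once-per-week gate above as a standalone predicate -- a minimal
# sketch reusing the same SELFCHECK_DATE_FMT assumption (needs_refresh
# is a hypothetical name, not a pip API):
import datetime

SELFCHECK_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ"


def needs_refresh(state: dict, now: datetime.datetime) -> bool:
    if "last_check" not in state or "pypi_version" not in state:
        return True
    last_check = datetime.datetime.strptime(
        state["last_check"], SELFCHECK_DATE_FMT
    )
    return (now - last_check).total_seconds() >= 7 * 24 * 60 * 60


now = datetime.datetime(2022, 1, 8)
assert needs_refresh({}, now)  # no cached state: ask PyPI
assert not needs_refresh(  # checked six days ago: reuse the cache
    {"last_check": "2022-01-02T00:00:00Z", "pypi_version": "21.3"}, now
)
# ----------------------------------------------------------------------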
+ selection_prefs = SelectionPreferences( + allow_yanked=False, + allow_all_prereleases=False, # Explicitly set to False + ) + + finder = PackageFinder.create( + link_collector=link_collector, + selection_prefs=selection_prefs, + use_deprecated_html5lib=( + "html5lib" in options.deprecated_features_enabled + ), + ) + best_candidate = finder.find_best_candidate("pip").best_candidate + if best_candidate is None: + return + pypi_version = str(best_candidate.version) + + # save that we've performed a check + state.save(pypi_version, current_time) + + remote_version = parse_version(pypi_version) + + local_version_is_older = ( + pip_version < remote_version + and pip_version.base_version != remote_version.base_version + and was_installed_by_pip("pip") + ) + + # Determine if our pypi_version is older + if not local_version_is_older: + return + + # We cannot tell how the current pip is available in the current + # command context, so be pragmatic here and suggest the command + # that's always available. This does not accommodate spaces in + # `sys.executable` on purpose as it is not possible to do it + # correctly without knowing the user's shell. Thus, + # it won't be done until possible through the standard library. + # Do not be tempted to use the undocumented subprocess.list2cmdline. + # It is considered an internal implementation detail for a reason. + pip_cmd = f"{sys.executable} -m pip" + logger.warning( + "You are using pip version %s; however, version %s is " + "available.\nYou should consider upgrading via the " + "'%s install --upgrade pip' command.", + pip_version, + pypi_version, + pip_cmd, + ) + except Exception: + logger.debug( + "There was an error checking the latest version of pip", + exc_info=True, + ) diff --git a/lib/python3.11/site-packages/pip/_internal/utils/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/utils/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/__init__.py diff --git a/lib/python3.11/site-packages/pip/_internal/utils/_log.py b/python/lib/python3.10/site-packages/pip/_internal/utils/_log.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/_log.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/_log.py diff --git a/lib/python3.11/site-packages/pip/_internal/utils/appdirs.py b/python/lib/python3.10/site-packages/pip/_internal/utils/appdirs.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/appdirs.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/appdirs.py diff --git a/lib/python3.11/site-packages/pip/_internal/utils/compat.py b/python/lib/python3.10/site-packages/pip/_internal/utils/compat.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/compat.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/compat.py diff --git a/lib/python3.11/site-packages/pip/_internal/utils/compatibility_tags.py b/python/lib/python3.10/site-packages/pip/_internal/utils/compatibility_tags.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/compatibility_tags.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/compatibility_tags.py diff --git a/lib/python3.11/site-packages/pip/_internal/utils/datetime.py b/python/lib/python3.10/site-packages/pip/_internal/utils/datetime.py similarity index 100% rename from 
lib/python3.11/site-packages/pip/_internal/utils/datetime.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/datetime.py diff --git a/lib/site-packages/pip/_internal/utils/deprecation.py b/python/lib/python3.10/site-packages/pip/_internal/utils/deprecation.py similarity index 100% rename from lib/site-packages/pip/_internal/utils/deprecation.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/deprecation.py diff --git a/lib/python3.11/site-packages/pip/_internal/utils/direct_url_helpers.py b/python/lib/python3.10/site-packages/pip/_internal/utils/direct_url_helpers.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/direct_url_helpers.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/direct_url_helpers.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/distutils_args.py b/python/lib/python3.10/site-packages/pip/_internal/utils/distutils_args.py new file mode 100644 index 0000000..e4aa5b8 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/distutils_args.py @@ -0,0 +1,42 @@ +from distutils.errors import DistutilsArgError +from distutils.fancy_getopt import FancyGetopt +from typing import Dict, List + +_options = [ + ("exec-prefix=", None, ""), + ("home=", None, ""), + ("install-base=", None, ""), + ("install-data=", None, ""), + ("install-headers=", None, ""), + ("install-lib=", None, ""), + ("install-platlib=", None, ""), + ("install-purelib=", None, ""), + ("install-scripts=", None, ""), + ("prefix=", None, ""), + ("root=", None, ""), + ("user", None, ""), +] + + +# typeshed doesn't permit Tuple[str, None, str], see python/typeshed#3469. +_distutils_getopt = FancyGetopt(_options) # type: ignore + + +def parse_distutils_args(args: List[str]) -> Dict[str, str]: + """Parse provided arguments, returning an object that has the + matched arguments. + + Any unknown arguments are ignored. + """ + result = {} + for arg in args: + try: + _, match = _distutils_getopt.getopt(args=[arg]) + except DistutilsArgError: + # We don't care about any other options, which here may be + # considered unrecognized since our option list is not + # exhaustive. 
+ pass + else: + result.update(match.__dict__) + return result diff --git a/lib/python3.11/site-packages/pip/_internal/utils/egg_link.py b/python/lib/python3.10/site-packages/pip/_internal/utils/egg_link.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/egg_link.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/egg_link.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/encoding.py b/python/lib/python3.10/site-packages/pip/_internal/utils/encoding.py new file mode 100644 index 0000000..1c73f6c --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/encoding.py @@ -0,0 +1,36 @@ +import codecs +import locale +import re +import sys +from typing import List, Tuple + +BOMS: List[Tuple[bytes, str]] = [ + (codecs.BOM_UTF8, "utf-8"), + (codecs.BOM_UTF16, "utf-16"), + (codecs.BOM_UTF16_BE, "utf-16-be"), + (codecs.BOM_UTF16_LE, "utf-16-le"), + (codecs.BOM_UTF32, "utf-32"), + (codecs.BOM_UTF32_BE, "utf-32-be"), + (codecs.BOM_UTF32_LE, "utf-32-le"), +] + +ENCODING_RE = re.compile(br"coding[:=]\s*([-\w.]+)") + + +def auto_decode(data: bytes) -> str: + """Check a bytes string for a BOM to correctly detect the encoding + + Fallback to locale.getpreferredencoding(False) like open() on Python3""" + for bom, encoding in BOMS: + if data.startswith(bom): + return data[len(bom) :].decode(encoding) + # Lets check the first two lines as in PEP263 + for line in data.split(b"\n")[:2]: + if line[0:1] == b"#" and ENCODING_RE.search(line): + result = ENCODING_RE.search(line) + assert result is not None + encoding = result.groups()[0].decode("ascii") + return data.decode(encoding) + return data.decode( + locale.getpreferredencoding(False) or sys.getdefaultencoding(), + ) diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/entrypoints.py b/python/lib/python3.10/site-packages/pip/_internal/utils/entrypoints.py new file mode 100644 index 0000000..1504a12 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/entrypoints.py @@ -0,0 +1,27 @@ +import sys +from typing import List, Optional + +from pip._internal.cli.main import main + + +def _wrapper(args: Optional[List[str]] = None) -> int: + """Central wrapper for all old entrypoints. + + Historically pip has had several entrypoints defined. Because of issues + arising from PATH, sys.path, multiple Pythons, their interactions, and most + of them having a pip installed, users suffer every time an entrypoint gets + moved. + + To alleviate this pain, and provide a mechanism for warning users and + directing them to an appropriate place for help, we now define all of + our old entrypoints as wrappers for the current one. + """ + sys.stderr.write( + "WARNING: pip is being invoked by an old script wrapper. 
This will " + "fail in a future version of pip.\n" + "Please see https://github.com/pypa/pip/issues/5599 for advice on " + "fixing the underlying issue.\n" + "To avoid this problem you can invoke Python with '-m pip' instead of " + "running pip directly.\n" + ) + return main(args) diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/filesystem.py b/python/lib/python3.10/site-packages/pip/_internal/utils/filesystem.py new file mode 100644 index 0000000..b7e6191 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/filesystem.py @@ -0,0 +1,182 @@ +import fnmatch +import os +import os.path +import random +import shutil +import stat +import sys +from contextlib import contextmanager +from tempfile import NamedTemporaryFile +from typing import Any, BinaryIO, Iterator, List, Union, cast + +from pip._vendor.tenacity import retry, stop_after_delay, wait_fixed + +from pip._internal.utils.compat import get_path_uid +from pip._internal.utils.misc import format_size + + +def check_path_owner(path: str) -> bool: + # If we don't have a way to check the effective uid of this process, then + # we'll just assume that we own the directory. + if sys.platform == "win32" or not hasattr(os, "geteuid"): + return True + + assert os.path.isabs(path) + + previous = None + while path != previous: + if os.path.lexists(path): + # Check if path is writable by current user. + if os.geteuid() == 0: + # Special handling for root user in order to handle properly + # cases where users use sudo without -H flag. + try: + path_uid = get_path_uid(path) + except OSError: + return False + return path_uid == 0 + else: + return os.access(path, os.W_OK) + else: + previous, path = path, os.path.dirname(path) + return False # assume we don't own the path + + +def copy2_fixed(src: str, dest: str) -> None: + """Wrap shutil.copy2() but map errors copying socket files to + SpecialFileError as expected. + + See also https://bugs.python.org/issue37700. + """ + try: + shutil.copy2(src, dest) + except OSError: + for f in [src, dest]: + try: + is_socket_file = is_socket(f) + except OSError: + # An error has already occurred. Another error here is not + # a problem and we can ignore it. + pass + else: + if is_socket_file: + raise shutil.SpecialFileError(f"`{f}` is a socket") + + raise + + +def is_socket(path: str) -> bool: + return stat.S_ISSOCK(os.lstat(path).st_mode) + + +@contextmanager +def adjacent_tmp_file(path: str, **kwargs: Any) -> Iterator[BinaryIO]: + """Return a file-like object pointing to a tmp file next to path. + + The file is created securely and is ensured to be written to disk + after the context reaches its end. + + kwargs will be passed to tempfile.NamedTemporaryFile to control + the way the temporary file will be opened. + """ + with NamedTemporaryFile( + delete=False, + dir=os.path.dirname(path), + prefix=os.path.basename(path), + suffix=".tmp", + **kwargs, + ) as f: + result = cast(BinaryIO, f) + try: + yield result + finally: + result.flush() + os.fsync(result.fileno()) + + +# Tenacity raises RetryError by default, explicitly raise the original exception +_replace_retry = retry(reraise=True, stop=stop_after_delay(1), wait=wait_fixed(0.25)) + +replace = _replace_retry(os.replace) + + +# test_writable_dir and _test_writable_dir_win are copied from Flit, +# with the author's agreement to also place them under pip's license. +def test_writable_dir(path: str) -> bool: + """Check if a directory is writable. + + Uses os.access() on POSIX, tries creating files on Windows. 
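+ + If the directory does not exist, the closest existing parent is checked + instead (see the loop below).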
+ """ + # If the directory doesn't exist, find the closest parent that does. + while not os.path.isdir(path): + parent = os.path.dirname(path) + if parent == path: + break # Should never get here, but infinite loops are bad + path = parent + + if os.name == "posix": + return os.access(path, os.W_OK) + + return _test_writable_dir_win(path) + + +def _test_writable_dir_win(path: str) -> bool: + # os.access doesn't work on Windows: http://bugs.python.org/issue2528 + # and we can't use tempfile: http://bugs.python.org/issue22107 + basename = "accesstest_deleteme_fishfingers_custard_" + alphabet = "abcdefghijklmnopqrstuvwxyz0123456789" + for _ in range(10): + name = basename + "".join(random.choice(alphabet) for _ in range(6)) + file = os.path.join(path, name) + try: + fd = os.open(file, os.O_RDWR | os.O_CREAT | os.O_EXCL) + except FileExistsError: + pass + except PermissionError: + # This could be because there's a directory with the same name. + # But it's highly unlikely there's a directory called that, + # so we'll assume it's because the parent dir is not writable. + # This could as well be because the parent dir is not readable, + # due to non-privileged user access. + return False + else: + os.close(fd) + os.unlink(file) + return True + + # This should never be reached + raise OSError("Unexpected condition testing for writable directory") + + +def find_files(path: str, pattern: str) -> List[str]: + """Returns a list of absolute paths of files beneath path, recursively, + with filenames which match the UNIX-style shell glob pattern.""" + result: List[str] = [] + for root, _, files in os.walk(path): + matches = fnmatch.filter(files, pattern) + result.extend(os.path.join(root, f) for f in matches) + return result + + +def file_size(path: str) -> Union[int, float]: + # If it's a symlink, return 0. 
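+ # Counting the target's size instead could double-count data when + # directory_size() below walks a tree that contains symlinks.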
+ if os.path.islink(path): + return 0 + return os.path.getsize(path) + + +def format_file_size(path: str) -> str: + return format_size(file_size(path)) + + +def directory_size(path: str) -> Union[int, float]: + size = 0.0 + for root, _dirs, files in os.walk(path): + for filename in files: + file_path = os.path.join(root, filename) + size += file_size(file_path) + return size + + +def format_directory_size(path: str) -> str: + return format_size(directory_size(path)) diff --git a/lib/python3.11/site-packages/pip/_internal/utils/filetypes.py b/python/lib/python3.10/site-packages/pip/_internal/utils/filetypes.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/filetypes.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/filetypes.py diff --git a/lib/python3.11/site-packages/pip/_internal/utils/glibc.py b/python/lib/python3.10/site-packages/pip/_internal/utils/glibc.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/glibc.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/glibc.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/hashes.py b/python/lib/python3.10/site-packages/pip/_internal/utils/hashes.py new file mode 100644 index 0000000..82eb035 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/hashes.py @@ -0,0 +1,144 @@ +import hashlib +from typing import TYPE_CHECKING, BinaryIO, Dict, Iterator, List + +from pip._internal.exceptions import HashMismatch, HashMissing, InstallationError +from pip._internal.utils.misc import read_chunks + +if TYPE_CHECKING: + from hashlib import _Hash + + # NoReturn introduced in 3.6.2; imported only for type checking to maintain + # pip compatibility with older patch versions of Python 3.6 + from typing import NoReturn + + +# The recommended hash algo of the moment. Change this whenever the state of +# the art changes; it won't hurt backward compatibility. +FAVORITE_HASH = "sha256" + + +# Names of hashlib algorithms allowed by the --hash option and ``pip hash`` +# Currently, those are the ones at least as collision-resistant as sha256. +STRONG_HASHES = ["sha256", "sha384", "sha512"] + + +class Hashes: + """A wrapper that builds multiple hashes at once and checks them against + known-good values + + """ + + def __init__(self, hashes: Dict[str, List[str]] = None) -> None: + """ + :param hashes: A dict of algorithm names pointing to lists of allowed + hex digests + """ + allowed = {} + if hashes is not None: + for alg, keys in hashes.items(): + # Make sure values are always sorted (to ease equality checks) + allowed[alg] = sorted(keys) + self._allowed = allowed + + def __and__(self, other: "Hashes") -> "Hashes": + if not isinstance(other, Hashes): + return NotImplemented + + # If either of the Hashes object is entirely empty (i.e. no hash + # specified at all), all hashes from the other object are allowed. + if not other: + return self + if not self: + return other + + # Otherwise only hashes that present in both objects are allowed. 
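+ # e.g. Hashes({"sha256": ["a", "b"]}) & Hashes({"sha256": ["b", "c"]}) + # is equivalent to Hashes({"sha256": ["b"]}).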
+ new = {} + for alg, values in other._allowed.items(): + if alg not in self._allowed: + continue + new[alg] = [v for v in values if v in self._allowed[alg]] + return Hashes(new) + + @property + def digest_count(self) -> int: + return sum(len(digests) for digests in self._allowed.values()) + + def is_hash_allowed(self, hash_name: str, hex_digest: str) -> bool: + """Return whether the given hex digest is allowed.""" + return hex_digest in self._allowed.get(hash_name, []) + + def check_against_chunks(self, chunks: Iterator[bytes]) -> None: + """Check good hashes against ones built from iterable of chunks of + data. + + Raise HashMismatch if none match. + + """ + gots = {} + for hash_name in self._allowed.keys(): + try: + gots[hash_name] = hashlib.new(hash_name) + except (ValueError, TypeError): + raise InstallationError(f"Unknown hash name: {hash_name}") + + for chunk in chunks: + for hash in gots.values(): + hash.update(chunk) + + for hash_name, got in gots.items(): + if got.hexdigest() in self._allowed[hash_name]: + return + self._raise(gots) + + def _raise(self, gots: Dict[str, "_Hash"]) -> "NoReturn": + raise HashMismatch(self._allowed, gots) + + def check_against_file(self, file: BinaryIO) -> None: + """Check good hashes against a file-like object + + Raise HashMismatch if none match. + + """ + return self.check_against_chunks(read_chunks(file)) + + def check_against_path(self, path: str) -> None: + with open(path, "rb") as file: + return self.check_against_file(file) + + def __bool__(self) -> bool: + """Return whether I know any known-good hashes.""" + return bool(self._allowed) + + def __eq__(self, other: object) -> bool: + if not isinstance(other, Hashes): + return NotImplemented + return self._allowed == other._allowed + + def __hash__(self) -> int: + return hash( + ",".join( + sorted( + ":".join((alg, digest)) + for alg, digest_list in self._allowed.items() + for digest in digest_list + ) + ) + ) + + +class MissingHashes(Hashes): + """A workalike for Hashes used when we're missing a hash for a requirement + + It computes the actual hash of the requirement and raises a HashMissing + exception showing it to the user. + + """ + + def __init__(self) -> None: + """Don't offer the ``hashes`` kwarg.""" + # Pass our favorite hash in to generate a "gotten hash". With the + # empty list, it will never match, so an error will always raise. 
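+ # check_against_chunks() then fails via _raise() below, showing the user + # the sha256 digest they could pin.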
+ super().__init__(hashes={FAVORITE_HASH: []}) + + def _raise(self, gots: Dict[str, "_Hash"]) -> "NoReturn": + raise HashMissing(gots[FAVORITE_HASH].hexdigest()) diff --git a/lib/python3.11/site-packages/pip/_internal/utils/inject_securetransport.py b/python/lib/python3.10/site-packages/pip/_internal/utils/inject_securetransport.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/inject_securetransport.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/inject_securetransport.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/logging.py b/python/lib/python3.10/site-packages/pip/_internal/utils/logging.py new file mode 100644 index 0000000..6e001c5 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/logging.py @@ -0,0 +1,343 @@ +import contextlib +import errno +import logging +import logging.handlers +import os +import sys +import threading +from dataclasses import dataclass +from logging import Filter +from typing import IO, Any, ClassVar, Iterator, List, Optional, TextIO, Type + +from pip._vendor.rich.console import ( + Console, + ConsoleOptions, + ConsoleRenderable, + RenderResult, +) +from pip._vendor.rich.highlighter import NullHighlighter +from pip._vendor.rich.logging import RichHandler +from pip._vendor.rich.segment import Segment +from pip._vendor.rich.style import Style + +from pip._internal.exceptions import DiagnosticPipError +from pip._internal.utils._log import VERBOSE, getLogger +from pip._internal.utils.compat import WINDOWS +from pip._internal.utils.deprecation import DEPRECATION_MSG_PREFIX +from pip._internal.utils.misc import ensure_dir + +_log_state = threading.local() +subprocess_logger = getLogger("pip.subprocessor") + + +class BrokenStdoutLoggingError(Exception): + """ + Raised if BrokenPipeError occurs for the stdout stream while logging. + """ + + +def _is_broken_pipe_error(exc_class: Type[BaseException], exc: BaseException) -> bool: + if exc_class is BrokenPipeError: + return True + + # On Windows, a broken pipe can show up as EINVAL rather than EPIPE: + # https://bugs.python.org/issue19612 + # https://bugs.python.org/issue30418 + if not WINDOWS: + return False + + return isinstance(exc, OSError) and exc.errno in (errno.EINVAL, errno.EPIPE) + + +@contextlib.contextmanager +def indent_log(num: int = 2) -> Iterator[None]: + """ + A context manager which will cause the log output to be indented for any + log messages emitted inside it. + """ + # For thread-safety + _log_state.indentation = get_indentation() + _log_state.indentation += num + try: + yield + finally: + _log_state.indentation -= num + + +def get_indentation() -> int: + return getattr(_log_state, "indentation", 0) + + +class IndentingFormatter(logging.Formatter): + default_time_format = "%Y-%m-%dT%H:%M:%S" + + def __init__( + self, + *args: Any, + add_timestamp: bool = False, + **kwargs: Any, + ) -> None: + """ + A logging.Formatter that obeys the indent_log() context manager. + + :param add_timestamp: A bool indicating output lines should be prefixed + with their record's timestamp. + """ + self.add_timestamp = add_timestamp + super().__init__(*args, **kwargs) + + def get_message_start(self, formatted: str, levelno: int) -> str: + """ + Return the start of the formatted log message (not counting the + prefix to add to each line). + """ + if levelno < logging.WARNING: + return "" + if formatted.startswith(DEPRECATION_MSG_PREFIX): + # Then the message already has a prefix. 
We don't want it to + # look like "WARNING: DEPRECATION: ...." + return "" + if levelno < logging.ERROR: + return "WARNING: " + + return "ERROR: " + + def format(self, record: logging.LogRecord) -> str: + """ + Calls the standard formatter, but will indent all of the log message + lines by our current indentation level. + """ + formatted = super().format(record) + message_start = self.get_message_start(formatted, record.levelno) + formatted = message_start + formatted + + prefix = "" + if self.add_timestamp: + prefix = f"{self.formatTime(record)} " + prefix += " " * get_indentation() + formatted = "".join([prefix + line for line in formatted.splitlines(True)]) + return formatted + + +@dataclass +class IndentedRenderable: + renderable: ConsoleRenderable + indent: int + + def __rich_console__( + self, console: Console, options: ConsoleOptions + ) -> RenderResult: + segments = console.render(self.renderable, options) + lines = Segment.split_lines(segments) + for line in lines: + yield Segment(" " * self.indent) + yield from line + yield Segment("\n") + + +class RichPipStreamHandler(RichHandler): + KEYWORDS: ClassVar[Optional[List[str]]] = [] + + def __init__(self, stream: Optional[TextIO], no_color: bool) -> None: + super().__init__( + console=Console(file=stream, no_color=no_color, soft_wrap=True), + show_time=False, + show_level=False, + show_path=False, + highlighter=NullHighlighter(), + ) + + # Our custom override on Rich's logger, to make things work as we need them to. + def emit(self, record: logging.LogRecord) -> None: + style: Optional[Style] = None + + # If we are given a diagnostic error to present, present it with indentation. + if record.msg == "[present-diagnostic] %s" and len(record.args) == 1: + diagnostic_error: DiagnosticPipError = record.args[0] # type: ignore[index] + assert isinstance(diagnostic_error, DiagnosticPipError) + + renderable: ConsoleRenderable = IndentedRenderable( + diagnostic_error, indent=get_indentation() + ) + else: + message = self.format(record) + renderable = self.render_message(record, message) + if record.levelno is not None: + if record.levelno >= logging.ERROR: + style = Style(color="red") + elif record.levelno >= logging.WARNING: + style = Style(color="yellow") + + try: + self.console.print(renderable, overflow="ignore", crop=False, style=style) + except Exception: + self.handleError(record) + + def handleError(self, record: logging.LogRecord) -> None: + """Called when logging is unable to log some output.""" + + exc_class, exc = sys.exc_info()[:2] + # If a broken pipe occurred while calling write() or flush() on the + # stdout stream in logging's Handler.emit(), then raise our special + # exception so we can handle it in main() instead of logging the + # broken pipe error and continuing. + if ( + exc_class + and exc + and self.console.file is sys.stdout + and _is_broken_pipe_error(exc_class, exc) + ): + raise BrokenStdoutLoggingError() + + return super().handleError(record) + + +class BetterRotatingFileHandler(logging.handlers.RotatingFileHandler): + def _open(self) -> IO[Any]: + ensure_dir(os.path.dirname(self.baseFilename)) + return super()._open() + + +class MaxLevelFilter(Filter): + def __init__(self, level: int) -> None: + self.level = level + + def filter(self, record: logging.LogRecord) -> bool: + return record.levelno < self.level + + +class ExcludeLoggerFilter(Filter): + + """ + A logging Filter that excludes records from a logger (or its children). 
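+ + setup_logging() below uses this to keep subprocess output out of the + main console handler.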
+ """ + + def filter(self, record: logging.LogRecord) -> bool: + # The base Filter class allows only records from a logger (or its + # children). + return not super().filter(record) + + +def setup_logging(verbosity: int, no_color: bool, user_log_file: Optional[str]) -> int: + """Configures and sets up all of the logging + + Returns the requested logging level, as its integer value. + """ + + # Determine the level to be logging at. + if verbosity >= 2: + level_number = logging.DEBUG + elif verbosity == 1: + level_number = VERBOSE + elif verbosity == -1: + level_number = logging.WARNING + elif verbosity == -2: + level_number = logging.ERROR + elif verbosity <= -3: + level_number = logging.CRITICAL + else: + level_number = logging.INFO + + level = logging.getLevelName(level_number) + + # The "root" logger should match the "console" level *unless* we also need + # to log to a user log file. + include_user_log = user_log_file is not None + if include_user_log: + additional_log_file = user_log_file + root_level = "DEBUG" + else: + additional_log_file = "/dev/null" + root_level = level + + # Disable any logging besides WARNING unless we have DEBUG level logging + # enabled for vendored libraries. + vendored_log_level = "WARNING" if level in ["INFO", "ERROR"] else "DEBUG" + + # Shorthands for clarity + log_streams = { + "stdout": "ext://sys.stdout", + "stderr": "ext://sys.stderr", + } + handler_classes = { + "stream": "pip._internal.utils.logging.RichPipStreamHandler", + "file": "pip._internal.utils.logging.BetterRotatingFileHandler", + } + handlers = ["console", "console_errors", "console_subprocess"] + ( + ["user_log"] if include_user_log else [] + ) + + logging.config.dictConfig( + { + "version": 1, + "disable_existing_loggers": False, + "filters": { + "exclude_warnings": { + "()": "pip._internal.utils.logging.MaxLevelFilter", + "level": logging.WARNING, + }, + "restrict_to_subprocess": { + "()": "logging.Filter", + "name": subprocess_logger.name, + }, + "exclude_subprocess": { + "()": "pip._internal.utils.logging.ExcludeLoggerFilter", + "name": subprocess_logger.name, + }, + }, + "formatters": { + "indent": { + "()": IndentingFormatter, + "format": "%(message)s", + }, + "indent_with_timestamp": { + "()": IndentingFormatter, + "format": "%(message)s", + "add_timestamp": True, + }, + }, + "handlers": { + "console": { + "level": level, + "class": handler_classes["stream"], + "no_color": no_color, + "stream": log_streams["stdout"], + "filters": ["exclude_subprocess", "exclude_warnings"], + "formatter": "indent", + }, + "console_errors": { + "level": "WARNING", + "class": handler_classes["stream"], + "no_color": no_color, + "stream": log_streams["stderr"], + "filters": ["exclude_subprocess"], + "formatter": "indent", + }, + # A handler responsible for logging to the console messages + # from the "subprocessor" logger. 
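+ # It mirrors the "console" handler but writes to stderr, so redirecting + # stdout captures only pip's own output.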
+ "console_subprocess": { + "level": level, + "class": handler_classes["stream"], + "stream": log_streams["stderr"], + "no_color": no_color, + "filters": ["restrict_to_subprocess"], + "formatter": "indent", + }, + "user_log": { + "level": "DEBUG", + "class": handler_classes["file"], + "filename": additional_log_file, + "encoding": "utf-8", + "delay": True, + "formatter": "indent_with_timestamp", + }, + }, + "root": { + "level": root_level, + "handlers": handlers, + }, + "loggers": {"pip._vendor": {"level": vendored_log_level}}, + } + ) + + return level_number diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/misc.py b/python/lib/python3.10/site-packages/pip/_internal/utils/misc.py new file mode 100644 index 0000000..0bf9e99 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/misc.py @@ -0,0 +1,653 @@ +# The following comment should be removed at some point in the future. +# mypy: strict-optional=False + +import contextlib +import errno +import getpass +import hashlib +import io +import logging +import os +import posixpath +import shutil +import stat +import sys +import urllib.parse +from io import StringIO +from itertools import filterfalse, tee, zip_longest +from types import TracebackType +from typing import ( + Any, + BinaryIO, + Callable, + ContextManager, + Iterable, + Iterator, + List, + Optional, + TextIO, + Tuple, + Type, + TypeVar, + cast, +) + +from pip._vendor.tenacity import retry, stop_after_delay, wait_fixed + +from pip import __version__ +from pip._internal.exceptions import CommandError +from pip._internal.locations import get_major_minor_version +from pip._internal.utils.compat import WINDOWS +from pip._internal.utils.virtualenv import running_under_virtualenv + +__all__ = [ + "rmtree", + "display_path", + "backup_dir", + "ask", + "splitext", + "format_size", + "is_installable_dir", + "normalize_path", + "renames", + "get_prog", + "captured_stdout", + "ensure_dir", + "remove_auth_from_url", +] + + +logger = logging.getLogger(__name__) + +T = TypeVar("T") +ExcInfo = Tuple[Type[BaseException], BaseException, TracebackType] +VersionInfo = Tuple[int, int, int] +NetlocTuple = Tuple[str, Tuple[Optional[str], Optional[str]]] + + +def get_pip_version() -> str: + pip_pkg_dir = os.path.join(os.path.dirname(__file__), "..", "..") + pip_pkg_dir = os.path.abspath(pip_pkg_dir) + + return "pip {} from {} (python {})".format( + __version__, + pip_pkg_dir, + get_major_minor_version(), + ) + + +def normalize_version_info(py_version_info: Tuple[int, ...]) -> Tuple[int, int, int]: + """ + Convert a tuple of ints representing a Python version to one of length + three. + + :param py_version_info: a tuple of ints representing a Python version, + or None to specify no version. The tuple can have any length. + + :return: a tuple of length three if `py_version_info` is non-None. + Otherwise, return `py_version_info` unchanged (i.e. None). + """ + if len(py_version_info) < 3: + py_version_info += (3 - len(py_version_info)) * (0,) + elif len(py_version_info) > 3: + py_version_info = py_version_info[:3] + + return cast("VersionInfo", py_version_info) + + +def ensure_dir(path: str) -> None: + """os.path.makedirs without EEXIST.""" + try: + os.makedirs(path) + except OSError as e: + # Windows can raise spurious ENOTEMPTY errors. See #6426. 
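+ # Both errno values mean the directory is already there, so both are + # ignored.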
+ if e.errno != errno.EEXIST and e.errno != errno.ENOTEMPTY: + raise + + +def get_prog() -> str: + try: + prog = os.path.basename(sys.argv[0]) + if prog in ("__main__.py", "-c"): + return f"{sys.executable} -m pip" + else: + return prog + except (AttributeError, TypeError, IndexError): + pass + return "pip" + + +# Retry every half second for up to 3 seconds +# Tenacity raises RetryError by default, explicitly raise the original exception +@retry(reraise=True, stop=stop_after_delay(3), wait=wait_fixed(0.5)) +def rmtree(dir: str, ignore_errors: bool = False) -> None: + shutil.rmtree(dir, ignore_errors=ignore_errors, onerror=rmtree_errorhandler) + + +def rmtree_errorhandler(func: Callable[..., Any], path: str, exc_info: ExcInfo) -> None: + """On Windows, the files in .svn are read-only, so when rmtree() tries to + remove them, an exception is thrown. We catch that here, remove the + read-only attribute, and hopefully continue without problems.""" + try: + has_attr_readonly = not (os.stat(path).st_mode & stat.S_IWRITE) + except OSError: + # it's equivalent to os.path.exists + return + + if has_attr_readonly: + # convert to read/write + os.chmod(path, stat.S_IWRITE) + # use the original function to repeat the operation + func(path) + return + else: + raise + + +def display_path(path: str) -> str: + """Gives the display value for a given path, making it relative to cwd + if possible.""" + path = os.path.normcase(os.path.abspath(path)) + if path.startswith(os.getcwd() + os.path.sep): + path = "." + path[len(os.getcwd()) :] + return path + + +def backup_dir(dir: str, ext: str = ".bak") -> str: + """Figure out the name of a directory to back up the given dir to + (adding .bak, .bak2, etc)""" + n = 1 + extension = ext + while os.path.exists(dir + extension): + n += 1 + extension = ext + str(n) + return dir + extension + + +def ask_path_exists(message: str, options: Iterable[str]) -> str: + for action in os.environ.get("PIP_EXISTS_ACTION", "").split(): + if action in options: + return action + return ask(message, options) + + +def _check_no_input(message: str) -> None: + """Raise an error if no input is allowed.""" + if os.environ.get("PIP_NO_INPUT"): + raise Exception( + f"No input was expected ($PIP_NO_INPUT set); question: {message}" + ) + + +def ask(message: str, options: Iterable[str]) -> str: + """Ask the message interactively, with the given possible responses""" + while 1: + _check_no_input(message) + response = input(message) + response = response.strip().lower() + if response not in options: + print( + "Your response ({!r}) was not one of the expected responses: " + "{}".format(response, ", ".join(options)) + ) + else: + return response + + +def ask_input(message: str) -> str: + """Ask for input interactively.""" + _check_no_input(message) + return input(message) + + +def ask_password(message: str) -> str: + """Ask for a password interactively.""" + _check_no_input(message) + return getpass.getpass(message) + + +def strtobool(val: str) -> int: + """Convert a string representation of truth to true (1) or false (0). + + True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values + are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if + 'val' is anything else. 
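+ + For example, strtobool("YES") == 1 and strtobool("off") == 0.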
+ """ + val = val.lower() + if val in ("y", "yes", "t", "true", "on", "1"): + return 1 + elif val in ("n", "no", "f", "false", "off", "0"): + return 0 + else: + raise ValueError(f"invalid truth value {val!r}") + + +def format_size(bytes: float) -> str: + if bytes > 1000 * 1000: + return "{:.1f} MB".format(bytes / 1000.0 / 1000) + elif bytes > 10 * 1000: + return "{} kB".format(int(bytes / 1000)) + elif bytes > 1000: + return "{:.1f} kB".format(bytes / 1000.0) + else: + return "{} bytes".format(int(bytes)) + + +def tabulate(rows: Iterable[Iterable[Any]]) -> Tuple[List[str], List[int]]: + """Return a list of formatted rows and a list of column sizes. + + For example:: + + >>> tabulate([['foobar', 2000], [0xdeadbeef]]) + (['foobar 2000', '3735928559'], [10, 4]) + """ + rows = [tuple(map(str, row)) for row in rows] + sizes = [max(map(len, col)) for col in zip_longest(*rows, fillvalue="")] + table = [" ".join(map(str.ljust, row, sizes)).rstrip() for row in rows] + return table, sizes + + +def is_installable_dir(path: str) -> bool: + """Is path a directory containing pyproject.toml or setup.py? + + If pyproject.toml exists, this is a PEP 517 project. Otherwise we look for + a legacy setuptools layout by identifying setup.py. We don't check for the + setup.cfg because using it without setup.py is only available for PEP 517 + projects, which are already covered by the pyproject.toml check. + """ + if not os.path.isdir(path): + return False + if os.path.isfile(os.path.join(path, "pyproject.toml")): + return True + if os.path.isfile(os.path.join(path, "setup.py")): + return True + return False + + +def read_chunks(file: BinaryIO, size: int = io.DEFAULT_BUFFER_SIZE) -> Iterator[bytes]: + """Yield pieces of data from a file-like object until EOF.""" + while True: + chunk = file.read(size) + if not chunk: + break + yield chunk + + +def normalize_path(path: str, resolve_symlinks: bool = True) -> str: + """ + Convert a path to its canonical, case-normalized, absolute version. + + """ + path = os.path.expanduser(path) + if resolve_symlinks: + path = os.path.realpath(path) + else: + path = os.path.abspath(path) + return os.path.normcase(path) + + +def splitext(path: str) -> Tuple[str, str]: + """Like os.path.splitext, but take off .tar too""" + base, ext = posixpath.splitext(path) + if base.lower().endswith(".tar"): + ext = base[-4:] + ext + base = base[:-4] + return base, ext + + +def renames(old: str, new: str) -> None: + """Like os.renames(), but handles renaming across devices.""" + # Implementation borrowed from os.renames(). + head, tail = os.path.split(new) + if head and tail and not os.path.exists(head): + os.makedirs(head) + + shutil.move(old, new) + + head, tail = os.path.split(old) + if head and tail: + try: + os.removedirs(head) + except OSError: + pass + + +def is_local(path: str) -> bool: + """ + Return True if this is a path pip is allowed to modify. + + If we're in a virtualenv, sys.prefix points to the virtualenv's + prefix; only sys.prefix is considered local. + + If we're not in a virtualenv, in general we can modify anything. + However, if the OS vendor has configured distutils to install + somewhere other than sys.prefix (which could be a subdirectory of + sys.prefix, e.g. /usr/local), we consider sys.prefix itself nonlocal + and the domain of the OS vendor. (In other words, everything _other + than_ sys.prefix is considered local.) + + Caution: this function assumes the head of path has been normalized + with normalize_path.
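+ + On Debian-patched interpreters (handled below), paths under /usr are also + checked against the configured install scheme.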
+ """ + + path = normalize_path(path) + # Hard-coded because PyPy uses a different sys.prefix on Debian + prefix = '/usr' + + if running_under_virtualenv(): + return path.startswith(normalize_path(sys.prefix)) + else: + from pip._internal.locations import get_scheme + from pip._internal.models.scheme import SCHEME_KEYS + if path.startswith(prefix): + scheme = get_scheme("") + for key in SCHEME_KEYS: + local_path = getattr(scheme, key) + if path.startswith(normalize_path(local_path)): + return True + return False + else: + return True + + +def write_output(msg: Any, *args: Any) -> None: + logger.info(msg, *args) + + +class StreamWrapper(StringIO): + orig_stream: TextIO = None + + @classmethod + def from_stream(cls, orig_stream: TextIO) -> "StreamWrapper": + cls.orig_stream = orig_stream + return cls() + + # compileall.compile_dir() needs stdout.encoding to print to stdout + # https://github.com/python/mypy/issues/4125 + @property + def encoding(self): # type: ignore + return self.orig_stream.encoding + + +@contextlib.contextmanager +def captured_output(stream_name: str) -> Iterator[StreamWrapper]: + """Return a context manager used by captured_stdout/stdin/stderr + that temporarily replaces the sys stream *stream_name* with a StringIO. + + Taken from Lib/support/__init__.py in the CPython repo. + """ + orig_stdout = getattr(sys, stream_name) + setattr(sys, stream_name, StreamWrapper.from_stream(orig_stdout)) + try: + yield getattr(sys, stream_name) + finally: + setattr(sys, stream_name, orig_stdout) + + +def captured_stdout() -> ContextManager[StreamWrapper]: + """Capture the output of sys.stdout: + + with captured_stdout() as stdout: + print('hello') + self.assertEqual(stdout.getvalue(), 'hello\n') + + Taken from Lib/support/__init__.py in the CPython repo. + """ + return captured_output("stdout") + + +def captured_stderr() -> ContextManager[StreamWrapper]: + """ + See captured_stdout(). + """ + return captured_output("stderr") + + +# Simulates an enum +def enum(*sequential: Any, **named: Any) -> Type[Any]: + enums = dict(zip(sequential, range(len(sequential))), **named) + reverse = {value: key for key, value in enums.items()} + enums["reverse_mapping"] = reverse + return type("Enum", (), enums) + + +def build_netloc(host: str, port: Optional[int]) -> str: + """ + Build a netloc from a host-port pair + """ + if port is None: + return host + if ":" in host: + # Only wrap host with square brackets when it is IPv6 + host = f"[{host}]" + return f"{host}:{port}" + + +def build_url_from_netloc(netloc: str, scheme: str = "https") -> str: + """ + Build a full URL from a netloc. + """ + if netloc.count(":") >= 2 and "@" not in netloc and "[" not in netloc: + # It must be a bare IPv6 address, so wrap it with brackets. + netloc = f"[{netloc}]" + return f"{scheme}://{netloc}" + + +def parse_netloc(netloc: str) -> Tuple[str, Optional[int]]: + """ + Return the host-port pair from a netloc. + """ + url = build_url_from_netloc(netloc) + parsed = urllib.parse.urlparse(url) + return parsed.hostname, parsed.port + + +def split_auth_from_netloc(netloc: str) -> NetlocTuple: + """ + Parse out and remove the auth information from a netloc. + + Returns: (netloc, (username, password)). + """ + if "@" not in netloc: + return netloc, (None, None) + + # Split from the right because that's how urllib.parse.urlsplit() + # behaves if more than one @ is present (which can be checked using + # the password attribute of urlsplit()'s return value).
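+ # e.g. "user:p@ss@example.com" rsplits into ("user:p@ss", "example.com").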
+ auth, netloc = netloc.rsplit("@", 1) + pw: Optional[str] = None + if ":" in auth: + # Split from the left because that's how urllib.parse.urlsplit() + # behaves if more than one : is present (which again can be checked + # using the password attribute of the return value) + user, pw = auth.split(":", 1) + else: + user, pw = auth, None + + user = urllib.parse.unquote(user) + if pw is not None: + pw = urllib.parse.unquote(pw) + + return netloc, (user, pw) + + +def redact_netloc(netloc: str) -> str: + """ + Replace the sensitive data in a netloc with "****", if it exists. + + For example: + - "user:pass@example.com" returns "user:****@example.com" + - "accesstoken@example.com" returns "****@example.com" + """ + netloc, (user, password) = split_auth_from_netloc(netloc) + if user is None: + return netloc + if password is None: + user = "****" + password = "" + else: + user = urllib.parse.quote(user) + password = ":****" + return "{user}{password}@{netloc}".format( + user=user, password=password, netloc=netloc + ) + + +def _transform_url( + url: str, transform_netloc: Callable[[str], Tuple[Any, ...]] +) -> Tuple[str, NetlocTuple]: + """Transform and replace netloc in a url. + + transform_netloc is a function taking the netloc and returning a + tuple. The first element of this tuple is the new netloc. The + entire tuple is returned. + + Returns a tuple containing the transformed url as item 0 and the + original tuple returned by transform_netloc as item 1. + """ + purl = urllib.parse.urlsplit(url) + netloc_tuple = transform_netloc(purl.netloc) + # stripped url + url_pieces = (purl.scheme, netloc_tuple[0], purl.path, purl.query, purl.fragment) + surl = urllib.parse.urlunsplit(url_pieces) + return surl, cast("NetlocTuple", netloc_tuple) + + +def _get_netloc(netloc: str) -> NetlocTuple: + return split_auth_from_netloc(netloc) + + +def _redact_netloc(netloc: str) -> Tuple[str]: + return (redact_netloc(netloc),) + + +def split_auth_netloc_from_url(url: str) -> Tuple[str, str, Tuple[str, str]]: + """ + Parse a url into separate netloc, auth, and url with no auth. + + Returns: (url_without_auth, netloc, (username, password)) + """ + url_without_auth, (netloc, auth) = _transform_url(url, _get_netloc) + return url_without_auth, netloc, auth + + +def remove_auth_from_url(url: str) -> str: + """Return a copy of url with 'username:password@' removed.""" + # username/pass params are passed to subversion through flags + # and are not recognized in the url. + return _transform_url(url, _get_netloc)[0] + + +def redact_auth_from_url(url: str) -> str: + """Replace the password in a given url with ****.""" + return _transform_url(url, _redact_netloc)[0] + + +class HiddenText: + def __init__(self, secret: str, redacted: str) -> None: + self.secret = secret + self.redacted = redacted + + def __repr__(self) -> str: + return "<HiddenText {!r}>".format(str(self)) + + def __str__(self) -> str: + return self.redacted + + # This is useful for testing. + def __eq__(self, other: Any) -> bool: + if type(self) != type(other): + return False + + # The string being used for redaction doesn't also have to match, + # just the raw, original string.
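+ # e.g. HiddenText("token", "****") == HiddenText("token", "<redacted>") + # is True.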
+ return self.secret == other.secret + + +def hide_value(value: str) -> HiddenText: + return HiddenText(value, redacted="****") + + +def hide_url(url: str) -> HiddenText: + redacted = redact_auth_from_url(url) + return HiddenText(url, redacted=redacted) + + +def protect_pip_from_modification_on_windows(modifying_pip: bool) -> None: + """Protection of pip.exe from modification on Windows + + On Windows, any operation modifying pip should be run as: + python -m pip ... + """ + pip_names = [ + "pip.exe", + "pip{}.exe".format(sys.version_info[0]), + "pip{}.{}.exe".format(*sys.version_info[:2]), + ] + + # See https://github.com/pypa/pip/issues/1299 for more discussion + should_show_use_python_msg = ( + modifying_pip and WINDOWS and os.path.basename(sys.argv[0]) in pip_names + ) + + if should_show_use_python_msg: + new_command = [sys.executable, "-m", "pip"] + sys.argv[1:] + raise CommandError( + "To modify pip, please run the following command:\n{}".format( + " ".join(new_command) + ) + ) + + +def is_console_interactive() -> bool: + """Is this console interactive?""" + return sys.stdin is not None and sys.stdin.isatty() + + +def hash_file(path: str, blocksize: int = 1 << 20) -> Tuple[Any, int]: + """Return (hash, length) for path using hashlib.sha256()""" + + h = hashlib.sha256() + length = 0 + with open(path, "rb") as f: + for block in read_chunks(f, size=blocksize): + length += len(block) + h.update(block) + return h, length + + +def is_wheel_installed() -> bool: + """ + Return whether the wheel package is installed. + """ + try: + import wheel # noqa: F401 + except ImportError: + return False + + return True + + +def pairwise(iterable: Iterable[Any]) -> Iterator[Tuple[Any, Any]]: + """ + Return paired elements. + + For example: + s -> (s0, s1), (s2, s3), (s4, s5), ... + """ + iterable = iter(iterable) + return zip_longest(iterable, iterable) + + +def partition( + pred: Callable[[T], bool], + iterable: Iterable[T], +) -> Tuple[Iterable[T], Iterable[T]]: + """ + Use a predicate to partition entries into false entries and true entries, + like + + partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9 + """ + t1, t2 = tee(iterable) + return filterfalse(pred, t1), filter(pred, t2) diff --git a/lib/python3.11/site-packages/pip/_internal/utils/models.py b/python/lib/python3.10/site-packages/pip/_internal/utils/models.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/models.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/models.py diff --git a/lib/python3.11/site-packages/pip/_internal/utils/packaging.py b/python/lib/python3.10/site-packages/pip/_internal/utils/packaging.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/packaging.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/packaging.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/setuptools_build.py b/python/lib/python3.10/site-packages/pip/_internal/utils/setuptools_build.py new file mode 100644 index 0000000..f460c40 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/setuptools_build.py @@ -0,0 +1,195 @@ +import sys +import textwrap +from typing import List, Optional, Sequence + +# Shim to wrap setup.py invocation with setuptools +# Note that __file__ is handled via two {!r} *and* %r, to ensure that paths on +# Windows are correctly handled (it should be "C:\\Users" not "C:\Users"). 
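+ # The outer str.format() (the {!r} below) injects repr(setup_py_path) into + # the %-tuple; the inner %r is only expanded when the generated "-c" script + # actually runs.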
+_SETUPTOOLS_SHIM = textwrap.dedent( + """ + exec(compile(''' + # This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py + # + # - It imports setuptools before invoking setup.py, to enable projects that directly + # import from `distutils.core` to work with newer packaging standards. + # - It provides a clear error message when setuptools is not installed. + # - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so + # setuptools doesn't think the script is `-c`. This avoids the following warning: + # manifest_maker: standard file '-c' not found". + # - It generates a shim setup.py, for handling setup.cfg-only projects. + import os, sys, tokenize + + try: + import setuptools + except ImportError as error: + print( + "ERROR: Can not execute `setup.py` since setuptools is not available in " + "the build environment.", + file=sys.stderr, + ) + sys.exit(1) + + __file__ = %r + sys.argv[0] = __file__ + + if os.path.exists(__file__): + filename = __file__ + with tokenize.open(__file__) as f: + setup_py_code = f.read() + else: + filename = "<auto-generated setuptools caller>" + setup_py_code = "from setuptools import setup; setup()" + + exec(compile(setup_py_code, filename, "exec")) + ''' % ({!r},), "<pip-setuptools-caller>", "exec")) + """ +).rstrip() + + +def make_setuptools_shim_args( + setup_py_path: str, + global_options: Sequence[str] = None, + no_user_config: bool = False, + unbuffered_output: bool = False, +) -> List[str]: + """ + Get setuptools command arguments with shim wrapped setup file invocation. + + :param setup_py_path: The path to setup.py to be wrapped. + :param global_options: Additional global options. + :param no_user_config: If True, disables personal user configuration. + :param unbuffered_output: If True, adds the unbuffered switch to the + argument list. + """ + args = [sys.executable] + if unbuffered_output: + args += ["-u"] + args += ["-c", _SETUPTOOLS_SHIM.format(setup_py_path)] + if global_options: + args += global_options + if no_user_config: + args += ["--no-user-cfg"] + return args + + +def make_setuptools_bdist_wheel_args( + setup_py_path: str, + global_options: Sequence[str], + build_options: Sequence[str], + destination_dir: str, +) -> List[str]: + # NOTE: Eventually, we'd want to also -S to the flags here, when we're + # isolating. Currently, it breaks Python in virtualenvs, because it + # relies on site.py to find parts of the standard library outside the + # virtualenv.
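+ # Roughly: [sys.executable, "-u", "-c", <shim>, *global_options, + # "bdist_wheel", "-d", destination_dir, *build_options] (illustrative).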
+ args = make_setuptools_shim_args( + setup_py_path, global_options=global_options, unbuffered_output=True + ) + args += ["bdist_wheel", "-d", destination_dir] + args += build_options + return args + + +def make_setuptools_clean_args( + setup_py_path: str, + global_options: Sequence[str], +) -> List[str]: + args = make_setuptools_shim_args( + setup_py_path, global_options=global_options, unbuffered_output=True + ) + args += ["clean", "--all"] + return args + + +def make_setuptools_develop_args( + setup_py_path: str, + global_options: Sequence[str], + install_options: Sequence[str], + no_user_config: bool, + prefix: Optional[str], + home: Optional[str], + use_user_site: bool, +) -> List[str]: + assert not (use_user_site and prefix) + + args = make_setuptools_shim_args( + setup_py_path, + global_options=global_options, + no_user_config=no_user_config, + ) + + args += ["develop", "--no-deps"] + + args += install_options + + if prefix: + args += ["--prefix", prefix] + if home is not None: + args += ["--install-dir", home] + + if use_user_site: + args += ["--user", "--prefix="] + + return args + + +def make_setuptools_egg_info_args( + setup_py_path: str, + egg_info_dir: Optional[str], + no_user_config: bool, +) -> List[str]: + args = make_setuptools_shim_args(setup_py_path, no_user_config=no_user_config) + + args += ["egg_info"] + + if egg_info_dir: + args += ["--egg-base", egg_info_dir] + + return args + + +def make_setuptools_install_args( + setup_py_path: str, + global_options: Sequence[str], + install_options: Sequence[str], + record_filename: str, + root: Optional[str], + prefix: Optional[str], + header_dir: Optional[str], + home: Optional[str], + use_user_site: bool, + no_user_config: bool, + pycompile: bool, +) -> List[str]: + assert not (use_user_site and prefix) + assert not (use_user_site and root) + + args = make_setuptools_shim_args( + setup_py_path, + global_options=global_options, + no_user_config=no_user_config, + unbuffered_output=True, + ) + args += ["install", "--record", record_filename] + args += ["--single-version-externally-managed"] + + if root is not None: + args += ["--root", root] + if prefix is not None: + args += ["--prefix", prefix] + if home is not None: + args += ["--home", home] + if use_user_site: + args += ["--user", "--prefix="] + + if pycompile: + args += ["--compile"] + else: + args += ["--no-compile"] + + if header_dir: + args += ["--install-headers", header_dir] + + args += install_options + + return args diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/subprocess.py b/python/lib/python3.10/site-packages/pip/_internal/utils/subprocess.py new file mode 100644 index 0000000..b5b7624 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/subprocess.py @@ -0,0 +1,260 @@ +import logging +import os +import shlex +import subprocess +from typing import ( + TYPE_CHECKING, + Any, + Callable, + Iterable, + List, + Mapping, + Optional, + Union, +) + +from pip._vendor.rich.markup import escape + +from pip._internal.cli.spinners import SpinnerInterface, open_spinner +from pip._internal.exceptions import InstallationSubprocessError +from pip._internal.utils.logging import VERBOSE, subprocess_logger +from pip._internal.utils.misc import HiddenText + +if TYPE_CHECKING: + # Literal was introduced in Python 3.8. + # + # TODO: Remove `if TYPE_CHECKING` when dropping support for Python 3.7. 
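+ # call_subprocess() below therefore quotes its Literal annotation as a + # string, so it is only evaluated by type checkers.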
+ from typing import Literal + +CommandArgs = List[Union[str, HiddenText]] + + +def make_command(*args: Union[str, HiddenText, CommandArgs]) -> CommandArgs: + """ + Create a CommandArgs object. + """ + command_args: CommandArgs = [] + for arg in args: + # Check for list instead of CommandArgs since CommandArgs is + # only known during type-checking. + if isinstance(arg, list): + command_args.extend(arg) + else: + # Otherwise, arg is str or HiddenText. + command_args.append(arg) + + return command_args + + +def format_command_args(args: Union[List[str], CommandArgs]) -> str: + """ + Format command arguments for display. + """ + # For HiddenText arguments, display the redacted form by calling str(). + # Also, we don't apply str() to arguments that aren't HiddenText since + # this can trigger a UnicodeDecodeError in Python 2 if the argument + # has type unicode and includes a non-ascii character. (The type + # checker doesn't ensure the annotations are correct in all cases.) + return " ".join( + shlex.quote(str(arg)) if isinstance(arg, HiddenText) else shlex.quote(arg) + for arg in args + ) + + +def reveal_command_args(args: Union[List[str], CommandArgs]) -> List[str]: + """ + Return the arguments in their raw, unredacted form. + """ + return [arg.secret if isinstance(arg, HiddenText) else arg for arg in args] + + +def call_subprocess( + cmd: Union[List[str], CommandArgs], + show_stdout: bool = False, + cwd: Optional[str] = None, + on_returncode: 'Literal["raise", "warn", "ignore"]' = "raise", + extra_ok_returncodes: Optional[Iterable[int]] = None, + extra_environ: Optional[Mapping[str, Any]] = None, + unset_environ: Optional[Iterable[str]] = None, + spinner: Optional[SpinnerInterface] = None, + log_failed_cmd: Optional[bool] = True, + stdout_only: Optional[bool] = False, + *, + command_desc: str, +) -> str: + """ + Args: + show_stdout: if true, use INFO to log the subprocess's stderr and + stdout streams. Otherwise, use DEBUG. Defaults to False. + extra_ok_returncodes: an iterable of integer return codes that are + acceptable, in addition to 0. Defaults to None, which means []. + unset_environ: an iterable of environment variable names to unset + prior to calling subprocess.Popen(). + log_failed_cmd: if false, failed commands are not logged, only raised. + stdout_only: if true, return only stdout, else return both. When true, + logging of both stdout and stderr occurs when the subprocess has + terminated, else logging occurs as subprocess output is produced. + """ + if extra_ok_returncodes is None: + extra_ok_returncodes = [] + if unset_environ is None: + unset_environ = [] + # Most places in pip use show_stdout=False. What this means is-- + # + # - We connect the child's output (combined stderr and stdout) to a + # single pipe, which we read. + # - We log this output to stderr at DEBUG level as it is received. + # - If DEBUG logging isn't enabled (e.g. if --verbose logging wasn't + # requested), then we show a spinner so the user can still see the + # subprocess is in progress. + # - If the subprocess exits with an error, we log the output to stderr + # at ERROR level if it hasn't already been displayed to the console + # (e.g. if --verbose logging wasn't enabled). This way we don't log + # the output to the console twice. + # + # If show_stdout=True, then the above is still done, but with DEBUG + # replaced by INFO. + if show_stdout: + # Then log the subprocess output at INFO level. 
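+ # INFO output is visible at default verbosity, so the spinner below is + # effectively never shown in this mode.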
+ log_subprocess = subprocess_logger.info + used_level = logging.INFO + else: + # Then log the subprocess output using VERBOSE. This also ensures + # it will be logged to the log file (aka user_log), if enabled. + log_subprocess = subprocess_logger.verbose + used_level = VERBOSE + + # Whether the subprocess will be visible in the console. + showing_subprocess = subprocess_logger.getEffectiveLevel() <= used_level + + # Only use the spinner if we're not showing the subprocess output + # and we have a spinner. + use_spinner = not showing_subprocess and spinner is not None + + log_subprocess("Running command %s", command_desc) + env = os.environ.copy() + if extra_environ: + env.update(extra_environ) + for name in unset_environ: + env.pop(name, None) + try: + proc = subprocess.Popen( + # Convert HiddenText objects to the underlying str. + reveal_command_args(cmd), + stdin=subprocess.PIPE, + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT if not stdout_only else subprocess.PIPE, + cwd=cwd, + env=env, + errors="backslashreplace", + ) + except Exception as exc: + if log_failed_cmd: + subprocess_logger.critical( + "Error %s while executing command %s", + exc, + command_desc, + ) + raise + all_output = [] + if not stdout_only: + assert proc.stdout + assert proc.stdin + proc.stdin.close() + # In this mode, stdout and stderr are in the same pipe. + while True: + line: str = proc.stdout.readline() + if not line: + break + line = line.rstrip() + all_output.append(line + "\n") + + # Show the line immediately. + log_subprocess(line) + # Update the spinner. + if use_spinner: + assert spinner + spinner.spin() + try: + proc.wait() + finally: + if proc.stdout: + proc.stdout.close() + output = "".join(all_output) + else: + # In this mode, stdout and stderr are in different pipes. + # We must use communicate() which is the only safe way to read both. + out, err = proc.communicate() + # log line by line to preserve pip log indenting + for out_line in out.splitlines(): + log_subprocess(out_line) + all_output.append(out) + for err_line in err.splitlines(): + log_subprocess(err_line) + all_output.append(err) + output = out + + proc_had_error = proc.returncode and proc.returncode not in extra_ok_returncodes + if use_spinner: + assert spinner + if proc_had_error: + spinner.finish("error") + else: + spinner.finish("done") + if proc_had_error: + if on_returncode == "raise": + error = InstallationSubprocessError( + command_description=command_desc, + exit_code=proc.returncode, + output_lines=all_output if not showing_subprocess else None, + ) + if log_failed_cmd: + subprocess_logger.error("[present-diagnostic] %s", error) + subprocess_logger.verbose( + "[bold magenta]full command[/]: [blue]%s[/]", + escape(format_command_args(cmd)), + extra={"markup": True}, + ) + subprocess_logger.verbose( + "[bold magenta]cwd[/]: %s", + escape(cwd or "[inherit]"), + extra={"markup": True}, + ) + + raise error + elif on_returncode == "warn": + subprocess_logger.warning( + 'Command "%s" had error code %s in %s', + command_desc, + proc.returncode, + cwd, + ) + elif on_returncode == "ignore": + pass + else: + raise ValueError(f"Invalid value: on_returncode={on_returncode!r}") + return output + + +def runner_with_spinner_message(message: str) -> Callable[..., None]: + """Provide a subprocess_runner that shows a spinner message. + + Intended for use with for pep517's Pep517HookCaller. Thus, the runner has + an API that matches what's expected by Pep517HookCaller.subprocess_runner. 
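+ + Each hook invocation is then rendered as a single spinner message instead + of raw build output.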
+ """ + + def runner( + cmd: List[str], + cwd: Optional[str] = None, + extra_environ: Optional[Mapping[str, Any]] = None, + ) -> None: + with open_spinner(message) as spinner: + call_subprocess( + cmd, + command_desc=message, + cwd=cwd, + extra_environ=extra_environ, + spinner=spinner, + ) + + return runner diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/temp_dir.py b/python/lib/python3.10/site-packages/pip/_internal/utils/temp_dir.py new file mode 100644 index 0000000..442679a --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/temp_dir.py @@ -0,0 +1,246 @@ +import errno +import itertools +import logging +import os.path +import tempfile +from contextlib import ExitStack, contextmanager +from typing import Any, Dict, Iterator, Optional, TypeVar, Union + +from pip._internal.utils.misc import enum, rmtree + +logger = logging.getLogger(__name__) + +_T = TypeVar("_T", bound="TempDirectory") + + +# Kinds of temporary directories. Only needed for ones that are +# globally-managed. +tempdir_kinds = enum( + BUILD_ENV="build-env", + EPHEM_WHEEL_CACHE="ephem-wheel-cache", + REQ_BUILD="req-build", +) + + +_tempdir_manager: Optional[ExitStack] = None + + +@contextmanager +def global_tempdir_manager() -> Iterator[None]: + global _tempdir_manager + with ExitStack() as stack: + old_tempdir_manager, _tempdir_manager = _tempdir_manager, stack + try: + yield + finally: + _tempdir_manager = old_tempdir_manager + + +class TempDirectoryTypeRegistry: + """Manages temp directory behavior""" + + def __init__(self) -> None: + self._should_delete: Dict[str, bool] = {} + + def set_delete(self, kind: str, value: bool) -> None: + """Indicate whether a TempDirectory of the given kind should be + auto-deleted. + """ + self._should_delete[kind] = value + + def get_delete(self, kind: str) -> bool: + """Get configured auto-delete flag for a given TempDirectory type, + default True. + """ + return self._should_delete.get(kind, True) + + +_tempdir_registry: Optional[TempDirectoryTypeRegistry] = None + + +@contextmanager +def tempdir_registry() -> Iterator[TempDirectoryTypeRegistry]: + """Provides a scoped global tempdir registry that can be used to dictate + whether directories should be deleted. + """ + global _tempdir_registry + old_tempdir_registry = _tempdir_registry + _tempdir_registry = TempDirectoryTypeRegistry() + try: + yield _tempdir_registry + finally: + _tempdir_registry = old_tempdir_registry + + +class _Default: + pass + + +_default = _Default() + + +class TempDirectory: + """Helper class that owns and cleans up a temporary directory. + + This class can be used as a context manager or as an OO representation of a + temporary directory. + + Attributes: + path + Location to the created temporary directory + delete + Whether the directory should be deleted when exiting + (when used as a contextmanager) + + Methods: + cleanup() + Deletes the temporary directory + + When used as a context manager, if the delete attribute is True, on + exiting the context the temporary directory is deleted. + """ + + def __init__( + self, + path: Optional[str] = None, + delete: Union[bool, None, _Default] = _default, + kind: str = "temp", + globally_managed: bool = False, + ): + super().__init__() + + if delete is _default: + if path is not None: + # If we were given an explicit directory, resolve delete option + # now. + delete = False + else: + # Otherwise, we wait until cleanup and see what + # tempdir_registry says. 
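+ # (None means "decide at cleanup": __exit__ consults the registry if one + # is active, and defaults to deleting.)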
+ delete = None + + # The only time we specify path is in for editables where it + # is the value of the --src option. + if path is None: + path = self._create(kind) + + self._path = path + self._deleted = False + self.delete = delete + self.kind = kind + + if globally_managed: + assert _tempdir_manager is not None + _tempdir_manager.enter_context(self) + + @property + def path(self) -> str: + assert not self._deleted, f"Attempted to access deleted path: {self._path}" + return self._path + + def __repr__(self) -> str: + return f"<{self.__class__.__name__} {self.path!r}>" + + def __enter__(self: _T) -> _T: + return self + + def __exit__(self, exc: Any, value: Any, tb: Any) -> None: + if self.delete is not None: + delete = self.delete + elif _tempdir_registry: + delete = _tempdir_registry.get_delete(self.kind) + else: + delete = True + + if delete: + self.cleanup() + + def _create(self, kind: str) -> str: + """Create a temporary directory and store its path in self.path""" + # We realpath here because some systems have their default tmpdir + # symlinked to another directory. This tends to confuse build + # scripts, so we canonicalize the path by traversing potential + # symlinks here. + path = os.path.realpath(tempfile.mkdtemp(prefix=f"pip-{kind}-")) + logger.debug("Created temporary directory: %s", path) + return path + + def cleanup(self) -> None: + """Remove the temporary directory created and reset state""" + self._deleted = True + if not os.path.exists(self._path): + return + rmtree(self._path) + + +class AdjacentTempDirectory(TempDirectory): + """Helper class that creates a temporary directory adjacent to a real one. + + Attributes: + original + The original directory to create a temp directory for. + path + After calling create() or entering, contains the full + path to the temporary directory. + delete + Whether the directory should be deleted when exiting + (when used as a contextmanager) + + """ + + # The characters that may be used to name the temp directory + # We always prepend a ~ and then rotate through these until + # a usable name is found. + # pkg_resources raises a different error for .dist-info folder + # with leading '-' and invalid metadata + LEADING_CHARS = "-~.=%0123456789" + + def __init__(self, original: str, delete: Optional[bool] = None) -> None: + self.original = original.rstrip("/\\") + super().__init__(delete=delete) + + @classmethod + def _generate_names(cls, name: str) -> Iterator[str]: + """Generates a series of temporary names. + + The algorithm replaces the leading characters in the name + with ones that are valid filesystem characters, but are not + valid package names (for both Python and pip definitions of + package). 
+ """ + for i in range(1, len(name)): + for candidate in itertools.combinations_with_replacement( + cls.LEADING_CHARS, i - 1 + ): + new_name = "~" + "".join(candidate) + name[i:] + if new_name != name: + yield new_name + + # If we make it this far, we will have to make a longer name + for i in range(len(cls.LEADING_CHARS)): + for candidate in itertools.combinations_with_replacement( + cls.LEADING_CHARS, i + ): + new_name = "~" + "".join(candidate) + name + if new_name != name: + yield new_name + + def _create(self, kind: str) -> str: + root, name = os.path.split(self.original) + for candidate in self._generate_names(name): + path = os.path.join(root, candidate) + try: + os.mkdir(path) + except OSError as ex: + # Continue if the name exists already + if ex.errno != errno.EEXIST: + raise + else: + path = os.path.realpath(path) + break + else: + # Final fallback on the default behavior. + path = os.path.realpath(tempfile.mkdtemp(prefix=f"pip-{kind}-")) + + logger.debug("Created temporary directory: %s", path) + return path diff --git a/python/lib/python3.10/site-packages/pip/_internal/utils/unpacking.py b/python/lib/python3.10/site-packages/pip/_internal/utils/unpacking.py new file mode 100644 index 0000000..5f63f97 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/utils/unpacking.py @@ -0,0 +1,258 @@ +"""Utilities related archives. +""" + +import logging +import os +import shutil +import stat +import tarfile +import zipfile +from typing import Iterable, List, Optional +from zipfile import ZipInfo + +from pip._internal.exceptions import InstallationError +from pip._internal.utils.filetypes import ( + BZ2_EXTENSIONS, + TAR_EXTENSIONS, + XZ_EXTENSIONS, + ZIP_EXTENSIONS, +) +from pip._internal.utils.misc import ensure_dir + +logger = logging.getLogger(__name__) + + +SUPPORTED_EXTENSIONS = ZIP_EXTENSIONS + TAR_EXTENSIONS + +try: + import bz2 # noqa + + SUPPORTED_EXTENSIONS += BZ2_EXTENSIONS +except ImportError: + logger.debug("bz2 module is not available") + +try: + # Only for Python 3.3+ + import lzma # noqa + + SUPPORTED_EXTENSIONS += XZ_EXTENSIONS +except ImportError: + logger.debug("lzma module is not available") + + +def current_umask() -> int: + """Get the current umask which involves having to set it temporarily.""" + mask = os.umask(0) + os.umask(mask) + return mask + + +def split_leading_dir(path: str) -> List[str]: + path = path.lstrip("/").lstrip("\\") + if "/" in path and ( + ("\\" in path and path.find("/") < path.find("\\")) or "\\" not in path + ): + return path.split("/", 1) + elif "\\" in path: + return path.split("\\", 1) + else: + return [path, ""] + + +def has_leading_dir(paths: Iterable[str]) -> bool: + """Returns true if all the paths have the same leading path name + (i.e., everything is in one subdirectory in an archive)""" + common_prefix = None + for path in paths: + prefix, rest = split_leading_dir(path) + if not prefix: + return False + elif common_prefix is None: + common_prefix = prefix + elif prefix != common_prefix: + return False + return True + + +def is_within_directory(directory: str, target: str) -> bool: + """ + Return true if the absolute path of target is within the directory + """ + abs_directory = os.path.abspath(directory) + abs_target = os.path.abspath(target) + + prefix = os.path.commonprefix([abs_directory, abs_target]) + return prefix == abs_directory + + +def set_extracted_file_to_default_mode_plus_executable(path: str) -> None: + """ + Make file present at path have execute for user/group/world + (chmod +x) is no-op on windows 
per python docs + """ + os.chmod(path, (0o777 & ~current_umask() | 0o111)) + + +def zip_item_is_executable(info: ZipInfo) -> bool: + mode = info.external_attr >> 16 + # if mode and regular file and any execute permissions for + # user/group/world? + return bool(mode and stat.S_ISREG(mode) and mode & 0o111) + + +def unzip_file(filename: str, location: str, flatten: bool = True) -> None: + """ + Unzip the file (with path `filename`) to the destination `location`. All + files are written based on system defaults and umask (i.e. permissions are + not preserved), except that regular file members with any execute + permissions (user, group, or world) have "chmod +x" applied after being + written. Note that for windows, any execute changes using os.chmod are + no-ops per the python docs. + """ + ensure_dir(location) + zipfp = open(filename, "rb") + try: + zip = zipfile.ZipFile(zipfp, allowZip64=True) + leading = has_leading_dir(zip.namelist()) and flatten + for info in zip.infolist(): + name = info.filename + fn = name + if leading: + fn = split_leading_dir(name)[1] + fn = os.path.join(location, fn) + dir = os.path.dirname(fn) + if not is_within_directory(location, fn): + message = ( + "The zip file ({}) has a file ({}) trying to install " + "outside target directory ({})" + ) + raise InstallationError(message.format(filename, fn, location)) + if fn.endswith("/") or fn.endswith("\\"): + # A directory + ensure_dir(fn) + else: + ensure_dir(dir) + # Don't use read() to avoid allocating an arbitrarily large + # chunk of memory for the file's content + fp = zip.open(name) + try: + with open(fn, "wb") as destfp: + shutil.copyfileobj(fp, destfp) + finally: + fp.close() + if zip_item_is_executable(info): + set_extracted_file_to_default_mode_plus_executable(fn) + finally: + zipfp.close() + + +def untar_file(filename: str, location: str) -> None: + """ + Untar the file (with path `filename`) to the destination `location`. + All files are written based on system defaults and umask (i.e. permissions + are not preserved), except that regular file members with any execute + permissions (user, group, or world) have "chmod +x" applied after being + written. Note that for windows, any execute changes using os.chmod are + no-ops per the python docs. 
+ """ + ensure_dir(location) + if filename.lower().endswith(".gz") or filename.lower().endswith(".tgz"): + mode = "r:gz" + elif filename.lower().endswith(BZ2_EXTENSIONS): + mode = "r:bz2" + elif filename.lower().endswith(XZ_EXTENSIONS): + mode = "r:xz" + elif filename.lower().endswith(".tar"): + mode = "r" + else: + logger.warning( + "Cannot determine compression type for file %s", + filename, + ) + mode = "r:*" + tar = tarfile.open(filename, mode, encoding="utf-8") + try: + leading = has_leading_dir([member.name for member in tar.getmembers()]) + for member in tar.getmembers(): + fn = member.name + if leading: + fn = split_leading_dir(fn)[1] + path = os.path.join(location, fn) + if not is_within_directory(location, path): + message = ( + "The tar file ({}) has a file ({}) trying to install " + "outside target directory ({})" + ) + raise InstallationError(message.format(filename, path, location)) + if member.isdir(): + ensure_dir(path) + elif member.issym(): + try: + # https://github.com/python/typeshed/issues/2673 + tar._extract_member(member, path) # type: ignore + except Exception as exc: + # Some corrupt tar files seem to produce this + # (specifically bad symlinks) + logger.warning( + "In the tar file %s the member %s is invalid: %s", + filename, + member.name, + exc, + ) + continue + else: + try: + fp = tar.extractfile(member) + except (KeyError, AttributeError) as exc: + # Some corrupt tar files seem to produce this + # (specifically bad symlinks) + logger.warning( + "In the tar file %s the member %s is invalid: %s", + filename, + member.name, + exc, + ) + continue + ensure_dir(os.path.dirname(path)) + assert fp is not None + with open(path, "wb") as destfp: + shutil.copyfileobj(fp, destfp) + fp.close() + # Update the timestamp (useful for cython compiled files) + tar.utime(member, path) + # member have any execute permissions for user/group/world? + if member.mode & 0o111: + set_extracted_file_to_default_mode_plus_executable(path) + finally: + tar.close() + + +def unpack_file( + filename: str, + location: str, + content_type: Optional[str] = None, +) -> None: + filename = os.path.realpath(filename) + if ( + content_type == "application/zip" + or filename.lower().endswith(ZIP_EXTENSIONS) + or zipfile.is_zipfile(filename) + ): + unzip_file(filename, location, flatten=not filename.endswith(".whl")) + elif ( + content_type == "application/x-gzip" + or tarfile.is_tarfile(filename) + or filename.lower().endswith(TAR_EXTENSIONS + BZ2_EXTENSIONS + XZ_EXTENSIONS) + ): + untar_file(filename, location) + else: + # FIXME: handle? + # FIXME: magic signatures? 
+ logger.critical( + "Cannot unpack file %s (downloaded from %s, content-type: %s); " + "cannot detect archive format", + filename, + location, + content_type, + ) + raise InstallationError(f"Cannot determine archive format of {location}") diff --git a/lib/python3.11/site-packages/pip/_internal/utils/urls.py b/python/lib/python3.10/site-packages/pip/_internal/utils/urls.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/urls.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/urls.py diff --git a/lib/python3.11/site-packages/pip/_internal/utils/virtualenv.py b/python/lib/python3.10/site-packages/pip/_internal/utils/virtualenv.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/virtualenv.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/virtualenv.py diff --git a/lib/python3.11/site-packages/pip/_internal/utils/wheel.py b/python/lib/python3.10/site-packages/pip/_internal/utils/wheel.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/utils/wheel.py rename to python/lib/python3.10/site-packages/pip/_internal/utils/wheel.py diff --git a/lib/python3.11/site-packages/pip/_internal/vcs/__init__.py b/python/lib/python3.10/site-packages/pip/_internal/vcs/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/vcs/__init__.py rename to python/lib/python3.10/site-packages/pip/_internal/vcs/__init__.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/vcs/bazaar.py b/python/lib/python3.10/site-packages/pip/_internal/vcs/bazaar.py new file mode 100644 index 0000000..a7b16e2 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/vcs/bazaar.py @@ -0,0 +1,101 @@ +import logging +from typing import List, Optional, Tuple + +from pip._internal.utils.misc import HiddenText, display_path +from pip._internal.utils.subprocess import make_command +from pip._internal.utils.urls import path_to_url +from pip._internal.vcs.versioncontrol import ( + AuthInfo, + RemoteNotFoundError, + RevOptions, + VersionControl, + vcs, +) + +logger = logging.getLogger(__name__) + + +class Bazaar(VersionControl): + name = "bzr" + dirname = ".bzr" + repo_name = "branch" + schemes = ( + "bzr+http", + "bzr+https", + "bzr+ssh", + "bzr+sftp", + "bzr+ftp", + "bzr+lp", + "bzr+file", + ) + + @staticmethod + def get_base_rev_args(rev: str) -> List[str]: + return ["-r", rev] + + def fetch_new( + self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int + ) -> None: + rev_display = rev_options.to_display() + logger.info( + "Checking out %s%s to %s", + url, + rev_display, + display_path(dest), + ) + if verbosity <= 0: + flag = "--quiet" + elif verbosity == 1: + flag = "" + else: + flag = f"-{'v'*verbosity}" + cmd_args = make_command("branch", flag, rev_options.to_args(), url, dest) + self.run_command(cmd_args) + + def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: + self.run_command(make_command("switch", url), cwd=dest) + + def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: + cmd_args = make_command("pull", "-q", rev_options.to_args()) + self.run_command(cmd_args, cwd=dest) + + @classmethod + def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]: + # hotfix the URL scheme after removing bzr+ from bzr+ssh:// readd it + url, rev, user_pass = super().get_url_rev_and_auth(url) + if url.startswith("ssh://"): + url = "bzr+" + url + return url, rev, user_pass + + 
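A sketch of the scheme hotfix in get_url_rev_and_auth, using a made-up URL (pip internals assumed importable):

from pip._internal.vcs.bazaar import Bazaar

url, rev, auth = Bazaar.get_url_rev_and_auth("bzr+ssh://example.org/repo@42")
# The base class strips "bzr+" while parsing; the override re-adds it for
# ssh URLs so the VCS scheme survives the round trip:
assert (url, rev, auth) == ("bzr+ssh://example.org/repo", "42", (None, None))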
@classmethod
+    def get_remote_url(cls, location: str) -> str:
+        urls = cls.run_command(
+            ["info"], show_stdout=False, stdout_only=True, cwd=location
+        )
+        for line in urls.splitlines():
+            line = line.strip()
+            for x in ("checkout of branch: ", "parent branch: "):
+                if line.startswith(x):
+                    repo = line.split(x)[1]
+                    if cls._is_local_repository(repo):
+                        return path_to_url(repo)
+                    return repo
+        raise RemoteNotFoundError
+
+    @classmethod
+    def get_revision(cls, location: str) -> str:
+        revision = cls.run_command(
+            ["revno"],
+            show_stdout=False,
+            stdout_only=True,
+            cwd=location,
+        )
+        return revision.splitlines()[-1]
+
+    @classmethod
+    def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
+        """Always assume the versions don't match"""
+        return False
+
+
+vcs.register(Bazaar)
diff --git a/lib/python3.11/site-packages/pip/_internal/vcs/git.py b/python/lib/python3.10/site-packages/pip/_internal/vcs/git.py
similarity index 100%
rename from lib/python3.11/site-packages/pip/_internal/vcs/git.py
rename to python/lib/python3.10/site-packages/pip/_internal/vcs/git.py
diff --git a/lib/python3.11/site-packages/pip/_internal/vcs/mercurial.py b/python/lib/python3.10/site-packages/pip/_internal/vcs/mercurial.py
similarity index 100%
rename from lib/python3.11/site-packages/pip/_internal/vcs/mercurial.py
rename to python/lib/python3.10/site-packages/pip/_internal/vcs/mercurial.py
diff --git a/python/lib/python3.10/site-packages/pip/_internal/vcs/subversion.py b/python/lib/python3.10/site-packages/pip/_internal/vcs/subversion.py
new file mode 100644
index 0000000..89c8754
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_internal/vcs/subversion.py
@@ -0,0 +1,324 @@
+import logging
+import os
+import re
+from typing import List, Optional, Tuple
+
+from pip._internal.utils.misc import (
+    HiddenText,
+    display_path,
+    is_console_interactive,
+    is_installable_dir,
+    split_auth_from_netloc,
+)
+from pip._internal.utils.subprocess import CommandArgs, make_command
+from pip._internal.vcs.versioncontrol import (
+    AuthInfo,
+    RemoteNotFoundError,
+    RevOptions,
+    VersionControl,
+    vcs,
+)
+
+logger = logging.getLogger(__name__)
+
+_svn_xml_url_re = re.compile('url="([^"]+)"')
+_svn_rev_re = re.compile(r'committed-rev="(\d+)"')
+_svn_info_xml_rev_re = re.compile(r'\s*revision="(\d+)"')
+_svn_info_xml_url_re = re.compile(r"<url>(.*)</url>")
+
+
+class Subversion(VersionControl):
+    name = "svn"
+    dirname = ".svn"
+    repo_name = "checkout"
+    schemes = ("svn+ssh", "svn+http", "svn+https", "svn+svn", "svn+file")
+
+    @classmethod
+    def should_add_vcs_url_prefix(cls, remote_url: str) -> bool:
+        return True
+
+    @staticmethod
+    def get_base_rev_args(rev: str) -> List[str]:
+        return ["-r", rev]
+
+    @classmethod
+    def get_revision(cls, location: str) -> str:
+        """
+        Return the maximum revision for all files under a given location
+        """
+        # Note: taken from setuptools.command.egg_info
+        revision = 0
+
+        for base, dirs, _ in os.walk(location):
+            if cls.dirname not in dirs:
+                dirs[:] = []
+                continue  # no sense walking uncontrolled subdirs
+            dirs.remove(cls.dirname)
+            entries_fn = os.path.join(base, cls.dirname, "entries")
+            if not os.path.exists(entries_fn):
+                # FIXME: should we warn?
+                continue
+
+            dirurl, localrev = cls._get_svn_url_rev(base)
+
+            if base == location:
+                assert dirurl is not None
+                base = dirurl + "/"  # save the root url
+            elif not dirurl or not dirurl.startswith(base):
+                dirs[:] = []
+                continue  # not part of the same svn tree, skip it
+            revision = max(revision, localrev)
+        return str(revision)
+
+    @classmethod
+    def get_netloc_and_auth(
+        cls, netloc: str, scheme: str
+    ) -> Tuple[str, Tuple[Optional[str], Optional[str]]]:
+        """
+        This override allows the auth information to be passed to svn via the
+        --username and --password options instead of via the URL.
+        """
+        if scheme == "ssh":
+            # The --username and --password options can't be used for
+            # svn+ssh URLs, so keep the auth information in the URL.
+            return super().get_netloc_and_auth(netloc, scheme)
+
+        return split_auth_from_netloc(netloc)
+
+    @classmethod
+    def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]:
+        # hotfix the URL scheme after removing svn+ from svn+ssh:// re-add it
+        url, rev, user_pass = super().get_url_rev_and_auth(url)
+        if url.startswith("ssh://"):
+            url = "svn+" + url
+        return url, rev, user_pass
+
+    @staticmethod
+    def make_rev_args(
+        username: Optional[str], password: Optional[HiddenText]
+    ) -> CommandArgs:
+        extra_args: CommandArgs = []
+        if username:
+            extra_args += ["--username", username]
+        if password:
+            extra_args += ["--password", password]
+
+        return extra_args
+
+    @classmethod
+    def get_remote_url(cls, location: str) -> str:
+        # In cases where the source is in a subdirectory, we have to look up in
+        # the location until we find a valid project root.
+        orig_location = location
+        while not is_installable_dir(location):
+            last_location = location
+            location = os.path.dirname(location)
+            if location == last_location:
+                # We've traversed up to the root of the filesystem without
+                # finding a Python project.
+                logger.warning(
+                    "Could not find Python project for directory %s (tried all "
+                    "parent directories)",
+                    orig_location,
+                )
+                raise RemoteNotFoundError
+
+        url, _rev = cls._get_svn_url_rev(location)
+        if url is None:
+            raise RemoteNotFoundError
+
+        return url
+
+    @classmethod
+    def _get_svn_url_rev(cls, location: str) -> Tuple[Optional[str], int]:
+        from pip._internal.exceptions import InstallationError
+
+        entries_path = os.path.join(location, cls.dirname, "entries")
+        if os.path.exists(entries_path):
+            with open(entries_path) as f:
+                data = f.read()
+        else:  # subversion >= 1.7 does not have the 'entries' file
+            data = ""
+
+        url = None
+        if data.startswith("8") or data.startswith("9") or data.startswith("10"):
+            entries = list(map(str.splitlines, data.split("\n\x0c\n")))
+            del entries[0][0]  # get rid of the '8'
+            url = entries[0][3]
+            revs = [int(d[9]) for d in entries if len(d) > 9 and d[9]] + [0]
+        elif data.startswith("<?xml"):
+            match = _svn_xml_url_re.search(data)
+            if not match:
+                raise ValueError(f"Badly formatted data: {data!r}")
+            url = match.group(1)  # get repository URL
+            revs = [int(m.group(1)) for m in _svn_rev_re.finditer(data)] + [0]
+        else:
+            try:
+                # subversion >= 1.7
+                # Note that using get_remote_call_options is not necessary here
+                # because `svn info` is being run against a local directory.
+                # We don't need to worry about making sure interactive mode
+                # is being used to prompt for passwords, because passwords
+                # are only potentially needed for remote server requests.
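The module-level patterns applied to `svn info --xml` output can be checked against a fabricated fragment (shape assumed from Subversion's XML format); a standalone sketch mirroring _svn_info_xml_url_re and _svn_info_xml_rev_re:

import re

url_re = re.compile(r"<url>(.*)</url>")      # mirrors _svn_info_xml_url_re
rev_re = re.compile(r'\s*revision="(\d+)"')  # mirrors _svn_info_xml_rev_re

xml = '<entry revision="1842928"><url>https://svn.example.org/trunk</url></entry>'
match = url_re.search(xml)
assert match is not None and match.group(1) == "https://svn.example.org/trunk"
assert [int(m.group(1)) for m in rev_re.finditer(xml)] == [1842928]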
+ xml = cls.run_command( + ["info", "--xml", location], + show_stdout=False, + stdout_only=True, + ) + match = _svn_info_xml_url_re.search(xml) + assert match is not None + url = match.group(1) + revs = [int(m.group(1)) for m in _svn_info_xml_rev_re.finditer(xml)] + except InstallationError: + url, revs = None, [] + + if revs: + rev = max(revs) + else: + rev = 0 + + return url, rev + + @classmethod + def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: + """Always assume the versions don't match""" + return False + + def __init__(self, use_interactive: bool = None) -> None: + if use_interactive is None: + use_interactive = is_console_interactive() + self.use_interactive = use_interactive + + # This member is used to cache the fetched version of the current + # ``svn`` client. + # Special value definitions: + # None: Not evaluated yet. + # Empty tuple: Could not parse version. + self._vcs_version: Optional[Tuple[int, ...]] = None + + super().__init__() + + def call_vcs_version(self) -> Tuple[int, ...]: + """Query the version of the currently installed Subversion client. + + :return: A tuple containing the parts of the version information or + ``()`` if the version returned from ``svn`` could not be parsed. + :raises: BadCommand: If ``svn`` is not installed. + """ + # Example versions: + # svn, version 1.10.3 (r1842928) + # compiled Feb 25 2019, 14:20:39 on x86_64-apple-darwin17.0.0 + # svn, version 1.7.14 (r1542130) + # compiled Mar 28 2018, 08:49:13 on x86_64-pc-linux-gnu + # svn, version 1.12.0-SlikSvn (SlikSvn/1.12.0) + # compiled May 28 2019, 13:44:56 on x86_64-microsoft-windows6.2 + version_prefix = "svn, version " + version = self.run_command(["--version"], show_stdout=False, stdout_only=True) + if not version.startswith(version_prefix): + return () + + version = version[len(version_prefix) :].split()[0] + version_list = version.partition("-")[0].split(".") + try: + parsed_version = tuple(map(int, version_list)) + except ValueError: + return () + + return parsed_version + + def get_vcs_version(self) -> Tuple[int, ...]: + """Return the version of the currently installed Subversion client. + + If the version of the Subversion client has already been queried, + a cached value will be used. + + :return: A tuple containing the parts of the version information or + ``()`` if the version returned from ``svn`` could not be parsed. + :raises: BadCommand: If ``svn`` is not installed. + """ + if self._vcs_version is not None: + # Use cached version, if available. + # If parsing the version failed previously (empty tuple), + # do not attempt to parse it again. + return self._vcs_version + + vcs_version = self.call_vcs_version() + self._vcs_version = vcs_version + return vcs_version + + def get_remote_call_options(self) -> CommandArgs: + """Return options to be used on calls to Subversion that contact the server. + + These options are applicable for the following ``svn`` subcommands used + in this class. + + - checkout + - switch + - update + + :return: A list of command line arguments to pass to ``svn``. + """ + if not self.use_interactive: + # --non-interactive switch is available since Subversion 0.14.4. + # Subversion < 1.8 runs in interactive mode by default. + return ["--non-interactive"] + + svn_version = self.get_vcs_version() + # By default, Subversion >= 1.8 runs in non-interactive mode if + # stdin is not a TTY. 
Since that is how pip invokes SVN, in + # call_subprocess(), pip must pass --force-interactive to ensure + # the user can be prompted for a password, if required. + # SVN added the --force-interactive option in SVN 1.8. Since + # e.g. RHEL/CentOS 7, which is supported until 2024, ships with + # SVN 1.7, pip should continue to support SVN 1.7. Therefore, pip + # can't safely add the option if the SVN version is < 1.8 (or unknown). + if svn_version >= (1, 8): + return ["--force-interactive"] + + return [] + + def fetch_new( + self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int + ) -> None: + rev_display = rev_options.to_display() + logger.info( + "Checking out %s%s to %s", + url, + rev_display, + display_path(dest), + ) + if verbosity <= 0: + flag = "--quiet" + else: + flag = "" + cmd_args = make_command( + "checkout", + flag, + self.get_remote_call_options(), + rev_options.to_args(), + url, + dest, + ) + self.run_command(cmd_args) + + def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: + cmd_args = make_command( + "switch", + self.get_remote_call_options(), + rev_options.to_args(), + url, + dest, + ) + self.run_command(cmd_args) + + def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: + cmd_args = make_command( + "update", + self.get_remote_call_options(), + rev_options.to_args(), + dest, + ) + self.run_command(cmd_args) + + +vcs.register(Subversion) diff --git a/lib/python3.11/site-packages/pip/_internal/vcs/versioncontrol.py b/python/lib/python3.10/site-packages/pip/_internal/vcs/versioncontrol.py similarity index 100% rename from lib/python3.11/site-packages/pip/_internal/vcs/versioncontrol.py rename to python/lib/python3.10/site-packages/pip/_internal/vcs/versioncontrol.py diff --git a/python/lib/python3.10/site-packages/pip/_internal/wheel_builder.py b/python/lib/python3.10/site-packages/pip/_internal/wheel_builder.py new file mode 100644 index 0000000..d066344 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_internal/wheel_builder.py @@ -0,0 +1,377 @@ +"""Orchestrator for building wheels from InstallRequirements. 
+""" + +import logging +import os.path +import re +import shutil +from typing import Any, Callable, Iterable, List, Optional, Tuple + +from pip._vendor.packaging.utils import canonicalize_name, canonicalize_version +from pip._vendor.packaging.version import InvalidVersion, Version + +from pip._internal.cache import WheelCache +from pip._internal.exceptions import InvalidWheelFilename, UnsupportedWheel +from pip._internal.metadata import FilesystemWheel, get_wheel_distribution +from pip._internal.models.link import Link +from pip._internal.models.wheel import Wheel +from pip._internal.operations.build.wheel import build_wheel_pep517 +from pip._internal.operations.build.wheel_editable import build_wheel_editable +from pip._internal.operations.build.wheel_legacy import build_wheel_legacy +from pip._internal.req.req_install import InstallRequirement +from pip._internal.utils.logging import indent_log +from pip._internal.utils.misc import ensure_dir, hash_file, is_wheel_installed +from pip._internal.utils.setuptools_build import make_setuptools_clean_args +from pip._internal.utils.subprocess import call_subprocess +from pip._internal.utils.temp_dir import TempDirectory +from pip._internal.utils.urls import path_to_url +from pip._internal.vcs import vcs + +logger = logging.getLogger(__name__) + +_egg_info_re = re.compile(r"([a-z0-9_.]+)-([a-z0-9_.!+-]+)", re.IGNORECASE) + +BinaryAllowedPredicate = Callable[[InstallRequirement], bool] +BuildResult = Tuple[List[InstallRequirement], List[InstallRequirement]] + + +def _contains_egg_info(s: str) -> bool: + """Determine whether the string looks like an egg_info. + + :param s: The string to parse. E.g. foo-2.1 + """ + return bool(_egg_info_re.search(s)) + + +def _should_build( + req: InstallRequirement, + need_wheel: bool, + check_binary_allowed: BinaryAllowedPredicate, +) -> bool: + """Return whether an InstallRequirement should be built into a wheel.""" + if req.constraint: + # never build requirements that are merely constraints + return False + if req.is_wheel: + if need_wheel: + logger.info( + "Skipping %s, due to already being wheel.", + req.name, + ) + return False + + if need_wheel: + # i.e. pip wheel, not pip install + return True + + # From this point, this concerns the pip install command only + # (need_wheel=False). 
+ + if not req.source_dir: + return False + + if req.editable: + # we only build PEP 660 editable requirements + return req.supports_pyproject_editable() + + if req.use_pep517: + return True + + if not check_binary_allowed(req): + logger.info( + "Skipping wheel build for %s, due to binaries being disabled for it.", + req.name, + ) + return False + + if not is_wheel_installed(): + # we don't build legacy requirements if wheel is not installed + logger.info( + "Using legacy 'setup.py install' for %s, " + "since package 'wheel' is not installed.", + req.name, + ) + return False + + return True + + +def should_build_for_wheel_command( + req: InstallRequirement, +) -> bool: + return _should_build(req, need_wheel=True, check_binary_allowed=_always_true) + + +def should_build_for_install_command( + req: InstallRequirement, + check_binary_allowed: BinaryAllowedPredicate, +) -> bool: + return _should_build( + req, need_wheel=False, check_binary_allowed=check_binary_allowed + ) + + +def _should_cache( + req: InstallRequirement, +) -> Optional[bool]: + """ + Return whether a built InstallRequirement can be stored in the persistent + wheel cache, assuming the wheel cache is available, and _should_build() + has determined a wheel needs to be built. + """ + if req.editable or not req.source_dir: + # never cache editable requirements + return False + + if req.link and req.link.is_vcs: + # VCS checkout. Do not cache + # unless it points to an immutable commit hash. + assert not req.editable + assert req.source_dir + vcs_backend = vcs.get_backend_for_scheme(req.link.scheme) + assert vcs_backend + if vcs_backend.is_immutable_rev_checkout(req.link.url, req.source_dir): + return True + return False + + assert req.link + base, ext = req.link.splitext() + if _contains_egg_info(base): + return True + + # Otherwise, do not cache. + return False + + +def _get_cache_dir( + req: InstallRequirement, + wheel_cache: WheelCache, +) -> str: + """Return the persistent or temporary cache directory where the built + wheel need to be stored. 
+ """ + cache_available = bool(wheel_cache.cache_dir) + assert req.link + if cache_available and _should_cache(req): + cache_dir = wheel_cache.get_path_for_link(req.link) + else: + cache_dir = wheel_cache.get_ephem_path_for_link(req.link) + return cache_dir + + +def _always_true(_: Any) -> bool: + return True + + +def _verify_one(req: InstallRequirement, wheel_path: str) -> None: + canonical_name = canonicalize_name(req.name or "") + w = Wheel(os.path.basename(wheel_path)) + if canonicalize_name(w.name) != canonical_name: + raise InvalidWheelFilename( + "Wheel has unexpected file name: expected {!r}, " + "got {!r}".format(canonical_name, w.name), + ) + dist = get_wheel_distribution(FilesystemWheel(wheel_path), canonical_name) + dist_verstr = str(dist.version) + if canonicalize_version(dist_verstr) != canonicalize_version(w.version): + raise InvalidWheelFilename( + "Wheel has unexpected file name: expected {!r}, " + "got {!r}".format(dist_verstr, w.version), + ) + metadata_version_value = dist.metadata_version + if metadata_version_value is None: + raise UnsupportedWheel("Missing Metadata-Version") + try: + metadata_version = Version(metadata_version_value) + except InvalidVersion: + msg = f"Invalid Metadata-Version: {metadata_version_value}" + raise UnsupportedWheel(msg) + if metadata_version >= Version("1.2") and not isinstance(dist.version, Version): + raise UnsupportedWheel( + "Metadata 1.2 mandates PEP 440 version, " + "but {!r} is not".format(dist_verstr) + ) + + +def _build_one( + req: InstallRequirement, + output_dir: str, + verify: bool, + build_options: List[str], + global_options: List[str], + editable: bool, +) -> Optional[str]: + """Build one wheel. + + :return: The filename of the built wheel, or None if the build failed. + """ + artifact = "editable" if editable else "wheel" + try: + ensure_dir(output_dir) + except OSError as e: + logger.warning( + "Building %s for %s failed: %s", + artifact, + req.name, + e, + ) + return None + + # Install build deps into temporary directory (PEP 518) + with req.build_env: + wheel_path = _build_one_inside_env( + req, output_dir, build_options, global_options, editable + ) + if wheel_path and verify: + try: + _verify_one(req, wheel_path) + except (InvalidWheelFilename, UnsupportedWheel) as e: + logger.warning("Built %s for %s is invalid: %s", artifact, req.name, e) + return None + return wheel_path + + +def _build_one_inside_env( + req: InstallRequirement, + output_dir: str, + build_options: List[str], + global_options: List[str], + editable: bool, +) -> Optional[str]: + with TempDirectory(kind="wheel") as temp_dir: + assert req.name + if req.use_pep517: + assert req.metadata_directory + assert req.pep517_backend + if global_options: + logger.warning( + "Ignoring --global-option when building %s using PEP 517", req.name + ) + if build_options: + logger.warning( + "Ignoring --build-option when building %s using PEP 517", req.name + ) + if editable: + wheel_path = build_wheel_editable( + name=req.name, + backend=req.pep517_backend, + metadata_directory=req.metadata_directory, + tempd=temp_dir.path, + ) + else: + wheel_path = build_wheel_pep517( + name=req.name, + backend=req.pep517_backend, + metadata_directory=req.metadata_directory, + tempd=temp_dir.path, + ) + else: + wheel_path = build_wheel_legacy( + name=req.name, + setup_py_path=req.setup_py_path, + source_dir=req.unpacked_source_directory, + global_options=global_options, + build_options=build_options, + tempd=temp_dir.path, + ) + + if wheel_path is not None: + wheel_name = 
os.path.basename(wheel_path) + dest_path = os.path.join(output_dir, wheel_name) + try: + wheel_hash, length = hash_file(wheel_path) + shutil.move(wheel_path, dest_path) + logger.info( + "Created wheel for %s: filename=%s size=%d sha256=%s", + req.name, + wheel_name, + length, + wheel_hash.hexdigest(), + ) + logger.info("Stored in directory: %s", output_dir) + return dest_path + except Exception as e: + logger.warning( + "Building wheel for %s failed: %s", + req.name, + e, + ) + # Ignore return, we can't do anything else useful. + if not req.use_pep517: + _clean_one_legacy(req, global_options) + return None + + +def _clean_one_legacy(req: InstallRequirement, global_options: List[str]) -> bool: + clean_args = make_setuptools_clean_args( + req.setup_py_path, + global_options=global_options, + ) + + logger.info("Running setup.py clean for %s", req.name) + try: + call_subprocess( + clean_args, command_desc="python setup.py clean", cwd=req.source_dir + ) + return True + except Exception: + logger.error("Failed cleaning build dir for %s", req.name) + return False + + +def build( + requirements: Iterable[InstallRequirement], + wheel_cache: WheelCache, + verify: bool, + build_options: List[str], + global_options: List[str], +) -> BuildResult: + """Build wheels. + + :return: The list of InstallRequirement that succeeded to build and + the list of InstallRequirement that failed to build. + """ + if not requirements: + return [], [] + + # Build the wheels. + logger.info( + "Building wheels for collected packages: %s", + ", ".join(req.name for req in requirements), # type: ignore + ) + + with indent_log(): + build_successes, build_failures = [], [] + for req in requirements: + assert req.name + cache_dir = _get_cache_dir(req, wheel_cache) + wheel_file = _build_one( + req, + cache_dir, + verify, + build_options, + global_options, + req.editable and req.permit_editable_wheels, + ) + if wheel_file: + # Update the link for this. + req.link = Link(path_to_url(wheel_file)) + req.local_file_path = req.link.file_path + assert req.link.is_wheel + build_successes.append(req) + else: + build_failures.append(req) + + # notify success/failure + if build_successes: + logger.info( + "Successfully built %s", + " ".join([req.name for req in build_successes]), # type: ignore + ) + if build_failures: + logger.info( + "Failed to build %s", + " ".join([req.name for req in build_failures]), # type: ignore + ) + # Return a list of requirements that failed to build + return build_successes, build_failures diff --git a/python/lib/python3.10/site-packages/pip/_vendor/__init__.py b/python/lib/python3.10/site-packages/pip/_vendor/__init__.py new file mode 100644 index 0000000..3843cb0 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/__init__.py @@ -0,0 +1,111 @@ +""" +pip._vendor is for vendoring dependencies of pip to prevent needing pip to +depend on something external. + +Files inside of pip._vendor should be considered immutable and should only be +updated to versions from upstream. +""" +from __future__ import absolute_import + +import glob +import os.path +import sys + +# Downstream redistributors which have debundled our dependencies should also +# patch this value to be true. This will trigger the additional patching +# to cause things like "six" to be available as pip. +DEBUNDLED = False + +# By default, look in this directory for a bunch of .whl files which we will +# add to the beginning of sys.path before attempting to import anything. 
This +# is done to support downstream re-distributors like Debian and Fedora who +# wish to create their own Wheels for our dependencies to aid in debundling. +WHEEL_DIR = os.path.abspath(os.path.dirname(__file__)) + + +# Define a small helper function to alias our vendored modules to the real ones +# if the vendored ones do not exist. This idea of this was taken from +# https://github.com/kennethreitz/requests/pull/2567. +def vendored(modulename): + vendored_name = "{0}.{1}".format(__name__, modulename) + + try: + __import__(modulename, globals(), locals(), level=0) + except ImportError: + # We can just silently allow import failures to pass here. If we + # got to this point it means that ``import pip._vendor.whatever`` + # failed and so did ``import whatever``. Since we're importing this + # upfront in an attempt to alias imports, not erroring here will + # just mean we get a regular import error whenever pip *actually* + # tries to import one of these modules to use it, which actually + # gives us a better error message than we would have otherwise + # gotten. + pass + else: + sys.modules[vendored_name] = sys.modules[modulename] + base, head = vendored_name.rsplit(".", 1) + setattr(sys.modules[base], head, sys.modules[modulename]) + + +# If we're operating in a debundled setup, then we want to go ahead and trigger +# the aliasing of our vendored libraries as well as looking for wheels to add +# to our sys.path. This will cause all of this code to be a no-op typically +# however downstream redistributors can enable it in a consistent way across +# all platforms. +if DEBUNDLED: + # Actually look inside of WHEEL_DIR to find .whl files and add them to the + # front of our sys.path. + sys.path[:] = glob.glob(os.path.join(WHEEL_DIR, "*.whl")) + sys.path + + # Actually alias all of our vendored dependencies. + vendored("cachecontrol") + vendored("certifi") + vendored("colorama") + vendored("distlib") + vendored("distro") + vendored("html5lib") + vendored("six") + vendored("six.moves") + vendored("six.moves.urllib") + vendored("six.moves.urllib.parse") + vendored("packaging") + vendored("packaging.version") + vendored("packaging.specifiers") + vendored("pep517") + vendored("pkg_resources") + vendored("platformdirs") + vendored("progress") + vendored("requests") + vendored("requests.exceptions") + vendored("requests.packages") + vendored("requests.packages.urllib3") + vendored("requests.packages.urllib3._collections") + vendored("requests.packages.urllib3.connection") + vendored("requests.packages.urllib3.connectionpool") + vendored("requests.packages.urllib3.contrib") + vendored("requests.packages.urllib3.contrib.ntlmpool") + vendored("requests.packages.urllib3.contrib.pyopenssl") + vendored("requests.packages.urllib3.exceptions") + vendored("requests.packages.urllib3.fields") + vendored("requests.packages.urllib3.filepost") + vendored("requests.packages.urllib3.packages") + vendored("requests.packages.urllib3.packages.ordered_dict") + vendored("requests.packages.urllib3.packages.six") + vendored("requests.packages.urllib3.packages.ssl_match_hostname") + vendored("requests.packages.urllib3.packages.ssl_match_hostname." 
+ "_implementation") + vendored("requests.packages.urllib3.poolmanager") + vendored("requests.packages.urllib3.request") + vendored("requests.packages.urllib3.response") + vendored("requests.packages.urllib3.util") + vendored("requests.packages.urllib3.util.connection") + vendored("requests.packages.urllib3.util.request") + vendored("requests.packages.urllib3.util.response") + vendored("requests.packages.urllib3.util.retry") + vendored("requests.packages.urllib3.util.ssl_") + vendored("requests.packages.urllib3.util.timeout") + vendored("requests.packages.urllib3.util.url") + vendored("resolvelib") + vendored("tenacity") + vendored("tomli") + vendored("urllib3") diff --git a/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/__init__.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/__init__.py new file mode 100644 index 0000000..8435d62 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/__init__.py @@ -0,0 +1,18 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +"""CacheControl import Interface. + +Make it easy to import from cachecontrol without long namespaces. +""" +__author__ = "Eric Larson" +__email__ = "eric@ionrock.org" +__version__ = "0.12.10" + +from .wrapper import CacheControl +from .adapter import CacheControlAdapter +from .controller import CacheController + +import logging +logging.getLogger(__name__).addHandler(logging.NullHandler()) diff --git a/lib/python3.11/site-packages/pip/_vendor/cachecontrol/_cmd.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/_cmd.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/cachecontrol/_cmd.py rename to python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/_cmd.py diff --git a/lib/python3.11/site-packages/pip/_vendor/cachecontrol/adapter.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/adapter.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/cachecontrol/adapter.py rename to python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/adapter.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/cache.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/cache.py new file mode 100644 index 0000000..44e4309 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/cache.py @@ -0,0 +1,43 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +""" +The cache object API for implementing caches. The default is a thread +safe in-memory dictionary. 
+""" +from threading import Lock + + +class BaseCache(object): + + def get(self, key): + raise NotImplementedError() + + def set(self, key, value, expires=None): + raise NotImplementedError() + + def delete(self, key): + raise NotImplementedError() + + def close(self): + pass + + +class DictCache(BaseCache): + + def __init__(self, init_dict=None): + self.lock = Lock() + self.data = init_dict or {} + + def get(self, key): + return self.data.get(key, None) + + def set(self, key, value, expires=None): + with self.lock: + self.data.update({key: value}) + + def delete(self, key): + with self.lock: + if key in self.data: + self.data.pop(key) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/caches/__init__.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/caches/__init__.py new file mode 100644 index 0000000..44becd6 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/caches/__init__.py @@ -0,0 +1,6 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +from .file_cache import FileCache # noqa +from .redis_cache import RedisCache # noqa diff --git a/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/caches/file_cache.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/caches/file_cache.py new file mode 100644 index 0000000..6cd1106 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/caches/file_cache.py @@ -0,0 +1,150 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +import hashlib +import os +from textwrap import dedent + +from ..cache import BaseCache +from ..controller import CacheController + +try: + FileNotFoundError +except NameError: + # py2.X + FileNotFoundError = (IOError, OSError) + + +def _secure_open_write(filename, fmode): + # We only want to write to this file, so open it in write only mode + flags = os.O_WRONLY + + # os.O_CREAT | os.O_EXCL will fail if the file already exists, so we only + # will open *new* files. + # We specify this because we want to ensure that the mode we pass is the + # mode of the file. + flags |= os.O_CREAT | os.O_EXCL + + # Do not follow symlinks to prevent someone from making a symlink that + # we follow and insecurely open a cache file. + if hasattr(os, "O_NOFOLLOW"): + flags |= os.O_NOFOLLOW + + # On Windows we'll mark this file as binary + if hasattr(os, "O_BINARY"): + flags |= os.O_BINARY + + # Before we open our file, we want to delete any existing file that is + # there + try: + os.remove(filename) + except (IOError, OSError): + # The file must not exist already, so we can just skip ahead to opening + pass + + # Open our file, the use of os.O_CREAT | os.O_EXCL will ensure that if a + # race condition happens between the os.remove and this line, that an + # error will be raised. Because we utilize a lockfile this should only + # happen if someone is attempting to attack us. 
+ fd = os.open(filename, flags, fmode) + try: + return os.fdopen(fd, "wb") + + except: + # An error occurred wrapping our FD in a file object + os.close(fd) + raise + + +class FileCache(BaseCache): + + def __init__( + self, + directory, + forever=False, + filemode=0o0600, + dirmode=0o0700, + use_dir_lock=None, + lock_class=None, + ): + + if use_dir_lock is not None and lock_class is not None: + raise ValueError("Cannot use use_dir_lock and lock_class together") + + try: + from lockfile import LockFile + from lockfile.mkdirlockfile import MkdirLockFile + except ImportError: + notice = dedent( + """ + NOTE: In order to use the FileCache you must have + lockfile installed. You can install it via pip: + pip install lockfile + """ + ) + raise ImportError(notice) + + else: + if use_dir_lock: + lock_class = MkdirLockFile + + elif lock_class is None: + lock_class = LockFile + + self.directory = directory + self.forever = forever + self.filemode = filemode + self.dirmode = dirmode + self.lock_class = lock_class + + @staticmethod + def encode(x): + return hashlib.sha224(x.encode()).hexdigest() + + def _fn(self, name): + # NOTE: This method should not change as some may depend on it. + # See: https://github.com/ionrock/cachecontrol/issues/63 + hashed = self.encode(name) + parts = list(hashed[:5]) + [hashed] + return os.path.join(self.directory, *parts) + + def get(self, key): + name = self._fn(key) + try: + with open(name, "rb") as fh: + return fh.read() + + except FileNotFoundError: + return None + + def set(self, key, value, expires=None): + name = self._fn(key) + + # Make sure the directory exists + try: + os.makedirs(os.path.dirname(name), self.dirmode) + except (IOError, OSError): + pass + + with self.lock_class(name) as lock: + # Write our actual file + with _secure_open_write(lock.path, self.filemode) as fh: + fh.write(value) + + def delete(self, key): + name = self._fn(key) + if not self.forever: + try: + os.remove(name) + except FileNotFoundError: + pass + + +def url_to_file_path(url, filecache): + """Return the file cache path based on the URL. + + This does not ensure the file exists! + """ + key = CacheController.cache_url(url) + return filecache._fn(key) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py new file mode 100644 index 0000000..720b507 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py @@ -0,0 +1,37 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +from __future__ import division + +from datetime import datetime +from pip._vendor.cachecontrol.cache import BaseCache + + +class RedisCache(BaseCache): + + def __init__(self, conn): + self.conn = conn + + def get(self, key): + return self.conn.get(key) + + def set(self, key, value, expires=None): + if not expires: + self.conn.set(key, value) + else: + expires = expires - datetime.utcnow() + self.conn.setex(key, int(expires.total_seconds()), value) + + def delete(self, key): + self.conn.delete(key) + + def clear(self): + """Helper for clearing all the keys in a database. 
Use with + caution!""" + for key in self.conn.keys(): + self.conn.delete(key) + + def close(self): + """Redis uses connection pooling, no need to close the connection.""" + pass diff --git a/lib/python3.11/site-packages/pip/_vendor/cachecontrol/compat.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/compat.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/cachecontrol/compat.py rename to python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/compat.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/controller.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/controller.py new file mode 100644 index 0000000..d7e7380 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/controller.py @@ -0,0 +1,415 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +""" +The httplib2 algorithms ported for use with requests. +""" +import logging +import re +import calendar +import time +from email.utils import parsedate_tz + +from pip._vendor.requests.structures import CaseInsensitiveDict + +from .cache import DictCache +from .serialize import Serializer + + +logger = logging.getLogger(__name__) + +URI = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?") + +PERMANENT_REDIRECT_STATUSES = (301, 308) + + +def parse_uri(uri): + """Parses a URI using the regex given in Appendix B of RFC 3986. + + (scheme, authority, path, query, fragment) = parse_uri(uri) + """ + groups = URI.match(uri).groups() + return (groups[1], groups[3], groups[4], groups[6], groups[8]) + + +class CacheController(object): + """An interface to see if request should cached or not. + """ + + def __init__( + self, cache=None, cache_etags=True, serializer=None, status_codes=None + ): + self.cache = DictCache() if cache is None else cache + self.cache_etags = cache_etags + self.serializer = serializer or Serializer() + self.cacheable_status_codes = status_codes or (200, 203, 300, 301, 308) + + @classmethod + def _urlnorm(cls, uri): + """Normalize the URL to create a safe key for the cache""" + (scheme, authority, path, query, fragment) = parse_uri(uri) + if not scheme or not authority: + raise Exception("Only absolute URIs are allowed. uri = %s" % uri) + + scheme = scheme.lower() + authority = authority.lower() + + if not path: + path = "/" + + # Could do syntax based normalization of the URI before + # computing the digest. See Section 6.2.2 of Std 66. 
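What the normalization yields for a typical URL, sketched with a made-up address:

from pip._vendor.cachecontrol.controller import CacheController

# Scheme and authority are lowercased, the fragment is dropped, and an
# empty path would become "/"; the result is used as the cache key.
assert (
    CacheController.cache_url("HTTP://Example.COM/a?b=1#frag")
    == "http://example.com/a?b=1"
)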
+ request_uri = query and "?".join([path, query]) or path + defrag_uri = scheme + "://" + authority + request_uri + + return defrag_uri + + @classmethod + def cache_url(cls, uri): + return cls._urlnorm(uri) + + def parse_cache_control(self, headers): + known_directives = { + # https://tools.ietf.org/html/rfc7234#section-5.2 + "max-age": (int, True), + "max-stale": (int, False), + "min-fresh": (int, True), + "no-cache": (None, False), + "no-store": (None, False), + "no-transform": (None, False), + "only-if-cached": (None, False), + "must-revalidate": (None, False), + "public": (None, False), + "private": (None, False), + "proxy-revalidate": (None, False), + "s-maxage": (int, True), + } + + cc_headers = headers.get("cache-control", headers.get("Cache-Control", "")) + + retval = {} + + for cc_directive in cc_headers.split(","): + if not cc_directive.strip(): + continue + + parts = cc_directive.split("=", 1) + directive = parts[0].strip() + + try: + typ, required = known_directives[directive] + except KeyError: + logger.debug("Ignoring unknown cache-control directive: %s", directive) + continue + + if not typ or not required: + retval[directive] = None + if typ: + try: + retval[directive] = typ(parts[1].strip()) + except IndexError: + if required: + logger.debug( + "Missing value for cache-control " "directive: %s", + directive, + ) + except ValueError: + logger.debug( + "Invalid value for cache-control directive " "%s, must be %s", + directive, + typ.__name__, + ) + + return retval + + def cached_request(self, request): + """ + Return a cached response if it exists in the cache, otherwise + return False. + """ + cache_url = self.cache_url(request.url) + logger.debug('Looking up "%s" in the cache', cache_url) + cc = self.parse_cache_control(request.headers) + + # Bail out if the request insists on fresh data + if "no-cache" in cc: + logger.debug('Request header has "no-cache", cache bypassed') + return False + + if "max-age" in cc and cc["max-age"] == 0: + logger.debug('Request header has "max_age" as 0, cache bypassed') + return False + + # Request allows serving from the cache, let's see if we find something + cache_data = self.cache.get(cache_url) + if cache_data is None: + logger.debug("No cache entry available") + return False + + # Check whether it can be deserialized + resp = self.serializer.loads(request, cache_data) + if not resp: + logger.warning("Cache entry deserialization failed, entry ignored") + return False + + # If we have a cached permanent redirect, return it immediately. We + # don't need to test our response for other headers b/c it is + # intrinsically "cacheable" as it is Permanent. + # + # See: + # https://tools.ietf.org/html/rfc7231#section-6.4.2 + # + # Client can try to refresh the value by repeating the request + # with cache busting headers as usual (ie no-cache). + if int(resp.status) in PERMANENT_REDIRECT_STATUSES: + msg = ( + "Returning cached permanent redirect response " + "(ignoring date and etag information)" + ) + logger.debug(msg) + return resp + + headers = CaseInsensitiveDict(resp.headers) + if not headers or "date" not in headers: + if "etag" not in headers: + # Without date or etag, the cached response can never be used + # and should be deleted. 
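parse_cache_control applied to a representative header value, as a sketch (vendored import path assumed):

from pip._vendor.cachecontrol.controller import CacheController

cc = CacheController().parse_cache_control(
    {"cache-control": "max-age=3600, no-store, max-stale"}
)
# Typed directives are coerced, value-less directives map to None, and a
# missing value for an optional directive (max-stale) is tolerated:
assert cc == {"max-age": 3600, "no-store": None, "max-stale": None}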
+ logger.debug("Purging cached response: no date or etag") + self.cache.delete(cache_url) + logger.debug("Ignoring cached response: no date") + return False + + now = time.time() + date = calendar.timegm(parsedate_tz(headers["date"])) + current_age = max(0, now - date) + logger.debug("Current age based on date: %i", current_age) + + # TODO: There is an assumption that the result will be a + # urllib3 response object. This may not be best since we + # could probably avoid instantiating or constructing the + # response until we know we need it. + resp_cc = self.parse_cache_control(headers) + + # determine freshness + freshness_lifetime = 0 + + # Check the max-age pragma in the cache control header + if "max-age" in resp_cc: + freshness_lifetime = resp_cc["max-age"] + logger.debug("Freshness lifetime from max-age: %i", freshness_lifetime) + + # If there isn't a max-age, check for an expires header + elif "expires" in headers: + expires = parsedate_tz(headers["expires"]) + if expires is not None: + expire_time = calendar.timegm(expires) - date + freshness_lifetime = max(0, expire_time) + logger.debug("Freshness lifetime from expires: %i", freshness_lifetime) + + # Determine if we are setting freshness limit in the + # request. Note, this overrides what was in the response. + if "max-age" in cc: + freshness_lifetime = cc["max-age"] + logger.debug( + "Freshness lifetime from request max-age: %i", freshness_lifetime + ) + + if "min-fresh" in cc: + min_fresh = cc["min-fresh"] + # adjust our current age by our min fresh + current_age += min_fresh + logger.debug("Adjusted current age from min-fresh: %i", current_age) + + # Return entry if it is fresh enough + if freshness_lifetime > current_age: + logger.debug('The response is "fresh", returning cached response') + logger.debug("%i > %i", freshness_lifetime, current_age) + return resp + + # we're not fresh. If we don't have an Etag, clear it out + if "etag" not in headers: + logger.debug('The cached response is "stale" with no etag, purging') + self.cache.delete(cache_url) + + # return the original handler + return False + + def conditional_headers(self, request): + cache_url = self.cache_url(request.url) + resp = self.serializer.loads(request, self.cache.get(cache_url)) + new_headers = {} + + if resp: + headers = CaseInsensitiveDict(resp.headers) + + if "etag" in headers: + new_headers["If-None-Match"] = headers["ETag"] + + if "last-modified" in headers: + new_headers["If-Modified-Since"] = headers["Last-Modified"] + + return new_headers + + def cache_response(self, request, response, body=None, status_codes=None): + """ + Algorithm for caching requests. + + This assumes a requests Response object. + """ + # From httplib2: Don't cache 206's since we aren't going to + # handle byte range requests + cacheable_status_codes = status_codes or self.cacheable_status_codes + if response.status not in cacheable_status_codes: + logger.debug( + "Status code %s not in %s", response.status, cacheable_status_codes + ) + return + + response_headers = CaseInsensitiveDict(response.headers) + + if "date" in response_headers: + date = calendar.timegm(parsedate_tz(response_headers["date"])) + else: + date = 0 + + # If we've been given a body, our response has a Content-Length, that + # Content-Length is valid then we can check to see if the body we've + # been given matches the expected size, and if it doesn't we'll just + # skip trying to cache it. 
+        if (
+            body is not None
+            and "content-length" in response_headers
+            and response_headers["content-length"].isdigit()
+            and int(response_headers["content-length"]) != len(body)
+        ):
+            return
+
+        cc_req = self.parse_cache_control(request.headers)
+        cc = self.parse_cache_control(response_headers)
+
+        cache_url = self.cache_url(request.url)
+        logger.debug('Updating cache with response from "%s"', cache_url)
+
+        # Delete it from the cache if we happen to have it stored there
+        no_store = False
+        if "no-store" in cc:
+            no_store = True
+            logger.debug('Response header has "no-store"')
+        if "no-store" in cc_req:
+            no_store = True
+            logger.debug('Request header has "no-store"')
+        if no_store and self.cache.get(cache_url):
+            logger.debug('Purging existing cache entry to honor "no-store"')
+            self.cache.delete(cache_url)
+        if no_store:
+            return
+
+        # https://tools.ietf.org/html/rfc7234#section-4.1:
+        # A Vary header field-value of "*" always fails to match.
+        # Storing such a response leads to a deserialization warning
+        # during cache lookup and is not allowed to ever be served,
+        # so storing it can be avoided.
+        if "*" in response_headers.get("vary", ""):
+            logger.debug('Response header has "Vary: *"')
+            return
+
+        # If we've been given an etag, then keep the response
+        if self.cache_etags and "etag" in response_headers:
+            expires_time = 0
+            if response_headers.get("expires"):
+                expires = parsedate_tz(response_headers["expires"])
+                if expires is not None:
+                    expires_time = calendar.timegm(expires) - date
+
+            expires_time = max(expires_time, 14 * 86400)
+
+            logger.debug("etag object cached for {0} seconds".format(expires_time))
+            logger.debug("Caching due to etag")
+            self.cache.set(
+                cache_url,
+                self.serializer.dumps(request, response, body),
+                expires=expires_time,
+            )
+
+        # Add to the cache any permanent redirects. We do this before looking
+        # at the Date headers.
+        elif int(response.status) in PERMANENT_REDIRECT_STATUSES:
+            logger.debug("Caching permanent redirect")
+            self.cache.set(cache_url, self.serializer.dumps(request, response, b""))
+
+        # Add to the cache if the response headers demand it. If there
+        # is no date header then we can't do anything about expiring
+        # the cache.
+        elif "date" in response_headers:
+            date = calendar.timegm(parsedate_tz(response_headers["date"]))
+            # cache when there is a max-age > 0
+            if "max-age" in cc and cc["max-age"] > 0:
+                logger.debug("Caching b/c date exists and max-age > 0")
+                expires_time = cc["max-age"]
+                self.cache.set(
+                    cache_url,
+                    self.serializer.dumps(request, response, body),
+                    expires=expires_time,
+                )
+
+            # If the request can expire, it means we should cache it
+            # in the meantime.
+            elif "expires" in response_headers:
+                if response_headers["expires"]:
+                    expires = parsedate_tz(response_headers["expires"])
+                    if expires is not None:
+                        expires_time = calendar.timegm(expires) - date
+                    else:
+                        expires_time = None
+
+                    logger.debug(
+                        "Caching b/c of expires header. expires in {0} seconds".format(
+                            expires_time
+                        )
+                    )
+                    self.cache.set(
+                        cache_url,
+                        self.serializer.dumps(request, response, body=body),
+                        expires=expires_time,
+                    )
+
+    def update_cached_response(self, request, response):
+        """On a 304 we will get a new set of headers that we want to
+        update our cached value with, assuming we have one.
+
+        This should only ever be called when we've sent an ETag and
+        gotten a 304 as the response.
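+
+        For example: we send "If-None-Match: <etag>", the server replies
+        304 Not Modified, and we refresh the stored headers while keeping
+        the cached body, returning it to the caller as a regular 200.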
+ """ + cache_url = self.cache_url(request.url) + + cached_response = self.serializer.loads(request, self.cache.get(cache_url)) + + if not cached_response: + # we didn't have a cached response + return response + + # Lets update our headers with the headers from the new request: + # http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26#section-4.1 + # + # The server isn't supposed to send headers that would make + # the cached body invalid. But... just in case, we'll be sure + # to strip out ones we know that might be problmatic due to + # typical assumptions. + excluded_headers = ["content-length"] + + cached_response.headers.update( + dict( + (k, v) + for k, v in response.headers.items() + if k.lower() not in excluded_headers + ) + ) + + # we want a 200 b/c we have content via the cache + cached_response.status = 200 + + # update our cache + self.cache.set(cache_url, self.serializer.dumps(request, cached_response)) + + return cached_response diff --git a/lib/python3.11/site-packages/pip/_vendor/cachecontrol/filewrapper.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/filewrapper.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/cachecontrol/filewrapper.py rename to python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/filewrapper.py diff --git a/lib/python3.11/site-packages/pip/_vendor/cachecontrol/heuristics.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/heuristics.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/cachecontrol/heuristics.py rename to python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/heuristics.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/serialize.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/serialize.py new file mode 100644 index 0000000..b075df1 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/serialize.py @@ -0,0 +1,186 @@ +# SPDX-FileCopyrightText: 2015 Eric Larson +# +# SPDX-License-Identifier: Apache-2.0 + +import base64 +import io +import json +import zlib + +from pip._vendor import msgpack +from pip._vendor.requests.structures import CaseInsensitiveDict + +from .compat import HTTPResponse, pickle, text_type + + +def _b64_decode_bytes(b): + return base64.b64decode(b.encode("ascii")) + + +def _b64_decode_str(s): + return _b64_decode_bytes(s).decode("utf8") + + +_default_body_read = object() + + +class Serializer(object): + def dumps(self, request, response, body=None): + response_headers = CaseInsensitiveDict(response.headers) + + if body is None: + # When a body isn't passed in, we'll read the response. We + # also update the response with a new file handler to be + # sure it acts as though it was never read. + body = response.read(decode_content=False) + response._fp = io.BytesIO(body) + + # NOTE: This is all a bit weird, but it's really important that on + # Python 2.x these objects are unicode and not str, even when + # they contain only ascii. The problem here is that msgpack + # understands the difference between unicode and bytes and we + # have it set to differentiate between them, however Python 2 + # doesn't know the difference. Forcing these to unicode will be + # enough to have msgpack know the difference. 
+        data = {
+            u"response": {
+                u"body": body,
+                u"headers": dict(
+                    (text_type(k), text_type(v)) for k, v in response.headers.items()
+                ),
+                u"status": response.status,
+                u"version": response.version,
+                u"reason": text_type(response.reason),
+                u"strict": response.strict,
+                u"decode_content": response.decode_content,
+            }
+        }
+
+        # Construct our vary headers
+        data[u"vary"] = {}
+        if u"vary" in response_headers:
+            varied_headers = response_headers[u"vary"].split(",")
+            for header in varied_headers:
+                header = text_type(header).strip()
+                header_value = request.headers.get(header, None)
+                if header_value is not None:
+                    header_value = text_type(header_value)
+                data[u"vary"][header] = header_value
+
+        return b",".join([b"cc=4", msgpack.dumps(data, use_bin_type=True)])
+
+    def loads(self, request, data):
+        # Short circuit if we've been given an empty set of data
+        if not data:
+            return
+
+        # Determine what version of the serializer the data was serialized
+        # with
+        try:
+            ver, data = data.split(b",", 1)
+        except ValueError:
+            ver = b"cc=0"
+
+        # Make sure that our "ver" is actually a version and isn't a false
+        # positive from a , being in the data stream.
+        if ver[:3] != b"cc=":
+            data = ver + data
+            ver = b"cc=0"
+
+        # Get the version number out of the cc=N
+        ver = ver.split(b"=", 1)[-1].decode("ascii")
+
+        # Dispatch to the actual load method for the given version
+        try:
+            return getattr(self, "_loads_v{}".format(ver))(request, data)
+
+        except AttributeError:
+            # This is a version we don't have a loads function for, so we'll
+            # just treat it as a miss and return None
+            return
+
+    def prepare_response(self, request, cached):
+        """Verify our vary headers match and construct a real urllib3
+        HTTPResponse object.
+        """
+        # Special case the '*' Vary value as it means we cannot actually
+        # determine if the cached response is suitable for this request.
+        # This case is also handled in the controller code when creating
+        # a cache entry, but is left here for backwards compatibility.
+        if "*" in cached.get("vary", {}):
+            return
+
+        # Ensure that the Vary headers for the cached response match our
+        # request
+        for header, value in cached.get("vary", {}).items():
+            if request.headers.get(header, None) != value:
+                return
+
+        body_raw = cached["response"].pop("body")
+
+        headers = CaseInsensitiveDict(data=cached["response"]["headers"])
+        if headers.get("transfer-encoding", "") == "chunked":
+            headers.pop("transfer-encoding")
+
+        cached["response"]["headers"] = headers
+
+        try:
+            body = io.BytesIO(body_raw)
+        except TypeError:
+            # This can happen if cachecontrol serialized to v1 format (pickle)
+            # using Python 2. A Python 2 str(byte string) will be unpickled as
+            # a Python 3 str (unicode string), which will cause the above to
+            # fail with:
+            #
+            #     TypeError: 'str' does not support the buffer interface
+            body = io.BytesIO(body_raw.encode("utf8"))
+
+        return HTTPResponse(body=body, preload_content=False, **cached["response"])
+
+    def _loads_v0(self, request, data):
+        # The original legacy cache data. This doesn't contain enough
+        # information to construct everything we need, so we'll treat this as
+        # a miss.
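+        # (Format history, per the methods below: v0 legacy and v3 are
+        # always treated as misses, v1 is pickle, v2 is zlib-compressed
+        # JSON with base64-encoded fields, v4 is the current msgpack form.)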
+        return
+
+    def _loads_v1(self, request, data):
+        try:
+            cached = pickle.loads(data)
+        except ValueError:
+            return
+
+        return self.prepare_response(request, cached)
+
+    def _loads_v2(self, request, data):
+        try:
+            cached = json.loads(zlib.decompress(data).decode("utf8"))
+        except (ValueError, zlib.error):
+            return
+
+        # We need to decode the items that we've base64 encoded
+        cached["response"]["body"] = _b64_decode_bytes(cached["response"]["body"])
+        cached["response"]["headers"] = dict(
+            (_b64_decode_str(k), _b64_decode_str(v))
+            for k, v in cached["response"]["headers"].items()
+        )
+        cached["response"]["reason"] = _b64_decode_str(cached["response"]["reason"])
+        cached["vary"] = dict(
+            (_b64_decode_str(k), _b64_decode_str(v) if v is not None else v)
+            for k, v in cached["vary"].items()
+        )
+
+        return self.prepare_response(request, cached)
+
+    def _loads_v3(self, request, data):
+        # Due to Python 2 encoding issues, it's impossible to know for sure
+        # exactly how to load v3 entries, thus we'll treat these as a miss so
+        # that they get rewritten out as v4 entries.
+        return
+
+    def _loads_v4(self, request, data):
+        try:
+            cached = msgpack.loads(data, raw=False)
+        except ValueError:
+            return
+
+        return self.prepare_response(request, cached)
diff --git a/lib/python3.11/site-packages/pip/_vendor/cachecontrol/wrapper.py b/python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/wrapper.py
similarity index 100%
rename from lib/python3.11/site-packages/pip/_vendor/cachecontrol/wrapper.py
rename to python/lib/python3.10/site-packages/pip/_vendor/cachecontrol/wrapper.py
diff --git a/python/lib/python3.10/site-packages/pip/_vendor/certifi/__init__.py b/python/lib/python3.10/site-packages/pip/_vendor/certifi/__init__.py
new file mode 100644
index 0000000..8db1a0e
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_vendor/certifi/__init__.py
@@ -0,0 +1,3 @@
+from .core import contents, where
+
+__version__ = "2021.10.08"
diff --git a/lib/python3.11/site-packages/pip/_vendor/certifi/__main__.py b/python/lib/python3.10/site-packages/pip/_vendor/certifi/__main__.py
similarity index 100%
rename from lib/python3.11/site-packages/pip/_vendor/certifi/__main__.py
rename to python/lib/python3.10/site-packages/pip/_vendor/certifi/__main__.py
diff --git a/python/lib/python3.10/site-packages/pip/_vendor/certifi/cacert.pem b/python/lib/python3.10/site-packages/pip/_vendor/certifi/cacert.pem
new file mode 100644
index 0000000..6d0ccc0
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_vendor/certifi/cacert.pem
@@ -0,0 +1,4362 @@
+
+# Issuer: CN=GlobalSign Root CA O=GlobalSign nv-sa OU=Root CA
+# Subject: CN=GlobalSign Root CA O=GlobalSign nv-sa OU=Root CA
+# Label: "GlobalSign Root CA"
+# Serial: 4835703278459707669005204
+# MD5 Fingerprint: 3e:45:52:15:09:51:92:e1:b7:5d:37:9f:b1:87:29:8a
+# SHA1 Fingerprint: b1:bc:96:8b:d4:f4:9d:62:2a:a8:9a:81:f2:15:01:52:a4:1d:82:9c
+# SHA256 Fingerprint: eb:d4:10:40:e4:bb:3e:c7:42:c9:e3:81:d3:1e:f2:a4:1a:48:b6:68:5c:96:e7:ce:f3:c1:df:6c:d4:33:1c:99
+-----BEGIN CERTIFICATE-----
+MIIDdTCCAl2gAwIBAgILBAAAAAABFUtaw5QwDQYJKoZIhvcNAQEFBQAwVzELMAkG
+A1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYtc2ExEDAOBgNVBAsTB1Jv
+b3QgQ0ExGzAZBgNVBAMTEkdsb2JhbFNpZ24gUm9vdCBDQTAeFw05ODA5MDExMjAw
+MDBaFw0yODAxMjgxMjAwMDBaMFcxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9i
+YWxTaWduIG52LXNhMRAwDgYDVQQLEwdSb290IENBMRswGQYDVQQDExJHbG9iYWxT
+aWduIFJvb3QgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDaDuaZ
+jc6j40+Kfvvxi4Mla+pIH/EqsLmVEQS98GPR4mdmzxzdzxtIK+6NiY6arymAZavp
+xy0Sy6scTHAHoT0KMM0VjU/43dSMUBUc71DuxC73/OlS8pF94G3VNTCOXkNz8kHp +1Wrjsok6Vjk4bwY8iGlbKk3Fp1S4bInMm/k8yuX9ifUSPJJ4ltbcdG6TRGHRjcdG +snUOhugZitVtbNV4FpWi6cgKOOvyJBNPc1STE4U6G7weNLWLBYy5d4ux2x8gkasJ +U26Qzns3dLlwR5EiUWMWea6xrkEmCMgZK9FGqkjWZCrXgzT/LCrBbBlDSgeF59N8 +9iFo7+ryUp9/k5DPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8E +BTADAQH/MB0GA1UdDgQWBBRge2YaRQ2XyolQL30EzTSo//z9SzANBgkqhkiG9w0B +AQUFAAOCAQEA1nPnfE920I2/7LqivjTFKDK1fPxsnCwrvQmeU79rXqoRSLblCKOz +yj1hTdNGCbM+w6DjY1Ub8rrvrTnhQ7k4o+YviiY776BQVvnGCv04zcQLcFGUl5gE +38NflNUVyRRBnMRddWQVDf9VMOyGj/8N7yy5Y0b2qvzfvGn9LhJIZJrglfCm7ymP +AbEVtQwdpf5pLGkkeB6zpxxxYu7KyJesF12KwvhHhm4qxFYxldBniYUr+WymXUad +DKqC5JlR3XC321Y9YeRq4VzW9v493kHMB65jUr9TU/Qr6cf9tveCX4XSQRjbgbME +HMUfpIBvFSDJ3gyICh3WZlXi/EjJKSZp4A== +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R2 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R2 +# Label: "GlobalSign Root CA - R2" +# Serial: 4835703278459682885658125 +# MD5 Fingerprint: 94:14:77:7e:3e:5e:fd:8f:30:bd:41:b0:cf:e7:d0:30 +# SHA1 Fingerprint: 75:e0:ab:b6:13:85:12:27:1c:04:f8:5f:dd:de:38:e4:b7:24:2e:fe +# SHA256 Fingerprint: ca:42:dd:41:74:5f:d0:b8:1e:b9:02:36:2c:f9:d8:bf:71:9d:a1:bd:1b:1e:fc:94:6f:5b:4c:99:f4:2c:1b:9e +-----BEGIN CERTIFICATE----- +MIIDujCCAqKgAwIBAgILBAAAAAABD4Ym5g0wDQYJKoZIhvcNAQEFBQAwTDEgMB4G +A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjIxEzARBgNVBAoTCkdsb2JhbFNp +Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDYxMjE1MDgwMDAwWhcNMjExMjE1 +MDgwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMjETMBEG +A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBAKbPJA6+Lm8omUVCxKs+IVSbC9N/hHD6ErPL +v4dfxn+G07IwXNb9rfF73OX4YJYJkhD10FPe+3t+c4isUoh7SqbKSaZeqKeMWhG8 +eoLrvozps6yWJQeXSpkqBy+0Hne/ig+1AnwblrjFuTosvNYSuetZfeLQBoZfXklq +tTleiDTsvHgMCJiEbKjNS7SgfQx5TfC4LcshytVsW33hoCmEofnTlEnLJGKRILzd +C9XZzPnqJworc5HGnRusyMvo4KD0L5CLTfuwNhv2GXqF4G3yYROIXJ/gkwpRl4pa +zq+r1feqCapgvdzZX99yqWATXgAByUr6P6TqBwMhAo6CygPCm48CAwEAAaOBnDCB +mTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUm+IH +V2ccHsBqBt5ZtJot39wZhi4wNgYDVR0fBC8wLTAroCmgJ4YlaHR0cDovL2NybC5n +bG9iYWxzaWduLm5ldC9yb290LXIyLmNybDAfBgNVHSMEGDAWgBSb4gdXZxwewGoG +3lm0mi3f3BmGLjANBgkqhkiG9w0BAQUFAAOCAQEAmYFThxxol4aR7OBKuEQLq4Gs +J0/WwbgcQ3izDJr86iw8bmEbTUsp9Z8FHSbBuOmDAGJFtqkIk7mpM0sYmsL4h4hO +291xNBrBVNpGP+DTKqttVCL1OmLNIG+6KYnX3ZHu01yiPqFbQfXf5WRDLenVOavS +ot+3i9DAgBkcRcAtjOj4LaR0VknFBbVPFd5uRHg5h6h+u/N5GJG79G+dwfCMNYxd +AfvDbbnvRG15RjF+Cv6pgsH/76tuIMRQyV+dTZsXjAzlAcmgQWpzU/qlULRuJQ/7 +TBj0/VLZjmmx6BEP3ojY+x1J96relc8geMJgEtslQIxq/H5COEBkEveegeGTLg== +-----END CERTIFICATE----- + +# Issuer: CN=Entrust.net Certification Authority (2048) O=Entrust.net OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.)/(c) 1999 Entrust.net Limited +# Subject: CN=Entrust.net Certification Authority (2048) O=Entrust.net OU=www.entrust.net/CPS_2048 incorp. by ref. 
(limits liab.)/(c) 1999 Entrust.net Limited +# Label: "Entrust.net Premium 2048 Secure Server CA" +# Serial: 946069240 +# MD5 Fingerprint: ee:29:31:bc:32:7e:9a:e6:e8:b5:f7:51:b4:34:71:90 +# SHA1 Fingerprint: 50:30:06:09:1d:97:d4:f5:ae:39:f7:cb:e7:92:7d:7d:65:2d:34:31 +# SHA256 Fingerprint: 6d:c4:71:72:e0:1c:bc:b0:bf:62:58:0d:89:5f:e2:b8:ac:9a:d4:f8:73:80:1e:0c:10:b9:c8:37:d2:1e:b1:77 +-----BEGIN CERTIFICATE----- +MIIEKjCCAxKgAwIBAgIEOGPe+DANBgkqhkiG9w0BAQUFADCBtDEUMBIGA1UEChML +RW50cnVzdC5uZXQxQDA+BgNVBAsUN3d3dy5lbnRydXN0Lm5ldC9DUFNfMjA0OCBp +bmNvcnAuIGJ5IHJlZi4gKGxpbWl0cyBsaWFiLikxJTAjBgNVBAsTHChjKSAxOTk5 +IEVudHJ1c3QubmV0IExpbWl0ZWQxMzAxBgNVBAMTKkVudHJ1c3QubmV0IENlcnRp +ZmljYXRpb24gQXV0aG9yaXR5ICgyMDQ4KTAeFw05OTEyMjQxNzUwNTFaFw0yOTA3 +MjQxNDE1MTJaMIG0MRQwEgYDVQQKEwtFbnRydXN0Lm5ldDFAMD4GA1UECxQ3d3d3 +LmVudHJ1c3QubmV0L0NQU18yMDQ4IGluY29ycC4gYnkgcmVmLiAobGltaXRzIGxp +YWIuKTElMCMGA1UECxMcKGMpIDE5OTkgRW50cnVzdC5uZXQgTGltaXRlZDEzMDEG +A1UEAxMqRW50cnVzdC5uZXQgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgKDIwNDgp +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArU1LqRKGsuqjIAcVFmQq +K0vRvwtKTY7tgHalZ7d4QMBzQshowNtTK91euHaYNZOLGp18EzoOH1u3Hs/lJBQe +sYGpjX24zGtLA/ECDNyrpUAkAH90lKGdCCmziAv1h3edVc3kw37XamSrhRSGlVuX +MlBvPci6Zgzj/L24ScF2iUkZ/cCovYmjZy/Gn7xxGWC4LeksyZB2ZnuU4q941mVT +XTzWnLLPKQP5L6RQstRIzgUyVYr9smRMDuSYB3Xbf9+5CFVghTAp+XtIpGmG4zU/ +HoZdenoVve8AjhUiVBcAkCaTvA5JaJG/+EfTnZVCwQ5N328mz8MYIWJmQ3DW1cAH +4QIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV +HQ4EFgQUVeSB0RGAvtiJuQijMfmhJAkWuXAwDQYJKoZIhvcNAQEFBQADggEBADub +j1abMOdTmXx6eadNl9cZlZD7Bh/KM3xGY4+WZiT6QBshJ8rmcnPyT/4xmf3IDExo +U8aAghOY+rat2l098c5u9hURlIIM7j+VrxGrD9cv3h8Dj1csHsm7mhpElesYT6Yf +zX1XEC+bBAlahLVu2B064dae0Wx5XnkcFMXj0EyTO2U87d89vqbllRrDtRnDvV5b +u/8j72gZyxKTJ1wDLW8w0B62GqzeWvfRqqgnpv55gcR5mTNXuhKwqeBCbJPKVt7+ +bYQLCIt+jerXmCHG8+c8eS9enNFMFY3h7CI3zJpDC5fcgJCNs2ebb0gIFVbPv/Er +fF6adulZkMV8gzURZVE= +-----END CERTIFICATE----- + +# Issuer: CN=Baltimore CyberTrust Root O=Baltimore OU=CyberTrust +# Subject: CN=Baltimore CyberTrust Root O=Baltimore OU=CyberTrust +# Label: "Baltimore CyberTrust Root" +# Serial: 33554617 +# MD5 Fingerprint: ac:b6:94:a5:9c:17:e0:d7:91:52:9b:b1:97:06:a6:e4 +# SHA1 Fingerprint: d4:de:20:d0:5e:66:fc:53:fe:1a:50:88:2c:78:db:28:52:ca:e4:74 +# SHA256 Fingerprint: 16:af:57:a9:f6:76:b0:ab:12:60:95:aa:5e:ba:de:f2:2a:b3:11:19:d6:44:ac:95:cd:4b:93:db:f3:f2:6a:eb +-----BEGIN CERTIFICATE----- +MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJ +RTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYD +VQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoX +DTI1MDUxMjIzNTkwMFowWjELMAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9y +ZTETMBEGA1UECxMKQ3liZXJUcnVzdDEiMCAGA1UEAxMZQmFsdGltb3JlIEN5YmVy +VHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKMEuyKr +mD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2yguzmKiYv60iNoS6zjr +IZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo6vWrJYeK +mpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSu +XmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZy +dc93Uk3zyZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/ye +jl0qhqdNkNwnGjkCAwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1 +BE3wMBIGA1UdEwEB/wQIMAYBAf8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3 +DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27TyclhAO992T9Ldcw46QQF+vaKSm2eT92 +9hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukMJY2GQE/szKN+OMY3EU/t3Wgx +jkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhSNzkE1akxehi/oCr0 +Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67G7fyUIhz 
+ksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLS +R9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp +-----END CERTIFICATE----- + +# Issuer: CN=Entrust Root Certification Authority O=Entrust, Inc. OU=www.entrust.net/CPS is incorporated by reference/(c) 2006 Entrust, Inc. +# Subject: CN=Entrust Root Certification Authority O=Entrust, Inc. OU=www.entrust.net/CPS is incorporated by reference/(c) 2006 Entrust, Inc. +# Label: "Entrust Root Certification Authority" +# Serial: 1164660820 +# MD5 Fingerprint: d6:a5:c3:ed:5d:dd:3e:00:c1:3d:87:92:1f:1d:3f:e4 +# SHA1 Fingerprint: b3:1e:b1:b7:40:e3:6c:84:02:da:dc:37:d4:4d:f5:d4:67:49:52:f9 +# SHA256 Fingerprint: 73:c1:76:43:4f:1b:c6:d5:ad:f4:5b:0e:76:e7:27:28:7c:8d:e5:76:16:c1:e6:e6:14:1a:2b:2c:bc:7d:8e:4c +-----BEGIN CERTIFICATE----- +MIIEkTCCA3mgAwIBAgIERWtQVDANBgkqhkiG9w0BAQUFADCBsDELMAkGA1UEBhMC +VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xOTA3BgNVBAsTMHd3dy5lbnRydXN0 +Lm5ldC9DUFMgaXMgaW5jb3Jwb3JhdGVkIGJ5IHJlZmVyZW5jZTEfMB0GA1UECxMW +KGMpIDIwMDYgRW50cnVzdCwgSW5jLjEtMCsGA1UEAxMkRW50cnVzdCBSb290IENl +cnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA2MTEyNzIwMjM0MloXDTI2MTEyNzIw +NTM0MlowgbAxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMTkw +NwYDVQQLEzB3d3cuZW50cnVzdC5uZXQvQ1BTIGlzIGluY29ycG9yYXRlZCBieSBy +ZWZlcmVuY2UxHzAdBgNVBAsTFihjKSAyMDA2IEVudHJ1c3QsIEluYy4xLTArBgNV +BAMTJEVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASIwDQYJ +KoZIhvcNAQEBBQADggEPADCCAQoCggEBALaVtkNC+sZtKm9I35RMOVcF7sN5EUFo +Nu3s/poBj6E4KPz3EEZmLk0eGrEaTsbRwJWIsMn/MYszA9u3g3s+IIRe7bJWKKf4 +4LlAcTfFy0cOlypowCKVYhXbR9n10Cv/gkvJrT7eTNuQgFA/CYqEAOwwCj0Yzfv9 +KlmaI5UXLEWeH25DeW0MXJj+SKfFI0dcXv1u5x609mhF0YaDW6KKjbHjKYD+JXGI +rb68j6xSlkuqUY3kEzEZ6E5Nn9uss2rVvDlUccp6en+Q3X0dgNmBu1kmwhH+5pPi +94DkZfs0Nw4pgHBNrziGLp5/V6+eF67rHMsoIV+2HNjnogQi+dPa2MsCAwEAAaOB +sDCBrTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zArBgNVHRAEJDAi +gA8yMDA2MTEyNzIwMjM0MlqBDzIwMjYxMTI3MjA1MzQyWjAfBgNVHSMEGDAWgBRo +kORnpKZTgMeGZqTx90tD+4S9bTAdBgNVHQ4EFgQUaJDkZ6SmU4DHhmak8fdLQ/uE +vW0wHQYJKoZIhvZ9B0EABBAwDhsIVjcuMTo0LjADAgSQMA0GCSqGSIb3DQEBBQUA +A4IBAQCT1DCw1wMgKtD5Y+iRDAUgqV8ZyntyTtSx29CW+1RaGSwMCPeyvIWonX9t +O1KzKtvn1ISMY/YPyyYBkVBs9F8U4pN0wBOeMDpQ47RgxRzwIkSNcUesyBrJ6Zua +AGAT/3B+XxFNSRuzFVJ7yVTav52Vr2ua2J7p8eRDjeIRRDq/r72DQnNSi6q7pynP +9WQcCk3RvKqsnyrQ/39/2n3qse0wJcGE2jTSW3iDVuycNsMm4hH2Z0kdkquM++v/ +eu6FSqdQgPCnXEqULl8FmTxSQeDNtGPPAUO6nIPcj2A781q0tHuu2guQOHXvgR1m +0vdXcDazv/wor3ElhVsT/h5/WrQ8 +-----END CERTIFICATE----- + +# Issuer: CN=AAA Certificate Services O=Comodo CA Limited +# Subject: CN=AAA Certificate Services O=Comodo CA Limited +# Label: "Comodo AAA Services root" +# Serial: 1 +# MD5 Fingerprint: 49:79:04:b0:eb:87:19:ac:47:b0:bc:11:51:9b:74:d0 +# SHA1 Fingerprint: d1:eb:23:a4:6d:17:d6:8f:d9:25:64:c2:f1:f1:60:17:64:d8:e3:49 +# SHA256 Fingerprint: d7:a7:a0:fb:5d:7e:27:31:d7:71:e9:48:4e:bc:de:f7:1d:5f:0c:3e:0a:29:48:78:2b:c8:3e:e0:ea:69:9e:f4 +-----BEGIN CERTIFICATE----- +MIIEMjCCAxqgAwIBAgIBATANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJHQjEb +MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow +GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEhMB8GA1UEAwwYQUFBIENlcnRpZmlj +YXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVowezEL +MAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE +BwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxITAfBgNVBAMM +GEFBQSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEBBQADggEP +ADCCAQoCggEBAL5AnfRu4ep2hxxNRUSOvkbIgwadwSr+GB+O5AL686tdUIoWMQua +BtDFcCLNSS1UY8y2bmhGC1Pqy0wkwLxyTurxFa70VJoSCsN6sjNg4tqJVfMiWPPe +3M/vg4aijJRPn2jymJBGhCfHdr/jzDUsi14HZGWCwEiwqJH5YZ92IFCokcdmtet4 
+YgNW8IoaE+oxox6gmf049vYnMlhvB/VruPsUK6+3qszWY19zjNoFmag4qMsXeDZR +rOme9Hg6jc8P2ULimAyrL58OAd7vn5lJ8S3frHRNG5i1R8XlKdH5kBjHYpy+g8cm +ez6KJcfA3Z3mNWgQIJ2P2N7Sw4ScDV7oL8kCAwEAAaOBwDCBvTAdBgNVHQ4EFgQU +oBEKIz6W8Qfs4q8p74Klf9AwpLQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQF +MAMBAf8wewYDVR0fBHQwcjA4oDagNIYyaHR0cDovL2NybC5jb21vZG9jYS5jb20v +QUFBQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmwwNqA0oDKGMGh0dHA6Ly9jcmwuY29t +b2RvLm5ldC9BQUFDZXJ0aWZpY2F0ZVNlcnZpY2VzLmNybDANBgkqhkiG9w0BAQUF +AAOCAQEACFb8AvCb6P+k+tZ7xkSAzk/ExfYAWMymtrwUSWgEdujm7l3sAg9g1o1Q +GE8mTgHj5rCl7r+8dFRBv/38ErjHT1r0iWAFf2C3BUrz9vHCv8S5dIa2LX1rzNLz +Rt0vxuBqw8M0Ayx9lt1awg6nCpnBBYurDC/zXDrPbDdVCYfeU0BsWO/8tqtlbgT2 +G9w84FoVxp7Z8VlIMCFlA2zs6SFz7JsDoeA3raAVGI/6ugLOpyypEBMs1OUIJqsi +l2D4kF501KKaU73yqWjgom7C12yxow+ev+to51byrvLjKzg6CYG1a4XXvi3tPxq3 +smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg== +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 2 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 2 O=QuoVadis Limited +# Label: "QuoVadis Root CA 2" +# Serial: 1289 +# MD5 Fingerprint: 5e:39:7b:dd:f8:ba:ec:82:e9:ac:62:ba:0c:54:00:2b +# SHA1 Fingerprint: ca:3a:fb:cf:12:40:36:4b:44:b2:16:20:88:80:48:39:19:93:7c:f7 +# SHA256 Fingerprint: 85:a0:dd:7d:d7:20:ad:b7:ff:05:f8:3d:54:2b:20:9d:c7:ff:45:28:f7:d6:77:b1:83:89:fe:a5:e5:c4:9e:86 +-----BEGIN CERTIFICATE----- +MIIFtzCCA5+gAwIBAgICBQkwDQYJKoZIhvcNAQEFBQAwRTELMAkGA1UEBhMCQk0x +GTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMTElF1b1ZhZGlzIFJv +b3QgQ0EgMjAeFw0wNjExMjQxODI3MDBaFw0zMTExMjQxODIzMzNaMEUxCzAJBgNV +BAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMRswGQYDVQQDExJRdW9W +YWRpcyBSb290IENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCa +GMpLlA0ALa8DKYrwD4HIrkwZhR0In6spRIXzL4GtMh6QRr+jhiYaHv5+HBg6XJxg +Fyo6dIMzMH1hVBHL7avg5tKifvVrbxi3Cgst/ek+7wrGsxDp3MJGF/hd/aTa/55J +WpzmM+Yklvc/ulsrHHo1wtZn/qtmUIttKGAr79dgw8eTvI02kfN/+NsRE8Scd3bB +rrcCaoF6qUWD4gXmuVbBlDePSHFjIuwXZQeVikvfj8ZaCuWw419eaxGrDPmF60Tp ++ARz8un+XJiM9XOva7R+zdRcAitMOeGylZUtQofX1bOQQ7dsE/He3fbE+Ik/0XX1 +ksOR1YqI0JDs3G3eicJlcZaLDQP9nL9bFqyS2+r+eXyt66/3FsvbzSUr5R/7mp/i +Ucw6UwxI5g69ybR2BlLmEROFcmMDBOAENisgGQLodKcftslWZvB1JdxnwQ5hYIiz +PtGo/KPaHbDRsSNU30R2be1B2MGyIrZTHN81Hdyhdyox5C315eXbyOD/5YDXC2Og +/zOhD7osFRXql7PSorW+8oyWHhqPHWykYTe5hnMz15eWniN9gqRMgeKh0bpnX5UH +oycR7hYQe7xFSkyyBNKr79X9DFHOUGoIMfmR2gyPZFwDwzqLID9ujWc9Otb+fVuI +yV77zGHcizN300QyNQliBJIWENieJ0f7OyHj+OsdWwIDAQABo4GwMIGtMA8GA1Ud +EwEB/wQFMAMBAf8wCwYDVR0PBAQDAgEGMB0GA1UdDgQWBBQahGK8SEwzJQTU7tD2 +A8QZRtGUazBuBgNVHSMEZzBlgBQahGK8SEwzJQTU7tD2A8QZRtGUa6FJpEcwRTEL +MAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMT +ElF1b1ZhZGlzIFJvb3QgQ0EgMoICBQkwDQYJKoZIhvcNAQEFBQADggIBAD4KFk2f +BluornFdLwUvZ+YTRYPENvbzwCYMDbVHZF34tHLJRqUDGCdViXh9duqWNIAXINzn +g/iN/Ae42l9NLmeyhP3ZRPx3UIHmfLTJDQtyU/h2BwdBR5YM++CCJpNVjP4iH2Bl +fF/nJrP3MpCYUNQ3cVX2kiF495V5+vgtJodmVjB3pjd4M1IQWK4/YY7yarHvGH5K +WWPKjaJW1acvvFYfzznB4vsKqBUsfU16Y8Zsl0Q80m/DShcK+JDSV6IZUaUtl0Ha +B0+pUNqQjZRG4T7wlP0QADj1O+hA4bRuVhogzG9Yje0uRY/W6ZM/57Es3zrWIozc +hLsib9D45MY56QSIPMO661V6bYCZJPVsAfv4l7CUW+v90m/xd2gNNWQjrLhVoQPR +TUIZ3Ph1WVaj+ahJefivDrkRoHy3au000LYmYjgahwz46P0u05B/B5EqHdZ+XIWD +mbA4CD/pXvk1B+TJYm5Xf6dQlfe6yJvmjqIBxdZmv3lh8zwc4bmCXF2gw+nYSL0Z +ohEUGW6yhhtoPkg3Goi3XZZenMfvJ2II4pEZXNLxId26F0KCl3GBUzGpn/Z9Yr9y +4aOTHcyKJloJONDO1w2AFrR4pTqHTI2KpdVGl/IsELm8VCLAAVBpQ570su9t+Oza +8eOx79+Rj1QqCyXBJhnEUhAFZdWCEOrCMc0u +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 3" +# Serial: 1478 +# MD5 Fingerprint: 
31:85:3c:62:94:97:63:b9:aa:fd:89:4e:af:6f:e0:cf +# SHA1 Fingerprint: 1f:49:14:f7:d8:74:95:1d:dd:ae:02:c0:be:fd:3a:2d:82:75:51:85 +# SHA256 Fingerprint: 18:f1:fc:7f:20:5d:f8:ad:dd:eb:7f:e0:07:dd:57:e3:af:37:5a:9c:4d:8d:73:54:6b:f4:f1:fe:d1:e1:8d:35 +-----BEGIN CERTIFICATE----- +MIIGnTCCBIWgAwIBAgICBcYwDQYJKoZIhvcNAQEFBQAwRTELMAkGA1UEBhMCQk0x +GTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMTElF1b1ZhZGlzIFJv +b3QgQ0EgMzAeFw0wNjExMjQxOTExMjNaFw0zMTExMjQxOTA2NDRaMEUxCzAJBgNV +BAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMRswGQYDVQQDExJRdW9W +YWRpcyBSb290IENBIDMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDM +V0IWVJzmmNPTTe7+7cefQzlKZbPoFog02w1ZkXTPkrgEQK0CSzGrvI2RaNggDhoB +4hp7Thdd4oq3P5kazethq8Jlph+3t723j/z9cI8LoGe+AaJZz3HmDyl2/7FWeUUr +H556VOijKTVopAFPD6QuN+8bv+OPEKhyq1hX51SGyMnzW9os2l2ObjyjPtr7guXd +8lyyBTNvijbO0BNO/79KDDRMpsMhvVAEVeuxu537RR5kFd5VAYwCdrXLoT9Cabwv +vWhDFlaJKjdhkf2mrk7AyxRllDdLkgbvBNDInIjbC3uBr7E9KsRlOni27tyAsdLT +mZw67mtaa7ONt9XOnMK+pUsvFrGeaDsGb659n/je7Mwpp5ijJUMv7/FfJuGITfhe +btfZFG4ZM2mnO4SJk8RTVROhUXhA+LjJou57ulJCg54U7QVSWllWp5f8nT8KKdjc +T5EOE7zelaTfi5m+rJsziO+1ga8bxiJTyPbH7pcUsMV8eFLI8M5ud2CEpukqdiDt +WAEXMJPpGovgc2PZapKUSU60rUqFxKMiMPwJ7Wgic6aIDFUhWMXhOp8q3crhkODZ +c6tsgLjoC2SToJyMGf+z0gzskSaHirOi4XCPLArlzW1oUevaPwV/izLmE1xr/l9A +4iLItLRkT9a6fUg+qGkM17uGcclzuD87nSVL2v9A6wIDAQABo4IBlTCCAZEwDwYD +VR0TAQH/BAUwAwEB/zCB4QYDVR0gBIHZMIHWMIHTBgkrBgEEAb5YAAMwgcUwgZMG +CCsGAQUFBwICMIGGGoGDQW55IHVzZSBvZiB0aGlzIENlcnRpZmljYXRlIGNvbnN0 +aXR1dGVzIGFjY2VwdGFuY2Ugb2YgdGhlIFF1b1ZhZGlzIFJvb3QgQ0EgMyBDZXJ0 +aWZpY2F0ZSBQb2xpY3kgLyBDZXJ0aWZpY2F0aW9uIFByYWN0aWNlIFN0YXRlbWVu +dC4wLQYIKwYBBQUHAgEWIWh0dHA6Ly93d3cucXVvdmFkaXNnbG9iYWwuY29tL2Nw +czALBgNVHQ8EBAMCAQYwHQYDVR0OBBYEFPLAE+CCQz777i9nMpY1XNu4ywLQMG4G +A1UdIwRnMGWAFPLAE+CCQz777i9nMpY1XNu4ywLQoUmkRzBFMQswCQYDVQQGEwJC +TTEZMBcGA1UEChMQUXVvVmFkaXMgTGltaXRlZDEbMBkGA1UEAxMSUXVvVmFkaXMg +Um9vdCBDQSAzggIFxjANBgkqhkiG9w0BAQUFAAOCAgEAT62gLEz6wPJv92ZVqyM0 +7ucp2sNbtrCD2dDQ4iH782CnO11gUyeim/YIIirnv6By5ZwkajGxkHon24QRiSem +d1o417+shvzuXYO8BsbRd2sPbSQvS3pspweWyuOEn62Iix2rFo1bZhfZFvSLgNLd ++LJ2w/w4E6oM3kJpK27zPOuAJ9v1pkQNn1pVWQvVDVJIxa6f8i+AxeoyUDUSly7B +4f/xI4hROJ/yZlZ25w9Rl6VSDE1JUZU2Pb+iSwwQHYaZTKrzchGT5Or2m9qoXadN +t54CrnMAyNojA+j56hl0YgCUyyIgvpSnWbWCar6ZeXqp8kokUvd0/bpO5qgdAm6x +DYBEwa7TIzdfu4V8K5Iu6H6li92Z4b8nby1dqnuH/grdS/yO9SbkbnBCbjPsMZ57 +k8HkyWkaPcBrTiJt7qtYTcbQQcEr6k8Sh17rRdhs9ZgC06DYVYoGmRmioHfRMJ6s +zHXug/WwYjnPbFfiTNKRCw51KBuav/0aQ/HKd/s7j2G4aSgWQgRecCocIdiP4b0j +Wy10QJLZYxkNc91pvGJHvOB0K7Lrfb5BG7XARsWhIstfTsEokt4YutUqKLsRixeT +mJlglFwjz1onl14LBQaTNx47aTbrqZ5hHY8y2o4M1nQ+ewkk2gF3R8Q7zTSMmfXK +4SVhM7JZG+Ju1zdXtg2pEto= +-----END CERTIFICATE----- + +# Issuer: O=SECOM Trust.net OU=Security Communication RootCA1 +# Subject: O=SECOM Trust.net OU=Security Communication RootCA1 +# Label: "Security Communication Root CA" +# Serial: 0 +# MD5 Fingerprint: f1:bc:63:6a:54:e0:b5:27:f5:cd:e7:1a:e3:4d:6e:4a +# SHA1 Fingerprint: 36:b1:2b:49:f9:81:9e:d7:4c:9e:bc:38:0f:c6:56:8f:5d:ac:b2:f7 +# SHA256 Fingerprint: e7:5e:72:ed:9f:56:0e:ec:6e:b4:80:00:73:a4:3f:c3:ad:19:19:5a:39:22:82:01:78:95:97:4a:99:02:6b:6c +-----BEGIN CERTIFICATE----- +MIIDWjCCAkKgAwIBAgIBADANBgkqhkiG9w0BAQUFADBQMQswCQYDVQQGEwJKUDEY +MBYGA1UEChMPU0VDT00gVHJ1c3QubmV0MScwJQYDVQQLEx5TZWN1cml0eSBDb21t +dW5pY2F0aW9uIFJvb3RDQTEwHhcNMDMwOTMwMDQyMDQ5WhcNMjMwOTMwMDQyMDQ5 +WjBQMQswCQYDVQQGEwJKUDEYMBYGA1UEChMPU0VDT00gVHJ1c3QubmV0MScwJQYD +VQQLEx5TZWN1cml0eSBDb21tdW5pY2F0aW9uIFJvb3RDQTEwggEiMA0GCSqGSIb3 +DQEBAQUAA4IBDwAwggEKAoIBAQCzs/5/022x7xZ8V6UMbXaKL0u/ZPtM7orw8yl8 
+9f/uKuDp6bpbZCKamm8sOiZpUQWZJtzVHGpxxpp9Hp3dfGzGjGdnSj74cbAZJ6kJ +DKaVv0uMDPpVmDvY6CKhS3E4eayXkmmziX7qIWgGmBSWh9JhNrxtJ1aeV+7AwFb9 +Ms+k2Y7CI9eNqPPYJayX5HA49LY6tJ07lyZDo6G8SVlyTCMwhwFY9k6+HGhWZq/N +QV3Is00qVUarH9oe4kA92819uZKAnDfdDJZkndwi92SL32HeFZRSFaB9UslLqCHJ +xrHty8OVYNEP8Ktw+N/LTX7s1vqr2b1/VPKl6Xn62dZ2JChzAgMBAAGjPzA9MB0G +A1UdDgQWBBSgc0mZaNyFW2XjmygvV5+9M7wHSDALBgNVHQ8EBAMCAQYwDwYDVR0T +AQH/BAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAaECpqLvkT115swW1F7NgE+vG +kl3g0dNq/vu+m22/xwVtWSDEHPC32oRYAmP6SBbvT6UL90qY8j+eG61Ha2POCEfr +Uj94nK9NrvjVT8+amCoQQTlSxN3Zmw7vkwGusi7KaEIkQmywszo+zenaSMQVy+n5 +Bw+SUEmK3TGXX8npN6o7WWWXlDLJs58+OmJYxUmtYg5xpTKqL8aJdkNAExNnPaJU +JRDL8Try2frbSVa7pv6nQTXD4IhhyYjH3zYQIphZ6rBK+1YWc26sTfcioU+tHXot +RSflMMFe8toTyyVCUZVHA4xsIcx0Qu1T/zOLjw9XARYvz6buyXAiFL39vmwLAw== +-----END CERTIFICATE----- + +# Issuer: CN=XRamp Global Certification Authority O=XRamp Security Services Inc OU=www.xrampsecurity.com +# Subject: CN=XRamp Global Certification Authority O=XRamp Security Services Inc OU=www.xrampsecurity.com +# Label: "XRamp Global CA Root" +# Serial: 107108908803651509692980124233745014957 +# MD5 Fingerprint: a1:0b:44:b3:ca:10:d8:00:6e:9d:0f:d8:0f:92:0a:d1 +# SHA1 Fingerprint: b8:01:86:d1:eb:9c:86:a5:41:04:cf:30:54:f3:4c:52:b7:e5:58:c6 +# SHA256 Fingerprint: ce:cd:dc:90:50:99:d8:da:df:c5:b1:d2:09:b7:37:cb:e2:c1:8c:fb:2c:10:c0:ff:0b:cf:0d:32:86:fc:1a:a2 +-----BEGIN CERTIFICATE----- +MIIEMDCCAxigAwIBAgIQUJRs7Bjq1ZxN1ZfvdY+grTANBgkqhkiG9w0BAQUFADCB +gjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3dy54cmFtcHNlY3VyaXR5LmNvbTEk +MCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2VydmljZXMgSW5jMS0wKwYDVQQDEyRY +UmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQxMTAxMTcx +NDA0WhcNMzUwMTAxMDUzNzE5WjCBgjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3 +dy54cmFtcHNlY3VyaXR5LmNvbTEkMCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2Vy +dmljZXMgSW5jMS0wKwYDVQQDEyRYUmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBB +dXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCYJB69FbS6 +38eMpSe2OAtp87ZOqCwuIR1cRN8hXX4jdP5efrRKt6atH67gBhbim1vZZ3RrXYCP +KZ2GG9mcDZhtdhAoWORlsH9KmHmf4MMxfoArtYzAQDsRhtDLooY2YKTVMIJt2W7Q +DxIEM5dfT2Fa8OT5kavnHTu86M/0ay00fOJIYRyO82FEzG+gSqmUsE3a56k0enI4 +qEHMPJQRfevIpoy3hsvKMzvZPTeL+3o+hiznc9cKV6xkmxnr9A8ECIqsAxcZZPRa +JSKNNCyy9mgdEm3Tih4U2sSPpuIjhdV6Db1q4Ons7Be7QhtnqiXtRYMh/MHJfNVi +PvryxS3T/dRlAgMBAAGjgZ8wgZwwEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0P +BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMZPoj0GY4QJnM5i5ASs +jVy16bYbMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwueHJhbXBzZWN1cml0 +eS5jb20vWEdDQS5jcmwwEAYJKwYBBAGCNxUBBAMCAQEwDQYJKoZIhvcNAQEFBQAD +ggEBAJEVOQMBG2f7Shz5CmBbodpNl2L5JFMn14JkTpAuw0kbK5rc/Kh4ZzXxHfAR +vbdI4xD2Dd8/0sm2qlWkSLoC295ZLhVbO50WfUfXN+pfTXYSNrsf16GBBEYgoyxt +qZ4Bfj8pzgCT3/3JknOJiWSe5yvkHJEs0rnOfc5vMZnT5r7SHpDwCRR5XCOrTdLa +IR9NmXmd4c8nnxCbHIgNsIpkQTG4DmyQJKSbXHGPurt+HBvbaoAPIbzp26a3QPSy +i6mx5O+aGtA9aZnuqCij4Tyz8LIRnM98QObd50N9otg6tamN8jSZxNQQ4Qb9CYQQ +O+7ETPTsJ3xCwnR8gooJybQDJbw= +-----END CERTIFICATE----- + +# Issuer: O=The Go Daddy Group, Inc. OU=Go Daddy Class 2 Certification Authority +# Subject: O=The Go Daddy Group, Inc. 
OU=Go Daddy Class 2 Certification Authority +# Label: "Go Daddy Class 2 CA" +# Serial: 0 +# MD5 Fingerprint: 91:de:06:25:ab:da:fd:32:17:0c:bb:25:17:2a:84:67 +# SHA1 Fingerprint: 27:96:ba:e6:3f:18:01:e2:77:26:1b:a0:d7:77:70:02:8f:20:ee:e4 +# SHA256 Fingerprint: c3:84:6b:f2:4b:9e:93:ca:64:27:4c:0e:c6:7c:1e:cc:5e:02:4f:fc:ac:d2:d7:40:19:35:0e:81:fe:54:6a:e4 +-----BEGIN CERTIFICATE----- +MIIEADCCAuigAwIBAgIBADANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEh +MB8GA1UEChMYVGhlIEdvIERhZGR5IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBE +YWRkeSBDbGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA0MDYyOTE3 +MDYyMFoXDTM0MDYyOTE3MDYyMFowYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRo +ZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3Mg +MiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASAwDQYJKoZIhvcNAQEBBQADggEN +ADCCAQgCggEBAN6d1+pXGEmhW+vXX0iG6r7d/+TvZxz0ZWizV3GgXne77ZtJ6XCA +PVYYYwhv2vLM0D9/AlQiVBDYsoHUwHU9S3/Hd8M+eKsaA7Ugay9qK7HFiH7Eux6w +wdhFJ2+qN1j3hybX2C32qRe3H3I2TqYXP2WYktsqbl2i/ojgC95/5Y0V4evLOtXi +EqITLdiOr18SPaAIBQi2XKVlOARFmR6jYGB0xUGlcmIbYsUfb18aQr4CUWWoriMY +avx4A6lNf4DD+qta/KFApMoZFv6yyO9ecw3ud72a9nmYvLEHZ6IVDd2gWMZEewo+ +YihfukEHU1jPEX44dMX4/7VpkI+EdOqXG68CAQOjgcAwgb0wHQYDVR0OBBYEFNLE +sNKR1EwRcbNhyz2h/t2oatTjMIGNBgNVHSMEgYUwgYKAFNLEsNKR1EwRcbNhyz2h +/t2oatTjoWekZTBjMQswCQYDVQQGEwJVUzEhMB8GA1UEChMYVGhlIEdvIERhZGR5 +IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBEYWRkeSBDbGFzcyAyIENlcnRpZmlj +YXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQAD +ggEBADJL87LKPpH8EsahB4yOd6AzBhRckB4Y9wimPQoZ+YeAEW5p5JYXMP80kWNy +OO7MHAGjHZQopDH2esRU1/blMVgDoszOYtuURXO1v0XJJLXVggKtI3lpjbi2Tc7P +TMozI+gciKqdi0FuFskg5YmezTvacPd+mSYgFFQlq25zheabIZ0KbIIOqPjCDPoQ +HmyW74cNxA9hi63ugyuV+I6ShHI56yDqg+2DzZduCLzrTia2cyvk0/ZM/iZx4mER +dEr/VxqHD3VILs9RaRegAhJhldXRQLIQTO7ErBBDpqWeCtWVYpoNz4iCxTIM5Cuf +ReYNnyicsbkqWletNw+vHX/bvZ8= +-----END CERTIFICATE----- + +# Issuer: O=Starfield Technologies, Inc. OU=Starfield Class 2 Certification Authority +# Subject: O=Starfield Technologies, Inc. 
OU=Starfield Class 2 Certification Authority +# Label: "Starfield Class 2 CA" +# Serial: 0 +# MD5 Fingerprint: 32:4a:4b:bb:c8:63:69:9b:be:74:9a:c6:dd:1d:46:24 +# SHA1 Fingerprint: ad:7e:1c:28:b0:64:ef:8f:60:03:40:20:14:c3:d0:e3:37:0e:b5:8a +# SHA256 Fingerprint: 14:65:fa:20:53:97:b8:76:fa:a6:f0:a9:95:8e:55:90:e4:0f:cc:7f:aa:4f:b7:c2:c8:67:75:21:fb:5f:b6:58 +-----BEGIN CERTIFICATE----- +MIIEDzCCAvegAwIBAgIBADANBgkqhkiG9w0BAQUFADBoMQswCQYDVQQGEwJVUzEl +MCMGA1UEChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMp +U3RhcmZpZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQw +NjI5MTczOTE2WhcNMzQwNjI5MTczOTE2WjBoMQswCQYDVQQGEwJVUzElMCMGA1UE +ChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMpU3RhcmZp +ZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggEgMA0GCSqGSIb3 +DQEBAQUAA4IBDQAwggEIAoIBAQC3Msj+6XGmBIWtDBFk385N78gDGIc/oav7PKaf +8MOh2tTYbitTkPskpD6E8J7oX+zlJ0T1KKY/e97gKvDIr1MvnsoFAZMej2YcOadN ++lq2cwQlZut3f+dZxkqZJRRU6ybH838Z1TBwj6+wRir/resp7defqgSHo9T5iaU0 +X9tDkYI22WY8sbi5gv2cOj4QyDvvBmVmepsZGD3/cVE8MC5fvj13c7JdBmzDI1aa +K4UmkhynArPkPw2vCHmCuDY96pzTNbO8acr1zJ3o/WSNF4Azbl5KXZnJHoe0nRrA +1W4TNSNe35tfPe/W93bC6j67eA0cQmdrBNj41tpvi/JEoAGrAgEDo4HFMIHCMB0G +A1UdDgQWBBS/X7fRzt0fhvRbVazc1xDCDqmI5zCBkgYDVR0jBIGKMIGHgBS/X7fR +zt0fhvRbVazc1xDCDqmI56FspGowaDELMAkGA1UEBhMCVVMxJTAjBgNVBAoTHFN0 +YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAsTKVN0YXJmaWVsZCBD +bGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8w +DQYJKoZIhvcNAQEFBQADggEBAAWdP4id0ckaVaGsafPzWdqbAYcaT1epoXkJKtv3 +L7IezMdeatiDh6GX70k1PncGQVhiv45YuApnP+yz3SFmH8lU+nLMPUxA2IGvd56D +eruix/U0F47ZEUD0/CwqTRV/p2JdLiXTAAsgGh1o+Re49L2L7ShZ3U0WixeDyLJl +xy16paq8U4Zt3VekyvggQQto8PT7dL5WXXp59fkdheMtlb71cZBDzI0fmgAKhynp +VSJYACPq4xJDKVtHCN2MQWplBqjlIapBtJUhlbl90TSrE9atvNziPTnNvT51cKEY +WQPJIrSPnNVeKtelttQKbfi3QBFGmh95DmK/D5fs4C8fF5Q= +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Assured ID Root CA O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Assured ID Root CA O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Assured ID Root CA" +# Serial: 17154717934120587862167794914071425081 +# MD5 Fingerprint: 87:ce:0b:7b:2a:0e:49:00:e1:58:71:9b:37:a8:93:72 +# SHA1 Fingerprint: 05:63:b8:63:0d:62:d7:5a:bb:c8:ab:1e:4b:df:b5:a8:99:b2:4d:43 +# SHA256 Fingerprint: 3e:90:99:b5:01:5e:8f:48:6c:00:bc:ea:9d:11:1e:e7:21:fa:ba:35:5a:89:bc:f1:df:69:56:1e:3d:c6:32:5c +-----BEGIN CERTIFICATE----- +MIIDtzCCAp+gAwIBAgIQDOfg5RfYRv6P5WD8G/AwOTANBgkqhkiG9w0BAQUFADBl +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv +b3QgQ0EwHhcNMDYxMTEwMDAwMDAwWhcNMzExMTEwMDAwMDAwWjBlMQswCQYDVQQG +EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl +cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgQ0EwggEi +MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCtDhXO5EOAXLGH87dg+XESpa7c +JpSIqvTO9SA5KFhgDPiA2qkVlTJhPLWxKISKityfCgyDF3qPkKyK53lTXDGEKvYP +mDI2dsze3Tyoou9q+yHyUmHfnyDXH+Kx2f4YZNISW1/5WBg1vEfNoTb5a3/UsDg+ +wRvDjDPZ2C8Y/igPs6eD1sNuRMBhNZYW/lmci3Zt1/GiSw0r/wty2p5g0I6QNcZ4 +VYcgoc/lbQrISXwxmDNsIumH0DJaoroTghHtORedmTpyoeb6pNnVFzF1roV9Iq4/ +AUaG9ih5yLHa5FcXxH4cDrC0kqZWs72yl+2qp/C3xag/lRbQ/6GW6whfGHdPAgMB +AAGjYzBhMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW +BBRF66Kv9JLLgjEtUYunpyGd823IDzAfBgNVHSMEGDAWgBRF66Kv9JLLgjEtUYun +pyGd823IDzANBgkqhkiG9w0BAQUFAAOCAQEAog683+Lt8ONyc3pklL/3cmbYMuRC +dWKuh+vy1dneVrOfzM4UKLkNl2BcEkxY5NM9g0lFWJc1aRqoR+pWxnmrEthngYTf +fwk8lOa4JiwgvT2zKIn3X/8i4peEH+ll74fg38FnSbNd67IJKusm7Xi+fT8r87cm 
+NW1fiQG2SVufAQWbqz0lwcy2f8Lxb4bG+mRo64EtlOtCt/qMHt1i8b5QZ7dsvfPx +H2sMNgcWfzd8qVttevESRmCD1ycEvkvOl77DZypoEd+A5wwzZr8TDRRu838fYxAe ++o0bJW1sj6W3YQGx0qMmoRBxna3iw/nDmVG3KwcIzi7mULKn+gpFL6Lw8g== +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Global Root CA O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Global Root CA O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Global Root CA" +# Serial: 10944719598952040374951832963794454346 +# MD5 Fingerprint: 79:e4:a9:84:0d:7d:3a:96:d7:c0:4f:e2:43:4c:89:2e +# SHA1 Fingerprint: a8:98:5d:3a:65:e5:e5:c4:b2:d7:d6:6d:40:c6:dd:2f:b1:9c:54:36 +# SHA256 Fingerprint: 43:48:a0:e9:44:4c:78:cb:26:5e:05:8d:5e:89:44:b4:d8:4f:96:62:bd:26:db:25:7f:89:34:a4:43:c7:01:61 +-----BEGIN CERTIFICATE----- +MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD +QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT +MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j +b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG +9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB +CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97 +nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt +43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P +T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4 +gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO +BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR +TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw +DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr +hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg +06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF +PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls +YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk +CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4= +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert High Assurance EV Root CA O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert High Assurance EV Root CA O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert High Assurance EV Root CA" +# Serial: 3553400076410547919724730734378100087 +# MD5 Fingerprint: d4:74:de:57:5c:39:b2:d3:9c:85:83:c5:c0:65:49:8a +# SHA1 Fingerprint: 5f:b7:ee:06:33:e2:59:db:ad:0c:4c:9a:e6:d3:8f:1a:61:c7:dc:25 +# SHA256 Fingerprint: 74:31:e5:f4:c3:c1:ce:46:90:77:4f:0b:61:e0:54:40:88:3b:a9:a0:1e:d0:0b:a6:ab:d7:80:6e:d3:b1:18:cf +-----BEGIN CERTIFICATE----- +MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j +ZSBFViBSb290IENBMB4XDTA2MTExMDAwMDAwMFoXDTMxMTExMDAwMDAwMFowbDEL +MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3 +LmRpZ2ljZXJ0LmNvbTErMCkGA1UEAxMiRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug +RVYgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMbM5XPm ++9S75S0tMqbf5YE/yc0lSbZxKsPVlDRnogocsF9ppkCxxLeyj9CYpKlBWTrT3JTW +PNt0OKRKzE0lgvdKpVMSOO7zSW1xkX5jtqumX8OkhPhPYlG++MXs2ziS4wblCJEM +xChBVfvLWokVfnHoNb9Ncgk9vjo4UFt3MRuNs8ckRZqnrG0AFFoEt7oT61EKmEFB +Ik5lYYeBQVCmeVyJ3hlKV9Uu5l0cUyx+mM0aBhakaHPQNAQTXKFx01p8VdteZOE3 +hzBWBOURtCmAEvF5OYiiAhF8J2a3iLd48soKqDirCmTCv2ZdlYTBoSUeh10aUAsg +EsxBu24LUTi4S8sCAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQF 
+MAMBAf8wHQYDVR0OBBYEFLE+w2kD+L9HAdSYJhoIAu9jZCvDMB8GA1UdIwQYMBaA +FLE+w2kD+L9HAdSYJhoIAu9jZCvDMA0GCSqGSIb3DQEBBQUAA4IBAQAcGgaX3Nec +nzyIZgYIVyHbIUf4KmeqvxgydkAQV8GK83rZEWWONfqe/EW1ntlMMUu4kehDLI6z +eM7b41N5cdblIZQB2lWHmiRk9opmzN6cN82oNLFpmyPInngiK3BD41VHMWEZ71jF +hS9OMPagMRYjyOfiZRYzy78aG6A9+MpeizGLYAiJLQwGXFK3xPkKmNEVX58Svnw2 +Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe +vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep ++OkuE6N36B9K +-----END CERTIFICATE----- + +# Issuer: CN=DST Root CA X3 O=Digital Signature Trust Co. +# Subject: CN=DST Root CA X3 O=Digital Signature Trust Co. +# Label: "DST Root CA X3" +# Serial: 91299735575339953335919266965803778155 +# MD5 Fingerprint: 41:03:52:dc:0f:f7:50:1b:16:f0:02:8e:ba:6f:45:c5 +# SHA1 Fingerprint: da:c9:02:4f:54:d8:f6:df:94:93:5f:b1:73:26:38:ca:6a:d7:7c:13 +# SHA256 Fingerprint: 06:87:26:03:31:a7:24:03:d9:09:f1:05:e6:9b:cf:0d:32:e1:bd:24:93:ff:c6:d9:20:6d:11:bc:d6:77:07:39 +-----BEGIN CERTIFICATE----- +MIIDSjCCAjKgAwIBAgIQRK+wgNajJ7qJMDmGLvhAazANBgkqhkiG9w0BAQUFADA/ +MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT +DkRTVCBSb290IENBIFgzMB4XDTAwMDkzMDIxMTIxOVoXDTIxMDkzMDE0MDExNVow +PzEkMCIGA1UEChMbRGlnaXRhbCBTaWduYXR1cmUgVHJ1c3QgQ28uMRcwFQYDVQQD +Ew5EU1QgUm9vdCBDQSBYMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB +AN+v6ZdQCINXtMxiZfaQguzH0yxrMMpb7NnDfcdAwRgUi+DoM3ZJKuM/IUmTrE4O +rz5Iy2Xu/NMhD2XSKtkyj4zl93ewEnu1lcCJo6m67XMuegwGMoOifooUMM0RoOEq +OLl5CjH9UL2AZd+3UWODyOKIYepLYYHsUmu5ouJLGiifSKOeDNoJjj4XLh7dIN9b +xiqKqy69cK3FCxolkHRyxXtqqzTWMIn/5WgTe1QLyNau7Fqckh49ZLOMxt+/yUFw +7BZy1SbsOFU5Q9D8/RhcQPGX69Wam40dutolucbY38EVAjqr2m7xPi71XAicPNaD +aeQQmxkqtilX4+U9m5/wAl0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNV +HQ8BAf8EBAMCAQYwHQYDVR0OBBYEFMSnsaR7LHH62+FLkHX/xBVghYkQMA0GCSqG +SIb3DQEBBQUAA4IBAQCjGiybFwBcqR7uKGY3Or+Dxz9LwwmglSBd49lZRNI+DT69 +ikugdB/OEIKcdBodfpga3csTS7MgROSR6cz8faXbauX+5v3gTt23ADq1cEmv8uXr +AvHRAosZy5Q6XkjEGB5YGV8eAlrwDPGxrancWYaLbumR9YbK+rlmM6pZW87ipxZz +R8srzJmwN0jP41ZL9c8PDHIyh8bwRLtTcm1D9SZImlJnt1ir/md2cXjbDaJWFBM5 +JDGFoqgCWjBH4d1QB7wCCZAA62RjYJsWvIjJEubSfZGL+T0yjWW06XyxV3bqxbYo +Ob8VZRzI9neWagqNdwvYkQsEjgfbKbYK7p2CNTUQ +-----END CERTIFICATE----- + +# Issuer: CN=SwissSign Gold CA - G2 O=SwissSign AG +# Subject: CN=SwissSign Gold CA - G2 O=SwissSign AG +# Label: "SwissSign Gold CA - G2" +# Serial: 13492815561806991280 +# MD5 Fingerprint: 24:77:d9:a8:91:d1:3b:fa:88:2d:c2:ff:f8:cd:33:93 +# SHA1 Fingerprint: d8:c5:38:8a:b7:30:1b:1b:6e:d4:7a:e6:45:25:3a:6f:9f:1a:27:61 +# SHA256 Fingerprint: 62:dd:0b:e9:b9:f5:0a:16:3e:a0:f8:e7:5c:05:3b:1e:ca:57:ea:55:c8:68:8f:64:7c:68:81:f2:c8:35:7b:95 +-----BEGIN CERTIFICATE----- +MIIFujCCA6KgAwIBAgIJALtAHEP1Xk+wMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV +BAYTAkNIMRUwEwYDVQQKEwxTd2lzc1NpZ24gQUcxHzAdBgNVBAMTFlN3aXNzU2ln +biBHb2xkIENBIC0gRzIwHhcNMDYxMDI1MDgzMDM1WhcNMzYxMDI1MDgzMDM1WjBF +MQswCQYDVQQGEwJDSDEVMBMGA1UEChMMU3dpc3NTaWduIEFHMR8wHQYDVQQDExZT +d2lzc1NpZ24gR29sZCBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC +CgKCAgEAr+TufoskDhJuqVAtFkQ7kpJcyrhdhJJCEyq8ZVeCQD5XJM1QiyUqt2/8 +76LQwB8CJEoTlo8jE+YoWACjR8cGp4QjK7u9lit/VcyLwVcfDmJlD909Vopz2q5+ +bbqBHH5CjCA12UNNhPqE21Is8w4ndwtrvxEvcnifLtg+5hg3Wipy+dpikJKVyh+c +6bM8K8vzARO/Ws/BtQpgvd21mWRTuKCWs2/iJneRjOBiEAKfNA+k1ZIzUd6+jbqE +emA8atufK+ze3gE/bk3lUIbLtK/tREDFylqM2tIrfKjuvqblCqoOpd8FUrdVxyJd +MmqXl2MT28nbeTZ7hTpKxVKJ+STnnXepgv9VHKVxaSvRAiTysybUa9oEVeXBCsdt +MDeQKuSeFDNeFhdVxVu1yzSJkvGdJo+hB9TGsnhQ2wwMC3wLjEHXuendjIj3o02y +MszYF9rNt85mndT9Xv+9lz4pded+p2JYryU0pUHHPbwNUMoDAw8IWh+Vc3hiv69y 
+FGkOpeUDDniOJihC8AcLYiAQZzlG+qkDzAQ4embvIIO1jEpWjpEA/I5cgt6IoMPi +aG59je883WX0XaxR7ySArqpWl2/5rX3aYT+YdzylkbYcjCbaZaIJbcHiVOO5ykxM +gI93e2CaHt+28kgeDrpOVG2Y4OGiGqJ3UM/EY5LsRxmd6+ZrzsECAwEAAaOBrDCB +qTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUWyV7 +lqRlUX64OfPAeGZe6Drn8O4wHwYDVR0jBBgwFoAUWyV7lqRlUX64OfPAeGZe6Drn +8O4wRgYDVR0gBD8wPTA7BglghXQBWQECAQEwLjAsBggrBgEFBQcCARYgaHR0cDov +L3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIBACe6 +45R88a7A3hfm5djV9VSwg/S7zV4Fe0+fdWavPOhWfvxyeDgD2StiGwC5+OlgzczO +UYrHUDFu4Up+GC9pWbY9ZIEr44OE5iKHjn3g7gKZYbge9LgriBIWhMIxkziWMaa5 +O1M/wySTVltpkuzFwbs4AOPsF6m43Md8AYOfMke6UiI0HTJ6CVanfCU2qT1L2sCC +bwq7EsiHSycR+R4tx5M/nttfJmtS2S6K8RTGRI0Vqbe/vd6mGu6uLftIdxf+u+yv +GPUqUfA5hJeVbG4bwyvEdGB5JbAKJ9/fXtI5z0V9QkvfsywexcZdylU6oJxpmo/a +77KwPJ+HbBIrZXAVUjEaJM9vMSNQH4xPjyPDdEFjHFWoFN0+4FFQz/EbMFYOkrCC +hdiDyyJkvC24JdVUorgG6q2SpCSgwYa1ShNqR88uC1aVVMvOmttqtKay20EIhid3 +92qgQmwLOM7XdVAyksLfKzAiSNDVQTglXaTpXZ/GlHXQRf0wl0OPkKsKx4ZzYEpp +Ld6leNcG2mqeSz53OiATIgHQv2ieY2BrNU0LbbqhPcCT4H8js1WtciVORvnSFu+w +ZMEBnunKoGqYDs/YYPIvSbjkQuE4NRb0yG5P94FW6LqjviOvrv1vA+ACOzB2+htt +Qc8Bsem4yWb02ybzOqR08kkkW8mw0FfB+j564ZfJ +-----END CERTIFICATE----- + +# Issuer: CN=SwissSign Silver CA - G2 O=SwissSign AG +# Subject: CN=SwissSign Silver CA - G2 O=SwissSign AG +# Label: "SwissSign Silver CA - G2" +# Serial: 5700383053117599563 +# MD5 Fingerprint: e0:06:a1:c9:7d:cf:c9:fc:0d:c0:56:75:96:d8:62:13 +# SHA1 Fingerprint: 9b:aa:e5:9f:56:ee:21:cb:43:5a:be:25:93:df:a7:f0:40:d1:1d:cb +# SHA256 Fingerprint: be:6c:4d:a2:bb:b9:ba:59:b6:f3:93:97:68:37:42:46:c3:c0:05:99:3f:a9:8f:02:0d:1d:ed:be:d4:8a:81:d5 +-----BEGIN CERTIFICATE----- +MIIFvTCCA6WgAwIBAgIITxvUL1S7L0swDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UE +BhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMYU3dpc3NTaWdu +IFNpbHZlciBDQSAtIEcyMB4XDTA2MTAyNTA4MzI0NloXDTM2MTAyNTA4MzI0Nlow +RzELMAkGA1UEBhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMY +U3dpc3NTaWduIFNpbHZlciBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8A +MIICCgKCAgEAxPGHf9N4Mfc4yfjDmUO8x/e8N+dOcbpLj6VzHVxumK4DV644N0Mv +Fz0fyM5oEMF4rhkDKxD6LHmD9ui5aLlV8gREpzn5/ASLHvGiTSf5YXu6t+WiE7br +YT7QbNHm+/pe7R20nqA1W6GSy/BJkv6FCgU+5tkL4k+73JU3/JHpMjUi0R86TieF +nbAVlDLaYQ1HTWBCrpJH6INaUFjpiou5XaHc3ZlKHzZnu0jkg7Y360g6rw9njxcH +6ATK72oxh9TAtvmUcXtnZLi2kUpCe2UuMGoM9ZDulebyzYLs2aFK7PayS+VFheZt +eJMELpyCbTapxDFkH4aDCyr0NQp4yVXPQbBH6TCfmb5hqAaEuSh6XzjZG6k4sIN/ +c8HDO0gqgg8hm7jMqDXDhBuDsz6+pJVpATqJAHgE2cn0mRmrVn5bi4Y5FZGkECwJ +MoBgs5PAKrYYC51+jUnyEEp/+dVGLxmSo5mnJqy7jDzmDrxHB9xzUfFwZC8I+bRH +HTBsROopN4WSaGa8gzj+ezku01DwH/teYLappvonQfGbGHLy9YR0SslnxFSuSGTf +jNFusB3hB48IHpmccelM2KX3RxIfdNFRnobzwqIjQAtz20um53MGjMGg6cFZrEb6 +5i/4z3GcRm25xBWNOHkDRUjvxF3XCO6HOSKGsg0PWEP3calILv3q1h8CAwEAAaOB +rDCBqTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU +F6DNweRBtjpbO8tFnb0cwpj6hlgwHwYDVR0jBBgwFoAUF6DNweRBtjpbO8tFnb0c +wpj6hlgwRgYDVR0gBD8wPTA7BglghXQBWQEDAQEwLjAsBggrBgEFBQcCARYgaHR0 +cDovL3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIB +AHPGgeAn0i0P4JUw4ppBf1AsX19iYamGamkYDHRJ1l2E6kFSGG9YrVBWIGrGvShp +WJHckRE1qTodvBqlYJ7YH39FkWnZfrt4csEGDyrOj4VwYaygzQu4OSlWhDJOhrs9 +xCrZ1x9y7v5RoSJBsXECYxqCsGKrXlcSH9/L3XWgwF15kIwb4FDm3jH+mHtwX6WQ +2K34ArZv02DdQEsixT2tOnqfGhpHkXkzuoLcMmkDlm4fS/Bx/uNncqCxv1yL5PqZ +IseEuRuNI5c/7SXgz2W79WEE790eslpBIlqhn10s6FvJbakMDHiqYMZWjwFaDGi8 +aRl5xB9+lwW/xekkUV7U1UtT7dkjWjYDZaPBA61BMPNGG4WQr2W11bHkFlt4dR2X +em1ZqSqPe97Dh4kQmUlzeMg9vVE1dCrV8X5pGyq7O70luJpaPXJhkGaH7gzWTdQR +dAtq/gsD/KNVV4n+SsuuWxcFyPKNIzFTONItaj+CuY0IavdeQXRuwxF+B6wpYJE/ 
+OMpXEA29MC/HpeZBoNquBYeaoKRlbEwJDIm6uNO5wJOKMPqN5ZprFQFOZ6raYlY+ +hAhm0sQ2fac+EPyI4NSA5QC9qvNOBqN6avlicuMJT+ubDgEj8Z+7fNzcbBGXJbLy +tGMU0gYqZ4yD9c7qB9iaah7s5Aq7KkzrCWA5zspi2C5u +-----END CERTIFICATE----- + +# Issuer: CN=SecureTrust CA O=SecureTrust Corporation +# Subject: CN=SecureTrust CA O=SecureTrust Corporation +# Label: "SecureTrust CA" +# Serial: 17199774589125277788362757014266862032 +# MD5 Fingerprint: dc:32:c3:a7:6d:25:57:c7:68:09:9d:ea:2d:a9:a2:d1 +# SHA1 Fingerprint: 87:82:c6:c3:04:35:3b:cf:d2:96:92:d2:59:3e:7d:44:d9:34:ff:11 +# SHA256 Fingerprint: f1:c1:b5:0a:e5:a2:0d:d8:03:0e:c9:f6:bc:24:82:3d:d3:67:b5:25:57:59:b4:e7:1b:61:fc:e9:f7:37:5d:73 +-----BEGIN CERTIFICATE----- +MIIDuDCCAqCgAwIBAgIQDPCOXAgWpa1Cf/DrJxhZ0DANBgkqhkiG9w0BAQUFADBI +MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3QgQ29ycG9yYXRpb24x +FzAVBgNVBAMTDlNlY3VyZVRydXN0IENBMB4XDTA2MTEwNzE5MzExOFoXDTI5MTIz +MTE5NDA1NVowSDELMAkGA1UEBhMCVVMxIDAeBgNVBAoTF1NlY3VyZVRydXN0IENv +cnBvcmF0aW9uMRcwFQYDVQQDEw5TZWN1cmVUcnVzdCBDQTCCASIwDQYJKoZIhvcN +AQEBBQADggEPADCCAQoCggEBAKukgeWVzfX2FI7CT8rU4niVWJxB4Q2ZQCQXOZEz +Zum+4YOvYlyJ0fwkW2Gz4BERQRwdbvC4u/jep4G6pkjGnx29vo6pQT64lO0pGtSO +0gMdA+9tDWccV9cGrcrI9f4Or2YlSASWC12juhbDCE/RRvgUXPLIXgGZbf2IzIao +wW8xQmxSPmjL8xk037uHGFaAJsTQ3MBv396gwpEWoGQRS0S8Hvbn+mPeZqx2pHGj +7DaUaHp3pLHnDi+BeuK1cobvomuL8A/b01k/unK8RCSc43Oz969XL0Imnal0ugBS +8kvNU3xHCzaFDmapCJcWNFfBZveA4+1wVMeT4C4oFVmHursCAwEAAaOBnTCBmjAT +BgkrBgEEAYI3FAIEBh4EAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB +/zAdBgNVHQ4EFgQUQjK2FvoE/f5dS3rD/fdMQB1aQ68wNAYDVR0fBC0wKzApoCeg +JYYjaHR0cDovL2NybC5zZWN1cmV0cnVzdC5jb20vU1RDQS5jcmwwEAYJKwYBBAGC +NxUBBAMCAQAwDQYJKoZIhvcNAQEFBQADggEBADDtT0rhWDpSclu1pqNlGKa7UTt3 +6Z3q059c4EVlew3KW+JwULKUBRSuSceNQQcSc5R+DCMh/bwQf2AQWnL1mA6s7Ll/ +3XpvXdMc9P+IBWlCqQVxyLesJugutIxq/3HcuLHfmbx8IVQr5Fiiu1cprp6poxkm +D5kuCLDv/WnPmRoJjeOnnyvJNjR7JLN4TJUXpAYmHrZkUjZfYGfZnMUFdAvnZyPS +CPyI6a6Lf+Ew9Dd+/cYy2i2eRDAwbO4H3tI0/NL/QPZL9GZGBlSm8jIKYyYwa5vR +3ItHuuG51WLQoqD0ZwV4KWMabwTW+MZMo5qxN7SN5ShLHZ4swrhovO0C7jE= +-----END CERTIFICATE----- + +# Issuer: CN=Secure Global CA O=SecureTrust Corporation +# Subject: CN=Secure Global CA O=SecureTrust Corporation +# Label: "Secure Global CA" +# Serial: 9751836167731051554232119481456978597 +# MD5 Fingerprint: cf:f4:27:0d:d4:ed:dc:65:16:49:6d:3d:da:bf:6e:de +# SHA1 Fingerprint: 3a:44:73:5a:e5:81:90:1f:24:86:61:46:1e:3b:9c:c4:5f:f5:3a:1b +# SHA256 Fingerprint: 42:00:f5:04:3a:c8:59:0e:bb:52:7d:20:9e:d1:50:30:29:fb:cb:d4:1c:a1:b5:06:ec:27:f1:5a:de:7d:ac:69 +-----BEGIN CERTIFICATE----- +MIIDvDCCAqSgAwIBAgIQB1YipOjUiolN9BPI8PjqpTANBgkqhkiG9w0BAQUFADBK +MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3QgQ29ycG9yYXRpb24x +GTAXBgNVBAMTEFNlY3VyZSBHbG9iYWwgQ0EwHhcNMDYxMTA3MTk0MjI4WhcNMjkx +MjMxMTk1MjA2WjBKMQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3Qg +Q29ycG9yYXRpb24xGTAXBgNVBAMTEFNlY3VyZSBHbG9iYWwgQ0EwggEiMA0GCSqG +SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvNS7YrGxVaQZx5RNoJLNP2MwhR/jxYDiJ +iQPpvepeRlMJ3Fz1Wuj3RSoC6zFh1ykzTM7HfAo3fg+6MpjhHZevj8fcyTiW89sa +/FHtaMbQbqR8JNGuQsiWUGMu4P51/pinX0kuleM5M2SOHqRfkNJnPLLZ/kG5VacJ +jnIFHovdRIWCQtBJwB1g8NEXLJXr9qXBkqPFwqcIYA1gBBCWeZ4WNOaptvolRTnI +HmX5k/Wq8VLcmZg9pYYaDDUz+kulBAYVHDGA76oYa8J719rO+TMg1fW9ajMtgQT7 +sFzUnKPiXB3jqUJ1XnvUd+85VLrJChgbEplJL4hL/VBi0XPnj3pDAgMBAAGjgZ0w +gZowEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQF +MAMBAf8wHQYDVR0OBBYEFK9EBMJBfkiD2045AuzshHrmzsmkMDQGA1UdHwQtMCsw +KaAnoCWGI2h0dHA6Ly9jcmwuc2VjdXJldHJ1c3QuY29tL1NHQ0EuY3JsMBAGCSsG +AQQBgjcVAQQDAgEAMA0GCSqGSIb3DQEBBQUAA4IBAQBjGghAfaReUw132HquHw0L 
+URYD7xh8yOOvaliTFGCRsoTciE6+OYo68+aCiV0BN7OrJKQVDpI1WkpEXk5X+nXO +H0jOZvQ8QCaSmGwb7iRGDBezUqXbpZGRzzfTb+cnCDpOGR86p1hcF895P4vkp9Mm +I50mD1hp/Ed+stCNi5O/KU9DaXR2Z0vPB4zmAve14bRDtUstFJ/53CYNv6ZHdAbY +iNE6KTCEztI5gGIbqMdXSbxqVVFnFUq+NQfk1XWYN3kwFNspnWzFacxHVaIw98xc +f8LDmBxrThaA63p4ZUWiABqvDA1VZDRIuJK58bRQKfJPIx/abKwfROHdI3hRW8cW +-----END CERTIFICATE----- + +# Issuer: CN=COMODO Certification Authority O=COMODO CA Limited +# Subject: CN=COMODO Certification Authority O=COMODO CA Limited +# Label: "COMODO Certification Authority" +# Serial: 104350513648249232941998508985834464573 +# MD5 Fingerprint: 5c:48:dc:f7:42:72:ec:56:94:6d:1c:cc:71:35:80:75 +# SHA1 Fingerprint: 66:31:bf:9e:f7:4f:9e:b6:c9:d5:a6:0c:ba:6a:be:d1:f7:bd:ef:7b +# SHA256 Fingerprint: 0c:2c:d6:3d:f7:80:6f:a3:99:ed:e8:09:11:6b:57:5b:f8:79:89:f0:65:18:f9:80:8c:86:05:03:17:8b:af:66 +-----BEGIN CERTIFICATE----- +MIIEHTCCAwWgAwIBAgIQToEtioJl4AsC7j41AkblPTANBgkqhkiG9w0BAQUFADCB +gTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G +A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxJzAlBgNV +BAMTHkNPTU9ETyBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEyMDEwMDAw +MDBaFw0yOTEyMzEyMzU5NTlaMIGBMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3Jl +YXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRowGAYDVQQKExFDT01P +RE8gQ0EgTGltaXRlZDEnMCUGA1UEAxMeQ09NT0RPIENlcnRpZmljYXRpb24gQXV0 +aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0ECLi3LjkRv3 +UcEbVASY06m/weaKXTuH+7uIzg3jLz8GlvCiKVCZrts7oVewdFFxze1CkU1B/qnI +2GqGd0S7WWaXUF601CxwRM/aN5VCaTwwxHGzUvAhTaHYujl8HJ6jJJ3ygxaYqhZ8 +Q5sVW7euNJH+1GImGEaaP+vB+fGQV+useg2L23IwambV4EajcNxo2f8ESIl33rXp ++2dtQem8Ob0y2WIC8bGoPW43nOIv4tOiJovGuFVDiOEjPqXSJDlqR6sA1KGzqSX+ +DT+nHbrTUcELpNqsOO9VUCQFZUaTNE8tja3G1CEZ0o7KBWFxB3NH5YoZEr0ETc5O +nKVIrLsm9wIDAQABo4GOMIGLMB0GA1UdDgQWBBQLWOWLxkwVN6RAqTCpIb5HNlpW +/zAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zBJBgNVHR8EQjBAMD6g +PKA6hjhodHRwOi8vY3JsLmNvbW9kb2NhLmNvbS9DT01PRE9DZXJ0aWZpY2F0aW9u +QXV0aG9yaXR5LmNybDANBgkqhkiG9w0BAQUFAAOCAQEAPpiem/Yb6dc5t3iuHXIY +SdOH5EOC6z/JqvWote9VfCFSZfnVDeFs9D6Mk3ORLgLETgdxb8CPOGEIqB6BCsAv +IC9Bi5HcSEW88cbeunZrM8gALTFGTO3nnc+IlP8zwFboJIYmuNg4ON8qa90SzMc/ +RxdMosIGlgnW2/4/PEZB31jiVg88O8EckzXZOFKs7sjsLjBOlDW0JB9LeGna8gI4 +zJVSk/BwJVmcIGfE7vmLV2H0knZ9P4SNVbfo5azV8fUZVqZa+5Acr5Pr5RzUZ5dd +BA6+C4OmF4O5MBKgxTMVBbkN+8cFduPYSo38NBejxiEovjBFMR7HeL5YYTisO+IB +ZQ== +-----END CERTIFICATE----- + +# Issuer: CN=Network Solutions Certificate Authority O=Network Solutions L.L.C. +# Subject: CN=Network Solutions Certificate Authority O=Network Solutions L.L.C. 
+# Label: "Network Solutions Certificate Authority" +# Serial: 116697915152937497490437556386812487904 +# MD5 Fingerprint: d3:f3:a6:16:c0:fa:6b:1d:59:b1:2d:96:4d:0e:11:2e +# SHA1 Fingerprint: 74:f8:a3:c3:ef:e7:b3:90:06:4b:83:90:3c:21:64:60:20:e5:df:ce +# SHA256 Fingerprint: 15:f0:ba:00:a3:ac:7a:f3:ac:88:4c:07:2b:10:11:a0:77:bd:77:c0:97:f4:01:64:b2:f8:59:8a:bd:83:86:0c +-----BEGIN CERTIFICATE----- +MIID5jCCAs6gAwIBAgIQV8szb8JcFuZHFhfjkDFo4DANBgkqhkiG9w0BAQUFADBi +MQswCQYDVQQGEwJVUzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMu +MTAwLgYDVQQDEydOZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3Jp +dHkwHhcNMDYxMjAxMDAwMDAwWhcNMjkxMjMxMjM1OTU5WjBiMQswCQYDVQQGEwJV +UzEhMB8GA1UEChMYTmV0d29yayBTb2x1dGlvbnMgTC5MLkMuMTAwLgYDVQQDEydO +ZXR3b3JrIFNvbHV0aW9ucyBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwggEiMA0GCSqG +SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDkvH6SMG3G2I4rC7xGzuAnlt7e+foS0zwz +c7MEL7xxjOWftiJgPl9dzgn/ggwbmlFQGiaJ3dVhXRncEg8tCqJDXRfQNJIg6nPP +OCwGJgl6cvf6UDL4wpPTaaIjzkGxzOTVHzbRijr4jGPiFFlp7Q3Tf2vouAPlT2rl +mGNpSAW+Lv8ztumXWWn4Zxmuk2GWRBXTcrA/vGp97Eh/jcOrqnErU2lBUzS1sLnF +BgrEsEX1QV1uiUV7PTsmjHTC5dLRfbIR1PtYMiKagMnc/Qzpf14Dl847ABSHJ3A4 +qY5usyd2mFHgBeMhqxrVhSI8KbWaFsWAqPS7azCPL0YCorEMIuDTAgMBAAGjgZcw +gZQwHQYDVR0OBBYEFCEwyfsA106Y2oeqKtCnLrFAMadMMA4GA1UdDwEB/wQEAwIB +BjAPBgNVHRMBAf8EBTADAQH/MFIGA1UdHwRLMEkwR6BFoEOGQWh0dHA6Ly9jcmwu +bmV0c29sc3NsLmNvbS9OZXR3b3JrU29sdXRpb25zQ2VydGlmaWNhdGVBdXRob3Jp +dHkuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQC7rkvnt1frf6ott3NHhWrB5KUd5Oc8 +6fRZZXe1eltajSU24HqXLjjAV2CDmAaDn7l2em5Q4LqILPxFzBiwmZVRDuwduIj/ +h1AcgsLj4DKAv6ALR8jDMe+ZZzKATxcheQxpXN5eNK4CtSbqUN9/GGUsyfJj4akH +/nxxH2szJGoeBfcFaMBqEssuXmHLrijTfsK0ZpEmXzwuJF/LWA/rKOyvEZbz3Htv +wKeI8lN3s2Berq4o2jUsbzRF0ybh3uxbTydrFny9RAQYgrOJeRcQcT16ohZO9QHN +pGxlaKFJdlxDydi8NmdspZS11My5vWo1ViHe2MPr+8ukYEywVaCge1ey +-----END CERTIFICATE----- + +# Issuer: CN=COMODO ECC Certification Authority O=COMODO CA Limited +# Subject: CN=COMODO ECC Certification Authority O=COMODO CA Limited +# Label: "COMODO ECC Certification Authority" +# Serial: 41578283867086692638256921589707938090 +# MD5 Fingerprint: 7c:62:ff:74:9d:31:53:5e:68:4a:d5:78:aa:1e:bf:23 +# SHA1 Fingerprint: 9f:74:4e:9f:2b:4d:ba:ec:0f:31:2c:50:b6:56:3b:8e:2d:93:c3:11 +# SHA256 Fingerprint: 17:93:92:7a:06:14:54:97:89:ad:ce:2f:8f:34:f7:f0:b6:6d:0f:3a:e3:a3:b8:4d:21:ec:15:db:ba:4f:ad:c7 +-----BEGIN CERTIFICATE----- +MIICiTCCAg+gAwIBAgIQH0evqmIAcFBUTAGem2OZKjAKBggqhkjOPQQDAzCBhTEL +MAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE +BxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMT +IkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwMzA2MDAw +MDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdy +ZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09N +T0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlv +biBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQDR3svdcmCFYX7deSR +FtSrYpn1PlILBs5BAH+X4QokPB0BBO490o0JlwzgdeT6+3eKKvUDYEs2ixYjFq0J +cfRK9ChQtP6IHG4/bC8vCVlbpVsLM5niwz2J+Wos77LTBumjQjBAMB0GA1UdDgQW +BBR1cacZSBm8nZ3qQUfflMRId5nTeTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/ +BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjEA7wNbeqy3eApyt4jf/7VGFAkK+qDm +fQjGGoe9GKhzvSbKYAydzpmfz1wPMOG+FDHqAjAU9JM8SaczepBGR7NjfRObTrdv +GDeAU/7dIOA1mjbRxwG55tzd8/8dLDoWV9mSOdY= +-----END CERTIFICATE----- + +# Issuer: CN=Certigna O=Dhimyotis +# Subject: CN=Certigna O=Dhimyotis +# Label: "Certigna" +# Serial: 18364802974209362175 +# MD5 Fingerprint: ab:57:a6:5b:7d:42:82:19:b5:d8:58:26:28:5e:fd:ff +# SHA1 Fingerprint: b1:2e:13:63:45:86:a4:6f:1a:b2:60:68:37:58:2d:c4:ac:fd:94:97 +# SHA256 
Fingerprint: e3:b6:a2:db:2e:d7:ce:48:84:2f:7a:c5:32:41:c7:b7:1d:54:14:4b:fb:40:c1:1f:3f:1d:0b:42:f5:ee:a1:2d +-----BEGIN CERTIFICATE----- +MIIDqDCCApCgAwIBAgIJAP7c4wEPyUj/MA0GCSqGSIb3DQEBBQUAMDQxCzAJBgNV +BAYTAkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hMB4X +DTA3MDYyOTE1MTMwNVoXDTI3MDYyOTE1MTMwNVowNDELMAkGA1UEBhMCRlIxEjAQ +BgNVBAoMCURoaW15b3RpczERMA8GA1UEAwwIQ2VydGlnbmEwggEiMA0GCSqGSIb3 +DQEBAQUAA4IBDwAwggEKAoIBAQDIaPHJ1tazNHUmgh7stL7qXOEm7RFHYeGifBZ4 +QCHkYJ5ayGPhxLGWkv8YbWkj4Sti993iNi+RB7lIzw7sebYs5zRLcAglozyHGxny +gQcPOJAZ0xH+hrTy0V4eHpbNgGzOOzGTtvKg0KmVEn2lmsxryIRWijOp5yIVUxbw +zBfsV1/pogqYCd7jX5xv3EjjhQsVWqa6n6xI4wmy9/Qy3l40vhx4XUJbzg4ij02Q +130yGLMLLGq/jj8UEYkgDncUtT2UCIf3JR7VsmAA7G8qKCVuKj4YYxclPz5EIBb2 +JsglrgVKtOdjLPOMFlN+XPsRGgjBRmKfIrjxwo1p3Po6WAbfAgMBAAGjgbwwgbkw +DwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUGu3+QTmQtCRZvgHyUtVF9lo53BEw +ZAYDVR0jBF0wW4AUGu3+QTmQtCRZvgHyUtVF9lo53BGhOKQ2MDQxCzAJBgNVBAYT +AkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hggkA/tzj +AQ/JSP8wDgYDVR0PAQH/BAQDAgEGMBEGCWCGSAGG+EIBAQQEAwIABzANBgkqhkiG +9w0BAQUFAAOCAQEAhQMeknH2Qq/ho2Ge6/PAD/Kl1NqV5ta+aDY9fm4fTIrv0Q8h +bV6lUmPOEvjvKtpv6zf+EwLHyzs+ImvaYS5/1HI93TDhHkxAGYwP15zRgzB7mFnc +fca5DClMoTOi62c6ZYTTluLtdkVwj7Ur3vkj1kluPBS1xp81HlDQwY9qcEQCYsuu +HWhBp6pX6FOqB9IG9tUUBguRA3UsbHK1YZWaDYu5Def131TN3ubY1gkIl2PlwS6w +t0QmwCbAr1UwnjvVNioZBPRcHv/PLLf/0P2HQBHVESO7SMAhqaQoLf0V+LBOK/Qw +WyH8EZE0vkHve52Xdf+XlcCWWC/qu0bXu+TZLg== +-----END CERTIFICATE----- + +# Issuer: CN=Cybertrust Global Root O=Cybertrust, Inc +# Subject: CN=Cybertrust Global Root O=Cybertrust, Inc +# Label: "Cybertrust Global Root" +# Serial: 4835703278459682877484360 +# MD5 Fingerprint: 72:e4:4a:87:e3:69:40:80:77:ea:bc:e3:f4:ff:f0:e1 +# SHA1 Fingerprint: 5f:43:e5:b1:bf:f8:78:8c:ac:1c:c7:ca:4a:9a:c6:22:2b:cc:34:c6 +# SHA256 Fingerprint: 96:0a:df:00:63:e9:63:56:75:0c:29:65:dd:0a:08:67:da:0b:9c:bd:6e:77:71:4a:ea:fb:23:49:ab:39:3d:a3 +-----BEGIN CERTIFICATE----- +MIIDoTCCAomgAwIBAgILBAAAAAABD4WqLUgwDQYJKoZIhvcNAQEFBQAwOzEYMBYG +A1UEChMPQ3liZXJ0cnVzdCwgSW5jMR8wHQYDVQQDExZDeWJlcnRydXN0IEdsb2Jh +bCBSb290MB4XDTA2MTIxNTA4MDAwMFoXDTIxMTIxNTA4MDAwMFowOzEYMBYGA1UE +ChMPQ3liZXJ0cnVzdCwgSW5jMR8wHQYDVQQDExZDeWJlcnRydXN0IEdsb2JhbCBS +b290MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA+Mi8vRRQZhP/8NN5 +7CPytxrHjoXxEnOmGaoQ25yiZXRadz5RfVb23CO21O1fWLE3TdVJDm71aofW0ozS +J8bi/zafmGWgE07GKmSb1ZASzxQG9Dvj1Ci+6A74q05IlG2OlTEQXO2iLb3VOm2y +HLtgwEZLAfVJrn5GitB0jaEMAs7u/OePuGtm839EAL9mJRQr3RAwHQeWP032a7iP +t3sMpTjr3kfb1V05/Iin89cqdPHoWqI7n1C6poxFNcJQZZXcY4Lv3b93TZxiyWNz +FtApD0mpSPCzqrdsxacwOUBdrsTiXSZT8M4cIwhhqJQZugRiQOwfOHB3EgZxpzAY +XSUnpQIDAQABo4GlMIGiMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/ +MB0GA1UdDgQWBBS2CHsNesysIEyGVjJez6tuhS1wVzA/BgNVHR8EODA2MDSgMqAw +hi5odHRwOi8vd3d3Mi5wdWJsaWMtdHJ1c3QuY29tL2NybC9jdC9jdHJvb3QuY3Js +MB8GA1UdIwQYMBaAFLYIew16zKwgTIZWMl7Pq26FLXBXMA0GCSqGSIb3DQEBBQUA +A4IBAQBW7wojoFROlZfJ+InaRcHUowAl9B8Tq7ejhVhpwjCt2BWKLePJzYFa+HMj +Wqd8BfP9IjsO0QbE2zZMcwSO5bAi5MXzLqXZI+O4Tkogp24CJJ8iYGd7ix1yCcUx +XOl5n4BHPa2hCwcUPUf/A2kaDAtE52Mlp3+yybh2hO0j9n0Hq0V+09+zv+mKts2o +omcrUtW3ZfA5TGOgkXmTUg9U3YO7n9GPp1Nzw8v/MOx8BLjYRB+TX3EJIrduPuoc +A06dGiBh+4E37F78CkWr1+cXVdCg6mCbpvbjjFspwgZgFJ0tl0ypkxWdYcQBX0jW +WL1WMRJOEcgh4LMRkWXbtKaIOM5V +-----END CERTIFICATE----- + +# Issuer: O=Chunghwa Telecom Co., Ltd. OU=ePKI Root Certification Authority +# Subject: O=Chunghwa Telecom Co., Ltd. 
OU=ePKI Root Certification Authority +# Label: "ePKI Root Certification Authority" +# Serial: 28956088682735189655030529057352760477 +# MD5 Fingerprint: 1b:2e:00:ca:26:06:90:3d:ad:fe:6f:15:68:d3:6b:b3 +# SHA1 Fingerprint: 67:65:0d:f1:7e:8e:7e:5b:82:40:a4:f4:56:4b:cf:e2:3d:69:c6:f0 +# SHA256 Fingerprint: c0:a6:f4:dc:63:a2:4b:fd:cf:54:ef:2a:6a:08:2a:0a:72:de:35:80:3e:2f:f5:ff:52:7a:e5:d8:72:06:df:d5 +-----BEGIN CERTIFICATE----- +MIIFsDCCA5igAwIBAgIQFci9ZUdcr7iXAF7kBtK8nTANBgkqhkiG9w0BAQUFADBe +MQswCQYDVQQGEwJUVzEjMCEGA1UECgwaQ2h1bmdod2EgVGVsZWNvbSBDby4sIEx0 +ZC4xKjAoBgNVBAsMIWVQS0kgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAe +Fw0wNDEyMjAwMjMxMjdaFw0zNDEyMjAwMjMxMjdaMF4xCzAJBgNVBAYTAlRXMSMw +IQYDVQQKDBpDaHVuZ2h3YSBUZWxlY29tIENvLiwgTHRkLjEqMCgGA1UECwwhZVBL +SSBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIICIjANBgkqhkiG9w0BAQEF +AAOCAg8AMIICCgKCAgEA4SUP7o3biDN1Z82tH306Tm2d0y8U82N0ywEhajfqhFAH +SyZbCUNsIZ5qyNUD9WBpj8zwIuQf5/dqIjG3LBXy4P4AakP/h2XGtRrBp0xtInAh +ijHyl3SJCRImHJ7K2RKilTza6We/CKBk49ZCt0Xvl/T29de1ShUCWH2YWEtgvM3X +DZoTM1PRYfl61dd4s5oz9wCGzh1NlDivqOx4UXCKXBCDUSH3ET00hl7lSM2XgYI1 +TBnsZfZrxQWh7kcT1rMhJ5QQCtkkO7q+RBNGMD+XPNjX12ruOzjjK9SXDrkb5wdJ +fzcq+Xd4z1TtW0ado4AOkUPB1ltfFLqfpo0kR0BZv3I4sjZsN/+Z0V0OWQqraffA +sgRFelQArr5T9rXn4fg8ozHSqf4hUmTFpmfwdQcGlBSBVcYn5AGPF8Fqcde+S/uU +WH1+ETOxQvdibBjWzwloPn9s9h6PYq2lY9sJpx8iQkEeb5mKPtf5P0B6ebClAZLS +nT0IFaUQAS2zMnaolQ2zepr7BxB4EW/hj8e6DyUadCrlHJhBmd8hh+iVBmoKs2pH +dmX2Os+PYhcZewoozRrSgx4hxyy/vv9haLdnG7t4TY3OZ+XkwY63I2binZB1NJip +NiuKmpS5nezMirH4JYlcWrYvjB9teSSnUmjDhDXiZo1jDiVN1Rmy5nk3pyKdVDEC +AwEAAaNqMGgwHQYDVR0OBBYEFB4M97Zn8uGSJglFwFU5Lnc/QkqiMAwGA1UdEwQF +MAMBAf8wOQYEZyoHAAQxMC8wLQIBADAJBgUrDgMCGgUAMAcGBWcqAwAABBRFsMLH +ClZ87lt4DJX5GFPBphzYEDANBgkqhkiG9w0BAQUFAAOCAgEACbODU1kBPpVJufGB +uvl2ICO1J2B01GqZNF5sAFPZn/KmsSQHRGoqxqWOeBLoR9lYGxMqXnmbnwoqZ6Yl +PwZpVnPDimZI+ymBV3QGypzqKOg4ZyYr8dW1P2WT+DZdjo2NQCCHGervJ8A9tDkP +JXtoUHRVnAxZfVo9QZQlUgjgRywVMRnVvwdVxrsStZf0X4OFunHB2WyBEXYKCrC/ +gpf36j36+uwtqSiUO1bd0lEursC9CBWMd1I0ltabrNMdjmEPNXubrjlpC2JgQCA2 +j6/7Nu4tCEoduL+bXPjqpRugc6bY+G7gMwRfaKonh+3ZwZCc7b3jajWvY9+rGNm6 +5ulK6lCKD2GTHuItGeIwlDWSXQ62B68ZgI9HkFFLLk3dheLSClIKF5r8GrBQAuUB +o2M3IUxExJtRmREOc5wGj1QupyheRDmHVi03vYVElOEMSyycw5KFNGHLD7ibSkNS +/jQ6fbjpKdx2qcgw+BRxgMYeNkh0IkFch4LoGHGLQYlE535YW6i4jRPpp2zDR+2z +Gp1iro2C6pSe3VkQw63d4k3jMdXH7OjysP6SHhYKGvzZ8/gntsm+HbRsZJB/9OTE +W9c3rkIO3aQab3yIVMUWbuF6aC74Or8NpDyJO3inTmODBCEIZ43ygknQW/2xzQ+D +hNQ+IIX3Sj0rnP0qCglN6oH4EZw= +-----END CERTIFICATE----- + +# Issuer: O=certSIGN OU=certSIGN ROOT CA +# Subject: O=certSIGN OU=certSIGN ROOT CA +# Label: "certSIGN ROOT CA" +# Serial: 35210227249154 +# MD5 Fingerprint: 18:98:c0:d6:e9:3a:fc:f9:b0:f5:0c:f7:4b:01:44:17 +# SHA1 Fingerprint: fa:b7:ee:36:97:26:62:fb:2d:b0:2a:f6:bf:03:fd:e8:7c:4b:2f:9b +# SHA256 Fingerprint: ea:a9:62:c4:fa:4a:6b:af:eb:e4:15:19:6d:35:1c:cd:88:8d:4f:53:f3:fa:8a:e6:d7:c4:66:a9:4e:60:42:bb +-----BEGIN CERTIFICATE----- +MIIDODCCAiCgAwIBAgIGIAYFFnACMA0GCSqGSIb3DQEBBQUAMDsxCzAJBgNVBAYT +AlJPMREwDwYDVQQKEwhjZXJ0U0lHTjEZMBcGA1UECxMQY2VydFNJR04gUk9PVCBD +QTAeFw0wNjA3MDQxNzIwMDRaFw0zMTA3MDQxNzIwMDRaMDsxCzAJBgNVBAYTAlJP +MREwDwYDVQQKEwhjZXJ0U0lHTjEZMBcGA1UECxMQY2VydFNJR04gUk9PVCBDQTCC +ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALczuX7IJUqOtdu0KBuqV5Do +0SLTZLrTk+jUrIZhQGpgV2hUhE28alQCBf/fm5oqrl0Hj0rDKH/v+yv6efHHrfAQ +UySQi2bJqIirr1qjAOm+ukbuW3N7LBeCgV5iLKECZbO9xSsAfsT8AzNXDe3i+s5d +RdY4zTW2ssHQnIFKquSyAVwdj1+ZxLGt24gh65AIgoDzMKND5pCCrlUoSe1b16kQ +OA7+j0xbm0bqQfWwCHTD0IgztnzXdN/chNFDDnU5oSVAKOp4yw4sLjmdjItuFhwv 
+JoIQ4uNllAoEwF73XVv4EOLQunpL+943AAAaWyjj0pxzPjKHmKHJUS/X3qwzs08C +AwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAcYwHQYDVR0O +BBYEFOCMm9slSbPxfIbWskKHC9BroNnkMA0GCSqGSIb3DQEBBQUAA4IBAQA+0hyJ +LjX8+HXd5n9liPRyTMks1zJO890ZeUe9jjtbkw9QSSQTaxQGcu8J06Gh40CEyecY +MnQ8SG4Pn0vU9x7Tk4ZkVJdjclDVVc/6IJMCopvDI5NOFlV2oHB5bc0hH88vLbwZ +44gx+FkagQnIl6Z0x2DEW8xXjrJ1/RsCCdtZb3KTafcxQdaIOL+Hsr0Wefmq5L6I +Jd1hJyMctTEHBDa0GpC9oHRxUIltvBTjD4au8as+x6AJzKNI0eDbZOeStc+vckNw +i/nDhDwTqn6Sm1dTk/pwwpEOMfmbZ13pljheX7NzTogVZ96edhBiIL5VaZVDADlN +9u6wWk5JRFRYX0KD +-----END CERTIFICATE----- + +# Issuer: CN=NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny O=NetLock Kft. OU=Tan\xfas\xedtv\xe1nykiad\xf3k (Certification Services) +# Subject: CN=NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny O=NetLock Kft. OU=Tan\xfas\xedtv\xe1nykiad\xf3k (Certification Services) +# Label: "NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny" +# Serial: 80544274841616 +# MD5 Fingerprint: c5:a1:b7:ff:73:dd:d6:d7:34:32:18:df:fc:3c:ad:88 +# SHA1 Fingerprint: 06:08:3f:59:3f:15:a1:04:a0:69:a4:6b:a9:03:d0:06:b7:97:09:91 +# SHA256 Fingerprint: 6c:61:da:c3:a2:de:f0:31:50:6b:e0:36:d2:a6:fe:40:19:94:fb:d1:3d:f9:c8:d4:66:59:92:74:c4:46:ec:98 +-----BEGIN CERTIFICATE----- +MIIEFTCCAv2gAwIBAgIGSUEs5AAQMA0GCSqGSIb3DQEBCwUAMIGnMQswCQYDVQQG +EwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFTATBgNVBAoMDE5ldExvY2sgS2Z0LjE3 +MDUGA1UECwwuVGFuw7pzw610dsOhbnlraWFkw7NrIChDZXJ0aWZpY2F0aW9uIFNl +cnZpY2VzKTE1MDMGA1UEAwwsTmV0TG9jayBBcmFueSAoQ2xhc3MgR29sZCkgRsWR +dGFuw7pzw610dsOhbnkwHhcNMDgxMjExMTUwODIxWhcNMjgxMjA2MTUwODIxWjCB +pzELMAkGA1UEBhMCSFUxETAPBgNVBAcMCEJ1ZGFwZXN0MRUwEwYDVQQKDAxOZXRM +b2NrIEtmdC4xNzA1BgNVBAsMLlRhbsO6c8OtdHbDoW55a2lhZMOzayAoQ2VydGlm +aWNhdGlvbiBTZXJ2aWNlcykxNTAzBgNVBAMMLE5ldExvY2sgQXJhbnkgKENsYXNz +IEdvbGQpIEbFkXRhbsO6c8OtdHbDoW55MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A +MIIBCgKCAQEAxCRec75LbRTDofTjl5Bu0jBFHjzuZ9lk4BqKf8owyoPjIMHj9DrT +lF8afFttvzBPhCf2nx9JvMaZCpDyD/V/Q4Q3Y1GLeqVw/HpYzY6b7cNGbIRwXdrz +AZAj/E4wqX7hJ2Pn7WQ8oLjJM2P+FpD/sLj916jAwJRDC7bVWaaeVtAkH3B5r9s5 +VA1lddkVQZQBr17s9o3x/61k/iCa11zr/qYfCGSji3ZVrR47KGAuhyXoqq8fxmRG +ILdwfzzeSNuWU7c5d+Qa4scWhHaXWy+7GRWF+GmF9ZmnqfI0p6m2pgP8b4Y9VHx2 +BJtr+UBdADTHLpl1neWIA6pN+APSQnbAGwIDAKiLo0UwQzASBgNVHRMBAf8ECDAG +AQH/AgEEMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUzPpnk/C2uNClwB7zU/2M +U9+D15YwDQYJKoZIhvcNAQELBQADggEBAKt/7hwWqZw8UQCgwBEIBaeZ5m8BiFRh +bvG5GK1Krf6BQCOUL/t1fC8oS2IkgYIL9WHxHG64YTjrgfpioTtaYtOUZcTh5m2C ++C8lcLIhJsFyUR+MLMOEkMNaj7rP9KdlpeuY0fsFskZ1FSNqb4VjMIDw1Z4fKRzC +bLBQWV2QWzuoDTDPv31/zvGdg73JRm4gpvlhUbohL3u+pRVjodSVh/GeufOJ8z2F +uLjbvrW5KfnaNwUASZQDhETnv0Mxz3WLJdH0pmT1kvarBes96aULNmLazAZfNou2 +XjG4Kvte9nHfRCaexOYNkbQudZWAUWpLMKawYqGT8ZvYzsRjdT9ZR7E= +-----END CERTIFICATE----- + +# Issuer: CN=Hongkong Post Root CA 1 O=Hongkong Post +# Subject: CN=Hongkong Post Root CA 1 O=Hongkong Post +# Label: "Hongkong Post Root CA 1" +# Serial: 1000 +# MD5 Fingerprint: a8:0d:6f:39:78:b9:43:6d:77:42:6d:98:5a:cc:23:ca +# SHA1 Fingerprint: d6:da:a8:20:8d:09:d2:15:4d:24:b5:2f:cb:34:6e:b2:58:b2:8a:58 +# SHA256 Fingerprint: f9:e6:7d:33:6c:51:00:2a:c0:54:c6:32:02:2d:66:dd:a2:e7:e3:ff:f1:0a:d0:61:ed:31:d8:bb:b4:10:cf:b2 +-----BEGIN CERTIFICATE----- +MIIDMDCCAhigAwIBAgICA+gwDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UEBhMCSEsx +FjAUBgNVBAoTDUhvbmdrb25nIFBvc3QxIDAeBgNVBAMTF0hvbmdrb25nIFBvc3Qg +Um9vdCBDQSAxMB4XDTAzMDUxNTA1MTMxNFoXDTIzMDUxNTA0NTIyOVowRzELMAkG +A1UEBhMCSEsxFjAUBgNVBAoTDUhvbmdrb25nIFBvc3QxIDAeBgNVBAMTF0hvbmdr +b25nIFBvc3QgUm9vdCBDQSAxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC 
+AQEArP84tulmAknjorThkPlAj3n54r15/gK97iSSHSL22oVyaf7XPwnU3ZG1ApzQ +jVrhVcNQhrkpJsLj2aDxaQMoIIBFIi1WpztUlVYiWR8o3x8gPW2iNr4joLFutbEn +PzlTCeqrauh0ssJlXI6/fMN4hM2eFvz1Lk8gKgifd/PFHsSaUmYeSF7jEAaPIpjh +ZY4bXSNmO7ilMlHIhqqhqZ5/dpTCpmy3QfDVyAY45tQM4vM7TG1QjMSDJ8EThFk9 +nnV0ttgCXjqQesBCNnLsak3c78QA3xMYV18meMjWCnl3v/evt3a5pQuEF10Q6m/h +q5URX208o1xNg1vysxmKgIsLhwIDAQABoyYwJDASBgNVHRMBAf8ECDAGAQH/AgED +MA4GA1UdDwEB/wQEAwIBxjANBgkqhkiG9w0BAQUFAAOCAQEADkbVPK7ih9legYsC +mEEIjEy82tvuJxuC52pF7BaLT4Wg87JwvVqWuspube5Gi27nKi6Wsxkz67SfqLI3 +7piol7Yutmcn1KZJ/RyTZXaeQi/cImyaT/JaFTmxcdcrUehtHJjA2Sr0oYJ71clB +oiMBdDhViw+5LmeiIAQ32pwL0xch4I+XeTRvhEgCIDMb5jREn5Fw9IBehEPCKdJs +EhTkYY2sEJCehFC78JZvRZ+K88psT/oROhUVRsPNH4NbLUES7VBnQRM9IauUiqpO +fMGx+6fWtScvl6tu4B3i0RwsH0Ti/L6RoZz71ilTc4afU9hDDl3WY4JxHYB0yvbi +AmvZWg== +-----END CERTIFICATE----- + +# Issuer: CN=SecureSign RootCA11 O=Japan Certification Services, Inc. +# Subject: CN=SecureSign RootCA11 O=Japan Certification Services, Inc. +# Label: "SecureSign RootCA11" +# Serial: 1 +# MD5 Fingerprint: b7:52:74:e2:92:b4:80:93:f2:75:e4:cc:d7:f2:ea:26 +# SHA1 Fingerprint: 3b:c4:9f:48:f8:f3:73:a0:9c:1e:bd:f8:5b:b1:c3:65:c7:d8:11:b3 +# SHA256 Fingerprint: bf:0f:ee:fb:9e:3a:58:1a:d5:f9:e9:db:75:89:98:57:43:d2:61:08:5c:4d:31:4f:6f:5d:72:59:aa:42:16:12 +-----BEGIN CERTIFICATE----- +MIIDbTCCAlWgAwIBAgIBATANBgkqhkiG9w0BAQUFADBYMQswCQYDVQQGEwJKUDEr +MCkGA1UEChMiSmFwYW4gQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcywgSW5jLjEcMBoG +A1UEAxMTU2VjdXJlU2lnbiBSb290Q0ExMTAeFw0wOTA0MDgwNDU2NDdaFw0yOTA0 +MDgwNDU2NDdaMFgxCzAJBgNVBAYTAkpQMSswKQYDVQQKEyJKYXBhbiBDZXJ0aWZp +Y2F0aW9uIFNlcnZpY2VzLCBJbmMuMRwwGgYDVQQDExNTZWN1cmVTaWduIFJvb3RD +QTExMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/XeqpRyQBTvLTJsz +i1oURaTnkBbR31fSIRCkF/3frNYfp+TbfPfs37gD2pRY/V1yfIw/XwFndBWW4wI8 +h9uuywGOwvNmxoVF9ALGOrVisq/6nL+k5tSAMJjzDbaTj6nU2DbysPyKyiyhFTOV +MdrAG/LuYpmGYz+/3ZMqg6h2uRMft85OQoWPIucuGvKVCbIFtUROd6EgvanyTgp9 +UK31BQ1FT0Zx/Sg+U/sE2C3XZR1KG/rPO7AxmjVuyIsG0wCR8pQIZUyxNAYAeoni +8McDWc/V1uinMrPmmECGxc0nEovMe863ETxiYAcjPitAbpSACW22s293bzUIUPsC +h8U+iQIDAQABo0IwQDAdBgNVHQ4EFgQUW/hNT7KlhtQ60vFjmqC+CfZXt94wDgYD +VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEB +AKChOBZmLqdWHyGcBvod7bkixTgm2E5P7KN/ed5GIaGHd48HCJqypMWvDzKYC3xm +KbabfSVSSUOrTC4rbnpwrxYO4wJs+0LmGJ1F2FXI6Dvd5+H0LgscNFxsWEr7jIhQ +X5Ucv+2rIrVls4W6ng+4reV6G4pQOh29Dbx7VFALuUKvVaAYga1lme++5Jy/xIWr +QbJUb9wlze144o4MjQlJ3WN7WmmWAiGovVJZ6X01y8hSyn+B/tlr0/cR7SXf+Of5 +pPpyl4RTDaXQMhhRdlkUbA/r7F+AjHVDg8OFmP9Mni0N5HeDk061lgeLKBObjBmN +QSdJQO7e5iNEOdyhIta6A/I= +-----END CERTIFICATE----- + +# Issuer: CN=Microsec e-Szigno Root CA 2009 O=Microsec Ltd. +# Subject: CN=Microsec e-Szigno Root CA 2009 O=Microsec Ltd. 
+# Label: "Microsec e-Szigno Root CA 2009" +# Serial: 14014712776195784473 +# MD5 Fingerprint: f8:49:f4:03:bc:44:2d:83:be:48:69:7d:29:64:fc:b1 +# SHA1 Fingerprint: 89:df:74:fe:5c:f4:0f:4a:80:f9:e3:37:7d:54:da:91:e1:01:31:8e +# SHA256 Fingerprint: 3c:5f:81:fe:a5:fa:b8:2c:64:bf:a2:ea:ec:af:cd:e8:e0:77:fc:86:20:a7:ca:e5:37:16:3d:f3:6e:db:f3:78 +-----BEGIN CERTIFICATE----- +MIIECjCCAvKgAwIBAgIJAMJ+QwRORz8ZMA0GCSqGSIb3DQEBCwUAMIGCMQswCQYD +VQQGEwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFjAUBgNVBAoMDU1pY3Jvc2VjIEx0 +ZC4xJzAlBgNVBAMMHk1pY3Jvc2VjIGUtU3ppZ25vIFJvb3QgQ0EgMjAwOTEfMB0G +CSqGSIb3DQEJARYQaW5mb0BlLXN6aWduby5odTAeFw0wOTA2MTYxMTMwMThaFw0y +OTEyMzAxMTMwMThaMIGCMQswCQYDVQQGEwJIVTERMA8GA1UEBwwIQnVkYXBlc3Qx +FjAUBgNVBAoMDU1pY3Jvc2VjIEx0ZC4xJzAlBgNVBAMMHk1pY3Jvc2VjIGUtU3pp +Z25vIFJvb3QgQ0EgMjAwOTEfMB0GCSqGSIb3DQEJARYQaW5mb0BlLXN6aWduby5o +dTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOn4j/NjrdqG2KfgQvvP +kd6mJviZpWNwrZuuyjNAfW2WbqEORO7hE52UQlKavXWFdCyoDh2Tthi3jCyoz/tc +cbna7P7ofo/kLx2yqHWH2Leh5TvPmUpG0IMZfcChEhyVbUr02MelTTMuhTlAdX4U +fIASmFDHQWe4oIBhVKZsTh/gnQ4H6cm6M+f+wFUoLAKApxn1ntxVUwOXewdI/5n7 +N4okxFnMUBBjjqqpGrCEGob5X7uxUG6k0QrM1XF+H6cbfPVTbiJfyyvm1HxdrtbC +xkzlBQHZ7Vf8wSN5/PrIJIOV87VqUQHQd9bpEqH5GoP7ghu5sJf0dgYzQ0mg/wu1 ++rUCAwEAAaOBgDB+MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0G +A1UdDgQWBBTLD8bfQkPMPcu1SCOhGnqmKrs0aDAfBgNVHSMEGDAWgBTLD8bfQkPM +Pcu1SCOhGnqmKrs0aDAbBgNVHREEFDASgRBpbmZvQGUtc3ppZ25vLmh1MA0GCSqG +SIb3DQEBCwUAA4IBAQDJ0Q5eLtXMs3w+y/w9/w0olZMEyL/azXm4Q5DwpL7v8u8h +mLzU1F0G9u5C7DBsoKqpyvGvivo/C3NqPuouQH4frlRheesuCDfXI/OMn74dseGk +ddug4lQUsbocKaQY9hK6ohQU4zE1yED/t+AFdlfBHFny+L/k7SViXITwfn4fs775 +tyERzAMBVnCnEJIeGzSBHq2cGsMEPO0CYdYeBvNfOofyK/FFh+U9rNHHV4S9a67c +2Pm2G2JwCz02yULyMtd6YebS2z3PyKnJm9zbWETXbzivf3jTo60adbocwTZ8jx5t +HMN1Rq41Bab2XD0h7lbwyYIiLXpUq3DDfSJlgnCW +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R3 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R3 +# Label: "GlobalSign Root CA - R3" +# Serial: 4835703278459759426209954 +# MD5 Fingerprint: c5:df:b8:49:ca:05:13:55:ee:2d:ba:1a:c3:3e:b0:28 +# SHA1 Fingerprint: d6:9b:56:11:48:f0:1c:77:c5:45:78:c1:09:26:df:5b:85:69:76:ad +# SHA256 Fingerprint: cb:b5:22:d7:b7:f1:27:ad:6a:01:13:86:5b:df:1c:d4:10:2e:7d:07:59:af:63:5a:7c:f4:72:0d:c9:63:c5:3b +-----BEGIN CERTIFICATE----- +MIIDXzCCAkegAwIBAgILBAAAAAABIVhTCKIwDQYJKoZIhvcNAQELBQAwTDEgMB4G +A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjMxEzARBgNVBAoTCkdsb2JhbFNp +Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDkwMzE4MTAwMDAwWhcNMjkwMzE4 +MTAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMzETMBEG +A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBAMwldpB5BngiFvXAg7aEyiie/QV2EcWtiHL8 +RgJDx7KKnQRfJMsuS+FggkbhUqsMgUdwbN1k0ev1LKMPgj0MK66X17YUhhB5uzsT +gHeMCOFJ0mpiLx9e+pZo34knlTifBtc+ycsmWQ1z3rDI6SYOgxXG71uL0gRgykmm +KPZpO/bLyCiR5Z2KYVc3rHQU3HTgOu5yLy6c+9C7v/U9AOEGM+iCK65TpjoWc4zd +QQ4gOsC0p6Hpsk+QLjJg6VfLuQSSaGjlOCZgdbKfd/+RFO+uIEn8rUAVSNECMWEZ +XriX7613t2Saer9fwRPvm2L7DWzgVGkWqQPabumDk3F2xmmFghcCAwEAAaNCMEAw +DgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFI/wS3+o +LkUkrk1Q+mOai97i3Ru8MA0GCSqGSIb3DQEBCwUAA4IBAQBLQNvAUKr+yAzv95ZU +RUm7lgAJQayzE4aGKAczymvmdLm6AC2upArT9fHxD4q/c2dKg8dEe3jgr25sbwMp +jjM5RcOO5LlXbKr8EpbsU8Yt5CRsuZRj+9xTaGdWPoO4zzUhw8lo/s7awlOqzJCK +6fBdRoyV3XpYKBovHd7NADdBj+1EbddTKJd+82cEHhXXipa0095MJ6RMG3NzdvQX +mcIfeg7jLQitChws/zyrVQ4PkX4268NXSb7hLi18YIvDQVETI53O9zJrlAGomecs +Mx86OyXShkDOOyyGeMlhLxS67ttVb9+E7gUJTb0o2HLO02JQZR7rkpeDMdmztcpH +WD9f +-----END CERTIFICATE----- + 
+# Issuer: CN=Autoridad de Certificacion Firmaprofesional CIF A62634068 +# Subject: CN=Autoridad de Certificacion Firmaprofesional CIF A62634068 +# Label: "Autoridad de Certificacion Firmaprofesional CIF A62634068" +# Serial: 6047274297262753887 +# MD5 Fingerprint: 73:3a:74:7a:ec:bb:a3:96:a6:c2:e4:e2:c8:9b:c0:c3 +# SHA1 Fingerprint: ae:c5:fb:3f:c8:e1:bf:c4:e5:4f:03:07:5a:9a:e8:00:b7:f7:b6:fa +# SHA256 Fingerprint: 04:04:80:28:bf:1f:28:64:d4:8f:9a:d4:d8:32:94:36:6a:82:88:56:55:3f:3b:14:30:3f:90:14:7f:5d:40:ef +-----BEGIN CERTIFICATE----- +MIIGFDCCA/ygAwIBAgIIU+w77vuySF8wDQYJKoZIhvcNAQEFBQAwUTELMAkGA1UE +BhMCRVMxQjBABgNVBAMMOUF1dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uIEZpcm1h +cHJvZmVzaW9uYWwgQ0lGIEE2MjYzNDA2ODAeFw0wOTA1MjAwODM4MTVaFw0zMDEy +MzEwODM4MTVaMFExCzAJBgNVBAYTAkVTMUIwQAYDVQQDDDlBdXRvcmlkYWQgZGUg +Q2VydGlmaWNhY2lvbiBGaXJtYXByb2Zlc2lvbmFsIENJRiBBNjI2MzQwNjgwggIi +MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDKlmuO6vj78aI14H9M2uDDUtd9 +thDIAl6zQyrET2qyyhxdKJp4ERppWVevtSBC5IsP5t9bpgOSL/UR5GLXMnE42QQM +cas9UX4PB99jBVzpv5RvwSmCwLTaUbDBPLutN0pcyvFLNg4kq7/DhHf9qFD0sefG +L9ItWY16Ck6WaVICqjaY7Pz6FIMMNx/Jkjd/14Et5cS54D40/mf0PmbR0/RAz15i +NA9wBj4gGFrO93IbJWyTdBSTo3OxDqqHECNZXyAFGUftaI6SEspd/NYrspI8IM/h +X68gvqB2f3bl7BqGYTM+53u0P6APjqK5am+5hyZvQWyIplD9amML9ZMWGxmPsu2b +m8mQ9QEM3xk9Dz44I8kvjwzRAv4bVdZO0I08r0+k8/6vKtMFnXkIoctXMbScyJCy +Z/QYFpM6/EfY0XiWMR+6KwxfXZmtY4laJCB22N/9q06mIqqdXuYnin1oKaPnirja +EbsXLZmdEyRG98Xi2J+Of8ePdG1asuhy9azuJBCtLxTa/y2aRnFHvkLfuwHb9H/T +KI8xWVvTyQKmtFLKbpf7Q8UIJm+K9Lv9nyiqDdVF8xM6HdjAeI9BZzwelGSuewvF +6NkBiDkal4ZkQdU7hwxu+g/GvUgUvzlN1J5Bto+WHWOWk9mVBngxaJ43BjuAiUVh +OSPHG0SjFeUc+JIwuwIDAQABo4HvMIHsMBIGA1UdEwEB/wQIMAYBAf8CAQEwDgYD +VR0PAQH/BAQDAgEGMB0GA1UdDgQWBBRlzeurNR4APn7VdMActHNHDhpkLzCBpgYD +VR0gBIGeMIGbMIGYBgRVHSAAMIGPMC8GCCsGAQUFBwIBFiNodHRwOi8vd3d3LmZp +cm1hcHJvZmVzaW9uYWwuY29tL2NwczBcBggrBgEFBQcCAjBQHk4AUABhAHMAZQBv +ACAAZABlACAAbABhACAAQgBvAG4AYQBuAG8AdgBhACAANAA3ACAAQgBhAHIAYwBl +AGwAbwBuAGEAIAAwADgAMAAxADcwDQYJKoZIhvcNAQEFBQADggIBABd9oPm03cXF +661LJLWhAqvdpYhKsg9VSytXjDvlMd3+xDLx51tkljYyGOylMnfX40S2wBEqgLk9 +am58m9Ot/MPWo+ZkKXzR4Tgegiv/J2Wv+xYVxC5xhOW1//qkR71kMrv2JYSiJ0L1 +ILDCExARzRAVukKQKtJE4ZYm6zFIEv0q2skGz3QeqUvVhyj5eTSSPi5E6PaPT481 +PyWzOdxjKpBrIF/EUhJOlywqrJ2X3kjyo2bbwtKDlaZmp54lD+kLM5FlClrD2VQS +3a/DTg4fJl4N3LON7NWBcN7STyQF82xO9UxJZo3R/9ILJUFI/lGExkKvgATP0H5k +SeTy36LssUzAKh3ntLFlosS88Zj0qnAHY7S42jtM+kAiMFsRpvAFDsYCA0irhpuF +3dvd6qJ2gHN99ZwExEWN57kci57q13XRcrHedUTnQn3iV2t93Jm8PYMo6oCTjcVM +ZcFwgbg4/EMxsvYDNEeyrPsiBsse3RdHHF9mudMaotoRsaS8I8nkvof/uZS2+F0g +StRf571oe2XyFR7SOqkt6dhrJKyXWERHrVkY8SFlcN7ONGCoQPHzPKTDKCOM/icz +Q0CgFzzr6juwcqajuUpLXhZI9LK8yIySxZ2frHI2vDSANGupi5LAuBft7HZT9SQB +jLMi6Et8Vcad+qMUu2WFbm5PEn4KPJ2V +-----END CERTIFICATE----- + +# Issuer: CN=Izenpe.com O=IZENPE S.A. +# Subject: CN=Izenpe.com O=IZENPE S.A. 
+# Label: "Izenpe.com" +# Serial: 917563065490389241595536686991402621 +# MD5 Fingerprint: a6:b0:cd:85:80:da:5c:50:34:a3:39:90:2f:55:67:73 +# SHA1 Fingerprint: 2f:78:3d:25:52:18:a7:4a:65:39:71:b5:2c:a2:9c:45:15:6f:e9:19 +# SHA256 Fingerprint: 25:30:cc:8e:98:32:15:02:ba:d9:6f:9b:1f:ba:1b:09:9e:2d:29:9e:0f:45:48:bb:91:4f:36:3b:c0:d4:53:1f +-----BEGIN CERTIFICATE----- +MIIF8TCCA9mgAwIBAgIQALC3WhZIX7/hy/WL1xnmfTANBgkqhkiG9w0BAQsFADA4 +MQswCQYDVQQGEwJFUzEUMBIGA1UECgwLSVpFTlBFIFMuQS4xEzARBgNVBAMMCkl6 +ZW5wZS5jb20wHhcNMDcxMjEzMTMwODI4WhcNMzcxMjEzMDgyNzI1WjA4MQswCQYD +VQQGEwJFUzEUMBIGA1UECgwLSVpFTlBFIFMuQS4xEzARBgNVBAMMCkl6ZW5wZS5j +b20wggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDJ03rKDx6sp4boFmVq +scIbRTJxldn+EFvMr+eleQGPicPK8lVx93e+d5TzcqQsRNiekpsUOqHnJJAKClaO +xdgmlOHZSOEtPtoKct2jmRXagaKH9HtuJneJWK3W6wyyQXpzbm3benhB6QiIEn6H +LmYRY2xU+zydcsC8Lv/Ct90NduM61/e0aL6i9eOBbsFGb12N4E3GVFWJGjMxCrFX +uaOKmMPsOzTFlUFpfnXCPCDFYbpRR6AgkJOhkEvzTnyFRVSa0QUmQbC1TR0zvsQD +yCV8wXDbO/QJLVQnSKwv4cSsPsjLkkxTOTcj7NMB+eAJRE1NZMDhDVqHIrytG6P+ +JrUV86f8hBnp7KGItERphIPzidF0BqnMC9bC3ieFUCbKF7jJeodWLBoBHmy+E60Q +rLUk9TiRodZL2vG70t5HtfG8gfZZa88ZU+mNFctKy6lvROUbQc/hhqfK0GqfvEyN +BjNaooXlkDWgYlwWTvDjovoDGrQscbNYLN57C9saD+veIR8GdwYDsMnvmfzAuU8L +hij+0rnq49qlw0dpEuDb8PYZi+17cNcC1u2HGCgsBCRMd+RIihrGO5rUD8r6ddIB +QFqNeb+Lz0vPqhbBleStTIo+F5HUsWLlguWABKQDfo2/2n+iD5dPDNMN+9fR5XJ+ +HMh3/1uaD7euBUbl8agW7EekFwIDAQABo4H2MIHzMIGwBgNVHREEgagwgaWBD2lu +Zm9AaXplbnBlLmNvbaSBkTCBjjFHMEUGA1UECgw+SVpFTlBFIFMuQS4gLSBDSUYg +QTAxMzM3MjYwLVJNZXJjLlZpdG9yaWEtR2FzdGVpeiBUMTA1NSBGNjIgUzgxQzBB +BgNVBAkMOkF2ZGEgZGVsIE1lZGl0ZXJyYW5lbyBFdG9yYmlkZWEgMTQgLSAwMTAx +MCBWaXRvcmlhLUdhc3RlaXowDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC +AQYwHQYDVR0OBBYEFB0cZQ6o8iV7tJHP5LGx5r1VdGwFMA0GCSqGSIb3DQEBCwUA +A4ICAQB4pgwWSp9MiDrAyw6lFn2fuUhfGI8NYjb2zRlrrKvV9pF9rnHzP7MOeIWb +laQnIUdCSnxIOvVFfLMMjlF4rJUT3sb9fbgakEyrkgPH7UIBzg/YsfqikuFgba56 +awmqxinuaElnMIAkejEWOVt+8Rwu3WwJrfIxwYJOubv5vr8qhT/AQKM6WfxZSzwo +JNu0FXWuDYi6LnPAvViH5ULy617uHjAimcs30cQhbIHsvm0m5hzkQiCeR7Csg1lw +LDXWrzY0tM07+DKo7+N4ifuNRSzanLh+QBxh5z6ikixL8s36mLYp//Pye6kfLqCT +VyvehQP5aTfLnnhqBbTFMXiJ7HqnheG5ezzevh55hM6fcA5ZwjUukCox2eRFekGk +LhObNA5me0mrZJfQRsN5nXJQY6aYWwa9SG3YOYNw6DXwBdGqvOPbyALqfP2C2sJb +UjWumDqtujWTI6cfSN01RpiyEGjkpTHCClguGYEQyVB1/OpaFs4R1+7vUIgtYf8/ +QnMFlEPVjjxOAToZpR9GTnfQXeWBIiGH/pR9hNiTrdZoQ0iy2+tzJOeRf1SktoA+ +naM8THLCV8Sg1Mw4J87VBp6iSNnpn86CcDaTmjvfliHjWbcM2pE38P1ZWrOZyGls +QyYBNWNgVYkDOnXYukrZVP/u3oDYLdE41V4tC5h9Pmzb/CaIxw== +-----END CERTIFICATE----- + +# Issuer: CN=Go Daddy Root Certificate Authority - G2 O=GoDaddy.com, Inc. +# Subject: CN=Go Daddy Root Certificate Authority - G2 O=GoDaddy.com, Inc. 
+# Label: "Go Daddy Root Certificate Authority - G2" +# Serial: 0 +# MD5 Fingerprint: 80:3a:bc:22:c1:e6:fb:8d:9b:3b:27:4a:32:1b:9a:01 +# SHA1 Fingerprint: 47:be:ab:c9:22:ea:e8:0e:78:78:34:62:a7:9f:45:c2:54:fd:e6:8b +# SHA256 Fingerprint: 45:14:0b:32:47:eb:9c:c8:c5:b4:f0:d7:b5:30:91:f7:32:92:08:9e:6e:5a:63:e2:74:9d:d3:ac:a9:19:8e:da +-----BEGIN CERTIFICATE----- +MIIDxTCCAq2gAwIBAgIBADANBgkqhkiG9w0BAQsFADCBgzELMAkGA1UEBhMCVVMx +EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAYBgNVBAoT +EUdvRGFkZHkuY29tLCBJbmMuMTEwLwYDVQQDEyhHbyBEYWRkeSBSb290IENlcnRp +ZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAwMFoXDTM3MTIzMTIz +NTk1OVowgYMxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMRMwEQYDVQQH +EwpTY290dHNkYWxlMRowGAYDVQQKExFHb0RhZGR5LmNvbSwgSW5jLjExMC8GA1UE +AxMoR28gRGFkZHkgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIw +DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9xYgjx+lk09xvJGKP3gElY6SKD +E6bFIEMBO4Tx5oVJnyfq9oQbTqC023CYxzIBsQU+B07u9PpPL1kwIuerGVZr4oAH +/PMWdYA5UXvl+TW2dE6pjYIT5LY/qQOD+qK+ihVqf94Lw7YZFAXK6sOoBJQ7Rnwy +DfMAZiLIjWltNowRGLfTshxgtDj6AozO091GB94KPutdfMh8+7ArU6SSYmlRJQVh +GkSBjCypQ5Yj36w6gZoOKcUcqeldHraenjAKOc7xiID7S13MMuyFYkMlNAJWJwGR +tDtwKj9useiciAF9n9T521NtYJ2/LOdYq7hfRvzOxBsDPAnrSTFcaUaz4EcCAwEA +AaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE +FDqahQcQZyi27/a9BUFuIMGU2g/eMA0GCSqGSIb3DQEBCwUAA4IBAQCZ21151fmX +WWcDYfF+OwYxdS2hII5PZYe096acvNjpL9DbWu7PdIxztDhC2gV7+AJ1uP2lsdeu +9tfeE8tTEH6KRtGX+rcuKxGrkLAngPnon1rpN5+r5N9ss4UXnT3ZJE95kTXWXwTr +gIOrmgIttRD02JDHBHNA7XIloKmf7J6raBKZV8aPEjoJpL1E/QYVN8Gb5DKj7Tjo +2GTzLH4U/ALqn83/B2gX2yKQOC16jdFU8WnjXzPKej17CuPKf1855eJ1usV2GDPO +LPAvTK33sefOT6jEm0pUBsV/fdUID+Ic/n4XuKxe9tQWskMJDE32p2u0mYRlynqI +4uJEvlz36hz1 +-----END CERTIFICATE----- + +# Issuer: CN=Starfield Root Certificate Authority - G2 O=Starfield Technologies, Inc. +# Subject: CN=Starfield Root Certificate Authority - G2 O=Starfield Technologies, Inc. 
+# Label: "Starfield Root Certificate Authority - G2" +# Serial: 0 +# MD5 Fingerprint: d6:39:81:c6:52:7e:96:69:fc:fc:ca:66:ed:05:f2:96 +# SHA1 Fingerprint: b5:1c:06:7c:ee:2b:0c:3d:f8:55:ab:2d:92:f4:fe:39:d4:e7:0f:0e +# SHA256 Fingerprint: 2c:e1:cb:0b:f9:d2:f9:e1:02:99:3f:be:21:51:52:c3:b2:dd:0c:ab:de:1c:68:e5:31:9b:83:91:54:db:b7:f5 +-----BEGIN CERTIFICATE----- +MIID3TCCAsWgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBjzELMAkGA1UEBhMCVVMx +EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT +HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAMTKVN0YXJmaWVs +ZCBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAw +MFoXDTM3MTIzMTIzNTk1OVowgY8xCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6 +b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFyZmllbGQgVGVj +aG5vbG9naWVzLCBJbmMuMTIwMAYDVQQDEylTdGFyZmllbGQgUm9vdCBDZXJ0aWZp +Y2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC +ggEBAL3twQP89o/8ArFvW59I2Z154qK3A2FWGMNHttfKPTUuiUP3oWmb3ooa/RMg +nLRJdzIpVv257IzdIvpy3Cdhl+72WoTsbhm5iSzchFvVdPtrX8WJpRBSiUZV9Lh1 +HOZ/5FSuS/hVclcCGfgXcVnrHigHdMWdSL5stPSksPNkN3mSwOxGXn/hbVNMYq/N +Hwtjuzqd+/x5AJhhdM8mgkBj87JyahkNmcrUDnXMN/uLicFZ8WJ/X7NfZTD4p7dN +dloedl40wOiWVpmKs/B/pM293DIxfJHP4F8R+GuqSVzRmZTRouNjWwl2tVZi4Ut0 +HZbUJtQIBFnQmA4O5t78w+wfkPECAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO +BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFHwMMh+n2TB/xH1oo2Kooc6rB1snMA0G +CSqGSIb3DQEBCwUAA4IBAQARWfolTwNvlJk7mh+ChTnUdgWUXuEok21iXQnCoKjU +sHU48TRqneSfioYmUeYs0cYtbpUgSpIB7LiKZ3sx4mcujJUDJi5DnUox9g61DLu3 +4jd/IroAow57UvtruzvE03lRTs2Q9GcHGcg8RnoNAX3FWOdt5oUwF5okxBDgBPfg +8n/Uqgr/Qh037ZTlZFkSIHc40zI+OIF1lnP6aI+xy84fxez6nH7PfrHxBy22/L/K +pL/QlwVKvOoYKAKQvVR4CSFx09F9HdkWsKlhPdAKACL8x3vLCWRFCztAgfd9fDL1 +mMpYjn0q7pBZc2T5NnReJaH1ZgUufzkVqSr7UIuOhWn0 +-----END CERTIFICATE----- + +# Issuer: CN=Starfield Services Root Certificate Authority - G2 O=Starfield Technologies, Inc. +# Subject: CN=Starfield Services Root Certificate Authority - G2 O=Starfield Technologies, Inc. 
+# Label: "Starfield Services Root Certificate Authority - G2" +# Serial: 0 +# MD5 Fingerprint: 17:35:74:af:7b:61:1c:eb:f4:f9:3c:e2:ee:40:f9:a2 +# SHA1 Fingerprint: 92:5a:8f:8d:2c:6d:04:e0:66:5f:59:6a:ff:22:d8:63:e8:25:6f:3f +# SHA256 Fingerprint: 56:8d:69:05:a2:c8:87:08:a4:b3:02:51:90:ed:cf:ed:b1:97:4a:60:6a:13:c6:e5:29:0f:cb:2a:e6:3e:da:b5 +-----BEGIN CERTIFICATE----- +MIID7zCCAtegAwIBAgIBADANBgkqhkiG9w0BAQsFADCBmDELMAkGA1UEBhMCVVMx +EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT +HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xOzA5BgNVBAMTMlN0YXJmaWVs +ZCBTZXJ2aWNlcyBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5 +MDkwMTAwMDAwMFoXDTM3MTIzMTIzNTk1OVowgZgxCzAJBgNVBAYTAlVTMRAwDgYD +VQQIEwdBcml6b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFy +ZmllbGQgVGVjaG5vbG9naWVzLCBJbmMuMTswOQYDVQQDEzJTdGFyZmllbGQgU2Vy +dmljZXMgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBANUMOsQq+U7i9b4Zl1+OiFOxHz/Lz58gE20p +OsgPfTz3a3Y4Y9k2YKibXlwAgLIvWX/2h/klQ4bnaRtSmpDhcePYLQ1Ob/bISdm2 +8xpWriu2dBTrz/sm4xq6HZYuajtYlIlHVv8loJNwU4PahHQUw2eeBGg6345AWh1K +Ts9DkTvnVtYAcMtS7nt9rjrnvDH5RfbCYM8TWQIrgMw0R9+53pBlbQLPLJGmpufe +hRhJfGZOozptqbXuNC66DQO4M99H67FrjSXZm86B0UVGMpZwh94CDklDhbZsc7tk +6mFBrMnUVN+HL8cisibMn1lUaJ/8viovxFUcdUBgF4UCVTmLfwUCAwEAAaNCMEAw +DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFJxfAN+q +AdcwKziIorhtSpzyEZGDMA0GCSqGSIb3DQEBCwUAA4IBAQBLNqaEd2ndOxmfZyMI +bw5hyf2E3F/YNoHN2BtBLZ9g3ccaaNnRbobhiCPPE95Dz+I0swSdHynVv/heyNXB +ve6SbzJ08pGCL72CQnqtKrcgfU28elUSwhXqvfdqlS5sdJ/PHLTyxQGjhdByPq1z +qwubdQxtRbeOlKyWN7Wg0I8VRw7j6IPdj/3vQQF3zCepYoUz8jcI73HPdwbeyBkd +iEDPfUYd/x7H4c7/I9vG+o1VTqkC50cRRj70/b17KSa7qWFiNyi2LSr2EIZkyXCn +0q23KXB56jzaYyWf/Wi3MOxw+3WKt21gZ7IeyLnp2KhvAotnDU0mV3HaIPzBSlCN +sSi6 +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Commercial O=AffirmTrust +# Subject: CN=AffirmTrust Commercial O=AffirmTrust +# Label: "AffirmTrust Commercial" +# Serial: 8608355977964138876 +# MD5 Fingerprint: 82:92:ba:5b:ef:cd:8a:6f:a6:3d:55:f9:84:f6:d6:b7 +# SHA1 Fingerprint: f9:b5:b6:32:45:5f:9c:be:ec:57:5f:80:dc:e9:6e:2c:c7:b2:78:b7 +# SHA256 Fingerprint: 03:76:ab:1d:54:c5:f9:80:3c:e4:b2:e2:01:a0:ee:7e:ef:7b:57:b6:36:e8:a9:3c:9b:8d:48:60:c9:6f:5f:a7 +-----BEGIN CERTIFICATE----- +MIIDTDCCAjSgAwIBAgIId3cGJyapsXwwDQYJKoZIhvcNAQELBQAwRDELMAkGA1UE +BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz +dCBDb21tZXJjaWFsMB4XDTEwMDEyOTE0MDYwNloXDTMwMTIzMTE0MDYwNlowRDEL +MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp +cm1UcnVzdCBDb21tZXJjaWFsMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC +AQEA9htPZwcroRX1BiLLHwGy43NFBkRJLLtJJRTWzsO3qyxPxkEylFf6EqdbDuKP +Hx6GGaeqtS25Xw2Kwq+FNXkyLbscYjfysVtKPcrNcV/pQr6U6Mje+SJIZMblq8Yr +ba0F8PrVC8+a5fBQpIs7R6UjW3p6+DM/uO+Zl+MgwdYoic+U+7lF7eNAFxHUdPAL +MeIrJmqbTFeurCA+ukV6BfO9m2kVrn1OIGPENXY6BwLJN/3HR+7o8XYdcxXyl6S1 +yHp52UKqK39c/s4mT6NmgTWvRLpUHhwwMmWd5jyTXlBOeuM61G7MGvv50jeuJCqr +VwMiKA1JdX+3KNp1v47j3A55MQIDAQABo0IwQDAdBgNVHQ4EFgQUnZPGU4teyq8/ +nx4P5ZmVvCT2lI8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ +KoZIhvcNAQELBQADggEBAFis9AQOzcAN/wr91LoWXym9e2iZWEnStB03TX8nfUYG +XUPGhi4+c7ImfU+TqbbEKpqrIZcUsd6M06uJFdhrJNTxFq7YpFzUf1GO7RgBsZNj +vbz4YYCanrHOQnDiqX0GJX0nof5v7LMeJNrjS1UaADs1tDvZ110w/YETifLCBivt +Z8SOyUOyXGsViQK8YvxO8rUzqrJv0wqiUOP2O+guRMLbZjipM1ZI8W0bM40NjD9g +N53Tym1+NH4Nn3J2ixufcv1SNUFFApYvHLKac0khsUlHRUe072o0EclNmsxZt9YC +nlpOZbWUrhvfKbAW8b8Angc6F2S1BLUjIZkKlTuXfO8= +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Networking O=AffirmTrust +# Subject: CN=AffirmTrust Networking 
O=AffirmTrust +# Label: "AffirmTrust Networking" +# Serial: 8957382827206547757 +# MD5 Fingerprint: 42:65:ca:be:01:9a:9a:4c:a9:8c:41:49:cd:c0:d5:7f +# SHA1 Fingerprint: 29:36:21:02:8b:20:ed:02:f5:66:c5:32:d1:d6:ed:90:9f:45:00:2f +# SHA256 Fingerprint: 0a:81:ec:5a:92:97:77:f1:45:90:4a:f3:8d:5d:50:9f:66:b5:e2:c5:8f:cd:b5:31:05:8b:0e:17:f3:f0:b4:1b +-----BEGIN CERTIFICATE----- +MIIDTDCCAjSgAwIBAgIIfE8EORzUmS0wDQYJKoZIhvcNAQEFBQAwRDELMAkGA1UE +BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz +dCBOZXR3b3JraW5nMB4XDTEwMDEyOTE0MDgyNFoXDTMwMTIzMTE0MDgyNFowRDEL +MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp +cm1UcnVzdCBOZXR3b3JraW5nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC +AQEAtITMMxcua5Rsa2FSoOujz3mUTOWUgJnLVWREZY9nZOIG41w3SfYvm4SEHi3y +YJ0wTsyEheIszx6e/jarM3c1RNg1lho9Nuh6DtjVR6FqaYvZ/Ls6rnla1fTWcbua +kCNrmreIdIcMHl+5ni36q1Mr3Lt2PpNMCAiMHqIjHNRqrSK6mQEubWXLviRmVSRL +QESxG9fhwoXA3hA/Pe24/PHxI1Pcv2WXb9n5QHGNfb2V1M6+oF4nI979ptAmDgAp +6zxG8D1gvz9Q0twmQVGeFDdCBKNwV6gbh+0t+nvujArjqWaJGctB+d1ENmHP4ndG +yH329JKBNv3bNPFyfvMMFr20FQIDAQABo0IwQDAdBgNVHQ4EFgQUBx/S55zawm6i +QLSwelAQUHTEyL0wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ +KoZIhvcNAQEFBQADggEBAIlXshZ6qML91tmbmzTCnLQyFE2npN/svqe++EPbkTfO +tDIuUFUaNU52Q3Eg75N3ThVwLofDwR1t3Mu1J9QsVtFSUzpE0nPIxBsFZVpikpzu +QY0x2+c06lkh1QF612S4ZDnNye2v7UsDSKegmQGA3GWjNq5lWUhPgkvIZfFXHeVZ +Lgo/bNjR9eUJtGxUAArgFU2HdW23WJZa3W3SAKD0m0i+wzekujbgfIeFlxoVot4u +olu9rxj5kFDNcFn4J2dHy8egBzp90SxdbBk6ZrV9/ZFvgrG+CJPbFEfxojfHRZ48 +x3evZKiT3/Zpg4Jg8klCNO1aAFSFHBY2kgxc+qatv9s= +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Premium O=AffirmTrust +# Subject: CN=AffirmTrust Premium O=AffirmTrust +# Label: "AffirmTrust Premium" +# Serial: 7893706540734352110 +# MD5 Fingerprint: c4:5d:0e:48:b6:ac:28:30:4e:0a:bc:f9:38:16:87:57 +# SHA1 Fingerprint: d8:a6:33:2c:e0:03:6f:b1:85:f6:63:4f:7d:6a:06:65:26:32:28:27 +# SHA256 Fingerprint: 70:a7:3f:7f:37:6b:60:07:42:48:90:45:34:b1:14:82:d5:bf:0e:69:8e:cc:49:8d:f5:25:77:eb:f2:e9:3b:9a +-----BEGIN CERTIFICATE----- +MIIFRjCCAy6gAwIBAgIIbYwURrGmCu4wDQYJKoZIhvcNAQEMBQAwQTELMAkGA1UE +BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1UcnVz +dCBQcmVtaXVtMB4XDTEwMDEyOTE0MTAzNloXDTQwMTIzMTE0MTAzNlowQTELMAkG +A1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1U +cnVzdCBQcmVtaXVtMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxBLf +qV/+Qd3d9Z+K4/as4Tx4mrzY8H96oDMq3I0gW64tb+eT2TZwamjPjlGjhVtnBKAQ +JG9dKILBl1fYSCkTtuG+kU3fhQxTGJoeJKJPj/CihQvL9Cl/0qRY7iZNyaqoe5rZ ++jjeRFcV5fiMyNlI4g0WJx0eyIOFJbe6qlVBzAMiSy2RjYvmia9mx+n/K+k8rNrS +s8PhaJyJ+HoAVt70VZVs+7pk3WKL3wt3MutizCaam7uqYoNMtAZ6MMgpv+0GTZe5 +HMQxK9VfvFMSF5yZVylmd2EhMQcuJUmdGPLu8ytxjLW6OQdJd/zvLpKQBY0tL3d7 +70O/Nbua2Plzpyzy0FfuKE4mX4+QaAkvuPjcBukumj5Rp9EixAqnOEhss/n/fauG +V+O61oV4d7pD6kh/9ti+I20ev9E2bFhc8e6kGVQa9QPSdubhjL08s9NIS+LI+H+S +qHZGnEJlPqQewQcDWkYtuJfzt9WyVSHvutxMAJf7FJUnM7/oQ0dG0giZFmA7mn7S +5u046uwBHjxIVkkJx0w3AJ6IDsBz4W9m6XJHMD4Q5QsDyZpCAGzFlH5hxIrff4Ia +C1nEWTJ3s7xgaVY5/bQGeyzWZDbZvUjthB9+pSKPKrhC9IK31FOQeE4tGv2Bb0TX +OwF0lkLgAOIua+rF7nKsu7/+6qqo+Nz2snmKtmcCAwEAAaNCMEAwHQYDVR0OBBYE +FJ3AZ6YMItkm9UWrpmVSESfYRaxjMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/ +BAQDAgEGMA0GCSqGSIb3DQEBDAUAA4ICAQCzV00QYk465KzquByvMiPIs0laUZx2 +KI15qldGF9X1Uva3ROgIRL8YhNILgM3FEv0AVQVhh0HctSSePMTYyPtwni94loMg +Nt58D2kTiKV1NpgIpsbfrM7jWNa3Pt668+s0QNiigfV4Py/VpfzZotReBA4Xrf5B +8OWycvpEgjNC6C1Y91aMYj+6QrCcDFx+LmUmXFNPALJ4fqENmS2NuB2OosSw/WDQ +MKSOyARiqcTtNd56l+0OOF6SL5Nwpamcb6d9Ex1+xghIsV5n61EIJenmJWtSKZGc +0jlzCFfemQa0W50QBuHCAKi4HEoCChTQwUHK+4w1IX2COPKpVJEZNZOUbWo6xbLQ 
+u4mGk+ibyQ86p3q4ofB4Rvr8Ny/lioTz3/4E2aFooC8k4gmVBtWVyuEklut89pMF +u+1z6S3RdTnX5yTb2E5fQ4+e0BQ5v1VwSJlXMbSc7kqYA5YwH2AG7hsj/oFgIxpH +YoWlzBk0gG+zrBrjn/B7SK3VAdlntqlyk+otZrWyuOQ9PLLvTIzq6we/qzWaVYa8 +GKa1qF60g2xraUDTn9zxw2lrueFtCfTxqlB2Cnp9ehehVZZCmTEJ3WARjQUwfuaO +RtGdFNrHF+QFlozEJLUbzxQHskD4o55BhrwE0GuWyCqANP2/7waj3VjFhT0+j/6e +KeC2uAloGRwYQw== +-----END CERTIFICATE----- + +# Issuer: CN=AffirmTrust Premium ECC O=AffirmTrust +# Subject: CN=AffirmTrust Premium ECC O=AffirmTrust +# Label: "AffirmTrust Premium ECC" +# Serial: 8401224907861490260 +# MD5 Fingerprint: 64:b0:09:55:cf:b1:d5:99:e2:be:13:ab:a6:5d:ea:4d +# SHA1 Fingerprint: b8:23:6b:00:2f:1d:16:86:53:01:55:6c:11:a4:37:ca:eb:ff:c3:bb +# SHA256 Fingerprint: bd:71:fd:f6:da:97:e4:cf:62:d1:64:7a:dd:25:81:b0:7d:79:ad:f8:39:7e:b4:ec:ba:9c:5e:84:88:82:14:23 +-----BEGIN CERTIFICATE----- +MIIB/jCCAYWgAwIBAgIIdJclisc/elQwCgYIKoZIzj0EAwMwRTELMAkGA1UEBhMC +VVMxFDASBgNVBAoMC0FmZmlybVRydXN0MSAwHgYDVQQDDBdBZmZpcm1UcnVzdCBQ +cmVtaXVtIEVDQzAeFw0xMDAxMjkxNDIwMjRaFw00MDEyMzExNDIwMjRaMEUxCzAJ +BgNVBAYTAlVTMRQwEgYDVQQKDAtBZmZpcm1UcnVzdDEgMB4GA1UEAwwXQWZmaXJt +VHJ1c3QgUHJlbWl1bSBFQ0MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQNMF4bFZ0D +0KF5Nbc6PJJ6yhUczWLznCZcBz3lVPqj1swS6vQUX+iOGasvLkjmrBhDeKzQN8O9 +ss0s5kfiGuZjuD0uL3jET9v0D6RoTFVya5UdThhClXjMNzyR4ptlKymjQjBAMB0G +A1UdDgQWBBSaryl6wBE1NSZRMADDav5A1a7WPDAPBgNVHRMBAf8EBTADAQH/MA4G +A1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNnADBkAjAXCfOHiFBar8jAQr9HX/Vs +aobgxCd05DhT1wV/GzTjxi+zygk8N53X57hG8f2h4nECMEJZh0PUUd+60wkyWs6I +flc9nF9Ca/UHLbXwgpP5WW+uZPpY5Yse42O+tYHNbwKMeQ== +-----END CERTIFICATE----- + +# Issuer: CN=Certum Trusted Network CA O=Unizeto Technologies S.A. OU=Certum Certification Authority +# Subject: CN=Certum Trusted Network CA O=Unizeto Technologies S.A. OU=Certum Certification Authority +# Label: "Certum Trusted Network CA" +# Serial: 279744 +# MD5 Fingerprint: d5:e9:81:40:c5:18:69:fc:46:2c:89:75:62:0f:aa:78 +# SHA1 Fingerprint: 07:e0:32:e0:20:b7:2c:3f:19:2f:06:28:a2:59:3a:19:a7:0f:06:9e +# SHA256 Fingerprint: 5c:58:46:8d:55:f5:8e:49:7e:74:39:82:d2:b5:00:10:b6:d1:65:37:4a:cf:83:a7:d4:a3:2d:b7:68:c4:40:8e +-----BEGIN CERTIFICATE----- +MIIDuzCCAqOgAwIBAgIDBETAMA0GCSqGSIb3DQEBBQUAMH4xCzAJBgNVBAYTAlBM +MSIwIAYDVQQKExlVbml6ZXRvIFRlY2hub2xvZ2llcyBTLkEuMScwJQYDVQQLEx5D +ZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxIjAgBgNVBAMTGUNlcnR1bSBU +cnVzdGVkIE5ldHdvcmsgQ0EwHhcNMDgxMDIyMTIwNzM3WhcNMjkxMjMxMTIwNzM3 +WjB+MQswCQYDVQQGEwJQTDEiMCAGA1UEChMZVW5pemV0byBUZWNobm9sb2dpZXMg +Uy5BLjEnMCUGA1UECxMeQ2VydHVtIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MSIw +IAYDVQQDExlDZXJ0dW0gVHJ1c3RlZCBOZXR3b3JrIENBMIIBIjANBgkqhkiG9w0B +AQEFAAOCAQ8AMIIBCgKCAQEA4/t9o3K6wvDJFIf1awFO4W5AB7ptJ11/91sts1rH +UV+rpDKmYYe2bg+G0jACl/jXaVehGDldamR5xgFZrDwxSjh80gTSSyjoIF87B6LM +TXPb865Px1bVWqeWifrzq2jUI4ZZJ88JJ7ysbnKDHDBy3+Ci6dLhdHUZvSqeexVU +BBvXQzmtVSjF4hq79MDkrjhJM8x2hZ85RdKknvISjFH4fOQtf/WsX+sWn7Et0brM +kUJ3TCXJkDhv2/DM+44el1k+1WBO5gUo7Ul5E0u6SNsv+XLTOcr+H9g0cvW0QM8x +AcPs3hEtF10fuFDRXhmnad4HMyjKUJX5p1TLVIZQRan5SQIDAQABo0IwQDAPBgNV +HRMBAf8EBTADAQH/MB0GA1UdDgQWBBQIds3LB/8k9sXN7buQvOKEN0Z19zAOBgNV +HQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQEFBQADggEBAKaorSLOAT2mo/9i0Eidi15y +sHhE49wcrwn9I0j6vSrEuVUEtRCjjSfeC4Jj0O7eDDd5QVsisrCaQVymcODU0HfL +I9MA4GxWL+FpDQ3Zqr8hgVDZBqWo/5U30Kr+4rP1mS1FhIrlQgnXdAIv94nYmem8 +J9RHjboNRhx3zxSkHLmkMcScKHQDNP8zGSal6Q10tz6XxnboJ5ajZt3hrvJBW8qY +VoNzcOSGGtIxQbovvi0TWnZvTuhOgQ4/WwMioBK+ZlgRSssDxLQqKi2WF+A5VLxI +03YnnZotBqbJ7DnSq9ufmgsnAjUpsUCV5/nonFWIGUbWtzT1fs45mtk48VH3Tyw= +-----END CERTIFICATE----- + +# Issuer: CN=TWCA Root Certification Authority O=TAIWAN-CA 
OU=Root CA +# Subject: CN=TWCA Root Certification Authority O=TAIWAN-CA OU=Root CA +# Label: "TWCA Root Certification Authority" +# Serial: 1 +# MD5 Fingerprint: aa:08:8f:f6:f9:7b:b7:f2:b1:a7:1e:9b:ea:ea:bd:79 +# SHA1 Fingerprint: cf:9e:87:6d:d3:eb:fc:42:26:97:a3:b5:a3:7a:a0:76:a9:06:23:48 +# SHA256 Fingerprint: bf:d8:8f:e1:10:1c:41:ae:3e:80:1b:f8:be:56:35:0e:e9:ba:d1:a6:b9:bd:51:5e:dc:5c:6d:5b:87:11:ac:44 +-----BEGIN CERTIFICATE----- +MIIDezCCAmOgAwIBAgIBATANBgkqhkiG9w0BAQUFADBfMQswCQYDVQQGEwJUVzES +MBAGA1UECgwJVEFJV0FOLUNBMRAwDgYDVQQLDAdSb290IENBMSowKAYDVQQDDCFU +V0NBIFJvb3QgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwODI4MDcyNDMz +WhcNMzAxMjMxMTU1OTU5WjBfMQswCQYDVQQGEwJUVzESMBAGA1UECgwJVEFJV0FO +LUNBMRAwDgYDVQQLDAdSb290IENBMSowKAYDVQQDDCFUV0NBIFJvb3QgQ2VydGlm +aWNhdGlvbiBBdXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB +AQCwfnK4pAOU5qfeCTiRShFAh6d8WWQUe7UREN3+v9XAu1bihSX0NXIP+FPQQeFE +AcK0HMMxQhZHhTMidrIKbw/lJVBPhYa+v5guEGcevhEFhgWQxFnQfHgQsIBct+HH +K3XLfJ+utdGdIzdjp9xCoi2SBBtQwXu4PhvJVgSLL1KbralW6cH/ralYhzC2gfeX +RfwZVzsrb+RH9JlF/h3x+JejiB03HFyP4HYlmlD4oFT/RJB2I9IyxsOrBr/8+7/z +rX2SYgJbKdM1o5OaQ2RgXbL6Mv87BK9NQGr5x+PvI/1ry+UPizgN7gr8/g+YnzAx +3WxSZfmLgb4i4RxYA7qRG4kHAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV +HRMBAf8EBTADAQH/MB0GA1UdDgQWBBRqOFsmjd6LWvJPelSDGRjjCDWmujANBgkq +hkiG9w0BAQUFAAOCAQEAPNV3PdrfibqHDAhUaiBQkr6wQT25JmSDCi/oQMCXKCeC +MErJk/9q56YAf4lCmtYR5VPOL8zy2gXE/uJQxDqGfczafhAJO5I1KlOy/usrBdls +XebQ79NqZp4VKIV66IIArB6nCWlWQtNoURi+VJq/REG6Sb4gumlc7rh3zc5sH62D +lhh9DrUUOYTxKOkto557HnpyWoOzeW/vtPzQCqVYT0bf+215WfKEIlKuD8z7fDvn +aspHYcN6+NOSBB+4IIThNlQWx0DeO4pz3N/GCUzf7Nr/1FNCocnyYh0igzyXxfkZ +YiesZSLX0zzG5Y6yU8xJzrww/nsOM5D77dIUkR8Hrw== +-----END CERTIFICATE----- + +# Issuer: O=SECOM Trust Systems CO.,LTD. OU=Security Communication RootCA2 +# Subject: O=SECOM Trust Systems CO.,LTD. 
OU=Security Communication RootCA2 +# Label: "Security Communication RootCA2" +# Serial: 0 +# MD5 Fingerprint: 6c:39:7d:a4:0e:55:59:b2:3f:d6:41:b1:12:50:de:43 +# SHA1 Fingerprint: 5f:3b:8c:f2:f8:10:b3:7d:78:b4:ce:ec:19:19:c3:73:34:b9:c7:74 +# SHA256 Fingerprint: 51:3b:2c:ec:b8:10:d4:cd:e5:dd:85:39:1a:df:c6:c2:dd:60:d8:7b:b7:36:d2:b5:21:48:4a:a4:7a:0e:be:f6 +-----BEGIN CERTIFICATE----- +MIIDdzCCAl+gAwIBAgIBADANBgkqhkiG9w0BAQsFADBdMQswCQYDVQQGEwJKUDEl +MCMGA1UEChMcU0VDT00gVHJ1c3QgU3lzdGVtcyBDTy4sTFRELjEnMCUGA1UECxMe +U2VjdXJpdHkgQ29tbXVuaWNhdGlvbiBSb290Q0EyMB4XDTA5MDUyOTA1MDAzOVoX +DTI5MDUyOTA1MDAzOVowXTELMAkGA1UEBhMCSlAxJTAjBgNVBAoTHFNFQ09NIFRy +dXN0IFN5c3RlbXMgQ08uLExURC4xJzAlBgNVBAsTHlNlY3VyaXR5IENvbW11bmlj +YXRpb24gUm9vdENBMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANAV +OVKxUrO6xVmCxF1SrjpDZYBLx/KWvNs2l9amZIyoXvDjChz335c9S672XewhtUGr +zbl+dp+++T42NKA7wfYxEUV0kz1XgMX5iZnK5atq1LXaQZAQwdbWQonCv/Q4EpVM +VAX3NuRFg3sUZdbcDE3R3n4MqzvEFb46VqZab3ZpUql6ucjrappdUtAtCms1FgkQ +hNBqyjoGADdH5H5XTz+L62e4iKrFvlNVspHEfbmwhRkGeC7bYRr6hfVKkaHnFtWO +ojnflLhwHyg/i/xAXmODPIMqGplrz95Zajv8bxbXH/1KEOtOghY6rCcMU/Gt1SSw +awNQwS08Ft1ENCcadfsCAwEAAaNCMEAwHQYDVR0OBBYEFAqFqXdlBZh8QIH4D5cs +OPEK7DzPMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 +DQEBCwUAA4IBAQBMOqNErLlFsceTfsgLCkLfZOoc7llsCLqJX2rKSpWeeo8HxdpF +coJxDjrSzG+ntKEju/Ykn8sX/oymzsLS28yN/HH8AynBbF0zX2S2ZTuJbxh2ePXc +okgfGT+Ok+vx+hfuzU7jBBJV1uXk3fs+BXziHV7Gp7yXT2g69ekuCkO2r1dcYmh8 +t/2jioSgrGK+KwmHNPBqAbubKVY8/gA3zyNs8U6qtnRGEmyR7jTV7JqR50S+kDFy +1UkC9gLl9B/rfNmWVan/7Ir5mUf/NVoCqgTLiluHcSmRvaS0eg29mvVXIwAHIRc/ +SjnRBUkLp7Y3gaVdjKozXoEofKd9J+sAro03 +-----END CERTIFICATE----- + +# Issuer: CN=EC-ACC O=Agencia Catalana de Certificacio (NIF Q-0801176-I) OU=Serveis Publics de Certificacio/Vegeu https://www.catcert.net/verarrel (c)03/Jerarquia Entitats de Certificacio Catalanes +# Subject: CN=EC-ACC O=Agencia Catalana de Certificacio (NIF Q-0801176-I) OU=Serveis Publics de Certificacio/Vegeu https://www.catcert.net/verarrel (c)03/Jerarquia Entitats de Certificacio Catalanes +# Label: "EC-ACC" +# Serial: -23701579247955709139626555126524820479 +# MD5 Fingerprint: eb:f5:9d:29:0d:61:f9:42:1f:7c:c2:ba:6d:e3:15:09 +# SHA1 Fingerprint: 28:90:3a:63:5b:52:80:fa:e6:77:4c:0b:6d:a7:d6:ba:a6:4a:f2:e8 +# SHA256 Fingerprint: 88:49:7f:01:60:2f:31:54:24:6a:e2:8c:4d:5a:ef:10:f1:d8:7e:bb:76:62:6f:4a:e0:b7:f9:5b:a7:96:87:99 +-----BEGIN CERTIFICATE----- +MIIFVjCCBD6gAwIBAgIQ7is969Qh3hSoYqwE893EATANBgkqhkiG9w0BAQUFADCB +8zELMAkGA1UEBhMCRVMxOzA5BgNVBAoTMkFnZW5jaWEgQ2F0YWxhbmEgZGUgQ2Vy +dGlmaWNhY2lvIChOSUYgUS0wODAxMTc2LUkpMSgwJgYDVQQLEx9TZXJ2ZWlzIFB1 +YmxpY3MgZGUgQ2VydGlmaWNhY2lvMTUwMwYDVQQLEyxWZWdldSBodHRwczovL3d3 +dy5jYXRjZXJ0Lm5ldC92ZXJhcnJlbCAoYykwMzE1MDMGA1UECxMsSmVyYXJxdWlh +IEVudGl0YXRzIGRlIENlcnRpZmljYWNpbyBDYXRhbGFuZXMxDzANBgNVBAMTBkVD +LUFDQzAeFw0wMzAxMDcyMzAwMDBaFw0zMTAxMDcyMjU5NTlaMIHzMQswCQYDVQQG +EwJFUzE7MDkGA1UEChMyQWdlbmNpYSBDYXRhbGFuYSBkZSBDZXJ0aWZpY2FjaW8g +KE5JRiBRLTA4MDExNzYtSSkxKDAmBgNVBAsTH1NlcnZlaXMgUHVibGljcyBkZSBD +ZXJ0aWZpY2FjaW8xNTAzBgNVBAsTLFZlZ2V1IGh0dHBzOi8vd3d3LmNhdGNlcnQu +bmV0L3ZlcmFycmVsIChjKTAzMTUwMwYDVQQLEyxKZXJhcnF1aWEgRW50aXRhdHMg +ZGUgQ2VydGlmaWNhY2lvIENhdGFsYW5lczEPMA0GA1UEAxMGRUMtQUNDMIIBIjAN +BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsyLHT+KXQpWIR4NA9h0X84NzJB5R +85iKw5K4/0CQBXCHYMkAqbWUZRkiFRfCQ2xmRJoNBD45b6VLeqpjt4pEndljkYRm +4CgPukLjbo73FCeTae6RDqNfDrHrZqJyTxIThmV6PttPB/SnCWDaOkKZx7J/sxaV +HMf5NLWUhdWZXqBIoH7nF2W4onW4HvPlQn2v7fOKSGRdghST2MDk/7NQcvJ29rNd +QlB50JQ+awwAvthrDk4q7D7SzIKiGGUzE3eeml0aE9jD2z3Il3rucO2n5nzbcc8t 
+lGLfbdb1OL4/pYUKGbio2Al1QnDE6u/LDsg0qBIimAy4E5S2S+zw0JDnJwIDAQAB +o4HjMIHgMB0GA1UdEQQWMBSBEmVjX2FjY0BjYXRjZXJ0Lm5ldDAPBgNVHRMBAf8E +BTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUoMOLRKo3pUW/l4Ba0fF4 +opvpXY0wfwYDVR0gBHgwdjB0BgsrBgEEAfV4AQMBCjBlMCwGCCsGAQUFBwIBFiBo +dHRwczovL3d3dy5jYXRjZXJ0Lm5ldC92ZXJhcnJlbDA1BggrBgEFBQcCAjApGidW +ZWdldSBodHRwczovL3d3dy5jYXRjZXJ0Lm5ldC92ZXJhcnJlbCAwDQYJKoZIhvcN +AQEFBQADggEBAKBIW4IB9k1IuDlVNZyAelOZ1Vr/sXE7zDkJlF7W2u++AVtd0x7Y +/X1PzaBB4DSTv8vihpw3kpBWHNzrKQXlxJ7HNd+KDM3FIUPpqojlNcAZQmNaAl6k +SBg6hW/cnbw/nZzBh7h6YQjpdwt/cKt63dmXLGQehb+8dJahw3oS7AwaboMMPOhy +Rp/7SNVel+axofjk70YllJyJ22k4vuxcDlbHZVHlUIiIv0LVKz3l+bqeLrPK9HOS +Agu+TGbrIP65y7WZf+a2E/rKS03Z7lNGBjvGTq2TWoF+bCpLagVFjPIhpDGQh2xl +nJ2lYJU6Un/10asIbvPuW/mIPX64b24D5EI= +-----END CERTIFICATE----- + +# Issuer: CN=Hellenic Academic and Research Institutions RootCA 2011 O=Hellenic Academic and Research Institutions Cert. Authority +# Subject: CN=Hellenic Academic and Research Institutions RootCA 2011 O=Hellenic Academic and Research Institutions Cert. Authority +# Label: "Hellenic Academic and Research Institutions RootCA 2011" +# Serial: 0 +# MD5 Fingerprint: 73:9f:4c:4b:73:5b:79:e9:fa:ba:1c:ef:6e:cb:d5:c9 +# SHA1 Fingerprint: fe:45:65:9b:79:03:5b:98:a1:61:b5:51:2e:ac:da:58:09:48:22:4d +# SHA256 Fingerprint: bc:10:4f:15:a4:8b:e7:09:dc:a5:42:a7:e1:d4:b9:df:6f:05:45:27:e8:02:ea:a9:2d:59:54:44:25:8a:fe:71 +-----BEGIN CERTIFICATE----- +MIIEMTCCAxmgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBlTELMAkGA1UEBhMCR1Ix +RDBCBgNVBAoTO0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1 +dGlvbnMgQ2VydC4gQXV0aG9yaXR5MUAwPgYDVQQDEzdIZWxsZW5pYyBBY2FkZW1p +YyBhbmQgUmVzZWFyY2ggSW5zdGl0dXRpb25zIFJvb3RDQSAyMDExMB4XDTExMTIw +NjEzNDk1MloXDTMxMTIwMTEzNDk1MlowgZUxCzAJBgNVBAYTAkdSMUQwQgYDVQQK +EztIZWxsZW5pYyBBY2FkZW1pYyBhbmQgUmVzZWFyY2ggSW5zdGl0dXRpb25zIENl +cnQuIEF1dGhvcml0eTFAMD4GA1UEAxM3SGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJl +c2VhcmNoIEluc3RpdHV0aW9ucyBSb290Q0EgMjAxMTCCASIwDQYJKoZIhvcNAQEB +BQADggEPADCCAQoCggEBAKlTAOMupvaO+mDYLZU++CwqVE7NuYRhlFhPjz2L5EPz +dYmNUeTDN9KKiE15HrcS3UN4SoqS5tdI1Q+kOilENbgH9mgdVc04UfCMJDGFr4PJ +fel3r+0ae50X+bOdOFAPplp5kYCvN66m0zH7tSYJnTxa71HFK9+WXesyHgLacEns +bgzImjeN9/E2YEsmLIKe0HjzDQ9jpFEw4fkrJxIH2Oq9GGKYsFk3fb7u8yBRQlqD +75O6aRXxYp2fmTmCobd0LovUxQt7L/DICto9eQqakxylKHJzkUOap9FNhYS5qXSP +FEDH3N6sQWRstBmbAmNtJGSPRLIl6s5ddAxjMlyNh+UCAwEAAaOBiTCBhjAPBgNV +HRMBAf8EBTADAQH/MAsGA1UdDwQEAwIBBjAdBgNVHQ4EFgQUppFC/RNhSiOeCKQp +5dgTBCPuQSUwRwYDVR0eBEAwPqA8MAWCAy5ncjAFggMuZXUwBoIELmVkdTAGggQu +b3JnMAWBAy5ncjAFgQMuZXUwBoEELmVkdTAGgQQub3JnMA0GCSqGSIb3DQEBBQUA +A4IBAQAf73lB4XtuP7KMhjdCSk4cNx6NZrokgclPEg8hwAOXhiVtXdMiKahsog2p +6z0GW5k6x8zDmjR/qw7IThzh+uTczQ2+vyT+bOdrwg3IBp5OjWEopmr95fZi6hg8 +TqBTnbI6nOulnJEWtk2C4AwFSKls9cz4y51JtPACpf1wA+2KIaWuE4ZJwzNzvoc7 +dIsXRSZMFpGD/md9zU1jZ/rzAxKWeAaNsWftjj++n08C9bMJL/NMh98qy5V8Acys +Nnq/onN694/BtZqhFLKPM58N7yLcZnuEvUUXBj08yrl3NI/K6s8/MT7jiOOASSXI +l7WdmplNsDz4SgCbZN2fOUvRJ9e4 +-----END CERTIFICATE----- + +# Issuer: CN=Actalis Authentication Root CA O=Actalis S.p.A./03358520967 +# Subject: CN=Actalis Authentication Root CA O=Actalis S.p.A./03358520967 +# Label: "Actalis Authentication Root CA" +# Serial: 6271844772424770508 +# MD5 Fingerprint: 69:c1:0d:4f:07:a3:1b:c3:fe:56:3d:04:bc:11:f6:a6 +# SHA1 Fingerprint: f3:73:b3:87:06:5a:28:84:8a:f2:f3:4a:ce:19:2b:dd:c7:8e:9c:ac +# SHA256 Fingerprint: 55:92:60:84:ec:96:3a:64:b9:6e:2a:be:01:ce:0b:a8:6a:64:fb:fe:bc:c7:aa:b5:af:c1:55:b3:7f:d7:60:66 +-----BEGIN CERTIFICATE----- +MIIFuzCCA6OgAwIBAgIIVwoRl0LE48wwDQYJKoZIhvcNAQELBQAwazELMAkGA1UE 
+BhMCSVQxDjAMBgNVBAcMBU1pbGFuMSMwIQYDVQQKDBpBY3RhbGlzIFMucC5BLi8w +MzM1ODUyMDk2NzEnMCUGA1UEAwweQWN0YWxpcyBBdXRoZW50aWNhdGlvbiBSb290 +IENBMB4XDTExMDkyMjExMjIwMloXDTMwMDkyMjExMjIwMlowazELMAkGA1UEBhMC +SVQxDjAMBgNVBAcMBU1pbGFuMSMwIQYDVQQKDBpBY3RhbGlzIFMucC5BLi8wMzM1 +ODUyMDk2NzEnMCUGA1UEAwweQWN0YWxpcyBBdXRoZW50aWNhdGlvbiBSb290IENB +MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAp8bEpSmkLO/lGMWwUKNv +UTufClrJwkg4CsIcoBh/kbWHuUA/3R1oHwiD1S0eiKD4j1aPbZkCkpAW1V8IbInX +4ay8IMKx4INRimlNAJZaby/ARH6jDuSRzVju3PvHHkVH3Se5CAGfpiEd9UEtL0z9 +KK3giq0itFZljoZUj5NDKd45RnijMCO6zfB9E1fAXdKDa0hMxKufgFpbOr3JpyI/ +gCczWw63igxdBzcIy2zSekciRDXFzMwujt0q7bd9Zg1fYVEiVRvjRuPjPdA1Yprb +rxTIW6HMiRvhMCb8oJsfgadHHwTrozmSBp+Z07/T6k9QnBn+locePGX2oxgkg4YQ +51Q+qDp2JE+BIcXjDwL4k5RHILv+1A7TaLndxHqEguNTVHnd25zS8gebLra8Pu2F +be8lEfKXGkJh90qX6IuxEAf6ZYGyojnP9zz/GPvG8VqLWeICrHuS0E4UT1lF9gxe +KF+w6D9Fz8+vm2/7hNN3WpVvrJSEnu68wEqPSpP4RCHiMUVhUE4Q2OM1fEwZtN4F +v6MGn8i1zeQf1xcGDXqVdFUNaBr8EBtiZJ1t4JWgw5QHVw0U5r0F+7if5t+L4sbn +fpb2U8WANFAoWPASUHEXMLrmeGO89LKtmyuy/uE5jF66CyCU3nuDuP/jVo23Eek7 +jPKxwV2dpAtMK9myGPW1n0sCAwEAAaNjMGEwHQYDVR0OBBYEFFLYiDrIn3hm7Ynz +ezhwlMkCAjbQMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUUtiIOsifeGbt +ifN7OHCUyQICNtAwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3DQEBCwUAA4ICAQAL +e3KHwGCmSUyIWOYdiPcUZEim2FgKDk8TNd81HdTtBjHIgT5q1d07GjLukD0R0i70 +jsNjLiNmsGe+b7bAEzlgqqI0JZN1Ut6nna0Oh4lScWoWPBkdg/iaKWW+9D+a2fDz +WochcYBNy+A4mz+7+uAwTc+G02UQGRjRlwKxK3JCaKygvU5a2hi/a5iB0P2avl4V +SM0RFbnAKVy06Ij3Pjaut2L9HmLecHgQHEhb2rykOLpn7VU+Xlff1ANATIGk0k9j +pwlCCRT8AKnCgHNPLsBA2RF7SOp6AsDT6ygBJlh0wcBzIm2Tlf05fbsq4/aC4yyX +X04fkZT6/iyj2HYauE2yOE+b+h1IYHkm4vP9qdCa6HCPSXrW5b0KDtst842/6+Ok +fcvHlXHo2qN8xcL4dJIEG4aspCJTQLas/kx2z/uUMsA1n3Y/buWQbqCmJqK4LL7R +K4X9p2jIugErsWx0Hbhzlefut8cl8ABMALJ+tguLHPPAUJ4lueAI3jZm/zel0btU +ZCzJJ7VLkn5l/9Mt4blOvH+kQSGQQXemOR/qnuOf0GZvBeyqdn6/axag67XH/JJU +LysRJyU3eExRarDzzFhdFPFqSBX/wge2sY0PjlxQRrM9vwGYT7JZVEc+NHt4bVaT +LnPqZih4zR0Uv6CPLy64Lo7yFIrM6bV8+2ydDKXhlg== +-----END CERTIFICATE----- + +# Issuer: CN=Buypass Class 2 Root CA O=Buypass AS-983163327 +# Subject: CN=Buypass Class 2 Root CA O=Buypass AS-983163327 +# Label: "Buypass Class 2 Root CA" +# Serial: 2 +# MD5 Fingerprint: 46:a7:d2:fe:45:fb:64:5a:a8:59:90:9b:78:44:9b:29 +# SHA1 Fingerprint: 49:0a:75:74:de:87:0a:47:fe:58:ee:f6:c7:6b:eb:c6:0b:12:40:99 +# SHA256 Fingerprint: 9a:11:40:25:19:7c:5b:b9:5d:94:e6:3d:55:cd:43:79:08:47:b6:46:b2:3c:df:11:ad:a4:a0:0e:ff:15:fb:48 +-----BEGIN CERTIFICATE----- +MIIFWTCCA0GgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJOTzEd +MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxIDAeBgNVBAMMF0J1eXBhc3Mg +Q2xhc3MgMiBSb290IENBMB4XDTEwMTAyNjA4MzgwM1oXDTQwMTAyNjA4MzgwM1ow +TjELMAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MSAw +HgYDVQQDDBdCdXlwYXNzIENsYXNzIDIgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEB +BQADggIPADCCAgoCggIBANfHXvfBB9R3+0Mh9PT1aeTuMgHbo4Yf5FkNuud1g1Lr +6hxhFUi7HQfKjK6w3Jad6sNgkoaCKHOcVgb/S2TwDCo3SbXlzwx87vFKu3MwZfPV +L4O2fuPn9Z6rYPnT8Z2SdIrkHJasW4DptfQxh6NR/Md+oW+OU3fUl8FVM5I+GC91 +1K2GScuVr1QGbNgGE41b/+EmGVnAJLqBcXmQRFBoJJRfuLMR8SlBYaNByyM21cHx +MlAQTn/0hpPshNOOvEu/XAFOBz3cFIqUCqTqc/sLUegTBxj6DvEr0VQVfTzh97QZ +QmdiXnfgolXsttlpF9U6r0TtSsWe5HonfOV116rLJeffawrbD02TTqigzXsu8lkB +arcNuAeBfos4GzjmCleZPe4h6KP1DBbdi+w0jpwqHAAVF41og9JwnxgIzRFo1clr +Us3ERo/ctfPYV3Me6ZQ5BL/T3jjetFPsaRyifsSP5BtwrfKi+fv3FmRmaZ9JUaLi +FRhnBkp/1Wy1TbMz4GHrXb7pmA8y1x1LPC5aAVKRCfLf6o3YBkBjqhHk/sM3nhRS +P/TizPJhk9H9Z2vXUq6/aKtAQ6BXNVN48FP4YUIHZMbXb5tMOA1jrGKvNouicwoN +9SG9dKpN6nIDSdvHXx1iY8f93ZHsM+71bbRuMGjeyNYmsHVee7QHIJihdjK4TWxP 
+AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMmAd+BikoL1Rpzz +uvdMw964o605MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAU18h +9bqwOlI5LJKwbADJ784g7wbylp7ppHR/ehb8t/W2+xUbP6umwHJdELFx7rxP462s +A20ucS6vxOOto70MEae0/0qyexAQH6dXQbLArvQsWdZHEIjzIVEpMMpghq9Gqx3t +OluwlN5E40EIosHsHdb9T7bWR9AUC8rmyrV7d35BH16Dx7aMOZawP5aBQW9gkOLo ++fsicdl9sz1Gv7SEr5AcD48Saq/v7h56rgJKihcrdv6sVIkkLE8/trKnToyokZf7 +KcZ7XC25y2a2t6hbElGFtQl+Ynhw/qlqYLYdDnkM/crqJIByw5c/8nerQyIKx+u2 +DISCLIBrQYoIwOula9+ZEsuK1V6ADJHgJgg2SMX6OBE1/yWDLfJ6v9r9jv6ly0Us +H8SIU653DtmadsWOLB2jutXsMq7Aqqz30XpN69QH4kj3Io6wpJ9qzo6ysmD0oyLQ +I+uUWnpp3Q+/QFesa1lQ2aOZ4W7+jQF5JyMV3pKdewlNWudLSDBaGOYKbeaP4NK7 +5t98biGCwWg5TbSYWGZizEqQXsP6JwSxeRV0mcy+rSDeJmAc61ZRpqPq5KM/p/9h +3PFaTWwyI0PurKju7koSCTxdccK+efrCh2gdC/1cacwG0Jp9VJkqyTkaGa9LKkPz +Y11aWOIv4x3kqdbQCtCev9eBCfHJxyYNrJgWVqA= +-----END CERTIFICATE----- + +# Issuer: CN=Buypass Class 3 Root CA O=Buypass AS-983163327 +# Subject: CN=Buypass Class 3 Root CA O=Buypass AS-983163327 +# Label: "Buypass Class 3 Root CA" +# Serial: 2 +# MD5 Fingerprint: 3d:3b:18:9e:2c:64:5a:e8:d5:88:ce:0e:f9:37:c2:ec +# SHA1 Fingerprint: da:fa:f7:fa:66:84:ec:06:8f:14:50:bd:c7:c2:81:a5:bc:a9:64:57 +# SHA256 Fingerprint: ed:f7:eb:bc:a2:7a:2a:38:4d:38:7b:7d:40:10:c6:66:e2:ed:b4:84:3e:4c:29:b4:ae:1d:5b:93:32:e6:b2:4d +-----BEGIN CERTIFICATE----- +MIIFWTCCA0GgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJOTzEd +MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxIDAeBgNVBAMMF0J1eXBhc3Mg +Q2xhc3MgMyBSb290IENBMB4XDTEwMTAyNjA4Mjg1OFoXDTQwMTAyNjA4Mjg1OFow +TjELMAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MSAw +HgYDVQQDDBdCdXlwYXNzIENsYXNzIDMgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEB +BQADggIPADCCAgoCggIBAKXaCpUWUOOV8l6ddjEGMnqb8RB2uACatVI2zSRHsJ8Y +ZLya9vrVediQYkwiL944PdbgqOkcLNt4EemOaFEVcsfzM4fkoF0LXOBXByow9c3E +N3coTRiR5r/VUv1xLXA+58bEiuPwKAv0dpihi4dVsjoT/Lc+JzeOIuOoTyrvYLs9 +tznDDgFHmV0ST9tD+leh7fmdvhFHJlsTmKtdFoqwNxxXnUX/iJY2v7vKB3tvh2PX +0DJq1l1sDPGzbjniazEuOQAnFN44wOwZZoYS6J1yFhNkUsepNxz9gjDthBgd9K5c +/3ATAOux9TN6S9ZV+AWNS2mw9bMoNlwUxFFzTWsL8TQH2xc519woe2v1n/MuwU8X +KhDzzMro6/1rqy6any2CbgTUUgGTLT2G/H783+9CHaZr77kgxve9oKeV/afmiSTY +zIw0bOIjL9kSGiG5VZFvC5F5GQytQIgLcOJ60g7YaEi7ghM5EFjp2CoHxhLbWNvS +O1UQRwUVZ2J+GGOmRj8JDlQyXr8NYnon74Do29lLBlo3WiXQCBJ31G8JUJc9yB3D +34xFMFbG02SrZvPAXpacw8Tvw3xrizp5f7NJzz3iiZ+gMEuFuZyUJHmPfWupRWgP +K9Dx2hzLabjKSWJtyNBjYt1gD1iqj6G8BaVmos8bdrKEZLFMOVLAMLrwjEsCsLa3 +AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEe4zf/lb+74suwv +Tg75JbCOPGvDMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAACAj +QTUEkMJAYmDv4jVM1z+s4jSQuKFvdvoWFqRINyzpkMLyPPgKn9iB5btb2iUspKdV +cSQy9sgL8rxq+JOssgfCX5/bzMiKqr5qb+FJEMwx14C7u8jYog5kV+qi9cKpMRXS +IGrs/CIBKM+GuIAeqcwRpTzyFrNHnfzSgCHEy9BHcEGhyoMZCCxt8l13nIoUE9Q2 +HJLw5QY33KbmkJs4j1xrG0aGQ0JfPgEHU1RdZX33inOhmlRaHylDFCfChQ+1iHsa +O5S3HWCntZznKWlXWpuTekMwGwPXYshApqr8ZORK15FTAaggiG6cX0S5y2CBNOxv +033aSF/rtJC8LakcC6wc1aJoIIAE1vyxjy+7SjENSoYc6+I2KSb12tjE8nVhz36u +dmNKekBlk4f4HoCMhuWG1o8O/FMsYOgWYRqiPkN7zTlgVGr18okmAWiDSKIz6MkE +kbIRNBE+6tBDGR8Dk5AM/1E9V/RBbuHLoL7ryWPNbczk+DaqaJ3tvV2XcEQNtg41 +3OEMXbugUZTLfhbrES+jkkXITHHZvMmZUldGL1DPvTVp9D0VzgalLA8+9oG6lLvD +u79leNKGef9JOxqDDPDeeOzI8k1MGt6CKfjBWtrt7uYnXuhF0J0cUahoq0Tj0Itq +4/g7u9xN12TyUb7mqqta6THuBrxzvxNiCp/HuZc= +-----END CERTIFICATE----- + +# Issuer: CN=T-TeleSec GlobalRoot Class 3 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Subject: CN=T-TeleSec GlobalRoot Class 3 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Label: "T-TeleSec GlobalRoot Class 3" +# Serial: 1 +# MD5 Fingerprint: 
ca:fb:40:a8:4e:39:92:8a:1d:fe:8e:2f:c4:27:ea:ef +# SHA1 Fingerprint: 55:a6:72:3e:cb:f2:ec:cd:c3:23:74:70:19:9d:2a:be:11:e3:81:d1 +# SHA256 Fingerprint: fd:73:da:d3:1c:64:4f:f1:b4:3b:ef:0c:cd:da:96:71:0b:9c:d9:87:5e:ca:7e:31:70:7a:f3:e9:6d:52:2b:bd +-----BEGIN CERTIFICATE----- +MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx +KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd +BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl +YyBHbG9iYWxSb290IENsYXNzIDMwHhcNMDgxMDAxMTAyOTU2WhcNMzMxMDAxMjM1 +OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy +aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50 +ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDMwggEiMA0G +CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9dZPwYiJvJK7genasfb3ZJNW4t/zN +8ELg63iIVl6bmlQdTQyK9tPPcPRStdiTBONGhnFBSivwKixVA9ZIw+A5OO3yXDw/ +RLyTPWGrTs0NvvAgJ1gORH8EGoel15YUNpDQSXuhdfsaa3Ox+M6pCSzyU9XDFES4 +hqX2iys52qMzVNn6chr3IhUciJFrf2blw2qAsCTz34ZFiP0Zf3WHHx+xGwpzJFu5 +ZeAsVMhg02YXP+HMVDNzkQI6pn97djmiH5a2OK61yJN0HZ65tOVgnS9W0eDrXltM +EnAMbEQgqxHY9Bn20pxSN+f6tsIxO0rUFJmtxxr1XV/6B7h8DR/Wgx6zAgMBAAGj +QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS1 +A/d2O2GCahKqGFPrAyGUv/7OyjANBgkqhkiG9w0BAQsFAAOCAQEAVj3vlNW92nOy +WL6ukK2YJ5f+AbGwUgC4TeQbIXQbfsDuXmkqJa9c1h3a0nnJ85cp4IaH3gRZD/FZ +1GSFS5mvJQQeyUapl96Cshtwn5z2r3Ex3XsFpSzTucpH9sry9uetuUg/vBa3wW30 +6gmv7PO15wWeph6KU1HWk4HMdJP2udqmJQV0eVp+QD6CSyYRMG7hP0HHRwA11fXT +91Q+gT3aSWqas+8QPebrb9HIIkfLzM8BMZLZGOMivgkeGj5asuRrDFR6fUNOuIml +e9eiPZaGzPImNC1qkp2aGtAw4l1OBLBfiyB+d8E9lYLRRpo7PHi4b6HQDWSieB4p +TpPDpFQUWw== +-----END CERTIFICATE----- + +# Issuer: CN=D-TRUST Root Class 3 CA 2 2009 O=D-Trust GmbH +# Subject: CN=D-TRUST Root Class 3 CA 2 2009 O=D-Trust GmbH +# Label: "D-TRUST Root Class 3 CA 2 2009" +# Serial: 623603 +# MD5 Fingerprint: cd:e0:25:69:8d:47:ac:9c:89:35:90:f7:fd:51:3d:2f +# SHA1 Fingerprint: 58:e8:ab:b0:36:15:33:fb:80:f7:9b:1b:6d:29:d3:ff:8d:5f:00:f0 +# SHA256 Fingerprint: 49:e7:a4:42:ac:f0:ea:62:87:05:00:54:b5:25:64:b6:50:e4:f4:9e:42:e3:48:d6:aa:38:e0:39:e9:57:b1:c1 +-----BEGIN CERTIFICATE----- +MIIEMzCCAxugAwIBAgIDCYPzMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNVBAYTAkRF +MRUwEwYDVQQKDAxELVRydXN0IEdtYkgxJzAlBgNVBAMMHkQtVFJVU1QgUm9vdCBD +bGFzcyAzIENBIDIgMjAwOTAeFw0wOTExMDUwODM1NThaFw0yOTExMDUwODM1NTha +ME0xCzAJBgNVBAYTAkRFMRUwEwYDVQQKDAxELVRydXN0IEdtYkgxJzAlBgNVBAMM +HkQtVFJVU1QgUm9vdCBDbGFzcyAzIENBIDIgMjAwOTCCASIwDQYJKoZIhvcNAQEB +BQADggEPADCCAQoCggEBANOySs96R+91myP6Oi/WUEWJNTrGa9v+2wBoqOADER03 +UAifTUpolDWzU9GUY6cgVq/eUXjsKj3zSEhQPgrfRlWLJ23DEE0NkVJD2IfgXU42 +tSHKXzlABF9bfsyjxiupQB7ZNoTWSPOSHjRGICTBpFGOShrvUD9pXRl/RcPHAY9R +ySPocq60vFYJfxLLHLGvKZAKyVXMD9O0Gu1HNVpK7ZxzBCHQqr0ME7UAyiZsxGsM +lFqVlNpQmvH/pStmMaTJOKDfHR+4CS7zp+hnUquVH+BGPtikw8paxTGA6Eian5Rp +/hnd2HN8gcqW3o7tszIFZYQ05ub9VxC1X3a/L7AQDcUCAwEAAaOCARowggEWMA8G +A1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFP3aFMSfMN4hvR5COfyrYyNJ4PGEMA4G +A1UdDwEB/wQEAwIBBjCB0wYDVR0fBIHLMIHIMIGAoH6gfIZ6bGRhcDovL2RpcmVj +dG9yeS5kLXRydXN0Lm5ldC9DTj1ELVRSVVNUJTIwUm9vdCUyMENsYXNzJTIwMyUy +MENBJTIwMiUyMDIwMDksTz1ELVRydXN0JTIwR21iSCxDPURFP2NlcnRpZmljYXRl +cmV2b2NhdGlvbmxpc3QwQ6BBoD+GPWh0dHA6Ly93d3cuZC10cnVzdC5uZXQvY3Js +L2QtdHJ1c3Rfcm9vdF9jbGFzc18zX2NhXzJfMjAwOS5jcmwwDQYJKoZIhvcNAQEL +BQADggEBAH+X2zDI36ScfSF6gHDOFBJpiBSVYEQBrLLpME+bUMJm2H6NMLVwMeni +acfzcNsgFYbQDfC+rAF1hM5+n02/t2A7nPPKHeJeaNijnZflQGDSNiH+0LS4F9p0 +o3/U37CYAqxva2ssJSRyoWXuJVrl5jLn8t+rSfrzkGkj2wTZ51xY/GXUl77M/C4K +zCUqNQT4YJEVdT1B/yMfGchs64JTBKbkTCJNjYy6zltz7GRUUG3RnFX7acM2w4y8 
+PIWmawomDeCTmGCufsYkl4phX5GOZpIJhzbNi5stPvZR1FDUWSi9g/LMKHtThm3Y +Johw1+qRzT65ysCQblrGXnRl11z+o+I= +-----END CERTIFICATE----- + +# Issuer: CN=D-TRUST Root Class 3 CA 2 EV 2009 O=D-Trust GmbH +# Subject: CN=D-TRUST Root Class 3 CA 2 EV 2009 O=D-Trust GmbH +# Label: "D-TRUST Root Class 3 CA 2 EV 2009" +# Serial: 623604 +# MD5 Fingerprint: aa:c6:43:2c:5e:2d:cd:c4:34:c0:50:4f:11:02:4f:b6 +# SHA1 Fingerprint: 96:c9:1b:0b:95:b4:10:98:42:fa:d0:d8:22:79:fe:60:fa:b9:16:83 +# SHA256 Fingerprint: ee:c5:49:6b:98:8c:e9:86:25:b9:34:09:2e:ec:29:08:be:d0:b0:f3:16:c2:d4:73:0c:84:ea:f1:f3:d3:48:81 +-----BEGIN CERTIFICATE----- +MIIEQzCCAyugAwIBAgIDCYP0MA0GCSqGSIb3DQEBCwUAMFAxCzAJBgNVBAYTAkRF +MRUwEwYDVQQKDAxELVRydXN0IEdtYkgxKjAoBgNVBAMMIUQtVFJVU1QgUm9vdCBD +bGFzcyAzIENBIDIgRVYgMjAwOTAeFw0wOTExMDUwODUwNDZaFw0yOTExMDUwODUw +NDZaMFAxCzAJBgNVBAYTAkRFMRUwEwYDVQQKDAxELVRydXN0IEdtYkgxKjAoBgNV +BAMMIUQtVFJVU1QgUm9vdCBDbGFzcyAzIENBIDIgRVYgMjAwOTCCASIwDQYJKoZI +hvcNAQEBBQADggEPADCCAQoCggEBAJnxhDRwui+3MKCOvXwEz75ivJn9gpfSegpn +ljgJ9hBOlSJzmY3aFS3nBfwZcyK3jpgAvDw9rKFs+9Z5JUut8Mxk2og+KbgPCdM0 +3TP1YtHhzRnp7hhPTFiu4h7WDFsVWtg6uMQYZB7jM7K1iXdODL/ZlGsTl28So/6Z +qQTMFexgaDbtCHu39b+T7WYxg4zGcTSHThfqr4uRjRxWQa4iN1438h3Z0S0NL2lR +p75mpoo6Kr3HGrHhFPC+Oh25z1uxav60sUYgovseO3Dvk5h9jHOW8sXvhXCtKSb8 +HgQ+HKDYD8tSg2J87otTlZCpV6LqYQXY+U3EJ/pure3511H3a6UCAwEAAaOCASQw +ggEgMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFNOUikxiEyoZLsyvcop9Ntea +HNxnMA4GA1UdDwEB/wQEAwIBBjCB3QYDVR0fBIHVMIHSMIGHoIGEoIGBhn9sZGFw +Oi8vZGlyZWN0b3J5LmQtdHJ1c3QubmV0L0NOPUQtVFJVU1QlMjBSb290JTIwQ2xh +c3MlMjAzJTIwQ0ElMjAyJTIwRVYlMjAyMDA5LE89RC1UcnVzdCUyMEdtYkgsQz1E +RT9jZXJ0aWZpY2F0ZXJldm9jYXRpb25saXN0MEagRKBChkBodHRwOi8vd3d3LmQt +dHJ1c3QubmV0L2NybC9kLXRydXN0X3Jvb3RfY2xhc3NfM19jYV8yX2V2XzIwMDku +Y3JsMA0GCSqGSIb3DQEBCwUAA4IBAQA07XtaPKSUiO8aEXUHL7P+PPoeUSbrh/Yp +3uDx1MYkCenBz1UbtDDZzhr+BlGmFaQt77JLvyAoJUnRpjZ3NOhk31KxEcdzes05 +nsKtjHEh8lprr988TlWvsoRlFIm5d8sqMb7Po23Pb0iUMkZv53GMoKaEGTcH8gNF +CSuGdXzfX2lXANtu2KZyIktQ1HWYVt+3GP9DQ1CuekR78HlR10M9p9OB0/DJT7na +xpeG0ILD5EJt/rDiZE4OJudANCa1CInXCGNjOCd1HjPqbqjdn5lPdE2BiYBL3ZqX +KVwvvoFBuYz/6n1gBp7N1z3TLqMVvKjmJuVvw9y4AyHqnxbxLFS1 +-----END CERTIFICATE----- + +# Issuer: CN=CA Disig Root R2 O=Disig a.s. +# Subject: CN=CA Disig Root R2 O=Disig a.s. 
+# Label: "CA Disig Root R2" +# Serial: 10572350602393338211 +# MD5 Fingerprint: 26:01:fb:d8:27:a7:17:9a:45:54:38:1a:43:01:3b:03 +# SHA1 Fingerprint: b5:61:eb:ea:a4:de:e4:25:4b:69:1a:98:a5:57:47:c2:34:c7:d9:71 +# SHA256 Fingerprint: e2:3d:4a:03:6d:7b:70:e9:f5:95:b1:42:20:79:d2:b9:1e:df:bb:1f:b6:51:a0:63:3e:aa:8a:9d:c5:f8:07:03 +-----BEGIN CERTIFICATE----- +MIIFaTCCA1GgAwIBAgIJAJK4iNuwisFjMA0GCSqGSIb3DQEBCwUAMFIxCzAJBgNV +BAYTAlNLMRMwEQYDVQQHEwpCcmF0aXNsYXZhMRMwEQYDVQQKEwpEaXNpZyBhLnMu +MRkwFwYDVQQDExBDQSBEaXNpZyBSb290IFIyMB4XDTEyMDcxOTA5MTUzMFoXDTQy +MDcxOTA5MTUzMFowUjELMAkGA1UEBhMCU0sxEzARBgNVBAcTCkJyYXRpc2xhdmEx +EzARBgNVBAoTCkRpc2lnIGEucy4xGTAXBgNVBAMTEENBIERpc2lnIFJvb3QgUjIw +ggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCio8QACdaFXS1tFPbCw3Oe +NcJxVX6B+6tGUODBfEl45qt5WDza/3wcn9iXAng+a0EE6UG9vgMsRfYvZNSrXaNH +PWSb6WiaxswbP7q+sos0Ai6YVRn8jG+qX9pMzk0DIaPY0jSTVpbLTAwAFjxfGs3I +x2ymrdMxp7zo5eFm1tL7A7RBZckQrg4FY8aAamkw/dLukO8NJ9+flXP04SXabBbe +QTg06ov80egEFGEtQX6sx3dOy1FU+16SGBsEWmjGycT6txOgmLcRK7fWV8x8nhfR +yyX+hk4kLlYMeE2eARKmK6cBZW58Yh2EhN/qwGu1pSqVg8NTEQxzHQuyRpDRQjrO +QG6Vrf/GlK1ul4SOfW+eioANSW1z4nuSHsPzwfPrLgVv2RvPN3YEyLRa5Beny912 +H9AZdugsBbPWnDTYltxhh5EF5EQIM8HauQhl1K6yNg3ruji6DOWbnuuNZt2Zz9aJ +QfYEkoopKW1rOhzndX0CcQ7zwOe9yxndnWCywmZgtrEE7snmhrmaZkCo5xHtgUUD +i/ZnWejBBhG93c+AAk9lQHhcR1DIm+YfgXvkRKhbhZri3lrVx/k6RGZL5DJUfORs +nLMOPReisjQS1n6yqEm70XooQL6iFh/f5DcfEXP7kAplQ6INfPgGAVUzfbANuPT1 +rqVCV3w2EYx7XsQDnYx5nQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1Ud +DwEB/wQEAwIBBjAdBgNVHQ4EFgQUtZn4r7CU9eMg1gqtzk5WpC5uQu0wDQYJKoZI +hvcNAQELBQADggIBACYGXnDnZTPIgm7ZnBc6G3pmsgH2eDtpXi/q/075KMOYKmFM +tCQSin1tERT3nLXK5ryeJ45MGcipvXrA1zYObYVybqjGom32+nNjf7xueQgcnYqf +GopTpti72TVVsRHFqQOzVju5hJMiXn7B9hJSi+osZ7z+Nkz1uM/Rs0mSO9MpDpkb +lvdhuDvEK7Z4bLQjb/D907JedR+Zlais9trhxTF7+9FGs9K8Z7RiVLoJ92Owk6Ka ++elSLotgEqv89WBW7xBci8QaQtyDW2QOy7W81k/BfDxujRNt+3vrMNDcTa/F1bal +TFtxyegxvug4BkihGuLq0t4SOVga/4AOgnXmt8kHbA7v/zjxmHHEt38OFdAlab0i +nSvtBfZGR6ztwPDUO+Ls7pZbkBNOHlY667DvlruWIxG68kOGdGSVyCh13x01utI3 +gzhTODY7z2zp+WsO0PsE6E9312UBeIYMej4hYvF/Y3EMyZ9E26gnonW+boE+18Dr +G5gPcFw0sorMwIUY6256s/daoQe/qUKS82Ail+QUoQebTnbAjn39pCXHR+3/H3Os +zMOl6W8KjptlwlCFtaOgUxLMVYdh84GuEEZhvUQhuMI9dM9+JDX6HAcOmz0iyu8x +L4ysEr3vQCj8KWefshNPZiTEUxnpHikV7+ZtsH8tZ/3zbBt1RqPlShfppNcL +-----END CERTIFICATE----- + +# Issuer: CN=ACCVRAIZ1 O=ACCV OU=PKIACCV +# Subject: CN=ACCVRAIZ1 O=ACCV OU=PKIACCV +# Label: "ACCVRAIZ1" +# Serial: 6828503384748696800 +# MD5 Fingerprint: d0:a0:5a:ee:05:b6:09:94:21:a1:7d:f1:b2:29:82:02 +# SHA1 Fingerprint: 93:05:7a:88:15:c6:4f:ce:88:2f:fa:91:16:52:28:78:bc:53:64:17 +# SHA256 Fingerprint: 9a:6e:c0:12:e1:a7:da:9d:be:34:19:4d:47:8a:d7:c0:db:18:22:fb:07:1d:f1:29:81:49:6e:d1:04:38:41:13 +-----BEGIN CERTIFICATE----- +MIIH0zCCBbugAwIBAgIIXsO3pkN/pOAwDQYJKoZIhvcNAQEFBQAwQjESMBAGA1UE +AwwJQUNDVlJBSVoxMRAwDgYDVQQLDAdQS0lBQ0NWMQ0wCwYDVQQKDARBQ0NWMQsw +CQYDVQQGEwJFUzAeFw0xMTA1MDUwOTM3MzdaFw0zMDEyMzEwOTM3MzdaMEIxEjAQ +BgNVBAMMCUFDQ1ZSQUlaMTEQMA4GA1UECwwHUEtJQUNDVjENMAsGA1UECgwEQUND +VjELMAkGA1UEBhMCRVMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCb +qau/YUqXry+XZpp0X9DZlv3P4uRm7x8fRzPCRKPfmt4ftVTdFXxpNRFvu8gMjmoY +HtiP2Ra8EEg2XPBjs5BaXCQ316PWywlxufEBcoSwfdtNgM3802/J+Nq2DoLSRYWo +G2ioPej0RGy9ocLLA76MPhMAhN9KSMDjIgro6TenGEyxCQ0jVn8ETdkXhBilyNpA +lHPrzg5XPAOBOp0KoVdDaaxXbXmQeOW1tDvYvEyNKKGno6e6Ak4l0Squ7a4DIrhr +IA8wKFSVf+DuzgpmndFALW4ir50awQUZ0m/A8p/4e7MCQvtQqR0tkw8jq8bBD5L/ +0KIV9VMJcRz/RROE5iZe+OCIHAr8Fraocwa48GOEAqDGWuzndN9wrqODJerWx5eH +k6fGioozl2A3ED6XPm4pFdahD9GILBKfb6qkxkLrQaLjlUPTAYVtjrs78yM2x/47 
+4KElB0iryYl0/wiPgL/AlmXz7uxLaL2diMMxs0Dx6M/2OLuc5NF/1OVYm3z61PMO +m3WR5LpSLhl+0fXNWhn8ugb2+1KoS5kE3fj5tItQo05iifCHJPqDQsGH+tUtKSpa +cXpkatcnYGMN285J9Y0fkIkyF/hzQ7jSWpOGYdbhdQrqeWZ2iE9x6wQl1gpaepPl +uUsXQA+xtrn13k/c4LOsOxFwYIRKQ26ZIMApcQrAZQIDAQABo4ICyzCCAscwfQYI +KwYBBQUHAQEEcTBvMEwGCCsGAQUFBzAChkBodHRwOi8vd3d3LmFjY3YuZXMvZmls +ZWFkbWluL0FyY2hpdm9zL2NlcnRpZmljYWRvcy9yYWl6YWNjdjEuY3J0MB8GCCsG +AQUFBzABhhNodHRwOi8vb2NzcC5hY2N2LmVzMB0GA1UdDgQWBBTSh7Tj3zcnk1X2 +VuqB5TbMjB4/vTAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFNKHtOPfNyeT +VfZW6oHlNsyMHj+9MIIBcwYDVR0gBIIBajCCAWYwggFiBgRVHSAAMIIBWDCCASIG +CCsGAQUFBwICMIIBFB6CARAAQQB1AHQAbwByAGkAZABhAGQAIABkAGUAIABDAGUA +cgB0AGkAZgBpAGMAYQBjAGkA8wBuACAAUgBhAO0AegAgAGQAZQAgAGwAYQAgAEEA +QwBDAFYAIAAoAEEAZwBlAG4AYwBpAGEAIABkAGUAIABUAGUAYwBuAG8AbABvAGcA +7QBhACAAeQAgAEMAZQByAHQAaQBmAGkAYwBhAGMAaQDzAG4AIABFAGwAZQBjAHQA +cgDzAG4AaQBjAGEALAAgAEMASQBGACAAUQA0ADYAMAAxADEANQA2AEUAKQAuACAA +QwBQAFMAIABlAG4AIABoAHQAdABwADoALwAvAHcAdwB3AC4AYQBjAGMAdgAuAGUA +czAwBggrBgEFBQcCARYkaHR0cDovL3d3dy5hY2N2LmVzL2xlZ2lzbGFjaW9uX2Mu +aHRtMFUGA1UdHwROMEwwSqBIoEaGRGh0dHA6Ly93d3cuYWNjdi5lcy9maWxlYWRt +aW4vQXJjaGl2b3MvY2VydGlmaWNhZG9zL3JhaXphY2N2MV9kZXIuY3JsMA4GA1Ud +DwEB/wQEAwIBBjAXBgNVHREEEDAOgQxhY2N2QGFjY3YuZXMwDQYJKoZIhvcNAQEF +BQADggIBAJcxAp/n/UNnSEQU5CmH7UwoZtCPNdpNYbdKl02125DgBS4OxnnQ8pdp +D70ER9m+27Up2pvZrqmZ1dM8MJP1jaGo/AaNRPTKFpV8M9xii6g3+CfYCS0b78gU +JyCpZET/LtZ1qmxNYEAZSUNUY9rizLpm5U9EelvZaoErQNV/+QEnWCzI7UiRfD+m +AM/EKXMRNt6GGT6d7hmKG9Ww7Y49nCrADdg9ZuM8Db3VlFzi4qc1GwQA9j9ajepD +vV+JHanBsMyZ4k0ACtrJJ1vnE5Bc5PUzolVt3OAJTS+xJlsndQAJxGJ3KQhfnlms +tn6tn1QwIgPBHnFk/vk4CpYY3QIUrCPLBhwepH2NDd4nQeit2hW3sCPdK6jT2iWH +7ehVRE2I9DZ+hJp4rPcOVkkO1jMl1oRQQmwgEh0q1b688nCBpHBgvgW1m54ERL5h +I6zppSSMEYCUWqKiuUnSwdzRp+0xESyeGabu4VXhwOrPDYTkF7eifKXeVSUG7szA +h1xA2syVP1XgNce4hL60Xc16gwFy7ofmXx2utYXGJt/mwZrpHgJHnyqobalbz+xF +d3+YJ5oyXSrjhO7FmGYvliAd3djDJ9ew+f7Zfc3Qn48LFFhRny+Lwzgt3uiP1o2H +pPVWQxaZLPSkVrQ0uGE3ycJYgBugl6H8WY3pEfbRD0tVNEYqi4Y7 +-----END CERTIFICATE----- + +# Issuer: CN=TWCA Global Root CA O=TAIWAN-CA OU=Root CA +# Subject: CN=TWCA Global Root CA O=TAIWAN-CA OU=Root CA +# Label: "TWCA Global Root CA" +# Serial: 3262 +# MD5 Fingerprint: f9:03:7e:cf:e6:9e:3c:73:7a:2a:90:07:69:ff:2b:96 +# SHA1 Fingerprint: 9c:bb:48:53:f6:a4:f6:d3:52:a4:e8:32:52:55:60:13:f5:ad:af:65 +# SHA256 Fingerprint: 59:76:90:07:f7:68:5d:0f:cd:50:87:2f:9f:95:d5:75:5a:5b:2b:45:7d:81:f3:69:2b:61:0a:98:67:2f:0e:1b +-----BEGIN CERTIFICATE----- +MIIFQTCCAymgAwIBAgICDL4wDQYJKoZIhvcNAQELBQAwUTELMAkGA1UEBhMCVFcx +EjAQBgNVBAoTCVRBSVdBTi1DQTEQMA4GA1UECxMHUm9vdCBDQTEcMBoGA1UEAxMT +VFdDQSBHbG9iYWwgUm9vdCBDQTAeFw0xMjA2MjcwNjI4MzNaFw0zMDEyMzExNTU5 +NTlaMFExCzAJBgNVBAYTAlRXMRIwEAYDVQQKEwlUQUlXQU4tQ0ExEDAOBgNVBAsT +B1Jvb3QgQ0ExHDAaBgNVBAMTE1RXQ0EgR2xvYmFsIFJvb3QgQ0EwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCwBdvI64zEbooh745NnHEKH1Jw7W2CnJfF +10xORUnLQEK1EjRsGcJ0pDFfhQKX7EMzClPSnIyOt7h52yvVavKOZsTuKwEHktSz +0ALfUPZVr2YOy+BHYC8rMjk1Ujoog/h7FsYYuGLWRyWRzvAZEk2tY/XTP3VfKfCh +MBwqoJimFb3u/Rk28OKRQ4/6ytYQJ0lM793B8YVwm8rqqFpD/G2Gb3PpN0Wp8DbH +zIh1HrtsBv+baz4X7GGqcXzGHaL3SekVtTzWoWH1EfcFbx39Eb7QMAfCKbAJTibc +46KokWofwpFFiFzlmLhxpRUZyXx1EcxwdE8tmx2RRP1WKKD+u4ZqyPpcC1jcxkt2 +yKsi2XMPpfRaAok/T54igu6idFMqPVMnaR1sjjIsZAAmY2E2TqNGtz99sy2sbZCi +laLOz9qC5wc0GZbpuCGqKX6mOL6OKUohZnkfs8O1CWfe1tQHRvMq2uYiN2DLgbYP +oA/pyJV/v1WRBXrPPRXAb94JlAGD1zQbzECl8LibZ9WYkTunhHiVJqRaCPgrdLQA +BDzfuBSO6N+pjWxnkjMdwLfS7JLIvgm/LCkFbwJrnu+8vyq8W8BQj0FwcYeyTbcE +qYSjMq+u7msXi7Kx/mzhkIyIqJdIzshNy/MGz19qCkKxHh53L46g5pIOBvwFItIm 
+4TFRfTLcDwIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB +/zANBgkqhkiG9w0BAQsFAAOCAgEAXzSBdu+WHdXltdkCY4QWwa6gcFGn90xHNcgL +1yg9iXHZqjNB6hQbbCEAwGxCGX6faVsgQt+i0trEfJdLjbDorMjupWkEmQqSpqsn +LhpNgb+E1HAerUf+/UqdM+DyucRFCCEK2mlpc3INvjT+lIutwx4116KD7+U4x6WF +H6vPNOw/KP4M8VeGTslV9xzU2KV9Bnpv1d8Q34FOIWWxtuEXeZVFBs5fzNxGiWNo +RI2T9GRwoD2dKAXDOXC4Ynsg/eTb6QihuJ49CcdP+yz4k3ZB3lLg4VfSnQO8d57+ +nile98FRYB/e2guyLXW3Q0iT5/Z5xoRdgFlglPx4mI88k1HtQJAH32RjJMtOcQWh +15QaiDLxInQirqWm2BJpTGCjAu4r7NRjkgtevi92a6O2JryPA9gK8kxkRr05YuWW +6zRjESjMlfGt7+/cgFhI6Uu46mWs6fyAtbXIRfmswZ/ZuepiiI7E8UuDEq3mi4TW +nsLrgxifarsbJGAzcMzs9zLzXNl5fe+epP7JI8Mk7hWSsT2RTyaGvWZzJBPqpK5j +wa19hAM8EHiGG3njxPPyBJUgriOCxLM6AGK/5jYk4Ve6xx6QddVfP5VhK8E7zeWz +aGHQRiapIVJpLesux+t3zqY6tQMzT3bR51xUAV3LePTJDL/PEo4XLSNolOer/qmy +KwbQBM0= +-----END CERTIFICATE----- + +# Issuer: CN=TeliaSonera Root CA v1 O=TeliaSonera +# Subject: CN=TeliaSonera Root CA v1 O=TeliaSonera +# Label: "TeliaSonera Root CA v1" +# Serial: 199041966741090107964904287217786801558 +# MD5 Fingerprint: 37:41:49:1b:18:56:9a:26:f5:ad:c2:66:fb:40:a5:4c +# SHA1 Fingerprint: 43:13:bb:96:f1:d5:86:9b:c1:4e:6a:92:f6:cf:f6:34:69:87:82:37 +# SHA256 Fingerprint: dd:69:36:fe:21:f8:f0:77:c1:23:a1:a5:21:c1:22:24:f7:22:55:b7:3e:03:a7:26:06:93:e8:a2:4b:0f:a3:89 +-----BEGIN CERTIFICATE----- +MIIFODCCAyCgAwIBAgIRAJW+FqD3LkbxezmCcvqLzZYwDQYJKoZIhvcNAQEFBQAw +NzEUMBIGA1UECgwLVGVsaWFTb25lcmExHzAdBgNVBAMMFlRlbGlhU29uZXJhIFJv +b3QgQ0EgdjEwHhcNMDcxMDE4MTIwMDUwWhcNMzIxMDE4MTIwMDUwWjA3MRQwEgYD +VQQKDAtUZWxpYVNvbmVyYTEfMB0GA1UEAwwWVGVsaWFTb25lcmEgUm9vdCBDQSB2 +MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMK+6yfwIaPzaSZVfp3F +VRaRXP3vIb9TgHot0pGMYzHw7CTww6XScnwQbfQ3t+XmfHnqjLWCi65ItqwA3GV1 +7CpNX8GH9SBlK4GoRz6JI5UwFpB/6FcHSOcZrr9FZ7E3GwYq/t75rH2D+1665I+X +Z75Ljo1kB1c4VWk0Nj0TSO9P4tNmHqTPGrdeNjPUtAa9GAH9d4RQAEX1jF3oI7x+ +/jXh7VB7qTCNGdMJjmhnXb88lxhTuylixcpecsHHltTbLaC0H2kD7OriUPEMPPCs +81Mt8Bz17Ww5OXOAFshSsCPN4D7c3TxHoLs1iuKYaIu+5b9y7tL6pe0S7fyYGKkm +dtwoSxAgHNN/Fnct7W+A90m7UwW7XWjH1Mh1Fj+JWov3F0fUTPHSiXk+TT2YqGHe +Oh7S+F4D4MHJHIzTjU3TlTazN19jY5szFPAtJmtTfImMMsJu7D0hADnJoWjiUIMu +sDor8zagrC/kb2HCUQk5PotTubtn2txTuXZZNp1D5SDgPTJghSJRt8czu90VL6R4 +pgd7gUY2BIbdeTXHlSw7sKMXNeVzH7RcWe/a6hBle3rQf5+ztCo3O3CLm1u5K7fs +slESl1MpWtTwEhDcTwK7EpIvYtQ/aUN8Ddb8WHUBiJ1YFkveupD/RwGJBmr2X7KQ +arMCpgKIv7NHfirZ1fpoeDVNAgMBAAGjPzA9MA8GA1UdEwEB/wQFMAMBAf8wCwYD +VR0PBAQDAgEGMB0GA1UdDgQWBBTwj1k4ALP1j5qWDNXr+nuqF+gTEjANBgkqhkiG +9w0BAQUFAAOCAgEAvuRcYk4k9AwI//DTDGjkk0kiP0Qnb7tt3oNmzqjMDfz1mgbl +dxSR651Be5kqhOX//CHBXfDkH1e3damhXwIm/9fH907eT/j3HEbAek9ALCI18Bmx +0GtnLLCo4MBANzX2hFxc469CeP6nyQ1Q6g2EdvZR74NTxnr/DlZJLo961gzmJ1Tj +TQpgcmLNkQfWpb/ImWvtxBnmq0wROMVvMeJuScg/doAmAyYp4Db29iBT4xdwNBed +Y2gea+zDTYa4EzAvXUYNR0PVG6pZDrlcjQZIrXSHX8f8MVRBE+LHIQ6e4B4N4cB7 +Q4WQxYpYxmUKeFfyxiMPAdkgS94P+5KFdSpcc41teyWRyu5FrgZLAMzTsVlQ2jqI +OylDRl6XK1TOU2+NSueW+r9xDkKLfP0ooNBIytrEgUy7onOTJsjrDNYmiLbAJM+7 +vVvrdX3pCI6GMyx5dwlppYn8s3CQh3aP0yK7Qs69cwsgJirQmz1wHiRszYd2qReW +t88NkvuOGKmYSdGe/mBEciG5Ge3C9THxOUiIkCR1VBatzvT4aRRkOfujuLpwQMcn +HL/EVlP6Y2XQ8xwOFvVrhlhNGNTkDY6lnVuR3HYkUD/GKvvZt5y11ubQ2egZixVx +SK236thZiNSQvxaz2emsWWFUyBy6ysHK4bkgTI86k4mloMy/0/Z1pHWWbVY= +-----END CERTIFICATE----- + +# Issuer: CN=E-Tugra Certification Authority O=E-Tu\u011fra EBG Bili\u015fim Teknolojileri ve Hizmetleri A.\u015e. OU=E-Tugra Sertifikasyon Merkezi +# Subject: CN=E-Tugra Certification Authority O=E-Tu\u011fra EBG Bili\u015fim Teknolojileri ve Hizmetleri A.\u015e. 
OU=E-Tugra Sertifikasyon Merkezi +# Label: "E-Tugra Certification Authority" +# Serial: 7667447206703254355 +# MD5 Fingerprint: b8:a1:03:63:b0:bd:21:71:70:8a:6f:13:3a:bb:79:49 +# SHA1 Fingerprint: 51:c6:e7:08:49:06:6e:f3:92:d4:5c:a0:0d:6d:a3:62:8f:c3:52:39 +# SHA256 Fingerprint: b0:bf:d5:2b:b0:d7:d9:bd:92:bf:5d:4d:c1:3d:a2:55:c0:2c:54:2f:37:83:65:ea:89:39:11:f5:5e:55:f2:3c +-----BEGIN CERTIFICATE----- +MIIGSzCCBDOgAwIBAgIIamg+nFGby1MwDQYJKoZIhvcNAQELBQAwgbIxCzAJBgNV +BAYTAlRSMQ8wDQYDVQQHDAZBbmthcmExQDA+BgNVBAoMN0UtVHXEn3JhIEVCRyBC +aWxpxZ9pbSBUZWtub2xvamlsZXJpIHZlIEhpem1ldGxlcmkgQS7Fni4xJjAkBgNV +BAsMHUUtVHVncmEgU2VydGlmaWthc3lvbiBNZXJrZXppMSgwJgYDVQQDDB9FLVR1 +Z3JhIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTEzMDMwNTEyMDk0OFoXDTIz +MDMwMzEyMDk0OFowgbIxCzAJBgNVBAYTAlRSMQ8wDQYDVQQHDAZBbmthcmExQDA+ +BgNVBAoMN0UtVHXEn3JhIEVCRyBCaWxpxZ9pbSBUZWtub2xvamlsZXJpIHZlIEhp +em1ldGxlcmkgQS7Fni4xJjAkBgNVBAsMHUUtVHVncmEgU2VydGlmaWthc3lvbiBN +ZXJrZXppMSgwJgYDVQQDDB9FLVR1Z3JhIENlcnRpZmljYXRpb24gQXV0aG9yaXR5 +MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA4vU/kwVRHoViVF56C/UY +B4Oufq9899SKa6VjQzm5S/fDxmSJPZQuVIBSOTkHS0vdhQd2h8y/L5VMzH2nPbxH +D5hw+IyFHnSOkm0bQNGZDbt1bsipa5rAhDGvykPL6ys06I+XawGb1Q5KCKpbknSF +Q9OArqGIW66z6l7LFpp3RMih9lRozt6Plyu6W0ACDGQXwLWTzeHxE2bODHnv0ZEo +q1+gElIwcxmOj+GMB6LDu0rw6h8VqO4lzKRG+Bsi77MOQ7osJLjFLFzUHPhdZL3D +k14opz8n8Y4e0ypQBaNV2cvnOVPAmJ6MVGKLJrD3fY185MaeZkJVgkfnsliNZvcH +fC425lAcP9tDJMW/hkd5s3kc91r0E+xs+D/iWR+V7kI+ua2oMoVJl0b+SzGPWsut +dEcf6ZG33ygEIqDUD13ieU/qbIWGvaimzuT6w+Gzrt48Ue7LE3wBf4QOXVGUnhMM +ti6lTPk5cDZvlsouDERVxcr6XQKj39ZkjFqzAQqptQpHF//vkUAqjqFGOjGY5RH8 +zLtJVor8udBhmm9lbObDyz51Sf6Pp+KJxWfXnUYTTjF2OySznhFlhqt/7x3U+Lzn +rFpct1pHXFXOVbQicVtbC/DP3KBhZOqp12gKY6fgDT+gr9Oq0n7vUaDmUStVkhUX +U8u3Zg5mTPj5dUyQ5xJwx0UCAwEAAaNjMGEwHQYDVR0OBBYEFC7j27JJ0JxUeVz6 +Jyr+zE7S6E5UMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAULuPbsknQnFR5 +XPonKv7MTtLoTlQwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3DQEBCwUAA4ICAQAF +Nzr0TbdF4kV1JI+2d1LoHNgQk2Xz8lkGpD4eKexd0dCrfOAKkEh47U6YA5n+KGCR +HTAduGN8qOY1tfrTYXbm1gdLymmasoR6d5NFFxWfJNCYExL/u6Au/U5Mh/jOXKqY +GwXgAEZKgoClM4so3O0409/lPun++1ndYYRP0lSWE2ETPo+Aab6TR7U1Q9Jauz1c +77NCR807VRMGsAnb/WP2OogKmW9+4c4bU2pEZiNRCHu8W1Ki/QY3OEBhj0qWuJA3 ++GbHeJAAFS6LrVE1Uweoa2iu+U48BybNCAVwzDk/dr2l02cmAYamU9JgO3xDf1WK +vJUawSg5TB9D0pH0clmKuVb8P7Sd2nCcdlqMQ1DujjByTd//SffGqWfZbawCEeI6 +FiWnWAjLb1NBnEg4R2gz0dfHj9R0IdTDBZB6/86WiLEVKV0jq9BgoRJP3vQXzTLl +yb/IQ639Lo7xr+L0mPoSHyDYwKcMhcWQ9DstliaxLL5Mq+ux0orJ23gTDx4JnW2P +AJ8C2sH6H3p6CcRK5ogql5+Ji/03X186zjhZhkuvcQu02PJwT58yE+Owp1fl2tpD +y4Q08ijE6m30Ku/Ba3ba+367hTzSU8JNvnHhRdH9I2cNE3X7z2VnIp2usAnRCf8d +NL/+I5c30jn6PQ0GC7TbO6Orb1wdtn7os4I07QZcJA== +-----END CERTIFICATE----- + +# Issuer: CN=T-TeleSec GlobalRoot Class 2 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Subject: CN=T-TeleSec GlobalRoot Class 2 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center +# Label: "T-TeleSec GlobalRoot Class 2" +# Serial: 1 +# MD5 Fingerprint: 2b:9b:9e:e4:7b:6c:1f:00:72:1a:cc:c1:77:79:df:6a +# SHA1 Fingerprint: 59:0d:2d:7d:88:4f:40:2e:61:7e:a5:62:32:17:65:cf:17:d8:94:e9 +# SHA256 Fingerprint: 91:e2:f5:78:8d:58:10:eb:a7:ba:58:73:7d:e1:54:8a:8e:ca:cd:01:45:98:bc:0b:14:3e:04:1b:17:05:25:52 +-----BEGIN CERTIFICATE----- +MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx +KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd +BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl +YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1 +OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy 
+aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50 +ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G +CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd +AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC +FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi +1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq +jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ +wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj +QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/ +WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy +NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC +uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw +IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6 +g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN +9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP +BSeOE6Fuwg== +-----END CERTIFICATE----- + +# Issuer: CN=Atos TrustedRoot 2011 O=Atos +# Subject: CN=Atos TrustedRoot 2011 O=Atos +# Label: "Atos TrustedRoot 2011" +# Serial: 6643877497813316402 +# MD5 Fingerprint: ae:b9:c4:32:4b:ac:7f:5d:66:cc:77:94:bb:2a:77:56 +# SHA1 Fingerprint: 2b:b1:f5:3e:55:0c:1d:c5:f1:d4:e6:b7:6a:46:4b:55:06:02:ac:21 +# SHA256 Fingerprint: f3:56:be:a2:44:b7:a9:1e:b3:5d:53:ca:9a:d7:86:4a:ce:01:8e:2d:35:d5:f8:f9:6d:df:68:a6:f4:1a:a4:74 +-----BEGIN CERTIFICATE----- +MIIDdzCCAl+gAwIBAgIIXDPLYixfszIwDQYJKoZIhvcNAQELBQAwPDEeMBwGA1UE +AwwVQXRvcyBUcnVzdGVkUm9vdCAyMDExMQ0wCwYDVQQKDARBdG9zMQswCQYDVQQG +EwJERTAeFw0xMTA3MDcxNDU4MzBaFw0zMDEyMzEyMzU5NTlaMDwxHjAcBgNVBAMM +FUF0b3MgVHJ1c3RlZFJvb3QgMjAxMTENMAsGA1UECgwEQXRvczELMAkGA1UEBhMC +REUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCVhTuXbyo7LjvPpvMp +Nb7PGKw+qtn4TaA+Gke5vJrf8v7MPkfoepbCJI419KkM/IL9bcFyYie96mvr54rM +VD6QUM+A1JX76LWC1BTFtqlVJVfbsVD2sGBkWXppzwO3bw2+yj5vdHLqqjAqc2K+ +SZFhyBH+DgMq92og3AIVDV4VavzjgsG1xZ1kCWyjWZgHJ8cblithdHFsQ/H3NYkQ +4J7sVaE3IqKHBAUsR320HLliKWYoyrfhk/WklAOZuXCFteZI6o1Q/NnezG8HDt0L +cp2AMBYHlT8oDv3FdU9T1nSatCQujgKRz3bFmx5VdJx4IbHwLfELn8LVlhgf8FQi +eowHAgMBAAGjfTB7MB0GA1UdDgQWBBSnpQaxLKYJYO7Rl+lwrrw7GWzbITAPBgNV +HRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFKelBrEspglg7tGX6XCuvDsZbNshMBgG +A1UdIAQRMA8wDQYLKwYBBAGwLQMEAQEwDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3 +DQEBCwUAA4IBAQAmdzTblEiGKkGdLD4GkGDEjKwLVLgfuXvTBznk+j57sj1O7Z8j +vZfza1zv7v1Apt+hk6EKhqzvINB5Ab149xnYJDE0BAGmuhWawyfc2E8PzBhj/5kP +DpFrdRbhIfzYJsdHt6bPWHJxfrrhTZVHO8mvbaG0weyJ9rQPOLXiZNwlz6bb65pc +maHFCN795trV1lpFDMS3wrUU77QR/w4VtfX128a961qn8FYiqTxlVMYVqL2Gns2D +lmh6cYGJ4Qvh6hEbaAjMaZ7snkGeRDImeuKHCnE96+RapNLbxc3G3mB/ufNPRJLv +KrcYPqcZ2Qt9sTdBQrC6YB3y/gkRsPCHe6ed +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 1 G3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 1 G3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 1 G3" +# Serial: 687049649626669250736271037606554624078720034195 +# MD5 Fingerprint: a4:bc:5b:3f:fe:37:9a:fa:64:f0:e2:fa:05:3d:0b:ab +# SHA1 Fingerprint: 1b:8e:ea:57:96:29:1a:c9:39:ea:b8:0a:81:1a:73:73:c0:93:79:67 +# SHA256 Fingerprint: 8a:86:6f:d1:b2:76:b5:7e:57:8e:92:1c:65:82:8a:2b:ed:58:e9:f2:f2:88:05:41:34:b7:f1:f4:bf:c9:cc:74 +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIUeFhfLq0sGUvjNwc1NBMotZbUZZMwDQYJKoZIhvcNAQEL +BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc +BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMSBHMzAeFw0xMjAxMTIxNzI3NDRaFw00 +MjAxMTIxNzI3NDRaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM 
+aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDEgRzMwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCgvlAQjunybEC0BJyFuTHK3C3kEakEPBtV +wedYMB0ktMPvhd6MLOHBPd+C5k+tR4ds7FtJwUrVu4/sh6x/gpqG7D0DmVIB0jWe +rNrwU8lmPNSsAgHaJNM7qAJGr6Qc4/hzWHa39g6QDbXwz8z6+cZM5cOGMAqNF341 +68Xfuw6cwI2H44g4hWf6Pser4BOcBRiYz5P1sZK0/CPTz9XEJ0ngnjybCKOLXSoh +4Pw5qlPafX7PGglTvF0FBM+hSo+LdoINofjSxxR3W5A2B4GbPgb6Ul5jxaYA/qXp +UhtStZI5cgMJYr2wYBZupt0lwgNm3fME0UDiTouG9G/lg6AnhF4EwfWQvTA9xO+o +abw4m6SkltFi2mnAAZauy8RRNOoMqv8hjlmPSlzkYZqn0ukqeI1RPToV7qJZjqlc +3sX5kCLliEVx3ZGZbHqfPT2YfF72vhZooF6uCyP8Wg+qInYtyaEQHeTTRCOQiJ/G +KubX9ZqzWB4vMIkIG1SitZgj7Ah3HJVdYdHLiZxfokqRmu8hqkkWCKi9YSgxyXSt +hfbZxbGL0eUQMk1fiyA6PEkfM4VZDdvLCXVDaXP7a3F98N/ETH3Goy7IlXnLc6KO +Tk0k+17kBL5yG6YnLUlamXrXXAkgt3+UuU/xDRxeiEIbEbfnkduebPRq34wGmAOt +zCjvpUfzUwIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +BjAdBgNVHQ4EFgQUo5fW816iEOGrRZ88F2Q87gFwnMwwDQYJKoZIhvcNAQELBQAD +ggIBABj6W3X8PnrHX3fHyt/PX8MSxEBd1DKquGrX1RUVRpgjpeaQWxiZTOOtQqOC +MTaIzen7xASWSIsBx40Bz1szBpZGZnQdT+3Btrm0DWHMY37XLneMlhwqI2hrhVd2 +cDMT/uFPpiN3GPoajOi9ZcnPP/TJF9zrx7zABC4tRi9pZsMbj/7sPtPKlL92CiUN +qXsCHKnQO18LwIE6PWThv6ctTr1NxNgpxiIY0MWscgKCP6o6ojoilzHdCGPDdRS5 +YCgtW2jgFqlmgiNR9etT2DGbe+m3nUvriBbP+V04ikkwj+3x6xn0dxoxGE1nVGwv +b2X52z3sIexe9PSLymBlVNFxZPT5pqOBMzYzcfCkeF9OrYMh3jRJjehZrJ3ydlo2 +8hP0r+AJx2EqbPfgna67hkooby7utHnNkDPDs3b69fBsnQGQ+p6Q9pxyz0fawx/k +NSBT8lTR32GDpgLiJTjehTItXnOQUl1CxM49S+H5GYQd1aJQzEH7QRTDvdbJWqNj +ZgKAvQU6O0ec7AAmTPWIUb+oI38YB7AL7YsmoWTTYUrrXJ/es69nA7Mf3W1daWhp +q1467HxpvMc7hU6eFbm0FU/DlXpY18ls6Wy58yljXrQs8C097Vpl4KlbQMJImYFt +nh8GKjwStIsPm6Ik8KaN1nrgS7ZklmOVhMJKzRwuJIczYOXD +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 2 G3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 2 G3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 2 G3" +# Serial: 390156079458959257446133169266079962026824725800 +# MD5 Fingerprint: af:0c:86:6e:bf:40:2d:7f:0b:3e:12:50:ba:12:3d:06 +# SHA1 Fingerprint: 09:3c:61:f3:8b:8b:dc:7d:55:df:75:38:02:05:00:e1:25:f5:c8:36 +# SHA256 Fingerprint: 8f:e4:fb:0a:f9:3a:4d:0d:67:db:0b:eb:b2:3e:37:c7:1b:f3:25:dc:bc:dd:24:0e:a0:4d:af:58:b4:7e:18:40 +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIURFc0JFuBiZs18s64KztbpybwdSgwDQYJKoZIhvcNAQEL +BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc +BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMiBHMzAeFw0xMjAxMTIxODU5MzJaFw00 +MjAxMTIxODU5MzJaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM +aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDIgRzMwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQChriWyARjcV4g/Ruv5r+LrI3HimtFhZiFf +qq8nUeVuGxbULX1QsFN3vXg6YOJkApt8hpvWGo6t/x8Vf9WVHhLL5hSEBMHfNrMW +n4rjyduYNM7YMxcoRvynyfDStNVNCXJJ+fKH46nafaF9a7I6JaltUkSs+L5u+9ym +c5GQYaYDFCDy54ejiK2toIz/pgslUiXnFgHVy7g1gQyjO/Dh4fxaXc6AcW34Sas+ +O7q414AB+6XrW7PFXmAqMaCvN+ggOp+oMiwMzAkd056OXbxMmO7FGmh77FOm6RQ1 +o9/NgJ8MSPsc9PG/Srj61YxxSscfrf5BmrODXfKEVu+lV0POKa2Mq1W/xPtbAd0j +IaFYAI7D0GoT7RPjEiuA3GfmlbLNHiJuKvhB1PLKFAeNilUSxmn1uIZoL1NesNKq +IcGY5jDjZ1XHm26sGahVpkUG0CM62+tlXSoREfA7T8pt9DTEceT/AFr2XK4jYIVz +8eQQsSWu1ZK7E8EM4DnatDlXtas1qnIhO4M15zHfeiFuuDIIfR0ykRVKYnLP43eh +vNURG3YBZwjgQQvD6xVu+KQZ2aKrr+InUlYrAoosFCT5v0ICvybIxo/gbjh9Uy3l +7ZizlWNof/k19N+IxWA1ksB8aRxhlRbQ694Lrz4EEEVlWFA4r0jyWbYW8jwNkALG +cC4BrTwV1wIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +BjAdBgNVHQ4EFgQU7edvdlq/YOxJW8ald7tyFnGbxD0wDQYJKoZIhvcNAQELBQAD +ggIBAJHfgD9DCX5xwvfrs4iP4VGyvD11+ShdyLyZm3tdquXK4Qr36LLTn91nMX66 +AarHakE7kNQIXLJgapDwyM4DYvmL7ftuKtwGTTwpD4kWilhMSA/ohGHqPHKmd+RC +roijQ1h5fq7KpVMNqT1wvSAZYaRsOPxDMuHBR//47PERIjKWnML2W2mWeyAMQ0Ga 
+W/ZZGYjeVYg3UQt4XAoeo0L9x52ID8DyeAIkVJOviYeIyUqAHerQbj5hLja7NQ4n +lv1mNDthcnPxFlxHBlRJAHpYErAK74X9sbgzdWqTHBLmYF5vHX/JHyPLhGGfHoJE ++V+tYlUkmlKY7VHnoX6XOuYvHxHaU4AshZ6rNRDbIl9qxV6XU/IyAgkwo1jwDQHV +csaxfGl7w/U2Rcxhbl5MlMVerugOXou/983g7aEOGzPuVBj+D77vfoRrQ+NwmNtd +dbINWQeFFSM51vHfqSYP1kjHs6Yi9TM3WpVHn3u6GBVv/9YUZINJ0gpnIdsPNWNg +KCLjsZWDzYWm3S8P52dSbrsvhXz1SnPnxT7AvSESBT/8twNJAlvIJebiVDj1eYeM +HVOyToV7BjjHLPj4sHKNJeV3UvQDHEimUF+IIDBu8oJDqz2XhOdT+yHBTw8imoa4 +WSr2Rz0ZiC3oheGe7IUIarFsNMkd7EgrO3jtZsSOeWmD3n+M +-----END CERTIFICATE----- + +# Issuer: CN=QuoVadis Root CA 3 G3 O=QuoVadis Limited +# Subject: CN=QuoVadis Root CA 3 G3 O=QuoVadis Limited +# Label: "QuoVadis Root CA 3 G3" +# Serial: 268090761170461462463995952157327242137089239581 +# MD5 Fingerprint: df:7d:b9:ad:54:6f:68:a1:df:89:57:03:97:43:b0:d7 +# SHA1 Fingerprint: 48:12:bd:92:3c:a8:c4:39:06:e7:30:6d:27:96:e6:a4:cf:22:2e:7d +# SHA256 Fingerprint: 88:ef:81:de:20:2e:b0:18:45:2e:43:f8:64:72:5c:ea:5f:bd:1f:c2:d9:d2:05:73:07:09:c5:d8:b8:69:0f:46 +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIULvWbAiin23r/1aOp7r0DoM8Sah0wDQYJKoZIhvcNAQEL +BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc +BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMyBHMzAeFw0xMjAxMTIyMDI2MzJaFw00 +MjAxMTIyMDI2MzJaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM +aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDMgRzMwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCzyw4QZ47qFJenMioKVjZ/aEzHs286IxSR +/xl/pcqs7rN2nXrpixurazHb+gtTTK/FpRp5PIpM/6zfJd5O2YIyC0TeytuMrKNu +FoM7pmRLMon7FhY4futD4tN0SsJiCnMK3UmzV9KwCoWdcTzeo8vAMvMBOSBDGzXR +U7Ox7sWTaYI+FrUoRqHe6okJ7UO4BUaKhvVZR74bbwEhELn9qdIoyhA5CcoTNs+c +ra1AdHkrAj80//ogaX3T7mH1urPnMNA3I4ZyYUUpSFlob3emLoG+B01vr87ERROR +FHAGjx+f+IdpsQ7vw4kZ6+ocYfx6bIrc1gMLnia6Et3UVDmrJqMz6nWB2i3ND0/k +A9HvFZcba5DFApCTZgIhsUfei5pKgLlVj7WiL8DWM2fafsSntARE60f75li59wzw +eyuxwHApw0BiLTtIadwjPEjrewl5qW3aqDCYz4ByA4imW0aucnl8CAMhZa634Ryl +sSqiMd5mBPfAdOhx3v89WcyWJhKLhZVXGqtrdQtEPREoPHtht+KPZ0/l7DxMYIBp +VzgeAVuNVejH38DMdyM0SXV89pgR6y3e7UEuFAUCf+D+IOs15xGsIs5XPd7JMG0Q +A4XN8f+MFrXBsj6IbGB/kE+V9/YtrQE5BwT6dYB9v0lQ7e/JxHwc64B+27bQ3RP+ +ydOc17KXqQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +BjAdBgNVHQ4EFgQUxhfQvKjqAkPyGwaZXSuQILnXnOQwDQYJKoZIhvcNAQELBQAD +ggIBADRh2Va1EodVTd2jNTFGu6QHcrxfYWLopfsLN7E8trP6KZ1/AvWkyaiTt3px +KGmPc+FSkNrVvjrlt3ZqVoAh313m6Tqe5T72omnHKgqwGEfcIHB9UqM+WXzBusnI +FUBhynLWcKzSt/Ac5IYp8M7vaGPQtSCKFWGafoaYtMnCdvvMujAWzKNhxnQT5Wvv +oxXqA/4Ti2Tk08HS6IT7SdEQTXlm66r99I0xHnAUrdzeZxNMgRVhvLfZkXdxGYFg +u/BYpbWcC/ePIlUnwEsBbTuZDdQdm2NnL9DuDcpmvJRPpq3t/O5jrFc/ZSXPsoaP +0Aj/uHYUbt7lJ+yreLVTubY/6CD50qi+YUbKh4yE8/nxoGibIh6BJpsQBJFxwAYf +3KDTuVan45gtf4Od34wrnDKOMpTwATwiKp9Dwi7DmDkHOHv8XgBCH/MyJnmDhPbl +8MFREsALHgQjDFSlTC9JxUrRtm5gDWv8a4uFJGS3iQ6rJUdbPM9+Sb3H6QrG2vd+ +DhcI00iX0HGS8A85PjRqHH3Y8iKuu2n0M7SmSFXRDw4m6Oy2Cy2nhTXN/VnIn9HN +PlopNLk9hM6xZdRZkZFWdSHBd575euFgndOtBBj0fOtek49TSiIp+EgrPk2GrFt/ +ywaZWWDYWGWVjUTR939+J399roD1B0y2PpxxVJkES/1Y+Zj0 +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Assured ID Root G2 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Assured ID Root G2 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Assured ID Root G2" +# Serial: 15385348160840213938643033620894905419 +# MD5 Fingerprint: 92:38:b9:f8:63:24:82:65:2c:57:33:e6:fe:81:8f:9d +# SHA1 Fingerprint: a1:4b:48:d9:43:ee:0a:0e:40:90:4f:3c:e0:a4:c0:91:93:51:5d:3f +# SHA256 Fingerprint: 7d:05:eb:b6:82:33:9f:8c:94:51:ee:09:4e:eb:fe:fa:79:53:a1:14:ed:b2:f4:49:49:45:2f:ab:7d:2f:c1:85 +-----BEGIN CERTIFICATE----- 
+MIIDljCCAn6gAwIBAgIQC5McOtY5Z+pnI7/Dr5r0SzANBgkqhkiG9w0BAQsFADBl +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv +b3QgRzIwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBlMQswCQYDVQQG +EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl +cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgRzIwggEi +MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDZ5ygvUj82ckmIkzTz+GoeMVSA +n61UQbVH35ao1K+ALbkKz3X9iaV9JPrjIgwrvJUXCzO/GU1BBpAAvQxNEP4Htecc +biJVMWWXvdMX0h5i89vqbFCMP4QMls+3ywPgym2hFEwbid3tALBSfK+RbLE4E9Hp +EgjAALAcKxHad3A2m67OeYfcgnDmCXRwVWmvo2ifv922ebPynXApVfSr/5Vh88lA +bx3RvpO704gqu52/clpWcTs/1PPRCv4o76Pu2ZmvA9OPYLfykqGxvYmJHzDNw6Yu +YjOuFgJ3RFrngQo8p0Quebg/BLxcoIfhG69Rjs3sLPr4/m3wOnyqi+RnlTGNAgMB +AAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQW +BBTOw0q5mVXyuNtgv6l+vVa1lzan1jANBgkqhkiG9w0BAQsFAAOCAQEAyqVVjOPI +QW5pJ6d1Ee88hjZv0p3GeDgdaZaikmkuOGybfQTUiaWxMTeKySHMq2zNixya1r9I +0jJmwYrA8y8678Dj1JGG0VDjA9tzd29KOVPt3ibHtX2vK0LRdWLjSisCx1BL4Gni +lmwORGYQRI+tBev4eaymG+g3NJ1TyWGqolKvSnAWhsI6yLETcDbYz+70CjTVW0z9 +B5yiutkBclzzTcHdDrEcDcRjvq30FPuJ7KJBDkzMyFdA0G4Dqs0MjomZmWzwPDCv +ON9vvKO+KSAnq3T/EyJ43pdSVR6DtVQgA+6uwE9W3jfMw3+qBCe703e4YtsXfJwo +IhNzbM8m9Yop5w== +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Assured ID Root G3 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Assured ID Root G3 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Assured ID Root G3" +# Serial: 15459312981008553731928384953135426796 +# MD5 Fingerprint: 7c:7f:65:31:0c:81:df:8d:ba:3e:99:e2:5c:ad:6e:fb +# SHA1 Fingerprint: f5:17:a2:4f:9a:48:c6:c9:f8:a2:00:26:9f:dc:0f:48:2c:ab:30:89 +# SHA256 Fingerprint: 7e:37:cb:8b:4c:47:09:0c:ab:36:55:1b:a6:f4:5d:b8:40:68:0f:ba:16:6a:95:2d:b1:00:71:7f:43:05:3f:c2 +-----BEGIN CERTIFICATE----- +MIICRjCCAc2gAwIBAgIQC6Fa+h3foLVJRK/NJKBs7DAKBggqhkjOPQQDAzBlMQsw +CQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cu +ZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3Qg +RzMwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBlMQswCQYDVQQGEwJV +UzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQu +Y29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgRzMwdjAQBgcq +hkjOPQIBBgUrgQQAIgNiAAQZ57ysRGXtzbg/WPuNsVepRC0FFfLvC/8QdJ+1YlJf +Zn4f5dwbRXkLzMZTCp2NXQLZqVneAlr2lSoOjThKiknGvMYDOAdfVdp+CW7if17Q +RSAPWXYQ1qAk8C3eNvJsKTmjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/ +BAQDAgGGMB0GA1UdDgQWBBTL0L2p4ZgFUaFNN6KDec6NHSrkhDAKBggqhkjOPQQD +AwNnADBkAjAlpIFFAmsSS3V0T8gj43DydXLefInwz5FyYZ5eEJJZVrmDxxDnOOlY +JjZ91eQ0hjkCMHw2U/Aw5WJjOpnitqM7mzT6HtoQknFekROn3aRukswy1vUhZscv +6pZjamVFkpUBtA== +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Global Root G2 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Global Root G2 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Global Root G2" +# Serial: 4293743540046975378534879503202253541 +# MD5 Fingerprint: e4:a6:8a:c8:54:ac:52:42:46:0a:fd:72:48:1b:2a:44 +# SHA1 Fingerprint: df:3c:24:f9:bf:d6:66:76:1b:26:80:73:fe:06:d1:cc:8d:4f:82:a4 +# SHA256 Fingerprint: cb:3c:cb:b7:60:31:e5:e0:13:8f:8d:d3:9a:23:f9:de:47:ff:c3:5e:43:c1:14:4c:ea:27:d4:6a:5a:b1:cb:5f +-----BEGIN CERTIFICATE----- +MIIDjjCCAnagAwIBAgIQAzrx5qcRqaC7KGSxHQn65TANBgkqhkiG9w0BAQsFADBh +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH +MjAeFw0xMzA4MDExMjAwMDBaFw0zODAxMTUxMjAwMDBaMGExCzAJBgNVBAYTAlVT +MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j 
+b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IEcyMIIBIjANBgkqhkiG +9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuzfNNNx7a8myaJCtSnX/RrohCgiN9RlUyfuI +2/Ou8jqJkTx65qsGGmvPrC3oXgkkRLpimn7Wo6h+4FR1IAWsULecYxpsMNzaHxmx +1x7e/dfgy5SDN67sH0NO3Xss0r0upS/kqbitOtSZpLYl6ZtrAGCSYP9PIUkY92eQ +q2EGnI/yuum06ZIya7XzV+hdG82MHauVBJVJ8zUtluNJbd134/tJS7SsVQepj5Wz +tCO7TG1F8PapspUwtP1MVYwnSlcUfIKdzXOS0xZKBgyMUNGPHgm+F6HmIcr9g+UQ +vIOlCsRnKPZzFBQ9RnbDhxSJITRNrw9FDKZJobq7nMWxM4MphQIDAQABo0IwQDAP +BgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAdBgNVHQ4EFgQUTiJUIBiV +5uNu5g/6+rkS7QYXjzkwDQYJKoZIhvcNAQELBQADggEBAGBnKJRvDkhj6zHd6mcY +1Yl9PMWLSn/pvtsrF9+wX3N3KjITOYFnQoQj8kVnNeyIv/iPsGEMNKSuIEyExtv4 +NeF22d+mQrvHRAiGfzZ0JFrabA0UWTW98kndth/Jsw1HKj2ZL7tcu7XUIOGZX1NG +Fdtom/DzMNU+MeKNhJ7jitralj41E6Vf8PlwUHBHQRFXGU7Aj64GxJUTFy8bJZ91 +8rGOmaFvE7FBcf6IKshPECBV1/MUReXgRPTqh5Uykw7+U0b6LJ3/iyK5S9kJRaTe +pLiaWN0bfVKfjllDiIGknibVb63dDcY3fe0Dkhvld1927jyNxF1WW6LZZm6zNTfl +MrY= +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Global Root G3 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Global Root G3 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Global Root G3" +# Serial: 7089244469030293291760083333884364146 +# MD5 Fingerprint: f5:5d:a4:50:a5:fb:28:7e:1e:0f:0d:cc:96:57:56:ca +# SHA1 Fingerprint: 7e:04:de:89:6a:3e:66:6d:00:e6:87:d3:3f:fa:d9:3b:e8:3d:34:9e +# SHA256 Fingerprint: 31:ad:66:48:f8:10:41:38:c7:38:f3:9e:a4:32:01:33:39:3e:3a:18:cc:02:29:6e:f9:7c:2a:c9:ef:67:31:d0 +-----BEGIN CERTIFICATE----- +MIICPzCCAcWgAwIBAgIQBVVWvPJepDU1w6QP1atFcjAKBggqhkjOPQQDAzBhMQsw +CQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cu +ZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBHMzAe +Fw0xMzA4MDExMjAwMDBaFw0zODAxMTUxMjAwMDBaMGExCzAJBgNVBAYTAlVTMRUw +EwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5jb20x +IDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IEczMHYwEAYHKoZIzj0CAQYF +K4EEACIDYgAE3afZu4q4C/sLfyHS8L6+c/MzXRq8NOrexpu80JX28MzQC7phW1FG +fp4tn+6OYwwX7Adw9c+ELkCDnOg/QW07rdOkFFk2eJ0DQ+4QE2xy3q6Ip6FrtUPO +Z9wj/wMco+I+o0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAd +BgNVHQ4EFgQUs9tIpPmhxdiuNkHMEWNpYim8S8YwCgYIKoZIzj0EAwMDaAAwZQIx +AK288mw/EkrRLTnDCgmXc/SINoyIJ7vmiI1Qhadj+Z4y3maTD/HMsQmP3Wyr+mt/ +oAIwOWZbwmSNuJ5Q3KjVSaLtx9zRSX8XAbjIho9OjIgrqJqpisXRAL34VOKa5Vt8 +sycX +-----END CERTIFICATE----- + +# Issuer: CN=DigiCert Trusted Root G4 O=DigiCert Inc OU=www.digicert.com +# Subject: CN=DigiCert Trusted Root G4 O=DigiCert Inc OU=www.digicert.com +# Label: "DigiCert Trusted Root G4" +# Serial: 7451500558977370777930084869016614236 +# MD5 Fingerprint: 78:f2:fc:aa:60:1f:2f:b4:eb:c9:37:ba:53:2e:75:49 +# SHA1 Fingerprint: dd:fb:16:cd:49:31:c9:73:a2:03:7d:3f:c8:3a:4d:7d:77:5d:05:e4 +# SHA256 Fingerprint: 55:2f:7b:dc:f1:a7:af:9e:6c:e6:72:01:7f:4f:12:ab:f7:72:40:c7:8e:76:1a:c2:03:d1:d9:d2:0a:c8:99:88 +-----BEGIN CERTIFICATE----- +MIIFkDCCA3igAwIBAgIQBZsbV56OITLiOQe9p3d1XDANBgkqhkiG9w0BAQwFADBi +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSEwHwYDVQQDExhEaWdpQ2VydCBUcnVzdGVkIFJvb3Qg +RzQwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBiMQswCQYDVQQGEwJV +UzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQu +Y29tMSEwHwYDVQQDExhEaWdpQ2VydCBUcnVzdGVkIFJvb3QgRzQwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQC/5pBzaN675F1KPDAiMGkz7MKnJS7JIT3y +ithZwuEppz1Yq3aaza57G4QNxDAf8xukOBbrVsaXbR2rsnnyyhHS5F/WBTxSD1If +xp4VpX6+n6lXFllVcq9ok3DCsrp1mWpzMpTREEQQLt+C8weE5nQ7bXHiLQwb7iDV +ySAdYyktzuxeTsiT+CFhmzTrBcZe7FsavOvJz82sNEBfsXpm7nfISKhmV1efVFiO 
+DCu3T6cw2Vbuyntd463JT17lNecxy9qTXtyOj4DatpGYQJB5w3jHtrHEtWoYOAMQ +jdjUN6QuBX2I9YI+EJFwq1WCQTLX2wRzKm6RAXwhTNS8rhsDdV14Ztk6MUSaM0C/ +CNdaSaTC5qmgZ92kJ7yhTzm1EVgX9yRcRo9k98FpiHaYdj1ZXUJ2h4mXaXpI8OCi +EhtmmnTK3kse5w5jrubU75KSOp493ADkRSWJtppEGSt+wJS00mFt6zPZxd9LBADM +fRyVw4/3IbKyEbe7f/LVjHAsQWCqsWMYRJUadmJ+9oCw++hkpjPRiQfhvbfmQ6QY +uKZ3AeEPlAwhHbJUKSWJbOUOUlFHdL4mrLZBdd56rF+NP8m800ERElvlEFDrMcXK +chYiCd98THU/Y+whX8QgUWtvsauGi0/C1kVfnSD8oR7FwI+isX4KJpn15GkvmB0t +9dmpsh3lGwIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB +hjAdBgNVHQ4EFgQU7NfjgtJxXWRM3y5nP+e6mK4cD08wDQYJKoZIhvcNAQEMBQAD +ggIBALth2X2pbL4XxJEbw6GiAI3jZGgPVs93rnD5/ZpKmbnJeFwMDF/k5hQpVgs2 +SV1EY+CtnJYYZhsjDT156W1r1lT40jzBQ0CuHVD1UvyQO7uYmWlrx8GnqGikJ9yd ++SeuMIW59mdNOj6PWTkiU0TryF0Dyu1Qen1iIQqAyHNm0aAFYF/opbSnr6j3bTWc +fFqK1qI4mfN4i/RN0iAL3gTujJtHgXINwBQy7zBZLq7gcfJW5GqXb5JQbZaNaHqa +sjYUegbyJLkJEVDXCLG4iXqEI2FCKeWjzaIgQdfRnGTZ6iahixTXTBmyUEFxPT9N +cCOGDErcgdLMMpSEDQgJlxxPwO5rIHQw0uA5NBCFIRUBCOhVMt5xSdkoF1BN5r5N +0XWs0Mr7QbhDparTwwVETyw2m+L64kW4I1NsBm9nVX9GtUw/bihaeSbSpKhil9Ie +4u1Ki7wb/UdKDd9nZn6yW0HQO+T0O/QEY+nvwlQAUaCKKsnOeMzV6ocEGLPOr0mI +r/OSmbaz5mEP0oUA51Aa5BuVnRmhuZyxm7EAHu/QD09CbMkKvO5D+jpxpchNJqU1 +/YldvIViHTLSoCtU7ZpXwdv6EM8Zt4tKG48BtieVU+i2iW1bvGjUI+iLUaJW+fCm +gKDWHrO8Dw9TdSmq6hN35N6MgSGtBxBHEa2HPQfRdbzP82Z+ +-----END CERTIFICATE----- + +# Issuer: CN=COMODO RSA Certification Authority O=COMODO CA Limited +# Subject: CN=COMODO RSA Certification Authority O=COMODO CA Limited +# Label: "COMODO RSA Certification Authority" +# Serial: 101909084537582093308941363524873193117 +# MD5 Fingerprint: 1b:31:b0:71:40:36:cc:14:36:91:ad:c4:3e:fd:ec:18 +# SHA1 Fingerprint: af:e5:d2:44:a8:d1:19:42:30:ff:47:9f:e2:f8:97:bb:cd:7a:8c:b4 +# SHA256 Fingerprint: 52:f0:e1:c4:e5:8e:c6:29:29:1b:60:31:7f:07:46:71:b8:5d:7e:a8:0d:5b:07:27:34:63:53:4b:32:b4:02:34 +-----BEGIN CERTIFICATE----- +MIIF2DCCA8CgAwIBAgIQTKr5yttjb+Af907YWwOGnTANBgkqhkiG9w0BAQwFADCB +hTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G +A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNV +BAMTIkNPTU9ETyBSU0EgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAwMTE5 +MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgT +EkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMR +Q09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBSU0EgQ2VydGlmaWNh +dGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCR +6FSS0gpWsawNJN3Fz0RndJkrN6N9I3AAcbxT38T6KhKPS38QVr2fcHK3YX/JSw8X +pz3jsARh7v8Rl8f0hj4K+j5c+ZPmNHrZFGvnnLOFoIJ6dq9xkNfs/Q36nGz637CC +9BR++b7Epi9Pf5l/tfxnQ3K9DADWietrLNPtj5gcFKt+5eNu/Nio5JIk2kNrYrhV +/erBvGy2i/MOjZrkm2xpmfh4SDBF1a3hDTxFYPwyllEnvGfDyi62a+pGx8cgoLEf +Zd5ICLqkTqnyg0Y3hOvozIFIQ2dOciqbXL1MGyiKXCJ7tKuY2e7gUYPDCUZObT6Z ++pUX2nwzV0E8jVHtC7ZcryxjGt9XyD+86V3Em69FmeKjWiS0uqlWPc9vqv9JWL7w +qP/0uK3pN/u6uPQLOvnoQ0IeidiEyxPx2bvhiWC4jChWrBQdnArncevPDt09qZah +SL0896+1DSJMwBGB7FY79tOi4lu3sgQiUpWAk2nojkxl8ZEDLXB0AuqLZxUpaVIC +u9ffUGpVRr+goyhhf3DQw6KqLCGqR84onAZFdr+CGCe01a60y1Dma/RMhnEw6abf +Fobg2P9A3fvQQoh/ozM6LlweQRGBY84YcWsr7KaKtzFcOmpH4MN5WdYgGq/yapiq +crxXStJLnbsQ/LBMQeXtHT1eKJ2czL+zUdqnR+WEUwIDAQABo0IwQDAdBgNVHQ4E +FgQUu69+Aj36pvE8hI6t7jiY7NkyMtQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB +/wQFMAMBAf8wDQYJKoZIhvcNAQEMBQADggIBAArx1UaEt65Ru2yyTUEUAJNMnMvl +wFTPoCWOAvn9sKIN9SCYPBMtrFaisNZ+EZLpLrqeLppysb0ZRGxhNaKatBYSaVqM +4dc+pBroLwP0rmEdEBsqpIt6xf4FpuHA1sj+nq6PK7o9mfjYcwlYRm6mnPTXJ9OV +2jeDchzTc+CiR5kDOF3VSXkAKRzH7JsgHAckaVd4sjn8OoSgtZx8jb8uk2Intzna +FxiuvTwJaP+EmzzV1gsD41eeFPfR60/IvYcjt7ZJQ3mFXLrrkguhxuhoqEwWsRqZ 
+CuhTLJK7oQkYdQxlqHvLI7cawiiFwxv/0Cti76R7CZGYZ4wUAc1oBmpjIXUDgIiK +boHGhfKppC3n9KUkEEeDys30jXlYsQab5xoq2Z0B15R97QNKyvDb6KkBPvVWmcke +jkk9u+UJueBPSZI9FoJAzMxZxuY67RIuaTxslbH9qh17f4a+Hg4yRvv7E491f0yL +S0Zj/gA0QHDBw7mh3aZw4gSzQbzpgJHqZJx64SIDqZxubw5lT2yHh17zbqD5daWb +QOhTsiedSrnAdyGN/4fy3ryM7xfft0kL0fJuMAsaDk527RH89elWsn2/x20Kk4yl +0MC2Hb46TpSi125sC8KKfPog88Tk5c0NqMuRkrF8hey1FGlmDoLnzc7ILaZRfyHB +NVOFBkpdn627G190 +-----END CERTIFICATE----- + +# Issuer: CN=USERTrust RSA Certification Authority O=The USERTRUST Network +# Subject: CN=USERTrust RSA Certification Authority O=The USERTRUST Network +# Label: "USERTrust RSA Certification Authority" +# Serial: 2645093764781058787591871645665788717 +# MD5 Fingerprint: 1b:fe:69:d1:91:b7:19:33:a3:72:a8:0f:e1:55:e5:b5 +# SHA1 Fingerprint: 2b:8f:1b:57:33:0d:bb:a2:d0:7a:6c:51:f7:0e:e9:0d:da:b9:ad:8e +# SHA256 Fingerprint: e7:93:c9:b0:2f:d8:aa:13:e2:1c:31:22:8a:cc:b0:81:19:64:3b:74:9c:89:89:64:b1:74:6d:46:c3:d4:cb:d2 +-----BEGIN CERTIFICATE----- +MIIF3jCCA8agAwIBAgIQAf1tMPyjylGoG7xkDjUDLTANBgkqhkiG9w0BAQwFADCB +iDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0pl +cnNleSBDaXR5MR4wHAYDVQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNV +BAMTJVVTRVJUcnVzdCBSU0EgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAw +MjAxMDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBiDELMAkGA1UEBhMCVVMxEzARBgNV +BAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNleSBDaXR5MR4wHAYDVQQKExVU +aGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMTJVVTRVJUcnVzdCBSU0EgQ2Vy +dGlmaWNhdGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK +AoICAQCAEmUXNg7D2wiz0KxXDXbtzSfTTK1Qg2HiqiBNCS1kCdzOiZ/MPans9s/B +3PHTsdZ7NygRK0faOca8Ohm0X6a9fZ2jY0K2dvKpOyuR+OJv0OwWIJAJPuLodMkY +tJHUYmTbf6MG8YgYapAiPLz+E/CHFHv25B+O1ORRxhFnRghRy4YUVD+8M/5+bJz/ +Fp0YvVGONaanZshyZ9shZrHUm3gDwFA66Mzw3LyeTP6vBZY1H1dat//O+T23LLb2 +VN3I5xI6Ta5MirdcmrS3ID3KfyI0rn47aGYBROcBTkZTmzNg95S+UzeQc0PzMsNT +79uq/nROacdrjGCT3sTHDN/hMq7MkztReJVni+49Vv4M0GkPGw/zJSZrM233bkf6 +c0Plfg6lZrEpfDKEY1WJxA3Bk1QwGROs0303p+tdOmw1XNtB1xLaqUkL39iAigmT +Yo61Zs8liM2EuLE/pDkP2QKe6xJMlXzzawWpXhaDzLhn4ugTncxbgtNMs+1b/97l +c6wjOy0AvzVVdAlJ2ElYGn+SNuZRkg7zJn0cTRe8yexDJtC/QV9AqURE9JnnV4ee +UB9XVKg+/XRjL7FQZQnmWEIuQxpMtPAlR1n6BB6T1CZGSlCBst6+eLf8ZxXhyVeE +Hg9j1uliutZfVS7qXMYoCAQlObgOK6nyTJccBz8NUvXt7y+CDwIDAQABo0IwQDAd +BgNVHQ4EFgQUU3m/WqorSs9UgOHYm8Cd8rIDZsswDgYDVR0PAQH/BAQDAgEGMA8G +A1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEMBQADggIBAFzUfA3P9wF9QZllDHPF +Up/L+M+ZBn8b2kMVn54CVVeWFPFSPCeHlCjtHzoBN6J2/FNQwISbxmtOuowhT6KO +VWKR82kV2LyI48SqC/3vqOlLVSoGIG1VeCkZ7l8wXEskEVX/JJpuXior7gtNn3/3 +ATiUFJVDBwn7YKnuHKsSjKCaXqeYalltiz8I+8jRRa8YFWSQEg9zKC7F4iRO/Fjs +8PRF/iKz6y+O0tlFYQXBl2+odnKPi4w2r78NBc5xjeambx9spnFixdjQg3IM8WcR +iQycE0xyNN+81XHfqnHd4blsjDwSXWXavVcStkNr/+XeTWYRUc+ZruwXtuhxkYze +Sf7dNXGiFSeUHM9h4ya7b6NnJSFd5t0dCy5oGzuCr+yDZ4XUmFF0sbmZgIn/f3gZ +XHlKYC6SQK5MNyosycdiyA5d9zZbyuAlJQG03RoHnHcAP9Dc1ew91Pq7P8yF1m9/ +qS3fuQL39ZeatTXaw2ewh0qpKJ4jjv9cJ2vhsE/zB+4ALtRZh8tSQZXq9EfX7mRB +VXyNWQKV3WKdwrnuWih0hKWbt5DHDAff9Yk2dDLWKMGwsAvgnEzDHNb842m1R0aB +L6KCq9NjRHDEjf8tM7qtj3u1cIiuPhnPQCjY/MiQu12ZIvVS5ljFH4gxQ+6IHdfG +jjxDah2nGN59PRbxYvnKkKj9 +-----END CERTIFICATE----- + +# Issuer: CN=USERTrust ECC Certification Authority O=The USERTRUST Network +# Subject: CN=USERTrust ECC Certification Authority O=The USERTRUST Network +# Label: "USERTrust ECC Certification Authority" +# Serial: 123013823720199481456569720443997572134 +# MD5 Fingerprint: fa:68:bc:d9:b5:7f:ad:fd:c9:1d:06:83:28:cc:24:c1 +# SHA1 Fingerprint: d1:cb:ca:5d:b2:d5:2a:7f:69:3b:67:4d:e5:f0:5a:1d:0c:95:7d:f0 +# SHA256 Fingerprint: 
4f:f4:60:d5:4b:9c:86:da:bf:bc:fc:57:12:e0:40:0d:2b:ed:3f:bc:4d:4f:bd:aa:86:e0:6a:dc:d2:a9:ad:7a +-----BEGIN CERTIFICATE----- +MIICjzCCAhWgAwIBAgIQXIuZxVqUxdJxVt7NiYDMJjAKBggqhkjOPQQDAzCBiDEL +MAkGA1UEBhMCVVMxEzARBgNVBAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNl +eSBDaXR5MR4wHAYDVQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMT +JVVTRVJUcnVzdCBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAwMjAx +MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgT +Ck5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNleSBDaXR5MR4wHAYDVQQKExVUaGUg +VVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMTJVVTRVJUcnVzdCBFQ0MgQ2VydGlm +aWNhdGlvbiBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQarFRaqflo +I+d61SRvU8Za2EurxtW20eZzca7dnNYMYf3boIkDuAUU7FfO7l0/4iGzzvfUinng +o4N+LZfQYcTxmdwlkWOrfzCjtHDix6EznPO/LlxTsV+zfTJ/ijTjeXmjQjBAMB0G +A1UdDgQWBBQ64QmG1M8ZwpZ2dEl23OA1xmNjmjAOBgNVHQ8BAf8EBAMCAQYwDwYD +VR0TAQH/BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjA2Z6EWCNzklwBBHU6+4WMB +zzuqQhFkoJ2UOQIReVx7Hfpkue4WQrO/isIJxOzksU0CMQDpKmFHjFJKS04YcPbW +RNZu9YO6bVi9JNlWSOrvxKJGgYhqOkbRqZtNyWHa0V1Xahg= +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R4 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R4 +# Label: "GlobalSign ECC Root CA - R4" +# Serial: 14367148294922964480859022125800977897474 +# MD5 Fingerprint: 20:f0:27:68:d1:7e:a0:9d:0e:e6:2a:ca:df:5c:89:8e +# SHA1 Fingerprint: 69:69:56:2e:40:80:f4:24:a1:e7:19:9f:14:ba:f3:ee:58:ab:6a:bb +# SHA256 Fingerprint: be:c9:49:11:c2:95:56:76:db:6c:0a:55:09:86:d7:6e:3b:a0:05:66:7c:44:2c:97:62:b4:fb:b7:73:de:22:8c +-----BEGIN CERTIFICATE----- +MIIB4TCCAYegAwIBAgIRKjikHJYKBN5CsiilC+g0mAIwCgYIKoZIzj0EAwIwUDEk +MCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBDQSAtIFI0MRMwEQYDVQQKEwpH +bG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWduMB4XDTEyMTExMzAwMDAwMFoX +DTM4MDExOTAzMTQwN1owUDEkMCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBD +QSAtIFI0MRMwEQYDVQQKEwpHbG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWdu +MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEuMZ5049sJQ6fLjkZHAOkrprlOQcJ +FspjsbmG+IpXwVfOQvpzofdlQv8ewQCybnMO/8ch5RikqtlxP6jUuc6MHaNCMEAw +DgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFFSwe61F +uOJAf/sKbvu+M8k8o4TVMAoGCCqGSM49BAMCA0gAMEUCIQDckqGgE6bPA7DmxCGX +kPoUVy0D7O48027KqGx2vKLeuwIgJ6iFJzWbVsaj8kfSt24bAgAXqmemFZHe+pTs +ewv4n4Q= +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R5 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R5 +# Label: "GlobalSign ECC Root CA - R5" +# Serial: 32785792099990507226680698011560947931244 +# MD5 Fingerprint: 9f:ad:3b:1c:02:1e:8a:ba:17:74:38:81:0c:a2:bc:08 +# SHA1 Fingerprint: 1f:24:c6:30:cd:a4:18:ef:20:69:ff:ad:4f:dd:5f:46:3a:1b:69:aa +# SHA256 Fingerprint: 17:9f:bc:14:8a:3d:d0:0f:d2:4e:a1:34:58:cc:43:bf:a7:f5:9c:81:82:d7:83:a5:13:f6:eb:ec:10:0c:89:24 +-----BEGIN CERTIFICATE----- +MIICHjCCAaSgAwIBAgIRYFlJ4CYuu1X5CneKcflK2GwwCgYIKoZIzj0EAwMwUDEk +MCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBDQSAtIFI1MRMwEQYDVQQKEwpH +bG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWduMB4XDTEyMTExMzAwMDAwMFoX +DTM4MDExOTAzMTQwN1owUDEkMCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBD +QSAtIFI1MRMwEQYDVQQKEwpHbG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWdu +MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAER0UOlvt9Xb/pOdEh+J8LttV7HpI6SFkc +8GIxLcB6KP4ap1yztsyX50XUWPrRd21DosCHZTQKH3rd6zwzocWdTaRvQZU4f8ke +hOvRnkmSh5SHDDqFSmafnVmTTZdhBoZKo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYD +VR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUPeYpSJvqB8ohREom3m7e0oPQn1kwCgYI +KoZIzj0EAwMDaAAwZQIxAOVpEslu28YxuglB4Zf4+/2a4n0Sye18ZNPLBSWLVtmg +515dTguDnFt2KaAJJiFqYgIwcdK1j1zqO+F4CYWodZI7yFz9SO8NdCKoCOJuxUnO 
+xwy8p2Fp8fc74SrL+SvzZpA3 +-----END CERTIFICATE----- + +# Issuer: CN=Staat der Nederlanden EV Root CA O=Staat der Nederlanden +# Subject: CN=Staat der Nederlanden EV Root CA O=Staat der Nederlanden +# Label: "Staat der Nederlanden EV Root CA" +# Serial: 10000013 +# MD5 Fingerprint: fc:06:af:7b:e8:1a:f1:9a:b4:e8:d2:70:1f:c0:f5:ba +# SHA1 Fingerprint: 76:e2:7e:c1:4f:db:82:c1:c0:a6:75:b5:05:be:3d:29:b4:ed:db:bb +# SHA256 Fingerprint: 4d:24:91:41:4c:fe:95:67:46:ec:4c:ef:a6:cf:6f:72:e2:8a:13:29:43:2f:9d:8a:90:7a:c4:cb:5d:ad:c1:5a +-----BEGIN CERTIFICATE----- +MIIFcDCCA1igAwIBAgIEAJiWjTANBgkqhkiG9w0BAQsFADBYMQswCQYDVQQGEwJO +TDEeMBwGA1UECgwVU3RhYXQgZGVyIE5lZGVybGFuZGVuMSkwJwYDVQQDDCBTdGFh +dCBkZXIgTmVkZXJsYW5kZW4gRVYgUm9vdCBDQTAeFw0xMDEyMDgxMTE5MjlaFw0y +MjEyMDgxMTEwMjhaMFgxCzAJBgNVBAYTAk5MMR4wHAYDVQQKDBVTdGFhdCBkZXIg +TmVkZXJsYW5kZW4xKTAnBgNVBAMMIFN0YWF0IGRlciBOZWRlcmxhbmRlbiBFViBS +b290IENBMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA48d+ifkkSzrS +M4M1LGns3Amk41GoJSt5uAg94JG6hIXGhaTK5skuU6TJJB79VWZxXSzFYGgEt9nC +UiY4iKTWO0Cmws0/zZiTs1QUWJZV1VD+hq2kY39ch/aO5ieSZxeSAgMs3NZmdO3d +Z//BYY1jTw+bbRcwJu+r0h8QoPnFfxZpgQNH7R5ojXKhTbImxrpsX23Wr9GxE46p +rfNeaXUmGD5BKyF/7otdBwadQ8QpCiv8Kj6GyzyDOvnJDdrFmeK8eEEzduG/L13l +pJhQDBXd4Pqcfzho0LKmeqfRMb1+ilgnQ7O6M5HTp5gVXJrm0w912fxBmJc+qiXb +j5IusHsMX/FjqTf5m3VpTCgmJdrV8hJwRVXj33NeN/UhbJCONVrJ0yPr08C+eKxC +KFhmpUZtcALXEPlLVPxdhkqHz3/KRawRWrUgUY0viEeXOcDPusBCAUCZSCELa6fS +/ZbV0b5GnUngC6agIk440ME8MLxwjyx1zNDFjFE7PZQIZCZhfbnDZY8UnCHQqv0X +cgOPvZuM5l5Tnrmd74K74bzickFbIZTTRTeU0d8JOV3nI6qaHcptqAqGhYqCvkIH +1vI4gnPah1vlPNOePqc7nvQDs/nxfRN0Av+7oeX6AHkcpmZBiFxgV6YuCcS6/ZrP +px9Aw7vMWgpVSzs4dlG4Y4uElBbmVvMCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB +/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFP6rAJCYniT8qcwaivsnuL8wbqg7 +MA0GCSqGSIb3DQEBCwUAA4ICAQDPdyxuVr5Os7aEAJSrR8kN0nbHhp8dB9O2tLsI +eK9p0gtJ3jPFrK3CiAJ9Brc1AsFgyb/E6JTe1NOpEyVa/m6irn0F3H3zbPB+po3u +2dfOWBfoqSmuc0iH55vKbimhZF8ZE/euBhD/UcabTVUlT5OZEAFTdfETzsemQUHS +v4ilf0X8rLiltTMMgsT7B/Zq5SWEXwbKwYY5EdtYzXc7LMJMD16a4/CrPmEbUCTC +wPTxGfARKbalGAKb12NMcIxHowNDXLldRqANb/9Zjr7dn3LDWyvfjFvO5QxGbJKy +CqNMVEIYFRIYvdr8unRu/8G2oGTYqV9Vrp9canaW2HNnh/tNf1zuacpzEPuKqf2e +vTY4SUmH9A4U8OmHuD+nT3pajnnUk+S7aFKErGzp85hwVXIy+TSrK0m1zSBi5Dp6 +Z2Orltxtrpfs/J92VoguZs9btsmksNcFuuEnL5O7Jiqik7Ab846+HUCjuTaPPoIa +Gl6I6lD4WeKDRikL40Rc4ZW2aZCaFG+XroHPaO+Zmr615+F/+PoTRxZMzG0IQOeL +eG9QgkRQP2YGiqtDhFZKDyAthg710tvSeopLzaXoTvFeJiUBWSOgftL2fiFX1ye8 +FVdMpEbB4IMeDExNH08GGeL5qPQ6gqGyeUN51q1veieQA6TqJIc/2b3Z6fJfUEkc +7uzXLg== +-----END CERTIFICATE----- + +# Issuer: CN=IdenTrust Commercial Root CA 1 O=IdenTrust +# Subject: CN=IdenTrust Commercial Root CA 1 O=IdenTrust +# Label: "IdenTrust Commercial Root CA 1" +# Serial: 13298821034946342390520003877796839426 +# MD5 Fingerprint: b3:3e:77:73:75:ee:a0:d3:e3:7e:49:63:49:59:bb:c7 +# SHA1 Fingerprint: df:71:7e:aa:4a:d9:4e:c9:55:84:99:60:2d:48:de:5f:bc:f0:3a:25 +# SHA256 Fingerprint: 5d:56:49:9b:e4:d2:e0:8b:cf:ca:d0:8a:3e:38:72:3d:50:50:3b:de:70:69:48:e4:2f:55:60:30:19:e5:28:ae +-----BEGIN CERTIFICATE----- +MIIFYDCCA0igAwIBAgIQCgFCgAAAAUUjyES1AAAAAjANBgkqhkiG9w0BAQsFADBK +MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MScwJQYDVQQDEx5JZGVu +VHJ1c3QgQ29tbWVyY2lhbCBSb290IENBIDEwHhcNMTQwMTE2MTgxMjIzWhcNMzQw +MTE2MTgxMjIzWjBKMQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MScw +JQYDVQQDEx5JZGVuVHJ1c3QgQ29tbWVyY2lhbCBSb290IENBIDEwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQCnUBneP5k91DNG8W9RYYKyqU+PZ4ldhNlT +3Qwo2dfw/66VQ3KZ+bVdfIrBQuExUHTRgQ18zZshq0PirK1ehm7zCYofWjK9ouuU ++ehcCuz/mNKvcbO0U59Oh++SvL3sTzIwiEsXXlfEU8L2ApeN2WIrvyQfYo3fw7gp 
+S0l4PJNgiCL8mdo2yMKi1CxUAGc1bnO/AljwpN3lsKImesrgNqUZFvX9t++uP0D1 +bVoE/c40yiTcdCMbXTMTEl3EASX2MN0CXZ/g1Ue9tOsbobtJSdifWwLziuQkkORi +T0/Br4sOdBeo0XKIanoBScy0RnnGF7HamB4HWfp1IYVl3ZBWzvurpWCdxJ35UrCL +vYf5jysjCiN2O/cz4ckA82n5S6LgTrx+kzmEB/dEcH7+B1rlsazRGMzyNeVJSQjK +Vsk9+w8YfYs7wRPCTY/JTw436R+hDmrfYi7LNQZReSzIJTj0+kuniVyc0uMNOYZK +dHzVWYfCP04MXFL0PfdSgvHqo6z9STQaKPNBiDoT7uje/5kdX7rL6B7yuVBgwDHT +c+XvvqDtMwt0viAgxGds8AgDelWAf0ZOlqf0Hj7h9tgJ4TNkK2PXMl6f+cB7D3hv +l7yTmvmcEpB4eoCHFddydJxVdHixuuFucAS6T6C6aMN7/zHwcz09lCqxC0EOoP5N +iGVreTO01wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB +/zAdBgNVHQ4EFgQU7UQZwNPwBovupHu+QucmVMiONnYwDQYJKoZIhvcNAQELBQAD +ggIBAA2ukDL2pkt8RHYZYR4nKM1eVO8lvOMIkPkp165oCOGUAFjvLi5+U1KMtlwH +6oi6mYtQlNeCgN9hCQCTrQ0U5s7B8jeUeLBfnLOic7iPBZM4zY0+sLj7wM+x8uwt +LRvM7Kqas6pgghstO8OEPVeKlh6cdbjTMM1gCIOQ045U8U1mwF10A0Cj7oV+wh93 +nAbowacYXVKV7cndJZ5t+qntozo00Fl72u1Q8zW/7esUTTHHYPTa8Yec4kjixsU3 ++wYQ+nVZZjFHKdp2mhzpgq7vmrlR94gjmmmVYjzlVYA211QC//G5Xc7UI2/YRYRK +W2XviQzdFKcgyxilJbQN+QHwotL0AMh0jqEqSI5l2xPE4iUXfeu+h1sXIFRRk0pT +AwvsXcoz7WL9RccvW9xYoIA55vrX/hMUpu09lEpCdNTDd1lzzY9GvlU47/rokTLq +l1gEIt44w8y8bckzOmoKaT+gyOpyj4xjhiO9bTyWnpXgSUyqorkqG5w2gXjtw+hG +4iZZRHUe2XWJUc0QhJ1hYMtd+ZciTY6Y5uN/9lu7rs3KSoFrXgvzUeF0K+l+J6fZ +mUlO+KWA2yUPHGNiiskzZ2s8EIPGrd6ozRaOjfAHN3Gf8qv8QfXBi+wAN10J5U6A +7/qxXDgGpRtK4dw4LTzcqx+QGtVKnO7RcGzM7vRX+Bi6hG6H +-----END CERTIFICATE----- + +# Issuer: CN=IdenTrust Public Sector Root CA 1 O=IdenTrust +# Subject: CN=IdenTrust Public Sector Root CA 1 O=IdenTrust +# Label: "IdenTrust Public Sector Root CA 1" +# Serial: 13298821034946342390521976156843933698 +# MD5 Fingerprint: 37:06:a5:b0:fc:89:9d:ba:f4:6b:8c:1a:64:cd:d5:ba +# SHA1 Fingerprint: ba:29:41:60:77:98:3f:f4:f3:ef:f2:31:05:3b:2e:ea:6d:4d:45:fd +# SHA256 Fingerprint: 30:d0:89:5a:9a:44:8a:26:20:91:63:55:22:d1:f5:20:10:b5:86:7a:ca:e1:2c:78:ef:95:8f:d4:f4:38:9f:2f +-----BEGIN CERTIFICATE----- +MIIFZjCCA06gAwIBAgIQCgFCgAAAAUUjz0Z8AAAAAjANBgkqhkiG9w0BAQsFADBN +MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MSowKAYDVQQDEyFJZGVu +VHJ1c3QgUHVibGljIFNlY3RvciBSb290IENBIDEwHhcNMTQwMTE2MTc1MzMyWhcN +MzQwMTE2MTc1MzMyWjBNMQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0 +MSowKAYDVQQDEyFJZGVuVHJ1c3QgUHVibGljIFNlY3RvciBSb290IENBIDEwggIi +MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC2IpT8pEiv6EdrCvsnduTyP4o7 +ekosMSqMjbCpwzFrqHd2hCa2rIFCDQjrVVi7evi8ZX3yoG2LqEfpYnYeEe4IFNGy +RBb06tD6Hi9e28tzQa68ALBKK0CyrOE7S8ItneShm+waOh7wCLPQ5CQ1B5+ctMlS +bdsHyo+1W/CD80/HLaXIrcuVIKQxKFdYWuSNG5qrng0M8gozOSI5Cpcu81N3uURF +/YTLNiCBWS2ab21ISGHKTN9T0a9SvESfqy9rg3LvdYDaBjMbXcjaY8ZNzaxmMc3R +3j6HEDbhuaR672BQssvKplbgN6+rNBM5Jeg5ZuSYeqoSmJxZZoY+rfGwyj4GD3vw +EUs3oERte8uojHH01bWRNszwFcYr3lEXsZdMUD2xlVl8BX0tIdUAvwFnol57plzy +9yLxkA2T26pEUWbMfXYD62qoKjgZl3YNa4ph+bz27nb9cCvdKTz4Ch5bQhyLVi9V +GxyhLrXHFub4qjySjmm2AcG1hp2JDws4lFTo6tyePSW8Uybt1as5qsVATFSrsrTZ +2fjXctscvG29ZV/viDUqZi/u9rNl8DONfJhBaUYPQxxp+pu10GFqzcpL2UyQRqsV +WaFHVCkugyhfHMKiq3IXAAaOReyL4jM9f9oZRORicsPfIsbyVtTdX5Vy7W1f90gD +W/3FKqD2cyOEEBsB5wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/ +BAUwAwEB/zAdBgNVHQ4EFgQU43HgntinQtnbcZFrlJPrw6PRFKMwDQYJKoZIhvcN +AQELBQADggIBAEf63QqwEZE4rU1d9+UOl1QZgkiHVIyqZJnYWv6IAcVYpZmxI1Qj +t2odIFflAWJBF9MJ23XLblSQdf4an4EKwt3X9wnQW3IV5B4Jaj0z8yGa5hV+rVHV +DRDtfULAj+7AmgjVQdZcDiFpboBhDhXAuM/FSRJSzL46zNQuOAXeNf0fb7iAaJg9 +TaDKQGXSc3z1i9kKlT/YPyNtGtEqJBnZhbMX73huqVjRI9PHE+1yJX9dsXNw0H8G +lwmEKYBhHfpe/3OsoOOJuBxxFcbeMX8S3OFtm6/n6J91eEyrRjuazr8FGF1NFTwW +mhlQBJqymm9li1JfPFgEKCXAZmExfrngdbkaqIHWchezxQMxNRF4eKLg6TCMf4Df 
+WN88uieW4oA0beOY02QnrEh+KHdcxiVhJfiFDGX6xDIvpZgF5PgLZxYWxoK4Mhn5 ++bl53B/N66+rDt0b20XkeucC4pVd/GnwU2lhlXV5C15V5jgclKlZM57IcXR5f1GJ +tshquDDIajjDbp7hNxbqBWJMWxJH7ae0s1hWx0nzfxJoCTFx8G34Tkf71oXuxVhA +GaQdp/lLQzfcaFpPz+vCZHTetBXZ9FRUGi8c15dxVJCO2SCdUyt/q4/i6jC8UDfv +8Ue1fXwsBOxonbRJRBD0ckscZOf85muQ3Wl9af0AVqW3rLatt8o+Ae+c +-----END CERTIFICATE----- + +# Issuer: CN=Entrust Root Certification Authority - G2 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2009 Entrust, Inc. - for authorized use only +# Subject: CN=Entrust Root Certification Authority - G2 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2009 Entrust, Inc. - for authorized use only +# Label: "Entrust Root Certification Authority - G2" +# Serial: 1246989352 +# MD5 Fingerprint: 4b:e2:c9:91:96:65:0c:f4:0e:5a:93:92:a0:0a:fe:b2 +# SHA1 Fingerprint: 8c:f4:27:fd:79:0c:3a:d1:66:06:8d:e8:1e:57:ef:bb:93:22:72:d4 +# SHA256 Fingerprint: 43:df:57:74:b0:3e:7f:ef:5f:e4:0d:93:1a:7b:ed:f1:bb:2e:6b:42:73:8c:4e:6d:38:41:10:3d:3a:a7:f3:39 +-----BEGIN CERTIFICATE----- +MIIEPjCCAyagAwIBAgIESlOMKDANBgkqhkiG9w0BAQsFADCBvjELMAkGA1UEBhMC +VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50 +cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDA5IEVudHJ1c3Qs +IEluYy4gLSBmb3IgYXV0aG9yaXplZCB1c2Ugb25seTEyMDAGA1UEAxMpRW50cnVz +dCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IC0gRzIwHhcNMDkwNzA3MTcy +NTU0WhcNMzAxMjA3MTc1NTU0WjCBvjELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUVu +dHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50cnVzdC5uZXQvbGVnYWwt +dGVybXMxOTA3BgNVBAsTMChjKSAyMDA5IEVudHJ1c3QsIEluYy4gLSBmb3IgYXV0 +aG9yaXplZCB1c2Ugb25seTEyMDAGA1UEAxMpRW50cnVzdCBSb290IENlcnRpZmlj +YXRpb24gQXV0aG9yaXR5IC0gRzIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK +AoIBAQC6hLZy254Ma+KZ6TABp3bqMriVQRrJ2mFOWHLP/vaCeb9zYQYKpSfYs1/T +RU4cctZOMvJyig/3gxnQaoCAAEUesMfnmr8SVycco2gvCoe9amsOXmXzHHfV1IWN +cCG0szLni6LVhjkCsbjSR87kyUnEO6fe+1R9V77w6G7CebI6C1XiUJgWMhNcL3hW +wcKUs/Ja5CeanyTXxuzQmyWC48zCxEXFjJd6BmsqEZ+pCm5IO2/b1BEZQvePB7/1 +U1+cPvQXLOZprE4yTGJ36rfo5bs0vBmLrpxR57d+tVOxMyLlbc9wPBr64ptntoP0 +jaWvYkxN4FisZDQSA/i2jZRjJKRxAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAP +BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRqciZ60B7vfec7aVHUbI2fkBJmqzAN +BgkqhkiG9w0BAQsFAAOCAQEAeZ8dlsa2eT8ijYfThwMEYGprmi5ZiXMRrEPR9RP/ +jTkrwPK9T3CMqS/qF8QLVJ7UG5aYMzyorWKiAHarWWluBh1+xLlEjZivEtRh2woZ +Rkfz6/djwUAFQKXSt/S1mja/qYh2iARVBCuch38aNzx+LaUa2NSJXsq9rD1s2G2v +1fN2D807iDginWyTmsQ9v4IbZT+mD12q/OWyFcq1rca8PdCE6OoGcrBNOTJ4vz4R +nAuknZoh8/CbCzB428Hch0P+vGOaysXCHMnHjf87ElgI5rY97HosTvuDls4MPGmH +VHOkc8KT/1EQrBVUAdj8BbGJoX90g5pJ19xOe4pIb4tF9g== +-----END CERTIFICATE----- + +# Issuer: CN=Entrust Root Certification Authority - EC1 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2012 Entrust, Inc. - for authorized use only +# Subject: CN=Entrust Root Certification Authority - EC1 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2012 Entrust, Inc. 
- for authorized use only +# Label: "Entrust Root Certification Authority - EC1" +# Serial: 51543124481930649114116133369 +# MD5 Fingerprint: b6:7e:1d:f0:58:c5:49:6c:24:3b:3d:ed:98:18:ed:bc +# SHA1 Fingerprint: 20:d8:06:40:df:9b:25:f5:12:25:3a:11:ea:f7:59:8a:eb:14:b5:47 +# SHA256 Fingerprint: 02:ed:0e:b2:8c:14:da:45:16:5c:56:67:91:70:0d:64:51:d7:fb:56:f0:b2:ab:1d:3b:8e:b0:70:e5:6e:df:f5 +-----BEGIN CERTIFICATE----- +MIIC+TCCAoCgAwIBAgINAKaLeSkAAAAAUNCR+TAKBggqhkjOPQQDAzCBvzELMAkG +A1UEBhMCVVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3 +d3cuZW50cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDEyIEVu +dHJ1c3QsIEluYy4gLSBmb3IgYXV0aG9yaXplZCB1c2Ugb25seTEzMDEGA1UEAxMq +RW50cnVzdCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IC0gRUMxMB4XDTEy +MTIxODE1MjUzNloXDTM3MTIxODE1NTUzNlowgb8xCzAJBgNVBAYTAlVTMRYwFAYD +VQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQLEx9TZWUgd3d3LmVudHJ1c3QubmV0 +L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykgMjAxMiBFbnRydXN0LCBJbmMuIC0g +Zm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMzAxBgNVBAMTKkVudHJ1c3QgUm9vdCBD +ZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEVDMTB2MBAGByqGSM49AgEGBSuBBAAi +A2IABIQTydC6bUF74mzQ61VfZgIaJPRbiWlH47jCffHyAsWfoPZb1YsGGYZPUxBt +ByQnoaD41UcZYUx9ypMn6nQM72+WCf5j7HBdNq1nd67JnXxVRDqiY1Ef9eNi1KlH +Bz7MIKNCMEAwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O +BBYEFLdj5xrdjekIplWDpOBqUEFlEUJJMAoGCCqGSM49BAMDA2cAMGQCMGF52OVC +R98crlOZF7ZvHH3hvxGU0QOIdeSNiaSKd0bebWHvAvX7td/M/k7//qnmpwIwW5nX +hTcGtXsI/esni0qU+eH6p44mCOh8kmhtc9hvJqwhAriZtyZBWyVgrtBIGu4G +-----END CERTIFICATE----- + +# Issuer: CN=CFCA EV ROOT O=China Financial Certification Authority +# Subject: CN=CFCA EV ROOT O=China Financial Certification Authority +# Label: "CFCA EV ROOT" +# Serial: 407555286 +# MD5 Fingerprint: 74:e1:b6:ed:26:7a:7a:44:30:33:94:ab:7b:27:81:30 +# SHA1 Fingerprint: e2:b8:29:4b:55:84:ab:6b:58:c2:90:46:6c:ac:3f:b8:39:8f:84:83 +# SHA256 Fingerprint: 5c:c3:d7:8e:4e:1d:5e:45:54:7a:04:e6:87:3e:64:f9:0c:f9:53:6d:1c:cc:2e:f8:00:f3:55:c4:c5:fd:70:fd +-----BEGIN CERTIFICATE----- +MIIFjTCCA3WgAwIBAgIEGErM1jANBgkqhkiG9w0BAQsFADBWMQswCQYDVQQGEwJD +TjEwMC4GA1UECgwnQ2hpbmEgRmluYW5jaWFsIENlcnRpZmljYXRpb24gQXV0aG9y +aXR5MRUwEwYDVQQDDAxDRkNBIEVWIFJPT1QwHhcNMTIwODA4MDMwNzAxWhcNMjkx +MjMxMDMwNzAxWjBWMQswCQYDVQQGEwJDTjEwMC4GA1UECgwnQ2hpbmEgRmluYW5j +aWFsIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MRUwEwYDVQQDDAxDRkNBIEVWIFJP +T1QwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDXXWvNED8fBVnVBU03 +sQ7smCuOFR36k0sXgiFxEFLXUWRwFsJVaU2OFW2fvwwbwuCjZ9YMrM8irq93VCpL +TIpTUnrD7i7es3ElweldPe6hL6P3KjzJIx1qqx2hp/Hz7KDVRM8Vz3IvHWOX6Jn5 +/ZOkVIBMUtRSqy5J35DNuF++P96hyk0g1CXohClTt7GIH//62pCfCqktQT+x8Rgp +7hZZLDRJGqgG16iI0gNyejLi6mhNbiyWZXvKWfry4t3uMCz7zEasxGPrb382KzRz +EpR/38wmnvFyXVBlWY9ps4deMm/DGIq1lY+wejfeWkU7xzbh72fROdOXW3NiGUgt +hxwG+3SYIElz8AXSG7Ggo7cbcNOIabla1jj0Ytwli3i/+Oh+uFzJlU9fpy25IGvP +a931DfSCt/SyZi4QKPaXWnuWFo8BGS1sbn85WAZkgwGDg8NNkt0yxoekN+kWzqot +aK8KgWU6cMGbrU1tVMoqLUuFG7OA5nBFDWteNfB/O7ic5ARwiRIlk9oKmSJgamNg +TnYGmE69g60dWIolhdLHZR4tjsbftsbhf4oEIRUpdPA+nJCdDC7xij5aqgwJHsfV +PKPtl8MeNPo4+QgO48BdK4PRVmrJtqhUUy54Mmc9gn900PvhtgVguXDbjgv5E1hv +cWAQUhC5wUEJ73IfZzF4/5YFjQIDAQABo2MwYTAfBgNVHSMEGDAWgBTj/i39KNAL +tbq2osS/BqoFjJP7LzAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAd +BgNVHQ4EFgQU4/4t/SjQC7W6tqLEvwaqBYyT+y8wDQYJKoZIhvcNAQELBQADggIB +ACXGumvrh8vegjmWPfBEp2uEcwPenStPuiB/vHiyz5ewG5zz13ku9Ui20vsXiObT +ej/tUxPQ4i9qecsAIyjmHjdXNYmEwnZPNDatZ8POQQaIxffu2Bq41gt/UP+TqhdL +jOztUmCypAbqTuv0axn96/Ua4CUqmtzHQTb3yHQFhDmVOdYLO6Qn+gjYXB74BGBS +ESgoA//vU2YApUo0FmZ8/Qmkrp5nGm9BC2sGE5uPhnEFtC+NiWYzKXZUmhH4J/qy 
+P5Hgzg0b8zAarb8iXRvTvyUFTeGSGn+ZnzxEk8rUQElsgIfXBDrDMlI1Dlb4pd19 +xIsNER9Tyx6yF7Zod1rg1MvIB671Oi6ON7fQAUtDKXeMOZePglr4UeWJoBjnaH9d +Ci77o0cOPaYjesYBx4/IXr9tgFa+iiS6M+qf4TIRnvHST4D2G0CvOJ4RUHlzEhLN +5mydLIhyPDCBBpEi6lmt2hkuIsKNuYyH4Ga8cyNfIWRjgEj1oDwYPZTISEEdQLpe +/v5WOaHIz16eGWRGENoXkbcFgKyLmZJ956LYBws2J+dIeWCKw9cTXPhyQN9Ky8+Z +AAoACxGV2lZFA4gKn2fQ1XmxqI1AbQ3CekD6819kR5LLU7m7Wc5P/dAVUwHY3+vZ +5nbv0CO7O6l5s9UCKc2Jo5YPSjXnTkLAdc0Hz+Ys63su +-----END CERTIFICATE----- + +# Issuer: CN=OISTE WISeKey Global Root GB CA O=WISeKey OU=OISTE Foundation Endorsed +# Subject: CN=OISTE WISeKey Global Root GB CA O=WISeKey OU=OISTE Foundation Endorsed +# Label: "OISTE WISeKey Global Root GB CA" +# Serial: 157768595616588414422159278966750757568 +# MD5 Fingerprint: a4:eb:b9:61:28:2e:b7:2f:98:b0:35:26:90:99:51:1d +# SHA1 Fingerprint: 0f:f9:40:76:18:d3:d7:6a:4b:98:f0:a8:35:9e:0c:fd:27:ac:cc:ed +# SHA256 Fingerprint: 6b:9c:08:e8:6e:b0:f7:67:cf:ad:65:cd:98:b6:21:49:e5:49:4a:67:f5:84:5e:7b:d1:ed:01:9f:27:b8:6b:d6 +-----BEGIN CERTIFICATE----- +MIIDtTCCAp2gAwIBAgIQdrEgUnTwhYdGs/gjGvbCwDANBgkqhkiG9w0BAQsFADBt +MQswCQYDVQQGEwJDSDEQMA4GA1UEChMHV0lTZUtleTEiMCAGA1UECxMZT0lTVEUg +Rm91bmRhdGlvbiBFbmRvcnNlZDEoMCYGA1UEAxMfT0lTVEUgV0lTZUtleSBHbG9i +YWwgUm9vdCBHQiBDQTAeFw0xNDEyMDExNTAwMzJaFw0zOTEyMDExNTEwMzFaMG0x +CzAJBgNVBAYTAkNIMRAwDgYDVQQKEwdXSVNlS2V5MSIwIAYDVQQLExlPSVNURSBG +b3VuZGF0aW9uIEVuZG9yc2VkMSgwJgYDVQQDEx9PSVNURSBXSVNlS2V5IEdsb2Jh +bCBSb290IEdCIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2Be3 +HEokKtaXscriHvt9OO+Y9bI5mE4nuBFde9IllIiCFSZqGzG7qFshISvYD06fWvGx +WuR51jIjK+FTzJlFXHtPrby/h0oLS5daqPZI7H17Dc0hBt+eFf1Biki3IPShehtX +1F1Q/7pn2COZH8g/497/b1t3sWtuuMlk9+HKQUYOKXHQuSP8yYFfTvdv37+ErXNk +u7dCjmn21HYdfp2nuFeKUWdy19SouJVUQHMD9ur06/4oQnc/nSMbsrY9gBQHTC5P +99UKFg29ZkM3fiNDecNAhvVMKdqOmq0NpQSHiB6F4+lT1ZvIiwNjeOvgGUpuuy9r +M2RYk61pv48b74JIxwIDAQABo1EwTzALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUw +AwEB/zAdBgNVHQ4EFgQUNQ/INmNe4qPs+TtmFc5RUuORmj0wEAYJKwYBBAGCNxUB +BAMCAQAwDQYJKoZIhvcNAQELBQADggEBAEBM+4eymYGQfp3FsLAmzYh7KzKNbrgh +cViXfa43FK8+5/ea4n32cZiZBKpDdHij40lhPnOMTZTg+XHEthYOU3gf1qKHLwI5 +gSk8rxWYITD+KJAAjNHhy/peyP34EEY7onhCkRd0VQreUGdNZtGn//3ZwLWoo4rO +ZvUPQ82nK1d7Y0Zqqi5S2PTt4W2tKZB4SLrhI6qjiey1q5bAtEuiHZeeevJuQHHf +aPFlTc58Bd9TZaml8LGXBHAVRgOY1NK/VLSgWH1Sb9pWJmLU2NuJMW8c8CLC02Ic +Nc1MaRVUGpCY3useX8p3x8uOPUNpnJpY0CQ73xtAln41rYHHTnG6iBM= +-----END CERTIFICATE----- + +# Issuer: CN=SZAFIR ROOT CA2 O=Krajowa Izba Rozliczeniowa S.A. +# Subject: CN=SZAFIR ROOT CA2 O=Krajowa Izba Rozliczeniowa S.A. 
+# Label: "SZAFIR ROOT CA2" +# Serial: 357043034767186914217277344587386743377558296292 +# MD5 Fingerprint: 11:64:c1:89:b0:24:b1:8c:b1:07:7e:89:9e:51:9e:99 +# SHA1 Fingerprint: e2:52:fa:95:3f:ed:db:24:60:bd:6e:28:f3:9c:cc:cf:5e:b3:3f:de +# SHA256 Fingerprint: a1:33:9d:33:28:1a:0b:56:e5:57:d3:d3:2b:1c:e7:f9:36:7e:b0:94:bd:5f:a7:2a:7e:50:04:c8:de:d7:ca:fe +-----BEGIN CERTIFICATE----- +MIIDcjCCAlqgAwIBAgIUPopdB+xV0jLVt+O2XwHrLdzk1uQwDQYJKoZIhvcNAQEL +BQAwUTELMAkGA1UEBhMCUEwxKDAmBgNVBAoMH0tyYWpvd2EgSXpiYSBSb3psaWN6 +ZW5pb3dhIFMuQS4xGDAWBgNVBAMMD1NaQUZJUiBST09UIENBMjAeFw0xNTEwMTkw +NzQzMzBaFw0zNTEwMTkwNzQzMzBaMFExCzAJBgNVBAYTAlBMMSgwJgYDVQQKDB9L +cmFqb3dhIEl6YmEgUm96bGljemVuaW93YSBTLkEuMRgwFgYDVQQDDA9TWkFGSVIg +Uk9PVCBDQTIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC3vD5QqEvN +QLXOYeeWyrSh2gwisPq1e3YAd4wLz32ohswmUeQgPYUM1ljj5/QqGJ3a0a4m7utT +3PSQ1hNKDJA8w/Ta0o4NkjrcsbH/ON7Dui1fgLkCvUqdGw+0w8LBZwPd3BucPbOw +3gAeqDRHu5rr/gsUvTaE2g0gv/pby6kWIK05YO4vdbbnl5z5Pv1+TW9NL++IDWr6 +3fE9biCloBK0TXC5ztdyO4mTp4CEHCdJckm1/zuVnsHMyAHs6A6KCpbns6aH5db5 +BSsNl0BwPLqsdVqc1U2dAgrSS5tmS0YHF2Wtn2yIANwiieDhZNRnvDF5YTy7ykHN +XGoAyDw4jlivAgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQD +AgEGMB0GA1UdDgQWBBQuFqlKGLXLzPVvUPMjX/hd56zwyDANBgkqhkiG9w0BAQsF +AAOCAQEAtXP4A9xZWx126aMqe5Aosk3AM0+qmrHUuOQn/6mWmc5G4G18TKI4pAZw +8PRBEew/R40/cof5O/2kbytTAOD/OblqBw7rHRz2onKQy4I9EYKL0rufKq8h5mOG +nXkZ7/e7DDWQw4rtTw/1zBLZpD67oPwglV9PJi8RI4NOdQcPv5vRtB3pEAT+ymCP +oky4rc/hkA/NrgrHXXu3UNLUYfrVFdvXn4dRVOul4+vJhaAlIDf7js4MNIThPIGy +d05DpYhfhmehPea0XGG2Ptv+tyjFogeutcrKjSoS75ftwjCkySp6+/NNIxuZMzSg +LvWpCz/UXeHPhJ/iGcJfitYgHuNztw== +-----END CERTIFICATE----- + +# Issuer: CN=Certum Trusted Network CA 2 O=Unizeto Technologies S.A. OU=Certum Certification Authority +# Subject: CN=Certum Trusted Network CA 2 O=Unizeto Technologies S.A. 
OU=Certum Certification Authority +# Label: "Certum Trusted Network CA 2" +# Serial: 44979900017204383099463764357512596969 +# MD5 Fingerprint: 6d:46:9e:d9:25:6d:08:23:5b:5e:74:7d:1e:27:db:f2 +# SHA1 Fingerprint: d3:dd:48:3e:2b:bf:4c:05:e8:af:10:f5:fa:76:26:cf:d3:dc:30:92 +# SHA256 Fingerprint: b6:76:f2:ed:da:e8:77:5c:d3:6c:b0:f6:3c:d1:d4:60:39:61:f4:9e:62:65:ba:01:3a:2f:03:07:b6:d0:b8:04 +-----BEGIN CERTIFICATE----- +MIIF0jCCA7qgAwIBAgIQIdbQSk8lD8kyN/yqXhKN6TANBgkqhkiG9w0BAQ0FADCB +gDELMAkGA1UEBhMCUEwxIjAgBgNVBAoTGVVuaXpldG8gVGVjaG5vbG9naWVzIFMu +QS4xJzAlBgNVBAsTHkNlcnR1bSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTEkMCIG +A1UEAxMbQ2VydHVtIFRydXN0ZWQgTmV0d29yayBDQSAyMCIYDzIwMTExMDA2MDgz +OTU2WhgPMjA0NjEwMDYwODM5NTZaMIGAMQswCQYDVQQGEwJQTDEiMCAGA1UEChMZ +VW5pemV0byBUZWNobm9sb2dpZXMgUy5BLjEnMCUGA1UECxMeQ2VydHVtIENlcnRp +ZmljYXRpb24gQXV0aG9yaXR5MSQwIgYDVQQDExtDZXJ0dW0gVHJ1c3RlZCBOZXR3 +b3JrIENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC9+Xj45tWA +DGSdhhuWZGc/IjoedQF97/tcZ4zJzFxrqZHmuULlIEub2pt7uZld2ZuAS9eEQCsn +0+i6MLs+CRqnSZXvK0AkwpfHp+6bJe+oCgCXhVqqndwpyeI1B+twTUrWwbNWuKFB +OJvR+zF/j+Bf4bE/D44WSWDXBo0Y+aomEKsq09DRZ40bRr5HMNUuctHFY9rnY3lE +fktjJImGLjQ/KUxSiyqnwOKRKIm5wFv5HdnnJ63/mgKXwcZQkpsCLL2puTRZCr+E +Sv/f/rOf69me4Jgj7KZrdxYq28ytOxykh9xGc14ZYmhFV+SQgkK7QtbwYeDBoz1m +o130GO6IyY0XRSmZMnUCMe4pJshrAua1YkV/NxVaI2iJ1D7eTiew8EAMvE0Xy02i +sx7QBlrd9pPPV3WZ9fqGGmd4s7+W/jTcvedSVuWz5XV710GRBdxdaeOVDUO5/IOW +OZV7bIBaTxNyxtd9KXpEulKkKtVBRgkg/iKgtlswjbyJDNXXcPiHUv3a76xRLgez +Tv7QCdpw75j6VuZt27VXS9zlLCUVyJ4ueE742pyehizKV/Ma5ciSixqClnrDvFAS +adgOWkaLOusm+iPJtrCBvkIApPjW/jAux9JG9uWOdf3yzLnQh1vMBhBgu4M1t15n +3kfsmUjxpKEV/q2MYo45VU85FrmxY53/twIDAQABo0IwQDAPBgNVHRMBAf8EBTAD +AQH/MB0GA1UdDgQWBBS2oVQ5AsOgP46KvPrU+Bym0ToO/TAOBgNVHQ8BAf8EBAMC +AQYwDQYJKoZIhvcNAQENBQADggIBAHGlDs7k6b8/ONWJWsQCYftMxRQXLYtPU2sQ +F/xlhMcQSZDe28cmk4gmb3DWAl45oPePq5a1pRNcgRRtDoGCERuKTsZPpd1iHkTf +CVn0W3cLN+mLIMb4Ck4uWBzrM9DPhmDJ2vuAL55MYIR4PSFk1vtBHxgP58l1cb29 +XN40hz5BsA72udY/CROWFC/emh1auVbONTqwX3BNXuMp8SMoclm2q8KMZiYcdywm +djWLKKdpoPk79SPdhRB0yZADVpHnr7pH1BKXESLjokmUbOe3lEu6LaTaM4tMpkT/ +WjzGHWTYtTHkpjx6qFcL2+1hGsvxznN3Y6SHb0xRONbkX8eftoEq5IVIeVheO/jb +AoJnwTnbw3RLPTYe+SmTiGhbqEQZIfCn6IENLOiTNrQ3ssqwGyZ6miUfmpqAnksq +P/ujmv5zMnHCnsZy4YpoJ/HkD7TETKVhk/iXEAcqMCWpuchxuO9ozC1+9eB+D4Ko +b7a6bINDd82Kkhehnlt4Fj1F4jNy3eFmypnTycUm/Q1oBEauttmbjL4ZvrHG8hnj +XALKLNhvSgfZyTXaQHXyxKcZb55CEJh15pWLYLztxRLXis7VmFxWlgPF7ncGNf/P +5O4/E2Hu29othfDNrp2yGAlFw5Khchf8R7agCyzxxN5DaAhqXzvwdmP7zAYspsbi +DrW5viSP +-----END CERTIFICATE----- + +# Issuer: CN=Hellenic Academic and Research Institutions RootCA 2015 O=Hellenic Academic and Research Institutions Cert. Authority +# Subject: CN=Hellenic Academic and Research Institutions RootCA 2015 O=Hellenic Academic and Research Institutions Cert. 
Authority +# Label: "Hellenic Academic and Research Institutions RootCA 2015" +# Serial: 0 +# MD5 Fingerprint: ca:ff:e2:db:03:d9:cb:4b:e9:0f:ad:84:fd:7b:18:ce +# SHA1 Fingerprint: 01:0c:06:95:a6:98:19:14:ff:bf:5f:c6:b0:b6:95:ea:29:e9:12:a6 +# SHA256 Fingerprint: a0:40:92:9a:02:ce:53:b4:ac:f4:f2:ff:c6:98:1c:e4:49:6f:75:5e:6d:45:fe:0b:2a:69:2b:cd:52:52:3f:36 +-----BEGIN CERTIFICATE----- +MIIGCzCCA/OgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBpjELMAkGA1UEBhMCR1Ix +DzANBgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNhZGVtaWMgYW5k +IFJlc2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkxQDA+BgNVBAMT +N0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgUm9v +dENBIDIwMTUwHhcNMTUwNzA3MTAxMTIxWhcNNDAwNjMwMTAxMTIxWjCBpjELMAkG +A1UEBhMCR1IxDzANBgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNh +ZGVtaWMgYW5kIFJlc2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkx +QDA+BgNVBAMTN0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1 +dGlvbnMgUm9vdENBIDIwMTUwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC +AQDC+Kk/G4n8PDwEXT2QNrCROnk8ZlrvbTkBSRq0t89/TSNTt5AA4xMqKKYx8ZEA +4yjsriFBzh/a/X0SWwGDD7mwX5nh8hKDgE0GPt+sr+ehiGsxr/CL0BgzuNtFajT0 +AoAkKAoCFZVedioNmToUW/bLy1O8E00BiDeUJRtCvCLYjqOWXjrZMts+6PAQZe10 +4S+nfK8nNLspfZu2zwnI5dMK/IhlZXQK3HMcXM1AsRzUtoSMTFDPaI6oWa7CJ06C +ojXdFPQf/7J31Ycvqm59JCfnxssm5uX+Zwdj2EUN3TpZZTlYepKZcj2chF6IIbjV +9Cz82XBST3i4vTwri5WY9bPRaM8gFH5MXF/ni+X1NYEZN9cRCLdmvtNKzoNXADrD +gfgXy5I2XdGj2HUb4Ysn6npIQf1FGQatJ5lOwXBH3bWfgVMS5bGMSF0xQxfjjMZ6 +Y5ZLKTBOhE5iGV48zpeQpX8B653g+IuJ3SWYPZK2fu/Z8VFRfS0myGlZYeCsargq +NhEEelC9MoS+L9xy1dcdFkfkR2YgP/SWxa+OAXqlD3pk9Q0Yh9muiNX6hME6wGko +LfINaFGq46V3xqSQDqE3izEjR8EJCOtu93ib14L8hCCZSRm2Ekax+0VVFqmjZayc +Bw/qa9wfLgZy7IaIEuQt218FL+TwA9MmM+eAws1CoRc0CwIDAQABo0IwQDAPBgNV +HRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUcRVnyMjJvXVd +ctA4GGqd83EkVAswDQYJKoZIhvcNAQELBQADggIBAHW7bVRLqhBYRjTyYtcWNl0I +XtVsyIe9tC5G8jH4fOpCtZMWVdyhDBKg2mF+D1hYc2Ryx+hFjtyp8iY/xnmMsVMI +M4GwVhO+5lFc2JsKT0ucVlMC6U/2DWDqTUJV6HwbISHTGzrMd/K4kPFox/la/vot +9L/J9UUbzjgQKjeKeaO04wlshYaT/4mWJ3iBj2fjRnRUjtkNaeJK9E10A/+yd+2V +Z5fkscWrv2oj6NSU4kQoYsRL4vDY4ilrGnB+JGGTe08DMiUNRSQrlrRGar9KC/ea +j8GsGsVn82800vpzY4zvFrCopEYq+OsS7HK07/grfoxSwIuEVPkvPuNVqNxmsdnh +X9izjFk0WaSrT2y7HxjbdavYy5LNlDhhDgcGH0tGEPEVvo2FXDtKK4F5D7Rpn0lQ +l033DlZdwJVqwjbDG2jJ9SrcR5q+ss7FJej6A7na+RZukYT1HCjI/CbM1xyQVqdf +bzoEvM14iQuODy+jqk+iGxI9FghAD/FGTNeqewjBCvVtJ94Cj8rDtSvK6evIIVM4 +pcw72Hc3MKJP2W/R8kCtQXoXxdZKNYm3QdV8hn9VTYNKpXMgwDqvkPGaJI7ZjnHK +e7iG2rKPmT4dEw0SEe7Uq/DpFXYC5ODfqiAeW2GFZECpkJcNrVPSWh2HagCXZWK0 +vm9qp/UsQu0yrbYhnr68 +-----END CERTIFICATE----- + +# Issuer: CN=Hellenic Academic and Research Institutions ECC RootCA 2015 O=Hellenic Academic and Research Institutions Cert. Authority +# Subject: CN=Hellenic Academic and Research Institutions ECC RootCA 2015 O=Hellenic Academic and Research Institutions Cert. 
Authority +# Label: "Hellenic Academic and Research Institutions ECC RootCA 2015" +# Serial: 0 +# MD5 Fingerprint: 81:e5:b4:17:eb:c2:f5:e1:4b:0d:41:7b:49:92:fe:ef +# SHA1 Fingerprint: 9f:f1:71:8d:92:d5:9a:f3:7d:74:97:b4:bc:6f:84:68:0b:ba:b6:66 +# SHA256 Fingerprint: 44:b5:45:aa:8a:25:e6:5a:73:ca:15:dc:27:fc:36:d2:4c:1c:b9:95:3a:06:65:39:b1:15:82:dc:48:7b:48:33 +-----BEGIN CERTIFICATE----- +MIICwzCCAkqgAwIBAgIBADAKBggqhkjOPQQDAjCBqjELMAkGA1UEBhMCR1IxDzAN +BgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJl +c2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkxRDBCBgNVBAMTO0hl +bGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgRUNDIFJv +b3RDQSAyMDE1MB4XDTE1MDcwNzEwMzcxMloXDTQwMDYzMDEwMzcxMlowgaoxCzAJ +BgNVBAYTAkdSMQ8wDQYDVQQHEwZBdGhlbnMxRDBCBgNVBAoTO0hlbGxlbmljIEFj +YWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgQ2VydC4gQXV0aG9yaXR5 +MUQwQgYDVQQDEztIZWxsZW5pYyBBY2FkZW1pYyBhbmQgUmVzZWFyY2ggSW5zdGl0 +dXRpb25zIEVDQyBSb290Q0EgMjAxNTB2MBAGByqGSM49AgEGBSuBBAAiA2IABJKg +QehLgoRc4vgxEZmGZE4JJS+dQS8KrjVPdJWyUWRrjWvmP3CV8AVER6ZyOFB2lQJa +jq4onvktTpnvLEhvTCUp6NFxW98dwXU3tNf6e3pCnGoKVlp8aQuqgAkkbH7BRqNC +MEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFLQi +C4KZJAEOnLvkDv2/+5cgk5kqMAoGCCqGSM49BAMCA2cAMGQCMGfOFmI4oqxiRaep +lSTAGiecMjvAwNW6qef4BENThe5SId6d9SWDPp5YSy/XZxMOIQIwBeF1Ad5o7Sof +TUwJCA3sS61kFyjndc5FZXIhF8siQQ6ME5g4mlRtm8rifOoCWCKR +-----END CERTIFICATE----- + +# Issuer: CN=ISRG Root X1 O=Internet Security Research Group +# Subject: CN=ISRG Root X1 O=Internet Security Research Group +# Label: "ISRG Root X1" +# Serial: 172886928669790476064670243504169061120 +# MD5 Fingerprint: 0c:d2:f9:e0:da:17:73:e9:ed:86:4d:a5:e3:70:e7:4e +# SHA1 Fingerprint: ca:bd:2a:79:a1:07:6a:31:f2:1d:25:36:35:cb:03:9d:43:29:a5:e8 +# SHA256 Fingerprint: 96:bc:ec:06:26:49:76:f3:74:60:77:9a:cf:28:c5:a7:cf:e8:a3:c0:aa:e1:1a:8f:fc:ee:05:c0:bd:df:08:c6 +-----BEGIN CERTIFICATE----- +MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw +TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh +cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4 +WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu +ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY +MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc +h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+ +0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U +A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW +T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH +B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC +B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv +KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn +OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn +jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw +qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI +rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV +HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq +hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL +ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ +3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK +NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5 +ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur +TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC +jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc 
+oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq +4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA +mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d +emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc= +-----END CERTIFICATE----- + +# Issuer: O=FNMT-RCM OU=AC RAIZ FNMT-RCM +# Subject: O=FNMT-RCM OU=AC RAIZ FNMT-RCM +# Label: "AC RAIZ FNMT-RCM" +# Serial: 485876308206448804701554682760554759 +# MD5 Fingerprint: e2:09:04:b4:d3:bd:d1:a0:14:fd:1a:d2:47:c4:57:1d +# SHA1 Fingerprint: ec:50:35:07:b2:15:c4:95:62:19:e2:a8:9a:5b:42:99:2c:4c:2c:20 +# SHA256 Fingerprint: eb:c5:57:0c:29:01:8c:4d:67:b1:aa:12:7b:af:12:f7:03:b4:61:1e:bc:17:b7:da:b5:57:38:94:17:9b:93:fa +-----BEGIN CERTIFICATE----- +MIIFgzCCA2ugAwIBAgIPXZONMGc2yAYdGsdUhGkHMA0GCSqGSIb3DQEBCwUAMDsx +CzAJBgNVBAYTAkVTMREwDwYDVQQKDAhGTk1ULVJDTTEZMBcGA1UECwwQQUMgUkFJ +WiBGTk1ULVJDTTAeFw0wODEwMjkxNTU5NTZaFw0zMDAxMDEwMDAwMDBaMDsxCzAJ +BgNVBAYTAkVTMREwDwYDVQQKDAhGTk1ULVJDTTEZMBcGA1UECwwQQUMgUkFJWiBG +Tk1ULVJDTTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALpxgHpMhm5/ +yBNtwMZ9HACXjywMI7sQmkCpGreHiPibVmr75nuOi5KOpyVdWRHbNi63URcfqQgf +BBckWKo3Shjf5TnUV/3XwSyRAZHiItQDwFj8d0fsjz50Q7qsNI1NOHZnjrDIbzAz +WHFctPVrbtQBULgTfmxKo0nRIBnuvMApGGWn3v7v3QqQIecaZ5JCEJhfTzC8PhxF +tBDXaEAUwED653cXeuYLj2VbPNmaUtu1vZ5Gzz3rkQUCwJaydkxNEJY7kvqcfw+Z +374jNUUeAlz+taibmSXaXvMiwzn15Cou08YfxGyqxRxqAQVKL9LFwag0Jl1mpdIC +IfkYtwb1TplvqKtMUejPUBjFd8g5CSxJkjKZqLsXF3mwWsXmo8RZZUc1g16p6DUL +mbvkzSDGm0oGObVo/CK67lWMK07q87Hj/LaZmtVC+nFNCM+HHmpxffnTtOmlcYF7 +wk5HlqX2doWjKI/pgG6BU6VtX7hI+cL5NqYuSf+4lsKMB7ObiFj86xsc3i1w4peS +MKGJ47xVqCfWS+2QrYv6YyVZLag13cqXM7zlzced0ezvXg5KkAYmY6252TUtB7p2 +ZSysV4999AeU14ECll2jB0nVetBX+RvnU0Z1qrB5QstocQjpYL05ac70r8NWQMet +UqIJ5G+GR4of6ygnXYMgrwTJbFaai0b1AgMBAAGjgYMwgYAwDwYDVR0TAQH/BAUw +AwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFPd9xf3E6Jobd2Sn9R2gzL+H +YJptMD4GA1UdIAQ3MDUwMwYEVR0gADArMCkGCCsGAQUFBwIBFh1odHRwOi8vd3d3 +LmNlcnQuZm5tdC5lcy9kcGNzLzANBgkqhkiG9w0BAQsFAAOCAgEAB5BK3/MjTvDD +nFFlm5wioooMhfNzKWtN/gHiqQxjAb8EZ6WdmF/9ARP67Jpi6Yb+tmLSbkyU+8B1 +RXxlDPiyN8+sD8+Nb/kZ94/sHvJwnvDKuO+3/3Y3dlv2bojzr2IyIpMNOmqOFGYM +LVN0V2Ue1bLdI4E7pWYjJ2cJj+F3qkPNZVEI7VFY/uY5+ctHhKQV8Xa7pO6kO8Rf +77IzlhEYt8llvhjho6Tc+hj507wTmzl6NLrTQfv6MooqtyuGC2mDOL7Nii4LcK2N +JpLuHvUBKwrZ1pebbuCoGRw6IYsMHkCtA+fdZn71uSANA+iW+YJF1DngoABd15jm +fZ5nc8OaKveri6E6FO80vFIOiZiaBECEHX5FaZNXzuvO+FB8TxxuBEOb+dY7Ixjp +6o7RTUaN8Tvkasq6+yO3m/qZASlaWFot4/nUbQ4mrcFuNLwy+AwF+mWj2zs3gyLp +1txyM/1d8iC9djwj2ij3+RvrWWTV3F9yfiD8zYm1kGdNYno/Tq0dwzn+evQoFt9B +9kiABdcPUXmsEKvU7ANm5mqwujGSQkBqvjrTcuFqN1W8rB2Vt2lh8kORdOag0wok +RqEIr9baRRmW1FMdW4R58MD3R++Lj8UGrp1MYp3/RgT408m2ECVAdf4WqslKYIYv +uu8wd+RU4riEmViAqhOLUTpPSPaLtrM= +-----END CERTIFICATE----- + +# Issuer: CN=Amazon Root CA 1 O=Amazon +# Subject: CN=Amazon Root CA 1 O=Amazon +# Label: "Amazon Root CA 1" +# Serial: 143266978916655856878034712317230054538369994 +# MD5 Fingerprint: 43:c6:bf:ae:ec:fe:ad:2f:18:c6:88:68:30:fc:c8:e6 +# SHA1 Fingerprint: 8d:a7:f9:65:ec:5e:fc:37:91:0f:1c:6e:59:fd:c1:cc:6a:6e:de:16 +# SHA256 Fingerprint: 8e:cd:e6:88:4f:3d:87:b1:12:5b:a3:1a:c3:fc:b1:3d:70:16:de:7f:57:cc:90:4f:e1:cb:97:c6:ae:98:19:6e +-----BEGIN CERTIFICATE----- +MIIDQTCCAimgAwIBAgITBmyfz5m/jAo54vB4ikPmljZbyjANBgkqhkiG9w0BAQsF +ADA5MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6 +b24gUm9vdCBDQSAxMB4XDTE1MDUyNjAwMDAwMFoXDTM4MDExNzAwMDAwMFowOTEL +MAkGA1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJv +b3QgQ0EgMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALJ4gHHKeNXj 
+ca9HgFB0fW7Y14h29Jlo91ghYPl0hAEvrAIthtOgQ3pOsqTQNroBvo3bSMgHFzZM +9O6II8c+6zf1tRn4SWiw3te5djgdYZ6k/oI2peVKVuRF4fn9tBb6dNqcmzU5L/qw +IFAGbHrQgLKm+a/sRxmPUDgH3KKHOVj4utWp+UhnMJbulHheb4mjUcAwhmahRWa6 +VOujw5H5SNz/0egwLX0tdHA114gk957EWW67c4cX8jJGKLhD+rcdqsq08p8kDi1L +93FcXmn/6pUCyziKrlA4b9v7LWIbxcceVOF34GfID5yHI9Y/QCB/IIDEgEw+OyQm +jgSubJrIqg0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC +AYYwHQYDVR0OBBYEFIQYzIU07LwMlJQuCFmcx7IQTgoIMA0GCSqGSIb3DQEBCwUA +A4IBAQCY8jdaQZChGsV2USggNiMOruYou6r4lK5IpDB/G/wkjUu0yKGX9rbxenDI +U5PMCCjjmCXPI6T53iHTfIUJrU6adTrCC2qJeHZERxhlbI1Bjjt/msv0tadQ1wUs +N+gDS63pYaACbvXy8MWy7Vu33PqUXHeeE6V/Uq2V8viTO96LXFvKWlJbYK8U90vv +o/ufQJVtMVT8QtPHRh8jrdkPSHCa2XV4cdFyQzR1bldZwgJcJmApzyMZFo6IQ6XU +5MsI+yMRQ+hDKXJioaldXgjUkK642M4UwtBV8ob2xJNDd2ZhwLnoQdeXeGADbkpy +rqXRfboQnoZsG4q5WTP468SQvvG5 +-----END CERTIFICATE----- + +# Issuer: CN=Amazon Root CA 2 O=Amazon +# Subject: CN=Amazon Root CA 2 O=Amazon +# Label: "Amazon Root CA 2" +# Serial: 143266982885963551818349160658925006970653239 +# MD5 Fingerprint: c8:e5:8d:ce:a8:42:e2:7a:c0:2a:5c:7c:9e:26:bf:66 +# SHA1 Fingerprint: 5a:8c:ef:45:d7:a6:98:59:76:7a:8c:8b:44:96:b5:78:cf:47:4b:1a +# SHA256 Fingerprint: 1b:a5:b2:aa:8c:65:40:1a:82:96:01:18:f8:0b:ec:4f:62:30:4d:83:ce:c4:71:3a:19:c3:9c:01:1e:a4:6d:b4 +-----BEGIN CERTIFICATE----- +MIIFQTCCAymgAwIBAgITBmyf0pY1hp8KD+WGePhbJruKNzANBgkqhkiG9w0BAQwF +ADA5MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6 +b24gUm9vdCBDQSAyMB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTEL +MAkGA1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJv +b3QgQ0EgMjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK2Wny2cSkxK +gXlRmeyKy2tgURO8TW0G/LAIjd0ZEGrHJgw12MBvIITplLGbhQPDW9tK6Mj4kHbZ +W0/jTOgGNk3Mmqw9DJArktQGGWCsN0R5hYGCrVo34A3MnaZMUnbqQ523BNFQ9lXg +1dKmSYXpN+nKfq5clU1Imj+uIFptiJXZNLhSGkOQsL9sBbm2eLfq0OQ6PBJTYv9K +8nu+NQWpEjTj82R0Yiw9AElaKP4yRLuH3WUnAnE72kr3H9rN9yFVkE8P7K6C4Z9r +2UXTu/Bfh+08LDmG2j/e7HJV63mjrdvdfLC6HM783k81ds8P+HgfajZRRidhW+me +z/CiVX18JYpvL7TFz4QuK/0NURBs+18bvBt+xa47mAExkv8LV/SasrlX6avvDXbR +8O70zoan4G7ptGmh32n2M8ZpLpcTnqWHsFcQgTfJU7O7f/aS0ZzQGPSSbtqDT6Zj +mUyl+17vIWR6IF9sZIUVyzfpYgwLKhbcAS4y2j5L9Z469hdAlO+ekQiG+r5jqFoz +7Mt0Q5X5bGlSNscpb/xVA1wf+5+9R+vnSUeVC06JIglJ4PVhHvG/LopyboBZ/1c6 ++XUyo05f7O0oYtlNc/LMgRdg7c3r3NunysV+Ar3yVAhU/bQtCSwXVEqY0VThUWcI +0u1ufm8/0i2BWSlmy5A5lREedCf+3euvAgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMB +Af8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQWBBSwDPBMMPQFWAJI/TPlUq9LhONm +UjANBgkqhkiG9w0BAQwFAAOCAgEAqqiAjw54o+Ci1M3m9Zh6O+oAA7CXDpO8Wqj2 +LIxyh6mx/H9z/WNxeKWHWc8w4Q0QshNabYL1auaAn6AFC2jkR2vHat+2/XcycuUY ++gn0oJMsXdKMdYV2ZZAMA3m3MSNjrXiDCYZohMr/+c8mmpJ5581LxedhpxfL86kS +k5Nrp+gvU5LEYFiwzAJRGFuFjWJZY7attN6a+yb3ACfAXVU3dJnJUH/jWS5E4ywl +7uxMMne0nxrpS10gxdr9HIcWxkPo1LsmmkVwXqkLN1PiRnsn/eBG8om3zEK2yygm +btmlyTrIQRNg91CMFa6ybRoVGld45pIq2WWQgj9sAq+uEjonljYE1x2igGOpm/Hl +urR8FLBOybEfdF849lHqm/osohHUqS0nGkWxr7JOcQ3AWEbWaQbLU8uz/mtBzUF+ +fUwPfHJ5elnNXkoOrJupmHN5fLT0zLm4BwyydFy4x2+IoZCn9Kr5v2c69BoVYh63 +n749sSmvZ6ES8lgQGVMDMBu4Gon2nL2XA46jCfMdiyHxtN/kHNGfZQIG6lzWE7OE +76KlXIx3KadowGuuQNKotOrN8I1LOJwZmhsoVLiJkO/KdYE+HvJkJMcYr07/R54H +9jVlpNMKVv/1F2Rs76giJUmTtt8AF9pYfl3uxRuw0dFfIRDH+fO6AgonB8Xx1sfT +4PsJYGw= +-----END CERTIFICATE----- + +# Issuer: CN=Amazon Root CA 3 O=Amazon +# Subject: CN=Amazon Root CA 3 O=Amazon +# Label: "Amazon Root CA 3" +# Serial: 143266986699090766294700635381230934788665930 +# MD5 Fingerprint: a0:d4:ef:0b:f7:b5:d8:49:95:2a:ec:f5:c4:fc:81:87 +# SHA1 Fingerprint: 0d:44:dd:8c:3c:8c:1a:1a:58:75:64:81:e9:0f:2e:2a:ff:b3:d2:6e +# SHA256 Fingerprint: 
18:ce:6c:fe:7b:f1:4e:60:b2:e3:47:b8:df:e8:68:cb:31:d0:2e:bb:3a:da:27:15:69:f5:03:43:b4:6d:b3:a4 +-----BEGIN CERTIFICATE----- +MIIBtjCCAVugAwIBAgITBmyf1XSXNmY/Owua2eiedgPySjAKBggqhkjOPQQDAjA5 +MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24g +Um9vdCBDQSAzMB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkG +A1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3Qg +Q0EgMzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABCmXp8ZBf8ANm+gBG1bG8lKl +ui2yEujSLtf6ycXYqm0fc4E7O5hrOXwzpcVOho6AF2hiRVd9RFgdszflZwjrZt6j +QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQWBBSr +ttvXBp43rDCGB5Fwx5zEGbF4wDAKBggqhkjOPQQDAgNJADBGAiEA4IWSoxe3jfkr +BqWTrBqYaGFy+uGh0PsceGCmQ5nFuMQCIQCcAu/xlJyzlvnrxir4tiz+OpAUFteM +YyRIHN8wfdVoOw== +-----END CERTIFICATE----- + +# Issuer: CN=Amazon Root CA 4 O=Amazon +# Subject: CN=Amazon Root CA 4 O=Amazon +# Label: "Amazon Root CA 4" +# Serial: 143266989758080763974105200630763877849284878 +# MD5 Fingerprint: 89:bc:27:d5:eb:17:8d:06:6a:69:d5:fd:89:47:b4:cd +# SHA1 Fingerprint: f6:10:84:07:d6:f8:bb:67:98:0c:c2:e2:44:c2:eb:ae:1c:ef:63:be +# SHA256 Fingerprint: e3:5d:28:41:9e:d0:20:25:cf:a6:90:38:cd:62:39:62:45:8d:a5:c6:95:fb:de:a3:c2:2b:0b:fb:25:89:70:92 +-----BEGIN CERTIFICATE----- +MIIB8jCCAXigAwIBAgITBmyf18G7EEwpQ+Vxe3ssyBrBDjAKBggqhkjOPQQDAzA5 +MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24g +Um9vdCBDQSA0MB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkG +A1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3Qg +Q0EgNDB2MBAGByqGSM49AgEGBSuBBAAiA2IABNKrijdPo1MN/sGKe0uoe0ZLY7Bi +9i0b2whxIdIA6GO9mif78DluXeo9pcmBqqNbIJhFXRbb/egQbeOc4OO9X4Ri83Bk +M6DLJC9wuoihKqB1+IGuYgbEgds5bimwHvouXKNCMEAwDwYDVR0TAQH/BAUwAwEB +/zAOBgNVHQ8BAf8EBAMCAYYwHQYDVR0OBBYEFNPsxzplbszh2naaVvuc84ZtV+WB +MAoGCCqGSM49BAMDA2gAMGUCMDqLIfG9fhGt0O9Yli/W651+kI0rz2ZVwyzjKKlw +CkcO8DdZEv8tmZQoTipPNU0zWgIxAOp1AE47xDqUEpHJWEadIRNyp4iciuRMStuW +1KyLa2tJElMzrdfkviT8tQp21KW8EA== +-----END CERTIFICATE----- + +# Issuer: CN=TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1 O=Turkiye Bilimsel ve Teknolojik Arastirma Kurumu - TUBITAK OU=Kamu Sertifikasyon Merkezi - Kamu SM +# Subject: CN=TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1 O=Turkiye Bilimsel ve Teknolojik Arastirma Kurumu - TUBITAK OU=Kamu Sertifikasyon Merkezi - Kamu SM +# Label: "TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1" +# Serial: 1 +# MD5 Fingerprint: dc:00:81:dc:69:2f:3e:2f:b0:3b:f6:3d:5a:91:8e:49 +# SHA1 Fingerprint: 31:43:64:9b:ec:ce:27:ec:ed:3a:3f:0b:8f:0d:e4:e8:91:dd:ee:ca +# SHA256 Fingerprint: 46:ed:c3:68:90:46:d5:3a:45:3f:b3:10:4a:b8:0d:ca:ec:65:8b:26:60:ea:16:29:dd:7e:86:79:90:64:87:16 +-----BEGIN CERTIFICATE----- +MIIEYzCCA0ugAwIBAgIBATANBgkqhkiG9w0BAQsFADCB0jELMAkGA1UEBhMCVFIx +GDAWBgNVBAcTD0dlYnplIC0gS29jYWVsaTFCMEAGA1UEChM5VHVya2l5ZSBCaWxp +bXNlbCB2ZSBUZWtub2xvamlrIEFyYXN0aXJtYSBLdXJ1bXUgLSBUVUJJVEFLMS0w +KwYDVQQLEyRLYW11IFNlcnRpZmlrYXN5b24gTWVya2V6aSAtIEthbXUgU00xNjA0 +BgNVBAMTLVRVQklUQUsgS2FtdSBTTSBTU0wgS29rIFNlcnRpZmlrYXNpIC0gU3Vy +dW0gMTAeFw0xMzExMjUwODI1NTVaFw00MzEwMjUwODI1NTVaMIHSMQswCQYDVQQG +EwJUUjEYMBYGA1UEBxMPR2ViemUgLSBLb2NhZWxpMUIwQAYDVQQKEzlUdXJraXll +IEJpbGltc2VsIHZlIFRla25vbG9qaWsgQXJhc3Rpcm1hIEt1cnVtdSAtIFRVQklU +QUsxLTArBgNVBAsTJEthbXUgU2VydGlmaWthc3lvbiBNZXJrZXppIC0gS2FtdSBT +TTE2MDQGA1UEAxMtVFVCSVRBSyBLYW11IFNNIFNTTCBLb2sgU2VydGlmaWthc2kg +LSBTdXJ1bSAxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAr3UwM6q7 +a9OZLBI3hNmNe5eA027n/5tQlT6QlVZC1xl8JoSNkvoBHToP4mQ4t4y86Ij5iySr +LqP1N+RAjhgleYN1Hzv/bKjFxlb4tO2KRKOrbEz8HdDc72i9z+SqzvBV96I01INr 
+N3wcwv61A+xXzry0tcXtAA9TNypN9E8Mg/uGz8v+jE69h/mniyFXnHrfA2eJLJ2X +YacQuFWQfw4tJzh03+f92k4S400VIgLI4OD8D62K18lUUMw7D8oWgITQUVbDjlZ/ +iSIzL+aFCr2lqBs23tPcLG07xxO9WSMs5uWk99gL7eqQQESolbuT1dCANLZGeA4f +AJNG4e7p+exPFwIDAQABo0IwQDAdBgNVHQ4EFgQUZT/HiobGPN08VFw1+DrtUgxH +V8gwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL +BQADggEBACo/4fEyjq7hmFxLXs9rHmoJ0iKpEsdeV31zVmSAhHqT5Am5EM2fKifh +AHe+SMg1qIGf5LgsyX8OsNJLN13qudULXjS99HMpw+0mFZx+CFOKWI3QSyjfwbPf +IPP54+M638yclNhOT8NrF7f3cuitZjO1JVOr4PhMqZ398g26rrnZqsZr+ZO7rqu4 +lzwDGrpDxpa5RXI4s6ehlj2Re37AIVNMh+3yC1SVUZPVIqUNivGTDj5UDrDYyU7c +8jEyVupk+eq1nRZmQnLzf9OxMUP8pI4X8W0jq5Rm+K37DwhuJi1/FwcJsoz7UMCf +lo3Ptv0AnVoUmr8CRPXBwp8iXqIPoeM= +-----END CERTIFICATE----- + +# Issuer: CN=GDCA TrustAUTH R5 ROOT O=GUANG DONG CERTIFICATE AUTHORITY CO.,LTD. +# Subject: CN=GDCA TrustAUTH R5 ROOT O=GUANG DONG CERTIFICATE AUTHORITY CO.,LTD. +# Label: "GDCA TrustAUTH R5 ROOT" +# Serial: 9009899650740120186 +# MD5 Fingerprint: 63:cc:d9:3d:34:35:5c:6f:53:a3:e2:08:70:48:1f:b4 +# SHA1 Fingerprint: 0f:36:38:5b:81:1a:25:c3:9b:31:4e:83:ca:e9:34:66:70:cc:74:b4 +# SHA256 Fingerprint: bf:ff:8f:d0:44:33:48:7d:6a:8a:a6:0c:1a:29:76:7a:9f:c2:bb:b0:5e:42:0f:71:3a:13:b9:92:89:1d:38:93 +-----BEGIN CERTIFICATE----- +MIIFiDCCA3CgAwIBAgIIfQmX/vBH6nowDQYJKoZIhvcNAQELBQAwYjELMAkGA1UE +BhMCQ04xMjAwBgNVBAoMKUdVQU5HIERPTkcgQ0VSVElGSUNBVEUgQVVUSE9SSVRZ +IENPLixMVEQuMR8wHQYDVQQDDBZHRENBIFRydXN0QVVUSCBSNSBST09UMB4XDTE0 +MTEyNjA1MTMxNVoXDTQwMTIzMTE1NTk1OVowYjELMAkGA1UEBhMCQ04xMjAwBgNV +BAoMKUdVQU5HIERPTkcgQ0VSVElGSUNBVEUgQVVUSE9SSVRZIENPLixMVEQuMR8w +HQYDVQQDDBZHRENBIFRydXN0QVVUSCBSNSBST09UMIICIjANBgkqhkiG9w0BAQEF +AAOCAg8AMIICCgKCAgEA2aMW8Mh0dHeb7zMNOwZ+Vfy1YI92hhJCfVZmPoiC7XJj +Dp6L3TQsAlFRwxn9WVSEyfFrs0yw6ehGXTjGoqcuEVe6ghWinI9tsJlKCvLriXBj +TnnEt1u9ol2x8kECK62pOqPseQrsXzrj/e+APK00mxqriCZ7VqKChh/rNYmDf1+u +KU49tm7srsHwJ5uu4/Ts765/94Y9cnrrpftZTqfrlYwiOXnhLQiPzLyRuEH3FMEj +qcOtmkVEs7LXLM3GKeJQEK5cy4KOFxg2fZfmiJqwTTQJ9Cy5WmYqsBebnh52nUpm +MUHfP/vFBu8btn4aRjb3ZGM74zkYI+dndRTVdVeSN72+ahsmUPI2JgaQxXABZG12 +ZuGR224HwGGALrIuL4xwp9E7PLOR5G62xDtw8mySlwnNR30YwPO7ng/Wi64HtloP +zgsMR6flPri9fcebNaBhlzpBdRfMK5Z3KpIhHtmVdiBnaM8Nvd/WHwlqmuLMc3Gk +L30SgLdTMEZeS1SZD2fJpcjyIMGC7J0R38IC+xo70e0gmu9lZJIQDSri3nDxGGeC +jGHeuLzRL5z7D9Ar7Rt2ueQ5Vfj4oR24qoAATILnsn8JuLwwoC8N9VKejveSswoA +HQBUlwbgsQfZxw9cZX08bVlX5O2ljelAU58VS6Bx9hoh49pwBiFYFIeFd3mqgnkC +AwEAAaNCMEAwHQYDVR0OBBYEFOLJQJ9NzuiaoXzPDj9lxSmIahlRMA8GA1UdEwEB +/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4ICAQDRSVfg +p8xoWLoBDysZzY2wYUWsEe1jUGn4H3++Fo/9nesLqjJHdtJnJO29fDMylyrHBYZm +DRd9FBUb1Ov9H5r2XpdptxolpAqzkT9fNqyL7FeoPueBihhXOYV0GkLH6VsTX4/5 +COmSdI31R9KrO9b7eGZONn356ZLpBN79SWP8bfsUcZNnL0dKt7n/HipzcEYwv1ry +L3ml4Y0M2fmyYzeMN2WFcGpcWwlyua1jPLHd+PwyvzeG5LuOmCd+uh8W4XAR8gPf +JWIyJyYYMoSf/wA6E7qaTfRPuBRwIrHKK5DOKcFw9C+df/KQHtZa37dG/OaG+svg +IHZ6uqbL9XzeYqWxi+7egmaKTjowHz+Ay60nugxe19CxVsp3cbK1daFQqUBDF8Io +2c9Si1vIY9RCPqAzekYu9wogRlR+ak8x8YF+QnQ4ZXMn7sZ8uI7XpTrXmKGcjBBV +09tL7ECQ8s1uV9JiDnxXk7Gnbc2dg7sq5+W2O3FYrf3RRbxake5TFW/TRQl1brqQ +XR4EzzffHqhmsYzmIGrv/EhOdJhCrylvLmrH+33RZjEizIYAfmaDDEL0vTSSwxrq +T8p+ck0LcIymSLumoRT2+1hEmRSuqguTaaApJUqlyyvdimYHFngVV3Eb7PVHhPOe +MTd61X8kreS8/f3MboPoDKi3QWwH3b08hpcv0g== +-----END CERTIFICATE----- + +# Issuer: CN=TrustCor RootCert CA-1 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority +# Subject: CN=TrustCor RootCert CA-1 O=TrustCor Systems S. de R.L. 
OU=TrustCor Certificate Authority +# Label: "TrustCor RootCert CA-1" +# Serial: 15752444095811006489 +# MD5 Fingerprint: 6e:85:f1:dc:1a:00:d3:22:d5:b2:b2:ac:6b:37:05:45 +# SHA1 Fingerprint: ff:bd:cd:e7:82:c8:43:5e:3c:6f:26:86:5c:ca:a8:3a:45:5b:c3:0a +# SHA256 Fingerprint: d4:0e:9c:86:cd:8f:e4:68:c1:77:69:59:f4:9e:a7:74:fa:54:86:84:b6:c4:06:f3:90:92:61:f4:dc:e2:57:5c +-----BEGIN CERTIFICATE----- +MIIEMDCCAxigAwIBAgIJANqb7HHzA7AZMA0GCSqGSIb3DQEBCwUAMIGkMQswCQYD +VQQGEwJQQTEPMA0GA1UECAwGUGFuYW1hMRQwEgYDVQQHDAtQYW5hbWEgQ2l0eTEk +MCIGA1UECgwbVHJ1c3RDb3IgU3lzdGVtcyBTLiBkZSBSLkwuMScwJQYDVQQLDB5U +cnVzdENvciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxHzAdBgNVBAMMFlRydXN0Q29y +IFJvb3RDZXJ0IENBLTEwHhcNMTYwMjA0MTIzMjE2WhcNMjkxMjMxMTcyMzE2WjCB +pDELMAkGA1UEBhMCUEExDzANBgNVBAgMBlBhbmFtYTEUMBIGA1UEBwwLUGFuYW1h +IENpdHkxJDAiBgNVBAoMG1RydXN0Q29yIFN5c3RlbXMgUy4gZGUgUi5MLjEnMCUG +A1UECwweVHJ1c3RDb3IgQ2VydGlmaWNhdGUgQXV0aG9yaXR5MR8wHQYDVQQDDBZU +cnVzdENvciBSb290Q2VydCBDQS0xMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB +CgKCAQEAv463leLCJhJrMxnHQFgKq1mqjQCj/IDHUHuO1CAmujIS2CNUSSUQIpid +RtLByZ5OGy4sDjjzGiVoHKZaBeYei0i/mJZ0PmnK6bV4pQa81QBeCQryJ3pS/C3V +seq0iWEk8xoT26nPUu0MJLq5nux+AHT6k61sKZKuUbS701e/s/OojZz0JEsq1pme +9J7+wH5COucLlVPat2gOkEz7cD+PSiyU8ybdY2mplNgQTsVHCJCZGxdNuWxu72CV +EY4hgLW9oHPY0LJ3xEXqWib7ZnZ2+AYfYW0PVcWDtxBWcgYHpfOxGgMFZA6dWorW +hnAbJN7+KIor0Gqw/Hqi3LJ5DotlDwIDAQABo2MwYTAdBgNVHQ4EFgQU7mtJPHo/ +DeOxCbeKyKsZn3MzUOcwHwYDVR0jBBgwFoAU7mtJPHo/DeOxCbeKyKsZn3MzUOcw +DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQAD +ggEBACUY1JGPE+6PHh0RU9otRCkZoB5rMZ5NDp6tPVxBb5UrJKF5mDo4Nvu7Zp5I +/5CQ7z3UuJu0h3U/IJvOcs+hVcFNZKIZBqEHMwwLKeXx6quj7LUKdJDHfXLy11yf +ke+Ri7fc7Waiz45mO7yfOgLgJ90WmMCV1Aqk5IGadZQ1nJBfiDcGrVmVCrDRZ9MZ +yonnMlo2HD6CqFqTvsbQZJG2z9m2GM/bftJlo6bEjhcxwft+dtvTheNYsnd6djts +L1Ac59v2Z3kf9YKVmgenFK+P3CghZwnS1k1aHBkcjndcw5QkPTJrS37UeJSDvjdN +zl/HHk484IkzlQsPpTLWPFp5LBk= +-----END CERTIFICATE----- + +# Issuer: CN=TrustCor RootCert CA-2 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority +# Subject: CN=TrustCor RootCert CA-2 O=TrustCor Systems S. de R.L. 
OU=TrustCor Certificate Authority +# Label: "TrustCor RootCert CA-2" +# Serial: 2711694510199101698 +# MD5 Fingerprint: a2:e1:f8:18:0b:ba:45:d5:c7:41:2a:bb:37:52:45:64 +# SHA1 Fingerprint: b8:be:6d:cb:56:f1:55:b9:63:d4:12:ca:4e:06:34:c7:94:b2:1c:c0 +# SHA256 Fingerprint: 07:53:e9:40:37:8c:1b:d5:e3:83:6e:39:5d:ae:a5:cb:83:9e:50:46:f1:bd:0e:ae:19:51:cf:10:fe:c7:c9:65 +-----BEGIN CERTIFICATE----- +MIIGLzCCBBegAwIBAgIIJaHfyjPLWQIwDQYJKoZIhvcNAQELBQAwgaQxCzAJBgNV +BAYTAlBBMQ8wDQYDVQQIDAZQYW5hbWExFDASBgNVBAcMC1BhbmFtYSBDaXR5MSQw +IgYDVQQKDBtUcnVzdENvciBTeXN0ZW1zIFMuIGRlIFIuTC4xJzAlBgNVBAsMHlRy +dXN0Q29yIENlcnRpZmljYXRlIEF1dGhvcml0eTEfMB0GA1UEAwwWVHJ1c3RDb3Ig +Um9vdENlcnQgQ0EtMjAeFw0xNjAyMDQxMjMyMjNaFw0zNDEyMzExNzI2MzlaMIGk +MQswCQYDVQQGEwJQQTEPMA0GA1UECAwGUGFuYW1hMRQwEgYDVQQHDAtQYW5hbWEg +Q2l0eTEkMCIGA1UECgwbVHJ1c3RDb3IgU3lzdGVtcyBTLiBkZSBSLkwuMScwJQYD +VQQLDB5UcnVzdENvciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxHzAdBgNVBAMMFlRy +dXN0Q29yIFJvb3RDZXJ0IENBLTIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK +AoICAQCnIG7CKqJiJJWQdsg4foDSq8GbZQWU9MEKENUCrO2fk8eHyLAnK0IMPQo+ +QVqedd2NyuCb7GgypGmSaIwLgQ5WoD4a3SwlFIIvl9NkRvRUqdw6VC0xK5mC8tkq +1+9xALgxpL56JAfDQiDyitSSBBtlVkxs1Pu2YVpHI7TYabS3OtB0PAx1oYxOdqHp +2yqlO/rOsP9+aij9JxzIsekp8VduZLTQwRVtDr4uDkbIXvRR/u8OYzo7cbrPb1nK +DOObXUm4TOJXsZiKQlecdu/vvdFoqNL0Cbt3Nb4lggjEFixEIFapRBF37120Hape +az6LMvYHL1cEksr1/p3C6eizjkxLAjHZ5DxIgif3GIJ2SDpxsROhOdUuxTTCHWKF +3wP+TfSvPd9cW436cOGlfifHhi5qjxLGhF5DUVCcGZt45vz27Ud+ez1m7xMTiF88 +oWP7+ayHNZ/zgp6kPwqcMWmLmaSISo5uZk3vFsQPeSghYA2FFn3XVDjxklb9tTNM +g9zXEJ9L/cb4Qr26fHMC4P99zVvh1Kxhe1fVSntb1IVYJ12/+CtgrKAmrhQhJ8Z3 +mjOAPF5GP/fDsaOGM8boXg25NSyqRsGFAnWAoOsk+xWq5Gd/bnc/9ASKL3x74xdh +8N0JqSDIvgmk0H5Ew7IwSjiqqewYmgeCK9u4nBit2uBGF6zPXQIDAQABo2MwYTAd +BgNVHQ4EFgQU2f4hQG6UnrybPZx9mCAZ5YwwYrIwHwYDVR0jBBgwFoAU2f4hQG6U +nrybPZx9mCAZ5YwwYrIwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYw +DQYJKoZIhvcNAQELBQADggIBAJ5Fngw7tu/hOsh80QA9z+LqBrWyOrsGS2h60COX +dKcs8AjYeVrXWoSK2BKaG9l9XE1wxaX5q+WjiYndAfrs3fnpkpfbsEZC89NiqpX+ +MWcUaViQCqoL7jcjx1BRtPV+nuN79+TMQjItSQzL/0kMmx40/W5ulop5A7Zv2wnL +/V9lFDfhOPXzYRZY5LVtDQsEGz9QLX+zx3oaFoBg+Iof6Rsqxvm6ARppv9JYx1RX +CI/hOWB3S6xZhBqI8d3LT3jX5+EzLfzuQfogsL7L9ziUwOHQhQ+77Sxzq+3+knYa +ZH9bDTMJBzN7Bj8RpFxwPIXAz+OQqIN3+tvmxYxoZxBnpVIt8MSZj3+/0WvitUfW +2dCFmU2Umw9Lje4AWkcdEQOsQRivh7dvDDqPys/cA8GiCcjl/YBeyGBCARsaU1q7 +N6a3vLqE6R5sGtRk2tRD/pOLS/IseRYQ1JMLiI+h2IYURpFHmygk71dSTlxCnKr3 +Sewn6EAes6aJInKc9Q0ztFijMDvd1GpUk74aTfOTlPf8hAs/hCBcNANExdqtvArB +As8e5ZTZ845b2EzwnexhF7sUMlQMAimTHpKG9n/v55IFDlndmQguLvqcAFLTxWYp +5KeXRKQOKIETNcX2b2TmQcTVL8w0RSXPQQCWPUouwpaYT05KnJe32x+SMsj/D1Fu +1uwJ +-----END CERTIFICATE----- + +# Issuer: CN=TrustCor ECA-1 O=TrustCor Systems S. de R.L. OU=TrustCor Certificate Authority +# Subject: CN=TrustCor ECA-1 O=TrustCor Systems S. de R.L. 
OU=TrustCor Certificate Authority +# Label: "TrustCor ECA-1" +# Serial: 9548242946988625984 +# MD5 Fingerprint: 27:92:23:1d:0a:f5:40:7c:e9:e6:6b:9d:d8:f5:e7:6c +# SHA1 Fingerprint: 58:d1:df:95:95:67:6b:63:c0:f0:5b:1c:17:4d:8b:84:0b:c8:78:bd +# SHA256 Fingerprint: 5a:88:5d:b1:9c:01:d9:12:c5:75:93:88:93:8c:af:bb:df:03:1a:b2:d4:8e:91:ee:15:58:9b:42:97:1d:03:9c +-----BEGIN CERTIFICATE----- +MIIEIDCCAwigAwIBAgIJAISCLF8cYtBAMA0GCSqGSIb3DQEBCwUAMIGcMQswCQYD +VQQGEwJQQTEPMA0GA1UECAwGUGFuYW1hMRQwEgYDVQQHDAtQYW5hbWEgQ2l0eTEk +MCIGA1UECgwbVHJ1c3RDb3IgU3lzdGVtcyBTLiBkZSBSLkwuMScwJQYDVQQLDB5U +cnVzdENvciBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxFzAVBgNVBAMMDlRydXN0Q29y +IEVDQS0xMB4XDTE2MDIwNDEyMzIzM1oXDTI5MTIzMTE3MjgwN1owgZwxCzAJBgNV +BAYTAlBBMQ8wDQYDVQQIDAZQYW5hbWExFDASBgNVBAcMC1BhbmFtYSBDaXR5MSQw +IgYDVQQKDBtUcnVzdENvciBTeXN0ZW1zIFMuIGRlIFIuTC4xJzAlBgNVBAsMHlRy +dXN0Q29yIENlcnRpZmljYXRlIEF1dGhvcml0eTEXMBUGA1UEAwwOVHJ1c3RDb3Ig +RUNBLTEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDPj+ARtZ+odnbb +3w9U73NjKYKtR8aja+3+XzP4Q1HpGjORMRegdMTUpwHmspI+ap3tDvl0mEDTPwOA +BoJA6LHip1GnHYMma6ve+heRK9jGrB6xnhkB1Zem6g23xFUfJ3zSCNV2HykVh0A5 +3ThFEXXQmqc04L/NyFIduUd+Dbi7xgz2c1cWWn5DkR9VOsZtRASqnKmcp0yJF4Ou +owReUoCLHhIlERnXDH19MURB6tuvsBzvgdAsxZohmz3tQjtQJvLsznFhBmIhVE5/ +wZ0+fyCMgMsq2JdiyIMzkX2woloPV+g7zPIlstR8L+xNxqE6FXrntl019fZISjZF +ZtS6mFjBAgMBAAGjYzBhMB0GA1UdDgQWBBREnkj1zG1I1KBLf/5ZJC+Dl5mahjAf +BgNVHSMEGDAWgBREnkj1zG1I1KBLf/5ZJC+Dl5mahjAPBgNVHRMBAf8EBTADAQH/ +MA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQsFAAOCAQEABT41XBVwm8nHc2Fv +civUwo/yQ10CzsSUuZQRg2dd4mdsdXa/uwyqNsatR5Nj3B5+1t4u/ukZMjgDfxT2 +AHMsWbEhBuH7rBiVDKP/mZb3Kyeb1STMHd3BOuCYRLDE5D53sXOpZCz2HAF8P11F +hcCF5yWPldwX8zyfGm6wyuMdKulMY/okYWLW2n62HGz1Ah3UKt1VkOsqEUc8Ll50 +soIipX1TH0XsJ5F95yIW6MBoNtjG8U+ARDL54dHRHareqKucBK+tIA5kmE2la8BI +WJZpTdwHjFGTot+fDz2LYLSCjaoITmJF4PkL0uDgPFveXHEnJcLmA4GLEFPjx1Wi +tJ/X5g== +-----END CERTIFICATE----- + +# Issuer: CN=SSL.com Root Certification Authority RSA O=SSL Corporation +# Subject: CN=SSL.com Root Certification Authority RSA O=SSL Corporation +# Label: "SSL.com Root Certification Authority RSA" +# Serial: 8875640296558310041 +# MD5 Fingerprint: 86:69:12:c0:70:f1:ec:ac:ac:c2:d5:bc:a5:5b:a1:29 +# SHA1 Fingerprint: b7:ab:33:08:d1:ea:44:77:ba:14:80:12:5a:6f:bd:a9:36:49:0c:bb +# SHA256 Fingerprint: 85:66:6a:56:2e:e0:be:5c:e9:25:c1:d8:89:0a:6f:76:a8:7e:c1:6d:4d:7d:5f:29:ea:74:19:cf:20:12:3b:69 +-----BEGIN CERTIFICATE----- +MIIF3TCCA8WgAwIBAgIIeyyb0xaAMpkwDQYJKoZIhvcNAQELBQAwfDELMAkGA1UE +BhMCVVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQK +DA9TU0wgQ29ycG9yYXRpb24xMTAvBgNVBAMMKFNTTC5jb20gUm9vdCBDZXJ0aWZp +Y2F0aW9uIEF1dGhvcml0eSBSU0EwHhcNMTYwMjEyMTczOTM5WhcNNDEwMjEyMTcz +OTM5WjB8MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hv +dXN0b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjExMC8GA1UEAwwoU1NMLmNv +bSBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IFJTQTCCAiIwDQYJKoZIhvcN +AQEBBQADggIPADCCAgoCggIBAPkP3aMrfcvQKv7sZ4Wm5y4bunfh4/WvpOz6Sl2R +xFdHaxh3a3by/ZPkPQ/CFp4LZsNWlJ4Xg4XOVu/yFv0AYvUiCVToZRdOQbngT0aX +qhvIuG5iXmmxX9sqAn78bMrzQdjt0Oj8P2FI7bADFB0QDksZ4LtO7IZl/zbzXmcC +C52GVWH9ejjt/uIZALdvoVBidXQ8oPrIJZK0bnoix/geoeOy3ZExqysdBP+lSgQ3 +6YWkMyv94tZVNHwZpEpox7Ko07fKoZOI68GXvIz5HdkihCR0xwQ9aqkpk8zruFvh +/l8lqjRYyMEjVJ0bmBHDOJx+PYZspQ9AhnwC9FwCTyjLrnGfDzrIM/4RJTXq/LrF +YD3ZfBjVsqnTdXgDciLKOsMf7yzlLqn6niy2UUb9rwPW6mBo6oUWNmuF6R7As93E +JNyAKoFBbZQ+yODJgUEAnl6/f8UImKIYLEJAs/lvOCdLToD0PYFH4Ih86hzOtXVc +US4cK38acijnALXRdMbX5J+tB5O2UzU1/Dfkw/ZdFr4hc96SCvigY2q8lpJqPvi8 +ZVWb3vUNiSYE/CUapiVpy8JtynziWV+XrOvvLsi81xtZPCvM8hnIk2snYxnP/Okm 
++Mpxm3+T/jRnhE6Z6/yzeAkzcLpmpnbtG3PrGqUNxCITIJRWCk4sbE6x/c+cCbqi +M+2HAgMBAAGjYzBhMB0GA1UdDgQWBBTdBAkHovV6fVJTEpKV7jiAJQ2mWTAPBgNV +HRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFN0ECQei9Xp9UlMSkpXuOIAlDaZZMA4G +A1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQsFAAOCAgEAIBgRlCn7Jp0cHh5wYfGV +cpNxJK1ok1iOMq8bs3AD/CUrdIWQPXhq9LmLpZc7tRiRux6n+UBbkflVma8eEdBc +Hadm47GUBwwyOabqG7B52B2ccETjit3E+ZUfijhDPwGFpUenPUayvOUiaPd7nNgs +PgohyC0zrL/FgZkxdMF1ccW+sfAjRfSda/wZY52jvATGGAslu1OJD7OAUN5F7kR/ +q5R4ZJjT9ijdh9hwZXT7DrkT66cPYakylszeu+1jTBi7qUD3oFRuIIhxdRjqerQ0 +cuAjJ3dctpDqhiVAq+8zD8ufgr6iIPv2tS0a5sKFsXQP+8hlAqRSAUfdSSLBv9jr +a6x+3uxjMxW3IwiPxg+NQVrdjsW5j+VFP3jbutIbQLH+cU0/4IGiul607BXgk90I +H37hVZkLId6Tngr75qNJvTYw/ud3sqB1l7UtgYgXZSD32pAAn8lSzDLKNXz1PQ/Y +K9f1JmzJBjSWFupwWRoyeXkLtoh/D1JIPb9s2KJELtFOt3JY04kTlf5Eq/jXixtu +nLwsoFvVagCvXzfh1foQC5ichucmj87w7G6KVwuA406ywKBjYZC6VWg3dGq2ktuf +oYYitmUnDuy2n0Jg5GfCtdpBC8TTi2EbvPofkSvXRAdeuims2cXp71NIWuuA8ShY +Ic2wBlX7Jz9TkHCpBB5XJ7k= +-----END CERTIFICATE----- + +# Issuer: CN=SSL.com Root Certification Authority ECC O=SSL Corporation +# Subject: CN=SSL.com Root Certification Authority ECC O=SSL Corporation +# Label: "SSL.com Root Certification Authority ECC" +# Serial: 8495723813297216424 +# MD5 Fingerprint: 2e:da:e4:39:7f:9c:8f:37:d1:70:9f:26:17:51:3a:8e +# SHA1 Fingerprint: c3:19:7c:39:24:e6:54:af:1b:c4:ab:20:95:7a:e2:c3:0e:13:02:6a +# SHA256 Fingerprint: 34:17:bb:06:cc:60:07:da:1b:96:1c:92:0b:8a:b4:ce:3f:ad:82:0e:4a:a3:0b:9a:cb:c4:a7:4e:bd:ce:bc:65 +-----BEGIN CERTIFICATE----- +MIICjTCCAhSgAwIBAgIIdebfy8FoW6gwCgYIKoZIzj0EAwIwfDELMAkGA1UEBhMC +VVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQKDA9T +U0wgQ29ycG9yYXRpb24xMTAvBgNVBAMMKFNTTC5jb20gUm9vdCBDZXJ0aWZpY2F0 +aW9uIEF1dGhvcml0eSBFQ0MwHhcNMTYwMjEyMTgxNDAzWhcNNDEwMjEyMTgxNDAz +WjB8MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hvdXN0 +b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjExMC8GA1UEAwwoU1NMLmNvbSBS +b290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IEVDQzB2MBAGByqGSM49AgEGBSuB +BAAiA2IABEVuqVDEpiM2nl8ojRfLliJkP9x6jh3MCLOicSS6jkm5BBtHllirLZXI +7Z4INcgn64mMU1jrYor+8FsPazFSY0E7ic3s7LaNGdM0B9y7xgZ/wkWV7Mt/qCPg +CemB+vNH06NjMGEwHQYDVR0OBBYEFILRhXMw5zUE044CkvvlpNHEIejNMA8GA1Ud +EwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUgtGFczDnNQTTjgKS++Wk0cQh6M0wDgYD +VR0PAQH/BAQDAgGGMAoGCCqGSM49BAMCA2cAMGQCMG/n61kRpGDPYbCWe+0F+S8T +kdzt5fxQaxFGRrMcIQBiu77D5+jNB5n5DQtdcj7EqgIwH7y6C+IwJPt8bYBVCpk+ +gA0z5Wajs6O7pdWLjwkspl1+4vAHCGht0nxpbl/f5Wpl +-----END CERTIFICATE----- + +# Issuer: CN=SSL.com EV Root Certification Authority RSA R2 O=SSL Corporation +# Subject: CN=SSL.com EV Root Certification Authority RSA R2 O=SSL Corporation +# Label: "SSL.com EV Root Certification Authority RSA R2" +# Serial: 6248227494352943350 +# MD5 Fingerprint: e1:1e:31:58:1a:ae:54:53:02:f6:17:6a:11:7b:4d:95 +# SHA1 Fingerprint: 74:3a:f0:52:9b:d0:32:a0:f4:4a:83:cd:d4:ba:a9:7b:7c:2e:c4:9a +# SHA256 Fingerprint: 2e:7b:f1:6c:c2:24:85:a7:bb:e2:aa:86:96:75:07:61:b0:ae:39:be:3b:2f:e9:d0:cc:6d:4e:f7:34:91:42:5c +-----BEGIN CERTIFICATE----- +MIIF6zCCA9OgAwIBAgIIVrYpzTS8ePYwDQYJKoZIhvcNAQELBQAwgYIxCzAJBgNV +BAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4GA1UEBwwHSG91c3RvbjEYMBYGA1UE +CgwPU1NMIENvcnBvcmF0aW9uMTcwNQYDVQQDDC5TU0wuY29tIEVWIFJvb3QgQ2Vy +dGlmaWNhdGlvbiBBdXRob3JpdHkgUlNBIFIyMB4XDTE3MDUzMTE4MTQzN1oXDTQy +MDUzMDE4MTQzN1owgYIxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4G +A1UEBwwHSG91c3RvbjEYMBYGA1UECgwPU1NMIENvcnBvcmF0aW9uMTcwNQYDVQQD +DC5TU0wuY29tIEVWIFJvb3QgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgUlNBIFIy +MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAjzZlQOHWTcDXtOlG2mvq 
+M0fNTPl9fb69LT3w23jhhqXZuglXaO1XPqDQCEGD5yhBJB/jchXQARr7XnAjssuf +OePPxU7Gkm0mxnu7s9onnQqG6YE3Bf7wcXHswxzpY6IXFJ3vG2fThVUCAtZJycxa +4bH3bzKfydQ7iEGonL3Lq9ttewkfokxykNorCPzPPFTOZw+oz12WGQvE43LrrdF9 +HSfvkusQv1vrO6/PgN3B0pYEW3p+pKk8OHakYo6gOV7qd89dAFmPZiw+B6KjBSYR +aZfqhbcPlgtLyEDhULouisv3D5oi53+aNxPN8k0TayHRwMwi8qFG9kRpnMphNQcA +b9ZhCBHqurj26bNg5U257J8UZslXWNvNh2n4ioYSA0e/ZhN2rHd9NCSFg83XqpyQ +Gp8hLH94t2S42Oim9HizVcuE0jLEeK6jj2HdzghTreyI/BXkmg3mnxp3zkyPuBQV +PWKchjgGAGYS5Fl2WlPAApiiECtoRHuOec4zSnaqW4EWG7WK2NAAe15itAnWhmMO +pgWVSbooi4iTsjQc2KRVbrcc0N6ZVTsj9CLg+SlmJuwgUHfbSguPvuUCYHBBXtSu +UDkiFCbLsjtzdFVHB3mBOagwE0TlBIqulhMlQg+5U8Sb/M3kHN48+qvWBkofZ6aY +MBzdLNvcGJVXZsb/XItW9XcCAwEAAaNjMGEwDwYDVR0TAQH/BAUwAwEB/zAfBgNV +HSMEGDAWgBT5YLvU49U09rj1BoAlp3PbRmmonjAdBgNVHQ4EFgQU+WC71OPVNPa4 +9QaAJadz20ZpqJ4wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4ICAQBW +s47LCp1Jjr+kxJG7ZhcFUZh1++VQLHqe8RT6q9OKPv+RKY9ji9i0qVQBDb6Thi/5 +Sm3HXvVX+cpVHBK+Rw82xd9qt9t1wkclf7nxY/hoLVUE0fKNsKTPvDxeH3jnpaAg +cLAExbf3cqfeIg29MyVGjGSSJuM+LmOW2puMPfgYCdcDzH2GguDKBAdRUNf/ktUM +79qGn5nX67evaOI5JpS6aLe/g9Pqemc9YmeuJeVy6OLk7K4S9ksrPJ/psEDzOFSz +/bdoyNrGj1E8svuR3Bznm53htw1yj+KkxKl4+esUrMZDBcJlOSgYAsOCsp0FvmXt +ll9ldDz7CTUue5wT/RsPXcdtgTpWD8w74a8CLyKsRspGPKAcTNZEtF4uXBVmCeEm +Kf7GUmG6sXP/wwyc5WxqlD8UykAWlYTzWamsX0xhk23RO8yilQwipmdnRC652dKK +QbNmC1r7fSOl8hqw/96bg5Qu0T/fkreRrwU7ZcegbLHNYhLDkBvjJc40vG93drEQ +w/cFGsDWr3RiSBd3kmmQYRzelYB0VI8YHMPzA9C/pEN1hlMYegouCRw2n5H9gooi +S9EOUCXdywMMF8mDAAhONU2Ki+3wApRmLER/y5UnlhetCTCstnEXbosX9hwJ1C07 +mKVx01QT2WDz9UtmT/rx7iASjbSsV7FFY6GsdqnC+w== +-----END CERTIFICATE----- + +# Issuer: CN=SSL.com EV Root Certification Authority ECC O=SSL Corporation +# Subject: CN=SSL.com EV Root Certification Authority ECC O=SSL Corporation +# Label: "SSL.com EV Root Certification Authority ECC" +# Serial: 3182246526754555285 +# MD5 Fingerprint: 59:53:22:65:83:42:01:54:c0:ce:42:b9:5a:7c:f2:90 +# SHA1 Fingerprint: 4c:dd:51:a3:d1:f5:20:32:14:b0:c6:c5:32:23:03:91:c7:46:42:6d +# SHA256 Fingerprint: 22:a2:c1:f7:bd:ed:70:4c:c1:e7:01:b5:f4:08:c3:10:88:0f:e9:56:b5:de:2a:4a:44:f9:9c:87:3a:25:a7:c8 +-----BEGIN CERTIFICATE----- +MIIClDCCAhqgAwIBAgIILCmcWxbtBZUwCgYIKoZIzj0EAwIwfzELMAkGA1UEBhMC +VVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQKDA9T +U0wgQ29ycG9yYXRpb24xNDAyBgNVBAMMK1NTTC5jb20gRVYgUm9vdCBDZXJ0aWZp +Y2F0aW9uIEF1dGhvcml0eSBFQ0MwHhcNMTYwMjEyMTgxNTIzWhcNNDEwMjEyMTgx +NTIzWjB/MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hv +dXN0b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjE0MDIGA1UEAwwrU1NMLmNv +bSBFViBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IEVDQzB2MBAGByqGSM49 +AgEGBSuBBAAiA2IABKoSR5CYG/vvw0AHgyBO8TCCogbR8pKGYfL2IWjKAMTH6kMA +VIbc/R/fALhBYlzccBYy3h+Z1MzFB8gIH2EWB1E9fVwHU+M1OIzfzZ/ZLg1Kthku +WnBaBu2+8KGwytAJKaNjMGEwHQYDVR0OBBYEFFvKXuXe0oGqzagtZFG22XKbl+ZP +MA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUW8pe5d7SgarNqC1kUbbZcpuX +5k8wDgYDVR0PAQH/BAQDAgGGMAoGCCqGSM49BAMCA2gAMGUCMQCK5kCJN+vp1RPZ +ytRrJPOwPYdGWBrssd9v+1a6cGvHOMzosYxPD/fxZ3YOg9AeUY8CMD32IygmTMZg +h5Mmm7I1HrrW9zzRHM76JTymGoEVW/MSD2zuZYrJh6j5B+BimoxcSg== +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R6 +# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R6 +# Label: "GlobalSign Root CA - R6" +# Serial: 1417766617973444989252670301619537 +# MD5 Fingerprint: 4f:dd:07:e4:d4:22:64:39:1e:0c:37:42:ea:d1:c6:ae +# SHA1 Fingerprint: 80:94:64:0e:b5:a7:a1:ca:11:9c:1f:dd:d5:9f:81:02:63:a7:fb:d1 +# SHA256 Fingerprint: 
2c:ab:ea:fe:37:d0:6c:a2:2a:ba:73:91:c0:03:3d:25:98:29:52:c4:53:64:73:49:76:3a:3a:b5:ad:6c:cf:69 +-----BEGIN CERTIFICATE----- +MIIFgzCCA2ugAwIBAgIORea7A4Mzw4VlSOb/RVEwDQYJKoZIhvcNAQEMBQAwTDEg +MB4GA1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjYxEzARBgNVBAoTCkdsb2Jh +bFNpZ24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMTQxMjEwMDAwMDAwWhcNMzQx +MjEwMDAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSNjET +MBEGA1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCAiIwDQYJ +KoZIhvcNAQEBBQADggIPADCCAgoCggIBAJUH6HPKZvnsFMp7PPcNCPG0RQssgrRI +xutbPK6DuEGSMxSkb3/pKszGsIhrxbaJ0cay/xTOURQh7ErdG1rG1ofuTToVBu1k +ZguSgMpE3nOUTvOniX9PeGMIyBJQbUJmL025eShNUhqKGoC3GYEOfsSKvGRMIRxD +aNc9PIrFsmbVkJq3MQbFvuJtMgamHvm566qjuL++gmNQ0PAYid/kD3n16qIfKtJw +LnvnvJO7bVPiSHyMEAc4/2ayd2F+4OqMPKq0pPbzlUoSB239jLKJz9CgYXfIWHSw +1CM69106yqLbnQneXUQtkPGBzVeS+n68UARjNN9rkxi+azayOeSsJDa38O+2HBNX +k7besvjihbdzorg1qkXy4J02oW9UivFyVm4uiMVRQkQVlO6jxTiWm05OWgtH8wY2 +SXcwvHE35absIQh1/OZhFj931dmRl4QKbNQCTXTAFO39OfuD8l4UoQSwC+n+7o/h +bguyCLNhZglqsQY6ZZZZwPA1/cnaKI0aEYdwgQqomnUdnjqGBQCe24DWJfncBZ4n +WUx2OVvq+aWh2IMP0f/fMBH5hc8zSPXKbWQULHpYT9NLCEnFlWQaYw55PfWzjMpY +rZxCRXluDocZXFSxZba/jJvcE+kNb7gu3GduyYsRtYQUigAZcIN5kZeR1Bonvzce +MgfYFGM8KEyvAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTAD +AQH/MB0GA1UdDgQWBBSubAWjkxPioufi1xzWx/B/yGdToDAfBgNVHSMEGDAWgBSu +bAWjkxPioufi1xzWx/B/yGdToDANBgkqhkiG9w0BAQwFAAOCAgEAgyXt6NH9lVLN +nsAEoJFp5lzQhN7craJP6Ed41mWYqVuoPId8AorRbrcWc+ZfwFSY1XS+wc3iEZGt +Ixg93eFyRJa0lV7Ae46ZeBZDE1ZXs6KzO7V33EByrKPrmzU+sQghoefEQzd5Mr61 +55wsTLxDKZmOMNOsIeDjHfrYBzN2VAAiKrlNIC5waNrlU/yDXNOd8v9EDERm8tLj +vUYAGm0CuiVdjaExUd1URhxN25mW7xocBFymFe944Hn+Xds+qkxV/ZoVqW/hpvvf +cDDpw+5CRu3CkwWJ+n1jez/QcYF8AOiYrg54NMMl+68KnyBr3TsTjxKM4kEaSHpz +oHdpx7Zcf4LIHv5YGygrqGytXm3ABdJ7t+uA/iU3/gKbaKxCXcPu9czc8FB10jZp +nOZ7BN9uBmm23goJSFmH63sUYHpkqmlD75HHTOwY3WzvUy2MmeFe8nI+z1TIvWfs +pA9MRf/TuTAjB0yPEL+GltmZWrSZVxykzLsViVO6LAUP5MSeGbEYNNVMnbrt9x+v +JJUEeKgDu+6B5dpffItKoZB0JaezPkvILFa9x8jvOOJckvB595yEunQtYQEgfn7R +8k8HWV+LLUNS60YMlOH1Zkd5d9VUWx+tJDfLRVpOoERIyNiwmcUVhAn21klJwGW4 +5hpxbqCo8YLoRT5s1gLXCmeDBVrJpBA= +-----END CERTIFICATE----- + +# Issuer: CN=OISTE WISeKey Global Root GC CA O=WISeKey OU=OISTE Foundation Endorsed +# Subject: CN=OISTE WISeKey Global Root GC CA O=WISeKey OU=OISTE Foundation Endorsed +# Label: "OISTE WISeKey Global Root GC CA" +# Serial: 44084345621038548146064804565436152554 +# MD5 Fingerprint: a9:d6:b9:2d:2f:93:64:f8:a5:69:ca:91:e9:68:07:23 +# SHA1 Fingerprint: e0:11:84:5e:34:de:be:88:81:b9:9c:f6:16:26:d1:96:1f:c3:b9:31 +# SHA256 Fingerprint: 85:60:f9:1c:36:24:da:ba:95:70:b5:fe:a0:db:e3:6f:f1:1a:83:23:be:94:86:85:4f:b3:f3:4a:55:71:19:8d +-----BEGIN CERTIFICATE----- +MIICaTCCAe+gAwIBAgIQISpWDK7aDKtARb8roi066jAKBggqhkjOPQQDAzBtMQsw +CQYDVQQGEwJDSDEQMA4GA1UEChMHV0lTZUtleTEiMCAGA1UECxMZT0lTVEUgRm91 +bmRhdGlvbiBFbmRvcnNlZDEoMCYGA1UEAxMfT0lTVEUgV0lTZUtleSBHbG9iYWwg +Um9vdCBHQyBDQTAeFw0xNzA1MDkwOTQ4MzRaFw00MjA1MDkwOTU4MzNaMG0xCzAJ +BgNVBAYTAkNIMRAwDgYDVQQKEwdXSVNlS2V5MSIwIAYDVQQLExlPSVNURSBGb3Vu +ZGF0aW9uIEVuZG9yc2VkMSgwJgYDVQQDEx9PSVNURSBXSVNlS2V5IEdsb2JhbCBS +b290IEdDIENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAETOlQwMYPchi82PG6s4ni +eUqjFqdrVCTbUf/q9Akkwwsin8tqJ4KBDdLArzHkdIJuyiXZjHWd8dvQmqJLIX4W +p2OQ0jnUsYd4XxiWD1AbNTcPasbc2RNNpI6QN+a9WzGRo1QwUjAOBgNVHQ8BAf8E +BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUSIcUrOPDnpBgOtfKie7T +rYy0UGYwEAYJKwYBBAGCNxUBBAMCAQAwCgYIKoZIzj0EAwMDaAAwZQIwJsdpW9zV +57LnyAyMjMPdeYwbY9XJUpROTYJKcx6ygISpJcBMWm1JKWB4E+J+SOtkAjEA2zQg +Mgj/mkkCtojeFK9dbJlxjRo/i9fgojaGHAeCOnZT/cKi7e97sIBPWA9LUzm9 +-----END CERTIFICATE----- + +# 
Issuer: CN=GTS Root R1 O=Google Trust Services LLC +# Subject: CN=GTS Root R1 O=Google Trust Services LLC +# Label: "GTS Root R1" +# Serial: 146587175971765017618439757810265552097 +# MD5 Fingerprint: 82:1a:ef:d4:d2:4a:f2:9f:e2:3d:97:06:14:70:72:85 +# SHA1 Fingerprint: e1:c9:50:e6:ef:22:f8:4c:56:45:72:8b:92:20:60:d7:d5:a7:a3:e8 +# SHA256 Fingerprint: 2a:57:54:71:e3:13:40:bc:21:58:1c:bd:2c:f1:3e:15:84:63:20:3e:ce:94:bc:f9:d3:cc:19:6b:f0:9a:54:72 +-----BEGIN CERTIFICATE----- +MIIFWjCCA0KgAwIBAgIQbkepxUtHDA3sM9CJuRz04TANBgkqhkiG9w0BAQwFADBH +MQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExM +QzEUMBIGA1UEAxMLR1RTIFJvb3QgUjEwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIy +MDAwMDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNl +cnZpY2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjEwggIiMA0GCSqGSIb3DQEB +AQUAA4ICDwAwggIKAoICAQC2EQKLHuOhd5s73L+UPreVp0A8of2C+X0yBoJx9vaM +f/vo27xqLpeXo4xL+Sv2sfnOhB2x+cWX3u+58qPpvBKJXqeqUqv4IyfLpLGcY9vX +mX7wCl7raKb0xlpHDU0QM+NOsROjyBhsS+z8CZDfnWQpJSMHobTSPS5g4M/SCYe7 +zUjwTcLCeoiKu7rPWRnWr4+wB7CeMfGCwcDfLqZtbBkOtdh+JhpFAz2weaSUKK0P +fyblqAj+lug8aJRT7oM6iCsVlgmy4HqMLnXWnOunVmSPlk9orj2XwoSPwLxAwAtc +vfaHszVsrBhQf4TgTM2S0yDpM7xSma8ytSmzJSq0SPly4cpk9+aCEI3oncKKiPo4 +Zor8Y/kB+Xj9e1x3+naH+uzfsQ55lVe0vSbv1gHR6xYKu44LtcXFilWr06zqkUsp +zBmkMiVOKvFlRNACzqrOSbTqn3yDsEB750Orp2yjj32JgfpMpf/VjsPOS+C12LOO +Rc92wO1AK/1TD7Cn1TsNsYqiA94xrcx36m97PtbfkSIS5r762DL8EGMUUXLeXdYW +k70paDPvOmbsB4om3xPXV2V4J95eSRQAogB/mqghtqmxlbCluQ0WEdrHbEg8QOB+ +DVrNVjzRlwW5y0vtOUucxD/SVRNuJLDWcfr0wbrM7Rv1/oFB2ACYPTrIrnqYNxgF +lQIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV +HQ4EFgQU5K8rJnEaK0gnhS9SZizv8IkTcT4wDQYJKoZIhvcNAQEMBQADggIBADiW +Cu49tJYeX++dnAsznyvgyv3SjgofQXSlfKqE1OXyHuY3UjKcC9FhHb8owbZEKTV1 +d5iyfNm9dKyKaOOpMQkpAWBz40d8U6iQSifvS9efk+eCNs6aaAyC58/UEBZvXw6Z +XPYfcX3v73svfuo21pdwCxXu11xWajOl40k4DLh9+42FpLFZXvRq4d2h9mREruZR +gyFmxhE+885H7pwoHyXa/6xmld01D1zvICxi/ZG6qcz8WpyTgYMpl0p8WnK0OdC3 +d8t5/Wk6kjftbjhlRn7pYL15iJdfOBL07q9bgsiG1eGZbYwE8na6SfZu6W0eX6Dv +J4J2QPim01hcDyxC2kLGe4g0x8HYRZvBPsVhHdljUEn2NIVq4BjFbkerQUIpm/Zg +DdIx02OYI5NaAIFItO/Nis3Jz5nu2Z6qNuFoS3FJFDYoOj0dzpqPJeaAcWErtXvM ++SUWgeExX6GjfhaknBZqlxi9dnKlC54dNuYvoS++cJEPqOba+MSSQGwlfnuzCdyy +F62ARPBopY+Udf90WuioAnwMCeKpSwughQtiue+hMZL77/ZRBIls6Kl0obsXs7X9 +SQ98POyDGCBDTtWTurQ0sR8WNh8M5mQ5Fkzc4P4dyKliPUDqysU0ArSuiYgzNdws +E3PYJ/HQcu51OyLemGhmW/HGY0dVHLqlCFF1pkgl +-----END CERTIFICATE----- + +# Issuer: CN=GTS Root R2 O=Google Trust Services LLC +# Subject: CN=GTS Root R2 O=Google Trust Services LLC +# Label: "GTS Root R2" +# Serial: 146587176055767053814479386953112547951 +# MD5 Fingerprint: 44:ed:9a:0e:a4:09:3b:00:f2:ae:4c:a3:c6:61:b0:8b +# SHA1 Fingerprint: d2:73:96:2a:2a:5e:39:9f:73:3f:e1:c7:1e:64:3f:03:38:34:fc:4d +# SHA256 Fingerprint: c4:5d:7b:b0:8e:6d:67:e6:2e:42:35:11:0b:56:4e:5f:78:fd:92:ef:05:8c:84:0a:ea:4e:64:55:d7:58:5c:60 +-----BEGIN CERTIFICATE----- +MIIFWjCCA0KgAwIBAgIQbkepxlqz5yDFMJo/aFLybzANBgkqhkiG9w0BAQwFADBH +MQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExM +QzEUMBIGA1UEAxMLR1RTIFJvb3QgUjIwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIy +MDAwMDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNl +cnZpY2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjIwggIiMA0GCSqGSIb3DQEB +AQUAA4ICDwAwggIKAoICAQDO3v2m++zsFDQ8BwZabFn3GTXd98GdVarTzTukk3Lv +CvptnfbwhYBboUhSnznFt+4orO/LdmgUud+tAWyZH8QiHZ/+cnfgLFuv5AS/T3Kg +GjSY6Dlo7JUle3ah5mm5hRm9iYz+re026nO8/4Piy33B0s5Ks40FnotJk9/BW9Bu +XvAuMC6C/Pq8tBcKSOWIm8Wba96wyrQD8Nr0kLhlZPdcTK3ofmZemde4wj7I0BOd +re7kRXuJVfeKH2JShBKzwkCX44ofR5GmdFrS+LFjKBC4swm4VndAoiaYecb+3yXu 
+PuWgf9RhD1FLPD+M2uFwdNjCaKH5wQzpoeJ/u1U8dgbuak7MkogwTZq9TwtImoS1 +mKPV+3PBV2HdKFZ1E66HjucMUQkQdYhMvI35ezzUIkgfKtzra7tEscszcTJGr61K +8YzodDqs5xoic4DSMPclQsciOzsSrZYuxsN2B6ogtzVJV+mSSeh2FnIxZyuWfoqj +x5RWIr9qS34BIbIjMt/kmkRtWVtd9QCgHJvGeJeNkP+byKq0rxFROV7Z+2et1VsR +nTKaG73VululycslaVNVJ1zgyjbLiGH7HrfQy+4W+9OmTN6SpdTi3/UGVN4unUu0 +kzCqgc7dGtxRcw1PcOnlthYhGXmy5okLdWTK1au8CcEYof/UVKGFPP0UJAOyh9Ok +twIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV +HQ4EFgQUu//KjiOfT5nK2+JopqUVJxce2Q4wDQYJKoZIhvcNAQEMBQADggIBALZp +8KZ3/p7uC4Gt4cCpx/k1HUCCq+YEtN/L9x0Pg/B+E02NjO7jMyLDOfxA325BS0JT +vhaI8dI4XsRomRyYUpOM52jtG2pzegVATX9lO9ZY8c6DR2Dj/5epnGB3GFW1fgiT +z9D2PGcDFWEJ+YF59exTpJ/JjwGLc8R3dtyDovUMSRqodt6Sm2T4syzFJ9MHwAiA +pJiS4wGWAqoC7o87xdFtCjMwc3i5T1QWvwsHoaRc5svJXISPD+AVdyx+Jn7axEvb +pxZ3B7DNdehyQtaVhJ2Gg/LkkM0JR9SLA3DaWsYDQvTtN6LwG1BUSw7YhN4ZKJmB +R64JGz9I0cNv4rBgF/XuIwKl2gBbbZCr7qLpGzvpx0QnRY5rn/WkhLx3+WuXrD5R +RaIRpsyF7gpo8j5QOHokYh4XIDdtak23CZvJ/KRY9bb7nE4Yu5UC56GtmwfuNmsk +0jmGwZODUNKBRqhfYlcsu2xkiAhu7xNUX90txGdj08+JN7+dIPT7eoOboB6BAFDC +5AwiWVIQ7UNWhwD4FFKnHYuTjKJNRn8nxnGbJN7k2oaLDX5rIMHAnuFl2GqjpuiF +izoHCBy69Y9Vmhh1fuXsgWbRIXOhNUQLgD1bnF5vKheW0YMjiGZt5obicDIvUiLn +yOd/xCxgXS/Dr55FBcOEArf9LAhST4Ldo/DUhgkC +-----END CERTIFICATE----- + +# Issuer: CN=GTS Root R3 O=Google Trust Services LLC +# Subject: CN=GTS Root R3 O=Google Trust Services LLC +# Label: "GTS Root R3" +# Serial: 146587176140553309517047991083707763997 +# MD5 Fingerprint: 1a:79:5b:6b:04:52:9c:5d:c7:74:33:1b:25:9a:f9:25 +# SHA1 Fingerprint: 30:d4:24:6f:07:ff:db:91:89:8a:0b:e9:49:66:11:eb:8c:5e:46:e5 +# SHA256 Fingerprint: 15:d5:b8:77:46:19:ea:7d:54:ce:1c:a6:d0:b0:c4:03:e0:37:a9:17:f1:31:e8:a0:4e:1e:6b:7a:71:ba:bc:e5 +-----BEGIN CERTIFICATE----- +MIICDDCCAZGgAwIBAgIQbkepx2ypcyRAiQ8DVd2NHTAKBggqhkjOPQQDAzBHMQsw +CQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEU +MBIGA1UEAxMLR1RTIFJvb3QgUjMwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAw +MDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZp +Y2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjMwdjAQBgcqhkjOPQIBBgUrgQQA +IgNiAAQfTzOHMymKoYTey8chWEGJ6ladK0uFxh1MJ7x/JlFyb+Kf1qPKzEUURout +736GjOyxfi//qXGdGIRFBEFVbivqJn+7kAHjSxm65FSWRQmx1WyRRK2EE46ajA2A +DDL24CejQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud +DgQWBBTB8Sa6oC2uhYHP0/EqEr24Cmf9vDAKBggqhkjOPQQDAwNpADBmAjEAgFuk +fCPAlaUs3L6JbyO5o91lAFJekazInXJ0glMLfalAvWhgxeG4VDvBNhcl2MG9AjEA +njWSdIUlUfUk7GRSJFClH9voy8l27OyCbvWFGFPouOOaKaqW04MjyaR7YbPMAuhd +-----END CERTIFICATE----- + +# Issuer: CN=GTS Root R4 O=Google Trust Services LLC +# Subject: CN=GTS Root R4 O=Google Trust Services LLC +# Label: "GTS Root R4" +# Serial: 146587176229350439916519468929765261721 +# MD5 Fingerprint: 5d:b6:6a:c4:60:17:24:6a:1a:99:a8:4b:ee:5e:b4:26 +# SHA1 Fingerprint: 2a:1d:60:27:d9:4a:b1:0a:1c:4d:91:5c:cd:33:a0:cb:3e:2d:54:cb +# SHA256 Fingerprint: 71:cc:a5:39:1f:9e:79:4b:04:80:25:30:b3:63:e1:21:da:8a:30:43:bb:26:66:2f:ea:4d:ca:7f:c9:51:a4:bd +-----BEGIN CERTIFICATE----- +MIICCjCCAZGgAwIBAgIQbkepyIuUtui7OyrYorLBmTAKBggqhkjOPQQDAzBHMQsw +CQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEU +MBIGA1UEAxMLR1RTIFJvb3QgUjQwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAw +MDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZp +Y2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjQwdjAQBgcqhkjOPQIBBgUrgQQA +IgNiAATzdHOnaItgrkO4NcWBMHtLSZ37wWHO5t5GvWvVYRg1rkDdc/eJkTBa6zzu +hXyiQHY7qca4R9gq55KRanPpsXI5nymfopjTX15YhmUPoYRlBtHci8nHc8iMai/l +xKvRHYqjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud 
+DgQWBBSATNbrdP9JNqPV2Py1PsVq8JQdjDAKBggqhkjOPQQDAwNnADBkAjBqUFJ0 +CMRw3J5QdCHojXohw0+WbhXRIjVhLfoIN+4Zba3bssx9BzT1YBkstTTZbyACMANx +sbqjYAuG7ZoIapVon+Kz4ZNkfF6Tpt95LY2F45TPI11xzPKwTdb+mciUqXWi4w== +-----END CERTIFICATE----- + +# Issuer: CN=UCA Global G2 Root O=UniTrust +# Subject: CN=UCA Global G2 Root O=UniTrust +# Label: "UCA Global G2 Root" +# Serial: 124779693093741543919145257850076631279 +# MD5 Fingerprint: 80:fe:f0:c4:4a:f0:5c:62:32:9f:1c:ba:78:a9:50:f8 +# SHA1 Fingerprint: 28:f9:78:16:19:7a:ff:18:25:18:aa:44:fe:c1:a0:ce:5c:b6:4c:8a +# SHA256 Fingerprint: 9b:ea:11:c9:76:fe:01:47:64:c1:be:56:a6:f9:14:b5:a5:60:31:7a:bd:99:88:39:33:82:e5:16:1a:a0:49:3c +-----BEGIN CERTIFICATE----- +MIIFRjCCAy6gAwIBAgIQXd+x2lqj7V2+WmUgZQOQ7zANBgkqhkiG9w0BAQsFADA9 +MQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxGzAZBgNVBAMMElVDQSBH +bG9iYWwgRzIgUm9vdDAeFw0xNjAzMTEwMDAwMDBaFw00MDEyMzEwMDAwMDBaMD0x +CzAJBgNVBAYTAkNOMREwDwYDVQQKDAhVbmlUcnVzdDEbMBkGA1UEAwwSVUNBIEds +b2JhbCBHMiBSb290MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxeYr +b3zvJgUno4Ek2m/LAfmZmqkywiKHYUGRO8vDaBsGxUypK8FnFyIdK+35KYmToni9 +kmugow2ifsqTs6bRjDXVdfkX9s9FxeV67HeToI8jrg4aA3++1NDtLnurRiNb/yzm +VHqUwCoV8MmNsHo7JOHXaOIxPAYzRrZUEaalLyJUKlgNAQLx+hVRZ2zA+te2G3/R +VogvGjqNO7uCEeBHANBSh6v7hn4PJGtAnTRnvI3HLYZveT6OqTwXS3+wmeOwcWDc +C/Vkw85DvG1xudLeJ1uK6NjGruFZfc8oLTW4lVYa8bJYS7cSN8h8s+1LgOGN+jIj +tm+3SJUIsUROhYw6AlQgL9+/V087OpAh18EmNVQg7Mc/R+zvWr9LesGtOxdQXGLY +D0tK3Cv6brxzks3sx1DoQZbXqX5t2Okdj4q1uViSukqSKwxW/YDrCPBeKW4bHAyv +j5OJrdu9o54hyokZ7N+1wxrrFv54NkzWbtA+FxyQF2smuvt6L78RHBgOLXMDj6Dl +NaBa4kx1HXHhOThTeEDMg5PXCp6dW4+K5OXgSORIskfNTip1KnvyIvbJvgmRlld6 +iIis7nCs+dwp4wwcOxJORNanTrAmyPPZGpeRaOrvjUYG0lZFWJo8DA+DuAUlwznP +O6Q0ibd5Ei9Hxeepl2n8pndntd978XplFeRhVmUCAwEAAaNCMEAwDgYDVR0PAQH/ +BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFIHEjMz15DD/pQwIX4wV +ZyF0Ad/fMA0GCSqGSIb3DQEBCwUAA4ICAQATZSL1jiutROTL/7lo5sOASD0Ee/oj +L3rtNtqyzm325p7lX1iPyzcyochltq44PTUbPrw7tgTQvPlJ9Zv3hcU2tsu8+Mg5 +1eRfB70VVJd0ysrtT7q6ZHafgbiERUlMjW+i67HM0cOU2kTC5uLqGOiiHycFutfl +1qnN3e92mI0ADs0b+gO3joBYDic/UvuUospeZcnWhNq5NXHzJsBPd+aBJ9J3O5oU +b3n09tDh05S60FdRvScFDcH9yBIw7m+NESsIndTUv4BFFJqIRNow6rSn4+7vW4LV +PtateJLbXDzz2K36uGt/xDYotgIVilQsnLAXc47QN6MUPJiVAAwpBVueSUmxX8fj +y88nZY41F7dXyDDZQVu5FLbowg+UMaeUmMxq67XhJ/UQqAHojhJi6IjMtX9Gl8Cb +EGY4GjZGXyJoPd/JxhMnq1MGrKI8hgZlb7F+sSlEmqO6SWkoaY/X5V+tBIZkbxqg +DMUIYs6Ao9Dz7GjevjPHF1t/gMRMTLGmhIrDO7gJzRSBuhjjVFc2/tsvfEehOjPI ++Vg7RE+xygKJBJYoaMVLuCaJu9YzL1DV/pqJuhgyklTGW+Cd+V7lDSKb9triyCGy +YiGqhkCyLmTTX8jjfhFnRR8F/uOi77Oos/N9j/gMHyIfLXC0uAE0djAA5SN4p1bX +UB+K+wb1whnw0A== +-----END CERTIFICATE----- + +# Issuer: CN=UCA Extended Validation Root O=UniTrust +# Subject: CN=UCA Extended Validation Root O=UniTrust +# Label: "UCA Extended Validation Root" +# Serial: 106100277556486529736699587978573607008 +# MD5 Fingerprint: a1:f3:5f:43:c6:34:9b:da:bf:8c:7e:05:53:ad:96:e2 +# SHA1 Fingerprint: a3:a1:b0:6f:24:61:23:4a:e3:36:a5:c2:37:fc:a6:ff:dd:f0:d7:3a +# SHA256 Fingerprint: d4:3a:f9:b3:54:73:75:5c:96:84:fc:06:d7:d8:cb:70:ee:5c:28:e7:73:fb:29:4e:b4:1e:e7:17:22:92:4d:24 +-----BEGIN CERTIFICATE----- +MIIFWjCCA0KgAwIBAgIQT9Irj/VkyDOeTzRYZiNwYDANBgkqhkiG9w0BAQsFADBH +MQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxJTAjBgNVBAMMHFVDQSBF +eHRlbmRlZCBWYWxpZGF0aW9uIFJvb3QwHhcNMTUwMzEzMDAwMDAwWhcNMzgxMjMx +MDAwMDAwWjBHMQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxJTAjBgNV +BAMMHFVDQSBFeHRlbmRlZCBWYWxpZGF0aW9uIFJvb3QwggIiMA0GCSqGSIb3DQEB +AQUAA4ICDwAwggIKAoICAQCpCQcoEwKwmeBkqh5DFnpzsZGgdT6o+uM4AHrsiWog +D4vFsJszA1qGxliG1cGFu0/GnEBNyr7uaZa4rYEwmnySBesFK5pI0Lh2PpbIILvS 
+sPGP2KxFRv+qZ2C0d35qHzwaUnoEPQc8hQ2E0B92CvdqFN9y4zR8V05WAT558aop +O2z6+I9tTcg1367r3CTueUWnhbYFiN6IXSV8l2RnCdm/WhUFhvMJHuxYMjMR83dk +sHYf5BA1FxvyDrFspCqjc/wJHx4yGVMR59mzLC52LqGj3n5qiAno8geK+LLNEOfi +c0CTuwjRP+H8C5SzJe98ptfRr5//lpr1kXuYC3fUfugH0mK1lTnj8/FtDw5lhIpj +VMWAtuCeS31HJqcBCF3RiJ7XwzJE+oJKCmhUfzhTA8ykADNkUVkLo4KRel7sFsLz +KuZi2irbWWIQJUoqgQtHB0MGcIfS+pMRKXpITeuUx3BNr2fVUbGAIAEBtHoIppB/ +TuDvB0GHr2qlXov7z1CymlSvw4m6WC31MJixNnI5fkkE/SmnTHnkBVfblLkWU41G +sx2VYVdWf6/wFlthWG82UBEL2KwrlRYaDh8IzTY0ZRBiZtWAXxQgXy0MoHgKaNYs +1+lvK9JKBZP8nm9rZ/+I8U6laUpSNwXqxhaN0sSZ0YIrO7o1dfdRUVjzyAfd5LQD +fwIDAQABo0IwQDAdBgNVHQ4EFgQU2XQ65DA9DfcS3H5aBZ8eNJr34RQwDwYDVR0T +AQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQADggIBADaN +l8xCFWQpN5smLNb7rhVpLGsaGvdftvkHTFnq88nIua7Mui563MD1sC3AO6+fcAUR +ap8lTwEpcOPlDOHqWnzcSbvBHiqB9RZLcpHIojG5qtr8nR/zXUACE/xOHAbKsxSQ +VBcZEhrxH9cMaVr2cXj0lH2RC47skFSOvG+hTKv8dGT9cZr4QQehzZHkPJrgmzI5 +c6sq1WnIeJEmMX3ixzDx/BR4dxIOE/TdFpS/S2d7cFOFyrC78zhNLJA5wA3CXWvp +4uXViI3WLL+rG761KIcSF3Ru/H38j9CHJrAb+7lsq+KePRXBOy5nAliRn+/4Qh8s +t2j1da3Ptfb/EX3C8CSlrdP6oDyp+l3cpaDvRKS+1ujl5BOWF3sGPjLtx7dCvHaj +2GU4Kzg1USEODm8uNBNA4StnDG1KQTAYI1oyVZnJF+A83vbsea0rWBmirSwiGpWO +vpaQXUJXxPkUAzUrHC1RVwinOt4/5Mi0A3PCwSaAuwtCH60NryZy2sy+s6ODWA2C +xR9GUeOcGMyNm43sSet1UNWMKFnKdDTajAshqx7qG+XH/RU+wBeq+yNuJkbL+vmx +cmtpzyKEC2IPrNkZAJSidjzULZrtBJ4tBmIQN1IchXIbJ+XMxjHsN+xjWZsLHXbM +fjKaiJUINlK73nZfdklJrX+9ZSCyycErdhh2n1ax +-----END CERTIFICATE----- + +# Issuer: CN=Certigna Root CA O=Dhimyotis OU=0002 48146308100036 +# Subject: CN=Certigna Root CA O=Dhimyotis OU=0002 48146308100036 +# Label: "Certigna Root CA" +# Serial: 269714418870597844693661054334862075617 +# MD5 Fingerprint: 0e:5c:30:62:27:eb:5b:bc:d7:ae:62:ba:e9:d5:df:77 +# SHA1 Fingerprint: 2d:0d:52:14:ff:9e:ad:99:24:01:74:20:47:6e:6c:85:27:27:f5:43 +# SHA256 Fingerprint: d4:8d:3d:23:ee:db:50:a4:59:e5:51:97:60:1c:27:77:4b:9d:7b:18:c9:4d:5a:05:95:11:a1:02:50:b9:31:68 +-----BEGIN CERTIFICATE----- +MIIGWzCCBEOgAwIBAgIRAMrpG4nxVQMNo+ZBbcTjpuEwDQYJKoZIhvcNAQELBQAw +WjELMAkGA1UEBhMCRlIxEjAQBgNVBAoMCURoaW15b3RpczEcMBoGA1UECwwTMDAw +MiA0ODE0NjMwODEwMDAzNjEZMBcGA1UEAwwQQ2VydGlnbmEgUm9vdCBDQTAeFw0x +MzEwMDEwODMyMjdaFw0zMzEwMDEwODMyMjdaMFoxCzAJBgNVBAYTAkZSMRIwEAYD +VQQKDAlEaGlteW90aXMxHDAaBgNVBAsMEzAwMDIgNDgxNDYzMDgxMDAwMzYxGTAX +BgNVBAMMEENlcnRpZ25hIFJvb3QgQ0EwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAw +ggIKAoICAQDNGDllGlmx6mQWDoyUJJV8g9PFOSbcDO8WV43X2KyjQn+Cyu3NW9sO +ty3tRQgXstmzy9YXUnIo245Onoq2C/mehJpNdt4iKVzSs9IGPjA5qXSjklYcoW9M +CiBtnyN6tMbaLOQdLNyzKNAT8kxOAkmhVECe5uUFoC2EyP+YbNDrihqECB63aCPu +I9Vwzm1RaRDuoXrC0SIxwoKF0vJVdlB8JXrJhFwLrN1CTivngqIkicuQstDuI7pm +TLtipPlTWmR7fJj6o0ieD5Wupxj0auwuA0Wv8HT4Ks16XdG+RCYyKfHx9WzMfgIh +C59vpD++nVPiz32pLHxYGpfhPTc3GGYo0kDFUYqMwy3OU4gkWGQwFsWq4NYKpkDf +ePb1BHxpE4S80dGnBs8B92jAqFe7OmGtBIyT46388NtEbVncSVmurJqZNjBBe3Yz +IoejwpKGbvlw7q6Hh5UbxHq9MfPU0uWZ/75I7HX1eBYdpnDBfzwboZL7z8g81sWT +Co/1VTp2lc5ZmIoJlXcymoO6LAQ6l73UL77XbJuiyn1tJslV1c/DeVIICZkHJC1k +JWumIWmbat10TWuXekG9qxf5kBdIjzb5LdXF2+6qhUVB+s06RbFo5jZMm5BX7CO5 +hwjCxAnxl4YqKE3idMDaxIzb3+KhF1nOJFl0Mdp//TBt2dzhauH8XwIDAQABo4IB +GjCCARYwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE +FBiHVuBud+4kNTxOc5of1uHieX4rMB8GA1UdIwQYMBaAFBiHVuBud+4kNTxOc5of +1uHieX4rMEQGA1UdIAQ9MDswOQYEVR0gADAxMC8GCCsGAQUFBwIBFiNodHRwczov +L3d3d3cuY2VydGlnbmEuZnIvYXV0b3JpdGVzLzBtBgNVHR8EZjBkMC+gLaArhilo +dHRwOi8vY3JsLmNlcnRpZ25hLmZyL2NlcnRpZ25hcm9vdGNhLmNybDAxoC+gLYYr +aHR0cDovL2NybC5kaGlteW90aXMuY29tL2NlcnRpZ25hcm9vdGNhLmNybDANBgkq +hkiG9w0BAQsFAAOCAgEAlLieT/DjlQgi581oQfccVdV8AOItOoldaDgvUSILSo3L 
+6btdPrtcPbEo/uRTVRPPoZAbAh1fZkYJMyjhDSSXcNMQH+pkV5a7XdrnxIxPTGRG +HVyH41neQtGbqH6mid2PHMkwgu07nM3A6RngatgCdTer9zQoKJHyBApPNeNgJgH6 +0BGM+RFq7q89w1DTj18zeTyGqHNFkIwgtnJzFyO+B2XleJINugHA64wcZr+shncB +lA2c5uk5jR+mUYyZDDl34bSb+hxnV29qao6pK0xXeXpXIs/NX2NGjVxZOob4Mkdi +o2cNGJHc+6Zr9UhhcyNZjgKnvETq9Emd8VRY+WCv2hikLyhF3HqgiIZd8zvn/yk1 +gPxkQ5Tm4xxvvq0OKmOZK8l+hfZx6AYDlf7ej0gcWtSS6Cvu5zHbugRqh5jnxV/v +faci9wHYTfmJ0A6aBVmknpjZbyvKcL5kwlWj9Omvw5Ip3IgWJJk8jSaYtlu3zM63 +Nwf9JtmYhST/WSMDmu2dnajkXjjO11INb9I/bbEFa0nOipFGc/T2L/Coc3cOZayh +jWZSaX5LaAzHHjcng6WMxwLkFM1JAbBzs/3GkDpv0mztO+7skb6iQ12LAEpmJURw +3kAP+HwV96LOPNdeE4yBFxgX0b3xdxA61GU5wSesVywlVP+i2k+KYTlerj1KjL0= +-----END CERTIFICATE----- + +# Issuer: CN=emSign Root CA - G1 O=eMudhra Technologies Limited OU=emSign PKI +# Subject: CN=emSign Root CA - G1 O=eMudhra Technologies Limited OU=emSign PKI +# Label: "emSign Root CA - G1" +# Serial: 235931866688319308814040 +# MD5 Fingerprint: 9c:42:84:57:dd:cb:0b:a7:2e:95:ad:b6:f3:da:bc:ac +# SHA1 Fingerprint: 8a:c7:ad:8f:73:ac:4e:c1:b5:75:4d:a5:40:f4:fc:cf:7c:b5:8e:8c +# SHA256 Fingerprint: 40:f6:af:03:46:a9:9a:a1:cd:1d:55:5a:4e:9c:ce:62:c7:f9:63:46:03:ee:40:66:15:83:3d:c8:c8:d0:03:67 +-----BEGIN CERTIFICATE----- +MIIDlDCCAnygAwIBAgIKMfXkYgxsWO3W2DANBgkqhkiG9w0BAQsFADBnMQswCQYD +VQQGEwJJTjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBU +ZWNobm9sb2dpZXMgTGltaXRlZDEcMBoGA1UEAxMTZW1TaWduIFJvb3QgQ0EgLSBH +MTAeFw0xODAyMTgxODMwMDBaFw00MzAyMTgxODMwMDBaMGcxCzAJBgNVBAYTAklO +MRMwEQYDVQQLEwplbVNpZ24gUEtJMSUwIwYDVQQKExxlTXVkaHJhIFRlY2hub2xv +Z2llcyBMaW1pdGVkMRwwGgYDVQQDExNlbVNpZ24gUm9vdCBDQSAtIEcxMIIBIjAN +BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAk0u76WaK7p1b1TST0Bsew+eeuGQz +f2N4aLTNLnF115sgxk0pvLZoYIr3IZpWNVrzdr3YzZr/k1ZLpVkGoZM0Kd0WNHVO +8oG0x5ZOrRkVUkr+PHB1cM2vK6sVmjM8qrOLqs1D/fXqcP/tzxE7lM5OMhbTI0Aq +d7OvPAEsbO2ZLIvZTmmYsvePQbAyeGHWDV/D+qJAkh1cF+ZwPjXnorfCYuKrpDhM +tTk1b+oDafo6VGiFbdbyL0NVHpENDtjVaqSW0RM8LHhQ6DqS0hdW5TUaQBw+jSzt +Od9C4INBdN+jzcKGYEho42kLVACL5HZpIQ15TjQIXhTCzLG3rdd8cIrHhQIDAQAB +o0IwQDAdBgNVHQ4EFgQU++8Nhp6w492pufEhF38+/PB3KxowDgYDVR0PAQH/BAQD +AgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAFn/8oz1h31x +PaOfG1vR2vjTnGs2vZupYeveFix0PZ7mddrXuqe8QhfnPZHr5X3dPpzxz5KsbEjM +wiI/aTvFthUvozXGaCocV685743QNcMYDHsAVhzNixl03r4PEuDQqqE/AjSxcM6d +GNYIAwlG7mDgfrbESQRRfXBgvKqy/3lyeqYdPV8q+Mri/Tm3R7nrft8EI6/6nAYH +6ftjk4BAtcZsCjEozgyfz7MjNYBBjWzEN3uBL4ChQEKF6dk4jeihU80Bv2noWgby +RQuQ+q7hv53yrlc8pa6yVvSLZUDp/TGBLPQ5Cdjua6e0ph0VpZj3AYHYhX3zUVxx +iN66zB+Afko= +-----END CERTIFICATE----- + +# Issuer: CN=emSign ECC Root CA - G3 O=eMudhra Technologies Limited OU=emSign PKI +# Subject: CN=emSign ECC Root CA - G3 O=eMudhra Technologies Limited OU=emSign PKI +# Label: "emSign ECC Root CA - G3" +# Serial: 287880440101571086945156 +# MD5 Fingerprint: ce:0b:72:d1:9f:88:8e:d0:50:03:e8:e3:b8:8b:67:40 +# SHA1 Fingerprint: 30:43:fa:4f:f2:57:dc:a0:c3:80:ee:2e:58:ea:78:b2:3f:e6:bb:c1 +# SHA256 Fingerprint: 86:a1:ec:ba:08:9c:4a:8d:3b:be:27:34:c6:12:ba:34:1d:81:3e:04:3c:f9:e8:a8:62:cd:5c:57:a3:6b:be:6b +-----BEGIN CERTIFICATE----- +MIICTjCCAdOgAwIBAgIKPPYHqWhwDtqLhDAKBggqhkjOPQQDAzBrMQswCQYDVQQG +EwJJTjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBUZWNo +bm9sb2dpZXMgTGltaXRlZDEgMB4GA1UEAxMXZW1TaWduIEVDQyBSb290IENBIC0g +RzMwHhcNMTgwMjE4MTgzMDAwWhcNNDMwMjE4MTgzMDAwWjBrMQswCQYDVQQGEwJJ +TjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBUZWNobm9s +b2dpZXMgTGltaXRlZDEgMB4GA1UEAxMXZW1TaWduIEVDQyBSb290IENBIC0gRzMw +djAQBgcqhkjOPQIBBgUrgQQAIgNiAAQjpQy4LRL1KPOxst3iAhKAnjlfSU2fySU0 
+WXTsuwYc58Byr+iuL+FBVIcUqEqy6HyC5ltqtdyzdc6LBtCGI79G1Y4PPwT01xyS +fvalY8L1X44uT6EYGQIrMgqCZH0Wk9GjQjBAMB0GA1UdDgQWBBR8XQKEE9TMipuB +zhccLikenEhjQjAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAKBggq +hkjOPQQDAwNpADBmAjEAvvNhzwIQHWSVB7gYboiFBS+DCBeQyh+KTOgNG3qxrdWB +CUfvO6wIBHxcmbHtRwfSAjEAnbpV/KlK6O3t5nYBQnvI+GDZjVGLVTv7jHvrZQnD ++JbNR6iC8hZVdyR+EhCVBCyj +-----END CERTIFICATE----- + +# Issuer: CN=emSign Root CA - C1 O=eMudhra Inc OU=emSign PKI +# Subject: CN=emSign Root CA - C1 O=eMudhra Inc OU=emSign PKI +# Label: "emSign Root CA - C1" +# Serial: 825510296613316004955058 +# MD5 Fingerprint: d8:e3:5d:01:21:fa:78:5a:b0:df:ba:d2:ee:2a:5f:68 +# SHA1 Fingerprint: e7:2e:f1:df:fc:b2:09:28:cf:5d:d4:d5:67:37:b1:51:cb:86:4f:01 +# SHA256 Fingerprint: 12:56:09:aa:30:1d:a0:a2:49:b9:7a:82:39:cb:6a:34:21:6f:44:dc:ac:9f:39:54:b1:42:92:f2:e8:c8:60:8f +-----BEGIN CERTIFICATE----- +MIIDczCCAlugAwIBAgILAK7PALrEzzL4Q7IwDQYJKoZIhvcNAQELBQAwVjELMAkG +A1UEBhMCVVMxEzARBgNVBAsTCmVtU2lnbiBQS0kxFDASBgNVBAoTC2VNdWRocmEg +SW5jMRwwGgYDVQQDExNlbVNpZ24gUm9vdCBDQSAtIEMxMB4XDTE4MDIxODE4MzAw +MFoXDTQzMDIxODE4MzAwMFowVjELMAkGA1UEBhMCVVMxEzARBgNVBAsTCmVtU2ln +biBQS0kxFDASBgNVBAoTC2VNdWRocmEgSW5jMRwwGgYDVQQDExNlbVNpZ24gUm9v +dCBDQSAtIEMxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAz+upufGZ +BczYKCFK83M0UYRWEPWgTywS4/oTmifQz/l5GnRfHXk5/Fv4cI7gklL35CX5VIPZ +HdPIWoU/Xse2B+4+wM6ar6xWQio5JXDWv7V7Nq2s9nPczdcdioOl+yuQFTdrHCZH +3DspVpNqs8FqOp099cGXOFgFixwR4+S0uF2FHYP+eF8LRWgYSKVGczQ7/g/IdrvH +GPMF0Ybzhe3nudkyrVWIzqa2kbBPrH4VI5b2P/AgNBbeCsbEBEV5f6f9vtKppa+c +xSMq9zwhbL2vj07FOrLzNBL834AaSaTUqZX3noleoomslMuoaJuvimUnzYnu3Yy1 +aylwQ6BpC+S5DwIDAQABo0IwQDAdBgNVHQ4EFgQU/qHgcB4qAzlSWkK+XJGFehiq +TbUwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL +BQADggEBAMJKVvoVIXsoounlHfv4LcQ5lkFMOycsxGwYFYDGrK9HWS8mC+M2sO87 +/kOXSTKZEhVb3xEp/6tT+LvBeA+snFOvV71ojD1pM/CjoCNjO2RnIkSt1XHLVip4 +kqNPEjE2NuLe/gDEo2APJ62gsIq1NnpSob0n9CAnYuhNlCQT5AoE6TyrLshDCUrG +YQTlSTR+08TI9Q/Aqum6VF7zYytPT1DU/rl7mYw9wC68AivTxEDkigcxHpvOJpkT ++xHqmiIMERnHXhuBUDDIlhJu58tBf5E7oke3VIAb3ADMmpDqw8NQBmIMMMAVSKeo +WXzhriKi4gp6D/piq1JM4fHfyr6DDUI= +-----END CERTIFICATE----- + +# Issuer: CN=emSign ECC Root CA - C3 O=eMudhra Inc OU=emSign PKI +# Subject: CN=emSign ECC Root CA - C3 O=eMudhra Inc OU=emSign PKI +# Label: "emSign ECC Root CA - C3" +# Serial: 582948710642506000014504 +# MD5 Fingerprint: 3e:53:b3:a3:81:ee:d7:10:f8:d3:b0:1d:17:92:f5:d5 +# SHA1 Fingerprint: b6:af:43:c2:9b:81:53:7d:f6:ef:6b:c3:1f:1f:60:15:0c:ee:48:66 +# SHA256 Fingerprint: bc:4d:80:9b:15:18:9d:78:db:3e:1d:8c:f4:f9:72:6a:79:5d:a1:64:3c:a5:f1:35:8e:1d:db:0e:dc:0d:7e:b3 +-----BEGIN CERTIFICATE----- +MIICKzCCAbGgAwIBAgIKe3G2gla4EnycqDAKBggqhkjOPQQDAzBaMQswCQYDVQQG +EwJVUzETMBEGA1UECxMKZW1TaWduIFBLSTEUMBIGA1UEChMLZU11ZGhyYSBJbmMx +IDAeBgNVBAMTF2VtU2lnbiBFQ0MgUm9vdCBDQSAtIEMzMB4XDTE4MDIxODE4MzAw +MFoXDTQzMDIxODE4MzAwMFowWjELMAkGA1UEBhMCVVMxEzARBgNVBAsTCmVtU2ln +biBQS0kxFDASBgNVBAoTC2VNdWRocmEgSW5jMSAwHgYDVQQDExdlbVNpZ24gRUND +IFJvb3QgQ0EgLSBDMzB2MBAGByqGSM49AgEGBSuBBAAiA2IABP2lYa57JhAd6bci +MK4G9IGzsUJxlTm801Ljr6/58pc1kjZGDoeVjbk5Wum739D+yAdBPLtVb4Ojavti +sIGJAnB9SMVK4+kiVCJNk7tCDK93nCOmfddhEc5lx/h//vXyqaNCMEAwHQYDVR0O +BBYEFPtaSNCAIEDyqOkAB2kZd6fmw/TPMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB +Af8EBTADAQH/MAoGCCqGSM49BAMDA2gAMGUCMQC02C8Cif22TGK6Q04ThHK1rt0c +3ta13FaPWEBaLd4gTCKDypOofu4SQMfWh0/434UCMBwUZOR8loMRnLDRWmFLpg9J +0wD8ofzkpf9/rdcw0Md3f76BB1UwUCAU9Vc4CqgxUQ== +-----END CERTIFICATE----- + +# Issuer: CN=Hongkong Post Root CA 3 O=Hongkong Post +# Subject: CN=Hongkong Post Root CA 3 O=Hongkong Post +# Label: 
"Hongkong Post Root CA 3" +# Serial: 46170865288971385588281144162979347873371282084 +# MD5 Fingerprint: 11:fc:9f:bd:73:30:02:8a:fd:3f:f3:58:b9:cb:20:f0 +# SHA1 Fingerprint: 58:a2:d0:ec:20:52:81:5b:c1:f3:f8:64:02:24:4e:c2:8e:02:4b:02 +# SHA256 Fingerprint: 5a:2f:c0:3f:0c:83:b0:90:bb:fa:40:60:4b:09:88:44:6c:76:36:18:3d:f9:84:6e:17:10:1a:44:7f:b8:ef:d6 +-----BEGIN CERTIFICATE----- +MIIFzzCCA7egAwIBAgIUCBZfikyl7ADJk0DfxMauI7gcWqQwDQYJKoZIhvcNAQEL +BQAwbzELMAkGA1UEBhMCSEsxEjAQBgNVBAgTCUhvbmcgS29uZzESMBAGA1UEBxMJ +SG9uZyBLb25nMRYwFAYDVQQKEw1Ib25na29uZyBQb3N0MSAwHgYDVQQDExdIb25n +a29uZyBQb3N0IFJvb3QgQ0EgMzAeFw0xNzA2MDMwMjI5NDZaFw00MjA2MDMwMjI5 +NDZaMG8xCzAJBgNVBAYTAkhLMRIwEAYDVQQIEwlIb25nIEtvbmcxEjAQBgNVBAcT +CUhvbmcgS29uZzEWMBQGA1UEChMNSG9uZ2tvbmcgUG9zdDEgMB4GA1UEAxMXSG9u +Z2tvbmcgUG9zdCBSb290IENBIDMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK +AoICAQCziNfqzg8gTr7m1gNt7ln8wlffKWihgw4+aMdoWJwcYEuJQwy51BWy7sFO +dem1p+/l6TWZ5Mwc50tfjTMwIDNT2aa71T4Tjukfh0mtUC1Qyhi+AViiE3CWu4mI +VoBc+L0sPOFMV4i707mV78vH9toxdCim5lSJ9UExyuUmGs2C4HDaOym71QP1mbpV +9WTRYA6ziUm4ii8F0oRFKHyPaFASePwLtVPLwpgchKOesL4jpNrcyCse2m5FHomY +2vkALgbpDDtw1VAliJnLzXNg99X/NWfFobxeq81KuEXryGgeDQ0URhLj0mRiikKY +vLTGCAj4/ahMZJx2Ab0vqWwzD9g/KLg8aQFChn5pwckGyuV6RmXpwtZQQS4/t+Tt +bNe/JgERohYpSms0BpDsE9K2+2p20jzt8NYt3eEV7KObLyzJPivkaTv/ciWxNoZb +x39ri1UbSsUgYT2uy1DhCDq+sI9jQVMwCFk8mB13umOResoQUGC/8Ne8lYePl8X+ +l2oBlKN8W4UdKjk60FSh0Tlxnf0h+bV78OLgAo9uliQlLKAeLKjEiafv7ZkGL7YK +TE/bosw3Gq9HhS2KX8Q0NEwA/RiTZxPRN+ZItIsGxVd7GYYKecsAyVKvQv83j+Gj +Hno9UKtjBucVtT+2RTeUN7F+8kjDf8V1/peNRY8apxpyKBpADwIDAQABo2MwYTAP +BgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAfBgNVHSMEGDAWgBQXnc0e +i9Y5K3DTXNSguB+wAPzFYTAdBgNVHQ4EFgQUF53NHovWOStw01zUoLgfsAD8xWEw +DQYJKoZIhvcNAQELBQADggIBAFbVe27mIgHSQpsY1Q7XZiNc4/6gx5LS6ZStS6LG +7BJ8dNVI0lkUmcDrudHr9EgwW62nV3OZqdPlt9EuWSRY3GguLmLYauRwCy0gUCCk +MpXRAJi70/33MvJJrsZ64Ee+bs7Lo3I6LWldy8joRTnU+kLBEUx3XZL7av9YROXr +gZ6voJmtvqkBZss4HTzfQx/0TW60uhdG/H39h4F5ag0zD/ov+BS5gLNdTaqX4fnk +GMX41TiMJjz98iji7lpJiCzfeT2OnpA8vUFKOt1b9pq0zj8lMH8yfaIDlNDceqFS +3m6TjRgm/VWsvY+b0s+v54Ysyx8Jb6NvqYTUc79NoXQbTiNg8swOqn+knEwlqLJm +Ozj/2ZQw9nKEvmhVEA/GcywWaZMH/rFF7buiVWqw2rVKAiUnhde3t4ZEFolsgCs+ +l6mc1X5VTMbeRRAc6uk7nwNT7u56AQIWeNTowr5GdogTPyK7SBIdUgC0An4hGh6c +JfTzPV4e0hz5sy229zdcxsshTrD3mUcYhcErulWuBurQB7Lcq9CClnXO0lD+mefP +L5/ndtFhKvshuzHQqp9HpLIiyhY6UFfEW0NnxWViA0kB60PZ2Pierc+xYw5F9KBa +LJstxabArahH9CdMOA0uG0k7UvToiIMrVCjU8jVStDKDYmlkDJGcn5fqdBb9HxEG +mpv0 +-----END CERTIFICATE----- + +# Issuer: CN=Entrust Root Certification Authority - G4 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2015 Entrust, Inc. - for authorized use only +# Subject: CN=Entrust Root Certification Authority - G4 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2015 Entrust, Inc. 
- for authorized use only +# Label: "Entrust Root Certification Authority - G4" +# Serial: 289383649854506086828220374796556676440 +# MD5 Fingerprint: 89:53:f1:83:23:b7:7c:8e:05:f1:8c:71:38:4e:1f:88 +# SHA1 Fingerprint: 14:88:4e:86:26:37:b0:26:af:59:62:5c:40:77:ec:35:29:ba:96:01 +# SHA256 Fingerprint: db:35:17:d1:f6:73:2a:2d:5a:b9:7c:53:3e:c7:07:79:ee:32:70:a6:2f:b4:ac:42:38:37:24:60:e6:f0:1e:88 +-----BEGIN CERTIFICATE----- +MIIGSzCCBDOgAwIBAgIRANm1Q3+vqTkPAAAAAFVlrVgwDQYJKoZIhvcNAQELBQAw +gb4xCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQL +Ex9TZWUgd3d3LmVudHJ1c3QubmV0L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykg +MjAxNSBFbnRydXN0LCBJbmMuIC0gZm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMjAw +BgNVBAMTKUVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEc0 +MB4XDTE1MDUyNzExMTExNloXDTM3MTIyNzExNDExNlowgb4xCzAJBgNVBAYTAlVT +MRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQLEx9TZWUgd3d3LmVudHJ1 +c3QubmV0L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykgMjAxNSBFbnRydXN0LCBJ +bmMuIC0gZm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMjAwBgNVBAMTKUVudHJ1c3Qg +Um9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEc0MIICIjANBgkqhkiG9w0B +AQEFAAOCAg8AMIICCgKCAgEAsewsQu7i0TD/pZJH4i3DumSXbcr3DbVZwbPLqGgZ +2K+EbTBwXX7zLtJTmeH+H17ZSK9dE43b/2MzTdMAArzE+NEGCJR5WIoV3imz/f3E +T+iq4qA7ec2/a0My3dl0ELn39GjUu9CH1apLiipvKgS1sqbHoHrmSKvS0VnM1n4j +5pds8ELl3FFLFUHtSUrJ3hCX1nbB76W1NhSXNdh4IjVS70O92yfbYVaCNNzLiGAM +C1rlLAHGVK/XqsEQe9IFWrhAnoanw5CGAlZSCXqc0ieCU0plUmr1POeo8pyvi73T +DtTUXm6Hnmo9RR3RXRv06QqsYJn7ibT/mCzPfB3pAqoEmh643IhuJbNsZvc8kPNX +wbMv9W3y+8qh+CmdRouzavbmZwe+LGcKKh9asj5XxNMhIWNlUpEbsZmOeX7m640A +2Vqq6nPopIICR5b+W45UYaPrL0swsIsjdXJ8ITzI9vF01Bx7owVV7rtNOzK+mndm +nqxpkCIHH2E6lr7lmk/MBTwoWdPBDFSoWWG9yHJM6Nyfh3+9nEg2XpWjDrk4JFX8 +dWbrAuMINClKxuMrLzOg2qOGpRKX/YAr2hRC45K9PvJdXmd0LhyIRyk0X+IyqJwl +N4y6mACXi0mWHv0liqzc2thddG5msP9E36EYxr5ILzeUePiVSj9/E15dWf10hkNj +c0kCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYD +VR0OBBYEFJ84xFYjwznooHFs6FRM5Og6sb9nMA0GCSqGSIb3DQEBCwUAA4ICAQAS +5UKme4sPDORGpbZgQIeMJX6tuGguW8ZAdjwD+MlZ9POrYs4QjbRaZIxowLByQzTS +Gwv2LFPSypBLhmb8qoMi9IsabyZIrHZ3CL/FmFz0Jomee8O5ZDIBf9PD3Vht7LGr +hFV0d4QEJ1JrhkzO3bll/9bGXp+aEJlLdWr+aumXIOTkdnrG0CSqkM0gkLpHZPt/ +B7NTeLUKYvJzQ85BK4FqLoUWlFPUa19yIqtRLULVAJyZv967lDtX/Zr1hstWO1uI +AeV8KEsD+UmDfLJ/fOPtjqF/YFOOVZ1QNBIPt5d7bIdKROf1beyAN/BYGW5KaHbw +H5Lk6rWS02FREAutp9lfx1/cH6NcjKF+m7ee01ZvZl4HliDtC3T7Zk6LERXpgUl+ +b7DUUH8i119lAg2m9IUe2K4GS0qn0jFmwvjO5QimpAKWRGhXxNUzzxkvFMSUHHuk +2fCfDrGA4tGeEWSpiBE6doLlYsKA2KSD7ZPvfC+QsDJMlhVoSFLUmQjAJOgc47Ol +IQ6SwJAfzyBfyjs4x7dtOvPmRLgOMWuIjnDrnBdSqEGULoe256YSxXXfW8AKbnuk +5F6G+TaU33fD6Q3AOfF5u0aOq0NZJ7cguyPpVkAh7DE9ZapD8j3fcEThuk0mEDuY +n/PIjhs4ViFqUZPTkcpG2om3PVODLAgfi49T3f+sHw== +-----END CERTIFICATE----- + +# Issuer: CN=Microsoft ECC Root Certificate Authority 2017 O=Microsoft Corporation +# Subject: CN=Microsoft ECC Root Certificate Authority 2017 O=Microsoft Corporation +# Label: "Microsoft ECC Root Certificate Authority 2017" +# Serial: 136839042543790627607696632466672567020 +# MD5 Fingerprint: dd:a1:03:e6:4a:93:10:d1:bf:f0:19:42:cb:fe:ed:67 +# SHA1 Fingerprint: 99:9a:64:c3:7f:f4:7d:9f:ab:95:f1:47:69:89:14:60:ee:c4:c3:c5 +# SHA256 Fingerprint: 35:8d:f3:9d:76:4a:f9:e1:b7:66:e9:c9:72:df:35:2e:e1:5c:fa:c2:27:af:6a:d1:d7:0e:8e:4a:6e:dc:ba:02 +-----BEGIN CERTIFICATE----- +MIICWTCCAd+gAwIBAgIQZvI9r4fei7FK6gxXMQHC7DAKBggqhkjOPQQDAzBlMQsw +CQYDVQQGEwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYD +VQQDEy1NaWNyb3NvZnQgRUNDIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIw +MTcwHhcNMTkxMjE4MjMwNjQ1WhcNNDIwNzE4MjMxNjA0WjBlMQswCQYDVQQGEwJV 
+UzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYDVQQDEy1NaWNy +b3NvZnQgRUNDIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTcwdjAQBgcq +hkjOPQIBBgUrgQQAIgNiAATUvD0CQnVBEyPNgASGAlEvaqiBYgtlzPbKnR5vSmZR +ogPZnZH6thaxjG7efM3beaYvzrvOcS/lpaso7GMEZpn4+vKTEAXhgShC48Zo9OYb +hGBKia/teQ87zvH2RPUBeMCjVDBSMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8E +BTADAQH/MB0GA1UdDgQWBBTIy5lycFIM+Oa+sgRXKSrPQhDtNTAQBgkrBgEEAYI3 +FQEEAwIBADAKBggqhkjOPQQDAwNoADBlAjBY8k3qDPlfXu5gKcs68tvWMoQZP3zV +L8KxzJOuULsJMsbG7X7JNpQS5GiFBqIb0C8CMQCZ6Ra0DvpWSNSkMBaReNtUjGUB +iudQZsIxtzm6uBoiB078a1QWIP8rtedMDE2mT3M= +-----END CERTIFICATE----- + +# Issuer: CN=Microsoft RSA Root Certificate Authority 2017 O=Microsoft Corporation +# Subject: CN=Microsoft RSA Root Certificate Authority 2017 O=Microsoft Corporation +# Label: "Microsoft RSA Root Certificate Authority 2017" +# Serial: 40975477897264996090493496164228220339 +# MD5 Fingerprint: 10:ff:00:ff:cf:c9:f8:c7:7a:c0:ee:35:8e:c9:0f:47 +# SHA1 Fingerprint: 73:a5:e6:4a:3b:ff:83:16:ff:0e:dc:cc:61:8a:90:6e:4e:ae:4d:74 +# SHA256 Fingerprint: c7:41:f7:0f:4b:2a:8d:88:bf:2e:71:c1:41:22:ef:53:ef:10:eb:a0:cf:a5:e6:4c:fa:20:f4:18:85:30:73:e0 +-----BEGIN CERTIFICATE----- +MIIFqDCCA5CgAwIBAgIQHtOXCV/YtLNHcB6qvn9FszANBgkqhkiG9w0BAQwFADBl +MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYw +NAYDVQQDEy1NaWNyb3NvZnQgUlNBIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5 +IDIwMTcwHhcNMTkxMjE4MjI1MTIyWhcNNDIwNzE4MjMwMDIzWjBlMQswCQYDVQQG +EwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYDVQQDEy1N +aWNyb3NvZnQgUlNBIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTcwggIi +MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDKW76UM4wplZEWCpW9R2LBifOZ +Nt9GkMml7Xhqb0eRaPgnZ1AzHaGm++DlQ6OEAlcBXZxIQIJTELy/xztokLaCLeX0 +ZdDMbRnMlfl7rEqUrQ7eS0MdhweSE5CAg2Q1OQT85elss7YfUJQ4ZVBcF0a5toW1 +HLUX6NZFndiyJrDKxHBKrmCk3bPZ7Pw71VdyvD/IybLeS2v4I2wDwAW9lcfNcztm +gGTjGqwu+UcF8ga2m3P1eDNbx6H7JyqhtJqRjJHTOoI+dkC0zVJhUXAoP8XFWvLJ +jEm7FFtNyP9nTUwSlq31/niol4fX/V4ggNyhSyL71Imtus5Hl0dVe49FyGcohJUc +aDDv70ngNXtk55iwlNpNhTs+VcQor1fznhPbRiefHqJeRIOkpcrVE7NLP8TjwuaG +YaRSMLl6IE9vDzhTyzMMEyuP1pq9KsgtsRx9S1HKR9FIJ3Jdh+vVReZIZZ2vUpC6 +W6IYZVcSn2i51BVrlMRpIpj0M+Dt+VGOQVDJNE92kKz8OMHY4Xu54+OU4UZpyw4K +UGsTuqwPN1q3ErWQgR5WrlcihtnJ0tHXUeOrO8ZV/R4O03QK0dqq6mm4lyiPSMQH ++FJDOvTKVTUssKZqwJz58oHhEmrARdlns87/I6KJClTUFLkqqNfs+avNJVgyeY+Q +W5g5xAgGwax/Dj0ApQIDAQABo1QwUjAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/ +BAUwAwEB/zAdBgNVHQ4EFgQUCctZf4aycI8awznjwNnpv7tNsiMwEAYJKwYBBAGC +NxUBBAMCAQAwDQYJKoZIhvcNAQEMBQADggIBAKyvPl3CEZaJjqPnktaXFbgToqZC +LgLNFgVZJ8og6Lq46BrsTaiXVq5lQ7GPAJtSzVXNUzltYkyLDVt8LkS/gxCP81OC +gMNPOsduET/m4xaRhPtthH80dK2Jp86519efhGSSvpWhrQlTM93uCupKUY5vVau6 +tZRGrox/2KJQJWVggEbbMwSubLWYdFQl3JPk+ONVFT24bcMKpBLBaYVu32TxU5nh +SnUgnZUP5NbcA/FZGOhHibJXWpS2qdgXKxdJ5XbLwVaZOjex/2kskZGT4d9Mozd2 +TaGf+G0eHdP67Pv0RR0Tbc/3WeUiJ3IrhvNXuzDtJE3cfVa7o7P4NHmJweDyAmH3 +pvwPuxwXC65B2Xy9J6P9LjrRk5Sxcx0ki69bIImtt2dmefU6xqaWM/5TkshGsRGR +xpl/j8nWZjEgQRCHLQzWwa80mMpkg/sTV9HB8Dx6jKXB/ZUhoHHBk2dxEuqPiApp +GWSZI1b7rCoucL5mxAyE7+WL85MB+GqQk2dLsmijtWKP6T+MejteD+eMuMZ87zf9 +dOLITzNy4ZQ5bb0Sr74MTnB8G2+NszKTc0QWbej09+CVgI+WXTik9KveCjCHk9hN +AHFiRSdLOkKEW39lt2c0Ui2cFmuqqNh7o0JMcccMyj6D5KbvtwEwXlGjefVwaaZB +RA+GsCyRxj3qrg+E +-----END CERTIFICATE----- + +# Issuer: CN=e-Szigno Root CA 2017 O=Microsec Ltd. +# Subject: CN=e-Szigno Root CA 2017 O=Microsec Ltd. 
+# Label: "e-Szigno Root CA 2017" +# Serial: 411379200276854331539784714 +# MD5 Fingerprint: de:1f:f6:9e:84:ae:a7:b4:21:ce:1e:58:7d:d1:84:98 +# SHA1 Fingerprint: 89:d4:83:03:4f:9e:9a:48:80:5f:72:37:d4:a9:a6:ef:cb:7c:1f:d1 +# SHA256 Fingerprint: be:b0:0b:30:83:9b:9b:c3:2c:32:e4:44:79:05:95:06:41:f2:64:21:b1:5e:d0:89:19:8b:51:8a:e2:ea:1b:99 +-----BEGIN CERTIFICATE----- +MIICQDCCAeWgAwIBAgIMAVRI7yH9l1kN9QQKMAoGCCqGSM49BAMCMHExCzAJBgNV +BAYTAkhVMREwDwYDVQQHDAhCdWRhcGVzdDEWMBQGA1UECgwNTWljcm9zZWMgTHRk +LjEXMBUGA1UEYQwOVkFUSFUtMjM1ODQ0OTcxHjAcBgNVBAMMFWUtU3ppZ25vIFJv +b3QgQ0EgMjAxNzAeFw0xNzA4MjIxMjA3MDZaFw00MjA4MjIxMjA3MDZaMHExCzAJ +BgNVBAYTAkhVMREwDwYDVQQHDAhCdWRhcGVzdDEWMBQGA1UECgwNTWljcm9zZWMg +THRkLjEXMBUGA1UEYQwOVkFUSFUtMjM1ODQ0OTcxHjAcBgNVBAMMFWUtU3ppZ25v +IFJvb3QgQ0EgMjAxNzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABJbcPYrYsHtv +xie+RJCxs1YVe45DJH0ahFnuY2iyxl6H0BVIHqiQrb1TotreOpCmYF9oMrWGQd+H +Wyx7xf58etqjYzBhMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0G +A1UdDgQWBBSHERUI0arBeAyxr87GyZDvvzAEwDAfBgNVHSMEGDAWgBSHERUI0arB +eAyxr87GyZDvvzAEwDAKBggqhkjOPQQDAgNJADBGAiEAtVfd14pVCzbhhkT61Nlo +jbjcI4qKDdQvfepz7L9NbKgCIQDLpbQS+ue16M9+k/zzNY9vTlp8tLxOsvxyqltZ ++efcMQ== +-----END CERTIFICATE----- + +# Issuer: O=CERTSIGN SA OU=certSIGN ROOT CA G2 +# Subject: O=CERTSIGN SA OU=certSIGN ROOT CA G2 +# Label: "certSIGN Root CA G2" +# Serial: 313609486401300475190 +# MD5 Fingerprint: 8c:f1:75:8a:c6:19:cf:94:b7:f7:65:20:87:c3:97:c7 +# SHA1 Fingerprint: 26:f9:93:b4:ed:3d:28:27:b0:b9:4b:a7:e9:15:1d:a3:8d:92:e5:32 +# SHA256 Fingerprint: 65:7c:fe:2f:a7:3f:aa:38:46:25:71:f3:32:a2:36:3a:46:fc:e7:02:09:51:71:07:02:cd:fb:b6:ee:da:33:05 +-----BEGIN CERTIFICATE----- +MIIFRzCCAy+gAwIBAgIJEQA0tk7GNi02MA0GCSqGSIb3DQEBCwUAMEExCzAJBgNV +BAYTAlJPMRQwEgYDVQQKEwtDRVJUU0lHTiBTQTEcMBoGA1UECxMTY2VydFNJR04g +Uk9PVCBDQSBHMjAeFw0xNzAyMDYwOTI3MzVaFw00MjAyMDYwOTI3MzVaMEExCzAJ +BgNVBAYTAlJPMRQwEgYDVQQKEwtDRVJUU0lHTiBTQTEcMBoGA1UECxMTY2VydFNJ +R04gUk9PVCBDQSBHMjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMDF +dRmRfUR0dIf+DjuW3NgBFszuY5HnC2/OOwppGnzC46+CjobXXo9X69MhWf05N0Iw +vlDqtg+piNguLWkh59E3GE59kdUWX2tbAMI5Qw02hVK5U2UPHULlj88F0+7cDBrZ +uIt4ImfkabBoxTzkbFpG583H+u/E7Eu9aqSs/cwoUe+StCmrqzWaTOTECMYmzPhp +n+Sc8CnTXPnGFiWeI8MgwT0PPzhAsP6CRDiqWhqKa2NYOLQV07YRaXseVO6MGiKs +cpc/I1mbySKEwQdPzH/iV8oScLumZfNpdWO9lfsbl83kqK/20U6o2YpxJM02PbyW +xPFsqa7lzw1uKA2wDrXKUXt4FMMgL3/7FFXhEZn91QqhngLjYl/rNUssuHLoPj1P +rCy7Lobio3aP5ZMqz6WryFyNSwb/EkaseMsUBzXgqd+L6a8VTxaJW732jcZZroiF +DsGJ6x9nxUWO/203Nit4ZoORUSs9/1F3dmKh7Gc+PoGD4FapUB8fepmrY7+EF3fx +DTvf95xhszWYijqy7DwaNz9+j5LP2RIUZNoQAhVB/0/E6xyjyfqZ90bp4RjZsbgy +LcsUDFDYg2WD7rlcz8sFWkz6GZdr1l0T08JcVLwyc6B49fFtHsufpaafItzRUZ6C +eWRgKRM+o/1Pcmqr4tTluCRVLERLiohEnMqE0yo7AgMBAAGjQjBAMA8GA1UdEwEB +/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBSCIS1mxteg4BXrzkwJ +d8RgnlRuAzANBgkqhkiG9w0BAQsFAAOCAgEAYN4auOfyYILVAzOBywaK8SJJ6ejq +kX/GM15oGQOGO0MBzwdw5AgeZYWR5hEit/UCI46uuR59H35s5r0l1ZUa8gWmr4UC +b6741jH/JclKyMeKqdmfS0mbEVeZkkMR3rYzpMzXjWR91M08KCy0mpbqTfXERMQl +qiCA2ClV9+BB/AYm/7k29UMUA2Z44RGx2iBfRgB4ACGlHgAoYXhvqAEBj500mv/0 +OJD7uNGzcgbJceaBxXntC6Z58hMLnPddDnskk7RI24Zf3lCGeOdA5jGokHZwYa+c +NywRtYK3qq4kNFtyDGkNzVmf9nGvnAvRCjj5BiKDUyUM/FHE5r7iOZULJK2v0ZXk +ltd0ZGtxTgI8qoXzIKNDOXZbbFD+mpwUHmUUihW9o4JFWklWatKcsWMy5WHgUyIO +pwpJ6st+H6jiYoD2EEVSmAYY3qXNL3+q1Ok+CHLsIwMCPKaq2LxndD0UF/tUSxfj +03k9bWtJySgOLnRQvwzZRjoQhsmnP+mg7H/rpXdYaXHmgwo38oZJar55CJD2AhZk +PuXaTH4MNMn5X7azKFGnpyuqSfqNZSlO42sTp5SjLVFteAxEy9/eCG/Oo2Sr05WE +1LlSVHJ7liXMvGnjSG4N0MedJ5qq+BOS3R7fY581qRY27Iy4g/Q9iY/NtBde17MX +QRBdJ3NghVdJIgc= +-----END CERTIFICATE----- + 
+# Issuer: CN=Trustwave Global Certification Authority O=Trustwave Holdings, Inc. +# Subject: CN=Trustwave Global Certification Authority O=Trustwave Holdings, Inc. +# Label: "Trustwave Global Certification Authority" +# Serial: 1846098327275375458322922162 +# MD5 Fingerprint: f8:1c:18:2d:2f:ba:5f:6d:a1:6c:bc:c7:ab:91:c7:0e +# SHA1 Fingerprint: 2f:8f:36:4f:e1:58:97:44:21:59:87:a5:2a:9a:d0:69:95:26:7f:b5 +# SHA256 Fingerprint: 97:55:20:15:f5:dd:fc:3c:87:88:c0:06:94:45:55:40:88:94:45:00:84:f1:00:86:70:86:bc:1a:2b:b5:8d:c8 +-----BEGIN CERTIFICATE----- +MIIF2jCCA8KgAwIBAgIMBfcOhtpJ80Y1LrqyMA0GCSqGSIb3DQEBCwUAMIGIMQsw +CQYDVQQGEwJVUzERMA8GA1UECAwISWxsaW5vaXMxEDAOBgNVBAcMB0NoaWNhZ28x +ITAfBgNVBAoMGFRydXN0d2F2ZSBIb2xkaW5ncywgSW5jLjExMC8GA1UEAwwoVHJ1 +c3R3YXZlIEdsb2JhbCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0xNzA4MjMx +OTM0MTJaFw00MjA4MjMxOTM0MTJaMIGIMQswCQYDVQQGEwJVUzERMA8GA1UECAwI +SWxsaW5vaXMxEDAOBgNVBAcMB0NoaWNhZ28xITAfBgNVBAoMGFRydXN0d2F2ZSBI +b2xkaW5ncywgSW5jLjExMC8GA1UEAwwoVHJ1c3R3YXZlIEdsb2JhbCBDZXJ0aWZp +Y2F0aW9uIEF1dGhvcml0eTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIB +ALldUShLPDeS0YLOvR29zd24q88KPuFd5dyqCblXAj7mY2Hf8g+CY66j96xz0Xzn +swuvCAAJWX/NKSqIk4cXGIDtiLK0thAfLdZfVaITXdHG6wZWiYj+rDKd/VzDBcdu +7oaJuogDnXIhhpCujwOl3J+IKMujkkkP7NAP4m1ET4BqstTnoApTAbqOl5F2brz8 +1Ws25kCI1nsvXwXoLG0R8+eyvpJETNKXpP7ScoFDB5zpET71ixpZfR9oWN0EACyW +80OzfpgZdNmcc9kYvkHHNHnZ9GLCQ7mzJ7Aiy/k9UscwR7PJPrhq4ufogXBeQotP +JqX+OsIgbrv4Fo7NDKm0G2x2EOFYeUY+VM6AqFcJNykbmROPDMjWLBz7BegIlT1l +RtzuzWniTY+HKE40Cz7PFNm73bZQmq131BnW2hqIyE4bJ3XYsgjxroMwuREOzYfw +hI0Vcnyh78zyiGG69Gm7DIwLdVcEuE4qFC49DxweMqZiNu5m4iK4BUBjECLzMx10 +coos9TkpoNPnG4CELcU9402x/RpvumUHO1jsQkUm+9jaJXLE9gCxInm943xZYkqc +BW89zubWR2OZxiRvchLIrH+QtAuRcOi35hYQcRfO3gZPSEF9NUqjifLJS3tBEW1n +twiYTOURGa5CgNz7kAXU+FDKvuStx8KU1xad5hePrzb7AgMBAAGjQjBAMA8GA1Ud +EwEB/wQFMAMBAf8wHQYDVR0OBBYEFJngGWcNYtt2s9o9uFvo/ULSMQ6HMA4GA1Ud +DwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAmHNw4rDT7TnsTGDZqRKGFx6W +0OhUKDtkLSGm+J1WE2pIPU/HPinbbViDVD2HfSMF1OQc3Og4ZYbFdada2zUFvXfe +uyk3QAUHw5RSn8pk3fEbK9xGChACMf1KaA0HZJDmHvUqoai7PF35owgLEQzxPy0Q +lG/+4jSHg9bP5Rs1bdID4bANqKCqRieCNqcVtgimQlRXtpla4gt5kNdXElE1GYhB +aCXUNxeEFfsBctyV3lImIJgm4nb1J2/6ADtKYdkNy1GTKv0WBpanI5ojSP5RvbbE +sLFUzt5sQa0WZ37b/TjNuThOssFgy50X31ieemKyJo90lZvkWx3SD92YHJtZuSPT +MaCm/zjdzyBP6VhWOmfD0faZmZ26NraAL4hHT4a/RDqA5Dccprrql5gR0IRiR2Qe +qu5AvzSxnI9O4fKSTx+O856X3vOmeWqJcU9LJxdI/uz0UA9PSX3MReO9ekDFQdxh +VicGaeVyQYHTtgGJoC86cnn+OjC/QezHYj6RS8fZMXZC+fc8Y+wmjHMMfRod6qh8 +h6jCJ3zhM0EPz8/8AKAigJ5Kp28AsEFFtyLKaEjFQqKu3R3y4G5OBVixwJAWKqQ9 +EEC+j2Jjg6mcgn0tAumDMHzLJ8n9HmYAsC7TIS+OMxZsmO0QqAfWzJPP29FpHOTK +yeC2nOnOcXHebD8WpHk= +-----END CERTIFICATE----- + +# Issuer: CN=Trustwave Global ECC P256 Certification Authority O=Trustwave Holdings, Inc. +# Subject: CN=Trustwave Global ECC P256 Certification Authority O=Trustwave Holdings, Inc. 
+# Label: "Trustwave Global ECC P256 Certification Authority" +# Serial: 4151900041497450638097112925 +# MD5 Fingerprint: 5b:44:e3:8d:5d:36:86:26:e8:0d:05:d2:59:a7:83:54 +# SHA1 Fingerprint: b4:90:82:dd:45:0c:be:8b:5b:b1:66:d3:e2:a4:08:26:cd:ed:42:cf +# SHA256 Fingerprint: 94:5b:bc:82:5e:a5:54:f4:89:d1:fd:51:a7:3d:df:2e:a6:24:ac:70:19:a0:52:05:22:5c:22:a7:8c:cf:a8:b4 +-----BEGIN CERTIFICATE----- +MIICYDCCAgegAwIBAgIMDWpfCD8oXD5Rld9dMAoGCCqGSM49BAMCMIGRMQswCQYD +VQQGEwJVUzERMA8GA1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAf +BgNVBAoTGFRydXN0d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3 +YXZlIEdsb2JhbCBFQ0MgUDI1NiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0x +NzA4MjMxOTM1MTBaFw00MjA4MjMxOTM1MTBaMIGRMQswCQYDVQQGEwJVUzERMA8G +A1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAfBgNVBAoTGFRydXN0 +d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3YXZlIEdsb2JhbCBF +Q0MgUDI1NiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTBZMBMGByqGSM49AgEGCCqG +SM49AwEHA0IABH77bOYj43MyCMpg5lOcunSNGLB4kFKA3TjASh3RqMyTpJcGOMoN +FWLGjgEqZZ2q3zSRLoHB5DOSMcT9CTqmP62jQzBBMA8GA1UdEwEB/wQFMAMBAf8w +DwYDVR0PAQH/BAUDAwcGADAdBgNVHQ4EFgQUo0EGrJBt0UrrdaVKEJmzsaGLSvcw +CgYIKoZIzj0EAwIDRwAwRAIgB+ZU2g6gWrKuEZ+Hxbb/ad4lvvigtwjzRM4q3wgh +DDcCIC0mA6AFvWvR9lz4ZcyGbbOcNEhjhAnFjXca4syc4XR7 +-----END CERTIFICATE----- + +# Issuer: CN=Trustwave Global ECC P384 Certification Authority O=Trustwave Holdings, Inc. +# Subject: CN=Trustwave Global ECC P384 Certification Authority O=Trustwave Holdings, Inc. +# Label: "Trustwave Global ECC P384 Certification Authority" +# Serial: 2704997926503831671788816187 +# MD5 Fingerprint: ea:cf:60:c4:3b:b9:15:29:40:a1:97:ed:78:27:93:d6 +# SHA1 Fingerprint: e7:f3:a3:c8:cf:6f:c3:04:2e:6d:0e:67:32:c5:9e:68:95:0d:5e:d2 +# SHA256 Fingerprint: 55:90:38:59:c8:c0:c3:eb:b8:75:9e:ce:4e:25:57:22:5f:f5:75:8b:bd:38:eb:d4:82:76:60:1e:1b:d5:80:97 +-----BEGIN CERTIFICATE----- +MIICnTCCAiSgAwIBAgIMCL2Fl2yZJ6SAaEc7MAoGCCqGSM49BAMDMIGRMQswCQYD +VQQGEwJVUzERMA8GA1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAf +BgNVBAoTGFRydXN0d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3 +YXZlIEdsb2JhbCBFQ0MgUDM4NCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0x +NzA4MjMxOTM2NDNaFw00MjA4MjMxOTM2NDNaMIGRMQswCQYDVQQGEwJVUzERMA8G +A1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAfBgNVBAoTGFRydXN0 +d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3YXZlIEdsb2JhbCBF +Q0MgUDM4NCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTB2MBAGByqGSM49AgEGBSuB +BAAiA2IABGvaDXU1CDFHBa5FmVXxERMuSvgQMSOjfoPTfygIOiYaOs+Xgh+AtycJ +j9GOMMQKmw6sWASr9zZ9lCOkmwqKi6vr/TklZvFe/oyujUF5nQlgziip04pt89ZF +1PKYhDhloKNDMEEwDwYDVR0TAQH/BAUwAwEB/zAPBgNVHQ8BAf8EBQMDBwYAMB0G +A1UdDgQWBBRVqYSJ0sEyvRjLbKYHTsjnnb6CkDAKBggqhkjOPQQDAwNnADBkAjA3 +AZKXRRJ+oPM+rRk6ct30UJMDEr5E0k9BpIycnR+j9sKS50gU/k6bpZFXrsY3crsC +MGclCrEMXu6pY5Jv5ZAL/mYiykf9ijH3g/56vxC+GCsej/YpHpRZ744hN8tRmKVu +Sw== +-----END CERTIFICATE----- + +# Issuer: CN=NAVER Global Root Certification Authority O=NAVER BUSINESS PLATFORM Corp. +# Subject: CN=NAVER Global Root Certification Authority O=NAVER BUSINESS PLATFORM Corp. 
+# Label: "NAVER Global Root Certification Authority" +# Serial: 9013692873798656336226253319739695165984492813 +# MD5 Fingerprint: c8:7e:41:f6:25:3b:f5:09:b3:17:e8:46:3d:bf:d0:9b +# SHA1 Fingerprint: 8f:6b:f2:a9:27:4a:da:14:a0:c4:f4:8e:61:27:f9:c0:1e:78:5d:d1 +# SHA256 Fingerprint: 88:f4:38:dc:f8:ff:d1:fa:8f:42:91:15:ff:e5:f8:2a:e1:e0:6e:0c:70:c3:75:fa:ad:71:7b:34:a4:9e:72:65 +-----BEGIN CERTIFICATE----- +MIIFojCCA4qgAwIBAgIUAZQwHqIL3fXFMyqxQ0Rx+NZQTQ0wDQYJKoZIhvcNAQEM +BQAwaTELMAkGA1UEBhMCS1IxJjAkBgNVBAoMHU5BVkVSIEJVU0lORVNTIFBMQVRG +T1JNIENvcnAuMTIwMAYDVQQDDClOQVZFUiBHbG9iYWwgUm9vdCBDZXJ0aWZpY2F0 +aW9uIEF1dGhvcml0eTAeFw0xNzA4MTgwODU4NDJaFw0zNzA4MTgyMzU5NTlaMGkx +CzAJBgNVBAYTAktSMSYwJAYDVQQKDB1OQVZFUiBCVVNJTkVTUyBQTEFURk9STSBD +b3JwLjEyMDAGA1UEAwwpTkFWRVIgR2xvYmFsIFJvb3QgQ2VydGlmaWNhdGlvbiBB +dXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC21PGTXLVA +iQqrDZBbUGOukJR0F0Vy1ntlWilLp1agS7gvQnXp2XskWjFlqxcX0TM62RHcQDaH +38dq6SZeWYp34+hInDEW+j6RscrJo+KfziFTowI2MMtSAuXaMl3Dxeb57hHHi8lE +HoSTGEq0n+USZGnQJoViAbbJAh2+g1G7XNr4rRVqmfeSVPc0W+m/6imBEtRTkZaz +kVrd/pBzKPswRrXKCAfHcXLJZtM0l/aM9BhK4dA9WkW2aacp+yPOiNgSnABIqKYP +szuSjXEOdMWLyEz59JuOuDxp7W87UC9Y7cSw0BwbagzivESq2M0UXZR4Yb8Obtoq +vC8MC3GmsxY/nOb5zJ9TNeIDoKAYv7vxvvTWjIcNQvcGufFt7QSUqP620wbGQGHf +nZ3zVHbOUzoBppJB7ASjjw2i1QnK1sua8e9DXcCrpUHPXFNwcMmIpi3Ua2FzUCaG +YQ5fG8Ir4ozVu53BA0K6lNpfqbDKzE0K70dpAy8i+/Eozr9dUGWokG2zdLAIx6yo +0es+nPxdGoMuK8u180SdOqcXYZaicdNwlhVNt0xz7hlcxVs+Qf6sdWA7G2POAN3a +CJBitOUt7kinaxeZVL6HSuOpXgRM6xBtVNbv8ejyYhbLgGvtPe31HzClrkvJE+2K +AQHJuFFYwGY6sWZLxNUxAmLpdIQM201GLQIDAQABo0IwQDAdBgNVHQ4EFgQU0p+I +36HNLL3s9TsBAZMzJ7LrYEswDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMB +Af8wDQYJKoZIhvcNAQEMBQADggIBADLKgLOdPVQG3dLSLvCkASELZ0jKbY7gyKoN +qo0hV4/GPnrK21HUUrPUloSlWGB/5QuOH/XcChWB5Tu2tyIvCZwTFrFsDDUIbatj +cu3cvuzHV+YwIHHW1xDBE1UBjCpD5EHxzzp6U5LOogMFDTjfArsQLtk70pt6wKGm ++LUx5vR1yblTmXVHIloUFcd4G7ad6Qz4G3bxhYTeodoS76TiEJd6eN4MUZeoIUCL +hr0N8F5OSza7OyAfikJW4Qsav3vQIkMsRIz75Sq0bBwcupTgE34h5prCy8VCZLQe +lHsIJchxzIdFV4XTnyliIoNRlwAYl3dqmJLJfGBs32x9SuRwTMKeuB330DTHD8z7 +p/8Dvq1wkNoL3chtl1+afwkyQf3NosxabUzyqkn+Zvjp2DXrDige7kgvOtB5CTh8 +piKCk5XQA76+AqAF3SAi428diDRgxuYKuQl1C/AH6GmWNcf7I4GOODm4RStDeKLR +LBT/DShycpWbXgnbiUSYqqFJu3FS8r/2/yehNq+4tneI3TqkbZs0kNwUXTC/t+sX +5Ie3cdCh13cV1ELX8vMxmV2b3RZtP+oGI/hGoiLtk/bdmuYqh7GYVPEi92tF4+KO +dh2ajcQGjTa3FPOdVGm3jjzVpG2Tgbet9r1ke8LJaDmgkpzNNIaRkPpkUZ3+/uul +9XXeifdy +-----END CERTIFICATE----- + +# Issuer: CN=AC RAIZ FNMT-RCM SERVIDORES SEGUROS O=FNMT-RCM OU=Ceres +# Subject: CN=AC RAIZ FNMT-RCM SERVIDORES SEGUROS O=FNMT-RCM OU=Ceres +# Label: "AC RAIZ FNMT-RCM SERVIDORES SEGUROS" +# Serial: 131542671362353147877283741781055151509 +# MD5 Fingerprint: 19:36:9c:52:03:2f:d2:d1:bb:23:cc:dd:1e:12:55:bb +# SHA1 Fingerprint: 62:ff:d9:9e:c0:65:0d:03:ce:75:93:d2:ed:3f:2d:32:c9:e3:e5:4a +# SHA256 Fingerprint: 55:41:53:b1:3d:2c:f9:dd:b7:53:bf:be:1a:4e:0a:e0:8d:0a:a4:18:70:58:fe:60:a2:b8:62:b2:e4:b8:7b:cb +-----BEGIN CERTIFICATE----- +MIICbjCCAfOgAwIBAgIQYvYybOXE42hcG2LdnC6dlTAKBggqhkjOPQQDAzB4MQsw +CQYDVQQGEwJFUzERMA8GA1UECgwIRk5NVC1SQ00xDjAMBgNVBAsMBUNlcmVzMRgw +FgYDVQRhDA9WQVRFUy1RMjgyNjAwNEoxLDAqBgNVBAMMI0FDIFJBSVogRk5NVC1S +Q00gU0VSVklET1JFUyBTRUdVUk9TMB4XDTE4MTIyMDA5MzczM1oXDTQzMTIyMDA5 +MzczM1oweDELMAkGA1UEBhMCRVMxETAPBgNVBAoMCEZOTVQtUkNNMQ4wDAYDVQQL +DAVDZXJlczEYMBYGA1UEYQwPVkFURVMtUTI4MjYwMDRKMSwwKgYDVQQDDCNBQyBS +QUlaIEZOTVQtUkNNIFNFUlZJRE9SRVMgU0VHVVJPUzB2MBAGByqGSM49AgEGBSuB +BAAiA2IABPa6V1PIyqvfNkpSIeSX0oNnnvBlUdBeh8dHsVnyV0ebAAKTRBdp20LH 
+sbI6GA60XYyzZl2hNPk2LEnb80b8s0RpRBNm/dfF/a82Tc4DTQdxz69qBdKiQ1oK +Um8BA06Oi6NCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYD +VR0OBBYEFAG5L++/EYZg8k/QQW6rcx/n0m5JMAoGCCqGSM49BAMDA2kAMGYCMQCu +SuMrQMN0EfKVrRYj3k4MGuZdpSRea0R7/DjiT8ucRRcRTBQnJlU5dUoDzBOQn5IC +MQD6SmxgiHPz7riYYqnOK8LZiqZwMR2vsJRM60/G49HzYqc8/5MuB1xJAWdpEgJy +v+c= +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign Root R46 O=GlobalSign nv-sa +# Subject: CN=GlobalSign Root R46 O=GlobalSign nv-sa +# Label: "GlobalSign Root R46" +# Serial: 1552617688466950547958867513931858518042577 +# MD5 Fingerprint: c4:14:30:e4:fa:66:43:94:2a:6a:1b:24:5f:19:d0:ef +# SHA1 Fingerprint: 53:a2:b0:4b:ca:6b:d6:45:e6:39:8a:8e:c4:0d:d2:bf:77:c3:a2:90 +# SHA256 Fingerprint: 4f:a3:12:6d:8d:3a:11:d1:c4:85:5a:4f:80:7c:ba:d6:cf:91:9d:3a:5a:88:b0:3b:ea:2c:63:72:d9:3c:40:c9 +-----BEGIN CERTIFICATE----- +MIIFWjCCA0KgAwIBAgISEdK7udcjGJ5AXwqdLdDfJWfRMA0GCSqGSIb3DQEBDAUA +MEYxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMRwwGgYD +VQQDExNHbG9iYWxTaWduIFJvb3QgUjQ2MB4XDTE5MDMyMDAwMDAwMFoXDTQ2MDMy +MDAwMDAwMFowRjELMAkGA1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYt +c2ExHDAaBgNVBAMTE0dsb2JhbFNpZ24gUm9vdCBSNDYwggIiMA0GCSqGSIb3DQEB +AQUAA4ICDwAwggIKAoICAQCsrHQy6LNl5brtQyYdpokNRbopiLKkHWPd08EsCVeJ +OaFV6Wc0dwxu5FUdUiXSE2te4R2pt32JMl8Nnp8semNgQB+msLZ4j5lUlghYruQG +vGIFAha/r6gjA7aUD7xubMLL1aa7DOn2wQL7Id5m3RerdELv8HQvJfTqa1VbkNud +316HCkD7rRlr+/fKYIje2sGP1q7Vf9Q8g+7XFkyDRTNrJ9CG0Bwta/OrffGFqfUo +0q3v84RLHIf8E6M6cqJaESvWJ3En7YEtbWaBkoe0G1h6zD8K+kZPTXhc+CtI4wSE +y132tGqzZfxCnlEmIyDLPRT5ge1lFgBPGmSXZgjPjHvjK8Cd+RTyG/FWaha/LIWF +zXg4mutCagI0GIMXTpRW+LaCtfOW3T3zvn8gdz57GSNrLNRyc0NXfeD412lPFzYE ++cCQYDdF3uYM2HSNrpyibXRdQr4G9dlkbgIQrImwTDsHTUB+JMWKmIJ5jqSngiCN +I/onccnfxkF0oE32kRbcRoxfKWMxWXEM2G/CtjJ9++ZdU6Z+Ffy7dXxd7Pj2Fxzs +x2sZy/N78CsHpdlseVR2bJ0cpm4O6XkMqCNqo98bMDGfsVR7/mrLZqrcZdCinkqa +ByFrgY/bxFn63iLABJzjqls2k+g9vXqhnQt2sQvHnf3PmKgGwvgqo6GDoLclcqUC +4wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV +HQ4EFgQUA1yrc4GHqMywptWU4jaWSf8FmSwwDQYJKoZIhvcNAQEMBQADggIBAHx4 +7PYCLLtbfpIrXTncvtgdokIzTfnvpCo7RGkerNlFo048p9gkUbJUHJNOxO97k4Vg +JuoJSOD1u8fpaNK7ajFxzHmuEajwmf3lH7wvqMxX63bEIaZHU1VNaL8FpO7XJqti +2kM3S+LGteWygxk6x9PbTZ4IevPuzz5i+6zoYMzRx6Fcg0XERczzF2sUyQQCPtIk +pnnpHs6i58FZFZ8d4kuaPp92CC1r2LpXFNqD6v6MVenQTqnMdzGxRBF6XLE+0xRF +FRhiJBPSy03OXIPBNvIQtQ6IbbjhVp+J3pZmOUdkLG5NrmJ7v2B0GbhWrJKsFjLt +rWhV/pi60zTe9Mlhww6G9kuEYO4Ne7UyWHmRVSyBQ7N0H3qqJZ4d16GLuc1CLgSk +ZoNNiTW2bKg2SnkheCLQQrzRQDGQob4Ez8pn7fXwgNNgyYMqIgXQBztSvwyeqiv5 +u+YfjyW6hY0XHgL+XVAEV8/+LbzvXMAaq7afJMbfc2hIkCwU9D9SGuTSyxTDYWnP +4vkYxboznxSjBF25cfe1lNj2M8FawTSLfJvdkzrnE6JwYZ+vj+vYxXX4M2bUdGc6 +N3ec592kD3ZDZopD8p/7DEJ4Y9HiD2971KE9dJeFt0g5QdYg/NA6s/rob8SKunE3 +vouXsXgxT7PntgMTzlSdriVZzH81Xwj3QEUxeCp6 +-----END CERTIFICATE----- + +# Issuer: CN=GlobalSign Root E46 O=GlobalSign nv-sa +# Subject: CN=GlobalSign Root E46 O=GlobalSign nv-sa +# Label: "GlobalSign Root E46" +# Serial: 1552617690338932563915843282459653771421763 +# MD5 Fingerprint: b5:b8:66:ed:de:08:83:e3:c9:e2:01:34:06:ac:51:6f +# SHA1 Fingerprint: 39:b4:6c:d5:fe:80:06:eb:e2:2f:4a:bb:08:33:a0:af:db:b9:dd:84 +# SHA256 Fingerprint: cb:b9:c4:4d:84:b8:04:3e:10:50:ea:31:a6:9f:51:49:55:d7:bf:d2:e2:c6:b4:93:01:01:9a:d6:1d:9f:50:58 +-----BEGIN CERTIFICATE----- +MIICCzCCAZGgAwIBAgISEdK7ujNu1LzmJGjFDYQdmOhDMAoGCCqGSM49BAMDMEYx +CzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMRwwGgYDVQQD +ExNHbG9iYWxTaWduIFJvb3QgRTQ2MB4XDTE5MDMyMDAwMDAwMFoXDTQ2MDMyMDAw +MDAwMFowRjELMAkGA1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYtc2Ex 
+HDAaBgNVBAMTE0dsb2JhbFNpZ24gUm9vdCBFNDYwdjAQBgcqhkjOPQIBBgUrgQQA +IgNiAAScDrHPt+ieUnd1NPqlRqetMhkytAepJ8qUuwzSChDH2omwlwxwEwkBjtjq +R+q+soArzfwoDdusvKSGN+1wCAB16pMLey5SnCNoIwZD7JIvU4Tb+0cUB+hflGdd +yXqBPCCjQjBAMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud +DgQWBBQxCpCPtsad0kRLgLWi5h+xEk8blTAKBggqhkjOPQQDAwNoADBlAjEA31SQ +7Zvvi5QCkxeCmb6zniz2C5GMn0oUsfZkvLtoURMMA/cVi4RguYv/Uo7njLwcAjA8 ++RHUjE7AwWHCFUyqqx0LMV87HOIAl0Qx5v5zli/altP+CAezNIm8BZ/3Hobui3A= +-----END CERTIFICATE----- + +# Issuer: CN=GLOBALTRUST 2020 O=e-commerce monitoring GmbH +# Subject: CN=GLOBALTRUST 2020 O=e-commerce monitoring GmbH +# Label: "GLOBALTRUST 2020" +# Serial: 109160994242082918454945253 +# MD5 Fingerprint: 8a:c7:6f:cb:6d:e3:cc:a2:f1:7c:83:fa:0e:78:d7:e8 +# SHA1 Fingerprint: d0:67:c1:13:51:01:0c:aa:d0:c7:6a:65:37:31:16:26:4f:53:71:a2 +# SHA256 Fingerprint: 9a:29:6a:51:82:d1:d4:51:a2:e3:7f:43:9b:74:da:af:a2:67:52:33:29:f9:0f:9a:0d:20:07:c3:34:e2:3c:9a +-----BEGIN CERTIFICATE----- +MIIFgjCCA2qgAwIBAgILWku9WvtPilv6ZeUwDQYJKoZIhvcNAQELBQAwTTELMAkG +A1UEBhMCQVQxIzAhBgNVBAoTGmUtY29tbWVyY2UgbW9uaXRvcmluZyBHbWJIMRkw +FwYDVQQDExBHTE9CQUxUUlVTVCAyMDIwMB4XDTIwMDIxMDAwMDAwMFoXDTQwMDYx +MDAwMDAwMFowTTELMAkGA1UEBhMCQVQxIzAhBgNVBAoTGmUtY29tbWVyY2UgbW9u +aXRvcmluZyBHbWJIMRkwFwYDVQQDExBHTE9CQUxUUlVTVCAyMDIwMIICIjANBgkq +hkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAri5WrRsc7/aVj6B3GyvTY4+ETUWiD59b +RatZe1E0+eyLinjF3WuvvcTfk0Uev5E4C64OFudBc/jbu9G4UeDLgztzOG53ig9Z +YybNpyrOVPu44sB8R85gfD+yc/LAGbaKkoc1DZAoouQVBGM+uq/ufF7MpotQsjj3 +QWPKzv9pj2gOlTblzLmMCcpL3TGQlsjMH/1WljTbjhzqLL6FLmPdqqmV0/0plRPw +yJiT2S0WR5ARg6I6IqIoV6Lr/sCMKKCmfecqQjuCgGOlYx8ZzHyyZqjC0203b+J+ +BlHZRYQfEs4kUmSFC0iAToexIiIwquuuvuAC4EDosEKAA1GqtH6qRNdDYfOiaxaJ +SaSjpCuKAsR49GiKweR6NrFvG5Ybd0mN1MkGco/PU+PcF4UgStyYJ9ORJitHHmkH +r96i5OTUawuzXnzUJIBHKWk7buis/UDr2O1xcSvy6Fgd60GXIsUf1DnQJ4+H4xj0 +4KlGDfV0OoIu0G4skaMxXDtG6nsEEFZegB31pWXogvziB4xiRfUg3kZwhqG8k9Me +dKZssCz3AwyIDMvUclOGvGBG85hqwvG/Q/lwIHfKN0F5VVJjjVsSn8VoxIidrPIw +q7ejMZdnrY8XD2zHc+0klGvIg5rQmjdJBKuxFshsSUktq6HQjJLyQUp5ISXbY9e2 +nKd+Qmn7OmMCAwEAAaNjMGEwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC +AQYwHQYDVR0OBBYEFNwuH9FhN3nkq9XVsxJxaD1qaJwiMB8GA1UdIwQYMBaAFNwu +H9FhN3nkq9XVsxJxaD1qaJwiMA0GCSqGSIb3DQEBCwUAA4ICAQCR8EICaEDuw2jA +VC/f7GLDw56KoDEoqoOOpFaWEhCGVrqXctJUMHytGdUdaG/7FELYjQ7ztdGl4wJC +XtzoRlgHNQIw4Lx0SsFDKv/bGtCwr2zD/cuz9X9tAy5ZVp0tLTWMstZDFyySCstd +6IwPS3BD0IL/qMy/pJTAvoe9iuOTe8aPmxadJ2W8esVCgmxcB9CpwYhgROmYhRZf ++I/KARDOJcP5YBugxZfD0yyIMaK9MOzQ0MAS8cE54+X1+NZK3TTN+2/BT+MAi1bi +kvcoskJ3ciNnxz8RFbLEAwW+uxF7Cr+obuf/WEPPm2eggAe2HcqtbepBEX4tdJP7 +wry+UUTF72glJ4DjyKDUEuzZpTcdN3y0kcra1LGWge9oXHYQSa9+pTeAsRxSvTOB +TI/53WXZFM2KJVj04sWDpQmQ1GwUY7VA3+vA/MRYfg0UFodUJ25W5HCEuGwyEn6C +MUO+1918oa2u1qsgEu8KwxCMSZY13At1XrFP1U80DhEgB3VDRemjEdqso5nCtnkn +4rnvyOL2NSl6dPrFf4IFYqYK6miyeUcGbvJXqBUzxvd4Sj1Ce2t+/vdG6tHrju+I +aFvowdlxfv1k7/9nR4hYJS8+hge9+6jlgqispdNpQ80xiEmEU5LAsTkbOYMBMMTy +qfrQA71yN2BWHzZ8vTmR9W0Nv3vXkg== +-----END CERTIFICATE----- + +# Issuer: CN=ANF Secure Server Root CA O=ANF Autoridad de Certificacion OU=ANF CA Raiz +# Subject: CN=ANF Secure Server Root CA O=ANF Autoridad de Certificacion OU=ANF CA Raiz +# Label: "ANF Secure Server Root CA" +# Serial: 996390341000653745 +# MD5 Fingerprint: 26:a6:44:5a:d9:af:4e:2f:b2:1d:b6:65:b0:4e:e8:96 +# SHA1 Fingerprint: 5b:6e:68:d0:cc:15:b6:a0:5f:1e:c1:5f:ae:02:fc:6b:2f:5d:6f:74 +# SHA256 Fingerprint: fb:8f:ec:75:91:69:b9:10:6b:1e:51:16:44:c6:18:c5:13:04:37:3f:6c:06:43:08:8d:8b:ef:fd:1b:99:75:99 +-----BEGIN CERTIFICATE----- 
+MIIF7zCCA9egAwIBAgIIDdPjvGz5a7EwDQYJKoZIhvcNAQELBQAwgYQxEjAQBgNV +BAUTCUc2MzI4NzUxMDELMAkGA1UEBhMCRVMxJzAlBgNVBAoTHkFORiBBdXRvcmlk +YWQgZGUgQ2VydGlmaWNhY2lvbjEUMBIGA1UECxMLQU5GIENBIFJhaXoxIjAgBgNV +BAMTGUFORiBTZWN1cmUgU2VydmVyIFJvb3QgQ0EwHhcNMTkwOTA0MTAwMDM4WhcN +MzkwODMwMTAwMDM4WjCBhDESMBAGA1UEBRMJRzYzMjg3NTEwMQswCQYDVQQGEwJF +UzEnMCUGA1UEChMeQU5GIEF1dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uMRQwEgYD +VQQLEwtBTkYgQ0EgUmFpejEiMCAGA1UEAxMZQU5GIFNlY3VyZSBTZXJ2ZXIgUm9v +dCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANvrayvmZFSVgpCj +cqQZAZ2cC4Ffc0m6p6zzBE57lgvsEeBbphzOG9INgxwruJ4dfkUyYA8H6XdYfp9q +yGFOtibBTI3/TO80sh9l2Ll49a2pcbnvT1gdpd50IJeh7WhM3pIXS7yr/2WanvtH +2Vdy8wmhrnZEE26cLUQ5vPnHO6RYPUG9tMJJo8gN0pcvB2VSAKduyK9o7PQUlrZX +H1bDOZ8rbeTzPvY1ZNoMHKGESy9LS+IsJJ1tk0DrtSOOMspvRdOoiXsezx76W0OL +zc2oD2rKDF65nkeP8Nm2CgtYZRczuSPkdxl9y0oukntPLxB3sY0vaJxizOBQ+OyR +p1RMVwnVdmPF6GUe7m1qzwmd+nxPrWAI/VaZDxUse6mAq4xhj0oHdkLePfTdsiQz +W7i1o0TJrH93PB0j7IKppuLIBkwC/qxcmZkLLxCKpvR/1Yd0DVlJRfbwcVw5Kda/ +SiOL9V8BY9KHcyi1Swr1+KuCLH5zJTIdC2MKF4EA/7Z2Xue0sUDKIbvVgFHlSFJn +LNJhiQcND85Cd8BEc5xEUKDbEAotlRyBr+Qc5RQe8TZBAQIvfXOn3kLMTOmJDVb3 +n5HUA8ZsyY/b2BzgQJhdZpmYgG4t/wHFzstGH6wCxkPmrqKEPMVOHj1tyRRM4y5B +u8o5vzY8KhmqQYdOpc5LMnndkEl/AgMBAAGjYzBhMB8GA1UdIwQYMBaAFJxf0Gxj +o1+TypOYCK2Mh6UsXME3MB0GA1UdDgQWBBScX9BsY6Nfk8qTmAitjIelLFzBNzAO +BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOC +AgEATh65isagmD9uw2nAalxJUqzLK114OMHVVISfk/CHGT0sZonrDUL8zPB1hT+L +9IBdeeUXZ701guLyPI59WzbLWoAAKfLOKyzxj6ptBZNscsdW699QIyjlRRA96Gej +rw5VD5AJYu9LWaL2U/HANeQvwSS9eS9OICI7/RogsKQOLHDtdD+4E5UGUcjohybK +pFtqFiGS3XNgnhAY3jyB6ugYw3yJ8otQPr0R4hUDqDZ9MwFsSBXXiJCZBMXM5gf0 +vPSQ7RPi6ovDj6MzD8EpTBNO2hVWcXNyglD2mjN8orGoGjR0ZVzO0eurU+AagNjq +OknkJjCb5RyKqKkVMoaZkgoQI1YS4PbOTOK7vtuNknMBZi9iPrJyJ0U27U1W45eZ +/zo1PqVUSlJZS2Db7v54EX9K3BR5YLZrZAPbFYPhor72I5dQ8AkzNqdxliXzuUJ9 +2zg/LFis6ELhDtjTO0wugumDLmsx2d1Hhk9tl5EuT+IocTUW0fJz/iUrB0ckYyfI ++PbZa/wSMVYIwFNCr5zQM378BvAxRAMU8Vjq8moNqRGyg77FGr8H6lnco4g175x2 +MjxNBiLOFeXdntiP2t7SxDnlF4HPOEfrf4htWRvfn0IUrn7PqLBmZdo3r5+qPeoo +tt7VMVgWglvquxl1AnMaykgaIZOQCo6ThKd9OyMYkomgjaw= +-----END CERTIFICATE----- + +# Issuer: CN=Certum EC-384 CA O=Asseco Data Systems S.A. OU=Certum Certification Authority +# Subject: CN=Certum EC-384 CA O=Asseco Data Systems S.A. 
OU=Certum Certification Authority +# Label: "Certum EC-384 CA" +# Serial: 160250656287871593594747141429395092468 +# MD5 Fingerprint: b6:65:b3:96:60:97:12:a1:ec:4e:e1:3d:a3:c6:c9:f1 +# SHA1 Fingerprint: f3:3e:78:3c:ac:df:f4:a2:cc:ac:67:55:69:56:d7:e5:16:3c:e1:ed +# SHA256 Fingerprint: 6b:32:80:85:62:53:18:aa:50:d1:73:c9:8d:8b:da:09:d5:7e:27:41:3d:11:4c:f7:87:a0:f5:d0:6c:03:0c:f6 +-----BEGIN CERTIFICATE----- +MIICZTCCAeugAwIBAgIQeI8nXIESUiClBNAt3bpz9DAKBggqhkjOPQQDAzB0MQsw +CQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEgU3lzdGVtcyBTLkEuMScw +JQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxGTAXBgNVBAMT +EENlcnR1bSBFQy0zODQgQ0EwHhcNMTgwMzI2MDcyNDU0WhcNNDMwMzI2MDcyNDU0 +WjB0MQswCQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEgU3lzdGVtcyBT +LkEuMScwJQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxGTAX +BgNVBAMTEENlcnR1bSBFQy0zODQgQ0EwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATE +KI6rGFtqvm5kN2PkzeyrOvfMobgOgknXhimfoZTy42B4mIF4Bk3y7JoOV2CDn7Tm +Fy8as10CW4kjPMIRBSqniBMY81CE1700LCeJVf/OTOffph8oxPBUw7l8t1Ot68Kj +QjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFI0GZnQkdjrzife81r1HfS+8 +EF9LMA4GA1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNoADBlAjADVS2m5hjEfO/J +UG7BJw+ch69u1RsIGL2SKcHvlJF40jocVYli5RsJHrpka/F2tNQCMQC0QoSZ/6vn +nvuRlydd3LBbMHHOXjgaatkl5+r3YZJW+OraNsKHZZYuciUvf9/DE8k= +-----END CERTIFICATE----- + +# Issuer: CN=Certum Trusted Root CA O=Asseco Data Systems S.A. OU=Certum Certification Authority +# Subject: CN=Certum Trusted Root CA O=Asseco Data Systems S.A. OU=Certum Certification Authority +# Label: "Certum Trusted Root CA" +# Serial: 40870380103424195783807378461123655149 +# MD5 Fingerprint: 51:e1:c2:e7:fe:4c:84:af:59:0e:2f:f4:54:6f:ea:29 +# SHA1 Fingerprint: c8:83:44:c0:18:ae:9f:cc:f1:87:b7:8f:22:d1:c5:d7:45:84:ba:e5 +# SHA256 Fingerprint: fe:76:96:57:38:55:77:3e:37:a9:5e:7a:d4:d9:cc:96:c3:01:57:c1:5d:31:76:5b:a9:b1:57:04:e1:ae:78:fd +-----BEGIN CERTIFICATE----- +MIIFwDCCA6igAwIBAgIQHr9ZULjJgDdMBvfrVU+17TANBgkqhkiG9w0BAQ0FADB6 +MQswCQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEgU3lzdGVtcyBTLkEu +MScwJQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxHzAdBgNV +BAMTFkNlcnR1bSBUcnVzdGVkIFJvb3QgQ0EwHhcNMTgwMzE2MTIxMDEzWhcNNDMw +MzE2MTIxMDEzWjB6MQswCQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEg +U3lzdGVtcyBTLkEuMScwJQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRo +b3JpdHkxHzAdBgNVBAMTFkNlcnR1bSBUcnVzdGVkIFJvb3QgQ0EwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQDRLY67tzbqbTeRn06TpwXkKQMlzhyC93yZ +n0EGze2jusDbCSzBfN8pfktlL5On1AFrAygYo9idBcEq2EXxkd7fO9CAAozPOA/q +p1x4EaTByIVcJdPTsuclzxFUl6s1wB52HO8AU5853BSlLCIls3Jy/I2z5T4IHhQq +NwuIPMqw9MjCoa68wb4pZ1Xi/K1ZXP69VyywkI3C7Te2fJmItdUDmj0VDT06qKhF +8JVOJVkdzZhpu9PMMsmN74H+rX2Ju7pgE8pllWeg8xn2A1bUatMn4qGtg/BKEiJ3 +HAVz4hlxQsDsdUaakFjgao4rpUYwBI4Zshfjvqm6f1bxJAPXsiEodg42MEx51UGa +mqi4NboMOvJEGyCI98Ul1z3G4z5D3Yf+xOr1Uz5MZf87Sst4WmsXXw3Hw09Omiqi +7VdNIuJGmj8PkTQkfVXjjJU30xrwCSss0smNtA0Aq2cpKNgB9RkEth2+dv5yXMSF +ytKAQd8FqKPVhJBPC/PgP5sZ0jeJP/J7UhyM9uH3PAeXjA6iWYEMspA90+NZRu0P +qafegGtaqge2Gcu8V/OXIXoMsSt0Puvap2ctTMSYnjYJdmZm/Bo/6khUHL4wvYBQ +v3y1zgD2DGHZ5yQD4OMBgQ692IU0iL2yNqh7XAjlRICMb/gv1SHKHRzQ+8S1h9E6 +Tsd2tTVItQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSM+xx1 +vALTn04uSNn5YFSqxLNP+jAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQENBQAD +ggIBAEii1QALLtA/vBzVtVRJHlpr9OTy4EA34MwUe7nJ+jW1dReTagVphZzNTxl4 +WxmB82M+w85bj/UvXgF2Ez8sALnNllI5SW0ETsXpD4YN4fqzX4IS8TrOZgYkNCvo +zMrnadyHncI013nR03e4qllY/p0m+jiGPp2Kh2RX5Rc64vmNueMzeMGQ2Ljdt4NR +5MTMI9UGfOZR0800McD2RrsLrfw9EAUqO0qRJe6M1ISHgCq8CYyqOhNf6DR5UMEQ +GfnTKB7U0VEwKbOukGfWHwpjscWpxkIxYxeU72nLL/qMFH3EQxiJ2fAyQOaA4kZf 
+5ePBAFmo+eggvIksDkc0C+pXwlM2/KfUrzHN/gLldfq5Jwn58/U7yn2fqSLLiMmq +0Uc9NneoWWRrJ8/vJ8HjJLWG965+Mk2weWjROeiQWMODvA8s1pfrzgzhIMfatz7D +P78v3DSk+yshzWePS/Tj6tQ/50+6uaWTRRxmHyH6ZF5v4HaUMst19W7l9o/HuKTM +qJZ9ZPskWkoDbGs4xugDQ5r3V7mzKWmTOPQD8rv7gmsHINFSH5pkAnuYZttcTVoP +0ISVoDwUQwbKytu4QTbaakRnh6+v40URFWkIsr4WOZckbxJF0WddCajJFdr60qZf +E2Efv4WstK2tBZQIgx51F9NxO5NQI1mg7TyRVJ12AMXDuDjb +-----END CERTIFICATE----- + +# Issuer: CN=TunTrust Root CA O=Agence Nationale de Certification Electronique +# Subject: CN=TunTrust Root CA O=Agence Nationale de Certification Electronique +# Label: "TunTrust Root CA" +# Serial: 108534058042236574382096126452369648152337120275 +# MD5 Fingerprint: 85:13:b9:90:5b:36:5c:b6:5e:b8:5a:f8:e0:31:57:b4 +# SHA1 Fingerprint: cf:e9:70:84:0f:e0:73:0f:9d:f6:0c:7f:2c:4b:ee:20:46:34:9c:bb +# SHA256 Fingerprint: 2e:44:10:2a:b5:8c:b8:54:19:45:1c:8e:19:d9:ac:f3:66:2c:af:bc:61:4b:6a:53:96:0a:30:f7:d0:e2:eb:41 +-----BEGIN CERTIFICATE----- +MIIFszCCA5ugAwIBAgIUEwLV4kBMkkaGFmddtLu7sms+/BMwDQYJKoZIhvcNAQEL +BQAwYTELMAkGA1UEBhMCVE4xNzA1BgNVBAoMLkFnZW5jZSBOYXRpb25hbGUgZGUg +Q2VydGlmaWNhdGlvbiBFbGVjdHJvbmlxdWUxGTAXBgNVBAMMEFR1blRydXN0IFJv +b3QgQ0EwHhcNMTkwNDI2MDg1NzU2WhcNNDQwNDI2MDg1NzU2WjBhMQswCQYDVQQG +EwJUTjE3MDUGA1UECgwuQWdlbmNlIE5hdGlvbmFsZSBkZSBDZXJ0aWZpY2F0aW9u +IEVsZWN0cm9uaXF1ZTEZMBcGA1UEAwwQVHVuVHJ1c3QgUm9vdCBDQTCCAiIwDQYJ +KoZIhvcNAQEBBQADggIPADCCAgoCggIBAMPN0/y9BFPdDCA61YguBUtB9YOCfvdZ +n56eY+hz2vYGqU8ftPkLHzmMmiDQfgbU7DTZhrx1W4eI8NLZ1KMKsmwb60ksPqxd +2JQDoOw05TDENX37Jk0bbjBU2PWARZw5rZzJJQRNmpA+TkBuimvNKWfGzC3gdOgF +VwpIUPp6Q9p+7FuaDmJ2/uqdHYVy7BG7NegfJ7/Boce7SBbdVtfMTqDhuazb1YMZ +GoXRlJfXyqNlC/M4+QKu3fZnz8k/9YosRxqZbwUN/dAdgjH8KcwAWJeRTIAAHDOF +li/LQcKLEITDCSSJH7UP2dl3RxiSlGBcx5kDPP73lad9UKGAwqmDrViWVSHbhlnU +r8a83YFuB9tgYv7sEG7aaAH0gxupPqJbI9dkxt/con3YS7qC0lH4Zr8GRuR5KiY2 +eY8fTpkdso8MDhz/yV3A/ZAQprE38806JG60hZC/gLkMjNWb1sjxVj8agIl6qeIb +MlEsPvLfe/ZdeikZjuXIvTZxi11Mwh0/rViizz1wTaZQmCXcI/m4WEEIcb9PuISg +jwBUFfyRbVinljvrS5YnzWuioYasDXxU5mZMZl+QviGaAkYt5IPCgLnPSz7ofzwB +7I9ezX/SKEIBlYrilz0QIX32nRzFNKHsLA4KUiwSVXAkPcvCFDVDXSdOvsC9qnyW +5/yeYa1E0wCXAgMBAAGjYzBhMB0GA1UdDgQWBBQGmpsfU33x9aTI04Y+oXNZtPdE +ITAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFAaamx9TffH1pMjThj6hc1m0 +90QhMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAqgVutt0Vyb+z +xiD2BkewhpMl0425yAA/l/VSJ4hxyXT968pk21vvHl26v9Hr7lxpuhbI87mP0zYu +QEkHDVneixCwSQXi/5E/S7fdAo74gShczNxtr18UnH1YeA32gAm56Q6XKRm4t+v4 +FstVEuTGfbvE7Pi1HE4+Z7/FXxttbUcoqgRYYdZ2vyJ/0Adqp2RT8JeNnYA/u8EH +22Wv5psymsNUk8QcCMNE+3tjEUPRahphanltkE8pjkcFwRJpadbGNjHh/PqAulxP +xOu3Mqz4dWEX1xAZufHSCe96Qp1bWgvUxpVOKs7/B9dPfhgGiPEZtdmYu65xxBzn +dFlY7wyJz4sfdZMaBBSSSFCp61cpABbjNhzI+L/wM9VBD8TMPN3pM0MBkRArHtG5 +Xc0yGYuPjCB31yLEQtyEFpslbei0VXF/sHyz03FJuc9SpAQ/3D2gu68zngowYI7b +nV2UqL1g52KAdoGDDIzMMEZJ4gzSqK/rYXHv5yJiqfdcZGyfFoxnNidF9Ql7v/YQ +CvGwjVRDjAS6oz/v4jXH+XTgbzRB0L9zZVcg+ZtnemZoJE6AZb0QmQZZ8mWvuMZH +u/2QeItBcy6vVR/cO5JyboTT0GFMDcx2V+IthSIVNg3rAZ3r2OvEhJn7wAzMMujj +d9qDRIueVSjAi1jTkD5OGwDxFa2DK5o= +-----END CERTIFICATE----- + +# Issuer: CN=HARICA TLS RSA Root CA 2021 O=Hellenic Academic and Research Institutions CA +# Subject: CN=HARICA TLS RSA Root CA 2021 O=Hellenic Academic and Research Institutions CA +# Label: "HARICA TLS RSA Root CA 2021" +# Serial: 76817823531813593706434026085292783742 +# MD5 Fingerprint: 65:47:9b:58:86:dd:2c:f0:fc:a2:84:1f:1e:96:c4:91 +# SHA1 Fingerprint: 02:2d:05:82:fa:88:ce:14:0c:06:79:de:7f:14:10:e9:45:d7:a5:6d +# SHA256 Fingerprint: d9:5d:0e:8e:da:79:52:5b:f9:be:b1:1b:14:d2:10:0d:32:94:98:5f:0c:62:d9:fa:bd:9c:d9:99:ec:cb:7b:1d +-----BEGIN 
CERTIFICATE----- +MIIFpDCCA4ygAwIBAgIQOcqTHO9D88aOk8f0ZIk4fjANBgkqhkiG9w0BAQsFADBs +MQswCQYDVQQGEwJHUjE3MDUGA1UECgwuSGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJl +c2VhcmNoIEluc3RpdHV0aW9ucyBDQTEkMCIGA1UEAwwbSEFSSUNBIFRMUyBSU0Eg +Um9vdCBDQSAyMDIxMB4XDTIxMDIxOTEwNTUzOFoXDTQ1MDIxMzEwNTUzN1owbDEL +MAkGA1UEBhMCR1IxNzA1BgNVBAoMLkhlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNl +YXJjaCBJbnN0aXR1dGlvbnMgQ0ExJDAiBgNVBAMMG0hBUklDQSBUTFMgUlNBIFJv +b3QgQ0EgMjAyMTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAIvC569l +mwVnlskNJLnQDmT8zuIkGCyEf3dRywQRNrhe7Wlxp57kJQmXZ8FHws+RFjZiPTgE +4VGC/6zStGndLuwRo0Xua2s7TL+MjaQenRG56Tj5eg4MmOIjHdFOY9TnuEFE+2uv +a9of08WRiFukiZLRgeaMOVig1mlDqa2YUlhu2wr7a89o+uOkXjpFc5gH6l8Cct4M +pbOfrqkdtx2z/IpZ525yZa31MJQjB/OCFks1mJxTuy/K5FrZx40d/JiZ+yykgmvw +Kh+OC19xXFyuQnspiYHLA6OZyoieC0AJQTPb5lh6/a6ZcMBaD9YThnEvdmn8kN3b +LW7R8pv1GmuebxWMevBLKKAiOIAkbDakO/IwkfN4E8/BPzWr8R0RI7VDIp4BkrcY +AuUR0YLbFQDMYTfBKnya4dC6s1BG7oKsnTH4+yPiAwBIcKMJJnkVU2DzOFytOOqB +AGMUuTNe3QvboEUHGjMJ+E20pwKmafTCWQWIZYVWrkvL4N48fS0ayOn7H6NhStYq +E613TBoYm5EPWNgGVMWX+Ko/IIqmhaZ39qb8HOLubpQzKoNQhArlT4b4UEV4AIHr +W2jjJo3Me1xR9BQsQL4aYB16cmEdH2MtiKrOokWQCPxrvrNQKlr9qEgYRtaQQJKQ +CoReaDH46+0N0x3GfZkYVVYnZS6NRcUk7M7jAgMBAAGjQjBAMA8GA1UdEwEB/wQF +MAMBAf8wHQYDVR0OBBYEFApII6ZgpJIKM+qTW8VX6iVNvRLuMA4GA1UdDwEB/wQE +AwIBhjANBgkqhkiG9w0BAQsFAAOCAgEAPpBIqm5iFSVmewzVjIuJndftTgfvnNAU +X15QvWiWkKQUEapobQk1OUAJ2vQJLDSle1mESSmXdMgHHkdt8s4cUCbjnj1AUz/3 +f5Z2EMVGpdAgS1D0NTsY9FVqQRtHBmg8uwkIYtlfVUKqrFOFrJVWNlar5AWMxaja +H6NpvVMPxP/cyuN+8kyIhkdGGvMA9YCRotxDQpSbIPDRzbLrLFPCU3hKTwSUQZqP +JzLB5UkZv/HywouoCjkxKLR9YjYsTewfM7Z+d21+UPCfDtcRj88YxeMn/ibvBZ3P +zzfF0HvaO7AWhAw6k9a+F9sPPg4ZeAnHqQJyIkv3N3a6dcSFA1pj1bF1BcK5vZSt +jBWZp5N99sXzqnTPBIWUmAD04vnKJGW/4GKvyMX6ssmeVkjaef2WdhW+o45WxLM0 +/L5H9MG0qPzVMIho7suuyWPEdr6sOBjhXlzPrjoiUevRi7PzKzMHVIf6tLITe7pT +BGIBnfHAT+7hOtSLIBD6Alfm78ELt5BGnBkpjNxvoEppaZS3JGWg/6w/zgH7IS79 +aPib8qXPMThcFarmlwDB31qlpzmq6YR/PFGoOtmUW4y/Twhx5duoXNTSpv4Ao8YW +xw/ogM4cKGR0GQjTQuPOAF1/sdwTsOEFy9EgqoZ0njnnkf3/W9b3raYvAwtt41dU +63ZTGI0RmLo= +-----END CERTIFICATE----- + +# Issuer: CN=HARICA TLS ECC Root CA 2021 O=Hellenic Academic and Research Institutions CA +# Subject: CN=HARICA TLS ECC Root CA 2021 O=Hellenic Academic and Research Institutions CA +# Label: "HARICA TLS ECC Root CA 2021" +# Serial: 137515985548005187474074462014555733966 +# MD5 Fingerprint: ae:f7:4c:e5:66:35:d1:b7:9b:8c:22:93:74:d3:4b:b0 +# SHA1 Fingerprint: bc:b0:c1:9d:e9:98:92:70:19:38:57:e9:8d:a7:b4:5d:6e:ee:01:48 +# SHA256 Fingerprint: 3f:99:cc:47:4a:cf:ce:4d:fe:d5:87:94:66:5e:47:8d:15:47:73:9f:2e:78:0f:1b:b4:ca:9b:13:30:97:d4:01 +-----BEGIN CERTIFICATE----- +MIICVDCCAdugAwIBAgIQZ3SdjXfYO2rbIvT/WeK/zjAKBggqhkjOPQQDAzBsMQsw +CQYDVQQGEwJHUjE3MDUGA1UECgwuSGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJlc2Vh +cmNoIEluc3RpdHV0aW9ucyBDQTEkMCIGA1UEAwwbSEFSSUNBIFRMUyBFQ0MgUm9v +dCBDQSAyMDIxMB4XDTIxMDIxOTExMDExMFoXDTQ1MDIxMzExMDEwOVowbDELMAkG +A1UEBhMCR1IxNzA1BgNVBAoMLkhlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJj +aCBJbnN0aXR1dGlvbnMgQ0ExJDAiBgNVBAMMG0hBUklDQSBUTFMgRUNDIFJvb3Qg +Q0EgMjAyMTB2MBAGByqGSM49AgEGBSuBBAAiA2IABDgI/rGgltJ6rK9JOtDA4MM7 +KKrxcm1lAEeIhPyaJmuqS7psBAqIXhfyVYf8MLA04jRYVxqEU+kw2anylnTDUR9Y +STHMmE5gEYd103KUkE+bECUqqHgtvpBBWJAVcqeht6NCMEAwDwYDVR0TAQH/BAUw +AwEB/zAdBgNVHQ4EFgQUyRtTgRL+BNUW0aq8mm+3oJUZbsowDgYDVR0PAQH/BAQD +AgGGMAoGCCqGSM49BAMDA2cAMGQCMBHervjcToiwqfAircJRQO9gcS3ujwLEXQNw +SaSS6sUUiHCm0w2wqsosQJz76YJumgIwK0eaB8bRwoF8yguWGEEbo/QwCZ61IygN +nxS2PFOiTAZpffpskcYqSUXm7LcT4Tps +-----END CERTIFICATE----- diff --git a/python/lib/python3.10/site-packages/pip/_vendor/certifi/core.py 
b/python/lib/python3.10/site-packages/pip/_vendor/certifi/core.py new file mode 100644 index 0000000..f8d4313 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/certifi/core.py @@ -0,0 +1,76 @@ +# -*- coding: utf-8 -*- + +""" +certifi.py +~~~~~~~~~~ + +This module returns the installation location of cacert.pem or its contents. +""" +import os + + +class _PipPatchedCertificate(Exception): + pass + + +DEBIAN_CA_CERTS_PATH = '/etc/ssl/certs/ca-certificates.crt' + +try: + # Return a certificate file on disk for a standalone pip zipapp running in + # an isolated build environment to use. Passing --cert to the standalone + # pip does not work since requests calls where() unconditionally on import. + _PIP_STANDALONE_CERT = os.environ.get("_PIP_STANDALONE_CERT") + if _PIP_STANDALONE_CERT: + def where(): + return _PIP_STANDALONE_CERT + raise _PipPatchedCertificate() + + from importlib.resources import path as get_path, read_text + + _CACERT_CTX = None + _CACERT_PATH = None + + def where(): + # This is slightly terrible, but we want to delay extracting the file + # in cases where we're inside of a zipimport situation until someone + # actually calls where(), but we don't want to re-extract the file + # on every call of where(), so we'll do it once then store it in a + # global variable. + global _CACERT_CTX + global _CACERT_PATH + if _CACERT_PATH is None: + # This is slightly janky, the importlib.resources API wants you to + # manage the cleanup of this file, so it doesn't actually return a + # path, it returns a context manager that will give you the path + # when you enter it and will do any cleanup when you leave it. In + # the common case of not needing a temporary file, it will just + # return the file system location and the __exit__() is a no-op. + # + # We also have to hold onto the actual context manager, because + # it will do the cleanup whenever it gets garbage collected, so + # we will also store that at the global level as well. + _CACERT_PATH = DEBIAN_CA_CERTS_PATH + + return _CACERT_PATH + +except _PipPatchedCertificate: + pass + +except ImportError: + # This fallback will work for Python versions prior to 3.7 that lack the + # importlib.resources module but relies on the existing `where` function + # so won't address issues with environments like PyOxidizer that don't set + # __file__ on modules. + def read_text(_module, _path, encoding="ascii"): + with open(where(), "r", encoding=encoding) as data: + return data.read() + + # If we don't have importlib.resources, then we will just do the old logic + # of assuming we're on the filesystem and munge the path directly. + def where(): + return DEBIAN_CA_CERTS_PATH + + +def contents(): + with open(where(), "r", encoding="ascii") as data: + return data.read() diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/__init__.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/__init__.py new file mode 100644 index 0000000..80ad254 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/__init__.py @@ -0,0 +1,83 @@ +######################## BEGIN LICENSE BLOCK ######################## +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. 
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + + +from .universaldetector import UniversalDetector +from .enums import InputState +from .version import __version__, VERSION + + +__all__ = ['UniversalDetector', 'detect', 'detect_all', '__version__', 'VERSION'] + + +def detect(byte_str): + """ + Detect the encoding of the given byte string. + + :param byte_str: The byte sequence to examine. + :type byte_str: ``bytes`` or ``bytearray`` + """ + if not isinstance(byte_str, bytearray): + if not isinstance(byte_str, bytes): + raise TypeError('Expected object of type bytes or bytearray, got: ' + '{}'.format(type(byte_str))) + else: + byte_str = bytearray(byte_str) + detector = UniversalDetector() + detector.feed(byte_str) + return detector.close() + + +def detect_all(byte_str): + """ + Detect all the possible encodings of the given byte string. + + :param byte_str: The byte sequence to examine. + :type byte_str: ``bytes`` or ``bytearray`` + """ + if not isinstance(byte_str, bytearray): + if not isinstance(byte_str, bytes): + raise TypeError('Expected object of type bytes or bytearray, got: ' + '{}'.format(type(byte_str))) + else: + byte_str = bytearray(byte_str) + + detector = UniversalDetector() + detector.feed(byte_str) + detector.close() + + if detector._input_state == InputState.HIGH_BYTE: + results = [] + for prober in detector._charset_probers: + if prober.get_confidence() > detector.MINIMUM_THRESHOLD: + charset_name = prober.charset_name + lower_charset_name = prober.charset_name.lower() + # Use Windows encoding name instead of ISO-8859 if we saw any + # extra Windows-specific bytes + if lower_charset_name.startswith('iso-8859'): + if detector._has_win_bytes: + charset_name = detector.ISO_WIN_MAP.get(lower_charset_name, + charset_name) + results.append({ + 'encoding': charset_name, + 'confidence': prober.get_confidence(), + 'language': prober.language, + }) + if len(results) > 0: + return sorted(results, key=lambda result: -result['confidence']) + + return [detector.result] diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/big5freq.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/big5freq.py new file mode 100644 index 0000000..38f3251 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/big5freq.py @@ -0,0 +1,386 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. 
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +# Big5 frequency table +# by Taiwan's Mandarin Promotion Council +# +# +# 128 --> 0.42261 +# 256 --> 0.57851 +# 512 --> 0.74851 +# 1024 --> 0.89384 +# 2048 --> 0.97583 +# +# Ideal Distribution Ratio = 0.74851/(1-0.74851) =2.98 +# Random Distribution Ration = 512/(5401-512)=0.105 +# +# Typical Distribution Ratio about 25% of Ideal one, still much higher than RDR + +BIG5_TYPICAL_DISTRIBUTION_RATIO = 0.75 + +#Char to FreqOrder table +BIG5_TABLE_SIZE = 5376 + +BIG5_CHAR_TO_FREQ_ORDER = ( + 1,1801,1506, 255,1431, 198, 9, 82, 6,5008, 177, 202,3681,1256,2821, 110, # 16 +3814, 33,3274, 261, 76, 44,2114, 16,2946,2187,1176, 659,3971, 26,3451,2653, # 32 +1198,3972,3350,4202, 410,2215, 302, 590, 361,1964, 8, 204, 58,4510,5009,1932, # 48 + 63,5010,5011, 317,1614, 75, 222, 159,4203,2417,1480,5012,3555,3091, 224,2822, # 64 +3682, 3, 10,3973,1471, 29,2787,1135,2866,1940, 873, 130,3275,1123, 312,5013, # 80 +4511,2052, 507, 252, 682,5014, 142,1915, 124, 206,2947, 34,3556,3204, 64, 604, # 96 +5015,2501,1977,1978, 155,1991, 645, 641,1606,5016,3452, 337, 72, 406,5017, 80, # 112 + 630, 238,3205,1509, 263, 939,1092,2654, 756,1440,1094,3453, 449, 69,2987, 591, # 128 + 179,2096, 471, 115,2035,1844, 60, 50,2988, 134, 806,1869, 734,2036,3454, 180, # 144 + 995,1607, 156, 537,2907, 688,5018, 319,1305, 779,2145, 514,2379, 298,4512, 359, # 160 +2502, 90,2716,1338, 663, 11, 906,1099,2553, 20,2441, 182, 532,1716,5019, 732, # 176 +1376,4204,1311,1420,3206, 25,2317,1056, 113, 399, 382,1950, 242,3455,2474, 529, # 192 +3276, 475,1447,3683,5020, 117, 21, 656, 810,1297,2300,2334,3557,5021, 126,4205, # 208 + 706, 456, 150, 613,4513, 71,1118,2037,4206, 145,3092, 85, 835, 486,2115,1246, # 224 +1426, 428, 727,1285,1015, 800, 106, 623, 303,1281,5022,2128,2359, 347,3815, 221, # 240 +3558,3135,5023,1956,1153,4207, 83, 296,1199,3093, 192, 624, 93,5024, 822,1898, # 256 +2823,3136, 795,2065, 991,1554,1542,1592, 27, 43,2867, 859, 139,1456, 860,4514, # 272 + 437, 712,3974, 164,2397,3137, 695, 211,3037,2097, 195,3975,1608,3559,3560,3684, # 288 +3976, 234, 811,2989,2098,3977,2233,1441,3561,1615,2380, 668,2077,1638, 305, 228, # 304 +1664,4515, 467, 415,5025, 262,2099,1593, 239, 108, 300, 200,1033, 512,1247,2078, # 320 +5026,5027,2176,3207,3685,2682, 593, 845,1062,3277, 88,1723,2038,3978,1951, 212, # 336 + 266, 152, 149, 468,1899,4208,4516, 77, 187,5028,3038, 37, 5,2990,5029,3979, # 352 +5030,5031, 39,2524,4517,2908,3208,2079, 55, 148, 74,4518, 545, 483,1474,1029, # 368 +1665, 217,1870,1531,3138,1104,2655,4209, 24, 172,3562, 900,3980,3563,3564,4519, # 384 + 32,1408,2824,1312, 329, 487,2360,2251,2717, 784,2683, 4,3039,3351,1427,1789, # 400 + 188, 109, 499,5032,3686,1717,1790, 888,1217,3040,4520,5033,3565,5034,3352,1520, # 416 +3687,3981, 196,1034, 775,5035,5036, 929,1816, 249, 439, 38,5037,1063,5038, 794, # 432 +3982,1435,2301, 46, 178,3278,2066,5039,2381,5040, 214,1709,4521, 804, 35, 707, # 448 + 324,3688,1601,2554, 140, 459,4210,5041,5042,1365, 839, 272, 978,2262,2580,3456, # 464 +2129,1363,3689,1423, 
697, 100,3094, 48, 70,1231, 495,3139,2196,5043,1294,5044, # 480 +2080, 462, 586,1042,3279, 853, 256, 988, 185,2382,3457,1698, 434,1084,5045,3458, # 496 + 314,2625,2788,4522,2335,2336, 569,2285, 637,1817,2525, 757,1162,1879,1616,3459, # 512 + 287,1577,2116, 768,4523,1671,2868,3566,2526,1321,3816, 909,2418,5046,4211, 933, # 528 +3817,4212,2053,2361,1222,4524, 765,2419,1322, 786,4525,5047,1920,1462,1677,2909, # 544 +1699,5048,4526,1424,2442,3140,3690,2600,3353,1775,1941,3460,3983,4213, 309,1369, # 560 +1130,2825, 364,2234,1653,1299,3984,3567,3985,3986,2656, 525,1085,3041, 902,2001, # 576 +1475, 964,4527, 421,1845,1415,1057,2286, 940,1364,3141, 376,4528,4529,1381, 7, # 592 +2527, 983,2383, 336,1710,2684,1846, 321,3461, 559,1131,3042,2752,1809,1132,1313, # 608 + 265,1481,1858,5049, 352,1203,2826,3280, 167,1089, 420,2827, 776, 792,1724,3568, # 624 +4214,2443,3281,5050,4215,5051, 446, 229, 333,2753, 901,3818,1200,1557,4530,2657, # 640 +1921, 395,2754,2685,3819,4216,1836, 125, 916,3209,2626,4531,5052,5053,3820,5054, # 656 +5055,5056,4532,3142,3691,1133,2555,1757,3462,1510,2318,1409,3569,5057,2146, 438, # 672 +2601,2910,2384,3354,1068, 958,3043, 461, 311,2869,2686,4217,1916,3210,4218,1979, # 688 + 383, 750,2755,2627,4219, 274, 539, 385,1278,1442,5058,1154,1965, 384, 561, 210, # 704 + 98,1295,2556,3570,5059,1711,2420,1482,3463,3987,2911,1257, 129,5060,3821, 642, # 720 + 523,2789,2790,2658,5061, 141,2235,1333, 68, 176, 441, 876, 907,4220, 603,2602, # 736 + 710, 171,3464, 404, 549, 18,3143,2398,1410,3692,1666,5062,3571,4533,2912,4534, # 752 +5063,2991, 368,5064, 146, 366, 99, 871,3693,1543, 748, 807,1586,1185, 22,2263, # 768 + 379,3822,3211,5065,3212, 505,1942,2628,1992,1382,2319,5066, 380,2362, 218, 702, # 784 +1818,1248,3465,3044,3572,3355,3282,5067,2992,3694, 930,3283,3823,5068, 59,5069, # 800 + 585, 601,4221, 497,3466,1112,1314,4535,1802,5070,1223,1472,2177,5071, 749,1837, # 816 + 690,1900,3824,1773,3988,1476, 429,1043,1791,2236,2117, 917,4222, 447,1086,1629, # 832 +5072, 556,5073,5074,2021,1654, 844,1090, 105, 550, 966,1758,2828,1008,1783, 686, # 848 +1095,5075,2287, 793,1602,5076,3573,2603,4536,4223,2948,2302,4537,3825, 980,2503, # 864 + 544, 353, 527,4538, 908,2687,2913,5077, 381,2629,1943,1348,5078,1341,1252, 560, # 880 +3095,5079,3467,2870,5080,2054, 973, 886,2081, 143,4539,5081,5082, 157,3989, 496, # 896 +4224, 57, 840, 540,2039,4540,4541,3468,2118,1445, 970,2264,1748,1966,2082,4225, # 912 +3144,1234,1776,3284,2829,3695, 773,1206,2130,1066,2040,1326,3990,1738,1725,4226, # 928 + 279,3145, 51,1544,2604, 423,1578,2131,2067, 173,4542,1880,5083,5084,1583, 264, # 944 + 610,3696,4543,2444, 280, 154,5085,5086,5087,1739, 338,1282,3096, 693,2871,1411, # 960 +1074,3826,2445,5088,4544,5089,5090,1240, 952,2399,5091,2914,1538,2688, 685,1483, # 976 +4227,2475,1436, 953,4228,2055,4545, 671,2400, 79,4229,2446,3285, 608, 567,2689, # 992 +3469,4230,4231,1691, 393,1261,1792,2401,5092,4546,5093,5094,5095,5096,1383,1672, # 1008 +3827,3213,1464, 522,1119, 661,1150, 216, 675,4547,3991,1432,3574, 609,4548,2690, # 1024 +2402,5097,5098,5099,4232,3045, 0,5100,2476, 315, 231,2447, 301,3356,4549,2385, # 1040 +5101, 233,4233,3697,1819,4550,4551,5102, 96,1777,1315,2083,5103, 257,5104,1810, # 1056 +3698,2718,1139,1820,4234,2022,1124,2164,2791,1778,2659,5105,3097, 363,1655,3214, # 1072 +5106,2993,5107,5108,5109,3992,1567,3993, 718, 103,3215, 849,1443, 341,3357,2949, # 1088 +1484,5110,1712, 127, 67, 339,4235,2403, 679,1412, 821,5111,5112, 834, 738, 351, # 1104 +2994,2147, 846, 235,1497,1881, 418,1993,3828,2719, 
186,1100,2148,2756,3575,1545, # 1120 +1355,2950,2872,1377, 583,3994,4236,2581,2995,5113,1298,3699,1078,2557,3700,2363, # 1136 + 78,3829,3830, 267,1289,2100,2002,1594,4237, 348, 369,1274,2197,2178,1838,4552, # 1152 +1821,2830,3701,2757,2288,2003,4553,2951,2758, 144,3358, 882,4554,3995,2759,3470, # 1168 +4555,2915,5114,4238,1726, 320,5115,3996,3046, 788,2996,5116,2831,1774,1327,2873, # 1184 +3997,2832,5117,1306,4556,2004,1700,3831,3576,2364,2660, 787,2023, 506, 824,3702, # 1200 + 534, 323,4557,1044,3359,2024,1901, 946,3471,5118,1779,1500,1678,5119,1882,4558, # 1216 + 165, 243,4559,3703,2528, 123, 683,4239, 764,4560, 36,3998,1793, 589,2916, 816, # 1232 + 626,1667,3047,2237,1639,1555,1622,3832,3999,5120,4000,2874,1370,1228,1933, 891, # 1248 +2084,2917, 304,4240,5121, 292,2997,2720,3577, 691,2101,4241,1115,4561, 118, 662, # 1264 +5122, 611,1156, 854,2386,1316,2875, 2, 386, 515,2918,5123,5124,3286, 868,2238, # 1280 +1486, 855,2661, 785,2216,3048,5125,1040,3216,3578,5126,3146, 448,5127,1525,5128, # 1296 +2165,4562,5129,3833,5130,4242,2833,3579,3147, 503, 818,4001,3148,1568, 814, 676, # 1312 +1444, 306,1749,5131,3834,1416,1030, 197,1428, 805,2834,1501,4563,5132,5133,5134, # 1328 +1994,5135,4564,5136,5137,2198, 13,2792,3704,2998,3149,1229,1917,5138,3835,2132, # 1344 +5139,4243,4565,2404,3580,5140,2217,1511,1727,1120,5141,5142, 646,3836,2448, 307, # 1360 +5143,5144,1595,3217,5145,5146,5147,3705,1113,1356,4002,1465,2529,2530,5148, 519, # 1376 +5149, 128,2133, 92,2289,1980,5150,4003,1512, 342,3150,2199,5151,2793,2218,1981, # 1392 +3360,4244, 290,1656,1317, 789, 827,2365,5152,3837,4566, 562, 581,4004,5153, 401, # 1408 +4567,2252, 94,4568,5154,1399,2794,5155,1463,2025,4569,3218,1944,5156, 828,1105, # 1424 +4245,1262,1394,5157,4246, 605,4570,5158,1784,2876,5159,2835, 819,2102, 578,2200, # 1440 +2952,5160,1502, 436,3287,4247,3288,2836,4005,2919,3472,3473,5161,2721,2320,5162, # 1456 +5163,2337,2068, 23,4571, 193, 826,3838,2103, 699,1630,4248,3098, 390,1794,1064, # 1472 +3581,5164,1579,3099,3100,1400,5165,4249,1839,1640,2877,5166,4572,4573, 137,4250, # 1488 + 598,3101,1967, 780, 104, 974,2953,5167, 278, 899, 253, 402, 572, 504, 493,1339, # 1504 +5168,4006,1275,4574,2582,2558,5169,3706,3049,3102,2253, 565,1334,2722, 863, 41, # 1520 +5170,5171,4575,5172,1657,2338, 19, 463,2760,4251, 606,5173,2999,3289,1087,2085, # 1536 +1323,2662,3000,5174,1631,1623,1750,4252,2691,5175,2878, 791,2723,2663,2339, 232, # 1552 +2421,5176,3001,1498,5177,2664,2630, 755,1366,3707,3290,3151,2026,1609, 119,1918, # 1568 +3474, 862,1026,4253,5178,4007,3839,4576,4008,4577,2265,1952,2477,5179,1125, 817, # 1584 +4254,4255,4009,1513,1766,2041,1487,4256,3050,3291,2837,3840,3152,5180,5181,1507, # 1600 +5182,2692, 733, 40,1632,1106,2879, 345,4257, 841,2531, 230,4578,3002,1847,3292, # 1616 +3475,5183,1263, 986,3476,5184, 735, 879, 254,1137, 857, 622,1300,1180,1388,1562, # 1632 +4010,4011,2954, 967,2761,2665,1349, 592,2134,1692,3361,3003,1995,4258,1679,4012, # 1648 +1902,2188,5185, 739,3708,2724,1296,1290,5186,4259,2201,2202,1922,1563,2605,2559, # 1664 +1871,2762,3004,5187, 435,5188, 343,1108, 596, 17,1751,4579,2239,3477,3709,5189, # 1680 +4580, 294,3582,2955,1693, 477, 979, 281,2042,3583, 643,2043,3710,2631,2795,2266, # 1696 +1031,2340,2135,2303,3584,4581, 367,1249,2560,5190,3585,5191,4582,1283,3362,2005, # 1712 + 240,1762,3363,4583,4584, 836,1069,3153, 474,5192,2149,2532, 268,3586,5193,3219, # 1728 +1521,1284,5194,1658,1546,4260,5195,3587,3588,5196,4261,3364,2693,1685,4262, 961, # 1744 +1673,2632, 190,2006,2203,3841,4585,4586,5197, 
570,2504,3711,1490,5198,4587,2633, # 1760 +3293,1957,4588, 584,1514, 396,1045,1945,5199,4589,1968,2449,5200,5201,4590,4013, # 1776 + 619,5202,3154,3294, 215,2007,2796,2561,3220,4591,3221,4592, 763,4263,3842,4593, # 1792 +5203,5204,1958,1767,2956,3365,3712,1174, 452,1477,4594,3366,3155,5205,2838,1253, # 1808 +2387,2189,1091,2290,4264, 492,5206, 638,1169,1825,2136,1752,4014, 648, 926,1021, # 1824 +1324,4595, 520,4596, 997, 847,1007, 892,4597,3843,2267,1872,3713,2405,1785,4598, # 1840 +1953,2957,3103,3222,1728,4265,2044,3714,4599,2008,1701,3156,1551, 30,2268,4266, # 1856 +5207,2027,4600,3589,5208, 501,5209,4267, 594,3478,2166,1822,3590,3479,3591,3223, # 1872 + 829,2839,4268,5210,1680,3157,1225,4269,5211,3295,4601,4270,3158,2341,5212,4602, # 1888 +4271,5213,4015,4016,5214,1848,2388,2606,3367,5215,4603, 374,4017, 652,4272,4273, # 1904 + 375,1140, 798,5216,5217,5218,2366,4604,2269, 546,1659, 138,3051,2450,4605,5219, # 1920 +2254, 612,1849, 910, 796,3844,1740,1371, 825,3845,3846,5220,2920,2562,5221, 692, # 1936 + 444,3052,2634, 801,4606,4274,5222,1491, 244,1053,3053,4275,4276, 340,5223,4018, # 1952 +1041,3005, 293,1168, 87,1357,5224,1539, 959,5225,2240, 721, 694,4277,3847, 219, # 1968 +1478, 644,1417,3368,2666,1413,1401,1335,1389,4019,5226,5227,3006,2367,3159,1826, # 1984 + 730,1515, 184,2840, 66,4607,5228,1660,2958, 246,3369, 378,1457, 226,3480, 975, # 2000 +4020,2959,1264,3592, 674, 696,5229, 163,5230,1141,2422,2167, 713,3593,3370,4608, # 2016 +4021,5231,5232,1186, 15,5233,1079,1070,5234,1522,3224,3594, 276,1050,2725, 758, # 2032 +1126, 653,2960,3296,5235,2342, 889,3595,4022,3104,3007, 903,1250,4609,4023,3481, # 2048 +3596,1342,1681,1718, 766,3297, 286, 89,2961,3715,5236,1713,5237,2607,3371,3008, # 2064 +5238,2962,2219,3225,2880,5239,4610,2505,2533, 181, 387,1075,4024, 731,2190,3372, # 2080 +5240,3298, 310, 313,3482,2304, 770,4278, 54,3054, 189,4611,3105,3848,4025,5241, # 2096 +1230,1617,1850, 355,3597,4279,4612,3373, 111,4280,3716,1350,3160,3483,3055,4281, # 2112 +2150,3299,3598,5242,2797,4026,4027,3009, 722,2009,5243,1071, 247,1207,2343,2478, # 2128 +1378,4613,2010, 864,1437,1214,4614, 373,3849,1142,2220, 667,4615, 442,2763,2563, # 2144 +3850,4028,1969,4282,3300,1840, 837, 170,1107, 934,1336,1883,5244,5245,2119,4283, # 2160 +2841, 743,1569,5246,4616,4284, 582,2389,1418,3484,5247,1803,5248, 357,1395,1729, # 2176 +3717,3301,2423,1564,2241,5249,3106,3851,1633,4617,1114,2086,4285,1532,5250, 482, # 2192 +2451,4618,5251,5252,1492, 833,1466,5253,2726,3599,1641,2842,5254,1526,1272,3718, # 2208 +4286,1686,1795, 416,2564,1903,1954,1804,5255,3852,2798,3853,1159,2321,5256,2881, # 2224 +4619,1610,1584,3056,2424,2764, 443,3302,1163,3161,5257,5258,4029,5259,4287,2506, # 2240 +3057,4620,4030,3162,2104,1647,3600,2011,1873,4288,5260,4289, 431,3485,5261, 250, # 2256 + 97, 81,4290,5262,1648,1851,1558, 160, 848,5263, 866, 740,1694,5264,2204,2843, # 2272 +3226,4291,4621,3719,1687, 950,2479, 426, 469,3227,3720,3721,4031,5265,5266,1188, # 2288 + 424,1996, 861,3601,4292,3854,2205,2694, 168,1235,3602,4293,5267,2087,1674,4622, # 2304 +3374,3303, 220,2565,1009,5268,3855, 670,3010, 332,1208, 717,5269,5270,3603,2452, # 2320 +4032,3375,5271, 513,5272,1209,2882,3376,3163,4623,1080,5273,5274,5275,5276,2534, # 2336 +3722,3604, 815,1587,4033,4034,5277,3605,3486,3856,1254,4624,1328,3058,1390,4035, # 2352 +1741,4036,3857,4037,5278, 236,3858,2453,3304,5279,5280,3723,3859,1273,3860,4625, # 2368 +5281, 308,5282,4626, 245,4627,1852,2480,1307,2583, 430, 715,2137,2454,5283, 270, # 2384 + 199,2883,4038,5284,3606,2727,1753, 761,1754, 
725,1661,1841,4628,3487,3724,5285, # 2400 +5286, 587, 14,3305, 227,2608, 326, 480,2270, 943,2765,3607, 291, 650,1884,5287, # 2416 +1702,1226, 102,1547, 62,3488, 904,4629,3489,1164,4294,5288,5289,1224,1548,2766, # 2432 + 391, 498,1493,5290,1386,1419,5291,2056,1177,4630, 813, 880,1081,2368, 566,1145, # 2448 +4631,2291,1001,1035,2566,2609,2242, 394,1286,5292,5293,2069,5294, 86,1494,1730, # 2464 +4039, 491,1588, 745, 897,2963, 843,3377,4040,2767,2884,3306,1768, 998,2221,2070, # 2480 + 397,1827,1195,1970,3725,3011,3378, 284,5295,3861,2507,2138,2120,1904,5296,4041, # 2496 +2151,4042,4295,1036,3490,1905, 114,2567,4296, 209,1527,5297,5298,2964,2844,2635, # 2512 +2390,2728,3164, 812,2568,5299,3307,5300,1559, 737,1885,3726,1210, 885, 28,2695, # 2528 +3608,3862,5301,4297,1004,1780,4632,5302, 346,1982,2222,2696,4633,3863,1742, 797, # 2544 +1642,4043,1934,1072,1384,2152, 896,4044,3308,3727,3228,2885,3609,5303,2569,1959, # 2560 +4634,2455,1786,5304,5305,5306,4045,4298,1005,1308,3728,4299,2729,4635,4636,1528, # 2576 +2610, 161,1178,4300,1983, 987,4637,1101,4301, 631,4046,1157,3229,2425,1343,1241, # 2592 +1016,2243,2570, 372, 877,2344,2508,1160, 555,1935, 911,4047,5307, 466,1170, 169, # 2608 +1051,2921,2697,3729,2481,3012,1182,2012,2571,1251,2636,5308, 992,2345,3491,1540, # 2624 +2730,1201,2071,2406,1997,2482,5309,4638, 528,1923,2191,1503,1874,1570,2369,3379, # 2640 +3309,5310, 557,1073,5311,1828,3492,2088,2271,3165,3059,3107, 767,3108,2799,4639, # 2656 +1006,4302,4640,2346,1267,2179,3730,3230, 778,4048,3231,2731,1597,2667,5312,4641, # 2672 +5313,3493,5314,5315,5316,3310,2698,1433,3311, 131, 95,1504,4049, 723,4303,3166, # 2688 +1842,3610,2768,2192,4050,2028,2105,3731,5317,3013,4051,1218,5318,3380,3232,4052, # 2704 +4304,2584, 248,1634,3864, 912,5319,2845,3732,3060,3865, 654, 53,5320,3014,5321, # 2720 +1688,4642, 777,3494,1032,4053,1425,5322, 191, 820,2121,2846, 971,4643, 931,3233, # 2736 + 135, 664, 783,3866,1998, 772,2922,1936,4054,3867,4644,2923,3234, 282,2732, 640, # 2752 +1372,3495,1127, 922, 325,3381,5323,5324, 711,2045,5325,5326,4055,2223,2800,1937, # 2768 +4056,3382,2224,2255,3868,2305,5327,4645,3869,1258,3312,4057,3235,2139,2965,4058, # 2784 +4059,5328,2225, 258,3236,4646, 101,1227,5329,3313,1755,5330,1391,3314,5331,2924, # 2800 +2057, 893,5332,5333,5334,1402,4305,2347,5335,5336,3237,3611,5337,5338, 878,1325, # 2816 +1781,2801,4647, 259,1385,2585, 744,1183,2272,4648,5339,4060,2509,5340, 684,1024, # 2832 +4306,5341, 472,3612,3496,1165,3315,4061,4062, 322,2153, 881, 455,1695,1152,1340, # 2848 + 660, 554,2154,4649,1058,4650,4307, 830,1065,3383,4063,4651,1924,5342,1703,1919, # 2864 +5343, 932,2273, 122,5344,4652, 947, 677,5345,3870,2637, 297,1906,1925,2274,4653, # 2880 +2322,3316,5346,5347,4308,5348,4309, 84,4310, 112, 989,5349, 547,1059,4064, 701, # 2896 +3613,1019,5350,4311,5351,3497, 942, 639, 457,2306,2456, 993,2966, 407, 851, 494, # 2912 +4654,3384, 927,5352,1237,5353,2426,3385, 573,4312, 680, 921,2925,1279,1875, 285, # 2928 + 790,1448,1984, 719,2168,5354,5355,4655,4065,4066,1649,5356,1541, 563,5357,1077, # 2944 +5358,3386,3061,3498, 511,3015,4067,4068,3733,4069,1268,2572,3387,3238,4656,4657, # 2960 +5359, 535,1048,1276,1189,2926,2029,3167,1438,1373,2847,2967,1134,2013,5360,4313, # 2976 +1238,2586,3109,1259,5361, 700,5362,2968,3168,3734,4314,5363,4315,1146,1876,1907, # 2992 +4658,2611,4070, 781,2427, 132,1589, 203, 147, 273,2802,2407, 898,1787,2155,4071, # 3008 +4072,5364,3871,2803,5365,5366,4659,4660,5367,3239,5368,1635,3872, 965,5369,1805, # 3024 
+2699,1516,3614,1121,1082,1329,3317,4073,1449,3873, 65,1128,2848,2927,2769,1590, # 3040 +3874,5370,5371, 12,2668, 45, 976,2587,3169,4661, 517,2535,1013,1037,3240,5372, # 3056 +3875,2849,5373,3876,5374,3499,5375,2612, 614,1999,2323,3877,3110,2733,2638,5376, # 3072 +2588,4316, 599,1269,5377,1811,3735,5378,2700,3111, 759,1060, 489,1806,3388,3318, # 3088 +1358,5379,5380,2391,1387,1215,2639,2256, 490,5381,5382,4317,1759,2392,2348,5383, # 3104 +4662,3878,1908,4074,2640,1807,3241,4663,3500,3319,2770,2349, 874,5384,5385,3501, # 3120 +3736,1859, 91,2928,3737,3062,3879,4664,5386,3170,4075,2669,5387,3502,1202,1403, # 3136 +3880,2969,2536,1517,2510,4665,3503,2511,5388,4666,5389,2701,1886,1495,1731,4076, # 3152 +2370,4667,5390,2030,5391,5392,4077,2702,1216, 237,2589,4318,2324,4078,3881,4668, # 3168 +4669,2703,3615,3504, 445,4670,5393,5394,5395,5396,2771, 61,4079,3738,1823,4080, # 3184 +5397, 687,2046, 935, 925, 405,2670, 703,1096,1860,2734,4671,4081,1877,1367,2704, # 3200 +3389, 918,2106,1782,2483, 334,3320,1611,1093,4672, 564,3171,3505,3739,3390, 945, # 3216 +2641,2058,4673,5398,1926, 872,4319,5399,3506,2705,3112, 349,4320,3740,4082,4674, # 3232 +3882,4321,3741,2156,4083,4675,4676,4322,4677,2408,2047, 782,4084, 400, 251,4323, # 3248 +1624,5400,5401, 277,3742, 299,1265, 476,1191,3883,2122,4324,4325,1109, 205,5402, # 3264 +2590,1000,2157,3616,1861,5403,5404,5405,4678,5406,4679,2573, 107,2484,2158,4085, # 3280 +3507,3172,5407,1533, 541,1301, 158, 753,4326,2886,3617,5408,1696, 370,1088,4327, # 3296 +4680,3618, 579, 327, 440, 162,2244, 269,1938,1374,3508, 968,3063, 56,1396,3113, # 3312 +2107,3321,3391,5409,1927,2159,4681,3016,5410,3619,5411,5412,3743,4682,2485,5413, # 3328 +2804,5414,1650,4683,5415,2613,5416,5417,4086,2671,3392,1149,3393,4087,3884,4088, # 3344 +5418,1076, 49,5419, 951,3242,3322,3323, 450,2850, 920,5420,1812,2805,2371,4328, # 3360 +1909,1138,2372,3885,3509,5421,3243,4684,1910,1147,1518,2428,4685,3886,5422,4686, # 3376 +2393,2614, 260,1796,3244,5423,5424,3887,3324, 708,5425,3620,1704,5426,3621,1351, # 3392 +1618,3394,3017,1887, 944,4329,3395,4330,3064,3396,4331,5427,3744, 422, 413,1714, # 3408 +3325, 500,2059,2350,4332,2486,5428,1344,1911, 954,5429,1668,5430,5431,4089,2409, # 3424 +4333,3622,3888,4334,5432,2307,1318,2512,3114, 133,3115,2887,4687, 629, 31,2851, # 3440 +2706,3889,4688, 850, 949,4689,4090,2970,1732,2089,4335,1496,1853,5433,4091, 620, # 3456 +3245, 981,1242,3745,3397,1619,3746,1643,3326,2140,2457,1971,1719,3510,2169,5434, # 3472 +3246,5435,5436,3398,1829,5437,1277,4690,1565,2048,5438,1636,3623,3116,5439, 869, # 3488 +2852, 655,3890,3891,3117,4092,3018,3892,1310,3624,4691,5440,5441,5442,1733, 558, # 3504 +4692,3747, 335,1549,3065,1756,4336,3748,1946,3511,1830,1291,1192, 470,2735,2108, # 3520 +2806, 913,1054,4093,5443,1027,5444,3066,4094,4693, 982,2672,3399,3173,3512,3247, # 3536 +3248,1947,2807,5445, 571,4694,5446,1831,5447,3625,2591,1523,2429,5448,2090, 984, # 3552 +4695,3749,1960,5449,3750, 852, 923,2808,3513,3751, 969,1519, 999,2049,2325,1705, # 3568 +5450,3118, 615,1662, 151, 597,4095,2410,2326,1049, 275,4696,3752,4337, 568,3753, # 3584 +3626,2487,4338,3754,5451,2430,2275, 409,3249,5452,1566,2888,3514,1002, 769,2853, # 3600 + 194,2091,3174,3755,2226,3327,4339, 628,1505,5453,5454,1763,2180,3019,4096, 521, # 3616 +1161,2592,1788,2206,2411,4697,4097,1625,4340,4341, 412, 42,3119, 464,5455,2642, # 3632 +4698,3400,1760,1571,2889,3515,2537,1219,2207,3893,2643,2141,2373,4699,4700,3328, # 3648 +1651,3401,3627,5456,5457,3628,2488,3516,5458,3756,5459,5460,2276,2092, 460,5461, # 3664 
+4701,5462,3020, 962, 588,3629, 289,3250,2644,1116, 52,5463,3067,1797,5464,5465, # 3680 +5466,1467,5467,1598,1143,3757,4342,1985,1734,1067,4702,1280,3402, 465,4703,1572, # 3696 + 510,5468,1928,2245,1813,1644,3630,5469,4704,3758,5470,5471,2673,1573,1534,5472, # 3712 +5473, 536,1808,1761,3517,3894,3175,2645,5474,5475,5476,4705,3518,2929,1912,2809, # 3728 +5477,3329,1122, 377,3251,5478, 360,5479,5480,4343,1529, 551,5481,2060,3759,1769, # 3744 +2431,5482,2930,4344,3330,3120,2327,2109,2031,4706,1404, 136,1468,1479, 672,1171, # 3760 +3252,2308, 271,3176,5483,2772,5484,2050, 678,2736, 865,1948,4707,5485,2014,4098, # 3776 +2971,5486,2737,2227,1397,3068,3760,4708,4709,1735,2931,3403,3631,5487,3895, 509, # 3792 +2854,2458,2890,3896,5488,5489,3177,3178,4710,4345,2538,4711,2309,1166,1010, 552, # 3808 + 681,1888,5490,5491,2972,2973,4099,1287,1596,1862,3179, 358, 453, 736, 175, 478, # 3824 +1117, 905,1167,1097,5492,1854,1530,5493,1706,5494,2181,3519,2292,3761,3520,3632, # 3840 +4346,2093,4347,5495,3404,1193,2489,4348,1458,2193,2208,1863,1889,1421,3331,2932, # 3856 +3069,2182,3521, 595,2123,5496,4100,5497,5498,4349,1707,2646, 223,3762,1359, 751, # 3872 +3121, 183,3522,5499,2810,3021, 419,2374, 633, 704,3897,2394, 241,5500,5501,5502, # 3888 + 838,3022,3763,2277,2773,2459,3898,1939,2051,4101,1309,3122,2246,1181,5503,1136, # 3904 +2209,3899,2375,1446,4350,2310,4712,5504,5505,4351,1055,2615, 484,3764,5506,4102, # 3920 + 625,4352,2278,3405,1499,4353,4103,5507,4104,4354,3253,2279,2280,3523,5508,5509, # 3936 +2774, 808,2616,3765,3406,4105,4355,3123,2539, 526,3407,3900,4356, 955,5510,1620, # 3952 +4357,2647,2432,5511,1429,3766,1669,1832, 994, 928,5512,3633,1260,5513,5514,5515, # 3968 +1949,2293, 741,2933,1626,4358,2738,2460, 867,1184, 362,3408,1392,5516,5517,4106, # 3984 +4359,1770,1736,3254,2934,4713,4714,1929,2707,1459,1158,5518,3070,3409,2891,1292, # 4000 +1930,2513,2855,3767,1986,1187,2072,2015,2617,4360,5519,2574,2514,2170,3768,2490, # 4016 +3332,5520,3769,4715,5521,5522, 666,1003,3023,1022,3634,4361,5523,4716,1814,2257, # 4032 + 574,3901,1603, 295,1535, 705,3902,4362, 283, 858, 417,5524,5525,3255,4717,4718, # 4048 +3071,1220,1890,1046,2281,2461,4107,1393,1599, 689,2575, 388,4363,5526,2491, 802, # 4064 +5527,2811,3903,2061,1405,2258,5528,4719,3904,2110,1052,1345,3256,1585,5529, 809, # 4080 +5530,5531,5532, 575,2739,3524, 956,1552,1469,1144,2328,5533,2329,1560,2462,3635, # 4096 +3257,4108, 616,2210,4364,3180,2183,2294,5534,1833,5535,3525,4720,5536,1319,3770, # 4112 +3771,1211,3636,1023,3258,1293,2812,5537,5538,5539,3905, 607,2311,3906, 762,2892, # 4128 +1439,4365,1360,4721,1485,3072,5540,4722,1038,4366,1450,2062,2648,4367,1379,4723, # 4144 +2593,5541,5542,4368,1352,1414,2330,2935,1172,5543,5544,3907,3908,4724,1798,1451, # 4160 +5545,5546,5547,5548,2936,4109,4110,2492,2351, 411,4111,4112,3637,3333,3124,4725, # 4176 +1561,2674,1452,4113,1375,5549,5550, 47,2974, 316,5551,1406,1591,2937,3181,5552, # 4192 +1025,2142,3125,3182, 354,2740, 884,2228,4369,2412, 508,3772, 726,3638, 996,2433, # 4208 +3639, 729,5553, 392,2194,1453,4114,4726,3773,5554,5555,2463,3640,2618,1675,2813, # 4224 + 919,2352,2975,2353,1270,4727,4115, 73,5556,5557, 647,5558,3259,2856,2259,1550, # 4240 +1346,3024,5559,1332, 883,3526,5560,5561,5562,5563,3334,2775,5564,1212, 831,1347, # 4256 +4370,4728,2331,3909,1864,3073, 720,3910,4729,4730,3911,5565,4371,5566,5567,4731, # 4272 +5568,5569,1799,4732,3774,2619,4733,3641,1645,2376,4734,5570,2938, 669,2211,2675, # 4288 +2434,5571,2893,5572,5573,1028,3260,5574,4372,2413,5575,2260,1353,5576,5577,4735, # 
4304 +3183, 518,5578,4116,5579,4373,1961,5580,2143,4374,5581,5582,3025,2354,2355,3912, # 4320 + 516,1834,1454,4117,2708,4375,4736,2229,2620,1972,1129,3642,5583,2776,5584,2976, # 4336 +1422, 577,1470,3026,1524,3410,5585,5586, 432,4376,3074,3527,5587,2594,1455,2515, # 4352 +2230,1973,1175,5588,1020,2741,4118,3528,4737,5589,2742,5590,1743,1361,3075,3529, # 4368 +2649,4119,4377,4738,2295, 895, 924,4378,2171, 331,2247,3076, 166,1627,3077,1098, # 4384 +5591,1232,2894,2231,3411,4739, 657, 403,1196,2377, 542,3775,3412,1600,4379,3530, # 4400 +5592,4740,2777,3261, 576, 530,1362,4741,4742,2540,2676,3776,4120,5593, 842,3913, # 4416 +5594,2814,2032,1014,4121, 213,2709,3413, 665, 621,4380,5595,3777,2939,2435,5596, # 4432 +2436,3335,3643,3414,4743,4381,2541,4382,4744,3644,1682,4383,3531,1380,5597, 724, # 4448 +2282, 600,1670,5598,1337,1233,4745,3126,2248,5599,1621,4746,5600, 651,4384,5601, # 4464 +1612,4385,2621,5602,2857,5603,2743,2312,3078,5604, 716,2464,3079, 174,1255,2710, # 4480 +4122,3645, 548,1320,1398, 728,4123,1574,5605,1891,1197,3080,4124,5606,3081,3082, # 4496 +3778,3646,3779, 747,5607, 635,4386,4747,5608,5609,5610,4387,5611,5612,4748,5613, # 4512 +3415,4749,2437, 451,5614,3780,2542,2073,4388,2744,4389,4125,5615,1764,4750,5616, # 4528 +4390, 350,4751,2283,2395,2493,5617,4391,4126,2249,1434,4127, 488,4752, 458,4392, # 4544 +4128,3781, 771,1330,2396,3914,2576,3184,2160,2414,1553,2677,3185,4393,5618,2494, # 4560 +2895,2622,1720,2711,4394,3416,4753,5619,2543,4395,5620,3262,4396,2778,5621,2016, # 4576 +2745,5622,1155,1017,3782,3915,5623,3336,2313, 201,1865,4397,1430,5624,4129,5625, # 4592 +5626,5627,5628,5629,4398,1604,5630, 414,1866, 371,2595,4754,4755,3532,2017,3127, # 4608 +4756,1708, 960,4399, 887, 389,2172,1536,1663,1721,5631,2232,4130,2356,2940,1580, # 4624 +5632,5633,1744,4757,2544,4758,4759,5634,4760,5635,2074,5636,4761,3647,3417,2896, # 4640 +4400,5637,4401,2650,3418,2815, 673,2712,2465, 709,3533,4131,3648,4402,5638,1148, # 4656 + 502, 634,5639,5640,1204,4762,3649,1575,4763,2623,3783,5641,3784,3128, 948,3263, # 4672 + 121,1745,3916,1110,5642,4403,3083,2516,3027,4132,3785,1151,1771,3917,1488,4133, # 4688 +1987,5643,2438,3534,5644,5645,2094,5646,4404,3918,1213,1407,2816, 531,2746,2545, # 4704 +3264,1011,1537,4764,2779,4405,3129,1061,5647,3786,3787,1867,2897,5648,2018, 120, # 4720 +4406,4407,2063,3650,3265,2314,3919,2678,3419,1955,4765,4134,5649,3535,1047,2713, # 4736 +1266,5650,1368,4766,2858, 649,3420,3920,2546,2747,1102,2859,2679,5651,5652,2000, # 4752 +5653,1111,3651,2977,5654,2495,3921,3652,2817,1855,3421,3788,5655,5656,3422,2415, # 4768 +2898,3337,3266,3653,5657,2577,5658,3654,2818,4135,1460, 856,5659,3655,5660,2899, # 4784 +2978,5661,2900,3922,5662,4408, 632,2517, 875,3923,1697,3924,2296,5663,5664,4767, # 4800 +3028,1239, 580,4768,4409,5665, 914, 936,2075,1190,4136,1039,2124,5666,5667,5668, # 4816 +5669,3423,1473,5670,1354,4410,3925,4769,2173,3084,4137, 915,3338,4411,4412,3339, # 4832 +1605,1835,5671,2748, 398,3656,4413,3926,4138, 328,1913,2860,4139,3927,1331,4414, # 4848 +3029, 937,4415,5672,3657,4140,4141,3424,2161,4770,3425, 524, 742, 538,3085,1012, # 4864 +5673,5674,3928,2466,5675, 658,1103, 225,3929,5676,5677,4771,5678,4772,5679,3267, # 4880 +1243,5680,4142, 963,2250,4773,5681,2714,3658,3186,5682,5683,2596,2332,5684,4774, # 4896 +5685,5686,5687,3536, 957,3426,2547,2033,1931,2941,2467, 870,2019,3659,1746,2780, # 4912 +2781,2439,2468,5688,3930,5689,3789,3130,3790,3537,3427,3791,5690,1179,3086,5691, # 4928 +3187,2378,4416,3792,2548,3188,3131,2749,4143,5692,3428,1556,2549,2297, 
977,2901, # 4944 +2034,4144,1205,3429,5693,1765,3430,3189,2125,1271, 714,1689,4775,3538,5694,2333, # 4960 +3931, 533,4417,3660,2184, 617,5695,2469,3340,3539,2315,5696,5697,3190,5698,5699, # 4976 +3932,1988, 618, 427,2651,3540,3431,5700,5701,1244,1690,5702,2819,4418,4776,5703, # 4992 +3541,4777,5704,2284,1576, 473,3661,4419,3432, 972,5705,3662,5706,3087,5707,5708, # 5008 +4778,4779,5709,3793,4145,4146,5710, 153,4780, 356,5711,1892,2902,4420,2144, 408, # 5024 + 803,2357,5712,3933,5713,4421,1646,2578,2518,4781,4782,3934,5714,3935,4422,5715, # 5040 +2416,3433, 752,5716,5717,1962,3341,2979,5718, 746,3030,2470,4783,4423,3794, 698, # 5056 +4784,1893,4424,3663,2550,4785,3664,3936,5719,3191,3434,5720,1824,1302,4147,2715, # 5072 +3937,1974,4425,5721,4426,3192, 823,1303,1288,1236,2861,3542,4148,3435, 774,3938, # 5088 +5722,1581,4786,1304,2862,3939,4787,5723,2440,2162,1083,3268,4427,4149,4428, 344, # 5104 +1173, 288,2316, 454,1683,5724,5725,1461,4788,4150,2597,5726,5727,4789, 985, 894, # 5120 +5728,3436,3193,5729,1914,2942,3795,1989,5730,2111,1975,5731,4151,5732,2579,1194, # 5136 + 425,5733,4790,3194,1245,3796,4429,5734,5735,2863,5736, 636,4791,1856,3940, 760, # 5152 +1800,5737,4430,2212,1508,4792,4152,1894,1684,2298,5738,5739,4793,4431,4432,2213, # 5168 + 479,5740,5741, 832,5742,4153,2496,5743,2980,2497,3797, 990,3132, 627,1815,2652, # 5184 +4433,1582,4434,2126,2112,3543,4794,5744, 799,4435,3195,5745,4795,2113,1737,3031, # 5200 +1018, 543, 754,4436,3342,1676,4796,4797,4154,4798,1489,5746,3544,5747,2624,2903, # 5216 +4155,5748,5749,2981,5750,5751,5752,5753,3196,4799,4800,2185,1722,5754,3269,3270, # 5232 +1843,3665,1715, 481, 365,1976,1857,5755,5756,1963,2498,4801,5757,2127,3666,3271, # 5248 + 433,1895,2064,2076,5758, 602,2750,5759,5760,5761,5762,5763,3032,1628,3437,5764, # 5264 +3197,4802,4156,2904,4803,2519,5765,2551,2782,5766,5767,5768,3343,4804,2905,5769, # 5280 +4805,5770,2864,4806,4807,1221,2982,4157,2520,5771,5772,5773,1868,1990,5774,5775, # 5296 +5776,1896,5777,5778,4808,1897,4158, 318,5779,2095,4159,4437,5780,5781, 485,5782, # 5312 + 938,3941, 553,2680, 116,5783,3942,3667,5784,3545,2681,2783,3438,3344,2820,5785, # 5328 +3668,2943,4160,1747,2944,2983,5786,5787, 207,5788,4809,5789,4810,2521,5790,3033, # 5344 + 890,3669,3943,5791,1878,3798,3439,5792,2186,2358,3440,1652,5793,5794,5795, 941, # 5360 +2299, 208,3546,4161,2020, 330,4438,3944,2906,2499,3799,4439,4811,5796,5797,5798, # 5376 +) + diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/big5prober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/big5prober.py new file mode 100644 index 0000000..98f9970 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/big5prober.py @@ -0,0 +1,47 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. 
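# --- Illustration, not part of the vendored files: the ratios quoted in the
# big5freq.py header above, recomputed. The header says the 512 most frequent
# Big5 characters cover 74.851% of typical text, versus 512/5401 under a
# uniform (random) distribution.
coverage_512 = 0.74851
ideal_ratio = coverage_512 / (1 - coverage_512)        # ≈ 2.98 for real text
random_ratio = 512 / (5401 - 512)                      # ≈ 0.105 for random bytes
print(round(ideal_ratio, 2), round(random_ratio, 3))   # 2.98 0.105
# BIG5_TYPICAL_DISTRIBUTION_RATIO = 0.75 sits at roughly a quarter of the ideal
# value but far above the random baseline, so the test still discriminates.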
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import Big5DistributionAnalysis +from .mbcssm import BIG5_SM_MODEL + + +class Big5Prober(MultiByteCharSetProber): + def __init__(self): + super(Big5Prober, self).__init__() + self.coding_sm = CodingStateMachine(BIG5_SM_MODEL) + self.distribution_analyzer = Big5DistributionAnalysis() + self.reset() + + @property + def charset_name(self): + return "Big5" + + @property + def language(self): + return "Chinese" diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/chardistribution.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/chardistribution.py new file mode 100644 index 0000000..c0395f4 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/chardistribution.py @@ -0,0 +1,233 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .euctwfreq import (EUCTW_CHAR_TO_FREQ_ORDER, EUCTW_TABLE_SIZE, + EUCTW_TYPICAL_DISTRIBUTION_RATIO) +from .euckrfreq import (EUCKR_CHAR_TO_FREQ_ORDER, EUCKR_TABLE_SIZE, + EUCKR_TYPICAL_DISTRIBUTION_RATIO) +from .gb2312freq import (GB2312_CHAR_TO_FREQ_ORDER, GB2312_TABLE_SIZE, + GB2312_TYPICAL_DISTRIBUTION_RATIO) +from .big5freq import (BIG5_CHAR_TO_FREQ_ORDER, BIG5_TABLE_SIZE, + BIG5_TYPICAL_DISTRIBUTION_RATIO) +from .jisfreq import (JIS_CHAR_TO_FREQ_ORDER, JIS_TABLE_SIZE, + JIS_TYPICAL_DISTRIBUTION_RATIO) + + +class CharDistributionAnalysis(object): + ENOUGH_DATA_THRESHOLD = 1024 + SURE_YES = 0.99 + SURE_NO = 0.01 + MINIMUM_DATA_THRESHOLD = 3 + + def __init__(self): + # Mapping table to get frequency order from char order (get from + # GetOrder()) + self._char_to_freq_order = None + self._table_size = None # Size of above table + # This is a constant value which varies from language to language, + # used in calculating confidence. See + # http://www.mozilla.org/projects/intl/UniversalCharsetDetection.html + # for further detail. + self.typical_distribution_ratio = None + self._done = None + self._total_chars = None + self._freq_chars = None + self.reset() + + def reset(self): + """reset analyser, clear any state""" + # If this flag is set to True, detection is done and conclusion has + # been made + self._done = False + self._total_chars = 0 # Total characters encountered + # The number of characters whose frequency order is less than 512 + self._freq_chars = 0 + + def feed(self, char, char_len): + """feed a character with known length""" + if char_len == 2: + # we only care about 2-bytes character in our distribution analysis + order = self.get_order(char) + else: + order = -1 + if order >= 0: + self._total_chars += 1 + # order is valid + if order < self._table_size: + if 512 > self._char_to_freq_order[order]: + self._freq_chars += 1 + + def get_confidence(self): + """return confidence based on existing data""" + # if we didn't receive any character in our consideration range, + # return negative answer + if self._total_chars <= 0 or self._freq_chars <= self.MINIMUM_DATA_THRESHOLD: + return self.SURE_NO + + if self._total_chars != self._freq_chars: + r = (self._freq_chars / ((self._total_chars - self._freq_chars) + * self.typical_distribution_ratio)) + if r < self.SURE_YES: + return r + + # normalize confidence (we don't want to be 100% sure) + return self.SURE_YES + + def got_enough_data(self): + # It is not necessary to receive all data to draw conclusion. + # For charset detection, certain amount of data is enough + return self._total_chars > self.ENOUGH_DATA_THRESHOLD + + def get_order(self, byte_str): + # We do not handle characters based on the original encoding string, + # but convert this encoding string to a number, here called order. + # This allows multiple encodings of a language to share one frequency + # table. 
+ return -1 + + +class EUCTWDistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(EUCTWDistributionAnalysis, self).__init__() + self._char_to_freq_order = EUCTW_CHAR_TO_FREQ_ORDER + self._table_size = EUCTW_TABLE_SIZE + self.typical_distribution_ratio = EUCTW_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for euc-TW encoding, we are interested + # first byte range: 0xc4 -- 0xfe + # second byte range: 0xa1 -- 0xfe + # no validation needed here. State machine has done that + first_char = byte_str[0] + if first_char >= 0xC4: + return 94 * (first_char - 0xC4) + byte_str[1] - 0xA1 + else: + return -1 + + +class EUCKRDistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(EUCKRDistributionAnalysis, self).__init__() + self._char_to_freq_order = EUCKR_CHAR_TO_FREQ_ORDER + self._table_size = EUCKR_TABLE_SIZE + self.typical_distribution_ratio = EUCKR_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for euc-KR encoding, we are interested + # first byte range: 0xb0 -- 0xfe + # second byte range: 0xa1 -- 0xfe + # no validation needed here. State machine has done that + first_char = byte_str[0] + if first_char >= 0xB0: + return 94 * (first_char - 0xB0) + byte_str[1] - 0xA1 + else: + return -1 + + +class GB2312DistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(GB2312DistributionAnalysis, self).__init__() + self._char_to_freq_order = GB2312_CHAR_TO_FREQ_ORDER + self._table_size = GB2312_TABLE_SIZE + self.typical_distribution_ratio = GB2312_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for GB2312 encoding, we are interested + # first byte range: 0xb0 -- 0xfe + # second byte range: 0xa1 -- 0xfe + # no validation needed here. State machine has done that + first_char, second_char = byte_str[0], byte_str[1] + if (first_char >= 0xB0) and (second_char >= 0xA1): + return 94 * (first_char - 0xB0) + second_char - 0xA1 + else: + return -1 + + +class Big5DistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(Big5DistributionAnalysis, self).__init__() + self._char_to_freq_order = BIG5_CHAR_TO_FREQ_ORDER + self._table_size = BIG5_TABLE_SIZE + self.typical_distribution_ratio = BIG5_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for big5 encoding, we are interested + # first byte range: 0xa4 -- 0xfe + # second byte range: 0x40 -- 0x7e , 0xa1 -- 0xfe + # no validation needed here. State machine has done that + first_char, second_char = byte_str[0], byte_str[1] + if first_char >= 0xA4: + if second_char >= 0xA1: + return 157 * (first_char - 0xA4) + second_char - 0xA1 + 63 + else: + return 157 * (first_char - 0xA4) + second_char - 0x40 + else: + return -1 + + +class SJISDistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(SJISDistributionAnalysis, self).__init__() + self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER + self._table_size = JIS_TABLE_SIZE + self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for sjis encoding, we are interested + # first byte range: 0x81 -- 0x9f , 0xe0 -- 0xfe + # second byte range: 0x40 -- 0x7e, 0x81 -- 0xfe + # no validation needed here.
State machine has done that + first_char, second_char = byte_str[0], byte_str[1] + if (first_char >= 0x81) and (first_char <= 0x9F): + order = 188 * (first_char - 0x81) + elif (first_char >= 0xE0) and (first_char <= 0xEF): + order = 188 * (first_char - 0xE0 + 31) + else: + return -1 + order = order + second_char - 0x40 + if second_char > 0x7F: + order = -1 + return order + + +class EUCJPDistributionAnalysis(CharDistributionAnalysis): + def __init__(self): + super(EUCJPDistributionAnalysis, self).__init__() + self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER + self._table_size = JIS_TABLE_SIZE + self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO + + def get_order(self, byte_str): + # for euc-JP encoding, we are interested + # first byte range: 0xa0 -- 0xfe + # second byte range: 0xa1 -- 0xfe + # no validation needed here. State machine has done that + char = byte_str[0] + if char >= 0xA0: + return 94 * (char - 0xA1) + byte_str[1] - 0xa1 + else: + return -1 diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/charsetgroupprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/charsetgroupprober.py new file mode 100644 index 0000000..5812cef --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/charsetgroupprober.py @@ -0,0 +1,107 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
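# --- Illustration, not part of the vendored files: the two steps of
# chardistribution.py above, re-implemented standalone. get_order() turns a
# two-byte character into an index into the frequency table; get_confidence()
# compares the share of "frequent" characters (frequency order < 512) against
# the language's typical distribution ratio.
def big5_order(first, second):
    # 157 cells per lead byte: trail bytes 0x40-0x7e (63 cells), then 0xa1-0xfe (94 cells)
    if first < 0xA4:
        return -1
    if second >= 0xA1:
        return 157 * (first - 0xA4) + second - 0xA1 + 63
    return 157 * (first - 0xA4) + second - 0x40

print(big5_order(0xA4, 0x40))   # 0   -> very first cell of the table
print(big5_order(0xA5, 0xA1))   # 220 -> 157 * 1 + 0 + 63

def confidence(freq_chars, total_chars, typical_ratio=0.75, minimum_data=3):
    # mirrors CharDistributionAnalysis.get_confidence() above
    if total_chars <= 0 or freq_chars <= minimum_data:
        return 0.01                                   # SURE_NO
    if total_chars != freq_chars:
        r = freq_chars / ((total_chars - freq_chars) * typical_ratio)
        if r < 0.99:
            return r
    return 0.99                                       # SURE_YES

print(round(confidence(40, 100), 3))   # 0.889 -> plausible but not conclusive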
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .enums import ProbingState +from .charsetprober import CharSetProber + + +class CharSetGroupProber(CharSetProber): + def __init__(self, lang_filter=None): + super(CharSetGroupProber, self).__init__(lang_filter=lang_filter) + self._active_num = 0 + self.probers = [] + self._best_guess_prober = None + + def reset(self): + super(CharSetGroupProber, self).reset() + self._active_num = 0 + for prober in self.probers: + if prober: + prober.reset() + prober.active = True + self._active_num += 1 + self._best_guess_prober = None + + @property + def charset_name(self): + if not self._best_guess_prober: + self.get_confidence() + if not self._best_guess_prober: + return None + return self._best_guess_prober.charset_name + + @property + def language(self): + if not self._best_guess_prober: + self.get_confidence() + if not self._best_guess_prober: + return None + return self._best_guess_prober.language + + def feed(self, byte_str): + for prober in self.probers: + if not prober: + continue + if not prober.active: + continue + state = prober.feed(byte_str) + if not state: + continue + if state == ProbingState.FOUND_IT: + self._best_guess_prober = prober + self._state = ProbingState.FOUND_IT + return self.state + elif state == ProbingState.NOT_ME: + prober.active = False + self._active_num -= 1 + if self._active_num <= 0: + self._state = ProbingState.NOT_ME + return self.state + return self.state + + def get_confidence(self): + state = self.state + if state == ProbingState.FOUND_IT: + return 0.99 + elif state == ProbingState.NOT_ME: + return 0.01 + best_conf = 0.0 + self._best_guess_prober = None + for prober in self.probers: + if not prober: + continue + if not prober.active: + self.logger.debug('%s not active', prober.charset_name) + continue + conf = prober.get_confidence() + self.logger.debug('%s %s confidence = %s', prober.charset_name, prober.language, conf) + if best_conf < conf: + best_conf = conf + self._best_guess_prober = prober + if not self._best_guess_prober: + return 0.0 + return best_conf diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/charsetprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/charsetprober.py new file mode 100644 index 0000000..eac4e59 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/charsetprober.py @@ -0,0 +1,145 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
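# --- Illustration, not part of the vendored files: the elimination logic of
# CharSetGroupProber above, reduced to toy probers. NOT_ME deactivates a child,
# and the surviving prober with the highest confidence becomes the best guess.
DETECTING, FOUND_IT, NOT_ME = 0, 1, 2

class ToyProber:
    def __init__(self, name, conf):
        self.charset_name, self.conf, self.active = name, conf, True
    def feed(self, byte_str):
        return NOT_ME if self.conf == 0.0 else DETECTING
    def get_confidence(self):
        return self.conf

def best_guess(probers, byte_str):
    for prober in probers:
        if prober.active and prober.feed(byte_str) == NOT_ME:
            prober.active = False                  # excluded from consideration from here on
    live = [p for p in probers if p.active]
    return max(live, key=lambda p: p.get_confidence()).charset_name if live else None

probers = [ToyProber('Big5', 0.4), ToyProber('EUC-KR', 0.0), ToyProber('UTF-8', 0.7)]
print(best_guess(probers, b'\xe4\xbd\xa0'))        # UTF-8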
See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +import logging +import re + +from .enums import ProbingState + + +class CharSetProber(object): + + SHORTCUT_THRESHOLD = 0.95 + + def __init__(self, lang_filter=None): + self._state = None + self.lang_filter = lang_filter + self.logger = logging.getLogger(__name__) + + def reset(self): + self._state = ProbingState.DETECTING + + @property + def charset_name(self): + return None + + def feed(self, buf): + pass + + @property + def state(self): + return self._state + + def get_confidence(self): + return 0.0 + + @staticmethod + def filter_high_byte_only(buf): + buf = re.sub(b'([\x00-\x7F])+', b' ', buf) + return buf + + @staticmethod + def filter_international_words(buf): + """ + We define three types of bytes: + alphabet: english alphabets [a-zA-Z] + international: international characters [\x80-\xFF] + marker: everything else [^a-zA-Z\x80-\xFF] + + The input buffer can be thought to contain a series of words delimited + by markers. This function works to filter all words that contain at + least one international character. All contiguous sequences of markers + are replaced by a single space ascii character. + + This filter applies to all scripts which do not use English characters. + """ + filtered = bytearray() + + # This regex expression filters out only words that have at-least one + # international character. The word may include one marker character at + # the end. + words = re.findall(b'[a-zA-Z]*[\x80-\xFF]+[a-zA-Z]*[^a-zA-Z\x80-\xFF]?', + buf) + + for word in words: + filtered.extend(word[:-1]) + + # If the last character in the word is a marker, replace it with a + # space as markers shouldn't affect our analysis (they are used + # similarly across all languages and may thus have similar + # frequencies). + last_char = word[-1:] + if not last_char.isalpha() and last_char < b'\x80': + last_char = b' ' + filtered.extend(last_char) + + return filtered + + @staticmethod + def filter_with_english_letters(buf): + """ + Returns a copy of ``buf`` that retains only the sequences of English + alphabet and high byte characters that are not between <> characters. + Also retains English alphabet and high byte characters immediately + before occurrences of >. + + This filter can be applied to all scripts which contain both English + characters and extended ASCII characters, but is currently only used by + ``Latin1Prober``. + """ + filtered = bytearray() + in_tag = False + prev = 0 + + for curr in range(len(buf)): + # Slice here to get bytes instead of an int with Python 3 + buf_char = buf[curr:curr + 1] + # Check if we're coming out of or entering an HTML tag + if buf_char == b'>': + in_tag = False + elif buf_char == b'<': + in_tag = True + + # If current character is not extended-ASCII and not alphabetic... + if buf_char < b'\x80' and not buf_char.isalpha(): + # ...and we're not in a tag + if curr > prev and not in_tag: + # Keep everything after last non-extended-ASCII, + # non-alphabetic character + filtered.extend(buf[prev:curr]) + # Output a space to delimit stretch we kept + filtered.extend(b' ') + prev = curr + 1 + + # If we're not in a tag... 
+ if not in_tag: + # Keep everything after last non-extended-ASCII, non-alphabetic + # character + filtered.extend(buf[prev:]) + + return filtered diff --git a/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/dependency_links.txt b/python/lib/python3.10/site-packages/pip/_vendor/chardet/cli/__init__.py similarity index 100% rename from lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/dependency_links.txt rename to python/lib/python3.10/site-packages/pip/_vendor/chardet/cli/__init__.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/cli/chardetect.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/cli/chardetect.py new file mode 100644 index 0000000..6d6f93a --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/cli/chardetect.py @@ -0,0 +1,84 @@ +""" +Script which takes one or more file paths and reports on their detected +encodings + +Example:: + + % chardetect somefile someotherfile + somefile: windows-1252 with confidence 0.5 + someotherfile: ascii with confidence 1.0 + +If no paths are provided, it takes its input from stdin. + +""" + +from __future__ import absolute_import, print_function, unicode_literals + +import argparse +import sys + +from pip._vendor.chardet import __version__ +from pip._vendor.chardet.compat import PY2 +from pip._vendor.chardet.universaldetector import UniversalDetector + + +def description_of(lines, name='stdin'): + """ + Return a string describing the probable encoding of a file or + list of strings. + + :param lines: The lines to get the encoding of. + :type lines: Iterable of bytes + :param name: Name of file or collection of lines + :type name: str + """ + u = UniversalDetector() + for line in lines: + line = bytearray(line) + u.feed(line) + # shortcut out of the loop to save reading further - particularly useful if we read a BOM. + if u.done: + break + u.close() + result = u.result + if PY2: + name = name.decode(sys.getfilesystemencoding(), 'ignore') + if result['encoding']: + return '{}: {} with confidence {}'.format(name, result['encoding'], + result['confidence']) + else: + return '{}: no result'.format(name) + + +def main(argv=None): + """ + Handles command line arguments and gets things started. + + :param argv: List of arguments, as if specified on the command-line. + If None, ``sys.argv[1:]`` is used instead. + :type argv: list of str + """ + # Get command line arguments + parser = argparse.ArgumentParser( + description="Takes one or more file paths and reports their detected \ + encodings") + parser.add_argument('input', + help='File whose encoding we would like to determine. \ + (default: stdin)', + type=argparse.FileType('rb'), nargs='*', + default=[sys.stdin if PY2 else sys.stdin.buffer]) + parser.add_argument('--version', action='version', + version='%(prog)s {}'.format(__version__)) + args = parser.parse_args(argv) + + for f in args.input: + if f.isatty(): + print("You are running chardetect interactively. Press " + + "CTRL-D twice at the start of a blank line to signal the " + + "end of your input. 
If you want help, run chardetect " + + "--help\n", file=sys.stderr) + print(description_of(f, f.name)) + + +if __name__ == '__main__': + main() diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/codingstatemachine.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/codingstatemachine.py new file mode 100644 index 0000000..68fba44 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/codingstatemachine.py @@ -0,0 +1,88 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +import logging + +from .enums import MachineState + + +class CodingStateMachine(object): + """ + A state machine to verify a byte sequence for a particular encoding. For + each byte the detector receives, it will feed that byte to every active + state machine available, one byte at a time. The state machine changes its + state based on its previous state and the byte it receives. There are 3 + states in a state machine that are of interest to an auto-detector: + + START state: This is the state to start with, or a legal byte sequence + (i.e. a valid code point) for a character has been identified. + + ME state: This indicates that the state machine identified a byte sequence + that is specific to the charset it is designed for and that + there is no other possible encoding which can contain this byte + sequence. This will lead to an immediate positive answer for + the detector. + + ERROR state: This indicates the state machine identified an illegal byte + sequence for that encoding. This will lead to an immediate + negative answer for this encoding. Detector will exclude this + encoding from consideration from here on.
+ """ + def __init__(self, sm): + self._model = sm + self._curr_byte_pos = 0 + self._curr_char_len = 0 + self._curr_state = None + self.logger = logging.getLogger(__name__) + self.reset() + + def reset(self): + self._curr_state = MachineState.START + + def next_state(self, c): + # for each byte we get its class + # if it is first byte, we also get byte length + byte_class = self._model['class_table'][c] + if self._curr_state == MachineState.START: + self._curr_byte_pos = 0 + self._curr_char_len = self._model['char_len_table'][byte_class] + # from byte's class and state_table, we get its next state + curr_state = (self._curr_state * self._model['class_factor'] + + byte_class) + self._curr_state = self._model['state_table'][curr_state] + self._curr_byte_pos += 1 + return self._curr_state + + def get_current_charlen(self): + return self._curr_char_len + + def get_coding_state_machine(self): + return self._model['name'] + + @property + def language(self): + return self._model['language'] diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/compat.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/compat.py new file mode 100644 index 0000000..8941572 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/compat.py @@ -0,0 +1,36 @@ +######################## BEGIN LICENSE BLOCK ######################## +# Contributor(s): +# Dan Blanchard +# Ian Cordasco +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +import sys + + +if sys.version_info < (3, 0): + PY2 = True + PY3 = False + string_types = (str, unicode) + text_type = unicode + iteritems = dict.iteritems +else: + PY2 = False + PY3 = True + string_types = (bytes, str) + text_type = str + iteritems = dict.items diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/cp949prober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/cp949prober.py new file mode 100644 index 0000000..efd793a --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/cp949prober.py @@ -0,0 +1,49 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. 
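# --- Illustration, not part of the vendored files: next_state() above is one
# table lookup per byte, new_state = state_table[state * class_factor + byte_class].
# Toy model (not a real chardet SM_MODEL): state 0 = START, state 1 = awaiting
# the trail byte of a two-byte character, state 2 = ERROR.
TOY_MODEL = {
    'class_table': [0] * 0x80 + [1] * 0x80,   # class 0: ASCII byte, class 1: high byte
    'class_factor': 2,
    'state_table': [
        0, 1,    # from START: ASCII stays put, a high byte opens a pair
        2, 0,    # mid-pair:   ASCII is illegal, a high byte closes the pair
    ],
}

def step(model, state, byte):
    return model['state_table'][state * model['class_factor'] + model['class_table'][byte]]

state = 0
for byte in b'\xa4\xa1A':
    state = step(TOY_MODEL, state, byte)
print(state)   # 0 -> the sequence parsed cleanly; 2 would have meant NOT_ME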
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .chardistribution import EUCKRDistributionAnalysis +from .codingstatemachine import CodingStateMachine +from .mbcharsetprober import MultiByteCharSetProber +from .mbcssm import CP949_SM_MODEL + + +class CP949Prober(MultiByteCharSetProber): + def __init__(self): + super(CP949Prober, self).__init__() + self.coding_sm = CodingStateMachine(CP949_SM_MODEL) + # NOTE: CP949 is a superset of EUC-KR, so the distribution should be + # not different. + self.distribution_analyzer = EUCKRDistributionAnalysis() + self.reset() + + @property + def charset_name(self): + return "CP949" + + @property + def language(self): + return "Korean" diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/enums.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/enums.py new file mode 100644 index 0000000..0451207 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/enums.py @@ -0,0 +1,76 @@ +""" +All of the Enums that are used throughout the chardet package. + +:author: Dan Blanchard (dan.blanchard@gmail.com) +""" + + +class InputState(object): + """ + This enum represents the different states a universal detector can be in. + """ + PURE_ASCII = 0 + ESC_ASCII = 1 + HIGH_BYTE = 2 + + +class LanguageFilter(object): + """ + This enum represents the different language filters we can apply to a + ``UniversalDetector``. + """ + CHINESE_SIMPLIFIED = 0x01 + CHINESE_TRADITIONAL = 0x02 + JAPANESE = 0x04 + KOREAN = 0x08 + NON_CJK = 0x10 + ALL = 0x1F + CHINESE = CHINESE_SIMPLIFIED | CHINESE_TRADITIONAL + CJK = CHINESE | JAPANESE | KOREAN + + +class ProbingState(object): + """ + This enum represents the different states a prober can be in. + """ + DETECTING = 0 + FOUND_IT = 1 + NOT_ME = 2 + + +class MachineState(object): + """ + This enum represents the different states a state machine can be in. + """ + START = 0 + ERROR = 1 + ITS_ME = 2 + + +class SequenceLikelihood(object): + """ + This enum represents the likelihood of a character following the previous one. + """ + NEGATIVE = 0 + UNLIKELY = 1 + LIKELY = 2 + POSITIVE = 3 + + @classmethod + def get_num_categories(cls): + """:returns: The number of likelihood categories in the enum.""" + return 4 + + +class CharacterCategory(object): + """ + This enum represents the different categories language models for + ``SingleByteCharsetProber`` put characters into. + + Anything less than CONTROL is considered a letter. + """ + UNDEFINED = 255 + LINE_BREAK = 254 + SYMBOL = 253 + DIGIT = 252 + CONTROL = 251 diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/escprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/escprober.py new file mode 100644 index 0000000..c70493f --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/escprober.py @@ -0,0 +1,101 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. 
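# --- Illustration, not part of the vendored files: the LanguageFilter values in
# enums.py above are bit flags, OR-ed together and tested with bitwise AND, which
# is how the EscCharSetProber that follows decides which escape-sequence state
# machines to build.
from pip._vendor.chardet.enums import LanguageFilter

lang_filter = LanguageFilter.CHINESE | LanguageFilter.KOREAN
print(bool(lang_filter & LanguageFilter.JAPANESE))            # False -> skip the ISO-2022-JP machine
print(bool(lang_filter & LanguageFilter.CHINESE_SIMPLIFIED))  # True  -> keep the HZ-GB-2312 machine

# The escape-sequence ("code scheme") path in action: ISO-2022-JP text opens
# with ESC $ B, which the prober recognizes outright.
from pip._vendor import chardet
sample = 'こんにちは'.encode('iso2022_jp')
print(sample[:3])               # b'\x1b$B'
print(chardet.detect(sample))   # should report ISO-2022-JP with high confidence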
+# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetprober import CharSetProber +from .codingstatemachine import CodingStateMachine +from .enums import LanguageFilter, ProbingState, MachineState +from .escsm import (HZ_SM_MODEL, ISO2022CN_SM_MODEL, ISO2022JP_SM_MODEL, + ISO2022KR_SM_MODEL) + + +class EscCharSetProber(CharSetProber): + """ + This CharSetProber uses a "code scheme" approach for detecting encodings, + whereby easily recognizable escape or shift sequences are relied on to + identify these encodings. + """ + + def __init__(self, lang_filter=None): + super(EscCharSetProber, self).__init__(lang_filter=lang_filter) + self.coding_sm = [] + if self.lang_filter & LanguageFilter.CHINESE_SIMPLIFIED: + self.coding_sm.append(CodingStateMachine(HZ_SM_MODEL)) + self.coding_sm.append(CodingStateMachine(ISO2022CN_SM_MODEL)) + if self.lang_filter & LanguageFilter.JAPANESE: + self.coding_sm.append(CodingStateMachine(ISO2022JP_SM_MODEL)) + if self.lang_filter & LanguageFilter.KOREAN: + self.coding_sm.append(CodingStateMachine(ISO2022KR_SM_MODEL)) + self.active_sm_count = None + self._detected_charset = None + self._detected_language = None + self._state = None + self.reset() + + def reset(self): + super(EscCharSetProber, self).reset() + for coding_sm in self.coding_sm: + if not coding_sm: + continue + coding_sm.active = True + coding_sm.reset() + self.active_sm_count = len(self.coding_sm) + self._detected_charset = None + self._detected_language = None + + @property + def charset_name(self): + return self._detected_charset + + @property + def language(self): + return self._detected_language + + def get_confidence(self): + if self._detected_charset: + return 0.99 + else: + return 0.00 + + def feed(self, byte_str): + for c in byte_str: + for coding_sm in self.coding_sm: + if not coding_sm or not coding_sm.active: + continue + coding_state = coding_sm.next_state(c) + if coding_state == MachineState.ERROR: + coding_sm.active = False + self.active_sm_count -= 1 + if self.active_sm_count <= 0: + self._state = ProbingState.NOT_ME + return self.state + elif coding_state == MachineState.ITS_ME: + self._state = ProbingState.FOUND_IT + self._detected_charset = coding_sm.get_coding_state_machine() + self._detected_language = coding_sm.language + return self.state + + return self.state diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/escsm.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/escsm.py new file mode 100644 index 0000000..0069523 --- /dev/null +++ 
b/python/lib/python3.10/site-packages/pip/_vendor/chardet/escsm.py @@ -0,0 +1,246 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .enums import MachineState + +HZ_CLS = ( +1,0,0,0,0,0,0,0, # 00 - 07 +0,0,0,0,0,0,0,0, # 08 - 0f +0,0,0,0,0,0,0,0, # 10 - 17 +0,0,0,1,0,0,0,0, # 18 - 1f +0,0,0,0,0,0,0,0, # 20 - 27 +0,0,0,0,0,0,0,0, # 28 - 2f +0,0,0,0,0,0,0,0, # 30 - 37 +0,0,0,0,0,0,0,0, # 38 - 3f +0,0,0,0,0,0,0,0, # 40 - 47 +0,0,0,0,0,0,0,0, # 48 - 4f +0,0,0,0,0,0,0,0, # 50 - 57 +0,0,0,0,0,0,0,0, # 58 - 5f +0,0,0,0,0,0,0,0, # 60 - 67 +0,0,0,0,0,0,0,0, # 68 - 6f +0,0,0,0,0,0,0,0, # 70 - 77 +0,0,0,4,0,5,2,0, # 78 - 7f +1,1,1,1,1,1,1,1, # 80 - 87 +1,1,1,1,1,1,1,1, # 88 - 8f +1,1,1,1,1,1,1,1, # 90 - 97 +1,1,1,1,1,1,1,1, # 98 - 9f +1,1,1,1,1,1,1,1, # a0 - a7 +1,1,1,1,1,1,1,1, # a8 - af +1,1,1,1,1,1,1,1, # b0 - b7 +1,1,1,1,1,1,1,1, # b8 - bf +1,1,1,1,1,1,1,1, # c0 - c7 +1,1,1,1,1,1,1,1, # c8 - cf +1,1,1,1,1,1,1,1, # d0 - d7 +1,1,1,1,1,1,1,1, # d8 - df +1,1,1,1,1,1,1,1, # e0 - e7 +1,1,1,1,1,1,1,1, # e8 - ef +1,1,1,1,1,1,1,1, # f0 - f7 +1,1,1,1,1,1,1,1, # f8 - ff +) + +HZ_ST = ( +MachineState.START,MachineState.ERROR, 3,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,# 00-07 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 08-0f +MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START, 4,MachineState.ERROR,# 10-17 + 5,MachineState.ERROR, 6,MachineState.ERROR, 5, 5, 4,MachineState.ERROR,# 18-1f + 4,MachineState.ERROR, 4, 4, 4,MachineState.ERROR, 4,MachineState.ERROR,# 20-27 + 4,MachineState.ITS_ME,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 28-2f +) + +HZ_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0) + +HZ_SM_MODEL = {'class_table': HZ_CLS, + 'class_factor': 6, + 'state_table': HZ_ST, + 'char_len_table': HZ_CHAR_LEN_TABLE, + 'name': "HZ-GB-2312", + 'language': 'Chinese'} + +ISO2022CN_CLS = ( +2,0,0,0,0,0,0,0, # 00 - 07 +0,0,0,0,0,0,0,0, # 08 - 0f +0,0,0,0,0,0,0,0, # 10 - 17 +0,0,0,1,0,0,0,0, # 18 - 1f +0,0,0,0,0,0,0,0, # 20 - 27 +0,3,0,0,0,0,0,0, # 28 - 2f +0,0,0,0,0,0,0,0, # 30 - 37 +0,0,0,0,0,0,0,0, # 38 - 3f +0,0,0,4,0,0,0,0, # 40 - 47 +0,0,0,0,0,0,0,0, # 48 - 4f +0,0,0,0,0,0,0,0, # 50 - 57 
+0,0,0,0,0,0,0,0, # 58 - 5f +0,0,0,0,0,0,0,0, # 60 - 67 +0,0,0,0,0,0,0,0, # 68 - 6f +0,0,0,0,0,0,0,0, # 70 - 77 +0,0,0,0,0,0,0,0, # 78 - 7f +2,2,2,2,2,2,2,2, # 80 - 87 +2,2,2,2,2,2,2,2, # 88 - 8f +2,2,2,2,2,2,2,2, # 90 - 97 +2,2,2,2,2,2,2,2, # 98 - 9f +2,2,2,2,2,2,2,2, # a0 - a7 +2,2,2,2,2,2,2,2, # a8 - af +2,2,2,2,2,2,2,2, # b0 - b7 +2,2,2,2,2,2,2,2, # b8 - bf +2,2,2,2,2,2,2,2, # c0 - c7 +2,2,2,2,2,2,2,2, # c8 - cf +2,2,2,2,2,2,2,2, # d0 - d7 +2,2,2,2,2,2,2,2, # d8 - df +2,2,2,2,2,2,2,2, # e0 - e7 +2,2,2,2,2,2,2,2, # e8 - ef +2,2,2,2,2,2,2,2, # f0 - f7 +2,2,2,2,2,2,2,2, # f8 - ff +) + +ISO2022CN_ST = ( +MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 00-07 +MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 08-0f +MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 10-17 +MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 4,MachineState.ERROR,# 18-1f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 20-27 + 5, 6,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 28-2f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 30-37 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.START,# 38-3f +) + +ISO2022CN_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0, 0, 0, 0) + +ISO2022CN_SM_MODEL = {'class_table': ISO2022CN_CLS, + 'class_factor': 9, + 'state_table': ISO2022CN_ST, + 'char_len_table': ISO2022CN_CHAR_LEN_TABLE, + 'name': "ISO-2022-CN", + 'language': 'Chinese'} + +ISO2022JP_CLS = ( +2,0,0,0,0,0,0,0, # 00 - 07 +0,0,0,0,0,0,2,2, # 08 - 0f +0,0,0,0,0,0,0,0, # 10 - 17 +0,0,0,1,0,0,0,0, # 18 - 1f +0,0,0,0,7,0,0,0, # 20 - 27 +3,0,0,0,0,0,0,0, # 28 - 2f +0,0,0,0,0,0,0,0, # 30 - 37 +0,0,0,0,0,0,0,0, # 38 - 3f +6,0,4,0,8,0,0,0, # 40 - 47 +0,9,5,0,0,0,0,0, # 48 - 4f +0,0,0,0,0,0,0,0, # 50 - 57 +0,0,0,0,0,0,0,0, # 58 - 5f +0,0,0,0,0,0,0,0, # 60 - 67 +0,0,0,0,0,0,0,0, # 68 - 6f +0,0,0,0,0,0,0,0, # 70 - 77 +0,0,0,0,0,0,0,0, # 78 - 7f +2,2,2,2,2,2,2,2, # 80 - 87 +2,2,2,2,2,2,2,2, # 88 - 8f +2,2,2,2,2,2,2,2, # 90 - 97 +2,2,2,2,2,2,2,2, # 98 - 9f +2,2,2,2,2,2,2,2, # a0 - a7 +2,2,2,2,2,2,2,2, # a8 - af +2,2,2,2,2,2,2,2, # b0 - b7 +2,2,2,2,2,2,2,2, # b8 - bf +2,2,2,2,2,2,2,2, # c0 - c7 +2,2,2,2,2,2,2,2, # c8 - cf +2,2,2,2,2,2,2,2, # d0 - d7 +2,2,2,2,2,2,2,2, # d8 - df +2,2,2,2,2,2,2,2, # e0 - e7 +2,2,2,2,2,2,2,2, # e8 - ef +2,2,2,2,2,2,2,2, # f0 - f7 +2,2,2,2,2,2,2,2, # f8 - ff +) + +ISO2022JP_ST = ( +MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 00-07 +MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 08-0f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 10-17 
+MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,# 18-1f +MachineState.ERROR, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 4,MachineState.ERROR,MachineState.ERROR,# 20-27 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 6,MachineState.ITS_ME,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,# 28-2f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,# 30-37 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 38-3f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,MachineState.START,MachineState.START,# 40-47 +) + +ISO2022JP_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0) + +ISO2022JP_SM_MODEL = {'class_table': ISO2022JP_CLS, + 'class_factor': 10, + 'state_table': ISO2022JP_ST, + 'char_len_table': ISO2022JP_CHAR_LEN_TABLE, + 'name': "ISO-2022-JP", + 'language': 'Japanese'} + +ISO2022KR_CLS = ( +2,0,0,0,0,0,0,0, # 00 - 07 +0,0,0,0,0,0,0,0, # 08 - 0f +0,0,0,0,0,0,0,0, # 10 - 17 +0,0,0,1,0,0,0,0, # 18 - 1f +0,0,0,0,3,0,0,0, # 20 - 27 +0,4,0,0,0,0,0,0, # 28 - 2f +0,0,0,0,0,0,0,0, # 30 - 37 +0,0,0,0,0,0,0,0, # 38 - 3f +0,0,0,5,0,0,0,0, # 40 - 47 +0,0,0,0,0,0,0,0, # 48 - 4f +0,0,0,0,0,0,0,0, # 50 - 57 +0,0,0,0,0,0,0,0, # 58 - 5f +0,0,0,0,0,0,0,0, # 60 - 67 +0,0,0,0,0,0,0,0, # 68 - 6f +0,0,0,0,0,0,0,0, # 70 - 77 +0,0,0,0,0,0,0,0, # 78 - 7f +2,2,2,2,2,2,2,2, # 80 - 87 +2,2,2,2,2,2,2,2, # 88 - 8f +2,2,2,2,2,2,2,2, # 90 - 97 +2,2,2,2,2,2,2,2, # 98 - 9f +2,2,2,2,2,2,2,2, # a0 - a7 +2,2,2,2,2,2,2,2, # a8 - af +2,2,2,2,2,2,2,2, # b0 - b7 +2,2,2,2,2,2,2,2, # b8 - bf +2,2,2,2,2,2,2,2, # c0 - c7 +2,2,2,2,2,2,2,2, # c8 - cf +2,2,2,2,2,2,2,2, # d0 - d7 +2,2,2,2,2,2,2,2, # d8 - df +2,2,2,2,2,2,2,2, # e0 - e7 +2,2,2,2,2,2,2,2, # e8 - ef +2,2,2,2,2,2,2,2, # f0 - f7 +2,2,2,2,2,2,2,2, # f8 - ff +) + +ISO2022KR_ST = ( +MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,# 00-07 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,# 08-0f +MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 4,MachineState.ERROR,MachineState.ERROR,# 10-17 +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,# 18-1f +MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.START,MachineState.START,MachineState.START,MachineState.START,# 20-27 +) + +ISO2022KR_CHAR_LEN_TABLE = (0, 0, 0, 0, 0, 0) + +ISO2022KR_SM_MODEL = {'class_table': ISO2022KR_CLS, + 'class_factor': 6, + 'state_table': ISO2022KR_ST, + 'char_len_table': ISO2022KR_CHAR_LEN_TABLE, + 'name': "ISO-2022-KR", + 'language': 'Korean'} + + diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/eucjpprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/eucjpprober.py new file mode 100644 index 0000000..20ce8f7 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/eucjpprober.py @@ -0,0 +1,92 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The 
Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .enums import ProbingState, MachineState +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import EUCJPDistributionAnalysis +from .jpcntx import EUCJPContextAnalysis +from .mbcssm import EUCJP_SM_MODEL + + +class EUCJPProber(MultiByteCharSetProber): + def __init__(self): + super(EUCJPProber, self).__init__() + self.coding_sm = CodingStateMachine(EUCJP_SM_MODEL) + self.distribution_analyzer = EUCJPDistributionAnalysis() + self.context_analyzer = EUCJPContextAnalysis() + self.reset() + + def reset(self): + super(EUCJPProber, self).reset() + self.context_analyzer.reset() + + @property + def charset_name(self): + return "EUC-JP" + + @property + def language(self): + return "Japanese" + + def feed(self, byte_str): + for i in range(len(byte_str)): + # PY3K: byte_str is a byte array, so byte_str[i] is an int, not a byte + coding_state = self.coding_sm.next_state(byte_str[i]) + if coding_state == MachineState.ERROR: + self.logger.debug('%s %s prober hit error at byte %s', + self.charset_name, self.language, i) + self._state = ProbingState.NOT_ME + break + elif coding_state == MachineState.ITS_ME: + self._state = ProbingState.FOUND_IT + break + elif coding_state == MachineState.START: + char_len = self.coding_sm.get_current_charlen() + if i == 0: + self._last_char[1] = byte_str[0] + self.context_analyzer.feed(self._last_char, char_len) + self.distribution_analyzer.feed(self._last_char, char_len) + else: + self.context_analyzer.feed(byte_str[i - 1:i + 1], + char_len) + self.distribution_analyzer.feed(byte_str[i - 1:i + 1], + char_len) + + self._last_char[0] = byte_str[-1] + + if self.state == ProbingState.DETECTING: + if (self.context_analyzer.got_enough_data() and + (self.get_confidence() > self.SHORTCUT_THRESHOLD)): + self._state = ProbingState.FOUND_IT + + return self.state + + def get_confidence(self): + context_conf = self.context_analyzer.get_confidence() + distrib_conf = self.distribution_analyzer.get_confidence() + return max(context_conf, distrib_conf) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/euckrfreq.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/euckrfreq.py new file mode 100644 index 0000000..b68078c --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/euckrfreq.py @@ -0,0 +1,195 @@ +######################## BEGIN LICENSE BLOCK 
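
Worked trace tying `EscCharSetProber.feed()` to the `escsm.py` tables above: feeding the escape sequence `ESC $ B` (which switches ISO-2022-JP into JIS X 0208 mode) through `ISO2022JP_SM_MODEL` drives the machine straight to `ITS_ME`, which is exactly what makes the prober return `FOUND_IT`. The `step` helper below re-implements just the core of `CodingStateMachine.next_state` (char-length bookkeeping omitted); it assumes the vendored package is importable as `pip._vendor.chardet`:

```python
from pip._vendor.chardet.enums import MachineState
from pip._vendor.chardet.escsm import ISO2022JP_SM_MODEL

def step(state, byte, model):
    byte_class = model['class_table'][byte]
    return model['state_table'][state * model['class_factor'] + byte_class]

state = MachineState.START
for b in b'\x1b$B':    # ESC(0x1b) -> class 1, '$'(0x24) -> class 7, 'B'(0x42) -> class 4
    state = step(state, b, ISO2022JP_SM_MODEL)
    # states visited: 3, then 4, then MachineState.ITS_ME

assert state == MachineState.ITS_ME
```
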
######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +# Sampling from about 20M text materials include literature and computer technology + +# 128 --> 0.79 +# 256 --> 0.92 +# 512 --> 0.986 +# 1024 --> 0.99944 +# 2048 --> 0.99999 +# +# Idea Distribution Ratio = 0.98653 / (1-0.98653) = 73.24 +# Random Distribution Ration = 512 / (2350-512) = 0.279. +# +# Typical Distribution Ratio + +EUCKR_TYPICAL_DISTRIBUTION_RATIO = 6.0 + +EUCKR_TABLE_SIZE = 2352 + +# Char to FreqOrder table , +EUCKR_CHAR_TO_FREQ_ORDER = ( + 13, 130, 120,1396, 481,1719,1720, 328, 609, 212,1721, 707, 400, 299,1722, 87, +1397,1723, 104, 536,1117,1203,1724,1267, 685,1268, 508,1725,1726,1727,1728,1398, +1399,1729,1730,1731, 141, 621, 326,1057, 368,1732, 267, 488, 20,1733,1269,1734, + 945,1400,1735, 47, 904,1270,1736,1737, 773, 248,1738, 409, 313, 786, 429,1739, + 116, 987, 813,1401, 683, 75,1204, 145,1740,1741,1742,1743, 16, 847, 667, 622, + 708,1744,1745,1746, 966, 787, 304, 129,1747, 60, 820, 123, 676,1748,1749,1750, +1751, 617,1752, 626,1753,1754,1755,1756, 653,1757,1758,1759,1760,1761,1762, 856, + 344,1763,1764,1765,1766, 89, 401, 418, 806, 905, 848,1767,1768,1769, 946,1205, + 709,1770,1118,1771, 241,1772,1773,1774,1271,1775, 569,1776, 999,1777,1778,1779, +1780, 337, 751,1058, 28, 628, 254,1781, 177, 906, 270, 349, 891,1079,1782, 19, +1783, 379,1784, 315,1785, 629, 754,1402, 559,1786, 636, 203,1206,1787, 710, 567, +1788, 935, 814,1789,1790,1207, 766, 528,1791,1792,1208,1793,1794,1795,1796,1797, +1403,1798,1799, 533,1059,1404,1405,1156,1406, 936, 884,1080,1800, 351,1801,1802, +1803,1804,1805, 801,1806,1807,1808,1119,1809,1157, 714, 474,1407,1810, 298, 899, + 885,1811,1120, 802,1158,1812, 892,1813,1814,1408, 659,1815,1816,1121,1817,1818, +1819,1820,1821,1822, 319,1823, 594, 545,1824, 815, 937,1209,1825,1826, 573,1409, +1022,1827,1210,1828,1829,1830,1831,1832,1833, 556, 722, 807,1122,1060,1834, 697, +1835, 900, 557, 715,1836,1410, 540,1411, 752,1159, 294, 597,1211, 976, 803, 770, +1412,1837,1838, 39, 794,1413, 358,1839, 371, 925,1840, 453, 661, 788, 531, 723, + 544,1023,1081, 869, 91,1841, 392, 430, 790, 602,1414, 677,1082, 457,1415,1416, +1842,1843, 475, 327,1024,1417, 795, 121,1844, 733, 403,1418,1845,1846,1847, 300, + 119, 711,1212, 627,1848,1272, 207,1849,1850, 796,1213, 382,1851, 519,1852,1083, + 893,1853,1854,1855, 367, 809, 487, 671,1856, 663,1857,1858, 956, 471, 306, 857, 
+1859,1860,1160,1084,1861,1862,1863,1864,1865,1061,1866,1867,1868,1869,1870,1871, + 282, 96, 574,1872, 502,1085,1873,1214,1874, 907,1875,1876, 827, 977,1419,1420, +1421, 268,1877,1422,1878,1879,1880, 308,1881, 2, 537,1882,1883,1215,1884,1885, + 127, 791,1886,1273,1423,1887, 34, 336, 404, 643,1888, 571, 654, 894, 840,1889, + 0, 886,1274, 122, 575, 260, 908, 938,1890,1275, 410, 316,1891,1892, 100,1893, +1894,1123, 48,1161,1124,1025,1895, 633, 901,1276,1896,1897, 115, 816,1898, 317, +1899, 694,1900, 909, 734,1424, 572, 866,1425, 691, 85, 524,1010, 543, 394, 841, +1901,1902,1903,1026,1904,1905,1906,1907,1908,1909, 30, 451, 651, 988, 310,1910, +1911,1426, 810,1216, 93,1912,1913,1277,1217,1914, 858, 759, 45, 58, 181, 610, + 269,1915,1916, 131,1062, 551, 443,1000, 821,1427, 957, 895,1086,1917,1918, 375, +1919, 359,1920, 687,1921, 822,1922, 293,1923,1924, 40, 662, 118, 692, 29, 939, + 887, 640, 482, 174,1925, 69,1162, 728,1428, 910,1926,1278,1218,1279, 386, 870, + 217, 854,1163, 823,1927,1928,1929,1930, 834,1931, 78,1932, 859,1933,1063,1934, +1935,1936,1937, 438,1164, 208, 595,1938,1939,1940,1941,1219,1125,1942, 280, 888, +1429,1430,1220,1431,1943,1944,1945,1946,1947,1280, 150, 510,1432,1948,1949,1950, +1951,1952,1953,1954,1011,1087,1955,1433,1043,1956, 881,1957, 614, 958,1064,1065, +1221,1958, 638,1001, 860, 967, 896,1434, 989, 492, 553,1281,1165,1959,1282,1002, +1283,1222,1960,1961,1962,1963, 36, 383, 228, 753, 247, 454,1964, 876, 678,1965, +1966,1284, 126, 464, 490, 835, 136, 672, 529, 940,1088,1435, 473,1967,1968, 467, + 50, 390, 227, 587, 279, 378, 598, 792, 968, 240, 151, 160, 849, 882,1126,1285, + 639,1044, 133, 140, 288, 360, 811, 563,1027, 561, 142, 523,1969,1970,1971, 7, + 103, 296, 439, 407, 506, 634, 990,1972,1973,1974,1975, 645,1976,1977,1978,1979, +1980,1981, 236,1982,1436,1983,1984,1089, 192, 828, 618, 518,1166, 333,1127,1985, + 818,1223,1986,1987,1988,1989,1990,1991,1992,1993, 342,1128,1286, 746, 842,1994, +1995, 560, 223,1287, 98, 8, 189, 650, 978,1288,1996,1437,1997, 17, 345, 250, + 423, 277, 234, 512, 226, 97, 289, 42, 167,1998, 201,1999,2000, 843, 836, 824, + 532, 338, 783,1090, 182, 576, 436,1438,1439, 527, 500,2001, 947, 889,2002,2003, +2004,2005, 262, 600, 314, 447,2006, 547,2007, 693, 738,1129,2008, 71,1440, 745, + 619, 688,2009, 829,2010,2011, 147,2012, 33, 948,2013,2014, 74, 224,2015, 61, + 191, 918, 399, 637,2016,1028,1130, 257, 902,2017,2018,2019,2020,2021,2022,2023, +2024,2025,2026, 837,2027,2028,2029,2030, 179, 874, 591, 52, 724, 246,2031,2032, +2033,2034,1167, 969,2035,1289, 630, 605, 911,1091,1168,2036,2037,2038,1441, 912, +2039, 623,2040,2041, 253,1169,1290,2042,1442, 146, 620, 611, 577, 433,2043,1224, + 719,1170, 959, 440, 437, 534, 84, 388, 480,1131, 159, 220, 198, 679,2044,1012, + 819,1066,1443, 113,1225, 194, 318,1003,1029,2045,2046,2047,2048,1067,2049,2050, +2051,2052,2053, 59, 913, 112,2054, 632,2055, 455, 144, 739,1291,2056, 273, 681, + 499,2057, 448,2058,2059, 760,2060,2061, 970, 384, 169, 245,1132,2062,2063, 414, +1444,2064,2065, 41, 235,2066, 157, 252, 877, 568, 919, 789, 580,2067, 725,2068, +2069,1292,2070,2071,1445,2072,1446,2073,2074, 55, 588, 66,1447, 271,1092,2075, +1226,2076, 960,1013, 372,2077,2078,2079,2080,2081,1293,2082,2083,2084,2085, 850, +2086,2087,2088,2089,2090, 186,2091,1068, 180,2092,2093,2094, 109,1227, 522, 606, +2095, 867,1448,1093, 991,1171, 926, 353,1133,2096, 581,2097,2098,2099,1294,1449, +1450,2100, 596,1172,1014,1228,2101,1451,1295,1173,1229,2102,2103,1296,1134,1452, + 
949,1135,2104,2105,1094,1453,1454,1455,2106,1095,2107,2108,2109,2110,2111,2112, +2113,2114,2115,2116,2117, 804,2118,2119,1230,1231, 805,1456, 405,1136,2120,2121, +2122,2123,2124, 720, 701,1297, 992,1457, 927,1004,2125,2126,2127,2128,2129,2130, + 22, 417,2131, 303,2132, 385,2133, 971, 520, 513,2134,1174, 73,1096, 231, 274, + 962,1458, 673,2135,1459,2136, 152,1137,2137,2138,2139,2140,1005,1138,1460,1139, +2141,2142,2143,2144, 11, 374, 844,2145, 154,1232, 46,1461,2146, 838, 830, 721, +1233, 106,2147, 90, 428, 462, 578, 566,1175, 352,2148,2149, 538,1234, 124,1298, +2150,1462, 761, 565,2151, 686,2152, 649,2153, 72, 173,2154, 460, 415,2155,1463, +2156,1235, 305,2157,2158,2159,2160,2161,2162, 579,2163,2164,2165,2166,2167, 747, +2168,2169,2170,2171,1464, 669,2172,2173,2174,2175,2176,1465,2177, 23, 530, 285, +2178, 335, 729,2179, 397,2180,2181,2182,1030,2183,2184, 698,2185,2186, 325,2187, +2188, 369,2189, 799,1097,1015, 348,2190,1069, 680,2191, 851,1466,2192,2193, 10, +2194, 613, 424,2195, 979, 108, 449, 589, 27, 172, 81,1031, 80, 774, 281, 350, +1032, 525, 301, 582,1176,2196, 674,1045,2197,2198,1467, 730, 762,2199,2200,2201, +2202,1468,2203, 993,2204,2205, 266,1070, 963,1140,2206,2207,2208, 664,1098, 972, +2209,2210,2211,1177,1469,1470, 871,2212,2213,2214,2215,2216,1471,2217,2218,2219, +2220,2221,2222,2223,2224,2225,2226,2227,1472,1236,2228,2229,2230,2231,2232,2233, +2234,2235,1299,2236,2237, 200,2238, 477, 373,2239,2240, 731, 825, 777,2241,2242, +2243, 521, 486, 548,2244,2245,2246,1473,1300, 53, 549, 137, 875, 76, 158,2247, +1301,1474, 469, 396,1016, 278, 712,2248, 321, 442, 503, 767, 744, 941,1237,1178, +1475,2249, 82, 178,1141,1179, 973,2250,1302,2251, 297,2252,2253, 570,2254,2255, +2256, 18, 450, 206,2257, 290, 292,1142,2258, 511, 162, 99, 346, 164, 735,2259, +1476,1477, 4, 554, 343, 798,1099,2260,1100,2261, 43, 171,1303, 139, 215,2262, +2263, 717, 775,2264,1033, 322, 216,2265, 831,2266, 149,2267,1304,2268,2269, 702, +1238, 135, 845, 347, 309,2270, 484,2271, 878, 655, 238,1006,1478,2272, 67,2273, + 295,2274,2275, 461,2276, 478, 942, 412,2277,1034,2278,2279,2280, 265,2281, 541, +2282,2283,2284,2285,2286, 70, 852,1071,2287,2288,2289,2290, 21, 56, 509, 117, + 432,2291,2292, 331, 980, 552,1101, 148, 284, 105, 393,1180,1239, 755,2293, 187, +2294,1046,1479,2295, 340,2296, 63,1047, 230,2297,2298,1305, 763,1306, 101, 800, + 808, 494,2299,2300,2301, 903,2302, 37,1072, 14, 5,2303, 79, 675,2304, 312, +2305,2306,2307,2308,2309,1480, 6,1307,2310,2311,2312, 1, 470, 35, 24, 229, +2313, 695, 210, 86, 778, 15, 784, 592, 779, 32, 77, 855, 964,2314, 259,2315, + 501, 380,2316,2317, 83, 981, 153, 689,1308,1481,1482,1483,2318,2319, 716,1484, +2320,2321,2322,2323,2324,2325,1485,2326,2327, 128, 57, 68, 261,1048, 211, 170, +1240, 31,2328, 51, 435, 742,2329,2330,2331, 635,2332, 264, 456,2333,2334,2335, + 425,2336,1486, 143, 507, 263, 943,2337, 363, 920,1487, 256,1488,1102, 243, 601, +1489,2338,2339,2340,2341,2342,2343,2344, 861,2345,2346,2347,2348,2349,2350, 395, +2351,1490,1491, 62, 535, 166, 225,2352,2353, 668, 419,1241, 138, 604, 928,2354, +1181,2355,1492,1493,2356,2357,2358,1143,2359, 696,2360, 387, 307,1309, 682, 476, +2361,2362, 332, 12, 222, 156,2363, 232,2364, 641, 276, 656, 517,1494,1495,1035, + 416, 736,1496,2365,1017, 586,2366,2367,2368,1497,2369, 242,2370,2371,2372,1498, +2373, 965, 713,2374,2375,2376,2377, 740, 982,1499, 944,1500,1007,2378,2379,1310, +1501,2380,2381,2382, 785, 329,2383,2384,1502,2385,2386,2387, 932,2388,1503,2389, +2390,2391,2392,1242,2393,2394,2395,2396,2397, 994, 
950,2398,2399,2400,2401,1504, +1311,2402,2403,2404,2405,1049, 749,2406,2407, 853, 718,1144,1312,2408,1182,1505, +2409,2410, 255, 516, 479, 564, 550, 214,1506,1507,1313, 413, 239, 444, 339,1145, +1036,1508,1509,1314,1037,1510,1315,2411,1511,2412,2413,2414, 176, 703, 497, 624, + 593, 921, 302,2415, 341, 165,1103,1512,2416,1513,2417,2418,2419, 376,2420, 700, +2421,2422,2423, 258, 768,1316,2424,1183,2425, 995, 608,2426,2427,2428,2429, 221, +2430,2431,2432,2433,2434,2435,2436,2437, 195, 323, 726, 188, 897, 983,1317, 377, + 644,1050, 879,2438, 452,2439,2440,2441,2442,2443,2444, 914,2445,2446,2447,2448, + 915, 489,2449,1514,1184,2450,2451, 515, 64, 427, 495,2452, 583,2453, 483, 485, +1038, 562, 213,1515, 748, 666,2454,2455,2456,2457, 334,2458, 780, 996,1008, 705, +1243,2459,2460,2461,2462,2463, 114,2464, 493,1146, 366, 163,1516, 961,1104,2465, + 291,2466,1318,1105,2467,1517, 365,2468, 355, 951,1244,2469,1319,2470, 631,2471, +2472, 218,1320, 364, 320, 756,1518,1519,1321,1520,1322,2473,2474,2475,2476, 997, +2477,2478,2479,2480, 665,1185,2481, 916,1521,2482,2483,2484, 584, 684,2485,2486, + 797,2487,1051,1186,2488,2489,2490,1522,2491,2492, 370,2493,1039,1187, 65,2494, + 434, 205, 463,1188,2495, 125, 812, 391, 402, 826, 699, 286, 398, 155, 781, 771, + 585,2496, 590, 505,1073,2497, 599, 244, 219, 917,1018, 952, 646,1523,2498,1323, +2499,2500, 49, 984, 354, 741,2501, 625,2502,1324,2503,1019, 190, 357, 757, 491, + 95, 782, 868,2504,2505,2506,2507,2508,2509, 134,1524,1074, 422,1525, 898,2510, + 161,2511,2512,2513,2514, 769,2515,1526,2516,2517, 411,1325,2518, 472,1527,2519, +2520,2521,2522,2523,2524, 985,2525,2526,2527,2528,2529,2530, 764,2531,1245,2532, +2533, 25, 204, 311,2534, 496,2535,1052,2536,2537,2538,2539,2540,2541,2542, 199, + 704, 504, 468, 758, 657,1528, 196, 44, 839,1246, 272, 750,2543, 765, 862,2544, +2545,1326,2546, 132, 615, 933,2547, 732,2548,2549,2550,1189,1529,2551, 283,1247, +1053, 607, 929,2552,2553,2554, 930, 183, 872, 616,1040,1147,2555,1148,1020, 441, + 249,1075,2556,2557,2558, 466, 743,2559,2560,2561, 92, 514, 426, 420, 526,2562, +2563,2564,2565,2566,2567,2568, 185,2569,2570,2571,2572, 776,1530, 658,2573, 362, +2574, 361, 922,1076, 793,2575,2576,2577,2578,2579,2580,1531, 251,2581,2582,2583, +2584,1532, 54, 612, 237,1327,2585,2586, 275, 408, 647, 111,2587,1533,1106, 465, + 3, 458, 9, 38,2588, 107, 110, 890, 209, 26, 737, 498,2589,1534,2590, 431, + 202, 88,1535, 356, 287,1107, 660,1149,2591, 381,1536, 986,1150, 445,1248,1151, + 974,2592,2593, 846,2594, 446, 953, 184,1249,1250, 727,2595, 923, 193, 883,2596, +2597,2598, 102, 324, 539, 817,2599, 421,1041,2600, 832,2601, 94, 175, 197, 406, +2602, 459,2603,2604,2605,2606,2607, 330, 555,2608,2609,2610, 706,1108, 389,2611, +2612,2613,2614, 233,2615, 833, 558, 931, 954,1251,2616,2617,1537, 546,2618,2619, +1009,2620,2621,2622,1538, 690,1328,2623, 955,2624,1539,2625,2626, 772,2627,2628, +2629,2630,2631, 924, 648, 863, 603,2632,2633, 934,1540, 864, 865,2634, 642,1042, + 670,1190,2635,2636,2637,2638, 168,2639, 652, 873, 542,1054,1541,2640,2641,2642, # 512, 256 +) + diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/euckrprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/euckrprober.py new file mode 100644 index 0000000..345a060 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/euckrprober.py @@ -0,0 +1,47 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. 
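
How the frequency table above gets used: `EUCKR_CHAR_TO_FREQ_ORDER` ranks characters by corpus frequency, and a character whose rank falls inside the top `EUCKR_TABLE_SIZE` slice counts as "frequent". The consuming logic lives in `chardistribution.py`, which is not in this hunk, so the following is only a sketch of how `EUCKR_TYPICAL_DISTRIBUTION_RATIO` (6.0) plausibly normalises the frequent-to-rare ratio into a confidence (the function name and cap value are assumptions):

```python
def distribution_confidence(freq_chars, total_chars,
                            typical_ratio=6.0, sure_yes=0.99):
    # No data, or nothing frequent seen: effectively "no opinion".
    if total_chars <= 0 or freq_chars <= 0:
        return 0.01
    if total_chars == freq_chars:
        return sure_yes
    # Frequent chars vs. rare chars, scaled by the typical ratio and capped.
    r = freq_chars / ((total_chars - freq_chars) * typical_ratio)
    return min(r, sure_yes)

# 95 of 100 sampled characters ranked as "frequent" -> near-certain match.
print(distribution_confidence(95, 100))   # 0.99 (capped)
```
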
+# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import EUCKRDistributionAnalysis +from .mbcssm import EUCKR_SM_MODEL + + +class EUCKRProber(MultiByteCharSetProber): + def __init__(self): + super(EUCKRProber, self).__init__() + self.coding_sm = CodingStateMachine(EUCKR_SM_MODEL) + self.distribution_analyzer = EUCKRDistributionAnalysis() + self.reset() + + @property + def charset_name(self): + return "EUC-KR" + + @property + def language(self): + return "Korean" diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/euctwfreq.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/euctwfreq.py new file mode 100644 index 0000000..ed7a995 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/euctwfreq.py @@ -0,0 +1,387 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
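
Usage sketch for the prober just defined: `EUCKRProber`, like `CP949Prober` and `EUCJPProber` earlier in this diff, inherits `feed()` and `get_confidence()` from `MultiByteCharSetProber`, so driving it directly takes only a few lines (in normal use these probers sit behind chardet's `UniversalDetector` rather than being called like this; the import path assumes the vendored layout added here):

```python
from pip._vendor.chardet.euckrprober import EUCKRProber
from pip._vendor.chardet.enums import ProbingState

prober = EUCKRProber()
state = prober.feed('안녕하세요'.encode('euc-kr'))   # feed raw bytes
if state != ProbingState.NOT_ME:
    print(prober.charset_name, prober.language, prober.get_confidence())
```
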
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +# EUCTW frequency table +# Converted from big5 work +# by Taiwan's Mandarin Promotion Council +# + +# 128 --> 0.42261 +# 256 --> 0.57851 +# 512 --> 0.74851 +# 1024 --> 0.89384 +# 2048 --> 0.97583 +# +# Idea Distribution Ratio = 0.74851/(1-0.74851) =2.98 +# Random Distribution Ration = 512/(5401-512)=0.105 +# +# Typical Distribution Ratio about 25% of Ideal one, still much higher than RDR + +EUCTW_TYPICAL_DISTRIBUTION_RATIO = 0.75 + +# Char to FreqOrder table , +EUCTW_TABLE_SIZE = 5376 + +EUCTW_CHAR_TO_FREQ_ORDER = ( + 1,1800,1506, 255,1431, 198, 9, 82, 6,7310, 177, 202,3615,1256,2808, 110, # 2742 +3735, 33,3241, 261, 76, 44,2113, 16,2931,2184,1176, 659,3868, 26,3404,2643, # 2758 +1198,3869,3313,4060, 410,2211, 302, 590, 361,1963, 8, 204, 58,4296,7311,1931, # 2774 + 63,7312,7313, 317,1614, 75, 222, 159,4061,2412,1480,7314,3500,3068, 224,2809, # 2790 +3616, 3, 10,3870,1471, 29,2774,1135,2852,1939, 873, 130,3242,1123, 312,7315, # 2806 +4297,2051, 507, 252, 682,7316, 142,1914, 124, 206,2932, 34,3501,3173, 64, 604, # 2822 +7317,2494,1976,1977, 155,1990, 645, 641,1606,7318,3405, 337, 72, 406,7319, 80, # 2838 + 630, 238,3174,1509, 263, 939,1092,2644, 756,1440,1094,3406, 449, 69,2969, 591, # 2854 + 179,2095, 471, 115,2034,1843, 60, 50,2970, 134, 806,1868, 734,2035,3407, 180, # 2870 + 995,1607, 156, 537,2893, 688,7320, 319,1305, 779,2144, 514,2374, 298,4298, 359, # 2886 +2495, 90,2707,1338, 663, 11, 906,1099,2545, 20,2436, 182, 532,1716,7321, 732, # 2902 +1376,4062,1311,1420,3175, 25,2312,1056, 113, 399, 382,1949, 242,3408,2467, 529, # 2918 +3243, 475,1447,3617,7322, 117, 21, 656, 810,1297,2295,2329,3502,7323, 126,4063, # 2934 + 706, 456, 150, 613,4299, 71,1118,2036,4064, 145,3069, 85, 835, 486,2114,1246, # 2950 +1426, 428, 727,1285,1015, 800, 106, 623, 303,1281,7324,2127,2354, 347,3736, 221, # 2966 +3503,3110,7325,1955,1153,4065, 83, 296,1199,3070, 192, 624, 93,7326, 822,1897, # 2982 +2810,3111, 795,2064, 991,1554,1542,1592, 27, 43,2853, 859, 139,1456, 860,4300, # 2998 + 437, 712,3871, 164,2392,3112, 695, 211,3017,2096, 195,3872,1608,3504,3505,3618, # 3014 +3873, 234, 811,2971,2097,3874,2229,1441,3506,1615,2375, 668,2076,1638, 305, 228, # 3030 +1664,4301, 467, 415,7327, 262,2098,1593, 239, 108, 300, 200,1033, 512,1247,2077, # 3046 +7328,7329,2173,3176,3619,2673, 593, 845,1062,3244, 88,1723,2037,3875,1950, 212, # 3062 + 266, 152, 149, 468,1898,4066,4302, 77, 187,7330,3018, 37, 5,2972,7331,3876, # 3078 +7332,7333, 39,2517,4303,2894,3177,2078, 55, 148, 74,4304, 545, 483,1474,1029, # 3094 +1665, 217,1869,1531,3113,1104,2645,4067, 24, 172,3507, 900,3877,3508,3509,4305, # 3110 + 32,1408,2811,1312, 329, 487,2355,2247,2708, 784,2674, 4,3019,3314,1427,1788, # 3126 + 188, 109, 499,7334,3620,1717,1789, 888,1217,3020,4306,7335,3510,7336,3315,1520, # 3142 +3621,3878, 196,1034, 775,7337,7338, 929,1815, 249, 439, 38,7339,1063,7340, 794, # 3158 +3879,1435,2296, 46, 178,3245,2065,7341,2376,7342, 214,1709,4307, 804, 35, 707, # 3174 + 324,3622,1601,2546, 140, 459,4068,7343,7344,1365, 839, 272, 978,2257,2572,3409, # 3190 +2128,1363,3623,1423, 697, 100,3071, 48, 70,1231, 495,3114,2193,7345,1294,7346, # 3206 +2079, 462, 586,1042,3246, 853, 256, 988, 185,2377,3410,1698, 434,1084,7347,3411, # 3222 + 314,2615,2775,4308,2330,2331, 
569,2280, 637,1816,2518, 757,1162,1878,1616,3412, # 3238 + 287,1577,2115, 768,4309,1671,2854,3511,2519,1321,3737, 909,2413,7348,4069, 933, # 3254 +3738,7349,2052,2356,1222,4310, 765,2414,1322, 786,4311,7350,1919,1462,1677,2895, # 3270 +1699,7351,4312,1424,2437,3115,3624,2590,3316,1774,1940,3413,3880,4070, 309,1369, # 3286 +1130,2812, 364,2230,1653,1299,3881,3512,3882,3883,2646, 525,1085,3021, 902,2000, # 3302 +1475, 964,4313, 421,1844,1415,1057,2281, 940,1364,3116, 376,4314,4315,1381, 7, # 3318 +2520, 983,2378, 336,1710,2675,1845, 321,3414, 559,1131,3022,2742,1808,1132,1313, # 3334 + 265,1481,1857,7352, 352,1203,2813,3247, 167,1089, 420,2814, 776, 792,1724,3513, # 3350 +4071,2438,3248,7353,4072,7354, 446, 229, 333,2743, 901,3739,1200,1557,4316,2647, # 3366 +1920, 395,2744,2676,3740,4073,1835, 125, 916,3178,2616,4317,7355,7356,3741,7357, # 3382 +7358,7359,4318,3117,3625,1133,2547,1757,3415,1510,2313,1409,3514,7360,2145, 438, # 3398 +2591,2896,2379,3317,1068, 958,3023, 461, 311,2855,2677,4074,1915,3179,4075,1978, # 3414 + 383, 750,2745,2617,4076, 274, 539, 385,1278,1442,7361,1154,1964, 384, 561, 210, # 3430 + 98,1295,2548,3515,7362,1711,2415,1482,3416,3884,2897,1257, 129,7363,3742, 642, # 3446 + 523,2776,2777,2648,7364, 141,2231,1333, 68, 176, 441, 876, 907,4077, 603,2592, # 3462 + 710, 171,3417, 404, 549, 18,3118,2393,1410,3626,1666,7365,3516,4319,2898,4320, # 3478 +7366,2973, 368,7367, 146, 366, 99, 871,3627,1543, 748, 807,1586,1185, 22,2258, # 3494 + 379,3743,3180,7368,3181, 505,1941,2618,1991,1382,2314,7369, 380,2357, 218, 702, # 3510 +1817,1248,3418,3024,3517,3318,3249,7370,2974,3628, 930,3250,3744,7371, 59,7372, # 3526 + 585, 601,4078, 497,3419,1112,1314,4321,1801,7373,1223,1472,2174,7374, 749,1836, # 3542 + 690,1899,3745,1772,3885,1476, 429,1043,1790,2232,2116, 917,4079, 447,1086,1629, # 3558 +7375, 556,7376,7377,2020,1654, 844,1090, 105, 550, 966,1758,2815,1008,1782, 686, # 3574 +1095,7378,2282, 793,1602,7379,3518,2593,4322,4080,2933,2297,4323,3746, 980,2496, # 3590 + 544, 353, 527,4324, 908,2678,2899,7380, 381,2619,1942,1348,7381,1341,1252, 560, # 3606 +3072,7382,3420,2856,7383,2053, 973, 886,2080, 143,4325,7384,7385, 157,3886, 496, # 3622 +4081, 57, 840, 540,2038,4326,4327,3421,2117,1445, 970,2259,1748,1965,2081,4082, # 3638 +3119,1234,1775,3251,2816,3629, 773,1206,2129,1066,2039,1326,3887,1738,1725,4083, # 3654 + 279,3120, 51,1544,2594, 423,1578,2130,2066, 173,4328,1879,7386,7387,1583, 264, # 3670 + 610,3630,4329,2439, 280, 154,7388,7389,7390,1739, 338,1282,3073, 693,2857,1411, # 3686 +1074,3747,2440,7391,4330,7392,7393,1240, 952,2394,7394,2900,1538,2679, 685,1483, # 3702 +4084,2468,1436, 953,4085,2054,4331, 671,2395, 79,4086,2441,3252, 608, 567,2680, # 3718 +3422,4087,4088,1691, 393,1261,1791,2396,7395,4332,7396,7397,7398,7399,1383,1672, # 3734 +3748,3182,1464, 522,1119, 661,1150, 216, 675,4333,3888,1432,3519, 609,4334,2681, # 3750 +2397,7400,7401,7402,4089,3025, 0,7403,2469, 315, 231,2442, 301,3319,4335,2380, # 3766 +7404, 233,4090,3631,1818,4336,4337,7405, 96,1776,1315,2082,7406, 257,7407,1809, # 3782 +3632,2709,1139,1819,4091,2021,1124,2163,2778,1777,2649,7408,3074, 363,1655,3183, # 3798 +7409,2975,7410,7411,7412,3889,1567,3890, 718, 103,3184, 849,1443, 341,3320,2934, # 3814 +1484,7413,1712, 127, 67, 339,4092,2398, 679,1412, 821,7414,7415, 834, 738, 351, # 3830 +2976,2146, 846, 235,1497,1880, 418,1992,3749,2710, 186,1100,2147,2746,3520,1545, # 3846 +1355,2935,2858,1377, 583,3891,4093,2573,2977,7416,1298,3633,1078,2549,3634,2358, # 3862 + 78,3750,3751, 
267,1289,2099,2001,1594,4094, 348, 369,1274,2194,2175,1837,4338, # 3878 +1820,2817,3635,2747,2283,2002,4339,2936,2748, 144,3321, 882,4340,3892,2749,3423, # 3894 +4341,2901,7417,4095,1726, 320,7418,3893,3026, 788,2978,7419,2818,1773,1327,2859, # 3910 +3894,2819,7420,1306,4342,2003,1700,3752,3521,2359,2650, 787,2022, 506, 824,3636, # 3926 + 534, 323,4343,1044,3322,2023,1900, 946,3424,7421,1778,1500,1678,7422,1881,4344, # 3942 + 165, 243,4345,3637,2521, 123, 683,4096, 764,4346, 36,3895,1792, 589,2902, 816, # 3958 + 626,1667,3027,2233,1639,1555,1622,3753,3896,7423,3897,2860,1370,1228,1932, 891, # 3974 +2083,2903, 304,4097,7424, 292,2979,2711,3522, 691,2100,4098,1115,4347, 118, 662, # 3990 +7425, 611,1156, 854,2381,1316,2861, 2, 386, 515,2904,7426,7427,3253, 868,2234, # 4006 +1486, 855,2651, 785,2212,3028,7428,1040,3185,3523,7429,3121, 448,7430,1525,7431, # 4022 +2164,4348,7432,3754,7433,4099,2820,3524,3122, 503, 818,3898,3123,1568, 814, 676, # 4038 +1444, 306,1749,7434,3755,1416,1030, 197,1428, 805,2821,1501,4349,7435,7436,7437, # 4054 +1993,7438,4350,7439,7440,2195, 13,2779,3638,2980,3124,1229,1916,7441,3756,2131, # 4070 +7442,4100,4351,2399,3525,7443,2213,1511,1727,1120,7444,7445, 646,3757,2443, 307, # 4086 +7446,7447,1595,3186,7448,7449,7450,3639,1113,1356,3899,1465,2522,2523,7451, 519, # 4102 +7452, 128,2132, 92,2284,1979,7453,3900,1512, 342,3125,2196,7454,2780,2214,1980, # 4118 +3323,7455, 290,1656,1317, 789, 827,2360,7456,3758,4352, 562, 581,3901,7457, 401, # 4134 +4353,2248, 94,4354,1399,2781,7458,1463,2024,4355,3187,1943,7459, 828,1105,4101, # 4150 +1262,1394,7460,4102, 605,4356,7461,1783,2862,7462,2822, 819,2101, 578,2197,2937, # 4166 +7463,1502, 436,3254,4103,3255,2823,3902,2905,3425,3426,7464,2712,2315,7465,7466, # 4182 +2332,2067, 23,4357, 193, 826,3759,2102, 699,1630,4104,3075, 390,1793,1064,3526, # 4198 +7467,1579,3076,3077,1400,7468,4105,1838,1640,2863,7469,4358,4359, 137,4106, 598, # 4214 +3078,1966, 780, 104, 974,2938,7470, 278, 899, 253, 402, 572, 504, 493,1339,7471, # 4230 +3903,1275,4360,2574,2550,7472,3640,3029,3079,2249, 565,1334,2713, 863, 41,7473, # 4246 +7474,4361,7475,1657,2333, 19, 463,2750,4107, 606,7476,2981,3256,1087,2084,1323, # 4262 +2652,2982,7477,1631,1623,1750,4108,2682,7478,2864, 791,2714,2653,2334, 232,2416, # 4278 +7479,2983,1498,7480,2654,2620, 755,1366,3641,3257,3126,2025,1609, 119,1917,3427, # 4294 + 862,1026,4109,7481,3904,3760,4362,3905,4363,2260,1951,2470,7482,1125, 817,4110, # 4310 +4111,3906,1513,1766,2040,1487,4112,3030,3258,2824,3761,3127,7483,7484,1507,7485, # 4326 +2683, 733, 40,1632,1106,2865, 345,4113, 841,2524, 230,4364,2984,1846,3259,3428, # 4342 +7486,1263, 986,3429,7487, 735, 879, 254,1137, 857, 622,1300,1180,1388,1562,3907, # 4358 +3908,2939, 967,2751,2655,1349, 592,2133,1692,3324,2985,1994,4114,1679,3909,1901, # 4374 +2185,7488, 739,3642,2715,1296,1290,7489,4115,2198,2199,1921,1563,2595,2551,1870, # 4390 +2752,2986,7490, 435,7491, 343,1108, 596, 17,1751,4365,2235,3430,3643,7492,4366, # 4406 + 294,3527,2940,1693, 477, 979, 281,2041,3528, 643,2042,3644,2621,2782,2261,1031, # 4422 +2335,2134,2298,3529,4367, 367,1249,2552,7493,3530,7494,4368,1283,3325,2004, 240, # 4438 +1762,3326,4369,4370, 836,1069,3128, 474,7495,2148,2525, 268,3531,7496,3188,1521, # 4454 +1284,7497,1658,1546,4116,7498,3532,3533,7499,4117,3327,2684,1685,4118, 961,1673, # 4470 +2622, 190,2005,2200,3762,4371,4372,7500, 570,2497,3645,1490,7501,4373,2623,3260, # 4486 +1956,4374, 584,1514, 396,1045,1944,7502,4375,1967,2444,7503,7504,4376,3910, 619, # 4502 +7505,3129,3261, 
215,2006,2783,2553,3189,4377,3190,4378, 763,4119,3763,4379,7506, # 4518 +7507,1957,1767,2941,3328,3646,1174, 452,1477,4380,3329,3130,7508,2825,1253,2382, # 4534 +2186,1091,2285,4120, 492,7509, 638,1169,1824,2135,1752,3911, 648, 926,1021,1324, # 4550 +4381, 520,4382, 997, 847,1007, 892,4383,3764,2262,1871,3647,7510,2400,1784,4384, # 4566 +1952,2942,3080,3191,1728,4121,2043,3648,4385,2007,1701,3131,1551, 30,2263,4122, # 4582 +7511,2026,4386,3534,7512, 501,7513,4123, 594,3431,2165,1821,3535,3432,3536,3192, # 4598 + 829,2826,4124,7514,1680,3132,1225,4125,7515,3262,4387,4126,3133,2336,7516,4388, # 4614 +4127,7517,3912,3913,7518,1847,2383,2596,3330,7519,4389, 374,3914, 652,4128,4129, # 4630 + 375,1140, 798,7520,7521,7522,2361,4390,2264, 546,1659, 138,3031,2445,4391,7523, # 4646 +2250, 612,1848, 910, 796,3765,1740,1371, 825,3766,3767,7524,2906,2554,7525, 692, # 4662 + 444,3032,2624, 801,4392,4130,7526,1491, 244,1053,3033,4131,4132, 340,7527,3915, # 4678 +1041,2987, 293,1168, 87,1357,7528,1539, 959,7529,2236, 721, 694,4133,3768, 219, # 4694 +1478, 644,1417,3331,2656,1413,1401,1335,1389,3916,7530,7531,2988,2362,3134,1825, # 4710 + 730,1515, 184,2827, 66,4393,7532,1660,2943, 246,3332, 378,1457, 226,3433, 975, # 4726 +3917,2944,1264,3537, 674, 696,7533, 163,7534,1141,2417,2166, 713,3538,3333,4394, # 4742 +3918,7535,7536,1186, 15,7537,1079,1070,7538,1522,3193,3539, 276,1050,2716, 758, # 4758 +1126, 653,2945,3263,7539,2337, 889,3540,3919,3081,2989, 903,1250,4395,3920,3434, # 4774 +3541,1342,1681,1718, 766,3264, 286, 89,2946,3649,7540,1713,7541,2597,3334,2990, # 4790 +7542,2947,2215,3194,2866,7543,4396,2498,2526, 181, 387,1075,3921, 731,2187,3335, # 4806 +7544,3265, 310, 313,3435,2299, 770,4134, 54,3034, 189,4397,3082,3769,3922,7545, # 4822 +1230,1617,1849, 355,3542,4135,4398,3336, 111,4136,3650,1350,3135,3436,3035,4137, # 4838 +2149,3266,3543,7546,2784,3923,3924,2991, 722,2008,7547,1071, 247,1207,2338,2471, # 4854 +1378,4399,2009, 864,1437,1214,4400, 373,3770,1142,2216, 667,4401, 442,2753,2555, # 4870 +3771,3925,1968,4138,3267,1839, 837, 170,1107, 934,1336,1882,7548,7549,2118,4139, # 4886 +2828, 743,1569,7550,4402,4140, 582,2384,1418,3437,7551,1802,7552, 357,1395,1729, # 4902 +3651,3268,2418,1564,2237,7553,3083,3772,1633,4403,1114,2085,4141,1532,7554, 482, # 4918 +2446,4404,7555,7556,1492, 833,1466,7557,2717,3544,1641,2829,7558,1526,1272,3652, # 4934 +4142,1686,1794, 416,2556,1902,1953,1803,7559,3773,2785,3774,1159,2316,7560,2867, # 4950 +4405,1610,1584,3036,2419,2754, 443,3269,1163,3136,7561,7562,3926,7563,4143,2499, # 4966 +3037,4406,3927,3137,2103,1647,3545,2010,1872,4144,7564,4145, 431,3438,7565, 250, # 4982 + 97, 81,4146,7566,1648,1850,1558, 160, 848,7567, 866, 740,1694,7568,2201,2830, # 4998 +3195,4147,4407,3653,1687, 950,2472, 426, 469,3196,3654,3655,3928,7569,7570,1188, # 5014 + 424,1995, 861,3546,4148,3775,2202,2685, 168,1235,3547,4149,7571,2086,1674,4408, # 5030 +3337,3270, 220,2557,1009,7572,3776, 670,2992, 332,1208, 717,7573,7574,3548,2447, # 5046 +3929,3338,7575, 513,7576,1209,2868,3339,3138,4409,1080,7577,7578,7579,7580,2527, # 5062 +3656,3549, 815,1587,3930,3931,7581,3550,3439,3777,1254,4410,1328,3038,1390,3932, # 5078 +1741,3933,3778,3934,7582, 236,3779,2448,3271,7583,7584,3657,3780,1273,3781,4411, # 5094 +7585, 308,7586,4412, 245,4413,1851,2473,1307,2575, 430, 715,2136,2449,7587, 270, # 5110 + 199,2869,3935,7588,3551,2718,1753, 761,1754, 725,1661,1840,4414,3440,3658,7589, # 5126 +7590, 587, 14,3272, 227,2598, 326, 480,2265, 943,2755,3552, 291, 650,1883,7591, # 5142 +1702,1226, 
102,1547, 62,3441, 904,4415,3442,1164,4150,7592,7593,1224,1548,2756, # 5158 + 391, 498,1493,7594,1386,1419,7595,2055,1177,4416, 813, 880,1081,2363, 566,1145, # 5174 +4417,2286,1001,1035,2558,2599,2238, 394,1286,7596,7597,2068,7598, 86,1494,1730, # 5190 +3936, 491,1588, 745, 897,2948, 843,3340,3937,2757,2870,3273,1768, 998,2217,2069, # 5206 + 397,1826,1195,1969,3659,2993,3341, 284,7599,3782,2500,2137,2119,1903,7600,3938, # 5222 +2150,3939,4151,1036,3443,1904, 114,2559,4152, 209,1527,7601,7602,2949,2831,2625, # 5238 +2385,2719,3139, 812,2560,7603,3274,7604,1559, 737,1884,3660,1210, 885, 28,2686, # 5254 +3553,3783,7605,4153,1004,1779,4418,7606, 346,1981,2218,2687,4419,3784,1742, 797, # 5270 +1642,3940,1933,1072,1384,2151, 896,3941,3275,3661,3197,2871,3554,7607,2561,1958, # 5286 +4420,2450,1785,7608,7609,7610,3942,4154,1005,1308,3662,4155,2720,4421,4422,1528, # 5302 +2600, 161,1178,4156,1982, 987,4423,1101,4157, 631,3943,1157,3198,2420,1343,1241, # 5318 +1016,2239,2562, 372, 877,2339,2501,1160, 555,1934, 911,3944,7611, 466,1170, 169, # 5334 +1051,2907,2688,3663,2474,2994,1182,2011,2563,1251,2626,7612, 992,2340,3444,1540, # 5350 +2721,1201,2070,2401,1996,2475,7613,4424, 528,1922,2188,1503,1873,1570,2364,3342, # 5366 +3276,7614, 557,1073,7615,1827,3445,2087,2266,3140,3039,3084, 767,3085,2786,4425, # 5382 +1006,4158,4426,2341,1267,2176,3664,3199, 778,3945,3200,2722,1597,2657,7616,4427, # 5398 +7617,3446,7618,7619,7620,3277,2689,1433,3278, 131, 95,1504,3946, 723,4159,3141, # 5414 +1841,3555,2758,2189,3947,2027,2104,3665,7621,2995,3948,1218,7622,3343,3201,3949, # 5430 +4160,2576, 248,1634,3785, 912,7623,2832,3666,3040,3786, 654, 53,7624,2996,7625, # 5446 +1688,4428, 777,3447,1032,3950,1425,7626, 191, 820,2120,2833, 971,4429, 931,3202, # 5462 + 135, 664, 783,3787,1997, 772,2908,1935,3951,3788,4430,2909,3203, 282,2723, 640, # 5478 +1372,3448,1127, 922, 325,3344,7627,7628, 711,2044,7629,7630,3952,2219,2787,1936, # 5494 +3953,3345,2220,2251,3789,2300,7631,4431,3790,1258,3279,3954,3204,2138,2950,3955, # 5510 +3956,7632,2221, 258,3205,4432, 101,1227,7633,3280,1755,7634,1391,3281,7635,2910, # 5526 +2056, 893,7636,7637,7638,1402,4161,2342,7639,7640,3206,3556,7641,7642, 878,1325, # 5542 +1780,2788,4433, 259,1385,2577, 744,1183,2267,4434,7643,3957,2502,7644, 684,1024, # 5558 +4162,7645, 472,3557,3449,1165,3282,3958,3959, 322,2152, 881, 455,1695,1152,1340, # 5574 + 660, 554,2153,4435,1058,4436,4163, 830,1065,3346,3960,4437,1923,7646,1703,1918, # 5590 +7647, 932,2268, 122,7648,4438, 947, 677,7649,3791,2627, 297,1905,1924,2269,4439, # 5606 +2317,3283,7650,7651,4164,7652,4165, 84,4166, 112, 989,7653, 547,1059,3961, 701, # 5622 +3558,1019,7654,4167,7655,3450, 942, 639, 457,2301,2451, 993,2951, 407, 851, 494, # 5638 +4440,3347, 927,7656,1237,7657,2421,3348, 573,4168, 680, 921,2911,1279,1874, 285, # 5654 + 790,1448,1983, 719,2167,7658,7659,4441,3962,3963,1649,7660,1541, 563,7661,1077, # 5670 +7662,3349,3041,3451, 511,2997,3964,3965,3667,3966,1268,2564,3350,3207,4442,4443, # 5686 +7663, 535,1048,1276,1189,2912,2028,3142,1438,1373,2834,2952,1134,2012,7664,4169, # 5702 +1238,2578,3086,1259,7665, 700,7666,2953,3143,3668,4170,7667,4171,1146,1875,1906, # 5718 +4444,2601,3967, 781,2422, 132,1589, 203, 147, 273,2789,2402, 898,1786,2154,3968, # 5734 +3969,7668,3792,2790,7669,7670,4445,4446,7671,3208,7672,1635,3793, 965,7673,1804, # 5750 +2690,1516,3559,1121,1082,1329,3284,3970,1449,3794, 65,1128,2835,2913,2759,1590, # 5766 +3795,7674,7675, 12,2658, 45, 976,2579,3144,4447, 517,2528,1013,1037,3209,7676, # 5782 
+3796,2836,7677,3797,7678,3452,7679,2602, 614,1998,2318,3798,3087,2724,2628,7680, # 5798 +2580,4172, 599,1269,7681,1810,3669,7682,2691,3088, 759,1060, 489,1805,3351,3285, # 5814 +1358,7683,7684,2386,1387,1215,2629,2252, 490,7685,7686,4173,1759,2387,2343,7687, # 5830 +4448,3799,1907,3971,2630,1806,3210,4449,3453,3286,2760,2344, 874,7688,7689,3454, # 5846 +3670,1858, 91,2914,3671,3042,3800,4450,7690,3145,3972,2659,7691,3455,1202,1403, # 5862 +3801,2954,2529,1517,2503,4451,3456,2504,7692,4452,7693,2692,1885,1495,1731,3973, # 5878 +2365,4453,7694,2029,7695,7696,3974,2693,1216, 237,2581,4174,2319,3975,3802,4454, # 5894 +4455,2694,3560,3457, 445,4456,7697,7698,7699,7700,2761, 61,3976,3672,1822,3977, # 5910 +7701, 687,2045, 935, 925, 405,2660, 703,1096,1859,2725,4457,3978,1876,1367,2695, # 5926 +3352, 918,2105,1781,2476, 334,3287,1611,1093,4458, 564,3146,3458,3673,3353, 945, # 5942 +2631,2057,4459,7702,1925, 872,4175,7703,3459,2696,3089, 349,4176,3674,3979,4460, # 5958 +3803,4177,3675,2155,3980,4461,4462,4178,4463,2403,2046, 782,3981, 400, 251,4179, # 5974 +1624,7704,7705, 277,3676, 299,1265, 476,1191,3804,2121,4180,4181,1109, 205,7706, # 5990 +2582,1000,2156,3561,1860,7707,7708,7709,4464,7710,4465,2565, 107,2477,2157,3982, # 6006 +3460,3147,7711,1533, 541,1301, 158, 753,4182,2872,3562,7712,1696, 370,1088,4183, # 6022 +4466,3563, 579, 327, 440, 162,2240, 269,1937,1374,3461, 968,3043, 56,1396,3090, # 6038 +2106,3288,3354,7713,1926,2158,4467,2998,7714,3564,7715,7716,3677,4468,2478,7717, # 6054 +2791,7718,1650,4469,7719,2603,7720,7721,3983,2661,3355,1149,3356,3984,3805,3985, # 6070 +7722,1076, 49,7723, 951,3211,3289,3290, 450,2837, 920,7724,1811,2792,2366,4184, # 6086 +1908,1138,2367,3806,3462,7725,3212,4470,1909,1147,1518,2423,4471,3807,7726,4472, # 6102 +2388,2604, 260,1795,3213,7727,7728,3808,3291, 708,7729,3565,1704,7730,3566,1351, # 6118 +1618,3357,2999,1886, 944,4185,3358,4186,3044,3359,4187,7731,3678, 422, 413,1714, # 6134 +3292, 500,2058,2345,4188,2479,7732,1344,1910, 954,7733,1668,7734,7735,3986,2404, # 6150 +4189,3567,3809,4190,7736,2302,1318,2505,3091, 133,3092,2873,4473, 629, 31,2838, # 6166 +2697,3810,4474, 850, 949,4475,3987,2955,1732,2088,4191,1496,1852,7737,3988, 620, # 6182 +3214, 981,1242,3679,3360,1619,3680,1643,3293,2139,2452,1970,1719,3463,2168,7738, # 6198 +3215,7739,7740,3361,1828,7741,1277,4476,1565,2047,7742,1636,3568,3093,7743, 869, # 6214 +2839, 655,3811,3812,3094,3989,3000,3813,1310,3569,4477,7744,7745,7746,1733, 558, # 6230 +4478,3681, 335,1549,3045,1756,4192,3682,1945,3464,1829,1291,1192, 470,2726,2107, # 6246 +2793, 913,1054,3990,7747,1027,7748,3046,3991,4479, 982,2662,3362,3148,3465,3216, # 6262 +3217,1946,2794,7749, 571,4480,7750,1830,7751,3570,2583,1523,2424,7752,2089, 984, # 6278 +4481,3683,1959,7753,3684, 852, 923,2795,3466,3685, 969,1519, 999,2048,2320,1705, # 6294 +7754,3095, 615,1662, 151, 597,3992,2405,2321,1049, 275,4482,3686,4193, 568,3687, # 6310 +3571,2480,4194,3688,7755,2425,2270, 409,3218,7756,1566,2874,3467,1002, 769,2840, # 6326 + 194,2090,3149,3689,2222,3294,4195, 628,1505,7757,7758,1763,2177,3001,3993, 521, # 6342 +1161,2584,1787,2203,2406,4483,3994,1625,4196,4197, 412, 42,3096, 464,7759,2632, # 6358 +4484,3363,1760,1571,2875,3468,2530,1219,2204,3814,2633,2140,2368,4485,4486,3295, # 6374 +1651,3364,3572,7760,7761,3573,2481,3469,7762,3690,7763,7764,2271,2091, 460,7765, # 6390 +4487,7766,3002, 962, 588,3574, 289,3219,2634,1116, 52,7767,3047,1796,7768,7769, # 6406 +7770,1467,7771,1598,1143,3691,4198,1984,1734,1067,4488,1280,3365, 465,4489,1572, # 6422 + 
510,7772,1927,2241,1812,1644,3575,7773,4490,3692,7774,7775,2663,1573,1534,7776, # 6438 +7777,4199, 536,1807,1761,3470,3815,3150,2635,7778,7779,7780,4491,3471,2915,1911, # 6454 +2796,7781,3296,1122, 377,3220,7782, 360,7783,7784,4200,1529, 551,7785,2059,3693, # 6470 +1769,2426,7786,2916,4201,3297,3097,2322,2108,2030,4492,1404, 136,1468,1479, 672, # 6486 +1171,3221,2303, 271,3151,7787,2762,7788,2049, 678,2727, 865,1947,4493,7789,2013, # 6502 +3995,2956,7790,2728,2223,1397,3048,3694,4494,4495,1735,2917,3366,3576,7791,3816, # 6518 + 509,2841,2453,2876,3817,7792,7793,3152,3153,4496,4202,2531,4497,2304,1166,1010, # 6534 + 552, 681,1887,7794,7795,2957,2958,3996,1287,1596,1861,3154, 358, 453, 736, 175, # 6550 + 478,1117, 905,1167,1097,7796,1853,1530,7797,1706,7798,2178,3472,2287,3695,3473, # 6566 +3577,4203,2092,4204,7799,3367,1193,2482,4205,1458,2190,2205,1862,1888,1421,3298, # 6582 +2918,3049,2179,3474, 595,2122,7800,3997,7801,7802,4206,1707,2636, 223,3696,1359, # 6598 + 751,3098, 183,3475,7803,2797,3003, 419,2369, 633, 704,3818,2389, 241,7804,7805, # 6614 +7806, 838,3004,3697,2272,2763,2454,3819,1938,2050,3998,1309,3099,2242,1181,7807, # 6630 +1136,2206,3820,2370,1446,4207,2305,4498,7808,7809,4208,1055,2605, 484,3698,7810, # 6646 +3999, 625,4209,2273,3368,1499,4210,4000,7811,4001,4211,3222,2274,2275,3476,7812, # 6662 +7813,2764, 808,2606,3699,3369,4002,4212,3100,2532, 526,3370,3821,4213, 955,7814, # 6678 +1620,4214,2637,2427,7815,1429,3700,1669,1831, 994, 928,7816,3578,1260,7817,7818, # 6694 +7819,1948,2288, 741,2919,1626,4215,2729,2455, 867,1184, 362,3371,1392,7820,7821, # 6710 +4003,4216,1770,1736,3223,2920,4499,4500,1928,2698,1459,1158,7822,3050,3372,2877, # 6726 +1292,1929,2506,2842,3701,1985,1187,2071,2014,2607,4217,7823,2566,2507,2169,3702, # 6742 +2483,3299,7824,3703,4501,7825,7826, 666,1003,3005,1022,3579,4218,7827,4502,1813, # 6758 +2253, 574,3822,1603, 295,1535, 705,3823,4219, 283, 858, 417,7828,7829,3224,4503, # 6774 +4504,3051,1220,1889,1046,2276,2456,4004,1393,1599, 689,2567, 388,4220,7830,2484, # 6790 + 802,7831,2798,3824,2060,1405,2254,7832,4505,3825,2109,1052,1345,3225,1585,7833, # 6806 + 809,7834,7835,7836, 575,2730,3477, 956,1552,1469,1144,2323,7837,2324,1560,2457, # 6822 +3580,3226,4005, 616,2207,3155,2180,2289,7838,1832,7839,3478,4506,7840,1319,3704, # 6838 +3705,1211,3581,1023,3227,1293,2799,7841,7842,7843,3826, 607,2306,3827, 762,2878, # 6854 +1439,4221,1360,7844,1485,3052,7845,4507,1038,4222,1450,2061,2638,4223,1379,4508, # 6870 +2585,7846,7847,4224,1352,1414,2325,2921,1172,7848,7849,3828,3829,7850,1797,1451, # 6886 +7851,7852,7853,7854,2922,4006,4007,2485,2346, 411,4008,4009,3582,3300,3101,4509, # 6902 +1561,2664,1452,4010,1375,7855,7856, 47,2959, 316,7857,1406,1591,2923,3156,7858, # 6918 +1025,2141,3102,3157, 354,2731, 884,2224,4225,2407, 508,3706, 726,3583, 996,2428, # 6934 +3584, 729,7859, 392,2191,1453,4011,4510,3707,7860,7861,2458,3585,2608,1675,2800, # 6950 + 919,2347,2960,2348,1270,4511,4012, 73,7862,7863, 647,7864,3228,2843,2255,1550, # 6966 +1346,3006,7865,1332, 883,3479,7866,7867,7868,7869,3301,2765,7870,1212, 831,1347, # 6982 +4226,4512,2326,3830,1863,3053, 720,3831,4513,4514,3832,7871,4227,7872,7873,4515, # 6998 +7874,7875,1798,4516,3708,2609,4517,3586,1645,2371,7876,7877,2924, 669,2208,2665, # 7014 +2429,7878,2879,7879,7880,1028,3229,7881,4228,2408,7882,2256,1353,7883,7884,4518, # 7030 +3158, 518,7885,4013,7886,4229,1960,7887,2142,4230,7888,7889,3007,2349,2350,3833, # 7046 + 516,1833,1454,4014,2699,4231,4519,2225,2610,1971,1129,3587,7890,2766,7891,2961, # 
7062 +1422, 577,1470,3008,1524,3373,7892,7893, 432,4232,3054,3480,7894,2586,1455,2508, # 7078 +2226,1972,1175,7895,1020,2732,4015,3481,4520,7896,2733,7897,1743,1361,3055,3482, # 7094 +2639,4016,4233,4521,2290, 895, 924,4234,2170, 331,2243,3056, 166,1627,3057,1098, # 7110 +7898,1232,2880,2227,3374,4522, 657, 403,1196,2372, 542,3709,3375,1600,4235,3483, # 7126 +7899,4523,2767,3230, 576, 530,1362,7900,4524,2533,2666,3710,4017,7901, 842,3834, # 7142 +7902,2801,2031,1014,4018, 213,2700,3376, 665, 621,4236,7903,3711,2925,2430,7904, # 7158 +2431,3302,3588,3377,7905,4237,2534,4238,4525,3589,1682,4239,3484,1380,7906, 724, # 7174 +2277, 600,1670,7907,1337,1233,4526,3103,2244,7908,1621,4527,7909, 651,4240,7910, # 7190 +1612,4241,2611,7911,2844,7912,2734,2307,3058,7913, 716,2459,3059, 174,1255,2701, # 7206 +4019,3590, 548,1320,1398, 728,4020,1574,7914,1890,1197,3060,4021,7915,3061,3062, # 7222 +3712,3591,3713, 747,7916, 635,4242,4528,7917,7918,7919,4243,7920,7921,4529,7922, # 7238 +3378,4530,2432, 451,7923,3714,2535,2072,4244,2735,4245,4022,7924,1764,4531,7925, # 7254 +4246, 350,7926,2278,2390,2486,7927,4247,4023,2245,1434,4024, 488,4532, 458,4248, # 7270 +4025,3715, 771,1330,2391,3835,2568,3159,2159,2409,1553,2667,3160,4249,7928,2487, # 7286 +2881,2612,1720,2702,4250,3379,4533,7929,2536,4251,7930,3231,4252,2768,7931,2015, # 7302 +2736,7932,1155,1017,3716,3836,7933,3303,2308, 201,1864,4253,1430,7934,4026,7935, # 7318 +7936,7937,7938,7939,4254,1604,7940, 414,1865, 371,2587,4534,4535,3485,2016,3104, # 7334 +4536,1708, 960,4255, 887, 389,2171,1536,1663,1721,7941,2228,4027,2351,2926,1580, # 7350 +7942,7943,7944,1744,7945,2537,4537,4538,7946,4539,7947,2073,7948,7949,3592,3380, # 7366 +2882,4256,7950,4257,2640,3381,2802, 673,2703,2460, 709,3486,4028,3593,4258,7951, # 7382 +1148, 502, 634,7952,7953,1204,4540,3594,1575,4541,2613,3717,7954,3718,3105, 948, # 7398 +3232, 121,1745,3837,1110,7955,4259,3063,2509,3009,4029,3719,1151,1771,3838,1488, # 7414 +4030,1986,7956,2433,3487,7957,7958,2093,7959,4260,3839,1213,1407,2803, 531,2737, # 7430 +2538,3233,1011,1537,7960,2769,4261,3106,1061,7961,3720,3721,1866,2883,7962,2017, # 7446 + 120,4262,4263,2062,3595,3234,2309,3840,2668,3382,1954,4542,7963,7964,3488,1047, # 7462 +2704,1266,7965,1368,4543,2845, 649,3383,3841,2539,2738,1102,2846,2669,7966,7967, # 7478 +1999,7968,1111,3596,2962,7969,2488,3842,3597,2804,1854,3384,3722,7970,7971,3385, # 7494 +2410,2884,3304,3235,3598,7972,2569,7973,3599,2805,4031,1460, 856,7974,3600,7975, # 7510 +2885,2963,7976,2886,3843,7977,4264, 632,2510, 875,3844,1697,3845,2291,7978,7979, # 7526 +4544,3010,1239, 580,4545,4265,7980, 914, 936,2074,1190,4032,1039,2123,7981,7982, # 7542 +7983,3386,1473,7984,1354,4266,3846,7985,2172,3064,4033, 915,3305,4267,4268,3306, # 7558 +1605,1834,7986,2739, 398,3601,4269,3847,4034, 328,1912,2847,4035,3848,1331,4270, # 7574 +3011, 937,4271,7987,3602,4036,4037,3387,2160,4546,3388, 524, 742, 538,3065,1012, # 7590 +7988,7989,3849,2461,7990, 658,1103, 225,3850,7991,7992,4547,7993,4548,7994,3236, # 7606 +1243,7995,4038, 963,2246,4549,7996,2705,3603,3161,7997,7998,2588,2327,7999,4550, # 7622 +8000,8001,8002,3489,3307, 957,3389,2540,2032,1930,2927,2462, 870,2018,3604,1746, # 7638 +2770,2771,2434,2463,8003,3851,8004,3723,3107,3724,3490,3390,3725,8005,1179,3066, # 7654 +8006,3162,2373,4272,3726,2541,3163,3108,2740,4039,8007,3391,1556,2542,2292, 977, # 7670 +2887,2033,4040,1205,3392,8008,1765,3393,3164,2124,1271,1689, 714,4551,3491,8009, # 7686 +2328,3852, 533,4273,3605,2181, 
617,8010,2464,3308,3492,2310,8011,8012,3165,8013, # 7702 +8014,3853,1987, 618, 427,2641,3493,3394,8015,8016,1244,1690,8017,2806,4274,4552, # 7718 +8018,3494,8019,8020,2279,1576, 473,3606,4275,3395, 972,8021,3607,8022,3067,8023, # 7734 +8024,4553,4554,8025,3727,4041,4042,8026, 153,4555, 356,8027,1891,2888,4276,2143, # 7750 + 408, 803,2352,8028,3854,8029,4277,1646,2570,2511,4556,4557,3855,8030,3856,4278, # 7766 +8031,2411,3396, 752,8032,8033,1961,2964,8034, 746,3012,2465,8035,4279,3728, 698, # 7782 +4558,1892,4280,3608,2543,4559,3609,3857,8036,3166,3397,8037,1823,1302,4043,2706, # 7798 +3858,1973,4281,8038,4282,3167, 823,1303,1288,1236,2848,3495,4044,3398, 774,3859, # 7814 +8039,1581,4560,1304,2849,3860,4561,8040,2435,2161,1083,3237,4283,4045,4284, 344, # 7830 +1173, 288,2311, 454,1683,8041,8042,1461,4562,4046,2589,8043,8044,4563, 985, 894, # 7846 +8045,3399,3168,8046,1913,2928,3729,1988,8047,2110,1974,8048,4047,8049,2571,1194, # 7862 + 425,8050,4564,3169,1245,3730,4285,8051,8052,2850,8053, 636,4565,1855,3861, 760, # 7878 +1799,8054,4286,2209,1508,4566,4048,1893,1684,2293,8055,8056,8057,4287,4288,2210, # 7894 + 479,8058,8059, 832,8060,4049,2489,8061,2965,2490,3731, 990,3109, 627,1814,2642, # 7910 +4289,1582,4290,2125,2111,3496,4567,8062, 799,4291,3170,8063,4568,2112,1737,3013, # 7926 +1018, 543, 754,4292,3309,1676,4569,4570,4050,8064,1489,8065,3497,8066,2614,2889, # 7942 +4051,8067,8068,2966,8069,8070,8071,8072,3171,4571,4572,2182,1722,8073,3238,3239, # 7958 +1842,3610,1715, 481, 365,1975,1856,8074,8075,1962,2491,4573,8076,2126,3611,3240, # 7974 + 433,1894,2063,2075,8077, 602,2741,8078,8079,8080,8081,8082,3014,1628,3400,8083, # 7990 +3172,4574,4052,2890,4575,2512,8084,2544,2772,8085,8086,8087,3310,4576,2891,8088, # 8006 +4577,8089,2851,4578,4579,1221,2967,4053,2513,8090,8091,8092,1867,1989,8093,8094, # 8022 +8095,1895,8096,8097,4580,1896,4054, 318,8098,2094,4055,4293,8099,8100, 485,8101, # 8038 + 938,3862, 553,2670, 116,8102,3863,3612,8103,3498,2671,2773,3401,3311,2807,8104, # 8054 +3613,2929,4056,1747,2930,2968,8105,8106, 207,8107,8108,2672,4581,2514,8109,3015, # 8070 + 890,3614,3864,8110,1877,3732,3402,8111,2183,2353,3403,1652,8112,8113,8114, 941, # 8086 +2294, 208,3499,4057,2019, 330,4294,3865,2892,2492,3733,4295,8115,8116,8117,8118, # 8102 +) + diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/euctwprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/euctwprober.py new file mode 100644 index 0000000..35669cc --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/euctwprober.py @@ -0,0 +1,46 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import EUCTWDistributionAnalysis +from .mbcssm import EUCTW_SM_MODEL + +class EUCTWProber(MultiByteCharSetProber): + def __init__(self): + super(EUCTWProber, self).__init__() + self.coding_sm = CodingStateMachine(EUCTW_SM_MODEL) + self.distribution_analyzer = EUCTWDistributionAnalysis() + self.reset() + + @property + def charset_name(self): + return "EUC-TW" + + @property + def language(self): + return "Taiwan" diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/gb2312freq.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/gb2312freq.py new file mode 100644 index 0000000..697837b --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/gb2312freq.py @@ -0,0 +1,283 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +# GB2312 most frequently used character table +# +# Char to FreqOrder table , from hz6763 + +# 512 --> 0.79 -- 0.79 +# 1024 --> 0.92 -- 0.13 +# 2048 --> 0.98 -- 0.06 +# 6768 --> 1.00 -- 0.02 +# +# Ideal Distribution Ratio = 0.79135/(1-0.79135) = 3.79 +# Random Distribution Ration = 512 / (3755 - 512) = 0.157 +# +# Typical Distribution Ratio about 25% of Ideal one, still much higher that RDR + +GB2312_TYPICAL_DISTRIBUTION_RATIO = 0.9 + +GB2312_TABLE_SIZE = 3760 + +GB2312_CHAR_TO_FREQ_ORDER = ( +1671, 749,1443,2364,3924,3807,2330,3921,1704,3463,2691,1511,1515, 572,3191,2205, +2361, 224,2558, 479,1711, 963,3162, 440,4060,1905,2966,2947,3580,2647,3961,3842, +2204, 869,4207, 970,2678,5626,2944,2956,1479,4048, 514,3595, 588,1346,2820,3409, + 249,4088,1746,1873,2047,1774, 581,1813, 358,1174,3590,1014,1561,4844,2245, 670, +1636,3112, 889,1286, 953, 556,2327,3060,1290,3141, 613, 185,3477,1367, 850,3820, +1715,2428,2642,2303,2732,3041,2562,2648,3566,3946,1349, 388,3098,2091,1360,3585, + 152,1687,1539, 738,1559, 59,1232,2925,2267,1388,1249,1741,1679,2960, 151,1566, +1125,1352,4271, 924,4296, 385,3166,4459, 310,1245,2850, 70,3285,2729,3534,3575, +2398,3298,3466,1960,2265, 217,3647, 864,1909,2084,4401,2773,1010,3269,5152, 853, +3051,3121,1244,4251,1895, 364,1499,1540,2313,1180,3655,2268, 562, 715,2417,3061, + 544, 336,3768,2380,1752,4075, 950, 280,2425,4382, 183,2759,3272, 333,4297,2155, +1688,2356,1444,1039,4540, 736,1177,3349,2443,2368,2144,2225, 565, 196,1482,3406, + 927,1335,4147, 692, 878,1311,1653,3911,3622,1378,4200,1840,2969,3149,2126,1816, +2534,1546,2393,2760, 737,2494, 13, 447, 245,2747, 38,2765,2129,2589,1079, 606, + 360, 471,3755,2890, 404, 848, 699,1785,1236, 370,2221,1023,3746,2074,2026,2023, +2388,1581,2119, 812,1141,3091,2536,1519, 804,2053, 406,1596,1090, 784, 548,4414, +1806,2264,2936,1100, 343,4114,5096, 622,3358, 743,3668,1510,1626,5020,3567,2513, +3195,4115,5627,2489,2991, 24,2065,2697,1087,2719, 48,1634, 315, 68, 985,2052, + 198,2239,1347,1107,1439, 597,2366,2172, 871,3307, 919,2487,2790,1867, 236,2570, +1413,3794, 906,3365,3381,1701,1982,1818,1524,2924,1205, 616,2586,2072,2004, 575, + 253,3099, 32,1365,1182, 197,1714,2454,1201, 554,3388,3224,2748, 756,2587, 250, +2567,1507,1517,3529,1922,2761,2337,3416,1961,1677,2452,2238,3153, 615, 911,1506, +1474,2495,1265,1906,2749,3756,3280,2161, 898,2714,1759,3450,2243,2444, 563, 26, +3286,2266,3769,3344,2707,3677, 611,1402, 531,1028,2871,4548,1375, 261,2948, 835, +1190,4134, 353, 840,2684,1900,3082,1435,2109,1207,1674, 329,1872,2781,4055,2686, +2104, 608,3318,2423,2957,2768,1108,3739,3512,3271,3985,2203,1771,3520,1418,2054, +1681,1153, 225,1627,2929, 162,2050,2511,3687,1954, 124,1859,2431,1684,3032,2894, + 585,4805,3969,2869,2704,2088,2032,2095,3656,2635,4362,2209, 256, 518,2042,2105, +3777,3657, 643,2298,1148,1779, 190, 989,3544, 414, 11,2135,2063,2979,1471, 403, +3678, 126, 770,1563, 671,2499,3216,2877, 600,1179, 307,2805,4937,1268,1297,2694, + 252,4032,1448,1494,1331,1394, 127,2256, 222,1647,1035,1481,3056,1915,1048, 873, +3651, 210, 33,1608,2516, 200,1520, 415, 102, 0,3389,1287, 817, 91,3299,2940, + 836,1814, 549,2197,1396,1669,2987,3582,2297,2848,4528,1070, 687, 20,1819, 121, +1552,1364,1461,1968,2617,3540,2824,2083, 177, 948,4938,2291, 
110,4549,2066, 648, +3359,1755,2110,2114,4642,4845,1693,3937,3308,1257,1869,2123, 208,1804,3159,2992, +2531,2549,3361,2418,1350,2347,2800,2568,1291,2036,2680, 72, 842,1990, 212,1233, +1154,1586, 75,2027,3410,4900,1823,1337,2710,2676, 728,2810,1522,3026,4995, 157, + 755,1050,4022, 710, 785,1936,2194,2085,1406,2777,2400, 150,1250,4049,1206, 807, +1910, 534, 529,3309,1721,1660, 274, 39,2827, 661,2670,1578, 925,3248,3815,1094, +4278,4901,4252, 41,1150,3747,2572,2227,4501,3658,4902,3813,3357,3617,2884,2258, + 887, 538,4187,3199,1294,2439,3042,2329,2343,2497,1255, 107, 543,1527, 521,3478, +3568, 194,5062, 15, 961,3870,1241,1192,2664, 66,5215,3260,2111,1295,1127,2152, +3805,4135, 901,1164,1976, 398,1278, 530,1460, 748, 904,1054,1966,1426, 53,2909, + 509, 523,2279,1534, 536,1019, 239,1685, 460,2353, 673,1065,2401,3600,4298,2272, +1272,2363, 284,1753,3679,4064,1695, 81, 815,2677,2757,2731,1386, 859, 500,4221, +2190,2566, 757,1006,2519,2068,1166,1455, 337,2654,3203,1863,1682,1914,3025,1252, +1409,1366, 847, 714,2834,2038,3209, 964,2970,1901, 885,2553,1078,1756,3049, 301, +1572,3326, 688,2130,1996,2429,1805,1648,2930,3421,2750,3652,3088, 262,1158,1254, + 389,1641,1812, 526,1719, 923,2073,1073,1902, 468, 489,4625,1140, 857,2375,3070, +3319,2863, 380, 116,1328,2693,1161,2244, 273,1212,1884,2769,3011,1775,1142, 461, +3066,1200,2147,2212, 790, 702,2695,4222,1601,1058, 434,2338,5153,3640, 67,2360, +4099,2502, 618,3472,1329, 416,1132, 830,2782,1807,2653,3211,3510,1662, 192,2124, + 296,3979,1739,1611,3684, 23, 118, 324, 446,1239,1225, 293,2520,3814,3795,2535, +3116, 17,1074, 467,2692,2201, 387,2922, 45,1326,3055,1645,3659,2817, 958, 243, +1903,2320,1339,2825,1784,3289, 356, 576, 865,2315,2381,3377,3916,1088,3122,1713, +1655, 935, 628,4689,1034,1327, 441, 800, 720, 894,1979,2183,1528,5289,2702,1071, +4046,3572,2399,1571,3281, 79, 761,1103, 327, 134, 758,1899,1371,1615, 879, 442, + 215,2605,2579, 173,2048,2485,1057,2975,3317,1097,2253,3801,4263,1403,1650,2946, + 814,4968,3487,1548,2644,1567,1285, 2, 295,2636, 97, 946,3576, 832, 141,4257, +3273, 760,3821,3521,3156,2607, 949,1024,1733,1516,1803,1920,2125,2283,2665,3180, +1501,2064,3560,2171,1592, 803,3518,1416, 732,3897,4258,1363,1362,2458, 119,1427, + 602,1525,2608,1605,1639,3175, 694,3064, 10, 465, 76,2000,4846,4208, 444,3781, +1619,3353,2206,1273,3796, 740,2483, 320,1723,2377,3660,2619,1359,1137,1762,1724, +2345,2842,1850,1862, 912, 821,1866, 612,2625,1735,2573,3369,1093, 844, 89, 937, + 930,1424,3564,2413,2972,1004,3046,3019,2011, 711,3171,1452,4178, 428, 801,1943, + 432, 445,2811, 206,4136,1472, 730, 349, 73, 397,2802,2547, 998,1637,1167, 789, + 396,3217, 154,1218, 716,1120,1780,2819,4826,1931,3334,3762,2139,1215,2627, 552, +3664,3628,3232,1405,2383,3111,1356,2652,3577,3320,3101,1703, 640,1045,1370,1246, +4996, 371,1575,2436,1621,2210, 984,4033,1734,2638, 16,4529, 663,2755,3255,1451, +3917,2257,1253,1955,2234,1263,2951, 214,1229, 617, 485, 359,1831,1969, 473,2310, + 750,2058, 165, 80,2864,2419, 361,4344,2416,2479,1134, 796,3726,1266,2943, 860, +2715, 938, 390,2734,1313,1384, 248, 202, 877,1064,2854, 522,3907, 279,1602, 297, +2357, 395,3740, 137,2075, 944,4089,2584,1267,3802, 62,1533,2285, 178, 176, 780, +2440, 201,3707, 590, 478,1560,4354,2117,1075, 30, 74,4643,4004,1635,1441,2745, + 776,2596, 238,1077,1692,1912,2844, 605, 499,1742,3947, 241,3053, 980,1749, 936, +2640,4511,2582, 515,1543,2162,5322,2892,2993, 890,2148,1924, 665,1827,3581,1032, + 968,3163, 339,1044,1896, 270, 583,1791,1720,4367,1194,3488,3669, 43,2523,1657, + 163,2167, 290,1209,1622,3378, 
550, 634,2508,2510, 695,2634,2384,2512,1476,1414, + 220,1469,2341,2138,2852,3183,2900,4939,2865,3502,1211,3680, 854,3227,1299,2976, +3172, 186,2998,1459, 443,1067,3251,1495, 321,1932,3054, 909, 753,1410,1828, 436, +2441,1119,1587,3164,2186,1258, 227, 231,1425,1890,3200,3942, 247, 959, 725,5254, +2741, 577,2158,2079, 929, 120, 174, 838,2813, 591,1115, 417,2024, 40,3240,1536, +1037, 291,4151,2354, 632,1298,2406,2500,3535,1825,1846,3451, 205,1171, 345,4238, + 18,1163, 811, 685,2208,1217, 425,1312,1508,1175,4308,2552,1033, 587,1381,3059, +2984,3482, 340,1316,4023,3972, 792,3176, 519, 777,4690, 918, 933,4130,2981,3741, + 90,3360,2911,2200,5184,4550, 609,3079,2030, 272,3379,2736, 363,3881,1130,1447, + 286, 779, 357,1169,3350,3137,1630,1220,2687,2391, 747,1277,3688,2618,2682,2601, +1156,3196,5290,4034,3102,1689,3596,3128, 874, 219,2783, 798, 508,1843,2461, 269, +1658,1776,1392,1913,2983,3287,2866,2159,2372, 829,4076, 46,4253,2873,1889,1894, + 915,1834,1631,2181,2318, 298, 664,2818,3555,2735, 954,3228,3117, 527,3511,2173, + 681,2712,3033,2247,2346,3467,1652, 155,2164,3382, 113,1994, 450, 899, 494, 994, +1237,2958,1875,2336,1926,3727, 545,1577,1550, 633,3473, 204,1305,3072,2410,1956, +2471, 707,2134, 841,2195,2196,2663,3843,1026,4940, 990,3252,4997, 368,1092, 437, +3212,3258,1933,1829, 675,2977,2893, 412, 943,3723,4644,3294,3283,2230,2373,5154, +2389,2241,2661,2323,1404,2524, 593, 787, 677,3008,1275,2059, 438,2709,2609,2240, +2269,2246,1446, 36,1568,1373,3892,1574,2301,1456,3962, 693,2276,5216,2035,1143, +2720,1919,1797,1811,2763,4137,2597,1830,1699,1488,1198,2090, 424,1694, 312,3634, +3390,4179,3335,2252,1214, 561,1059,3243,2295,2561, 975,5155,2321,2751,3772, 472, +1537,3282,3398,1047,2077,2348,2878,1323,3340,3076, 690,2906, 51, 369, 170,3541, +1060,2187,2688,3670,2541,1083,1683, 928,3918, 459, 109,4427, 599,3744,4286, 143, +2101,2730,2490, 82,1588,3036,2121, 281,1860, 477,4035,1238,2812,3020,2716,3312, +1530,2188,2055,1317, 843, 636,1808,1173,3495, 649, 181,1002, 147,3641,1159,2414, +3750,2289,2795, 813,3123,2610,1136,4368, 5,3391,4541,2174, 420, 429,1728, 754, +1228,2115,2219, 347,2223,2733, 735,1518,3003,2355,3134,1764,3948,3329,1888,2424, +1001,1234,1972,3321,3363,1672,1021,1450,1584, 226, 765, 655,2526,3404,3244,2302, +3665, 731, 594,2184, 319,1576, 621, 658,2656,4299,2099,3864,1279,2071,2598,2739, + 795,3086,3699,3908,1707,2352,2402,1382,3136,2475,1465,4847,3496,3865,1085,3004, +2591,1084, 213,2287,1963,3565,2250, 822, 793,4574,3187,1772,1789,3050, 595,1484, +1959,2770,1080,2650, 456, 422,2996, 940,3322,4328,4345,3092,2742, 965,2784, 739, +4124, 952,1358,2498,2949,2565, 332,2698,2378, 660,2260,2473,4194,3856,2919, 535, +1260,2651,1208,1428,1300,1949,1303,2942, 433,2455,2450,1251,1946, 614,1269, 641, +1306,1810,2737,3078,2912, 564,2365,1419,1415,1497,4460,2367,2185,1379,3005,1307, +3218,2175,1897,3063, 682,1157,4040,4005,1712,1160,1941,1399, 394, 402,2952,1573, +1151,2986,2404, 862, 299,2033,1489,3006, 346, 171,2886,3401,1726,2932, 168,2533, + 47,2507,1030,3735,1145,3370,1395,1318,1579,3609,4560,2857,4116,1457,2529,1965, + 504,1036,2690,2988,2405, 745,5871, 849,2397,2056,3081, 863,2359,3857,2096, 99, +1397,1769,2300,4428,1643,3455,1978,1757,3718,1440, 35,4879,3742,1296,4228,2280, + 160,5063,1599,2013, 166, 520,3479,1646,3345,3012, 490,1937,1545,1264,2182,2505, +1096,1188,1369,1436,2421,1667,2792,2460,1270,2122, 727,3167,2143, 806,1706,1012, +1800,3037, 960,2218,1882, 805, 139,2456,1139,1521, 851,1052,3093,3089, 342,2039, + 744,5097,1468,1502,1585,2087, 223, 939, 326,2140,2577, 
892,2481,1623,4077, 982, +3708, 135,2131, 87,2503,3114,2326,1106, 876,1616, 547,2997,2831,2093,3441,4530, +4314, 9,3256,4229,4148, 659,1462,1986,1710,2046,2913,2231,4090,4880,5255,3392, +3274,1368,3689,4645,1477, 705,3384,3635,1068,1529,2941,1458,3782,1509, 100,1656, +2548, 718,2339, 408,1590,2780,3548,1838,4117,3719,1345,3530, 717,3442,2778,3220, +2898,1892,4590,3614,3371,2043,1998,1224,3483, 891, 635, 584,2559,3355, 733,1766, +1729,1172,3789,1891,2307, 781,2982,2271,1957,1580,5773,2633,2005,4195,3097,1535, +3213,1189,1934,5693,3262, 586,3118,1324,1598, 517,1564,2217,1868,1893,4445,3728, +2703,3139,1526,1787,1992,3882,2875,1549,1199,1056,2224,1904,2711,5098,4287, 338, +1993,3129,3489,2689,1809,2815,1997, 957,1855,3898,2550,3275,3057,1105,1319, 627, +1505,1911,1883,3526, 698,3629,3456,1833,1431, 746, 77,1261,2017,2296,1977,1885, + 125,1334,1600, 525,1798,1109,2222,1470,1945, 559,2236,1186,3443,2476,1929,1411, +2411,3135,1777,3372,2621,1841,1613,3229, 668,1430,1839,2643,2916, 195,1989,2671, +2358,1387, 629,3205,2293,5256,4439, 123,1310, 888,1879,4300,3021,3605,1003,1162, +3192,2910,2010, 140,2395,2859, 55,1082,2012,2901, 662, 419,2081,1438, 680,2774, +4654,3912,1620,1731,1625,5035,4065,2328, 512,1344, 802,5443,2163,2311,2537, 524, +3399, 98,1155,2103,1918,2606,3925,2816,1393,2465,1504,3773,2177,3963,1478,4346, + 180,1113,4655,3461,2028,1698, 833,2696,1235,1322,1594,4408,3623,3013,3225,2040, +3022, 541,2881, 607,3632,2029,1665,1219, 639,1385,1686,1099,2803,3231,1938,3188, +2858, 427, 676,2772,1168,2025, 454,3253,2486,3556, 230,1950, 580, 791,1991,1280, +1086,1974,2034, 630, 257,3338,2788,4903,1017, 86,4790, 966,2789,1995,1696,1131, + 259,3095,4188,1308, 179,1463,5257, 289,4107,1248, 42,3413,1725,2288, 896,1947, + 774,4474,4254, 604,3430,4264, 392,2514,2588, 452, 237,1408,3018, 988,4531,1970, +3034,3310, 540,2370,1562,1288,2990, 502,4765,1147, 4,1853,2708, 207, 294,2814, +4078,2902,2509, 684, 34,3105,3532,2551, 644, 709,2801,2344, 573,1727,3573,3557, +2021,1081,3100,4315,2100,3681, 199,2263,1837,2385, 146,3484,1195,2776,3949, 997, +1939,3973,1008,1091,1202,1962,1847,1149,4209,5444,1076, 493, 117,5400,2521, 972, +1490,2934,1796,4542,2374,1512,2933,2657, 413,2888,1135,2762,2314,2156,1355,2369, + 766,2007,2527,2170,3124,2491,2593,2632,4757,2437, 234,3125,3591,1898,1750,1376, +1942,3468,3138, 570,2127,2145,3276,4131, 962, 132,1445,4196, 19, 941,3624,3480, +3366,1973,1374,4461,3431,2629, 283,2415,2275, 808,2887,3620,2112,2563,1353,3610, + 955,1089,3103,1053, 96, 88,4097, 823,3808,1583, 399, 292,4091,3313, 421,1128, + 642,4006, 903,2539,1877,2082, 596, 29,4066,1790, 722,2157, 130, 995,1569, 769, +1485, 464, 513,2213, 288,1923,1101,2453,4316, 133, 486,2445, 50, 625, 487,2207, + 57, 423, 481,2962, 159,3729,1558, 491, 303, 482, 501, 240,2837, 112,3648,2392, +1783, 362, 8,3433,3422, 610,2793,3277,1390,1284,1654, 21,3823, 734, 367, 623, + 193, 287, 374,1009,1483, 816, 476, 313,2255,2340,1262,2150,2899,1146,2581, 782, +2116,1659,2018,1880, 255,3586,3314,1110,2867,2137,2564, 986,2767,5185,2006, 650, + 158, 926, 762, 881,3157,2717,2362,3587, 306,3690,3245,1542,3077,2427,1691,2478, +2118,2985,3490,2438, 539,2305, 983, 129,1754, 355,4201,2386, 827,2923, 104,1773, +2838,2771, 411,2905,3919, 376, 767, 122,1114, 828,2422,1817,3506, 266,3460,1007, +1609,4998, 945,2612,4429,2274, 726,1247,1964,2914,2199,2070,4002,4108, 657,3323, +1422, 579, 455,2764,4737,1222,2895,1670, 824,1223,1487,2525, 558, 861,3080, 598, +2659,2515,1967, 752,2583,2376,2214,4180, 977, 704,2464,4999,2622,4109,1210,2961, + 819,1541, 142,2284, 
44, 418, 457,1126,3730,4347,4626,1644,1876,3671,1864, 302, +1063,5694, 624, 723,1984,3745,1314,1676,2488,1610,1449,3558,3569,2166,2098, 409, +1011,2325,3704,2306, 818,1732,1383,1824,1844,3757, 999,2705,3497,1216,1423,2683, +2426,2954,2501,2726,2229,1475,2554,5064,1971,1794,1666,2014,1343, 783, 724, 191, +2434,1354,2220,5065,1763,2752,2472,4152, 131, 175,2885,3434, 92,1466,4920,2616, +3871,3872,3866, 128,1551,1632, 669,1854,3682,4691,4125,1230, 188,2973,3290,1302, +1213, 560,3266, 917, 763,3909,3249,1760, 868,1958, 764,1782,2097, 145,2277,3774, +4462, 64,1491,3062, 971,2132,3606,2442, 221,1226,1617, 218, 323,1185,3207,3147, + 571, 619,1473,1005,1744,2281, 449,1887,2396,3685, 275, 375,3816,1743,3844,3731, + 845,1983,2350,4210,1377, 773, 967,3499,3052,3743,2725,4007,1697,1022,3943,1464, +3264,2855,2722,1952,1029,2839,2467, 84,4383,2215, 820,1391,2015,2448,3672, 377, +1948,2168, 797,2545,3536,2578,2645, 94,2874,1678, 405,1259,3071, 771, 546,1315, + 470,1243,3083, 895,2468, 981, 969,2037, 846,4181, 653,1276,2928, 14,2594, 557, +3007,2474, 156, 902,1338,1740,2574, 537,2518, 973,2282,2216,2433,1928, 138,2903, +1293,2631,1612, 646,3457, 839,2935, 111, 496,2191,2847, 589,3186, 149,3994,2060, +4031,2641,4067,3145,1870, 37,3597,2136,1025,2051,3009,3383,3549,1121,1016,3261, +1301, 251,2446,2599,2153, 872,3246, 637, 334,3705, 831, 884, 921,3065,3140,4092, +2198,1944, 246,2964, 108,2045,1152,1921,2308,1031, 203,3173,4170,1907,3890, 810, +1401,2003,1690, 506, 647,1242,2828,1761,1649,3208,2249,1589,3709,2931,5156,1708, + 498, 666,2613, 834,3817,1231, 184,2851,1124, 883,3197,2261,3710,1765,1553,2658, +1178,2639,2351, 93,1193, 942,2538,2141,4402, 235,1821, 870,1591,2192,1709,1871, +3341,1618,4126,2595,2334, 603, 651, 69, 701, 268,2662,3411,2555,1380,1606, 503, + 448, 254,2371,2646, 574,1187,2309,1770, 322,2235,1292,1801, 305, 566,1133, 229, +2067,2057, 706, 167, 483,2002,2672,3295,1820,3561,3067, 316, 378,2746,3452,1112, + 136,1981, 507,1651,2917,1117, 285,4591, 182,2580,3522,1304, 335,3303,1835,2504, +1795,1792,2248, 674,1018,2106,2449,1857,2292,2845, 976,3047,1781,2600,2727,1389, +1281, 52,3152, 153, 265,3950, 672,3485,3951,4463, 430,1183, 365, 278,2169, 27, +1407,1336,2304, 209,1340,1730,2202,1852,2403,2883, 979,1737,1062, 631,2829,2542, +3876,2592, 825,2086,2226,3048,3625, 352,1417,3724, 542, 991, 431,1351,3938,1861, +2294, 826,1361,2927,3142,3503,1738, 463,2462,2723, 582,1916,1595,2808, 400,3845, +3891,2868,3621,2254, 58,2492,1123, 910,2160,2614,1372,1603,1196,1072,3385,1700, +3267,1980, 696, 480,2430, 920, 799,1570,2920,1951,2041,4047,2540,1321,4223,2469, +3562,2228,1271,2602, 401,2833,3351,2575,5157, 907,2312,1256, 410, 263,3507,1582, + 996, 678,1849,2316,1480, 908,3545,2237, 703,2322, 667,1826,2849,1531,2604,2999, +2407,3146,2151,2630,1786,3711, 469,3542, 497,3899,2409, 858, 837,4446,3393,1274, + 786, 620,1845,2001,3311, 484, 308,3367,1204,1815,3691,2332,1532,2557,1842,2020, +2724,1927,2333,4440, 567, 22,1673,2728,4475,1987,1858,1144,1597, 101,1832,3601, + 12, 974,3783,4391, 951,1412, 1,3720, 453,4608,4041, 528,1041,1027,3230,2628, +1129, 875,1051,3291,1203,2262,1069,2860,2799,2149,2615,3278, 144,1758,3040, 31, + 475,1680, 366,2685,3184, 311,1642,4008,2466,5036,1593,1493,2809, 216,1420,1668, + 233, 304,2128,3284, 232,1429,1768,1040,2008,3407,2740,2967,2543, 242,2133, 778, +1565,2022,2620, 505,2189,2756,1098,2273, 372,1614, 708, 553,2846,2094,2278, 169, +3626,2835,4161, 228,2674,3165, 809,1454,1309, 466,1705,1095, 900,3423, 880,2667, +3751,5258,2317,3109,2571,4317,2766,1503,1342, 866,4447,1118, 
63,2076, 314,1881, +1348,1061, 172, 978,3515,1747, 532, 511,3970, 6, 601, 905,2699,3300,1751, 276, +1467,3725,2668, 65,4239,2544,2779,2556,1604, 578,2451,1802, 992,2331,2624,1320, +3446, 713,1513,1013, 103,2786,2447,1661, 886,1702, 916, 654,3574,2031,1556, 751, +2178,2821,2179,1498,1538,2176, 271, 914,2251,2080,1325, 638,1953,2937,3877,2432, +2754, 95,3265,1716, 260,1227,4083, 775, 106,1357,3254, 426,1607, 555,2480, 772, +1985, 244,2546, 474, 495,1046,2611,1851,2061, 71,2089,1675,2590, 742,3758,2843, +3222,1433, 267,2180,2576,2826,2233,2092,3913,2435, 956,1745,3075, 856,2113,1116, + 451, 3,1988,2896,1398, 993,2463,1878,2049,1341,2718,2721,2870,2108, 712,2904, +4363,2753,2324, 277,2872,2349,2649, 384, 987, 435, 691,3000, 922, 164,3939, 652, +1500,1184,4153,2482,3373,2165,4848,2335,3775,3508,3154,2806,2830,1554,2102,1664, +2530,1434,2408, 893,1547,2623,3447,2832,2242,2532,3169,2856,3223,2078, 49,3770, +3469, 462, 318, 656,2259,3250,3069, 679,1629,2758, 344,1138,1104,3120,1836,1283, +3115,2154,1437,4448, 934, 759,1999, 794,2862,1038, 533,2560,1722,2342, 855,2626, +1197,1663,4476,3127, 85,4240,2528, 25,1111,1181,3673, 407,3470,4561,2679,2713, + 768,1925,2841,3986,1544,1165, 932, 373,1240,2146,1930,2673, 721,4766, 354,4333, + 391,2963, 187, 61,3364,1442,1102, 330,1940,1767, 341,3809,4118, 393,2496,2062, +2211, 105, 331, 300, 439, 913,1332, 626, 379,3304,1557, 328, 689,3952, 309,1555, + 931, 317,2517,3027, 325, 569, 686,2107,3084, 60,1042,1333,2794, 264,3177,4014, +1628, 258,3712, 7,4464,1176,1043,1778, 683, 114,1975, 78,1492, 383,1886, 510, + 386, 645,5291,2891,2069,3305,4138,3867,2939,2603,2493,1935,1066,1848,3588,1015, +1282,1289,4609, 697,1453,3044,2666,3611,1856,2412, 54, 719,1330, 568,3778,2459, +1748, 788, 492, 551,1191,1000, 488,3394,3763, 282,1799, 348,2016,1523,3155,2390, +1049, 382,2019,1788,1170, 729,2968,3523, 897,3926,2785,2938,3292, 350,2319,3238, +1718,1717,2655,3453,3143,4465, 161,2889,2980,2009,1421, 56,1908,1640,2387,2232, +1917,1874,2477,4921, 148, 83,3438, 592,4245,2882,1822,1055, 741, 115,1496,1624, + 381,1638,4592,1020, 516,3214, 458, 947,4575,1432, 211,1514,2926,1865,2142, 189, + 852,1221,1400,1486, 882,2299,4036, 351, 28,1122, 700,6479,6480,6481,6482,6483, #last 512 +) + diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/gb2312prober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/gb2312prober.py new file mode 100644 index 0000000..8446d2d --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/gb2312prober.py @@ -0,0 +1,46 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import GB2312DistributionAnalysis +from .mbcssm import GB2312_SM_MODEL + +class GB2312Prober(MultiByteCharSetProber): + def __init__(self): + super(GB2312Prober, self).__init__() + self.coding_sm = CodingStateMachine(GB2312_SM_MODEL) + self.distribution_analyzer = GB2312DistributionAnalysis() + self.reset() + + @property + def charset_name(self): + return "GB2312" + + @property + def language(self): + return "Chinese" diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/hebrewprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/hebrewprober.py new file mode 100644 index 0000000..b0e1bf4 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/hebrewprober.py @@ -0,0 +1,292 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Shy Shalom +# Portions created by the Initial Developer are Copyright (C) 2005 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetprober import CharSetProber +from .enums import ProbingState + +# This prober doesn't actually recognize a language or a charset. +# It is a helper prober for the use of the Hebrew model probers + +### General ideas of the Hebrew charset recognition ### +# +# Four main charsets exist in Hebrew: +# "ISO-8859-8" - Visual Hebrew +# "windows-1255" - Logical Hebrew +# "ISO-8859-8-I" - Logical Hebrew +# "x-mac-hebrew" - ?? Logical Hebrew ?? +# +# Both "ISO" charsets use a completely identical set of code points, whereas +# "windows-1255" and "x-mac-hebrew" are two different proper supersets of +# these code points. windows-1255 defines additional characters in the range +# 0x80-0x9F as some misc punctuation marks as well as some Hebrew-specific +# diacritics and additional 'Yiddish' ligature letters in the range 0xc0-0xd6. +# x-mac-hebrew defines similar additional code points but with a different +# mapping. +# +# As far as an average Hebrew text with no diacritics is concerned, all four +# charsets are identical with respect to code points. Meaning that for the +# main Hebrew alphabet, all four map the same values to all 27 Hebrew letters +# (including final letters). 
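+#
+# As a quick sanity check of that identity (a minimal sketch, not part of
+# the original Mozilla code): the 27 letters occupy 0xE0-0xFA in both
+# encodings, so with Python's built-in codecs the following assertion holds:
+#
+#     hebrew = bytes(range(0xE0, 0xFB))  # Alef through Tav, 27 code points
+#     assert hebrew.decode("ISO-8859-8") == hebrew.decode("windows-1255")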
+#
+# The dominant difference between these charsets is their directionality.
+# "Visual" directionality means that the text is ordered as if the renderer is
+# not aware of a BIDI rendering algorithm. The renderer sees the text and
+# draws it from left to right. The text itself when ordered naturally is read
+# backwards. A buffer of Visual Hebrew generally looks like so:
+# "[last word of first line spelled backwards] [whole line ordered backwards
+# and spelled backwards] [first word of first line spelled backwards]
+# [end of line] [last word of second line] ... etc' "
+# Adding punctuation marks, numbers and English text to visual text is
+# naturally also "visual" and from left to right.
+#
+# "Logical" directionality means the text is ordered "naturally" according to
+# the order it is read. It is the responsibility of the renderer to display
+# the text from right to left. A BIDI algorithm is used to place general
+# punctuation marks, numbers and English text in the text.
+#
+# Texts in x-mac-hebrew are almost impossible to find on the Internet. From
+# what little evidence I could find, it seems that its general directionality
+# is Logical.
+#
+# To sum up all of the above, the Hebrew probing mechanism knows about two
+# charsets:
+# Visual Hebrew - "ISO-8859-8" - backwards text - Words and sentences are
+#    backwards while line order is natural. For charset recognition purposes
+#    the line order is unimportant (In fact, for this implementation, even
+#    word order is unimportant).
+# Logical Hebrew - "windows-1255" - normal, naturally ordered text.
+#
+# "ISO-8859-8-I" is a subset of windows-1255 and doesn't need to be
+# specifically identified.
+# "x-mac-hebrew" is also identified as windows-1255. A text in x-mac-hebrew
+# that contains special punctuation marks or diacritics is displayed with
+# some unconverted characters showing as question marks. This problem might
+# be corrected using another model prober for x-mac-hebrew. Because
+# x-mac-hebrew texts are so rare, writing another model prober isn't
+# worth the effort and performance hit.
+#
+#### The Prober ####
+#
+# The prober is divided between two SBCharSetProbers and a HebrewProber,
+# all of which are managed, created, fed data, queried and deleted by the
+# SBCSGroupProber. The two SBCharSetProbers identify that the text is in
+# fact some kind of Hebrew, Logical or Visual. The final decision between
+# the two is made by the HebrewProber by combining final-letter scores
+# with the scores of the two SBCharSetProbers to produce a final answer.
+#
+# The SBCSGroupProber is responsible for stripping the original text of HTML
+# tags, English characters, numbers, low-ASCII punctuation characters, spaces
+# and new lines. It reduces any sequence of such characters to a single space.
+# The buffer fed to each prober in the SBCS group prober is pure text in
+# high-ASCII.
+# The two SBCharSetProbers (model probers) share the same language model:
+# Win1255Model.
+# The first SBCharSetProber uses the model normally as any other
+# SBCharSetProber does, to recognize windows-1255, upon which this model was
+# built. The second SBCharSetProber is told to make the pair-of-letter
+# lookup in the language model backwards. This in practice exactly simulates
+# a visual Hebrew model using the windows-1255 logical Hebrew model.
+#
+# The HebrewProber does not use any language model. All it does is look for
+# final-letter evidence suggesting the text is either logical Hebrew or visual
+# Hebrew.
Detached from the model probers, the results of the HebrewProber
+# alone are meaningless. HebrewProber always returns 0.00 as confidence
+# since it never identifies a charset by itself. Instead, the pointer to the
+# HebrewProber is passed to the model probers as a helper "Name Prober".
+# When the Group prober receives a positive identification from any prober,
+# it asks for the name of the charset identified. If the prober queried is a
+# Hebrew model prober, the model prober forwards the call to the
+# HebrewProber to make the final decision. In the HebrewProber, the
+# decision is made according to the final-letter scores it maintains and both
+# model probers' scores. The answer is returned in the form of the name of the
+# charset identified, either "windows-1255" or "ISO-8859-8".
+
+class HebrewProber(CharSetProber):
+    # windows-1255 / ISO-8859-8 code points of interest
+    FINAL_KAF = 0xea
+    NORMAL_KAF = 0xeb
+    FINAL_MEM = 0xed
+    NORMAL_MEM = 0xee
+    FINAL_NUN = 0xef
+    NORMAL_NUN = 0xf0
+    FINAL_PE = 0xf3
+    NORMAL_PE = 0xf4
+    FINAL_TSADI = 0xf5
+    NORMAL_TSADI = 0xf6
+
+    # Minimum Visual vs Logical final letter score difference.
+    # If the difference is below this, don't rely solely on the final letter score
+    # distance.
+    MIN_FINAL_CHAR_DISTANCE = 5
+
+    # Minimum Visual vs Logical model score difference.
+    # If the difference is below this, don't rely at all on the model score
+    # distance.
+    MIN_MODEL_DISTANCE = 0.01
+
+    VISUAL_HEBREW_NAME = "ISO-8859-8"
+    LOGICAL_HEBREW_NAME = "windows-1255"
+
+    def __init__(self):
+        super(HebrewProber, self).__init__()
+        self._final_char_logical_score = None
+        self._final_char_visual_score = None
+        self._prev = None
+        self._before_prev = None
+        self._logical_prober = None
+        self._visual_prober = None
+        self.reset()
+
+    def reset(self):
+        self._final_char_logical_score = 0
+        self._final_char_visual_score = 0
+        # The two last characters seen in the previous buffer,
+        # self._prev and self._before_prev, are initialized to space in order
+        # to simulate a word delimiter at the beginning of the data
+        self._prev = ' '
+        self._before_prev = ' '
+        # These probers are owned by the group prober.
+
+    def set_model_probers(self, logicalProber, visualProber):
+        self._logical_prober = logicalProber
+        self._visual_prober = visualProber
+
+    def is_final(self, c):
+        return c in [self.FINAL_KAF, self.FINAL_MEM, self.FINAL_NUN,
+                     self.FINAL_PE, self.FINAL_TSADI]
+
+    def is_non_final(self, c):
+        # The normal Tsadi is not a good Non-Final letter due to words like
+        # 'lechotet' (to chat) containing an apostrophe after the tsadi. This
+        # apostrophe is converted to a space in FilterWithoutEnglishLetters
+        # causing the Non-Final tsadi to appear at an end of a word even
+        # though this is not the case in the original text.
+        # The letters Pe and Kaf occasionally show the same weakness as
+        # Non-Final letters: words like 'Pop', 'Winamp' and 'Mubarak'
+        # legally end with a Non-Final Pe or Kaf. However, the
+        # benefit of these letters as Non-Final letters outweighs the damage
+        # since such words are quite rare.
+        return c in [self.NORMAL_KAF, self.NORMAL_MEM,
+                     self.NORMAL_NUN, self.NORMAL_PE]
+
+    def feed(self, byte_str):
+        # Final letter analysis for logical-visual decision.
+        # Look for evidence that the received buffer is either logical Hebrew
+        # or visual Hebrew.
+        # The following cases are checked:
+        # 1) A word longer than 1 letter, ending with a final letter.
This is + # an indication that the text is laid out "naturally" since the + # final letter really appears at the end. +1 for logical score. + # 2) A word longer than 1 letter, ending with a Non-Final letter. In + # normal Hebrew, words ending with Kaf, Mem, Nun, Pe or Tsadi, + # should not end with the Non-Final form of that letter. Exceptions + # to this rule are mentioned above in isNonFinal(). This is an + # indication that the text is laid out backwards. +1 for visual + # score + # 3) A word longer than 1 letter, starting with a final letter. Final + # letters should not appear at the beginning of a word. This is an + # indication that the text is laid out backwards. +1 for visual + # score. + # + # The visual score and logical score are accumulated throughout the + # text and are finally checked against each other in GetCharSetName(). + # No checking for final letters in the middle of words is done since + # that case is not an indication for either Logical or Visual text. + # + # We automatically filter out all 7-bit characters (replace them with + # spaces) so the word boundary detection works properly. [MAP] + + if self.state == ProbingState.NOT_ME: + # Both model probers say it's not them. No reason to continue. + return ProbingState.NOT_ME + + byte_str = self.filter_high_byte_only(byte_str) + + for cur in byte_str: + if cur == ' ': + # We stand on a space - a word just ended + if self._before_prev != ' ': + # next-to-last char was not a space so self._prev is not a + # 1 letter word + if self.is_final(self._prev): + # case (1) [-2:not space][-1:final letter][cur:space] + self._final_char_logical_score += 1 + elif self.is_non_final(self._prev): + # case (2) [-2:not space][-1:Non-Final letter][ + # cur:space] + self._final_char_visual_score += 1 + else: + # Not standing on a space + if ((self._before_prev == ' ') and + (self.is_final(self._prev)) and (cur != ' ')): + # case (3) [-2:space][-1:final letter][cur:not space] + self._final_char_visual_score += 1 + self._before_prev = self._prev + self._prev = cur + + # Forever detecting, till the end or until both model probers return + # ProbingState.NOT_ME (handled above) + return ProbingState.DETECTING + + @property + def charset_name(self): + # Make the decision: is it Logical or Visual? + # If the final letter score distance is dominant enough, rely on it. + finalsub = self._final_char_logical_score - self._final_char_visual_score + if finalsub >= self.MIN_FINAL_CHAR_DISTANCE: + return self.LOGICAL_HEBREW_NAME + if finalsub <= -self.MIN_FINAL_CHAR_DISTANCE: + return self.VISUAL_HEBREW_NAME + + # It's not dominant enough, try to rely on the model scores instead. + modelsub = (self._logical_prober.get_confidence() + - self._visual_prober.get_confidence()) + if modelsub > self.MIN_MODEL_DISTANCE: + return self.LOGICAL_HEBREW_NAME + if modelsub < -self.MIN_MODEL_DISTANCE: + return self.VISUAL_HEBREW_NAME + + # Still no good, back to final letter distance, maybe it'll save the + # day. + if finalsub < 0.0: + return self.VISUAL_HEBREW_NAME + + # (finalsub > 0 - Logical) or (don't know what to do) default to + # Logical. + return self.LOGICAL_HEBREW_NAME + + @property + def language(self): + return 'Hebrew' + + @property + def state(self): + # Remain active as long as any of the model probers are active. 
+        if (self._logical_prober.state == ProbingState.NOT_ME) and \
+           (self._visual_prober.state == ProbingState.NOT_ME):
+            return ProbingState.NOT_ME
+        return ProbingState.DETECTING
diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/jisfreq.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/jisfreq.py
new file mode 100644
index 0000000..83fc082
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/jisfreq.py
@@ -0,0 +1,325 @@
+######################## BEGIN LICENSE BLOCK ########################
+# The Original Code is Mozilla Communicator client code.
+#
+# The Initial Developer of the Original Code is
+# Netscape Communications Corporation.
+# Portions created by the Initial Developer are Copyright (C) 1998
+# the Initial Developer. All Rights Reserved.
+#
+# Contributor(s):
+#   Mark Pilgrim - port to Python
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+# 02110-1301  USA
+######################### END LICENSE BLOCK #########################
+
+# Sampled from about 20M of text material, including literature and computer technology
+#
+# Japanese frequency table, applied to both S-JIS and EUC-JP
+# They are sorted in order of frequency.
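+#
+# How such a table is consumed (a rough sketch of the idea behind
+# chardistribution.py, not a verbatim copy of it): each decoded character is
+# mapped through the table to its frequency order, and characters whose
+# order falls within the top 512 count as "frequent". Confidence is then
+# derived from roughly
+#
+#     freq_chars / ((total_chars - freq_chars) * JIS_TYPICAL_DISTRIBUTION_RATIO)
+#
+# capped just below 1.0, so genuinely Japanese text (whose ratio approaches
+# the Ideal Distribution Ratio below) scores high, while random bytes
+# (whose ratio stays near the Random Distribution Ratio) do not.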
+ +# 128 --> 0.77094 +# 256 --> 0.85710 +# 512 --> 0.92635 +# 1024 --> 0.97130 +# 2048 --> 0.99431 +# +# Ideal Distribution Ratio = 0.92635 / (1-0.92635) = 12.58 +# Random Distribution Ration = 512 / (2965+62+83+86-512) = 0.191 +# +# Typical Distribution Ratio, 25% of IDR + +JIS_TYPICAL_DISTRIBUTION_RATIO = 3.0 + +# Char to FreqOrder table , +JIS_TABLE_SIZE = 4368 + +JIS_CHAR_TO_FREQ_ORDER = ( + 40, 1, 6, 182, 152, 180, 295,2127, 285, 381,3295,4304,3068,4606,3165,3510, # 16 +3511,1822,2785,4607,1193,2226,5070,4608, 171,2996,1247, 18, 179,5071, 856,1661, # 32 +1262,5072, 619, 127,3431,3512,3230,1899,1700, 232, 228,1294,1298, 284, 283,2041, # 48 +2042,1061,1062, 48, 49, 44, 45, 433, 434,1040,1041, 996, 787,2997,1255,4305, # 64 +2108,4609,1684,1648,5073,5074,5075,5076,5077,5078,3687,5079,4610,5080,3927,3928, # 80 +5081,3296,3432, 290,2285,1471,2187,5082,2580,2825,1303,2140,1739,1445,2691,3375, # 96 +1691,3297,4306,4307,4611, 452,3376,1182,2713,3688,3069,4308,5083,5084,5085,5086, # 112 +5087,5088,5089,5090,5091,5092,5093,5094,5095,5096,5097,5098,5099,5100,5101,5102, # 128 +5103,5104,5105,5106,5107,5108,5109,5110,5111,5112,4097,5113,5114,5115,5116,5117, # 144 +5118,5119,5120,5121,5122,5123,5124,5125,5126,5127,5128,5129,5130,5131,5132,5133, # 160 +5134,5135,5136,5137,5138,5139,5140,5141,5142,5143,5144,5145,5146,5147,5148,5149, # 176 +5150,5151,5152,4612,5153,5154,5155,5156,5157,5158,5159,5160,5161,5162,5163,5164, # 192 +5165,5166,5167,5168,5169,5170,5171,5172,5173,5174,5175,1472, 598, 618, 820,1205, # 208 +1309,1412,1858,1307,1692,5176,5177,5178,5179,5180,5181,5182,1142,1452,1234,1172, # 224 +1875,2043,2149,1793,1382,2973, 925,2404,1067,1241, 960,1377,2935,1491, 919,1217, # 240 +1865,2030,1406,1499,2749,4098,5183,5184,5185,5186,5187,5188,2561,4099,3117,1804, # 256 +2049,3689,4309,3513,1663,5189,3166,3118,3298,1587,1561,3433,5190,3119,1625,2998, # 272 +3299,4613,1766,3690,2786,4614,5191,5192,5193,5194,2161, 26,3377, 2,3929, 20, # 288 +3691, 47,4100, 50, 17, 16, 35, 268, 27, 243, 42, 155, 24, 154, 29, 184, # 304 + 4, 91, 14, 92, 53, 396, 33, 289, 9, 37, 64, 620, 21, 39, 321, 5, # 320 + 12, 11, 52, 13, 3, 208, 138, 0, 7, 60, 526, 141, 151,1069, 181, 275, # 336 +1591, 83, 132,1475, 126, 331, 829, 15, 69, 160, 59, 22, 157, 55,1079, 312, # 352 + 109, 38, 23, 25, 10, 19, 79,5195, 61, 382,1124, 8, 30,5196,5197,5198, # 368 +5199,5200,5201,5202,5203,5204,5205,5206, 89, 62, 74, 34,2416, 112, 139, 196, # 384 + 271, 149, 84, 607, 131, 765, 46, 88, 153, 683, 76, 874, 101, 258, 57, 80, # 400 + 32, 364, 121,1508, 169,1547, 68, 235, 145,2999, 41, 360,3027, 70, 63, 31, # 416 + 43, 259, 262,1383, 99, 533, 194, 66, 93, 846, 217, 192, 56, 106, 58, 565, # 432 + 280, 272, 311, 256, 146, 82, 308, 71, 100, 128, 214, 655, 110, 261, 104,1140, # 448 + 54, 51, 36, 87, 67,3070, 185,2618,2936,2020, 28,1066,2390,2059,5207,5208, # 464 +5209,5210,5211,5212,5213,5214,5215,5216,4615,5217,5218,5219,5220,5221,5222,5223, # 480 +5224,5225,5226,5227,5228,5229,5230,5231,5232,5233,5234,5235,5236,3514,5237,5238, # 496 +5239,5240,5241,5242,5243,5244,2297,2031,4616,4310,3692,5245,3071,5246,3598,5247, # 512 +4617,3231,3515,5248,4101,4311,4618,3808,4312,4102,5249,4103,4104,3599,5250,5251, # 528 +5252,5253,5254,5255,5256,5257,5258,5259,5260,5261,5262,5263,5264,5265,5266,5267, # 544 +5268,5269,5270,5271,5272,5273,5274,5275,5276,5277,5278,5279,5280,5281,5282,5283, # 560 +5284,5285,5286,5287,5288,5289,5290,5291,5292,5293,5294,5295,5296,5297,5298,5299, # 576 +5300,5301,5302,5303,5304,5305,5306,5307,5308,5309,5310,5311,5312,5313,5314,5315, # 592 
+5316,5317,5318,5319,5320,5321,5322,5323,5324,5325,5326,5327,5328,5329,5330,5331, # 608 +5332,5333,5334,5335,5336,5337,5338,5339,5340,5341,5342,5343,5344,5345,5346,5347, # 624 +5348,5349,5350,5351,5352,5353,5354,5355,5356,5357,5358,5359,5360,5361,5362,5363, # 640 +5364,5365,5366,5367,5368,5369,5370,5371,5372,5373,5374,5375,5376,5377,5378,5379, # 656 +5380,5381, 363, 642,2787,2878,2788,2789,2316,3232,2317,3434,2011, 165,1942,3930, # 672 +3931,3932,3933,5382,4619,5383,4620,5384,5385,5386,5387,5388,5389,5390,5391,5392, # 688 +5393,5394,5395,5396,5397,5398,5399,5400,5401,5402,5403,5404,5405,5406,5407,5408, # 704 +5409,5410,5411,5412,5413,5414,5415,5416,5417,5418,5419,5420,5421,5422,5423,5424, # 720 +5425,5426,5427,5428,5429,5430,5431,5432,5433,5434,5435,5436,5437,5438,5439,5440, # 736 +5441,5442,5443,5444,5445,5446,5447,5448,5449,5450,5451,5452,5453,5454,5455,5456, # 752 +5457,5458,5459,5460,5461,5462,5463,5464,5465,5466,5467,5468,5469,5470,5471,5472, # 768 +5473,5474,5475,5476,5477,5478,5479,5480,5481,5482,5483,5484,5485,5486,5487,5488, # 784 +5489,5490,5491,5492,5493,5494,5495,5496,5497,5498,5499,5500,5501,5502,5503,5504, # 800 +5505,5506,5507,5508,5509,5510,5511,5512,5513,5514,5515,5516,5517,5518,5519,5520, # 816 +5521,5522,5523,5524,5525,5526,5527,5528,5529,5530,5531,5532,5533,5534,5535,5536, # 832 +5537,5538,5539,5540,5541,5542,5543,5544,5545,5546,5547,5548,5549,5550,5551,5552, # 848 +5553,5554,5555,5556,5557,5558,5559,5560,5561,5562,5563,5564,5565,5566,5567,5568, # 864 +5569,5570,5571,5572,5573,5574,5575,5576,5577,5578,5579,5580,5581,5582,5583,5584, # 880 +5585,5586,5587,5588,5589,5590,5591,5592,5593,5594,5595,5596,5597,5598,5599,5600, # 896 +5601,5602,5603,5604,5605,5606,5607,5608,5609,5610,5611,5612,5613,5614,5615,5616, # 912 +5617,5618,5619,5620,5621,5622,5623,5624,5625,5626,5627,5628,5629,5630,5631,5632, # 928 +5633,5634,5635,5636,5637,5638,5639,5640,5641,5642,5643,5644,5645,5646,5647,5648, # 944 +5649,5650,5651,5652,5653,5654,5655,5656,5657,5658,5659,5660,5661,5662,5663,5664, # 960 +5665,5666,5667,5668,5669,5670,5671,5672,5673,5674,5675,5676,5677,5678,5679,5680, # 976 +5681,5682,5683,5684,5685,5686,5687,5688,5689,5690,5691,5692,5693,5694,5695,5696, # 992 +5697,5698,5699,5700,5701,5702,5703,5704,5705,5706,5707,5708,5709,5710,5711,5712, # 1008 +5713,5714,5715,5716,5717,5718,5719,5720,5721,5722,5723,5724,5725,5726,5727,5728, # 1024 +5729,5730,5731,5732,5733,5734,5735,5736,5737,5738,5739,5740,5741,5742,5743,5744, # 1040 +5745,5746,5747,5748,5749,5750,5751,5752,5753,5754,5755,5756,5757,5758,5759,5760, # 1056 +5761,5762,5763,5764,5765,5766,5767,5768,5769,5770,5771,5772,5773,5774,5775,5776, # 1072 +5777,5778,5779,5780,5781,5782,5783,5784,5785,5786,5787,5788,5789,5790,5791,5792, # 1088 +5793,5794,5795,5796,5797,5798,5799,5800,5801,5802,5803,5804,5805,5806,5807,5808, # 1104 +5809,5810,5811,5812,5813,5814,5815,5816,5817,5818,5819,5820,5821,5822,5823,5824, # 1120 +5825,5826,5827,5828,5829,5830,5831,5832,5833,5834,5835,5836,5837,5838,5839,5840, # 1136 +5841,5842,5843,5844,5845,5846,5847,5848,5849,5850,5851,5852,5853,5854,5855,5856, # 1152 +5857,5858,5859,5860,5861,5862,5863,5864,5865,5866,5867,5868,5869,5870,5871,5872, # 1168 +5873,5874,5875,5876,5877,5878,5879,5880,5881,5882,5883,5884,5885,5886,5887,5888, # 1184 +5889,5890,5891,5892,5893,5894,5895,5896,5897,5898,5899,5900,5901,5902,5903,5904, # 1200 +5905,5906,5907,5908,5909,5910,5911,5912,5913,5914,5915,5916,5917,5918,5919,5920, # 1216 +5921,5922,5923,5924,5925,5926,5927,5928,5929,5930,5931,5932,5933,5934,5935,5936, # 1232 
+5937,5938,5939,5940,5941,5942,5943,5944,5945,5946,5947,5948,5949,5950,5951,5952, # 1248 +5953,5954,5955,5956,5957,5958,5959,5960,5961,5962,5963,5964,5965,5966,5967,5968, # 1264 +5969,5970,5971,5972,5973,5974,5975,5976,5977,5978,5979,5980,5981,5982,5983,5984, # 1280 +5985,5986,5987,5988,5989,5990,5991,5992,5993,5994,5995,5996,5997,5998,5999,6000, # 1296 +6001,6002,6003,6004,6005,6006,6007,6008,6009,6010,6011,6012,6013,6014,6015,6016, # 1312 +6017,6018,6019,6020,6021,6022,6023,6024,6025,6026,6027,6028,6029,6030,6031,6032, # 1328 +6033,6034,6035,6036,6037,6038,6039,6040,6041,6042,6043,6044,6045,6046,6047,6048, # 1344 +6049,6050,6051,6052,6053,6054,6055,6056,6057,6058,6059,6060,6061,6062,6063,6064, # 1360 +6065,6066,6067,6068,6069,6070,6071,6072,6073,6074,6075,6076,6077,6078,6079,6080, # 1376 +6081,6082,6083,6084,6085,6086,6087,6088,6089,6090,6091,6092,6093,6094,6095,6096, # 1392 +6097,6098,6099,6100,6101,6102,6103,6104,6105,6106,6107,6108,6109,6110,6111,6112, # 1408 +6113,6114,2044,2060,4621, 997,1235, 473,1186,4622, 920,3378,6115,6116, 379,1108, # 1424 +4313,2657,2735,3934,6117,3809, 636,3233, 573,1026,3693,3435,2974,3300,2298,4105, # 1440 + 854,2937,2463, 393,2581,2417, 539, 752,1280,2750,2480, 140,1161, 440, 708,1569, # 1456 + 665,2497,1746,1291,1523,3000, 164,1603, 847,1331, 537,1997, 486, 508,1693,2418, # 1472 +1970,2227, 878,1220, 299,1030, 969, 652,2751, 624,1137,3301,2619, 65,3302,2045, # 1488 +1761,1859,3120,1930,3694,3516, 663,1767, 852, 835,3695, 269, 767,2826,2339,1305, # 1504 + 896,1150, 770,1616,6118, 506,1502,2075,1012,2519, 775,2520,2975,2340,2938,4314, # 1520 +3028,2086,1224,1943,2286,6119,3072,4315,2240,1273,1987,3935,1557, 175, 597, 985, # 1536 +3517,2419,2521,1416,3029, 585, 938,1931,1007,1052,1932,1685,6120,3379,4316,4623, # 1552 + 804, 599,3121,1333,2128,2539,1159,1554,2032,3810, 687,2033,2904, 952, 675,1467, # 1568 +3436,6121,2241,1096,1786,2440,1543,1924, 980,1813,2228, 781,2692,1879, 728,1918, # 1584 +3696,4624, 548,1950,4625,1809,1088,1356,3303,2522,1944, 502, 972, 373, 513,2827, # 1600 + 586,2377,2391,1003,1976,1631,6122,2464,1084, 648,1776,4626,2141, 324, 962,2012, # 1616 +2177,2076,1384, 742,2178,1448,1173,1810, 222, 102, 301, 445, 125,2420, 662,2498, # 1632 + 277, 200,1476,1165,1068, 224,2562,1378,1446, 450,1880, 659, 791, 582,4627,2939, # 1648 +3936,1516,1274, 555,2099,3697,1020,1389,1526,3380,1762,1723,1787,2229, 412,2114, # 1664 +1900,2392,3518, 512,2597, 427,1925,2341,3122,1653,1686,2465,2499, 697, 330, 273, # 1680 + 380,2162, 951, 832, 780, 991,1301,3073, 965,2270,3519, 668,2523,2636,1286, 535, # 1696 +1407, 518, 671, 957,2658,2378, 267, 611,2197,3030,6123, 248,2299, 967,1799,2356, # 1712 + 850,1418,3437,1876,1256,1480,2828,1718,6124,6125,1755,1664,2405,6126,4628,2879, # 1728 +2829, 499,2179, 676,4629, 557,2329,2214,2090, 325,3234, 464, 811,3001, 992,2342, # 1744 +2481,1232,1469, 303,2242, 466,1070,2163, 603,1777,2091,4630,2752,4631,2714, 322, # 1760 +2659,1964,1768, 481,2188,1463,2330,2857,3600,2092,3031,2421,4632,2318,2070,1849, # 1776 +2598,4633,1302,2254,1668,1701,2422,3811,2905,3032,3123,2046,4106,1763,1694,4634, # 1792 +1604, 943,1724,1454, 917, 868,2215,1169,2940, 552,1145,1800,1228,1823,1955, 316, # 1808 +1080,2510, 361,1807,2830,4107,2660,3381,1346,1423,1134,4108,6127, 541,1263,1229, # 1824 +1148,2540, 545, 465,1833,2880,3438,1901,3074,2482, 816,3937, 713,1788,2500, 122, # 1840 +1575, 195,1451,2501,1111,6128, 859, 374,1225,2243,2483,4317, 390,1033,3439,3075, # 1856 +2524,1687, 266, 793,1440,2599, 946, 779, 802, 507, 897,1081, 528,2189,1292, 711, # 
1872 +1866,1725,1167,1640, 753, 398,2661,1053, 246, 348,4318, 137,1024,3440,1600,2077, # 1888 +2129, 825,4319, 698, 238, 521, 187,2300,1157,2423,1641,1605,1464,1610,1097,2541, # 1904 +1260,1436, 759,2255,1814,2150, 705,3235, 409,2563,3304, 561,3033,2005,2564, 726, # 1920 +1956,2343,3698,4109, 949,3812,3813,3520,1669, 653,1379,2525, 881,2198, 632,2256, # 1936 +1027, 778,1074, 733,1957, 514,1481,2466, 554,2180, 702,3938,1606,1017,1398,6129, # 1952 +1380,3521, 921, 993,1313, 594, 449,1489,1617,1166, 768,1426,1360, 495,1794,3601, # 1968 +1177,3602,1170,4320,2344, 476, 425,3167,4635,3168,1424, 401,2662,1171,3382,1998, # 1984 +1089,4110, 477,3169, 474,6130,1909, 596,2831,1842, 494, 693,1051,1028,1207,3076, # 2000 + 606,2115, 727,2790,1473,1115, 743,3522, 630, 805,1532,4321,2021, 366,1057, 838, # 2016 + 684,1114,2142,4322,2050,1492,1892,1808,2271,3814,2424,1971,1447,1373,3305,1090, # 2032 +1536,3939,3523,3306,1455,2199, 336, 369,2331,1035, 584,2393, 902, 718,2600,6131, # 2048 +2753, 463,2151,1149,1611,2467, 715,1308,3124,1268, 343,1413,3236,1517,1347,2663, # 2064 +2093,3940,2022,1131,1553,2100,2941,1427,3441,2942,1323,2484,6132,1980, 872,2368, # 2080 +2441,2943, 320,2369,2116,1082, 679,1933,3941,2791,3815, 625,1143,2023, 422,2200, # 2096 +3816,6133, 730,1695, 356,2257,1626,2301,2858,2637,1627,1778, 937, 883,2906,2693, # 2112 +3002,1769,1086, 400,1063,1325,3307,2792,4111,3077, 456,2345,1046, 747,6134,1524, # 2128 + 884,1094,3383,1474,2164,1059, 974,1688,2181,2258,1047, 345,1665,1187, 358, 875, # 2144 +3170, 305, 660,3524,2190,1334,1135,3171,1540,1649,2542,1527, 927, 968,2793, 885, # 2160 +1972,1850, 482, 500,2638,1218,1109,1085,2543,1654,2034, 876, 78,2287,1482,1277, # 2176 + 861,1675,1083,1779, 724,2754, 454, 397,1132,1612,2332, 893, 672,1237, 257,2259, # 2192 +2370, 135,3384, 337,2244, 547, 352, 340, 709,2485,1400, 788,1138,2511, 540, 772, # 2208 +1682,2260,2272,2544,2013,1843,1902,4636,1999,1562,2288,4637,2201,1403,1533, 407, # 2224 + 576,3308,1254,2071, 978,3385, 170, 136,1201,3125,2664,3172,2394, 213, 912, 873, # 2240 +3603,1713,2202, 699,3604,3699, 813,3442, 493, 531,1054, 468,2907,1483, 304, 281, # 2256 +4112,1726,1252,2094, 339,2319,2130,2639, 756,1563,2944, 748, 571,2976,1588,2425, # 2272 +2715,1851,1460,2426,1528,1392,1973,3237, 288,3309, 685,3386, 296, 892,2716,2216, # 2288 +1570,2245, 722,1747,2217, 905,3238,1103,6135,1893,1441,1965, 251,1805,2371,3700, # 2304 +2601,1919,1078, 75,2182,1509,1592,1270,2640,4638,2152,6136,3310,3817, 524, 706, # 2320 +1075, 292,3818,1756,2602, 317, 98,3173,3605,3525,1844,2218,3819,2502, 814, 567, # 2336 + 385,2908,1534,6137, 534,1642,3239, 797,6138,1670,1529, 953,4323, 188,1071, 538, # 2352 + 178, 729,3240,2109,1226,1374,2000,2357,2977, 731,2468,1116,2014,2051,6139,1261, # 2368 +1593, 803,2859,2736,3443, 556, 682, 823,1541,6140,1369,2289,1706,2794, 845, 462, # 2384 +2603,2665,1361, 387, 162,2358,1740, 739,1770,1720,1304,1401,3241,1049, 627,1571, # 2400 +2427,3526,1877,3942,1852,1500, 431,1910,1503, 677, 297,2795, 286,1433,1038,1198, # 2416 +2290,1133,1596,4113,4639,2469,1510,1484,3943,6141,2442, 108, 712,4640,2372, 866, # 2432 +3701,2755,3242,1348, 834,1945,1408,3527,2395,3243,1811, 824, 994,1179,2110,1548, # 2448 +1453, 790,3003, 690,4324,4325,2832,2909,3820,1860,3821, 225,1748, 310, 346,1780, # 2464 +2470, 821,1993,2717,2796, 828, 877,3528,2860,2471,1702,2165,2910,2486,1789, 453, # 2480 + 359,2291,1676, 73,1164,1461,1127,3311, 421, 604, 314,1037, 589, 116,2487, 737, # 2496 + 837,1180, 111, 244, 735,6142,2261,1861,1362, 986, 523, 418, 581,2666,3822, 103, 
# 2512 + 855, 503,1414,1867,2488,1091, 657,1597, 979, 605,1316,4641,1021,2443,2078,2001, # 2528 +1209, 96, 587,2166,1032, 260,1072,2153, 173, 94, 226,3244, 819,2006,4642,4114, # 2544 +2203, 231,1744, 782, 97,2667, 786,3387, 887, 391, 442,2219,4326,1425,6143,2694, # 2560 + 633,1544,1202, 483,2015, 592,2052,1958,2472,1655, 419, 129,4327,3444,3312,1714, # 2576 +1257,3078,4328,1518,1098, 865,1310,1019,1885,1512,1734, 469,2444, 148, 773, 436, # 2592 +1815,1868,1128,1055,4329,1245,2756,3445,2154,1934,1039,4643, 579,1238, 932,2320, # 2608 + 353, 205, 801, 115,2428, 944,2321,1881, 399,2565,1211, 678, 766,3944, 335,2101, # 2624 +1459,1781,1402,3945,2737,2131,1010, 844, 981,1326,1013, 550,1816,1545,2620,1335, # 2640 +1008, 371,2881, 936,1419,1613,3529,1456,1395,2273,1834,2604,1317,2738,2503, 416, # 2656 +1643,4330, 806,1126, 229, 591,3946,1314,1981,1576,1837,1666, 347,1790, 977,3313, # 2672 + 764,2861,1853, 688,2429,1920,1462, 77, 595, 415,2002,3034, 798,1192,4115,6144, # 2688 +2978,4331,3035,2695,2582,2072,2566, 430,2430,1727, 842,1396,3947,3702, 613, 377, # 2704 + 278, 236,1417,3388,3314,3174, 757,1869, 107,3530,6145,1194, 623,2262, 207,1253, # 2720 +2167,3446,3948, 492,1117,1935, 536,1838,2757,1246,4332, 696,2095,2406,1393,1572, # 2736 +3175,1782, 583, 190, 253,1390,2230, 830,3126,3389, 934,3245,1703,1749,2979,1870, # 2752 +2545,1656,2204, 869,2346,4116,3176,1817, 496,1764,4644, 942,1504, 404,1903,1122, # 2768 +1580,3606,2945,1022, 515, 372,1735, 955,2431,3036,6146,2797,1110,2302,2798, 617, # 2784 +6147, 441, 762,1771,3447,3607,3608,1904, 840,3037, 86, 939,1385, 572,1370,2445, # 2800 +1336, 114,3703, 898, 294, 203,3315, 703,1583,2274, 429, 961,4333,1854,1951,3390, # 2816 +2373,3704,4334,1318,1381, 966,1911,2322,1006,1155, 309, 989, 458,2718,1795,1372, # 2832 +1203, 252,1689,1363,3177, 517,1936, 168,1490, 562, 193,3823,1042,4117,1835, 551, # 2848 + 470,4645, 395, 489,3448,1871,1465,2583,2641, 417,1493, 279,1295, 511,1236,1119, # 2864 + 72,1231,1982,1812,3004, 871,1564, 984,3449,1667,2696,2096,4646,2347,2833,1673, # 2880 +3609, 695,3246,2668, 807,1183,4647, 890, 388,2333,1801,1457,2911,1765,1477,1031, # 2896 +3316,3317,1278,3391,2799,2292,2526, 163,3450,4335,2669,1404,1802,6148,2323,2407, # 2912 +1584,1728,1494,1824,1269, 298, 909,3318,1034,1632, 375, 776,1683,2061, 291, 210, # 2928 +1123, 809,1249,1002,2642,3038, 206,1011,2132, 144, 975, 882,1565, 342, 667, 754, # 2944 +1442,2143,1299,2303,2062, 447, 626,2205,1221,2739,2912,1144,1214,2206,2584, 760, # 2960 +1715, 614, 950,1281,2670,2621, 810, 577,1287,2546,4648, 242,2168, 250,2643, 691, # 2976 + 123,2644, 647, 313,1029, 689,1357,2946,1650, 216, 771,1339,1306, 808,2063, 549, # 2992 + 913,1371,2913,2914,6149,1466,1092,1174,1196,1311,2605,2396,1783,1796,3079, 406, # 3008 +2671,2117,3949,4649, 487,1825,2220,6150,2915, 448,2348,1073,6151,2397,1707, 130, # 3024 + 900,1598, 329, 176,1959,2527,1620,6152,2275,4336,3319,1983,2191,3705,3610,2155, # 3040 +3706,1912,1513,1614,6153,1988, 646, 392,2304,1589,3320,3039,1826,1239,1352,1340, # 3056 +2916, 505,2567,1709,1437,2408,2547, 906,6154,2672, 384,1458,1594,1100,1329, 710, # 3072 + 423,3531,2064,2231,2622,1989,2673,1087,1882, 333, 841,3005,1296,2882,2379, 580, # 3088 +1937,1827,1293,2585, 601, 574, 249,1772,4118,2079,1120, 645, 901,1176,1690, 795, # 3104 +2207, 478,1434, 516,1190,1530, 761,2080, 930,1264, 355, 435,1552, 644,1791, 987, # 3120 + 220,1364,1163,1121,1538, 306,2169,1327,1222, 546,2645, 218, 241, 610,1704,3321, # 3136 +1984,1839,1966,2528, 451,6155,2586,3707,2568, 907,3178, 254,2947, 186,1845,4650, 
# 3152 + 745, 432,1757, 428,1633, 888,2246,2221,2489,3611,2118,1258,1265, 956,3127,1784, # 3168 +4337,2490, 319, 510, 119, 457,3612, 274,2035,2007,4651,1409,3128, 970,2758, 590, # 3184 +2800, 661,2247,4652,2008,3950,1420,1549,3080,3322,3951,1651,1375,2111, 485,2491, # 3200 +1429,1156,6156,2548,2183,1495, 831,1840,2529,2446, 501,1657, 307,1894,3247,1341, # 3216 + 666, 899,2156,1539,2549,1559, 886, 349,2208,3081,2305,1736,3824,2170,2759,1014, # 3232 +1913,1386, 542,1397,2948, 490, 368, 716, 362, 159, 282,2569,1129,1658,1288,1750, # 3248 +2674, 276, 649,2016, 751,1496, 658,1818,1284,1862,2209,2087,2512,3451, 622,2834, # 3264 + 376, 117,1060,2053,1208,1721,1101,1443, 247,1250,3179,1792,3952,2760,2398,3953, # 3280 +6157,2144,3708, 446,2432,1151,2570,3452,2447,2761,2835,1210,2448,3082, 424,2222, # 3296 +1251,2449,2119,2836, 504,1581,4338, 602, 817, 857,3825,2349,2306, 357,3826,1470, # 3312 +1883,2883, 255, 958, 929,2917,3248, 302,4653,1050,1271,1751,2307,1952,1430,2697, # 3328 +2719,2359, 354,3180, 777, 158,2036,4339,1659,4340,4654,2308,2949,2248,1146,2232, # 3344 +3532,2720,1696,2623,3827,6158,3129,1550,2698,1485,1297,1428, 637, 931,2721,2145, # 3360 + 914,2550,2587, 81,2450, 612, 827,2646,1242,4655,1118,2884, 472,1855,3181,3533, # 3376 +3534, 569,1353,2699,1244,1758,2588,4119,2009,2762,2171,3709,1312,1531,6159,1152, # 3392 +1938, 134,1830, 471,3710,2276,1112,1535,3323,3453,3535, 982,1337,2950, 488, 826, # 3408 + 674,1058,1628,4120,2017, 522,2399, 211, 568,1367,3454, 350, 293,1872,1139,3249, # 3424 +1399,1946,3006,1300,2360,3324, 588, 736,6160,2606, 744, 669,3536,3828,6161,1358, # 3440 + 199, 723, 848, 933, 851,1939,1505,1514,1338,1618,1831,4656,1634,3613, 443,2740, # 3456 +3829, 717,1947, 491,1914,6162,2551,1542,4121,1025,6163,1099,1223, 198,3040,2722, # 3472 + 370, 410,1905,2589, 998,1248,3182,2380, 519,1449,4122,1710, 947, 928,1153,4341, # 3488 +2277, 344,2624,1511, 615, 105, 161,1212,1076,1960,3130,2054,1926,1175,1906,2473, # 3504 + 414,1873,2801,6164,2309, 315,1319,3325, 318,2018,2146,2157, 963, 631, 223,4342, # 3520 +4343,2675, 479,3711,1197,2625,3712,2676,2361,6165,4344,4123,6166,2451,3183,1886, # 3536 +2184,1674,1330,1711,1635,1506, 799, 219,3250,3083,3954,1677,3713,3326,2081,3614, # 3552 +1652,2073,4657,1147,3041,1752, 643,1961, 147,1974,3955,6167,1716,2037, 918,3007, # 3568 +1994, 120,1537, 118, 609,3184,4345, 740,3455,1219, 332,1615,3830,6168,1621,2980, # 3584 +1582, 783, 212, 553,2350,3714,1349,2433,2082,4124, 889,6169,2310,1275,1410, 973, # 3600 + 166,1320,3456,1797,1215,3185,2885,1846,2590,2763,4658, 629, 822,3008, 763, 940, # 3616 +1990,2862, 439,2409,1566,1240,1622, 926,1282,1907,2764, 654,2210,1607, 327,1130, # 3632 +3956,1678,1623,6170,2434,2192, 686, 608,3831,3715, 903,3957,3042,6171,2741,1522, # 3648 +1915,1105,1555,2552,1359, 323,3251,4346,3457, 738,1354,2553,2311,2334,1828,2003, # 3664 +3832,1753,2351,1227,6172,1887,4125,1478,6173,2410,1874,1712,1847, 520,1204,2607, # 3680 + 264,4659, 836,2677,2102, 600,4660,3833,2278,3084,6174,4347,3615,1342, 640, 532, # 3696 + 543,2608,1888,2400,2591,1009,4348,1497, 341,1737,3616,2723,1394, 529,3252,1321, # 3712 + 983,4661,1515,2120, 971,2592, 924, 287,1662,3186,4349,2700,4350,1519, 908,1948, # 3728 +2452, 156, 796,1629,1486,2223,2055, 694,4126,1259,1036,3392,1213,2249,2742,1889, # 3744 +1230,3958,1015, 910, 408, 559,3617,4662, 746, 725, 935,4663,3959,3009,1289, 563, # 3760 + 867,4664,3960,1567,2981,2038,2626, 988,2263,2381,4351, 143,2374, 704,1895,6175, # 3776 +1188,3716,2088, 673,3085,2362,4352, 484,1608,1921,2765,2918, 215, 
904,3618,3537, # 3792 + 894, 509, 976,3043,2701,3961,4353,2837,2982, 498,6176,6177,1102,3538,1332,3393, # 3808 +1487,1636,1637, 233, 245,3962, 383, 650, 995,3044, 460,1520,1206,2352, 749,3327, # 3824 + 530, 700, 389,1438,1560,1773,3963,2264, 719,2951,2724,3834, 870,1832,1644,1000, # 3840 + 839,2474,3717, 197,1630,3394, 365,2886,3964,1285,2133, 734, 922, 818,1106, 732, # 3856 + 480,2083,1774,3458, 923,2279,1350, 221,3086, 85,2233,2234,3835,1585,3010,2147, # 3872 +1387,1705,2382,1619,2475, 133, 239,2802,1991,1016,2084,2383, 411,2838,1113, 651, # 3888 +1985,1160,3328, 990,1863,3087,1048,1276,2647, 265,2627,1599,3253,2056, 150, 638, # 3904 +2019, 656, 853, 326,1479, 680,1439,4354,1001,1759, 413,3459,3395,2492,1431, 459, # 3920 +4355,1125,3329,2265,1953,1450,2065,2863, 849, 351,2678,3131,3254,3255,1104,1577, # 3936 + 227,1351,1645,2453,2193,1421,2887, 812,2121, 634, 95,2435, 201,2312,4665,1646, # 3952 +1671,2743,1601,2554,2702,2648,2280,1315,1366,2089,3132,1573,3718,3965,1729,1189, # 3968 + 328,2679,1077,1940,1136, 558,1283, 964,1195, 621,2074,1199,1743,3460,3619,1896, # 3984 +1916,1890,3836,2952,1154,2112,1064, 862, 378,3011,2066,2113,2803,1568,2839,6178, # 4000 +3088,2919,1941,1660,2004,1992,2194, 142, 707,1590,1708,1624,1922,1023,1836,1233, # 4016 +1004,2313, 789, 741,3620,6179,1609,2411,1200,4127,3719,3720,4666,2057,3721, 593, # 4032 +2840, 367,2920,1878,6180,3461,1521, 628,1168, 692,2211,2649, 300, 720,2067,2571, # 4048 +2953,3396, 959,2504,3966,3539,3462,1977, 701,6181, 954,1043, 800, 681, 183,3722, # 4064 +1803,1730,3540,4128,2103, 815,2314, 174, 467, 230,2454,1093,2134, 755,3541,3397, # 4080 +1141,1162,6182,1738,2039, 270,3256,2513,1005,1647,2185,3837, 858,1679,1897,1719, # 4096 +2954,2324,1806, 402, 670, 167,4129,1498,2158,2104, 750,6183, 915, 189,1680,1551, # 4112 + 455,4356,1501,2455, 405,1095,2955, 338,1586,1266,1819, 570, 641,1324, 237,1556, # 4128 +2650,1388,3723,6184,1368,2384,1343,1978,3089,2436, 879,3724, 792,1191, 758,3012, # 4144 +1411,2135,1322,4357, 240,4667,1848,3725,1574,6185, 420,3045,1546,1391, 714,4358, # 4160 +1967, 941,1864, 863, 664, 426, 560,1731,2680,1785,2864,1949,2363, 403,3330,1415, # 4176 +1279,2136,1697,2335, 204, 721,2097,3838, 90,6186,2085,2505, 191,3967, 124,2148, # 4192 +1376,1798,1178,1107,1898,1405, 860,4359,1243,1272,2375,2983,1558,2456,1638, 113, # 4208 +3621, 578,1923,2609, 880, 386,4130, 784,2186,2266,1422,2956,2172,1722, 497, 263, # 4224 +2514,1267,2412,2610, 177,2703,3542, 774,1927,1344, 616,1432,1595,1018, 172,4360, # 4240 +2325, 911,4361, 438,1468,3622, 794,3968,2024,2173,1681,1829,2957, 945, 895,3090, # 4256 + 575,2212,2476, 475,2401,2681, 785,2744,1745,2293,2555,1975,3133,2865, 394,4668, # 4272 +3839, 635,4131, 639, 202,1507,2195,2766,1345,1435,2572,3726,1908,1184,1181,2457, # 4288 +3727,3134,4362, 843,2611, 437, 916,4669, 234, 769,1884,3046,3047,3623, 833,6187, # 4304 +1639,2250,2402,1355,1185,2010,2047, 999, 525,1732,1290,1488,2612, 948,1578,3728, # 4320 +2413,2477,1216,2725,2159, 334,3840,1328,3624,2921,1525,4132, 564,1056, 891,4363, # 4336 +1444,1698,2385,2251,3729,1365,2281,2235,1717,6188, 864,3841,2515, 444, 527,2767, # 4352 +2922,3625, 544, 461,6189, 566, 209,2437,3398,2098,1065,2068,3331,3626,3257,2137, # 4368 #last 512 +) + + diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/jpcntx.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/jpcntx.py new file mode 100644 index 0000000..20044e4 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/jpcntx.py @@ -0,0 +1,233 @@ 
+######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Communicator client code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + + +# This is hiragana 2-char sequence table, the number in each cell represents its frequency category +jp2CharContext = ( +(0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1), +(2,4,0,4,0,3,0,4,0,3,4,4,4,2,4,3,3,4,3,2,3,3,4,2,3,3,3,2,4,1,4,3,3,1,5,4,3,4,3,4,3,5,3,0,3,5,4,2,0,3,1,0,3,3,0,3,3,0,1,1,0,4,3,0,3,3,0,4,0,2,0,3,5,5,5,5,4,0,4,1,0,3,4), +(0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2), +(0,4,0,5,0,5,0,4,0,4,5,4,4,3,5,3,5,1,5,3,4,3,4,4,3,4,3,3,4,3,5,4,4,3,5,5,3,5,5,5,3,5,5,3,4,5,5,3,1,3,2,0,3,4,0,4,2,0,4,2,1,5,3,2,3,5,0,4,0,2,0,5,4,4,5,4,5,0,4,0,0,4,4), +(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), +(0,3,0,4,0,3,0,3,0,4,5,4,3,3,3,3,4,3,5,4,4,3,5,4,4,3,4,3,4,4,4,4,5,3,4,4,3,4,5,5,4,5,5,1,4,5,4,3,0,3,3,1,3,3,0,4,4,0,3,3,1,5,3,3,3,5,0,4,0,3,0,4,4,3,4,3,3,0,4,1,1,3,4), +(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), +(0,4,0,3,0,3,0,4,0,3,4,4,3,2,2,1,2,1,3,1,3,3,3,3,3,4,3,1,3,3,5,3,3,0,4,3,0,5,4,3,3,5,4,4,3,4,4,5,0,1,2,0,1,2,0,2,2,0,1,0,0,5,2,2,1,4,0,3,0,1,0,4,4,3,5,4,3,0,2,1,0,4,3), +(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), +(0,3,0,5,0,4,0,2,1,4,4,2,4,1,4,2,4,2,4,3,3,3,4,3,3,3,3,1,4,2,3,3,3,1,4,4,1,1,1,4,3,3,2,0,2,4,3,2,0,3,3,0,3,1,1,0,0,0,3,3,0,4,2,2,3,4,0,4,0,3,0,4,4,5,3,4,4,0,3,0,0,1,4), +(1,4,0,4,0,4,0,4,0,3,5,4,4,3,4,3,5,4,3,3,4,3,5,4,4,4,4,3,4,2,4,3,3,1,5,4,3,2,4,5,4,5,5,4,4,5,4,4,0,3,2,2,3,3,0,4,3,1,3,2,1,4,3,3,4,5,0,3,0,2,0,4,5,5,4,5,4,0,4,0,0,5,4), +(0,5,0,5,0,4,0,3,0,4,4,3,4,3,3,3,4,0,4,4,4,3,4,3,4,3,3,1,4,2,4,3,4,0,5,4,1,4,5,4,4,5,3,2,4,3,4,3,2,4,1,3,3,3,2,3,2,0,4,3,3,4,3,3,3,4,0,4,0,3,0,4,5,4,4,4,3,0,4,1,0,1,3), +(0,3,1,4,0,3,0,2,0,3,4,4,3,1,4,2,3,3,4,3,4,3,4,3,4,4,3,2,3,1,5,4,4,1,4,4,3,5,4,4,3,5,5,4,3,4,4,3,1,2,3,1,2,2,0,3,2,0,3,1,0,5,3,3,3,4,3,3,3,3,4,4,4,4,5,4,2,0,3,3,2,4,3), 
+(0,2,0,3,0,1,0,1,0,0,3,2,0,0,2,0,1,0,2,1,3,3,3,1,2,3,1,0,1,0,4,2,1,1,3,3,0,4,3,3,1,4,3,3,0,3,3,2,0,0,0,0,1,0,0,2,0,0,0,0,0,4,1,0,2,3,2,2,2,1,3,3,3,4,4,3,2,0,3,1,0,3,3), +(0,4,0,4,0,3,0,3,0,4,4,4,3,3,3,3,3,3,4,3,4,2,4,3,4,3,3,2,4,3,4,5,4,1,4,5,3,5,4,5,3,5,4,0,3,5,5,3,1,3,3,2,2,3,0,3,4,1,3,3,2,4,3,3,3,4,0,4,0,3,0,4,5,4,4,5,3,0,4,1,0,3,4), +(0,2,0,3,0,3,0,0,0,2,2,2,1,0,1,0,0,0,3,0,3,0,3,0,1,3,1,0,3,1,3,3,3,1,3,3,3,0,1,3,1,3,4,0,0,3,1,1,0,3,2,0,0,0,0,1,3,0,1,0,0,3,3,2,0,3,0,0,0,0,0,3,4,3,4,3,3,0,3,0,0,2,3), +(2,3,0,3,0,2,0,1,0,3,3,4,3,1,3,1,1,1,3,1,4,3,4,3,3,3,0,0,3,1,5,4,3,1,4,3,2,5,5,4,4,4,4,3,3,4,4,4,0,2,1,1,3,2,0,1,2,0,0,1,0,4,1,3,3,3,0,3,0,1,0,4,4,4,5,5,3,0,2,0,0,4,4), +(0,2,0,1,0,3,1,3,0,2,3,3,3,0,3,1,0,0,3,0,3,2,3,1,3,2,1,1,0,0,4,2,1,0,2,3,1,4,3,2,0,4,4,3,1,3,1,3,0,1,0,0,1,0,0,0,1,0,0,0,0,4,1,1,1,2,0,3,0,0,0,3,4,2,4,3,2,0,1,0,0,3,3), +(0,1,0,4,0,5,0,4,0,2,4,4,2,3,3,2,3,3,5,3,3,3,4,3,4,2,3,0,4,3,3,3,4,1,4,3,2,1,5,5,3,4,5,1,3,5,4,2,0,3,3,0,1,3,0,4,2,0,1,3,1,4,3,3,3,3,0,3,0,1,0,3,4,4,4,5,5,0,3,0,1,4,5), +(0,2,0,3,0,3,0,0,0,2,3,1,3,0,4,0,1,1,3,0,3,4,3,2,3,1,0,3,3,2,3,1,3,0,2,3,0,2,1,4,1,2,2,0,0,3,3,0,0,2,0,0,0,1,0,0,0,0,2,2,0,3,2,1,3,3,0,2,0,2,0,0,3,3,1,2,4,0,3,0,2,2,3), +(2,4,0,5,0,4,0,4,0,2,4,4,4,3,4,3,3,3,1,2,4,3,4,3,4,4,5,0,3,3,3,3,2,0,4,3,1,4,3,4,1,4,4,3,3,4,4,3,1,2,3,0,4,2,0,4,1,0,3,3,0,4,3,3,3,4,0,4,0,2,0,3,5,3,4,5,2,0,3,0,0,4,5), +(0,3,0,4,0,1,0,1,0,1,3,2,2,1,3,0,3,0,2,0,2,0,3,0,2,0,0,0,1,0,1,1,0,0,3,1,0,0,0,4,0,3,1,0,2,1,3,0,0,0,0,0,0,3,0,0,0,0,0,0,0,4,2,2,3,1,0,3,0,0,0,1,4,4,4,3,0,0,4,0,0,1,4), +(1,4,1,5,0,3,0,3,0,4,5,4,4,3,5,3,3,4,4,3,4,1,3,3,3,3,2,1,4,1,5,4,3,1,4,4,3,5,4,4,3,5,4,3,3,4,4,4,0,3,3,1,2,3,0,3,1,0,3,3,0,5,4,4,4,4,4,4,3,3,5,4,4,3,3,5,4,0,3,2,0,4,4), +(0,2,0,3,0,1,0,0,0,1,3,3,3,2,4,1,3,0,3,1,3,0,2,2,1,1,0,0,2,0,4,3,1,0,4,3,0,4,4,4,1,4,3,1,1,3,3,1,0,2,0,0,1,3,0,0,0,0,2,0,0,4,3,2,4,3,5,4,3,3,3,4,3,3,4,3,3,0,2,1,0,3,3), +(0,2,0,4,0,3,0,2,0,2,5,5,3,4,4,4,4,1,4,3,3,0,4,3,4,3,1,3,3,2,4,3,0,3,4,3,0,3,4,4,2,4,4,0,4,5,3,3,2,2,1,1,1,2,0,1,5,0,3,3,2,4,3,3,3,4,0,3,0,2,0,4,4,3,5,5,0,0,3,0,2,3,3), +(0,3,0,4,0,3,0,1,0,3,4,3,3,1,3,3,3,0,3,1,3,0,4,3,3,1,1,0,3,0,3,3,0,0,4,4,0,1,5,4,3,3,5,0,3,3,4,3,0,2,0,1,1,1,0,1,3,0,1,2,1,3,3,2,3,3,0,3,0,1,0,1,3,3,4,4,1,0,1,2,2,1,3), +(0,1,0,4,0,4,0,3,0,1,3,3,3,2,3,1,1,0,3,0,3,3,4,3,2,4,2,0,1,0,4,3,2,0,4,3,0,5,3,3,2,4,4,4,3,3,3,4,0,1,3,0,0,1,0,0,1,0,0,0,0,4,2,3,3,3,0,3,0,0,0,4,4,4,5,3,2,0,3,3,0,3,5), +(0,2,0,3,0,0,0,3,0,1,3,0,2,0,0,0,1,0,3,1,1,3,3,0,0,3,0,0,3,0,2,3,1,0,3,1,0,3,3,2,0,4,2,2,0,2,0,0,0,4,0,0,0,0,0,0,0,0,0,0,0,2,1,2,0,1,0,1,0,0,0,1,3,1,2,0,0,0,1,0,0,1,4), +(0,3,0,3,0,5,0,1,0,2,4,3,1,3,3,2,1,1,5,2,1,0,5,1,2,0,0,0,3,3,2,2,3,2,4,3,0,0,3,3,1,3,3,0,2,5,3,4,0,3,3,0,1,2,0,2,2,0,3,2,0,2,2,3,3,3,0,2,0,1,0,3,4,4,2,5,4,0,3,0,0,3,5), +(0,3,0,3,0,3,0,1,0,3,3,3,3,0,3,0,2,0,2,1,1,0,2,0,1,0,0,0,2,1,0,0,1,0,3,2,0,0,3,3,1,2,3,1,0,3,3,0,0,1,0,0,0,0,0,2,0,0,0,0,0,2,3,1,2,3,0,3,0,1,0,3,2,1,0,4,3,0,1,1,0,3,3), +(0,4,0,5,0,3,0,3,0,4,5,5,4,3,5,3,4,3,5,3,3,2,5,3,4,4,4,3,4,3,4,5,5,3,4,4,3,4,4,5,4,4,4,3,4,5,5,4,2,3,4,2,3,4,0,3,3,1,4,3,2,4,3,3,5,5,0,3,0,3,0,5,5,5,5,4,4,0,4,0,1,4,4), +(0,4,0,4,0,3,0,3,0,3,5,4,4,2,3,2,5,1,3,2,5,1,4,2,3,2,3,3,4,3,3,3,3,2,5,4,1,3,3,5,3,4,4,0,4,4,3,1,1,3,1,0,2,3,0,2,3,0,3,0,0,4,3,1,3,4,0,3,0,2,0,4,4,4,3,4,5,0,4,0,0,3,4), +(0,3,0,3,0,3,1,2,0,3,4,4,3,3,3,0,2,2,4,3,3,1,3,3,3,1,1,0,3,1,4,3,2,3,4,4,2,4,4,4,3,4,4,3,2,4,4,3,1,3,3,1,3,3,0,4,1,0,2,2,1,4,3,2,3,3,5,4,3,3,5,4,4,3,3,0,4,0,3,2,2,4,4), 
+(0,2,0,1,0,0,0,0,0,1,2,1,3,0,0,0,0,0,2,0,1,2,1,0,0,1,0,0,0,0,3,0,0,1,0,1,1,3,1,0,0,0,1,1,0,1,1,0,0,0,0,0,2,0,0,0,0,0,0,0,0,1,1,2,2,0,3,4,0,0,0,1,1,0,0,1,0,0,0,0,0,1,1), +(0,1,0,0,0,1,0,0,0,0,4,0,4,1,4,0,3,0,4,0,3,0,4,0,3,0,3,0,4,1,5,1,4,0,0,3,0,5,0,5,2,0,1,0,0,0,2,1,4,0,1,3,0,0,3,0,0,3,1,1,4,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0), +(1,4,0,5,0,3,0,2,0,3,5,4,4,3,4,3,5,3,4,3,3,0,4,3,3,3,3,3,3,2,4,4,3,1,3,4,4,5,4,4,3,4,4,1,3,5,4,3,3,3,1,2,2,3,3,1,3,1,3,3,3,5,3,3,4,5,0,3,0,3,0,3,4,3,4,4,3,0,3,0,2,4,3), +(0,1,0,4,0,0,0,0,0,1,4,0,4,1,4,2,4,0,3,0,1,0,1,0,0,0,0,0,2,0,3,1,1,1,0,3,0,0,0,1,2,1,0,0,1,1,1,1,0,1,0,0,0,1,0,0,3,0,0,0,0,3,2,0,2,2,0,1,0,0,0,2,3,2,3,3,0,0,0,0,2,1,0), +(0,5,1,5,0,3,0,3,0,5,4,4,5,1,5,3,3,0,4,3,4,3,5,3,4,3,3,2,4,3,4,3,3,0,3,3,1,4,4,3,4,4,4,3,4,5,5,3,2,3,1,1,3,3,1,3,1,1,3,3,2,4,5,3,3,5,0,4,0,3,0,4,4,3,5,3,3,0,3,4,0,4,3), +(0,5,0,5,0,3,0,2,0,4,4,3,5,2,4,3,3,3,4,4,4,3,5,3,5,3,3,1,4,0,4,3,3,0,3,3,0,4,4,4,4,5,4,3,3,5,5,3,2,3,1,2,3,2,0,1,0,0,3,2,2,4,4,3,1,5,0,4,0,3,0,4,3,1,3,2,1,0,3,3,0,3,3), +(0,4,0,5,0,5,0,4,0,4,5,5,5,3,4,3,3,2,5,4,4,3,5,3,5,3,4,0,4,3,4,4,3,2,4,4,3,4,5,4,4,5,5,0,3,5,5,4,1,3,3,2,3,3,1,3,1,0,4,3,1,4,4,3,4,5,0,4,0,2,0,4,3,4,4,3,3,0,4,0,0,5,5), +(0,4,0,4,0,5,0,1,1,3,3,4,4,3,4,1,3,0,5,1,3,0,3,1,3,1,1,0,3,0,3,3,4,0,4,3,0,4,4,4,3,4,4,0,3,5,4,1,0,3,0,0,2,3,0,3,1,0,3,1,0,3,2,1,3,5,0,3,0,1,0,3,2,3,3,4,4,0,2,2,0,4,4), +(2,4,0,5,0,4,0,3,0,4,5,5,4,3,5,3,5,3,5,3,5,2,5,3,4,3,3,4,3,4,5,3,2,1,5,4,3,2,3,4,5,3,4,1,2,5,4,3,0,3,3,0,3,2,0,2,3,0,4,1,0,3,4,3,3,5,0,3,0,1,0,4,5,5,5,4,3,0,4,2,0,3,5), +(0,5,0,4,0,4,0,2,0,5,4,3,4,3,4,3,3,3,4,3,4,2,5,3,5,3,4,1,4,3,4,4,4,0,3,5,0,4,4,4,4,5,3,1,3,4,5,3,3,3,3,3,3,3,0,2,2,0,3,3,2,4,3,3,3,5,3,4,1,3,3,5,3,2,0,0,0,0,4,3,1,3,3), +(0,1,0,3,0,3,0,1,0,1,3,3,3,2,3,3,3,0,3,0,0,0,3,1,3,0,0,0,2,2,2,3,0,0,3,2,0,1,2,4,1,3,3,0,0,3,3,3,0,1,0,0,2,1,0,0,3,0,3,1,0,3,0,0,1,3,0,2,0,1,0,3,3,1,3,3,0,0,1,1,0,3,3), +(0,2,0,3,0,2,1,4,0,2,2,3,1,1,3,1,1,0,2,0,3,1,2,3,1,3,0,0,1,0,4,3,2,3,3,3,1,4,2,3,3,3,3,1,0,3,1,4,0,1,1,0,1,2,0,1,1,0,1,1,0,3,1,3,2,2,0,1,0,0,0,2,3,3,3,1,0,0,0,0,0,2,3), +(0,5,0,4,0,5,0,2,0,4,5,5,3,3,4,3,3,1,5,4,4,2,4,4,4,3,4,2,4,3,5,5,4,3,3,4,3,3,5,5,4,5,5,1,3,4,5,3,1,4,3,1,3,3,0,3,3,1,4,3,1,4,5,3,3,5,0,4,0,3,0,5,3,3,1,4,3,0,4,0,1,5,3), +(0,5,0,5,0,4,0,2,0,4,4,3,4,3,3,3,3,3,5,4,4,4,4,4,4,5,3,3,5,2,4,4,4,3,4,4,3,3,4,4,5,5,3,3,4,3,4,3,3,4,3,3,3,3,1,2,2,1,4,3,3,5,4,4,3,4,0,4,0,3,0,4,4,4,4,4,1,0,4,2,0,2,4), +(0,4,0,4,0,3,0,1,0,3,5,2,3,0,3,0,2,1,4,2,3,3,4,1,4,3,3,2,4,1,3,3,3,0,3,3,0,0,3,3,3,5,3,3,3,3,3,2,0,2,0,0,2,0,0,2,0,0,1,0,0,3,1,2,2,3,0,3,0,2,0,4,4,3,3,4,1,0,3,0,0,2,4), +(0,0,0,4,0,0,0,0,0,0,1,0,1,0,2,0,0,0,0,0,1,0,2,0,1,0,0,0,0,0,3,1,3,0,3,2,0,0,0,1,0,3,2,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,0,2,0,0,0,0,0,0,2), +(0,2,1,3,0,2,0,2,0,3,3,3,3,1,3,1,3,3,3,3,3,3,4,2,2,1,2,1,4,0,4,3,1,3,3,3,2,4,3,5,4,3,3,3,3,3,3,3,0,1,3,0,2,0,0,1,0,0,1,0,0,4,2,0,2,3,0,3,3,0,3,3,4,2,3,1,4,0,1,2,0,2,3), +(0,3,0,3,0,1,0,3,0,2,3,3,3,0,3,1,2,0,3,3,2,3,3,2,3,2,3,1,3,0,4,3,2,0,3,3,1,4,3,3,2,3,4,3,1,3,3,1,1,0,1,1,0,1,0,1,0,1,0,0,0,4,1,1,0,3,0,3,1,0,2,3,3,3,3,3,1,0,0,2,0,3,3), +(0,0,0,0,0,0,0,0,0,0,3,0,2,0,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,0,3,0,3,1,0,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,2,0,2,3,0,0,0,0,0,0,0,0,3), +(0,2,0,3,1,3,0,3,0,2,3,3,3,1,3,1,3,1,3,1,3,3,3,1,3,0,2,3,1,1,4,3,3,2,3,3,1,2,2,4,1,3,3,0,1,4,2,3,0,1,3,0,3,0,0,1,3,0,2,0,0,3,3,2,1,3,0,3,0,2,0,3,4,4,4,3,1,0,3,0,0,3,3), 
+(0,2,0,1,0,2,0,0,0,1,3,2,2,1,3,0,1,1,3,0,3,2,3,1,2,0,2,0,1,1,3,3,3,0,3,3,1,1,2,3,2,3,3,1,2,3,2,0,0,1,0,0,0,0,0,0,3,0,1,0,0,2,1,2,1,3,0,3,0,0,0,3,4,4,4,3,2,0,2,0,0,2,4), +(0,0,0,1,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,2,2,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,3,1,0,0,0,0,0,0,0,3), +(0,3,0,3,0,2,0,3,0,3,3,3,2,3,2,2,2,0,3,1,3,3,3,2,3,3,0,0,3,0,3,2,2,0,2,3,1,4,3,4,3,3,2,3,1,5,4,4,0,3,1,2,1,3,0,3,1,1,2,0,2,3,1,3,1,3,0,3,0,1,0,3,3,4,4,2,1,0,2,1,0,2,4), +(0,1,0,3,0,1,0,2,0,1,4,2,5,1,4,0,2,0,2,1,3,1,4,0,2,1,0,0,2,1,4,1,1,0,3,3,0,5,1,3,2,3,3,1,0,3,2,3,0,1,0,0,0,0,0,0,1,0,0,0,0,4,0,1,0,3,0,2,0,1,0,3,3,3,4,3,3,0,0,0,0,2,3), +(0,0,0,1,0,0,0,0,0,0,2,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,0,1,0,0,0,0,0,3), +(0,1,0,3,0,4,0,3,0,2,4,3,1,0,3,2,2,1,3,1,2,2,3,1,1,1,2,1,3,0,1,2,0,1,3,2,1,3,0,5,5,1,0,0,1,3,2,1,0,3,0,0,1,0,0,0,0,0,3,4,0,1,1,1,3,2,0,2,0,1,0,2,3,3,1,2,3,0,1,0,1,0,4), +(0,0,0,1,0,3,0,3,0,2,2,1,0,0,4,0,3,0,3,1,3,0,3,0,3,0,1,0,3,0,3,1,3,0,3,3,0,0,1,2,1,1,1,0,1,2,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,2,2,1,2,0,0,2,0,0,0,0,2,3,3,3,3,0,0,0,0,1,4), +(0,0,0,3,0,3,0,0,0,0,3,1,1,0,3,0,1,0,2,0,1,0,0,0,0,0,0,0,1,0,3,0,2,0,2,3,0,0,2,2,3,1,2,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,2,0,0,0,0,2,3), +(2,4,0,5,0,5,0,4,0,3,4,3,3,3,4,3,3,3,4,3,4,4,5,4,5,5,5,2,3,0,5,5,4,1,5,4,3,1,5,4,3,4,4,3,3,4,3,3,0,3,2,0,2,3,0,3,0,0,3,3,0,5,3,2,3,3,0,3,0,3,0,3,4,5,4,5,3,0,4,3,0,3,4), +(0,3,0,3,0,3,0,3,0,3,3,4,3,2,3,2,3,0,4,3,3,3,3,3,3,3,3,0,3,2,4,3,3,1,3,4,3,4,4,4,3,4,4,3,2,4,4,1,0,2,0,0,1,1,0,2,0,0,3,1,0,5,3,2,1,3,0,3,0,1,2,4,3,2,4,3,3,0,3,2,0,4,4), +(0,3,0,3,0,1,0,0,0,1,4,3,3,2,3,1,3,1,4,2,3,2,4,2,3,4,3,0,2,2,3,3,3,0,3,3,3,0,3,4,1,3,3,0,3,4,3,3,0,1,1,0,1,0,0,0,4,0,3,0,0,3,1,2,1,3,0,4,0,1,0,4,3,3,4,3,3,0,2,0,0,3,3), +(0,3,0,4,0,1,0,3,0,3,4,3,3,0,3,3,3,1,3,1,3,3,4,3,3,3,0,0,3,1,5,3,3,1,3,3,2,5,4,3,3,4,5,3,2,5,3,4,0,1,0,0,0,0,0,2,0,0,1,1,0,4,2,2,1,3,0,3,0,2,0,4,4,3,5,3,2,0,1,1,0,3,4), +(0,5,0,4,0,5,0,2,0,4,4,3,3,2,3,3,3,1,4,3,4,1,5,3,4,3,4,0,4,2,4,3,4,1,5,4,0,4,4,4,4,5,4,1,3,5,4,2,1,4,1,1,3,2,0,3,1,0,3,2,1,4,3,3,3,4,0,4,0,3,0,4,4,4,3,3,3,0,4,2,0,3,4), +(1,4,0,4,0,3,0,1,0,3,3,3,1,1,3,3,2,2,3,3,1,0,3,2,2,1,2,0,3,1,2,1,2,0,3,2,0,2,2,3,3,4,3,0,3,3,1,2,0,1,1,3,1,2,0,0,3,0,1,1,0,3,2,2,3,3,0,3,0,0,0,2,3,3,4,3,3,0,1,0,0,1,4), +(0,4,0,4,0,4,0,0,0,3,4,4,3,1,4,2,3,2,3,3,3,1,4,3,4,0,3,0,4,2,3,3,2,2,5,4,2,1,3,4,3,4,3,1,3,3,4,2,0,2,1,0,3,3,0,0,2,0,3,1,0,4,4,3,4,3,0,4,0,1,0,2,4,4,4,4,4,0,3,2,0,3,3), +(0,0,0,1,0,4,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,3,2,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,2), +(0,2,0,3,0,4,0,4,0,1,3,3,3,0,4,0,2,1,2,1,1,1,2,0,3,1,1,0,1,0,3,1,0,0,3,3,2,0,1,1,0,0,0,0,0,1,0,2,0,2,2,0,3,1,0,0,1,0,1,1,0,1,2,0,3,0,0,0,0,1,0,0,3,3,4,3,1,0,1,0,3,0,2), +(0,0,0,3,0,5,0,0,0,0,1,0,2,0,3,1,0,1,3,0,0,0,2,0,0,0,1,0,0,0,1,1,0,0,4,0,0,0,2,3,0,1,4,1,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,1,0,0,0,0,0,0,0,2,0,0,3,0,0,0,0,0,3), +(0,2,0,5,0,5,0,1,0,2,4,3,3,2,5,1,3,2,3,3,3,0,4,1,2,0,3,0,4,0,2,2,1,1,5,3,0,0,1,4,2,3,2,0,3,3,3,2,0,2,4,1,1,2,0,1,1,0,3,1,0,1,3,1,2,3,0,2,0,0,0,1,3,5,4,4,4,0,3,0,0,1,3), +(0,4,0,5,0,4,0,4,0,4,5,4,3,3,4,3,3,3,4,3,4,4,5,3,4,5,4,2,4,2,3,4,3,1,4,4,1,3,5,4,4,5,5,4,4,5,5,5,2,3,3,1,4,3,1,3,3,0,3,3,1,4,3,4,4,4,0,3,0,4,0,3,3,4,4,5,0,0,4,3,0,4,5), 
+(0,4,0,4,0,3,0,3,0,3,4,4,4,3,3,2,4,3,4,3,4,3,5,3,4,3,2,1,4,2,4,4,3,1,3,4,2,4,5,5,3,4,5,4,1,5,4,3,0,3,2,2,3,2,1,3,1,0,3,3,3,5,3,3,3,5,4,4,2,3,3,4,3,3,3,2,1,0,3,2,1,4,3),
+(0,4,0,5,0,4,0,3,0,3,5,5,3,2,4,3,4,0,5,4,4,1,4,4,4,3,3,3,4,3,5,5,2,3,3,4,1,2,5,5,3,5,5,2,3,5,5,4,0,3,2,0,3,3,1,1,5,1,4,1,0,4,3,2,3,5,0,4,0,3,0,5,4,3,4,3,0,0,4,1,0,4,4),
+(1,3,0,4,0,2,0,2,0,2,5,5,3,3,3,3,3,0,4,2,3,4,4,4,3,4,0,0,3,4,5,4,3,3,3,3,2,5,5,4,5,5,5,4,3,5,5,5,1,3,1,0,1,0,0,3,2,0,4,2,0,5,2,3,2,4,1,3,0,3,0,4,5,4,5,4,3,0,4,2,0,5,4),
+(0,3,0,4,0,5,0,3,0,3,4,4,3,2,3,2,3,3,3,3,3,2,4,3,3,2,2,0,3,3,3,3,3,1,3,3,3,0,4,4,3,4,4,1,1,4,4,2,0,3,1,0,1,1,0,4,1,0,2,3,1,3,3,1,3,4,0,3,0,1,0,3,1,3,0,0,1,0,2,0,0,4,4),
+(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),
+(0,3,0,3,0,2,0,3,0,1,5,4,3,3,3,1,4,2,1,2,3,4,4,2,4,4,5,0,3,1,4,3,4,0,4,3,3,3,2,3,2,5,3,4,3,2,2,3,0,0,3,0,2,1,0,1,2,0,0,0,0,2,1,1,3,1,0,2,0,4,0,3,4,4,4,5,2,0,2,0,0,1,3),
+(0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,1,1,0,0,1,1,0,0,0,4,2,1,1,0,1,0,3,2,0,0,3,1,1,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,0,0,0,2,0,0,0,1,4,0,4,2,1,0,0,0,0,0,1),
+(0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,1,0,0,0,0,3,1,0,0,0,2,0,2,1,0,0,1,2,1,0,1,1,0,0,3,0,0,0,0,0,0,0,0,0,0,0,1,3,1,0,0,0,0,0,1,0,0,2,1,0,0,0,0,0,0,0,0,2),
+(0,4,0,4,0,4,0,3,0,4,4,3,4,2,4,3,2,0,4,4,4,3,5,3,5,3,3,2,4,2,4,3,4,3,1,4,0,2,3,4,4,4,3,3,3,4,4,4,3,4,1,3,4,3,2,1,2,1,3,3,3,4,4,3,3,5,0,4,0,3,0,4,3,3,3,2,1,0,3,0,0,3,3),
+(0,4,0,3,0,3,0,3,0,3,5,5,3,3,3,3,4,3,4,3,3,3,4,4,4,3,3,3,3,4,3,5,3,3,1,3,2,4,5,5,5,5,4,3,4,5,5,3,2,2,3,3,3,3,2,3,3,1,2,3,2,4,3,3,3,4,0,4,0,2,0,4,3,2,2,1,2,0,3,0,0,4,1),
+)
+
+class JapaneseContextAnalysis(object):
+    NUM_OF_CATEGORY = 6
+    DONT_KNOW = -1
+    ENOUGH_REL_THRESHOLD = 100
+    MAX_REL_THRESHOLD = 1000
+    MINIMUM_DATA_THRESHOLD = 4
+
+    def __init__(self):
+        self._total_rel = None
+        self._rel_sample = None
+        self._need_to_skip_char_num = None
+        self._last_char_order = None
+        self._done = None
+        self.reset()
+
+    def reset(self):
+        self._total_rel = 0  # total sequences received
+        # category counters; each integer counts the sequences in its category
+        self._rel_sample = [0] * self.NUM_OF_CATEGORY
+        # if the last byte in the current buffer is not the last byte of a
+        # character, we need to know how many bytes to skip in the next buffer
+        self._need_to_skip_char_num = 0
+        self._last_char_order = -1  # the order of the previous char
+        # If this flag is set to True, detection is done and a conclusion has
+        # been made
+        self._done = False
+
+    def feed(self, byte_str, num_bytes):
+        if self._done:
+            return
+
+        # The buffer we got is byte oriented, and a character may span more
+        # than one buffer.  In case the last one or two bytes in the previous
+        # buffer were not complete, we record how many bytes are needed to
+        # complete that character and skip those bytes here.  We could record
+        # those bytes as well and analyse the character once it is complete,
+        # but since one character will not make much difference, simply
+        # skipping it simplifies our logic and improves performance.
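+        # Note on the get_order() contract used in the loop below: each
+        # encoding-specific subclass (see the end of this file) returns a
+        # pair (order, char_len), where char_len is the byte length of the
+        # character starting at byte_str[i] and order is the character's
+        # hiragana index, or -1 if it is not hiragana.  For example, in
+        # EUC-JP the hiragana 'か' is encoded as 0xA4 0xAB, so the EUC-JP
+        # subclass's get_order(b'\xa4\xab') returns
+        # (0xAB - 0xA1, 2) == (10, 2).  When two hiragana occur in a row,
+        # jp2CharContext[last_char_order][order] gives that bigram's
+        # frequency category (0-5), which the loop tallies in
+        # self._rel_sample.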
+ i = self._need_to_skip_char_num + while i < num_bytes: + order, char_len = self.get_order(byte_str[i:i + 2]) + i += char_len + if i > num_bytes: + self._need_to_skip_char_num = i - num_bytes + self._last_char_order = -1 + else: + if (order != -1) and (self._last_char_order != -1): + self._total_rel += 1 + if self._total_rel > self.MAX_REL_THRESHOLD: + self._done = True + break + self._rel_sample[jp2CharContext[self._last_char_order][order]] += 1 + self._last_char_order = order + + def got_enough_data(self): + return self._total_rel > self.ENOUGH_REL_THRESHOLD + + def get_confidence(self): + # This is just one way to calculate confidence. It works well for me. + if self._total_rel > self.MINIMUM_DATA_THRESHOLD: + return (self._total_rel - self._rel_sample[0]) / self._total_rel + else: + return self.DONT_KNOW + + def get_order(self, byte_str): + return -1, 1 + +class SJISContextAnalysis(JapaneseContextAnalysis): + def __init__(self): + super(SJISContextAnalysis, self).__init__() + self._charset_name = "SHIFT_JIS" + + @property + def charset_name(self): + return self._charset_name + + def get_order(self, byte_str): + if not byte_str: + return -1, 1 + # find out current char's byte length + first_char = byte_str[0] + if (0x81 <= first_char <= 0x9F) or (0xE0 <= first_char <= 0xFC): + char_len = 2 + if (first_char == 0x87) or (0xFA <= first_char <= 0xFC): + self._charset_name = "CP932" + else: + char_len = 1 + + # return its order if it is hiragana + if len(byte_str) > 1: + second_char = byte_str[1] + if (first_char == 202) and (0x9F <= second_char <= 0xF1): + return second_char - 0x9F, char_len + + return -1, char_len + +class EUCJPContextAnalysis(JapaneseContextAnalysis): + def get_order(self, byte_str): + if not byte_str: + return -1, 1 + # find out current char's byte length + first_char = byte_str[0] + if (first_char == 0x8E) or (0xA1 <= first_char <= 0xFE): + char_len = 2 + elif first_char == 0x8F: + char_len = 3 + else: + char_len = 1 + + # return its order if it is hiragana + if len(byte_str) > 1: + second_char = byte_str[1] + if (first_char == 0xA4) and (0xA1 <= second_char <= 0xF3): + return second_char - 0xA1, char_len + + return -1, char_len + + diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/langbulgarianmodel.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langbulgarianmodel.py new file mode 100644 index 0000000..e963a50 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langbulgarianmodel.py @@ -0,0 +1,4650 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel + + +# 3: Positive +# 2: Likely +# 1: Unlikely +# 0: Negative + +BULGARIAN_LANG_MODEL = { + 63: { # 'e' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 0, # 'а' + 18: 1, # 'б' + 9: 1, # 'в' + 20: 1, # 'г' + 11: 1, # 'д' + 3: 1, # 'е' + 23: 1, # 'ж' + 15: 1, # 'з' + 2: 0, # 'и' + 26: 1, # 'й' + 12: 1, # 'к' + 10: 1, # 'л' + 14: 1, # 'м' + 6: 1, # 'н' + 4: 1, # 'о' + 13: 1, # 'п' + 7: 1, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 0, # 'у' + 29: 1, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, 
# 'ц' + 21: 1, # 'ч' + 27: 1, # 'ш' + 24: 1, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 45: { # '\xad' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 0, # 'Г' + 37: 1, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 0, # 'Л' + 38: 1, # 'М' + 36: 0, # 'Ð' + 41: 1, # 'О' + 30: 1, # 'П' + 39: 1, # 'Р' + 28: 1, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 1, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 0, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 0, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 0, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 0, # 'о' + 13: 0, # 'п' + 7: 0, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 0, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 31: { # 'Ð' + 63: 0, # 'e' + 45: 1, # '\xad' + 31: 1, # 'Ð' + 32: 1, # 'Б' + 35: 2, # 'Ð’' + 43: 1, # 'Г' + 37: 2, # 'Д' + 44: 2, # 'Е' + 55: 1, # 'Ж' + 47: 2, # 'З' + 40: 1, # 'И' + 59: 1, # 'Й' + 33: 1, # 'К' + 46: 2, # 'Л' + 38: 1, # 'М' + 36: 2, # 'Ð' + 41: 1, # 'О' + 30: 2, # 'П' + 39: 2, # 'Р' + 28: 2, # 'С' + 34: 2, # 'Т' + 51: 1, # 'У' + 48: 2, # 'Ф' + 49: 1, # 'Ð¥' + 53: 1, # 'Ц' + 50: 1, # 'Ч' + 54: 1, # 'Ш' + 57: 2, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 1, # 'Я' + 1: 1, # 'а' + 18: 2, # 'б' + 9: 2, # 'в' + 20: 2, # 'г' + 11: 2, # 'д' + 3: 1, # 'е' + 23: 1, # 'ж' + 15: 2, # 'з' + 2: 0, # 'и' + 26: 2, # 'й' + 12: 2, # 'к' + 10: 3, # 'л' + 14: 2, # 'м' + 6: 3, # 'н' + 4: 0, # 'о' + 13: 2, # 'п' + 7: 2, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 2, # 'Ñ‚' + 19: 1, # 'у' + 29: 2, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 1, # 'ц' + 21: 1, # 'ч' + 27: 1, # 'ш' + 24: 0, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 32: { # 'Б' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 2, # 'Б' + 35: 1, # 'Ð’' + 43: 1, # 'Г' + 37: 2, # 'Д' + 44: 1, # 'Е' + 55: 1, # 'Ж' + 47: 2, # 'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 2, # 'Ð' + 41: 2, # 'О' + 30: 1, # 'П' + 39: 1, # 'Р' + 28: 2, # 'С' + 34: 2, # 'Т' + 51: 1, # 'У' + 48: 2, # 'Ф' + 49: 1, # 'Ð¥' + 53: 1, # 'Ц' + 50: 1, # 'Ч' + 54: 0, # 'Ш' + 57: 1, # 'Щ' + 61: 2, # 'Ъ' + 60: 1, # 'Ю' + 56: 1, # 'Я' + 1: 3, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 1, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 2, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 2, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 3, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 2, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 35: { # 'Ð’' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 0, # 'Г' + 37: 1, # 'Д' + 44: 2, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 1, # 'О' + 30: 1, # 'П' + 39: 2, # 'Р' + 28: 2, # 'С' + 34: 1, # 'Т' + 51: 1, # 'У' + 48: 2, # 'Ф' + 49: 0, # 'Ð¥' + 53: 1, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 1, # 'Ю' + 56: 2, # 'Я' + 1: 
3, # 'а' + 18: 1, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 1, # 'д' + 3: 3, # 'е' + 23: 1, # 'ж' + 15: 2, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 2, # 'л' + 14: 1, # 'м' + 6: 2, # 'н' + 4: 2, # 'о' + 13: 1, # 'п' + 7: 2, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 2, # 'Ñ‚' + 19: 1, # 'у' + 29: 0, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 2, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 43: { # 'Г' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 1, # 'Д' + 44: 2, # 'Е' + 55: 0, # 'Ж' + 47: 1, # 'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 0, # 'М' + 36: 1, # 'Ð' + 41: 1, # 'О' + 30: 0, # 'П' + 39: 1, # 'Р' + 28: 1, # 'С' + 34: 0, # 'Т' + 51: 1, # 'У' + 48: 1, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 1, # 'Щ' + 61: 1, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 2, # 'а' + 18: 1, # 'б' + 9: 1, # 'в' + 20: 0, # 'г' + 11: 1, # 'д' + 3: 3, # 'е' + 23: 1, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 2, # 'л' + 14: 1, # 'м' + 6: 1, # 'н' + 4: 2, # 'о' + 13: 0, # 'п' + 7: 2, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 1, # 'щ' + 17: 2, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 37: { # 'Д' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 2, # 'Ð’' + 43: 1, # 'Г' + 37: 2, # 'Д' + 44: 2, # 'Е' + 55: 2, # 'Ж' + 47: 1, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 2, # 'О' + 30: 2, # 'П' + 39: 1, # 'Р' + 28: 2, # 'С' + 34: 1, # 'Т' + 51: 1, # 'У' + 48: 1, # 'Ф' + 49: 0, # 'Ð¥' + 53: 1, # 'Ц' + 50: 1, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 1, # 'Ю' + 56: 1, # 'Я' + 1: 3, # 'а' + 18: 0, # 'б' + 9: 2, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 3, # 'е' + 23: 3, # 'ж' + 15: 1, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 1, # 'л' + 14: 1, # 'м' + 6: 2, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 2, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 44: { # 'Е' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 1, # 'Ð' + 32: 1, # 'Б' + 35: 2, # 'Ð’' + 43: 1, # 'Г' + 37: 1, # 'Д' + 44: 1, # 'Е' + 55: 1, # 'Ж' + 47: 1, # 'З' + 40: 1, # 'И' + 59: 1, # 'Й' + 33: 2, # 'К' + 46: 2, # 'Л' + 38: 1, # 'М' + 36: 2, # 'Ð' + 41: 2, # 'О' + 30: 1, # 'П' + 39: 2, # 'Р' + 28: 2, # 'С' + 34: 2, # 'Т' + 51: 1, # 'У' + 48: 2, # 'Ф' + 49: 1, # 'Ð¥' + 53: 2, # 'Ц' + 50: 1, # 'Ч' + 54: 1, # 'Ш' + 57: 1, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 1, # 'Я' + 1: 0, # 'а' + 18: 1, # 'б' + 9: 2, # 'в' + 20: 1, # 'г' + 11: 2, # 'д' + 3: 0, # 'е' + 23: 1, # 'ж' + 15: 1, # 'з' + 2: 0, # 'и' + 26: 1, # 'й' + 12: 2, # 'к' + 10: 2, # 'л' + 14: 2, # 'м' + 6: 2, # 'н' + 4: 0, # 'о' + 13: 1, # 'п' + 7: 2, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 1, # 'у' + 29: 1, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 1, # 'ч' + 27: 1, # 'ш' + 24: 1, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 55: { # 'Ж' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 1, # 'Ð' + 32: 0, # 'Б' + 35: 1, # 'Ð’' + 43: 0, # 'Г' + 37: 1, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 
'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 1, # 'Ð' + 41: 1, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 1, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 2, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 1, # 'д' + 3: 2, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 2, # 'о' + 13: 1, # 'п' + 7: 1, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 1, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 47: { # 'З' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 1, # 'Г' + 37: 1, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 1, # 'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 2, # 'Ð' + 41: 1, # 'О' + 30: 1, # 'П' + 39: 1, # 'Р' + 28: 1, # 'С' + 34: 1, # 'Т' + 51: 1, # 'У' + 48: 0, # 'Ф' + 49: 1, # 'Ð¥' + 53: 1, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 0, # 'Ю' + 56: 1, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 2, # 'в' + 20: 1, # 'г' + 11: 2, # 'д' + 3: 2, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 1, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 2, # 'л' + 14: 1, # 'м' + 6: 1, # 'н' + 4: 1, # 'о' + 13: 0, # 'п' + 7: 1, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 1, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 40: { # 'И' + 63: 0, # 'e' + 45: 1, # '\xad' + 31: 1, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 1, # 'Г' + 37: 1, # 'Д' + 44: 2, # 'Е' + 55: 1, # 'Ж' + 47: 2, # 'З' + 40: 1, # 'И' + 59: 1, # 'Й' + 33: 2, # 'К' + 46: 2, # 'Л' + 38: 2, # 'М' + 36: 2, # 'Ð' + 41: 1, # 'О' + 30: 1, # 'П' + 39: 2, # 'Р' + 28: 2, # 'С' + 34: 2, # 'Т' + 51: 0, # 'У' + 48: 1, # 'Ф' + 49: 1, # 'Ð¥' + 53: 1, # 'Ц' + 50: 1, # 'Ч' + 54: 1, # 'Ш' + 57: 1, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 2, # 'Я' + 1: 1, # 'а' + 18: 1, # 'б' + 9: 3, # 'в' + 20: 2, # 'г' + 11: 1, # 'д' + 3: 1, # 'е' + 23: 0, # 'ж' + 15: 3, # 'з' + 2: 0, # 'и' + 26: 1, # 'й' + 12: 1, # 'к' + 10: 2, # 'л' + 14: 2, # 'м' + 6: 2, # 'н' + 4: 0, # 'о' + 13: 1, # 'п' + 7: 2, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 2, # 'Ñ‚' + 19: 0, # 'у' + 29: 1, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 1, # 'ц' + 21: 1, # 'ч' + 27: 1, # 'ш' + 24: 1, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 59: { # 'Й' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 1, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 1, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 1, # 'С' + 34: 1, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 1, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 1, # 'Я' + 1: 0, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 1, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 0, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 2, # 'о' + 13: 0, # 'п' + 7: 0, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 0, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' 
+ 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 33: { # 'К' + 63: 0, # 'e' + 45: 1, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 1, # 'Г' + 37: 1, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 1, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 0, # 'М' + 36: 2, # 'Ð' + 41: 2, # 'О' + 30: 2, # 'П' + 39: 1, # 'Р' + 28: 2, # 'С' + 34: 1, # 'Т' + 51: 1, # 'У' + 48: 1, # 'Ф' + 49: 1, # 'Ð¥' + 53: 1, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 1, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 0, # 'б' + 9: 1, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 2, # 'е' + 23: 1, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 2, # 'л' + 14: 1, # 'м' + 6: 2, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 3, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 1, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 46: { # 'Л' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 2, # 'Г' + 37: 1, # 'Д' + 44: 2, # 'Е' + 55: 0, # 'Ж' + 47: 1, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 0, # 'М' + 36: 1, # 'Ð' + 41: 2, # 'О' + 30: 1, # 'П' + 39: 0, # 'Р' + 28: 1, # 'С' + 34: 1, # 'Т' + 51: 1, # 'У' + 48: 0, # 'Ф' + 49: 1, # 'Ð¥' + 53: 1, # 'Ц' + 50: 1, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 1, # 'Ю' + 56: 1, # 'Я' + 1: 2, # 'а' + 18: 0, # 'б' + 9: 1, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 2, # 'о' + 13: 0, # 'п' + 7: 0, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 38: { # 'М' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 2, # 'Ð’' + 43: 0, # 'Г' + 37: 1, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 1, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 2, # 'О' + 30: 1, # 'П' + 39: 1, # 'Р' + 28: 2, # 'С' + 34: 1, # 'Т' + 51: 1, # 'У' + 48: 1, # 'Ф' + 49: 0, # 'Ð¥' + 53: 1, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 0, # 'Ю' + 56: 1, # 'Я' + 1: 3, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 2, # 'л' + 14: 0, # 'м' + 6: 2, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 1, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 36: { # 'Ð' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 2, # 'Б' + 35: 1, # 'Ð’' + 43: 1, # 'Г' + 37: 2, # 'Д' + 44: 2, # 'Е' + 55: 1, # 'Ж' + 47: 1, # 'З' + 40: 2, # 'И' + 59: 1, # 'Й' + 33: 2, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 2, # 'О' + 30: 1, # 'П' + 39: 1, # 'Р' + 28: 2, # 'С' + 34: 2, # 'Т' + 51: 1, # 'У' + 48: 1, # 'Ф' + 49: 1, # 'Ð¥' + 53: 1, # 'Ц' + 50: 1, # 'Ч' + 54: 1, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 1, # 'Ю' + 56: 1, # 'Я' 
+ 1: 3, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 1, # 'г' + 11: 0, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 0, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 1, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 1, # 'ш' + 24: 0, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 2, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 41: { # 'О' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 1, # 'Ð' + 32: 1, # 'Б' + 35: 2, # 'Ð’' + 43: 1, # 'Г' + 37: 2, # 'Д' + 44: 1, # 'Е' + 55: 1, # 'Ж' + 47: 1, # 'З' + 40: 1, # 'И' + 59: 1, # 'Й' + 33: 2, # 'К' + 46: 2, # 'Л' + 38: 2, # 'М' + 36: 2, # 'Ð' + 41: 2, # 'О' + 30: 1, # 'П' + 39: 2, # 'Р' + 28: 2, # 'С' + 34: 2, # 'Т' + 51: 1, # 'У' + 48: 1, # 'Ф' + 49: 1, # 'Ð¥' + 53: 0, # 'Ц' + 50: 1, # 'Ч' + 54: 1, # 'Ш' + 57: 1, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 1, # 'Я' + 1: 1, # 'а' + 18: 2, # 'б' + 9: 2, # 'в' + 20: 2, # 'г' + 11: 1, # 'д' + 3: 1, # 'е' + 23: 1, # 'ж' + 15: 1, # 'з' + 2: 0, # 'и' + 26: 1, # 'й' + 12: 2, # 'к' + 10: 2, # 'л' + 14: 1, # 'м' + 6: 1, # 'н' + 4: 0, # 'о' + 13: 2, # 'п' + 7: 2, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 1, # 'у' + 29: 1, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 1, # 'ц' + 21: 2, # 'ч' + 27: 0, # 'ш' + 24: 2, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 30: { # 'П' + 63: 0, # 'e' + 45: 1, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 1, # 'Г' + 37: 1, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 1, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 2, # 'О' + 30: 2, # 'П' + 39: 2, # 'Р' + 28: 2, # 'С' + 34: 1, # 'Т' + 51: 2, # 'У' + 48: 1, # 'Ф' + 49: 0, # 'Ð¥' + 53: 1, # 'Ц' + 50: 1, # 'Ч' + 54: 1, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 1, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 2, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 3, # 'л' + 14: 0, # 'м' + 6: 1, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 3, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 2, # 'у' + 29: 1, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 1, # 'ч' + 27: 1, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 39: { # 'Р' + 63: 0, # 'e' + 45: 1, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 2, # 'Г' + 37: 2, # 'Д' + 44: 2, # 'Е' + 55: 0, # 'Ж' + 47: 1, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 0, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 2, # 'О' + 30: 2, # 'П' + 39: 1, # 'Р' + 28: 1, # 'С' + 34: 1, # 'Т' + 51: 1, # 'У' + 48: 1, # 'Ф' + 49: 1, # 'Ð¥' + 53: 1, # 'Ц' + 50: 1, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 1, # 'Ю' + 56: 1, # 'Я' + 1: 3, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 2, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 1, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 0, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 3, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 28: { # 'С' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 3, # 'Ð' + 32: 2, # 'Б' + 35: 2, # 'Ð’' + 43: 1, # 'Г' + 37: 2, # 'Д' + 44: 2, # 'Е' + 55: 1, # 'Ж' + 47: 
1, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 2, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 2, # 'О' + 30: 2, # 'П' + 39: 1, # 'Р' + 28: 2, # 'С' + 34: 2, # 'Т' + 51: 1, # 'У' + 48: 1, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 1, # 'Ю' + 56: 1, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 2, # 'в' + 20: 1, # 'г' + 11: 1, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 2, # 'к' + 10: 3, # 'л' + 14: 2, # 'м' + 6: 1, # 'н' + 4: 3, # 'о' + 13: 3, # 'п' + 7: 2, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 2, # 'у' + 29: 2, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 1, # 'ц' + 21: 1, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 3, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 34: { # 'Т' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 2, # 'Б' + 35: 1, # 'Ð’' + 43: 0, # 'Г' + 37: 1, # 'Д' + 44: 2, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 2, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 2, # 'О' + 30: 1, # 'П' + 39: 2, # 'Р' + 28: 2, # 'С' + 34: 1, # 'Т' + 51: 1, # 'У' + 48: 1, # 'Ф' + 49: 0, # 'Ð¥' + 53: 1, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 1, # 'Ъ' + 60: 0, # 'Ю' + 56: 1, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 1, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 1, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 3, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 2, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 51: { # 'У' + 63: 0, # 'e' + 45: 1, # '\xad' + 31: 1, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 1, # 'Г' + 37: 1, # 'Д' + 44: 2, # 'Е' + 55: 1, # 'Ж' + 47: 1, # 'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 0, # 'О' + 30: 1, # 'П' + 39: 1, # 'Р' + 28: 1, # 'С' + 34: 2, # 'Т' + 51: 0, # 'У' + 48: 1, # 'Ф' + 49: 1, # 'Ð¥' + 53: 1, # 'Ц' + 50: 1, # 'Ч' + 54: 1, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 1, # 'а' + 18: 1, # 'б' + 9: 2, # 'в' + 20: 1, # 'г' + 11: 1, # 'д' + 3: 2, # 'е' + 23: 1, # 'ж' + 15: 1, # 'з' + 2: 2, # 'и' + 26: 1, # 'й' + 12: 2, # 'к' + 10: 1, # 'л' + 14: 1, # 'м' + 6: 2, # 'н' + 4: 2, # 'о' + 13: 1, # 'п' + 7: 1, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 1, # 'у' + 29: 0, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 2, # 'ч' + 27: 1, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 48: { # 'Ф' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 0, # 'М' + 36: 1, # 'Ð' + 41: 1, # 'О' + 30: 2, # 'П' + 39: 1, # 'Р' + 28: 2, # 'С' + 34: 1, # 'Т' + 51: 1, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 2, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 2, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 2, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 2, # 'о' + 13: 0, # 'п' + 7: 2, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 1, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 
'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 49: { # 'Ð¥' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 1, # 'Ð' + 32: 0, # 'Б' + 35: 1, # 'Ð’' + 43: 1, # 'Г' + 37: 1, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 1, # 'О' + 30: 1, # 'П' + 39: 1, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 1, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 2, # 'а' + 18: 0, # 'б' + 9: 1, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 2, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 1, # 'л' + 14: 1, # 'м' + 6: 0, # 'н' + 4: 2, # 'о' + 13: 0, # 'п' + 7: 2, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 53: { # 'Ц' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 1, # 'Ð' + 32: 0, # 'Б' + 35: 1, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 2, # 'И' + 59: 0, # 'Й' + 33: 2, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 1, # 'Р' + 28: 2, # 'С' + 34: 0, # 'Т' + 51: 1, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 2, # 'а' + 18: 0, # 'б' + 9: 2, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 2, # 'е' + 23: 0, # 'ж' + 15: 1, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 1, # 'о' + 13: 0, # 'п' + 7: 1, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 1, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 50: { # 'Ч' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 2, # 'Ð' + 32: 1, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 1, # 'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 0, # 'М' + 36: 1, # 'Ð' + 41: 1, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 1, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 2, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 3, # 'е' + 23: 1, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 1, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 2, # 'о' + 13: 0, # 'п' + 7: 1, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 54: { # 'Ш' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 1, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 1, # 'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 1, # 'Ð' + 41: 1, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 1, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, 
# 'Я' + 1: 2, # 'а' + 18: 0, # 'б' + 9: 2, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 2, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 2, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 1, # 'л' + 14: 1, # 'м' + 6: 1, # 'н' + 4: 2, # 'о' + 13: 1, # 'п' + 7: 1, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 1, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 57: { # 'Щ' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 1, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 1, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 2, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 2, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 1, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 1, # 'о' + 13: 0, # 'п' + 7: 1, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 1, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 61: { # 'Ъ' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 0, # 'Г' + 37: 1, # 'Д' + 44: 0, # 'Е' + 55: 1, # 'Ж' + 47: 1, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 2, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 0, # 'О' + 30: 1, # 'П' + 39: 2, # 'Р' + 28: 1, # 'С' + 34: 1, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 1, # 'Ð¥' + 53: 1, # 'Ц' + 50: 1, # 'Ч' + 54: 1, # 'Ш' + 57: 1, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 0, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 0, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 0, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 1, # 'л' + 14: 0, # 'м' + 6: 1, # 'н' + 4: 0, # 'о' + 13: 0, # 'п' + 7: 1, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 0, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 60: { # 'Ю' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 1, # 'Ð' + 32: 1, # 'Б' + 35: 0, # 'Ð’' + 43: 1, # 'Г' + 37: 1, # 'Д' + 44: 0, # 'Е' + 55: 1, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 0, # 'М' + 36: 1, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 1, # 'Р' + 28: 1, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 0, # 'а' + 18: 1, # 'б' + 9: 1, # 'в' + 20: 2, # 'г' + 11: 1, # 'д' + 3: 0, # 'е' + 23: 2, # 'ж' + 15: 1, # 'з' + 2: 1, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 1, # 'л' + 14: 1, # 'м' + 6: 1, # 'н' + 4: 0, # 'о' + 13: 1, # 'п' + 7: 1, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 0, # 'у' + 29: 0, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 56: { # 'Я' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 1, # 'Б' + 35: 1, # 'Ð’' + 43: 1, # 'Г' + 37: 1, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' 
+ 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 1, # 'Л' + 38: 1, # 'М' + 36: 1, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 1, # 'С' + 34: 2, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 0, # 'а' + 18: 1, # 'б' + 9: 1, # 'в' + 20: 1, # 'г' + 11: 1, # 'д' + 3: 0, # 'е' + 23: 0, # 'ж' + 15: 1, # 'з' + 2: 1, # 'и' + 26: 1, # 'й' + 12: 1, # 'к' + 10: 1, # 'л' + 14: 2, # 'м' + 6: 2, # 'н' + 4: 0, # 'о' + 13: 2, # 'п' + 7: 1, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 0, # 'у' + 29: 0, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 1, # 'ш' + 24: 0, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 1: { # 'а' + 63: 1, # 'e' + 45: 1, # '\xad' + 31: 1, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 1, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 1, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 3, # 'г' + 11: 3, # 'д' + 3: 3, # 'е' + 23: 3, # 'ж' + 15: 3, # 'з' + 2: 3, # 'и' + 26: 3, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 2, # 'о' + 13: 3, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 3, # 'у' + 29: 3, # 'Ñ„' + 25: 3, # 'Ñ…' + 22: 3, # 'ц' + 21: 3, # 'ч' + 27: 3, # 'ш' + 24: 3, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 18: { # 'б' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 0, # 'б' + 9: 3, # 'в' + 20: 1, # 'г' + 11: 2, # 'д' + 3: 3, # 'е' + 23: 1, # 'ж' + 15: 1, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 3, # 'л' + 14: 2, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 1, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 3, # 'у' + 29: 0, # 'Ñ„' + 25: 2, # 'Ñ…' + 22: 1, # 'ц' + 21: 1, # 'ч' + 27: 1, # 'ш' + 24: 3, # 'щ' + 17: 3, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 9: { # 'в' + 63: 1, # 'e' + 45: 1, # '\xad' + 31: 0, # 'Ð' + 32: 1, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 1, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 0, # 'в' + 20: 2, # 'г' + 11: 3, # 'д' + 3: 3, # 'е' + 23: 1, # 'ж' + 15: 3, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 2, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 2, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 
2, # 'Ñ…' + 22: 2, # 'ц' + 21: 3, # 'ч' + 27: 2, # 'ш' + 24: 1, # 'щ' + 17: 3, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 20: { # 'г' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 2, # 'в' + 20: 1, # 'г' + 11: 2, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 1, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 3, # 'л' + 14: 1, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 1, # 'п' + 7: 3, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 2, # 'Ñ‚' + 19: 3, # 'у' + 29: 1, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 1, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 3, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 11: { # 'д' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 2, # 'б' + 9: 3, # 'в' + 20: 2, # 'г' + 11: 2, # 'д' + 3: 3, # 'е' + 23: 3, # 'ж' + 15: 2, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 3, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 3, # 'у' + 29: 1, # 'Ñ„' + 25: 2, # 'Ñ…' + 22: 2, # 'ц' + 21: 2, # 'ч' + 27: 1, # 'ш' + 24: 1, # 'щ' + 17: 3, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 3: { # 'е' + 63: 0, # 'e' + 45: 1, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 2, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 3, # 'г' + 11: 3, # 'д' + 3: 2, # 'е' + 23: 3, # 'ж' + 15: 3, # 'з' + 2: 2, # 'и' + 26: 3, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 3, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 2, # 'у' + 29: 3, # 'Ñ„' + 25: 3, # 'Ñ…' + 22: 3, # 'ц' + 21: 3, # 'ч' + 27: 3, # 'ш' + 24: 3, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 23: { # 'ж' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 
0, # 'Я' + 1: 3, # 'а' + 18: 3, # 'б' + 9: 2, # 'в' + 20: 1, # 'г' + 11: 3, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 2, # 'к' + 10: 1, # 'л' + 14: 1, # 'м' + 6: 3, # 'н' + 4: 2, # 'о' + 13: 1, # 'п' + 7: 1, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 1, # 'ц' + 21: 1, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 15: { # 'з' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 3, # 'г' + 11: 3, # 'д' + 3: 3, # 'е' + 23: 1, # 'ж' + 15: 1, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 3, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 3, # 'у' + 29: 1, # 'Ñ„' + 25: 2, # 'Ñ…' + 22: 2, # 'ц' + 21: 2, # 'ч' + 27: 2, # 'ш' + 24: 1, # 'щ' + 17: 2, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 2, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 2: { # 'и' + 63: 1, # 'e' + 45: 1, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 1, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 1, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 1, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 1, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 3, # 'г' + 11: 3, # 'д' + 3: 3, # 'е' + 23: 3, # 'ж' + 15: 3, # 'з' + 2: 3, # 'и' + 26: 3, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 3, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 2, # 'у' + 29: 3, # 'Ñ„' + 25: 3, # 'Ñ…' + 22: 3, # 'ц' + 21: 3, # 'ч' + 27: 3, # 'ш' + 24: 3, # 'щ' + 17: 2, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 26: { # 'й' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 1, # 'а' + 18: 2, # 'б' + 9: 2, # 'в' + 20: 1, # 'г' + 11: 2, # 'д' + 3: 2, # 'е' + 23: 0, # 'ж' + 15: 2, # 'з' + 2: 1, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 2, # 'л' + 14: 2, # 'м' + 6: 3, # 'н' + 4: 2, # 'о' + 13: 1, # 'п' + 7: 2, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 1, # 'у' + 29: 2, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 2, # 'ц' + 21: 2, # 'ч' + 27: 1, # 'ш' + 24: 1, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 12: { # 'к' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 1, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 
'Ж' + 47: 0, # 'З' + 40: 1, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 3, # 'в' + 20: 2, # 'г' + 11: 1, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 2, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 3, # 'л' + 14: 2, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 1, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 3, # 'у' + 29: 1, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 3, # 'ц' + 21: 2, # 'ч' + 27: 1, # 'ш' + 24: 0, # 'щ' + 17: 3, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 10: { # 'л' + 63: 1, # 'e' + 45: 1, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 1, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 3, # 'г' + 11: 2, # 'д' + 3: 3, # 'е' + 23: 3, # 'ж' + 15: 2, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 1, # 'л' + 14: 2, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 2, # 'п' + 7: 2, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 3, # 'у' + 29: 2, # 'Ñ„' + 25: 2, # 'Ñ…' + 22: 2, # 'ц' + 21: 2, # 'ч' + 27: 2, # 'ш' + 24: 1, # 'щ' + 17: 3, # 'ÑŠ' + 52: 2, # 'ÑŒ' + 42: 3, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 14: { # 'м' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 1, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 1, # 'г' + 11: 1, # 'д' + 3: 3, # 'е' + 23: 1, # 'ж' + 15: 1, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 2, # 'к' + 10: 3, # 'л' + 14: 1, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 3, # 'п' + 7: 2, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 3, # 'у' + 29: 2, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 2, # 'ц' + 21: 2, # 'ч' + 27: 2, # 'ш' + 24: 1, # 'щ' + 17: 3, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 6: { # 'н' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 1, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 2, # 'б' + 9: 2, # 'в' + 20: 3, # 'г' + 11: 3, # 'д' + 3: 3, # 'е' + 23: 2, # 'ж' + 15: 2, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 2, # 'л' + 14: 1, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 1, # 'п' + 7: 2, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 3, # 'у' + 29: 3, # 'Ñ„' + 
25: 2, # 'Ñ…' + 22: 3, # 'ц' + 21: 3, # 'ч' + 27: 2, # 'ш' + 24: 1, # 'щ' + 17: 3, # 'ÑŠ' + 52: 2, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 4: { # 'о' + 63: 0, # 'e' + 45: 1, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 2, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 3, # 'г' + 11: 3, # 'д' + 3: 3, # 'е' + 23: 3, # 'ж' + 15: 3, # 'з' + 2: 3, # 'и' + 26: 3, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 2, # 'о' + 13: 3, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 2, # 'у' + 29: 3, # 'Ñ„' + 25: 3, # 'Ñ…' + 22: 3, # 'ц' + 21: 3, # 'ч' + 27: 3, # 'ш' + 24: 3, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 13: { # 'п' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 2, # 'в' + 20: 1, # 'г' + 11: 1, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 1, # 'з' + 2: 3, # 'и' + 26: 1, # 'й' + 12: 2, # 'к' + 10: 3, # 'л' + 14: 1, # 'м' + 6: 2, # 'н' + 4: 3, # 'о' + 13: 1, # 'п' + 7: 3, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 2, # 'Ñ‚' + 19: 3, # 'у' + 29: 1, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 2, # 'ц' + 21: 2, # 'ч' + 27: 1, # 'ш' + 24: 1, # 'щ' + 17: 3, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 2, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 7: { # 'Ñ€' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 3, # 'г' + 11: 3, # 'д' + 3: 3, # 'е' + 23: 3, # 'ж' + 15: 2, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 2, # 'п' + 7: 1, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 3, # 'у' + 29: 2, # 'Ñ„' + 25: 3, # 'Ñ…' + 22: 3, # 'ц' + 21: 2, # 'ч' + 27: 3, # 'ш' + 24: 1, # 'щ' + 17: 3, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 8: { # 'Ñ' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 
56: 0, # 'Я' + 1: 3, # 'а' + 18: 2, # 'б' + 9: 3, # 'в' + 20: 2, # 'г' + 11: 2, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 1, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 3, # 'п' + 7: 3, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 3, # 'у' + 29: 2, # 'Ñ„' + 25: 2, # 'Ñ…' + 22: 2, # 'ц' + 21: 2, # 'ч' + 27: 2, # 'ш' + 24: 0, # 'щ' + 17: 3, # 'ÑŠ' + 52: 2, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 5: { # 'Ñ‚' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 2, # 'г' + 11: 2, # 'д' + 3: 3, # 'е' + 23: 1, # 'ж' + 15: 1, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 2, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 2, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 3, # 'у' + 29: 1, # 'Ñ„' + 25: 2, # 'Ñ…' + 22: 2, # 'ц' + 21: 2, # 'ч' + 27: 1, # 'ш' + 24: 1, # 'щ' + 17: 3, # 'ÑŠ' + 52: 2, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 19: { # 'у' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 3, # 'г' + 11: 3, # 'д' + 3: 2, # 'е' + 23: 3, # 'ж' + 15: 3, # 'з' + 2: 2, # 'и' + 26: 2, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 2, # 'о' + 13: 3, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 1, # 'у' + 29: 2, # 'Ñ„' + 25: 2, # 'Ñ…' + 22: 2, # 'ц' + 21: 3, # 'ч' + 27: 3, # 'ш' + 24: 2, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 29: { # 'Ñ„' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 1, # 'в' + 20: 1, # 'г' + 11: 0, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 2, # 'к' + 10: 2, # 'л' + 14: 1, # 'м' + 6: 1, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 2, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 2, # 'Ñ‚' + 19: 2, # 'у' + 29: 0, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 1, # 'ч' + 27: 1, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 2, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 25: { # 'Ñ…' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 
0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 3, # 'в' + 20: 0, # 'г' + 11: 1, # 'д' + 3: 2, # 'е' + 23: 0, # 'ж' + 15: 1, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 2, # 'л' + 14: 2, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 1, # 'п' + 7: 3, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 2, # 'Ñ‚' + 19: 3, # 'у' + 29: 0, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 0, # 'ц' + 21: 1, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 22: { # 'ц' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 2, # 'в' + 20: 1, # 'г' + 11: 1, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 1, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 2, # 'к' + 10: 1, # 'л' + 14: 1, # 'м' + 6: 1, # 'н' + 4: 2, # 'о' + 13: 1, # 'п' + 7: 1, # 'Ñ€' + 8: 1, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 2, # 'у' + 29: 1, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 1, # 'ц' + 21: 1, # 'ч' + 27: 1, # 'ш' + 24: 1, # 'щ' + 17: 2, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 2, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 21: { # 'ч' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 1, # 'б' + 9: 3, # 'в' + 20: 1, # 'г' + 11: 0, # 'д' + 3: 3, # 'е' + 23: 1, # 'ж' + 15: 0, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 2, # 'л' + 14: 2, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 2, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 2, # 'Ñ‚' + 19: 3, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 1, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 27: { # 'ш' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 0, # 'б' + 9: 2, # 'в' + 20: 0, # 'г' + 11: 1, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 3, # 'к' + 10: 2, # 'л' + 14: 1, # 'м' + 6: 3, # 'н' + 4: 2, # 'о' + 13: 2, # 'п' + 7: 1, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 2, # 'у' + 29: 1, # 
'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 1, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 2, # 'ÑŠ' + 52: 1, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 24: { # 'щ' + 63: 1, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 3, # 'а' + 18: 0, # 'б' + 9: 1, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 3, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 3, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 2, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 1, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 2, # 'Ñ‚' + 19: 3, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 1, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 2, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 17: { # 'ÑŠ' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 1, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 3, # 'г' + 11: 3, # 'д' + 3: 2, # 'е' + 23: 3, # 'ж' + 15: 3, # 'з' + 2: 1, # 'и' + 26: 2, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 3, # 'о' + 13: 3, # 'п' + 7: 3, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 1, # 'у' + 29: 1, # 'Ñ„' + 25: 2, # 'Ñ…' + 22: 2, # 'ц' + 21: 3, # 'ч' + 27: 2, # 'ш' + 24: 3, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 2, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 52: { # 'ÑŒ' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 0, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 1, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 0, # 'и' + 26: 0, # 'й' + 12: 1, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 1, # 'н' + 4: 3, # 'о' + 13: 0, # 'п' + 7: 0, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 1, # 'Ñ‚' + 19: 0, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 1, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 42: { # 'ÑŽ' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 
0, # 'Ю' + 56: 0, # 'Я' + 1: 1, # 'а' + 18: 2, # 'б' + 9: 1, # 'в' + 20: 2, # 'г' + 11: 2, # 'д' + 3: 1, # 'е' + 23: 2, # 'ж' + 15: 2, # 'з' + 2: 1, # 'и' + 26: 1, # 'й' + 12: 2, # 'к' + 10: 2, # 'л' + 14: 2, # 'м' + 6: 2, # 'н' + 4: 1, # 'о' + 13: 1, # 'п' + 7: 2, # 'Ñ€' + 8: 2, # 'Ñ' + 5: 2, # 'Ñ‚' + 19: 1, # 'у' + 29: 1, # 'Ñ„' + 25: 1, # 'Ñ…' + 22: 2, # 'ц' + 21: 3, # 'ч' + 27: 1, # 'ш' + 24: 1, # 'щ' + 17: 1, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 16: { # 'Ñ' + 63: 0, # 'e' + 45: 1, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 0, # 'а' + 18: 3, # 'б' + 9: 3, # 'в' + 20: 2, # 'г' + 11: 3, # 'д' + 3: 2, # 'е' + 23: 1, # 'ж' + 15: 2, # 'з' + 2: 1, # 'и' + 26: 2, # 'й' + 12: 3, # 'к' + 10: 3, # 'л' + 14: 3, # 'м' + 6: 3, # 'н' + 4: 1, # 'о' + 13: 2, # 'п' + 7: 2, # 'Ñ€' + 8: 3, # 'Ñ' + 5: 3, # 'Ñ‚' + 19: 1, # 'у' + 29: 1, # 'Ñ„' + 25: 3, # 'Ñ…' + 22: 2, # 'ц' + 21: 1, # 'ч' + 27: 1, # 'ш' + 24: 2, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 1, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 58: { # 'Ñ”' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 0, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 0, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 0, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 0, # 'о' + 13: 0, # 'п' + 7: 0, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 0, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, + 62: { # 'â„–' + 63: 0, # 'e' + 45: 0, # '\xad' + 31: 0, # 'Ð' + 32: 0, # 'Б' + 35: 0, # 'Ð’' + 43: 0, # 'Г' + 37: 0, # 'Д' + 44: 0, # 'Е' + 55: 0, # 'Ж' + 47: 0, # 'З' + 40: 0, # 'И' + 59: 0, # 'Й' + 33: 0, # 'К' + 46: 0, # 'Л' + 38: 0, # 'М' + 36: 0, # 'Ð' + 41: 0, # 'О' + 30: 0, # 'П' + 39: 0, # 'Р' + 28: 0, # 'С' + 34: 0, # 'Т' + 51: 0, # 'У' + 48: 0, # 'Ф' + 49: 0, # 'Ð¥' + 53: 0, # 'Ц' + 50: 0, # 'Ч' + 54: 0, # 'Ш' + 57: 0, # 'Щ' + 61: 0, # 'Ъ' + 60: 0, # 'Ю' + 56: 0, # 'Я' + 1: 0, # 'а' + 18: 0, # 'б' + 9: 0, # 'в' + 20: 0, # 'г' + 11: 0, # 'д' + 3: 0, # 'е' + 23: 0, # 'ж' + 15: 0, # 'з' + 2: 0, # 'и' + 26: 0, # 'й' + 12: 0, # 'к' + 10: 0, # 'л' + 14: 0, # 'м' + 6: 0, # 'н' + 4: 0, # 'о' + 13: 0, # 'п' + 7: 0, # 'Ñ€' + 8: 0, # 'Ñ' + 5: 0, # 'Ñ‚' + 19: 0, # 'у' + 29: 0, # 'Ñ„' + 25: 0, # 'Ñ…' + 22: 0, # 'ц' + 21: 0, # 'ч' + 27: 0, # 'ш' + 24: 0, # 'щ' + 17: 0, # 'ÑŠ' + 52: 0, # 'ÑŒ' + 42: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + 58: 0, # 'Ñ”' + 62: 0, # 'â„–' + }, +} + +# 255: Undefined characters that did not exist in training text +# 254: Carriage/Return +# 253: symbol (punctuation) that 
does not belong to word +# 252: 0 - 9 +# 251: Control characters + +# Character Mapping Table(s): +ISO_8859_5_BULGARIAN_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' + 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' + 64: 253, # '@' + 65: 77, # 'A' + 66: 90, # 'B' + 67: 99, # 'C' + 68: 100, # 'D' + 69: 72, # 'E' + 70: 109, # 'F' + 71: 107, # 'G' + 72: 101, # 'H' + 73: 79, # 'I' + 74: 185, # 'J' + 75: 81, # 'K' + 76: 102, # 'L' + 77: 76, # 'M' + 78: 94, # 'N' + 79: 82, # 'O' + 80: 110, # 'P' + 81: 186, # 'Q' + 82: 108, # 'R' + 83: 91, # 'S' + 84: 74, # 'T' + 85: 119, # 'U' + 86: 84, # 'V' + 87: 96, # 'W' + 88: 111, # 'X' + 89: 187, # 'Y' + 90: 115, # 'Z' + 91: 253, # '[' + 92: 253, # '\\' + 93: 253, # ']' + 94: 253, # '^' + 95: 253, # '_' + 96: 253, # '`' + 97: 65, # 'a' + 98: 69, # 'b' + 99: 70, # 'c' + 100: 66, # 'd' + 101: 63, # 'e' + 102: 68, # 'f' + 103: 112, # 'g' + 104: 103, # 'h' + 105: 92, # 'i' + 106: 194, # 'j' + 107: 104, # 'k' + 108: 95, # 'l' + 109: 86, # 'm' + 110: 87, # 'n' + 111: 71, # 'o' + 112: 116, # 'p' + 113: 195, # 'q' + 114: 85, # 'r' + 115: 93, # 's' + 116: 97, # 't' + 117: 113, # 'u' + 118: 196, # 'v' + 119: 197, # 'w' + 120: 198, # 'x' + 121: 199, # 'y' + 122: 200, # 'z' + 123: 253, # '{' + 124: 253, # '|' + 125: 253, # '}' + 126: 253, # '~' + 127: 253, # '\x7f' + 128: 194, # '\x80' + 129: 195, # '\x81' + 130: 196, # '\x82' + 131: 197, # '\x83' + 132: 198, # '\x84' + 133: 199, # '\x85' + 134: 200, # '\x86' + 135: 201, # '\x87' + 136: 202, # '\x88' + 137: 203, # '\x89' + 138: 204, # '\x8a' + 139: 205, # '\x8b' + 140: 206, # '\x8c' + 141: 207, # '\x8d' + 142: 208, # '\x8e' + 143: 209, # '\x8f' + 144: 210, # '\x90' + 145: 211, # '\x91' + 146: 212, # '\x92' + 147: 213, # '\x93' + 148: 214, # '\x94' + 149: 215, # '\x95' + 150: 216, # '\x96' + 151: 217, # '\x97' + 152: 218, # '\x98' + 153: 219, # '\x99' + 154: 220, # '\x9a' + 155: 221, # '\x9b' + 156: 222, # '\x9c' + 157: 223, # '\x9d' + 158: 224, # '\x9e' + 159: 225, # '\x9f' + 160: 81, # '\xa0' + 161: 226, # 'Ё' + 162: 227, # 'Ђ' + 163: 228, # 'Ѓ' + 164: 229, # 'Є' + 165: 230, # 'Ѕ' + 166: 105, # 'І' + 167: 231, # 'Ї' + 168: 232, # 'Ј' + 169: 233, # 'Љ' + 170: 234, # 'Њ' + 171: 235, # 'Ћ' + 172: 236, # 'Ќ' + 173: 45, # '\xad' + 174: 237, # 'Ў' + 175: 238, # 'Џ' + 176: 31, # 'А' + 177: 32, # 'Б' + 178: 35, # 'В' + 179: 43, # 'Г' + 180: 37, # 'Д' + 181: 44, # 'Е' + 182: 55, # 'Ж' + 183: 47, # 'З' + 184: 40, # 'И' + 185: 59, # 'Й' + 186: 33, # 'К' + 187: 46, # 'Л' + 188: 38, # 'М' + 189: 36, # 'Н' + 190: 41, # 'О' + 191: 30, # 'П' + 192: 39, # 'Р' + 193: 28, # 'С' + 194: 34, # 'Т' + 195: 51, # 'У' + 196: 48, # 'Ф' + 197: 49, # 'Х' + 198: 53, # 'Ц' + 199: 50, # 'Ч' + 200: 54, # 'Ш' + 201: 57, # 'Щ' + 202: 61, # 'Ъ' + 203: 239, # 'Ы' + 204: 67, # 'Ь' + 205: 240, # 'Э' + 206: 60, # 'Ю' + 207: 56, # 'Я' + 208: 1, # 'а' + 209: 18, # 'б' + 210: 9, # 'в' + 211: 20, # 'г' + 212: 11, # 'д' + 213: 3, # 'е' + 214: 23, # 'ж' + 215: 15, # 'з' + 216: 2, # 'и' + 217: 26, # 'й' + 218: 12, # 'к' + 219: 10, # 'л' + 220: 14, # 'м' + 221: 6, # 'н' + 222: 4, # 'о' + 223: 13, # 'п' + 224: 7, # 'р' + 225: 8, # 'с' + 226: 5, # 'т' + 227: 19, # 'у' + 228: 29, # 'ф' + 229: 25, # 'х' + 230: 22, # 'ц' + 231: 21, # 'ч' + 232: 27, # 'ш' + 233: 24, # 'щ' + 234: 17, # 'ъ' + 235: 75, # 'ы' + 236: 52, # 'ь' + 237: 241, # 'э' + 238: 42, # 'ю' + 239: 16, # 'я' + 240: 62, # '№' + 241: 242, # 'ё' + 242: 243, # 'ђ' + 243: 244, # 'ѓ' + 244: 58, # 'є' + 245: 245, # 'ѕ' + 246: 98, # 'і' + 247: 246, # 'ї' + 248: 247, # 'ј' + 249: 248, # 'љ' + 250: 249, # 'њ' + 251: 250, # 'ћ' + 252: 251, # 'ќ' + 253: 91, # '§' + 254: 252, # 'ў' + 255: 253, # 'џ' +} + +ISO_8859_5_BULGARIAN_MODEL = SingleByteCharSetModel(charset_name='ISO-8859-5', + language='Bulgarian', + char_to_order_map=ISO_8859_5_BULGARIAN_CHAR_TO_ORDER, + language_model=BULGARIAN_LANG_MODEL, + typical_positive_ratio=0.969392, + keep_ascii_letters=False, + alphabet='АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя') +
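The two structures above work as a pair: the char-to-order table collapses every input byte to a frequency rank (255/254/253/252/251 are the reserved classes listed in the comment block), and BULGARIAN_LANG_MODEL scores adjacent pairs of the most frequent letters on the 0-3 scale. As a rough sketch only, not the vendored prober itself (the real bookkeeping lives in sbcharsetprober.py), a scorer over these tables could look like the following; the name pair_likelihood, the >= 2 threshold, and the 64-order cutoff are assumptions of the sketch:

    # Illustrative sketch: map bytes to frequency orders and measure how many
    # adjacent pairs the language model rates as "Likely" (2) or "Positive" (3).
    def pair_likelihood(data, char_to_order_map, language_model):
        likely = total = 0
        prev = None
        for byte in data:
            order = char_to_order_map.get(byte, 255)
            if order < 64:  # assumed cutoff: only the frequent letters are modelled
                if prev is not None:
                    total += 1
                    if language_model.get(prev, {}).get(order, 0) >= 2:
                        likely += 1
                prev = order
            else:
                prev = None  # digits, punctuation, etc. break the letter chain
        return likely / total if total else 0.0

    # e.g. pair_likelihood('къща'.encode('iso-8859-5'),
    #                      ISO_8859_5_BULGARIAN_CHAR_TO_ORDER,
    #                      BULGARIAN_LANG_MODEL) -> 1.0 for this word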
+WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' + 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' + 64: 253, # '@' + 65: 77, # 'A' + 66: 90, # 'B' + 67: 99, # 'C' + 68: 100, # 'D' + 69: 72, # 'E' + 70: 109, # 'F' + 71: 107, # 'G' + 72: 101, # 'H' + 73: 79, # 'I' + 74: 185, # 'J' + 75: 81, # 'K' + 76: 102, # 'L' + 77: 76, # 'M' + 78: 94, # 'N' + 79: 82, # 'O' + 80: 110, # 'P' + 81: 186, # 'Q' + 82: 108, # 'R' + 83: 91, # 'S' + 84: 74, # 'T' + 85: 119, # 'U' + 86: 84, # 'V' + 87: 96, # 'W' + 88: 111, # 'X' + 89: 187, # 'Y' + 90: 115, # 'Z' + 91: 253, # '[' + 92: 253, # '\\' + 93: 253, # ']' + 94: 253, # '^' + 95: 253, # '_' + 96: 253, # '`' + 97: 65, # 'a' + 98: 69, # 'b' + 99: 70, # 'c' + 100: 66, # 'd' + 101: 63, # 'e' + 102: 68, # 'f' + 103: 112, # 'g' + 104: 103, # 'h' + 105: 92, # 'i' + 106: 194, # 'j' + 107: 104, # 'k' + 108: 95, # 'l' + 109: 86, # 'm' + 110: 87, # 'n' + 111: 71, # 'o' + 112: 116, # 'p' + 113: 195, # 'q' + 114: 85, # 'r' + 115: 93, # 's' + 116: 97, # 't' + 117: 113, # 'u' + 118: 196, # 'v' + 119: 197, # 'w' + 120: 198, # 'x' + 121: 199, # 'y' + 122: 200, # 'z' + 123: 253, # '{' + 124: 253, # '|' + 125: 253, # '}' + 126: 253, # '~' + 127: 253, # '\x7f' + 128: 206, # 'Ђ' + 129: 207, # 'Ѓ' + 130: 208, # '‚' + 131: 209, # 'ѓ' + 132: 210, # '„' + 133: 211, # '…' + 134: 212, # '†' + 135: 213, # '‡' + 136: 120, # '€' + 137: 214, # '‰' + 138: 215, # 'Љ' + 139: 216, # '‹' + 140: 217, # 'Њ' + 141: 218, # 'Ќ' + 142: 219, # 'Ћ' + 143: 220, # 'Џ' + 144: 221, # 'ђ' + 145: 78, # '‘' + 146: 64, # '’' + 147: 83, # '“' + 148: 121, # '”' + 149: 98, # '•' + 150: 117, # '–' + 151: 105, # '—' + 152: 222, # None + 153: 223, # '™' + 154: 224, # 'љ' + 155: 225, # '›' + 156: 226, # 'њ' + 157: 227, # 'ќ' + 158: 228, # 'ћ' + 159: 229, # 'џ' + 160: 88, # '\xa0' + 161: 230, # 'Ў' + 162: 231, # 'ў' + 163: 232, # 'Ј' + 164: 233, # '¤' + 165: 122, # 'Ґ' + 166: 89, # '¦' + 167: 106, # '§' + 168: 234, # 'Ё' + 169: 235, # '©' + 170: 236, # 'Є' + 171: 237, # '«' + 172: 238, # '¬' + 173: 45, # '\xad' + 174: 239, # '®' + 175: 240, # 'Ї' + 176: 73, # '°' + 177: 80, # '±' + 178: 118, # 'І' + 179: 114, # 'і' + 180: 241, # 'ґ' + 181: 242, # 'µ' + 182: 243, # '¶' + 183: 244, # '·' + 184: 245, # 'ё' + 185: 62, # '№' + 186: 58, # 'є' + 187: 246, # '»' + 188: 247, # 'ј' + 189: 248, # 'Ѕ' + 190: 249, # 'ѕ' + 191: 250, # 'ї' + 192: 31, # 'А' + 193: 32, # 'Б' + 194: 35, # 'В' + 195: 43, # 'Г' + 196: 37, # 'Д' + 197: 44, # 'Е' + 198: 55, # 'Ж' + 199: 47, # 'З' + 200: 40, # 'И' + 201: 59, # 'Й' + 202: 33, # 'К' + 203: 46, # 'Л' + 204: 38, # 'М' + 205: 36, # 'Н' + 206: 41, # 'О' + 207: 30, # 'П' + 208: 39, # 'Р' + 209: 28, # 'С' + 210: 34, # 'Т' + 211: 51, # 'У' + 212: 48, # 'Ф' + 213: 49, # 'Х' + 214: 53, # 'Ц' + 215: 50, # 'Ч' + 216: 54, # 'Ш' + 217: 57, # 'Щ' + 218: 61, # 'Ъ' + 219: 251, # 'Ы' + 220: 67, # 'Ь' + 221: 252, # 'Э' + 222: 60, # 'Ю' + 223: 56, # 'Я' + 224: 1, # 'а' + 225: 18, # 'б' + 226: 9, # 'в' + 227: 20, # 'г' + 228: 11, # 'д' + 229: 3, # 'е' + 230: 23, # 'ж' + 231: 15, # 'з' + 232: 2, # 'и' + 233: 26, # 'й' + 234: 12, # 'к' + 235: 10, # 'л' + 236: 14, # 'м' + 237: 6, # 'н' + 238: 4, # 'о' + 239: 13, # 'п' + 240: 7, # 'р' + 241: 8, # 'с' + 242: 5, # 'т' + 243: 19, # 'у' + 244: 29, # 'ф' + 245: 25, # 'х' + 246: 22, # 'ц' + 247: 21, # 'ч' + 248: 27, # 'ш' + 249: 24, # 'щ' + 250: 17, # 'ъ' + 251: 75, # 'ы' + 252: 52, # 'ь' + 253: 253, # 'э' + 254: 42, # 'ю' + 255: 16, # 'я' +} + +WINDOWS_1251_BULGARIAN_MODEL = SingleByteCharSetModel(charset_name='windows-1251', + language='Bulgarian', + char_to_order_map=WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER, + language_model=BULGARIAN_LANG_MODEL, + typical_positive_ratio=0.969392, + keep_ascii_letters=False, + alphabet='АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя') +
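Both models hand the same BULGARIAN_LANG_MODEL to SingleByteCharSetModel and differ only in their char-to-order maps, so identical Bulgarian text should collapse to one order sequence no matter which of the two encodings carried it. A quick illustrative check of that property (not part of the vendored file):

    # The same word, encoded two ways, should yield identical order sequences.
    word = 'България'
    iso = [ISO_8859_5_BULGARIAN_CHAR_TO_ORDER[b] for b in word.encode('iso-8859-5')]
    win = [WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER[b] for b in word.encode('windows-1251')]
    assert iso == win == [32, 17, 10, 20, 1, 7, 2, 16]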
diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/langgreekmodel.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langgreekmodel.py new file mode 100644 index 0000000..d99528e --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langgreekmodel.py @@ -0,0 +1,4398 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel + + +# 3: Positive +# 2: Likely +# 1: Unlikely +# 0: Negative + +GREEK_LANG_MODEL = { + 60: { # 'e' + 60: 2, # 'e' + 55: 1, # 'o' + 58: 2, # 't' + 36: 1, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'Ό' + 31: 0, # 'Α' + 51: 0, # 'Β' + 43: 0, # 'Γ' + 41: 0, # 'Δ' + 34: 0, # 'Ε' + 40: 0, # 'Η' + 52: 0, # 'Θ' + 47: 0, # 'Ι' + 44: 0, # 'Κ' + 53: 0, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Ν' + 59: 0, # 'Ξ' + 39: 0, # 'Ο' + 35: 0, # 'Π' + 48: 0, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 0, # 'Υ' + 56: 0, # 'Φ' + 50: 1, # 'Χ' + 57: 0, # 'Ω' + 17: 0, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 0, # 'ί' + 1: 0, # 'α' + 29: 0, # 'β' + 20: 0, # 'γ' + 21: 0, # 'δ' + 3: 0, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 0, # 'ι' + 11: 0, # 'κ' + 16: 0, # 'λ' + 10: 0, # 'μ' + 6: 0, # 'ν' + 30: 0, # 'ξ' + 4: 0, # 'ο' + 9: 0, # 'π' + 8: 0, # 'ρ' + 14: 0, # 'ς' + 7: 0, # 'σ' + 2: 0, # 'τ' + 12: 0, # 'υ' + 28: 0, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ό' + 26: 0, # 'ύ' + 27: 0, # 'ώ' + }, + 55: { # 'o' + 60: 0, # 'e' + 55: 2, # 'o' + 58: 2, # 't' + 36: 1, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'Ό' + 31: 0, # 'Α' + 51: 0, # 'Β' + 43: 0, # 'Γ' + 41: 0, # 'Δ' + 34: 0, # 'Ε' + 40: 0, # 'Η' + 52: 0, # 'Θ' + 47: 0, # 'Ι' + 44: 0, # 'Κ' + 53: 0, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Ν' + 59: 0, # 'Ξ' + 39: 0, # 'Ο' + 35: 0, # 'Π' + 48: 0, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 0, # 'Υ' + 56: 0, # 'Φ' + 50: 0, # 'Χ' + 57: 0, # 'Ω' + 17: 0, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 0, # 'ί' + 1: 0, # 'α' + 29: 0, # 'β' + 20: 0, # 'γ' + 21: 0, # 'δ' + 3: 0, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 0, # 'ι' + 11: 0, # 'κ' + 16: 0, # 'λ' + 10: 0, # 'μ' + 6: 1, # 'ν' + 30: 0, # 'ξ' + 4: 0, # 'ο' + 9: 0, # 'π' + 8: 0, # 'ρ' + 14: 0, # 'ς' + 7: 0, # 'σ' + 2: 0, # 'τ' + 12: 1, # 'υ' + 28: 0, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ό' + 26: 0, # 'ύ' + 27: 0, # 'ώ' + }, + 58: { # 't' + 60: 2, # 'e' + 55: 1, # 'o' + 58: 1, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'Ό' + 31: 0, # 'Α' + 51: 0, # 'Β' + 43: 0, # 'Γ' + 41: 0, # 'Δ' + 34: 0, # 'Ε' + 40: 0, # 'Η' + 52: 0, # 'Θ' + 47: 0, # 'Ι' + 44: 0, # 'Κ' + 53: 0, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Ν' + 59: 0, # 'Ξ' + 39: 0, # 'Ο' + 35: 0, # 'Π' + 48: 0, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 0, # 'Υ' + 56: 0, # 'Φ' + 50: 0, # 'Χ' + 57: 0, # 'Ω' + 17: 2, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 0, # 'ί' + 1: 0, # 'α' + 29: 0, # 'β' + 20: 0, # 'γ' + 21: 0, # 'δ' + 3: 0, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 0, # 'ι' + 11: 0, # 'κ' + 16: 0, # 'λ' + 10: 0, # 'μ' + 6: 0, # 'ν' + 30: 0, # 'ξ' + 4: 1, # 'ο' + 9: 0, # 'π' + 8: 0, # 'ρ' + 14: 0, # 'ς' + 7: 0, # 'σ' + 2: 0, # 'τ' + 12: 0, # 'υ' + 28: 0, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ό' + 26: 0, # 'ύ' + 27: 0, # 'ώ' + }, + 36: { # '·' + 60: 0, # 'e' + 55: 0, # 'o' +
58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 0, # 'Α' + 51: 0, # 'Î’' + 43: 0, # 'Γ' + 41: 0, # 'Δ' + 34: 0, # 'Ε' + 40: 0, # 'Η' + 52: 0, # 'Θ' + 47: 0, # 'Ι' + 44: 0, # 'Κ' + 53: 0, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Î' + 59: 0, # 'Ξ' + 39: 0, # 'Ο' + 35: 0, # 'Π' + 48: 0, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 0, # 'Î¥' + 56: 0, # 'Φ' + 50: 0, # 'Χ' + 57: 0, # 'Ω' + 17: 0, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 0, # 'ί' + 1: 0, # 'α' + 29: 0, # 'β' + 20: 0, # 'γ' + 21: 0, # 'δ' + 3: 0, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 0, # 'ι' + 11: 0, # 'κ' + 16: 0, # 'λ' + 10: 0, # 'μ' + 6: 0, # 'ν' + 30: 0, # 'ξ' + 4: 0, # 'ο' + 9: 0, # 'Ï€' + 8: 0, # 'Ï' + 14: 0, # 'Ï‚' + 7: 0, # 'σ' + 2: 0, # 'Ï„' + 12: 0, # 'Ï…' + 28: 0, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ÏŒ' + 26: 0, # 'Ï' + 27: 0, # 'ÏŽ' + }, + 61: { # 'Ά' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 0, # 'Α' + 51: 0, # 'Î’' + 43: 0, # 'Γ' + 41: 0, # 'Δ' + 34: 0, # 'Ε' + 40: 0, # 'Η' + 52: 0, # 'Θ' + 47: 0, # 'Ι' + 44: 0, # 'Κ' + 53: 0, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Î' + 59: 0, # 'Ξ' + 39: 0, # 'Ο' + 35: 0, # 'Π' + 48: 0, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 0, # 'Î¥' + 56: 0, # 'Φ' + 50: 0, # 'Χ' + 57: 0, # 'Ω' + 17: 0, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 0, # 'ί' + 1: 0, # 'α' + 29: 0, # 'β' + 20: 1, # 'γ' + 21: 2, # 'δ' + 3: 0, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 0, # 'ι' + 11: 0, # 'κ' + 16: 2, # 'λ' + 10: 0, # 'μ' + 6: 0, # 'ν' + 30: 0, # 'ξ' + 4: 0, # 'ο' + 9: 1, # 'Ï€' + 8: 2, # 'Ï' + 14: 0, # 'Ï‚' + 7: 0, # 'σ' + 2: 0, # 'Ï„' + 12: 0, # 'Ï…' + 28: 0, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ÏŒ' + 26: 0, # 'Ï' + 27: 0, # 'ÏŽ' + }, + 46: { # 'Έ' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 0, # 'Α' + 51: 0, # 'Î’' + 43: 0, # 'Γ' + 41: 0, # 'Δ' + 34: 0, # 'Ε' + 40: 0, # 'Η' + 52: 0, # 'Θ' + 47: 0, # 'Ι' + 44: 0, # 'Κ' + 53: 0, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Î' + 59: 0, # 'Ξ' + 39: 0, # 'Ο' + 35: 0, # 'Π' + 48: 0, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 0, # 'Î¥' + 56: 0, # 'Φ' + 50: 0, # 'Χ' + 57: 0, # 'Ω' + 17: 0, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 0, # 'ί' + 1: 0, # 'α' + 29: 2, # 'β' + 20: 2, # 'γ' + 21: 0, # 'δ' + 3: 0, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 0, # 'ι' + 11: 2, # 'κ' + 16: 2, # 'λ' + 10: 0, # 'μ' + 6: 3, # 'ν' + 30: 2, # 'ξ' + 4: 0, # 'ο' + 9: 2, # 'Ï€' + 8: 2, # 'Ï' + 14: 0, # 'Ï‚' + 7: 1, # 'σ' + 2: 2, # 'Ï„' + 12: 0, # 'Ï…' + 28: 2, # 'φ' + 23: 3, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ÏŒ' + 26: 0, # 'Ï' + 27: 0, # 'ÏŽ' + }, + 54: { # 'ÎŒ' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 0, # 'Α' + 51: 0, # 'Î’' + 43: 0, # 'Γ' + 41: 0, # 'Δ' + 34: 0, # 'Ε' + 40: 0, # 'Η' + 52: 0, # 'Θ' + 47: 0, # 'Ι' + 44: 0, # 'Κ' + 53: 0, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Î' + 59: 0, # 'Ξ' + 39: 0, # 'Ο' + 35: 0, # 'Π' + 48: 0, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 0, # 'Î¥' + 56: 0, # 'Φ' + 50: 0, # 'Χ' + 57: 0, # 'Ω' + 17: 0, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 0, # 'ί' + 1: 0, # 'α' + 29: 0, # 'β' + 20: 0, # 'γ' + 21: 0, # 'δ' + 3: 0, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 0, # 'ι' + 11: 0, # 'κ' + 16: 2, # 'λ' + 10: 2, # 'μ' + 6: 2, # 'ν' + 30: 0, # 'ξ' + 4: 0, # 'ο' + 9: 2, # 'Ï€' + 8: 0, # 'Ï' + 14: 0, # 'Ï‚' + 7: 2, # 'σ' + 2: 
3, # 'Ï„' + 12: 0, # 'Ï…' + 28: 0, # 'φ' + 23: 2, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ÏŒ' + 26: 0, # 'Ï' + 27: 0, # 'ÏŽ' + }, + 31: { # 'Α' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 0, # 'Α' + 51: 2, # 'Î’' + 43: 2, # 'Γ' + 41: 1, # 'Δ' + 34: 0, # 'Ε' + 40: 0, # 'Η' + 52: 2, # 'Θ' + 47: 2, # 'Ι' + 44: 2, # 'Κ' + 53: 2, # 'Λ' + 38: 2, # 'Μ' + 49: 2, # 'Î' + 59: 1, # 'Ξ' + 39: 0, # 'Ο' + 35: 2, # 'Π' + 48: 2, # 'Ρ' + 37: 2, # 'Σ' + 33: 2, # 'Τ' + 45: 2, # 'Î¥' + 56: 2, # 'Φ' + 50: 0, # 'Χ' + 57: 0, # 'Ω' + 17: 0, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 0, # 'ί' + 1: 0, # 'α' + 29: 0, # 'β' + 20: 2, # 'γ' + 21: 0, # 'δ' + 3: 0, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 1, # 'θ' + 5: 0, # 'ι' + 11: 2, # 'κ' + 16: 3, # 'λ' + 10: 2, # 'μ' + 6: 3, # 'ν' + 30: 2, # 'ξ' + 4: 0, # 'ο' + 9: 3, # 'Ï€' + 8: 3, # 'Ï' + 14: 2, # 'Ï‚' + 7: 2, # 'σ' + 2: 0, # 'Ï„' + 12: 3, # 'Ï…' + 28: 2, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ÏŒ' + 26: 2, # 'Ï' + 27: 0, # 'ÏŽ' + }, + 51: { # 'Î’' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 2, # 'Α' + 51: 0, # 'Î’' + 43: 0, # 'Γ' + 41: 0, # 'Δ' + 34: 1, # 'Ε' + 40: 1, # 'Η' + 52: 0, # 'Θ' + 47: 1, # 'Ι' + 44: 0, # 'Κ' + 53: 1, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Î' + 59: 0, # 'Ξ' + 39: 2, # 'Ο' + 35: 0, # 'Π' + 48: 0, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 0, # 'Î¥' + 56: 0, # 'Φ' + 50: 0, # 'Χ' + 57: 0, # 'Ω' + 17: 2, # 'ά' + 18: 2, # 'έ' + 22: 2, # 'ή' + 15: 0, # 'ί' + 1: 2, # 'α' + 29: 0, # 'β' + 20: 0, # 'γ' + 21: 0, # 'δ' + 3: 2, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 2, # 'ι' + 11: 0, # 'κ' + 16: 2, # 'λ' + 10: 0, # 'μ' + 6: 0, # 'ν' + 30: 0, # 'ξ' + 4: 2, # 'ο' + 9: 0, # 'Ï€' + 8: 2, # 'Ï' + 14: 0, # 'Ï‚' + 7: 0, # 'σ' + 2: 0, # 'Ï„' + 12: 0, # 'Ï…' + 28: 0, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ÏŒ' + 26: 0, # 'Ï' + 27: 0, # 'ÏŽ' + }, + 43: { # 'Γ' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 1, # 'Α' + 51: 0, # 'Î’' + 43: 2, # 'Γ' + 41: 0, # 'Δ' + 34: 2, # 'Ε' + 40: 1, # 'Η' + 52: 0, # 'Θ' + 47: 2, # 'Ι' + 44: 1, # 'Κ' + 53: 1, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Î' + 59: 0, # 'Ξ' + 39: 1, # 'Ο' + 35: 0, # 'Π' + 48: 2, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 2, # 'Î¥' + 56: 0, # 'Φ' + 50: 1, # 'Χ' + 57: 2, # 'Ω' + 17: 0, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 2, # 'ί' + 1: 2, # 'α' + 29: 0, # 'β' + 20: 0, # 'γ' + 21: 0, # 'δ' + 3: 2, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 3, # 'ι' + 11: 0, # 'κ' + 16: 2, # 'λ' + 10: 0, # 'μ' + 6: 2, # 'ν' + 30: 0, # 'ξ' + 4: 0, # 'ο' + 9: 0, # 'Ï€' + 8: 2, # 'Ï' + 14: 0, # 'Ï‚' + 7: 0, # 'σ' + 2: 0, # 'Ï„' + 12: 0, # 'Ï…' + 28: 0, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ÏŒ' + 26: 0, # 'Ï' + 27: 0, # 'ÏŽ' + }, + 41: { # 'Δ' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 0, # 'Α' + 51: 0, # 'Î’' + 43: 0, # 'Γ' + 41: 0, # 'Δ' + 34: 2, # 'Ε' + 40: 2, # 'Η' + 52: 0, # 'Θ' + 47: 2, # 'Ι' + 44: 0, # 'Κ' + 53: 0, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Î' + 59: 0, # 'Ξ' + 39: 2, # 'Ο' + 35: 0, # 'Π' + 48: 0, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 0, # 'Î¥' + 56: 0, # 'Φ' + 50: 0, # 'Χ' + 57: 2, # 'Ω' + 17: 0, # 'ά' + 18: 0, # 'έ' + 22: 2, # 'ή' + 15: 2, # 'ί' + 1: 0, # 'α' + 29: 0, # 'β' + 20: 0, # 'γ' + 21: 0, # 'δ' + 3: 3, # 'ε' + 32: 0, # 'ζ' + 13: 2, # 
'η' + 25: 0, # 'θ' + 5: 3, # 'ι' + 11: 0, # 'κ' + 16: 0, # 'λ' + 10: 0, # 'μ' + 6: 0, # 'ν' + 30: 0, # 'ξ' + 4: 2, # 'ο' + 9: 0, # 'Ï€' + 8: 2, # 'Ï' + 14: 0, # 'Ï‚' + 7: 0, # 'σ' + 2: 0, # 'Ï„' + 12: 2, # 'Ï…' + 28: 0, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 2, # 'ω' + 19: 1, # 'ÏŒ' + 26: 2, # 'Ï' + 27: 2, # 'ÏŽ' + }, + 34: { # 'Ε' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 2, # 'Α' + 51: 0, # 'Î’' + 43: 2, # 'Γ' + 41: 2, # 'Δ' + 34: 0, # 'Ε' + 40: 0, # 'Η' + 52: 0, # 'Θ' + 47: 2, # 'Ι' + 44: 2, # 'Κ' + 53: 2, # 'Λ' + 38: 2, # 'Μ' + 49: 2, # 'Î' + 59: 1, # 'Ξ' + 39: 0, # 'Ο' + 35: 2, # 'Π' + 48: 2, # 'Ρ' + 37: 2, # 'Σ' + 33: 2, # 'Τ' + 45: 2, # 'Î¥' + 56: 0, # 'Φ' + 50: 2, # 'Χ' + 57: 2, # 'Ω' + 17: 3, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 3, # 'ί' + 1: 0, # 'α' + 29: 0, # 'β' + 20: 3, # 'γ' + 21: 2, # 'δ' + 3: 1, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 1, # 'θ' + 5: 2, # 'ι' + 11: 3, # 'κ' + 16: 3, # 'λ' + 10: 2, # 'μ' + 6: 3, # 'ν' + 30: 2, # 'ξ' + 4: 0, # 'ο' + 9: 3, # 'Ï€' + 8: 2, # 'Ï' + 14: 0, # 'Ï‚' + 7: 2, # 'σ' + 2: 2, # 'Ï„' + 12: 2, # 'Ï…' + 28: 2, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ÏŒ' + 26: 1, # 'Ï' + 27: 0, # 'ÏŽ' + }, + 40: { # 'Η' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 0, # 'Α' + 51: 0, # 'Î’' + 43: 1, # 'Γ' + 41: 0, # 'Δ' + 34: 0, # 'Ε' + 40: 0, # 'Η' + 52: 2, # 'Θ' + 47: 0, # 'Ι' + 44: 2, # 'Κ' + 53: 0, # 'Λ' + 38: 2, # 'Μ' + 49: 2, # 'Î' + 59: 0, # 'Ξ' + 39: 0, # 'Ο' + 35: 2, # 'Π' + 48: 2, # 'Ρ' + 37: 2, # 'Σ' + 33: 2, # 'Τ' + 45: 1, # 'Î¥' + 56: 0, # 'Φ' + 50: 0, # 'Χ' + 57: 0, # 'Ω' + 17: 0, # 'ά' + 18: 0, # 'έ' + 22: 0, # 'ή' + 15: 0, # 'ί' + 1: 0, # 'α' + 29: 0, # 'β' + 20: 0, # 'γ' + 21: 0, # 'δ' + 3: 0, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 0, # 'ι' + 11: 0, # 'κ' + 16: 2, # 'λ' + 10: 0, # 'μ' + 6: 1, # 'ν' + 30: 0, # 'ξ' + 4: 0, # 'ο' + 9: 0, # 'Ï€' + 8: 0, # 'Ï' + 14: 0, # 'Ï‚' + 7: 0, # 'σ' + 2: 0, # 'Ï„' + 12: 0, # 'Ï…' + 28: 0, # 'φ' + 23: 1, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ÏŒ' + 26: 0, # 'Ï' + 27: 0, # 'ÏŽ' + }, + 52: { # 'Θ' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 2, # 'Α' + 51: 0, # 'Î’' + 43: 0, # 'Γ' + 41: 0, # 'Δ' + 34: 2, # 'Ε' + 40: 2, # 'Η' + 52: 0, # 'Θ' + 47: 0, # 'Ι' + 44: 0, # 'Κ' + 53: 0, # 'Λ' + 38: 0, # 'Μ' + 49: 0, # 'Î' + 59: 0, # 'Ξ' + 39: 2, # 'Ο' + 35: 0, # 'Π' + 48: 1, # 'Ρ' + 37: 0, # 'Σ' + 33: 0, # 'Τ' + 45: 1, # 'Î¥' + 56: 0, # 'Φ' + 50: 0, # 'Χ' + 57: 0, # 'Ω' + 17: 0, # 'ά' + 18: 2, # 'έ' + 22: 0, # 'ή' + 15: 0, # 'ί' + 1: 3, # 'α' + 29: 0, # 'β' + 20: 0, # 'γ' + 21: 0, # 'δ' + 3: 2, # 'ε' + 32: 0, # 'ζ' + 13: 0, # 'η' + 25: 0, # 'θ' + 5: 0, # 'ι' + 11: 0, # 'κ' + 16: 0, # 'λ' + 10: 0, # 'μ' + 6: 0, # 'ν' + 30: 0, # 'ξ' + 4: 0, # 'ο' + 9: 0, # 'Ï€' + 8: 0, # 'Ï' + 14: 0, # 'Ï‚' + 7: 0, # 'σ' + 2: 0, # 'Ï„' + 12: 2, # 'Ï…' + 28: 0, # 'φ' + 23: 0, # 'χ' + 42: 0, # 'ψ' + 24: 0, # 'ω' + 19: 0, # 'ÏŒ' + 26: 2, # 'Ï' + 27: 0, # 'ÏŽ' + }, + 47: { # 'Ι' + 60: 0, # 'e' + 55: 0, # 'o' + 58: 0, # 't' + 36: 0, # '·' + 61: 0, # 'Ά' + 46: 0, # 'Έ' + 54: 0, # 'ÎŒ' + 31: 2, # 'Α' + 51: 1, # 'Î’' + 43: 1, # 'Γ' + 41: 2, # 'Δ' + 34: 2, # 'Ε' + 40: 2, # 'Η' + 52: 0, # 'Θ' + 47: 0, # 'Ι' + 44: 2, # 'Κ' + 53: 2, # 'Λ' + 38: 2, # 'Μ' + 49: 2, # 'Î' + 59: 0, # 'Ξ' + 39: 2, # 'Ο' + 35: 0, # 'Π' + 48: 2, # 'Ρ' + 37: 2, # 'Σ' + 33: 2, # 'Τ' + 45: 0, # 'Î¥' + 56: 2, # 'Φ' + 50: 
+    # ... (remaining rows of GREEK_LANG_MODEL elided: the machine-generated
+    # table continues with one row per frequent Greek character, each row
+    # scoring the likelihood, 0-3, of every possible following character) ...
+}
+
+# 255: Undefined characters that did not exist in training text
+# 254: Carriage/Return
+# 253: symbol (punctuation) that does not belong to word
+# 252: 0 - 9
+# 251: Control characters
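Read together with the legend above, the tables in this file are everything a single-byte prober needs: each byte is first translated to a frequency order through a char-to-order map (WINDOWS_1253_GREEK_CHAR_TO_ORDER, defined just below), and consecutive orders are then looked up in GREEK_LANG_MODEL. The sketch below illustrates that flow; the cutoff of 64 and the scoring rule are simplifying assumptions, not chardet's actual SingleByteCharSetProber logic.

    def looks_like_greek(data: bytes) -> float:
        """Rough positive-pair ratio for windows-1253 bytes (illustrative only)."""
        FREQUENT_CUTOFF = 64  # assumption: orders below this are frequent letters
        pairs = hits = 0
        prev = None
        for byte in data:
            order = WINDOWS_1253_GREEK_CHAR_TO_ORDER[byte]
            if order < FREQUENT_CUTOFF:  # a frequent Greek letter
                if prev is not None:
                    pairs += 1
                    # 2 ('Likely') or 3 ('Positive') counts as a hit
                    if GREEK_LANG_MODEL.get(prev, {}).get(order, 0) >= 2:
                        hits += 1
                prev = order
            else:  # digit, punctuation, control or rare byte: break the word
                prev = None
        return hits / pairs if pairs else 0.0

chardet's real prober feeds a ratio along these lines, together with typical_positive_ratio (0.982851 for Greek, below), into its confidence calculation.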
+
+# Character Mapping Table(s):
+WINDOWS_1253_GREEK_CHAR_TO_ORDER = {
+    # ... (256 byte-to-order entries elided: control bytes map to 255/254/251,
+    # digits to 252, punctuation and other non-word bytes to 253, and each
+    # Greek and Latin letter to its frequency rank in the training text) ...
+}
+
+WINDOWS_1253_GREEK_MODEL = SingleByteCharSetModel(charset_name='windows-1253',
+                                                  language='Greek',
+                                                  char_to_order_map=WINDOWS_1253_GREEK_CHAR_TO_ORDER,
+                                                  language_model=GREEK_LANG_MODEL,
+                                                  typical_positive_ratio=0.982851,
+                                                  keep_ascii_letters=False,
+                                                  alphabet='ΆΈΉΊΌΎΏΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩάέήίαβγδεζηθικλμνξοπρςστυφχψωόύώ')
+
+ISO_8859_7_GREEK_CHAR_TO_ORDER = {
+    # ... (entries elided: ASCII bytes are handled identically, and the
+    # ISO-8859-7 Greek range maps to the same frequency orders as windows-1253) ...
+}
+
+ISO_8859_7_GREEK_MODEL = SingleByteCharSetModel(charset_name='ISO-8859-7',
+                                                language='Greek',
+                                                char_to_order_map=ISO_8859_7_GREEK_CHAR_TO_ORDER,
+                                                language_model=GREEK_LANG_MODEL,
+                                                typical_positive_ratio=0.982851,
+                                                keep_ascii_letters=False,
+                                                alphabet='ΆΈΉΊΌΎΏΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩάέήίαβγδεζηθικλμνξοπρςστυφχψωόύώ')
+
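With both Greek models defined, the detector can recognize either encoding from raw bytes. A quick usage sketch, assuming a standalone chardet install (the copy vendored under pip._vendor is the same code):

    import chardet  # pip install chardet

    # A reasonably long sample gives the single-byte probers enough bigrams.
    text = 'Η γρήγορη καφέ αλεπού πηδάει πάνω από τον τεμπέλη σκύλο. ' * 5
    sample = text.encode('iso-8859-7')

    result = chardet.detect(sample)
    # expected: ISO-8859-7 (or the near-identical windows-1253), language 'Greek'
    print(result['encoding'], result['language'])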
diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/langhebrewmodel.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langhebrewmodel.py
new file mode 100644
index 0000000..484c652
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langhebrewmodel.py
@@ -0,0 +1,4383 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel
+
+
+# 3: Positive
+# 2: Likely
+# 1: Unlikely
+# 0: Negative
+
+HEBREW_LANG_MODEL = {
+    # ... (rows for the Latin letters a, c, d, e, i, l, n, o, r, s, t, u
+    # elided: machine-generated likelihood scores, as in the Greek model) ...
+    34: {  # '\xa0'
+        50: 1,  # 'a'
+        60: 0,  # 'c'
+        61: 1,  # 'd'
+        42: 0,  # 'e'
+        53: 1,  # 'i'
+        56: 0,  # 'l'
+        54: 1,  # 'n'
+        49: 1,  # 'o'
+        51: 0,  # 'r'
+        43: 1,  # 's'
+        44: 1,  # 't'
+        63: 0,  # 'u'
+        34: 2,  # '\xa0'
+        55: 0,  # '´'
+        48: 0,  # '¼'
+        39: 0,  # '½'
+        57: 0,  # '¾'
+        30: 0,  # 'ְ'
+        59: 0,  # 'ֱ'
+        41: 0,  # 'ֲ'
+        33: 0,  # 'ִ'
+        37: 0,
# 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 1, # 'ב' + 20: 1, # '×’' + 16: 1, # 'ד' + 3: 1, # '×”' + 2: 1, # 'ו' + 24: 1, # '×–' + 14: 1, # '×—' + 22: 1, # 'ט' + 1: 2, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 2, # 'מ' + 23: 0, # 'ן' + 12: 1, # '× ' + 19: 1, # 'ס' + 13: 1, # '×¢' + 26: 0, # '×£' + 18: 1, # 'פ' + 27: 0, # '×¥' + 21: 1, # 'צ' + 17: 1, # '×§' + 7: 1, # 'ר' + 10: 1, # 'ש' + 5: 1, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 55: { # '´' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 1, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 1, # '×”' + 2: 1, # 'ו' + 24: 0, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 2, # '×™' + 25: 0, # 'ך' + 15: 0, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 1, # 'מ' + 23: 1, # 'ן' + 12: 1, # '× ' + 19: 1, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 1, # 'ר' + 10: 1, # 'ש' + 5: 0, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 48: { # '¼' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 0, # '×”' + 2: 1, # 'ו' + 24: 0, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 0, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 0, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 0, # 'ר' + 10: 0, # 'ש' + 5: 0, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 39: { # '½' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 0, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 0, # '×”' + 2: 0, # 'ו' + 24: 0, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 0, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 0, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 0, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 1, # 'צ' + 17: 1, # '×§' + 7: 0, # 'ר' + 10: 0, # 'ש' + 5: 0, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # 
'†' + 40: 0, # '…' + }, + 57: { # '¾' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 0, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 0, # '×”' + 2: 0, # 'ו' + 24: 0, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 0, # '×™' + 25: 0, # 'ך' + 15: 0, # '×›' + 4: 0, # 'ל' + 11: 0, # '×' + 6: 0, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 0, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 0, # 'ר' + 10: 0, # 'ש' + 5: 0, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 30: { # 'Ö°' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 2, # 'ב' + 20: 2, # '×’' + 16: 2, # 'ד' + 3: 2, # '×”' + 2: 2, # 'ו' + 24: 2, # '×–' + 14: 2, # '×—' + 22: 2, # 'ט' + 1: 2, # '×™' + 25: 2, # 'ך' + 15: 2, # '×›' + 4: 2, # 'ל' + 11: 1, # '×' + 6: 2, # 'מ' + 23: 0, # 'ן' + 12: 2, # '× ' + 19: 2, # 'ס' + 13: 2, # '×¢' + 26: 0, # '×£' + 18: 2, # 'פ' + 27: 0, # '×¥' + 21: 2, # 'צ' + 17: 2, # '×§' + 7: 2, # 'ר' + 10: 2, # 'ש' + 5: 2, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 59: { # 'Ö±' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 1, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 0, # '×' + 8: 1, # 'ב' + 20: 1, # '×’' + 16: 0, # 'ד' + 3: 0, # '×”' + 2: 0, # 'ו' + 24: 1, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 1, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 2, # 'ל' + 11: 0, # '×' + 6: 2, # 'מ' + 23: 0, # 'ן' + 12: 1, # '× ' + 19: 0, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 1, # 'ר' + 10: 1, # 'ש' + 5: 0, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 41: { # 'Ö²' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 0, # '×' + 8: 2, # 'ב' + 20: 1, # '×’' + 16: 2, # 'ד' + 3: 1, # '×”' + 2: 1, # 'ו' + 24: 1, # '×–' + 
14: 1, # '×—' + 22: 1, # 'ט' + 1: 1, # '×™' + 25: 1, # 'ך' + 15: 1, # '×›' + 4: 2, # 'ל' + 11: 0, # '×' + 6: 2, # 'מ' + 23: 0, # 'ן' + 12: 2, # '× ' + 19: 1, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 1, # 'פ' + 27: 0, # '×¥' + 21: 2, # 'צ' + 17: 1, # '×§' + 7: 2, # 'ר' + 10: 2, # 'ש' + 5: 1, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 33: { # 'Ö´' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 1, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 1, # 'Ö´' + 37: 0, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 1, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 1, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 2, # 'ב' + 20: 2, # '×’' + 16: 2, # 'ד' + 3: 1, # '×”' + 2: 1, # 'ו' + 24: 2, # '×–' + 14: 1, # '×—' + 22: 1, # 'ט' + 1: 3, # '×™' + 25: 1, # 'ך' + 15: 2, # '×›' + 4: 2, # 'ל' + 11: 2, # '×' + 6: 2, # 'מ' + 23: 2, # 'ן' + 12: 2, # '× ' + 19: 2, # 'ס' + 13: 1, # '×¢' + 26: 0, # '×£' + 18: 2, # 'פ' + 27: 1, # '×¥' + 21: 2, # 'צ' + 17: 2, # '×§' + 7: 2, # 'ר' + 10: 2, # 'ש' + 5: 2, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 37: { # 'Öµ' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 1, # 'Ö·' + 29: 1, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 2, # 'ב' + 20: 1, # '×’' + 16: 2, # 'ד' + 3: 2, # '×”' + 2: 1, # 'ו' + 24: 1, # '×–' + 14: 2, # '×—' + 22: 1, # 'ט' + 1: 3, # '×™' + 25: 2, # 'ך' + 15: 1, # '×›' + 4: 2, # 'ל' + 11: 2, # '×' + 6: 1, # 'מ' + 23: 2, # 'ן' + 12: 2, # '× ' + 19: 1, # 'ס' + 13: 2, # '×¢' + 26: 1, # '×£' + 18: 1, # 'פ' + 27: 1, # '×¥' + 21: 1, # 'צ' + 17: 1, # '×§' + 7: 2, # 'ר' + 10: 2, # 'ש' + 5: 2, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 36: { # 'Ö¶' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 1, # 'Ö·' + 29: 1, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 2, # 'ב' + 20: 1, # '×’' + 16: 2, # 'ד' + 3: 2, # '×”' + 2: 1, # 'ו' + 24: 1, # '×–' + 14: 2, # '×—' + 22: 1, # 'ט' + 1: 2, # '×™' + 25: 2, # 'ך' + 15: 1, # '×›' + 4: 2, # 'ל' + 11: 2, # '×' + 6: 2, # 'מ' + 23: 2, # 'ן' + 12: 2, # '× ' + 19: 2, # 'ס' + 13: 1, # '×¢' + 26: 1, # '×£' + 18: 1, # 'פ' + 27: 2, # '×¥' + 21: 1, # 'צ' + 17: 1, # '×§' + 7: 2, # 'ר' + 10: 2, # 'ש' + 5: 2, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 31: { # 'Ö·' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # 
'\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 1, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 2, # 'ב' + 20: 2, # '×’' + 16: 2, # 'ד' + 3: 2, # '×”' + 2: 1, # 'ו' + 24: 2, # '×–' + 14: 2, # '×—' + 22: 2, # 'ט' + 1: 3, # '×™' + 25: 1, # 'ך' + 15: 2, # '×›' + 4: 2, # 'ל' + 11: 2, # '×' + 6: 2, # 'מ' + 23: 2, # 'ן' + 12: 2, # '× ' + 19: 2, # 'ס' + 13: 2, # '×¢' + 26: 2, # '×£' + 18: 2, # 'פ' + 27: 1, # '×¥' + 21: 2, # 'צ' + 17: 2, # '×§' + 7: 2, # 'ר' + 10: 2, # 'ש' + 5: 2, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 29: { # 'Ö¸' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 1, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 1, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 2, # 'ב' + 20: 2, # '×’' + 16: 2, # 'ד' + 3: 3, # '×”' + 2: 2, # 'ו' + 24: 2, # '×–' + 14: 2, # '×—' + 22: 1, # 'ט' + 1: 2, # '×™' + 25: 2, # 'ך' + 15: 2, # '×›' + 4: 2, # 'ל' + 11: 2, # '×' + 6: 2, # 'מ' + 23: 2, # 'ן' + 12: 2, # '× ' + 19: 1, # 'ס' + 13: 2, # '×¢' + 26: 1, # '×£' + 18: 2, # 'פ' + 27: 1, # '×¥' + 21: 2, # 'צ' + 17: 2, # '×§' + 7: 2, # 'ר' + 10: 2, # 'ש' + 5: 2, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 35: { # 'Ö¹' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 2, # 'ב' + 20: 1, # '×’' + 16: 2, # 'ד' + 3: 2, # '×”' + 2: 1, # 'ו' + 24: 1, # '×–' + 14: 1, # '×—' + 22: 1, # 'ט' + 1: 1, # '×™' + 25: 1, # 'ך' + 15: 2, # '×›' + 4: 2, # 'ל' + 11: 2, # '×' + 6: 2, # 'מ' + 23: 2, # 'ן' + 12: 2, # '× ' + 19: 2, # 'ס' + 13: 2, # '×¢' + 26: 1, # '×£' + 18: 2, # 'פ' + 27: 1, # '×¥' + 21: 2, # 'צ' + 17: 2, # '×§' + 7: 2, # 'ר' + 10: 2, # 'ש' + 5: 2, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 62: { # 'Ö»' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 0, # '×' + 8: 1, # 'ב' + 20: 1, # '×’' + 16: 1, # 'ד' + 3: 1, # '×”' + 2: 1, # 'ו' + 24: 1, # '×–' + 14: 1, # '×—' + 22: 0, # 'ט' + 1: 1, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 2, # 'ל' + 11: 1, # '×' + 6: 1, # 'מ' + 23: 1, # 'ן' + 12: 1, # '× ' + 19: 1, # 'ס' + 13: 1, # '×¢' + 26: 0, # '×£' + 18: 1, # 'פ' + 27: 0, # '×¥' + 
21: 1, # 'צ' + 17: 1, # '×§' + 7: 1, # 'ר' + 10: 1, # 'ש' + 5: 1, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 28: { # 'Ö¼' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 3, # 'Ö°' + 59: 0, # 'Ö±' + 41: 1, # 'Ö²' + 33: 3, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 3, # 'Ö·' + 29: 3, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 2, # '×' + 45: 1, # 'ׂ' + 9: 2, # '×' + 8: 2, # 'ב' + 20: 1, # '×’' + 16: 2, # 'ד' + 3: 1, # '×”' + 2: 2, # 'ו' + 24: 1, # '×–' + 14: 1, # '×—' + 22: 1, # 'ט' + 1: 2, # '×™' + 25: 2, # 'ך' + 15: 2, # '×›' + 4: 2, # 'ל' + 11: 1, # '×' + 6: 2, # 'מ' + 23: 1, # 'ן' + 12: 2, # '× ' + 19: 1, # 'ס' + 13: 2, # '×¢' + 26: 1, # '×£' + 18: 1, # 'פ' + 27: 1, # '×¥' + 21: 1, # 'צ' + 17: 1, # '×§' + 7: 2, # 'ר' + 10: 2, # 'ש' + 5: 2, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 38: { # '×' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 0, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 0, # '×”' + 2: 2, # 'ו' + 24: 0, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 1, # '×™' + 25: 0, # 'ך' + 15: 0, # '×›' + 4: 0, # 'ל' + 11: 0, # '×' + 6: 0, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 0, # 'ס' + 13: 1, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 0, # 'ר' + 10: 0, # 'ש' + 5: 0, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 45: { # 'ׂ' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 1, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 1, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 0, # 'ב' + 20: 1, # '×’' + 16: 0, # 'ד' + 3: 1, # '×”' + 2: 2, # 'ו' + 24: 0, # '×–' + 14: 1, # '×—' + 22: 0, # 'ט' + 1: 1, # '×™' + 25: 0, # 'ך' + 15: 0, # '×›' + 4: 0, # 'ל' + 11: 1, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 1, # '× ' + 19: 0, # 'ס' + 13: 1, # '×¢' + 26: 0, # '×£' + 18: 1, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 1, # 'ר' + 10: 0, # 'ש' + 5: 1, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 9: { # '×' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 1, # '´' + 48: 1, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 2, # 'Ö±' + 41: 2, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 0, # 
'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 3, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 2, # '×¢' + 26: 3, # '×£' + 18: 3, # 'פ' + 27: 1, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 8: { # 'ב' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 1, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 1, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 3, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 2, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 2, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 3, # '×¢' + 26: 1, # '×£' + 18: 3, # 'פ' + 27: 2, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 1, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 20: { # '×’' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 2, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 1, # 'Ö´' + 37: 1, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 3, # 'ב' + 20: 2, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 2, # '×—' + 22: 2, # 'ט' + 1: 3, # '×™' + 25: 1, # 'ך' + 15: 1, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 2, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 2, # 'פ' + 27: 1, # '×¥' + 21: 1, # 'צ' + 17: 1, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 1, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 16: { # 'ד' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 1, # '×–' + 14: 2, # '×—' + 22: 2, # 'ט' + 1: 3, # '×™' + 25: 2, # 'ך' + 15: 2, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 2, # 'ן' + 12: 3, # '× ' + 19: 2, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 3, # 'פ' + 27: 0, # '×¥' + 21: 2, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 3: { # '×”' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 1, # 'd' + 42: 0, # 'e' 
+ 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 0, # '´' + 48: 1, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 1, # 'Ö°' + 59: 1, # 'Ö±' + 41: 2, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 3, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 1, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 3, # '×¢' + 26: 0, # '×£' + 18: 3, # 'פ' + 27: 1, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 1, # '–' + 52: 1, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 2, # '…' + }, + 2: { # 'ו' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 1, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 1, # '´' + 48: 1, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 1, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 3, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 3, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 3, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 3, # '×¢' + 26: 3, # '×£' + 18: 3, # 'פ' + 27: 3, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 1, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 2, # '…' + }, + 24: { # '×–' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 1, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 1, # 'Ö²' + 33: 1, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 2, # 'ב' + 20: 2, # '×’' + 16: 2, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 2, # '×–' + 14: 2, # '×—' + 22: 1, # 'ט' + 1: 3, # '×™' + 25: 1, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 2, # '×' + 6: 3, # 'מ' + 23: 2, # 'ן' + 12: 2, # '× ' + 19: 1, # 'ס' + 13: 2, # '×¢' + 26: 1, # '×£' + 18: 1, # 'פ' + 27: 0, # '×¥' + 21: 2, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 1, # 'ש' + 5: 2, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 14: { # '×—' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 1, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 1, # 'Ö±' + 41: 2, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 3, # 'ב' + 20: 2, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 2, # '×—' + 22: 2, # 'ט' + 1: 3, # '×™' + 25: 1, # 'ך' + 15: 2, # '×›' + 4: 3, # 'ל' + 11: 3, # 
'×' + 6: 3, # 'מ' + 23: 2, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 1, # '×¢' + 26: 2, # '×£' + 18: 2, # 'פ' + 27: 2, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 1, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 22: { # 'ט' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 1, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 1, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 1, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 1, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 2, # '×–' + 14: 3, # '×—' + 22: 2, # 'ט' + 1: 3, # '×™' + 25: 1, # 'ך' + 15: 2, # '×›' + 4: 3, # 'ל' + 11: 2, # '×' + 6: 2, # 'מ' + 23: 2, # 'ן' + 12: 3, # '× ' + 19: 2, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 3, # 'פ' + 27: 1, # '×¥' + 21: 2, # 'צ' + 17: 2, # '×§' + 7: 3, # 'ר' + 10: 2, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 1: { # '×™' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 1, # '´' + 48: 1, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 3, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 3, # '×¢' + 26: 3, # '×£' + 18: 3, # 'פ' + 27: 3, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 1, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 2, # '…' + }, + 25: { # 'ך' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 1, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 1, # '×”' + 2: 0, # 'ו' + 24: 0, # '×–' + 14: 1, # '×—' + 22: 0, # 'ט' + 1: 0, # '×™' + 25: 0, # 'ך' + 15: 0, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 0, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 0, # 'ר' + 10: 1, # 'ש' + 5: 0, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 15: { # '×›' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, 
# 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 3, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 2, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 3, # '×—' + 22: 2, # 'ט' + 1: 3, # '×™' + 25: 3, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 2, # '×¢' + 26: 3, # '×£' + 18: 3, # 'פ' + 27: 1, # '×¥' + 21: 2, # 'צ' + 17: 2, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 4: { # 'ל' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 1, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 3, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 3, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 2, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 3, # 'פ' + 27: 2, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 1, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 11: { # '×' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 1, # 'ב' + 20: 1, # '×’' + 16: 0, # 'ד' + 3: 1, # '×”' + 2: 1, # 'ו' + 24: 1, # '×–' + 14: 1, # '×—' + 22: 0, # 'ט' + 1: 1, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 1, # 'ל' + 11: 1, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 1, # '× ' + 19: 0, # 'ס' + 13: 1, # '×¢' + 26: 0, # '×£' + 18: 1, # 'פ' + 27: 1, # '×¥' + 21: 1, # 'צ' + 17: 1, # '×§' + 7: 1, # 'ר' + 10: 1, # 'ש' + 5: 1, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 2, # '…' + }, + 6: { # 'מ' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 1, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 2, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 3, # '×¢' + 26: 0, # '×£' + 18: 3, # 'פ' + 27: 2, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # 
'“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 23: { # 'ן' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 0, # '´' + 48: 1, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 1, # 'ב' + 20: 1, # '×’' + 16: 1, # 'ד' + 3: 1, # '×”' + 2: 1, # 'ו' + 24: 0, # '×–' + 14: 1, # '×—' + 22: 1, # 'ט' + 1: 1, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 1, # 'ל' + 11: 1, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 1, # '× ' + 19: 1, # 'ס' + 13: 1, # '×¢' + 26: 1, # '×£' + 18: 1, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 1, # '×§' + 7: 1, # 'ר' + 10: 1, # 'ש' + 5: 1, # 'ת' + 32: 1, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 2, # '…' + }, + 12: { # '× ' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 2, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 3, # 'פ' + 27: 2, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 19: { # 'ס' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 1, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 1, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 1, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 2, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 1, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 2, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 2, # '×' + 6: 3, # 'מ' + 23: 2, # 'ן' + 12: 3, # '× ' + 19: 2, # 'ס' + 13: 3, # '×¢' + 26: 3, # '×£' + 18: 3, # 'פ' + 27: 0, # '×¥' + 21: 2, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 1, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 13: { # '×¢' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 1, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 1, # 'Ö°' + 59: 1, # 'Ö±' + 41: 2, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 
2: 3, # 'ו' + 24: 3, # '×–' + 14: 1, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 2, # 'ך' + 15: 2, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 2, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 2, # '×¢' + 26: 1, # '×£' + 18: 2, # 'פ' + 27: 2, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 26: { # '×£' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 0, # '×”' + 2: 1, # 'ו' + 24: 0, # '×–' + 14: 1, # '×—' + 22: 0, # 'ט' + 1: 0, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 1, # 'ס' + 13: 0, # '×¢' + 26: 1, # '×£' + 18: 1, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 1, # '×§' + 7: 1, # 'ר' + 10: 1, # 'ש' + 5: 0, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 18: { # 'פ' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 1, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 1, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 1, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 2, # 'ב' + 20: 3, # '×’' + 16: 2, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 2, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 2, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 2, # '×' + 6: 2, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 2, # 'פ' + 27: 2, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 27: { # '×¥' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 1, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 0, # '×”' + 2: 0, # 'ו' + 24: 0, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 0, # '×™' + 25: 0, # 'ך' + 15: 0, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 0, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 1, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 1, # 'ר' + 10: 0, # 'ש' + 5: 1, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 21: { # 'צ' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 
63: 0, # 'u' + 34: 0, # '\xa0' + 55: 1, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 2, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 1, # '×–' + 14: 3, # '×—' + 22: 2, # 'ט' + 1: 3, # '×™' + 25: 1, # 'ך' + 15: 1, # '×›' + 4: 3, # 'ל' + 11: 2, # '×' + 6: 3, # 'מ' + 23: 2, # 'ן' + 12: 3, # '× ' + 19: 1, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 3, # 'פ' + 27: 2, # '×¥' + 21: 2, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 0, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 17: { # '×§' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 1, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 2, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 2, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 1, # 'ך' + 15: 1, # '×›' + 4: 3, # 'ל' + 11: 2, # '×' + 6: 3, # 'מ' + 23: 2, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 3, # 'פ' + 27: 2, # '×¥' + 21: 3, # 'צ' + 17: 2, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 1, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 7: { # 'ר' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 2, # '´' + 48: 1, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 1, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 2, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 3, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 3, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 3, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 3, # 'פ' + 27: 3, # '×¥' + 21: 3, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 2, # '…' + }, + 10: { # 'ש' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 1, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 1, # 'Ö´' + 37: 1, # 'Öµ' + 36: 1, # 'Ö¶' + 31: 1, # 'Ö·' + 29: 1, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 3, # '×' + 45: 2, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 3, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 2, # '×–' + 14: 3, # '×—' + 22: 3, # 'ט' + 1: 3, # '×™' + 25: 3, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 2, # 'ן' + 12: 3, # '× ' + 19: 2, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 3, # 
'פ' + 27: 1, # '×¥' + 21: 2, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 1, # '…' + }, + 5: { # 'ת' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 1, # '\xa0' + 55: 0, # '´' + 48: 1, # '¼' + 39: 1, # '½' + 57: 0, # '¾' + 30: 2, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 2, # 'Ö´' + 37: 2, # 'Öµ' + 36: 2, # 'Ö¶' + 31: 2, # 'Ö·' + 29: 2, # 'Ö¸' + 35: 1, # 'Ö¹' + 62: 1, # 'Ö»' + 28: 2, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 3, # '×' + 8: 3, # 'ב' + 20: 3, # '×’' + 16: 2, # 'ד' + 3: 3, # '×”' + 2: 3, # 'ו' + 24: 2, # '×–' + 14: 3, # '×—' + 22: 2, # 'ט' + 1: 3, # '×™' + 25: 2, # 'ך' + 15: 3, # '×›' + 4: 3, # 'ל' + 11: 3, # '×' + 6: 3, # 'מ' + 23: 3, # 'ן' + 12: 3, # '× ' + 19: 2, # 'ס' + 13: 3, # '×¢' + 26: 2, # '×£' + 18: 3, # 'פ' + 27: 1, # '×¥' + 21: 2, # 'צ' + 17: 3, # '×§' + 7: 3, # 'ר' + 10: 3, # 'ש' + 5: 3, # 'ת' + 32: 1, # '–' + 52: 1, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 2, # '…' + }, + 32: { # '–' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 1, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 1, # 'ב' + 20: 1, # '×’' + 16: 1, # 'ד' + 3: 1, # '×”' + 2: 1, # 'ו' + 24: 0, # '×–' + 14: 1, # '×—' + 22: 0, # 'ט' + 1: 1, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 1, # 'ס' + 13: 1, # '×¢' + 26: 0, # '×£' + 18: 1, # 'פ' + 27: 0, # '×¥' + 21: 1, # 'צ' + 17: 0, # '×§' + 7: 1, # 'ר' + 10: 1, # 'ש' + 5: 1, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 52: { # '’' + 50: 1, # 'a' + 60: 0, # 'c' + 61: 1, # 'd' + 42: 1, # 'e' + 53: 1, # 'i' + 56: 1, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 1, # 'r' + 43: 2, # 's' + 44: 2, # 't' + 63: 1, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 0, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 0, # '×”' + 2: 1, # 'ו' + 24: 0, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 0, # '×™' + 25: 0, # 'ך' + 15: 0, # '×›' + 4: 0, # 'ל' + 11: 0, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 0, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 0, # 'ר' + 10: 0, # 'ש' + 5: 1, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 47: { # '“' + 50: 1, # 'a' + 60: 1, # 'c' + 61: 1, # 'd' + 42: 1, # 'e' + 53: 1, # 'i' + 56: 1, # 'l' + 54: 1, # 'n' + 49: 1, # 'o' + 51: 1, # 'r' + 43: 1, # 's' + 44: 1, # 't' + 63: 1, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 
0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 2, # '×' + 8: 1, # 'ב' + 20: 1, # '×’' + 16: 1, # 'ד' + 3: 1, # '×”' + 2: 1, # 'ו' + 24: 1, # '×–' + 14: 1, # '×—' + 22: 1, # 'ט' + 1: 1, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 1, # '× ' + 19: 1, # 'ס' + 13: 1, # '×¢' + 26: 0, # '×£' + 18: 1, # 'פ' + 27: 0, # '×¥' + 21: 1, # 'צ' + 17: 1, # '×§' + 7: 1, # 'ר' + 10: 1, # 'ש' + 5: 1, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 46: { # 'â€' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 1, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 1, # 'ב' + 20: 1, # '×’' + 16: 0, # 'ד' + 3: 0, # '×”' + 2: 0, # 'ו' + 24: 0, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 1, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 0, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 1, # 'צ' + 17: 0, # '×§' + 7: 1, # 'ר' + 10: 0, # 'ש' + 5: 0, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 0, # '†' + 40: 0, # '…' + }, + 58: { # '†' + 50: 0, # 'a' + 60: 0, # 'c' + 61: 0, # 'd' + 42: 0, # 'e' + 53: 0, # 'i' + 56: 0, # 'l' + 54: 0, # 'n' + 49: 0, # 'o' + 51: 0, # 'r' + 43: 0, # 's' + 44: 0, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 0, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 0, # '×”' + 2: 0, # 'ו' + 24: 0, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 0, # '×™' + 25: 0, # 'ך' + 15: 0, # '×›' + 4: 0, # 'ל' + 11: 0, # '×' + 6: 0, # 'מ' + 23: 0, # 'ן' + 12: 0, # '× ' + 19: 0, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 0, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 0, # 'ר' + 10: 0, # 'ש' + 5: 0, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 0, # 'â€' + 58: 2, # '†' + 40: 0, # '…' + }, + 40: { # '…' + 50: 1, # 'a' + 60: 1, # 'c' + 61: 1, # 'd' + 42: 1, # 'e' + 53: 1, # 'i' + 56: 0, # 'l' + 54: 1, # 'n' + 49: 0, # 'o' + 51: 1, # 'r' + 43: 1, # 's' + 44: 1, # 't' + 63: 0, # 'u' + 34: 0, # '\xa0' + 55: 0, # '´' + 48: 0, # '¼' + 39: 0, # '½' + 57: 0, # '¾' + 30: 0, # 'Ö°' + 59: 0, # 'Ö±' + 41: 0, # 'Ö²' + 33: 0, # 'Ö´' + 37: 0, # 'Öµ' + 36: 0, # 'Ö¶' + 31: 0, # 'Ö·' + 29: 0, # 'Ö¸' + 35: 0, # 'Ö¹' + 62: 0, # 'Ö»' + 28: 0, # 'Ö¼' + 38: 0, # '×' + 45: 0, # 'ׂ' + 9: 1, # '×' + 8: 0, # 'ב' + 20: 0, # '×’' + 16: 0, # 'ד' + 3: 1, # '×”' + 2: 1, # 'ו' + 24: 1, # '×–' + 14: 0, # '×—' + 22: 0, # 'ט' + 1: 1, # '×™' + 25: 0, # 'ך' + 15: 1, # '×›' + 4: 1, # 'ל' + 11: 0, # '×' + 6: 1, # 'מ' + 23: 0, # 'ן' + 12: 1, # '× ' + 19: 0, # 'ס' + 13: 0, # '×¢' + 26: 0, # '×£' + 18: 1, # 'פ' + 27: 0, # '×¥' + 21: 0, # 'צ' + 17: 0, # '×§' + 7: 1, # 'ר' + 10: 1, # 'ש' + 5: 1, # 'ת' + 32: 0, # '–' + 52: 0, # '’' + 47: 0, # '“' + 46: 1, # 'â€' + 58: 0, # '†' + 40: 2, # '…' + }, +} + +# 255: Undefined characters that did not 
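To make the shape of these tables concrete: a byte stream is first translated into frequency orders through the character mapping table that follows, and each pair of consecutive orders is then rated against HEBREW_LANG_MODEL. The sketch below only illustrates that scoring idea; score_sequence and its 250 cutoff are simplifications of my own, not chardet's actual SingleByteCharSetProber logic.

def score_sequence(data, char_to_order, lang_model):
    """Fraction of letter bigrams the model rates 'likely' (2) or better."""
    SYMBOL_CAT_ORDER = 250  # orders >= 250 are control/digit/symbol buckets
    likely = total = 0
    prev_order = None
    for byte in data:                      # iterating bytes yields ints
        order = char_to_order.get(byte, 255)
        if order < SYMBOL_CAT_ORDER:       # only letters form bigrams
            if prev_order is not None:
                total += 1
                # ratings: 3 positive, 2 likely, 1 unlikely, 0 negative
                if lang_model.get(prev_order, {}).get(order, 0) >= 2:
                    likely += 1
            prev_order = order
        else:
            prev_order = None              # punctuation breaks the word
    return likely / total if total else 0.0

The real prober compares a ratio like this against the model's typical_positive_ratio (0.984004 for the Hebrew model below) when computing its confidence.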
+# 255: Undefined characters that did not exist in training text
+# 254: Carriage/Return
+# 253: symbol (punctuation) that does not belong to word
+# 252: 0 - 9
+# 251: Control characters
+
+# Character Mapping Table(s):
+WINDOWS_1255_HEBREW_CHAR_TO_ORDER = {
+    0: 255,  # '\x00'
+    1: 255,  # '\x01'
+    2: 255,  # '\x02'
[... bytes 0x03-0xDF continue one per line: control characters map to 255 (254 for '\r' and '\n'), ASCII punctuation to 253, digits to 252, Latin letters to their per-letter orders (97: 50 for 'a', 101: 42 for 'e', ...), and the Windows-1255 punctuation, currency and niqqud range to the orders used above ...]
+    224: 9,  # 'א'
+    225: 8,  # 'ב'
+    226: 20,  # 'ג'
[... the letters ד through ת (0xE3-0xFA) continue with their orders ...]
+    250: 5,  # 'ת'
+    251: 251,  # None
+    252: 252,  # None
+    253: 128,  # '\u200e'
+    254: 96,  # '\u200f'
+    255: 253,  # None
+}
+
+WINDOWS_1255_HEBREW_MODEL = SingleByteCharSetModel(charset_name='windows-1255',
+                                                   language='Hebrew',
+                                                   char_to_order_map=WINDOWS_1255_HEBREW_CHAR_TO_ORDER,
+                                                   language_model=HEBREW_LANG_MODEL,
+                                                   typical_positive_ratio=0.984004,
+                                                   keep_ascii_letters=False,
+                                                   alphabet='אבגדהוזחטיךכלםמןנסעףפץצקרשתװױײ')
+
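Between the two vendored models, a quick usage sketch of how they surface through chardet's public API (this assumes a standalone chardet install; the printed result is indicative only, and short samples can come back with a different guess or low confidence):

import chardet

# Logical Hebrew encoded as windows-1255; repeated so the single-byte
# prober has enough letter bigrams to score.
sample = ('שלום עולם, זוהי דוגמה לבדיקת קידוד ' * 4).encode('windows-1255')
print(chardet.detect(sample))
# e.g. {'encoding': 'windows-1255', 'confidence': 0.99, 'language': 'Hebrew'}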
6: 1, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 2, # 'o' + 23: 1, # 'p' + 10: 2, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 3, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 0, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 2, # 'á' + 15: 2, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 54: { # 'C' + 28: 1, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 1, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 1, # 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 0, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 2, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 0, # 'V' + 55: 1, # 'Y' + 52: 1, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 1, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 1, # 'h' + 9: 1, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 1, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 2, # 'o' + 23: 0, # 'p' + 10: 1, # 'r' + 5: 3, # 's' + 3: 0, # 't' + 21: 1, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 1, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 1, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 45: { # 'D' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 0, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 0, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 2, # 'O' + 46: 0, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 1, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 3, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 1, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 0, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 1, # 'o' + 23: 0, # 'p' + 10: 2, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 2, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 1, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 1, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 1, # 'Å‘' + 56: 0, # 'ű' + }, + 32: { # 'E' + 28: 1, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 1, # 'E' + 50: 1, # 'F' + 49: 2, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 2, # 'K' + 41: 2, # 'L' + 34: 2, # 'M' + 35: 2, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 2, # 'R' + 33: 2, # 'S' + 37: 2, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 1, # 'Z' + 2: 1, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 2, # 'd' + 1: 1, # 'e' + 27: 1, # 'f' + 12: 3, # 'g' + 20: 1, # 'h' + 9: 1, # 'i' + 22: 1, # 'j' + 7: 1, # 'k' + 6: 2, # 'l' + 13: 2, # 'm' + 4: 2, # 'n' + 8: 0, # 'o' + 23: 1, # 'p' + 10: 2, # 'r' + 5: 2, # 's' + 3: 1, # 't' + 21: 2, # 'u' + 19: 1, # 'v' + 62: 1, # 'x' + 16: 0, # 'y' + 11: 3, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 0, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 0, # 'Ú' + 63: 1, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 1, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 50: { # 'F' + 28: 1, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 1, # 'E' + 50: 1, # 'F' + 49: 0, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 1, # 'O' + 46: 0, # 
'P' + 43: 1, # 'R' + 33: 0, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 0, # 'V' + 55: 1, # 'Y' + 52: 0, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 2, # 'e' + 27: 1, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 2, # 'i' + 22: 1, # 'j' + 7: 0, # 'k' + 6: 1, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 2, # 'o' + 23: 0, # 'p' + 10: 2, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 1, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 0, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 0, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 0, # 'Ú' + 63: 1, # 'Ü' + 14: 1, # 'á' + 15: 1, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 2, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 49: { # 'G' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 2, # 'Y' + 52: 1, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 2, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 1, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 1, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 2, # 'o' + 23: 0, # 'p' + 10: 2, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 1, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 2, # 'y' + 11: 0, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 1, # 'á' + 15: 1, # 'é' + 30: 0, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 1, # 'Å‘' + 56: 0, # 'ű' + }, + 38: { # 'H' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 0, # 'D' + 32: 1, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 1, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 1, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 1, # 'O' + 46: 0, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 0, # 'V' + 55: 1, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 2, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 2, # 'i' + 22: 1, # 'j' + 7: 0, # 'k' + 6: 1, # 'l' + 13: 1, # 'm' + 4: 0, # 'n' + 8: 3, # 'o' + 23: 0, # 'p' + 10: 1, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 2, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 0, # 'z' + 51: 2, # 'Ã' + 44: 2, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 2, # 'á' + 15: 1, # 'é' + 30: 2, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 39: { # 'I' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 1, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 2, # 'K' + 41: 2, # 'L' + 34: 1, # 'M' + 35: 2, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 2, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 2, # 'Z' + 2: 0, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 2, # 'd' + 1: 0, # 'e' + 27: 1, # 'f' + 12: 2, # 'g' + 20: 1, # 'h' + 9: 0, # 'i' + 22: 1, # 'j' + 7: 1, # 'k' + 6: 2, # 'l' + 13: 2, # 'm' + 4: 1, # 'n' + 8: 0, # 'o' + 23: 1, # 'p' + 10: 2, # 'r' + 5: 2, # 's' + 3: 2, # 't' + 21: 0, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 1, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 0, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' 
+ 56: 0, # 'ű' + }, + 53: { # 'J' + 28: 2, # 'A' + 40: 0, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 1, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 1, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 2, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 1, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 0, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 1, # 'o' + 23: 0, # 'p' + 10: 0, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 2, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 0, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 0, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 2, # 'á' + 15: 1, # 'é' + 30: 0, # 'í' + 25: 2, # 'ó' + 24: 2, # 'ö' + 31: 1, # 'ú' + 29: 0, # 'ü' + 42: 1, # 'Å‘' + 56: 0, # 'ű' + }, + 36: { # 'K' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 0, # 'G' + 38: 1, # 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 2, # 'O' + 46: 0, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 0, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 2, # 'e' + 27: 1, # 'f' + 12: 0, # 'g' + 20: 1, # 'h' + 9: 3, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 1, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 8: 2, # 'o' + 23: 0, # 'p' + 10: 2, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 1, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 0, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 2, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 2, # 'á' + 15: 2, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 2, # 'ö' + 31: 1, # 'ú' + 29: 2, # 'ü' + 42: 1, # 'Å‘' + 56: 0, # 'ű' + }, + 41: { # 'L' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 2, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 2, # 'O' + 46: 0, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 2, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 1, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 3, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 2, # 'i' + 22: 1, # 'j' + 7: 0, # 'k' + 6: 1, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 2, # 'o' + 23: 0, # 'p' + 10: 0, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 2, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 0, # 'z' + 51: 2, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 2, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 0, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 34: { # 'M' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 0, # 'G' + 38: 1, # 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 1, # 'Z' + 2: 3, # 'a' + 18: 0, # 'b' + 26: 1, # 'c' + 17: 0, # 'd' + 1: 3, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 3, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 0, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 8: 3, # 'o' + 23: 0, # 'p' + 10: 1, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 2, # 'u' + 19: 0, # 'v' + 
62: 0, # 'x' + 16: 1, # 'y' + 11: 0, # 'z' + 51: 2, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 2, # 'á' + 15: 2, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 1, # 'ű' + }, + 35: { # 'N' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 2, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 2, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 2, # 'Y' + 52: 1, # 'Z' + 2: 3, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 3, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 2, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 0, # 'l' + 13: 0, # 'm' + 4: 1, # 'n' + 8: 2, # 'o' + 23: 0, # 'p' + 10: 0, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 1, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 2, # 'y' + 11: 0, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 1, # 'á' + 15: 2, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 1, # 'Å‘' + 56: 0, # 'ű' + }, + 47: { # 'O' + 28: 1, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 1, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 2, # 'K' + 41: 2, # 'L' + 34: 2, # 'M' + 35: 2, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 2, # 'R' + 33: 2, # 'S' + 37: 2, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 1, # 'Z' + 2: 0, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 1, # 'd' + 1: 1, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 1, # 'h' + 9: 1, # 'i' + 22: 1, # 'j' + 7: 2, # 'k' + 6: 2, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 8: 1, # 'o' + 23: 1, # 'p' + 10: 2, # 'r' + 5: 1, # 's' + 3: 2, # 't' + 21: 1, # 'u' + 19: 0, # 'v' + 62: 1, # 'x' + 16: 0, # 'y' + 11: 1, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 0, # 'Ã' + 58: 1, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 46: { # 'P' + 28: 1, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 1, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 0, # 'M' + 35: 1, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 2, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 1, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 2, # 'e' + 27: 1, # 'f' + 12: 0, # 'g' + 20: 1, # 'h' + 9: 2, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 1, # 'l' + 13: 0, # 'm' + 4: 1, # 'n' + 8: 2, # 'o' + 23: 0, # 'p' + 10: 2, # 'r' + 5: 1, # 's' + 3: 0, # 't' + 21: 1, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 0, # 'z' + 51: 2, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 0, # 'Ú' + 63: 1, # 'Ü' + 14: 3, # 'á' + 15: 2, # 'é' + 30: 0, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 0, # 'ú' + 29: 1, # 'ü' + 42: 1, # 'Å‘' + 56: 0, # 'ű' + }, + 43: { # 'R' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 2, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 2, # 'S' + 37: 2, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 1, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 0, 
# 'c' + 17: 0, # 'd' + 1: 2, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 1, # 'h' + 9: 2, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 0, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 2, # 'o' + 23: 0, # 'p' + 10: 0, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 1, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 0, # 'z' + 51: 2, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 2, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 2, # 'á' + 15: 2, # 'é' + 30: 1, # 'í' + 25: 2, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 33: { # 'S' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 2, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 2, # 'S' + 37: 2, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 3, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 1, # 'c' + 17: 0, # 'd' + 1: 2, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 1, # 'h' + 9: 2, # 'i' + 22: 0, # 'j' + 7: 1, # 'k' + 6: 1, # 'l' + 13: 1, # 'm' + 4: 0, # 'n' + 8: 2, # 'o' + 23: 1, # 'p' + 10: 0, # 'r' + 5: 0, # 's' + 3: 1, # 't' + 21: 1, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 3, # 'z' + 51: 2, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 2, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 37: { # 'T' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 2, # 'O' + 46: 1, # 'P' + 43: 2, # 'R' + 33: 1, # 'S' + 37: 2, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 1, # 'Z' + 2: 2, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 2, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 1, # 'h' + 9: 2, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 0, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 2, # 'o' + 23: 0, # 'p' + 10: 1, # 'r' + 5: 1, # 's' + 3: 0, # 't' + 21: 2, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 1, # 'z' + 51: 2, # 'Ã' + 44: 2, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 2, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 2, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 57: { # 'U' + 28: 1, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 1, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 2, # 'S' + 37: 1, # 'T' + 57: 0, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 1, # 'Z' + 2: 0, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 1, # 'd' + 1: 1, # 'e' + 27: 0, # 'f' + 12: 2, # 'g' + 20: 0, # 'h' + 9: 0, # 'i' + 22: 1, # 'j' + 7: 1, # 'k' + 6: 1, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 8: 0, # 'o' + 23: 1, # 'p' + 10: 1, # 'r' + 5: 1, # 's' + 3: 1, # 't' + 21: 0, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 1, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 1, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 48: { # 'V' + 28: 2, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 0, # 'G' + 38: 0, 
# 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 0, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 2, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 2, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 1, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 2, # 'o' + 23: 0, # 'p' + 10: 0, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 1, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 0, # 'z' + 51: 2, # 'Ã' + 44: 2, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 0, # 'Ú' + 63: 1, # 'Ü' + 14: 2, # 'á' + 15: 2, # 'é' + 30: 1, # 'í' + 25: 0, # 'ó' + 24: 1, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 55: { # 'Y' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 1, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 2, # 'Z' + 2: 1, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 1, # 'd' + 1: 1, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 0, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 0, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 8: 1, # 'o' + 23: 1, # 'p' + 10: 0, # 'r' + 5: 0, # 's' + 3: 0, # 't' + 21: 0, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 0, # 'z' + 51: 1, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 52: { # 'Z' + 28: 2, # 'A' + 40: 1, # 'B' + 54: 0, # 'C' + 45: 1, # 'D' + 32: 2, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 2, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 2, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 2, # 'S' + 37: 1, # 'T' + 57: 1, # 'U' + 48: 1, # 'V' + 55: 1, # 'Y' + 52: 1, # 'Z' + 2: 1, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 1, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 1, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 0, # 'l' + 13: 0, # 'm' + 4: 1, # 'n' + 8: 1, # 'o' + 23: 0, # 'p' + 10: 1, # 'r' + 5: 2, # 's' + 3: 0, # 't' + 21: 1, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 0, # 'z' + 51: 2, # 'Ã' + 44: 1, # 'É' + 61: 1, # 'Ã' + 58: 1, # 'Ó' + 59: 1, # 'Ö' + 60: 1, # 'Ú' + 63: 1, # 'Ü' + 14: 1, # 'á' + 15: 1, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 2: { # 'a' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 1, # 'a' + 18: 3, # 'b' + 26: 3, # 'c' + 17: 3, # 'd' + 1: 2, # 'e' + 27: 2, # 'f' + 12: 3, # 'g' + 20: 3, # 'h' + 9: 3, # 'i' + 22: 3, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 2, # 'o' + 23: 3, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 3, # 'v' + 62: 1, # 'x' + 16: 2, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' 
+ 14: 1, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 18: { # 'b' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 3, # 'b' + 26: 1, # 'c' + 17: 1, # 'd' + 1: 3, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 1, # 'h' + 9: 3, # 'i' + 22: 2, # 'j' + 7: 2, # 'k' + 6: 2, # 'l' + 13: 1, # 'm' + 4: 2, # 'n' + 8: 3, # 'o' + 23: 1, # 'p' + 10: 3, # 'r' + 5: 2, # 's' + 3: 1, # 't' + 21: 3, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 1, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 2, # 'í' + 25: 3, # 'ó' + 24: 2, # 'ö' + 31: 2, # 'ú' + 29: 2, # 'ü' + 42: 2, # 'Å‘' + 56: 1, # 'ű' + }, + 26: { # 'c' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 1, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 1, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 2, # 'a' + 18: 1, # 'b' + 26: 2, # 'c' + 17: 1, # 'd' + 1: 3, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 3, # 'h' + 9: 3, # 'i' + 22: 1, # 'j' + 7: 2, # 'k' + 6: 1, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 8: 3, # 'o' + 23: 1, # 'p' + 10: 2, # 'r' + 5: 3, # 's' + 3: 2, # 't' + 21: 2, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 2, # 'á' + 15: 2, # 'é' + 30: 2, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 17: { # 'd' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 2, # 'b' + 26: 1, # 'c' + 17: 2, # 'd' + 1: 3, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 2, # 'h' + 9: 3, # 'i' + 22: 3, # 'j' + 7: 2, # 'k' + 6: 1, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 8: 3, # 'o' + 23: 1, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 3, # 'v' + 62: 0, # 'x' + 16: 2, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 3, # 'í' + 25: 3, # 'ó' + 24: 3, # 'ö' + 31: 2, # 'ú' + 29: 2, # 'ü' + 42: 2, # 'Å‘' + 56: 1, # 'ű' + }, + 1: { # 'e' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 2, # 'a' + 18: 3, # 'b' + 26: 3, # 'c' + 17: 3, # 'd' + 1: 2, # 'e' + 27: 3, # 'f' + 12: 3, # 'g' + 20: 3, # 'h' + 9: 3, # 'i' + 22: 3, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 3, # 
'm' + 4: 3, # 'n' + 8: 2, # 'o' + 23: 3, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 2, # 'u' + 19: 3, # 'v' + 62: 2, # 'x' + 16: 2, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 27: { # 'f' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 1, # 'd' + 1: 3, # 'e' + 27: 2, # 'f' + 12: 1, # 'g' + 20: 1, # 'h' + 9: 3, # 'i' + 22: 2, # 'j' + 7: 1, # 'k' + 6: 1, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 8: 3, # 'o' + 23: 0, # 'p' + 10: 3, # 'r' + 5: 1, # 's' + 3: 1, # 't' + 21: 2, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 0, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 3, # 'ö' + 31: 1, # 'ú' + 29: 2, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 12: { # 'g' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 3, # 'b' + 26: 2, # 'c' + 17: 2, # 'd' + 1: 3, # 'e' + 27: 2, # 'f' + 12: 3, # 'g' + 20: 3, # 'h' + 9: 3, # 'i' + 22: 3, # 'j' + 7: 2, # 'k' + 6: 3, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 8: 3, # 'o' + 23: 1, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 3, # 'v' + 62: 0, # 'x' + 16: 3, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 2, # 'í' + 25: 3, # 'ó' + 24: 2, # 'ö' + 31: 2, # 'ú' + 29: 2, # 'ü' + 42: 2, # 'Å‘' + 56: 1, # 'ű' + }, + 20: { # 'h' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 0, # 'd' + 1: 3, # 'e' + 27: 0, # 'f' + 12: 1, # 'g' + 20: 2, # 'h' + 9: 3, # 'i' + 22: 1, # 'j' + 7: 1, # 'k' + 6: 1, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 8: 3, # 'o' + 23: 0, # 'p' + 10: 1, # 'r' + 5: 2, # 's' + 3: 1, # 't' + 21: 3, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 2, # 'y' + 11: 0, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 3, # 'í' + 25: 2, # 'ó' + 24: 2, # 'ö' + 31: 2, # 'ú' + 29: 1, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 9: { # 'i' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 
0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 3, # 'b' + 26: 3, # 'c' + 17: 3, # 'd' + 1: 3, # 'e' + 27: 3, # 'f' + 12: 3, # 'g' + 20: 3, # 'h' + 9: 2, # 'i' + 22: 2, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 2, # 'o' + 23: 2, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 3, # 'v' + 62: 1, # 'x' + 16: 1, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 2, # 'é' + 30: 1, # 'í' + 25: 3, # 'ó' + 24: 1, # 'ö' + 31: 2, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 1, # 'ű' + }, + 22: { # 'j' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 2, # 'b' + 26: 1, # 'c' + 17: 3, # 'd' + 1: 3, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 2, # 'h' + 9: 1, # 'i' + 22: 2, # 'j' + 7: 2, # 'k' + 6: 2, # 'l' + 13: 1, # 'm' + 4: 2, # 'n' + 8: 3, # 'o' + 23: 1, # 'p' + 10: 2, # 'r' + 5: 2, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 1, # 'í' + 25: 3, # 'ó' + 24: 3, # 'ö' + 31: 3, # 'ú' + 29: 2, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 7: { # 'k' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 3, # 'b' + 26: 2, # 'c' + 17: 1, # 'd' + 1: 3, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 2, # 'h' + 9: 3, # 'i' + 22: 2, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 1, # 'm' + 4: 3, # 'n' + 8: 3, # 'o' + 23: 1, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 2, # 'v' + 62: 0, # 'x' + 16: 2, # 'y' + 11: 1, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 3, # 'í' + 25: 2, # 'ó' + 24: 3, # 'ö' + 31: 1, # 'ú' + 29: 3, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 6: { # 'l' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 1, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 1, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 2, # 'b' + 26: 3, # 'c' + 17: 3, # 'd' + 1: 3, # 'e' + 27: 3, # 'f' + 12: 3, # 'g' + 20: 3, # 'h' + 9: 3, # 'i' + 22: 3, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 3, # 'o' + 23: 2, # 'p' + 10: 2, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 3, # 'v' + 62: 0, # 'x' + 16: 3, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 3, # 'í' + 25: 3, # 'ó' + 24: 3, # 'ö' + 31: 2, # 'ú' + 29: 2, # 'ü' + 42: 3, # 'Å‘' + 56: 1, # 'ű' + }, + 13: { 
# 'm' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 3, # 'b' + 26: 2, # 'c' + 17: 1, # 'd' + 1: 3, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 2, # 'h' + 9: 3, # 'i' + 22: 2, # 'j' + 7: 1, # 'k' + 6: 3, # 'l' + 13: 3, # 'm' + 4: 2, # 'n' + 8: 3, # 'o' + 23: 3, # 'p' + 10: 2, # 'r' + 5: 2, # 's' + 3: 2, # 't' + 21: 3, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 2, # 'í' + 25: 2, # 'ó' + 24: 2, # 'ö' + 31: 2, # 'ú' + 29: 2, # 'ü' + 42: 1, # 'Å‘' + 56: 2, # 'ű' + }, + 4: { # 'n' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 3, # 'b' + 26: 3, # 'c' + 17: 3, # 'd' + 1: 3, # 'e' + 27: 2, # 'f' + 12: 3, # 'g' + 20: 3, # 'h' + 9: 3, # 'i' + 22: 2, # 'j' + 7: 3, # 'k' + 6: 2, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 8: 3, # 'o' + 23: 2, # 'p' + 10: 2, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 2, # 'v' + 62: 1, # 'x' + 16: 3, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 2, # 'í' + 25: 2, # 'ó' + 24: 3, # 'ö' + 31: 2, # 'ú' + 29: 3, # 'ü' + 42: 2, # 'Å‘' + 56: 1, # 'ű' + }, + 8: { # 'o' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 1, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 2, # 'a' + 18: 3, # 'b' + 26: 3, # 'c' + 17: 3, # 'd' + 1: 2, # 'e' + 27: 2, # 'f' + 12: 3, # 'g' + 20: 3, # 'h' + 9: 2, # 'i' + 22: 2, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 1, # 'o' + 23: 3, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 2, # 'u' + 19: 3, # 'v' + 62: 1, # 'x' + 16: 1, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 1, # 'á' + 15: 2, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 23: { # 'p' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 1, # 'b' + 26: 2, # 'c' + 17: 1, # 'd' + 1: 3, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 2, # 'h' + 9: 3, # 'i' + 22: 2, # 'j' + 7: 2, # 'k' + 6: 3, # 'l' + 13: 1, # 'm' + 4: 2, # 'n' + 8: 3, # 'o' + 23: 3, # 'p' + 10: 3, # 'r' + 5: 2, # 's' + 3: 2, # 't' + 21: 3, # 'u' + 19: 2, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 
11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 2, # 'í' + 25: 2, # 'ó' + 24: 2, # 'ö' + 31: 1, # 'ú' + 29: 2, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 10: { # 'r' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 3, # 'b' + 26: 3, # 'c' + 17: 3, # 'd' + 1: 3, # 'e' + 27: 2, # 'f' + 12: 3, # 'g' + 20: 2, # 'h' + 9: 3, # 'i' + 22: 3, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 3, # 'o' + 23: 2, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 3, # 'v' + 62: 1, # 'x' + 16: 2, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 2, # 'í' + 25: 3, # 'ó' + 24: 3, # 'ö' + 31: 3, # 'ú' + 29: 3, # 'ü' + 42: 2, # 'Å‘' + 56: 2, # 'ű' + }, + 5: { # 's' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 3, # 'b' + 26: 2, # 'c' + 17: 2, # 'd' + 1: 3, # 'e' + 27: 2, # 'f' + 12: 2, # 'g' + 20: 2, # 'h' + 9: 3, # 'i' + 22: 1, # 'j' + 7: 3, # 'k' + 6: 2, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 3, # 'o' + 23: 2, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 2, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 3, # 'í' + 25: 3, # 'ó' + 24: 3, # 'ö' + 31: 3, # 'ú' + 29: 3, # 'ü' + 42: 2, # 'Å‘' + 56: 1, # 'ű' + }, + 3: { # 't' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 3, # 'b' + 26: 2, # 'c' + 17: 1, # 'd' + 1: 3, # 'e' + 27: 2, # 'f' + 12: 1, # 'g' + 20: 3, # 'h' + 9: 3, # 'i' + 22: 3, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 8: 3, # 'o' + 23: 1, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 3, # 'v' + 62: 0, # 'x' + 16: 3, # 'y' + 11: 1, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 2, # 'í' + 25: 3, # 'ó' + 24: 3, # 'ö' + 31: 3, # 'ú' + 29: 3, # 'ü' + 42: 3, # 'Å‘' + 56: 2, # 'ű' + }, + 21: { # 'u' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 1, # 'a' + 18: 2, # 'b' + 26: 2, # 'c' + 17: 3, # 'd' + 1: 2, # 
'e' + 27: 1, # 'f' + 12: 3, # 'g' + 20: 2, # 'h' + 9: 2, # 'i' + 22: 2, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 1, # 'o' + 23: 2, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 1, # 'u' + 19: 3, # 'v' + 62: 1, # 'x' + 16: 1, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 2, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 0, # 'ö' + 31: 1, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 19: { # 'v' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 2, # 'b' + 26: 1, # 'c' + 17: 1, # 'd' + 1: 3, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 1, # 'h' + 9: 3, # 'i' + 22: 1, # 'j' + 7: 1, # 'k' + 6: 1, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 8: 3, # 'o' + 23: 1, # 'p' + 10: 1, # 'r' + 5: 2, # 's' + 3: 2, # 't' + 21: 2, # 'u' + 19: 2, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 1, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 2, # 'í' + 25: 2, # 'ó' + 24: 2, # 'ö' + 31: 1, # 'ú' + 29: 2, # 'ü' + 42: 1, # 'Å‘' + 56: 1, # 'ű' + }, + 62: { # 'x' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 1, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 0, # 'd' + 1: 1, # 'e' + 27: 1, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 1, # 'i' + 22: 0, # 'j' + 7: 1, # 'k' + 6: 1, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 8: 1, # 'o' + 23: 1, # 'p' + 10: 1, # 'r' + 5: 1, # 's' + 3: 1, # 't' + 21: 1, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 0, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 1, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 1, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 16: { # 'y' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 2, # 'b' + 26: 1, # 'c' + 17: 1, # 'd' + 1: 3, # 'e' + 27: 2, # 'f' + 12: 2, # 'g' + 20: 2, # 'h' + 9: 3, # 'i' + 22: 2, # 'j' + 7: 2, # 'k' + 6: 2, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 8: 3, # 'o' + 23: 2, # 'p' + 10: 2, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 3, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 2, # 'í' + 25: 2, # 'ó' + 24: 3, # 'ö' + 31: 2, # 'ú' + 29: 2, # 'ü' + 42: 1, # 'Å‘' + 56: 2, # 'ű' + }, + 11: { # 'z' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, 
# 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 3, # 'a' + 18: 2, # 'b' + 26: 1, # 'c' + 17: 3, # 'd' + 1: 3, # 'e' + 27: 1, # 'f' + 12: 2, # 'g' + 20: 2, # 'h' + 9: 3, # 'i' + 22: 1, # 'j' + 7: 3, # 'k' + 6: 2, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 3, # 'o' + 23: 1, # 'p' + 10: 2, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 3, # 'u' + 19: 2, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 3, # 'á' + 15: 3, # 'é' + 30: 3, # 'í' + 25: 3, # 'ó' + 24: 3, # 'ö' + 31: 2, # 'ú' + 29: 3, # 'ü' + 42: 2, # 'Å‘' + 56: 1, # 'ű' + }, + 51: { # 'Ã' + 28: 0, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 0, # 'E' + 50: 1, # 'F' + 49: 2, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 2, # 'L' + 34: 1, # 'M' + 35: 2, # 'N' + 47: 0, # 'O' + 46: 1, # 'P' + 43: 2, # 'R' + 33: 2, # 'S' + 37: 1, # 'T' + 57: 0, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 1, # 'Z' + 2: 0, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 1, # 'd' + 1: 0, # 'e' + 27: 0, # 'f' + 12: 1, # 'g' + 20: 1, # 'h' + 9: 0, # 'i' + 22: 1, # 'j' + 7: 1, # 'k' + 6: 2, # 'l' + 13: 2, # 'm' + 4: 0, # 'n' + 8: 0, # 'o' + 23: 1, # 'p' + 10: 1, # 'r' + 5: 1, # 's' + 3: 1, # 't' + 21: 0, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 1, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 1, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 44: { # 'É' + 28: 0, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 1, # 'E' + 50: 0, # 'F' + 49: 2, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 2, # 'L' + 34: 1, # 'M' + 35: 2, # 'N' + 47: 0, # 'O' + 46: 1, # 'P' + 43: 2, # 'R' + 33: 2, # 'S' + 37: 2, # 'T' + 57: 0, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 1, # 'Z' + 2: 0, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 1, # 'd' + 1: 0, # 'e' + 27: 0, # 'f' + 12: 1, # 'g' + 20: 1, # 'h' + 9: 0, # 'i' + 22: 1, # 'j' + 7: 1, # 'k' + 6: 2, # 'l' + 13: 1, # 'm' + 4: 2, # 'n' + 8: 0, # 'o' + 23: 1, # 'p' + 10: 2, # 'r' + 5: 3, # 's' + 3: 1, # 't' + 21: 0, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 0, # 'z' + 51: 0, # 'Ã' + 44: 1, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 61: { # 'Ã' + 28: 0, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 0, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 1, # 'J' + 36: 0, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 0, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 0, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 1, # 'Z' + 2: 0, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 0, # 'e' + 27: 0, # 'f' + 12: 2, # 'g' + 20: 0, # 'h' + 9: 0, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 0, # 'l' + 13: 1, # 'm' + 4: 0, # 'n' + 8: 0, # 'o' + 23: 0, # 'p' + 10: 1, # 'r' + 5: 0, # 's' + 3: 1, # 't' + 21: 0, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 1, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' 
+ 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 58: { # 'Ó' + 28: 1, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 0, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 1, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 2, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 0, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 0, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 1, # 'Z' + 2: 0, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 1, # 'd' + 1: 0, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 2, # 'h' + 9: 0, # 'i' + 22: 0, # 'j' + 7: 1, # 'k' + 6: 1, # 'l' + 13: 0, # 'm' + 4: 1, # 'n' + 8: 0, # 'o' + 23: 1, # 'p' + 10: 1, # 'r' + 5: 1, # 's' + 3: 0, # 't' + 21: 0, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 1, # 'z' + 51: 0, # 'Ã' + 44: 1, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 59: { # 'Ö' + 28: 0, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 0, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 0, # 'O' + 46: 1, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 0, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 1, # 'Z' + 2: 0, # 'a' + 18: 0, # 'b' + 26: 1, # 'c' + 17: 1, # 'd' + 1: 0, # 'e' + 27: 0, # 'f' + 12: 0, # 'g' + 20: 0, # 'h' + 9: 0, # 'i' + 22: 0, # 'j' + 7: 1, # 'k' + 6: 1, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 8: 0, # 'o' + 23: 0, # 'p' + 10: 2, # 'r' + 5: 1, # 's' + 3: 1, # 't' + 21: 0, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 1, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 60: { # 'Ú' + 28: 0, # 'A' + 40: 1, # 'B' + 54: 1, # 'C' + 45: 1, # 'D' + 32: 0, # 'E' + 50: 1, # 'F' + 49: 1, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 0, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 1, # 'Z' + 2: 0, # 'a' + 18: 0, # 'b' + 26: 0, # 'c' + 17: 0, # 'd' + 1: 0, # 'e' + 27: 0, # 'f' + 12: 2, # 'g' + 20: 0, # 'h' + 9: 0, # 'i' + 22: 2, # 'j' + 7: 0, # 'k' + 6: 0, # 'l' + 13: 0, # 'm' + 4: 1, # 'n' + 8: 0, # 'o' + 23: 0, # 'p' + 10: 1, # 'r' + 5: 1, # 's' + 3: 1, # 't' + 21: 0, # 'u' + 19: 0, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 0, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 63: { # 'Ü' + 28: 0, # 'A' + 40: 1, # 'B' + 54: 0, # 'C' + 45: 1, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 1, # 'G' + 38: 1, # 'H' + 39: 0, # 'I' + 53: 1, # 'J' + 36: 1, # 'K' + 41: 1, # 'L' + 34: 1, # 'M' + 35: 1, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 1, # 'R' + 33: 1, # 'S' + 37: 1, # 'T' + 57: 0, # 'U' + 48: 1, # 'V' + 55: 0, # 'Y' + 52: 1, # 'Z' + 2: 0, # 'a' + 18: 1, # 'b' + 26: 0, # 'c' + 17: 1, # 'd' + 1: 0, # 'e' + 27: 0, # 'f' + 12: 1, # 'g' + 20: 0, # 'h' + 9: 0, # 'i' + 22: 0, # 'j' + 7: 0, # 'k' + 6: 1, # 'l' + 13: 0, # 'm' + 4: 1, # 'n' + 8: 0, # 
'o' + 23: 0, # 'p' + 10: 1, # 'r' + 5: 1, # 's' + 3: 1, # 't' + 21: 0, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 1, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 14: { # 'á' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 1, # 'a' + 18: 3, # 'b' + 26: 3, # 'c' + 17: 3, # 'd' + 1: 1, # 'e' + 27: 2, # 'f' + 12: 3, # 'g' + 20: 2, # 'h' + 9: 2, # 'i' + 22: 3, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 1, # 'o' + 23: 2, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 2, # 'u' + 19: 3, # 'v' + 62: 0, # 'x' + 16: 1, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 1, # 'á' + 15: 2, # 'é' + 30: 1, # 'í' + 25: 0, # 'ó' + 24: 1, # 'ö' + 31: 0, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 15: { # 'é' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 1, # 'a' + 18: 3, # 'b' + 26: 2, # 'c' + 17: 3, # 'd' + 1: 1, # 'e' + 27: 1, # 'f' + 12: 3, # 'g' + 20: 3, # 'h' + 9: 2, # 'i' + 22: 2, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 1, # 'o' + 23: 3, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 0, # 'u' + 19: 3, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 1, # 'á' + 15: 1, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 30: { # 'í' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 0, # 'a' + 18: 1, # 'b' + 26: 2, # 'c' + 17: 1, # 'd' + 1: 0, # 'e' + 27: 1, # 'f' + 12: 3, # 'g' + 20: 0, # 'h' + 9: 0, # 'i' + 22: 1, # 'j' + 7: 1, # 'k' + 6: 2, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 8: 0, # 'o' + 23: 1, # 'p' + 10: 3, # 'r' + 5: 2, # 's' + 3: 3, # 't' + 21: 0, # 'u' + 19: 3, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 25: { # 'ó' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 
57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 2, # 'a' + 18: 3, # 'b' + 26: 2, # 'c' + 17: 3, # 'd' + 1: 1, # 'e' + 27: 2, # 'f' + 12: 2, # 'g' + 20: 2, # 'h' + 9: 2, # 'i' + 22: 2, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 8: 1, # 'o' + 23: 2, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 1, # 'u' + 19: 2, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 1, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 0, # 'ó' + 24: 1, # 'ö' + 31: 1, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 24: { # 'ö' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 0, # 'a' + 18: 3, # 'b' + 26: 1, # 'c' + 17: 2, # 'd' + 1: 0, # 'e' + 27: 1, # 'f' + 12: 2, # 'g' + 20: 1, # 'h' + 9: 0, # 'i' + 22: 1, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 8: 0, # 'o' + 23: 2, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 3, # 't' + 21: 0, # 'u' + 19: 3, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 3, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 31: { # 'ú' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 1, # 'a' + 18: 1, # 'b' + 26: 2, # 'c' + 17: 1, # 'd' + 1: 1, # 'e' + 27: 2, # 'f' + 12: 3, # 'g' + 20: 1, # 'h' + 9: 1, # 'i' + 22: 3, # 'j' + 7: 1, # 'k' + 6: 3, # 'l' + 13: 1, # 'm' + 4: 2, # 'n' + 8: 0, # 'o' + 23: 1, # 'p' + 10: 3, # 'r' + 5: 3, # 's' + 3: 2, # 't' + 21: 1, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 1, # 'á' + 15: 1, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 29: { # 'ü' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 1, # 'a' + 18: 1, # 'b' + 26: 1, # 'c' + 17: 2, # 'd' + 1: 1, # 'e' + 27: 1, # 'f' + 12: 3, # 'g' + 20: 2, # 'h' + 9: 1, # 'i' + 22: 1, # 'j' + 7: 3, # 'k' + 6: 3, # 'l' + 13: 1, # 'm' + 4: 3, # 'n' + 8: 0, # 'o' + 23: 1, # 'p' + 10: 2, # 'r' + 5: 2, # 's' + 3: 2, # 't' + 21: 0, # 'u' + 19: 2, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 1, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 42: { # 'Å‘' + 28: 0, # 'A' + 
40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 1, # 'a' + 18: 2, # 'b' + 26: 1, # 'c' + 17: 2, # 'd' + 1: 1, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 1, # 'h' + 9: 1, # 'i' + 22: 1, # 'j' + 7: 2, # 'k' + 6: 3, # 'l' + 13: 1, # 'm' + 4: 2, # 'n' + 8: 1, # 'o' + 23: 1, # 'p' + 10: 2, # 'r' + 5: 2, # 's' + 3: 2, # 't' + 21: 1, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 1, # 'é' + 30: 1, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 1, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, + 56: { # 'ű' + 28: 0, # 'A' + 40: 0, # 'B' + 54: 0, # 'C' + 45: 0, # 'D' + 32: 0, # 'E' + 50: 0, # 'F' + 49: 0, # 'G' + 38: 0, # 'H' + 39: 0, # 'I' + 53: 0, # 'J' + 36: 0, # 'K' + 41: 0, # 'L' + 34: 0, # 'M' + 35: 0, # 'N' + 47: 0, # 'O' + 46: 0, # 'P' + 43: 0, # 'R' + 33: 0, # 'S' + 37: 0, # 'T' + 57: 0, # 'U' + 48: 0, # 'V' + 55: 0, # 'Y' + 52: 0, # 'Z' + 2: 1, # 'a' + 18: 1, # 'b' + 26: 0, # 'c' + 17: 1, # 'd' + 1: 1, # 'e' + 27: 1, # 'f' + 12: 1, # 'g' + 20: 1, # 'h' + 9: 1, # 'i' + 22: 1, # 'j' + 7: 1, # 'k' + 6: 1, # 'l' + 13: 0, # 'm' + 4: 2, # 'n' + 8: 0, # 'o' + 23: 0, # 'p' + 10: 1, # 'r' + 5: 1, # 's' + 3: 1, # 't' + 21: 0, # 'u' + 19: 1, # 'v' + 62: 0, # 'x' + 16: 0, # 'y' + 11: 2, # 'z' + 51: 0, # 'Ã' + 44: 0, # 'É' + 61: 0, # 'Ã' + 58: 0, # 'Ó' + 59: 0, # 'Ö' + 60: 0, # 'Ú' + 63: 0, # 'Ü' + 14: 0, # 'á' + 15: 0, # 'é' + 30: 0, # 'í' + 25: 0, # 'ó' + 24: 0, # 'ö' + 31: 0, # 'ú' + 29: 0, # 'ü' + 42: 0, # 'Å‘' + 56: 0, # 'ű' + }, +} + +# 255: Undefined characters that did not exist in training text +# 254: Carriage/Return +# 253: symbol (punctuation) that does not belong to word +# 252: 0 - 9 +# 251: Control characters + +# Character Mapping Table(s): +WINDOWS_1250_HUNGARIAN_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' + 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' 
+ 64: 253, # '@' + 65: 28, # 'A' + 66: 40, # 'B' + 67: 54, # 'C' + 68: 45, # 'D' + 69: 32, # 'E' + 70: 50, # 'F' + 71: 49, # 'G' + 72: 38, # 'H' + 73: 39, # 'I' + 74: 53, # 'J' + 75: 36, # 'K' + 76: 41, # 'L' + 77: 34, # 'M' + 78: 35, # 'N' + 79: 47, # 'O' + 80: 46, # 'P' + 81: 72, # 'Q' + 82: 43, # 'R' + 83: 33, # 'S' + 84: 37, # 'T' + 85: 57, # 'U' + 86: 48, # 'V' + 87: 64, # 'W' + 88: 68, # 'X' + 89: 55, # 'Y' + 90: 52, # 'Z' + 91: 253, # '[' + 92: 253, # '\\' + 93: 253, # ']' + 94: 253, # '^' + 95: 253, # '_' + 96: 253, # '`' + 97: 2, # 'a' + 98: 18, # 'b' + 99: 26, # 'c' + 100: 17, # 'd' + 101: 1, # 'e' + 102: 27, # 'f' + 103: 12, # 'g' + 104: 20, # 'h' + 105: 9, # 'i' + 106: 22, # 'j' + 107: 7, # 'k' + 108: 6, # 'l' + 109: 13, # 'm' + 110: 4, # 'n' + 111: 8, # 'o' + 112: 23, # 'p' + 113: 67, # 'q' + 114: 10, # 'r' + 115: 5, # 's' + 116: 3, # 't' + 117: 21, # 'u' + 118: 19, # 'v' + 119: 65, # 'w' + 120: 62, # 'x' + 121: 16, # 'y' + 122: 11, # 'z' + 123: 253, # '{' + 124: 253, # '|' + 125: 253, # '}' + 126: 253, # '~' + 127: 253, # '\x7f' + 128: 161, # '€' + 129: 162, # None + 130: 163, # '‚' + 131: 164, # None + 132: 165, # '„' + 133: 166, # '…' + 134: 167, # '†' + 135: 168, # '‡' + 136: 169, # None + 137: 170, # '‰' + 138: 171, # 'Å ' + 139: 172, # '‹' + 140: 173, # 'Åš' + 141: 174, # 'Ť' + 142: 175, # 'Ž' + 143: 176, # 'Ź' + 144: 177, # None + 145: 178, # '‘' + 146: 179, # '’' + 147: 180, # '“' + 148: 78, # 'â€' + 149: 181, # '•' + 150: 69, # '–' + 151: 182, # '—' + 152: 183, # None + 153: 184, # 'â„¢' + 154: 185, # 'Å¡' + 155: 186, # '›' + 156: 187, # 'Å›' + 157: 188, # 'Å¥' + 158: 189, # 'ž' + 159: 190, # 'ź' + 160: 191, # '\xa0' + 161: 192, # 'ˇ' + 162: 193, # '˘' + 163: 194, # 'Å' + 164: 195, # '¤' + 165: 196, # 'Ä„' + 166: 197, # '¦' + 167: 76, # '§' + 168: 198, # '¨' + 169: 199, # '©' + 170: 200, # 'Åž' + 171: 201, # '«' + 172: 202, # '¬' + 173: 203, # '\xad' + 174: 204, # '®' + 175: 205, # 'Å»' + 176: 81, # '°' + 177: 206, # '±' + 178: 207, # 'Ë›' + 179: 208, # 'Å‚' + 180: 209, # '´' + 181: 210, # 'µ' + 182: 211, # '¶' + 183: 212, # '·' + 184: 213, # '¸' + 185: 214, # 'Ä…' + 186: 215, # 'ÅŸ' + 187: 216, # '»' + 188: 217, # 'Ľ' + 189: 218, # 'Ë' + 190: 219, # 'ľ' + 191: 220, # 'ż' + 192: 221, # 'Å”' + 193: 51, # 'Ã' + 194: 83, # 'Â' + 195: 222, # 'Ä‚' + 196: 80, # 'Ä' + 197: 223, # 'Ĺ' + 198: 224, # 'Ć' + 199: 225, # 'Ç' + 200: 226, # 'ÄŒ' + 201: 44, # 'É' + 202: 227, # 'Ę' + 203: 228, # 'Ë' + 204: 229, # 'Äš' + 205: 61, # 'Ã' + 206: 230, # 'ÃŽ' + 207: 231, # 'ÄŽ' + 208: 232, # 'Ä' + 209: 233, # 'Ń' + 210: 234, # 'Ň' + 211: 58, # 'Ó' + 212: 235, # 'Ô' + 213: 66, # 'Å' + 214: 59, # 'Ö' + 215: 236, # '×' + 216: 237, # 'Ř' + 217: 238, # 'Å®' + 218: 60, # 'Ú' + 219: 70, # 'Ű' + 220: 63, # 'Ü' + 221: 239, # 'Ã' + 222: 240, # 'Å¢' + 223: 241, # 'ß' + 224: 84, # 'Å•' + 225: 14, # 'á' + 226: 75, # 'â' + 227: 242, # 'ă' + 228: 71, # 'ä' + 229: 82, # 'ĺ' + 230: 243, # 'ć' + 231: 73, # 'ç' + 232: 244, # 'Ä' + 233: 15, # 'é' + 234: 85, # 'Ä™' + 235: 79, # 'ë' + 236: 86, # 'Ä›' + 237: 30, # 'í' + 238: 77, # 'î' + 239: 87, # 'Ä' + 240: 245, # 'Ä‘' + 241: 246, # 'Å„' + 242: 247, # 'ň' + 243: 25, # 'ó' + 244: 74, # 'ô' + 245: 42, # 'Å‘' + 246: 24, # 'ö' + 247: 248, # '÷' + 248: 249, # 'Å™' + 249: 250, # 'ů' + 250: 31, # 'ú' + 251: 56, # 'ű' + 252: 29, # 'ü' + 253: 251, # 'ý' + 254: 252, # 'Å£' + 255: 253, # 'Ë™' +} + +WINDOWS_1250_HUNGARIAN_MODEL = SingleByteCharSetModel(charset_name='windows-1250', + language='Hungarian', + char_to_order_map=WINDOWS_1250_HUNGARIAN_CHAR_TO_ORDER, + 
language_model=HUNGARIAN_LANG_MODEL, + typical_positive_ratio=0.947368, + keep_ascii_letters=True, + alphabet='ABCDEFGHIJKLMNOPRSTUVZabcdefghijklmnoprstuvzÁÉÍÓÖÚÜáéíóöúüŐőŰű') + +ISO_8859_2_HUNGARIAN_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' + 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' + 64: 253, # '@' + 65: 28, # 'A' + 66: 40, # 'B' + 67: 54, # 'C' + 68: 45, # 'D' + 69: 32, # 'E' + 70: 50, # 'F' + 71: 49, # 'G' + 72: 38, # 'H' + 73: 39, # 'I' + 74: 53, # 'J' + 75: 36, # 'K' + 76: 41, # 'L' + 77: 34, # 'M' + 78: 35, # 'N' + 79: 47, # 'O' + 80: 46, # 'P' + 81: 71, # 'Q' + 82: 43, # 'R' + 83: 33, # 'S' + 84: 37, # 'T' + 85: 57, # 'U' + 86: 48, # 'V' + 87: 64, # 'W' + 88: 68, # 'X' + 89: 55, # 'Y' + 90: 52, # 'Z' + 91: 253, # '[' + 92: 253, # '\\' + 93: 253, # ']' + 94: 253, # '^' + 95: 253, # '_' + 96: 253, # '`' + 97: 2, # 'a' + 98: 18, # 'b' + 99: 26, # 'c' + 100: 17, # 'd' + 101: 1, # 'e' + 102: 27, # 'f' + 103: 12, # 'g' + 104: 20, # 'h' + 105: 9, # 'i' + 106: 22, # 'j' + 107: 7, # 'k' + 108: 6, # 'l' + 109: 13, # 'm' + 110: 4, # 'n' + 111: 8, # 'o' + 112: 23, # 'p' + 113: 67, # 'q' + 114: 10, # 'r' + 115: 5, # 's' + 116: 3, # 't' + 117: 21, # 'u' + 118: 19, # 'v' + 119: 65, # 'w' + 120: 62, # 'x' + 121: 16, # 'y' + 122: 11, # 'z' + 123: 253, # '{' + 124: 253, # '|' + 125: 253, # '}' + 126: 253, # '~' + 127: 253, # '\x7f' + 128: 159, # '\x80' + 129: 160, # '\x81' + 130: 161, # '\x82' + 131: 162, # '\x83' + 132: 163, # '\x84' + 133: 164, # '\x85' + 134: 165, # '\x86' + 135: 166, # '\x87' + 136: 167, # '\x88' + 137: 168, # '\x89' + 138: 169, # '\x8a' + 139: 170, # '\x8b' + 140: 171, # '\x8c' + 141: 172, # '\x8d' + 142: 173, # '\x8e' + 143: 174, # '\x8f' + 144: 175, # '\x90' + 145: 176, # '\x91' + 146: 177, # '\x92' + 147: 178, # '\x93' + 148: 179, # '\x94' + 149: 180, # '\x95' + 150: 181, # '\x96' + 151: 182, # '\x97' + 152: 183, # '\x98' + 153: 184, # '\x99' + 154: 185, # '\x9a' + 155: 186, # '\x9b' + 156: 187, # '\x9c' + 157: 188, # '\x9d' + 158: 189, # '\x9e' + 159: 190, # '\x9f' + 160: 191, # '\xa0' + 161: 192, # 'Ą' + 162: 193, # '˘' + 163: 194, # 'Ł' + 164: 195, # '¤' + 165: 196, # 'Ľ' + 166: 197, # 'Ś' + 167: 75, # '§' + 168: 198, # '¨' + 169: 199, # 'Š' + 170: 200, # 'Ş' + 171: 201, # 'Ť' + 172: 202, # 'Ź' + 173: 203, # '\xad' + 174: 204, # 'Ž' + 175: 205, # 'Ż' + 176: 79, # '°' + 177: 206, # 'ą' + 178: 207, # '˛' + 179: 208, # 'ł' + 180: 209, # '´' + 181: 210, # 'ľ' + 182: 211, # 'ś' + 183: 212, # 'ˇ' + 184: 213, # '¸' + 185: 214, # 'š' + 186: 215, # 'ş' + 187: 216, # 'ť' + 188: 217, # 'ź' + 189: 218, # '˝' + 190: 219, # 'ž' + 191: 220, # 'ż' + 192: 221, # 'Ŕ' + 193: 51, # 'Á' + 194: 81, # 'Â' + 195: 222, # 'Ă' + 196: 78, # 'Ä' + 197: 223, # 'Ĺ' + 198: 224, # 'Ć' + 199: 225, # 'Ç' + 200: 226, # 'Č' + 201: 44, # 'É' + 202: 227, # 'Ę' + 203: 228, # 'Ë' + 204: 229, # 'Ě' + 205: 61, # 'Í' + 206: 230, # 'Î' + 207: 231, # 'Ď' + 208: 232, # 'Đ' + 209: 233, # 'Ń' + 210: 234, # 'Ň' + 211: 58, # 'Ó' + 212: 235, # 'Ô' + 213: 66, # 'Ő' + 214: 59, # 'Ö' + 215: 236, # '×' + 216: 237, # 'Ř' + 217: 238, # 'Ů' + 218: 60, # 'Ú' + 219: 69, # 'Ű' + 220: 63, # 'Ü' + 221: 239, # 'Ý' + 222: 240, # 'Ţ' + 223: 241, # 'ß' + 224: 82, # 'ŕ' + 225: 14, # 'á' + 226: 74, # 'â' + 227: 242, # 'ă' + 228: 70, # 'ä' + 229: 80, # 'ĺ' + 230: 243, # 'ć' + 231: 72, # 'ç' + 232: 244, # 'č' + 233: 15, # 'é' + 234: 83, # 'ę' + 235: 77, # 'ë' + 236: 84, # 'ě' + 237: 30, # 'í' + 238: 76, # 'î' + 239: 85, # 'ď' + 240: 245, # 'đ' + 241: 246, # 'ń' + 242: 247, # 'ň' + 243: 25, # 'ó' + 244: 73, # 'ô' + 245: 42, # 'ő' + 246: 24, # 'ö' + 247: 248, # '÷' + 248: 249, # 'ř' + 249: 250, # 'ů' + 250: 31, # 'ú' + 251: 56, # 'ű' + 252: 29, # 'ü' + 253: 251, # 'ý' + 254: 252, # 'ţ' + 255: 253, # '˙' +} + +ISO_8859_2_HUNGARIAN_MODEL = SingleByteCharSetModel(charset_name='ISO-8859-2', + language='Hungarian', + char_to_order_map=ISO_8859_2_HUNGARIAN_CHAR_TO_ORDER, + language_model=HUNGARIAN_LANG_MODEL, + typical_positive_ratio=0.947368, + keep_ascii_letters=True, + alphabet='ABCDEFGHIJKLMNOPRSTUVZabcdefghijklmnoprstuvzÁÉÍÓÖÚÜáéíóöúüŐőŰű') + diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/langrussianmodel.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langrussianmodel.py new file mode 100644 index 0000000..5594452 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langrussianmodel.py @@ -0,0 +1,5718 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel + + +# 3: Positive +# 2: Likely +# 1: Unlikely +# 0: Negative + +RUSSIAN_LANG_MODEL = { + 37: { # 'А' + 37: 0, # 'А' + 44: 1, # 'Б' + 33: 1, # 'В' + 46: 1, # 'Г' + 41: 1, # 'Д' + 48: 1, # 'Е' + 56: 1, # 'Ж' + 51: 1, # 'З' + 42: 1, # 'И' + 60: 1, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 2, # 'Н' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 1, # 'Ф' + 55: 1, # 'Х' + 58: 1, # 'Ц' + 50: 1, # 'Ч' + 57: 1, # 'Ш' + 63: 1, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 1, # 'Ю' + 43: 1, # 'Я' + 3: 1, # 'а' + 21: 2, # 'б' + 10: 2, # 'в' + 19: 2, # 'г' + 13: 2, # 'д' + 2: 0, # 'е' + 24: 1, # 'ж' + 20: 1, # 'з' + 4: 0, # 'и' + 23: 1, # 'й' + 11: 2, # 'к' + 8: 3, # 'л' + 12: 2, # 'м' + 5: 2, # 'н' + 1: 0, # 'о' + 15: 2, # 'п' + 9: 2, # 'р' + 7: 2, # 'с' + 6: 2, # 'т' + 14: 2, # 'у' + 39: 2, # 'ф' + 26: 2, # 'х' + 28: 0, # 'ц' + 22: 1, # 'ч' + 25: 2, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ъ' + 18: 0, # 'ы' + 17: 0, # 'ь' + 30: 1, # 'э' + 27: 0, # 'ю' + 16: 0, # 'я' + }, + 44: { # 'Б' + 37: 1, # 'А' + 44: 0, # 'Б' + 33: 1, # 'В' + 46: 1, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Н' + 34: 1, # 'О' + 35: 0, # 'П' + 45: 1, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Х' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62:
1, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 1, # 'Я' + 3: 2, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 1, # 'д' + 2: 3, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 2, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 2, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 2, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 33: { # 'Ð’' + 37: 2, # 'Ð' + 44: 0, # 'Б' + 33: 1, # 'Ð’' + 46: 0, # 'Г' + 41: 1, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 1, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 1, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 1, # 'Я' + 3: 2, # 'а' + 21: 1, # 'б' + 10: 1, # 'в' + 19: 1, # 'г' + 13: 2, # 'д' + 2: 3, # 'е' + 24: 0, # 'ж' + 20: 2, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 1, # 'к' + 8: 2, # 'л' + 12: 2, # 'м' + 5: 2, # 'н' + 1: 3, # 'о' + 15: 2, # 'п' + 9: 2, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 1, # 'ц' + 22: 2, # 'ч' + 25: 1, # 'ш' + 29: 0, # 'щ' + 54: 1, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 46: { # 'Г' + 37: 1, # 'Ð' + 44: 1, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 1, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 1, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 0, # 'б' + 10: 1, # 'в' + 19: 0, # 'г' + 13: 2, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 2, # 'л' + 12: 1, # 'м' + 5: 1, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 2, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 41: { # 'Д' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 1, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 2, # 'Е' + 56: 1, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 0, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 0, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 1, # 'Ц' + 50: 1, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 1, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 1, # 'Я' + 3: 3, # 'а' + 21: 0, # 'б' + 10: 2, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 3, # 'ж' + 20: 1, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 2, # 'л' + 12: 1, # 'м' + 5: 1, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 2, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 48: { # 'Е' + 37: 1, # 'Ð' + 44: 1, # 'Б' + 33: 1, # 'Ð’' + 46: 1, # 'Г' + 41: 1, # 'Д' + 48: 1, # 'Е' 
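# Illustrative sketch, not part of the vendored chardet file: every nested
# value in RUSSIAN_LANG_MODEL is a bigram rating on the 0-3 scale from the
# header comments (3 positive ... 0 negative), keyed by the letter orders of
# the previous and current character. A simplified tally over a run of
# orders (such as the output of the letter_orders sketch earlier):
def rate_bigrams(orders, lang_model):
    counts = {0: 0, 1: 0, 2: 0, 3: 0}
    for prev, cur in zip(orders, orders[1:]):
        # Pairs missing from the model default to 0, i.e. "negative".
        counts[lang_model.get(prev, {}).get(cur, 0)] += 1
    return counts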
+ 56: 1, # 'Ж' + 51: 1, # 'З' + 42: 1, # 'И' + 60: 1, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 2, # 'Ð' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 2, # 'Р' + 32: 2, # 'С' + 40: 1, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 1, # 'Ð¥' + 58: 1, # 'Ц' + 50: 1, # 'Ч' + 57: 1, # 'Ш' + 63: 1, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 1, # 'Я' + 3: 0, # 'а' + 21: 0, # 'б' + 10: 2, # 'в' + 19: 2, # 'г' + 13: 2, # 'д' + 2: 2, # 'е' + 24: 1, # 'ж' + 20: 1, # 'з' + 4: 0, # 'и' + 23: 2, # 'й' + 11: 1, # 'к' + 8: 2, # 'л' + 12: 2, # 'м' + 5: 1, # 'н' + 1: 0, # 'о' + 15: 1, # 'п' + 9: 1, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 0, # 'у' + 39: 1, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 1, # 'ш' + 29: 2, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 56: { # 'Ж' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 1, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 1, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 1, # 'б' + 10: 0, # 'в' + 19: 1, # 'г' + 13: 1, # 'д' + 2: 2, # 'е' + 24: 1, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 0, # 'л' + 12: 1, # 'м' + 5: 0, # 'н' + 1: 2, # 'о' + 15: 0, # 'п' + 9: 1, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 2, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 51: { # 'З' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 1, # 'Ð’' + 46: 1, # 'Г' + 41: 1, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 0, # 'П' + 45: 1, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 1, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 1, # 'б' + 10: 2, # 'в' + 19: 0, # 'г' + 13: 2, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 1, # 'л' + 12: 1, # 'м' + 5: 2, # 'н' + 1: 2, # 'о' + 15: 0, # 'п' + 9: 1, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 1, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 42: { # 'И' + 37: 1, # 'Ð' + 44: 1, # 'Б' + 33: 1, # 'Ð’' + 46: 1, # 'Г' + 41: 1, # 'Д' + 48: 2, # 'Е' + 56: 1, # 'Ж' + 51: 1, # 'З' + 42: 1, # 'И' + 60: 1, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 1, # 'Р' + 32: 2, # 'С' + 40: 1, # 'Т' + 52: 0, # 'У' + 53: 1, # 'Ф' + 55: 1, # 'Ð¥' + 58: 1, # 'Ц' + 50: 1, # 'Ч' + 57: 0, # 'Ш' + 63: 1, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 1, # 'Ю' + 43: 1, # 'Я' + 3: 1, # 'а' + 21: 2, # 'б' + 10: 2, # 'в' + 19: 2, # 'г' + 13: 2, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 2, # 'з' + 4: 1, # 'и' + 23: 0, # 'й' + 11: 1, # 'к' + 8: 2, # 'л' + 12: 2, # 'м' + 5: 2, # 'н' + 1: 1, # 'о' + 15: 1, # 'п' + 9: 2, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 
1, # 'у' + 39: 1, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 1, # 'ш' + 29: 1, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 60: { # 'Й' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 1, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 1, # 'Ð¥' + 58: 1, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 0, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 1, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 0, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 0, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 2, # 'о' + 15: 0, # 'п' + 9: 0, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 0, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 36: { # 'К' + 37: 2, # 'Ð' + 44: 0, # 'Б' + 33: 1, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 1, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 1, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 2, # 'О' + 35: 1, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 1, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 0, # 'б' + 10: 1, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 2, # 'л' + 12: 0, # 'м' + 5: 1, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 2, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 49: { # 'Л' + 37: 2, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 1, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 1, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 0, # 'Ð' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 0, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 1, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 1, # 'Ь' + 47: 0, # 'Э' + 59: 1, # 'Ю' + 43: 1, # 'Я' + 3: 2, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 1, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 1, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 1, # 'л' + 12: 0, # 'м' + 5: 1, # 'н' + 1: 2, # 'о' + 15: 0, # 'п' + 9: 0, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 2, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 38: { # 'М' + 37: 1, # 'Ð' + 44: 1, # 'Б' + 33: 1, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 1, # 'Ф' + 55: 1, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 0, # 'Ь' + 47: 1, # 'Э' + 59: 0, 
# 'Ю' + 43: 1, # 'Я' + 3: 3, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 1, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 1, # 'л' + 12: 1, # 'м' + 5: 2, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 1, # 'Ñ€' + 7: 1, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 31: { # 'Ð' + 37: 2, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 1, # 'Г' + 41: 1, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 1, # 'З' + 42: 2, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 0, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 1, # 'Ф' + 55: 1, # 'Ð¥' + 58: 1, # 'Ц' + 50: 1, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 1, # 'Ь' + 47: 1, # 'Э' + 59: 0, # 'Ю' + 43: 1, # 'Я' + 3: 3, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 3, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 0, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 1, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 3, # 'у' + 39: 0, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 2, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 34: { # 'О' + 37: 0, # 'Ð' + 44: 1, # 'Б' + 33: 1, # 'Ð’' + 46: 1, # 'Г' + 41: 2, # 'Д' + 48: 1, # 'Е' + 56: 1, # 'Ж' + 51: 1, # 'З' + 42: 1, # 'И' + 60: 1, # 'Й' + 36: 1, # 'К' + 49: 2, # 'Л' + 38: 1, # 'М' + 31: 2, # 'Ð' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 2, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 1, # 'Ф' + 55: 1, # 'Ð¥' + 58: 0, # 'Ц' + 50: 1, # 'Ч' + 57: 1, # 'Ш' + 63: 1, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 1, # 'Я' + 3: 1, # 'а' + 21: 2, # 'б' + 10: 1, # 'в' + 19: 2, # 'г' + 13: 2, # 'д' + 2: 0, # 'е' + 24: 1, # 'ж' + 20: 1, # 'з' + 4: 0, # 'и' + 23: 1, # 'й' + 11: 2, # 'к' + 8: 2, # 'л' + 12: 1, # 'м' + 5: 3, # 'н' + 1: 0, # 'о' + 15: 2, # 'п' + 9: 2, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 1, # 'у' + 39: 1, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 1, # 'ц' + 22: 2, # 'ч' + 25: 2, # 'ш' + 29: 1, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 35: { # 'П' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 1, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 2, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 1, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 1, # 'Я' + 3: 2, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 2, # 'л' + 12: 0, # 'м' + 5: 1, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 3, # 'Ñ€' + 7: 1, # 'Ñ' + 6: 1, # 'Ñ‚' + 14: 2, # 'у' + 39: 1, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 1, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 2, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 2, # 'Ñ' + }, + 45: { # 'Р' + 37: 2, # 'Ð' + 44: 1, # 'Б' + 33: 1, # 'Ð’' + 46: 1, # 'Г' + 41: 1, # 'Д' + 48: 2, # 'Е' + 56: 1, # 'Ж' + 51: 0, # 'З' + 42: 2, # 'И' + 
60: 0, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 2, # 'О' + 35: 0, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 1, # 'Ð¥' + 58: 1, # 'Ц' + 50: 1, # 'Ч' + 57: 1, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 1, # 'Ь' + 47: 1, # 'Э' + 59: 1, # 'Ю' + 43: 1, # 'Я' + 3: 3, # 'а' + 21: 0, # 'б' + 10: 1, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 1, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 0, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 1, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 2, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 2, # 'Ñ' + }, + 32: { # 'С' + 37: 1, # 'Ð' + 44: 1, # 'Б' + 33: 1, # 'Ð’' + 46: 1, # 'Г' + 41: 1, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 2, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 1, # 'Ð¥' + 58: 1, # 'Ц' + 50: 1, # 'Ч' + 57: 1, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 1, # 'Ь' + 47: 1, # 'Э' + 59: 1, # 'Ю' + 43: 1, # 'Я' + 3: 2, # 'а' + 21: 1, # 'б' + 10: 2, # 'в' + 19: 1, # 'г' + 13: 2, # 'д' + 2: 3, # 'е' + 24: 1, # 'ж' + 20: 1, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 2, # 'к' + 8: 2, # 'л' + 12: 2, # 'м' + 5: 2, # 'н' + 1: 2, # 'о' + 15: 2, # 'п' + 9: 2, # 'Ñ€' + 7: 1, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 2, # 'у' + 39: 1, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 1, # 'ц' + 22: 1, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 1, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 40: { # 'Т' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 1, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 2, # 'О' + 35: 0, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 1, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 1, # 'Ь' + 47: 1, # 'Э' + 59: 1, # 'Ю' + 43: 1, # 'Я' + 3: 3, # 'а' + 21: 1, # 'б' + 10: 2, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 3, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 1, # 'к' + 8: 1, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 2, # 'Ñ€' + 7: 1, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 1, # 'щ' + 54: 0, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 52: { # 'У' + 37: 1, # 'Ð' + 44: 1, # 'Б' + 33: 1, # 'Ð’' + 46: 1, # 'Г' + 41: 1, # 'Д' + 48: 1, # 'Е' + 56: 1, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 1, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 1, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 1, # 'Ð¥' + 58: 0, # 'Ц' + 50: 1, # 'Ч' + 57: 1, # 'Ш' + 63: 1, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 1, # 'Ю' + 43: 0, # 'Я' + 3: 1, # 'а' + 21: 2, # 'б' + 10: 2, # 'в' + 19: 1, # 'г' + 13: 2, # 'д' + 2: 1, # 'е' + 24: 2, # 'ж' + 20: 2, # 'з' + 4: 2, # 'и' + 23: 1, # 'й' + 11: 1, # 'к' + 8: 2, # 'л' + 12: 2, # 'м' + 5: 1, # 'н' + 1: 2, # 'о' + 15: 1, # 'п' + 9: 2, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 0, # 'у' + 39: 1, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 
1, # 'ц' + 22: 2, # 'ч' + 25: 1, # 'ш' + 29: 1, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 53: { # 'Ф' + 37: 1, # 'Ð' + 44: 1, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 1, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 1, # 'О' + 35: 0, # 'П' + 45: 1, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 2, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 2, # 'о' + 15: 0, # 'п' + 9: 2, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 1, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 55: { # 'Ð¥' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 1, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 0, # 'б' + 10: 2, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 2, # 'л' + 12: 1, # 'м' + 5: 0, # 'н' + 1: 2, # 'о' + 15: 0, # 'п' + 9: 2, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 1, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 58: { # 'Ц' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 1, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 1, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 1, # 'а' + 21: 0, # 'б' + 10: 1, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 0, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 0, # 'о' + 15: 0, # 'п' + 9: 0, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 1, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 50: { # 'Ч' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 0, # 'О' + 35: 1, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 1, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 1, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 0, # 
'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 1, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 1, # 'о' + 15: 0, # 'п' + 9: 1, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 57: { # 'Ш' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 1, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 0, # 'б' + 10: 1, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 1, # 'и' + 23: 0, # 'й' + 11: 1, # 'к' + 8: 2, # 'л' + 12: 1, # 'м' + 5: 1, # 'н' + 1: 2, # 'о' + 15: 2, # 'п' + 9: 1, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 1, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 63: { # 'Щ' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 1, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 1, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 1, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 1, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 0, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 1, # 'о' + 15: 0, # 'п' + 9: 0, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 1, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 62: { # 'Ы' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 1, # 'Ð’' + 46: 1, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 1, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 0, # 'О' + 35: 1, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 1, # 'Ð¥' + 58: 1, # 'Ц' + 50: 0, # 'Ч' + 57: 1, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 0, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 0, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 0, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 0, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 0, # 'о' + 15: 0, # 'п' + 9: 0, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 0, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 61: { # 'Ь' + 37: 0, # 'Ð' + 44: 1, # 'Б' + 33: 1, # 'Ð’' + 46: 0, # 'Г' + 41: 1, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 0, # 'Л' + 38: 
1, # 'М' + 31: 1, # 'Ð' + 34: 1, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 1, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 1, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 1, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 1, # 'Ю' + 43: 1, # 'Я' + 3: 0, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 0, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 0, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 0, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 0, # 'о' + 15: 0, # 'п' + 9: 0, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 0, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 47: { # 'Э' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 1, # 'Ð’' + 46: 0, # 'Г' + 41: 1, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 1, # 'Й' + 36: 1, # 'К' + 49: 1, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 0, # 'О' + 35: 1, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 1, # 'а' + 21: 1, # 'б' + 10: 2, # 'в' + 19: 1, # 'г' + 13: 2, # 'д' + 2: 0, # 'е' + 24: 1, # 'ж' + 20: 0, # 'з' + 4: 0, # 'и' + 23: 2, # 'й' + 11: 2, # 'к' + 8: 2, # 'л' + 12: 2, # 'м' + 5: 2, # 'н' + 1: 0, # 'о' + 15: 1, # 'п' + 9: 2, # 'Ñ€' + 7: 1, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 1, # 'у' + 39: 1, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 1, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 59: { # 'Ю' + 37: 1, # 'Ð' + 44: 1, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 1, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 1, # 'Р' + 32: 0, # 'С' + 40: 1, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 1, # 'Ч' + 57: 0, # 'Ш' + 63: 1, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 0, # 'а' + 21: 1, # 'б' + 10: 0, # 'в' + 19: 1, # 'г' + 13: 1, # 'д' + 2: 0, # 'е' + 24: 1, # 'ж' + 20: 0, # 'з' + 4: 0, # 'и' + 23: 0, # 'й' + 11: 1, # 'к' + 8: 2, # 'л' + 12: 1, # 'м' + 5: 2, # 'н' + 1: 0, # 'о' + 15: 1, # 'п' + 9: 1, # 'Ñ€' + 7: 1, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 0, # 'у' + 39: 0, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 43: { # 'Я' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 1, # 'Ð’' + 46: 1, # 'Г' + 41: 0, # 'Д' + 48: 1, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 1, # 'С' + 40: 1, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 1, # 'Ð¥' + 58: 0, # 'Ц' + 50: 1, # 'Ч' + 57: 0, # 'Ш' + 63: 1, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 1, # 'Ю' + 43: 1, # 'Я' + 3: 0, # 'а' + 21: 1, # 'б' + 10: 1, # 'в' + 19: 1, # 'г' + 13: 1, # 'д' + 2: 0, # 'е' + 24: 0, # 'ж' + 20: 1, # 'з' + 4: 0, # 'и' + 23: 1, # 'й' + 11: 1, # 'к' + 8: 1, # 'л' + 12: 1, # 'м' + 5: 2, # 'н' + 1: 0, # 'о' + 15: 1, # 'п' + 9: 1, # 'Ñ€' + 7: 1, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 0, # 'у' + 39: 0, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 1, # 'ш' + 29: 1, 
# 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 3: { # 'а' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 1, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 3, # 'б' + 10: 3, # 'в' + 19: 3, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 3, # 'ж' + 20: 3, # 'з' + 4: 3, # 'и' + 23: 3, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 2, # 'о' + 15: 3, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 3, # 'у' + 39: 2, # 'Ñ„' + 26: 3, # 'Ñ…' + 28: 3, # 'ц' + 22: 3, # 'ч' + 25: 3, # 'ш' + 29: 3, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 3, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 21: { # 'б' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 1, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 2, # 'б' + 10: 2, # 'в' + 19: 1, # 'г' + 13: 2, # 'д' + 2: 3, # 'е' + 24: 2, # 'ж' + 20: 1, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 2, # 'к' + 8: 3, # 'л' + 12: 2, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 1, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 3, # 'у' + 39: 0, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 1, # 'ц' + 22: 1, # 'ч' + 25: 2, # 'ш' + 29: 3, # 'щ' + 54: 2, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 2, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 10: { # 'в' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 2, # 'б' + 10: 2, # 'в' + 19: 2, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 1, # 'ж' + 20: 3, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 2, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 3, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 3, # 'у' + 39: 1, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 2, # 'ц' + 22: 2, # 'ч' + 25: 3, # 'ш' + 29: 2, # 'щ' + 54: 2, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 3, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 19: { # 'г' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 1, # 'б' + 10: 2, # 'в' + 19: 1, # 'г' + 13: 3, # 'д' 
+ 2: 3, # 'е' + 24: 0, # 'ж' + 20: 1, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 2, # 'к' + 8: 3, # 'л' + 12: 2, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 3, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 3, # 'у' + 39: 1, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 1, # 'ц' + 22: 2, # 'ч' + 25: 1, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 13: { # 'д' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 2, # 'б' + 10: 3, # 'в' + 19: 2, # 'г' + 13: 2, # 'д' + 2: 3, # 'е' + 24: 2, # 'ж' + 20: 2, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 2, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 2, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 3, # 'у' + 39: 1, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 3, # 'ц' + 22: 2, # 'ч' + 25: 2, # 'ш' + 29: 1, # 'щ' + 54: 2, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 3, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 2: { # 'е' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 3, # 'б' + 10: 3, # 'в' + 19: 3, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 3, # 'ж' + 20: 3, # 'з' + 4: 2, # 'и' + 23: 3, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 3, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 2, # 'у' + 39: 2, # 'Ñ„' + 26: 3, # 'Ñ…' + 28: 3, # 'ц' + 22: 3, # 'ч' + 25: 3, # 'ш' + 29: 3, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 24: { # 'ж' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 2, # 'б' + 10: 1, # 'в' + 19: 2, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 2, # 'ж' + 20: 1, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 2, # 'к' + 8: 2, # 'л' + 12: 1, # 'м' + 5: 3, # 'н' + 1: 2, # 'о' + 15: 1, # 'п' + 9: 2, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 1, # 'Ñ‚' + 14: 3, # 'у' + 39: 1, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 1, # 'ц' + 22: 2, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 2, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 20: { # 'з' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 
'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 3, # 'б' + 10: 3, # 'в' + 19: 3, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 2, # 'ж' + 20: 2, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 3, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 3, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 1, # 'ц' + 22: 2, # 'ч' + 25: 1, # 'ш' + 29: 0, # 'щ' + 54: 2, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 2, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 4: { # 'и' + 37: 1, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 1, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 3, # 'б' + 10: 3, # 'в' + 19: 3, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 3, # 'ж' + 20: 3, # 'з' + 4: 3, # 'и' + 23: 3, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 3, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 2, # 'у' + 39: 2, # 'Ñ„' + 26: 3, # 'Ñ…' + 28: 3, # 'ц' + 22: 3, # 'ч' + 25: 3, # 'ш' + 29: 3, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 3, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 23: { # 'й' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 1, # 'а' + 21: 1, # 'б' + 10: 1, # 'в' + 19: 2, # 'г' + 13: 3, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 2, # 'з' + 4: 1, # 'и' + 23: 0, # 'й' + 11: 2, # 'к' + 8: 2, # 'л' + 12: 2, # 'м' + 5: 3, # 'н' + 1: 2, # 'о' + 15: 1, # 'п' + 9: 2, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 1, # 'у' + 39: 2, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 2, # 'ц' + 22: 3, # 'ч' + 25: 2, # 'ш' + 29: 1, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 2, # 'Ñ' + }, + 11: { # 'к' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 1, # 'б' + 10: 3, # 'в' + 19: 1, # 'г' + 13: 1, # 'д' + 2: 3, # 'е' + 24: 2, # 'ж' + 20: 2, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 2, # 'к' + 8: 3, # 'л' + 12: 1, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 3, # 'у' + 39: 1, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 2, # 'ц' + 22: 1, # 'ч' + 25: 2, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 1, # 'Ñ‹' + 17: 1, # 
'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 8: { # 'л' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 2, # 'б' + 10: 2, # 'в' + 19: 3, # 'г' + 13: 2, # 'д' + 2: 3, # 'е' + 24: 3, # 'ж' + 20: 2, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 2, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 2, # 'п' + 9: 1, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 3, # 'у' + 39: 2, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 1, # 'ц' + 22: 3, # 'ч' + 25: 2, # 'ш' + 29: 1, # 'щ' + 54: 0, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 3, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 3, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 12: { # 'м' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 2, # 'б' + 10: 2, # 'в' + 19: 2, # 'г' + 13: 1, # 'д' + 2: 3, # 'е' + 24: 1, # 'ж' + 20: 1, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 2, # 'к' + 8: 3, # 'л' + 12: 2, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 2, # 'п' + 9: 2, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 3, # 'у' + 39: 2, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 2, # 'ц' + 22: 2, # 'ч' + 25: 1, # 'ш' + 29: 1, # 'щ' + 54: 0, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 2, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 5: { # 'н' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 2, # 'б' + 10: 2, # 'в' + 19: 3, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 2, # 'ж' + 20: 2, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 2, # 'л' + 12: 1, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 1, # 'п' + 9: 2, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 3, # 'у' + 39: 2, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 3, # 'ц' + 22: 3, # 'ч' + 25: 2, # 'ш' + 29: 2, # 'щ' + 54: 1, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 3, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 3, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 1: { # 'о' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 3, # 'б' + 10: 3, # 'в' + 19: 3, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 3, # 'ж' + 20: 3, # 'з' + 4: 3, 
# 'и' + 23: 3, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 3, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 2, # 'у' + 39: 2, # 'Ñ„' + 26: 3, # 'Ñ…' + 28: 2, # 'ц' + 22: 3, # 'ч' + 25: 3, # 'ш' + 29: 3, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 3, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 15: { # 'п' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 1, # 'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 3, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 2, # 'к' + 8: 3, # 'л' + 12: 1, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 2, # 'п' + 9: 3, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 3, # 'у' + 39: 1, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 2, # 'ц' + 22: 2, # 'ч' + 25: 1, # 'ш' + 29: 1, # 'щ' + 54: 0, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 2, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 9: { # 'Ñ€' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 2, # 'б' + 10: 3, # 'в' + 19: 3, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 3, # 'ж' + 20: 2, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 2, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 2, # 'п' + 9: 2, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 3, # 'у' + 39: 2, # 'Ñ„' + 26: 3, # 'Ñ…' + 28: 2, # 'ц' + 22: 2, # 'ч' + 25: 3, # 'ш' + 29: 2, # 'щ' + 54: 0, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 3, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 7: { # 'Ñ' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 1, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 2, # 'б' + 10: 3, # 'в' + 19: 2, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 2, # 'ж' + 20: 2, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 3, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 3, # 'у' + 39: 2, # 'Ñ„' + 26: 3, # 'Ñ…' + 28: 2, # 'ц' + 22: 3, # 'ч' + 25: 2, # 'ш' + 29: 1, # 'щ' + 54: 2, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 3, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 3, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 6: { # 'Ñ‚' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 
52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 2, # 'б' + 10: 3, # 'в' + 19: 2, # 'г' + 13: 2, # 'д' + 2: 3, # 'е' + 24: 1, # 'ж' + 20: 1, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 2, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 2, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 3, # 'у' + 39: 2, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 2, # 'ц' + 22: 2, # 'ч' + 25: 2, # 'ш' + 29: 2, # 'щ' + 54: 2, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 3, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 2, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 14: { # 'у' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 3, # 'б' + 10: 3, # 'в' + 19: 3, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 3, # 'ж' + 20: 3, # 'з' + 4: 2, # 'и' + 23: 2, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 2, # 'о' + 15: 3, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 1, # 'у' + 39: 2, # 'Ñ„' + 26: 3, # 'Ñ…' + 28: 2, # 'ц' + 22: 3, # 'ч' + 25: 3, # 'ш' + 29: 3, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 3, # 'ÑŽ' + 16: 2, # 'Ñ' + }, + 39: { # 'Ñ„' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 1, # 'б' + 10: 0, # 'в' + 19: 1, # 'г' + 13: 0, # 'д' + 2: 3, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 1, # 'к' + 8: 2, # 'л' + 12: 1, # 'м' + 5: 1, # 'н' + 1: 3, # 'о' + 15: 1, # 'п' + 9: 2, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 2, # 'у' + 39: 2, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 1, # 'ч' + 25: 1, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 2, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 2, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 26: { # 'Ñ…' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 0, # 'б' + 10: 3, # 'в' + 19: 1, # 'г' + 13: 1, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 1, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 1, # 'к' + 8: 2, # 'л' + 12: 2, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 1, # 'п' + 9: 3, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 2, # 'у' + 39: 1, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 1, # 'ц' + 22: 1, # 'ч' + 25: 2, # 'ш' + 29: 0, # 'щ' + 54: 1, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 0, # 'Ñ' 
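# Illustrative sketch, not part of the vendored chardet file: each
# SingleByteCharSetModel defined in these files bundles a char_to_order_map,
# a language model and a typical_positive_ratio (0.947368 for the Hungarian
# models above). A prober's confidence can be approximated by comparing the
# observed share of rating-3 bigrams against that typical ratio; this is a
# simplification of what chardet computes, not its exact formula:
def approx_confidence(raw, model):
    orders = letter_orders(raw, model.char_to_order_map)
    counts = rate_bigrams(orders, model.language_model)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return min((counts[3] / total) / model.typical_positive_ratio, 1.0)

# e.g. approx_confidence('szárnyú sárkány'.encode('iso-8859-2'),
#                        ISO_8859_2_HUNGARIAN_MODEL)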
+ }, + 28: { # 'ц' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 1, # 'б' + 10: 2, # 'в' + 19: 1, # 'г' + 13: 1, # 'д' + 2: 3, # 'е' + 24: 0, # 'ж' + 20: 1, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 2, # 'к' + 8: 1, # 'л' + 12: 1, # 'м' + 5: 1, # 'н' + 1: 3, # 'о' + 15: 0, # 'п' + 9: 1, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 1, # 'Ñ‚' + 14: 3, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 1, # 'ц' + 22: 0, # 'ч' + 25: 1, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 3, # 'Ñ‹' + 17: 1, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 22: { # 'ч' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 1, # 'б' + 10: 1, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 3, # 'е' + 24: 1, # 'ж' + 20: 0, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 2, # 'л' + 12: 1, # 'м' + 5: 3, # 'н' + 1: 2, # 'о' + 15: 0, # 'п' + 9: 2, # 'Ñ€' + 7: 1, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 3, # 'у' + 39: 1, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 1, # 'ч' + 25: 2, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 3, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 25: { # 'ш' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 1, # 'б' + 10: 2, # 'в' + 19: 1, # 'г' + 13: 0, # 'д' + 2: 3, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 2, # 'м' + 5: 3, # 'н' + 1: 3, # 'о' + 15: 2, # 'п' + 9: 2, # 'Ñ€' + 7: 1, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 3, # 'у' + 39: 2, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 1, # 'ц' + 22: 1, # 'ч' + 25: 1, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 3, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 29: { # 'щ' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 3, # 'а' + 21: 0, # 'б' + 10: 1, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 3, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 3, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 0, # 
'л' + 12: 1, # 'м' + 5: 2, # 'н' + 1: 1, # 'о' + 15: 0, # 'п' + 9: 2, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 2, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 2, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 0, # 'Ñ' + }, + 54: { # 'ÑŠ' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 0, # 'а' + 21: 0, # 'б' + 10: 0, # 'в' + 19: 0, # 'г' + 13: 0, # 'д' + 2: 2, # 'е' + 24: 0, # 'ж' + 20: 0, # 'з' + 4: 0, # 'и' + 23: 0, # 'й' + 11: 0, # 'к' + 8: 0, # 'л' + 12: 0, # 'м' + 5: 0, # 'н' + 1: 0, # 'о' + 15: 0, # 'п' + 9: 0, # 'Ñ€' + 7: 0, # 'Ñ' + 6: 0, # 'Ñ‚' + 14: 0, # 'у' + 39: 0, # 'Ñ„' + 26: 0, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 0, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 2, # 'Ñ' + }, + 18: { # 'Ñ‹' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 0, # 'а' + 21: 3, # 'б' + 10: 3, # 'в' + 19: 2, # 'г' + 13: 2, # 'д' + 2: 3, # 'е' + 24: 2, # 'ж' + 20: 2, # 'з' + 4: 2, # 'и' + 23: 3, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 1, # 'о' + 15: 3, # 'п' + 9: 3, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 1, # 'у' + 39: 0, # 'Ñ„' + 26: 3, # 'Ñ…' + 28: 2, # 'ц' + 22: 3, # 'ч' + 25: 3, # 'ш' + 29: 2, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 0, # 'ÑŽ' + 16: 2, # 'Ñ' + }, + 17: { # 'ÑŒ' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 0, # 'а' + 21: 2, # 'б' + 10: 2, # 'в' + 19: 2, # 'г' + 13: 2, # 'д' + 2: 3, # 'е' + 24: 1, # 'ж' + 20: 3, # 'з' + 4: 2, # 'и' + 23: 0, # 'й' + 11: 3, # 'к' + 8: 0, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 2, # 'о' + 15: 2, # 'п' + 9: 1, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 2, # 'Ñ‚' + 14: 0, # 'у' + 39: 2, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 2, # 'ц' + 22: 2, # 'ч' + 25: 3, # 'ш' + 29: 2, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 3, # 'ÑŽ' + 16: 3, # 'Ñ' + }, + 30: { # 'Ñ' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 1, # 'М' + 31: 1, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 1, # 'Р' + 32: 1, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 1, # 'Ф' + 55: 0, # 
'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 0, # 'а' + 21: 1, # 'б' + 10: 1, # 'в' + 19: 1, # 'г' + 13: 2, # 'д' + 2: 1, # 'е' + 24: 0, # 'ж' + 20: 1, # 'з' + 4: 0, # 'и' + 23: 2, # 'й' + 11: 2, # 'к' + 8: 2, # 'л' + 12: 2, # 'м' + 5: 2, # 'н' + 1: 0, # 'о' + 15: 2, # 'п' + 9: 2, # 'Ñ€' + 7: 2, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 1, # 'у' + 39: 2, # 'Ñ„' + 26: 1, # 'Ñ…' + 28: 0, # 'ц' + 22: 0, # 'ч' + 25: 1, # 'ш' + 29: 0, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 1, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 27: { # 'ÑŽ' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 2, # 'а' + 21: 3, # 'б' + 10: 1, # 'в' + 19: 2, # 'г' + 13: 3, # 'д' + 2: 1, # 'е' + 24: 2, # 'ж' + 20: 2, # 'з' + 4: 1, # 'и' + 23: 1, # 'й' + 11: 2, # 'к' + 8: 2, # 'л' + 12: 2, # 'м' + 5: 2, # 'н' + 1: 1, # 'о' + 15: 2, # 'п' + 9: 2, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 0, # 'у' + 39: 1, # 'Ñ„' + 26: 2, # 'Ñ…' + 28: 2, # 'ц' + 22: 2, # 'ч' + 25: 2, # 'ш' + 29: 3, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 1, # 'Ñ' + 27: 2, # 'ÑŽ' + 16: 1, # 'Ñ' + }, + 16: { # 'Ñ' + 37: 0, # 'Ð' + 44: 0, # 'Б' + 33: 0, # 'Ð’' + 46: 0, # 'Г' + 41: 0, # 'Д' + 48: 0, # 'Е' + 56: 0, # 'Ж' + 51: 0, # 'З' + 42: 0, # 'И' + 60: 0, # 'Й' + 36: 0, # 'К' + 49: 0, # 'Л' + 38: 0, # 'М' + 31: 0, # 'Ð' + 34: 0, # 'О' + 35: 0, # 'П' + 45: 0, # 'Р' + 32: 0, # 'С' + 40: 0, # 'Т' + 52: 0, # 'У' + 53: 0, # 'Ф' + 55: 0, # 'Ð¥' + 58: 0, # 'Ц' + 50: 0, # 'Ч' + 57: 0, # 'Ш' + 63: 0, # 'Щ' + 62: 0, # 'Ы' + 61: 0, # 'Ь' + 47: 0, # 'Э' + 59: 0, # 'Ю' + 43: 0, # 'Я' + 3: 0, # 'а' + 21: 2, # 'б' + 10: 3, # 'в' + 19: 2, # 'г' + 13: 3, # 'д' + 2: 3, # 'е' + 24: 3, # 'ж' + 20: 3, # 'з' + 4: 2, # 'и' + 23: 2, # 'й' + 11: 3, # 'к' + 8: 3, # 'л' + 12: 3, # 'м' + 5: 3, # 'н' + 1: 0, # 'о' + 15: 2, # 'п' + 9: 2, # 'Ñ€' + 7: 3, # 'Ñ' + 6: 3, # 'Ñ‚' + 14: 1, # 'у' + 39: 1, # 'Ñ„' + 26: 3, # 'Ñ…' + 28: 2, # 'ц' + 22: 2, # 'ч' + 25: 2, # 'ш' + 29: 3, # 'щ' + 54: 0, # 'ÑŠ' + 18: 0, # 'Ñ‹' + 17: 0, # 'ÑŒ' + 30: 0, # 'Ñ' + 27: 2, # 'ÑŽ' + 16: 2, # 'Ñ' + }, +} + +# 255: Undefined characters that did not exist in training text +# 254: Carriage/Return +# 253: symbol (punctuation) that does not belong to word +# 252: 0 - 9 +# 251: Control characters + +# Character Mapping Table(s): +IBM866_RUSSIAN_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' 
+ 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' + 64: 253, # '@' + 65: 142, # 'A' + 66: 143, # 'B' + 67: 144, # 'C' + 68: 145, # 'D' + 69: 146, # 'E' + 70: 147, # 'F' + 71: 148, # 'G' + 72: 149, # 'H' + 73: 150, # 'I' + 74: 151, # 'J' + 75: 152, # 'K' + 76: 74, # 'L' + 77: 153, # 'M' + 78: 75, # 'N' + 79: 154, # 'O' + 80: 155, # 'P' + 81: 156, # 'Q' + 82: 157, # 'R' + 83: 158, # 'S' + 84: 159, # 'T' + 85: 160, # 'U' + 86: 161, # 'V' + 87: 162, # 'W' + 88: 163, # 'X' + 89: 164, # 'Y' + 90: 165, # 'Z' + 91: 253, # '[' + 92: 253, # '\\' + 93: 253, # ']' + 94: 253, # '^' + 95: 253, # '_' + 96: 253, # '`' + 97: 71, # 'a' + 98: 172, # 'b' + 99: 66, # 'c' + 100: 173, # 'd' + 101: 65, # 'e' + 102: 174, # 'f' + 103: 76, # 'g' + 104: 175, # 'h' + 105: 64, # 'i' + 106: 176, # 'j' + 107: 177, # 'k' + 108: 77, # 'l' + 109: 72, # 'm' + 110: 178, # 'n' + 111: 69, # 'o' + 112: 67, # 'p' + 113: 179, # 'q' + 114: 78, # 'r' + 115: 73, # 's' + 116: 180, # 't' + 117: 181, # 'u' + 118: 79, # 'v' + 119: 182, # 'w' + 120: 183, # 'x' + 121: 184, # 'y' + 122: 185, # 'z' + 123: 253, # '{' + 124: 253, # '|' + 125: 253, # '}' + 126: 253, # '~' + 127: 253, # '\x7f' + 128: 37, # 'Ð' + 129: 44, # 'Б' + 130: 33, # 'Ð’' + 131: 46, # 'Г' + 132: 41, # 'Д' + 133: 48, # 'Е' + 134: 56, # 'Ж' + 135: 51, # 'З' + 136: 42, # 'И' + 137: 60, # 'Й' + 138: 36, # 'К' + 139: 49, # 'Л' + 140: 38, # 'М' + 141: 31, # 'Ð' + 142: 34, # 'О' + 143: 35, # 'П' + 144: 45, # 'Р' + 145: 32, # 'С' + 146: 40, # 'Т' + 147: 52, # 'У' + 148: 53, # 'Ф' + 149: 55, # 'Ð¥' + 150: 58, # 'Ц' + 151: 50, # 'Ч' + 152: 57, # 'Ш' + 153: 63, # 'Щ' + 154: 70, # 'Ъ' + 155: 62, # 'Ы' + 156: 61, # 'Ь' + 157: 47, # 'Э' + 158: 59, # 'Ю' + 159: 43, # 'Я' + 160: 3, # 'а' + 161: 21, # 'б' + 162: 10, # 'в' + 163: 19, # 'г' + 164: 13, # 'д' + 165: 2, # 'е' + 166: 24, # 'ж' + 167: 20, # 'з' + 168: 4, # 'и' + 169: 23, # 'й' + 170: 11, # 'к' + 171: 8, # 'л' + 172: 12, # 'м' + 173: 5, # 'н' + 174: 1, # 'о' + 175: 15, # 'п' + 176: 191, # 'â–‘' + 177: 192, # 'â–’' + 178: 193, # 'â–“' + 179: 194, # '│' + 180: 195, # '┤' + 181: 196, # 'â•¡' + 182: 197, # 'â•¢' + 183: 198, # 'â•–' + 184: 199, # 'â••' + 185: 200, # 'â•£' + 186: 201, # 'â•‘' + 187: 202, # 'â•—' + 188: 203, # 'â•' + 189: 204, # '╜' + 190: 205, # 'â•›' + 191: 206, # 'â”' + 192: 207, # 'â””' + 193: 208, # 'â”´' + 194: 209, # '┬' + 195: 210, # '├' + 196: 211, # '─' + 197: 212, # '┼' + 198: 213, # '╞' + 199: 214, # '╟' + 200: 215, # '╚' + 201: 216, # 'â•”' + 202: 217, # 'â•©' + 203: 218, # '╦' + 204: 219, # 'â• ' + 205: 220, # 'â•' + 206: 221, # '╬' + 207: 222, # 'â•§' + 208: 223, # '╨' + 209: 224, # '╤' + 210: 225, # 'â•¥' + 211: 226, # 'â•™' + 212: 227, # '╘' + 213: 228, # 'â•’' + 214: 229, # 'â•“' + 215: 230, # 'â•«' + 216: 231, # '╪' + 217: 232, # '┘' + 218: 233, # '┌' + 219: 234, # 'â–ˆ' + 220: 235, # 'â–„' + 221: 236, # 'â–Œ' + 222: 237, # 'â–' + 223: 238, # 'â–€' + 224: 9, # 'Ñ€' + 225: 7, # 'Ñ' + 226: 6, # 'Ñ‚' + 227: 14, # 'у' + 228: 39, # 'Ñ„' + 229: 26, # 'Ñ…' + 230: 28, # 'ц' + 231: 22, # 'ч' + 232: 25, # 'ш' + 233: 29, # 'щ' + 234: 54, # 'ÑŠ' + 235: 18, # 'Ñ‹' 
+ 236: 17, # 'ÑŒ' + 237: 30, # 'Ñ' + 238: 27, # 'ÑŽ' + 239: 16, # 'Ñ' + 240: 239, # 'Ð' + 241: 68, # 'Ñ‘' + 242: 240, # 'Є' + 243: 241, # 'Ñ”' + 244: 242, # 'Ї' + 245: 243, # 'Ñ—' + 246: 244, # 'ÐŽ' + 247: 245, # 'Ñž' + 248: 246, # '°' + 249: 247, # '∙' + 250: 248, # '·' + 251: 249, # '√' + 252: 250, # 'â„–' + 253: 251, # '¤' + 254: 252, # 'â– ' + 255: 255, # '\xa0' +} + +IBM866_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='IBM866', + language='Russian', + char_to_order_map=IBM866_RUSSIAN_CHAR_TO_ORDER, + language_model=RUSSIAN_LANG_MODEL, + typical_positive_ratio=0.976601, + keep_ascii_letters=False, + alphabet='ÐÐБВГДЕЖЗИЙКЛМÐОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрÑтуфхцчшщъыьÑÑŽÑÑ‘') + +WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' + 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' 
+ 64: 253, # '@' + 65: 142, # 'A' + 66: 143, # 'B' + 67: 144, # 'C' + 68: 145, # 'D' + 69: 146, # 'E' + 70: 147, # 'F' + 71: 148, # 'G' + 72: 149, # 'H' + 73: 150, # 'I' + 74: 151, # 'J' + 75: 152, # 'K' + 76: 74, # 'L' + 77: 153, # 'M' + 78: 75, # 'N' + 79: 154, # 'O' + 80: 155, # 'P' + 81: 156, # 'Q' + 82: 157, # 'R' + 83: 158, # 'S' + 84: 159, # 'T' + 85: 160, # 'U' + 86: 161, # 'V' + 87: 162, # 'W' + 88: 163, # 'X' + 89: 164, # 'Y' + 90: 165, # 'Z' + 91: 253, # '[' + 92: 253, # '\\' + 93: 253, # ']' + 94: 253, # '^' + 95: 253, # '_' + 96: 253, # '`' + 97: 71, # 'a' + 98: 172, # 'b' + 99: 66, # 'c' + 100: 173, # 'd' + 101: 65, # 'e' + 102: 174, # 'f' + 103: 76, # 'g' + 104: 175, # 'h' + 105: 64, # 'i' + 106: 176, # 'j' + 107: 177, # 'k' + 108: 77, # 'l' + 109: 72, # 'm' + 110: 178, # 'n' + 111: 69, # 'o' + 112: 67, # 'p' + 113: 179, # 'q' + 114: 78, # 'r' + 115: 73, # 's' + 116: 180, # 't' + 117: 181, # 'u' + 118: 79, # 'v' + 119: 182, # 'w' + 120: 183, # 'x' + 121: 184, # 'y' + 122: 185, # 'z' + 123: 253, # '{' + 124: 253, # '|' + 125: 253, # '}' + 126: 253, # '~' + 127: 253, # '\x7f' + 128: 191, # 'Ђ' + 129: 192, # 'Ѓ' + 130: 193, # '‚' + 131: 194, # 'Ñ“' + 132: 195, # '„' + 133: 196, # '…' + 134: 197, # '†' + 135: 198, # '‡' + 136: 199, # '€' + 137: 200, # '‰' + 138: 201, # 'Љ' + 139: 202, # '‹' + 140: 203, # 'Њ' + 141: 204, # 'ÐŒ' + 142: 205, # 'Ћ' + 143: 206, # 'Ð' + 144: 207, # 'Ñ’' + 145: 208, # '‘' + 146: 209, # '’' + 147: 210, # '“' + 148: 211, # 'â€' + 149: 212, # '•' + 150: 213, # '–' + 151: 214, # '—' + 152: 215, # None + 153: 216, # 'â„¢' + 154: 217, # 'Ñ™' + 155: 218, # '›' + 156: 219, # 'Ñš' + 157: 220, # 'Ñœ' + 158: 221, # 'Ñ›' + 159: 222, # 'ÑŸ' + 160: 223, # '\xa0' + 161: 224, # 'ÐŽ' + 162: 225, # 'Ñž' + 163: 226, # 'Ј' + 164: 227, # '¤' + 165: 228, # 'Ò' + 166: 229, # '¦' + 167: 230, # '§' + 168: 231, # 'Ð' + 169: 232, # '©' + 170: 233, # 'Є' + 171: 234, # '«' + 172: 235, # '¬' + 173: 236, # '\xad' + 174: 237, # '®' + 175: 238, # 'Ї' + 176: 239, # '°' + 177: 240, # '±' + 178: 241, # 'І' + 179: 242, # 'Ñ–' + 180: 243, # 'Ò‘' + 181: 244, # 'µ' + 182: 245, # '¶' + 183: 246, # '·' + 184: 68, # 'Ñ‘' + 185: 247, # 'â„–' + 186: 248, # 'Ñ”' + 187: 249, # '»' + 188: 250, # 'ј' + 189: 251, # 'Ð…' + 190: 252, # 'Ñ•' + 191: 253, # 'Ñ—' + 192: 37, # 'Ð' + 193: 44, # 'Б' + 194: 33, # 'Ð’' + 195: 46, # 'Г' + 196: 41, # 'Д' + 197: 48, # 'Е' + 198: 56, # 'Ж' + 199: 51, # 'З' + 200: 42, # 'И' + 201: 60, # 'Й' + 202: 36, # 'К' + 203: 49, # 'Л' + 204: 38, # 'М' + 205: 31, # 'Ð' + 206: 34, # 'О' + 207: 35, # 'П' + 208: 45, # 'Р' + 209: 32, # 'С' + 210: 40, # 'Т' + 211: 52, # 'У' + 212: 53, # 'Ф' + 213: 55, # 'Ð¥' + 214: 58, # 'Ц' + 215: 50, # 'Ч' + 216: 57, # 'Ш' + 217: 63, # 'Щ' + 218: 70, # 'Ъ' + 219: 62, # 'Ы' + 220: 61, # 'Ь' + 221: 47, # 'Э' + 222: 59, # 'Ю' + 223: 43, # 'Я' + 224: 3, # 'а' + 225: 21, # 'б' + 226: 10, # 'в' + 227: 19, # 'г' + 228: 13, # 'д' + 229: 2, # 'е' + 230: 24, # 'ж' + 231: 20, # 'з' + 232: 4, # 'и' + 233: 23, # 'й' + 234: 11, # 'к' + 235: 8, # 'л' + 236: 12, # 'м' + 237: 5, # 'н' + 238: 1, # 'о' + 239: 15, # 'п' + 240: 9, # 'Ñ€' + 241: 7, # 'Ñ' + 242: 6, # 'Ñ‚' + 243: 14, # 'у' + 244: 39, # 'Ñ„' + 245: 26, # 'Ñ…' + 246: 28, # 'ц' + 247: 22, # 'ч' + 248: 25, # 'ш' + 249: 29, # 'щ' + 250: 54, # 'ÑŠ' + 251: 18, # 'Ñ‹' + 252: 17, # 'ÑŒ' + 253: 30, # 'Ñ' + 254: 27, # 'ÑŽ' + 255: 16, # 'Ñ' +} + +WINDOWS_1251_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='windows-1251', + language='Russian', + char_to_order_map=WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER, + 
language_model=RUSSIAN_LANG_MODEL, + typical_positive_ratio=0.976601, + keep_ascii_letters=False, + alphabet='ÐÐБВГДЕЖЗИЙКЛМÐОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрÑтуфхцчшщъыьÑÑŽÑÑ‘') + +IBM855_RUSSIAN_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' + 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' + 64: 253, # '@' + 65: 142, # 'A' + 66: 143, # 'B' + 67: 144, # 'C' + 68: 145, # 'D' + 69: 146, # 'E' + 70: 147, # 'F' + 71: 148, # 'G' + 72: 149, # 'H' + 73: 150, # 'I' + 74: 151, # 'J' + 75: 152, # 'K' + 76: 74, # 'L' + 77: 153, # 'M' + 78: 75, # 'N' + 79: 154, # 'O' + 80: 155, # 'P' + 81: 156, # 'Q' + 82: 157, # 'R' + 83: 158, # 'S' + 84: 159, # 'T' + 85: 160, # 'U' + 86: 161, # 'V' + 87: 162, # 'W' + 88: 163, # 'X' + 89: 164, # 'Y' + 90: 165, # 'Z' + 91: 253, # '[' + 92: 253, # '\\' + 93: 253, # ']' + 94: 253, # '^' + 95: 253, # '_' + 96: 253, # '`' + 97: 71, # 'a' + 98: 172, # 'b' + 99: 66, # 'c' + 100: 173, # 'd' + 101: 65, # 'e' + 102: 174, # 'f' + 103: 76, # 'g' + 104: 175, # 'h' + 105: 64, # 'i' + 106: 176, # 'j' + 107: 177, # 'k' + 108: 77, # 'l' + 109: 72, # 'm' + 110: 178, # 'n' + 111: 69, # 'o' + 112: 67, # 'p' + 113: 179, # 'q' + 114: 78, # 'r' + 115: 73, # 's' + 116: 180, # 't' + 117: 181, # 'u' + 118: 79, # 'v' + 119: 182, # 'w' + 120: 183, # 'x' + 121: 184, # 'y' + 122: 185, # 'z' + 123: 253, # '{' + 124: 253, # '|' + 125: 253, # '}' + 126: 253, # '~' + 127: 253, # '\x7f' + 128: 191, # 'Ñ’' + 129: 192, # 'Ђ' + 130: 193, # 'Ñ“' + 131: 194, # 'Ѓ' + 132: 68, # 'Ñ‘' + 133: 195, # 'Ð' + 134: 196, # 'Ñ”' + 135: 197, # 'Є' + 136: 198, # 'Ñ•' + 137: 199, # 'Ð…' + 138: 200, # 'Ñ–' + 139: 201, # 'І' + 140: 202, # 'Ñ—' + 141: 203, # 'Ї' + 142: 204, # 'ј' + 143: 205, # 'Ј' + 144: 206, # 'Ñ™' + 145: 207, # 'Љ' + 146: 208, # 'Ñš' + 147: 209, # 'Њ' + 148: 210, # 'Ñ›' + 149: 211, # 'Ћ' + 150: 212, # 'Ñœ' + 151: 213, # 'ÐŒ' + 152: 214, # 'Ñž' + 153: 215, # 'ÐŽ' + 154: 216, # 'ÑŸ' + 155: 217, # 'Ð' + 156: 27, # 'ÑŽ' + 157: 59, # 'Ю' + 158: 54, # 'ÑŠ' + 159: 70, # 'Ъ' + 160: 3, # 'а' + 161: 37, # 'Ð' + 162: 21, # 'б' + 163: 44, # 'Б' + 164: 28, # 'ц' + 165: 58, # 'Ц' + 166: 13, # 'д' + 167: 41, # 'Д' + 168: 2, # 'е' + 169: 48, # 'Е' + 170: 39, # 'Ñ„' + 171: 53, # 'Ф' + 172: 19, # 'г' + 173: 46, # 'Г' + 174: 218, # '«' + 175: 219, # '»' + 176: 220, # 'â–‘' + 177: 221, # 'â–’' + 178: 222, # 'â–“' + 179: 223, # '│' + 180: 224, # '┤' + 181: 26, # 'Ñ…' + 182: 55, # 'Ð¥' + 183: 4, # 'и' + 184: 42, # 'И' + 185: 225, # 'â•£' + 186: 226, # 
'â•‘' + 187: 227, # 'â•—' + 188: 228, # 'â•' + 189: 23, # 'й' + 190: 60, # 'Й' + 191: 229, # 'â”' + 192: 230, # 'â””' + 193: 231, # 'â”´' + 194: 232, # '┬' + 195: 233, # '├' + 196: 234, # '─' + 197: 235, # '┼' + 198: 11, # 'к' + 199: 36, # 'К' + 200: 236, # '╚' + 201: 237, # 'â•”' + 202: 238, # 'â•©' + 203: 239, # '╦' + 204: 240, # 'â• ' + 205: 241, # 'â•' + 206: 242, # '╬' + 207: 243, # '¤' + 208: 8, # 'л' + 209: 49, # 'Л' + 210: 12, # 'м' + 211: 38, # 'М' + 212: 5, # 'н' + 213: 31, # 'Ð' + 214: 1, # 'о' + 215: 34, # 'О' + 216: 15, # 'п' + 217: 244, # '┘' + 218: 245, # '┌' + 219: 246, # 'â–ˆ' + 220: 247, # 'â–„' + 221: 35, # 'П' + 222: 16, # 'Ñ' + 223: 248, # 'â–€' + 224: 43, # 'Я' + 225: 9, # 'Ñ€' + 226: 45, # 'Р' + 227: 7, # 'Ñ' + 228: 32, # 'С' + 229: 6, # 'Ñ‚' + 230: 40, # 'Т' + 231: 14, # 'у' + 232: 52, # 'У' + 233: 24, # 'ж' + 234: 56, # 'Ж' + 235: 10, # 'в' + 236: 33, # 'Ð’' + 237: 17, # 'ÑŒ' + 238: 61, # 'Ь' + 239: 249, # 'â„–' + 240: 250, # '\xad' + 241: 18, # 'Ñ‹' + 242: 62, # 'Ы' + 243: 20, # 'з' + 244: 51, # 'З' + 245: 25, # 'ш' + 246: 57, # 'Ш' + 247: 30, # 'Ñ' + 248: 47, # 'Э' + 249: 29, # 'щ' + 250: 63, # 'Щ' + 251: 22, # 'ч' + 252: 50, # 'Ч' + 253: 251, # '§' + 254: 252, # 'â– ' + 255: 255, # '\xa0' +} + +IBM855_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='IBM855', + language='Russian', + char_to_order_map=IBM855_RUSSIAN_CHAR_TO_ORDER, + language_model=RUSSIAN_LANG_MODEL, + typical_positive_ratio=0.976601, + keep_ascii_letters=False, + alphabet='ÐÐБВГДЕЖЗИЙКЛМÐОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрÑтуфхцчшщъыьÑÑŽÑÑ‘') + +KOI8_R_RUSSIAN_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' + 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' 
+ 64: 253, # '@' + 65: 142, # 'A' + 66: 143, # 'B' + 67: 144, # 'C' + 68: 145, # 'D' + 69: 146, # 'E' + 70: 147, # 'F' + 71: 148, # 'G' + 72: 149, # 'H' + 73: 150, # 'I' + 74: 151, # 'J' + 75: 152, # 'K' + 76: 74, # 'L' + 77: 153, # 'M' + 78: 75, # 'N' + 79: 154, # 'O' + 80: 155, # 'P' + 81: 156, # 'Q' + 82: 157, # 'R' + 83: 158, # 'S' + 84: 159, # 'T' + 85: 160, # 'U' + 86: 161, # 'V' + 87: 162, # 'W' + 88: 163, # 'X' + 89: 164, # 'Y' + 90: 165, # 'Z' + 91: 253, # '[' + 92: 253, # '\\' + 93: 253, # ']' + 94: 253, # '^' + 95: 253, # '_' + 96: 253, # '`' + 97: 71, # 'a' + 98: 172, # 'b' + 99: 66, # 'c' + 100: 173, # 'd' + 101: 65, # 'e' + 102: 174, # 'f' + 103: 76, # 'g' + 104: 175, # 'h' + 105: 64, # 'i' + 106: 176, # 'j' + 107: 177, # 'k' + 108: 77, # 'l' + 109: 72, # 'm' + 110: 178, # 'n' + 111: 69, # 'o' + 112: 67, # 'p' + 113: 179, # 'q' + 114: 78, # 'r' + 115: 73, # 's' + 116: 180, # 't' + 117: 181, # 'u' + 118: 79, # 'v' + 119: 182, # 'w' + 120: 183, # 'x' + 121: 184, # 'y' + 122: 185, # 'z' + 123: 253, # '{' + 124: 253, # '|' + 125: 253, # '}' + 126: 253, # '~' + 127: 253, # '\x7f' + 128: 191, # '─' + 129: 192, # '│' + 130: 193, # '┌' + 131: 194, # 'â”' + 132: 195, # 'â””' + 133: 196, # '┘' + 134: 197, # '├' + 135: 198, # '┤' + 136: 199, # '┬' + 137: 200, # 'â”´' + 138: 201, # '┼' + 139: 202, # 'â–€' + 140: 203, # 'â–„' + 141: 204, # 'â–ˆ' + 142: 205, # 'â–Œ' + 143: 206, # 'â–' + 144: 207, # 'â–‘' + 145: 208, # 'â–’' + 146: 209, # 'â–“' + 147: 210, # '⌠' + 148: 211, # 'â– ' + 149: 212, # '∙' + 150: 213, # '√' + 151: 214, # '≈' + 152: 215, # '≤' + 153: 216, # '≥' + 154: 217, # '\xa0' + 155: 218, # '⌡' + 156: 219, # '°' + 157: 220, # '²' + 158: 221, # '·' + 159: 222, # '÷' + 160: 223, # 'â•' + 161: 224, # 'â•‘' + 162: 225, # 'â•’' + 163: 68, # 'Ñ‘' + 164: 226, # 'â•“' + 165: 227, # 'â•”' + 166: 228, # 'â••' + 167: 229, # 'â•–' + 168: 230, # 'â•—' + 169: 231, # '╘' + 170: 232, # 'â•™' + 171: 233, # '╚' + 172: 234, # 'â•›' + 173: 235, # '╜' + 174: 236, # 'â•' + 175: 237, # '╞' + 176: 238, # '╟' + 177: 239, # 'â• ' + 178: 240, # 'â•¡' + 179: 241, # 'Ð' + 180: 242, # 'â•¢' + 181: 243, # 'â•£' + 182: 244, # '╤' + 183: 245, # 'â•¥' + 184: 246, # '╦' + 185: 247, # 'â•§' + 186: 248, # '╨' + 187: 249, # 'â•©' + 188: 250, # '╪' + 189: 251, # 'â•«' + 190: 252, # '╬' + 191: 253, # '©' + 192: 27, # 'ÑŽ' + 193: 3, # 'а' + 194: 21, # 'б' + 195: 28, # 'ц' + 196: 13, # 'д' + 197: 2, # 'е' + 198: 39, # 'Ñ„' + 199: 19, # 'г' + 200: 26, # 'Ñ…' + 201: 4, # 'и' + 202: 23, # 'й' + 203: 11, # 'к' + 204: 8, # 'л' + 205: 12, # 'м' + 206: 5, # 'н' + 207: 1, # 'о' + 208: 15, # 'п' + 209: 16, # 'Ñ' + 210: 9, # 'Ñ€' + 211: 7, # 'Ñ' + 212: 6, # 'Ñ‚' + 213: 14, # 'у' + 214: 24, # 'ж' + 215: 10, # 'в' + 216: 17, # 'ÑŒ' + 217: 18, # 'Ñ‹' + 218: 20, # 'з' + 219: 25, # 'ш' + 220: 30, # 'Ñ' + 221: 29, # 'щ' + 222: 22, # 'ч' + 223: 54, # 'ÑŠ' + 224: 59, # 'Ю' + 225: 37, # 'Ð' + 226: 44, # 'Б' + 227: 58, # 'Ц' + 228: 41, # 'Д' + 229: 48, # 'Е' + 230: 53, # 'Ф' + 231: 46, # 'Г' + 232: 55, # 'Ð¥' + 233: 42, # 'И' + 234: 60, # 'Й' + 235: 36, # 'К' + 236: 49, # 'Л' + 237: 38, # 'М' + 238: 31, # 'Ð' + 239: 34, # 'О' + 240: 35, # 'П' + 241: 43, # 'Я' + 242: 45, # 'Р' + 243: 32, # 'С' + 244: 40, # 'Т' + 245: 52, # 'У' + 246: 56, # 'Ж' + 247: 33, # 'Ð’' + 248: 61, # 'Ь' + 249: 62, # 'Ы' + 250: 51, # 'З' + 251: 57, # 'Ш' + 252: 47, # 'Э' + 253: 63, # 'Щ' + 254: 50, # 'Ч' + 255: 70, # 'Ъ' +} + +KOI8_R_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='KOI8-R', + language='Russian', + char_to_order_map=KOI8_R_RUSSIAN_CHAR_TO_ORDER, 
+ language_model=RUSSIAN_LANG_MODEL, + typical_positive_ratio=0.976601, + keep_ascii_letters=False, + alphabet='ÐÐБВГДЕЖЗИЙКЛМÐОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрÑтуфхцчшщъыьÑÑŽÑÑ‘') + +MACCYRILLIC_RUSSIAN_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' + 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' + 64: 253, # '@' + 65: 142, # 'A' + 66: 143, # 'B' + 67: 144, # 'C' + 68: 145, # 'D' + 69: 146, # 'E' + 70: 147, # 'F' + 71: 148, # 'G' + 72: 149, # 'H' + 73: 150, # 'I' + 74: 151, # 'J' + 75: 152, # 'K' + 76: 74, # 'L' + 77: 153, # 'M' + 78: 75, # 'N' + 79: 154, # 'O' + 80: 155, # 'P' + 81: 156, # 'Q' + 82: 157, # 'R' + 83: 158, # 'S' + 84: 159, # 'T' + 85: 160, # 'U' + 86: 161, # 'V' + 87: 162, # 'W' + 88: 163, # 'X' + 89: 164, # 'Y' + 90: 165, # 'Z' + 91: 253, # '[' + 92: 253, # '\\' + 93: 253, # ']' + 94: 253, # '^' + 95: 253, # '_' + 96: 253, # '`' + 97: 71, # 'a' + 98: 172, # 'b' + 99: 66, # 'c' + 100: 173, # 'd' + 101: 65, # 'e' + 102: 174, # 'f' + 103: 76, # 'g' + 104: 175, # 'h' + 105: 64, # 'i' + 106: 176, # 'j' + 107: 177, # 'k' + 108: 77, # 'l' + 109: 72, # 'm' + 110: 178, # 'n' + 111: 69, # 'o' + 112: 67, # 'p' + 113: 179, # 'q' + 114: 78, # 'r' + 115: 73, # 's' + 116: 180, # 't' + 117: 181, # 'u' + 118: 79, # 'v' + 119: 182, # 'w' + 120: 183, # 'x' + 121: 184, # 'y' + 122: 185, # 'z' + 123: 253, # '{' + 124: 253, # '|' + 125: 253, # '}' + 126: 253, # '~' + 127: 253, # '\x7f' + 128: 37, # 'Ð' + 129: 44, # 'Б' + 130: 33, # 'Ð’' + 131: 46, # 'Г' + 132: 41, # 'Д' + 133: 48, # 'Е' + 134: 56, # 'Ж' + 135: 51, # 'З' + 136: 42, # 'И' + 137: 60, # 'Й' + 138: 36, # 'К' + 139: 49, # 'Л' + 140: 38, # 'М' + 141: 31, # 'Ð' + 142: 34, # 'О' + 143: 35, # 'П' + 144: 45, # 'Р' + 145: 32, # 'С' + 146: 40, # 'Т' + 147: 52, # 'У' + 148: 53, # 'Ф' + 149: 55, # 'Ð¥' + 150: 58, # 'Ц' + 151: 50, # 'Ч' + 152: 57, # 'Ш' + 153: 63, # 'Щ' + 154: 70, # 'Ъ' + 155: 62, # 'Ы' + 156: 61, # 'Ь' + 157: 47, # 'Э' + 158: 59, # 'Ю' + 159: 43, # 'Я' + 160: 191, # '†' + 161: 192, # '°' + 162: 193, # 'Ò' + 163: 194, # '£' + 164: 195, # '§' + 165: 196, # '•' + 166: 197, # '¶' + 167: 198, # 'І' + 168: 199, # '®' + 169: 200, # '©' + 170: 201, # 'â„¢' + 171: 202, # 'Ђ' + 172: 203, # 'Ñ’' + 173: 204, # '≠' + 174: 205, # 'Ѓ' + 175: 206, # 'Ñ“' + 176: 207, # '∞' + 177: 208, # '±' + 178: 209, # '≤' + 179: 210, # '≥' + 180: 211, # 'Ñ–' + 181: 212, # 'µ' + 182: 213, # 'Ò‘' + 183: 214, # 'Ј' + 184: 215, # 'Є' + 185: 216, # 'Ñ”' + 186: 217, # 'Ї' + 187: 218, # 
'Ñ—' + 188: 219, # 'Љ' + 189: 220, # 'Ñ™' + 190: 221, # 'Њ' + 191: 222, # 'Ñš' + 192: 223, # 'ј' + 193: 224, # 'Ð…' + 194: 225, # '¬' + 195: 226, # '√' + 196: 227, # 'Æ’' + 197: 228, # '≈' + 198: 229, # '∆' + 199: 230, # '«' + 200: 231, # '»' + 201: 232, # '…' + 202: 233, # '\xa0' + 203: 234, # 'Ћ' + 204: 235, # 'Ñ›' + 205: 236, # 'ÐŒ' + 206: 237, # 'Ñœ' + 207: 238, # 'Ñ•' + 208: 239, # '–' + 209: 240, # '—' + 210: 241, # '“' + 211: 242, # 'â€' + 212: 243, # '‘' + 213: 244, # '’' + 214: 245, # '÷' + 215: 246, # '„' + 216: 247, # 'ÐŽ' + 217: 248, # 'Ñž' + 218: 249, # 'Ð' + 219: 250, # 'ÑŸ' + 220: 251, # 'â„–' + 221: 252, # 'Ð' + 222: 68, # 'Ñ‘' + 223: 16, # 'Ñ' + 224: 3, # 'а' + 225: 21, # 'б' + 226: 10, # 'в' + 227: 19, # 'г' + 228: 13, # 'д' + 229: 2, # 'е' + 230: 24, # 'ж' + 231: 20, # 'з' + 232: 4, # 'и' + 233: 23, # 'й' + 234: 11, # 'к' + 235: 8, # 'л' + 236: 12, # 'м' + 237: 5, # 'н' + 238: 1, # 'о' + 239: 15, # 'п' + 240: 9, # 'Ñ€' + 241: 7, # 'Ñ' + 242: 6, # 'Ñ‚' + 243: 14, # 'у' + 244: 39, # 'Ñ„' + 245: 26, # 'Ñ…' + 246: 28, # 'ц' + 247: 22, # 'ч' + 248: 25, # 'ш' + 249: 29, # 'щ' + 250: 54, # 'ÑŠ' + 251: 18, # 'Ñ‹' + 252: 17, # 'ÑŒ' + 253: 30, # 'Ñ' + 254: 27, # 'ÑŽ' + 255: 255, # '€' +} + +MACCYRILLIC_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='MacCyrillic', + language='Russian', + char_to_order_map=MACCYRILLIC_RUSSIAN_CHAR_TO_ORDER, + language_model=RUSSIAN_LANG_MODEL, + typical_positive_ratio=0.976601, + keep_ascii_letters=False, + alphabet='ÐÐБВГДЕЖЗИЙКЛМÐОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрÑтуфхцчшщъыьÑÑŽÑÑ‘') + +ISO_8859_5_RUSSIAN_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' + 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' 
+ 64: 253, # '@' + 65: 142, # 'A' + 66: 143, # 'B' + 67: 144, # 'C' + 68: 145, # 'D' + 69: 146, # 'E' + 70: 147, # 'F' + 71: 148, # 'G' + 72: 149, # 'H' + 73: 150, # 'I' + 74: 151, # 'J' + 75: 152, # 'K' + 76: 74, # 'L' + 77: 153, # 'M' + 78: 75, # 'N' + 79: 154, # 'O' + 80: 155, # 'P' + 81: 156, # 'Q' + 82: 157, # 'R' + 83: 158, # 'S' + 84: 159, # 'T' + 85: 160, # 'U' + 86: 161, # 'V' + 87: 162, # 'W' + 88: 163, # 'X' + 89: 164, # 'Y' + 90: 165, # 'Z' + 91: 253, # '[' + 92: 253, # '\\' + 93: 253, # ']' + 94: 253, # '^' + 95: 253, # '_' + 96: 253, # '`' + 97: 71, # 'a' + 98: 172, # 'b' + 99: 66, # 'c' + 100: 173, # 'd' + 101: 65, # 'e' + 102: 174, # 'f' + 103: 76, # 'g' + 104: 175, # 'h' + 105: 64, # 'i' + 106: 176, # 'j' + 107: 177, # 'k' + 108: 77, # 'l' + 109: 72, # 'm' + 110: 178, # 'n' + 111: 69, # 'o' + 112: 67, # 'p' + 113: 179, # 'q' + 114: 78, # 'r' + 115: 73, # 's' + 116: 180, # 't' + 117: 181, # 'u' + 118: 79, # 'v' + 119: 182, # 'w' + 120: 183, # 'x' + 121: 184, # 'y' + 122: 185, # 'z' + 123: 253, # '{' + 124: 253, # '|' + 125: 253, # '}' + 126: 253, # '~' + 127: 253, # '\x7f' + 128: 191, # '\x80' + 129: 192, # '\x81' + 130: 193, # '\x82' + 131: 194, # '\x83' + 132: 195, # '\x84' + 133: 196, # '\x85' + 134: 197, # '\x86' + 135: 198, # '\x87' + 136: 199, # '\x88' + 137: 200, # '\x89' + 138: 201, # '\x8a' + 139: 202, # '\x8b' + 140: 203, # '\x8c' + 141: 204, # '\x8d' + 142: 205, # '\x8e' + 143: 206, # '\x8f' + 144: 207, # '\x90' + 145: 208, # '\x91' + 146: 209, # '\x92' + 147: 210, # '\x93' + 148: 211, # '\x94' + 149: 212, # '\x95' + 150: 213, # '\x96' + 151: 214, # '\x97' + 152: 215, # '\x98' + 153: 216, # '\x99' + 154: 217, # '\x9a' + 155: 218, # '\x9b' + 156: 219, # '\x9c' + 157: 220, # '\x9d' + 158: 221, # '\x9e' + 159: 222, # '\x9f' + 160: 223, # '\xa0' + 161: 224, # 'Ð' + 162: 225, # 'Ђ' + 163: 226, # 'Ѓ' + 164: 227, # 'Є' + 165: 228, # 'Ð…' + 166: 229, # 'І' + 167: 230, # 'Ї' + 168: 231, # 'Ј' + 169: 232, # 'Љ' + 170: 233, # 'Њ' + 171: 234, # 'Ћ' + 172: 235, # 'ÐŒ' + 173: 236, # '\xad' + 174: 237, # 'ÐŽ' + 175: 238, # 'Ð' + 176: 37, # 'Ð' + 177: 44, # 'Б' + 178: 33, # 'Ð’' + 179: 46, # 'Г' + 180: 41, # 'Д' + 181: 48, # 'Е' + 182: 56, # 'Ж' + 183: 51, # 'З' + 184: 42, # 'И' + 185: 60, # 'Й' + 186: 36, # 'К' + 187: 49, # 'Л' + 188: 38, # 'М' + 189: 31, # 'Ð' + 190: 34, # 'О' + 191: 35, # 'П' + 192: 45, # 'Р' + 193: 32, # 'С' + 194: 40, # 'Т' + 195: 52, # 'У' + 196: 53, # 'Ф' + 197: 55, # 'Ð¥' + 198: 58, # 'Ц' + 199: 50, # 'Ч' + 200: 57, # 'Ш' + 201: 63, # 'Щ' + 202: 70, # 'Ъ' + 203: 62, # 'Ы' + 204: 61, # 'Ь' + 205: 47, # 'Э' + 206: 59, # 'Ю' + 207: 43, # 'Я' + 208: 3, # 'а' + 209: 21, # 'б' + 210: 10, # 'в' + 211: 19, # 'г' + 212: 13, # 'д' + 213: 2, # 'е' + 214: 24, # 'ж' + 215: 20, # 'з' + 216: 4, # 'и' + 217: 23, # 'й' + 218: 11, # 'к' + 219: 8, # 'л' + 220: 12, # 'м' + 221: 5, # 'н' + 222: 1, # 'о' + 223: 15, # 'п' + 224: 9, # 'Ñ€' + 225: 7, # 'Ñ' + 226: 6, # 'Ñ‚' + 227: 14, # 'у' + 228: 39, # 'Ñ„' + 229: 26, # 'Ñ…' + 230: 28, # 'ц' + 231: 22, # 'ч' + 232: 25, # 'ш' + 233: 29, # 'щ' + 234: 54, # 'ÑŠ' + 235: 18, # 'Ñ‹' + 236: 17, # 'ÑŒ' + 237: 30, # 'Ñ' + 238: 27, # 'ÑŽ' + 239: 16, # 'Ñ' + 240: 239, # 'â„–' + 241: 68, # 'Ñ‘' + 242: 240, # 'Ñ’' + 243: 241, # 'Ñ“' + 244: 242, # 'Ñ”' + 245: 243, # 'Ñ•' + 246: 244, # 'Ñ–' + 247: 245, # 'Ñ—' + 248: 246, # 'ј' + 249: 247, # 'Ñ™' + 250: 248, # 'Ñš' + 251: 249, # 'Ñ›' + 252: 250, # 'Ñœ' + 253: 251, # '§' + 254: 252, # 'Ñž' + 255: 255, # 'ÑŸ' +} + +ISO_8859_5_RUSSIAN_MODEL = SingleByteCharSetModel(charset_name='ISO-8859-5', + 
language='Russian', + char_to_order_map=ISO_8859_5_RUSSIAN_CHAR_TO_ORDER, + language_model=RUSSIAN_LANG_MODEL, + typical_positive_ratio=0.976601, + keep_ascii_letters=False, + alphabet='ÐÐБВГДЕЖЗИЙКЛМÐОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрÑтуфхцчшщъыьÑÑŽÑÑ‘') + diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/langthaimodel.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langthaimodel.py new file mode 100644 index 0000000..9a37db5 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langthaimodel.py @@ -0,0 +1,4383 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel + + +# 3: Positive +# 2: Likely +# 1: Unlikely +# 0: Negative + +THAI_LANG_MODEL = { + 5: { # 'à¸' + 5: 2, # 'à¸' + 30: 2, # 'ข' + 24: 2, # 'ค' + 8: 2, # 'ง' + 26: 2, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 1, # 'ซ' + 47: 0, # 'à¸' + 58: 3, # 'ฎ' + 57: 2, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 2, # 'ณ' + 20: 2, # 'ด' + 19: 3, # 'ต' + 44: 0, # 'ถ' + 14: 2, # 'ท' + 48: 0, # 'ธ' + 3: 2, # 'น' + 17: 1, # 'บ' + 25: 2, # 'ป' + 39: 1, # 'ผ' + 62: 1, # 'à¸' + 31: 1, # 'พ' + 54: 0, # 'ฟ' + 45: 1, # 'ภ' + 9: 2, # 'ม' + 16: 1, # 'ย' + 2: 3, # 'ร' + 61: 2, # 'ฤ' + 15: 3, # 'ล' + 12: 3, # 'ว' + 42: 2, # 'ศ' + 46: 3, # 'ษ' + 18: 2, # 'ส' + 21: 2, # 'ห' + 4: 3, # 'อ' + 63: 1, # 'ฯ' + 22: 2, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 3, # 'ำ' + 23: 3, # 'ิ' + 13: 3, # 'ี' + 40: 0, # 'ึ' + 27: 2, # 'ื' + 32: 2, # 'ุ' + 35: 1, # 'ู' + 11: 2, # 'เ' + 28: 2, # 'à¹' + 41: 1, # 'โ' + 29: 1, # 'ใ' + 33: 2, # 'ไ' + 50: 1, # 'ๆ' + 37: 3, # '็' + 6: 3, # '่' + 7: 3, # '้' + 38: 2, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 30: { # 'ข' + 5: 1, # 'à¸' + 30: 0, # 'ข' + 24: 1, # 'ค' + 8: 1, # 'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 2, # 'ณ' + 20: 0, # 'ด' + 19: 2, # 'ต' + 44: 0, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 2, # 'น' + 17: 1, # 'บ' + 25: 1, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 2, # 'ย' + 2: 1, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 2, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 1, # 'ส' + 21: 1, # 'ห' + 4: 3, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 2, # 'ี' + 40: 3, # 'ึ' + 27: 1, # 'ื' + 32: 1, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 1, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 1, # '็' + 6: 2, # '่' + 7: 3, # '้' + 38: 1, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 24: { # 'ค' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 2, # 'ค' + 8: 2, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 2, # 'ณ' + 20: 2, # 'ด' + 19: 2, # 'ต' + 44: 0, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 0, # 'บ' + 25: 1, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 2, # 'ม' + 16: 2, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 3, # 'ล' + 12: 3, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 1, # 'ส' + 21: 0, # 'ห' + 4: 2, # 'อ' + 63: 0, # 'ฯ' + 22: 2, # 'ะ' + 10: 3, # 'ั' + 1: 2, # 'า' + 36: 3, # 'ำ' + 23: 3, # 'ิ' + 13: 2, # 'ี' + 40: 0, # 'ึ' + 27: 3, # 'ื' + 32: 3, # 'ุ' + 35: 2, # 'ู' + 11: 1, # 'เ' + 28: 0, # 'à¹' + 41: 3, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 
+    [ ...THAI_LANG_MODEL rows continue for 'ค' through 'ง', 'จ', 'ฉ', 'ช', 'ซ', 'ญ', 'ฎ', 'ฏ', 'ฐ', 'ฑ', 'ฒ', 'ณ', 'ด', 'ต', 'ถ', 'ท', 'ธ' and 'น'; generated frequency data... ]
+    17: {  # 'บ'
+    5: 3,  # 'ก'
+    30: 2,  # 'ข'
+    24: 2,  # 'ค'
+    8: 1,  # 'ง'
+    26: 1,  # 'จ'
+    52: 1,  # 'ฉ'
+    34: 1,  # 'ช'
+    51: 1,  # 'ซ'
+    47: 0,  # 'ญ'
+    58: 0,  # 'ฎ'
+    57: 0,  # 'ฏ'
+    49: 0,  # 'ฐ'
+    53: 0,  # 'ฑ'
+    55: 0,  # 'ฒ'
+    43: 0,  # 'ณ'
+    20: 1,  # 'ด'
+    19: 2,  # 'ต'
+    44: 1,  # 'ถ'
+    14: 3,  # 'ท'
+    48: 0,  # 'ธ'
+    3: 3,  # 'น'
+    17: 3,  # 'บ'
+    25: 2,  # 'ป'
+ 39: 2, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 1, # 'ฟ' + 45: 1, # 'ภ' + 9: 1, # 'ม' + 16: 0, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 3, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 2, # 'ส' + 21: 2, # 'ห' + 4: 2, # 'อ' + 63: 1, # 'ฯ' + 22: 0, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 2, # 'ำ' + 23: 2, # 'ิ' + 13: 2, # 'ี' + 40: 0, # 'ึ' + 27: 2, # 'ื' + 32: 3, # 'ุ' + 35: 2, # 'ู' + 11: 2, # 'เ' + 28: 2, # 'à¹' + 41: 1, # 'โ' + 29: 2, # 'ใ' + 33: 2, # 'ไ' + 50: 0, # 'ๆ' + 37: 1, # '็' + 6: 2, # '่' + 7: 2, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 25: { # 'ป' + 5: 2, # 'à¸' + 30: 0, # 'ข' + 24: 1, # 'ค' + 8: 0, # 'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 1, # 'ซ' + 47: 0, # 'à¸' + 58: 1, # 'ฎ' + 57: 3, # 'à¸' + 49: 1, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 1, # 'ด' + 19: 1, # 'ต' + 44: 1, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 2, # 'น' + 17: 0, # 'บ' + 25: 1, # 'ป' + 39: 1, # 'ผ' + 62: 1, # 'à¸' + 31: 1, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 1, # 'ม' + 16: 0, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 3, # 'ล' + 12: 1, # 'ว' + 42: 0, # 'ศ' + 46: 1, # 'ษ' + 18: 2, # 'ส' + 21: 1, # 'ห' + 4: 2, # 'อ' + 63: 0, # 'ฯ' + 22: 1, # 'ะ' + 10: 3, # 'ั' + 1: 1, # 'า' + 36: 0, # 'ำ' + 23: 2, # 'ิ' + 13: 3, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 1, # 'ุ' + 35: 0, # 'ู' + 11: 1, # 'เ' + 28: 2, # 'à¹' + 41: 0, # 'โ' + 29: 1, # 'ใ' + 33: 2, # 'ไ' + 50: 0, # 'ๆ' + 37: 3, # '็' + 6: 1, # '่' + 7: 2, # '้' + 38: 1, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 39: { # 'ผ' + 5: 1, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 1, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 0, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 2, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 1, # 'ม' + 16: 2, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 3, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 1, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 1, # 'ะ' + 10: 1, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 2, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 1, # 'ื' + 32: 0, # 'ุ' + 35: 3, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 3, # '่' + 7: 1, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 62: { # 'à¸' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 0, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 1, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 0, # 'ย' + 2: 1, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 1, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 1, # 'ี' + 40: 2, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 2, # '่' + 7: 1, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 31: { # 'พ' + 5: 1, # 'à¸' + 30: 1, # 'ข' + 24: 1, # 'ค' + 8: 1, # 
'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 1, # 'ณ' + 20: 1, # 'ด' + 19: 1, # 'ต' + 44: 0, # 'ถ' + 14: 2, # 'ท' + 48: 1, # 'ธ' + 3: 3, # 'น' + 17: 2, # 'บ' + 25: 0, # 'ป' + 39: 1, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 1, # 'ม' + 16: 2, # 'ย' + 2: 3, # 'ร' + 61: 2, # 'ฤ' + 15: 2, # 'ล' + 12: 2, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 1, # 'ส' + 21: 1, # 'ห' + 4: 2, # 'อ' + 63: 1, # 'ฯ' + 22: 0, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 0, # 'ำ' + 23: 3, # 'ิ' + 13: 2, # 'ี' + 40: 1, # 'ึ' + 27: 3, # 'ื' + 32: 1, # 'ุ' + 35: 2, # 'ู' + 11: 1, # 'เ' + 28: 1, # 'à¹' + 41: 0, # 'โ' + 29: 1, # 'ใ' + 33: 1, # 'ไ' + 50: 0, # 'ๆ' + 37: 1, # '็' + 6: 0, # '่' + 7: 1, # '้' + 38: 3, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 54: { # 'ฟ' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 1, # 'ต' + 44: 0, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 2, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 0, # 'ย' + 2: 1, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 1, # 'ส' + 21: 0, # 'ห' + 4: 1, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 2, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 1, # 'ิ' + 13: 1, # 'ี' + 40: 0, # 'ึ' + 27: 1, # 'ื' + 32: 1, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 1, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 2, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 45: { # 'ภ' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 1, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 0, # 'ต' + 44: 0, # 'ถ' + 14: 3, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 0, # 'ย' + 2: 1, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 0, # 'ำ' + 23: 1, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 2, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 1, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 9: { # 'ม' + 5: 2, # 'à¸' + 30: 2, # 'ข' + 24: 2, # 'ค' + 8: 2, # 'ง' + 26: 2, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 1, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 1, # 'ณ' + 20: 2, # 'ด' + 19: 2, # 'ต' + 44: 1, # 'ถ' + 14: 2, # 'ท' + 48: 1, # 'ธ' + 3: 3, # 'น' + 17: 2, # 'บ' + 25: 2, # 'ป' + 39: 1, # 'ผ' + 62: 0, # 'à¸' + 31: 3, # 'พ' + 54: 0, # 'ฟ' + 45: 1, # 'ภ' + 9: 2, # 'ม' + 16: 1, # 'ย' + 2: 2, # 'ร' + 61: 2, # 'ฤ' + 15: 2, # 'ล' + 12: 2, # 'ว' + 42: 1, # 'ศ' + 46: 1, # 'ษ' + 18: 3, # 'ส' + 21: 3, # 'ห' + 4: 3, # 'อ' + 63: 0, # 'ฯ' + 22: 1, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 0, # 'ำ' + 23: 3, # 'ิ' + 13: 3, # 'ี' + 40: 0, # 'ึ' + 27: 3, # 'ื' + 32: 3, 
# 'ุ' + 35: 3, # 'ู' + 11: 2, # 'เ' + 28: 2, # 'à¹' + 41: 2, # 'โ' + 29: 2, # 'ใ' + 33: 2, # 'ไ' + 50: 1, # 'ๆ' + 37: 1, # '็' + 6: 3, # '่' + 7: 2, # '้' + 38: 1, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 16: { # 'ย' + 5: 3, # 'à¸' + 30: 1, # 'ข' + 24: 2, # 'ค' + 8: 3, # 'ง' + 26: 2, # 'จ' + 52: 0, # 'ฉ' + 34: 2, # 'ช' + 51: 0, # 'ซ' + 47: 2, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 2, # 'ด' + 19: 2, # 'ต' + 44: 1, # 'ถ' + 14: 2, # 'ท' + 48: 1, # 'ธ' + 3: 3, # 'น' + 17: 3, # 'บ' + 25: 1, # 'ป' + 39: 1, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 0, # 'ฟ' + 45: 1, # 'ภ' + 9: 2, # 'ม' + 16: 0, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 1, # 'ล' + 12: 3, # 'ว' + 42: 1, # 'ศ' + 46: 0, # 'ษ' + 18: 2, # 'ส' + 21: 1, # 'ห' + 4: 2, # 'อ' + 63: 0, # 'ฯ' + 22: 2, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 0, # 'ำ' + 23: 2, # 'ิ' + 13: 3, # 'ี' + 40: 1, # 'ึ' + 27: 2, # 'ื' + 32: 2, # 'ุ' + 35: 3, # 'ู' + 11: 2, # 'เ' + 28: 1, # 'à¹' + 41: 1, # 'โ' + 29: 2, # 'ใ' + 33: 2, # 'ไ' + 50: 2, # 'ๆ' + 37: 1, # '็' + 6: 3, # '่' + 7: 2, # '้' + 38: 3, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 2: { # 'ร' + 5: 3, # 'à¸' + 30: 2, # 'ข' + 24: 2, # 'ค' + 8: 3, # 'ง' + 26: 2, # 'จ' + 52: 0, # 'ฉ' + 34: 2, # 'ช' + 51: 1, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 3, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 3, # 'ณ' + 20: 2, # 'ด' + 19: 2, # 'ต' + 44: 3, # 'ถ' + 14: 3, # 'ท' + 48: 1, # 'ธ' + 3: 2, # 'น' + 17: 2, # 'บ' + 25: 3, # 'ป' + 39: 2, # 'ผ' + 62: 1, # 'à¸' + 31: 2, # 'พ' + 54: 1, # 'ฟ' + 45: 1, # 'ภ' + 9: 3, # 'ม' + 16: 2, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 3, # 'ว' + 42: 2, # 'ศ' + 46: 2, # 'ษ' + 18: 2, # 'ส' + 21: 2, # 'ห' + 4: 3, # 'อ' + 63: 1, # 'ฯ' + 22: 3, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 0, # 'ำ' + 23: 3, # 'ิ' + 13: 3, # 'ี' + 40: 2, # 'ึ' + 27: 3, # 'ื' + 32: 3, # 'ุ' + 35: 3, # 'ู' + 11: 3, # 'เ' + 28: 3, # 'à¹' + 41: 1, # 'โ' + 29: 2, # 'ใ' + 33: 1, # 'ไ' + 50: 0, # 'ๆ' + 37: 3, # '็' + 6: 3, # '่' + 7: 3, # '้' + 38: 3, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 61: { # 'ฤ' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 2, # 'ต' + 44: 0, # 'ถ' + 14: 2, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 1, # 'ม' + 16: 0, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 2, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 15: { # 'ล' + 5: 2, # 'à¸' + 30: 3, # 'ข' + 24: 1, # 'ค' + 8: 3, # 'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 2, # 'ด' + 19: 2, # 'ต' + 44: 1, # 'ถ' + 14: 2, # 'ท' + 48: 0, # 'ธ' + 3: 1, # 'น' + 17: 2, # 'บ' + 25: 2, # 'ป' + 39: 1, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 1, # 'ภ' + 9: 1, # 'ม' + 
16: 3, # 'ย' + 2: 1, # 'ร' + 61: 0, # 'ฤ' + 15: 1, # 'ล' + 12: 1, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 2, # 'ส' + 21: 1, # 'ห' + 4: 3, # 'อ' + 63: 2, # 'ฯ' + 22: 3, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 2, # 'ำ' + 23: 3, # 'ิ' + 13: 3, # 'ี' + 40: 2, # 'ึ' + 27: 3, # 'ื' + 32: 2, # 'ุ' + 35: 3, # 'ู' + 11: 2, # 'เ' + 28: 1, # 'à¹' + 41: 1, # 'โ' + 29: 2, # 'ใ' + 33: 1, # 'ไ' + 50: 0, # 'ๆ' + 37: 2, # '็' + 6: 3, # '่' + 7: 3, # '้' + 38: 2, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 12: { # 'ว' + 5: 3, # 'à¸' + 30: 2, # 'ข' + 24: 1, # 'ค' + 8: 3, # 'ง' + 26: 2, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 1, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 1, # 'ณ' + 20: 2, # 'ด' + 19: 1, # 'ต' + 44: 1, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 2, # 'บ' + 25: 1, # 'ป' + 39: 1, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 1, # 'ฟ' + 45: 0, # 'ภ' + 9: 3, # 'ม' + 16: 3, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 3, # 'ล' + 12: 1, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 2, # 'ส' + 21: 2, # 'ห' + 4: 2, # 'อ' + 63: 0, # 'ฯ' + 22: 2, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 0, # 'ำ' + 23: 3, # 'ิ' + 13: 2, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 2, # 'ุ' + 35: 0, # 'ู' + 11: 3, # 'เ' + 28: 2, # 'à¹' + 41: 1, # 'โ' + 29: 1, # 'ใ' + 33: 2, # 'ไ' + 50: 1, # 'ๆ' + 37: 0, # '็' + 6: 3, # '่' + 7: 3, # '้' + 38: 1, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 42: { # 'ศ' + 5: 1, # 'à¸' + 30: 0, # 'ข' + 24: 1, # 'ค' + 8: 0, # 'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 1, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 1, # 'ต' + 44: 0, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 2, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 0, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 2, # 'ว' + 42: 1, # 'ศ' + 46: 2, # 'ษ' + 18: 1, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 2, # 'ั' + 1: 3, # 'า' + 36: 0, # 'ำ' + 23: 2, # 'ิ' + 13: 0, # 'ี' + 40: 3, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 2, # 'ู' + 11: 0, # 'เ' + 28: 1, # 'à¹' + 41: 0, # 'โ' + 29: 1, # 'ใ' + 33: 1, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 1, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 46: { # 'ษ' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 2, # 'ฎ' + 57: 1, # 'à¸' + 49: 2, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 3, # 'ณ' + 20: 0, # 'ด' + 19: 1, # 'ต' + 44: 0, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 1, # 'ภ' + 9: 1, # 'ม' + 16: 2, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 1, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 2, # 'ะ' + 10: 2, # 'ั' + 1: 3, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 1, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 1, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 2, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 18: { # 'ส' + 5: 2, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 2, # 'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 
'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 3, # 'ด' + 19: 3, # 'ต' + 44: 3, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 2, # 'บ' + 25: 1, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 2, # 'ภ' + 9: 3, # 'ม' + 16: 1, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 1, # 'ล' + 12: 2, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 2, # 'ห' + 4: 3, # 'อ' + 63: 0, # 'ฯ' + 22: 2, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 3, # 'ำ' + 23: 3, # 'ิ' + 13: 3, # 'ี' + 40: 2, # 'ึ' + 27: 3, # 'ื' + 32: 3, # 'ุ' + 35: 3, # 'ู' + 11: 2, # 'เ' + 28: 0, # 'à¹' + 41: 1, # 'โ' + 29: 0, # 'ใ' + 33: 1, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 3, # '่' + 7: 1, # '้' + 38: 2, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 21: { # 'ห' + 5: 3, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 1, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 2, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 1, # 'ด' + 19: 3, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 0, # 'บ' + 25: 1, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 3, # 'ม' + 16: 2, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 3, # 'ล' + 12: 2, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 3, # 'อ' + 63: 0, # 'ฯ' + 22: 1, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 0, # 'ำ' + 23: 1, # 'ิ' + 13: 1, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 1, # 'ุ' + 35: 1, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 3, # '็' + 6: 3, # '่' + 7: 3, # '้' + 38: 2, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 4: { # 'อ' + 5: 3, # 'à¸' + 30: 1, # 'ข' + 24: 2, # 'ค' + 8: 3, # 'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 3, # 'ด' + 19: 2, # 'ต' + 44: 1, # 'ถ' + 14: 2, # 'ท' + 48: 1, # 'ธ' + 3: 3, # 'น' + 17: 3, # 'บ' + 25: 1, # 'ป' + 39: 1, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 1, # 'ฟ' + 45: 1, # 'ภ' + 9: 3, # 'ม' + 16: 3, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 2, # 'ว' + 42: 1, # 'ศ' + 46: 0, # 'ษ' + 18: 2, # 'ส' + 21: 2, # 'ห' + 4: 3, # 'อ' + 63: 0, # 'ฯ' + 22: 2, # 'ะ' + 10: 3, # 'ั' + 1: 3, # 'า' + 36: 2, # 'ำ' + 23: 2, # 'ิ' + 13: 3, # 'ี' + 40: 0, # 'ึ' + 27: 3, # 'ื' + 32: 3, # 'ุ' + 35: 0, # 'ู' + 11: 3, # 'เ' + 28: 1, # 'à¹' + 41: 1, # 'โ' + 29: 2, # 'ใ' + 33: 2, # 'ไ' + 50: 1, # 'ๆ' + 37: 1, # '็' + 6: 2, # '่' + 7: 2, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 63: { # 'ฯ' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 0, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 0, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, 
# 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 22: { # 'ะ' + 5: 3, # 'à¸' + 30: 1, # 'ข' + 24: 2, # 'ค' + 8: 1, # 'ง' + 26: 2, # 'จ' + 52: 0, # 'ฉ' + 34: 3, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 3, # 'ด' + 19: 3, # 'ต' + 44: 1, # 'ถ' + 14: 3, # 'ท' + 48: 1, # 'ธ' + 3: 2, # 'น' + 17: 3, # 'บ' + 25: 2, # 'ป' + 39: 1, # 'ผ' + 62: 0, # 'à¸' + 31: 2, # 'พ' + 54: 0, # 'ฟ' + 45: 1, # 'ภ' + 9: 3, # 'ม' + 16: 2, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 2, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 3, # 'ส' + 21: 3, # 'ห' + 4: 2, # 'อ' + 63: 1, # 'ฯ' + 22: 1, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 3, # 'เ' + 28: 2, # 'à¹' + 41: 1, # 'โ' + 29: 2, # 'ใ' + 33: 2, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 10: { # 'ั' + 5: 3, # 'à¸' + 30: 0, # 'ข' + 24: 1, # 'ค' + 8: 3, # 'ง' + 26: 3, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 0, # 'ซ' + 47: 3, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 2, # 'à¸' + 53: 0, # 'ฑ' + 55: 3, # 'ฒ' + 43: 3, # 'ณ' + 20: 3, # 'ด' + 19: 3, # 'ต' + 44: 0, # 'ถ' + 14: 2, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 3, # 'บ' + 25: 1, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 2, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 3, # 'ม' + 16: 3, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 3, # 'ว' + 42: 2, # 'ศ' + 46: 0, # 'ษ' + 18: 3, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 3, # '่' + 7: 3, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 1: { # 'า' + 5: 3, # 'à¸' + 30: 2, # 'ข' + 24: 3, # 'ค' + 8: 3, # 'ง' + 26: 3, # 'จ' + 52: 0, # 'ฉ' + 34: 3, # 'ช' + 51: 1, # 'ซ' + 47: 2, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 3, # 'ณ' + 20: 3, # 'ด' + 19: 3, # 'ต' + 44: 1, # 'ถ' + 14: 3, # 'ท' + 48: 2, # 'ธ' + 3: 3, # 'น' + 17: 3, # 'บ' + 25: 2, # 'ป' + 39: 1, # 'ผ' + 62: 1, # 'à¸' + 31: 3, # 'พ' + 54: 1, # 'ฟ' + 45: 1, # 'ภ' + 9: 3, # 'ม' + 16: 3, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 3, # 'ล' + 12: 3, # 'ว' + 42: 2, # 'ศ' + 46: 3, # 'ษ' + 18: 3, # 'ส' + 21: 3, # 'ห' + 4: 2, # 'อ' + 63: 1, # 'ฯ' + 22: 3, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 3, # 'เ' + 28: 2, # 'à¹' + 41: 1, # 'โ' + 29: 2, # 'ใ' + 33: 2, # 'ไ' + 50: 1, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 36: { # 'ำ' + 5: 2, # 'à¸' + 30: 1, # 'ข' + 24: 3, # 'ค' + 8: 2, # 'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 1, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 1, # 'ด' + 19: 1, # 'ต' + 44: 1, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 1, # 'บ' + 25: 1, # 'ป' + 39: 1, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 0, # 'ฟ' + 45: 1, # 'ภ' + 9: 1, # 'ม' + 16: 0, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 1, # 'ว' + 42: 0, # 'ศ' + 
46: 0, # 'ษ' + 18: 1, # 'ส' + 21: 3, # 'ห' + 4: 1, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 3, # 'เ' + 28: 2, # 'à¹' + 41: 1, # 'โ' + 29: 2, # 'ใ' + 33: 2, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 23: { # 'ิ' + 5: 3, # 'à¸' + 30: 1, # 'ข' + 24: 2, # 'ค' + 8: 3, # 'ง' + 26: 3, # 'จ' + 52: 0, # 'ฉ' + 34: 3, # 'ช' + 51: 0, # 'ซ' + 47: 2, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 3, # 'ด' + 19: 3, # 'ต' + 44: 1, # 'ถ' + 14: 3, # 'ท' + 48: 3, # 'ธ' + 3: 3, # 'น' + 17: 3, # 'บ' + 25: 2, # 'ป' + 39: 2, # 'ผ' + 62: 0, # 'à¸' + 31: 3, # 'พ' + 54: 1, # 'ฟ' + 45: 2, # 'ภ' + 9: 3, # 'ม' + 16: 2, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 3, # 'ว' + 42: 3, # 'ศ' + 46: 2, # 'ษ' + 18: 2, # 'ส' + 21: 3, # 'ห' + 4: 1, # 'อ' + 63: 1, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 3, # 'เ' + 28: 1, # 'à¹' + 41: 1, # 'โ' + 29: 1, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 3, # '่' + 7: 2, # '้' + 38: 2, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 13: { # 'ี' + 5: 3, # 'à¸' + 30: 2, # 'ข' + 24: 2, # 'ค' + 8: 0, # 'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 2, # 'ด' + 19: 1, # 'ต' + 44: 0, # 'ถ' + 14: 2, # 'ท' + 48: 0, # 'ธ' + 3: 1, # 'น' + 17: 2, # 'บ' + 25: 2, # 'ป' + 39: 1, # 'ผ' + 62: 0, # 'à¸' + 31: 2, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 2, # 'ม' + 16: 3, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 1, # 'ล' + 12: 2, # 'ว' + 42: 1, # 'ศ' + 46: 0, # 'ษ' + 18: 2, # 'ส' + 21: 1, # 'ห' + 4: 2, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 2, # 'เ' + 28: 2, # 'à¹' + 41: 1, # 'โ' + 29: 1, # 'ใ' + 33: 1, # 'ไ' + 50: 1, # 'ๆ' + 37: 0, # '็' + 6: 3, # '่' + 7: 3, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 40: { # 'ึ' + 5: 3, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 3, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 1, # 'ด' + 19: 0, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 1, # 'ม' + 16: 0, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 3, # '่' + 7: 3, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 27: { # 'ื' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 1, # 
'ด' + 19: 0, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 2, # 'น' + 17: 3, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 2, # 'ม' + 16: 0, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 3, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 3, # '่' + 7: 3, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 32: { # 'ุ' + 5: 3, # 'à¸' + 30: 2, # 'ข' + 24: 3, # 'ค' + 8: 3, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 2, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 1, # 'ฒ' + 43: 3, # 'ณ' + 20: 3, # 'ด' + 19: 3, # 'ต' + 44: 1, # 'ถ' + 14: 2, # 'ท' + 48: 1, # 'ธ' + 3: 2, # 'น' + 17: 2, # 'บ' + 25: 2, # 'ป' + 39: 2, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 0, # 'ฟ' + 45: 1, # 'ภ' + 9: 3, # 'ม' + 16: 1, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 1, # 'ว' + 42: 1, # 'ศ' + 46: 2, # 'ษ' + 18: 1, # 'ส' + 21: 1, # 'ห' + 4: 1, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 1, # 'เ' + 28: 0, # 'à¹' + 41: 1, # 'โ' + 29: 0, # 'ใ' + 33: 1, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 3, # '่' + 7: 2, # '้' + 38: 1, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 35: { # 'ู' + 5: 3, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 2, # 'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 2, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 1, # 'ณ' + 20: 2, # 'ด' + 19: 2, # 'ต' + 44: 0, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 2, # 'น' + 17: 0, # 'บ' + 25: 3, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 2, # 'ม' + 16: 0, # 'ย' + 2: 1, # 'ร' + 61: 0, # 'ฤ' + 15: 3, # 'ล' + 12: 1, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 1, # 'เ' + 28: 1, # 'à¹' + 41: 1, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 3, # '่' + 7: 3, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 11: { # 'เ' + 5: 3, # 'à¸' + 30: 3, # 'ข' + 24: 3, # 'ค' + 8: 2, # 'ง' + 26: 3, # 'จ' + 52: 3, # 'ฉ' + 34: 3, # 'ช' + 51: 2, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 1, # 'ณ' + 20: 3, # 'ด' + 19: 3, # 'ต' + 44: 1, # 'ถ' + 14: 3, # 'ท' + 48: 1, # 'ธ' + 3: 3, # 'น' + 17: 3, # 'บ' + 25: 3, # 'ป' + 39: 2, # 'ผ' + 62: 1, # 'à¸' + 31: 3, # 'พ' + 54: 1, # 'ฟ' + 45: 3, # 'ภ' + 9: 3, # 'ม' + 16: 2, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 3, # 'ล' + 12: 3, # 'ว' + 42: 2, # 'ศ' + 46: 0, # 'ษ' + 18: 3, # 'ส' + 21: 3, # 'ห' + 4: 3, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # 
'๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 28: { # 'à¹' + 5: 3, # 'à¸' + 30: 2, # 'ข' + 24: 2, # 'ค' + 8: 1, # 'ง' + 26: 2, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 2, # 'ด' + 19: 3, # 'ต' + 44: 2, # 'ถ' + 14: 3, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 3, # 'บ' + 25: 2, # 'ป' + 39: 3, # 'ผ' + 62: 0, # 'à¸' + 31: 2, # 'พ' + 54: 2, # 'ฟ' + 45: 0, # 'ภ' + 9: 2, # 'ม' + 16: 2, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 3, # 'ล' + 12: 2, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 3, # 'ส' + 21: 3, # 'ห' + 4: 1, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 41: { # 'โ' + 5: 2, # 'à¸' + 30: 1, # 'ข' + 24: 2, # 'ค' + 8: 0, # 'ง' + 26: 1, # 'จ' + 52: 1, # 'ฉ' + 34: 1, # 'ช' + 51: 1, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 3, # 'ด' + 19: 2, # 'ต' + 44: 0, # 'ถ' + 14: 2, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 1, # 'บ' + 25: 3, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 1, # 'ฟ' + 45: 1, # 'ภ' + 9: 1, # 'ม' + 16: 2, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 3, # 'ล' + 12: 0, # 'ว' + 42: 1, # 'ศ' + 46: 0, # 'ษ' + 18: 2, # 'ส' + 21: 0, # 'ห' + 4: 2, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 29: { # 'ใ' + 5: 2, # 'à¸' + 30: 0, # 'ข' + 24: 1, # 'ค' + 8: 0, # 'ง' + 26: 3, # 'จ' + 52: 0, # 'ฉ' + 34: 3, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 3, # 'ด' + 19: 1, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 2, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 1, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 3, # 'ส' + 21: 3, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 33: { # 'ไ' + 5: 1, # 'à¸' + 30: 2, # 'ข' + 24: 0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 1, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 3, # 'ด' + 19: 1, # 'ต' + 44: 0, # 'ถ' + 14: 3, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 1, # 'บ' + 25: 3, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 2, # 'ฟ' + 45: 0, # 'ภ' + 9: 3, # 'ม' + 16: 0, # 'ย' + 2: 3, # 'ร' + 61: 0, # 'ฤ' + 15: 1, # 'ล' + 12: 3, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 1, # 'ส' + 21: 2, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 
10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 50: { # 'ๆ' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 0, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 0, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 37: { # '็' + 5: 2, # 'à¸' + 30: 1, # 'ข' + 24: 2, # 'ค' + 8: 2, # 'ง' + 26: 3, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 1, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 1, # 'ด' + 19: 2, # 'ต' + 44: 0, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 3, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 2, # 'ม' + 16: 1, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 2, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 1, # 'ส' + 21: 0, # 'ห' + 4: 1, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 1, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 1, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 6: { # '่' + 5: 2, # 'à¸' + 30: 1, # 'ข' + 24: 2, # 'ค' + 8: 3, # 'ง' + 26: 2, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 1, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 1, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 1, # 'ด' + 19: 2, # 'ต' + 44: 1, # 'ถ' + 14: 2, # 'ท' + 48: 1, # 'ธ' + 3: 3, # 'น' + 17: 1, # 'บ' + 25: 2, # 'ป' + 39: 2, # 'ผ' + 62: 1, # 'à¸' + 31: 1, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 3, # 'ม' + 16: 3, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 2, # 'ล' + 12: 3, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 2, # 'ส' + 21: 1, # 'ห' + 4: 3, # 'อ' + 63: 0, # 'ฯ' + 22: 1, # 'ะ' + 10: 0, # 'ั' + 1: 3, # 'า' + 36: 2, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 3, # 'เ' + 28: 2, # 'à¹' + 41: 1, # 'โ' + 29: 2, # 'ใ' + 33: 2, # 'ไ' + 50: 1, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 7: { # '้' + 5: 2, # 'à¸' + 30: 1, # 'ข' + 24: 2, # 'ค' + 8: 3, # 'ง' + 26: 2, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 1, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 1, # 'ด' + 19: 2, # 'ต' + 44: 1, # 'ถ' + 14: 2, # 'ท' + 48: 0, # 'ธ' + 3: 3, # 'น' + 17: 2, # 
'บ' + 25: 2, # 'ป' + 39: 2, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 1, # 'ฟ' + 45: 0, # 'ภ' + 9: 3, # 'ม' + 16: 2, # 'ย' + 2: 2, # 'ร' + 61: 0, # 'ฤ' + 15: 1, # 'ล' + 12: 3, # 'ว' + 42: 1, # 'ศ' + 46: 0, # 'ษ' + 18: 2, # 'ส' + 21: 2, # 'ห' + 4: 3, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 3, # 'า' + 36: 2, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 2, # 'เ' + 28: 2, # 'à¹' + 41: 1, # 'โ' + 29: 2, # 'ใ' + 33: 2, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 38: { # '์' + 5: 2, # 'à¸' + 30: 1, # 'ข' + 24: 1, # 'ค' + 8: 0, # 'ง' + 26: 1, # 'จ' + 52: 0, # 'ฉ' + 34: 1, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 2, # 'ด' + 19: 1, # 'ต' + 44: 1, # 'ถ' + 14: 1, # 'ท' + 48: 0, # 'ธ' + 3: 1, # 'น' + 17: 1, # 'บ' + 25: 1, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 1, # 'พ' + 54: 1, # 'ฟ' + 45: 0, # 'ภ' + 9: 2, # 'ม' + 16: 0, # 'ย' + 2: 1, # 'ร' + 61: 1, # 'ฤ' + 15: 1, # 'ล' + 12: 1, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 1, # 'ส' + 21: 1, # 'ห' + 4: 2, # 'อ' + 63: 1, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 2, # 'เ' + 28: 2, # 'à¹' + 41: 1, # 'โ' + 29: 1, # 'ใ' + 33: 1, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 0, # '๑' + 59: 0, # '๒' + 60: 0, # '๕' + }, + 56: { # '๑' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 0, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 0, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 2, # '๑' + 59: 1, # '๒' + 60: 1, # '๕' + }, + 59: { # '๒' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 0, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 0, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 1, # '๑' + 59: 1, # '๒' + 60: 3, # '๕' + }, + 60: { # '๕' + 5: 0, # 'à¸' + 30: 0, # 'ข' + 24: 
0, # 'ค' + 8: 0, # 'ง' + 26: 0, # 'จ' + 52: 0, # 'ฉ' + 34: 0, # 'ช' + 51: 0, # 'ซ' + 47: 0, # 'à¸' + 58: 0, # 'ฎ' + 57: 0, # 'à¸' + 49: 0, # 'à¸' + 53: 0, # 'ฑ' + 55: 0, # 'ฒ' + 43: 0, # 'ณ' + 20: 0, # 'ด' + 19: 0, # 'ต' + 44: 0, # 'ถ' + 14: 0, # 'ท' + 48: 0, # 'ธ' + 3: 0, # 'น' + 17: 0, # 'บ' + 25: 0, # 'ป' + 39: 0, # 'ผ' + 62: 0, # 'à¸' + 31: 0, # 'พ' + 54: 0, # 'ฟ' + 45: 0, # 'ภ' + 9: 0, # 'ม' + 16: 0, # 'ย' + 2: 0, # 'ร' + 61: 0, # 'ฤ' + 15: 0, # 'ล' + 12: 0, # 'ว' + 42: 0, # 'ศ' + 46: 0, # 'ษ' + 18: 0, # 'ส' + 21: 0, # 'ห' + 4: 0, # 'อ' + 63: 0, # 'ฯ' + 22: 0, # 'ะ' + 10: 0, # 'ั' + 1: 0, # 'า' + 36: 0, # 'ำ' + 23: 0, # 'ิ' + 13: 0, # 'ี' + 40: 0, # 'ึ' + 27: 0, # 'ื' + 32: 0, # 'ุ' + 35: 0, # 'ู' + 11: 0, # 'เ' + 28: 0, # 'à¹' + 41: 0, # 'โ' + 29: 0, # 'ใ' + 33: 0, # 'ไ' + 50: 0, # 'ๆ' + 37: 0, # '็' + 6: 0, # '่' + 7: 0, # '้' + 38: 0, # '์' + 56: 2, # '๑' + 59: 1, # '๒' + 60: 0, # '๕' + }, +} + +# 255: Undefined characters that did not exist in training text +# 254: Carriage/Return +# 253: symbol (punctuation) that does not belong to word +# 252: 0 - 9 +# 251: Control characters + +# Character Mapping Table(s): +TIS_620_THAI_CHAR_TO_ORDER = { + 0: 255, # '\x00' + 1: 255, # '\x01' + 2: 255, # '\x02' + 3: 255, # '\x03' + 4: 255, # '\x04' + 5: 255, # '\x05' + 6: 255, # '\x06' + 7: 255, # '\x07' + 8: 255, # '\x08' + 9: 255, # '\t' + 10: 254, # '\n' + 11: 255, # '\x0b' + 12: 255, # '\x0c' + 13: 254, # '\r' + 14: 255, # '\x0e' + 15: 255, # '\x0f' + 16: 255, # '\x10' + 17: 255, # '\x11' + 18: 255, # '\x12' + 19: 255, # '\x13' + 20: 255, # '\x14' + 21: 255, # '\x15' + 22: 255, # '\x16' + 23: 255, # '\x17' + 24: 255, # '\x18' + 25: 255, # '\x19' + 26: 255, # '\x1a' + 27: 255, # '\x1b' + 28: 255, # '\x1c' + 29: 255, # '\x1d' + 30: 255, # '\x1e' + 31: 255, # '\x1f' + 32: 253, # ' ' + 33: 253, # '!' + 34: 253, # '"' + 35: 253, # '#' + 36: 253, # '$' + 37: 253, # '%' + 38: 253, # '&' + 39: 253, # "'" + 40: 253, # '(' + 41: 253, # ')' + 42: 253, # '*' + 43: 253, # '+' + 44: 253, # ',' + 45: 253, # '-' + 46: 253, # '.' + 47: 253, # '/' + 48: 252, # '0' + 49: 252, # '1' + 50: 252, # '2' + 51: 252, # '3' + 52: 252, # '4' + 53: 252, # '5' + 54: 252, # '6' + 55: 252, # '7' + 56: 252, # '8' + 57: 252, # '9' + 58: 253, # ':' + 59: 253, # ';' + 60: 253, # '<' + 61: 253, # '=' + 62: 253, # '>' + 63: 253, # '?' 
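The five reserved order values above are how the single-byte prober separates scoring-relevant letters from everything else: only bytes whose order falls below 251 take part in the pair statistics. The sketch below illustrates that filtering step against the mapping table that follows; the letter_orders helper is hypothetical, not part of chardet, whose real logic lives in the sbcharsetprober module imported by these files.

# Reserved categories, with the values given in the legend above.
UNDEFINED, LINE_BREAK, SYMBOL, DIGIT, CONTROL = 255, 254, 253, 252, 251

def letter_orders(byte_str, char_to_order_map):
    """Yield the frequency order of each 'letter' byte, skipping digits,
    punctuation, control bytes and codepoints absent from the training text."""
    for byte in byte_str:
        order = char_to_order_map.get(byte, UNDEFINED)
        if order < CONTROL:  # 251-255 are the reserved non-letter buckets
            yield order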
+
+# Character Mapping Table(s):
+TIS_620_THAI_CHAR_TO_ORDER = {
+    0: 255,  # '\x00'
+    1: 255,  # '\x01'
+    2: 255,  # '\x02'
+    3: 255,  # '\x03'
+    4: 255,  # '\x04'
+    5: 255,  # '\x05'
+    6: 255,  # '\x06'
+    7: 255,  # '\x07'
+    8: 255,  # '\x08'
+    9: 255,  # '\t'
+    10: 254,  # '\n'
+    11: 255,  # '\x0b'
+    12: 255,  # '\x0c'
+    13: 254,  # '\r'
+    14: 255,  # '\x0e'
+    15: 255,  # '\x0f'
+    16: 255,  # '\x10'
+    17: 255,  # '\x11'
+    18: 255,  # '\x12'
+    19: 255,  # '\x13'
+    20: 255,  # '\x14'
+    21: 255,  # '\x15'
+    22: 255,  # '\x16'
+    23: 255,  # '\x17'
+    24: 255,  # '\x18'
+    25: 255,  # '\x19'
+    26: 255,  # '\x1a'
+    27: 255,  # '\x1b'
+    28: 255,  # '\x1c'
+    29: 255,  # '\x1d'
+    30: 255,  # '\x1e'
+    31: 255,  # '\x1f'
+    32: 253,  # ' '
+    33: 253,  # '!'
+    34: 253,  # '"'
+    35: 253,  # '#'
+    36: 253,  # '$'
+    37: 253,  # '%'
+    38: 253,  # '&'
+    39: 253,  # "'"
+    40: 253,  # '('
+    41: 253,  # ')'
+    42: 253,  # '*'
+    43: 253,  # '+'
+    44: 253,  # ','
+    45: 253,  # '-'
+    46: 253,  # '.'
+    47: 253,  # '/'
+    48: 252,  # '0'
+    49: 252,  # '1'
+    50: 252,  # '2'
+    51: 252,  # '3'
+    52: 252,  # '4'
+    53: 252,  # '5'
+    54: 252,  # '6'
+    55: 252,  # '7'
+    56: 252,  # '8'
+    57: 252,  # '9'
+    58: 253,  # ':'
+    59: 253,  # ';'
+    60: 253,  # '<'
+    61: 253,  # '='
+    62: 253,  # '>'
+    63: 253,  # '?'
+    64: 253,  # '@'
+    65: 182,  # 'A'
+    66: 106,  # 'B'
+    67: 107,  # 'C'
+    68: 100,  # 'D'
+    69: 183,  # 'E'
+    70: 184,  # 'F'
+    71: 185,  # 'G'
+    72: 101,  # 'H'
+    73: 94,  # 'I'
+    74: 186,  # 'J'
+    75: 187,  # 'K'
+    76: 108,  # 'L'
+    77: 109,  # 'M'
+    78: 110,  # 'N'
+    79: 111,  # 'O'
+    80: 188,  # 'P'
+    81: 189,  # 'Q'
+    82: 190,  # 'R'
+    83: 89,  # 'S'
+    84: 95,  # 'T'
+    85: 112,  # 'U'
+    86: 113,  # 'V'
+    87: 191,  # 'W'
+    88: 192,  # 'X'
+    89: 193,  # 'Y'
+    90: 194,  # 'Z'
+    91: 253,  # '['
+    92: 253,  # '\\'
+    93: 253,  # ']'
+    94: 253,  # '^'
+    95: 253,  # '_'
+    96: 253,  # '`'
+    97: 64,  # 'a'
+    98: 72,  # 'b'
+    99: 73,  # 'c'
+    100: 114,  # 'd'
+    101: 74,  # 'e'
+    102: 115,  # 'f'
+    103: 116,  # 'g'
+    104: 102,  # 'h'
+    105: 81,  # 'i'
+    106: 201,  # 'j'
+    107: 117,  # 'k'
+    108: 90,  # 'l'
+    109: 103,  # 'm'
+    110: 78,  # 'n'
+    111: 82,  # 'o'
+    112: 96,  # 'p'
+    113: 202,  # 'q'
+    114: 91,  # 'r'
+    115: 79,  # 's'
+    116: 84,  # 't'
+    117: 104,  # 'u'
+    118: 105,  # 'v'
+    119: 97,  # 'w'
+    120: 98,  # 'x'
+    121: 92,  # 'y'
+    122: 203,  # 'z'
+    123: 253,  # '{'
+    124: 253,  # '|'
+    125: 253,  # '}'
+    126: 253,  # '~'
+    127: 253,  # '\x7f'
+    128: 209,  # '\x80'
+    129: 210,  # '\x81'
+    130: 211,  # '\x82'
+    131: 212,  # '\x83'
+    132: 213,  # '\x84'
+    133: 88,  # '\x85'
+    134: 214,  # '\x86'
+    135: 215,  # '\x87'
+    136: 216,  # '\x88'
+    137: 217,  # '\x89'
+    138: 218,  # '\x8a'
+    139: 219,  # '\x8b'
+    140: 220,  # '\x8c'
+    141: 118,  # '\x8d'
+    142: 221,  # '\x8e'
+    143: 222,  # '\x8f'
+    144: 223,  # '\x90'
+    145: 224,  # '\x91'
+    146: 99,  # '\x92'
+    147: 85,  # '\x93'
+    148: 83,  # '\x94'
+    149: 225,  # '\x95'
+    150: 226,  # '\x96'
+    151: 227,  # '\x97'
+    152: 228,  # '\x98'
+    153: 229,  # '\x99'
+    154: 230,  # '\x9a'
+    155: 231,  # '\x9b'
+    156: 232,  # '\x9c'
+    157: 233,  # '\x9d'
+    158: 234,  # '\x9e'
+    159: 235,  # '\x9f'
+    160: 236,  # None
+    161: 5,  # 'ก'
+    162: 30,  # 'ข'
+    163: 237,  # 'ฃ'
+    164: 24,  # 'ค'
+    165: 238,  # 'ฅ'
+    166: 75,  # 'ฆ'
+    167: 8,  # 'ง'
+    168: 26,  # 'จ'
+    169: 52,  # 'ฉ'
+    170: 34,  # 'ช'
+    171: 51,  # 'ซ'
+    172: 119,  # 'ฌ'
+    173: 47,  # 'ญ'
+    174: 58,  # 'ฎ'
+    175: 57,  # 'ฏ'
+    176: 49,  # 'ฐ'
+    177: 53,  # 'ฑ'
+    178: 55,  # 'ฒ'
+    179: 43,  # 'ณ'
+    180: 20,  # 'ด'
+    181: 19,  # 'ต'
+    182: 44,  # 'ถ'
+    183: 14,  # 'ท'
+    184: 48,  # 'ธ'
+    185: 3,  # 'น'
+    186: 17,  # 'บ'
+    187: 25,  # 'ป'
+    188: 39,  # 'ผ'
+    189: 62,  # 'ฝ'
+    190: 31,  # 'พ'
+    191: 54,  # 'ฟ'
+    192: 45,  # 'ภ'
+    193: 9,  # 'ม'
+    194: 16,  # 'ย'
+    195: 2,  # 'ร'
+    196: 61,  # 'ฤ'
+    197: 15,  # 'ล'
+    198: 239,  # 'ฦ'
+    199: 12,  # 'ว'
+    200: 42,  # 'ศ'
+    201: 46,  # 'ษ'
+    202: 18,  # 'ส'
+    203: 21,  # 'ห'
+    204: 76,  # 'ฬ'
+    205: 4,  # 'อ'
+    206: 66,  # 'ฮ'
+    207: 63,  # 'ฯ'
+    208: 22,  # 'ะ'
+    209: 10,  # 'ั'
+    210: 1,  # 'า'
+    211: 36,  # 'ำ'
+    212: 23,  # 'ิ'
+    213: 13,  # 'ี'
+    214: 40,  # 'ึ'
+    215: 27,  # 'ื'
+    216: 32,  # 'ุ'
+    217: 35,  # 'ู'
+    218: 86,  # 'ฺ'
+    219: 240,  # None
+    220: 241,  # None
+    221: 242,  # None
+    222: 243,  # None
+    223: 244,  # '฿'
+    224: 11,  # 'เ'
+    225: 28,  # 'แ'
+    226: 41,  # 'โ'
+    227: 29,  # 'ใ'
+    228: 33,  # 'ไ'
+    229: 245,  # 'ๅ'
+    230: 50,  # 'ๆ'
+    231: 37,  # '็'
+    232: 6,  # '่'
+    233: 7,  # '้'
+    234: 67,  # '๊'
+    235: 77,  # '๋'
+    236: 38,  # '์'
+    237: 93,  # 'ํ'
+    238: 246,  # '๎'
+    239: 247,  # '๏'
+    240: 68,  # '๐'
+    241: 56,  # '๑'
+    242: 59,  # '๒'
+    243: 65,  # '๓'
+    244: 69,  # '๔'
+    245: 60,  # '๕'
+    246: 70,  # '๖'
+    247: 80,  # '๗'
+    248: 71,  # '๘'
+    249: 87,  # '๙'
+    250: 248,  # '๚'
+    251: 249,  # '๛'
+    252: 250,  # None
+    253: 251,  # None
+    254: 252,  # None
+    255: 253,  # None
+}
+
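Taken together, the two tables are enough to reproduce the gist of the detection heuristic: map each byte to a frequency order, then rate adjacent letter pairs with the language model. The sketch below is illustrative only -- positive_ratio is a hypothetical helper, not chardet's SingleByteCharSetProber, which accumulates the same statistics incrementally -- but it shows how a stream of TIS-620 bytes turns into the positive-pair ratio that gets compared against the model's typical_positive_ratio.

def positive_ratio(byte_str, char_to_order_map, language_model):
    """Crude positive-pair ratio for a bytes object assumed to be TIS-620."""
    positive = pairs = 0
    prev = None
    for byte in byte_str:
        order = char_to_order_map.get(byte, 255)
        if order >= 251:      # reserved non-letter buckets, see legend above
            prev = None       # a digit/symbol/control byte breaks the run
            continue
        if prev is not None and prev in language_model:
            pairs += 1
            # Inner maps rate the following character 0 (negative) to 3 (positive).
            if language_model[prev].get(order, 0) == 3:
                positive += 1
        prev = order
    return positive / pairs if pairs else 0.0

# e.g. positive_ratio(thai_bytes, TIS_620_THAI_CHAR_TO_ORDER, THAI_LANG_MODEL):
# genuine Thai text should land near the 0.926386 recorded below, while
# mis-decoded input scores far lower.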
+TIS_620_THAI_MODEL = SingleByteCharSetModel(charset_name='TIS-620',
+                                            language='Thai',
+                                            char_to_order_map=TIS_620_THAI_CHAR_TO_ORDER,
+                                            language_model=THAI_LANG_MODEL,
+                                            typical_positive_ratio=0.926386,
+                                            keep_ascii_letters=False,
+                                            alphabet='กขฃคฅฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลฦวศษสหฬอฮฯะัาำิีึืฺุู฿เแโใไๅๆ็่้๊๋์ํ๎๏๐๑๒๓๔๕๖๗๘๙๚๛')
diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/langturkishmodel.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langturkishmodel.py
new file mode 100644
index 0000000..43f4230
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/langturkishmodel.py
@@ -0,0 +1,4383 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel
+
+
+# 3: Positive
+# 2: Likely
+# 1: Unlikely
+# 0: Negative
+
+TURKISH_LANG_MODEL = {
+    # Auto-generated entries in the same layout as THAI_LANG_MODEL above --
+    # one map of follow-character ratings per character order, here over the
+    # ASCII letters plus the Turkish-specific characters ('ç', 'ğ', 'ı',
+    # 'İ', 'ö', 'ü', 'Ç', 'Ö', 'Ü', 'Ş', 'ş', 'â', 'î') -- for 'A', 'B',
+    # 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P'
+    # and 'R', then continuing with 'S':
+ 35: { # 'S' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 1, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 1, # 'F' + 36: 1, # 'G' + 45: 1, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 3, # 'K' + 49: 1, # 'L' + 20: 1, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 1, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 1, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 1, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 0, # 'b' + 28: 2, # 'c' + 12: 0, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 0, # 'j' + 10: 1, # 'k' + 5: 1, # 'l' + 13: 2, # 'm' + 4: 1, # 'n' + 15: 0, # 'o' + 26: 0, # 'p' + 7: 0, 
# 'r' + 8: 0, # 's' + 9: 1, # 't' + 14: 2, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 1, # 'z' + 63: 0, # '·' + 54: 2, # 'Ç' + 50: 2, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 3, # 'ç' + 61: 0, # 'î' + 34: 1, # 'ö' + 17: 0, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 2, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 31: { # 'T' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 1, # 'J' + 16: 2, # 'K' + 49: 0, # 'L' + 20: 1, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 2, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 2, # 'b' + 28: 0, # 'c' + 12: 1, # 'd' + 2: 3, # 'e' + 18: 2, # 'f' + 27: 2, # 'g' + 25: 0, # 'h' + 3: 1, # 'i' + 24: 1, # 'j' + 10: 2, # 'k' + 5: 2, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 2, # 'p' + 7: 2, # 'r' + 8: 0, # 's' + 9: 2, # 't' + 14: 2, # 'u' + 32: 1, # 'v' + 57: 1, # 'w' + 58: 1, # 'x' + 11: 2, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 1, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 51: { # 'U' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 1, # 'F' + 36: 1, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 1, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 1, # 'N' + 42: 0, # 'O' + 48: 1, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 1, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 0, # 'b' + 28: 1, # 'c' + 12: 0, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 2, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 0, # 'j' + 10: 1, # 'k' + 5: 1, # 'l' + 13: 3, # 'm' + 4: 2, # 'n' + 15: 0, # 'o' + 26: 1, # 'p' + 7: 0, # 'r' + 8: 0, # 's' + 9: 0, # 't' + 14: 2, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 2, # 'z' + 63: 0, # '·' + 54: 1, # 'Ç' + 50: 1, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 0, # 'ü' + 30: 1, # 'ÄŸ' + 41: 1, # 'İ' + 6: 2, # 'ı' + 40: 0, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 38: { # 'V' + 23: 1, # 'A' + 37: 1, # 'B' + 47: 1, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 2, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 3, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 1, # 'P' + 44: 1, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 1, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 0, # 'b' + 28: 2, # 'c' + 12: 0, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 0, # 'j' + 10: 0, # 'k' + 5: 2, # 'l' + 13: 2, # 'm' + 4: 0, # 'n' + 15: 2, # 'o' + 26: 0, # 'p' + 7: 0, # 'r' + 8: 0, # 's' + 9: 1, # 't' + 14: 3, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 1, # 'y' + 22: 2, # 'z' + 63: 0, # '·' + 54: 1, # 'Ç' + 50: 1, # 'Ö' + 55: 0, # 'Ü' + 59: 1, # 'â' + 33: 2, # 'ç' + 61: 0, # 'î' + 34: 1, # 'ö' + 17: 0, # 'ü' + 30: 1, # 'ÄŸ' + 41: 1, # 'İ' + 6: 3, # 'ı' + 40: 2, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 62: { # 'W' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 
'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 0, # 'a' + 21: 0, # 'b' + 28: 0, # 'c' + 12: 0, # 'd' + 2: 0, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 0, # 'j' + 10: 0, # 'k' + 5: 0, # 'l' + 13: 0, # 'm' + 4: 0, # 'n' + 15: 0, # 'o' + 26: 0, # 'p' + 7: 0, # 'r' + 8: 0, # 's' + 9: 0, # 't' + 14: 0, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 0, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 0, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 43: { # 'Y' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 1, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 2, # 'F' + 36: 0, # 'G' + 45: 1, # 'H' + 53: 1, # 'I' + 60: 0, # 'J' + 16: 2, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 2, # 'N' + 42: 0, # 'O' + 48: 2, # 'P' + 44: 1, # 'R' + 35: 1, # 'S' + 31: 0, # 'T' + 51: 1, # 'U' + 38: 2, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 0, # 'b' + 28: 2, # 'c' + 12: 0, # 'd' + 2: 2, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 1, # 'j' + 10: 1, # 'k' + 5: 1, # 'l' + 13: 3, # 'm' + 4: 0, # 'n' + 15: 2, # 'o' + 26: 0, # 'p' + 7: 0, # 'r' + 8: 0, # 's' + 9: 0, # 't' + 14: 3, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 1, # 'x' + 11: 0, # 'y' + 22: 2, # 'z' + 63: 0, # '·' + 54: 1, # 'Ç' + 50: 2, # 'Ö' + 55: 1, # 'Ü' + 59: 1, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 1, # 'ö' + 17: 0, # 'ü' + 30: 1, # 'ÄŸ' + 41: 1, # 'İ' + 6: 0, # 'ı' + 40: 2, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 56: { # 'Z' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 2, # 'Z' + 1: 2, # 'a' + 21: 1, # 'b' + 28: 0, # 'c' + 12: 0, # 'd' + 2: 2, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 2, # 'i' + 24: 1, # 'j' + 10: 0, # 'k' + 5: 0, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 15: 0, # 'o' + 26: 0, # 'p' + 7: 1, # 'r' + 8: 1, # 's' + 9: 0, # 't' + 14: 2, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 1, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 1, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 1: { # 'a' + 23: 3, # 'A' + 37: 0, # 'B' + 47: 1, # 'C' + 39: 0, # 'D' + 29: 3, # 'E' + 52: 0, # 'F' + 36: 1, # 'G' + 45: 1, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 3, # 'M' + 46: 1, # 'N' + 42: 0, # 'O' + 48: 1, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 3, # 'T' + 51: 0, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 2, # 'Z' + 1: 2, # 'a' + 21: 3, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 2, # 'e' + 18: 3, # 'f' + 27: 3, # 'g' + 25: 3, # 'h' + 3: 3, # 'i' + 24: 3, # 'j' + 10: 3, # 'k' + 5: 0, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 15: 1, # 'o' + 26: 3, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 3, # 'v' + 57: 2, # 'w' + 58: 0, # 'x' + 11: 3, # 'y' + 22: 0, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 1, # 'ç' + 61: 1, # 'î' + 34: 1, # 'ö' + 17: 3, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 21: { # 'b' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 
39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 1, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 1, # 'J' + 16: 2, # 'K' + 49: 0, # 'L' + 20: 2, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 1, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 1, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 2, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 3, # 'g' + 25: 1, # 'h' + 3: 3, # 'i' + 24: 2, # 'j' + 10: 3, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 3, # 'p' + 7: 1, # 'r' + 8: 2, # 's' + 9: 2, # 't' + 14: 2, # 'u' + 32: 1, # 'v' + 57: 0, # 'w' + 58: 1, # 'x' + 11: 3, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 1, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 0, # 'ü' + 30: 1, # 'ÄŸ' + 41: 0, # 'İ' + 6: 2, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 28: { # 'c' + 23: 0, # 'A' + 37: 1, # 'B' + 47: 1, # 'C' + 39: 1, # 'D' + 29: 2, # 'E' + 52: 0, # 'F' + 36: 2, # 'G' + 45: 2, # 'H' + 53: 1, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 2, # 'M' + 46: 1, # 'N' + 42: 1, # 'O' + 48: 2, # 'P' + 44: 1, # 'R' + 35: 1, # 'S' + 31: 2, # 'T' + 51: 2, # 'U' + 38: 2, # 'V' + 62: 0, # 'W' + 43: 3, # 'Y' + 56: 0, # 'Z' + 1: 1, # 'a' + 21: 1, # 'b' + 28: 2, # 'c' + 12: 2, # 'd' + 2: 1, # 'e' + 18: 1, # 'f' + 27: 2, # 'g' + 25: 2, # 'h' + 3: 3, # 'i' + 24: 1, # 'j' + 10: 3, # 'k' + 5: 0, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 15: 2, # 'o' + 26: 2, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 1, # 'u' + 32: 0, # 'v' + 57: 1, # 'w' + 58: 0, # 'x' + 11: 2, # 'y' + 22: 1, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 1, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 1, # 'î' + 34: 2, # 'ö' + 17: 2, # 'ü' + 30: 2, # 'ÄŸ' + 41: 1, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 2, # 'ÅŸ' + }, + 12: { # 'd' + 23: 1, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 2, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 3, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 1, # 'S' + 31: 1, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 2, # 'b' + 28: 1, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 1, # 'f' + 27: 3, # 'g' + 25: 3, # 'h' + 3: 2, # 'i' + 24: 3, # 'j' + 10: 2, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 1, # 'o' + 26: 2, # 'p' + 7: 3, # 'r' + 8: 2, # 's' + 9: 2, # 't' + 14: 3, # 'u' + 32: 1, # 'v' + 57: 0, # 'w' + 58: 1, # 'x' + 11: 3, # 'y' + 22: 1, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 1, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 2, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 2: { # 'e' + 23: 2, # 'A' + 37: 0, # 'B' + 47: 2, # 'C' + 39: 0, # 'D' + 29: 3, # 'E' + 52: 1, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 1, # 'K' + 49: 0, # 'L' + 20: 3, # 'M' + 46: 1, # 'N' + 42: 0, # 'O' + 48: 1, # 'P' + 44: 1, # 'R' + 35: 0, # 'S' + 31: 3, # 'T' + 51: 0, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 1, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 3, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 2, # 'e' + 18: 3, # 'f' + 27: 3, # 'g' + 25: 3, # 'h' + 3: 3, # 'i' + 24: 3, # 'j' + 10: 3, # 'k' + 5: 0, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 15: 1, # 'o' + 26: 3, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 3, # 'v' + 57: 2, # 'w' + 58: 0, # 'x' + 11: 3, # 'y' + 22: 1, # 'z' + 
63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 1, # 'ç' + 61: 0, # 'î' + 34: 1, # 'ö' + 17: 3, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 18: { # 'f' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 2, # 'K' + 49: 0, # 'L' + 20: 2, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 2, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 1, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 2, # 'f' + 27: 1, # 'g' + 25: 1, # 'h' + 3: 1, # 'i' + 24: 1, # 'j' + 10: 1, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 2, # 'p' + 7: 1, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 1, # 'u' + 32: 2, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 1, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 1, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 1, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 1, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 27: { # 'g' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 1, # 'S' + 31: 1, # 'T' + 51: 0, # 'U' + 38: 2, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 1, # 'b' + 28: 0, # 'c' + 12: 1, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 2, # 'g' + 25: 1, # 'h' + 3: 2, # 'i' + 24: 3, # 'j' + 10: 2, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 2, # 'n' + 15: 0, # 'o' + 26: 1, # 'p' + 7: 2, # 'r' + 8: 2, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 1, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 1, # 'y' + 22: 0, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 0, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 2, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 25: { # 'h' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 2, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 0, # 'b' + 28: 0, # 'c' + 12: 2, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 1, # 'g' + 25: 2, # 'h' + 3: 2, # 'i' + 24: 3, # 'j' + 10: 3, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 1, # 'o' + 26: 1, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 2, # 't' + 14: 3, # 'u' + 32: 2, # 'v' + 57: 1, # 'w' + 58: 0, # 'x' + 11: 1, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 0, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 3: { # 'i' + 23: 2, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 1, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 3, # 'M' + 46: 0, # 'N' + 42: 1, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 1, # 'S' + 31: 2, # 'T' + 51: 0, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 2, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 2, # 'f' + 
27: 3, # 'g' + 25: 1, # 'h' + 3: 3, # 'i' + 24: 2, # 'j' + 10: 3, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 1, # 'o' + 26: 3, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 2, # 'v' + 57: 1, # 'w' + 58: 1, # 'x' + 11: 3, # 'y' + 22: 1, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 1, # 'Ü' + 59: 0, # 'â' + 33: 2, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 3, # 'ü' + 30: 0, # 'ÄŸ' + 41: 1, # 'İ' + 6: 2, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 24: { # 'j' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 1, # 'J' + 16: 2, # 'K' + 49: 0, # 'L' + 20: 2, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 1, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 1, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 1, # 'Z' + 1: 3, # 'a' + 21: 1, # 'b' + 28: 1, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 2, # 'f' + 27: 1, # 'g' + 25: 1, # 'h' + 3: 2, # 'i' + 24: 1, # 'j' + 10: 2, # 'k' + 5: 2, # 'l' + 13: 3, # 'm' + 4: 2, # 'n' + 15: 0, # 'o' + 26: 1, # 'p' + 7: 2, # 'r' + 8: 3, # 's' + 9: 2, # 't' + 14: 3, # 'u' + 32: 2, # 'v' + 57: 0, # 'w' + 58: 2, # 'x' + 11: 1, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 1, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 1, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 10: { # 'k' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 2, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 3, # 'T' + 51: 0, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 1, # 'Z' + 1: 3, # 'a' + 21: 2, # 'b' + 28: 0, # 'c' + 12: 2, # 'd' + 2: 3, # 'e' + 18: 1, # 'f' + 27: 2, # 'g' + 25: 2, # 'h' + 3: 3, # 'i' + 24: 2, # 'j' + 10: 2, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 3, # 'p' + 7: 2, # 'r' + 8: 2, # 's' + 9: 2, # 't' + 14: 3, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 1, # 'x' + 11: 3, # 'y' + 22: 0, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 3, # 'ç' + 61: 0, # 'î' + 34: 1, # 'ö' + 17: 3, # 'ü' + 30: 1, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 5: { # 'l' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 3, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 2, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 1, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 0, # 'a' + 21: 3, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 1, # 'e' + 18: 3, # 'f' + 27: 3, # 'g' + 25: 2, # 'h' + 3: 3, # 'i' + 24: 2, # 'j' + 10: 3, # 'k' + 5: 1, # 'l' + 13: 1, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 2, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 2, # 'u' + 32: 2, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 3, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 1, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 2, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 13: { # 'm' + 23: 1, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 3, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, 
# 'L' + 20: 3, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 3, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 1, # 'Y' + 56: 0, # 'Z' + 1: 2, # 'a' + 21: 3, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 2, # 'e' + 18: 3, # 'f' + 27: 3, # 'g' + 25: 3, # 'h' + 3: 3, # 'i' + 24: 3, # 'j' + 10: 3, # 'k' + 5: 0, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 15: 1, # 'o' + 26: 2, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 2, # 'u' + 32: 2, # 'v' + 57: 1, # 'w' + 58: 0, # 'x' + 11: 3, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 3, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 4: { # 'n' + 23: 1, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 1, # 'H' + 53: 0, # 'I' + 60: 2, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 3, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 2, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 2, # 'b' + 28: 1, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 1, # 'f' + 27: 2, # 'g' + 25: 3, # 'h' + 3: 2, # 'i' + 24: 2, # 'j' + 10: 3, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 1, # 'o' + 26: 3, # 'p' + 7: 2, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 2, # 'v' + 57: 0, # 'w' + 58: 2, # 'x' + 11: 3, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 1, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 2, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 1, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 15: { # 'o' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 1, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 2, # 'F' + 36: 1, # 'G' + 45: 1, # 'H' + 53: 1, # 'I' + 60: 0, # 'J' + 16: 3, # 'K' + 49: 2, # 'L' + 20: 0, # 'M' + 46: 2, # 'N' + 42: 1, # 'O' + 48: 2, # 'P' + 44: 1, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 0, # 'b' + 28: 2, # 'c' + 12: 0, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 1, # 'i' + 24: 2, # 'j' + 10: 1, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 2, # 'n' + 15: 2, # 'o' + 26: 0, # 'p' + 7: 1, # 'r' + 8: 0, # 's' + 9: 0, # 't' + 14: 3, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 2, # 'x' + 11: 0, # 'y' + 22: 2, # 'z' + 63: 0, # '·' + 54: 1, # 'Ç' + 50: 2, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 3, # 'ç' + 61: 0, # 'î' + 34: 1, # 'ö' + 17: 0, # 'ü' + 30: 2, # 'ÄŸ' + 41: 2, # 'İ' + 6: 3, # 'ı' + 40: 2, # 'Åž' + 19: 2, # 'ÅŸ' + }, + 26: { # 'p' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 1, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 1, # 'b' + 28: 0, # 'c' + 12: 1, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 1, # 'g' + 25: 1, # 'h' + 3: 2, # 'i' + 24: 3, # 'j' + 10: 1, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 2, # 'n' + 15: 0, # 'o' + 26: 2, # 'p' + 7: 2, # 'r' + 8: 1, # 's' + 9: 1, # 't' + 14: 3, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 1, # 'x' + 11: 1, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 3, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 1, 
# 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 7: { # 'r' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 1, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 2, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 2, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 2, # 'T' + 51: 1, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 1, # 'Z' + 1: 3, # 'a' + 21: 1, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 2, # 'g' + 25: 3, # 'h' + 3: 2, # 'i' + 24: 2, # 'j' + 10: 3, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 2, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 2, # 'v' + 57: 0, # 'w' + 58: 1, # 'x' + 11: 2, # 'y' + 22: 0, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 2, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 3, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 2, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 8: { # 's' + 23: 1, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 1, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 3, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 2, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 1, # 'Z' + 1: 3, # 'a' + 21: 2, # 'b' + 28: 1, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 2, # 'g' + 25: 2, # 'h' + 3: 2, # 'i' + 24: 3, # 'j' + 10: 3, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 3, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 2, # 'v' + 57: 0, # 'w' + 58: 1, # 'x' + 11: 2, # 'y' + 22: 1, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 2, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 2, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 9: { # 't' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 1, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 2, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 2, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 1, # 'Z' + 1: 3, # 'a' + 21: 3, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 2, # 'f' + 27: 2, # 'g' + 25: 2, # 'h' + 3: 2, # 'i' + 24: 2, # 'j' + 10: 3, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 2, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 3, # 'v' + 57: 0, # 'w' + 58: 2, # 'x' + 11: 2, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 3, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 2, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 14: { # 'u' + 23: 3, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 3, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 1, # 'H' + 53: 0, # 'I' + 60: 1, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 3, # 'M' + 46: 2, # 'N' + 42: 0, # 'O' + 48: 1, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 3, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 1, # 'Y' + 56: 2, # 'Z' + 1: 2, # 'a' + 21: 3, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 2, # 'e' + 18: 2, # 'f' + 27: 3, # 'g' + 25: 3, # 'h' + 3: 3, # 'i' + 24: 2, # 'j' + 10: 3, # 'k' + 5: 0, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 
26: 3, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 2, # 'v' + 57: 2, # 'w' + 58: 0, # 'x' + 11: 3, # 'y' + 22: 0, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 3, # 'ü' + 30: 1, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 32: { # 'v' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 1, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 0, # 'b' + 28: 0, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 1, # 'j' + 10: 1, # 'k' + 5: 3, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 1, # 'p' + 7: 1, # 'r' + 8: 2, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 1, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 2, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 0, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 1, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 57: { # 'w' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 1, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 1, # 'a' + 21: 0, # 'b' + 28: 0, # 'c' + 12: 0, # 'd' + 2: 2, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 1, # 'h' + 3: 0, # 'i' + 24: 0, # 'j' + 10: 1, # 'k' + 5: 0, # 'l' + 13: 0, # 'm' + 4: 1, # 'n' + 15: 0, # 'o' + 26: 0, # 'p' + 7: 0, # 'r' + 8: 1, # 's' + 9: 0, # 't' + 14: 1, # 'u' + 32: 0, # 'v' + 57: 2, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 0, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 1, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 0, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 58: { # 'x' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 1, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 1, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 1, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 0, # 'a' + 21: 1, # 'b' + 28: 0, # 'c' + 12: 2, # 'd' + 2: 1, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 2, # 'i' + 24: 2, # 'j' + 10: 1, # 'k' + 5: 0, # 'l' + 13: 0, # 'm' + 4: 2, # 'n' + 15: 0, # 'o' + 26: 0, # 'p' + 7: 1, # 'r' + 8: 2, # 's' + 9: 1, # 't' + 14: 0, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 2, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 1, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 2, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 11: { # 'y' + 23: 1, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 1, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 1, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 1, # 'T' + 51: 0, # 'U' + 
38: 0, # 'V' + 62: 0, # 'W' + 43: 1, # 'Y' + 56: 1, # 'Z' + 1: 3, # 'a' + 21: 1, # 'b' + 28: 0, # 'c' + 12: 2, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 2, # 'g' + 25: 2, # 'h' + 3: 2, # 'i' + 24: 1, # 'j' + 10: 2, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 1, # 'p' + 7: 2, # 'r' + 8: 1, # 's' + 9: 2, # 't' + 14: 3, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 1, # 'x' + 11: 3, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 3, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 2, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 22: { # 'z' + 23: 2, # 'A' + 37: 2, # 'B' + 47: 1, # 'C' + 39: 2, # 'D' + 29: 3, # 'E' + 52: 1, # 'F' + 36: 2, # 'G' + 45: 2, # 'H' + 53: 1, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 3, # 'M' + 46: 2, # 'N' + 42: 2, # 'O' + 48: 2, # 'P' + 44: 1, # 'R' + 35: 1, # 'S' + 31: 3, # 'T' + 51: 2, # 'U' + 38: 2, # 'V' + 62: 0, # 'W' + 43: 2, # 'Y' + 56: 1, # 'Z' + 1: 1, # 'a' + 21: 2, # 'b' + 28: 1, # 'c' + 12: 2, # 'd' + 2: 2, # 'e' + 18: 3, # 'f' + 27: 2, # 'g' + 25: 2, # 'h' + 3: 3, # 'i' + 24: 2, # 'j' + 10: 3, # 'k' + 5: 0, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 15: 2, # 'o' + 26: 2, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 0, # 'u' + 32: 2, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 3, # 'y' + 22: 2, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 2, # 'Ü' + 59: 1, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 2, # 'ö' + 17: 2, # 'ü' + 30: 2, # 'ÄŸ' + 41: 1, # 'İ' + 6: 3, # 'ı' + 40: 1, # 'Åž' + 19: 2, # 'ÅŸ' + }, + 63: { # '·' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 0, # 'a' + 21: 0, # 'b' + 28: 0, # 'c' + 12: 0, # 'd' + 2: 1, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 0, # 'j' + 10: 0, # 'k' + 5: 0, # 'l' + 13: 2, # 'm' + 4: 0, # 'n' + 15: 0, # 'o' + 26: 0, # 'p' + 7: 0, # 'r' + 8: 0, # 's' + 9: 0, # 't' + 14: 2, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 0, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 0, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 54: { # 'Ç' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 1, # 'C' + 39: 1, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 1, # 'G' + 45: 1, # 'H' + 53: 1, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 0, # 'N' + 42: 1, # 'O' + 48: 1, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 1, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 2, # 'Y' + 56: 0, # 'Z' + 1: 0, # 'a' + 21: 1, # 'b' + 28: 0, # 'c' + 12: 1, # 'd' + 2: 0, # 'e' + 18: 0, # 'f' + 27: 1, # 'g' + 25: 0, # 'h' + 3: 3, # 'i' + 24: 0, # 'j' + 10: 1, # 'k' + 5: 0, # 'l' + 13: 0, # 'm' + 4: 2, # 'n' + 15: 1, # 'o' + 26: 0, # 'p' + 7: 2, # 'r' + 8: 0, # 's' + 9: 1, # 't' + 14: 0, # 'u' + 32: 2, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 2, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 1, # 'ö' + 17: 0, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 2, # 'ı' + 40: 0, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 50: { # 'Ö' + 23: 0, # 'A' + 37: 0, 
# 'B' + 47: 1, # 'C' + 39: 1, # 'D' + 29: 2, # 'E' + 52: 0, # 'F' + 36: 1, # 'G' + 45: 2, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 1, # 'M' + 46: 1, # 'N' + 42: 2, # 'O' + 48: 2, # 'P' + 44: 1, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 1, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 2, # 'Y' + 56: 0, # 'Z' + 1: 0, # 'a' + 21: 2, # 'b' + 28: 1, # 'c' + 12: 2, # 'd' + 2: 0, # 'e' + 18: 1, # 'f' + 27: 1, # 'g' + 25: 1, # 'h' + 3: 2, # 'i' + 24: 0, # 'j' + 10: 2, # 'k' + 5: 0, # 'l' + 13: 0, # 'm' + 4: 3, # 'n' + 15: 2, # 'o' + 26: 2, # 'p' + 7: 3, # 'r' + 8: 1, # 's' + 9: 2, # 't' + 14: 0, # 'u' + 32: 1, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 1, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 2, # 'ö' + 17: 2, # 'ü' + 30: 1, # 'ÄŸ' + 41: 0, # 'İ' + 6: 2, # 'ı' + 40: 0, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 55: { # 'Ü' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 2, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 1, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 1, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 2, # 'a' + 21: 0, # 'b' + 28: 2, # 'c' + 12: 0, # 'd' + 2: 2, # 'e' + 18: 0, # 'f' + 27: 1, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 0, # 'j' + 10: 0, # 'k' + 5: 1, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 15: 0, # 'o' + 26: 0, # 'p' + 7: 0, # 'r' + 8: 0, # 's' + 9: 1, # 't' + 14: 2, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 1, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 1, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 1, # 'ö' + 17: 0, # 'ü' + 30: 1, # 'ÄŸ' + 41: 1, # 'İ' + 6: 0, # 'ı' + 40: 0, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 59: { # 'â' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 1, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 1, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 2, # 'a' + 21: 0, # 'b' + 28: 0, # 'c' + 12: 0, # 'd' + 2: 2, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 0, # 'j' + 10: 0, # 'k' + 5: 0, # 'l' + 13: 2, # 'm' + 4: 0, # 'n' + 15: 1, # 'o' + 26: 0, # 'p' + 7: 0, # 'r' + 8: 0, # 's' + 9: 0, # 't' + 14: 2, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 1, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 0, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 1, # 'ı' + 40: 1, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 33: { # 'ç' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 3, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 1, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 2, # 'T' + 51: 0, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 0, # 'Z' + 1: 0, # 'a' + 21: 3, # 'b' + 28: 0, # 'c' + 12: 2, # 'd' + 2: 0, # 'e' + 18: 2, # 'f' + 27: 1, # 'g' + 25: 3, # 'h' + 3: 3, # 'i' + 24: 0, # 'j' + 10: 3, # 'k' + 5: 0, # 'l' + 13: 0, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 1, # 'p' + 7: 3, # 'r' + 8: 2, # 's' + 9: 3, # 't' + 14: 0, # 'u' + 32: 2, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 
2, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 1, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 61: { # 'î' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 0, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 0, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 1, # 'Z' + 1: 2, # 'a' + 21: 0, # 'b' + 28: 0, # 'c' + 12: 0, # 'd' + 2: 2, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 1, # 'j' + 10: 0, # 'k' + 5: 0, # 'l' + 13: 1, # 'm' + 4: 1, # 'n' + 15: 0, # 'o' + 26: 0, # 'p' + 7: 0, # 'r' + 8: 0, # 's' + 9: 0, # 't' + 14: 1, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 1, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 1, # 'î' + 34: 0, # 'ö' + 17: 0, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 1, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 34: { # 'ö' + 23: 0, # 'A' + 37: 1, # 'B' + 47: 1, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 2, # 'F' + 36: 1, # 'G' + 45: 1, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 3, # 'K' + 49: 1, # 'L' + 20: 0, # 'M' + 46: 1, # 'N' + 42: 1, # 'O' + 48: 2, # 'P' + 44: 1, # 'R' + 35: 1, # 'S' + 31: 1, # 'T' + 51: 1, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 1, # 'Z' + 1: 3, # 'a' + 21: 1, # 'b' + 28: 2, # 'c' + 12: 1, # 'd' + 2: 3, # 'e' + 18: 0, # 'f' + 27: 2, # 'g' + 25: 2, # 'h' + 3: 1, # 'i' + 24: 2, # 'j' + 10: 1, # 'k' + 5: 2, # 'l' + 13: 3, # 'm' + 4: 2, # 'n' + 15: 2, # 'o' + 26: 0, # 'p' + 7: 0, # 'r' + 8: 3, # 's' + 9: 1, # 't' + 14: 3, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 1, # 'y' + 22: 2, # 'z' + 63: 0, # '·' + 54: 1, # 'Ç' + 50: 2, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 2, # 'ç' + 61: 0, # 'î' + 34: 2, # 'ö' + 17: 0, # 'ü' + 30: 2, # 'ÄŸ' + 41: 1, # 'İ' + 6: 1, # 'ı' + 40: 2, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 17: { # 'ü' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 1, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 0, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 1, # 'J' + 16: 1, # 'K' + 49: 0, # 'L' + 20: 1, # 'M' + 46: 0, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 1, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 0, # 'Y' + 56: 1, # 'Z' + 1: 3, # 'a' + 21: 0, # 'b' + 28: 0, # 'c' + 12: 1, # 'd' + 2: 3, # 'e' + 18: 1, # 'f' + 27: 2, # 'g' + 25: 0, # 'h' + 3: 1, # 'i' + 24: 1, # 'j' + 10: 2, # 'k' + 5: 3, # 'l' + 13: 2, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 2, # 'p' + 7: 2, # 'r' + 8: 3, # 's' + 9: 2, # 't' + 14: 3, # 'u' + 32: 1, # 'v' + 57: 1, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 1, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 2, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 2, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 30: { # 'ÄŸ' + 23: 0, # 'A' + 37: 2, # 'B' + 47: 1, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 2, # 'F' + 36: 1, # 'G' + 45: 0, # 'H' + 53: 1, # 'I' + 60: 0, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 1, # 'M' + 46: 2, # 'N' + 42: 2, # 'O' + 48: 1, # 'P' + 44: 1, # 'R' + 35: 0, # 'S' + 31: 1, # 'T' + 51: 0, # 'U' + 38: 2, # 'V' + 62: 0, # 'W' + 43: 2, # 'Y' + 56: 0, # 'Z' + 1: 3, # 'a' + 21: 0, # 'b' + 28: 2, # 'c' + 12: 0, # 'd' + 2: 
2, # 'e' + 18: 0, # 'f' + 27: 0, # 'g' + 25: 0, # 'h' + 3: 0, # 'i' + 24: 3, # 'j' + 10: 1, # 'k' + 5: 2, # 'l' + 13: 3, # 'm' + 4: 0, # 'n' + 15: 1, # 'o' + 26: 0, # 'p' + 7: 1, # 'r' + 8: 0, # 's' + 9: 0, # 't' + 14: 3, # 'u' + 32: 0, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 0, # 'y' + 22: 2, # 'z' + 63: 0, # '·' + 54: 2, # 'Ç' + 50: 2, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 1, # 'ç' + 61: 0, # 'î' + 34: 2, # 'ö' + 17: 0, # 'ü' + 30: 1, # 'ÄŸ' + 41: 2, # 'İ' + 6: 2, # 'ı' + 40: 2, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 41: { # 'İ' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 1, # 'C' + 39: 1, # 'D' + 29: 1, # 'E' + 52: 0, # 'F' + 36: 2, # 'G' + 45: 2, # 'H' + 53: 0, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 2, # 'M' + 46: 1, # 'N' + 42: 1, # 'O' + 48: 2, # 'P' + 44: 0, # 'R' + 35: 1, # 'S' + 31: 1, # 'T' + 51: 1, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 2, # 'Y' + 56: 0, # 'Z' + 1: 1, # 'a' + 21: 2, # 'b' + 28: 1, # 'c' + 12: 2, # 'd' + 2: 1, # 'e' + 18: 0, # 'f' + 27: 3, # 'g' + 25: 2, # 'h' + 3: 2, # 'i' + 24: 2, # 'j' + 10: 2, # 'k' + 5: 0, # 'l' + 13: 1, # 'm' + 4: 3, # 'n' + 15: 1, # 'o' + 26: 1, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 2, # 't' + 14: 0, # 'u' + 32: 0, # 'v' + 57: 1, # 'w' + 58: 0, # 'x' + 11: 2, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 1, # 'Ü' + 59: 1, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 1, # 'ö' + 17: 1, # 'ü' + 30: 2, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 1, # 'ÅŸ' + }, + 6: { # 'ı' + 23: 2, # 'A' + 37: 0, # 'B' + 47: 0, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 0, # 'F' + 36: 1, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 2, # 'J' + 16: 3, # 'K' + 49: 0, # 'L' + 20: 3, # 'M' + 46: 1, # 'N' + 42: 0, # 'O' + 48: 0, # 'P' + 44: 0, # 'R' + 35: 0, # 'S' + 31: 2, # 'T' + 51: 0, # 'U' + 38: 0, # 'V' + 62: 0, # 'W' + 43: 2, # 'Y' + 56: 1, # 'Z' + 1: 3, # 'a' + 21: 2, # 'b' + 28: 1, # 'c' + 12: 3, # 'd' + 2: 3, # 'e' + 18: 3, # 'f' + 27: 3, # 'g' + 25: 2, # 'h' + 3: 3, # 'i' + 24: 3, # 'j' + 10: 3, # 'k' + 5: 3, # 'l' + 13: 3, # 'm' + 4: 3, # 'n' + 15: 0, # 'o' + 26: 3, # 'p' + 7: 3, # 'r' + 8: 3, # 's' + 9: 3, # 't' + 14: 3, # 'u' + 32: 3, # 'v' + 57: 1, # 'w' + 58: 1, # 'x' + 11: 3, # 'y' + 22: 0, # 'z' + 63: 1, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 0, # 'Ü' + 59: 0, # 'â' + 33: 2, # 'ç' + 61: 0, # 'î' + 34: 0, # 'ö' + 17: 3, # 'ü' + 30: 0, # 'ÄŸ' + 41: 0, # 'İ' + 6: 3, # 'ı' + 40: 0, # 'Åž' + 19: 0, # 'ÅŸ' + }, + 40: { # 'Åž' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 1, # 'C' + 39: 1, # 'D' + 29: 1, # 'E' + 52: 0, # 'F' + 36: 1, # 'G' + 45: 2, # 'H' + 53: 1, # 'I' + 60: 0, # 'J' + 16: 0, # 'K' + 49: 0, # 'L' + 20: 2, # 'M' + 46: 1, # 'N' + 42: 1, # 'O' + 48: 2, # 'P' + 44: 2, # 'R' + 35: 1, # 'S' + 31: 1, # 'T' + 51: 0, # 'U' + 38: 1, # 'V' + 62: 0, # 'W' + 43: 2, # 'Y' + 56: 1, # 'Z' + 1: 0, # 'a' + 21: 2, # 'b' + 28: 0, # 'c' + 12: 2, # 'd' + 2: 0, # 'e' + 18: 3, # 'f' + 27: 0, # 'g' + 25: 2, # 'h' + 3: 3, # 'i' + 24: 2, # 'j' + 10: 1, # 'k' + 5: 0, # 'l' + 13: 1, # 'm' + 4: 3, # 'n' + 15: 2, # 'o' + 26: 0, # 'p' + 7: 3, # 'r' + 8: 2, # 's' + 9: 2, # 't' + 14: 1, # 'u' + 32: 3, # 'v' + 57: 0, # 'w' + 58: 0, # 'x' + 11: 2, # 'y' + 22: 0, # 'z' + 63: 0, # '·' + 54: 0, # 'Ç' + 50: 0, # 'Ö' + 55: 1, # 'Ü' + 59: 0, # 'â' + 33: 0, # 'ç' + 61: 0, # 'î' + 34: 2, # 'ö' + 17: 1, # 'ü' + 30: 2, # 'ÄŸ' + 41: 0, # 'İ' + 6: 2, # 'ı' + 40: 1, # 'Åž' + 19: 2, # 'ÅŸ' + }, + 19: { # 'ÅŸ' + 23: 0, # 'A' + 37: 0, # 'B' + 47: 1, # 'C' + 39: 0, # 'D' + 29: 0, # 'E' + 52: 2, # 'F' + 36: 1, # 'G' + 45: 0, # 'H' + 53: 0, # 'I' + 60: 0, # 
'J'
+        16: 3,  # 'K'
+        49: 2,  # 'L'
+        20: 0,  # 'M'
+        46: 1,  # 'N'
+        42: 1,  # 'O'
+        48: 1,  # 'P'
+        44: 1,  # 'R'
+        35: 1,  # 'S'
+        31: 0,  # 'T'
+        51: 1,  # 'U'
+        38: 1,  # 'V'
+        62: 0,  # 'W'
+        43: 1,  # 'Y'
+        56: 0,  # 'Z'
+        1: 3,  # 'a'
+        21: 1,  # 'b'
+        28: 2,  # 'c'
+        12: 0,  # 'd'
+        2: 3,  # 'e'
+        18: 0,  # 'f'
+        27: 2,  # 'g'
+        25: 1,  # 'h'
+        3: 1,  # 'i'
+        24: 0,  # 'j'
+        10: 2,  # 'k'
+        5: 2,  # 'l'
+        13: 3,  # 'm'
+        4: 0,  # 'n'
+        15: 0,  # 'o'
+        26: 1,  # 'p'
+        7: 3,  # 'r'
+        8: 0,  # 's'
+        9: 0,  # 't'
+        14: 3,  # 'u'
+        32: 0,  # 'v'
+        57: 0,  # 'w'
+        58: 0,  # 'x'
+        11: 0,  # 'y'
+        22: 2,  # 'z'
+        63: 0,  # '·'
+        54: 1,  # 'Ç'
+        50: 2,  # 'Ö'
+        55: 0,  # 'Ü'
+        59: 0,  # 'â'
+        33: 1,  # 'ç'
+        61: 1,  # 'î'
+        34: 2,  # 'ö'
+        17: 0,  # 'ü'
+        30: 1,  # 'ğ'
+        41: 1,  # 'İ'
+        6: 1,  # 'ı'
+        40: 1,  # 'Ş'
+        19: 1,  # 'ş'
+    },
+}
+
+# 255: Undefined characters that did not exist in training text
+# 254: Carriage/Return
+# 253: symbol (punctuation) that does not belong to word
+# 252: 0 - 9
+# 251: Control characters
+
+# Character Mapping Table(s):
+ISO_8859_9_TURKISH_CHAR_TO_ORDER = {
+    0: 255,  # '\x00'
+    1: 255,  # '\x01'
+    2: 255,  # '\x02'
+    3: 255,  # '\x03'
+    4: 255,  # '\x04'
+    5: 255,  # '\x05'
+    6: 255,  # '\x06'
+    7: 255,  # '\x07'
+    8: 255,  # '\x08'
+    9: 255,  # '\t'
+    10: 255,  # '\n'
+    11: 255,  # '\x0b'
+    12: 255,  # '\x0c'
+    13: 255,  # '\r'
+    14: 255,  # '\x0e'
+    15: 255,  # '\x0f'
+    16: 255,  # '\x10'
+    17: 255,  # '\x11'
+    18: 255,  # '\x12'
+    19: 255,  # '\x13'
+    20: 255,  # '\x14'
+    21: 255,  # '\x15'
+    22: 255,  # '\x16'
+    23: 255,  # '\x17'
+    24: 255,  # '\x18'
+    25: 255,  # '\x19'
+    26: 255,  # '\x1a'
+    27: 255,  # '\x1b'
+    28: 255,  # '\x1c'
+    29: 255,  # '\x1d'
+    30: 255,  # '\x1e'
+    31: 255,  # '\x1f'
+    32: 255,  # ' '
+    33: 255,  # '!'
+    34: 255,  # '"'
+    35: 255,  # '#'
+    36: 255,  # '$'
+    37: 255,  # '%'
+    38: 255,  # '&'
+    39: 255,  # "'"
+    40: 255,  # '('
+    41: 255,  # ')'
+    42: 255,  # '*'
+    43: 255,  # '+'
+    44: 255,  # ','
+    45: 255,  # '-'
+    46: 255,  # '.'
+    47: 255,  # '/'
+    48: 255,  # '0'
+    49: 255,  # '1'
+    50: 255,  # '2'
+    51: 255,  # '3'
+    52: 255,  # '4'
+    53: 255,  # '5'
+    54: 255,  # '6'
+    55: 255,  # '7'
+    56: 255,  # '8'
+    57: 255,  # '9'
+    58: 255,  # ':'
+    59: 255,  # ';'
+    60: 255,  # '<'
+    61: 255,  # '='
+    62: 255,  # '>'
+    63: 255,  # '?'
+    64: 255,  # '@'
+    65: 23,  # 'A'
+    66: 37,  # 'B'
+    67: 47,  # 'C'
+    68: 39,  # 'D'
+    69: 29,  # 'E'
+    70: 52,  # 'F'
+    71: 36,  # 'G'
+    72: 45,  # 'H'
+    73: 53,  # 'I'
+    74: 60,  # 'J'
+    75: 16,  # 'K'
+    76: 49,  # 'L'
+    77: 20,  # 'M'
+    78: 46,  # 'N'
+    79: 42,  # 'O'
+    80: 48,  # 'P'
+    81: 69,  # 'Q'
+    82: 44,  # 'R'
+    83: 35,  # 'S'
+    84: 31,  # 'T'
+    85: 51,  # 'U'
+    86: 38,  # 'V'
+    87: 62,  # 'W'
+    88: 65,  # 'X'
+    89: 43,  # 'Y'
+    90: 56,  # 'Z'
+    91: 255,  # '['
+    92: 255,  # '\\'
+    93: 255,  # ']'
+    94: 255,  # '^'
+    95: 255,  # '_'
+    96: 255,  # '`'
+    97: 1,  # 'a'
+    98: 21,  # 'b'
+    99: 28,  # 'c'
+    100: 12,  # 'd'
+    101: 2,  # 'e'
+    102: 18,  # 'f'
+    103: 27,  # 'g'
+    104: 25,  # 'h'
+    105: 3,  # 'i'
+    106: 24,  # 'j'
+    107: 10,  # 'k'
+    108: 5,  # 'l'
+    109: 13,  # 'm'
+    110: 4,  # 'n'
+    111: 15,  # 'o'
+    112: 26,  # 'p'
+    113: 64,  # 'q'
+    114: 7,  # 'r'
+    115: 8,  # 's'
+    116: 9,  # 't'
+    117: 14,  # 'u'
+    118: 32,  # 'v'
+    119: 57,  # 'w'
+    120: 58,  # 'x'
+    121: 11,  # 'y'
+    122: 22,  # 'z'
+    123: 255,  # '{'
+    124: 255,  # '|'
+    125: 255,  # '}'
+    126: 255,  # '~'
+    127: 255,  # '\x7f'
+    128: 180,  # '\x80'
+    129: 179,  # '\x81'
+    130: 178,  # '\x82'
+    131: 177,  # '\x83'
+    132: 176,  # '\x84'
+    133: 175,  # '\x85'
+    134: 174,  # '\x86'
+    135: 173,  # '\x87'
+    136: 172,  # '\x88'
+    137: 171,  # '\x89'
+    138: 170,  # '\x8a'
+    139: 169,  # '\x8b'
+    140: 168,  # '\x8c'
+    141: 167,  # '\x8d'
+    142: 166,  # '\x8e'
+    143: 165,  # '\x8f'
+    144: 164,  # '\x90'
+    145: 163,  # '\x91'
+    146: 162,  # '\x92'
+    147: 161,  # '\x93'
+    148: 160,  # '\x94'
+    149: 159,  # '\x95'
+    150: 101,  # '\x96'
+    151: 158,  # '\x97'
+    152: 157,  # '\x98'
+    153: 156,  # '\x99'
+    154: 155,  # '\x9a'
+    155: 154,  # '\x9b'
+    156: 153,  # '\x9c'
+    157: 152,  # '\x9d'
+    158: 151,  # '\x9e'
+    159: 106,  # '\x9f'
+    160: 150,  # '\xa0'
+    161: 149,  # '¡'
+    162: 148,  # '¢'
+    163: 147,  # '£'
+    164: 146,  # '¤'
+    165: 145,  # '¥'
+    166: 144,  # '¦'
+    167: 100,  # '§'
+    168: 143,  # '¨'
+    169: 142,  # '©'
+    170: 141,  # 'ª'
+    171: 140,  # '«'
+    172: 139,  # '¬'
+    173: 138,  # '\xad'
+    174: 137,  # '®'
+    175: 136,  # '¯'
+    176: 94,  # '°'
+    177: 80,  # '±'
+    178: 93,  # '²'
+    179: 135,  # '³'
+    180: 105,  # '´'
+    181: 134,  # 'µ'
+    182: 133,  # '¶'
+    183: 63,  # '·'
+    184: 132,  # '¸'
+    185: 131,  # '¹'
+    186: 130,  # 'º'
+    187: 129,  # '»'
+    188: 128,  # '¼'
+    189: 127,  # '½'
+    190: 126,  # '¾'
+    191: 125,  # '¿'
+    192: 124,  # 'À'
+    193: 104,  # 'Á'
+    194: 73,  # 'Â'
+    195: 99,  # 'Ã'
+    196: 79,  # 'Ä'
+    197: 85,  # 'Å'
+    198: 123,  # 'Æ'
+    199: 54,  # 'Ç'
+    200: 122,  # 'È'
+    201: 98,  # 'É'
+    202: 92,  # 'Ê'
+    203: 121,  # 'Ë'
+    204: 120,  # 'Ì'
+    205: 91,  # 'Í'
+    206: 103,  # 'Î'
+    207: 119,  # 'Ï'
+    208: 68,  # 'Ğ'
+    209: 118,  # 'Ñ'
+    210: 117,  # 'Ò'
+    211: 97,  # 'Ó'
+    212: 116,  # 'Ô'
+    213: 115,  # 'Õ'
+    214: 50,  # 'Ö'
+    215: 90,  # '×'
+    216: 114,  # 'Ø'
+    217: 113,  # 'Ù'
+    218: 112,  # 'Ú'
+    219: 111,  # 'Û'
+    220: 55,  # 'Ü'
+    221: 41,  # 'İ'
+    222: 40,  # 'Ş'
+    223: 86,  # 'ß'
+    224: 89,  # 'à'
+    225: 70,  # 'á'
+    226: 59,  # 'â'
+    227: 78,  # 'ã'
+    228: 71,  # 'ä'
+    229: 82,  # 'å'
+    230: 88,  # 'æ'
+    231: 33,  # 'ç'
+    232: 77,  # 'è'
+    233: 66,  # 'é'
+    234: 84,  # 'ê'
+    235: 83,  # 'ë'
+    236: 110,  # 'ì'
+    237: 75,  # 'í'
+    238: 61,  # 'î'
+    239: 96,  # 'ï'
+    240: 30,  # 'ğ'
+    241: 67,  # 'ñ'
+    242: 109,  # 'ò'
+    243: 74,  # 'ó'
+    244: 87,  # 'ô'
+    245: 102,  # 'õ'
+    246: 34,  # 'ö'
+    247: 95,  # '÷'
+    248: 81,  # 'ø'
+    249: 108,  # 'ù'
+    250: 76,  # 'ú'
+    251: 72,  # 'û'
+    252: 17,  # 'ü'
+    253: 6,  # 'ı'
+    254: 19,  # 'ş'
+    255: 107,  # 'ÿ'
+}
+
+ISO_8859_9_TURKISH_MODEL = SingleByteCharSetModel(charset_name='ISO-8859-9',
+                                                  language='Turkish',
+                                                  char_to_order_map=ISO_8859_9_TURKISH_CHAR_TO_ORDER,
+                                                  language_model=TURKISH_LANG_MODEL,
+                                                  typical_positive_ratio=0.97029,
+                                                  keep_ascii_letters=True,
+                                                  alphabet='ABCDEFGHIJKLMNOPRSTUVYZabcdefghijklmnoprstuvyzÂÇÎÖÛÜâçîöûüĞğİıŞş')
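The model above is consumed by chardet's single-byte prober: each byte is mapped through the char-to-order table, and TURKISH_LANG_MODEL rates each pair of consecutive orders from 0 (unseen in training text) to 3 (very likely). Roughly speaking, the closer the share of "very likely" pairs gets to typical_positive_ratio (0.97029 here), the more confident the prober becomes. A minimal sketch of that idea, with toy stand-in tables rather than the real chardet internals:

```python
# Illustrative only: a toy scorer in the spirit of chardet's single-byte
# prober. TOY_CHAR_TO_ORDER / TOY_LANG_MODEL are hypothetical stand-ins
# for ISO_8859_9_TURKISH_CHAR_TO_ORDER / TURKISH_LANG_MODEL.
TOY_CHAR_TO_ORDER = {ord('a'): 1, ord('e'): 2, ord('n'): 4, ord('r'): 7}
TOY_LANG_MODEL = {
    1: {4: 3, 7: 3},
    2: {4: 3, 7: 3},
    4: {1: 3, 2: 3},
    7: {1: 3, 2: 3},
}

def toy_positive_ratio(data: bytes) -> float:
    """Fraction of adjacent letter pairs the model rates 'very likely' (3)."""
    orders = [TOY_CHAR_TO_ORDER[b] for b in data if b in TOY_CHAR_TO_ORDER]
    pairs = list(zip(orders, orders[1:]))
    if not pairs:
        return 0.0
    hits = sum(1 for a, b in pairs if TOY_LANG_MODEL.get(a, {}).get(b, 0) == 3)
    return hits / len(pairs)

print(toy_positive_ratio(b"anane"))  # 1.0: every bigram is rated 'very likely'
```

In the real prober this ratio is weighed against typical_positive_ratio when computing confidence, which is why the generated models record that constant alongside the tables.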
OTH, # D0 - D7 + ACV, ACV, ACV, ACV, ACV, ACO, ACO, ACO, # D8 - DF + ASV, ASV, ASV, ASV, ASV, ASV, ASO, ASO, # E0 - E7 + ASV, ASV, ASV, ASV, ASV, ASV, ASV, ASV, # E8 - EF + ASO, ASO, ASV, ASV, ASV, ASV, ASV, OTH, # F0 - F7 + ASV, ASV, ASV, ASV, ASV, ASO, ASO, ASO, # F8 - FF +) + +# 0 : illegal +# 1 : very unlikely +# 2 : normal +# 3 : very likely +Latin1ClassModel = ( +# UDF OTH ASC ASS ACV ACO ASV ASO + 0, 0, 0, 0, 0, 0, 0, 0, # UDF + 0, 3, 3, 3, 3, 3, 3, 3, # OTH + 0, 3, 3, 3, 3, 3, 3, 3, # ASC + 0, 3, 3, 3, 1, 1, 3, 3, # ASS + 0, 3, 3, 3, 1, 2, 1, 2, # ACV + 0, 3, 3, 3, 3, 3, 3, 3, # ACO + 0, 3, 1, 3, 1, 1, 1, 3, # ASV + 0, 3, 1, 3, 1, 1, 3, 3, # ASO +) + + +class Latin1Prober(CharSetProber): + def __init__(self): + super(Latin1Prober, self).__init__() + self._last_char_class = None + self._freq_counter = None + self.reset() + + def reset(self): + self._last_char_class = OTH + self._freq_counter = [0] * FREQ_CAT_NUM + CharSetProber.reset(self) + + @property + def charset_name(self): + return "ISO-8859-1" + + @property + def language(self): + return "" + + def feed(self, byte_str): + byte_str = self.filter_with_english_letters(byte_str) + for c in byte_str: + char_class = Latin1_CharToClass[c] + freq = Latin1ClassModel[(self._last_char_class * CLASS_NUM) + + char_class] + if freq == 0: + self._state = ProbingState.NOT_ME + break + self._freq_counter[freq] += 1 + self._last_char_class = char_class + + return self.state + + def get_confidence(self): + if self.state == ProbingState.NOT_ME: + return 0.01 + + total = sum(self._freq_counter) + if total < 0.01: + confidence = 0.0 + else: + confidence = ((self._freq_counter[3] - self._freq_counter[1] * 20.0) + / total) + if confidence < 0.0: + confidence = 0.0 + # lower the confidence of latin1 so that other more accurate + # detector can take priority. + confidence = confidence * 0.73 + return confidence diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/mbcharsetprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/mbcharsetprober.py new file mode 100644 index 0000000..6256ecf --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/mbcharsetprober.py @@ -0,0 +1,91 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# Proofpoint, Inc. +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetprober import CharSetProber +from .enums import ProbingState, MachineState + + +class MultiByteCharSetProber(CharSetProber): + """ + MultiByteCharSetProber + """ + + def __init__(self, lang_filter=None): + super(MultiByteCharSetProber, self).__init__(lang_filter=lang_filter) + self.distribution_analyzer = None + self.coding_sm = None + self._last_char = [0, 0] + + def reset(self): + super(MultiByteCharSetProber, self).reset() + if self.coding_sm: + self.coding_sm.reset() + if self.distribution_analyzer: + self.distribution_analyzer.reset() + self._last_char = [0, 0] + + @property + def charset_name(self): + raise NotImplementedError + + @property + def language(self): + raise NotImplementedError + + def feed(self, byte_str): + for i in range(len(byte_str)): + coding_state = self.coding_sm.next_state(byte_str[i]) + if coding_state == MachineState.ERROR: + self.logger.debug('%s %s prober hit error at byte %s', + self.charset_name, self.language, i) + self._state = ProbingState.NOT_ME + break + elif coding_state == MachineState.ITS_ME: + self._state = ProbingState.FOUND_IT + break + elif coding_state == MachineState.START: + char_len = self.coding_sm.get_current_charlen() + if i == 0: + self._last_char[1] = byte_str[0] + self.distribution_analyzer.feed(self._last_char, char_len) + else: + self.distribution_analyzer.feed(byte_str[i - 1:i + 1], + char_len) + + self._last_char[0] = byte_str[-1] + + if self.state == ProbingState.DETECTING: + if (self.distribution_analyzer.got_enough_data() and + (self.get_confidence() > self.SHORTCUT_THRESHOLD)): + self._state = ProbingState.FOUND_IT + + return self.state + + def get_confidence(self): + return self.distribution_analyzer.get_confidence() diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/mbcsgroupprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/mbcsgroupprober.py new file mode 100644 index 0000000..530abe7 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/mbcsgroupprober.py @@ -0,0 +1,54 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# Proofpoint, Inc. +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetgroupprober import CharSetGroupProber +from .utf8prober import UTF8Prober +from .sjisprober import SJISProber +from .eucjpprober import EUCJPProber +from .gb2312prober import GB2312Prober +from .euckrprober import EUCKRProber +from .cp949prober import CP949Prober +from .big5prober import Big5Prober +from .euctwprober import EUCTWProber + + +class MBCSGroupProber(CharSetGroupProber): + def __init__(self, lang_filter=None): + super(MBCSGroupProber, self).__init__(lang_filter=lang_filter) + self.probers = [ + UTF8Prober(), + SJISProber(), + EUCJPProber(), + GB2312Prober(), + EUCKRProber(), + CP949Prober(), + Big5Prober(), + EUCTWProber() + ] + self.reset() diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/mbcssm.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/mbcssm.py new file mode 100644 index 0000000..8360d0f --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/mbcssm.py @@ -0,0 +1,572 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .enums import MachineState + +# BIG5 + +BIG5_CLS = ( + 1,1,1,1,1,1,1,1, # 00 - 07 #allow 0x00 as legal value + 1,1,1,1,1,1,0,0, # 08 - 0f + 1,1,1,1,1,1,1,1, # 10 - 17 + 1,1,1,0,1,1,1,1, # 18 - 1f + 1,1,1,1,1,1,1,1, # 20 - 27 + 1,1,1,1,1,1,1,1, # 28 - 2f + 1,1,1,1,1,1,1,1, # 30 - 37 + 1,1,1,1,1,1,1,1, # 38 - 3f + 2,2,2,2,2,2,2,2, # 40 - 47 + 2,2,2,2,2,2,2,2, # 48 - 4f + 2,2,2,2,2,2,2,2, # 50 - 57 + 2,2,2,2,2,2,2,2, # 58 - 5f + 2,2,2,2,2,2,2,2, # 60 - 67 + 2,2,2,2,2,2,2,2, # 68 - 6f + 2,2,2,2,2,2,2,2, # 70 - 77 + 2,2,2,2,2,2,2,1, # 78 - 7f + 4,4,4,4,4,4,4,4, # 80 - 87 + 4,4,4,4,4,4,4,4, # 88 - 8f + 4,4,4,4,4,4,4,4, # 90 - 97 + 4,4,4,4,4,4,4,4, # 98 - 9f + 4,3,3,3,3,3,3,3, # a0 - a7 + 3,3,3,3,3,3,3,3, # a8 - af + 3,3,3,3,3,3,3,3, # b0 - b7 + 3,3,3,3,3,3,3,3, # b8 - bf + 3,3,3,3,3,3,3,3, # c0 - c7 + 3,3,3,3,3,3,3,3, # c8 - cf + 3,3,3,3,3,3,3,3, # d0 - d7 + 3,3,3,3,3,3,3,3, # d8 - df + 3,3,3,3,3,3,3,3, # e0 - e7 + 3,3,3,3,3,3,3,3, # e8 - ef + 3,3,3,3,3,3,3,3, # f0 - f7 + 3,3,3,3,3,3,3,0 # f8 - ff +) + +BIG5_ST = ( + MachineState.ERROR,MachineState.START,MachineState.START, 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,#08-0f + MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START#10-17 +) + +BIG5_CHAR_LEN_TABLE = (0, 1, 1, 2, 0) + +BIG5_SM_MODEL = {'class_table': BIG5_CLS, + 'class_factor': 5, + 'state_table': BIG5_ST, + 'char_len_table': BIG5_CHAR_LEN_TABLE, + 'name': 'Big5'} + +# CP949 + +CP949_CLS = ( + 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,0,0, # 00 - 0f + 1,1,1,1,1,1,1,1, 1,1,1,0,1,1,1,1, # 10 - 1f + 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, # 20 - 2f + 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, # 30 - 3f + 1,4,4,4,4,4,4,4, 4,4,4,4,4,4,4,4, # 40 - 4f + 4,4,5,5,5,5,5,5, 5,5,5,1,1,1,1,1, # 50 - 5f + 1,5,5,5,5,5,5,5, 5,5,5,5,5,5,5,5, # 60 - 6f + 5,5,5,5,5,5,5,5, 5,5,5,1,1,1,1,1, # 70 - 7f + 0,6,6,6,6,6,6,6, 6,6,6,6,6,6,6,6, # 80 - 8f + 6,6,6,6,6,6,6,6, 6,6,6,6,6,6,6,6, # 90 - 9f + 6,7,7,7,7,7,7,7, 7,7,7,7,7,8,8,8, # a0 - af + 7,7,7,7,7,7,7,7, 7,7,7,7,7,7,7,7, # b0 - bf + 7,7,7,7,7,7,9,2, 2,3,2,2,2,2,2,2, # c0 - cf + 2,2,2,2,2,2,2,2, 2,2,2,2,2,2,2,2, # d0 - df + 2,2,2,2,2,2,2,2, 2,2,2,2,2,2,2,2, # e0 - ef + 2,2,2,2,2,2,2,2, 2,2,2,2,2,2,2,0, # f0 - ff +) + +CP949_ST = ( +#cls= 0 1 2 3 4 5 6 7 8 9 # previous state = + MachineState.ERROR,MachineState.START, 3,MachineState.ERROR,MachineState.START,MachineState.START, 4, 5,MachineState.ERROR, 6, # MachineState.START + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, # MachineState.ERROR + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME, # MachineState.ITS_ME + 
MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START, # 3 + MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START, # 4 + MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START, # 5 + MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START, # 6 +) + +CP949_CHAR_LEN_TABLE = (0, 1, 2, 0, 1, 1, 2, 2, 0, 2) + +CP949_SM_MODEL = {'class_table': CP949_CLS, + 'class_factor': 10, + 'state_table': CP949_ST, + 'char_len_table': CP949_CHAR_LEN_TABLE, + 'name': 'CP949'} + +# EUC-JP + +EUCJP_CLS = ( + 4,4,4,4,4,4,4,4, # 00 - 07 + 4,4,4,4,4,4,5,5, # 08 - 0f + 4,4,4,4,4,4,4,4, # 10 - 17 + 4,4,4,5,4,4,4,4, # 18 - 1f + 4,4,4,4,4,4,4,4, # 20 - 27 + 4,4,4,4,4,4,4,4, # 28 - 2f + 4,4,4,4,4,4,4,4, # 30 - 37 + 4,4,4,4,4,4,4,4, # 38 - 3f + 4,4,4,4,4,4,4,4, # 40 - 47 + 4,4,4,4,4,4,4,4, # 48 - 4f + 4,4,4,4,4,4,4,4, # 50 - 57 + 4,4,4,4,4,4,4,4, # 58 - 5f + 4,4,4,4,4,4,4,4, # 60 - 67 + 4,4,4,4,4,4,4,4, # 68 - 6f + 4,4,4,4,4,4,4,4, # 70 - 77 + 4,4,4,4,4,4,4,4, # 78 - 7f + 5,5,5,5,5,5,5,5, # 80 - 87 + 5,5,5,5,5,5,1,3, # 88 - 8f + 5,5,5,5,5,5,5,5, # 90 - 97 + 5,5,5,5,5,5,5,5, # 98 - 9f + 5,2,2,2,2,2,2,2, # a0 - a7 + 2,2,2,2,2,2,2,2, # a8 - af + 2,2,2,2,2,2,2,2, # b0 - b7 + 2,2,2,2,2,2,2,2, # b8 - bf + 2,2,2,2,2,2,2,2, # c0 - c7 + 2,2,2,2,2,2,2,2, # c8 - cf + 2,2,2,2,2,2,2,2, # d0 - d7 + 2,2,2,2,2,2,2,2, # d8 - df + 0,0,0,0,0,0,0,0, # e0 - e7 + 0,0,0,0,0,0,0,0, # e8 - ef + 0,0,0,0,0,0,0,0, # f0 - f7 + 0,0,0,0,0,0,0,5 # f8 - ff +) + +EUCJP_ST = ( + 3, 4, 3, 5,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.START,MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#10-17 + MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 3,MachineState.ERROR,#18-1f + 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START#20-27 +) + +EUCJP_CHAR_LEN_TABLE = (2, 2, 2, 3, 1, 0) + +EUCJP_SM_MODEL = {'class_table': EUCJP_CLS, + 'class_factor': 6, + 'state_table': EUCJP_ST, + 'char_len_table': EUCJP_CHAR_LEN_TABLE, + 'name': 'EUC-JP'} + +# EUC-KR + +EUCKR_CLS = ( + 1,1,1,1,1,1,1,1, # 00 - 07 + 1,1,1,1,1,1,0,0, # 08 - 0f + 1,1,1,1,1,1,1,1, # 10 - 17 + 1,1,1,0,1,1,1,1, # 18 - 1f + 1,1,1,1,1,1,1,1, # 20 - 27 + 1,1,1,1,1,1,1,1, # 28 - 2f + 1,1,1,1,1,1,1,1, # 30 - 37 + 1,1,1,1,1,1,1,1, # 38 - 3f + 1,1,1,1,1,1,1,1, # 40 - 47 + 1,1,1,1,1,1,1,1, # 48 - 4f + 1,1,1,1,1,1,1,1, # 50 - 57 + 1,1,1,1,1,1,1,1, # 58 - 5f + 1,1,1,1,1,1,1,1, # 60 - 67 + 1,1,1,1,1,1,1,1, # 68 - 6f + 1,1,1,1,1,1,1,1, # 70 - 77 + 1,1,1,1,1,1,1,1, # 78 - 7f + 0,0,0,0,0,0,0,0, # 80 - 87 + 0,0,0,0,0,0,0,0, # 88 - 8f + 0,0,0,0,0,0,0,0, # 90 - 97 + 0,0,0,0,0,0,0,0, # 98 - 9f + 0,2,2,2,2,2,2,2, # a0 - a7 + 2,2,2,2,2,3,3,3, # 
a8 - af + 2,2,2,2,2,2,2,2, # b0 - b7 + 2,2,2,2,2,2,2,2, # b8 - bf + 2,2,2,2,2,2,2,2, # c0 - c7 + 2,3,2,2,2,2,2,2, # c8 - cf + 2,2,2,2,2,2,2,2, # d0 - d7 + 2,2,2,2,2,2,2,2, # d8 - df + 2,2,2,2,2,2,2,2, # e0 - e7 + 2,2,2,2,2,2,2,2, # e8 - ef + 2,2,2,2,2,2,2,2, # f0 - f7 + 2,2,2,2,2,2,2,0 # f8 - ff +) + +EUCKR_ST = ( + MachineState.ERROR,MachineState.START, 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START #08-0f +) + +EUCKR_CHAR_LEN_TABLE = (0, 1, 2, 0) + +EUCKR_SM_MODEL = {'class_table': EUCKR_CLS, + 'class_factor': 4, + 'state_table': EUCKR_ST, + 'char_len_table': EUCKR_CHAR_LEN_TABLE, + 'name': 'EUC-KR'} + +# EUC-TW + +EUCTW_CLS = ( + 2,2,2,2,2,2,2,2, # 00 - 07 + 2,2,2,2,2,2,0,0, # 08 - 0f + 2,2,2,2,2,2,2,2, # 10 - 17 + 2,2,2,0,2,2,2,2, # 18 - 1f + 2,2,2,2,2,2,2,2, # 20 - 27 + 2,2,2,2,2,2,2,2, # 28 - 2f + 2,2,2,2,2,2,2,2, # 30 - 37 + 2,2,2,2,2,2,2,2, # 38 - 3f + 2,2,2,2,2,2,2,2, # 40 - 47 + 2,2,2,2,2,2,2,2, # 48 - 4f + 2,2,2,2,2,2,2,2, # 50 - 57 + 2,2,2,2,2,2,2,2, # 58 - 5f + 2,2,2,2,2,2,2,2, # 60 - 67 + 2,2,2,2,2,2,2,2, # 68 - 6f + 2,2,2,2,2,2,2,2, # 70 - 77 + 2,2,2,2,2,2,2,2, # 78 - 7f + 0,0,0,0,0,0,0,0, # 80 - 87 + 0,0,0,0,0,0,6,0, # 88 - 8f + 0,0,0,0,0,0,0,0, # 90 - 97 + 0,0,0,0,0,0,0,0, # 98 - 9f + 0,3,4,4,4,4,4,4, # a0 - a7 + 5,5,1,1,1,1,1,1, # a8 - af + 1,1,1,1,1,1,1,1, # b0 - b7 + 1,1,1,1,1,1,1,1, # b8 - bf + 1,1,3,1,3,3,3,3, # c0 - c7 + 3,3,3,3,3,3,3,3, # c8 - cf + 3,3,3,3,3,3,3,3, # d0 - d7 + 3,3,3,3,3,3,3,3, # d8 - df + 3,3,3,3,3,3,3,3, # e0 - e7 + 3,3,3,3,3,3,3,3, # e8 - ef + 3,3,3,3,3,3,3,3, # f0 - f7 + 3,3,3,3,3,3,3,0 # f8 - ff +) + +EUCTW_ST = ( + MachineState.ERROR,MachineState.ERROR,MachineState.START, 3, 3, 3, 4,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.START,MachineState.ERROR,#10-17 + MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#18-1f + 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.START,MachineState.START,#20-27 + MachineState.START,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START #28-2f +) + +EUCTW_CHAR_LEN_TABLE = (0, 0, 1, 2, 2, 2, 3) + +EUCTW_SM_MODEL = {'class_table': EUCTW_CLS, + 'class_factor': 7, + 'state_table': EUCTW_ST, + 'char_len_table': EUCTW_CHAR_LEN_TABLE, + 'name': 'x-euc-tw'} + +# GB2312 + +GB2312_CLS = ( + 1,1,1,1,1,1,1,1, # 00 - 07 + 1,1,1,1,1,1,0,0, # 08 - 0f + 1,1,1,1,1,1,1,1, # 10 - 17 + 1,1,1,0,1,1,1,1, # 18 - 1f + 1,1,1,1,1,1,1,1, # 20 - 27 + 1,1,1,1,1,1,1,1, # 28 - 2f + 3,3,3,3,3,3,3,3, # 30 - 37 + 3,3,1,1,1,1,1,1, # 38 - 3f + 2,2,2,2,2,2,2,2, # 40 - 47 + 2,2,2,2,2,2,2,2, # 48 - 4f + 2,2,2,2,2,2,2,2, # 50 - 57 + 2,2,2,2,2,2,2,2, # 58 - 5f + 2,2,2,2,2,2,2,2, # 60 - 67 + 2,2,2,2,2,2,2,2, # 68 - 6f + 2,2,2,2,2,2,2,2, # 70 - 77 + 2,2,2,2,2,2,2,4, # 78 - 7f + 5,6,6,6,6,6,6,6, # 80 - 87 + 6,6,6,6,6,6,6,6, # 88 - 8f + 6,6,6,6,6,6,6,6, # 90 - 97 + 6,6,6,6,6,6,6,6, # 98 - 9f + 6,6,6,6,6,6,6,6, # a0 - a7 + 
6,6,6,6,6,6,6,6, # a8 - af + 6,6,6,6,6,6,6,6, # b0 - b7 + 6,6,6,6,6,6,6,6, # b8 - bf + 6,6,6,6,6,6,6,6, # c0 - c7 + 6,6,6,6,6,6,6,6, # c8 - cf + 6,6,6,6,6,6,6,6, # d0 - d7 + 6,6,6,6,6,6,6,6, # d8 - df + 6,6,6,6,6,6,6,6, # e0 - e7 + 6,6,6,6,6,6,6,6, # e8 - ef + 6,6,6,6,6,6,6,6, # f0 - f7 + 6,6,6,6,6,6,6,0 # f8 - ff +) + +GB2312_ST = ( + MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START, 3,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,#10-17 + 4,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#18-1f + MachineState.ERROR,MachineState.ERROR, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,#20-27 + MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.START #28-2f +) + +# To be accurate, the length of class 6 can be either 2 or 4. +# But it is not necessary to discriminate between the two since +# it is used for frequency analysis only, and we are validating +# each code range there as well. So it is safe to set it to be +# 2 here. +GB2312_CHAR_LEN_TABLE = (0, 1, 1, 1, 1, 1, 2) + +GB2312_SM_MODEL = {'class_table': GB2312_CLS, + 'class_factor': 7, + 'state_table': GB2312_ST, + 'char_len_table': GB2312_CHAR_LEN_TABLE, + 'name': 'GB2312'} + +# Shift_JIS + +SJIS_CLS = ( + 1,1,1,1,1,1,1,1, # 00 - 07 + 1,1,1,1,1,1,0,0, # 08 - 0f + 1,1,1,1,1,1,1,1, # 10 - 17 + 1,1,1,0,1,1,1,1, # 18 - 1f + 1,1,1,1,1,1,1,1, # 20 - 27 + 1,1,1,1,1,1,1,1, # 28 - 2f + 1,1,1,1,1,1,1,1, # 30 - 37 + 1,1,1,1,1,1,1,1, # 38 - 3f + 2,2,2,2,2,2,2,2, # 40 - 47 + 2,2,2,2,2,2,2,2, # 48 - 4f + 2,2,2,2,2,2,2,2, # 50 - 57 + 2,2,2,2,2,2,2,2, # 58 - 5f + 2,2,2,2,2,2,2,2, # 60 - 67 + 2,2,2,2,2,2,2,2, # 68 - 6f + 2,2,2,2,2,2,2,2, # 70 - 77 + 2,2,2,2,2,2,2,1, # 78 - 7f + 3,3,3,3,3,2,2,3, # 80 - 87 + 3,3,3,3,3,3,3,3, # 88 - 8f + 3,3,3,3,3,3,3,3, # 90 - 97 + 3,3,3,3,3,3,3,3, # 98 - 9f + #0xa0 is illegal in sjis encoding, but some pages does + #contain such byte. We need to be more error forgiven. 
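+ #(Treating 0xa0 as class 2, an ordinary single-byte character, lets the + #state machine stay in MachineState.START and consume the byte, rather + #than rejecting the whole sequence with MachineState.ERROR.)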
+ 2,2,2,2,2,2,2,2, # a0 - a7 + 2,2,2,2,2,2,2,2, # a8 - af + 2,2,2,2,2,2,2,2, # b0 - b7 + 2,2,2,2,2,2,2,2, # b8 - bf + 2,2,2,2,2,2,2,2, # c0 - c7 + 2,2,2,2,2,2,2,2, # c8 - cf + 2,2,2,2,2,2,2,2, # d0 - d7 + 2,2,2,2,2,2,2,2, # d8 - df + 3,3,3,3,3,3,3,3, # e0 - e7 + 3,3,3,3,3,4,4,4, # e8 - ef + 3,3,3,3,3,3,3,3, # f0 - f7 + 3,3,3,3,3,0,0,0) # f8 - ff + + +SJIS_ST = ( + MachineState.ERROR,MachineState.START,MachineState.START, 3,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START #10-17 +) + +SJIS_CHAR_LEN_TABLE = (0, 1, 1, 2, 0, 0) + +SJIS_SM_MODEL = {'class_table': SJIS_CLS, + 'class_factor': 6, + 'state_table': SJIS_ST, + 'char_len_table': SJIS_CHAR_LEN_TABLE, + 'name': 'Shift_JIS'} + +# UCS2-BE + +UCS2BE_CLS = ( + 0,0,0,0,0,0,0,0, # 00 - 07 + 0,0,1,0,0,2,0,0, # 08 - 0f + 0,0,0,0,0,0,0,0, # 10 - 17 + 0,0,0,3,0,0,0,0, # 18 - 1f + 0,0,0,0,0,0,0,0, # 20 - 27 + 0,3,3,3,3,3,0,0, # 28 - 2f + 0,0,0,0,0,0,0,0, # 30 - 37 + 0,0,0,0,0,0,0,0, # 38 - 3f + 0,0,0,0,0,0,0,0, # 40 - 47 + 0,0,0,0,0,0,0,0, # 48 - 4f + 0,0,0,0,0,0,0,0, # 50 - 57 + 0,0,0,0,0,0,0,0, # 58 - 5f + 0,0,0,0,0,0,0,0, # 60 - 67 + 0,0,0,0,0,0,0,0, # 68 - 6f + 0,0,0,0,0,0,0,0, # 70 - 77 + 0,0,0,0,0,0,0,0, # 78 - 7f + 0,0,0,0,0,0,0,0, # 80 - 87 + 0,0,0,0,0,0,0,0, # 88 - 8f + 0,0,0,0,0,0,0,0, # 90 - 97 + 0,0,0,0,0,0,0,0, # 98 - 9f + 0,0,0,0,0,0,0,0, # a0 - a7 + 0,0,0,0,0,0,0,0, # a8 - af + 0,0,0,0,0,0,0,0, # b0 - b7 + 0,0,0,0,0,0,0,0, # b8 - bf + 0,0,0,0,0,0,0,0, # c0 - c7 + 0,0,0,0,0,0,0,0, # c8 - cf + 0,0,0,0,0,0,0,0, # d0 - d7 + 0,0,0,0,0,0,0,0, # d8 - df + 0,0,0,0,0,0,0,0, # e0 - e7 + 0,0,0,0,0,0,0,0, # e8 - ef + 0,0,0,0,0,0,0,0, # f0 - f7 + 0,0,0,0,0,0,4,5 # f8 - ff +) + +UCS2BE_ST = ( + 5, 7, 7,MachineState.ERROR, 4, 3,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME, 6, 6, 6, 6,MachineState.ERROR,MachineState.ERROR,#10-17 + 6, 6, 6, 6, 6,MachineState.ITS_ME, 6, 6,#18-1f + 6, 6, 6, 6, 5, 7, 7,MachineState.ERROR,#20-27 + 5, 8, 6, 6,MachineState.ERROR, 6, 6, 6,#28-2f + 6, 6, 6, 6,MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START #30-37 +) + +UCS2BE_CHAR_LEN_TABLE = (2, 2, 2, 0, 2, 2) + +UCS2BE_SM_MODEL = {'class_table': UCS2BE_CLS, + 'class_factor': 6, + 'state_table': UCS2BE_ST, + 'char_len_table': UCS2BE_CHAR_LEN_TABLE, + 'name': 'UTF-16BE'} + +# UCS2-LE + +UCS2LE_CLS = ( + 0,0,0,0,0,0,0,0, # 00 - 07 + 0,0,1,0,0,2,0,0, # 08 - 0f + 0,0,0,0,0,0,0,0, # 10 - 17 + 0,0,0,3,0,0,0,0, # 18 - 1f + 0,0,0,0,0,0,0,0, # 20 - 27 + 0,3,3,3,3,3,0,0, # 28 - 2f + 0,0,0,0,0,0,0,0, # 30 - 37 + 0,0,0,0,0,0,0,0, # 38 - 3f + 0,0,0,0,0,0,0,0, # 40 - 47 + 0,0,0,0,0,0,0,0, # 48 - 4f + 0,0,0,0,0,0,0,0, # 50 - 57 + 0,0,0,0,0,0,0,0, # 58 - 5f + 0,0,0,0,0,0,0,0, # 60 - 67 + 0,0,0,0,0,0,0,0, # 68 - 6f + 0,0,0,0,0,0,0,0, # 70 - 77 + 0,0,0,0,0,0,0,0, # 78 - 7f + 0,0,0,0,0,0,0,0, # 80 - 87 + 0,0,0,0,0,0,0,0, # 88 - 8f + 0,0,0,0,0,0,0,0, # 90 - 97 + 0,0,0,0,0,0,0,0, # 98 - 9f + 0,0,0,0,0,0,0,0, # a0 - a7 + 0,0,0,0,0,0,0,0, # a8 - af + 0,0,0,0,0,0,0,0, # b0 - b7 + 0,0,0,0,0,0,0,0, # b8 - bf + 
0,0,0,0,0,0,0,0, # c0 - c7 + 0,0,0,0,0,0,0,0, # c8 - cf + 0,0,0,0,0,0,0,0, # d0 - d7 + 0,0,0,0,0,0,0,0, # d8 - df + 0,0,0,0,0,0,0,0, # e0 - e7 + 0,0,0,0,0,0,0,0, # e8 - ef + 0,0,0,0,0,0,0,0, # f0 - f7 + 0,0,0,0,0,0,4,5 # f8 - ff +) + +UCS2LE_ST = ( + 6, 6, 7, 6, 4, 3,MachineState.ERROR,MachineState.ERROR,#00-07 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#08-0f + MachineState.ITS_ME,MachineState.ITS_ME, 5, 5, 5,MachineState.ERROR,MachineState.ITS_ME,MachineState.ERROR,#10-17 + 5, 5, 5,MachineState.ERROR, 5,MachineState.ERROR, 6, 6,#18-1f + 7, 6, 8, 8, 5, 5, 5,MachineState.ERROR,#20-27 + 5, 5, 5,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 5, 5,#28-2f + 5, 5, 5,MachineState.ERROR, 5,MachineState.ERROR,MachineState.START,MachineState.START #30-37 +) + +UCS2LE_CHAR_LEN_TABLE = (2, 2, 2, 2, 2, 2) + +UCS2LE_SM_MODEL = {'class_table': UCS2LE_CLS, + 'class_factor': 6, + 'state_table': UCS2LE_ST, + 'char_len_table': UCS2LE_CHAR_LEN_TABLE, + 'name': 'UTF-16LE'} + +# UTF-8 + +UTF8_CLS = ( + 1,1,1,1,1,1,1,1, # 00 - 07 #allow 0x00 as a legal value + 1,1,1,1,1,1,0,0, # 08 - 0f + 1,1,1,1,1,1,1,1, # 10 - 17 + 1,1,1,0,1,1,1,1, # 18 - 1f + 1,1,1,1,1,1,1,1, # 20 - 27 + 1,1,1,1,1,1,1,1, # 28 - 2f + 1,1,1,1,1,1,1,1, # 30 - 37 + 1,1,1,1,1,1,1,1, # 38 - 3f + 1,1,1,1,1,1,1,1, # 40 - 47 + 1,1,1,1,1,1,1,1, # 48 - 4f + 1,1,1,1,1,1,1,1, # 50 - 57 + 1,1,1,1,1,1,1,1, # 58 - 5f + 1,1,1,1,1,1,1,1, # 60 - 67 + 1,1,1,1,1,1,1,1, # 68 - 6f + 1,1,1,1,1,1,1,1, # 70 - 77 + 1,1,1,1,1,1,1,1, # 78 - 7f + 2,2,2,2,3,3,3,3, # 80 - 87 + 4,4,4,4,4,4,4,4, # 88 - 8f + 4,4,4,4,4,4,4,4, # 90 - 97 + 4,4,4,4,4,4,4,4, # 98 - 9f + 5,5,5,5,5,5,5,5, # a0 - a7 + 5,5,5,5,5,5,5,5, # a8 - af + 5,5,5,5,5,5,5,5, # b0 - b7 + 5,5,5,5,5,5,5,5, # b8 - bf + 0,0,6,6,6,6,6,6, # c0 - c7 + 6,6,6,6,6,6,6,6, # c8 - cf + 6,6,6,6,6,6,6,6, # d0 - d7 + 6,6,6,6,6,6,6,6, # d8 - df + 7,8,8,8,8,8,8,8, # e0 - e7 + 8,8,8,8,8,9,8,8, # e8 - ef + 10,11,11,11,11,11,11,11, # f0 - f7 + 12,13,13,13,14,15,0,0 # f8 - ff +) + +UTF8_ST = ( + MachineState.ERROR,MachineState.START,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 12, 10,#00-07 + 9, 11, 8, 7, 6, 5, 4, 3,#08-0f + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#10-17 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#18-1f + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#20-27 + MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,MachineState.ITS_ME,#28-2f + MachineState.ERROR,MachineState.ERROR, 5, 5, 5, 5,MachineState.ERROR,MachineState.ERROR,#30-37 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#38-3f + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 5, 5, 5,MachineState.ERROR,MachineState.ERROR,#40-47 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#48-4f + MachineState.ERROR,MachineState.ERROR, 7, 7, 7, 
7,MachineState.ERROR,MachineState.ERROR,#50-57 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#58-5f + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 7, 7,MachineState.ERROR,MachineState.ERROR,#60-67 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#68-6f + MachineState.ERROR,MachineState.ERROR, 9, 9, 9, 9,MachineState.ERROR,MachineState.ERROR,#70-77 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#78-7f + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 9,MachineState.ERROR,MachineState.ERROR,#80-87 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#88-8f + MachineState.ERROR,MachineState.ERROR, 12, 12, 12, 12,MachineState.ERROR,MachineState.ERROR,#90-97 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#98-9f + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR, 12,MachineState.ERROR,MachineState.ERROR,#a0-a7 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#a8-af + MachineState.ERROR,MachineState.ERROR, 12, 12, 12,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#b0-b7 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,#b8-bf + MachineState.ERROR,MachineState.ERROR,MachineState.START,MachineState.START,MachineState.START,MachineState.START,MachineState.ERROR,MachineState.ERROR,#c0-c7 + MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR,MachineState.ERROR #c8-cf +) + +UTF8_CHAR_LEN_TABLE = (0, 1, 0, 0, 0, 0, 2, 3, 3, 3, 4, 4, 5, 5, 6, 6) + +UTF8_SM_MODEL = {'class_table': UTF8_CLS, + 'class_factor': 16, + 'state_table': UTF8_ST, + 'char_len_table': UTF8_CHAR_LEN_TABLE, + 'name': 'UTF-8'} diff --git a/lib/python3.11/site-packages/pip/_vendor/chardet/cli/__init__.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/chardet/cli/__init__.py rename to python/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/__init__.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/languages.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/languages.py new file mode 100644 index 0000000..3237d5a --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/metadata/languages.py @@ -0,0 +1,310 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +""" +Metadata about languages used by our model training code for our +SingleByteCharSetProbers. Could be used for other things in the future. + +This code is based on the language metadata from the uchardet project. 
+""" +from __future__ import absolute_import, print_function + +from string import ascii_letters + + +# TODO: Add Ukranian (KOI8-U) + +class Language(object): + """Metadata about a language useful for training models + + :ivar name: The human name for the language, in English. + :type name: str + :ivar iso_code: 2-letter ISO 639-1 if possible, 3-letter ISO code otherwise, + or use another catalog as a last resort. + :type iso_code: str + :ivar use_ascii: Whether or not ASCII letters should be included in trained + models. + :type use_ascii: bool + :ivar charsets: The charsets we want to support and create data for. + :type charsets: list of str + :ivar alphabet: The characters in the language's alphabet. If `use_ascii` is + `True`, you only need to add those not in the ASCII set. + :type alphabet: str + :ivar wiki_start_pages: The Wikipedia pages to start from if we're crawling + Wikipedia for training data. + :type wiki_start_pages: list of str + """ + def __init__(self, name=None, iso_code=None, use_ascii=True, charsets=None, + alphabet=None, wiki_start_pages=None): + super(Language, self).__init__() + self.name = name + self.iso_code = iso_code + self.use_ascii = use_ascii + self.charsets = charsets + if self.use_ascii: + if alphabet: + alphabet += ascii_letters + else: + alphabet = ascii_letters + elif not alphabet: + raise ValueError('Must supply alphabet if use_ascii is False') + self.alphabet = ''.join(sorted(set(alphabet))) if alphabet else None + self.wiki_start_pages = wiki_start_pages + + def __repr__(self): + return '{}({})'.format(self.__class__.__name__, + ', '.join('{}={!r}'.format(k, v) + for k, v in self.__dict__.items() + if not k.startswith('_'))) + + +LANGUAGES = {'Arabic': Language(name='Arabic', + iso_code='ar', + use_ascii=False, + # We only support encodings that use isolated + # forms, because the current recommendation is + # that the rendering system handles presentation + # forms. This means we purposefully skip IBM864. 
+ charsets=['ISO-8859-6', 'WINDOWS-1256', + 'CP720', 'CP864'], + alphabet=u'ءآأؤإئابةتثجحخدذرزسشصضطظعغػؼؽؾؿـفقكلمنهوىيًٌٍَُِّ', + wiki_start_pages=[u'الصفحة_الرئيسية']), + 'Belarusian': Language(name='Belarusian', + iso_code='be', + use_ascii=False, + charsets=['ISO-8859-5', 'WINDOWS-1251', + 'IBM866', 'MacCyrillic'], + alphabet=(u'АБВГДЕЁЖЗІЙКЛМНОПРСТУЎФХЦЧШЫЬЭЮЯ' + u'абвгдеёжзійклмнопрстуўфхцчшыьэюяʼ'), + wiki_start_pages=[u'Галоўная_старонка']), + 'Bulgarian': Language(name='Bulgarian', + iso_code='bg', + use_ascii=False, + charsets=['ISO-8859-5', 'WINDOWS-1251', + 'IBM855'], + alphabet=(u'АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯ' + u'абвгдежзийклмнопрстуфхцчшщъьюя'), + wiki_start_pages=[u'Начална_страница']), + 'Czech': Language(name='Czech', + iso_code='cz', + use_ascii=True, + charsets=['ISO-8859-2', 'WINDOWS-1250'], + alphabet=u'áčďéěíňóřšťúůýžÁČĎÉĚÍŇÓŘŠŤÚŮÝŽ', + wiki_start_pages=[u'Hlavní_strana']), + 'Danish': Language(name='Danish', + iso_code='da', + use_ascii=True, + charsets=['ISO-8859-1', 'ISO-8859-15', + 'WINDOWS-1252'], + alphabet=u'æøåÆØÅ', + wiki_start_pages=[u'Forside']), + 'German': Language(name='German', + iso_code='de', + use_ascii=True, + charsets=['ISO-8859-1', 'WINDOWS-1252'], + alphabet=u'äöüßÄÖÜ', + wiki_start_pages=[u'Wikipedia:Hauptseite']), + 'Greek': Language(name='Greek', + iso_code='el', + use_ascii=False, + charsets=['ISO-8859-7', 'WINDOWS-1253'], + alphabet=(u'αβγδεζηθικλμνξοπρσςτυφχψωάέήίόύώ' + u'ΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΣΤΥΦΧΨΩΆΈΉΊΌΎΏ'), + wiki_start_pages=[u'Πύλη:Κύρια']), + 'English': Language(name='English', + iso_code='en', + use_ascii=True, + charsets=['ISO-8859-1', 'WINDOWS-1252'], + wiki_start_pages=[u'Main_Page']), + 'Esperanto': Language(name='Esperanto', + iso_code='eo', + # Q, W, X, and Y not used at all + use_ascii=False, + charsets=['ISO-8859-3'], + alphabet=(u'abcĉdefgĝhĥijĵklmnoprsŝtuŭvz' + u'ABCĈDEFGĜHĤIJĴKLMNOPRSŜTUŬVZ'), + wiki_start_pages=[u'Vikipedio:Ĉefpaĝo']), + 'Spanish': Language(name='Spanish', + iso_code='es', + use_ascii=True, + charsets=['ISO-8859-1', 'ISO-8859-15', + 'WINDOWS-1252'], + alphabet=u'ñáéíóúüÑÁÉÍÓÚÜ', + wiki_start_pages=[u'Wikipedia:Portada']), + 'Estonian': Language(name='Estonian', + iso_code='et', + use_ascii=False, + charsets=['ISO-8859-4', 'ISO-8859-13', + 'WINDOWS-1257'], + # C, F, Š, Q, W, X, Y, Z, Ž are only for + # loanwords + alphabet=(u'ABDEGHIJKLMNOPRSTUVÕÄÖÜ' + u'abdeghijklmnoprstuvõäöü'), + wiki_start_pages=[u'Esileht']), + 'Finnish': Language(name='Finnish', + iso_code='fi', + use_ascii=True, + charsets=['ISO-8859-1', 'ISO-8859-15', + 'WINDOWS-1252'], + alphabet=u'ÅÄÖŠŽåäöšž', + wiki_start_pages=[u'Wikipedia:Etusivu']), + 'French': Language(name='French', + iso_code='fr', + use_ascii=True, + charsets=['ISO-8859-1', 'ISO-8859-15', + 'WINDOWS-1252'], + alphabet=u'œàâçèéîïùûêŒÀÂÇÈÉÎÏÙÛÊ', + wiki_start_pages=[u'Wikipédia:Accueil_principal', + u'Bœuf (animal)']), + 'Hebrew': Language(name='Hebrew', + iso_code='he', + use_ascii=False, + charsets=['ISO-8859-8', 'WINDOWS-1255'], + alphabet=u'אבגדהוזחטיךכלםמןנסעףפץצקרשתװױײ', + wiki_start_pages=[u'עמוד_ראשי']), + 'Croatian': Language(name='Croatian', + iso_code='hr', + # Q, W, X, Y are only used for foreign words. + use_ascii=False, + charsets=['ISO-8859-2', 'WINDOWS-1250'], + alphabet=(u'abcčćdđefghijklmnoprsštuvzž' + u'ABCČĆDĐEFGHIJKLMNOPRSŠTUVZŽ'), + wiki_start_pages=[u'Glavna_stranica']), + 'Hungarian': Language(name='Hungarian', + iso_code='hu', + # Q, W, X, Y are only used for foreign words.
+ use_ascii=False, + charsets=['ISO-8859-2', 'WINDOWS-1250'], + alphabet=(u'abcdefghijklmnoprstuvzáéíóöőúüű' + u'ABCDEFGHIJKLMNOPRSTUVZÁÉÍÓÖŐÚÜŰ'), + wiki_start_pages=[u'Kezdőlap']), + 'Italian': Language(name='Italian', + iso_code='it', + use_ascii=True, + charsets=['ISO-8859-1', 'ISO-8859-15', + 'WINDOWS-1252'], + alphabet=u'ÀÈÉÌÒÓÙàèéìòóù', + wiki_start_pages=[u'Pagina_principale']), + 'Lithuanian': Language(name='Lithuanian', + iso_code='lt', + use_ascii=False, + charsets=['ISO-8859-13', 'WINDOWS-1257', + 'ISO-8859-4'], + # Q, W, and X not used at all + alphabet=(u'AĄBCČDEĘĖFGHIĮYJKLMNOPRSŠTUŲŪVZŽ' + u'aąbcčdeęėfghiįyjklmnoprsštuųūvzž'), + wiki_start_pages=[u'Pagrindinis_puslapis']), + 'Latvian': Language(name='Latvian', + iso_code='lv', + use_ascii=False, + charsets=['ISO-8859-13', 'WINDOWS-1257', + 'ISO-8859-4'], + # Q, W, X, Y are only for loanwords + alphabet=(u'AĀBCČDEĒFGĢHIĪJKĶLĻMNŅOPRSŠTUŪVZŽ' + u'aābcčdeēfgģhiījkķlļmnņoprsštuūvzž'), + wiki_start_pages=[u'Sākumlapa']), + 'Macedonian': Language(name='Macedonian', + iso_code='mk', + use_ascii=False, + charsets=['ISO-8859-5', 'WINDOWS-1251', + 'MacCyrillic', 'IBM855'], + alphabet=(u'АБВГДЃЕЖЗЅИЈКЛЉМНЊОПРСТЌУФХЦЧЏШ' + u'абвгдѓежзѕијклљмнњопрстќуфхцчџш'), + wiki_start_pages=[u'Главна_страница']), + 'Dutch': Language(name='Dutch', + iso_code='nl', + use_ascii=True, + charsets=['ISO-8859-1', 'WINDOWS-1252'], + wiki_start_pages=[u'Hoofdpagina']), + 'Polish': Language(name='Polish', + iso_code='pl', + # Q and X are only used for foreign words. + use_ascii=False, + charsets=['ISO-8859-2', 'WINDOWS-1250'], + alphabet=(u'AĄBCĆDEĘFGHIJKLŁMNŃOÓPRSŚTUWYZŹŻ' + u'aąbcćdeęfghijklłmnńoóprsśtuwyzźż'), + wiki_start_pages=[u'Wikipedia:Strona_główna']), + 'Portuguese': Language(name='Portuguese', + iso_code='pt', + use_ascii=True, + charsets=['ISO-8859-1', 'ISO-8859-15', + 'WINDOWS-1252'], + alphabet=u'ÁÂÃÀÇÉÊÍÓÔÕÚáâãàçéêíóôõú', + wiki_start_pages=[u'Wikipédia:Página_principal']), + 'Romanian': Language(name='Romanian', + iso_code='ro', + use_ascii=True, + charsets=['ISO-8859-2', 'WINDOWS-1250'], + alphabet=u'ăâîșțĂÂÎȘȚ', + wiki_start_pages=[u'Pagina_principală']), + 'Russian': Language(name='Russian', + iso_code='ru', + use_ascii=False, + charsets=['ISO-8859-5', 'WINDOWS-1251', + 'KOI8-R', 'MacCyrillic', 'IBM866', + 'IBM855'], + alphabet=(u'абвгдеёжзийклмнопрстуфхцчшщъыьэюя' + u'АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯ'), + wiki_start_pages=[u'Заглавная_страница']), + 'Slovak': Language(name='Slovak', + iso_code='sk', + use_ascii=True, + charsets=['ISO-8859-2', 'WINDOWS-1250'], + alphabet=u'áäčďéíĺľňóôŕšťúýžÁÄČĎÉÍĹĽŇÓÔŔŠŤÚÝŽ', + wiki_start_pages=[u'Hlavná_stránka']), + 'Slovene': Language(name='Slovene', + iso_code='sl', + # Q, W, X, Y are only used for foreign words. + use_ascii=False, + charsets=['ISO-8859-2', 'WINDOWS-1250'], + alphabet=(u'abcčdefghijklmnoprsštuvzž' + u'ABCČDEFGHIJKLMNOPRSŠTUVZŽ'), + wiki_start_pages=[u'Glavna_stran']), + # Serbian can be written in both Latin and Cyrillic, but there's no + # simple way to get the Latin alphabet pages from Wikipedia through + # the API, so for now we just support Cyrillic.
+ 'Serbian': Language(name='Serbian', + iso_code='sr', + alphabet=(u'АБВГДЂЕЖЗИЈКЛЉМНЊОПРСТЋУФХЦЧЏШ' + u'абвгдђежзијклљмнњопрстћуфхцчџш'), + charsets=['ISO-8859-5', 'WINDOWS-1251', + 'MacCyrillic', 'IBM855'], + wiki_start_pages=[u'Главна_страна']), + 'Thai': Language(name='Thai', + iso_code='th', + use_ascii=False, + charsets=['ISO-8859-11', 'TIS-620', 'CP874'], + alphabet=u'กขฃคฅฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลฦวศษสหฬอฮฯะัาำิีึืฺุู฿เแโใไๅๆ็่้๊๋์ํ๎๏๐๑๒๓๔๕๖๗๘๙๚๛', + wiki_start_pages=[u'หน้าหลัก']), + 'Turkish': Language(name='Turkish', + iso_code='tr', + # Q, W, and X are not used by Turkish + use_ascii=False, + charsets=['ISO-8859-3', 'ISO-8859-9', + 'WINDOWS-1254'], + alphabet=(u'abcçdefgğhıijklmnoöprsştuüvyzâîû' + u'ABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZÂÎÛ'), + wiki_start_pages=[u'Ana_Sayfa']), + 'Vietnamese': Language(name='Vietnamese', + iso_code='vi', + use_ascii=False, + # Windows-1258 is the only common 8-bit + # Vietnamese encoding supported by Python. + # From Wikipedia: + # For systems that lack support for Unicode, + # dozens of 8-bit Vietnamese code pages are + # available.[1] The most common are VISCII + # (TCVN 5712:1993), VPS, and Windows-1258.[3] + # Where ASCII is required, such as when + # ensuring readability in plain text e-mail, + # Vietnamese letters are often encoded + # according to Vietnamese Quoted-Readable + # (VIQR) or VSCII Mnemonic (VSCII-MNEM),[4] + # though usage of either variable-width + # scheme has declined dramatically following + # the adoption of Unicode on the World Wide + # Web. + charsets=['WINDOWS-1258'], + alphabet=(u'aăâbcdđeêghiklmnoôơpqrstuưvxy' + u'AĂÂBCDĐEÊGHIKLMNOÔƠPQRSTUƯVXY'), + wiki_start_pages=[u'Chữ_Quốc_ngữ']), + } diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/sbcharsetprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/sbcharsetprober.py new file mode 100644 index 0000000..46ba835 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/sbcharsetprober.py @@ -0,0 +1,145 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details.
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from collections import namedtuple + +from .charsetprober import CharSetProber +from .enums import CharacterCategory, ProbingState, SequenceLikelihood + + +SingleByteCharSetModel = namedtuple('SingleByteCharSetModel', + ['charset_name', + 'language', + 'char_to_order_map', + 'language_model', + 'typical_positive_ratio', + 'keep_ascii_letters', + 'alphabet']) + + +class SingleByteCharSetProber(CharSetProber): + SAMPLE_SIZE = 64 + SB_ENOUGH_REL_THRESHOLD = 1024 # 0.25 * SAMPLE_SIZE^2 + POSITIVE_SHORTCUT_THRESHOLD = 0.95 + NEGATIVE_SHORTCUT_THRESHOLD = 0.05 + + def __init__(self, model, reversed=False, name_prober=None): + super(SingleByteCharSetProber, self).__init__() + self._model = model + # TRUE if we need to reverse every pair in the model lookup + self._reversed = reversed + # Optional auxiliary prober for name decision + self._name_prober = name_prober + self._last_order = None + self._seq_counters = None + self._total_seqs = None + self._total_char = None + self._freq_char = None + self.reset() + + def reset(self): + super(SingleByteCharSetProber, self).reset() + # char order of last character + self._last_order = 255 + self._seq_counters = [0] * SequenceLikelihood.get_num_categories() + self._total_seqs = 0 + self._total_char = 0 + # characters that fall in our sampling range + self._freq_char = 0 + + @property + def charset_name(self): + if self._name_prober: + return self._name_prober.charset_name + else: + return self._model.charset_name + + @property + def language(self): + if self._name_prober: + return self._name_prober.language + else: + return self._model.language + + def feed(self, byte_str): + # TODO: Make filter_international_words keep things in self.alphabet + if not self._model.keep_ascii_letters: + byte_str = self.filter_international_words(byte_str) + if not byte_str: + return self.state + char_to_order_map = self._model.char_to_order_map + language_model = self._model.language_model + for char in byte_str: + order = char_to_order_map.get(char, CharacterCategory.UNDEFINED) + # XXX: This was SYMBOL_CAT_ORDER before, with a value of 250, but + # CharacterCategory.SYMBOL is actually 253, so we use CONTROL + # to make it closer to the original intent. The only difference + # is whether or not we count digits and control characters for + # _total_char purposes. + if order < CharacterCategory.CONTROL: + self._total_char += 1 + # TODO: Follow uchardet's lead and discount confidence for frequent + # control characters. 
+ # See https://github.com/BYVoid/uchardet/commit/55b4f23971db61 + if order < self.SAMPLE_SIZE: + self._freq_char += 1 + if self._last_order < self.SAMPLE_SIZE: + self._total_seqs += 1 + if not self._reversed: + lm_cat = language_model[self._last_order][order] + else: + lm_cat = language_model[order][self._last_order] + self._seq_counters[lm_cat] += 1 + self._last_order = order + + charset_name = self._model.charset_name + if self.state == ProbingState.DETECTING: + if self._total_seqs > self.SB_ENOUGH_REL_THRESHOLD: + confidence = self.get_confidence() + if confidence > self.POSITIVE_SHORTCUT_THRESHOLD: + self.logger.debug('%s confidence = %s, we have a winner', + charset_name, confidence) + self._state = ProbingState.FOUND_IT + elif confidence < self.NEGATIVE_SHORTCUT_THRESHOLD: + self.logger.debug('%s confidence = %s, below negative ' + 'shortcut threshhold %s', charset_name, + confidence, + self.NEGATIVE_SHORTCUT_THRESHOLD) + self._state = ProbingState.NOT_ME + + return self.state + + def get_confidence(self): + r = 0.01 + if self._total_seqs > 0: + r = ((1.0 * self._seq_counters[SequenceLikelihood.POSITIVE]) / + self._total_seqs / self._model.typical_positive_ratio) + r = r * self._freq_char / self._total_char + if r >= 1.0: + r = 0.99 + return r diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/sbcsgroupprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/sbcsgroupprober.py new file mode 100644 index 0000000..bdeef4e --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/sbcsgroupprober.py @@ -0,0 +1,83 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetgroupprober import CharSetGroupProber +from .hebrewprober import HebrewProber +from .langbulgarianmodel import (ISO_8859_5_BULGARIAN_MODEL, + WINDOWS_1251_BULGARIAN_MODEL) +from .langgreekmodel import ISO_8859_7_GREEK_MODEL, WINDOWS_1253_GREEK_MODEL +from .langhebrewmodel import WINDOWS_1255_HEBREW_MODEL +# from .langhungarianmodel import (ISO_8859_2_HUNGARIAN_MODEL, +# WINDOWS_1250_HUNGARIAN_MODEL) +from .langrussianmodel import (IBM855_RUSSIAN_MODEL, IBM866_RUSSIAN_MODEL, + ISO_8859_5_RUSSIAN_MODEL, KOI8_R_RUSSIAN_MODEL, + MACCYRILLIC_RUSSIAN_MODEL, + WINDOWS_1251_RUSSIAN_MODEL) +from .langthaimodel import TIS_620_THAI_MODEL +from .langturkishmodel import ISO_8859_9_TURKISH_MODEL +from .sbcharsetprober import SingleByteCharSetProber + + +class SBCSGroupProber(CharSetGroupProber): + def __init__(self): + super(SBCSGroupProber, self).__init__() + hebrew_prober = HebrewProber() + logical_hebrew_prober = SingleByteCharSetProber(WINDOWS_1255_HEBREW_MODEL, + False, hebrew_prober) + # TODO: See if using ISO-8859-8 Hebrew model works better here, since + # it's actually the visual one + visual_hebrew_prober = SingleByteCharSetProber(WINDOWS_1255_HEBREW_MODEL, + True, hebrew_prober) + hebrew_prober.set_model_probers(logical_hebrew_prober, + visual_hebrew_prober) + # TODO: ORDER MATTERS HERE. I changed the order vs what was in master + # and several tests failed that did not before. Some thought + # should be put into the ordering, and we should consider making + # order not matter here, because that is very counter-intuitive. + self.probers = [ + SingleByteCharSetProber(WINDOWS_1251_RUSSIAN_MODEL), + SingleByteCharSetProber(KOI8_R_RUSSIAN_MODEL), + SingleByteCharSetProber(ISO_8859_5_RUSSIAN_MODEL), + SingleByteCharSetProber(MACCYRILLIC_RUSSIAN_MODEL), + SingleByteCharSetProber(IBM866_RUSSIAN_MODEL), + SingleByteCharSetProber(IBM855_RUSSIAN_MODEL), + SingleByteCharSetProber(ISO_8859_7_GREEK_MODEL), + SingleByteCharSetProber(WINDOWS_1253_GREEK_MODEL), + SingleByteCharSetProber(ISO_8859_5_BULGARIAN_MODEL), + SingleByteCharSetProber(WINDOWS_1251_BULGARIAN_MODEL), + # TODO: Restore Hungarian encodings (iso-8859-2 and windows-1250) + # after we retrain model. + # SingleByteCharSetProber(ISO_8859_2_HUNGARIAN_MODEL), + # SingleByteCharSetProber(WINDOWS_1250_HUNGARIAN_MODEL), + SingleByteCharSetProber(TIS_620_THAI_MODEL), + SingleByteCharSetProber(ISO_8859_9_TURKISH_MODEL), + hebrew_prober, + logical_hebrew_prober, + visual_hebrew_prober, + ] + self.reset() diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/sjisprober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/sjisprober.py new file mode 100644 index 0000000..9e29623 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/sjisprober.py @@ -0,0 +1,92 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. 
+# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .mbcharsetprober import MultiByteCharSetProber +from .codingstatemachine import CodingStateMachine +from .chardistribution import SJISDistributionAnalysis +from .jpcntx import SJISContextAnalysis +from .mbcssm import SJIS_SM_MODEL +from .enums import ProbingState, MachineState + + +class SJISProber(MultiByteCharSetProber): + def __init__(self): + super(SJISProber, self).__init__() + self.coding_sm = CodingStateMachine(SJIS_SM_MODEL) + self.distribution_analyzer = SJISDistributionAnalysis() + self.context_analyzer = SJISContextAnalysis() + self.reset() + + def reset(self): + super(SJISProber, self).reset() + self.context_analyzer.reset() + + @property + def charset_name(self): + return self.context_analyzer.charset_name + + @property + def language(self): + return "Japanese" + + def feed(self, byte_str): + for i in range(len(byte_str)): + coding_state = self.coding_sm.next_state(byte_str[i]) + if coding_state == MachineState.ERROR: + self.logger.debug('%s %s prober hit error at byte %s', + self.charset_name, self.language, i) + self._state = ProbingState.NOT_ME + break + elif coding_state == MachineState.ITS_ME: + self._state = ProbingState.FOUND_IT + break + elif coding_state == MachineState.START: + char_len = self.coding_sm.get_current_charlen() + if i == 0: + self._last_char[1] = byte_str[0] + self.context_analyzer.feed(self._last_char[2 - char_len:], + char_len) + self.distribution_analyzer.feed(self._last_char, char_len) + else: + self.context_analyzer.feed(byte_str[i + 1 - char_len:i + 3 + - char_len], char_len) + self.distribution_analyzer.feed(byte_str[i - 1:i + 1], + char_len) + + self._last_char[0] = byte_str[-1] + + if self.state == ProbingState.DETECTING: + if (self.context_analyzer.got_enough_data() and + (self.get_confidence() > self.SHORTCUT_THRESHOLD)): + self._state = ProbingState.FOUND_IT + + return self.state + + def get_confidence(self): + context_conf = self.context_analyzer.get_confidence() + distrib_conf = self.distribution_analyzer.get_confidence() + return max(context_conf, distrib_conf) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/universaldetector.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/universaldetector.py new file mode 100644 index 0000000..055a8ac --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/universaldetector.py @@ -0,0 +1,286 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is Mozilla Universal charset detector code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. 
+# Portions created by the Initial Developer are Copyright (C) 2001 +# the Initial Developer. All Rights Reserved. +# +# Contributor(s): +# Mark Pilgrim - port to Python +# Shy Shalom - original C code +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### +""" +Module containing the UniversalDetector detector class, which is the primary +class a user of ``chardet`` should use. + +:author: Mark Pilgrim (initial port to Python) +:author: Shy Shalom (original C code) +:author: Dan Blanchard (major refactoring for 3.0) +:author: Ian Cordasco +""" + + +import codecs +import logging +import re + +from .charsetgroupprober import CharSetGroupProber +from .enums import InputState, LanguageFilter, ProbingState +from .escprober import EscCharSetProber +from .latin1prober import Latin1Prober +from .mbcsgroupprober import MBCSGroupProber +from .sbcsgroupprober import SBCSGroupProber + + +class UniversalDetector(object): + """ + The ``UniversalDetector`` class underlies the ``chardet.detect`` function + and coordinates all of the different charset probers. + + To get a ``dict`` containing an encoding and its confidence, you can simply + run: + + .. code:: + + u = UniversalDetector() + u.feed(some_bytes) + u.close() + detected = u.result + + """ + + MINIMUM_THRESHOLD = 0.20 + HIGH_BYTE_DETECTOR = re.compile(b'[\x80-\xFF]') + ESC_DETECTOR = re.compile(b'(\033|~{)') + WIN_BYTE_DETECTOR = re.compile(b'[\x80-\x9F]') + ISO_WIN_MAP = {'iso-8859-1': 'Windows-1252', + 'iso-8859-2': 'Windows-1250', + 'iso-8859-5': 'Windows-1251', + 'iso-8859-6': 'Windows-1256', + 'iso-8859-7': 'Windows-1253', + 'iso-8859-8': 'Windows-1255', + 'iso-8859-9': 'Windows-1254', + 'iso-8859-13': 'Windows-1257'} + + def __init__(self, lang_filter=LanguageFilter.ALL): + self._esc_charset_prober = None + self._charset_probers = [] + self.result = None + self.done = None + self._got_data = None + self._input_state = None + self._last_char = None + self.lang_filter = lang_filter + self.logger = logging.getLogger(__name__) + self._has_win_bytes = None + self.reset() + + def reset(self): + """ + Reset the UniversalDetector and all of its probers back to their + initial states. This is called by ``__init__``, so you only need to + call this directly in between analyses of different documents. + """ + self.result = {'encoding': None, 'confidence': 0.0, 'language': None} + self.done = False + self._got_data = False + self._has_win_bytes = False + self._input_state = InputState.PURE_ASCII + self._last_char = b'' + if self._esc_charset_prober: + self._esc_charset_prober.reset() + for prober in self._charset_probers: + prober.reset() + + def feed(self, byte_str): + """ + Takes a chunk of a document and feeds it through all of the relevant + charset probers. 
+ + After calling ``feed``, you can check the value of the ``done`` + attribute to see if you need to continue feeding the + ``UniversalDetector`` more data, or if it has made a prediction + (in the ``result`` attribute). + + .. note:: + You should always call ``close`` when you're done feeding in your + document if ``done`` is not already ``True``. + """ + if self.done: + return + + if not len(byte_str): + return + + if not isinstance(byte_str, bytearray): + byte_str = bytearray(byte_str) + + # First check for known BOMs, since these are guaranteed to be correct + if not self._got_data: + # If the data starts with BOM, we know it is UTF + if byte_str.startswith(codecs.BOM_UTF8): + # EF BB BF UTF-8 with BOM + self.result = {'encoding': "UTF-8-SIG", + 'confidence': 1.0, + 'language': ''} + elif byte_str.startswith((codecs.BOM_UTF32_LE, + codecs.BOM_UTF32_BE)): + # FF FE 00 00 UTF-32, little-endian BOM + # 00 00 FE FF UTF-32, big-endian BOM + self.result = {'encoding': "UTF-32", + 'confidence': 1.0, + 'language': ''} + elif byte_str.startswith(b'\xFE\xFF\x00\x00'): + # FE FF 00 00 UCS-4, unusual octet order BOM (3412) + self.result = {'encoding': "X-ISO-10646-UCS-4-3412", + 'confidence': 1.0, + 'language': ''} + elif byte_str.startswith(b'\x00\x00\xFF\xFE'): + # 00 00 FF FE UCS-4, unusual octet order BOM (2143) + self.result = {'encoding': "X-ISO-10646-UCS-4-2143", + 'confidence': 1.0, + 'language': ''} + elif byte_str.startswith((codecs.BOM_LE, codecs.BOM_BE)): + # FF FE UTF-16, little endian BOM + # FE FF UTF-16, big endian BOM + self.result = {'encoding': "UTF-16", + 'confidence': 1.0, + 'language': ''} + + self._got_data = True + if self.result['encoding'] is not None: + self.done = True + return + + # If none of those matched and we've only see ASCII so far, check + # for high bytes and escape sequences + if self._input_state == InputState.PURE_ASCII: + if self.HIGH_BYTE_DETECTOR.search(byte_str): + self._input_state = InputState.HIGH_BYTE + elif self._input_state == InputState.PURE_ASCII and \ + self.ESC_DETECTOR.search(self._last_char + byte_str): + self._input_state = InputState.ESC_ASCII + + self._last_char = byte_str[-1:] + + # If we've seen escape sequences, use the EscCharSetProber, which + # uses a simple state machine to check for known escape sequences in + # HZ and ISO-2022 encodings, since those are the only encodings that + # use such sequences. + if self._input_state == InputState.ESC_ASCII: + if not self._esc_charset_prober: + self._esc_charset_prober = EscCharSetProber(self.lang_filter) + if self._esc_charset_prober.feed(byte_str) == ProbingState.FOUND_IT: + self.result = {'encoding': + self._esc_charset_prober.charset_name, + 'confidence': + self._esc_charset_prober.get_confidence(), + 'language': + self._esc_charset_prober.language} + self.done = True + # If we've seen high bytes (i.e., those with values greater than 127), + # we need to do more complicated checks using all our multi-byte and + # single-byte probers that are left. The single-byte probers + # use character bigram distributions to determine the encoding, whereas + # the multi-byte probers use a combination of character unigram and + # bigram distributions. 
+ elif self._input_state == InputState.HIGH_BYTE: + if not self._charset_probers: + self._charset_probers = [MBCSGroupProber(self.lang_filter)] + # If we're checking non-CJK encodings, use single-byte prober + if self.lang_filter & LanguageFilter.NON_CJK: + self._charset_probers.append(SBCSGroupProber()) + self._charset_probers.append(Latin1Prober()) + for prober in self._charset_probers: + if prober.feed(byte_str) == ProbingState.FOUND_IT: + self.result = {'encoding': prober.charset_name, + 'confidence': prober.get_confidence(), + 'language': prober.language} + self.done = True + break + if self.WIN_BYTE_DETECTOR.search(byte_str): + self._has_win_bytes = True + + def close(self): + """ + Stop analyzing the current document and come up with a final + prediction. + + :returns: The ``result`` attribute, a ``dict`` with the keys + `encoding`, `confidence`, and `language`. + """ + # Don't bother with checks if we're already done + if self.done: + return self.result + self.done = True + + if not self._got_data: + self.logger.debug('no data received!') + + # Default to ASCII if it is all we've seen so far + elif self._input_state == InputState.PURE_ASCII: + self.result = {'encoding': 'ascii', + 'confidence': 1.0, + 'language': ''} + + # If we have seen non-ASCII, return the best that met MINIMUM_THRESHOLD + elif self._input_state == InputState.HIGH_BYTE: + prober_confidence = None + max_prober_confidence = 0.0 + max_prober = None + for prober in self._charset_probers: + if not prober: + continue + prober_confidence = prober.get_confidence() + if prober_confidence > max_prober_confidence: + max_prober_confidence = prober_confidence + max_prober = prober + if max_prober and (max_prober_confidence > self.MINIMUM_THRESHOLD): + charset_name = max_prober.charset_name + lower_charset_name = max_prober.charset_name.lower() + confidence = max_prober.get_confidence() + # Use Windows encoding name instead of ISO-8859 if we saw any + # extra Windows-specific bytes + if lower_charset_name.startswith('iso-8859'): + if self._has_win_bytes: + charset_name = self.ISO_WIN_MAP.get(lower_charset_name, + charset_name) + self.result = {'encoding': charset_name, + 'confidence': confidence, + 'language': max_prober.language} + + # Log all prober confidences if none met MINIMUM_THRESHOLD + if self.logger.getEffectiveLevel() <= logging.DEBUG: + if self.result['encoding'] is None: + self.logger.debug('no probers hit minimum threshold') + for group_prober in self._charset_probers: + if not group_prober: + continue + if isinstance(group_prober, CharSetGroupProber): + for prober in group_prober.probers: + self.logger.debug('%s %s confidence = %s', + prober.charset_name, + prober.language, + prober.get_confidence()) + else: + self.logger.debug('%s %s confidence = %s', + group_prober.charset_name, + group_prober.language, + group_prober.get_confidence()) + return self.result diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/utf8prober.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/utf8prober.py new file mode 100644 index 0000000..6c3196c --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/utf8prober.py @@ -0,0 +1,82 @@ +######################## BEGIN LICENSE BLOCK ######################## +# The Original Code is mozilla.org code. +# +# The Initial Developer of the Original Code is +# Netscape Communications Corporation. +# Portions created by the Initial Developer are Copyright (C) 1998 +# the Initial Developer. All Rights Reserved. 
+# +# Contributor(s): +# Mark Pilgrim - port to Python +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License, or (at your option) any later version. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA +# 02110-1301 USA +######################### END LICENSE BLOCK ######################### + +from .charsetprober import CharSetProber +from .enums import ProbingState, MachineState +from .codingstatemachine import CodingStateMachine +from .mbcssm import UTF8_SM_MODEL + + + +class UTF8Prober(CharSetProber): + ONE_CHAR_PROB = 0.5 + + def __init__(self): + super(UTF8Prober, self).__init__() + self.coding_sm = CodingStateMachine(UTF8_SM_MODEL) + self._num_mb_chars = None + self.reset() + + def reset(self): + super(UTF8Prober, self).reset() + self.coding_sm.reset() + self._num_mb_chars = 0 + + @property + def charset_name(self): + return "utf-8" + + @property + def language(self): + return "" + + def feed(self, byte_str): + for c in byte_str: + coding_state = self.coding_sm.next_state(c) + if coding_state == MachineState.ERROR: + self._state = ProbingState.NOT_ME + break + elif coding_state == MachineState.ITS_ME: + self._state = ProbingState.FOUND_IT + break + elif coding_state == MachineState.START: + if self.coding_sm.get_current_charlen() >= 2: + self._num_mb_chars += 1 + + if self.state == ProbingState.DETECTING: + if self.get_confidence() > self.SHORTCUT_THRESHOLD: + self._state = ProbingState.FOUND_IT + + return self.state + + def get_confidence(self): + unlike = 0.99 + if self._num_mb_chars < 6: + unlike *= self.ONE_CHAR_PROB ** self._num_mb_chars + return 1.0 - unlike + else: + return unlike diff --git a/python/lib/python3.10/site-packages/pip/_vendor/chardet/version.py b/python/lib/python3.10/site-packages/pip/_vendor/chardet/version.py new file mode 100644 index 0000000..70369b9 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/chardet/version.py @@ -0,0 +1,9 @@ +""" +This module exists only to simplify retrieving the version number of chardet +from within setup.py and from chardet subpackages. + +:author: Dan Blanchard (dan.blanchard@gmail.com) +""" + +__version__ = "4.0.0" +VERSION = __version__.split('.') diff --git a/python/lib/python3.10/site-packages/pip/_vendor/colorama/__init__.py b/python/lib/python3.10/site-packages/pip/_vendor/colorama/__init__.py new file mode 100644 index 0000000..b149ed7 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/colorama/__init__.py @@ -0,0 +1,6 @@ +# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. 
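Before moving on, the arithmetic in ``UTF8Prober.get_confidence`` above deserves a gloss: each multi-byte sequence seen halves the remaining "probably not UTF-8" mass, so confidence climbs steeply from 0.01 toward the 0.99 cap. A sketch of the same formula:

    ONE_CHAR_PROB = 0.5

    def utf8_confidence(num_mb_chars):
        # mirrors UTF8Prober.get_confidence
        unlike = 0.99
        if num_mb_chars < 6:
            unlike *= ONE_CHAR_PROB ** num_mb_chars
            return 1.0 - unlike
        return unlike

    # utf8_confidence(0) -> ~0.01, utf8_confidence(3) -> ~0.876, utf8_confidence(6) -> 0.99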
+from .initialise import init, deinit, reinit, colorama_text +from .ansi import Fore, Back, Style, Cursor +from .ansitowin32 import AnsiToWin32 + +__version__ = '0.4.4' diff --git a/lib/python3.11/site-packages/pip/_vendor/colorama/ansi.py b/python/lib/python3.10/site-packages/pip/_vendor/colorama/ansi.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/colorama/ansi.py rename to python/lib/python3.10/site-packages/pip/_vendor/colorama/ansi.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/colorama/ansitowin32.py b/python/lib/python3.10/site-packages/pip/_vendor/colorama/ansitowin32.py new file mode 100644 index 0000000..6039a05 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/colorama/ansitowin32.py @@ -0,0 +1,258 @@ +# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. +import re +import sys +import os + +from .ansi import AnsiFore, AnsiBack, AnsiStyle, Style, BEL +from .winterm import WinTerm, WinColor, WinStyle +from .win32 import windll, winapi_test + + +winterm = None +if windll is not None: + winterm = WinTerm() + + +class StreamWrapper(object): + ''' + Wraps a stream (such as stdout), acting as a transparent proxy for all + attribute access apart from method 'write()', which is delegated to our + Converter instance. + ''' + def __init__(self, wrapped, converter): + # double-underscore everything to prevent clashes with names of + # attributes on the wrapped stream object. + self.__wrapped = wrapped + self.__convertor = converter + + def __getattr__(self, name): + return getattr(self.__wrapped, name) + + def __enter__(self, *args, **kwargs): + # special method lookup bypasses __getattr__/__getattribute__, see + # https://stackoverflow.com/questions/12632894/why-doesnt-getattr-work-with-exit + # thus, contextlib magic methods are not proxied via __getattr__ + return self.__wrapped.__enter__(*args, **kwargs) + + def __exit__(self, *args, **kwargs): + return self.__wrapped.__exit__(*args, **kwargs) + + def write(self, text): + self.__convertor.write(text) + + def isatty(self): + stream = self.__wrapped + if 'PYCHARM_HOSTED' in os.environ: + if stream is not None and (stream is sys.__stdout__ or stream is sys.__stderr__): + return True + try: + stream_isatty = stream.isatty + except AttributeError: + return False + else: + return stream_isatty() + + @property + def closed(self): + stream = self.__wrapped + try: + return stream.closed + except AttributeError: + return True + + +class AnsiToWin32(object): + ''' + Implements a 'write()' method which, on Windows, will strip ANSI character + sequences from the text, and if outputting to a tty, will convert them into + win32 function calls. + ''' + ANSI_CSI_RE = re.compile('\001?\033\\[((?:\\d|;)*)([a-zA-Z])\002?') # Control Sequence Introducer + ANSI_OSC_RE = re.compile('\001?\033\\]([^\a]*)(\a)\002?') # Operating System Command + + def __init__(self, wrapped, convert=None, strip=None, autoreset=False): + # The wrapped stream (normally sys.stdout or sys.stderr) + self.wrapped = wrapped + + # should we reset colors to defaults after every .write() + self.autoreset = autoreset + + # create the proxy wrapping our output stream + self.stream = StreamWrapper(wrapped, self) + + on_windows = os.name == 'nt' + # We test if the WinAPI works, because even if we are on Windows + # we may be using a terminal that doesn't support the WinAPI + # (e.g. Cygwin Terminal). In this case it's up to the terminal + # to support the ANSI codes. 
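# In practice this wrapping is installed by colorama's init() rather than by
# constructing AnsiToWin32 directly; a minimal sketch of the documented API:
#
#     from colorama import init, Fore, Style
#     init()                                    # wraps sys.stdout/sys.stderr when needed
#     print(Fore.RED + 'error' + Style.RESET_ALL)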
+ conversion_supported = on_windows and winapi_test() + + # should we strip ANSI sequences from our output? + if strip is None: + strip = conversion_supported or (not self.stream.closed and not self.stream.isatty()) + self.strip = strip + + # should we should convert ANSI sequences into win32 calls? + if convert is None: + convert = conversion_supported and not self.stream.closed and self.stream.isatty() + self.convert = convert + + # dict of ansi codes to win32 functions and parameters + self.win32_calls = self.get_win32_calls() + + # are we wrapping stderr? + self.on_stderr = self.wrapped is sys.stderr + + def should_wrap(self): + ''' + True if this class is actually needed. If false, then the output + stream will not be affected, nor will win32 calls be issued, so + wrapping stdout is not actually required. This will generally be + False on non-Windows platforms, unless optional functionality like + autoreset has been requested using kwargs to init() + ''' + return self.convert or self.strip or self.autoreset + + def get_win32_calls(self): + if self.convert and winterm: + return { + AnsiStyle.RESET_ALL: (winterm.reset_all, ), + AnsiStyle.BRIGHT: (winterm.style, WinStyle.BRIGHT), + AnsiStyle.DIM: (winterm.style, WinStyle.NORMAL), + AnsiStyle.NORMAL: (winterm.style, WinStyle.NORMAL), + AnsiFore.BLACK: (winterm.fore, WinColor.BLACK), + AnsiFore.RED: (winterm.fore, WinColor.RED), + AnsiFore.GREEN: (winterm.fore, WinColor.GREEN), + AnsiFore.YELLOW: (winterm.fore, WinColor.YELLOW), + AnsiFore.BLUE: (winterm.fore, WinColor.BLUE), + AnsiFore.MAGENTA: (winterm.fore, WinColor.MAGENTA), + AnsiFore.CYAN: (winterm.fore, WinColor.CYAN), + AnsiFore.WHITE: (winterm.fore, WinColor.GREY), + AnsiFore.RESET: (winterm.fore, ), + AnsiFore.LIGHTBLACK_EX: (winterm.fore, WinColor.BLACK, True), + AnsiFore.LIGHTRED_EX: (winterm.fore, WinColor.RED, True), + AnsiFore.LIGHTGREEN_EX: (winterm.fore, WinColor.GREEN, True), + AnsiFore.LIGHTYELLOW_EX: (winterm.fore, WinColor.YELLOW, True), + AnsiFore.LIGHTBLUE_EX: (winterm.fore, WinColor.BLUE, True), + AnsiFore.LIGHTMAGENTA_EX: (winterm.fore, WinColor.MAGENTA, True), + AnsiFore.LIGHTCYAN_EX: (winterm.fore, WinColor.CYAN, True), + AnsiFore.LIGHTWHITE_EX: (winterm.fore, WinColor.GREY, True), + AnsiBack.BLACK: (winterm.back, WinColor.BLACK), + AnsiBack.RED: (winterm.back, WinColor.RED), + AnsiBack.GREEN: (winterm.back, WinColor.GREEN), + AnsiBack.YELLOW: (winterm.back, WinColor.YELLOW), + AnsiBack.BLUE: (winterm.back, WinColor.BLUE), + AnsiBack.MAGENTA: (winterm.back, WinColor.MAGENTA), + AnsiBack.CYAN: (winterm.back, WinColor.CYAN), + AnsiBack.WHITE: (winterm.back, WinColor.GREY), + AnsiBack.RESET: (winterm.back, ), + AnsiBack.LIGHTBLACK_EX: (winterm.back, WinColor.BLACK, True), + AnsiBack.LIGHTRED_EX: (winterm.back, WinColor.RED, True), + AnsiBack.LIGHTGREEN_EX: (winterm.back, WinColor.GREEN, True), + AnsiBack.LIGHTYELLOW_EX: (winterm.back, WinColor.YELLOW, True), + AnsiBack.LIGHTBLUE_EX: (winterm.back, WinColor.BLUE, True), + AnsiBack.LIGHTMAGENTA_EX: (winterm.back, WinColor.MAGENTA, True), + AnsiBack.LIGHTCYAN_EX: (winterm.back, WinColor.CYAN, True), + AnsiBack.LIGHTWHITE_EX: (winterm.back, WinColor.GREY, True), + } + return dict() + + def write(self, text): + if self.strip or self.convert: + self.write_and_convert(text) + else: + self.wrapped.write(text) + self.wrapped.flush() + if self.autoreset: + self.reset_all() + + + def reset_all(self): + if self.convert: + self.call_win32('m', (0,)) + elif not self.strip and not self.stream.closed: + 
self.wrapped.write(Style.RESET_ALL) + + + def write_and_convert(self, text): + ''' + Write the given text to our wrapped stream, stripping any ANSI + sequences from the text, and optionally converting them into win32 + calls. + ''' + cursor = 0 + text = self.convert_osc(text) + for match in self.ANSI_CSI_RE.finditer(text): + start, end = match.span() + self.write_plain_text(text, cursor, start) + self.convert_ansi(*match.groups()) + cursor = end + self.write_plain_text(text, cursor, len(text)) + + + def write_plain_text(self, text, start, end): + if start < end: + self.wrapped.write(text[start:end]) + self.wrapped.flush() + + + def convert_ansi(self, paramstring, command): + if self.convert: + params = self.extract_params(command, paramstring) + self.call_win32(command, params) + + + def extract_params(self, command, paramstring): + if command in 'Hf': + params = tuple(int(p) if len(p) != 0 else 1 for p in paramstring.split(';')) + while len(params) < 2: + # defaults: + params = params + (1,) + else: + params = tuple(int(p) for p in paramstring.split(';') if len(p) != 0) + if len(params) == 0: + # defaults: + if command in 'JKm': + params = (0,) + elif command in 'ABCD': + params = (1,) + + return params + + + def call_win32(self, command, params): + if command == 'm': + for param in params: + if param in self.win32_calls: + func_args = self.win32_calls[param] + func = func_args[0] + args = func_args[1:] + kwargs = dict(on_stderr=self.on_stderr) + func(*args, **kwargs) + elif command in 'J': + winterm.erase_screen(params[0], on_stderr=self.on_stderr) + elif command in 'K': + winterm.erase_line(params[0], on_stderr=self.on_stderr) + elif command in 'Hf': # cursor position - absolute + winterm.set_cursor_position(params, on_stderr=self.on_stderr) + elif command in 'ABCD': # cursor position - relative + n = params[0] + # A - up, B - down, C - forward, D - back + x, y = {'A': (0, -n), 'B': (0, n), 'C': (n, 0), 'D': (-n, 0)}[command] + winterm.cursor_adjust(x, y, on_stderr=self.on_stderr) + + + def convert_osc(self, text): + for match in self.ANSI_OSC_RE.finditer(text): + start, end = match.span() + text = text[:start] + text[end:] + paramstring, command = match.groups() + if command == BEL: + if paramstring.count(";") == 1: + params = paramstring.split(";") + # 0 - change title and icon (we will only change title) + # 1 - change icon (we don't support this) + # 2 - change title + if params[0] in '02': + winterm.set_title(params[1]) + return text diff --git a/lib/python3.11/site-packages/pip/_vendor/colorama/initialise.py b/python/lib/python3.10/site-packages/pip/_vendor/colorama/initialise.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/colorama/initialise.py rename to python/lib/python3.10/site-packages/pip/_vendor/colorama/initialise.py diff --git a/lib/python3.11/site-packages/pip/_vendor/colorama/win32.py b/python/lib/python3.10/site-packages/pip/_vendor/colorama/win32.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/colorama/win32.py rename to python/lib/python3.10/site-packages/pip/_vendor/colorama/win32.py diff --git a/lib/python3.11/site-packages/pip/_vendor/colorama/winterm.py b/python/lib/python3.10/site-packages/pip/_vendor/colorama/winterm.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/colorama/winterm.py rename to python/lib/python3.10/site-packages/pip/_vendor/colorama/winterm.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/distlib/__init__.py 
b/python/lib/python3.10/site-packages/pip/_vendor/distlib/__init__.py new file mode 100644 index 0000000..6878387 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/distlib/__init__.py @@ -0,0 +1,23 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2019 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +import logging + +__version__ = '0.3.4' + +class DistlibException(Exception): + pass + +try: + from logging import NullHandler +except ImportError: # pragma: no cover + class NullHandler(logging.Handler): + def handle(self, record): pass + def emit(self, record): pass + def createLock(self): self.lock = None + +logger = logging.getLogger(__name__) +logger.addHandler(NullHandler()) diff --git a/lib/python3.11/site-packages/pip/_vendor/distlib/compat.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/compat.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/distlib/compat.py rename to python/lib/python3.10/site-packages/pip/_vendor/distlib/compat.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/distlib/database.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/database.py new file mode 100644 index 0000000..f486994 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/distlib/database.py @@ -0,0 +1,1345 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2017 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +"""PEP 376 implementation.""" + +from __future__ import unicode_literals + +import base64 +import codecs +import contextlib +import hashlib +import logging +import os +import posixpath +import sys +import zipimport + +from . import DistlibException, resources +from .compat import StringIO +from .version import get_scheme, UnsupportedVersionError +from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME, + LEGACY_METADATA_FILENAME) +from .util import (parse_requirement, cached_property, parse_name_and_version, + read_exports, write_exports, CSVReader, CSVWriter) + + +__all__ = ['Distribution', 'BaseInstalledDistribution', + 'InstalledDistribution', 'EggInfoDistribution', + 'DistributionPath'] + + +logger = logging.getLogger(__name__) + +EXPORTS_FILENAME = 'pydist-exports.json' +COMMANDS_FILENAME = 'pydist-commands.json' + +DIST_FILES = ('INSTALLER', METADATA_FILENAME, 'RECORD', 'REQUESTED', + 'RESOURCES', EXPORTS_FILENAME, 'SHARED') + +DISTINFO_EXT = '.dist-info' + + +class _Cache(object): + """ + A simple cache mapping names and .dist-info paths to distributions + """ + def __init__(self): + """ + Initialise an instance. There is normally one for each DistributionPath. + """ + self.name = {} + self.path = {} + self.generated = False + + def clear(self): + """ + Clear the cache, setting it to its initial state. + """ + self.name.clear() + self.path.clear() + self.generated = False + + def add(self, dist): + """ + Add a distribution to the cache. + :param dist: The distribution to add. + """ + if dist.path not in self.path: + self.path[dist.path] = dist + self.name.setdefault(dist.key, []).append(dist) + + +class DistributionPath(object): + """ + Represents a set of distributions installed on a path (typically sys.path). + """ + def __init__(self, path=None, include_egg=False): + """ + Create an instance from a path, optionally including legacy (distutils/ + setuptools/distribute) distributions. + :param path: The path to use, as a list of directories. 
If not specified, + sys.path is used. + :param include_egg: If True, this instance will look for and return legacy + distributions as well as those based on PEP 376. + """ + if path is None: + path = sys.path + self.path = path + self._include_dist = True + self._include_egg = include_egg + + self._cache = _Cache() + self._cache_egg = _Cache() + self._cache_enabled = True + self._scheme = get_scheme('default') + + def _get_cache_enabled(self): + return self._cache_enabled + + def _set_cache_enabled(self, value): + self._cache_enabled = value + + cache_enabled = property(_get_cache_enabled, _set_cache_enabled) + + def clear_cache(self): + """ + Clears the internal cache. + """ + self._cache.clear() + self._cache_egg.clear() + + + def _yield_distributions(self): + """ + Yield .dist-info and/or .egg(-info) distributions. + """ + # We need to check if we've seen some resources already, because on + # some Linux systems (e.g. some Debian/Ubuntu variants) there are + # symlinks which alias other files in the environment. + seen = set() + for path in self.path: + finder = resources.finder_for_path(path) + if finder is None: + continue + r = finder.find('') + if not r or not r.is_container: + continue + rset = sorted(r.resources) + for entry in rset: + r = finder.find(entry) + if not r or r.path in seen: + continue + try: + if self._include_dist and entry.endswith(DISTINFO_EXT): + possible_filenames = [METADATA_FILENAME, + WHEEL_METADATA_FILENAME, + LEGACY_METADATA_FILENAME] + for metadata_filename in possible_filenames: + metadata_path = posixpath.join(entry, metadata_filename) + pydist = finder.find(metadata_path) + if pydist: + break + else: + continue + + with contextlib.closing(pydist.as_stream()) as stream: + metadata = Metadata(fileobj=stream, scheme='legacy') + logger.debug('Found %s', r.path) + seen.add(r.path) + yield new_dist_class(r.path, metadata=metadata, + env=self) + elif self._include_egg and entry.endswith(('.egg-info', + '.egg')): + logger.debug('Found %s', r.path) + seen.add(r.path) + yield old_dist_class(r.path, self) + except Exception as e: + msg = 'Unable to read distribution at %s, perhaps due to bad metadata: %s' + logger.warning(msg, r.path, e) + import warnings + warnings.warn(msg % (r.path, e), stacklevel=2) + + def _generate_cache(self): + """ + Scan the path for distributions and populate the cache with + those that are found. + """ + gen_dist = not self._cache.generated + gen_egg = self._include_egg and not self._cache_egg.generated + if gen_dist or gen_egg: + for dist in self._yield_distributions(): + if isinstance(dist, InstalledDistribution): + self._cache.add(dist) + else: + self._cache_egg.add(dist) + + if gen_dist: + self._cache.generated = True + if gen_egg: + self._cache_egg.generated = True + + @classmethod + def distinfo_dirname(cls, name, version): + """ + The *name* and *version* parameters are converted into their + filename-escaped form, i.e. any ``'-'`` characters are replaced + with ``'_'`` other than the one in ``'dist-info'`` and the one + separating the name from the version number. + + :parameter name: is converted to a standard distribution name by replacing + any runs of non- alphanumeric characters with a single + ``'-'``. + :type name: string + :parameter version: is converted to a standard version string. Spaces + become dots, and all other non-alphanumeric characters + (except dots) become dashes, with runs of multiple + dashes condensed to a single dash. 
+ :type version: string + :returns: directory name + :rtype: string""" + name = name.replace('-', '_') + return '-'.join([name, version]) + DISTINFO_EXT + + def get_distributions(self): + """ + Provides an iterator that looks for distributions and returns + :class:`InstalledDistribution` or + :class:`EggInfoDistribution` instances for each one of them. + + :rtype: iterator of :class:`InstalledDistribution` and + :class:`EggInfoDistribution` instances + """ + if not self._cache_enabled: + for dist in self._yield_distributions(): + yield dist + else: + self._generate_cache() + + for dist in self._cache.path.values(): + yield dist + + if self._include_egg: + for dist in self._cache_egg.path.values(): + yield dist + + def get_distribution(self, name): + """ + Looks for a named distribution on the path. + + This function only returns the first result found, as no more than one + value is expected. If nothing is found, ``None`` is returned. + + :rtype: :class:`InstalledDistribution`, :class:`EggInfoDistribution` + or ``None`` + """ + result = None + name = name.lower() + if not self._cache_enabled: + for dist in self._yield_distributions(): + if dist.key == name: + result = dist + break + else: + self._generate_cache() + + if name in self._cache.name: + result = self._cache.name[name][0] + elif self._include_egg and name in self._cache_egg.name: + result = self._cache_egg.name[name][0] + return result + + def provides_distribution(self, name, version=None): + """ + Iterates over all distributions to find which distributions provide *name*. + If a *version* is provided, it will be used to filter the results. + + This function only returns the first result found, since no more than + one values are expected. If the directory is not found, returns ``None``. + + :parameter version: a version specifier that indicates the version + required, conforming to the format in ``PEP-345`` + + :type name: string + :type version: string + """ + matcher = None + if version is not None: + try: + matcher = self._scheme.matcher('%s (%s)' % (name, version)) + except ValueError: + raise DistlibException('invalid name or version: %r, %r' % + (name, version)) + + for dist in self.get_distributions(): + # We hit a problem on Travis where enum34 was installed and doesn't + # have a provides attribute ... + if not hasattr(dist, 'provides'): + logger.debug('No "provides": %s', dist) + else: + provided = dist.provides + + for p in provided: + p_name, p_ver = parse_name_and_version(p) + if matcher is None: + if p_name == name: + yield dist + break + else: + if p_name == name and matcher.match(p_ver): + yield dist + break + + def get_file_path(self, name, relative_path): + """ + Return the path to a resource file. + """ + dist = self.get_distribution(name) + if dist is None: + raise LookupError('no distribution named %r found' % name) + return dist.get_resource_path(relative_path) + + def get_exported_entries(self, category, name=None): + """ + Return all of the exported entries in a particular category. + + :param category: The category to search for entries. + :param name: If specified, only entries with that name are returned. + """ + for dist in self.get_distributions(): + r = dist.exports + if category in r: + d = r[category] + if name is not None: + if name in d: + yield d[name] + else: + for v in d.values(): + yield v + + +class Distribution(object): + """ + A base class for distributions, whether installed or from indexes. + Either way, it must have some metadata, so that's all that's needed + for construction. 
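Instances are typically obtained through a :class:`DistributionPath` rather than constructed directly; a sketch (the package name is hypothetical):

    dp = DistributionPath(include_egg=True)   # scans sys.path by default
    dist = dp.get_distribution('pkg')         # case-insensitive lookup
    if dist is not None:
        print(dist.name_and_version)          # e.g. 'pkg (1.0)'
        print(sorted(dist.run_requires))      # runtime requirements from the metadata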
+ """ + + build_time_dependency = False + """ + Set to True if it's known to be only a build-time dependency (i.e. + not needed after installation). + """ + + requested = False + """A boolean that indicates whether the ``REQUESTED`` metadata file is + present (in other words, whether the package was installed by user + request or it was installed as a dependency).""" + + def __init__(self, metadata): + """ + Initialise an instance. + :param metadata: The instance of :class:`Metadata` describing this + distribution. + """ + self.metadata = metadata + self.name = metadata.name + self.key = self.name.lower() # for case-insensitive comparisons + self.version = metadata.version + self.locator = None + self.digest = None + self.extras = None # additional features requested + self.context = None # environment marker overrides + self.download_urls = set() + self.digests = {} + + @property + def source_url(self): + """ + The source archive download URL for this distribution. + """ + return self.metadata.source_url + + download_url = source_url # Backward compatibility + + @property + def name_and_version(self): + """ + A utility property which displays the name and version in parentheses. + """ + return '%s (%s)' % (self.name, self.version) + + @property + def provides(self): + """ + A set of distribution names and versions provided by this distribution. + :return: A set of "name (version)" strings. + """ + plist = self.metadata.provides + s = '%s (%s)' % (self.name, self.version) + if s not in plist: + plist.append(s) + return plist + + def _get_requirements(self, req_attr): + md = self.metadata + logger.debug('Getting requirements from metadata %r', md.todict()) + reqts = getattr(md, req_attr) + return set(md.get_requirements(reqts, extras=self.extras, + env=self.context)) + + @property + def run_requires(self): + return self._get_requirements('run_requires') + + @property + def meta_requires(self): + return self._get_requirements('meta_requires') + + @property + def build_requires(self): + return self._get_requirements('build_requires') + + @property + def test_requires(self): + return self._get_requirements('test_requires') + + @property + def dev_requires(self): + return self._get_requirements('dev_requires') + + def matches_requirement(self, req): + """ + Say if this instance matches (fulfills) a requirement. + :param req: The requirement to match. + :rtype req: str + :return: True if it matches, else False. + """ + # Requirement may contain extras - parse to lose those + # from what's passed to the matcher + r = parse_requirement(req) + scheme = get_scheme(self.metadata.scheme) + try: + matcher = scheme.matcher(r.requirement) + except UnsupportedVersionError: + # XXX compat-mode if cannot read the version + logger.warning('could not read version %r - using name only', + req) + name = req.split()[0] + matcher = scheme.matcher(name) + + name = matcher.key # case-insensitive + + result = False + for p in self.provides: + p_name, p_ver = parse_name_and_version(p) + if p_name != name: + continue + try: + result = matcher.match(p_ver) + break + except UnsupportedVersionError: + pass + return result + + def __repr__(self): + """ + Return a textual representation of this instance, + """ + if self.source_url: + suffix = ' [%s]' % self.source_url + else: + suffix = '' + return '' % (self.name, self.version, suffix) + + def __eq__(self, other): + """ + See if this distribution is the same as another. + :param other: The distribution to compare with. To be equal to one + another. 
distributions must have the same type, name, + version and source_url. + :return: True if it is the same, else False. + """ + if type(other) is not type(self): + result = False + else: + result = (self.name == other.name and + self.version == other.version and + self.source_url == other.source_url) + return result + + def __hash__(self): + """ + Compute hash in a way which matches the equality test. + """ + return hash(self.name) + hash(self.version) + hash(self.source_url) + + +class BaseInstalledDistribution(Distribution): + """ + This is the base class for installed distributions (whether PEP 376 or + legacy). + """ + + hasher = None + + def __init__(self, metadata, path, env=None): + """ + Initialise an instance. + :param metadata: An instance of :class:`Metadata` which describes the + distribution. This will normally have been initialised + from a metadata file in the ``path``. + :param path: The path of the ``.dist-info`` or ``.egg-info`` + directory for the distribution. + :param env: This is normally the :class:`DistributionPath` + instance where this distribution was found. + """ + super(BaseInstalledDistribution, self).__init__(metadata) + self.path = path + self.dist_path = env + + def get_hash(self, data, hasher=None): + """ + Get the hash of some data, using a particular hash algorithm, if + specified. + + :param data: The data to be hashed. + :type data: bytes + :param hasher: The name of a hash implementation, supported by hashlib, + or ``None``. Examples of valid values are ``'sha1'``, + ``'sha224'``, ``'sha384'``, '``sha256'``, ``'md5'`` and + ``'sha512'``. If no hasher is specified, the ``hasher`` + attribute of the :class:`InstalledDistribution` instance + is used. If the hasher is determined to be ``None``, MD5 + is used as the hashing algorithm. + :returns: The hash of the data. If a hasher was explicitly specified, + the returned hash will be prefixed with the specified hasher + followed by '='. + :rtype: str + """ + if hasher is None: + hasher = self.hasher + if hasher is None: + hasher = hashlib.md5 + prefix = '' + else: + hasher = getattr(hashlib, hasher) + prefix = '%s=' % self.hasher + digest = hasher(data).digest() + digest = base64.urlsafe_b64encode(digest).rstrip(b'=').decode('ascii') + return '%s%s' % (prefix, digest) + + +class InstalledDistribution(BaseInstalledDistribution): + """ + Created with the *path* of the ``.dist-info`` directory provided to the + constructor. It reads the metadata contained in ``pydist.json`` when it is + instantiated., or uses a passed in Metadata instance (useful for when + dry-run mode is being used). 
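For example (the ``.dist-info`` path is hypothetical):

    dist = InstalledDistribution('/srv/venv/lib/python3.10/site-packages/pkg-1.0.dist-info')
    for path, hash_value, size in dist.list_installed_files():
        print(path, hash_value, size)         # rows exactly as stored in RECORD
    print(dist.check_installed_files())       # [] when every size and hash matches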
+ """ + + hasher = 'sha256' + + def __init__(self, path, metadata=None, env=None): + self.modules = [] + self.finder = finder = resources.finder_for_path(path) + if finder is None: + raise ValueError('finder unavailable for %s' % path) + if env and env._cache_enabled and path in env._cache.path: + metadata = env._cache.path[path].metadata + elif metadata is None: + r = finder.find(METADATA_FILENAME) + # Temporary - for Wheel 0.23 support + if r is None: + r = finder.find(WHEEL_METADATA_FILENAME) + # Temporary - for legacy support + if r is None: + r = finder.find(LEGACY_METADATA_FILENAME) + if r is None: + raise ValueError('no %s found in %s' % (METADATA_FILENAME, + path)) + with contextlib.closing(r.as_stream()) as stream: + metadata = Metadata(fileobj=stream, scheme='legacy') + + super(InstalledDistribution, self).__init__(metadata, path, env) + + if env and env._cache_enabled: + env._cache.add(self) + + r = finder.find('REQUESTED') + self.requested = r is not None + p = os.path.join(path, 'top_level.txt') + if os.path.exists(p): + with open(p, 'rb') as f: + data = f.read().decode('utf-8') + self.modules = data.splitlines() + + def __repr__(self): + return '' % ( + self.name, self.version, self.path) + + def __str__(self): + return "%s %s" % (self.name, self.version) + + def _get_records(self): + """ + Get the list of installed files for the distribution + :return: A list of tuples of path, hash and size. Note that hash and + size might be ``None`` for some entries. The path is exactly + as stored in the file (which is as in PEP 376). + """ + results = [] + r = self.get_distinfo_resource('RECORD') + with contextlib.closing(r.as_stream()) as stream: + with CSVReader(stream=stream) as record_reader: + # Base location is parent dir of .dist-info dir + #base_location = os.path.dirname(self.path) + #base_location = os.path.abspath(base_location) + for row in record_reader: + missing = [None for i in range(len(row), 3)] + path, checksum, size = row + missing + #if not os.path.isabs(path): + # path = path.replace('/', os.sep) + # path = os.path.join(base_location, path) + results.append((path, checksum, size)) + return results + + @cached_property + def exports(self): + """ + Return the information exported by this distribution. + :return: A dictionary of exports, mapping an export category to a dict + of :class:`ExportEntry` instances describing the individual + export entries, and keyed by name. + """ + result = {} + r = self.get_distinfo_resource(EXPORTS_FILENAME) + if r: + result = self.read_exports() + return result + + def read_exports(self): + """ + Read exports data from a file in .ini format. + + :return: A dictionary of exports, mapping an export category to a list + of :class:`ExportEntry` instances describing the individual + export entries. + """ + result = {} + r = self.get_distinfo_resource(EXPORTS_FILENAME) + if r: + with contextlib.closing(r.as_stream()) as stream: + result = read_exports(stream) + return result + + def write_exports(self, exports): + """ + Write a dictionary of exports to a file in .ini format. + :param exports: A dictionary of exports, mapping an export category to + a list of :class:`ExportEntry` instances describing the + individual export entries. + """ + rf = self.get_distinfo_file(EXPORTS_FILENAME) + with open(rf, 'w') as f: + write_exports(exports, f) + + def get_resource_path(self, relative_path): + """ + NOTE: This API may change in the future. + + Return the absolute path to a resource file with the given relative + path. 
+ + :param relative_path: The path, relative to .dist-info, of the resource + of interest. + :return: The absolute path where the resource is to be found. + """ + r = self.get_distinfo_resource('RESOURCES') + with contextlib.closing(r.as_stream()) as stream: + with CSVReader(stream=stream) as resources_reader: + for relative, destination in resources_reader: + if relative == relative_path: + return destination + raise KeyError('no resource file with relative path %r ' + 'is installed' % relative_path) + + def list_installed_files(self): + """ + Iterates over the ``RECORD`` entries and returns a tuple + ``(path, hash, size)`` for each line. + + :returns: iterator of (path, hash, size) + """ + for result in self._get_records(): + yield result + + def write_installed_files(self, paths, prefix, dry_run=False): + """ + Writes the ``RECORD`` file, using the ``paths`` iterable passed in. Any + existing ``RECORD`` file is silently overwritten. + + prefix is used to determine when to write absolute paths. + """ + prefix = os.path.join(prefix, '') + base = os.path.dirname(self.path) + base_under_prefix = base.startswith(prefix) + base = os.path.join(base, '') + record_path = self.get_distinfo_file('RECORD') + logger.info('creating %s', record_path) + if dry_run: + return None + with CSVWriter(record_path) as writer: + for path in paths: + if os.path.isdir(path) or path.endswith(('.pyc', '.pyo')): + # do not put size and hash, as in PEP-376 + hash_value = size = '' + else: + size = '%d' % os.path.getsize(path) + with open(path, 'rb') as fp: + hash_value = self.get_hash(fp.read()) + if path.startswith(base) or (base_under_prefix and + path.startswith(prefix)): + path = os.path.relpath(path, base) + writer.writerow((path, hash_value, size)) + + # add the RECORD file itself + if record_path.startswith(base): + record_path = os.path.relpath(record_path, base) + writer.writerow((record_path, '', '')) + return record_path + + def check_installed_files(self): + """ + Checks that the hashes and sizes of the files in ``RECORD`` are + matched by the files themselves. Returns a (possibly empty) list of + mismatches. Each entry in the mismatch list will be a tuple consisting + of the path, 'exists', 'size' or 'hash' according to what didn't match + (existence is checked first, then size, then hash), the expected + value and the actual value. + """ + mismatches = [] + base = os.path.dirname(self.path) + record_path = self.get_distinfo_file('RECORD') + for path, hash_value, size in self.list_installed_files(): + if not os.path.isabs(path): + path = os.path.join(base, path) + if path == record_path: + continue + if not os.path.exists(path): + mismatches.append((path, 'exists', True, False)) + elif os.path.isfile(path): + actual_size = str(os.path.getsize(path)) + if size and actual_size != size: + mismatches.append((path, 'size', size, actual_size)) + elif hash_value: + if '=' in hash_value: + hasher = hash_value.split('=', 1)[0] + else: + hasher = None + + with open(path, 'rb') as f: + actual_hash = self.get_hash(f.read(), hasher) + if actual_hash != hash_value: + mismatches.append((path, 'hash', hash_value, actual_hash)) + return mismatches + + @cached_property + def shared_locations(self): + """ + A dictionary of shared locations whose keys are in the set 'prefix', + 'purelib', 'platlib', 'scripts', 'headers', 'data' and 'namespace'. + The corresponding value is the absolute path of that category for + this distribution, and takes into account any paths selected by the + user at installation time (e.g. 
via command-line arguments). In the + case of the 'namespace' key, this would be a list of absolute paths + for the roots of namespace packages in this distribution. + + The first time this property is accessed, the relevant information is + read from the SHARED file in the .dist-info directory. + """ + result = {} + shared_path = os.path.join(self.path, 'SHARED') + if os.path.isfile(shared_path): + with codecs.open(shared_path, 'r', encoding='utf-8') as f: + lines = f.read().splitlines() + for line in lines: + key, value = line.split('=', 1) + if key == 'namespace': + result.setdefault(key, []).append(value) + else: + result[key] = value + return result + + def write_shared_locations(self, paths, dry_run=False): + """ + Write shared location information to the SHARED file in .dist-info. + :param paths: A dictionary as described in the documentation for + :meth:`shared_locations`. + :param dry_run: If True, the action is logged but no file is actually + written. + :return: The path of the file written to. + """ + shared_path = os.path.join(self.path, 'SHARED') + logger.info('creating %s', shared_path) + if dry_run: + return None + lines = [] + for key in ('prefix', 'lib', 'headers', 'scripts', 'data'): + path = paths[key] + if os.path.isdir(paths[key]): + lines.append('%s=%s' % (key, path)) + for ns in paths.get('namespace', ()): + lines.append('namespace=%s' % ns) + + with codecs.open(shared_path, 'w', encoding='utf-8') as f: + f.write('\n'.join(lines)) + return shared_path + + def get_distinfo_resource(self, path): + if path not in DIST_FILES: + raise DistlibException('invalid path for a dist-info file: ' + '%r at %r' % (path, self.path)) + finder = resources.finder_for_path(self.path) + if finder is None: + raise DistlibException('Unable to get a finder for %s' % self.path) + return finder.find(path) + + def get_distinfo_file(self, path): + """ + Returns a path located under the ``.dist-info`` directory. Returns a + string representing the path. + + :parameter path: a ``'/'``-separated path relative to the + ``.dist-info`` directory or an absolute path; + If *path* is an absolute path and doesn't start + with the ``.dist-info`` directory path, + a :class:`DistlibException` is raised + :type path: str + :rtype: str + """ + # Check if it is an absolute path # XXX use relpath, add tests + if path.find(os.sep) >= 0: + # it's an absolute path? + distinfo_dirname, path = path.split(os.sep)[-2:] + if distinfo_dirname != self.path.split(os.sep)[-1]: + raise DistlibException( + 'dist-info file %r does not belong to the %r %s ' + 'distribution' % (path, self.name, self.version)) + + # The file must be relative + if path not in DIST_FILES: + raise DistlibException('invalid path for a dist-info file: ' + '%r at %r' % (path, self.path)) + + return os.path.join(self.path, path) + + def list_distinfo_files(self): + """ + Iterates over the ``RECORD`` entries and returns paths for each line if + the path is pointing to a file located in the ``.dist-info`` directory + or one of its subdirectories. 
+ + :returns: iterator of paths + """ + base = os.path.dirname(self.path) + for path, checksum, size in self._get_records(): + # XXX add separator or use real relpath algo + if not os.path.isabs(path): + path = os.path.join(base, path) + if path.startswith(self.path): + yield path + + def __eq__(self, other): + return (isinstance(other, InstalledDistribution) and + self.path == other.path) + + # See http://docs.python.org/reference/datamodel#object.__hash__ + __hash__ = object.__hash__ + + +class EggInfoDistribution(BaseInstalledDistribution): + """Created with the *path* of the ``.egg-info`` directory or file provided + to the constructor. It reads the metadata contained in the file itself, or + if the given path happens to be a directory, the metadata is read from the + file ``PKG-INFO`` under that directory.""" + + requested = True # as we have no way of knowing, assume it was + shared_locations = {} + + def __init__(self, path, env=None): + def set_name_and_version(s, n, v): + s.name = n + s.key = n.lower() # for case-insensitive comparisons + s.version = v + + self.path = path + self.dist_path = env + if env and env._cache_enabled and path in env._cache_egg.path: + metadata = env._cache_egg.path[path].metadata + set_name_and_version(self, metadata.name, metadata.version) + else: + metadata = self._get_metadata(path) + + # Need to be set before caching + set_name_and_version(self, metadata.name, metadata.version) + + if env and env._cache_enabled: + env._cache_egg.add(self) + super(EggInfoDistribution, self).__init__(metadata, path, env) + + def _get_metadata(self, path): + requires = None + + def parse_requires_data(data): + """Create a list of dependencies from a requires.txt file. + + *data*: the contents of a setuptools-produced requires.txt file. + """ + reqs = [] + lines = data.splitlines() + for line in lines: + line = line.strip() + if line.startswith('['): + logger.warning('Unexpected line: quitting requirement scan: %r', + line) + break + r = parse_requirement(line) + if not r: + logger.warning('Not recognised as a requirement: %r', line) + continue + if r.extras: + logger.warning('extra requirements in requires.txt are ' + 'not supported') + if not r.constraints: + reqs.append(r.name) + else: + cons = ', '.join('%s%s' % c for c in r.constraints) + reqs.append('%s (%s)' % (r.name, cons)) + return reqs + + def parse_requires_path(req_path): + """Create a list of dependencies from a requires.txt file. + + *req_path*: the path to a setuptools-produced requires.txt file. 
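For instance, given a requires.txt containing (illustrative):

    requests (>= 2.0)
    urllib3
    [socks]
    PySocks

the scan returns ['requests (>= 2.0)', 'urllib3'] and stops at the '[socks]' section header, since extras sections are not supported here.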
+ """ + + reqs = [] + try: + with codecs.open(req_path, 'r', 'utf-8') as fp: + reqs = parse_requires_data(fp.read()) + except IOError: + pass + return reqs + + tl_path = tl_data = None + if path.endswith('.egg'): + if os.path.isdir(path): + p = os.path.join(path, 'EGG-INFO') + meta_path = os.path.join(p, 'PKG-INFO') + metadata = Metadata(path=meta_path, scheme='legacy') + req_path = os.path.join(p, 'requires.txt') + tl_path = os.path.join(p, 'top_level.txt') + requires = parse_requires_path(req_path) + else: + # FIXME handle the case where zipfile is not available + zipf = zipimport.zipimporter(path) + fileobj = StringIO( + zipf.get_data('EGG-INFO/PKG-INFO').decode('utf8')) + metadata = Metadata(fileobj=fileobj, scheme='legacy') + try: + data = zipf.get_data('EGG-INFO/requires.txt') + tl_data = zipf.get_data('EGG-INFO/top_level.txt').decode('utf-8') + requires = parse_requires_data(data.decode('utf-8')) + except IOError: + requires = None + elif path.endswith('.egg-info'): + if os.path.isdir(path): + req_path = os.path.join(path, 'requires.txt') + requires = parse_requires_path(req_path) + path = os.path.join(path, 'PKG-INFO') + tl_path = os.path.join(path, 'top_level.txt') + metadata = Metadata(path=path, scheme='legacy') + else: + raise DistlibException('path must end with .egg-info or .egg, ' + 'got %r' % path) + + if requires: + metadata.add_requirements(requires) + # look for top-level modules in top_level.txt, if present + if tl_data is None: + if tl_path is not None and os.path.exists(tl_path): + with open(tl_path, 'rb') as f: + tl_data = f.read().decode('utf-8') + if not tl_data: + tl_data = [] + else: + tl_data = tl_data.splitlines() + self.modules = tl_data + return metadata + + def __repr__(self): + return '' % ( + self.name, self.version, self.path) + + def __str__(self): + return "%s %s" % (self.name, self.version) + + def check_installed_files(self): + """ + Checks that the hashes and sizes of the files in ``RECORD`` are + matched by the files themselves. Returns a (possibly empty) list of + mismatches. Each entry in the mismatch list will be a tuple consisting + of the path, 'exists', 'size' or 'hash' according to what didn't match + (existence is checked first, then size, then hash), the expected + value and the actual value. + """ + mismatches = [] + record_path = os.path.join(self.path, 'installed-files.txt') + if os.path.exists(record_path): + for path, _, _ in self.list_installed_files(): + if path == record_path: + continue + if not os.path.exists(path): + mismatches.append((path, 'exists', True, False)) + return mismatches + + def list_installed_files(self): + """ + Iterates over the ``installed-files.txt`` entries and returns a tuple + ``(path, hash, size)`` for each line. 
+ + :returns: a list of (path, hash, size) + """ + + def _md5(path): + f = open(path, 'rb') + try: + content = f.read() + finally: + f.close() + return hashlib.md5(content).hexdigest() + + def _size(path): + return os.stat(path).st_size + + record_path = os.path.join(self.path, 'installed-files.txt') + result = [] + if os.path.exists(record_path): + with codecs.open(record_path, 'r', encoding='utf-8') as f: + for line in f: + line = line.strip() + p = os.path.normpath(os.path.join(self.path, line)) + # "./" is present as a marker between installed files + # and installation metadata files + if not os.path.exists(p): + logger.warning('Non-existent file: %s', p) + if p.endswith(('.pyc', '.pyo')): + continue + #otherwise fall through and fail + if not os.path.isdir(p): + result.append((p, _md5(p), _size(p))) + result.append((record_path, None, None)) + return result + + def list_distinfo_files(self, absolute=False): + """ + Iterates over the ``installed-files.txt`` entries and returns paths for + each line if the path is pointing to a file located in the + ``.egg-info`` directory or one of its subdirectories. + + :parameter absolute: If *absolute* is ``True``, each returned path is + transformed into a local absolute path. Otherwise the + raw value from ``installed-files.txt`` is returned. + :type absolute: boolean + :returns: iterator of paths + """ + record_path = os.path.join(self.path, 'installed-files.txt') + if os.path.exists(record_path): + skip = True + with codecs.open(record_path, 'r', encoding='utf-8') as f: + for line in f: + line = line.strip() + if line == './': + skip = False + continue + if not skip: + p = os.path.normpath(os.path.join(self.path, line)) + if p.startswith(self.path): + if absolute: + yield p + else: + yield line + + def __eq__(self, other): + return (isinstance(other, EggInfoDistribution) and + self.path == other.path) + + # See http://docs.python.org/reference/datamodel#object.__hash__ + __hash__ = object.__hash__ + +new_dist_class = InstalledDistribution +old_dist_class = EggInfoDistribution + + +class DependencyGraph(object): + """ + Represents a dependency graph between distributions. + + The dependency relationships are stored in an ``adjacency_list`` that maps + distributions to a list of ``(other, label)`` tuples where ``other`` + is a distribution and the edge is labeled with ``label`` (i.e. the version + specifier, if such was provided). Also, for more efficient traversal, for + every distribution ``x``, a list of predecessors is kept in + ``reverse_list[x]``. An edge from distribution ``a`` to + distribution ``b`` means that ``a`` depends on ``b``. If any missing + dependencies are found, they are stored in ``missing``, which is a + dictionary that maps distributions to a list of requirements that were not + provided by any other distributions. + """ + + def __init__(self): + self.adjacency_list = {} + self.reverse_list = {} + self.missing = {} + + def add_distribution(self, distribution): + """Add the *distribution* to the graph. + + :type distribution: :class:`distutils2.database.InstalledDistribution` + or :class:`distutils2.database.EggInfoDistribution` + """ + self.adjacency_list[distribution] = [] + self.reverse_list[distribution] = [] + #self.missing[distribution] = [] + + def add_edge(self, x, y, label=None): + """Add an edge from distribution *x* to distribution *y* with the given + *label*. 
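For example, feeding into the ``topological_sort`` defined further down (the distribution objects are hypothetical):

    graph = DependencyGraph()
    graph.add_distribution(app)
    graph.add_distribution(lib)
    graph.add_edge(app, lib, 'lib (>= 1.0)')     # app depends on lib
    ordered, cyclic = graph.topological_sort()   # ([lib, app], []) -- dependencies first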
+ + :type x: :class:`distutils2.database.InstalledDistribution` or + :class:`distutils2.database.EggInfoDistribution` + :type y: :class:`distutils2.database.InstalledDistribution` or + :class:`distutils2.database.EggInfoDistribution` + :type label: ``str`` or ``None`` + """ + self.adjacency_list[x].append((y, label)) + # multiple edges are allowed, so be careful + if x not in self.reverse_list[y]: + self.reverse_list[y].append(x) + + def add_missing(self, distribution, requirement): + """ + Add a missing *requirement* for the given *distribution*. + + :type distribution: :class:`distutils2.database.InstalledDistribution` + or :class:`distutils2.database.EggInfoDistribution` + :type requirement: ``str`` + """ + logger.debug('%s missing %r', distribution, requirement) + self.missing.setdefault(distribution, []).append(requirement) + + def _repr_dist(self, dist): + return '%s %s' % (dist.name, dist.version) + + def repr_node(self, dist, level=1): + """Prints only a subgraph""" + output = [self._repr_dist(dist)] + for other, label in self.adjacency_list[dist]: + dist = self._repr_dist(other) + if label is not None: + dist = '%s [%s]' % (dist, label) + output.append(' ' * level + str(dist)) + suboutput = self.repr_node(other, level + 1) + subs = suboutput.split('\n') + output.extend(subs[1:]) + return '\n'.join(output) + + def to_dot(self, f, skip_disconnected=True): + """Writes a DOT output for the graph to the provided file *f*. + + If *skip_disconnected* is set to ``True``, then all distributions + that are not dependent on any other distribution are skipped. + + :type f: has to support ``file``-like operations + :type skip_disconnected: ``bool`` + """ + disconnected = [] + + f.write("digraph dependencies {\n") + for dist, adjs in self.adjacency_list.items(): + if len(adjs) == 0 and not skip_disconnected: + disconnected.append(dist) + for other, label in adjs: + if not label is None: + f.write('"%s" -> "%s" [label="%s"]\n' % + (dist.name, other.name, label)) + else: + f.write('"%s" -> "%s"\n' % (dist.name, other.name)) + if not skip_disconnected and len(disconnected) > 0: + f.write('subgraph disconnected {\n') + f.write('label = "Disconnected"\n') + f.write('bgcolor = red\n') + + for dist in disconnected: + f.write('"%s"' % dist.name) + f.write('\n') + f.write('}\n') + f.write('}\n') + + def topological_sort(self): + """ + Perform a topological sort of the graph. + :return: A tuple, the first element of which is a topologically sorted + list of distributions, and the second element of which is a + list of distributions that cannot be sorted because they have + circular dependencies and so form a cycle. + """ + result = [] + # Make a shallow copy of the adjacency list + alist = {} + for k, v in self.adjacency_list.items(): + alist[k] = v[:] + while True: + # See what we can remove in this run + to_remove = [] + for k, v in list(alist.items())[:]: + if not v: + to_remove.append(k) + del alist[k] + if not to_remove: + # What's left in alist (if anything) is a cycle. 
+ break + # Remove from the adjacency list of others + for k, v in alist.items(): + alist[k] = [(d, r) for d, r in v if d not in to_remove] + logger.debug('Moving to result: %s', + ['%s (%s)' % (d.name, d.version) for d in to_remove]) + result.extend(to_remove) + return result, list(alist.keys()) + + def __repr__(self): + """Representation of the graph""" + output = [] + for dist, adjs in self.adjacency_list.items(): + output.append(self.repr_node(dist)) + return '\n'.join(output) + + +def make_graph(dists, scheme='default'): + """Makes a dependency graph from the given distributions. + + :parameter dists: a list of distributions + :type dists: list of :class:`distutils2.database.InstalledDistribution` and + :class:`distutils2.database.EggInfoDistribution` instances + :rtype: a :class:`DependencyGraph` instance + """ + scheme = get_scheme(scheme) + graph = DependencyGraph() + provided = {} # maps names to lists of (version, dist) tuples + + # first, build the graph and find out what's provided + for dist in dists: + graph.add_distribution(dist) + + for p in dist.provides: + name, version = parse_name_and_version(p) + logger.debug('Add to provided: %s, %s, %s', name, version, dist) + provided.setdefault(name, []).append((version, dist)) + + # now make the edges + for dist in dists: + requires = (dist.run_requires | dist.meta_requires | + dist.build_requires | dist.dev_requires) + for req in requires: + try: + matcher = scheme.matcher(req) + except UnsupportedVersionError: + # XXX compat-mode if cannot read the version + logger.warning('could not read version %r - using name only', + req) + name = req.split()[0] + matcher = scheme.matcher(name) + + name = matcher.key # case-insensitive + + matched = False + if name in provided: + for version, provider in provided[name]: + try: + match = matcher.match(version) + except UnsupportedVersionError: + match = False + + if match: + graph.add_edge(dist, provider, req) + matched = True + break + if not matched: + graph.add_missing(dist, req) + return graph + + +def get_dependent_dists(dists, dist): + """Recursively generate a list of distributions from *dists* that are + dependent on *dist*. + + :param dists: a list of distributions + :param dist: a distribution, member of *dists* for which we are interested + """ + if dist not in dists: + raise DistlibException('given distribution %r is not a member ' + 'of the list' % dist.name) + graph = make_graph(dists) + + dep = [dist] # dependent distributions + todo = graph.reverse_list[dist] # list of nodes we should inspect + + while todo: + d = todo.pop() + dep.append(d) + for succ in graph.reverse_list[d]: + if succ not in dep: + todo.append(succ) + + dep.pop(0) # remove dist from dep, was there to prevent infinite loops + return dep + + +def get_required_dists(dists, dist): + """Recursively generate a list of distributions from *dists* that are + required by *dist*. + + :param dists: a list of distributions + :param dist: a distribution, member of *dists* for which we are interested + """ + if dist not in dists: + raise DistlibException('given distribution %r is not a member ' + 'of the list' % dist.name) + graph = make_graph(dists) + + req = [] # required distributions + todo = graph.adjacency_list[dist] # list of nodes we should inspect + + while todo: + d = todo.pop()[0] + req.append(d) + for pred in graph.adjacency_list[d]: + if pred not in req: + todo.append(pred) + + return req + + +def make_dist(name, version, **kwargs): + """ + A convenience method for making a dist given just a name and version. 
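+
+    A small usage sketch (names and values are placeholders)::
+
+        dist = make_dist('example-project', '0.1', summary='An example')
+        print(dist.name_and_version)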
+ """ + summary = kwargs.pop('summary', 'Placeholder for summary') + md = Metadata(**kwargs) + md.name = name + md.version = version + md.summary = summary or 'Placeholder for summary' + return Distribution(md) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/distlib/index.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/index.py new file mode 100644 index 0000000..b1fbbf8 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/distlib/index.py @@ -0,0 +1,509 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +import hashlib +import logging +import os +import shutil +import subprocess +import tempfile +try: + from threading import Thread +except ImportError: + from dummy_threading import Thread + +from . import DistlibException +from .compat import (HTTPBasicAuthHandler, Request, HTTPPasswordMgr, + urlparse, build_opener, string_types) +from .util import zip_dir, ServerProxy + +logger = logging.getLogger(__name__) + +DEFAULT_INDEX = 'https://pypi.org/pypi' +DEFAULT_REALM = 'pypi' + +class PackageIndex(object): + """ + This class represents a package index compatible with PyPI, the Python + Package Index. + """ + + boundary = b'----------ThIs_Is_tHe_distlib_index_bouNdaRY_$' + + def __init__(self, url=None): + """ + Initialise an instance. + + :param url: The URL of the index. If not specified, the URL for PyPI is + used. + """ + self.url = url or DEFAULT_INDEX + self.read_configuration() + scheme, netloc, path, params, query, frag = urlparse(self.url) + if params or query or frag or scheme not in ('http', 'https'): + raise DistlibException('invalid repository: %s' % self.url) + self.password_handler = None + self.ssl_verifier = None + self.gpg = None + self.gpg_home = None + with open(os.devnull, 'w') as sink: + # Use gpg by default rather than gpg2, as gpg2 insists on + # prompting for passwords + for s in ('gpg', 'gpg2'): + try: + rc = subprocess.check_call([s, '--version'], stdout=sink, + stderr=sink) + if rc == 0: + self.gpg = s + break + except OSError: + pass + + def _get_pypirc_command(self): + """ + Get the distutils command for interacting with PyPI configurations. + :return: the command. + """ + from .util import _get_pypirc_command as cmd + return cmd() + + def read_configuration(self): + """ + Read the PyPI access configuration as supported by distutils. This populates + ``username``, ``password``, ``realm`` and ``url`` attributes from the + configuration. + """ + from .util import _load_pypirc + cfg = _load_pypirc(self) + self.username = cfg.get('username') + self.password = cfg.get('password') + self.realm = cfg.get('realm', 'pypi') + self.url = cfg.get('repository', self.url) + + def save_configuration(self): + """ + Save the PyPI access configuration. You must have set ``username`` and + ``password`` attributes before calling this method. + """ + self.check_credentials() + from .util import _store_pypirc + _store_pypirc(self) + + def check_credentials(self): + """ + Check that ``username`` and ``password`` have been set, and raise an + exception if not. 
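+
+        A sketch of manual use (the credentials shown are placeholders; they
+        are normally populated from ``.pypirc`` by
+        :meth:`read_configuration`)::
+
+            index = PackageIndex()
+            index.username = 'user'
+            index.password = 'secret'
+            index.check_credentials()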
+        """
+        if self.username is None or self.password is None:
+            raise DistlibException('username and password must be set')
+        pm = HTTPPasswordMgr()
+        _, netloc, _, _, _, _ = urlparse(self.url)
+        pm.add_password(self.realm, netloc, self.username, self.password)
+        self.password_handler = HTTPBasicAuthHandler(pm)
+
+    def register(self, metadata):
+        """
+        Register a distribution on PyPI, using the provided metadata.
+
+        :param metadata: A :class:`Metadata` instance defining at least a name
+                         and version number for the distribution to be
+                         registered.
+        :return: The HTTP response received from PyPI upon submission of the
+                 request.
+        """
+        self.check_credentials()
+        metadata.validate()
+        d = metadata.todict()
+        d[':action'] = 'verify'
+        request = self.encode_request(d.items(), [])
+        response = self.send_request(request)
+        d[':action'] = 'submit'
+        request = self.encode_request(d.items(), [])
+        return self.send_request(request)
+
+    def _reader(self, name, stream, outbuf):
+        """
+        Thread runner for reading lines of output from a subprocess into a
+        buffer.
+
+        :param name: The logical name of the stream (used for logging only).
+        :param stream: The stream to read from. This will typically be a pipe
+                       connected to the output stream of a subprocess.
+        :param outbuf: The list to append the read lines to.
+        """
+        while True:
+            s = stream.readline()
+            if not s:
+                break
+            s = s.decode('utf-8').rstrip()
+            outbuf.append(s)
+            logger.debug('%s: %s' % (name, s))
+        stream.close()
+
+    def get_sign_command(self, filename, signer, sign_password,
+                         keystore=None):
+        """
+        Return a suitable command for signing a file.
+
+        :param filename: The pathname to the file to be signed.
+        :param signer: The identifier of the signer of the file.
+        :param sign_password: The passphrase for the signer's
+                              private key used for signing.
+        :param keystore: The path to a directory which contains the keys
+                         used in verification. If not specified, the
+                         instance's ``gpg_home`` attribute is used instead.
+        :return: The signing command as a list suitable to be
+                 passed to :class:`subprocess.Popen`.
+        """
+        cmd = [self.gpg, '--status-fd', '2', '--no-tty']
+        if keystore is None:
+            keystore = self.gpg_home
+        if keystore:
+            cmd.extend(['--homedir', keystore])
+        if sign_password is not None:
+            cmd.extend(['--batch', '--passphrase-fd', '0'])
+        td = tempfile.mkdtemp()
+        sf = os.path.join(td, os.path.basename(filename) + '.asc')
+        cmd.extend(['--detach-sign', '--armor', '--local-user',
+                    signer, '--output', sf, filename])
+        logger.debug('invoking: %s', ' '.join(cmd))
+        return cmd, sf
+
+    def run_command(self, cmd, input_data=None):
+        """
+        Run a command in a child process, passing it any input data specified.
+
+        :param cmd: The command to run.
+        :param input_data: If specified, this must be a byte string containing
+                           data to be sent to the child process.
+        :return: A tuple consisting of the subprocess' exit code, a list of
+                 lines read from the subprocess' ``stdout``, and a list of
+                 lines read from the subprocess' ``stderr``.
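+
+        For example (assumes a ``gpg`` executable is on the path)::
+
+            rc, stdout, stderr = index.run_command(['gpg', '--version'])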
+ """ + kwargs = { + 'stdout': subprocess.PIPE, + 'stderr': subprocess.PIPE, + } + if input_data is not None: + kwargs['stdin'] = subprocess.PIPE + stdout = [] + stderr = [] + p = subprocess.Popen(cmd, **kwargs) + # We don't use communicate() here because we may need to + # get clever with interacting with the command + t1 = Thread(target=self._reader, args=('stdout', p.stdout, stdout)) + t1.start() + t2 = Thread(target=self._reader, args=('stderr', p.stderr, stderr)) + t2.start() + if input_data is not None: + p.stdin.write(input_data) + p.stdin.close() + + p.wait() + t1.join() + t2.join() + return p.returncode, stdout, stderr + + def sign_file(self, filename, signer, sign_password, keystore=None): + """ + Sign a file. + + :param filename: The pathname to the file to be signed. + :param signer: The identifier of the signer of the file. + :param sign_password: The passphrase for the signer's + private key used for signing. + :param keystore: The path to a directory which contains the keys + used in signing. If not specified, the instance's + ``gpg_home`` attribute is used instead. + :return: The absolute pathname of the file where the signature is + stored. + """ + cmd, sig_file = self.get_sign_command(filename, signer, sign_password, + keystore) + rc, stdout, stderr = self.run_command(cmd, + sign_password.encode('utf-8')) + if rc != 0: + raise DistlibException('sign command failed with error ' + 'code %s' % rc) + return sig_file + + def upload_file(self, metadata, filename, signer=None, sign_password=None, + filetype='sdist', pyversion='source', keystore=None): + """ + Upload a release file to the index. + + :param metadata: A :class:`Metadata` instance defining at least a name + and version number for the file to be uploaded. + :param filename: The pathname of the file to be uploaded. + :param signer: The identifier of the signer of the file. + :param sign_password: The passphrase for the signer's + private key used for signing. + :param filetype: The type of the file being uploaded. This is the + distutils command which produced that file, e.g. + ``sdist`` or ``bdist_wheel``. + :param pyversion: The version of Python which the release relates + to. For code compatible with any Python, this would + be ``source``, otherwise it would be e.g. ``3.2``. + :param keystore: The path to a directory which contains the keys + used in signing. If not specified, the instance's + ``gpg_home`` attribute is used instead. + :return: The HTTP response received from PyPI upon submission of the + request. 
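+
+        A sketch of a typical call (the metadata object and the path are
+        placeholders)::
+
+            response = index.upload_file(metadata,
+                                         'dist/example-0.1.tar.gz',
+                                         filetype='sdist',
+                                         pyversion='source')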
+ """ + self.check_credentials() + if not os.path.exists(filename): + raise DistlibException('not found: %s' % filename) + metadata.validate() + d = metadata.todict() + sig_file = None + if signer: + if not self.gpg: + logger.warning('no signing program available - not signed') + else: + sig_file = self.sign_file(filename, signer, sign_password, + keystore) + with open(filename, 'rb') as f: + file_data = f.read() + md5_digest = hashlib.md5(file_data).hexdigest() + sha256_digest = hashlib.sha256(file_data).hexdigest() + d.update({ + ':action': 'file_upload', + 'protocol_version': '1', + 'filetype': filetype, + 'pyversion': pyversion, + 'md5_digest': md5_digest, + 'sha256_digest': sha256_digest, + }) + files = [('content', os.path.basename(filename), file_data)] + if sig_file: + with open(sig_file, 'rb') as f: + sig_data = f.read() + files.append(('gpg_signature', os.path.basename(sig_file), + sig_data)) + shutil.rmtree(os.path.dirname(sig_file)) + request = self.encode_request(d.items(), files) + return self.send_request(request) + + def upload_documentation(self, metadata, doc_dir): + """ + Upload documentation to the index. + + :param metadata: A :class:`Metadata` instance defining at least a name + and version number for the documentation to be + uploaded. + :param doc_dir: The pathname of the directory which contains the + documentation. This should be the directory that + contains the ``index.html`` for the documentation. + :return: The HTTP response received from PyPI upon submission of the + request. + """ + self.check_credentials() + if not os.path.isdir(doc_dir): + raise DistlibException('not a directory: %r' % doc_dir) + fn = os.path.join(doc_dir, 'index.html') + if not os.path.exists(fn): + raise DistlibException('not found: %r' % fn) + metadata.validate() + name, version = metadata.name, metadata.version + zip_data = zip_dir(doc_dir).getvalue() + fields = [(':action', 'doc_upload'), + ('name', name), ('version', version)] + files = [('content', name, zip_data)] + request = self.encode_request(fields, files) + return self.send_request(request) + + def get_verify_command(self, signature_filename, data_filename, + keystore=None): + """ + Return a suitable command for verifying a file. + + :param signature_filename: The pathname to the file containing the + signature. + :param data_filename: The pathname to the file containing the + signed data. + :param keystore: The path to a directory which contains the keys + used in verification. If not specified, the + instance's ``gpg_home`` attribute is used instead. + :return: The verifying command as a list suitable to be + passed to :class:`subprocess.Popen`. + """ + cmd = [self.gpg, '--status-fd', '2', '--no-tty'] + if keystore is None: + keystore = self.gpg_home + if keystore: + cmd.extend(['--homedir', keystore]) + cmd.extend(['--verify', signature_filename, data_filename]) + logger.debug('invoking: %s', ' '.join(cmd)) + return cmd + + def verify_signature(self, signature_filename, data_filename, + keystore=None): + """ + Verify a signature for a file. + + :param signature_filename: The pathname to the file containing the + signature. + :param data_filename: The pathname to the file containing the + signed data. + :param keystore: The path to a directory which contains the keys + used in verification. If not specified, the + instance's ``gpg_home`` attribute is used instead. + :return: True if the signature was verified, else False. 
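+
+        For example (filenames are placeholders)::
+
+            ok = index.verify_signature('example-0.1.tar.gz.asc',
+                                        'example-0.1.tar.gz')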
+ """ + if not self.gpg: + raise DistlibException('verification unavailable because gpg ' + 'unavailable') + cmd = self.get_verify_command(signature_filename, data_filename, + keystore) + rc, stdout, stderr = self.run_command(cmd) + if rc not in (0, 1): + raise DistlibException('verify command failed with error ' + 'code %s' % rc) + return rc == 0 + + def download_file(self, url, destfile, digest=None, reporthook=None): + """ + This is a convenience method for downloading a file from an URL. + Normally, this will be a file from the index, though currently + no check is made for this (i.e. a file can be downloaded from + anywhere). + + The method is just like the :func:`urlretrieve` function in the + standard library, except that it allows digest computation to be + done during download and checking that the downloaded data + matched any expected value. + + :param url: The URL of the file to be downloaded (assumed to be + available via an HTTP GET request). + :param destfile: The pathname where the downloaded file is to be + saved. + :param digest: If specified, this must be a (hasher, value) + tuple, where hasher is the algorithm used (e.g. + ``'md5'``) and ``value`` is the expected value. + :param reporthook: The same as for :func:`urlretrieve` in the + standard library. + """ + if digest is None: + digester = None + logger.debug('No digest specified') + else: + if isinstance(digest, (list, tuple)): + hasher, digest = digest + else: + hasher = 'md5' + digester = getattr(hashlib, hasher)() + logger.debug('Digest specified: %s' % digest) + # The following code is equivalent to urlretrieve. + # We need to do it this way so that we can compute the + # digest of the file as we go. + with open(destfile, 'wb') as dfp: + # addinfourl is not a context manager on 2.x + # so we have to use try/finally + sfp = self.send_request(Request(url)) + try: + headers = sfp.info() + blocksize = 8192 + size = -1 + read = 0 + blocknum = 0 + if "content-length" in headers: + size = int(headers["Content-Length"]) + if reporthook: + reporthook(blocknum, blocksize, size) + while True: + block = sfp.read(blocksize) + if not block: + break + read += len(block) + dfp.write(block) + if digester: + digester.update(block) + blocknum += 1 + if reporthook: + reporthook(blocknum, blocksize, size) + finally: + sfp.close() + + # check that we got the whole file, if we can + if size >= 0 and read < size: + raise DistlibException( + 'retrieval incomplete: got only %d out of %d bytes' + % (read, size)) + # if we have a digest, it must match. + if digester: + actual = digester.hexdigest() + if digest != actual: + raise DistlibException('%s digest mismatch for %s: expected ' + '%s, got %s' % (hasher, destfile, + digest, actual)) + logger.debug('Digest verified: %s', digest) + + def send_request(self, req): + """ + Send a standard library :class:`Request` to PyPI and return its + response. + + :param req: The request to send. + :return: The HTTP response from PyPI (a standard library HTTPResponse). + """ + handlers = [] + if self.password_handler: + handlers.append(self.password_handler) + if self.ssl_verifier: + handlers.append(self.ssl_verifier) + opener = build_opener(*handlers) + return opener.open(req) + + def encode_request(self, fields, files): + """ + Encode fields and files for posting to an HTTP server. + + :param fields: The fields to send as a list of (fieldname, value) + tuples. + :param files: The files to send as a list of (fieldname, filename, + file_bytes) tuple. 
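+
+        A sketch of the expected call shape (field names and ``data`` are
+        placeholders)::
+
+            request = index.encode_request([(':action', 'submit')],
+                                           [('content', 'pkg.tar.gz', data)])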
+ """ + # Adapted from packaging, which in turn was adapted from + # http://code.activestate.com/recipes/146306 + + parts = [] + boundary = self.boundary + for k, values in fields: + if not isinstance(values, (list, tuple)): + values = [values] + + for v in values: + parts.extend(( + b'--' + boundary, + ('Content-Disposition: form-data; name="%s"' % + k).encode('utf-8'), + b'', + v.encode('utf-8'))) + for key, filename, value in files: + parts.extend(( + b'--' + boundary, + ('Content-Disposition: form-data; name="%s"; filename="%s"' % + (key, filename)).encode('utf-8'), + b'', + value)) + + parts.extend((b'--' + boundary + b'--', b'')) + + body = b'\r\n'.join(parts) + ct = b'multipart/form-data; boundary=' + boundary + headers = { + 'Content-type': ct, + 'Content-length': str(len(body)) + } + return Request(self.url, body, headers) + + def search(self, terms, operator=None): + if isinstance(terms, string_types): + terms = {'name': terms} + rpc_proxy = ServerProxy(self.url, timeout=3.0) + try: + return rpc_proxy.search(terms, operator or 'and') + finally: + rpc_proxy('close')() diff --git a/python/lib/python3.10/site-packages/pip/_vendor/distlib/locators.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/locators.py new file mode 100644 index 0000000..c78bc9e --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/distlib/locators.py @@ -0,0 +1,1300 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012-2015 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# + +import gzip +from io import BytesIO +import json +import logging +import os +import posixpath +import re +try: + import threading +except ImportError: # pragma: no cover + import dummy_threading as threading +import zlib + +from . import DistlibException +from .compat import (urljoin, urlparse, urlunparse, url2pathname, pathname2url, + queue, quote, unescape, build_opener, + HTTPRedirectHandler as BaseRedirectHandler, text_type, + Request, HTTPError, URLError) +from .database import Distribution, DistributionPath, make_dist +from .metadata import Metadata, MetadataInvalidError +from .util import (cached_property, ensure_slash, split_filename, get_project_data, + parse_requirement, parse_name_and_version, ServerProxy, + normalize_name) +from .version import get_scheme, UnsupportedVersionError +from .wheel import Wheel, is_compatible + +logger = logging.getLogger(__name__) + +HASHER_HASH = re.compile(r'^(\w+)=([a-f0-9]+)') +CHARSET = re.compile(r';\s*charset\s*=\s*(.*)\s*$', re.I) +HTML_CONTENT_TYPE = re.compile('text/html|application/x(ht)?ml') +DEFAULT_INDEX = 'https://pypi.org/pypi' + +def get_all_distribution_names(url=None): + """ + Return all distribution names known by an index. + :param url: The URL of the index. + :return: A list of all known distribution names. + """ + if url is None: + url = DEFAULT_INDEX + client = ServerProxy(url, timeout=3.0) + try: + return client.list_packages() + finally: + client('close')() + +class RedirectHandler(BaseRedirectHandler): + """ + A class to work around a bug in some Python 3.2.x releases. + """ + # There's a bug in the base version for some 3.2.x + # (e.g. 3.2.2 on Ubuntu Oneiric). If a Location header + # returns e.g. /abc, it bails because it says the scheme '' + # is bogus, when actually it should use the request's + # URL for the scheme. See Python issue #13696. 
+    def http_error_302(self, req, fp, code, msg, headers):
+        # Some servers (incorrectly) return multiple Location headers
+        # (so probably same goes for URI). Use first header.
+        newurl = None
+        for key in ('location', 'uri'):
+            if key in headers:
+                newurl = headers[key]
+                break
+        if newurl is None:  # pragma: no cover
+            return
+        urlparts = urlparse(newurl)
+        if urlparts.scheme == '':
+            newurl = urljoin(req.get_full_url(), newurl)
+            if hasattr(headers, 'replace_header'):
+                headers.replace_header(key, newurl)
+            else:
+                headers[key] = newurl
+        return BaseRedirectHandler.http_error_302(self, req, fp, code, msg,
+                                                  headers)
+
+    http_error_301 = http_error_303 = http_error_307 = http_error_302
+
+class Locator(object):
+    """
+    A base class for locators - things that locate distributions.
+    """
+    source_extensions = ('.tar.gz', '.tar.bz2', '.tar', '.zip', '.tgz', '.tbz')
+    binary_extensions = ('.egg', '.exe', '.whl')
+    excluded_extensions = ('.pdf',)
+
+    # A list of tags indicating which wheels you want to match. The default
+    # value of None matches against the tags compatible with the running
+    # Python. If you want to match other values, set wheel_tags on a locator
+    # instance to a list of tuples (pyver, abi, arch) which you want to match.
+    wheel_tags = None
+
+    downloadable_extensions = source_extensions + ('.whl',)
+
+    def __init__(self, scheme='default'):
+        """
+        Initialise an instance.
+        :param scheme: Because locators look for most recent versions, they
+                       need to know the version scheme to use. This specifies
+                       the current PEP-recommended scheme - use ``'legacy'``
+                       if you need to support existing distributions on PyPI.
+        """
+        self._cache = {}
+        self.scheme = scheme
+        # Because of bugs in some of the handlers on some of the platforms,
+        # we use our own opener rather than just using urlopen.
+        self.opener = build_opener(RedirectHandler())
+        # If get_project() is called from locate(), the matcher instance
+        # is set from the requirement passed to locate(). See issue #18 for
+        # why this can be useful to know.
+        self.matcher = None
+        self.errors = queue.Queue()
+
+    def get_errors(self):
+        """
+        Return any errors which have occurred.
+        """
+        result = []
+        while not self.errors.empty():  # pragma: no cover
+            try:
+                e = self.errors.get(False)
+                result.append(e)
+            except queue.Empty:
+                continue
+            self.errors.task_done()
+        return result
+
+    def clear_errors(self):
+        """
+        Clear any errors which may have been logged.
+        """
+        # Just get the errors and throw them away
+        self.get_errors()
+
+    def clear_cache(self):
+        self._cache.clear()
+
+    def _get_scheme(self):
+        return self._scheme
+
+    def _set_scheme(self, value):
+        self._scheme = value
+
+    scheme = property(_get_scheme, _set_scheme)
+
+    def _get_project(self, name):
+        """
+        For a given project, get a dictionary mapping available versions to
+        Distribution instances.
+
+        This should be implemented in subclasses.
+
+        If called from a locate() request, self.matcher will be set to a
+        matcher for the requirement to satisfy, otherwise it will be None.
+        """
+        raise NotImplementedError('Please implement in the subclass')
+
+    def get_distribution_names(self):
+        """
+        Return all the distribution names known to this locator.
+        """
+        raise NotImplementedError('Please implement in the subclass')
+
+    def get_project(self, name):
+        """
+        For a given project, get a dictionary mapping available versions to
+        Distribution instances.
+
+        This calls _get_project to do all the work, and just implements a
+        caching layer on top.
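+
+        Illustrative use with a concrete subclass (the project name is a
+        placeholder)::
+
+            versions = locator.get_project('example-project')
+            # version strings map to Distribution instances; 'urls' and
+            # 'digests' are bookkeeping keys added by _get_project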
+ """ + if self._cache is None: # pragma: no cover + result = self._get_project(name) + elif name in self._cache: + result = self._cache[name] + else: + self.clear_errors() + result = self._get_project(name) + self._cache[name] = result + return result + + def score_url(self, url): + """ + Give an url a score which can be used to choose preferred URLs + for a given project release. + """ + t = urlparse(url) + basename = posixpath.basename(t.path) + compatible = True + is_wheel = basename.endswith('.whl') + is_downloadable = basename.endswith(self.downloadable_extensions) + if is_wheel: + compatible = is_compatible(Wheel(basename), self.wheel_tags) + return (t.scheme == 'https', 'pypi.org' in t.netloc, + is_downloadable, is_wheel, compatible, basename) + + def prefer_url(self, url1, url2): + """ + Choose one of two URLs where both are candidates for distribution + archives for the same version of a distribution (for example, + .tar.gz vs. zip). + + The current implementation favours https:// URLs over http://, archives + from PyPI over those from other locations, wheel compatibility (if a + wheel) and then the archive name. + """ + result = url2 + if url1: + s1 = self.score_url(url1) + s2 = self.score_url(url2) + if s1 > s2: + result = url1 + if result != url2: + logger.debug('Not replacing %r with %r', url1, url2) + else: + logger.debug('Replacing %r with %r', url1, url2) + return result + + def split_filename(self, filename, project_name): + """ + Attempt to split a filename in project name, version and Python version. + """ + return split_filename(filename, project_name) + + def convert_url_to_download_info(self, url, project_name): + """ + See if a URL is a candidate for a download URL for a project (the URL + has typically been scraped from an HTML page). + + If it is, a dictionary is returned with keys "name", "version", + "filename" and "url"; otherwise, None is returned. 
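+
+        For example (the URL is illustrative)::
+
+            info = locator.convert_url_to_download_info(
+                'https://example.com/dl/foo-1.0.tar.gz', 'foo')
+            # -> {'name': 'foo', 'version': '1.0', ...} or None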
+ """ + def same_project(name1, name2): + return normalize_name(name1) == normalize_name(name2) + + result = None + scheme, netloc, path, params, query, frag = urlparse(url) + if frag.lower().startswith('egg='): # pragma: no cover + logger.debug('%s: version hint in fragment: %r', + project_name, frag) + m = HASHER_HASH.match(frag) + if m: + algo, digest = m.groups() + else: + algo, digest = None, None + origpath = path + if path and path[-1] == '/': # pragma: no cover + path = path[:-1] + if path.endswith('.whl'): + try: + wheel = Wheel(path) + if not is_compatible(wheel, self.wheel_tags): + logger.debug('Wheel not compatible: %s', path) + else: + if project_name is None: + include = True + else: + include = same_project(wheel.name, project_name) + if include: + result = { + 'name': wheel.name, + 'version': wheel.version, + 'filename': wheel.filename, + 'url': urlunparse((scheme, netloc, origpath, + params, query, '')), + 'python-version': ', '.join( + ['.'.join(list(v[2:])) for v in wheel.pyver]), + } + except Exception as e: # pragma: no cover + logger.warning('invalid path for wheel: %s', path) + elif not path.endswith(self.downloadable_extensions): # pragma: no cover + logger.debug('Not downloadable: %s', path) + else: # downloadable extension + path = filename = posixpath.basename(path) + for ext in self.downloadable_extensions: + if path.endswith(ext): + path = path[:-len(ext)] + t = self.split_filename(path, project_name) + if not t: # pragma: no cover + logger.debug('No match for project/version: %s', path) + else: + name, version, pyver = t + if not project_name or same_project(project_name, name): + result = { + 'name': name, + 'version': version, + 'filename': filename, + 'url': urlunparse((scheme, netloc, origpath, + params, query, '')), + #'packagetype': 'sdist', + } + if pyver: # pragma: no cover + result['python-version'] = pyver + break + if result and algo: + result['%s_digest' % algo] = digest + return result + + def _get_digest(self, info): + """ + Get a digest from a dictionary by looking at a "digests" dictionary + or keys of the form 'algo_digest'. + + Returns a 2-tuple (algo, digest) if found, else None. Currently + looks only for SHA256, then MD5. + """ + result = None + if 'digests' in info: + digests = info['digests'] + for algo in ('sha256', 'md5'): + if algo in digests: + result = (algo, digests[algo]) + break + if not result: + for algo in ('sha256', 'md5'): + key = '%s_digest' % algo + if key in info: + result = (algo, info[key]) + break + return result + + def _update_version_data(self, result, info): + """ + Update a result dictionary (the final result from _get_project) with a + dictionary for a specific version, which typically holds information + gleaned from a filename or URL for an archive for the distribution. + """ + name = info.pop('name') + version = info.pop('version') + if version in result: + dist = result[version] + md = dist.metadata + else: + dist = make_dist(name, version, scheme=self.scheme) + md = dist.metadata + dist.digest = digest = self._get_digest(info) + url = info['url'] + result['digests'][url] = digest + if md.source_url != info['url']: + md.source_url = self.prefer_url(md.source_url, url) + result['urls'].setdefault(version, set()).add(url) + dist.locator = self + result[version] = dist + + def locate(self, requirement, prereleases=False): + """ + Find the most recent distribution which matches the given + requirement. 
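+
+        For example (the requirement string is illustrative)::
+
+            dist = locator.locate('example-project (>= 1.0)')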
+ + :param requirement: A requirement of the form 'foo (1.0)' or perhaps + 'foo (>= 1.0, < 2.0, != 1.3)' + :param prereleases: If ``True``, allow pre-release versions + to be located. Otherwise, pre-release versions + are not returned. + :return: A :class:`Distribution` instance, or ``None`` if no such + distribution could be located. + """ + result = None + r = parse_requirement(requirement) + if r is None: # pragma: no cover + raise DistlibException('Not a valid requirement: %r' % requirement) + scheme = get_scheme(self.scheme) + self.matcher = matcher = scheme.matcher(r.requirement) + logger.debug('matcher: %s (%s)', matcher, type(matcher).__name__) + versions = self.get_project(r.name) + if len(versions) > 2: # urls and digests keys are present + # sometimes, versions are invalid + slist = [] + vcls = matcher.version_class + for k in versions: + if k in ('urls', 'digests'): + continue + try: + if not matcher.match(k): + pass # logger.debug('%s did not match %r', matcher, k) + else: + if prereleases or not vcls(k).is_prerelease: + slist.append(k) + # else: + # logger.debug('skipping pre-release ' + # 'version %s of %s', k, matcher.name) + except Exception: # pragma: no cover + logger.warning('error matching %s with %r', matcher, k) + pass # slist.append(k) + if len(slist) > 1: + slist = sorted(slist, key=scheme.key) + if slist: + logger.debug('sorted list: %s', slist) + version = slist[-1] + result = versions[version] + if result: + if r.extras: + result.extras = r.extras + result.download_urls = versions.get('urls', {}).get(version, set()) + d = {} + sd = versions.get('digests', {}) + for url in result.download_urls: + if url in sd: # pragma: no cover + d[url] = sd[url] + result.digests = d + self.matcher = None + return result + + +class PyPIRPCLocator(Locator): + """ + This locator uses XML-RPC to locate distributions. It therefore + cannot be used with simple mirrors (that only mirror file content). + """ + def __init__(self, url, **kwargs): + """ + Initialise an instance. + + :param url: The URL to use for XML-RPC. + :param kwargs: Passed to the superclass constructor. + """ + super(PyPIRPCLocator, self).__init__(**kwargs) + self.base_url = url + self.client = ServerProxy(url, timeout=3.0) + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + return set(self.client.list_packages()) + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + versions = self.client.package_releases(name, True) + for v in versions: + urls = self.client.release_urls(name, v) + data = self.client.release_data(name, v) + metadata = Metadata(scheme=self.scheme) + metadata.name = data['name'] + metadata.version = data['version'] + metadata.license = data.get('license') + metadata.keywords = data.get('keywords', []) + metadata.summary = data.get('summary') + dist = Distribution(metadata) + if urls: + info = urls[0] + metadata.source_url = info['url'] + dist.digest = self._get_digest(info) + dist.locator = self + result[v] = dist + for info in urls: + url = info['url'] + digest = self._get_digest(info) + result['urls'].setdefault(v, set()).add(url) + result['digests'][url] = digest + return result + +class PyPIJSONLocator(Locator): + """ + This locator uses PyPI's JSON interface. It's very limited in functionality + and probably not worth using. 
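+
+    Illustrative construction (the URL shown is the standard PyPI JSON
+    root)::
+
+        locator = PyPIJSONLocator('https://pypi.org/pypi/')
+        project = locator.get_project('example-project')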
+    """
+    def __init__(self, url, **kwargs):
+        super(PyPIJSONLocator, self).__init__(**kwargs)
+        self.base_url = ensure_slash(url)
+
+    def get_distribution_names(self):
+        """
+        Return all the distribution names known to this locator.
+        """
+        raise NotImplementedError('Not available from this locator')
+
+    def _get_project(self, name):
+        result = {'urls': {}, 'digests': {}}
+        url = urljoin(self.base_url, '%s/json' % quote(name))
+        try:
+            resp = self.opener.open(url)
+            data = resp.read().decode() # for now
+            d = json.loads(data)
+            md = Metadata(scheme=self.scheme)
+            data = d['info']
+            md.name = data['name']
+            md.version = data['version']
+            md.license = data.get('license')
+            md.keywords = data.get('keywords', [])
+            md.summary = data.get('summary')
+            dist = Distribution(md)
+            dist.locator = self
+            urls = d['urls']
+            result[md.version] = dist
+            for info in d['urls']:
+                url = info['url']
+                dist.download_urls.add(url)
+                dist.digests[url] = self._get_digest(info)
+                result['urls'].setdefault(md.version, set()).add(url)
+                result['digests'][url] = self._get_digest(info)
+            # Now get other releases
+            for version, infos in d['releases'].items():
+                if version == md.version:
+                    continue  # already done
+                omd = Metadata(scheme=self.scheme)
+                omd.name = md.name
+                omd.version = version
+                odist = Distribution(omd)
+                odist.locator = self
+                result[version] = odist
+                for info in infos:
+                    url = info['url']
+                    odist.download_urls.add(url)
+                    odist.digests[url] = self._get_digest(info)
+                    result['urls'].setdefault(version, set()).add(url)
+                    result['digests'][url] = self._get_digest(info)
+#            for info in urls:
+#                md.source_url = info['url']
+#                dist.digest = self._get_digest(info)
+#                dist.locator = self
+#            for info in urls:
+#                url = info['url']
+#                result['urls'].setdefault(md.version, set()).add(url)
+#                result['digests'][url] = self._get_digest(info)
+        except Exception as e:
+            self.errors.put(text_type(e))
+            logger.exception('JSON fetch failed: %s', e)
+        return result
+
+
+class Page(object):
+    """
+    This class represents a scraped HTML page.
+    """
+    # The following slightly hairy-looking regex just looks for the contents of
+    # an anchor link, which has an attribute "href" either immediately preceded
+    # or immediately followed by a "rel" attribute. The attribute values can be
+    # declared with double quotes, single quotes or no quotes - which leads to
+    # the length of the expression.
+    _href = re.compile("""
+(rel\\s*=\\s*(?:"(?P<rel1>[^"]*)"|'(?P<rel2>[^']*)'|(?P<rel3>[^>\\s\n]*))\\s+)?
+href\\s*=\\s*(?:"(?P<url1>[^"]*)"|'(?P<url2>[^']*)'|(?P<url3>[^>\\s\n]*))
+(\\s+rel\\s*=\\s*(?:"(?P<rel4>[^"]*)"|'(?P<rel5>[^']*)'|(?P<rel6>[^>\\s\n]*)))?
+""", re.I | re.S | re.X)
+    _base = re.compile(r"""<base\s+href\s*=\s*['"]?([^'">]+)""", re.I | re.S)
+
+    def __init__(self, data, url):
+        """
+        Initialise an instance with the Unicode page contents and the URL they
+        came from.
+        """
+        self.data = data
+        self.base_url = self.url = url
+        m = self._base.search(self.data)
+        if m:
+            self.base_url = m.group(1)
+
+    _clean_re = re.compile(r'[^a-z0-9$&+,/:;=?@.#%_\\|-]', re.I)
+
+    @cached_property
+    def links(self):
+        """
+        Return the URLs of all the links on a page together with information
+        about their "rel" attribute, for determining which ones to treat as
+        downloads and which ones to queue for further scraping.
+        """
+        def clean(url):
+            "Tidy up an URL."
+ scheme, netloc, path, params, query, frag = urlparse(url) + return urlunparse((scheme, netloc, quote(path), + params, query, frag)) + + result = set() + for match in self._href.finditer(self.data): + d = match.groupdict('') + rel = (d['rel1'] or d['rel2'] or d['rel3'] or + d['rel4'] or d['rel5'] or d['rel6']) + url = d['url1'] or d['url2'] or d['url3'] + url = urljoin(self.base_url, url) + url = unescape(url) + url = self._clean_re.sub(lambda m: '%%%2x' % ord(m.group(0)), url) + result.add((url, rel)) + # We sort the result, hoping to bring the most recent versions + # to the front + result = sorted(result, key=lambda t: t[0], reverse=True) + return result + + +class SimpleScrapingLocator(Locator): + """ + A locator which scrapes HTML pages to locate downloads for a distribution. + This runs multiple threads to do the I/O; performance is at least as good + as pip's PackageFinder, which works in an analogous fashion. + """ + + # These are used to deal with various Content-Encoding schemes. + decoders = { + 'deflate': zlib.decompress, + 'gzip': lambda b: gzip.GzipFile(fileobj=BytesIO(b)).read(), + 'none': lambda b: b, + } + + def __init__(self, url, timeout=None, num_workers=10, **kwargs): + """ + Initialise an instance. + :param url: The root URL to use for scraping. + :param timeout: The timeout, in seconds, to be applied to requests. + This defaults to ``None`` (no timeout specified). + :param num_workers: The number of worker threads you want to do I/O, + This defaults to 10. + :param kwargs: Passed to the superclass. + """ + super(SimpleScrapingLocator, self).__init__(**kwargs) + self.base_url = ensure_slash(url) + self.timeout = timeout + self._page_cache = {} + self._seen = set() + self._to_fetch = queue.Queue() + self._bad_hosts = set() + self.skip_externals = False + self.num_workers = num_workers + self._lock = threading.RLock() + # See issue #45: we need to be resilient when the locator is used + # in a thread, e.g. with concurrent.futures. We can't use self._lock + # as it is for coordinating our internal threads - the ones created + # in _prepare_threads. + self._gplock = threading.RLock() + self.platform_check = False # See issue #112 + + def _prepare_threads(self): + """ + Threads are created only when get_project is called, and terminate + before it returns. They are there primarily to parallelise I/O (i.e. + fetching web pages). + """ + self._threads = [] + for i in range(self.num_workers): + t = threading.Thread(target=self._fetch) + t.daemon = True + t.start() + self._threads.append(t) + + def _wait_threads(self): + """ + Tell all the threads to terminate (by sending a sentinel value) and + wait for them to do so. + """ + # Note that you need two loops, since you can't say which + # thread will get each sentinel + for t in self._threads: + self._to_fetch.put(None) # sentinel + for t in self._threads: + t.join() + self._threads = [] + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + with self._gplock: + self.result = result + self.project_name = name + url = urljoin(self.base_url, '%s/' % quote(name)) + self._seen.clear() + self._page_cache.clear() + self._prepare_threads() + try: + logger.debug('Queueing %s', url) + self._to_fetch.put(url) + self._to_fetch.join() + finally: + self._wait_threads() + del self.result + return result + + platform_dependent = re.compile(r'\b(linux_(i\d86|x86_64|arm\w+)|' + r'win(32|_amd64)|macosx_?\d+)\b', re.I) + + def _is_platform_dependent(self, url): + """ + Does an URL refer to a platform-specific download? 
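+
+        For example (the URL is illustrative)::
+
+            self._is_platform_dependent(
+                'https://example.com/foo-1.0-win32.whl')   # truthy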
+ """ + return self.platform_dependent.search(url) + + def _process_download(self, url): + """ + See if an URL is a suitable download for a project. + + If it is, register information in the result dictionary (for + _get_project) about the specific version it's for. + + Note that the return value isn't actually used other than as a boolean + value. + """ + if self.platform_check and self._is_platform_dependent(url): + info = None + else: + info = self.convert_url_to_download_info(url, self.project_name) + logger.debug('process_download: %s -> %s', url, info) + if info: + with self._lock: # needed because self.result is shared + self._update_version_data(self.result, info) + return info + + def _should_queue(self, link, referrer, rel): + """ + Determine whether a link URL from a referring page and with a + particular "rel" attribute should be queued for scraping. + """ + scheme, netloc, path, _, _, _ = urlparse(link) + if path.endswith(self.source_extensions + self.binary_extensions + + self.excluded_extensions): + result = False + elif self.skip_externals and not link.startswith(self.base_url): + result = False + elif not referrer.startswith(self.base_url): + result = False + elif rel not in ('homepage', 'download'): + result = False + elif scheme not in ('http', 'https', 'ftp'): + result = False + elif self._is_platform_dependent(link): + result = False + else: + host = netloc.split(':', 1)[0] + if host.lower() == 'localhost': + result = False + else: + result = True + logger.debug('should_queue: %s (%s) from %s -> %s', link, rel, + referrer, result) + return result + + def _fetch(self): + """ + Get a URL to fetch from the work queue, get the HTML page, examine its + links for download candidates and candidates for further scraping. + + This is a handy method to run in a thread. + """ + while True: + url = self._to_fetch.get() + try: + if url: + page = self.get_page(url) + if page is None: # e.g. after an error + continue + for link, rel in page.links: + if link not in self._seen: + try: + self._seen.add(link) + if (not self._process_download(link) and + self._should_queue(link, url, rel)): + logger.debug('Queueing %s from %s', link, url) + self._to_fetch.put(link) + except MetadataInvalidError: # e.g. invalid versions + pass + except Exception as e: # pragma: no cover + self.errors.put(text_type(e)) + finally: + # always do this, to avoid hangs :-) + self._to_fetch.task_done() + if not url: + #logger.debug('Sentinel seen, quitting.') + break + + def get_page(self, url): + """ + Get the HTML for an URL, possibly from an in-memory cache. + + XXX TODO Note: this cache is never actually cleared. It's assumed that + the data won't get stale over the lifetime of a locator instance (not + necessarily true for the default_locator). 
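+
+        For example (the URL is illustrative)::
+
+            page = locator.get_page('https://pypi.org/simple/example/')
+            links = page.links if page is not None else []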
+        """
+        # http://peak.telecommunity.com/DevCenter/EasyInstall#package-index-api
+        scheme, netloc, path, _, _, _ = urlparse(url)
+        if scheme == 'file' and os.path.isdir(url2pathname(path)):
+            url = urljoin(ensure_slash(url), 'index.html')
+
+        if url in self._page_cache:
+            result = self._page_cache[url]
+            logger.debug('Returning %s from cache: %s', url, result)
+        else:
+            host = netloc.split(':', 1)[0]
+            result = None
+            if host in self._bad_hosts:
+                logger.debug('Skipping %s due to bad host %s', url, host)
+            else:
+                req = Request(url, headers={'Accept-encoding': 'identity'})
+                try:
+                    logger.debug('Fetching %s', url)
+                    resp = self.opener.open(req, timeout=self.timeout)
+                    logger.debug('Fetched %s', url)
+                    headers = resp.info()
+                    content_type = headers.get('Content-Type', '')
+                    if HTML_CONTENT_TYPE.match(content_type):
+                        final_url = resp.geturl()
+                        data = resp.read()
+                        encoding = headers.get('Content-Encoding')
+                        if encoding:
+                            decoder = self.decoders[encoding]   # fail if not found
+                            data = decoder(data)
+                        encoding = 'utf-8'
+                        m = CHARSET.search(content_type)
+                        if m:
+                            encoding = m.group(1)
+                        try:
+                            data = data.decode(encoding)
+                        except UnicodeError:  # pragma: no cover
+                            data = data.decode('latin-1')    # fallback
+                        result = Page(data, final_url)
+                        self._page_cache[final_url] = result
+                except HTTPError as e:
+                    if e.code != 404:
+                        logger.exception('Fetch failed: %s: %s', url, e)
+                except URLError as e:  # pragma: no cover
+                    logger.exception('Fetch failed: %s: %s', url, e)
+                    with self._lock:
+                        self._bad_hosts.add(host)
+                except Exception as e:  # pragma: no cover
+                    logger.exception('Fetch failed: %s: %s', url, e)
+                finally:
+                    self._page_cache[url] = result   # even if None (failure)
+        return result
+
+    _distname_re = re.compile('<a href=[^>]*>([^<]+)<')
+
+    def get_distribution_names(self):
+        """
+        Return all the distribution names known to this locator.
+        """
+        result = set()
+        page = self.get_page(self.base_url)
+        if not page:
+            raise DistlibException('Unable to get %s' % self.base_url)
+        for match in self._distname_re.finditer(page.data):
+            result.add(match.group(1))
+        return result
+
+class DirectoryLocator(Locator):
+    """
+    This class locates distributions in a directory tree.
+    """
+
+    def __init__(self, path, **kwargs):
+        """
+        Initialise an instance.
+        :param path: The root of the directory tree to search.
+        :param kwargs: Passed to the superclass constructor,
+                       except for:
+                       * recursive - if True (the default), subdirectories are
+                         recursed into. If False, only the top-level directory
+                         is searched.
+        """
+        self.recursive = kwargs.pop('recursive', True)
+        super(DirectoryLocator, self).__init__(**kwargs)
+        path = os.path.abspath(path)
+        if not os.path.isdir(path):  # pragma: no cover
+            raise DistlibException('Not a directory: %r' % path)
+        self.base_dir = path
+
+    def should_include(self, filename, parent):
+        """
+        Should a filename be considered as a candidate for a distribution
+        archive? As well as the filename, the directory which contains it
+        is provided, though not used by the current implementation.
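+
+        A sketch of a subclass that narrows the candidates (illustrative)::
+
+            class WheelOnlyLocator(DirectoryLocator):
+                def should_include(self, filename, parent):
+                    return filename.endswith('.whl')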
+ """ + return filename.endswith(self.downloadable_extensions) + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + for root, dirs, files in os.walk(self.base_dir): + for fn in files: + if self.should_include(fn, root): + fn = os.path.join(root, fn) + url = urlunparse(('file', '', + pathname2url(os.path.abspath(fn)), + '', '', '')) + info = self.convert_url_to_download_info(url, name) + if info: + self._update_version_data(result, info) + if not self.recursive: + break + return result + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + result = set() + for root, dirs, files in os.walk(self.base_dir): + for fn in files: + if self.should_include(fn, root): + fn = os.path.join(root, fn) + url = urlunparse(('file', '', + pathname2url(os.path.abspath(fn)), + '', '', '')) + info = self.convert_url_to_download_info(url, None) + if info: + result.add(info['name']) + if not self.recursive: + break + return result + +class JSONLocator(Locator): + """ + This locator uses special extended metadata (not available on PyPI) and is + the basis of performant dependency resolution in distlib. Other locators + require archive downloads before dependencies can be determined! As you + might imagine, that can be slow. + """ + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + raise NotImplementedError('Not available from this locator') + + def _get_project(self, name): + result = {'urls': {}, 'digests': {}} + data = get_project_data(name) + if data: + for info in data.get('files', []): + if info['ptype'] != 'sdist' or info['pyversion'] != 'source': + continue + # We don't store summary in project metadata as it makes + # the data bigger for no benefit during dependency + # resolution + dist = make_dist(data['name'], info['version'], + summary=data.get('summary', + 'Placeholder for summary'), + scheme=self.scheme) + md = dist.metadata + md.source_url = info['url'] + # TODO SHA256 digest + if 'digest' in info and info['digest']: + dist.digest = ('md5', info['digest']) + md.dependencies = info.get('requirements', {}) + dist.exports = info.get('exports', {}) + result[dist.version] = dist + result['urls'].setdefault(dist.version, set()).add(info['url']) + return result + +class DistPathLocator(Locator): + """ + This locator finds installed distributions in a path. It can be useful for + adding to an :class:`AggregatingLocator`. + """ + def __init__(self, distpath, **kwargs): + """ + Initialise an instance. + + :param distpath: A :class:`DistributionPath` instance to search. + """ + super(DistPathLocator, self).__init__(**kwargs) + assert isinstance(distpath, DistributionPath) + self.distpath = distpath + + def _get_project(self, name): + dist = self.distpath.get_distribution(name) + if dist is None: + result = {'urls': {}, 'digests': {}} + else: + result = { + dist.version: dist, + 'urls': {dist.version: set([dist.source_url])}, + 'digests': {dist.version: set([None])} + } + return result + + +class AggregatingLocator(Locator): + """ + This class allows you to chain and/or merge a list of locators. + """ + def __init__(self, *locators, **kwargs): + """ + Initialise an instance. + + :param locators: The list of locators to search. + :param kwargs: Passed to the superclass constructor, + except for: + * merge - if False (the default), the first successful + search from any of the locators is returned. If True, + the results from all locators are merged (this can be + slow). 
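+
+        For example (mirroring the module's own ``default_locator``)::
+
+            locator = AggregatingLocator(
+                JSONLocator(),
+                SimpleScrapingLocator('https://pypi.org/simple/',
+                                      timeout=3.0),
+                scheme='legacy')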
+ """ + self.merge = kwargs.pop('merge', False) + self.locators = locators + super(AggregatingLocator, self).__init__(**kwargs) + + def clear_cache(self): + super(AggregatingLocator, self).clear_cache() + for locator in self.locators: + locator.clear_cache() + + def _set_scheme(self, value): + self._scheme = value + for locator in self.locators: + locator.scheme = value + + scheme = property(Locator.scheme.fget, _set_scheme) + + def _get_project(self, name): + result = {} + for locator in self.locators: + d = locator.get_project(name) + if d: + if self.merge: + files = result.get('urls', {}) + digests = result.get('digests', {}) + # next line could overwrite result['urls'], result['digests'] + result.update(d) + df = result.get('urls') + if files and df: + for k, v in files.items(): + if k in df: + df[k] |= v + else: + df[k] = v + dd = result.get('digests') + if digests and dd: + dd.update(digests) + else: + # See issue #18. If any dists are found and we're looking + # for specific constraints, we only return something if + # a match is found. For example, if a DirectoryLocator + # returns just foo (1.0) while we're looking for + # foo (>= 2.0), we'll pretend there was nothing there so + # that subsequent locators can be queried. Otherwise we + # would just return foo (1.0) which would then lead to a + # failure to find foo (>= 2.0), because other locators + # weren't searched. Note that this only matters when + # merge=False. + if self.matcher is None: + found = True + else: + found = False + for k in d: + if self.matcher.match(k): + found = True + break + if found: + result = d + break + return result + + def get_distribution_names(self): + """ + Return all the distribution names known to this locator. + """ + result = set() + for locator in self.locators: + try: + result |= locator.get_distribution_names() + except NotImplementedError: + pass + return result + + +# We use a legacy scheme simply because most of the dists on PyPI use legacy +# versions which don't conform to PEP 426 / PEP 440. +default_locator = AggregatingLocator( + JSONLocator(), + SimpleScrapingLocator('https://pypi.org/simple/', + timeout=3.0), + scheme='legacy') + +locate = default_locator.locate + + +class DependencyFinder(object): + """ + Locate dependencies for distributions. + """ + + def __init__(self, locator=None): + """ + Initialise an instance, using the specified locator + to locate distributions. + """ + self.locator = locator or default_locator + self.scheme = get_scheme(self.locator.scheme) + + def add_distribution(self, dist): + """ + Add a distribution to the finder. This will update internal information + about who provides what. + :param dist: The distribution to add. + """ + logger.debug('adding distribution %s', dist) + name = dist.key + self.dists_by_name[name] = dist + self.dists[(name, dist.version)] = dist + for p in dist.provides: + name, version = parse_name_and_version(p) + logger.debug('Add to provided: %s, %s, %s', name, version, dist) + self.provided.setdefault(name, set()).add((version, dist)) + + def remove_distribution(self, dist): + """ + Remove a distribution from the finder. This will update internal + information about who provides what. + :param dist: The distribution to remove. 
+        """
+        logger.debug('removing distribution %s', dist)
+        name = dist.key
+        del self.dists_by_name[name]
+        del self.dists[(name, dist.version)]
+        for p in dist.provides:
+            name, version = parse_name_and_version(p)
+            logger.debug('Remove from provided: %s, %s, %s', name, version, dist)
+            s = self.provided[name]
+            s.remove((version, dist))
+            if not s:
+                del self.provided[name]
+
+    def get_matcher(self, reqt):
+        """
+        Get a version matcher for a requirement.
+        :param reqt: The requirement
+        :type reqt: str
+        :return: A version matcher (an instance of
+                 :class:`distlib.version.Matcher`).
+        """
+        try:
+            matcher = self.scheme.matcher(reqt)
+        except UnsupportedVersionError:  # pragma: no cover
+            # XXX compat-mode if cannot read the version
+            name = reqt.split()[0]
+            matcher = self.scheme.matcher(name)
+        return matcher
+
+    def find_providers(self, reqt):
+        """
+        Find the distributions which can fulfill a requirement.
+
+        :param reqt: The requirement.
+        :type reqt: str
+        :return: A set of distributions which can fulfill the requirement.
+        """
+        matcher = self.get_matcher(reqt)
+        name = matcher.key   # case-insensitive
+        result = set()
+        provided = self.provided
+        if name in provided:
+            for version, provider in provided[name]:
+                try:
+                    match = matcher.match(version)
+                except UnsupportedVersionError:
+                    match = False
+
+                if match:
+                    result.add(provider)
+                    break
+        return result
+
+    def try_to_replace(self, provider, other, problems):
+        """
+        Attempt to replace one provider with another. This is typically used
+        when resolving dependencies from multiple sources, e.g. A requires
+        (B >= 1.0) while C requires (B >= 1.1).
+
+        For successful replacement, ``provider`` must meet all the requirements
+        which ``other`` fulfills.
+
+        :param provider: The provider we are trying to replace with.
+        :param other: The provider we're trying to replace.
+        :param problems: If False is returned, this will contain what
+                         problems prevented replacement. This is currently
+                         a tuple of the literal string 'cantreplace',
+                         ``provider``, ``other`` and the set of requirements
+                         that ``provider`` couldn't fulfill.
+        :return: True if we can replace ``other`` with ``provider``, else
+                 False.
+        """
+        rlist = self.reqts[other]
+        unmatched = set()
+        for s in rlist:
+            matcher = self.get_matcher(s)
+            if not matcher.match(provider.version):
+                unmatched.add(s)
+        if unmatched:
+            # can't replace other with provider
+            problems.add(('cantreplace', provider, other,
+                          frozenset(unmatched)))
+            result = False
+        else:
+            # can replace other with provider
+            self.remove_distribution(other)
+            del self.reqts[other]
+            for s in rlist:
+                self.reqts.setdefault(provider, set()).add(s)
+            self.add_distribution(provider)
+            result = True
+        return result
+
+    def find(self, requirement, meta_extras=None, prereleases=False):
+        """
+        Find a distribution and all distributions it depends on.
+
+        :param requirement: The requirement specifying the distribution to
+                            find, or a Distribution instance.
+        :param meta_extras: A list of meta extras such as :test:, :build: and
+                            so on.
+        :param prereleases: If ``True``, allow pre-release versions to be
+                            returned - otherwise, don't return prereleases
+                            unless they're all that's available.
+
+        Return a set of :class:`Distribution` instances and a set of
+        problems.
+ + The distributions returned should be such that they have the + :attr:`required` attribute set to ``True`` if they were + from the ``requirement`` passed to ``find()``, and they have the + :attr:`build_time_dependency` attribute set to ``True`` unless they + are post-installation dependencies of the ``requirement``. + + The problems should be a tuple consisting of the string + ``'unsatisfied'`` and the requirement which couldn't be satisfied + by any distribution known to the locator. + """ + + self.provided = {} + self.dists = {} + self.dists_by_name = {} + self.reqts = {} + + meta_extras = set(meta_extras or []) + if ':*:' in meta_extras: + meta_extras.remove(':*:') + # :meta: and :run: are implicitly included + meta_extras |= set([':test:', ':build:', ':dev:']) + + if isinstance(requirement, Distribution): + dist = odist = requirement + logger.debug('passed %s as requirement', odist) + else: + dist = odist = self.locator.locate(requirement, + prereleases=prereleases) + if dist is None: + raise DistlibException('Unable to locate %r' % requirement) + logger.debug('located %s', odist) + dist.requested = True + problems = set() + todo = set([dist]) + install_dists = set([odist]) + while todo: + dist = todo.pop() + name = dist.key # case-insensitive + if name not in self.dists_by_name: + self.add_distribution(dist) + else: + #import pdb; pdb.set_trace() + other = self.dists_by_name[name] + if other != dist: + self.try_to_replace(dist, other, problems) + + ireqts = dist.run_requires | dist.meta_requires + sreqts = dist.build_requires + ereqts = set() + if meta_extras and dist in install_dists: + for key in ('test', 'build', 'dev'): + e = ':%s:' % key + if e in meta_extras: + ereqts |= getattr(dist, '%s_requires' % key) + all_reqts = ireqts | sreqts | ereqts + for r in all_reqts: + providers = self.find_providers(r) + if not providers: + logger.debug('No providers found for %r', r) + provider = self.locator.locate(r, prereleases=prereleases) + # If no provider is found and we didn't consider + # prereleases, consider them now. 
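+                    # (Illustrative: if the only available version of a
+                    # dependency is a pre-release such as 2.0rc1, the first
+                    # locate() call with prereleases=False returns None and
+                    # the retry below with prereleases=True picks it up.)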
+ if provider is None and not prereleases: + provider = self.locator.locate(r, prereleases=True) + if provider is None: + logger.debug('Cannot satisfy %r', r) + problems.add(('unsatisfied', r)) + else: + n, v = provider.key, provider.version + if (n, v) not in self.dists: + todo.add(provider) + providers.add(provider) + if r in ireqts and dist in install_dists: + install_dists.add(provider) + logger.debug('Adding %s to install_dists', + provider.name_and_version) + for p in providers: + name = p.key + if name not in self.dists_by_name: + self.reqts.setdefault(p, set()).add(r) + else: + other = self.dists_by_name[name] + if other != p: + # see if other can be replaced by p + self.try_to_replace(p, other, problems) + + dists = set(self.dists.values()) + for dist in dists: + dist.build_time_dependency = dist not in install_dists + if dist.build_time_dependency: + logger.debug('%s is a build-time dependency only.', + dist.name_and_version) + logger.debug('find done for %s', odist) + return dists, problems diff --git a/lib/python3.11/site-packages/pip/_vendor/distlib/manifest.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/manifest.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/distlib/manifest.py rename to python/lib/python3.10/site-packages/pip/_vendor/distlib/manifest.py diff --git a/lib/python3.11/site-packages/pip/_vendor/distlib/markers.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/markers.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/distlib/markers.py rename to python/lib/python3.10/site-packages/pip/_vendor/distlib/markers.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/distlib/metadata.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/metadata.py new file mode 100644 index 0000000..6a26b0a --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/distlib/metadata.py @@ -0,0 +1,1058 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2012 The Python Software Foundation. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +"""Implementation of the Metadata for Python packages PEPs. + +Supports all metadata formats (1.0, 1.1, 1.2, 1.3/2.1 and withdrawn 2.0). +""" +from __future__ import unicode_literals + +import codecs +from email import message_from_file +import json +import logging +import re + + +from . import DistlibException, __version__ +from .compat import StringIO, string_types, text_type +from .markers import interpret +from .util import extract_by_key, get_extras +from .version import get_scheme, PEP440_VERSION_RE + +logger = logging.getLogger(__name__) + + +class MetadataMissingError(DistlibException): + """A required metadata is missing""" + + +class MetadataConflictError(DistlibException): + """Attempt to read or write metadata fields that are conflictual.""" + + +class MetadataUnrecognizedVersionError(DistlibException): + """Unknown metadata version number.""" + + +class MetadataInvalidError(DistlibException): + """A metadata value is invalid""" + +# public API of this module +__all__ = ['Metadata', 'PKG_INFO_ENCODING', 'PKG_INFO_PREFERRED_VERSION'] + +# Encoding used for the PKG-INFO files +PKG_INFO_ENCODING = 'utf-8' + +# preferred version. 
Hopefully will be changed +# to 1.2 once PEP 345 is supported everywhere +PKG_INFO_PREFERRED_VERSION = '1.1' + +_LINE_PREFIX_1_2 = re.compile('\n \\|') +_LINE_PREFIX_PRE_1_2 = re.compile('\n ') +_241_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'License') + +_314_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Supported-Platform', 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'License', 'Classifier', 'Download-URL', 'Obsoletes', + 'Provides', 'Requires') + +_314_MARKERS = ('Obsoletes', 'Provides', 'Requires', 'Classifier', + 'Download-URL') + +_345_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Supported-Platform', 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'Maintainer', 'Maintainer-email', 'License', + 'Classifier', 'Download-URL', 'Obsoletes-Dist', + 'Project-URL', 'Provides-Dist', 'Requires-Dist', + 'Requires-Python', 'Requires-External') + +_345_MARKERS = ('Provides-Dist', 'Requires-Dist', 'Requires-Python', + 'Obsoletes-Dist', 'Requires-External', 'Maintainer', + 'Maintainer-email', 'Project-URL') + +_426_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform', + 'Supported-Platform', 'Summary', 'Description', + 'Keywords', 'Home-page', 'Author', 'Author-email', + 'Maintainer', 'Maintainer-email', 'License', + 'Classifier', 'Download-URL', 'Obsoletes-Dist', + 'Project-URL', 'Provides-Dist', 'Requires-Dist', + 'Requires-Python', 'Requires-External', 'Private-Version', + 'Obsoleted-By', 'Setup-Requires-Dist', 'Extension', + 'Provides-Extra') + +_426_MARKERS = ('Private-Version', 'Provides-Extra', 'Obsoleted-By', + 'Setup-Requires-Dist', 'Extension') + +# See issue #106: Sometimes 'Requires' and 'Provides' occur wrongly in +# the metadata. Include them in the tuple literal below to allow them +# (for now). +# Ditto for Obsoletes - see issue #140. 
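+# A doctest-style sketch (illustrative only) of how the field tuples above
+# drive version detection in _best_version() below; the import path assumes
+# pip's vendored copy of distlib:
+#
+#     >>> from pip._vendor.distlib.metadata import _best_version
+#     >>> _best_version({'Name': 'foo', 'Version': '1.0',
+#     ...                'Requires-Dist': ['bar (>=1.0)']})
+#     '1.2'
+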
+_566_FIELDS = _426_FIELDS + ('Description-Content-Type',
+                             'Requires', 'Provides', 'Obsoletes')
+
+_566_MARKERS = ('Description-Content-Type',)
+
+_ALL_FIELDS = set()
+_ALL_FIELDS.update(_241_FIELDS)
+_ALL_FIELDS.update(_314_FIELDS)
+_ALL_FIELDS.update(_345_FIELDS)
+_ALL_FIELDS.update(_426_FIELDS)
+_ALL_FIELDS.update(_566_FIELDS)
+
+EXTRA_RE = re.compile(r'''extra\s*==\s*("([^"]+)"|'([^']+)')''')
+
+
+def _version2fieldlist(version):
+    if version == '1.0':
+        return _241_FIELDS
+    elif version == '1.1':
+        return _314_FIELDS
+    elif version == '1.2':
+        return _345_FIELDS
+    elif version in ('1.3', '2.1'):
+        # avoid adding field names if already there
+        return _345_FIELDS + tuple(f for f in _566_FIELDS if f not in _345_FIELDS)
+    elif version == '2.0':
+        return _426_FIELDS
+    raise MetadataUnrecognizedVersionError(version)
+
+
+def _best_version(fields):
+    """Detect the best version depending on the fields used."""
+    def _has_marker(keys, markers):
+        for marker in markers:
+            if marker in keys:
+                return True
+        return False
+
+    keys = []
+    for key, value in fields.items():
+        if value in ([], 'UNKNOWN', None):
+            continue
+        keys.append(key)
+
+    possible_versions = ['1.0', '1.1', '1.2', '1.3', '2.0', '2.1']
+
+    # first, let's see whether a field is not part of one of the versions
+    for key in keys:
+        if key not in _241_FIELDS and '1.0' in possible_versions:
+            possible_versions.remove('1.0')
+            logger.debug('Removed 1.0 due to %s', key)
+        if key not in _314_FIELDS and '1.1' in possible_versions:
+            possible_versions.remove('1.1')
+            logger.debug('Removed 1.1 due to %s', key)
+        if key not in _345_FIELDS and '1.2' in possible_versions:
+            possible_versions.remove('1.2')
+            logger.debug('Removed 1.2 due to %s', key)
+        if key not in _566_FIELDS and '1.3' in possible_versions:
+            possible_versions.remove('1.3')
+            logger.debug('Removed 1.3 due to %s', key)
+        if key not in _566_FIELDS and '2.1' in possible_versions:
+            if key != 'Description':  # In 2.1, description allowed after headers
+                possible_versions.remove('2.1')
+                logger.debug('Removed 2.1 due to %s', key)
+        if key not in _426_FIELDS and '2.0' in possible_versions:
+            possible_versions.remove('2.0')
+            logger.debug('Removed 2.0 due to %s', key)
+
+    # possible_versions now contains the qualifying versions
+    if len(possible_versions) == 1:
+        return possible_versions[0]  # found!
+ elif len(possible_versions) == 0: + logger.debug('Out of options - unknown metadata set: %s', fields) + raise MetadataConflictError('Unknown metadata set') + + # let's see if one unique marker is found + is_1_1 = '1.1' in possible_versions and _has_marker(keys, _314_MARKERS) + is_1_2 = '1.2' in possible_versions and _has_marker(keys, _345_MARKERS) + is_2_1 = '2.1' in possible_versions and _has_marker(keys, _566_MARKERS) + is_2_0 = '2.0' in possible_versions and _has_marker(keys, _426_MARKERS) + if int(is_1_1) + int(is_1_2) + int(is_2_1) + int(is_2_0) > 1: + raise MetadataConflictError('You used incompatible 1.1/1.2/2.0/2.1 fields') + + # we have the choice, 1.0, or 1.2, or 2.0 + # - 1.0 has a broken Summary field but works with all tools + # - 1.1 is to avoid + # - 1.2 fixes Summary but has little adoption + # - 2.0 adds more features and is very new + if not is_1_1 and not is_1_2 and not is_2_1 and not is_2_0: + # we couldn't find any specific marker + if PKG_INFO_PREFERRED_VERSION in possible_versions: + return PKG_INFO_PREFERRED_VERSION + if is_1_1: + return '1.1' + if is_1_2: + return '1.2' + if is_2_1: + return '2.1' + + return '2.0' + +# This follows the rules about transforming keys as described in +# https://www.python.org/dev/peps/pep-0566/#id17 +_ATTR2FIELD = { + name.lower().replace("-", "_"): name for name in _ALL_FIELDS +} +_FIELD2ATTR = {field: attr for attr, field in _ATTR2FIELD.items()} + +_PREDICATE_FIELDS = ('Requires-Dist', 'Obsoletes-Dist', 'Provides-Dist') +_VERSIONS_FIELDS = ('Requires-Python',) +_VERSION_FIELDS = ('Version',) +_LISTFIELDS = ('Platform', 'Classifier', 'Obsoletes', + 'Requires', 'Provides', 'Obsoletes-Dist', + 'Provides-Dist', 'Requires-Dist', 'Requires-External', + 'Project-URL', 'Supported-Platform', 'Setup-Requires-Dist', + 'Provides-Extra', 'Extension') +_LISTTUPLEFIELDS = ('Project-URL',) + +_ELEMENTSFIELD = ('Keywords',) + +_UNICODEFIELDS = ('Author', 'Maintainer', 'Summary', 'Description') + +_MISSING = object() + +_FILESAFE = re.compile('[^A-Za-z0-9.]+') + + +def _get_name_and_version(name, version, for_filename=False): + """Return the distribution name with version. + + If for_filename is true, return a filename-escaped form.""" + if for_filename: + # For both name and version any runs of non-alphanumeric or '.' + # characters are replaced with a single '-'. Additionally any + # spaces in the version string become '.' + name = _FILESAFE.sub('-', name) + version = _FILESAFE.sub('-', version.replace(' ', '.')) + return '%s-%s' % (name, version) + + +class LegacyMetadata(object): + """The legacy metadata of a release. + + Supports versions 1.0, 1.1, 1.2, 2.0 and 1.3/2.1 (auto-detected). 
You can + instantiate the class with one of these arguments (or none): + - *path*, the path to a metadata file + - *fileobj* give a file-like object with metadata as content + - *mapping* is a dict-like object + - *scheme* is a version scheme name + """ + # TODO document the mapping API and UNKNOWN default key + + def __init__(self, path=None, fileobj=None, mapping=None, + scheme='default'): + if [path, fileobj, mapping].count(None) < 2: + raise TypeError('path, fileobj and mapping are exclusive') + self._fields = {} + self.requires_files = [] + self._dependencies = None + self.scheme = scheme + if path is not None: + self.read(path) + elif fileobj is not None: + self.read_file(fileobj) + elif mapping is not None: + self.update(mapping) + self.set_metadata_version() + + def set_metadata_version(self): + self._fields['Metadata-Version'] = _best_version(self._fields) + + def _write_field(self, fileobj, name, value): + fileobj.write('%s: %s\n' % (name, value)) + + def __getitem__(self, name): + return self.get(name) + + def __setitem__(self, name, value): + return self.set(name, value) + + def __delitem__(self, name): + field_name = self._convert_name(name) + try: + del self._fields[field_name] + except KeyError: + raise KeyError(name) + + def __contains__(self, name): + return (name in self._fields or + self._convert_name(name) in self._fields) + + def _convert_name(self, name): + if name in _ALL_FIELDS: + return name + name = name.replace('-', '_').lower() + return _ATTR2FIELD.get(name, name) + + def _default_value(self, name): + if name in _LISTFIELDS or name in _ELEMENTSFIELD: + return [] + return 'UNKNOWN' + + def _remove_line_prefix(self, value): + if self.metadata_version in ('1.0', '1.1'): + return _LINE_PREFIX_PRE_1_2.sub('\n', value) + else: + return _LINE_PREFIX_1_2.sub('\n', value) + + def __getattr__(self, name): + if name in _ATTR2FIELD: + return self[name] + raise AttributeError(name) + + # + # Public API + # + +# dependencies = property(_get_dependencies, _set_dependencies) + + def get_fullname(self, filesafe=False): + """Return the distribution name with version. 
+ + If filesafe is true, return a filename-escaped form.""" + return _get_name_and_version(self['Name'], self['Version'], filesafe) + + def is_field(self, name): + """return True if name is a valid metadata key""" + name = self._convert_name(name) + return name in _ALL_FIELDS + + def is_multi_field(self, name): + name = self._convert_name(name) + return name in _LISTFIELDS + + def read(self, filepath): + """Read the metadata values from a file path.""" + fp = codecs.open(filepath, 'r', encoding='utf-8') + try: + self.read_file(fp) + finally: + fp.close() + + def read_file(self, fileob): + """Read the metadata values from a file object.""" + msg = message_from_file(fileob) + self._fields['Metadata-Version'] = msg['metadata-version'] + + # When reading, get all the fields we can + for field in _ALL_FIELDS: + if field not in msg: + continue + if field in _LISTFIELDS: + # we can have multiple lines + values = msg.get_all(field) + if field in _LISTTUPLEFIELDS and values is not None: + values = [tuple(value.split(',')) for value in values] + self.set(field, values) + else: + # single line + value = msg[field] + if value is not None and value != 'UNKNOWN': + self.set(field, value) + + # PEP 566 specifies that the body be used for the description, if + # available + body = msg.get_payload() + self["Description"] = body if body else self["Description"] + # logger.debug('Attempting to set metadata for %s', self) + # self.set_metadata_version() + + def write(self, filepath, skip_unknown=False): + """Write the metadata fields to filepath.""" + fp = codecs.open(filepath, 'w', encoding='utf-8') + try: + self.write_file(fp, skip_unknown) + finally: + fp.close() + + def write_file(self, fileobject, skip_unknown=False): + """Write the PKG-INFO format data to a file object.""" + self.set_metadata_version() + + for field in _version2fieldlist(self['Metadata-Version']): + values = self.get(field) + if skip_unknown and values in ('UNKNOWN', [], ['UNKNOWN']): + continue + if field in _ELEMENTSFIELD: + self._write_field(fileobject, field, ','.join(values)) + continue + if field not in _LISTFIELDS: + if field == 'Description': + if self.metadata_version in ('1.0', '1.1'): + values = values.replace('\n', '\n ') + else: + values = values.replace('\n', '\n |') + values = [values] + + if field in _LISTTUPLEFIELDS: + values = [','.join(value) for value in values] + + for value in values: + self._write_field(fileobject, field, value) + + def update(self, other=None, **kwargs): + """Set metadata values from the given iterable `other` and kwargs. + + Behavior is like `dict.update`: If `other` has a ``keys`` method, + they are looped over and ``self[key]`` is assigned ``other[key]``. + Else, ``other`` is an iterable of ``(key, value)`` iterables. + + Keys that don't match a metadata field or that have an empty value are + dropped. 
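+
+        A minimal illustrative sketch::
+
+            md = LegacyMetadata()
+            md.update({'name': 'foo', 'version': '1.0'})
+            # now md['Name'] == 'foo' and md['Version'] == '1.0'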
+ """ + def _set(key, value): + if key in _ATTR2FIELD and value: + self.set(self._convert_name(key), value) + + if not other: + # other is None or empty container + pass + elif hasattr(other, 'keys'): + for k in other.keys(): + _set(k, other[k]) + else: + for k, v in other: + _set(k, v) + + if kwargs: + for k, v in kwargs.items(): + _set(k, v) + + def set(self, name, value): + """Control then set a metadata field.""" + name = self._convert_name(name) + + if ((name in _ELEMENTSFIELD or name == 'Platform') and + not isinstance(value, (list, tuple))): + if isinstance(value, string_types): + value = [v.strip() for v in value.split(',')] + else: + value = [] + elif (name in _LISTFIELDS and + not isinstance(value, (list, tuple))): + if isinstance(value, string_types): + value = [value] + else: + value = [] + + if logger.isEnabledFor(logging.WARNING): + project_name = self['Name'] + + scheme = get_scheme(self.scheme) + if name in _PREDICATE_FIELDS and value is not None: + for v in value: + # check that the values are valid + if not scheme.is_valid_matcher(v.split(';')[0]): + logger.warning( + "'%s': '%s' is not valid (field '%s')", + project_name, v, name) + # FIXME this rejects UNKNOWN, is that right? + elif name in _VERSIONS_FIELDS and value is not None: + if not scheme.is_valid_constraint_list(value): + logger.warning("'%s': '%s' is not a valid version (field '%s')", + project_name, value, name) + elif name in _VERSION_FIELDS and value is not None: + if not scheme.is_valid_version(value): + logger.warning("'%s': '%s' is not a valid version (field '%s')", + project_name, value, name) + + if name in _UNICODEFIELDS: + if name == 'Description': + value = self._remove_line_prefix(value) + + self._fields[name] = value + + def get(self, name, default=_MISSING): + """Get a metadata field.""" + name = self._convert_name(name) + if name not in self._fields: + if default is _MISSING: + default = self._default_value(name) + return default + if name in _UNICODEFIELDS: + value = self._fields[name] + return value + elif name in _LISTFIELDS: + value = self._fields[name] + if value is None: + return [] + res = [] + for val in value: + if name not in _LISTTUPLEFIELDS: + res.append(val) + else: + # That's for Project-URL + res.append((val[0], val[1])) + return res + + elif name in _ELEMENTSFIELD: + value = self._fields[name] + if isinstance(value, string_types): + return value.split(',') + return self._fields[name] + + def check(self, strict=False): + """Check if the metadata is compliant. 
If strict is True then raise if + no Name or Version are provided""" + self.set_metadata_version() + + # XXX should check the versions (if the file was loaded) + missing, warnings = [], [] + + for attr in ('Name', 'Version'): # required by PEP 345 + if attr not in self: + missing.append(attr) + + if strict and missing != []: + msg = 'missing required metadata: %s' % ', '.join(missing) + raise MetadataMissingError(msg) + + for attr in ('Home-page', 'Author'): + if attr not in self: + missing.append(attr) + + # checking metadata 1.2 (XXX needs to check 1.1, 1.0) + if self['Metadata-Version'] != '1.2': + return missing, warnings + + scheme = get_scheme(self.scheme) + + def are_valid_constraints(value): + for v in value: + if not scheme.is_valid_matcher(v.split(';')[0]): + return False + return True + + for fields, controller in ((_PREDICATE_FIELDS, are_valid_constraints), + (_VERSIONS_FIELDS, + scheme.is_valid_constraint_list), + (_VERSION_FIELDS, + scheme.is_valid_version)): + for field in fields: + value = self.get(field, None) + if value is not None and not controller(value): + warnings.append("Wrong value for '%s': %s" % (field, value)) + + return missing, warnings + + def todict(self, skip_missing=False): + """Return fields as a dict. + + Field names will be converted to use the underscore-lowercase style + instead of hyphen-mixed case (i.e. home_page instead of Home-page). + This is as per https://www.python.org/dev/peps/pep-0566/#id17. + """ + self.set_metadata_version() + + fields = _version2fieldlist(self['Metadata-Version']) + + data = {} + + for field_name in fields: + if not skip_missing or field_name in self._fields: + key = _FIELD2ATTR[field_name] + if key != 'project_url': + data[key] = self[field_name] + else: + data[key] = [','.join(u) for u in self[field_name]] + + return data + + def add_requirements(self, requirements): + if self['Metadata-Version'] == '1.1': + # we can't have 1.1 metadata *and* Setuptools requires + for field in ('Obsoletes', 'Requires', 'Provides'): + if field in self: + del self[field] + self['Requires-Dist'] += requirements + + # Mapping API + # TODO could add iter* variants + + def keys(self): + return list(_version2fieldlist(self['Metadata-Version'])) + + def __iter__(self): + for key in self.keys(): + yield key + + def values(self): + return [self[key] for key in self.keys()] + + def items(self): + return [(key, self[key]) for key in self.keys()] + + def __repr__(self): + return '<%s %s %s>' % (self.__class__.__name__, self.name, + self.version) + + +METADATA_FILENAME = 'pydist.json' +WHEEL_METADATA_FILENAME = 'metadata.json' +LEGACY_METADATA_FILENAME = 'METADATA' + + +class Metadata(object): + """ + The metadata of a release. This implementation uses 2.0 (JSON) + metadata where possible. If not possible, it wraps a LegacyMetadata + instance which handles the key-value metadata format. 
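+
+    A rough usage sketch (the path here is hypothetical)::
+
+        md = Metadata(path='PKG-INFO')  # non-JSON input: legacy wrapper
+        print(md.name, md.version)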
+ """ + + METADATA_VERSION_MATCHER = re.compile(r'^\d+(\.\d+)*$') + + NAME_MATCHER = re.compile('^[0-9A-Z]([0-9A-Z_.-]*[0-9A-Z])?$', re.I) + + VERSION_MATCHER = PEP440_VERSION_RE + + SUMMARY_MATCHER = re.compile('.{1,2047}') + + METADATA_VERSION = '2.0' + + GENERATOR = 'distlib (%s)' % __version__ + + MANDATORY_KEYS = { + 'name': (), + 'version': (), + 'summary': ('legacy',), + } + + INDEX_KEYS = ('name version license summary description author ' + 'author_email keywords platform home_page classifiers ' + 'download_url') + + DEPENDENCY_KEYS = ('extras run_requires test_requires build_requires ' + 'dev_requires provides meta_requires obsoleted_by ' + 'supports_environments') + + SYNTAX_VALIDATORS = { + 'metadata_version': (METADATA_VERSION_MATCHER, ()), + 'name': (NAME_MATCHER, ('legacy',)), + 'version': (VERSION_MATCHER, ('legacy',)), + 'summary': (SUMMARY_MATCHER, ('legacy',)), + } + + __slots__ = ('_legacy', '_data', 'scheme') + + def __init__(self, path=None, fileobj=None, mapping=None, + scheme='default'): + if [path, fileobj, mapping].count(None) < 2: + raise TypeError('path, fileobj and mapping are exclusive') + self._legacy = None + self._data = None + self.scheme = scheme + #import pdb; pdb.set_trace() + if mapping is not None: + try: + self._validate_mapping(mapping, scheme) + self._data = mapping + except MetadataUnrecognizedVersionError: + self._legacy = LegacyMetadata(mapping=mapping, scheme=scheme) + self.validate() + else: + data = None + if path: + with open(path, 'rb') as f: + data = f.read() + elif fileobj: + data = fileobj.read() + if data is None: + # Initialised with no args - to be added + self._data = { + 'metadata_version': self.METADATA_VERSION, + 'generator': self.GENERATOR, + } + else: + if not isinstance(data, text_type): + data = data.decode('utf-8') + try: + self._data = json.loads(data) + self._validate_mapping(self._data, scheme) + except ValueError: + # Note: MetadataUnrecognizedVersionError does not + # inherit from ValueError (it's a DistlibException, + # which should not inherit from ValueError). 
+ # The ValueError comes from the json.load - if that + # succeeds and we get a validation error, we want + # that to propagate + self._legacy = LegacyMetadata(fileobj=StringIO(data), + scheme=scheme) + self.validate() + + common_keys = set(('name', 'version', 'license', 'keywords', 'summary')) + + none_list = (None, list) + none_dict = (None, dict) + + mapped_keys = { + 'run_requires': ('Requires-Dist', list), + 'build_requires': ('Setup-Requires-Dist', list), + 'dev_requires': none_list, + 'test_requires': none_list, + 'meta_requires': none_list, + 'extras': ('Provides-Extra', list), + 'modules': none_list, + 'namespaces': none_list, + 'exports': none_dict, + 'commands': none_dict, + 'classifiers': ('Classifier', list), + 'source_url': ('Download-URL', None), + 'metadata_version': ('Metadata-Version', None), + } + + del none_list, none_dict + + def __getattribute__(self, key): + common = object.__getattribute__(self, 'common_keys') + mapped = object.__getattribute__(self, 'mapped_keys') + if key in mapped: + lk, maker = mapped[key] + if self._legacy: + if lk is None: + result = None if maker is None else maker() + else: + result = self._legacy.get(lk) + else: + value = None if maker is None else maker() + if key not in ('commands', 'exports', 'modules', 'namespaces', + 'classifiers'): + result = self._data.get(key, value) + else: + # special cases for PEP 459 + sentinel = object() + result = sentinel + d = self._data.get('extensions') + if d: + if key == 'commands': + result = d.get('python.commands', value) + elif key == 'classifiers': + d = d.get('python.details') + if d: + result = d.get(key, value) + else: + d = d.get('python.exports') + if not d: + d = self._data.get('python.exports') + if d: + result = d.get(key, value) + if result is sentinel: + result = value + elif key not in common: + result = object.__getattribute__(self, key) + elif self._legacy: + result = self._legacy.get(key) + else: + result = self._data.get(key) + return result + + def _validate_value(self, key, value, scheme=None): + if key in self.SYNTAX_VALIDATORS: + pattern, exclusions = self.SYNTAX_VALIDATORS[key] + if (scheme or self.scheme) not in exclusions: + m = pattern.match(value) + if not m: + raise MetadataInvalidError("'%s' is an invalid value for " + "the '%s' property" % (value, + key)) + + def __setattr__(self, key, value): + self._validate_value(key, value) + common = object.__getattribute__(self, 'common_keys') + mapped = object.__getattribute__(self, 'mapped_keys') + if key in mapped: + lk, _ = mapped[key] + if self._legacy: + if lk is None: + raise NotImplementedError + self._legacy[lk] = value + elif key not in ('commands', 'exports', 'modules', 'namespaces', + 'classifiers'): + self._data[key] = value + else: + # special cases for PEP 459 + d = self._data.setdefault('extensions', {}) + if key == 'commands': + d['python.commands'] = value + elif key == 'classifiers': + d = d.setdefault('python.details', {}) + d[key] = value + else: + d = d.setdefault('python.exports', {}) + d[key] = value + elif key not in common: + object.__setattr__(self, key, value) + else: + if key == 'keywords': + if isinstance(value, string_types): + value = value.strip() + if value: + value = value.split() + else: + value = [] + if self._legacy: + self._legacy[key] = value + else: + self._data[key] = value + + @property + def name_and_version(self): + return _get_name_and_version(self.name, self.version, True) + + @property + def provides(self): + if self._legacy: + result = self._legacy['Provides-Dist'] + else: + result = 
self._data.setdefault('provides', []) + s = '%s (%s)' % (self.name, self.version) + if s not in result: + result.append(s) + return result + + @provides.setter + def provides(self, value): + if self._legacy: + self._legacy['Provides-Dist'] = value + else: + self._data['provides'] = value + + def get_requirements(self, reqts, extras=None, env=None): + """ + Base method to get dependencies, given a set of extras + to satisfy and an optional environment context. + :param reqts: A list of sometimes-wanted dependencies, + perhaps dependent on extras and environment. + :param extras: A list of optional components being requested. + :param env: An optional environment for marker evaluation. + """ + if self._legacy: + result = reqts + else: + result = [] + extras = get_extras(extras or [], self.extras) + for d in reqts: + if 'extra' not in d and 'environment' not in d: + # unconditional + include = True + else: + if 'extra' not in d: + # Not extra-dependent - only environment-dependent + include = True + else: + include = d.get('extra') in extras + if include: + # Not excluded because of extras, check environment + marker = d.get('environment') + if marker: + include = interpret(marker, env) + if include: + result.extend(d['requires']) + for key in ('build', 'dev', 'test'): + e = ':%s:' % key + if e in extras: + extras.remove(e) + # A recursive call, but it should terminate since 'test' + # has been removed from the extras + reqts = self._data.get('%s_requires' % key, []) + result.extend(self.get_requirements(reqts, extras=extras, + env=env)) + return result + + @property + def dictionary(self): + if self._legacy: + return self._from_legacy() + return self._data + + @property + def dependencies(self): + if self._legacy: + raise NotImplementedError + else: + return extract_by_key(self._data, self.DEPENDENCY_KEYS) + + @dependencies.setter + def dependencies(self, value): + if self._legacy: + raise NotImplementedError + else: + self._data.update(value) + + def _validate_mapping(self, mapping, scheme): + if mapping.get('metadata_version') != self.METADATA_VERSION: + raise MetadataUnrecognizedVersionError() + missing = [] + for key, exclusions in self.MANDATORY_KEYS.items(): + if key not in mapping: + if scheme not in exclusions: + missing.append(key) + if missing: + msg = 'Missing metadata items: %s' % ', '.join(missing) + raise MetadataMissingError(msg) + for k, v in mapping.items(): + self._validate_value(k, v, scheme) + + def validate(self): + if self._legacy: + missing, warnings = self._legacy.check(True) + if missing or warnings: + logger.warning('Metadata: missing: %s, warnings: %s', + missing, warnings) + else: + self._validate_mapping(self._data, self.scheme) + + def todict(self): + if self._legacy: + return self._legacy.todict(True) + else: + result = extract_by_key(self._data, self.INDEX_KEYS) + return result + + def _from_legacy(self): + assert self._legacy and not self._data + result = { + 'metadata_version': self.METADATA_VERSION, + 'generator': self.GENERATOR, + } + lmd = self._legacy.todict(True) # skip missing ones + for k in ('name', 'version', 'license', 'summary', 'description', + 'classifier'): + if k in lmd: + if k == 'classifier': + nk = 'classifiers' + else: + nk = k + result[nk] = lmd[k] + kw = lmd.get('Keywords', []) + if kw == ['']: + kw = [] + result['keywords'] = kw + keys = (('requires_dist', 'run_requires'), + ('setup_requires_dist', 'build_requires')) + for ok, nk in keys: + if ok in lmd and lmd[ok]: + result[nk] = [{'requires': lmd[ok]}] + result['provides'] = 
self.provides + author = {} + maintainer = {} + return result + + LEGACY_MAPPING = { + 'name': 'Name', + 'version': 'Version', + ('extensions', 'python.details', 'license'): 'License', + 'summary': 'Summary', + 'description': 'Description', + ('extensions', 'python.project', 'project_urls', 'Home'): 'Home-page', + ('extensions', 'python.project', 'contacts', 0, 'name'): 'Author', + ('extensions', 'python.project', 'contacts', 0, 'email'): 'Author-email', + 'source_url': 'Download-URL', + ('extensions', 'python.details', 'classifiers'): 'Classifier', + } + + def _to_legacy(self): + def process_entries(entries): + reqts = set() + for e in entries: + extra = e.get('extra') + env = e.get('environment') + rlist = e['requires'] + for r in rlist: + if not env and not extra: + reqts.add(r) + else: + marker = '' + if extra: + marker = 'extra == "%s"' % extra + if env: + if marker: + marker = '(%s) and %s' % (env, marker) + else: + marker = env + reqts.add(';'.join((r, marker))) + return reqts + + assert self._data and not self._legacy + result = LegacyMetadata() + nmd = self._data + # import pdb; pdb.set_trace() + for nk, ok in self.LEGACY_MAPPING.items(): + if not isinstance(nk, tuple): + if nk in nmd: + result[ok] = nmd[nk] + else: + d = nmd + found = True + for k in nk: + try: + d = d[k] + except (KeyError, IndexError): + found = False + break + if found: + result[ok] = d + r1 = process_entries(self.run_requires + self.meta_requires) + r2 = process_entries(self.build_requires + self.dev_requires) + if self.extras: + result['Provides-Extra'] = sorted(self.extras) + result['Requires-Dist'] = sorted(r1) + result['Setup-Requires-Dist'] = sorted(r2) + # TODO: any other fields wanted + return result + + def write(self, path=None, fileobj=None, legacy=False, skip_unknown=True): + if [path, fileobj].count(None) != 1: + raise ValueError('Exactly one of path and fileobj is needed') + self.validate() + if legacy: + if self._legacy: + legacy_md = self._legacy + else: + legacy_md = self._to_legacy() + if path: + legacy_md.write(path, skip_unknown=skip_unknown) + else: + legacy_md.write_file(fileobj, skip_unknown=skip_unknown) + else: + if self._legacy: + d = self._from_legacy() + else: + d = self._data + if fileobj: + json.dump(d, fileobj, ensure_ascii=True, indent=2, + sort_keys=True) + else: + with codecs.open(path, 'w', 'utf-8') as f: + json.dump(d, f, ensure_ascii=True, indent=2, + sort_keys=True) + + def add_requirements(self, requirements): + if self._legacy: + self._legacy.add_requirements(requirements) + else: + run_requires = self._data.setdefault('run_requires', []) + always = None + for entry in run_requires: + if 'environment' not in entry and 'extra' not in entry: + always = entry + break + if always is None: + always = { 'requires': requirements } + run_requires.insert(0, always) + else: + rset = set(always['requires']) | set(requirements) + always['requires'] = sorted(rset) + + def __repr__(self): + name = self.name or '(no name)' + version = self.version or 'no version' + return '<%s %s %s (%s)>' % (self.__class__.__name__, + self.metadata_version, name, version) diff --git a/lib/python3.11/site-packages/pip/_vendor/distlib/resources.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/resources.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/distlib/resources.py rename to python/lib/python3.10/site-packages/pip/_vendor/distlib/resources.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/distlib/scripts.py 
b/python/lib/python3.10/site-packages/pip/_vendor/distlib/scripts.py new file mode 100644 index 0000000..913912c --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/distlib/scripts.py @@ -0,0 +1,429 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013-2015 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +from io import BytesIO +import logging +import os +import re +import struct +import sys + +from .compat import sysconfig, detect_encoding, ZipFile +from .resources import finder +from .util import (FileOperator, get_export_entry, convert_path, + get_executable, get_platform, in_venv) + +logger = logging.getLogger(__name__) + +_DEFAULT_MANIFEST = ''' + + + + + + + + + + + + +'''.strip() + +# check if Python is called on the first line with this expression +FIRST_LINE_RE = re.compile(b'^#!.*pythonw?[0-9.]*([ \t].*)?$') +SCRIPT_TEMPLATE = r'''# -*- coding: utf-8 -*- +import re +import sys +from %(module)s import %(import_name)s +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(%(func)s()) +''' + + +def enquote_executable(executable): + if ' ' in executable: + # make sure we quote only the executable in case of env + # for example /usr/bin/env "/dir with spaces/bin/jython" + # instead of "/usr/bin/env /dir with spaces/bin/jython" + # otherwise whole + if executable.startswith('/usr/bin/env '): + env, _executable = executable.split(' ', 1) + if ' ' in _executable and not _executable.startswith('"'): + executable = '%s "%s"' % (env, _executable) + else: + if not executable.startswith('"'): + executable = '"%s"' % executable + return executable + +# Keep the old name around (for now), as there is at least one project using it! +_enquote_executable = enquote_executable + +class ScriptMaker(object): + """ + A class to copy or create scripts from source scripts or callable + specifications. + """ + script_template = SCRIPT_TEMPLATE + + executable = None # for shebangs + + def __init__(self, source_dir, target_dir, add_launchers=True, + dry_run=False, fileop=None): + self.source_dir = source_dir + self.target_dir = target_dir + self.add_launchers = add_launchers + self.force = False + self.clobber = False + # It only makes sense to set mode bits on POSIX. + self.set_mode = (os.name == 'posix') or (os.name == 'java' and + os._name == 'posix') + self.variants = set(('', 'X.Y')) + self._fileop = fileop or FileOperator(dry_run) + + self._is_nt = os.name == 'nt' or ( + os.name == 'java' and os._name == 'nt') + self.version_info = sys.version_info + + def _get_alternate_executable(self, executable, options): + if options.get('gui', False) and self._is_nt: # pragma: no cover + dn, fn = os.path.split(executable) + fn = fn.replace('python', 'pythonw') + executable = os.path.join(dn, fn) + return executable + + if sys.platform.startswith('java'): # pragma: no cover + def _is_shell(self, executable): + """ + Determine if the specified executable is a script + (contains a #! line) + """ + try: + with open(executable) as fp: + return fp.read(2) == '#!' + except (OSError, IOError): + logger.warning('Failed to open %s', executable) + return False + + def _fix_jython_executable(self, executable): + if self._is_shell(executable): + # Workaround for Jython is not needed on Linux systems. 
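+            # (This branch runs only under Jython, where the java module is
+            # provided by the runtime; the OS name is read via
+            # java.lang.System.getProperty('os.name').)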
+ import java + + if java.lang.System.getProperty('os.name') == 'Linux': + return executable + elif executable.lower().endswith('jython.exe'): + # Use wrapper exe for Jython on Windows + return executable + return '/usr/bin/env %s' % executable + + def _build_shebang(self, executable, post_interp): + """ + Build a shebang line. In the simple case (on Windows, or a shebang line + which is not too long or contains spaces) use a simple formulation for + the shebang. Otherwise, use /bin/sh as the executable, with a contrived + shebang which allows the script to run either under Python or sh, using + suitable quoting. Thanks to Harald Nordgren for his input. + + See also: http://www.in-ulm.de/~mascheck/various/shebang/#length + https://hg.mozilla.org/mozilla-central/file/tip/mach + """ + if os.name != 'posix': + simple_shebang = True + else: + # Add 3 for '#!' prefix and newline suffix. + shebang_length = len(executable) + len(post_interp) + 3 + if sys.platform == 'darwin': + max_shebang_length = 512 + else: + max_shebang_length = 127 + simple_shebang = ((b' ' not in executable) and + (shebang_length <= max_shebang_length)) + + if simple_shebang: + result = b'#!' + executable + post_interp + b'\n' + else: + result = b'#!/bin/sh\n' + result += b"'''exec' " + executable + post_interp + b' "$0" "$@"\n' + result += b"' '''" + return result + + def _get_shebang(self, encoding, post_interp=b'', options=None): + enquote = True + if self.executable: + executable = self.executable + enquote = False # assume this will be taken care of + elif not sysconfig.is_python_build(): + executable = get_executable() + elif in_venv(): # pragma: no cover + executable = os.path.join(sysconfig.get_path('scripts'), + 'python%s' % sysconfig.get_config_var('EXE')) + else: # pragma: no cover + executable = os.path.join( + sysconfig.get_config_var('BINDIR'), + 'python%s%s' % (sysconfig.get_config_var('VERSION'), + sysconfig.get_config_var('EXE'))) + if not os.path.isfile(executable): + # for Python builds from source on Windows, no Python executables with + # a version suffix are created, so we use python.exe + executable = os.path.join(sysconfig.get_config_var('BINDIR'), + 'python%s' % (sysconfig.get_config_var('EXE'))) + if options: + executable = self._get_alternate_executable(executable, options) + + if sys.platform.startswith('java'): # pragma: no cover + executable = self._fix_jython_executable(executable) + + # Normalise case for Windows - COMMENTED OUT + # executable = os.path.normcase(executable) + # N.B. The normalising operation above has been commented out: See + # issue #124. Although paths in Windows are generally case-insensitive, + # they aren't always. For example, a path containing a ẞ (which is a + # LATIN CAPITAL LETTER SHARP S - U+1E9E) is normcased to ß (which is a + # LATIN SMALL LETTER SHARP S' - U+00DF). The two are not considered by + # Windows as equivalent in path names. + + # If the user didn't specify an executable, it may be necessary to + # cater for executable paths with spaces (not uncommon on Windows) + if enquote: + executable = enquote_executable(executable) + # Issue #51: don't use fsencode, since we later try to + # check that the shebang is decodable using utf-8. 
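+        # (Illustrative note: os.fsencode() uses the surrogateescape error
+        # handler, so undecodable filesystem bytes could produce a shebang
+        # that fails the UTF-8 round-trip checks below.)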
+ executable = executable.encode('utf-8') + # in case of IronPython, play safe and enable frames support + if (sys.platform == 'cli' and '-X:Frames' not in post_interp + and '-X:FullFrames' not in post_interp): # pragma: no cover + post_interp += b' -X:Frames' + shebang = self._build_shebang(executable, post_interp) + # Python parser starts to read a script using UTF-8 until + # it gets a #coding:xxx cookie. The shebang has to be the + # first line of a file, the #coding:xxx cookie cannot be + # written before. So the shebang has to be decodable from + # UTF-8. + try: + shebang.decode('utf-8') + except UnicodeDecodeError: # pragma: no cover + raise ValueError( + 'The shebang (%r) is not decodable from utf-8' % shebang) + # If the script is encoded to a custom encoding (use a + # #coding:xxx cookie), the shebang has to be decodable from + # the script encoding too. + if encoding != 'utf-8': + try: + shebang.decode(encoding) + except UnicodeDecodeError: # pragma: no cover + raise ValueError( + 'The shebang (%r) is not decodable ' + 'from the script encoding (%r)' % (shebang, encoding)) + return shebang + + def _get_script_text(self, entry): + return self.script_template % dict(module=entry.prefix, + import_name=entry.suffix.split('.')[0], + func=entry.suffix) + + manifest = _DEFAULT_MANIFEST + + def get_manifest(self, exename): + base = os.path.basename(exename) + return self.manifest % base + + def _write_script(self, names, shebang, script_bytes, filenames, ext): + use_launcher = self.add_launchers and self._is_nt + linesep = os.linesep.encode('utf-8') + if not shebang.endswith(linesep): + shebang += linesep + if not use_launcher: + script_bytes = shebang + script_bytes + else: # pragma: no cover + if ext == 'py': + launcher = self._get_launcher('t') + else: + launcher = self._get_launcher('w') + stream = BytesIO() + with ZipFile(stream, 'w') as zf: + zf.writestr('__main__.py', script_bytes) + zip_data = stream.getvalue() + script_bytes = launcher + shebang + zip_data + for name in names: + outname = os.path.join(self.target_dir, name) + if use_launcher: # pragma: no cover + n, e = os.path.splitext(outname) + if e.startswith('.py'): + outname = n + outname = '%s.exe' % outname + try: + self._fileop.write_binary_file(outname, script_bytes) + except Exception: + # Failed writing an executable - it might be in use. + logger.warning('Failed to write executable - trying to ' + 'use .deleteme logic') + dfname = '%s.deleteme' % outname + if os.path.exists(dfname): + os.remove(dfname) # Not allowed to fail here + os.rename(outname, dfname) # nor here + self._fileop.write_binary_file(outname, script_bytes) + logger.debug('Able to replace executable using ' + '.deleteme logic') + try: + os.remove(dfname) + except Exception: + pass # still in use - ignore error + else: + if self._is_nt and not outname.endswith('.' 
+ ext): # pragma: no cover + outname = '%s.%s' % (outname, ext) + if os.path.exists(outname) and not self.clobber: + logger.warning('Skipping existing file %s', outname) + continue + self._fileop.write_binary_file(outname, script_bytes) + if self.set_mode: + self._fileop.set_executable_mode([outname]) + filenames.append(outname) + + variant_separator = '-' + + def get_script_filenames(self, name): + result = set() + if '' in self.variants: + result.add(name) + if 'X' in self.variants: + result.add('%s%s' % (name, self.version_info[0])) + if 'X.Y' in self.variants: + result.add('%s%s%s.%s' % (name, self.variant_separator, + self.version_info[0], self.version_info[1])) + return result + + def _make_script(self, entry, filenames, options=None): + post_interp = b'' + if options: + args = options.get('interpreter_args', []) + if args: + args = ' %s' % ' '.join(args) + post_interp = args.encode('utf-8') + shebang = self._get_shebang('utf-8', post_interp, options=options) + script = self._get_script_text(entry).encode('utf-8') + scriptnames = self.get_script_filenames(entry.name) + if options and options.get('gui', False): + ext = 'pyw' + else: + ext = 'py' + self._write_script(scriptnames, shebang, script, filenames, ext) + + def _copy_script(self, script, filenames): + adjust = False + script = os.path.join(self.source_dir, convert_path(script)) + outname = os.path.join(self.target_dir, os.path.basename(script)) + if not self.force and not self._fileop.newer(script, outname): + logger.debug('not copying %s (up-to-date)', script) + return + + # Always open the file, but ignore failures in dry-run mode -- + # that way, we'll get accurate feedback if we can read the + # script. + try: + f = open(script, 'rb') + except IOError: # pragma: no cover + if not self.dry_run: + raise + f = None + else: + first_line = f.readline() + if not first_line: # pragma: no cover + logger.warning('%s is an empty file (skipping)', script) + return + + match = FIRST_LINE_RE.match(first_line.replace(b'\r\n', b'\n')) + if match: + adjust = True + post_interp = match.group(1) or b'' + + if not adjust: + if f: + f.close() + self._fileop.copy_file(script, outname) + if self.set_mode: + self._fileop.set_executable_mode([outname]) + filenames.append(outname) + else: + logger.info('copying and adjusting %s -> %s', script, + self.target_dir) + if not self._fileop.dry_run: + encoding, lines = detect_encoding(f.readline) + f.seek(0) + shebang = self._get_shebang(encoding, post_interp) + if b'pythonw' in first_line: # pragma: no cover + ext = 'pyw' + else: + ext = 'py' + n = os.path.basename(outname) + self._write_script([n], shebang, f.read(), filenames, ext) + if f: + f.close() + + @property + def dry_run(self): + return self._fileop.dry_run + + @dry_run.setter + def dry_run(self, value): + self._fileop.dry_run = value + + if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'): # pragma: no cover + # Executable launcher support. 
+ # Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/ + + def _get_launcher(self, kind): + if struct.calcsize('P') == 8: # 64-bit + bits = '64' + else: + bits = '32' + platform_suffix = '-arm' if get_platform() == 'win-arm64' else '' + name = '%s%s%s.exe' % (kind, bits, platform_suffix) + # Issue 31: don't hardcode an absolute package name, but + # determine it relative to the current package + distlib_package = __name__.rsplit('.', 1)[0] + resource = finder(distlib_package).find(name) + if not resource: + msg = ('Unable to find resource %s in package %s' % (name, + distlib_package)) + raise ValueError(msg) + return resource.bytes + + # Public API follows + + def make(self, specification, options=None): + """ + Make a script. + + :param specification: The specification, which is either a valid export + entry specification (to make a script from a + callable) or a filename (to make a script by + copying from a source location). + :param options: A dictionary of options controlling script generation. + :return: A list of all absolute pathnames written to. + """ + filenames = [] + entry = get_export_entry(specification) + if entry is None: + self._copy_script(specification, filenames) + else: + self._make_script(entry, filenames, options=options) + return filenames + + def make_multiple(self, specifications, options=None): + """ + Take a list of specifications and make scripts from them, + :param specifications: A list of specifications. + :return: A list of all absolute pathnames written to, + """ + filenames = [] + for specification in specifications: + filenames.extend(self.make(specification, options)) + return filenames diff --git a/lib/python3.11/site-packages/pip/_vendor/distlib/util.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/util.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/distlib/util.py rename to python/lib/python3.10/site-packages/pip/_vendor/distlib/util.py diff --git a/lib/python3.11/site-packages/pip/_vendor/distlib/version.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/version.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/distlib/version.py rename to python/lib/python3.10/site-packages/pip/_vendor/distlib/version.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/distlib/wheel.py b/python/lib/python3.10/site-packages/pip/_vendor/distlib/wheel.py new file mode 100644 index 0000000..48abfde --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/distlib/wheel.py @@ -0,0 +1,1053 @@ +# -*- coding: utf-8 -*- +# +# Copyright (C) 2013-2020 Vinay Sajip. +# Licensed to the Python Software Foundation under a contributor agreement. +# See LICENSE.txt and CONTRIBUTORS.txt. +# +from __future__ import unicode_literals + +import base64 +import codecs +import datetime +from email import message_from_file +import hashlib +import imp +import json +import logging +import os +import posixpath +import re +import shutil +import sys +import tempfile +import zipfile + +from . 
import __version__, DistlibException +from .compat import sysconfig, ZipFile, fsdecode, text_type, filter +from .database import InstalledDistribution +from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME, + LEGACY_METADATA_FILENAME) +from .util import (FileOperator, convert_path, CSVReader, CSVWriter, Cache, + cached_property, get_cache_base, read_exports, tempdir, + get_platform) +from .version import NormalizedVersion, UnsupportedVersionError + +logger = logging.getLogger(__name__) + +cache = None # created when needed + +if hasattr(sys, 'pypy_version_info'): # pragma: no cover + IMP_PREFIX = 'pp' +elif sys.platform.startswith('java'): # pragma: no cover + IMP_PREFIX = 'jy' +elif sys.platform == 'cli': # pragma: no cover + IMP_PREFIX = 'ip' +else: + IMP_PREFIX = 'cp' + +VER_SUFFIX = sysconfig.get_config_var('py_version_nodot') +if not VER_SUFFIX: # pragma: no cover + VER_SUFFIX = '%s%s' % sys.version_info[:2] +PYVER = 'py' + VER_SUFFIX +IMPVER = IMP_PREFIX + VER_SUFFIX + +ARCH = get_platform().replace('-', '_').replace('.', '_') + +ABI = sysconfig.get_config_var('SOABI') +if ABI and ABI.startswith('cpython-'): + ABI = ABI.replace('cpython-', 'cp').split('-')[0] +else: + def _derive_abi(): + parts = ['cp', VER_SUFFIX] + if sysconfig.get_config_var('Py_DEBUG'): + parts.append('d') + if sysconfig.get_config_var('WITH_PYMALLOC'): + parts.append('m') + if sysconfig.get_config_var('Py_UNICODE_SIZE') == 4: + parts.append('u') + return ''.join(parts) + ABI = _derive_abi() + del _derive_abi + +FILENAME_RE = re.compile(r''' +(?P[^-]+) +-(?P\d+[^-]*) +(-(?P\d+[^-]*))? +-(?P\w+\d+(\.\w+\d+)*) +-(?P\w+) +-(?P\w+(\.\w+)*) +\.whl$ +''', re.IGNORECASE | re.VERBOSE) + +NAME_VERSION_RE = re.compile(r''' +(?P[^-]+) +-(?P\d+[^-]*) +(-(?P\d+[^-]*))?$ +''', re.IGNORECASE | re.VERBOSE) + +SHEBANG_RE = re.compile(br'\s*#![^\r\n]*') +SHEBANG_DETAIL_RE = re.compile(br'^(\s*#!("[^"]+"|\S+))\s+(.*)$') +SHEBANG_PYTHON = b'#!python' +SHEBANG_PYTHONW = b'#!pythonw' + +if os.sep == '/': + to_posix = lambda o: o +else: + to_posix = lambda o: o.replace(os.sep, '/') + + +class Mounter(object): + def __init__(self): + self.impure_wheels = {} + self.libs = {} + + def add(self, pathname, extensions): + self.impure_wheels[pathname] = extensions + self.libs.update(extensions) + + def remove(self, pathname): + extensions = self.impure_wheels.pop(pathname) + for k, v in extensions: + if k in self.libs: + del self.libs[k] + + def find_module(self, fullname, path=None): + if fullname in self.libs: + result = self + else: + result = None + return result + + def load_module(self, fullname): + if fullname in sys.modules: + result = sys.modules[fullname] + else: + if fullname not in self.libs: + raise ImportError('unable to find extension for %s' % fullname) + result = imp.load_dynamic(fullname, self.libs[fullname]) + result.__loader__ = self + parts = fullname.rsplit('.', 1) + if len(parts) > 1: + result.__package__ = parts[0] + return result + +_hook = Mounter() + + +class Wheel(object): + """ + Class to build and install from Wheel files (PEP 427). + """ + + wheel_version = (1, 1) + hash_kind = 'sha256' + + def __init__(self, filename=None, sign=False, verify=False): + """ + Initialise an instance using a (valid) filename. 
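+
+        Illustrative: a fully-tagged filename populates the tag lists::
+
+            w = Wheel('foo-1.0-py3-none-any.whl')
+            # w.name == 'foo', w.version == '1.0'
+            # w.pyver == ['py3'], w.abi == ['none'], w.arch == ['any']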
+ """ + self.sign = sign + self.should_verify = verify + self.buildver = '' + self.pyver = [PYVER] + self.abi = ['none'] + self.arch = ['any'] + self.dirname = os.getcwd() + if filename is None: + self.name = 'dummy' + self.version = '0.1' + self._filename = self.filename + else: + m = NAME_VERSION_RE.match(filename) + if m: + info = m.groupdict('') + self.name = info['nm'] + # Reinstate the local version separator + self.version = info['vn'].replace('_', '-') + self.buildver = info['bn'] + self._filename = self.filename + else: + dirname, filename = os.path.split(filename) + m = FILENAME_RE.match(filename) + if not m: + raise DistlibException('Invalid name or ' + 'filename: %r' % filename) + if dirname: + self.dirname = os.path.abspath(dirname) + self._filename = filename + info = m.groupdict('') + self.name = info['nm'] + self.version = info['vn'] + self.buildver = info['bn'] + self.pyver = info['py'].split('.') + self.abi = info['bi'].split('.') + self.arch = info['ar'].split('.') + + @property + def filename(self): + """ + Build and return a filename from the various components. + """ + if self.buildver: + buildver = '-' + self.buildver + else: + buildver = '' + pyver = '.'.join(self.pyver) + abi = '.'.join(self.abi) + arch = '.'.join(self.arch) + # replace - with _ as a local version separator + version = self.version.replace('-', '_') + return '%s-%s%s-%s-%s-%s.whl' % (self.name, version, buildver, + pyver, abi, arch) + + @property + def exists(self): + path = os.path.join(self.dirname, self.filename) + return os.path.isfile(path) + + @property + def tags(self): + for pyver in self.pyver: + for abi in self.abi: + for arch in self.arch: + yield pyver, abi, arch + + @cached_property + def metadata(self): + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + wrapper = codecs.getreader('utf-8') + with ZipFile(pathname, 'r') as zf: + wheel_metadata = self.get_wheel_metadata(zf) + wv = wheel_metadata['Wheel-Version'].split('.', 1) + file_version = tuple([int(i) for i in wv]) + # if file_version < (1, 1): + # fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME, + # LEGACY_METADATA_FILENAME] + # else: + # fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME] + fns = [WHEEL_METADATA_FILENAME, LEGACY_METADATA_FILENAME] + result = None + for fn in fns: + try: + metadata_filename = posixpath.join(info_dir, fn) + with zf.open(metadata_filename) as bf: + wf = wrapper(bf) + result = Metadata(fileobj=wf) + if result: + break + except KeyError: + pass + if not result: + raise ValueError('Invalid wheel, because metadata is ' + 'missing: looked in %s' % ', '.join(fns)) + return result + + def get_wheel_metadata(self, zf): + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + metadata_filename = posixpath.join(info_dir, 'WHEEL') + with zf.open(metadata_filename) as bf: + wf = codecs.getreader('utf-8')(bf) + message = message_from_file(wf) + return dict(message) + + @cached_property + def info(self): + pathname = os.path.join(self.dirname, self.filename) + with ZipFile(pathname, 'r') as zf: + result = self.get_wheel_metadata(zf) + return result + + def process_shebang(self, data): + m = SHEBANG_RE.match(data) + if m: + end = m.end() + shebang, data_after_shebang = data[:end], data[end:] + # Preserve any arguments after the interpreter + if b'pythonw' in shebang.lower(): + shebang_python = SHEBANG_PYTHONW + else: + shebang_python = SHEBANG_PYTHON + m = SHEBANG_DETAIL_RE.match(shebang) 
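+            # (Illustrative: for data beginning b'#!/usr/bin/python -O\n',
+            # the regex keeps the interpreter arguments, so the line is
+            # rewritten to b'#!python -O' rather than plain b'#!python'.)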
+ if m: + args = b' ' + m.groups()[-1] + else: + args = b'' + shebang = shebang_python + args + data = shebang + data_after_shebang + else: + cr = data.find(b'\r') + lf = data.find(b'\n') + if cr < 0 or cr > lf: + term = b'\n' + else: + if data[cr:cr + 2] == b'\r\n': + term = b'\r\n' + else: + term = b'\r' + data = SHEBANG_PYTHON + term + data + return data + + def get_hash(self, data, hash_kind=None): + if hash_kind is None: + hash_kind = self.hash_kind + try: + hasher = getattr(hashlib, hash_kind) + except AttributeError: + raise DistlibException('Unsupported hash algorithm: %r' % hash_kind) + result = hasher(data).digest() + result = base64.urlsafe_b64encode(result).rstrip(b'=').decode('ascii') + return hash_kind, result + + def write_record(self, records, record_path, base): + records = list(records) # make a copy, as mutated + p = to_posix(os.path.relpath(record_path, base)) + records.append((p, '', '')) + with CSVWriter(record_path) as writer: + for row in records: + writer.writerow(row) + + def write_records(self, info, libdir, archive_paths): + records = [] + distinfo, info_dir = info + hasher = getattr(hashlib, self.hash_kind) + for ap, p in archive_paths: + with open(p, 'rb') as f: + data = f.read() + digest = '%s=%s' % self.get_hash(data) + size = os.path.getsize(p) + records.append((ap, digest, size)) + + p = os.path.join(distinfo, 'RECORD') + self.write_record(records, p, libdir) + ap = to_posix(os.path.join(info_dir, 'RECORD')) + archive_paths.append((ap, p)) + + def build_zip(self, pathname, archive_paths): + with ZipFile(pathname, 'w', zipfile.ZIP_DEFLATED) as zf: + for ap, p in archive_paths: + logger.debug('Wrote %s to %s in wheel', p, ap) + zf.write(p, ap) + + def build(self, paths, tags=None, wheel_version=None): + """ + Build a wheel from files in specified paths, and use any specified tags + when determining the name of the wheel. + """ + if tags is None: + tags = {} + + libkey = list(filter(lambda o: o in paths, ('purelib', 'platlib')))[0] + if libkey == 'platlib': + is_pure = 'false' + default_pyver = [IMPVER] + default_abi = [ABI] + default_arch = [ARCH] + else: + is_pure = 'true' + default_pyver = [PYVER] + default_abi = ['none'] + default_arch = ['any'] + + self.pyver = tags.get('pyver', default_pyver) + self.abi = tags.get('abi', default_abi) + self.arch = tags.get('arch', default_arch) + + libdir = paths[libkey] + + name_ver = '%s-%s' % (self.name, self.version) + data_dir = '%s.data' % name_ver + info_dir = '%s.dist-info' % name_ver + + archive_paths = [] + + # First, stuff which is not in site-packages + for key in ('data', 'headers', 'scripts'): + if key not in paths: + continue + path = paths[key] + if os.path.isdir(path): + for root, dirs, files in os.walk(path): + for fn in files: + p = fsdecode(os.path.join(root, fn)) + rp = os.path.relpath(p, path) + ap = to_posix(os.path.join(data_dir, key, rp)) + archive_paths.append((ap, p)) + if key == 'scripts' and not p.endswith('.exe'): + with open(p, 'rb') as f: + data = f.read() + data = self.process_shebang(data) + with open(p, 'wb') as f: + f.write(data) + + # Now, stuff which is in site-packages, other than the + # distinfo stuff. 
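+        # For orientation, a hedged usage sketch of this method (comment
+        # only; the staging directories are assumptions, not fixed values):
+        #     Wheel('mypkg-0.1').build({'purelib': 'build/lib',
+        #                               'scripts': 'build/scripts'})
+        # 'purelib' (or 'platlib') names the library root archived below;
+        # the other path keys are optional.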
+        path = libdir
+        distinfo = None
+        for root, dirs, files in os.walk(path):
+            if root == path:
+                # At the top level only, save distinfo for later
+                # and skip it for now
+                for i, dn in enumerate(dirs):
+                    dn = fsdecode(dn)
+                    if dn.endswith('.dist-info'):
+                        distinfo = os.path.join(root, dn)
+                        del dirs[i]
+                        break
+                assert distinfo, '.dist-info directory expected, not found'
+
+            for fn in files:
+                # comment out next suite to leave .pyc files in
+                if fsdecode(fn).endswith(('.pyc', '.pyo')):
+                    continue
+                p = os.path.join(root, fn)
+                rp = to_posix(os.path.relpath(p, path))
+                archive_paths.append((rp, p))
+
+        # Now distinfo. Assumed to be flat, i.e. os.listdir is enough.
+        files = os.listdir(distinfo)
+        for fn in files:
+            if fn not in ('RECORD', 'INSTALLER', 'SHARED', 'WHEEL'):
+                p = fsdecode(os.path.join(distinfo, fn))
+                ap = to_posix(os.path.join(info_dir, fn))
+                archive_paths.append((ap, p))
+
+        wheel_metadata = [
+            'Wheel-Version: %d.%d' % (wheel_version or self.wheel_version),
+            'Generator: distlib %s' % __version__,
+            'Root-Is-Purelib: %s' % is_pure,
+        ]
+        for pyver, abi, arch in self.tags:
+            wheel_metadata.append('Tag: %s-%s-%s' % (pyver, abi, arch))
+        p = os.path.join(distinfo, 'WHEEL')
+        with open(p, 'w') as f:
+            f.write('\n'.join(wheel_metadata))
+        ap = to_posix(os.path.join(info_dir, 'WHEEL'))
+        archive_paths.append((ap, p))
+
+        # sort the entries by archive path. Not needed by any spec, but it
+        # keeps the archive listing and RECORD tidier than they would
+        # otherwise be. Use the number of path segments to keep directory
+        # entries together, and keep the dist-info stuff at the end.
+        def sorter(t):
+            ap = t[0]
+            n = ap.count('/')
+            if '.dist-info' in ap:
+                n += 10000
+            return (n, ap)
+        archive_paths = sorted(archive_paths, key=sorter)
+
+        # Now, at last, RECORD.
+        # Paths in here are archive paths - nothing else makes sense.
+        self.write_records((distinfo, info_dir), libdir, archive_paths)
+        # Now, ready to build the zip file
+        pathname = os.path.join(self.dirname, self.filename)
+        self.build_zip(pathname, archive_paths)
+        return pathname
+
+    def skip_entry(self, arcname):
+        """
+        Determine whether an archive entry should be skipped when verifying
+        or installing.
+        """
+        # The signature file won't be in RECORD, and we don't currently do
+        # anything with it. We also skip directories, as they won't be in
+        # RECORD either. See:
+        #
+        # https://github.com/pypa/wheel/issues/294
+        # https://github.com/pypa/wheel/issues/287
+        # https://github.com/pypa/wheel/pull/289
+        #
+        return arcname.endswith(('/', '/RECORD.jws'))
+
+    def install(self, paths, maker, **kwargs):
+        """
+        Install a wheel to the specified paths. If kwarg ``warner`` is
+        specified, it should be a callable, which will be called with two
+        tuples indicating the wheel version of this software and the wheel
+        version in the file, if there is a discrepancy in the versions.
+        This can be used to issue any warnings or to raise any exceptions.
+        If kwarg ``lib_only`` is True, only the purelib/platlib files are
+        installed, and the headers, scripts, data and dist-info metadata are
+        not written. If kwarg ``bytecode_hashed_invalidation`` is True,
+        written bytecode will try to use file-hash based invalidation
+        (PEP-552) on supported interpreter versions (CPython 3.7+).
+
+        The return value is an :class:`InstalledDistribution` instance unless
+        ``lib_only`` is True, in which case the return value is ``None``.
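+
+        A minimal usage sketch (illustrative only; the path values and the
+        ``ScriptMaker`` arguments are assumptions, not fixed requirements)::
+
+            from distlib.scripts import ScriptMaker
+
+            wheel = Wheel('dist/mypkg-0.1-py3-none-any.whl')
+            dist = wheel.install({'purelib': '/tmp/lib', 'platlib': '/tmp/lib',
+                                  'scripts': '/tmp/bin', 'headers': '/tmp/hdr',
+                                  'data': '/tmp/data', 'prefix': '/tmp'},
+                                 ScriptMaker(None, None))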
+ """ + + dry_run = maker.dry_run + warner = kwargs.get('warner') + lib_only = kwargs.get('lib_only', False) + bc_hashed_invalidation = kwargs.get('bytecode_hashed_invalidation', False) + + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + data_dir = '%s.data' % name_ver + info_dir = '%s.dist-info' % name_ver + + metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME) + wheel_metadata_name = posixpath.join(info_dir, 'WHEEL') + record_name = posixpath.join(info_dir, 'RECORD') + + wrapper = codecs.getreader('utf-8') + + with ZipFile(pathname, 'r') as zf: + with zf.open(wheel_metadata_name) as bwf: + wf = wrapper(bwf) + message = message_from_file(wf) + wv = message['Wheel-Version'].split('.', 1) + file_version = tuple([int(i) for i in wv]) + if (file_version != self.wheel_version) and warner: + warner(self.wheel_version, file_version) + + if message['Root-Is-Purelib'] == 'true': + libdir = paths['purelib'] + else: + libdir = paths['platlib'] + + records = {} + with zf.open(record_name) as bf: + with CSVReader(stream=bf) as reader: + for row in reader: + p = row[0] + records[p] = row + + data_pfx = posixpath.join(data_dir, '') + info_pfx = posixpath.join(info_dir, '') + script_pfx = posixpath.join(data_dir, 'scripts', '') + + # make a new instance rather than a copy of maker's, + # as we mutate it + fileop = FileOperator(dry_run=dry_run) + fileop.record = True # so we can rollback if needed + + bc = not sys.dont_write_bytecode # Double negatives. Lovely! + + outfiles = [] # for RECORD writing + + # for script copying/shebang processing + workdir = tempfile.mkdtemp() + # set target dir later + # we default add_launchers to False, as the + # Python Launcher should be used instead + maker.source_dir = workdir + maker.target_dir = None + try: + for zinfo in zf.infolist(): + arcname = zinfo.filename + if isinstance(arcname, text_type): + u_arcname = arcname + else: + u_arcname = arcname.decode('utf-8') + if self.skip_entry(u_arcname): + continue + row = records[u_arcname] + if row[2] and str(zinfo.file_size) != row[2]: + raise DistlibException('size mismatch for ' + '%s' % u_arcname) + if row[1]: + kind, value = row[1].split('=', 1) + with zf.open(arcname) as bf: + data = bf.read() + _, digest = self.get_hash(data, kind) + if digest != value: + raise DistlibException('digest mismatch for ' + '%s' % arcname) + + if lib_only and u_arcname.startswith((info_pfx, data_pfx)): + logger.debug('lib_only: skipping %s', u_arcname) + continue + is_script = (u_arcname.startswith(script_pfx) + and not u_arcname.endswith('.exe')) + + if u_arcname.startswith(data_pfx): + _, where, rp = u_arcname.split('/', 2) + outfile = os.path.join(paths[where], convert_path(rp)) + else: + # meant for site-packages. + if u_arcname in (wheel_metadata_name, record_name): + continue + outfile = os.path.join(libdir, convert_path(u_arcname)) + if not is_script: + with zf.open(arcname) as bf: + fileop.copy_stream(bf, outfile) + # Issue #147: permission bits aren't preserved. Using + # zf.extract(zinfo, libdir) should have worked, but didn't, + # see https://www.thetopsites.net/article/53834422.shtml + # So ... 
manually preserve permission bits as given in zinfo + if os.name == 'posix': + # just set the normal permission bits + os.chmod(outfile, (zinfo.external_attr >> 16) & 0x1FF) + outfiles.append(outfile) + # Double check the digest of the written file + if not dry_run and row[1]: + with open(outfile, 'rb') as bf: + data = bf.read() + _, newdigest = self.get_hash(data, kind) + if newdigest != digest: + raise DistlibException('digest mismatch ' + 'on write for ' + '%s' % outfile) + if bc and outfile.endswith('.py'): + try: + pyc = fileop.byte_compile(outfile, + hashed_invalidation=bc_hashed_invalidation) + outfiles.append(pyc) + except Exception: + # Don't give up if byte-compilation fails, + # but log it and perhaps warn the user + logger.warning('Byte-compilation failed', + exc_info=True) + else: + fn = os.path.basename(convert_path(arcname)) + workname = os.path.join(workdir, fn) + with zf.open(arcname) as bf: + fileop.copy_stream(bf, workname) + + dn, fn = os.path.split(outfile) + maker.target_dir = dn + filenames = maker.make(fn) + fileop.set_executable_mode(filenames) + outfiles.extend(filenames) + + if lib_only: + logger.debug('lib_only: returning None') + dist = None + else: + # Generate scripts + + # Try to get pydist.json so we can see if there are + # any commands to generate. If this fails (e.g. because + # of a legacy wheel), log a warning but don't give up. + commands = None + file_version = self.info['Wheel-Version'] + if file_version == '1.0': + # Use legacy info + ep = posixpath.join(info_dir, 'entry_points.txt') + try: + with zf.open(ep) as bwf: + epdata = read_exports(bwf) + commands = {} + for key in ('console', 'gui'): + k = '%s_scripts' % key + if k in epdata: + commands['wrap_%s' % key] = d = {} + for v in epdata[k].values(): + s = '%s:%s' % (v.prefix, v.suffix) + if v.flags: + s += ' [%s]' % ','.join(v.flags) + d[v.name] = s + except Exception: + logger.warning('Unable to read legacy script ' + 'metadata, so cannot generate ' + 'scripts') + else: + try: + with zf.open(metadata_name) as bwf: + wf = wrapper(bwf) + commands = json.load(wf).get('extensions') + if commands: + commands = commands.get('python.commands') + except Exception: + logger.warning('Unable to read JSON metadata, so ' + 'cannot generate scripts') + if commands: + console_scripts = commands.get('wrap_console', {}) + gui_scripts = commands.get('wrap_gui', {}) + if console_scripts or gui_scripts: + script_dir = paths.get('scripts', '') + if not os.path.isdir(script_dir): + raise ValueError('Valid script path not ' + 'specified') + maker.target_dir = script_dir + for k, v in console_scripts.items(): + script = '%s = %s' % (k, v) + filenames = maker.make(script) + fileop.set_executable_mode(filenames) + + if gui_scripts: + options = {'gui': True } + for k, v in gui_scripts.items(): + script = '%s = %s' % (k, v) + filenames = maker.make(script, options) + fileop.set_executable_mode(filenames) + + p = os.path.join(libdir, info_dir) + dist = InstalledDistribution(p) + + # Write SHARED + paths = dict(paths) # don't change passed in dict + del paths['purelib'] + del paths['platlib'] + paths['lib'] = libdir + p = dist.write_shared_locations(paths, dry_run) + if p: + outfiles.append(p) + + # Write RECORD + dist.write_installed_files(outfiles, paths['prefix'], + dry_run) + return dist + except Exception: # pragma: no cover + logger.exception('installation failed.') + fileop.rollback() + raise + finally: + shutil.rmtree(workdir) + + def _get_dylib_cache(self): + global cache + if cache is None: + # Use native string 
to avoid issues on 2.x: see Python #20140. + base = os.path.join(get_cache_base(), str('dylib-cache'), + '%s.%s' % sys.version_info[:2]) + cache = Cache(base) + return cache + + def _get_extensions(self): + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + arcname = posixpath.join(info_dir, 'EXTENSIONS') + wrapper = codecs.getreader('utf-8') + result = [] + with ZipFile(pathname, 'r') as zf: + try: + with zf.open(arcname) as bf: + wf = wrapper(bf) + extensions = json.load(wf) + cache = self._get_dylib_cache() + prefix = cache.prefix_to_dir(pathname) + cache_base = os.path.join(cache.base, prefix) + if not os.path.isdir(cache_base): + os.makedirs(cache_base) + for name, relpath in extensions.items(): + dest = os.path.join(cache_base, convert_path(relpath)) + if not os.path.exists(dest): + extract = True + else: + file_time = os.stat(dest).st_mtime + file_time = datetime.datetime.fromtimestamp(file_time) + info = zf.getinfo(relpath) + wheel_time = datetime.datetime(*info.date_time) + extract = wheel_time > file_time + if extract: + zf.extract(relpath, cache_base) + result.append((name, dest)) + except KeyError: + pass + return result + + def is_compatible(self): + """ + Determine if a wheel is compatible with the running system. + """ + return is_compatible(self) + + def is_mountable(self): + """ + Determine if a wheel is asserted as mountable by its metadata. + """ + return True # for now - metadata details TBD + + def mount(self, append=False): + pathname = os.path.abspath(os.path.join(self.dirname, self.filename)) + if not self.is_compatible(): + msg = 'Wheel %s not compatible with this Python.' % pathname + raise DistlibException(msg) + if not self.is_mountable(): + msg = 'Wheel %s is marked as not mountable.' 
% pathname + raise DistlibException(msg) + if pathname in sys.path: + logger.debug('%s already in path', pathname) + else: + if append: + sys.path.append(pathname) + else: + sys.path.insert(0, pathname) + extensions = self._get_extensions() + if extensions: + if _hook not in sys.meta_path: + sys.meta_path.append(_hook) + _hook.add(pathname, extensions) + + def unmount(self): + pathname = os.path.abspath(os.path.join(self.dirname, self.filename)) + if pathname not in sys.path: + logger.debug('%s not in path', pathname) + else: + sys.path.remove(pathname) + if pathname in _hook.impure_wheels: + _hook.remove(pathname) + if not _hook.impure_wheels: + if _hook in sys.meta_path: + sys.meta_path.remove(_hook) + + def verify(self): + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + data_dir = '%s.data' % name_ver + info_dir = '%s.dist-info' % name_ver + + metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME) + wheel_metadata_name = posixpath.join(info_dir, 'WHEEL') + record_name = posixpath.join(info_dir, 'RECORD') + + wrapper = codecs.getreader('utf-8') + + with ZipFile(pathname, 'r') as zf: + with zf.open(wheel_metadata_name) as bwf: + wf = wrapper(bwf) + message = message_from_file(wf) + wv = message['Wheel-Version'].split('.', 1) + file_version = tuple([int(i) for i in wv]) + # TODO version verification + + records = {} + with zf.open(record_name) as bf: + with CSVReader(stream=bf) as reader: + for row in reader: + p = row[0] + records[p] = row + + for zinfo in zf.infolist(): + arcname = zinfo.filename + if isinstance(arcname, text_type): + u_arcname = arcname + else: + u_arcname = arcname.decode('utf-8') + # See issue #115: some wheels have .. in their entries, but + # in the filename ... e.g. __main__..py ! So the check is + # updated to look for .. in the directory portions + p = u_arcname.split('/') + if '..' in p: + raise DistlibException('invalid entry in ' + 'wheel: %r' % u_arcname) + + if self.skip_entry(u_arcname): + continue + row = records[u_arcname] + if row[2] and str(zinfo.file_size) != row[2]: + raise DistlibException('size mismatch for ' + '%s' % u_arcname) + if row[1]: + kind, value = row[1].split('=', 1) + with zf.open(arcname) as bf: + data = bf.read() + _, digest = self.get_hash(data, kind) + if digest != value: + raise DistlibException('digest mismatch for ' + '%s' % arcname) + + def update(self, modifier, dest_dir=None, **kwargs): + """ + Update the contents of a wheel in a generic way. The modifier should + be a callable which expects a dictionary argument: its keys are + archive-entry paths, and its values are absolute filesystem paths + where the contents the corresponding archive entries can be found. The + modifier is free to change the contents of the files pointed to, add + new entries and remove entries, before returning. This method will + extract the entire contents of the wheel to a temporary location, call + the modifier, and then use the passed (and possibly updated) + dictionary to write a new wheel. If ``dest_dir`` is specified, the new + wheel is written there -- otherwise, the original wheel is overwritten. + + The modifier should return True if it updated the wheel, else False. + This method returns the same value the modifier returns. 
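+
+        A minimal sketch of a modifier (illustrative; the archive-entry name
+        is an assumption)::
+
+            def touch_init(path_map):
+                # append a marker to one extracted file, in place
+                with open(path_map['mypkg/__init__.py'], 'a') as f:
+                    f.write('\n# patched\n')
+                return True
+
+            modified = wheel.update(touch_init)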
+ """ + + def get_version(path_map, info_dir): + version = path = None + key = '%s/%s' % (info_dir, LEGACY_METADATA_FILENAME) + if key not in path_map: + key = '%s/PKG-INFO' % info_dir + if key in path_map: + path = path_map[key] + version = Metadata(path=path).version + return version, path + + def update_version(version, path): + updated = None + try: + v = NormalizedVersion(version) + i = version.find('-') + if i < 0: + updated = '%s+1' % version + else: + parts = [int(s) for s in version[i + 1:].split('.')] + parts[-1] += 1 + updated = '%s+%s' % (version[:i], + '.'.join(str(i) for i in parts)) + except UnsupportedVersionError: + logger.debug('Cannot update non-compliant (PEP-440) ' + 'version %r', version) + if updated: + md = Metadata(path=path) + md.version = updated + legacy = path.endswith(LEGACY_METADATA_FILENAME) + md.write(path=path, legacy=legacy) + logger.debug('Version updated from %r to %r', version, + updated) + + pathname = os.path.join(self.dirname, self.filename) + name_ver = '%s-%s' % (self.name, self.version) + info_dir = '%s.dist-info' % name_ver + record_name = posixpath.join(info_dir, 'RECORD') + with tempdir() as workdir: + with ZipFile(pathname, 'r') as zf: + path_map = {} + for zinfo in zf.infolist(): + arcname = zinfo.filename + if isinstance(arcname, text_type): + u_arcname = arcname + else: + u_arcname = arcname.decode('utf-8') + if u_arcname == record_name: + continue + if '..' in u_arcname: + raise DistlibException('invalid entry in ' + 'wheel: %r' % u_arcname) + zf.extract(zinfo, workdir) + path = os.path.join(workdir, convert_path(u_arcname)) + path_map[u_arcname] = path + + # Remember the version. + original_version, _ = get_version(path_map, info_dir) + # Files extracted. Call the modifier. + modified = modifier(path_map, **kwargs) + if modified: + # Something changed - need to build a new wheel. + current_version, path = get_version(path_map, info_dir) + if current_version and (current_version == original_version): + # Add or update local version to signify changes. + update_version(current_version, path) + # Decide where the new wheel goes. + if dest_dir is None: + fd, newpath = tempfile.mkstemp(suffix='.whl', + prefix='wheel-update-', + dir=workdir) + os.close(fd) + else: + if not os.path.isdir(dest_dir): + raise DistlibException('Not a directory: %r' % dest_dir) + newpath = os.path.join(dest_dir, self.filename) + archive_paths = list(path_map.items()) + distinfo = os.path.join(workdir, info_dir) + info = distinfo, info_dir + self.write_records(info, workdir, archive_paths) + self.build_zip(newpath, archive_paths) + if dest_dir is None: + shutil.copyfile(newpath, pathname) + return modified + +def _get_glibc_version(): + import platform + ver = platform.libc_ver() + result = [] + if ver[0] == 'glibc': + for s in ver[1].split('.'): + result.append(int(s) if s.isdigit() else 0) + result = tuple(result) + return result + +def compatible_tags(): + """ + Return (pyver, abi, arch) tuples compatible with this Python. 
+ """ + versions = [VER_SUFFIX] + major = VER_SUFFIX[0] + for minor in range(sys.version_info[1] - 1, - 1, -1): + versions.append(''.join([major, str(minor)])) + + abis = [] + for suffix, _, _ in imp.get_suffixes(): + if suffix.startswith('.abi'): + abis.append(suffix.split('.', 2)[1]) + abis.sort() + if ABI != 'none': + abis.insert(0, ABI) + abis.append('none') + result = [] + + arches = [ARCH] + if sys.platform == 'darwin': + m = re.match(r'(\w+)_(\d+)_(\d+)_(\w+)$', ARCH) + if m: + name, major, minor, arch = m.groups() + minor = int(minor) + matches = [arch] + if arch in ('i386', 'ppc'): + matches.append('fat') + if arch in ('i386', 'ppc', 'x86_64'): + matches.append('fat3') + if arch in ('ppc64', 'x86_64'): + matches.append('fat64') + if arch in ('i386', 'x86_64'): + matches.append('intel') + if arch in ('i386', 'x86_64', 'intel', 'ppc', 'ppc64'): + matches.append('universal') + while minor >= 0: + for match in matches: + s = '%s_%s_%s_%s' % (name, major, minor, match) + if s != ARCH: # already there + arches.append(s) + minor -= 1 + + # Most specific - our Python version, ABI and arch + for abi in abis: + for arch in arches: + result.append((''.join((IMP_PREFIX, versions[0])), abi, arch)) + # manylinux + if abi != 'none' and sys.platform.startswith('linux'): + arch = arch.replace('linux_', '') + parts = _get_glibc_version() + if len(parts) == 2: + if parts >= (2, 5): + result.append((''.join((IMP_PREFIX, versions[0])), abi, + 'manylinux1_%s' % arch)) + if parts >= (2, 12): + result.append((''.join((IMP_PREFIX, versions[0])), abi, + 'manylinux2010_%s' % arch)) + if parts >= (2, 17): + result.append((''.join((IMP_PREFIX, versions[0])), abi, + 'manylinux2014_%s' % arch)) + result.append((''.join((IMP_PREFIX, versions[0])), abi, + 'manylinux_%s_%s_%s' % (parts[0], parts[1], + arch))) + + # where no ABI / arch dependency, but IMP_PREFIX dependency + for i, version in enumerate(versions): + result.append((''.join((IMP_PREFIX, version)), 'none', 'any')) + if i == 0: + result.append((''.join((IMP_PREFIX, version[0])), 'none', 'any')) + + # no IMP_PREFIX, ABI or arch dependency + for i, version in enumerate(versions): + result.append((''.join(('py', version)), 'none', 'any')) + if i == 0: + result.append((''.join(('py', version[0])), 'none', 'any')) + + return set(result) + + +COMPATIBLE_TAGS = compatible_tags() + +del compatible_tags + + +def is_compatible(wheel, tags=None): + if not isinstance(wheel, Wheel): + wheel = Wheel(wheel) # assume it's a filename + result = False + if tags is None: + tags = COMPATIBLE_TAGS + for ver, abi, arch in tags: + if ver in wheel.pyver and abi in wheel.abi and arch in wheel.arch: + result = True + break + return result diff --git a/python/lib/python3.10/site-packages/pip/_vendor/distro.py b/python/lib/python3.10/site-packages/pip/_vendor/distro.py new file mode 100644 index 0000000..7892741 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/distro.py @@ -0,0 +1,1386 @@ +# Copyright 2015,2016,2017 Nir Cohen +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+The ``distro`` package (``distro`` stands for Linux Distribution) provides
+information about the Linux distribution it runs on, such as a reliable
+machine-readable distro ID, or version information.
+
+It is the recommended replacement for Python's original
+:py:func:`platform.linux_distribution` function, but it provides much more
+functionality. An alternative implementation became necessary because Python
+3.5 deprecated this function, and Python 3.8 removed it altogether. Its
+predecessor function :py:func:`platform.dist` was already deprecated since
+Python 2.6 and removed in Python 3.8. Still, there are many cases in which
+access to OS distribution information is needed. See `Python issue 1322
+<https://bugs.python.org/issue/1322>`_ for more information.
+"""
+
+import argparse
+import json
+import logging
+import os
+import re
+import shlex
+import subprocess
+import sys
+import warnings
+
+__version__ = "1.6.0"
+
+# Use `if False` to avoid an ImportError on Python 2. After dropping Python 2
+# support, can use typing.TYPE_CHECKING instead. See:
+# https://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING
+if False:  # pragma: nocover
+    from typing import (
+        Any,
+        Callable,
+        Dict,
+        Iterable,
+        Optional,
+        Sequence,
+        TextIO,
+        Tuple,
+        Type,
+        TypedDict,
+        Union,
+    )
+
+    VersionDict = TypedDict(
+        "VersionDict", {"major": str, "minor": str, "build_number": str}
+    )
+    InfoDict = TypedDict(
+        "InfoDict",
+        {
+            "id": str,
+            "version": str,
+            "version_parts": VersionDict,
+            "like": str,
+            "codename": str,
+        },
+    )
+
+
+_UNIXCONFDIR = os.environ.get("UNIXCONFDIR", "/etc")
+_UNIXUSRLIBDIR = os.environ.get("UNIXUSRLIBDIR", "/usr/lib")
+_OS_RELEASE_BASENAME = "os-release"
+
+#: Translation table for normalizing the "ID" attribute defined in os-release
+#: files, for use by the :func:`distro.id` method.
+#:
+#: * Key: Value as defined in the os-release file, translated to lower case,
+#:   with blanks translated to underscores.
+#:
+#: * Value: Normalized value.
+NORMALIZED_OS_ID = {
+    "ol": "oracle",  # Oracle Linux
+}
+
+#: Translation table for normalizing the "Distributor ID" attribute returned
+#: by the lsb_release command, for use by the :func:`distro.id` method.
+#:
+#: * Key: Value as returned by the lsb_release command, translated to lower
+#:   case, with blanks translated to underscores.
+#:
+#: * Value: Normalized value.
+NORMALIZED_LSB_ID = {
+    "enterpriseenterpriseas": "oracle",  # Oracle Enterprise Linux 4
+    "enterpriseenterpriseserver": "oracle",  # Oracle Linux 5
+    "redhatenterpriseworkstation": "rhel",  # RHEL 6, 7 Workstation
+    "redhatenterpriseserver": "rhel",  # RHEL 6, 7 Server
+    "redhatenterprisecomputenode": "rhel",  # RHEL 6 ComputeNode
+}
+
+#: Translation table for normalizing the distro ID derived from the file name
+#: of distro release files, for use by the :func:`distro.id` method.
+#:
+#: * Key: Value as derived from the file name of a distro release file,
+#:   translated to lower case, with blanks translated to underscores.
+#:
+#: * Value: Normalized value.
+NORMALIZED_DISTRO_ID = {
+    "redhat": "rhel",  # RHEL 6.x, 7.x
+}
+
+# Pattern for content of distro release file (reversed)
+_DISTRO_RELEASE_CONTENT_REVERSED_PATTERN = re.compile(
+    r"(?:[^)]*\)(.*)\()?
*(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)" +) + +# Pattern for base file name of distro release file +_DISTRO_RELEASE_BASENAME_PATTERN = re.compile(r"(\w+)[-_](release|version)$") + +# Base file names to be ignored when searching for distro release file +_DISTRO_RELEASE_IGNORE_BASENAMES = ( + "debian_version", + "lsb-release", + "oem-release", + _OS_RELEASE_BASENAME, + "system-release", + "plesk-release", + "iredmail-release", +) + + +def linux_distribution(full_distribution_name=True): + # type: (bool) -> Tuple[str, str, str] + """ + .. deprecated:: 1.6.0 + + :func:`distro.linux_distribution()` is deprecated. It should only be + used as a compatibility shim with Python's + :py:func:`platform.linux_distribution()`. Please use :func:`distro.id`, + :func:`distro.version` and :func:`distro.name` instead. + + Return information about the current OS distribution as a tuple + ``(id_name, version, codename)`` with items as follows: + + * ``id_name``: If *full_distribution_name* is false, the result of + :func:`distro.id`. Otherwise, the result of :func:`distro.name`. + + * ``version``: The result of :func:`distro.version`. + + * ``codename``: The result of :func:`distro.codename`. + + The interface of this function is compatible with the original + :py:func:`platform.linux_distribution` function, supporting a subset of + its parameters. + + The data it returns may not exactly be the same, because it uses more data + sources than the original function, and that may lead to different data if + the OS distribution is not consistent across multiple data sources it + provides (there are indeed such distributions ...). + + Another reason for differences is the fact that the :func:`distro.id` + method normalizes the distro ID string to a reliable machine-readable value + for a number of popular OS distributions. + """ + warnings.warn( + "distro.linux_distribution() is deprecated. It should only be used as a " + "compatibility shim with Python's platform.linux_distribution(). Please use " + "distro.id(), distro.version() and distro.name() instead.", + DeprecationWarning, + stacklevel=2, + ) + return _distro.linux_distribution(full_distribution_name) + + +def id(): + # type: () -> str + """ + Return the distro ID of the current distribution, as a + machine-readable string. + + For a number of OS distributions, the returned distro ID value is + *reliable*, in the sense that it is documented and that it does not change + across releases of the distribution. 
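+
+    For example, this returns "ubuntu" on Ubuntu and "rhel" on Red Hat
+    Enterprise Linux, whatever the release.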
+
+    This package maintains the following reliable distro ID values:
+
+    ============== =========================================
+    Distro ID      Distribution
+    ============== =========================================
+    "ubuntu"       Ubuntu
+    "debian"       Debian
+    "rhel"         Red Hat Enterprise Linux
+    "centos"       CentOS
+    "fedora"       Fedora
+    "sles"         SUSE Linux Enterprise Server
+    "opensuse"     openSUSE
+    "amazon"       Amazon Linux
+    "arch"         Arch Linux
+    "cloudlinux"   CloudLinux OS
+    "exherbo"      Exherbo Linux
+    "gentoo"       Gentoo Linux
+    "ibm_powerkvm" IBM PowerKVM
+    "kvmibm"       KVM for IBM z Systems
+    "linuxmint"    Linux Mint
+    "mageia"       Mageia
+    "mandriva"     Mandriva Linux
+    "parallels"    Parallels
+    "pidora"       Pidora
+    "raspbian"     Raspbian
+    "oracle"       Oracle Linux (and Oracle Enterprise Linux)
+    "scientific"   Scientific Linux
+    "slackware"    Slackware
+    "xenserver"    XenServer
+    "openbsd"      OpenBSD
+    "netbsd"       NetBSD
+    "freebsd"      FreeBSD
+    "midnightbsd"  MidnightBSD
+    ============== =========================================
+
+    If you need a distro with a reliable ID added to this set, or if you find
+    that the :func:`distro.id` function returns a different distro ID for one
+    of the listed distros, please create an issue in the
+    `distro issue tracker`_.
+
+    **Lookup hierarchy and transformations:**
+
+    First, the ID is obtained from the following sources, in the specified
+    order. The first available and non-empty value is used:
+
+    * the value of the "ID" attribute of the os-release file,
+
+    * the value of the "Distributor ID" attribute returned by the lsb_release
+      command,
+
+    * the first part of the file name of the distro release file.
+
+    The ID value determined this way then passes through the following
+    transformations before it is returned by this method:
+
+    * it is translated to lower case,
+
+    * blanks (which should not be there anyway) are translated to underscores,
+
+    * a normalization of the ID is performed, based upon
+      `normalization tables`_. The purpose of this normalization is to ensure
+      that the ID is as reliable as possible, even across incompatible changes
+      in the OS distributions. A common reason for an incompatible change is
+      the addition of an os-release file, or the addition of the lsb_release
+      command, with ID values that differ from what was previously determined
+      from the distro release file name.
+    """
+    return _distro.id()
+
+
+def name(pretty=False):
+    # type: (bool) -> str
+    """
+    Return the name of the current OS distribution, as a human-readable
+    string.
+
+    If *pretty* is false, the name is returned without version or codename
+    (e.g. "CentOS Linux").
+
+    If *pretty* is true, the version and codename are appended
+    (e.g. "CentOS Linux 7.1.1503 (Core)").
+
+    **Lookup hierarchy:**
+
+    The name is obtained from the following sources, in the specified order.
+    The first available and non-empty value is used:
+
+    * If *pretty* is false:
+
+      - the value of the "NAME" attribute of the os-release file,
+
+      - the value of the "Distributor ID" attribute returned by the
+        lsb_release command,
+
+      - the value of the "<name>" field of the distro release file.
+
+    * If *pretty* is true:
+
+      - the value of the "PRETTY_NAME" attribute of the os-release file,
+
+      - the value of the "Description" attribute returned by the
+        lsb_release command,
+
+      - the value of the "<name>" field of the distro release file, appended
+        with the value of the pretty version ("<version_id>" and "<codename>"
+        fields) of the distro release file, if available.
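+
+    For example (illustrative values)::
+
+        distro.name()             # 'CentOS Linux'
+        distro.name(pretty=True)  # 'CentOS Linux 7.1.1503 (Core)'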
+ """ + return _distro.name(pretty) + + +def version(pretty=False, best=False): + # type: (bool, bool) -> str + """ + Return the version of the current OS distribution, as a human-readable + string. + + If *pretty* is false, the version is returned without codename (e.g. + "7.0"). + + If *pretty* is true, the codename in parenthesis is appended, if the + codename is non-empty (e.g. "7.0 (Maipo)"). + + Some distributions provide version numbers with different precisions in + the different sources of distribution information. Examining the different + sources in a fixed priority order does not always yield the most precise + version (e.g. for Debian 8.2, or CentOS 7.1). + + The *best* parameter can be used to control the approach for the returned + version: + + If *best* is false, the first non-empty version number in priority order of + the examined sources is returned. + + If *best* is true, the most precise version number out of all examined + sources is returned. + + **Lookup hierarchy:** + + In all cases, the version number is obtained from the following sources. + If *best* is false, this order represents the priority order: + + * the value of the "VERSION_ID" attribute of the os-release file, + * the value of the "Release" attribute returned by the lsb_release + command, + * the version number parsed from the "" field of the first line + of the distro release file, + * the version number parsed from the "PRETTY_NAME" attribute of the + os-release file, if it follows the format of the distro release files. + * the version number parsed from the "Description" attribute returned by + the lsb_release command, if it follows the format of the distro release + files. + """ + return _distro.version(pretty, best) + + +def version_parts(best=False): + # type: (bool) -> Tuple[str, str, str] + """ + Return the version of the current OS distribution as a tuple + ``(major, minor, build_number)`` with items as follows: + + * ``major``: The result of :func:`distro.major_version`. + + * ``minor``: The result of :func:`distro.minor_version`. + + * ``build_number``: The result of :func:`distro.build_number`. + + For a description of the *best* parameter, see the :func:`distro.version` + method. + """ + return _distro.version_parts(best) + + +def major_version(best=False): + # type: (bool) -> str + """ + Return the major version of the current OS distribution, as a string, + if provided. + Otherwise, the empty string is returned. The major version is the first + part of the dot-separated version string. + + For a description of the *best* parameter, see the :func:`distro.version` + method. + """ + return _distro.major_version(best) + + +def minor_version(best=False): + # type: (bool) -> str + """ + Return the minor version of the current OS distribution, as a string, + if provided. + Otherwise, the empty string is returned. The minor version is the second + part of the dot-separated version string. + + For a description of the *best* parameter, see the :func:`distro.version` + method. + """ + return _distro.minor_version(best) + + +def build_number(best=False): + # type: (bool) -> str + """ + Return the build number of the current OS distribution, as a string, + if provided. + Otherwise, the empty string is returned. The build number is the third part + of the dot-separated version string. + + For a description of the *best* parameter, see the :func:`distro.version` + method. 
+ """ + return _distro.build_number(best) + + +def like(): + # type: () -> str + """ + Return a space-separated list of distro IDs of distributions that are + closely related to the current OS distribution in regards to packaging + and programming interfaces, for example distributions the current + distribution is a derivative from. + + **Lookup hierarchy:** + + This information item is only provided by the os-release file. + For details, see the description of the "ID_LIKE" attribute in the + `os-release man page + `_. + """ + return _distro.like() + + +def codename(): + # type: () -> str + """ + Return the codename for the release of the current OS distribution, + as a string. + + If the distribution does not have a codename, an empty string is returned. + + Note that the returned codename is not always really a codename. For + example, openSUSE returns "x86_64". This function does not handle such + cases in any special way and just returns the string it finds, if any. + + **Lookup hierarchy:** + + * the codename within the "VERSION" attribute of the os-release file, if + provided, + + * the value of the "Codename" attribute returned by the lsb_release + command, + + * the value of the "" field of the distro release file. + """ + return _distro.codename() + + +def info(pretty=False, best=False): + # type: (bool, bool) -> InfoDict + """ + Return certain machine-readable information items about the current OS + distribution in a dictionary, as shown in the following example: + + .. sourcecode:: python + + { + 'id': 'rhel', + 'version': '7.0', + 'version_parts': { + 'major': '7', + 'minor': '0', + 'build_number': '' + }, + 'like': 'fedora', + 'codename': 'Maipo' + } + + The dictionary structure and keys are always the same, regardless of which + information items are available in the underlying data sources. The values + for the various keys are as follows: + + * ``id``: The result of :func:`distro.id`. + + * ``version``: The result of :func:`distro.version`. + + * ``version_parts -> major``: The result of :func:`distro.major_version`. + + * ``version_parts -> minor``: The result of :func:`distro.minor_version`. + + * ``version_parts -> build_number``: The result of + :func:`distro.build_number`. + + * ``like``: The result of :func:`distro.like`. + + * ``codename``: The result of :func:`distro.codename`. + + For a description of the *pretty* and *best* parameters, see the + :func:`distro.version` method. + """ + return _distro.info(pretty, best) + + +def os_release_info(): + # type: () -> Dict[str, str] + """ + Return a dictionary containing key-value pairs for the information items + from the os-release file data source of the current OS distribution. + + See `os-release file`_ for details about these information items. + """ + return _distro.os_release_info() + + +def lsb_release_info(): + # type: () -> Dict[str, str] + """ + Return a dictionary containing key-value pairs for the information items + from the lsb_release command data source of the current OS distribution. + + See `lsb_release command output`_ for details about these information + items. + """ + return _distro.lsb_release_info() + + +def distro_release_info(): + # type: () -> Dict[str, str] + """ + Return a dictionary containing key-value pairs for the information items + from the distro release file data source of the current OS distribution. + + See `distro release file`_ for details about these information items. 
+ """ + return _distro.distro_release_info() + + +def uname_info(): + # type: () -> Dict[str, str] + """ + Return a dictionary containing key-value pairs for the information items + from the distro release file data source of the current OS distribution. + """ + return _distro.uname_info() + + +def os_release_attr(attribute): + # type: (str) -> str + """ + Return a single named information item from the os-release file data source + of the current OS distribution. + + Parameters: + + * ``attribute`` (string): Key of the information item. + + Returns: + + * (string): Value of the information item, if the item exists. + The empty string, if the item does not exist. + + See `os-release file`_ for details about these information items. + """ + return _distro.os_release_attr(attribute) + + +def lsb_release_attr(attribute): + # type: (str) -> str + """ + Return a single named information item from the lsb_release command output + data source of the current OS distribution. + + Parameters: + + * ``attribute`` (string): Key of the information item. + + Returns: + + * (string): Value of the information item, if the item exists. + The empty string, if the item does not exist. + + See `lsb_release command output`_ for details about these information + items. + """ + return _distro.lsb_release_attr(attribute) + + +def distro_release_attr(attribute): + # type: (str) -> str + """ + Return a single named information item from the distro release file + data source of the current OS distribution. + + Parameters: + + * ``attribute`` (string): Key of the information item. + + Returns: + + * (string): Value of the information item, if the item exists. + The empty string, if the item does not exist. + + See `distro release file`_ for details about these information items. + """ + return _distro.distro_release_attr(attribute) + + +def uname_attr(attribute): + # type: (str) -> str + """ + Return a single named information item from the distro release file + data source of the current OS distribution. + + Parameters: + + * ``attribute`` (string): Key of the information item. + + Returns: + + * (string): Value of the information item, if the item exists. + The empty string, if the item does not exist. + """ + return _distro.uname_attr(attribute) + + +try: + from functools import cached_property +except ImportError: + # Python < 3.8 + class cached_property(object): # type: ignore + """A version of @property which caches the value. On access, it calls the + underlying function and sets the value in `__dict__` so future accesses + will not re-call the property. + """ + + def __init__(self, f): + # type: (Callable[[Any], Any]) -> None + self._fname = f.__name__ + self._f = f + + def __get__(self, obj, owner): + # type: (Any, Type[Any]) -> Any + assert obj is not None, "call {} on an instance".format(self._fname) + ret = obj.__dict__[self._fname] = self._f(obj) + return ret + + +class LinuxDistribution(object): + """ + Provides information about a OS distribution. + + This package creates a private module-global instance of this class with + default initialization arguments, that is used by the + `consolidated accessor functions`_ and `single source accessor functions`_. + By using default initialization arguments, that module-global instance + returns data about the current OS distribution (i.e. the distro this + package runs on). + + Normally, it is not necessary to create additional instances of this class. 
+ However, in situations where control is needed over the exact data sources + that are used, instances of this class can be created with a specific + distro release file, or a specific os-release file, or without invoking the + lsb_release command. + """ + + def __init__( + self, + include_lsb=True, + os_release_file="", + distro_release_file="", + include_uname=True, + root_dir=None, + ): + # type: (bool, str, str, bool, Optional[str]) -> None + """ + The initialization method of this class gathers information from the + available data sources, and stores that in private instance attributes. + Subsequent access to the information items uses these private instance + attributes, so that the data sources are read only once. + + Parameters: + + * ``include_lsb`` (bool): Controls whether the + `lsb_release command output`_ is included as a data source. + + If the lsb_release command is not available in the program execution + path, the data source for the lsb_release command will be empty. + + * ``os_release_file`` (string): The path name of the + `os-release file`_ that is to be used as a data source. + + An empty string (the default) will cause the default path name to + be used (see `os-release file`_ for details). + + If the specified or defaulted os-release file does not exist, the + data source for the os-release file will be empty. + + * ``distro_release_file`` (string): The path name of the + `distro release file`_ that is to be used as a data source. + + An empty string (the default) will cause a default search algorithm + to be used (see `distro release file`_ for details). + + If the specified distro release file does not exist, or if no default + distro release file can be found, the data source for the distro + release file will be empty. + + * ``include_uname`` (bool): Controls whether uname command output is + included as a data source. If the uname command is not available in + the program execution path the data source for the uname command will + be empty. + + * ``root_dir`` (string): The absolute path to the root directory to use + to find distro-related information files. + + Public instance attributes: + + * ``os_release_file`` (string): The path name of the + `os-release file`_ that is actually used as a data source. The + empty string if no distro release file is used as a data source. + + * ``distro_release_file`` (string): The path name of the + `distro release file`_ that is actually used as a data source. The + empty string if no distro release file is used as a data source. + + * ``include_lsb`` (bool): The result of the ``include_lsb`` parameter. + This controls whether the lsb information will be loaded. + + * ``include_uname`` (bool): The result of the ``include_uname`` + parameter. This controls whether the uname information will + be loaded. + + Raises: + + * :py:exc:`IOError`: Some I/O issue with an os-release file or distro + release file. + + * :py:exc:`subprocess.CalledProcessError`: The lsb_release command had + some issue (other than not being available in the program execution + path). + + * :py:exc:`UnicodeError`: A data source has unexpected characters or + uses an unexpected encoding. 
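+
+        Example (illustrative arguments)::
+
+            dist = LinuxDistribution(include_lsb=False,
+                                     os_release_file="/etc/os-release")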
+ """ + self.root_dir = root_dir + self.etc_dir = os.path.join(root_dir, "etc") if root_dir else _UNIXCONFDIR + self.usr_lib_dir = ( + os.path.join(root_dir, "usr/lib") if root_dir else _UNIXUSRLIBDIR + ) + + if os_release_file: + self.os_release_file = os_release_file + else: + etc_dir_os_release_file = os.path.join(self.etc_dir, _OS_RELEASE_BASENAME) + usr_lib_os_release_file = os.path.join( + self.usr_lib_dir, _OS_RELEASE_BASENAME + ) + + # NOTE: The idea is to respect order **and** have it set + # at all times for API backwards compatibility. + if os.path.isfile(etc_dir_os_release_file) or not os.path.isfile( + usr_lib_os_release_file + ): + self.os_release_file = etc_dir_os_release_file + else: + self.os_release_file = usr_lib_os_release_file + + self.distro_release_file = distro_release_file or "" # updated later + self.include_lsb = include_lsb + self.include_uname = include_uname + + def __repr__(self): + # type: () -> str + """Return repr of all info""" + return ( + "LinuxDistribution(" + "os_release_file={self.os_release_file!r}, " + "distro_release_file={self.distro_release_file!r}, " + "include_lsb={self.include_lsb!r}, " + "include_uname={self.include_uname!r}, " + "_os_release_info={self._os_release_info!r}, " + "_lsb_release_info={self._lsb_release_info!r}, " + "_distro_release_info={self._distro_release_info!r}, " + "_uname_info={self._uname_info!r})".format(self=self) + ) + + def linux_distribution(self, full_distribution_name=True): + # type: (bool) -> Tuple[str, str, str] + """ + Return information about the OS distribution that is compatible + with Python's :func:`platform.linux_distribution`, supporting a subset + of its parameters. + + For details, see :func:`distro.linux_distribution`. + """ + return ( + self.name() if full_distribution_name else self.id(), + self.version(), + self.codename(), + ) + + def id(self): + # type: () -> str + """Return the distro ID of the OS distribution, as a string. + + For details, see :func:`distro.id`. + """ + + def normalize(distro_id, table): + # type: (str, Dict[str, str]) -> str + distro_id = distro_id.lower().replace(" ", "_") + return table.get(distro_id, distro_id) + + distro_id = self.os_release_attr("id") + if distro_id: + return normalize(distro_id, NORMALIZED_OS_ID) + + distro_id = self.lsb_release_attr("distributor_id") + if distro_id: + return normalize(distro_id, NORMALIZED_LSB_ID) + + distro_id = self.distro_release_attr("id") + if distro_id: + return normalize(distro_id, NORMALIZED_DISTRO_ID) + + distro_id = self.uname_attr("id") + if distro_id: + return normalize(distro_id, NORMALIZED_DISTRO_ID) + + return "" + + def name(self, pretty=False): + # type: (bool) -> str + """ + Return the name of the OS distribution, as a string. + + For details, see :func:`distro.name`. + """ + name = ( + self.os_release_attr("name") + or self.lsb_release_attr("distributor_id") + or self.distro_release_attr("name") + or self.uname_attr("name") + ) + if pretty: + name = self.os_release_attr("pretty_name") or self.lsb_release_attr( + "description" + ) + if not name: + name = self.distro_release_attr("name") or self.uname_attr("name") + version = self.version(pretty=True) + if version: + name = name + " " + version + return name or "" + + def version(self, pretty=False, best=False): + # type: (bool, bool) -> str + """ + Return the version of the OS distribution, as a string. + + For details, see :func:`distro.version`. 
+ """ + versions = [ + self.os_release_attr("version_id"), + self.lsb_release_attr("release"), + self.distro_release_attr("version_id"), + self._parse_distro_release_content(self.os_release_attr("pretty_name")).get( + "version_id", "" + ), + self._parse_distro_release_content( + self.lsb_release_attr("description") + ).get("version_id", ""), + self.uname_attr("release"), + ] + version = "" + if best: + # This algorithm uses the last version in priority order that has + # the best precision. If the versions are not in conflict, that + # does not matter; otherwise, using the last one instead of the + # first one might be considered a surprise. + for v in versions: + if v.count(".") > version.count(".") or version == "": + version = v + else: + for v in versions: + if v != "": + version = v + break + if pretty and version and self.codename(): + version = "{0} ({1})".format(version, self.codename()) + return version + + def version_parts(self, best=False): + # type: (bool) -> Tuple[str, str, str] + """ + Return the version of the OS distribution, as a tuple of version + numbers. + + For details, see :func:`distro.version_parts`. + """ + version_str = self.version(best=best) + if version_str: + version_regex = re.compile(r"(\d+)\.?(\d+)?\.?(\d+)?") + matches = version_regex.match(version_str) + if matches: + major, minor, build_number = matches.groups() + return major, minor or "", build_number or "" + return "", "", "" + + def major_version(self, best=False): + # type: (bool) -> str + """ + Return the major version number of the current distribution. + + For details, see :func:`distro.major_version`. + """ + return self.version_parts(best)[0] + + def minor_version(self, best=False): + # type: (bool) -> str + """ + Return the minor version number of the current distribution. + + For details, see :func:`distro.minor_version`. + """ + return self.version_parts(best)[1] + + def build_number(self, best=False): + # type: (bool) -> str + """ + Return the build number of the current distribution. + + For details, see :func:`distro.build_number`. + """ + return self.version_parts(best)[2] + + def like(self): + # type: () -> str + """ + Return the IDs of distributions that are like the OS distribution. + + For details, see :func:`distro.like`. + """ + return self.os_release_attr("id_like") or "" + + def codename(self): + # type: () -> str + """ + Return the codename of the OS distribution. + + For details, see :func:`distro.codename`. + """ + try: + # Handle os_release specially since distros might purposefully set + # this to empty string to have no codename + return self._os_release_info["codename"] + except KeyError: + return ( + self.lsb_release_attr("codename") + or self.distro_release_attr("codename") + or "" + ) + + def info(self, pretty=False, best=False): + # type: (bool, bool) -> InfoDict + """ + Return certain machine-readable information about the OS + distribution. + + For details, see :func:`distro.info`. + """ + return dict( + id=self.id(), + version=self.version(pretty, best), + version_parts=dict( + major=self.major_version(best), + minor=self.minor_version(best), + build_number=self.build_number(best), + ), + like=self.like(), + codename=self.codename(), + ) + + def os_release_info(self): + # type: () -> Dict[str, str] + """ + Return a dictionary containing key-value pairs for the information + items from the os-release file data source of the OS distribution. + + For details, see :func:`distro.os_release_info`. 
+ """ + return self._os_release_info + + def lsb_release_info(self): + # type: () -> Dict[str, str] + """ + Return a dictionary containing key-value pairs for the information + items from the lsb_release command data source of the OS + distribution. + + For details, see :func:`distro.lsb_release_info`. + """ + return self._lsb_release_info + + def distro_release_info(self): + # type: () -> Dict[str, str] + """ + Return a dictionary containing key-value pairs for the information + items from the distro release file data source of the OS + distribution. + + For details, see :func:`distro.distro_release_info`. + """ + return self._distro_release_info + + def uname_info(self): + # type: () -> Dict[str, str] + """ + Return a dictionary containing key-value pairs for the information + items from the uname command data source of the OS distribution. + + For details, see :func:`distro.uname_info`. + """ + return self._uname_info + + def os_release_attr(self, attribute): + # type: (str) -> str + """ + Return a single named information item from the os-release file data + source of the OS distribution. + + For details, see :func:`distro.os_release_attr`. + """ + return self._os_release_info.get(attribute, "") + + def lsb_release_attr(self, attribute): + # type: (str) -> str + """ + Return a single named information item from the lsb_release command + output data source of the OS distribution. + + For details, see :func:`distro.lsb_release_attr`. + """ + return self._lsb_release_info.get(attribute, "") + + def distro_release_attr(self, attribute): + # type: (str) -> str + """ + Return a single named information item from the distro release file + data source of the OS distribution. + + For details, see :func:`distro.distro_release_attr`. + """ + return self._distro_release_info.get(attribute, "") + + def uname_attr(self, attribute): + # type: (str) -> str + """ + Return a single named information item from the uname command + output data source of the OS distribution. + + For details, see :func:`distro.uname_attr`. + """ + return self._uname_info.get(attribute, "") + + @cached_property + def _os_release_info(self): + # type: () -> Dict[str, str] + """ + Get the information items from the specified os-release file. + + Returns: + A dictionary containing all information items. + """ + if os.path.isfile(self.os_release_file): + with open(self.os_release_file) as release_file: + return self._parse_os_release_content(release_file) + return {} + + @staticmethod + def _parse_os_release_content(lines): + # type: (TextIO) -> Dict[str, str] + """ + Parse the lines of an os-release file. + + Parameters: + + * lines: Iterable through the lines in the os-release file. + Each line must be a unicode string or a UTF-8 encoded byte + string. + + Returns: + A dictionary containing all information items. + """ + props = {} + lexer = shlex.shlex(lines, posix=True) + lexer.whitespace_split = True + + # The shlex module defines its `wordchars` variable using literals, + # making it dependent on the encoding of the Python source file. + # In Python 2.6 and 2.7, the shlex source file is encoded in + # 'iso-8859-1', and the `wordchars` variable is defined as a byte + # string. This causes a UnicodeDecodeError to be raised when the + # parsed content is a unicode object. The following fix resolves that + # (... 
but it should be fixed in shlex...):
+        if sys.version_info[0] == 2 and isinstance(lexer.wordchars, bytes):
+            lexer.wordchars = lexer.wordchars.decode("iso-8859-1")
+
+        tokens = list(lexer)
+        for token in tokens:
+            # At this point, all shell-like parsing has been done (i.e.
+            # comments processed, quotes and backslash escape sequences
+            # processed, multi-line values assembled, trailing newlines
+            # stripped, etc.), so the tokens are now either:
+            # * variable assignments: var=value
+            # * commands or their arguments (not allowed in os-release)
+            if "=" in token:
+                k, v = token.split("=", 1)
+                props[k.lower()] = v
+            else:
+                # Ignore any tokens that are not variable assignments
+                pass
+
+        if "version_codename" in props:
+            # os-release added a version_codename field. Use that in
+            # preference to anything else. Note that some distros purposefully
+            # do not have code names. They should be setting
+            # version_codename=""
+            props["codename"] = props["version_codename"]
+        elif "ubuntu_codename" in props:
+            # Same as above, but for the non-standard field name used on
+            # older Ubuntu releases
+            props["codename"] = props["ubuntu_codename"]
+        elif "version" in props:
+            # If there is no version_codename, parse it from the version
+            match = re.search(r"(\(\D+\))|,(\s+)?\D+", props["version"])
+            if match:
+                codename = match.group()
+                codename = codename.strip("()")
+                codename = codename.strip(",")
+                codename = codename.strip()
+                # The codename appears within parentheses or after a comma.
+                props["codename"] = codename
+
+        return props
+
+    @cached_property
+    def _lsb_release_info(self):
+        # type: () -> Dict[str, str]
+        """
+        Get the information items from the lsb_release command output.
+
+        Returns:
+            A dictionary containing all information items.
+        """
+        if not self.include_lsb:
+            return {}
+        with open(os.devnull, "wb") as devnull:
+            try:
+                cmd = ("lsb_release", "-a")
+                stdout = subprocess.check_output(cmd, stderr=devnull)
+            # Command not found or lsb_release returned error
+            except (OSError, subprocess.CalledProcessError):
+                return {}
+        content = self._to_str(stdout).splitlines()
+        return self._parse_lsb_release_content(content)
+
+    @staticmethod
+    def _parse_lsb_release_content(lines):
+        # type: (Iterable[str]) -> Dict[str, str]
+        """
+        Parse the output of the lsb_release command.
+
+        Parameters:
+
+        * lines: Iterable through the lines of the lsb_release output.
+          Each line must be a unicode string or a UTF-8 encoded byte
+          string.
+
+        Returns:
+            A dictionary containing all information items.
+        """
+        props = {}
+        for line in lines:
+            kv = line.strip("\n").split(":", 1)
+            if len(kv) != 2:
+                # Ignore lines without a colon.
+                continue
+            k, v = kv
+            props.update({k.replace(" ", "_").lower(): v.strip()})
+        return props
+
+    @cached_property
+    def _uname_info(self):
+        # type: () -> Dict[str, str]
+        with open(os.devnull, "wb") as devnull:
+            try:
+                cmd = ("uname", "-rs")
+                stdout = subprocess.check_output(cmd, stderr=devnull)
+            except OSError:
+                return {}
+        content = self._to_str(stdout).splitlines()
+        return self._parse_uname_content(content)
+
+    @staticmethod
+    def _parse_uname_content(lines):
+        # type: (Sequence[str]) -> Dict[str, str]
+        props = {}
+        match = re.search(r"^([^\s]+)\s+([\d\.]+)", lines[0].strip())
+        if match:
+            name, version = match.groups()
+
+            # This is to prevent the Linux kernel version from
+            # appearing as the 'best' version on otherwise
+            # identifiable distributions.
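+            # Illustrative example (assumed outputs, not part of upstream
+            # distro): on FreeBSD, ``uname -rs`` prints something like
+            # "FreeBSD 13.2-RELEASE", which the regex above splits into the
+            # name "FreeBSD" and the release "13.2"; on a Linux box it prints
+            # e.g. "Linux 5.15.0", which the check below discards.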
+ if name == "Linux": + return {} + props["id"] = name.lower() + props["name"] = name + props["release"] = version + return props + + @staticmethod + def _to_str(text): + # type: (Union[bytes, str]) -> str + encoding = sys.getfilesystemencoding() + encoding = "utf-8" if encoding == "ascii" else encoding + + if sys.version_info[0] >= 3: + if isinstance(text, bytes): + return text.decode(encoding) + else: + if isinstance(text, unicode): # noqa + return text.encode(encoding) + + return text + + @cached_property + def _distro_release_info(self): + # type: () -> Dict[str, str] + """ + Get the information items from the specified distro release file. + + Returns: + A dictionary containing all information items. + """ + if self.distro_release_file: + # If it was specified, we use it and parse what we can, even if + # its file name or content does not match the expected pattern. + distro_info = self._parse_distro_release_file(self.distro_release_file) + basename = os.path.basename(self.distro_release_file) + # The file name pattern for user-specified distro release files + # is somewhat more tolerant (compared to when searching for the + # file), because we want to use what was specified as best as + # possible. + match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) + if "name" in distro_info and "cloudlinux" in distro_info["name"].lower(): + distro_info["id"] = "cloudlinux" + elif match: + distro_info["id"] = match.group(1) + return distro_info + else: + try: + basenames = os.listdir(self.etc_dir) + # We sort for repeatability in cases where there are multiple + # distro specific files; e.g. CentOS, Oracle, Enterprise all + # containing `redhat-release` on top of their own. + basenames.sort() + except OSError: + # This may occur when /etc is not readable but we can't be + # sure about the *-release files. Check common entries of + # /etc for information. If they turn out to not be there the + # error is handled in `_parse_distro_release_file()`. + basenames = [ + "SuSE-release", + "arch-release", + "base-release", + "centos-release", + "fedora-release", + "gentoo-release", + "mageia-release", + "mandrake-release", + "mandriva-release", + "mandrivalinux-release", + "manjaro-release", + "oracle-release", + "redhat-release", + "sl-release", + "slackware-version", + ] + for basename in basenames: + if basename in _DISTRO_RELEASE_IGNORE_BASENAMES: + continue + match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) + if match: + filepath = os.path.join(self.etc_dir, basename) + distro_info = self._parse_distro_release_file(filepath) + if "name" in distro_info: + # The name is always present if the pattern matches + self.distro_release_file = filepath + distro_info["id"] = match.group(1) + if "cloudlinux" in distro_info["name"].lower(): + distro_info["id"] = "cloudlinux" + return distro_info + return {} + + def _parse_distro_release_file(self, filepath): + # type: (str) -> Dict[str, str] + """ + Parse a distro release file. + + Parameters: + + * filepath: Path name of the distro release file. + + Returns: + A dictionary containing all information items. + """ + try: + with open(filepath) as fp: + # Only parse the first line. For instance, on SLES there + # are multiple lines. We don't want them... + return self._parse_distro_release_content(fp.readline()) + except (OSError, IOError): + # Ignore not being able to read a specific, seemingly version + # related file. 
+            # See https://github.com/python-distro/distro/issues/162
+            return {}
+
+    @staticmethod
+    def _parse_distro_release_content(line):
+        # type: (str) -> Dict[str, str]
+        """
+        Parse a line from a distro release file.
+
+        Parameters:
+        * line: Line from the distro release file. Must be a unicode string
+          or a UTF-8 encoded byte string.
+
+        Returns:
+            A dictionary containing all information items.
+        """
+        matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match(line.strip()[::-1])
+        distro_info = {}
+        if matches:
+            # regexp ensures non-None
+            distro_info["name"] = matches.group(3)[::-1]
+            if matches.group(2):
+                distro_info["version_id"] = matches.group(2)[::-1]
+            if matches.group(1):
+                distro_info["codename"] = matches.group(1)[::-1]
+        elif line:
+            distro_info["name"] = line.strip()
+        return distro_info
+
+
+_distro = LinuxDistribution()
+
+
+def main():
+    # type: () -> None
+    logger = logging.getLogger(__name__)
+    logger.setLevel(logging.DEBUG)
+    logger.addHandler(logging.StreamHandler(sys.stdout))
+
+    parser = argparse.ArgumentParser(description="OS distro info tool")
+    parser.add_argument(
+        "--json", "-j", help="Output in machine readable format", action="store_true"
+    )
+
+    parser.add_argument(
+        "--root-dir",
+        "-r",
+        type=str,
+        dest="root_dir",
+        help="Path to the root filesystem directory (defaults to /)",
+    )
+
+    args = parser.parse_args()
+
+    if args.root_dir:
+        dist = LinuxDistribution(
+            include_lsb=False, include_uname=False, root_dir=args.root_dir
+        )
+    else:
+        dist = _distro
+
+    if args.json:
+        logger.info(json.dumps(dist.info(), indent=4, sort_keys=True))
+    else:
+        logger.info("Name: %s", dist.name(pretty=True))
+        distribution_version = dist.version(pretty=True)
+        logger.info("Version: %s", distribution_version)
+        distribution_codename = dist.codename()
+        logger.info("Codename: %s", distribution_codename)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/__init__.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/__init__.py
new file mode 100644
index 0000000..d1d82f1
--- /dev/null
+++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/__init__.py
@@ -0,0 +1,35 @@
+"""
+HTML parsing library based on the `WHATWG HTML specification
+<https://whatwg.org/html>`_. The parser is designed to be compatible with
+existing HTML found in the wild and implements well-defined error recovery that
+is largely compatible with modern desktop web browsers.
+
+Example usage::
+
+    from pip._vendor import html5lib
+    with open("my_document.html", "rb") as f:
+        tree = html5lib.parse(f)
+
+For convenience, this module re-exports the following names:
+
+* :func:`~.html5parser.parse`
+* :func:`~.html5parser.parseFragment`
+* :class:`~.html5parser.HTMLParser`
+* :func:`~.treebuilders.getTreeBuilder`
+* :func:`~.treewalkers.getTreeWalker`
+* :func:`~.serializer.serialize`
+"""
+
+from __future__ import absolute_import, division, unicode_literals
+
+from .html5parser import HTMLParser, parse, parseFragment
+from .treebuilders import getTreeBuilder
+from .treewalkers import getTreeWalker
+from .serializer import serialize
+
+__all__ = ["HTMLParser", "parse", "parseFragment", "getTreeBuilder",
+           "getTreeWalker", "serialize"]
+
+# this has to be at the top level, see how setup.py parses this
+#: Distribution version number.
+__version__ = "1.1" diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_ihatexml.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_ihatexml.py new file mode 100644 index 0000000..3ff803c --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_ihatexml.py @@ -0,0 +1,289 @@ +from __future__ import absolute_import, division, unicode_literals + +import re +import warnings + +from .constants import DataLossWarning + +baseChar = """ +[#x0041-#x005A] | [#x0061-#x007A] | [#x00C0-#x00D6] | [#x00D8-#x00F6] | +[#x00F8-#x00FF] | [#x0100-#x0131] | [#x0134-#x013E] | [#x0141-#x0148] | +[#x014A-#x017E] | [#x0180-#x01C3] | [#x01CD-#x01F0] | [#x01F4-#x01F5] | +[#x01FA-#x0217] | [#x0250-#x02A8] | [#x02BB-#x02C1] | #x0386 | +[#x0388-#x038A] | #x038C | [#x038E-#x03A1] | [#x03A3-#x03CE] | +[#x03D0-#x03D6] | #x03DA | #x03DC | #x03DE | #x03E0 | [#x03E2-#x03F3] | +[#x0401-#x040C] | [#x040E-#x044F] | [#x0451-#x045C] | [#x045E-#x0481] | +[#x0490-#x04C4] | [#x04C7-#x04C8] | [#x04CB-#x04CC] | [#x04D0-#x04EB] | +[#x04EE-#x04F5] | [#x04F8-#x04F9] | [#x0531-#x0556] | #x0559 | +[#x0561-#x0586] | [#x05D0-#x05EA] | [#x05F0-#x05F2] | [#x0621-#x063A] | +[#x0641-#x064A] | [#x0671-#x06B7] | [#x06BA-#x06BE] | [#x06C0-#x06CE] | +[#x06D0-#x06D3] | #x06D5 | [#x06E5-#x06E6] | [#x0905-#x0939] | #x093D | +[#x0958-#x0961] | [#x0985-#x098C] | [#x098F-#x0990] | [#x0993-#x09A8] | +[#x09AA-#x09B0] | #x09B2 | [#x09B6-#x09B9] | [#x09DC-#x09DD] | +[#x09DF-#x09E1] | [#x09F0-#x09F1] | [#x0A05-#x0A0A] | [#x0A0F-#x0A10] | +[#x0A13-#x0A28] | [#x0A2A-#x0A30] | [#x0A32-#x0A33] | [#x0A35-#x0A36] | +[#x0A38-#x0A39] | [#x0A59-#x0A5C] | #x0A5E | [#x0A72-#x0A74] | +[#x0A85-#x0A8B] | #x0A8D | [#x0A8F-#x0A91] | [#x0A93-#x0AA8] | +[#x0AAA-#x0AB0] | [#x0AB2-#x0AB3] | [#x0AB5-#x0AB9] | #x0ABD | #x0AE0 | +[#x0B05-#x0B0C] | [#x0B0F-#x0B10] | [#x0B13-#x0B28] | [#x0B2A-#x0B30] | +[#x0B32-#x0B33] | [#x0B36-#x0B39] | #x0B3D | [#x0B5C-#x0B5D] | +[#x0B5F-#x0B61] | [#x0B85-#x0B8A] | [#x0B8E-#x0B90] | [#x0B92-#x0B95] | +[#x0B99-#x0B9A] | #x0B9C | [#x0B9E-#x0B9F] | [#x0BA3-#x0BA4] | +[#x0BA8-#x0BAA] | [#x0BAE-#x0BB5] | [#x0BB7-#x0BB9] | [#x0C05-#x0C0C] | +[#x0C0E-#x0C10] | [#x0C12-#x0C28] | [#x0C2A-#x0C33] | [#x0C35-#x0C39] | +[#x0C60-#x0C61] | [#x0C85-#x0C8C] | [#x0C8E-#x0C90] | [#x0C92-#x0CA8] | +[#x0CAA-#x0CB3] | [#x0CB5-#x0CB9] | #x0CDE | [#x0CE0-#x0CE1] | +[#x0D05-#x0D0C] | [#x0D0E-#x0D10] | [#x0D12-#x0D28] | [#x0D2A-#x0D39] | +[#x0D60-#x0D61] | [#x0E01-#x0E2E] | #x0E30 | [#x0E32-#x0E33] | +[#x0E40-#x0E45] | [#x0E81-#x0E82] | #x0E84 | [#x0E87-#x0E88] | #x0E8A | +#x0E8D | [#x0E94-#x0E97] | [#x0E99-#x0E9F] | [#x0EA1-#x0EA3] | #x0EA5 | +#x0EA7 | [#x0EAA-#x0EAB] | [#x0EAD-#x0EAE] | #x0EB0 | [#x0EB2-#x0EB3] | +#x0EBD | [#x0EC0-#x0EC4] | [#x0F40-#x0F47] | [#x0F49-#x0F69] | +[#x10A0-#x10C5] | [#x10D0-#x10F6] | #x1100 | [#x1102-#x1103] | +[#x1105-#x1107] | #x1109 | [#x110B-#x110C] | [#x110E-#x1112] | #x113C | +#x113E | #x1140 | #x114C | #x114E | #x1150 | [#x1154-#x1155] | #x1159 | +[#x115F-#x1161] | #x1163 | #x1165 | #x1167 | #x1169 | [#x116D-#x116E] | +[#x1172-#x1173] | #x1175 | #x119E | #x11A8 | #x11AB | [#x11AE-#x11AF] | +[#x11B7-#x11B8] | #x11BA | [#x11BC-#x11C2] | #x11EB | #x11F0 | #x11F9 | +[#x1E00-#x1E9B] | [#x1EA0-#x1EF9] | [#x1F00-#x1F15] | [#x1F18-#x1F1D] | +[#x1F20-#x1F45] | [#x1F48-#x1F4D] | [#x1F50-#x1F57] | #x1F59 | #x1F5B | +#x1F5D | [#x1F5F-#x1F7D] | [#x1F80-#x1FB4] | [#x1FB6-#x1FBC] | #x1FBE | +[#x1FC2-#x1FC4] | [#x1FC6-#x1FCC] | [#x1FD0-#x1FD3] | [#x1FD6-#x1FDB] | +[#x1FE0-#x1FEC] | [#x1FF2-#x1FF4] | 
[#x1FF6-#x1FFC] | #x2126 | +[#x212A-#x212B] | #x212E | [#x2180-#x2182] | [#x3041-#x3094] | +[#x30A1-#x30FA] | [#x3105-#x312C] | [#xAC00-#xD7A3]""" + +ideographic = """[#x4E00-#x9FA5] | #x3007 | [#x3021-#x3029]""" + +combiningCharacter = """ +[#x0300-#x0345] | [#x0360-#x0361] | [#x0483-#x0486] | [#x0591-#x05A1] | +[#x05A3-#x05B9] | [#x05BB-#x05BD] | #x05BF | [#x05C1-#x05C2] | #x05C4 | +[#x064B-#x0652] | #x0670 | [#x06D6-#x06DC] | [#x06DD-#x06DF] | +[#x06E0-#x06E4] | [#x06E7-#x06E8] | [#x06EA-#x06ED] | [#x0901-#x0903] | +#x093C | [#x093E-#x094C] | #x094D | [#x0951-#x0954] | [#x0962-#x0963] | +[#x0981-#x0983] | #x09BC | #x09BE | #x09BF | [#x09C0-#x09C4] | +[#x09C7-#x09C8] | [#x09CB-#x09CD] | #x09D7 | [#x09E2-#x09E3] | #x0A02 | +#x0A3C | #x0A3E | #x0A3F | [#x0A40-#x0A42] | [#x0A47-#x0A48] | +[#x0A4B-#x0A4D] | [#x0A70-#x0A71] | [#x0A81-#x0A83] | #x0ABC | +[#x0ABE-#x0AC5] | [#x0AC7-#x0AC9] | [#x0ACB-#x0ACD] | [#x0B01-#x0B03] | +#x0B3C | [#x0B3E-#x0B43] | [#x0B47-#x0B48] | [#x0B4B-#x0B4D] | +[#x0B56-#x0B57] | [#x0B82-#x0B83] | [#x0BBE-#x0BC2] | [#x0BC6-#x0BC8] | +[#x0BCA-#x0BCD] | #x0BD7 | [#x0C01-#x0C03] | [#x0C3E-#x0C44] | +[#x0C46-#x0C48] | [#x0C4A-#x0C4D] | [#x0C55-#x0C56] | [#x0C82-#x0C83] | +[#x0CBE-#x0CC4] | [#x0CC6-#x0CC8] | [#x0CCA-#x0CCD] | [#x0CD5-#x0CD6] | +[#x0D02-#x0D03] | [#x0D3E-#x0D43] | [#x0D46-#x0D48] | [#x0D4A-#x0D4D] | +#x0D57 | #x0E31 | [#x0E34-#x0E3A] | [#x0E47-#x0E4E] | #x0EB1 | +[#x0EB4-#x0EB9] | [#x0EBB-#x0EBC] | [#x0EC8-#x0ECD] | [#x0F18-#x0F19] | +#x0F35 | #x0F37 | #x0F39 | #x0F3E | #x0F3F | [#x0F71-#x0F84] | +[#x0F86-#x0F8B] | [#x0F90-#x0F95] | #x0F97 | [#x0F99-#x0FAD] | +[#x0FB1-#x0FB7] | #x0FB9 | [#x20D0-#x20DC] | #x20E1 | [#x302A-#x302F] | +#x3099 | #x309A""" + +digit = """ +[#x0030-#x0039] | [#x0660-#x0669] | [#x06F0-#x06F9] | [#x0966-#x096F] | +[#x09E6-#x09EF] | [#x0A66-#x0A6F] | [#x0AE6-#x0AEF] | [#x0B66-#x0B6F] | +[#x0BE7-#x0BEF] | [#x0C66-#x0C6F] | [#x0CE6-#x0CEF] | [#x0D66-#x0D6F] | +[#x0E50-#x0E59] | [#x0ED0-#x0ED9] | [#x0F20-#x0F29]""" + +extender = """ +#x00B7 | #x02D0 | #x02D1 | #x0387 | #x0640 | #x0E46 | #x0EC6 | #x3005 | +#[#x3031-#x3035] | [#x309D-#x309E] | [#x30FC-#x30FE]""" + +letter = " | ".join([baseChar, ideographic]) + +# Without the +name = " | ".join([letter, digit, ".", "-", "_", combiningCharacter, + extender]) +nameFirst = " | ".join([letter, "_"]) + +reChar = re.compile(r"#x([\d|A-F]{4,4})") +reCharRange = re.compile(r"\[#x([\d|A-F]{4,4})-#x([\d|A-F]{4,4})\]") + + +def charStringToList(chars): + charRanges = [item.strip() for item in chars.split(" | ")] + rv = [] + for item in charRanges: + foundMatch = False + for regexp in (reChar, reCharRange): + match = regexp.match(item) + if match is not None: + rv.append([hexToInt(item) for item in match.groups()]) + if len(rv[-1]) == 1: + rv[-1] = rv[-1] * 2 + foundMatch = True + break + if not foundMatch: + assert len(item) == 1 + + rv.append([ord(item)] * 2) + rv = normaliseCharList(rv) + return rv + + +def normaliseCharList(charList): + charList = sorted(charList) + for item in charList: + assert item[1] >= item[0] + rv = [] + i = 0 + while i < len(charList): + j = 1 + rv.append(charList[i]) + while i + j < len(charList) and charList[i + j][0] <= rv[-1][1] + 1: + rv[-1][1] = charList[i + j][1] + j += 1 + i += j + return rv + + +# We don't really support characters above the BMP :( +max_unicode = int("FFFF", 16) + + +def missingRanges(charList): + rv = [] + if charList[0] != 0: + rv.append([0, charList[0][0] - 1]) + for i, item in enumerate(charList[:-1]): + rv.append([item[1] + 1, charList[i + 
1][0] - 1]) + if charList[-1][1] != max_unicode: + rv.append([charList[-1][1] + 1, max_unicode]) + return rv + + +def listToRegexpStr(charList): + rv = [] + for item in charList: + if item[0] == item[1]: + rv.append(escapeRegexp(chr(item[0]))) + else: + rv.append(escapeRegexp(chr(item[0])) + "-" + + escapeRegexp(chr(item[1]))) + return "[%s]" % "".join(rv) + + +def hexToInt(hex_str): + return int(hex_str, 16) + + +def escapeRegexp(string): + specialCharacters = (".", "^", "$", "*", "+", "?", "{", "}", + "[", "]", "|", "(", ")", "-") + for char in specialCharacters: + string = string.replace(char, "\\" + char) + + return string + +# output from the above +nonXmlNameBMPRegexp = re.compile('[\x00-,/:-@\\[-\\^`\\{-\xb6\xb8-\xbf\xd7\xf7\u0132-\u0133\u013f-\u0140\u0149\u017f\u01c4-\u01cc\u01f1-\u01f3\u01f6-\u01f9\u0218-\u024f\u02a9-\u02ba\u02c2-\u02cf\u02d2-\u02ff\u0346-\u035f\u0362-\u0385\u038b\u038d\u03a2\u03cf\u03d7-\u03d9\u03db\u03dd\u03df\u03e1\u03f4-\u0400\u040d\u0450\u045d\u0482\u0487-\u048f\u04c5-\u04c6\u04c9-\u04ca\u04cd-\u04cf\u04ec-\u04ed\u04f6-\u04f7\u04fa-\u0530\u0557-\u0558\u055a-\u0560\u0587-\u0590\u05a2\u05ba\u05be\u05c0\u05c3\u05c5-\u05cf\u05eb-\u05ef\u05f3-\u0620\u063b-\u063f\u0653-\u065f\u066a-\u066f\u06b8-\u06b9\u06bf\u06cf\u06d4\u06e9\u06ee-\u06ef\u06fa-\u0900\u0904\u093a-\u093b\u094e-\u0950\u0955-\u0957\u0964-\u0965\u0970-\u0980\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09bb\u09bd\u09c5-\u09c6\u09c9-\u09ca\u09ce-\u09d6\u09d8-\u09db\u09de\u09e4-\u09e5\u09f2-\u0a01\u0a03-\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a3b\u0a3d\u0a43-\u0a46\u0a49-\u0a4a\u0a4e-\u0a58\u0a5d\u0a5f-\u0a65\u0a75-\u0a80\u0a84\u0a8c\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abb\u0ac6\u0aca\u0ace-\u0adf\u0ae1-\u0ae5\u0af0-\u0b00\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34-\u0b35\u0b3a-\u0b3b\u0b44-\u0b46\u0b49-\u0b4a\u0b4e-\u0b55\u0b58-\u0b5b\u0b5e\u0b62-\u0b65\u0b70-\u0b81\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bb6\u0bba-\u0bbd\u0bc3-\u0bc5\u0bc9\u0bce-\u0bd6\u0bd8-\u0be6\u0bf0-\u0c00\u0c04\u0c0d\u0c11\u0c29\u0c34\u0c3a-\u0c3d\u0c45\u0c49\u0c4e-\u0c54\u0c57-\u0c5f\u0c62-\u0c65\u0c70-\u0c81\u0c84\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cbd\u0cc5\u0cc9\u0cce-\u0cd4\u0cd7-\u0cdd\u0cdf\u0ce2-\u0ce5\u0cf0-\u0d01\u0d04\u0d0d\u0d11\u0d29\u0d3a-\u0d3d\u0d44-\u0d45\u0d49\u0d4e-\u0d56\u0d58-\u0d5f\u0d62-\u0d65\u0d70-\u0e00\u0e2f\u0e3b-\u0e3f\u0e4f\u0e5a-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eaf\u0eba\u0ebe-\u0ebf\u0ec5\u0ec7\u0ece-\u0ecf\u0eda-\u0f17\u0f1a-\u0f1f\u0f2a-\u0f34\u0f36\u0f38\u0f3a-\u0f3d\u0f48\u0f6a-\u0f70\u0f85\u0f8c-\u0f8f\u0f96\u0f98\u0fae-\u0fb0\u0fb8\u0fba-\u109f\u10c6-\u10cf\u10f7-\u10ff\u1101\u1104\u1108\u110a\u110d\u1113-\u113b\u113d\u113f\u1141-\u114b\u114d\u114f\u1151-\u1153\u1156-\u1158\u115a-\u115e\u1162\u1164\u1166\u1168\u116a-\u116c\u116f-\u1171\u1174\u1176-\u119d\u119f-\u11a7\u11a9-\u11aa\u11ac-\u11ad\u11b0-\u11b6\u11b9\u11bb\u11c3-\u11ea\u11ec-\u11ef\u11f1-\u11f8\u11fa-\u1dff\u1e9c-\u1e9f\u1efa-\u1eff\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fbd\u1fbf-\u1fc1\u1fc5\u1fcd-\u1fcf\u1fd4-\u1fd5\u1fdc-\u1fdf\u1fed-\u1ff1\u1ff5\u1ffd-\u20cf\u20dd-\u20e0\u20e2-\u2125\u2127-\u2129\u212c-\u212d\u212f-\u217f\u2183-\u3004\u3006\u3008-\u3020\u3030\u3036-\u3040\u3095-\u3098\u309b-\u309c\u309f-\u30a0\u30fb\u30ff-\u3104\u312d-\u4dff\u9fa6-\uabff\ud7a4-\uffff]') # noqa + 
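+# Sketch of how the precompiled pattern above is assumed to have been
+# generated from the W3C character classes (the result is kept inline so the
+# expensive generation does not run on import):
+#
+#     nameChars = charStringToList(name)
+#     nonXmlNameBMPRegexp = re.compile(listToRegexpStr(missingRanges(nameChars)))
+#
+# For instance, charStringToList("[#x0041-#x005A] | #x005F") evaluates to
+# [[65, 90], [95, 95]], i.e. the letters A-Z plus the underscore.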
+nonXmlNameFirstBMPRegexp = re.compile('[\x00-@\\[-\\^`\\{-\xbf\xd7\xf7\u0132-\u0133\u013f-\u0140\u0149\u017f\u01c4-\u01cc\u01f1-\u01f3\u01f6-\u01f9\u0218-\u024f\u02a9-\u02ba\u02c2-\u0385\u0387\u038b\u038d\u03a2\u03cf\u03d7-\u03d9\u03db\u03dd\u03df\u03e1\u03f4-\u0400\u040d\u0450\u045d\u0482-\u048f\u04c5-\u04c6\u04c9-\u04ca\u04cd-\u04cf\u04ec-\u04ed\u04f6-\u04f7\u04fa-\u0530\u0557-\u0558\u055a-\u0560\u0587-\u05cf\u05eb-\u05ef\u05f3-\u0620\u063b-\u0640\u064b-\u0670\u06b8-\u06b9\u06bf\u06cf\u06d4\u06d6-\u06e4\u06e7-\u0904\u093a-\u093c\u093e-\u0957\u0962-\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09db\u09de\u09e2-\u09ef\u09f2-\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a58\u0a5d\u0a5f-\u0a71\u0a75-\u0a84\u0a8c\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abc\u0abe-\u0adf\u0ae1-\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34-\u0b35\u0b3a-\u0b3c\u0b3e-\u0b5b\u0b5e\u0b62-\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bb6\u0bba-\u0c04\u0c0d\u0c11\u0c29\u0c34\u0c3a-\u0c5f\u0c62-\u0c84\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cdd\u0cdf\u0ce2-\u0d04\u0d0d\u0d11\u0d29\u0d3a-\u0d5f\u0d62-\u0e00\u0e2f\u0e31\u0e34-\u0e3f\u0e46-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eaf\u0eb1\u0eb4-\u0ebc\u0ebe-\u0ebf\u0ec5-\u0f3f\u0f48\u0f6a-\u109f\u10c6-\u10cf\u10f7-\u10ff\u1101\u1104\u1108\u110a\u110d\u1113-\u113b\u113d\u113f\u1141-\u114b\u114d\u114f\u1151-\u1153\u1156-\u1158\u115a-\u115e\u1162\u1164\u1166\u1168\u116a-\u116c\u116f-\u1171\u1174\u1176-\u119d\u119f-\u11a7\u11a9-\u11aa\u11ac-\u11ad\u11b0-\u11b6\u11b9\u11bb\u11c3-\u11ea\u11ec-\u11ef\u11f1-\u11f8\u11fa-\u1dff\u1e9c-\u1e9f\u1efa-\u1eff\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fbd\u1fbf-\u1fc1\u1fc5\u1fcd-\u1fcf\u1fd4-\u1fd5\u1fdc-\u1fdf\u1fed-\u1ff1\u1ff5\u1ffd-\u2125\u2127-\u2129\u212c-\u212d\u212f-\u217f\u2183-\u3006\u3008-\u3020\u302a-\u3040\u3095-\u30a0\u30fb-\u3104\u312d-\u4dff\u9fa6-\uabff\ud7a4-\uffff]') # noqa + +# Simpler things +nonPubidCharRegexp = re.compile("[^\x20\x0D\x0Aa-zA-Z0-9\\-'()+,./:=?;!*#@$_%]") + + +class InfosetFilter(object): + replacementRegexp = re.compile(r"U[\dA-F]{5,5}") + + def __init__(self, + dropXmlnsLocalName=False, + dropXmlnsAttrNs=False, + preventDoubleDashComments=False, + preventDashAtCommentEnd=False, + replaceFormFeedCharacters=True, + preventSingleQuotePubid=False): + + self.dropXmlnsLocalName = dropXmlnsLocalName + self.dropXmlnsAttrNs = dropXmlnsAttrNs + + self.preventDoubleDashComments = preventDoubleDashComments + self.preventDashAtCommentEnd = preventDashAtCommentEnd + + self.replaceFormFeedCharacters = replaceFormFeedCharacters + + self.preventSingleQuotePubid = preventSingleQuotePubid + + self.replaceCache = {} + + def coerceAttribute(self, name, namespace=None): + if self.dropXmlnsLocalName and name.startswith("xmlns:"): + warnings.warn("Attributes cannot begin with xmlns", DataLossWarning) + return None + elif (self.dropXmlnsAttrNs and + namespace == "http://www.w3.org/2000/xmlns/"): + warnings.warn("Attributes cannot be in the xml namespace", DataLossWarning) + return None + else: + return self.toXmlName(name) + + def coerceElement(self, name): + return self.toXmlName(name) + + def coerceComment(self, data): + if self.preventDoubleDashComments: + while "--" in data: + warnings.warn("Comments cannot contain adjacent dashes", DataLossWarning) + data = data.replace("--", "- -") + if data.endswith("-"): + 
warnings.warn("Comments cannot end in a dash", DataLossWarning) + data += " " + return data + + def coerceCharacters(self, data): + if self.replaceFormFeedCharacters: + for _ in range(data.count("\x0C")): + warnings.warn("Text cannot contain U+000C", DataLossWarning) + data = data.replace("\x0C", " ") + # Other non-xml characters + return data + + def coercePubid(self, data): + dataOutput = data + for char in nonPubidCharRegexp.findall(data): + warnings.warn("Coercing non-XML pubid", DataLossWarning) + replacement = self.getReplacementCharacter(char) + dataOutput = dataOutput.replace(char, replacement) + if self.preventSingleQuotePubid and dataOutput.find("'") >= 0: + warnings.warn("Pubid cannot contain single quote", DataLossWarning) + dataOutput = dataOutput.replace("'", self.getReplacementCharacter("'")) + return dataOutput + + def toXmlName(self, name): + nameFirst = name[0] + nameRest = name[1:] + m = nonXmlNameFirstBMPRegexp.match(nameFirst) + if m: + warnings.warn("Coercing non-XML name: %s" % name, DataLossWarning) + nameFirstOutput = self.getReplacementCharacter(nameFirst) + else: + nameFirstOutput = nameFirst + + nameRestOutput = nameRest + replaceChars = set(nonXmlNameBMPRegexp.findall(nameRest)) + for char in replaceChars: + warnings.warn("Coercing non-XML name: %s" % name, DataLossWarning) + replacement = self.getReplacementCharacter(char) + nameRestOutput = nameRestOutput.replace(char, replacement) + return nameFirstOutput + nameRestOutput + + def getReplacementCharacter(self, char): + if char in self.replaceCache: + replacement = self.replaceCache[char] + else: + replacement = self.escapeChar(char) + return replacement + + def fromXmlName(self, name): + for item in set(self.replacementRegexp.findall(name)): + name = name.replace(item, self.unescapeChar(item)) + return name + + def escapeChar(self, char): + replacement = "U%05X" % ord(char) + self.replaceCache[char] = replacement + return replacement + + def unescapeChar(self, charcode): + return chr(int(charcode[1:], 16)) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_inputstream.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_inputstream.py new file mode 100644 index 0000000..e0bb376 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_inputstream.py @@ -0,0 +1,918 @@ +from __future__ import absolute_import, division, unicode_literals + +from pip._vendor.six import text_type +from pip._vendor.six.moves import http_client, urllib + +import codecs +import re +from io import BytesIO, StringIO + +from pip._vendor import webencodings + +from .constants import EOF, spaceCharacters, asciiLetters, asciiUppercase +from .constants import _ReparseException +from . 
import _utils + +# Non-unicode versions of constants for use in the pre-parser +spaceCharactersBytes = frozenset([item.encode("ascii") for item in spaceCharacters]) +asciiLettersBytes = frozenset([item.encode("ascii") for item in asciiLetters]) +asciiUppercaseBytes = frozenset([item.encode("ascii") for item in asciiUppercase]) +spacesAngleBrackets = spaceCharactersBytes | frozenset([b">", b"<"]) + + +invalid_unicode_no_surrogate = "[\u0001-\u0008\u000B\u000E-\u001F\u007F-\u009F\uFDD0-\uFDEF\uFFFE\uFFFF\U0001FFFE\U0001FFFF\U0002FFFE\U0002FFFF\U0003FFFE\U0003FFFF\U0004FFFE\U0004FFFF\U0005FFFE\U0005FFFF\U0006FFFE\U0006FFFF\U0007FFFE\U0007FFFF\U0008FFFE\U0008FFFF\U0009FFFE\U0009FFFF\U000AFFFE\U000AFFFF\U000BFFFE\U000BFFFF\U000CFFFE\U000CFFFF\U000DFFFE\U000DFFFF\U000EFFFE\U000EFFFF\U000FFFFE\U000FFFFF\U0010FFFE\U0010FFFF]" # noqa + +if _utils.supports_lone_surrogates: + # Use one extra step of indirection and create surrogates with + # eval. Not using this indirection would introduce an illegal + # unicode literal on platforms not supporting such lone + # surrogates. + assert invalid_unicode_no_surrogate[-1] == "]" and invalid_unicode_no_surrogate.count("]") == 1 + invalid_unicode_re = re.compile(invalid_unicode_no_surrogate[:-1] + + eval('"\\uD800-\\uDFFF"') + # pylint:disable=eval-used + "]") +else: + invalid_unicode_re = re.compile(invalid_unicode_no_surrogate) + +non_bmp_invalid_codepoints = {0x1FFFE, 0x1FFFF, 0x2FFFE, 0x2FFFF, 0x3FFFE, + 0x3FFFF, 0x4FFFE, 0x4FFFF, 0x5FFFE, 0x5FFFF, + 0x6FFFE, 0x6FFFF, 0x7FFFE, 0x7FFFF, 0x8FFFE, + 0x8FFFF, 0x9FFFE, 0x9FFFF, 0xAFFFE, 0xAFFFF, + 0xBFFFE, 0xBFFFF, 0xCFFFE, 0xCFFFF, 0xDFFFE, + 0xDFFFF, 0xEFFFE, 0xEFFFF, 0xFFFFE, 0xFFFFF, + 0x10FFFE, 0x10FFFF} + +ascii_punctuation_re = re.compile("[\u0009-\u000D\u0020-\u002F\u003A-\u0040\u005C\u005B-\u0060\u007B-\u007E]") + +# Cache for charsUntil() +charsUntilRegEx = {} + + +class BufferedStream(object): + """Buffering for streams that do not have buffering of their own + + The buffer is implemented as a list of chunks on the assumption that + joining many strings will be slow since it is O(n**2) + """ + + def __init__(self, stream): + self.stream = stream + self.buffer = [] + self.position = [-1, 0] # chunk number, offset + + def tell(self): + pos = 0 + for chunk in self.buffer[:self.position[0]]: + pos += len(chunk) + pos += self.position[1] + return pos + + def seek(self, pos): + assert pos <= self._bufferedBytes() + offset = pos + i = 0 + while len(self.buffer[i]) < offset: + offset -= len(self.buffer[i]) + i += 1 + self.position = [i, offset] + + def read(self, bytes): + if not self.buffer: + return self._readStream(bytes) + elif (self.position[0] == len(self.buffer) and + self.position[1] == len(self.buffer[-1])): + return self._readStream(bytes) + else: + return self._readFromBuffer(bytes) + + def _bufferedBytes(self): + return sum([len(item) for item in self.buffer]) + + def _readStream(self, bytes): + data = self.stream.read(bytes) + self.buffer.append(data) + self.position[0] += 1 + self.position[1] = len(data) + return data + + def _readFromBuffer(self, bytes): + remainingBytes = bytes + rv = [] + bufferIndex = self.position[0] + bufferOffset = self.position[1] + while bufferIndex < len(self.buffer) and remainingBytes != 0: + assert remainingBytes > 0 + bufferedData = self.buffer[bufferIndex] + + if remainingBytes <= len(bufferedData) - bufferOffset: + bytesToRead = remainingBytes + self.position = [bufferIndex, bufferOffset + bytesToRead] + else: + bytesToRead = len(bufferedData) - bufferOffset + 
self.position = [bufferIndex, len(bufferedData)] + bufferIndex += 1 + rv.append(bufferedData[bufferOffset:bufferOffset + bytesToRead]) + remainingBytes -= bytesToRead + + bufferOffset = 0 + + if remainingBytes: + rv.append(self._readStream(remainingBytes)) + + return b"".join(rv) + + +def HTMLInputStream(source, **kwargs): + # Work around Python bug #20007: read(0) closes the connection. + # http://bugs.python.org/issue20007 + if (isinstance(source, http_client.HTTPResponse) or + # Also check for addinfourl wrapping HTTPResponse + (isinstance(source, urllib.response.addbase) and + isinstance(source.fp, http_client.HTTPResponse))): + isUnicode = False + elif hasattr(source, "read"): + isUnicode = isinstance(source.read(0), text_type) + else: + isUnicode = isinstance(source, text_type) + + if isUnicode: + encodings = [x for x in kwargs if x.endswith("_encoding")] + if encodings: + raise TypeError("Cannot set an encoding with a unicode input, set %r" % encodings) + + return HTMLUnicodeInputStream(source, **kwargs) + else: + return HTMLBinaryInputStream(source, **kwargs) + + +class HTMLUnicodeInputStream(object): + """Provides a unicode stream of characters to the HTMLTokenizer. + + This class takes care of character encoding and removing or replacing + incorrect byte-sequences and also provides column and line tracking. + + """ + + _defaultChunkSize = 10240 + + def __init__(self, source): + """Initialises the HTMLInputStream. + + HTMLInputStream(source, [encoding]) -> Normalized stream from source + for use by html5lib. + + source can be either a file-object, local filename or a string. + + The optional encoding parameter must be a string that indicates + the encoding. If specified, that encoding will be used, + regardless of any BOM or later declaration (such as in a meta + element) + + """ + + if not _utils.supports_lone_surrogates: + # Such platforms will have already checked for such + # surrogate errors, so no need to do this checking. + self.reportCharacterErrors = None + elif len("\U0010FFFF") == 1: + self.reportCharacterErrors = self.characterErrorsUCS4 + else: + self.reportCharacterErrors = self.characterErrorsUCS2 + + # List of where new lines occur + self.newLines = [0] + + self.charEncoding = (lookupEncoding("utf-8"), "certain") + self.dataStream = self.openStream(source) + + self.reset() + + def reset(self): + self.chunk = "" + self.chunkSize = 0 + self.chunkOffset = 0 + self.errors = [] + + # number of (complete) lines in previous chunks + self.prevNumLines = 0 + # number of columns in the last line of the previous chunk + self.prevNumCols = 0 + + # Deal with CR LF and surrogates split over chunk boundaries + self._bufferedCharacter = None + + def openStream(self, source): + """Produces a file object from source. + + source can be either a file object, local filename or a string. 
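+
+        Illustrative example: a plain string such as "some text" is wrapped
+        in a StringIO, while an object that already has a read() method is
+        used as-is.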
+ + """ + # Already a file object + if hasattr(source, 'read'): + stream = source + else: + stream = StringIO(source) + + return stream + + def _position(self, offset): + chunk = self.chunk + nLines = chunk.count('\n', 0, offset) + positionLine = self.prevNumLines + nLines + lastLinePos = chunk.rfind('\n', 0, offset) + if lastLinePos == -1: + positionColumn = self.prevNumCols + offset + else: + positionColumn = offset - (lastLinePos + 1) + return (positionLine, positionColumn) + + def position(self): + """Returns (line, col) of the current position in the stream.""" + line, col = self._position(self.chunkOffset) + return (line + 1, col) + + def char(self): + """ Read one character from the stream or queue if available. Return + EOF when EOF is reached. + """ + # Read a new chunk from the input stream if necessary + if self.chunkOffset >= self.chunkSize: + if not self.readChunk(): + return EOF + + chunkOffset = self.chunkOffset + char = self.chunk[chunkOffset] + self.chunkOffset = chunkOffset + 1 + + return char + + def readChunk(self, chunkSize=None): + if chunkSize is None: + chunkSize = self._defaultChunkSize + + self.prevNumLines, self.prevNumCols = self._position(self.chunkSize) + + self.chunk = "" + self.chunkSize = 0 + self.chunkOffset = 0 + + data = self.dataStream.read(chunkSize) + + # Deal with CR LF and surrogates broken across chunks + if self._bufferedCharacter: + data = self._bufferedCharacter + data + self._bufferedCharacter = None + elif not data: + # We have no more data, bye-bye stream + return False + + if len(data) > 1: + lastv = ord(data[-1]) + if lastv == 0x0D or 0xD800 <= lastv <= 0xDBFF: + self._bufferedCharacter = data[-1] + data = data[:-1] + + if self.reportCharacterErrors: + self.reportCharacterErrors(data) + + # Replace invalid characters + data = data.replace("\r\n", "\n") + data = data.replace("\r", "\n") + + self.chunk = data + self.chunkSize = len(data) + + return True + + def characterErrorsUCS4(self, data): + for _ in range(len(invalid_unicode_re.findall(data))): + self.errors.append("invalid-codepoint") + + def characterErrorsUCS2(self, data): + # Someone picked the wrong compile option + # You lose + skip = False + for match in invalid_unicode_re.finditer(data): + if skip: + continue + codepoint = ord(match.group()) + pos = match.start() + # Pretty sure there should be endianness issues here + if _utils.isSurrogatePair(data[pos:pos + 2]): + # We have a surrogate pair! + char_val = _utils.surrogatePairToCodepoint(data[pos:pos + 2]) + if char_val in non_bmp_invalid_codepoints: + self.errors.append("invalid-codepoint") + skip = True + elif (codepoint >= 0xD800 and codepoint <= 0xDFFF and + pos == len(data) - 1): + self.errors.append("invalid-codepoint") + else: + skip = False + self.errors.append("invalid-codepoint") + + def charsUntil(self, characters, opposite=False): + """ Returns a string of characters from the stream up to but not + including any character in 'characters' or EOF. 'characters' must be + a container that supports the 'in' method and iteration over its + characters. 
+ """ + + # Use a cache of regexps to find the required characters + try: + chars = charsUntilRegEx[(characters, opposite)] + except KeyError: + if __debug__: + for c in characters: + assert(ord(c) < 128) + regex = "".join(["\\x%02x" % ord(c) for c in characters]) + if not opposite: + regex = "^%s" % regex + chars = charsUntilRegEx[(characters, opposite)] = re.compile("[%s]+" % regex) + + rv = [] + + while True: + # Find the longest matching prefix + m = chars.match(self.chunk, self.chunkOffset) + if m is None: + # If nothing matched, and it wasn't because we ran out of chunk, + # then stop + if self.chunkOffset != self.chunkSize: + break + else: + end = m.end() + # If not the whole chunk matched, return everything + # up to the part that didn't match + if end != self.chunkSize: + rv.append(self.chunk[self.chunkOffset:end]) + self.chunkOffset = end + break + # If the whole remainder of the chunk matched, + # use it all and read the next chunk + rv.append(self.chunk[self.chunkOffset:]) + if not self.readChunk(): + # Reached EOF + break + + r = "".join(rv) + return r + + def unget(self, char): + # Only one character is allowed to be ungotten at once - it must + # be consumed again before any further call to unget + if char is not EOF: + if self.chunkOffset == 0: + # unget is called quite rarely, so it's a good idea to do + # more work here if it saves a bit of work in the frequently + # called char and charsUntil. + # So, just prepend the ungotten character onto the current + # chunk: + self.chunk = char + self.chunk + self.chunkSize += 1 + else: + self.chunkOffset -= 1 + assert self.chunk[self.chunkOffset] == char + + +class HTMLBinaryInputStream(HTMLUnicodeInputStream): + """Provides a unicode stream of characters to the HTMLTokenizer. + + This class takes care of character encoding and removing or replacing + incorrect byte-sequences and also provides column and line tracking. + + """ + + def __init__(self, source, override_encoding=None, transport_encoding=None, + same_origin_parent_encoding=None, likely_encoding=None, + default_encoding="windows-1252", useChardet=True): + """Initialises the HTMLInputStream. + + HTMLInputStream(source, [encoding]) -> Normalized stream from source + for use by html5lib. + + source can be either a file-object, local filename or a string. + + The optional encoding parameter must be a string that indicates + the encoding. 
If specified, that encoding will be used, + regardless of any BOM or later declaration (such as in a meta + element) + + """ + # Raw Stream - for unicode objects this will encode to utf-8 and set + # self.charEncoding as appropriate + self.rawStream = self.openStream(source) + + HTMLUnicodeInputStream.__init__(self, self.rawStream) + + # Encoding Information + # Number of bytes to use when looking for a meta element with + # encoding information + self.numBytesMeta = 1024 + # Number of bytes to use when using detecting encoding using chardet + self.numBytesChardet = 100 + # Things from args + self.override_encoding = override_encoding + self.transport_encoding = transport_encoding + self.same_origin_parent_encoding = same_origin_parent_encoding + self.likely_encoding = likely_encoding + self.default_encoding = default_encoding + + # Determine encoding + self.charEncoding = self.determineEncoding(useChardet) + assert self.charEncoding[0] is not None + + # Call superclass + self.reset() + + def reset(self): + self.dataStream = self.charEncoding[0].codec_info.streamreader(self.rawStream, 'replace') + HTMLUnicodeInputStream.reset(self) + + def openStream(self, source): + """Produces a file object from source. + + source can be either a file object, local filename or a string. + + """ + # Already a file object + if hasattr(source, 'read'): + stream = source + else: + stream = BytesIO(source) + + try: + stream.seek(stream.tell()) + except Exception: + stream = BufferedStream(stream) + + return stream + + def determineEncoding(self, chardet=True): + # BOMs take precedence over everything + # This will also read past the BOM if present + charEncoding = self.detectBOM(), "certain" + if charEncoding[0] is not None: + return charEncoding + + # If we've been overridden, we've been overridden + charEncoding = lookupEncoding(self.override_encoding), "certain" + if charEncoding[0] is not None: + return charEncoding + + # Now check the transport layer + charEncoding = lookupEncoding(self.transport_encoding), "certain" + if charEncoding[0] is not None: + return charEncoding + + # Look for meta elements with encoding information + charEncoding = self.detectEncodingMeta(), "tentative" + if charEncoding[0] is not None: + return charEncoding + + # Parent document encoding + charEncoding = lookupEncoding(self.same_origin_parent_encoding), "tentative" + if charEncoding[0] is not None and not charEncoding[0].name.startswith("utf-16"): + return charEncoding + + # "likely" encoding + charEncoding = lookupEncoding(self.likely_encoding), "tentative" + if charEncoding[0] is not None: + return charEncoding + + # Guess with chardet, if available + if chardet: + try: + from pip._vendor.chardet.universaldetector import UniversalDetector + except ImportError: + pass + else: + buffers = [] + detector = UniversalDetector() + while not detector.done: + buffer = self.rawStream.read(self.numBytesChardet) + assert isinstance(buffer, bytes) + if not buffer: + break + buffers.append(buffer) + detector.feed(buffer) + detector.close() + encoding = lookupEncoding(detector.result['encoding']) + self.rawStream.seek(0) + if encoding is not None: + return encoding, "tentative" + + # Try the default encoding + charEncoding = lookupEncoding(self.default_encoding), "tentative" + if charEncoding[0] is not None: + return charEncoding + + # Fallback to html5lib's default if even that hasn't worked + return lookupEncoding("windows-1252"), "tentative" + + def changeEncoding(self, newEncoding): + assert self.charEncoding[1] != "certain" + 
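+        # Illustrative flow (assumed scenario): the tree builder meets a late
+        # meta charset declaration while the current encoding is still only
+        # "tentative"; if the looked-up codec differs from the current one,
+        # the raw stream is rewound below and _ReparseException makes the
+        # parser start over with the new encoding.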
newEncoding = lookupEncoding(newEncoding) + if newEncoding is None: + return + if newEncoding.name in ("utf-16be", "utf-16le"): + newEncoding = lookupEncoding("utf-8") + assert newEncoding is not None + elif newEncoding == self.charEncoding[0]: + self.charEncoding = (self.charEncoding[0], "certain") + else: + self.rawStream.seek(0) + self.charEncoding = (newEncoding, "certain") + self.reset() + raise _ReparseException("Encoding changed from %s to %s" % (self.charEncoding[0], newEncoding)) + + def detectBOM(self): + """Attempts to detect at BOM at the start of the stream. If + an encoding can be determined from the BOM return the name of the + encoding otherwise return None""" + bomDict = { + codecs.BOM_UTF8: 'utf-8', + codecs.BOM_UTF16_LE: 'utf-16le', codecs.BOM_UTF16_BE: 'utf-16be', + codecs.BOM_UTF32_LE: 'utf-32le', codecs.BOM_UTF32_BE: 'utf-32be' + } + + # Go to beginning of file and read in 4 bytes + string = self.rawStream.read(4) + assert isinstance(string, bytes) + + # Try detecting the BOM using bytes from the string + encoding = bomDict.get(string[:3]) # UTF-8 + seek = 3 + if not encoding: + # Need to detect UTF-32 before UTF-16 + encoding = bomDict.get(string) # UTF-32 + seek = 4 + if not encoding: + encoding = bomDict.get(string[:2]) # UTF-16 + seek = 2 + + # Set the read position past the BOM if one was found, otherwise + # set it to the start of the stream + if encoding: + self.rawStream.seek(seek) + return lookupEncoding(encoding) + else: + self.rawStream.seek(0) + return None + + def detectEncodingMeta(self): + """Report the encoding declared by the meta element + """ + buffer = self.rawStream.read(self.numBytesMeta) + assert isinstance(buffer, bytes) + parser = EncodingParser(buffer) + self.rawStream.seek(0) + encoding = parser.getEncoding() + + if encoding is not None and encoding.name in ("utf-16be", "utf-16le"): + encoding = lookupEncoding("utf-8") + + return encoding + + +class EncodingBytes(bytes): + """String-like object with an associated position and various extra methods + If the position is ever greater than the string length then an exception is + raised""" + def __new__(self, value): + assert isinstance(value, bytes) + return bytes.__new__(self, value.lower()) + + def __init__(self, value): + # pylint:disable=unused-argument + self._position = -1 + + def __iter__(self): + return self + + def __next__(self): + p = self._position = self._position + 1 + if p >= len(self): + raise StopIteration + elif p < 0: + raise TypeError + return self[p:p + 1] + + def next(self): + # Py2 compat + return self.__next__() + + def previous(self): + p = self._position + if p >= len(self): + raise StopIteration + elif p < 0: + raise TypeError + self._position = p = p - 1 + return self[p:p + 1] + + def setPosition(self, position): + if self._position >= len(self): + raise StopIteration + self._position = position + + def getPosition(self): + if self._position >= len(self): + raise StopIteration + if self._position >= 0: + return self._position + else: + return None + + position = property(getPosition, setPosition) + + def getCurrentByte(self): + return self[self.position:self.position + 1] + + currentByte = property(getCurrentByte) + + def skip(self, chars=spaceCharactersBytes): + """Skip past a list of characters""" + p = self.position # use property for the error-checking + while p < len(self): + c = self[p:p + 1] + if c not in chars: + self._position = p + return c + p += 1 + self._position = p + return None + + def skipUntil(self, chars): + p = self.position + while p < len(self): 
+            c = self[p:p + 1]
+            if c in chars:
+                self._position = p
+                return c
+            p += 1
+        self._position = p
+        return None
+
+    def matchBytes(self, bytes):
+        """Look for a sequence of bytes at the start of a string. If the bytes
+        are found return True and advance the position to the byte after the
+        match. Otherwise return False and leave the position alone"""
+        rv = self.startswith(bytes, self.position)
+        if rv:
+            self.position += len(bytes)
+        return rv
+
+    def jumpTo(self, bytes):
+        """Look for the next sequence of bytes matching a given sequence. If
+        a match is found advance the position to the last byte of the match"""
+        try:
+            self._position = self.index(bytes, self.position) + len(bytes) - 1
+        except ValueError:
+            raise StopIteration
+        return True
+
+
+class EncodingParser(object):
+    """Mini parser for detecting character encoding from meta elements"""
+
+    def __init__(self, data):
+        """string - the data to work on for encoding detection"""
+        self.data = EncodingBytes(data)
+        self.encoding = None
+
+    def getEncoding(self):
+        if b"<meta" not in self.data:
+            self.encoding = None
+        else:
+            methodDispatch = (
+                (b"<!--", self.handleComment),
+                (b"<meta", self.handleMeta),
+                (b"</", self.handlePossibleEndTag),
+                (b"<!", self.handleOther),
+                (b"<?", self.handleOther),
+                (b"<", self.handlePossibleStartTag))
+            for _ in self.data:
+                keepParsing = True
+                try:
+                    self.data.jumpTo(b"<")
+                except StopIteration:
+                    break
+                for key, method in methodDispatch:
+                    if self.data.matchBytes(key):
+                        try:
+                            keepParsing = method()
+                            break
+                        except StopIteration:
+                            keepParsing = False
+                            break
+                if not keepParsing:
+                    break
+
+        return self.encoding
+
+    def handleComment(self):
+        """Skip over comments"""
+        return self.data.jumpTo(b"-->")
+
+    def handleMeta(self):
+        if self.data.currentByte not in spaceCharactersBytes:
+            # if we have <meta not followed by a space so just keep
+            # going
+            return True
+        # We have a valid meta element we want to search for attributes
+        hasPragma = False
+        pendingEncoding = None
+        while True:
+            # Try to find the next attribute after the current position
+            attr = self.getAttribute()
+            if attr is None:
+                return True
+            else:
+                if attr[0] == b"http-equiv":
+                    hasPragma = attr[1] == b"content-type"
+                    if hasPragma and pendingEncoding is not None:
+                        self.encoding = pendingEncoding
+                        return False
+                elif attr[0] == b"charset":
+                    tentativeEncoding = attr[1]
+                    codec = lookupEncoding(tentativeEncoding)
+                    if codec is not None:
+                        self.encoding = codec
+                        return False
+                elif attr[0] == b"content":
+                    contentParser = ContentAttrParser(EncodingBytes(attr[1]))
+                    tentativeEncoding = contentParser.parse()
+                    if tentativeEncoding is not None:
+                        codec = lookupEncoding(tentativeEncoding)
+                        if codec is not None:
+                            if hasPragma:
+                                self.encoding = codec
+                                return False
+                            else:
+                                pendingEncoding = codec
+
+    def handlePossibleStartTag(self):
+        return self.handlePossibleTag(False)
+
+    def handlePossibleEndTag(self):
+        next(self.data)
+        return self.handlePossibleTag(True)
+
+    def handlePossibleTag(self, endTag):
+        data = self.data
+        if data.currentByte not in asciiLettersBytes:
+            # If the next byte is not an ascii letter either ignore this
+            # fragment (possible start tag case) or treat it according to
+            # handleOther
+            if endTag:
+                data.previous()
+                self.handleOther()
+            return True
+
+        c = data.skipUntil(spacesAngleBrackets)
+        if c == b"<":
+            # return to the first step in the overall "two step" algorithm
+            # reprocessing the < byte
+            data.previous()
+        else:
+            # Read all attributes
+            attr = self.getAttribute()
+            while attr is not None:
+                attr = self.getAttribute()
+        return True
+
+    def handleOther(self):
+        return self.data.jumpTo(b">")
+
+    def getAttribute(self):
+        """Return a name,value pair for the next attribute in the stream,
+        if one is found, or None"""
+        data = self.data
+        # Step 1 (skip chars)
+        c = data.skip(spaceCharactersBytes | frozenset([b"/"]))
+        assert c is None or len(c) == 1
+        # Step 2
+        if c in (b">", None):
+            return None
+        # Step 3
+        attrName = []
+        attrValue = []
+        # Step 4 attribute name
+        while True:
+            if c == b"=" and attrName:
+                break
+            elif c in spaceCharactersBytes:
+                # Step 6!
+                c = data.skip()
+                break
+            elif c in (b"/", b">"):
+                return b"".join(attrName), b""
+            elif c in asciiUppercaseBytes:
+                attrName.append(c.lower())
+            elif c is None:
+                return None
+            else:
+                attrName.append(c)
+            # Step 5
+            c = next(data)
+        # Step 7
+        if c != b"=":
+            data.previous()
+            return b"".join(attrName), b""
+        # Step 8
+        next(data)
+        # Step 9
+        c = data.skip()
+        # Step 10
+        if c in (b"'", b'"'):
+            # 10.1
+            quoteChar = c
+            while True:
+                # 10.2
+                c = next(data)
+                # 10.3
+                if c == quoteChar:
+                    next(data)
+                    return b"".join(attrName), b"".join(attrValue)
+                # 10.4
+                elif c in asciiUppercaseBytes:
+                    attrValue.append(c.lower())
+                # 10.5
+                else:
+                    attrValue.append(c)
+        elif c == b">":
+            return b"".join(attrName), b""
+        elif c in asciiUppercaseBytes:
+            attrValue.append(c.lower())
+        elif c is None:
+            return None
+        else:
+            attrValue.append(c)
+        # Step 11
+        while True:
+            c = next(data)
+            if c in spacesAngleBrackets:
+                return b"".join(attrName), b"".join(attrValue)
+            elif c in asciiUppercaseBytes:
+                attrValue.append(c.lower())
+            elif c is None:
+                return None
+            else:
+                attrValue.append(c)
+
+
+class ContentAttrParser(object):
+    def __init__(self, data):
+        assert isinstance(data, bytes)
+        self.data = data
+
+    def parse(self):
+        try:
+            # Check if the attr name is charset
+            # otherwise return
+            self.data.jumpTo(b"charset")
+            self.data.position += 1
+            self.data.skip()
+            if not self.data.currentByte == b"=":
+                # If there is no = sign keep looking for attrs
+                return None
+            self.data.position += 1
+            self.data.skip()
+            # Look for an encoding between matching quote marks
+            if self.data.currentByte in (b'"', b"'"):
+                quoteMark = self.data.currentByte
+                self.data.position += 1
+                oldPosition = self.data.position
+                if self.data.jumpTo(quoteMark):
+                    return self.data[oldPosition:self.data.position]
+                else:
+                    return None
+            else:
+                # Unquoted value
+                oldPosition = self.data.position
+                try:
self.data.skipUntil(spaceCharactersBytes) + return self.data[oldPosition:self.data.position] + except StopIteration: + # Return the whole remaining value + return self.data[oldPosition:] + except StopIteration: + return None + + +def lookupEncoding(encoding): + """Return the python codec name corresponding to an encoding or None if the + string doesn't correspond to a valid encoding.""" + if isinstance(encoding, bytes): + try: + encoding = encoding.decode("ascii") + except UnicodeDecodeError: + return None + + if encoding is not None: + try: + return webencodings.lookup(encoding) + except AttributeError: + return None + else: + return None diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_tokenizer.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_tokenizer.py new file mode 100644 index 0000000..5f00253 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_tokenizer.py @@ -0,0 +1,1735 @@ +from __future__ import absolute_import, division, unicode_literals + +from pip._vendor.six import unichr as chr + +from collections import deque, OrderedDict +from sys import version_info + +from .constants import spaceCharacters +from .constants import entities +from .constants import asciiLetters, asciiUpper2Lower +from .constants import digits, hexDigits, EOF +from .constants import tokenTypes, tagTokenTypes +from .constants import replacementCharacters + +from ._inputstream import HTMLInputStream + +from ._trie import Trie + +entitiesTrie = Trie(entities) + +if version_info >= (3, 7): + attributeMap = dict +else: + attributeMap = OrderedDict + + +class HTMLTokenizer(object): + """ This class takes care of tokenizing HTML. + + * self.currentToken + Holds the token that is currently being processed. + + * self.state + Holds a reference to the method to be invoked... XXX + + * self.stream + Points to HTMLInputStream object. + """ + + def __init__(self, stream, parser=None, **kwargs): + + self.stream = HTMLInputStream(stream, **kwargs) + self.parser = parser + + # Setup the initial tokenizer state + self.escapeFlag = False + self.lastFourChars = [] + self.state = self.dataState + self.escape = False + + # The current token being created + self.currentToken = None + super(HTMLTokenizer, self).__init__() + + def __iter__(self): + """ This is where the magic happens. + + We do our usually processing through the states and when we have a token + to return we yield the token which pauses processing until the next token + is requested. + """ + self.tokenQueue = deque([]) + # Start processing. When EOF is reached self.state will return False + # instead of True and the loop will terminate. + while self.state(): + while self.stream.errors: + yield {"type": tokenTypes["ParseError"], "data": self.stream.errors.pop(0)} + while self.tokenQueue: + yield self.tokenQueue.popleft() + + def consumeNumberEntity(self, isHex): + """This function returns either U+FFFD or the character based on the + decimal or hexadecimal representation. It also discards ";" if present. + If not present self.tokenQueue.append({"type": tokenTypes["ParseError"]}) is invoked. + """ + + allowed = digits + radix = 10 + if isHex: + allowed = hexDigits + radix = 16 + + charStack = [] + + # Consume all the characters that are in range while making sure we + # don't hit an EOF. + c = self.stream.char() + while c in allowed and c is not EOF: + charStack.append(c) + c = self.stream.char() + + # Convert the set of characters consumed to an int. 
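+        # Illustrative walk-through (assumed input): for "&#x41;" the digits
+        # "41" are on charStack at this point, so int("41", 16) == 65 and "A"
+        # is returned; the decimal form "&#65;" yields the same character.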
+ charAsInt = int("".join(charStack), radix) + + # Certain characters get replaced with others + if charAsInt in replacementCharacters: + char = replacementCharacters[charAsInt] + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "illegal-codepoint-for-numeric-entity", + "datavars": {"charAsInt": charAsInt}}) + elif ((0xD800 <= charAsInt <= 0xDFFF) or + (charAsInt > 0x10FFFF)): + char = "\uFFFD" + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "illegal-codepoint-for-numeric-entity", + "datavars": {"charAsInt": charAsInt}}) + else: + # Should speed up this check somehow (e.g. move the set to a constant) + if ((0x0001 <= charAsInt <= 0x0008) or + (0x000E <= charAsInt <= 0x001F) or + (0x007F <= charAsInt <= 0x009F) or + (0xFDD0 <= charAsInt <= 0xFDEF) or + charAsInt in frozenset([0x000B, 0xFFFE, 0xFFFF, 0x1FFFE, + 0x1FFFF, 0x2FFFE, 0x2FFFF, 0x3FFFE, + 0x3FFFF, 0x4FFFE, 0x4FFFF, 0x5FFFE, + 0x5FFFF, 0x6FFFE, 0x6FFFF, 0x7FFFE, + 0x7FFFF, 0x8FFFE, 0x8FFFF, 0x9FFFE, + 0x9FFFF, 0xAFFFE, 0xAFFFF, 0xBFFFE, + 0xBFFFF, 0xCFFFE, 0xCFFFF, 0xDFFFE, + 0xDFFFF, 0xEFFFE, 0xEFFFF, 0xFFFFE, + 0xFFFFF, 0x10FFFE, 0x10FFFF])): + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": + "illegal-codepoint-for-numeric-entity", + "datavars": {"charAsInt": charAsInt}}) + try: + # Try/except needed as UCS-2 Python builds' unichar only works + # within the BMP. + char = chr(charAsInt) + except ValueError: + v = charAsInt - 0x10000 + char = chr(0xD800 | (v >> 10)) + chr(0xDC00 | (v & 0x3FF)) + + # Discard the ; if present. Otherwise, put it back on the queue and + # invoke parseError on parser. + if c != ";": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "numeric-entity-without-semicolon"}) + self.stream.unget(c) + + return char + + def consumeEntity(self, allowedChar=None, fromAttribute=False): + # Initialise to the default output for when no entity is matched + output = "&" + + charStack = [self.stream.char()] + if (charStack[0] in spaceCharacters or charStack[0] in (EOF, "<", "&") or + (allowedChar is not None and allowedChar == charStack[0])): + self.stream.unget(charStack[0]) + + elif charStack[0] == "#": + # Read the next character to see if it's hex or decimal + hex = False + charStack.append(self.stream.char()) + if charStack[-1] in ("x", "X"): + hex = True + charStack.append(self.stream.char()) + + # charStack[-1] should be the first digit + if (hex and charStack[-1] in hexDigits) \ + or (not hex and charStack[-1] in digits): + # At least one digit found, so consume the whole number + self.stream.unget(charStack[-1]) + output = self.consumeNumberEntity(hex) + else: + # No digits found + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "expected-numeric-entity"}) + self.stream.unget(charStack.pop()) + output = "&" + "".join(charStack) + + else: + # At this point in the process might have named entity. Entities + # are stored in the global variable "entities". + # + # Consume characters and compare to these to a substring of the + # entity names in the list until the substring no longer matches. + while (charStack[-1] is not EOF): + if not entitiesTrie.has_keys_with_prefix("".join(charStack)): + break + charStack.append(self.stream.char()) + + # At this point we have a string that starts with some characters + # that may match an entity + # Try to find the longest entity the string will match to take care + # of ¬i for instance. 
+ try: + entityName = entitiesTrie.longest_prefix("".join(charStack[:-1])) + entityLength = len(entityName) + except KeyError: + entityName = None + + if entityName is not None: + if entityName[-1] != ";": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "named-entity-without-semicolon"}) + if (entityName[-1] != ";" and fromAttribute and + (charStack[entityLength] in asciiLetters or + charStack[entityLength] in digits or + charStack[entityLength] == "=")): + self.stream.unget(charStack.pop()) + output = "&" + "".join(charStack) + else: + output = entities[entityName] + self.stream.unget(charStack.pop()) + output += "".join(charStack[entityLength:]) + else: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "expected-named-entity"}) + self.stream.unget(charStack.pop()) + output = "&" + "".join(charStack) + + if fromAttribute: + self.currentToken["data"][-1][1] += output + else: + if output in spaceCharacters: + tokenType = "SpaceCharacters" + else: + tokenType = "Characters" + self.tokenQueue.append({"type": tokenTypes[tokenType], "data": output}) + + def processEntityInAttribute(self, allowedChar): + """This method replaces the need for "entityInAttributeValueState". + """ + self.consumeEntity(allowedChar=allowedChar, fromAttribute=True) + + def emitCurrentToken(self): + """This method is a generic handler for emitting the tags. It also sets + the state to "data" because that's what's needed after a token has been + emitted. + """ + token = self.currentToken + # Add token to the queue to be yielded + if (token["type"] in tagTokenTypes): + token["name"] = token["name"].translate(asciiUpper2Lower) + if token["type"] == tokenTypes["StartTag"]: + raw = token["data"] + data = attributeMap(raw) + if len(raw) > len(data): + # we had some duplicated attribute, fix so first wins + data.update(raw[::-1]) + token["data"] = data + + if token["type"] == tokenTypes["EndTag"]: + if token["data"]: + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "attributes-in-end-tag"}) + if token["selfClosing"]: + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "self-closing-flag-on-end-tag"}) + self.tokenQueue.append(token) + self.state = self.dataState + + # Below are the various tokenizer states worked out. + def dataState(self): + data = self.stream.char() + if data == "&": + self.state = self.entityDataState + elif data == "<": + self.state = self.tagOpenState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.tokenQueue.append({"type": tokenTypes["Characters"], + "data": "\u0000"}) + elif data is EOF: + # Tokenization ends. + return False + elif data in spaceCharacters: + # Directly after emitting a token you switch back to the "data + # state". At that point spaceCharacters are important so they are + # emitted separately. 
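+            # Illustrative example (assumed input "<b> x"): right after the
+            # StartTag token for "b" is emitted, the leading space becomes its
+            # own SpaceCharacters token and "x" follows as a Characters token.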
+ self.tokenQueue.append({"type": tokenTypes["SpaceCharacters"], "data": + data + self.stream.charsUntil(spaceCharacters, True)}) + # No need to update lastFourChars here, since the first space will + # have already been appended to lastFourChars and will have broken + # any sequences + else: + chars = self.stream.charsUntil(("&", "<", "\u0000")) + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": + data + chars}) + return True + + def entityDataState(self): + self.consumeEntity() + self.state = self.dataState + return True + + def rcdataState(self): + data = self.stream.char() + if data == "&": + self.state = self.characterReferenceInRcdata + elif data == "<": + self.state = self.rcdataLessThanSignState + elif data == EOF: + # Tokenization ends. + return False + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.tokenQueue.append({"type": tokenTypes["Characters"], + "data": "\uFFFD"}) + elif data in spaceCharacters: + # Directly after emitting a token you switch back to the "data + # state". At that point spaceCharacters are important so they are + # emitted separately. + self.tokenQueue.append({"type": tokenTypes["SpaceCharacters"], "data": + data + self.stream.charsUntil(spaceCharacters, True)}) + # No need to update lastFourChars here, since the first space will + # have already been appended to lastFourChars and will have broken + # any sequences + else: + chars = self.stream.charsUntil(("&", "<", "\u0000")) + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": + data + chars}) + return True + + def characterReferenceInRcdata(self): + self.consumeEntity() + self.state = self.rcdataState + return True + + def rawtextState(self): + data = self.stream.char() + if data == "<": + self.state = self.rawtextLessThanSignState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.tokenQueue.append({"type": tokenTypes["Characters"], + "data": "\uFFFD"}) + elif data == EOF: + # Tokenization ends. + return False + else: + chars = self.stream.charsUntil(("<", "\u0000")) + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": + data + chars}) + return True + + def scriptDataState(self): + data = self.stream.char() + if data == "<": + self.state = self.scriptDataLessThanSignState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.tokenQueue.append({"type": tokenTypes["Characters"], + "data": "\uFFFD"}) + elif data == EOF: + # Tokenization ends. + return False + else: + chars = self.stream.charsUntil(("<", "\u0000")) + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": + data + chars}) + return True + + def plaintextState(self): + data = self.stream.char() + if data == EOF: + # Tokenization ends. 
+            return False
+        elif data == "\u0000":
+            self.tokenQueue.append({"type": tokenTypes["ParseError"],
+                                    "data": "invalid-codepoint"})
+            self.tokenQueue.append({"type": tokenTypes["Characters"],
+                                    "data": "\uFFFD"})
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
+                                    data + self.stream.charsUntil("\u0000")})
+        return True
+
+    def tagOpenState(self):
+        data = self.stream.char()
+        if data == "!":
+            self.state = self.markupDeclarationOpenState
+        elif data == "/":
+            self.state = self.closeTagOpenState
+        elif data in asciiLetters:
+            self.currentToken = {"type": tokenTypes["StartTag"],
+                                 "name": data, "data": [],
+                                 "selfClosing": False,
+                                 "selfClosingAcknowledged": False}
+            self.state = self.tagNameState
+        elif data == ">":
+            # XXX In theory it could be something besides a tag name. But
+            # do we really care?
+            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
+                                    "expected-tag-name-but-got-right-bracket"})
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<>"})
+            self.state = self.dataState
+        elif data == "?":
+            # XXX In theory it could be something besides a tag name. But
+            # do we really care?
+            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
+                                    "expected-tag-name-but-got-question-mark"})
+            self.stream.unget(data)
+            self.state = self.bogusCommentState
+        else:
+            # XXX
+            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
+                                    "expected-tag-name"})
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
+            self.stream.unget(data)
+            self.state = self.dataState
+        return True
+
+    def closeTagOpenState(self):
+        data = self.stream.char()
+        if data in asciiLetters:
+            self.currentToken = {"type": tokenTypes["EndTag"], "name": data,
+                                 "data": [], "selfClosing": False}
+            self.state = self.tagNameState
+        elif data == ">":
+            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
+                                    "expected-closing-tag-but-got-right-bracket"})
+            self.state = self.dataState
+        elif data is EOF:
+            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
+                                    "expected-closing-tag-but-got-eof"})
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
+            self.state = self.dataState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
+                                    "expected-closing-tag-but-got-char",
+                                    "datavars": {"data": data}})
+            self.stream.unget(data)
+            self.state = self.bogusCommentState
+        return True
+
+    def tagNameState(self):
+        data = self.stream.char()
+        if data in spaceCharacters:
+            self.state = self.beforeAttributeNameState
+        elif data == ">":
+            self.emitCurrentToken()
+        elif data is EOF:
+            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
+                                    "eof-in-tag-name"})
+            self.state = self.dataState
+        elif data == "/":
+            self.state = self.selfClosingStartTagState
+        elif data == "\u0000":
+            self.tokenQueue.append({"type": tokenTypes["ParseError"],
+                                    "data": "invalid-codepoint"})
+            self.currentToken["name"] += "\uFFFD"
+        else:
+            self.currentToken["name"] += data
+            # (Don't use charsUntil here, because tag names are
+            # very short and it's faster to not do anything fancy)
+        return True
+
+    def rcdataLessThanSignState(self):
+        data = self.stream.char()
+        if data == "/":
+            self.temporaryBuffer = ""
+            self.state = self.rcdataEndTagOpenState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
+            self.stream.unget(data)
+            self.state = self.rcdataState
+        return True
+
+    def rcdataEndTagOpenState(self):
+        data = self.stream.char()
+        if data in asciiLetters:
+            self.temporaryBuffer += data
+            self.state = self.rcdataEndTagNameState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
+            self.stream.unget(data)
+            self.state = self.rcdataState
+        return True
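# [Editor's aside - illustrative sketch, not part of the vendored file.]
# The rcdata/rawtext/script *EndTagNameState methods that follow all gate
# on the same "appropriate end tag" test: the letters buffered after "</"
# must match the name of the current start tag token, case-insensitively,
# or the sequence stays character data (e.g. "</titl" inside a title
# element is just text). The test in isolation, with stand-in variables
# for the tokenizer's state:

current_token = {"type": "StartTag", "name": "TITLE"}  # last open start tag
temporary_buffer = "title"                             # letters after "</"

appropriate = (current_token and
               current_token["name"].lower() == temporary_buffer.lower())
assert appropriate  # only then may an EndTag token be built and emitted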
+    def rcdataEndTagNameState(self):
+        appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
+        data = self.stream.char()
+        if data in spaceCharacters and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.state = self.beforeAttributeNameState
+        elif data == "/" and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.state = self.selfClosingStartTagState
+        elif data == ">" and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.emitCurrentToken()
+            self.state = self.dataState
+        elif data in asciiLetters:
+            self.temporaryBuffer += data
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"],
+                                    "data": "</" + self.temporaryBuffer})
+            self.stream.unget(data)
+            self.state = self.rcdataState
+        return True
+
+    def rawtextLessThanSignState(self):
+        data = self.stream.char()
+        if data == "/":
+            self.temporaryBuffer = ""
+            self.state = self.rawtextEndTagOpenState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
+            self.stream.unget(data)
+            self.state = self.rawtextState
+        return True
+
+    def rawtextEndTagOpenState(self):
+        data = self.stream.char()
+        if data in asciiLetters:
+            self.temporaryBuffer += data
+            self.state = self.rawtextEndTagNameState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
+            self.stream.unget(data)
+            self.state = self.rawtextState
+        return True
+
+    def rawtextEndTagNameState(self):
+        appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
+        data = self.stream.char()
+        if data in spaceCharacters and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.state = self.beforeAttributeNameState
+        elif data == "/" and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.state = self.selfClosingStartTagState
+        elif data == ">" and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.emitCurrentToken()
+            self.state = self.dataState
+        elif data in asciiLetters:
+            self.temporaryBuffer += data
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"],
+                                    "data": "</" + self.temporaryBuffer})
+            self.stream.unget(data)
+            self.state = self.rawtextState
+        return True
+
+    def scriptDataLessThanSignState(self):
+        data = self.stream.char()
+        if data == "/":
+            self.temporaryBuffer = ""
+            self.state = self.scriptDataEndTagOpenState
+        elif data == "!":
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<!"})
+            self.state = self.scriptDataEscapeStartState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
+            self.stream.unget(data)
+            self.state = self.scriptDataState
+        return True
+
+    def scriptDataEndTagOpenState(self):
+        data = self.stream.char()
+        if data in asciiLetters:
+            self.temporaryBuffer += data
+            self.state = self.scriptDataEndTagNameState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
+            self.stream.unget(data)
+            self.state = self.scriptDataState
+        return True
+
+    def scriptDataEndTagNameState(self):
+        appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
+        data = self.stream.char()
+        if data in spaceCharacters and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.state = self.beforeAttributeNameState
+        elif data == "/" and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.state = self.selfClosingStartTagState
+        elif data == ">" and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.emitCurrentToken()
+            self.state = self.dataState
+        elif data in asciiLetters:
+            self.temporaryBuffer += data
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"],
+                                    "data": "</" + self.temporaryBuffer})
+            self.stream.unget(data)
+            self.state = self.scriptDataState
+        return True
+
+    def scriptDataEscapeStartState(self):
+        data = self.stream.char()
+        if data == "-":
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
+            self.state = self.scriptDataEscapeStartDashState
+        else:
+            self.stream.unget(data)
+            self.state = self.scriptDataState
+        return True
+
+    def scriptDataEscapeStartDashState(self):
+        data = self.stream.char()
+        if data == "-":
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
+            self.state = self.scriptDataEscapedDashDashState
+        else:
+            self.stream.unget(data)
+            self.state = self.scriptDataState
+        return True
+
+    def scriptDataEscapedState(self):
+        data = self.stream.char()
+        if data == "-":
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
+            self.state = self.scriptDataEscapedDashState
+        elif data == "<":
+            self.state = self.scriptDataEscapedLessThanSignState
+        elif data == "\u0000":
+            self.tokenQueue.append({"type": tokenTypes["ParseError"],
+                                    "data": "invalid-codepoint"})
+            self.tokenQueue.append({"type": tokenTypes["Characters"],
+                                    "data": "\uFFFD"})
+        elif data == EOF:
+            self.state = self.dataState
+        else:
+            chars = self.stream.charsUntil(("<", "-", "\u0000"))
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
+                                    data + chars})
+        return True
+
+    def scriptDataEscapedDashState(self):
+        data = self.stream.char()
+        if data == "-":
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
+            self.state = self.scriptDataEscapedDashDashState
+        elif data == "<":
+            self.state = self.scriptDataEscapedLessThanSignState
+        elif data == "\u0000":
+            self.tokenQueue.append({"type": tokenTypes["ParseError"],
+                                    "data": "invalid-codepoint"})
+            self.tokenQueue.append({"type": tokenTypes["Characters"],
+                                    "data": "\uFFFD"})
+            self.state = self.scriptDataEscapedState
+        elif data == EOF:
+            self.state = self.dataState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
+            self.state = self.scriptDataEscapedState
+        return True
+
+    def scriptDataEscapedDashDashState(self):
+        data = self.stream.char()
+        if data == "-":
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
+        elif data == "<":
+            self.state = self.scriptDataEscapedLessThanSignState
+        elif data == ">":
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": ">"})
+            self.state = self.scriptDataState
+        elif data == "\u0000":
+            self.tokenQueue.append({"type": tokenTypes["ParseError"],
+                                    "data": "invalid-codepoint"})
+            self.tokenQueue.append({"type": tokenTypes["Characters"],
+                                    "data": "\uFFFD"})
+            self.state = self.scriptDataEscapedState
+        elif data == EOF:
+            self.state = self.dataState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
+            self.state = self.scriptDataEscapedState
+        return True
+
+    def scriptDataEscapedLessThanSignState(self):
+        data = self.stream.char()
+        if data == "/":
+            self.temporaryBuffer = ""
+            self.state = self.scriptDataEscapedEndTagOpenState
+        elif data in asciiLetters:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<" + data})
+            self.temporaryBuffer = data
+            self.state = self.scriptDataDoubleEscapeStartState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
+            self.stream.unget(data)
+            self.state = self.scriptDataEscapedState
+        return True
+
+    def scriptDataEscapedEndTagOpenState(self):
+        data = self.stream.char()
+        if data in asciiLetters:
+            self.temporaryBuffer = data
+            self.state = self.scriptDataEscapedEndTagNameState
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
+            self.stream.unget(data)
+            self.state = self.scriptDataEscapedState
+        return True
+
+    def scriptDataEscapedEndTagNameState(self):
+        appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
+        data = self.stream.char()
+        if data in spaceCharacters and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.state = self.beforeAttributeNameState
+        elif data == "/" and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.state = self.selfClosingStartTagState
+        elif data == ">" and appropriate:
+            self.currentToken = {"type": tokenTypes["EndTag"],
+                                 "name": self.temporaryBuffer,
+                                 "data": [], "selfClosing": False}
+            self.emitCurrentToken()
+            self.state = self.dataState
+        elif data in asciiLetters:
+            self.temporaryBuffer += data
+        else:
+            self.tokenQueue.append({"type": tokenTypes["Characters"],
+                                    "data": "</" + self.temporaryBuffer})
+            self.stream.unget(data)
+            self.state = self.scriptDataEscapedState
+        return True
+
+    def scriptDataDoubleEscapeStartState(self):
+        data = self.stream.char()
+        if data in (spaceCharacters | frozenset(("/", ">"))):
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
+            if self.temporaryBuffer.lower() == "script":
+                self.state = self.scriptDataDoubleEscapedState
+            else:
+                self.state = self.scriptDataEscapedState
+        elif data in asciiLetters:
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
+            self.temporaryBuffer += data
+        else:
+            self.stream.unget(data)
+            self.state = self.scriptDataEscapedState
+        return True
+
+    def scriptDataDoubleEscapedState(self):
+        data = self.stream.char()
+        if data == "-":
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
+            self.state = self.scriptDataDoubleEscapedDashState
+        elif data == "<":
+            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
+            self.state = self.scriptDataDoubleEscapedLessThanSignState
+        elif data == "\u0000":
+            self.tokenQueue.append({"type": tokenTypes["ParseError"],
+                                    "data": "invalid-codepoint"})
+            self.tokenQueue.append({"type": tokenTypes["Characters"],
+                                    "data": "\uFFFD"})
+        elif data == EOF:
+            self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-script-in-script"}) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data}) + return True + + def scriptDataDoubleEscapedDashState(self): + data = self.stream.char() + if data == "-": + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"}) + self.state = self.scriptDataDoubleEscapedDashDashState + elif data == "<": + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"}) + self.state = self.scriptDataDoubleEscapedLessThanSignState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.tokenQueue.append({"type": tokenTypes["Characters"], + "data": "\uFFFD"}) + self.state = self.scriptDataDoubleEscapedState + elif data == EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-script-in-script"}) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data}) + self.state = self.scriptDataDoubleEscapedState + return True + + def scriptDataDoubleEscapedDashDashState(self): + data = self.stream.char() + if data == "-": + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"}) + elif data == "<": + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"}) + self.state = self.scriptDataDoubleEscapedLessThanSignState + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": ">"}) + self.state = self.scriptDataState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.tokenQueue.append({"type": tokenTypes["Characters"], + "data": "\uFFFD"}) + self.state = self.scriptDataDoubleEscapedState + elif data == EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-script-in-script"}) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data}) + self.state = self.scriptDataDoubleEscapedState + return True + + def scriptDataDoubleEscapedLessThanSignState(self): + data = self.stream.char() + if data == "/": + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "/"}) + self.temporaryBuffer = "" + self.state = self.scriptDataDoubleEscapeEndState + else: + self.stream.unget(data) + self.state = self.scriptDataDoubleEscapedState + return True + + def scriptDataDoubleEscapeEndState(self): + data = self.stream.char() + if data in (spaceCharacters | frozenset(("/", ">"))): + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data}) + if self.temporaryBuffer.lower() == "script": + self.state = self.scriptDataEscapedState + else: + self.state = self.scriptDataDoubleEscapedState + elif data in asciiLetters: + self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data}) + self.temporaryBuffer += data + else: + self.stream.unget(data) + self.state = self.scriptDataDoubleEscapedState + return True + + def beforeAttributeNameState(self): + data = self.stream.char() + if data in spaceCharacters: + self.stream.charsUntil(spaceCharacters, True) + elif data in asciiLetters: + self.currentToken["data"].append([data, ""]) + self.state = self.attributeNameState + elif data == ">": + self.emitCurrentToken() + elif data == "/": + self.state = self.selfClosingStartTagState + elif data in ("'", '"', "=", "<"): + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "invalid-character-in-attribute-name"}) + 
self.currentToken["data"].append([data, ""]) + self.state = self.attributeNameState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"].append(["\uFFFD", ""]) + self.state = self.attributeNameState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "expected-attribute-name-but-got-eof"}) + self.state = self.dataState + else: + self.currentToken["data"].append([data, ""]) + self.state = self.attributeNameState + return True + + def attributeNameState(self): + data = self.stream.char() + leavingThisState = True + emitToken = False + if data == "=": + self.state = self.beforeAttributeValueState + elif data in asciiLetters: + self.currentToken["data"][-1][0] += data +\ + self.stream.charsUntil(asciiLetters, True) + leavingThisState = False + elif data == ">": + # XXX If we emit here the attributes are converted to a dict + # without being checked and when the code below runs we error + # because data is a dict not a list + emitToken = True + elif data in spaceCharacters: + self.state = self.afterAttributeNameState + elif data == "/": + self.state = self.selfClosingStartTagState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"][-1][0] += "\uFFFD" + leavingThisState = False + elif data in ("'", '"', "<"): + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": + "invalid-character-in-attribute-name"}) + self.currentToken["data"][-1][0] += data + leavingThisState = False + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "eof-in-attribute-name"}) + self.state = self.dataState + else: + self.currentToken["data"][-1][0] += data + leavingThisState = False + + if leavingThisState: + # Attributes are not dropped at this stage. That happens when the + # start tag token is emitted so values can still be safely appended + # to attributes, but we do want to report the parse error in time. 
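# [Editor's aside - illustrative sketch, not part of the vendored file.]
# As the comment above says, duplicates are only dropped when the start tag
# token is emitted: emitCurrentToken (earlier in this file) builds a mapping
# from the [name, value] pairs, where the last occurrence wins, then updates
# it with the reversed pair list so the first occurrence wins, as the HTML
# spec requires. A plain dict stands in here for the attributeMap it uses:

raw = [("class", "a"), ("class", "b")]  # the same attribute written twice
data = dict(raw)                        # {'class': 'b'} - last pair wins
data.update(raw[::-1])                  # {'class': 'a'} - first pair wins
assert data == {"class": "a"}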
+ self.currentToken["data"][-1][0] = ( + self.currentToken["data"][-1][0].translate(asciiUpper2Lower)) + for name, _ in self.currentToken["data"][:-1]: + if self.currentToken["data"][-1][0] == name: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "duplicate-attribute"}) + break + # XXX Fix for above XXX + if emitToken: + self.emitCurrentToken() + return True + + def afterAttributeNameState(self): + data = self.stream.char() + if data in spaceCharacters: + self.stream.charsUntil(spaceCharacters, True) + elif data == "=": + self.state = self.beforeAttributeValueState + elif data == ">": + self.emitCurrentToken() + elif data in asciiLetters: + self.currentToken["data"].append([data, ""]) + self.state = self.attributeNameState + elif data == "/": + self.state = self.selfClosingStartTagState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"].append(["\uFFFD", ""]) + self.state = self.attributeNameState + elif data in ("'", '"', "<"): + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "invalid-character-after-attribute-name"}) + self.currentToken["data"].append([data, ""]) + self.state = self.attributeNameState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "expected-end-of-tag-but-got-eof"}) + self.state = self.dataState + else: + self.currentToken["data"].append([data, ""]) + self.state = self.attributeNameState + return True + + def beforeAttributeValueState(self): + data = self.stream.char() + if data in spaceCharacters: + self.stream.charsUntil(spaceCharacters, True) + elif data == "\"": + self.state = self.attributeValueDoubleQuotedState + elif data == "&": + self.state = self.attributeValueUnQuotedState + self.stream.unget(data) + elif data == "'": + self.state = self.attributeValueSingleQuotedState + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "expected-attribute-value-but-got-right-bracket"}) + self.emitCurrentToken() + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"][-1][1] += "\uFFFD" + self.state = self.attributeValueUnQuotedState + elif data in ("=", "<", "`"): + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "equals-in-unquoted-attribute-value"}) + self.currentToken["data"][-1][1] += data + self.state = self.attributeValueUnQuotedState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "expected-attribute-value-but-got-eof"}) + self.state = self.dataState + else: + self.currentToken["data"][-1][1] += data + self.state = self.attributeValueUnQuotedState + return True + + def attributeValueDoubleQuotedState(self): + data = self.stream.char() + if data == "\"": + self.state = self.afterAttributeValueState + elif data == "&": + self.processEntityInAttribute('"') + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"][-1][1] += "\uFFFD" + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-attribute-value-double-quote"}) + self.state = self.dataState + else: + self.currentToken["data"][-1][1] += data +\ + self.stream.charsUntil(("\"", "&", "\u0000")) + return True + + def attributeValueSingleQuotedState(self): + data = self.stream.char() + if data == "'": + self.state = 
self.afterAttributeValueState + elif data == "&": + self.processEntityInAttribute("'") + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"][-1][1] += "\uFFFD" + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-attribute-value-single-quote"}) + self.state = self.dataState + else: + self.currentToken["data"][-1][1] += data +\ + self.stream.charsUntil(("'", "&", "\u0000")) + return True + + def attributeValueUnQuotedState(self): + data = self.stream.char() + if data in spaceCharacters: + self.state = self.beforeAttributeNameState + elif data == "&": + self.processEntityInAttribute(">") + elif data == ">": + self.emitCurrentToken() + elif data in ('"', "'", "=", "<", "`"): + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-character-in-unquoted-attribute-value"}) + self.currentToken["data"][-1][1] += data + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"][-1][1] += "\uFFFD" + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-attribute-value-no-quotes"}) + self.state = self.dataState + else: + self.currentToken["data"][-1][1] += data + self.stream.charsUntil( + frozenset(("&", ">", '"', "'", "=", "<", "`", "\u0000")) | spaceCharacters) + return True + + def afterAttributeValueState(self): + data = self.stream.char() + if data in spaceCharacters: + self.state = self.beforeAttributeNameState + elif data == ">": + self.emitCurrentToken() + elif data == "/": + self.state = self.selfClosingStartTagState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-EOF-after-attribute-value"}) + self.stream.unget(data) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-character-after-attribute-value"}) + self.stream.unget(data) + self.state = self.beforeAttributeNameState + return True + + def selfClosingStartTagState(self): + data = self.stream.char() + if data == ">": + self.currentToken["selfClosing"] = True + self.emitCurrentToken() + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": + "unexpected-EOF-after-solidus-in-tag"}) + self.stream.unget(data) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-character-after-solidus-in-tag"}) + self.stream.unget(data) + self.state = self.beforeAttributeNameState + return True + + def bogusCommentState(self): + # Make a new comment token and give it as value all the characters + # until the first > or EOF (charsUntil checks for EOF automatically) + # and emit it. + data = self.stream.charsUntil(">") + data = data.replace("\u0000", "\uFFFD") + self.tokenQueue.append( + {"type": tokenTypes["Comment"], "data": data}) + + # Eat the character directly after the bogus comment which is either a + # ">" or an EOF. 
+ self.stream.char() + self.state = self.dataState + return True + + def markupDeclarationOpenState(self): + charStack = [self.stream.char()] + if charStack[-1] == "-": + charStack.append(self.stream.char()) + if charStack[-1] == "-": + self.currentToken = {"type": tokenTypes["Comment"], "data": ""} + self.state = self.commentStartState + return True + elif charStack[-1] in ('d', 'D'): + matched = True + for expected in (('o', 'O'), ('c', 'C'), ('t', 'T'), + ('y', 'Y'), ('p', 'P'), ('e', 'E')): + charStack.append(self.stream.char()) + if charStack[-1] not in expected: + matched = False + break + if matched: + self.currentToken = {"type": tokenTypes["Doctype"], + "name": "", + "publicId": None, "systemId": None, + "correct": True} + self.state = self.doctypeState + return True + elif (charStack[-1] == "[" and + self.parser is not None and + self.parser.tree.openElements and + self.parser.tree.openElements[-1].namespace != self.parser.tree.defaultNamespace): + matched = True + for expected in ["C", "D", "A", "T", "A", "["]: + charStack.append(self.stream.char()) + if charStack[-1] != expected: + matched = False + break + if matched: + self.state = self.cdataSectionState + return True + + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "expected-dashes-or-doctype"}) + + while charStack: + self.stream.unget(charStack.pop()) + self.state = self.bogusCommentState + return True + + def commentStartState(self): + data = self.stream.char() + if data == "-": + self.state = self.commentStartDashState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"] += "\uFFFD" + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "incorrect-comment"}) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-comment"}) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["data"] += data + self.state = self.commentState + return True + + def commentStartDashState(self): + data = self.stream.char() + if data == "-": + self.state = self.commentEndState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"] += "-\uFFFD" + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "incorrect-comment"}) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-comment"}) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["data"] += "-" + data + self.state = self.commentState + return True + + def commentState(self): + data = self.stream.char() + if data == "-": + self.state = self.commentEndDashState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"] += "\uFFFD" + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "eof-in-comment"}) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["data"] += data + \ + self.stream.charsUntil(("-", "\u0000")) + return True + + def commentEndDashState(self): + data = self.stream.char() + if data == "-": + self.state = 
self.commentEndState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"] += "-\uFFFD" + self.state = self.commentState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-comment-end-dash"}) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["data"] += "-" + data + self.state = self.commentState + return True + + def commentEndState(self): + data = self.stream.char() + if data == ">": + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"] += "--\uFFFD" + self.state = self.commentState + elif data == "!": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-bang-after-double-dash-in-comment"}) + self.state = self.commentEndBangState + elif data == "-": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-dash-after-double-dash-in-comment"}) + self.currentToken["data"] += data + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-comment-double-dash"}) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + # XXX + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-comment"}) + self.currentToken["data"] += "--" + data + self.state = self.commentState + return True + + def commentEndBangState(self): + data = self.stream.char() + if data == ">": + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data == "-": + self.currentToken["data"] += "--!" + self.state = self.commentEndDashState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["data"] += "--!\uFFFD" + self.state = self.commentState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-comment-end-bang-state"}) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["data"] += "--!" 
+ data + self.state = self.commentState + return True + + def doctypeState(self): + data = self.stream.char() + if data in spaceCharacters: + self.state = self.beforeDoctypeNameState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "expected-doctype-name-but-got-eof"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "need-space-after-doctype"}) + self.stream.unget(data) + self.state = self.beforeDoctypeNameState + return True + + def beforeDoctypeNameState(self): + data = self.stream.char() + if data in spaceCharacters: + pass + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "expected-doctype-name-but-got-right-bracket"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["name"] = "\uFFFD" + self.state = self.doctypeNameState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "expected-doctype-name-but-got-eof"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["name"] = data + self.state = self.doctypeNameState + return True + + def doctypeNameState(self): + data = self.stream.char() + if data in spaceCharacters: + self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower) + self.state = self.afterDoctypeNameState + elif data == ">": + self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["name"] += "\uFFFD" + self.state = self.doctypeNameState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype-name"}) + self.currentToken["correct"] = False + self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["name"] += data + return True + + def afterDoctypeNameState(self): + data = self.stream.char() + if data in spaceCharacters: + pass + elif data == ">": + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + self.currentToken["correct"] = False + self.stream.unget(data) + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + if data in ("p", "P"): + matched = True + for expected in (("u", "U"), ("b", "B"), ("l", "L"), + ("i", "I"), ("c", "C")): + data = self.stream.char() + if data not in expected: + matched = False + break + if matched: + self.state = self.afterDoctypePublicKeywordState + return True + elif data in ("s", "S"): + matched = True + for expected in (("y", "Y"), ("s", "S"), ("t", "T"), + ("e", "E"), ("m", "M")): + data = self.stream.char() + if data not in expected: + matched = False + break + if matched: + self.state = self.afterDoctypeSystemKeywordState + return True + + # All the characters read before the current 'data' will be + # [a-zA-Z], 
so they're garbage in the bogus doctype and can be + # discarded; only the latest character might be '>' or EOF + # and needs to be ungetted + self.stream.unget(data) + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "expected-space-or-right-bracket-in-doctype", "datavars": + {"data": data}}) + self.currentToken["correct"] = False + self.state = self.bogusDoctypeState + + return True + + def afterDoctypePublicKeywordState(self): + data = self.stream.char() + if data in spaceCharacters: + self.state = self.beforeDoctypePublicIdentifierState + elif data in ("'", '"'): + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-doctype"}) + self.stream.unget(data) + self.state = self.beforeDoctypePublicIdentifierState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.stream.unget(data) + self.state = self.beforeDoctypePublicIdentifierState + return True + + def beforeDoctypePublicIdentifierState(self): + data = self.stream.char() + if data in spaceCharacters: + pass + elif data == "\"": + self.currentToken["publicId"] = "" + self.state = self.doctypePublicIdentifierDoubleQuotedState + elif data == "'": + self.currentToken["publicId"] = "" + self.state = self.doctypePublicIdentifierSingleQuotedState + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-end-of-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-doctype"}) + self.currentToken["correct"] = False + self.state = self.bogusDoctypeState + return True + + def doctypePublicIdentifierDoubleQuotedState(self): + data = self.stream.char() + if data == "\"": + self.state = self.afterDoctypePublicIdentifierState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["publicId"] += "\uFFFD" + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-end-of-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["publicId"] += data + return True + + def doctypePublicIdentifierSingleQuotedState(self): + data = self.stream.char() + if data == "'": + self.state = self.afterDoctypePublicIdentifierState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["publicId"] += "\uFFFD" + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-end-of-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + self.tokenQueue.append({"type": 
tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["publicId"] += data + return True + + def afterDoctypePublicIdentifierState(self): + data = self.stream.char() + if data in spaceCharacters: + self.state = self.betweenDoctypePublicAndSystemIdentifiersState + elif data == ">": + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data == '"': + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-doctype"}) + self.currentToken["systemId"] = "" + self.state = self.doctypeSystemIdentifierDoubleQuotedState + elif data == "'": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-doctype"}) + self.currentToken["systemId"] = "" + self.state = self.doctypeSystemIdentifierSingleQuotedState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-doctype"}) + self.currentToken["correct"] = False + self.state = self.bogusDoctypeState + return True + + def betweenDoctypePublicAndSystemIdentifiersState(self): + data = self.stream.char() + if data in spaceCharacters: + pass + elif data == ">": + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data == '"': + self.currentToken["systemId"] = "" + self.state = self.doctypeSystemIdentifierDoubleQuotedState + elif data == "'": + self.currentToken["systemId"] = "" + self.state = self.doctypeSystemIdentifierSingleQuotedState + elif data == EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-doctype"}) + self.currentToken["correct"] = False + self.state = self.bogusDoctypeState + return True + + def afterDoctypeSystemKeywordState(self): + data = self.stream.char() + if data in spaceCharacters: + self.state = self.beforeDoctypeSystemIdentifierState + elif data in ("'", '"'): + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-doctype"}) + self.stream.unget(data) + self.state = self.beforeDoctypeSystemIdentifierState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.stream.unget(data) + self.state = self.beforeDoctypeSystemIdentifierState + return True + + def beforeDoctypeSystemIdentifierState(self): + data = self.stream.char() + if data in spaceCharacters: + pass + elif data == "\"": + self.currentToken["systemId"] = "" + self.state = self.doctypeSystemIdentifierDoubleQuotedState + elif data == "'": + self.currentToken["systemId"] = "" + self.state = self.doctypeSystemIdentifierSingleQuotedState + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + 
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-doctype"}) + self.currentToken["correct"] = False + self.state = self.bogusDoctypeState + return True + + def doctypeSystemIdentifierDoubleQuotedState(self): + data = self.stream.char() + if data == "\"": + self.state = self.afterDoctypeSystemIdentifierState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["systemId"] += "\uFFFD" + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-end-of-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["systemId"] += data + return True + + def doctypeSystemIdentifierSingleQuotedState(self): + data = self.stream.char() + if data == "'": + self.state = self.afterDoctypeSystemIdentifierState + elif data == "\u0000": + self.tokenQueue.append({"type": tokenTypes["ParseError"], + "data": "invalid-codepoint"}) + self.currentToken["systemId"] += "\uFFFD" + elif data == ">": + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-end-of-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.currentToken["systemId"] += data + return True + + def afterDoctypeSystemIdentifierState(self): + data = self.stream.char() + if data in spaceCharacters: + pass + elif data == ">": + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "eof-in-doctype"}) + self.currentToken["correct"] = False + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + self.tokenQueue.append({"type": tokenTypes["ParseError"], "data": + "unexpected-char-in-doctype"}) + self.state = self.bogusDoctypeState + return True + + def bogusDoctypeState(self): + data = self.stream.char() + if data == ">": + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + elif data is EOF: + # XXX EMIT + self.stream.unget(data) + self.tokenQueue.append(self.currentToken) + self.state = self.dataState + else: + pass + return True + + def cdataSectionState(self): + data = [] + while True: + data.append(self.stream.charsUntil("]")) + data.append(self.stream.charsUntil(">")) + char = self.stream.char() + if char == EOF: + break + else: + assert char == ">" + if data[-1][-2:] == "]]": + data[-1] = data[-1][:-2] + break + else: + data.append(char) + + data = "".join(data) # pylint:disable=redefined-variable-type + # Deal with null here rather than in the parser + nullCount = data.count("\u0000") + if nullCount > 0: + for _ in range(nullCount): + self.tokenQueue.append({"type": tokenTypes["ParseError"], 
+ "data": "invalid-codepoint"}) + data = data.replace("\u0000", "\uFFFD") + if data: + self.tokenQueue.append({"type": tokenTypes["Characters"], + "data": data}) + self.state = self.dataState + return True diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/__init__.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/__init__.py new file mode 100644 index 0000000..07bad5d --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/__init__.py @@ -0,0 +1,5 @@ +from __future__ import absolute_import, division, unicode_literals + +from .py import Trie + +__all__ = ["Trie"] diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/_base.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/_base.py new file mode 100644 index 0000000..6b71975 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/_base.py @@ -0,0 +1,40 @@ +from __future__ import absolute_import, division, unicode_literals + +try: + from collections.abc import Mapping +except ImportError: # Python 2.7 + from collections import Mapping + + +class Trie(Mapping): + """Abstract base class for tries""" + + def keys(self, prefix=None): + # pylint:disable=arguments-differ + keys = super(Trie, self).keys() + + if prefix is None: + return set(keys) + + return {x for x in keys if x.startswith(prefix)} + + def has_keys_with_prefix(self, prefix): + for key in self.keys(): + if key.startswith(prefix): + return True + + return False + + def longest_prefix(self, prefix): + if prefix in self: + return prefix + + for i in range(1, len(prefix) + 1): + if prefix[:-i] in self: + return prefix[:-i] + + raise KeyError(prefix) + + def longest_prefix_item(self, prefix): + lprefix = self.longest_prefix(prefix) + return (lprefix, self[lprefix]) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/py.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/py.py new file mode 100644 index 0000000..c178b21 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_trie/py.py @@ -0,0 +1,67 @@ +from __future__ import absolute_import, division, unicode_literals +from pip._vendor.six import text_type + +from bisect import bisect_left + +from ._base import Trie as ABCTrie + + +class Trie(ABCTrie): + def __init__(self, data): + if not all(isinstance(x, text_type) for x in data.keys()): + raise TypeError("All keys must be strings") + + self._data = data + self._keys = sorted(data.keys()) + self._cachestr = "" + self._cachepoints = (0, len(data)) + + def __contains__(self, key): + return key in self._data + + def __len__(self): + return len(self._data) + + def __iter__(self): + return iter(self._data) + + def __getitem__(self, key): + return self._data[key] + + def keys(self, prefix=None): + if prefix is None or prefix == "" or not self._keys: + return set(self._keys) + + if prefix.startswith(self._cachestr): + lo, hi = self._cachepoints + start = i = bisect_left(self._keys, prefix, lo, hi) + else: + start = i = bisect_left(self._keys, prefix) + + keys = set() + if start == len(self._keys): + return keys + + while self._keys[i].startswith(prefix): + keys.add(self._keys[i]) + i += 1 + + self._cachestr = prefix + self._cachepoints = (start, i) + + return keys + + def has_keys_with_prefix(self, prefix): + if prefix in self._data: + return True + + if prefix.startswith(self._cachestr): + lo, hi = self._cachepoints + i = bisect_left(self._keys, prefix, lo, hi) + else: + i = 
bisect_left(self._keys, prefix) + + if i == len(self._keys): + return False + + return self._keys[i].startswith(prefix) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_utils.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_utils.py new file mode 100644 index 0000000..d7c4926 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/_utils.py @@ -0,0 +1,159 @@ +from __future__ import absolute_import, division, unicode_literals + +from types import ModuleType + +try: + from collections.abc import Mapping +except ImportError: + from collections import Mapping + +from pip._vendor.six import text_type, PY3 + +if PY3: + import xml.etree.ElementTree as default_etree +else: + try: + import xml.etree.cElementTree as default_etree + except ImportError: + import xml.etree.ElementTree as default_etree + + +__all__ = ["default_etree", "MethodDispatcher", "isSurrogatePair", + "surrogatePairToCodepoint", "moduleFactoryFactory", + "supports_lone_surrogates"] + + +# Platforms not supporting lone surrogates (\uD800-\uDFFF) should be +# caught by the below test. In general this would be any platform +# using UTF-16 as its encoding of unicode strings, such as +# Jython. This is because UTF-16 itself is based on the use of such +# surrogates, and there is no mechanism to further escape such +# escapes. +try: + _x = eval('"\\uD800"') # pylint:disable=eval-used + if not isinstance(_x, text_type): + # We need this with u"" because of http://bugs.jython.org/issue2039 + _x = eval('u"\\uD800"') # pylint:disable=eval-used + assert isinstance(_x, text_type) +except Exception: + supports_lone_surrogates = False +else: + supports_lone_surrogates = True + + +class MethodDispatcher(dict): + """Dict with 2 special properties: + + On initiation, keys that are lists, sets or tuples are converted to + multiple keys so accessing any one of the items in the original + list-like object returns the matching value + + md = MethodDispatcher({("foo", "bar"):"baz"}) + md["foo"] == "baz" + + A default value which can be set through the default attribute. 
+ """ + + def __init__(self, items=()): + _dictEntries = [] + for name, value in items: + if isinstance(name, (list, tuple, frozenset, set)): + for item in name: + _dictEntries.append((item, value)) + else: + _dictEntries.append((name, value)) + dict.__init__(self, _dictEntries) + assert len(self) == len(_dictEntries) + self.default = None + + def __getitem__(self, key): + return dict.get(self, key, self.default) + + def __get__(self, instance, owner=None): + return BoundMethodDispatcher(instance, self) + + +class BoundMethodDispatcher(Mapping): + """Wraps a MethodDispatcher, binding its return values to `instance`""" + def __init__(self, instance, dispatcher): + self.instance = instance + self.dispatcher = dispatcher + + def __getitem__(self, key): + # see https://docs.python.org/3/reference/datamodel.html#object.__get__ + # on a function, __get__ is used to bind a function to an instance as a bound method + return self.dispatcher[key].__get__(self.instance) + + def get(self, key, default): + if key in self.dispatcher: + return self[key] + else: + return default + + def __iter__(self): + return iter(self.dispatcher) + + def __len__(self): + return len(self.dispatcher) + + def __contains__(self, key): + return key in self.dispatcher + + +# Some utility functions to deal with weirdness around UCS2 vs UCS4 +# python builds + +def isSurrogatePair(data): + return (len(data) == 2 and + ord(data[0]) >= 0xD800 and ord(data[0]) <= 0xDBFF and + ord(data[1]) >= 0xDC00 and ord(data[1]) <= 0xDFFF) + + +def surrogatePairToCodepoint(data): + char_val = (0x10000 + (ord(data[0]) - 0xD800) * 0x400 + + (ord(data[1]) - 0xDC00)) + return char_val + +# Module Factory Factory (no, this isn't Java, I know) +# Here to stop this being duplicated all over the place. + + +def moduleFactoryFactory(factory): + moduleCache = {} + + def moduleFactory(baseModule, *args, **kwargs): + if isinstance(ModuleType.__name__, type("")): + name = "_%s_factory" % baseModule.__name__ + else: + name = b"_%s_factory" % baseModule.__name__ + + kwargs_tuple = tuple(kwargs.items()) + + try: + return moduleCache[name][args][kwargs_tuple] + except KeyError: + mod = ModuleType(name) + objs = factory(baseModule, *args, **kwargs) + mod.__dict__.update(objs) + if "name" not in moduleCache: + moduleCache[name] = {} + if "args" not in moduleCache[name]: + moduleCache[name][args] = {} + if "kwargs" not in moduleCache[name][args]: + moduleCache[name][args][kwargs_tuple] = {} + moduleCache[name][args][kwargs_tuple] = mod + return mod + + return moduleFactory + + +def memoize(func): + cache = {} + + def wrapped(*args, **kwargs): + key = (tuple(args), tuple(kwargs.items())) + if key not in cache: + cache[key] = func(*args, **kwargs) + return cache[key] + + return wrapped diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/constants.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/constants.py new file mode 100644 index 0000000..fe3e237 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/constants.py @@ -0,0 +1,2946 @@ +from __future__ import absolute_import, division, unicode_literals + +import string + +EOF = None + +E = { + "null-character": + "Null character in input stream, replaced with U+FFFD.", + "invalid-codepoint": + "Invalid codepoint in stream.", + "incorrectly-placed-solidus": + "Solidus (/) incorrectly placed in tag.", + "incorrect-cr-newline-entity": + "Incorrect CR newline entity, replaced with LF.", + "illegal-windows-1252-entity": + "Entity used with illegal number (windows-1252 
reference).",
+    "cant-convert-numeric-entity":
+        "Numeric entity couldn't be converted to character "
+        "(codepoint U+%(charAsInt)08x).",
+    "illegal-codepoint-for-numeric-entity":
+        "Numeric entity represents an illegal codepoint: "
+        "U+%(charAsInt)08x.",
+    "numeric-entity-without-semicolon":
+        "Numeric entity didn't end with ';'.",
+    "expected-numeric-entity-but-got-eof":
+        "Numeric entity expected. Got end of file instead.",
+    "expected-numeric-entity":
+        "Numeric entity expected but none found.",
+    "named-entity-without-semicolon":
+        "Named entity didn't end with ';'.",
+    "expected-named-entity":
+        "Named entity expected. Got none.",
+    "attributes-in-end-tag":
+        "End tag contains unexpected attributes.",
+    'self-closing-flag-on-end-tag':
+        "End tag contains unexpected self-closing flag.",
+    "expected-tag-name-but-got-right-bracket":
+        "Expected tag name. Got '>' instead.",
+    "expected-tag-name-but-got-question-mark":
+        "Expected tag name. Got '?' instead. (HTML doesn't "
+        "support processing instructions.)",
+    "expected-tag-name":
+        "Expected tag name. Got something else instead",
+    "expected-closing-tag-but-got-right-bracket":
+        "Expected closing tag. Got '>' instead. Ignoring '</>'.",
+    "expected-closing-tag-but-got-eof":
+        "Expected closing tag. Unexpected end of file.",
+    "expected-closing-tag-but-got-char":
+        "Expected closing tag. Unexpected character '%(data)s' found.",
+    "eof-in-tag-name":
+        "Unexpected end of file in the tag name.",
+    "expected-attribute-name-but-got-eof":
+        "Unexpected end of file. Expected attribute name instead.",
+    "eof-in-attribute-name":
+        "Unexpected end of file in attribute name.",
+    "invalid-character-in-attribute-name":
+        "Invalid character in attribute name",
+    "duplicate-attribute":
+        "Dropped duplicate attribute on tag.",
+    "expected-end-of-tag-name-but-got-eof":
+        "Unexpected end of file. Expected = or end of tag.",
+    "expected-attribute-value-but-got-eof":
+        "Unexpected end of file. Expected attribute value.",
+    "expected-attribute-value-but-got-right-bracket":
+        "Expected attribute value. Got '>' instead.",
+    'equals-in-unquoted-attribute-value':
+        "Unexpected = in unquoted attribute",
+    'unexpected-character-in-unquoted-attribute-value':
+        "Unexpected character in unquoted attribute",
+    "invalid-character-after-attribute-name":
+        "Unexpected character after attribute name.",
+    "unexpected-character-after-attribute-value":
+        "Unexpected character after attribute value.",
+    "eof-in-attribute-value-double-quote":
+        "Unexpected end of file in attribute value (\").",
+    "eof-in-attribute-value-single-quote":
+        "Unexpected end of file in attribute value (').",
+    "eof-in-attribute-value-no-quotes":
+        "Unexpected end of file in attribute value.",
+    "unexpected-EOF-after-solidus-in-tag":
+        "Unexpected end of file in tag. Expected >",
+    "unexpected-character-after-solidus-in-tag":
+        "Unexpected character after / in tag. Expected >",
+    "expected-dashes-or-doctype":
+        "Expected '--' or 'DOCTYPE'. Not found.",
+    "unexpected-bang-after-double-dash-in-comment":
+        "Unexpected !
after -- in comment", + "unexpected-space-after-double-dash-in-comment": + "Unexpected space after -- in comment", + "incorrect-comment": + "Incorrect comment.", + "eof-in-comment": + "Unexpected end of file in comment.", + "eof-in-comment-end-dash": + "Unexpected end of file in comment (-)", + "unexpected-dash-after-double-dash-in-comment": + "Unexpected '-' after '--' found in comment.", + "eof-in-comment-double-dash": + "Unexpected end of file in comment (--).", + "eof-in-comment-end-space-state": + "Unexpected end of file in comment.", + "eof-in-comment-end-bang-state": + "Unexpected end of file in comment.", + "unexpected-char-in-comment": + "Unexpected character in comment found.", + "need-space-after-doctype": + "No space after literal string 'DOCTYPE'.", + "expected-doctype-name-but-got-right-bracket": + "Unexpected > character. Expected DOCTYPE name.", + "expected-doctype-name-but-got-eof": + "Unexpected end of file. Expected DOCTYPE name.", + "eof-in-doctype-name": + "Unexpected end of file in DOCTYPE name.", + "eof-in-doctype": + "Unexpected end of file in DOCTYPE.", + "expected-space-or-right-bracket-in-doctype": + "Expected space or '>'. Got '%(data)s'", + "unexpected-end-of-doctype": + "Unexpected end of DOCTYPE.", + "unexpected-char-in-doctype": + "Unexpected character in DOCTYPE.", + "eof-in-innerhtml": + "XXX innerHTML EOF", + "unexpected-doctype": + "Unexpected DOCTYPE. Ignored.", + "non-html-root": + "html needs to be the first start tag.", + "expected-doctype-but-got-eof": + "Unexpected End of file. Expected DOCTYPE.", + "unknown-doctype": + "Erroneous DOCTYPE.", + "expected-doctype-but-got-chars": + "Unexpected non-space characters. Expected DOCTYPE.", + "expected-doctype-but-got-start-tag": + "Unexpected start tag (%(name)s). Expected DOCTYPE.", + "expected-doctype-but-got-end-tag": + "Unexpected end tag (%(name)s). Expected DOCTYPE.", + "end-tag-after-implied-root": + "Unexpected end tag (%(name)s) after the (implied) root element.", + "expected-named-closing-tag-but-got-eof": + "Unexpected end of file. Expected end tag (%(name)s).", + "two-heads-are-not-better-than-one": + "Unexpected start tag head in existing head. Ignored.", + "unexpected-end-tag": + "Unexpected end tag (%(name)s). Ignored.", + "unexpected-start-tag-out-of-my-head": + "Unexpected start tag (%(name)s) that can be in head. Moved.", + "unexpected-start-tag": + "Unexpected start tag (%(name)s).", + "missing-end-tag": + "Missing end tag (%(name)s).", + "missing-end-tags": + "Missing end tags (%(name)s).", + "unexpected-start-tag-implies-end-tag": + "Unexpected start tag (%(startName)s) " + "implies end tag (%(endName)s).", + "unexpected-start-tag-treated-as": + "Unexpected start tag (%(originalName)s). Treated as %(newName)s.", + "deprecated-tag": + "Unexpected start tag %(name)s. Don't use it!", + "unexpected-start-tag-ignored": + "Unexpected start tag %(name)s. Ignored.", + "expected-one-end-tag-but-got-another": + "Unexpected end tag (%(gotName)s). " + "Missing end tag (%(expectedName)s).", + "end-tag-too-early": + "End tag (%(name)s) seen too early. Expected other end tag.", + "end-tag-too-early-named": + "Unexpected end tag (%(gotName)s). Expected end tag (%(expectedName)s).", + "end-tag-too-early-ignored": + "End tag (%(name)s) seen too early. 
Ignored.", + "adoption-agency-1.1": + "End tag (%(name)s) violates step 1, " + "paragraph 1 of the adoption agency algorithm.", + "adoption-agency-1.2": + "End tag (%(name)s) violates step 1, " + "paragraph 2 of the adoption agency algorithm.", + "adoption-agency-1.3": + "End tag (%(name)s) violates step 1, " + "paragraph 3 of the adoption agency algorithm.", + "adoption-agency-4.4": + "End tag (%(name)s) violates step 4, " + "paragraph 4 of the adoption agency algorithm.", + "unexpected-end-tag-treated-as": + "Unexpected end tag (%(originalName)s). Treated as %(newName)s.", + "no-end-tag": + "This element (%(name)s) has no end tag.", + "unexpected-implied-end-tag-in-table": + "Unexpected implied end tag (%(name)s) in the table phase.", + "unexpected-implied-end-tag-in-table-body": + "Unexpected implied end tag (%(name)s) in the table body phase.", + "unexpected-char-implies-table-voodoo": + "Unexpected non-space characters in " + "table context caused voodoo mode.", + "unexpected-hidden-input-in-table": + "Unexpected input with type hidden in table context.", + "unexpected-form-in-table": + "Unexpected form in table context.", + "unexpected-start-tag-implies-table-voodoo": + "Unexpected start tag (%(name)s) in " + "table context caused voodoo mode.", + "unexpected-end-tag-implies-table-voodoo": + "Unexpected end tag (%(name)s) in " + "table context caused voodoo mode.", + "unexpected-cell-in-table-body": + "Unexpected table cell start tag (%(name)s) " + "in the table body phase.", + "unexpected-cell-end-tag": + "Got table cell end tag (%(name)s) " + "while required end tags are missing.", + "unexpected-end-tag-in-table-body": + "Unexpected end tag (%(name)s) in the table body phase. Ignored.", + "unexpected-implied-end-tag-in-table-row": + "Unexpected implied end tag (%(name)s) in the table row phase.", + "unexpected-end-tag-in-table-row": + "Unexpected end tag (%(name)s) in the table row phase. Ignored.", + "unexpected-select-in-select": + "Unexpected select start tag in the select phase " + "treated as select end tag.", + "unexpected-input-in-select": + "Unexpected input start tag in the select phase.", + "unexpected-start-tag-in-select": + "Unexpected start tag token (%(name)s in the select phase. " + "Ignored.", + "unexpected-end-tag-in-select": + "Unexpected end tag (%(name)s) in the select phase. Ignored.", + "unexpected-table-element-start-tag-in-select-in-table": + "Unexpected table element start tag (%(name)s) in the select in table phase.", + "unexpected-table-element-end-tag-in-select-in-table": + "Unexpected table element end tag (%(name)s) in the select in table phase.", + "unexpected-char-after-body": + "Unexpected non-space characters in the after body phase.", + "unexpected-start-tag-after-body": + "Unexpected start tag token (%(name)s)" + " in the after body phase.", + "unexpected-end-tag-after-body": + "Unexpected end tag token (%(name)s)" + " in the after body phase.", + "unexpected-char-in-frameset": + "Unexpected characters in the frameset phase. Characters ignored.", + "unexpected-start-tag-in-frameset": + "Unexpected start tag token (%(name)s)" + " in the frameset phase. Ignored.", + "unexpected-frameset-in-frameset-innerhtml": + "Unexpected end tag token (frameset) " + "in the frameset phase (innerHTML).", + "unexpected-end-tag-in-frameset": + "Unexpected end tag token (%(name)s)" + " in the frameset phase. Ignored.", + "unexpected-char-after-frameset": + "Unexpected non-space characters in the " + "after frameset phase. 
Ignored.", + "unexpected-start-tag-after-frameset": + "Unexpected start tag (%(name)s)" + " in the after frameset phase. Ignored.", + "unexpected-end-tag-after-frameset": + "Unexpected end tag (%(name)s)" + " in the after frameset phase. Ignored.", + "unexpected-end-tag-after-body-innerhtml": + "Unexpected end tag after body(innerHtml)", + "expected-eof-but-got-char": + "Unexpected non-space characters. Expected end of file.", + "expected-eof-but-got-start-tag": + "Unexpected start tag (%(name)s)" + ". Expected end of file.", + "expected-eof-but-got-end-tag": + "Unexpected end tag (%(name)s)" + ". Expected end of file.", + "eof-in-table": + "Unexpected end of file. Expected table content.", + "eof-in-select": + "Unexpected end of file. Expected select content.", + "eof-in-frameset": + "Unexpected end of file. Expected frameset content.", + "eof-in-script-in-script": + "Unexpected end of file. Expected script content.", + "eof-in-foreign-lands": + "Unexpected end of file. Expected foreign content", + "non-void-element-with-trailing-solidus": + "Trailing solidus not allowed on element %(name)s", + "unexpected-html-element-in-foreign-content": + "Element %(name)s not allowed in a non-html context", + "unexpected-end-tag-before-html": + "Unexpected end tag (%(name)s) before html.", + "unexpected-inhead-noscript-tag": + "Element %(name)s not allowed in a inhead-noscript context", + "eof-in-head-noscript": + "Unexpected end of file. Expected inhead-noscript content", + "char-in-head-noscript": + "Unexpected non-space character. Expected inhead-noscript content", + "XXX-undefined-error": + "Undefined error (this sucks and should be fixed)", +} + +namespaces = { + "html": "http://www.w3.org/1999/xhtml", + "mathml": "http://www.w3.org/1998/Math/MathML", + "svg": "http://www.w3.org/2000/svg", + "xlink": "http://www.w3.org/1999/xlink", + "xml": "http://www.w3.org/XML/1998/namespace", + "xmlns": "http://www.w3.org/2000/xmlns/" +} + +scopingElements = frozenset([ + (namespaces["html"], "applet"), + (namespaces["html"], "caption"), + (namespaces["html"], "html"), + (namespaces["html"], "marquee"), + (namespaces["html"], "object"), + (namespaces["html"], "table"), + (namespaces["html"], "td"), + (namespaces["html"], "th"), + (namespaces["mathml"], "mi"), + (namespaces["mathml"], "mo"), + (namespaces["mathml"], "mn"), + (namespaces["mathml"], "ms"), + (namespaces["mathml"], "mtext"), + (namespaces["mathml"], "annotation-xml"), + (namespaces["svg"], "foreignObject"), + (namespaces["svg"], "desc"), + (namespaces["svg"], "title"), +]) + +formattingElements = frozenset([ + (namespaces["html"], "a"), + (namespaces["html"], "b"), + (namespaces["html"], "big"), + (namespaces["html"], "code"), + (namespaces["html"], "em"), + (namespaces["html"], "font"), + (namespaces["html"], "i"), + (namespaces["html"], "nobr"), + (namespaces["html"], "s"), + (namespaces["html"], "small"), + (namespaces["html"], "strike"), + (namespaces["html"], "strong"), + (namespaces["html"], "tt"), + (namespaces["html"], "u") +]) + +specialElements = frozenset([ + (namespaces["html"], "address"), + (namespaces["html"], "applet"), + (namespaces["html"], "area"), + (namespaces["html"], "article"), + (namespaces["html"], "aside"), + (namespaces["html"], "base"), + (namespaces["html"], "basefont"), + (namespaces["html"], "bgsound"), + (namespaces["html"], "blockquote"), + (namespaces["html"], "body"), + (namespaces["html"], "br"), + (namespaces["html"], "button"), + (namespaces["html"], "caption"), + (namespaces["html"], "center"), + 
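+
+# A minimal sketch of how scopingElements is consumed (illustrative
+# only, not part of html5lib's API): the tree builder's "has an element
+# in scope" check walks the stack of open elements from the innermost
+# node outwards and fails as soon as it crosses a scoping boundary.
+# The stack is assumed here to be a list of (namespace, name) tuples,
+# whereas the real tree builder walks node objects instead.
+def _hasElementInScope(openElements, targetName):
+    for namespace, name in reversed(openElements):
+        if namespace == namespaces["html"] and name == targetName:
+            # Found the element before hitting a scope boundary.
+            return True
+        if (namespace, name) in scopingElements:
+            # Crossed a scoping element such as <table> or <td>.
+            return False
+    return False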
+
+specialElements = frozenset([
+    (namespaces["html"], "address"),
+    (namespaces["html"], "applet"),
+    (namespaces["html"], "area"),
+    (namespaces["html"], "article"),
+    (namespaces["html"], "aside"),
+    (namespaces["html"], "base"),
+    (namespaces["html"], "basefont"),
+    (namespaces["html"], "bgsound"),
+    (namespaces["html"], "blockquote"),
+    (namespaces["html"], "body"),
+    (namespaces["html"], "br"),
+    (namespaces["html"], "button"),
+    (namespaces["html"], "caption"),
+    (namespaces["html"], "center"),
+    (namespaces["html"], "col"),
+    (namespaces["html"], "colgroup"),
+    (namespaces["html"], "command"),
+    (namespaces["html"], "dd"),
+    (namespaces["html"], "details"),
+    (namespaces["html"], "dir"),
+    (namespaces["html"], "div"),
+    (namespaces["html"], "dl"),
+    (namespaces["html"], "dt"),
+    (namespaces["html"], "embed"),
+    (namespaces["html"], "fieldset"),
+    (namespaces["html"], "figure"),
+    (namespaces["html"], "footer"),
+    (namespaces["html"], "form"),
+    (namespaces["html"], "frame"),
+    (namespaces["html"], "frameset"),
+    (namespaces["html"], "h1"),
+    (namespaces["html"], "h2"),
+    (namespaces["html"], "h3"),
+    (namespaces["html"], "h4"),
+    (namespaces["html"], "h5"),
+    (namespaces["html"], "h6"),
+    (namespaces["html"], "head"),
+    (namespaces["html"], "header"),
+    (namespaces["html"], "hr"),
+    (namespaces["html"], "html"),
+    (namespaces["html"], "iframe"),
+    # Note that image is commented out in the spec as "this isn't an
+    # element that can end up on the stack, so it doesn't matter,"
+    (namespaces["html"], "image"),
+    (namespaces["html"], "img"),
+    (namespaces["html"], "input"),
+    (namespaces["html"], "isindex"),
+    (namespaces["html"], "li"),
+    (namespaces["html"], "link"),
+    (namespaces["html"], "listing"),
+    (namespaces["html"], "marquee"),
+    (namespaces["html"], "menu"),
+    (namespaces["html"], "meta"),
+    (namespaces["html"], "nav"),
+    (namespaces["html"], "noembed"),
+    (namespaces["html"], "noframes"),
+    (namespaces["html"], "noscript"),
+    (namespaces["html"], "object"),
+    (namespaces["html"], "ol"),
+    (namespaces["html"], "p"),
+    (namespaces["html"], "param"),
+    (namespaces["html"], "plaintext"),
+    (namespaces["html"], "pre"),
+    (namespaces["html"], "script"),
+    (namespaces["html"], "section"),
+    (namespaces["html"], "select"),
+    (namespaces["html"], "style"),
+    (namespaces["html"], "table"),
+    (namespaces["html"], "tbody"),
+    (namespaces["html"], "td"),
+    (namespaces["html"], "textarea"),
+    (namespaces["html"], "tfoot"),
+    (namespaces["html"], "th"),
+    (namespaces["html"], "thead"),
+    (namespaces["html"], "title"),
+    (namespaces["html"], "tr"),
+    (namespaces["html"], "ul"),
+    (namespaces["html"], "wbr"),
+    (namespaces["html"], "xmp"),
+    (namespaces["svg"], "foreignObject")
+])
+
+htmlIntegrationPointElements = frozenset([
+    (namespaces["mathml"], "annotation-xml"),
+    (namespaces["svg"], "foreignObject"),
+    (namespaces["svg"], "desc"),
+    (namespaces["svg"], "title")
+])
+
+mathmlTextIntegrationPointElements = frozenset([
+    (namespaces["mathml"], "mi"),
+    (namespaces["mathml"], "mo"),
+    (namespaces["mathml"], "mn"),
+    (namespaces["mathml"], "ms"),
+    (namespaces["mathml"], "mtext")
+])
+
+adjustSVGAttributes = {
+    "attributename": "attributeName",
+    "attributetype": "attributeType",
+    "basefrequency": "baseFrequency",
+    "baseprofile": "baseProfile",
+    "calcmode": "calcMode",
+    "clippathunits": "clipPathUnits",
+    "contentscripttype": "contentScriptType",
+    "contentstyletype": "contentStyleType",
+    "diffuseconstant": "diffuseConstant",
+    "edgemode": "edgeMode",
+    "externalresourcesrequired": "externalResourcesRequired",
+    "filterres": "filterRes",
+    "filterunits": "filterUnits",
+    "glyphref": "glyphRef",
+    "gradienttransform": "gradientTransform",
+    "gradientunits": "gradientUnits",
+    "kernelmatrix": "kernelMatrix",
+    "kernelunitlength": "kernelUnitLength",
+    "keypoints": "keyPoints",
+    "keysplines": "keySplines",
+    "keytimes": "keyTimes",
+    "lengthadjust": "lengthAdjust",
+    "limitingconeangle": "limitingConeAngle",
+    "markerheight": "markerHeight",
+    "markerunits": "markerUnits",
+    "markerwidth": "markerWidth",
+    "maskcontentunits": "maskContentUnits",
+    "maskunits": "maskUnits",
+    "numoctaves": "numOctaves",
+    "pathlength": "pathLength",
+    "patterncontentunits": "patternContentUnits",
+    "patterntransform": "patternTransform",
+    "patternunits": "patternUnits",
+    "pointsatx": "pointsAtX",
+    "pointsaty": "pointsAtY",
+    "pointsatz": "pointsAtZ",
+    "preservealpha": "preserveAlpha",
+    "preserveaspectratio": "preserveAspectRatio",
+    "primitiveunits": "primitiveUnits",
+    "refx": "refX",
+    "refy": "refY",
+    "repeatcount": "repeatCount",
+    "repeatdur": "repeatDur",
+    "requiredextensions": "requiredExtensions",
+    "requiredfeatures": "requiredFeatures",
+    "specularconstant": "specularConstant",
+    "specularexponent": "specularExponent",
+    "spreadmethod": "spreadMethod",
+    "startoffset": "startOffset",
+    "stddeviation": "stdDeviation",
+    "stitchtiles": "stitchTiles",
+    "surfacescale": "surfaceScale",
+    "systemlanguage": "systemLanguage",
+    "tablevalues": "tableValues",
+    "targetx": "targetX",
+    "targety": "targetY",
+    "textlength": "textLength",
+    "viewbox": "viewBox",
+    "viewtarget": "viewTarget",
+    "xchannelselector": "xChannelSelector",
+    "ychannelselector": "yChannelSelector",
+    "zoomandpan": "zoomAndPan"
+}
+
+adjustMathMLAttributes = {"definitionurl": "definitionURL"}
+
+adjustForeignAttributes = {
+    "xlink:actuate": ("xlink", "actuate", namespaces["xlink"]),
+    "xlink:arcrole": ("xlink", "arcrole", namespaces["xlink"]),
+    "xlink:href": ("xlink", "href", namespaces["xlink"]),
+    "xlink:role": ("xlink", "role", namespaces["xlink"]),
+    "xlink:show": ("xlink", "show", namespaces["xlink"]),
+    "xlink:title": ("xlink", "title", namespaces["xlink"]),
+    "xlink:type": ("xlink", "type", namespaces["xlink"]),
+    "xml:base": ("xml", "base", namespaces["xml"]),
+    "xml:lang": ("xml", "lang", namespaces["xml"]),
+    "xml:space": ("xml", "space", namespaces["xml"]),
+    "xmlns": (None, "xmlns", namespaces["xmlns"]),
+    "xmlns:xlink": ("xmlns", "xlink", namespaces["xmlns"])
+}
+
+unadjustForeignAttributes = {(ns, local): qname for qname, (prefix, local, ns) in
+                             adjustForeignAttributes.items()}
+
+spaceCharacters = frozenset([
+    "\t",
+    "\n",
+    "\u000C",
+    " ",
+    "\r"
+])
+
+tableInsertModeElements = frozenset([
+    "table",
+    "tbody",
+    "tfoot",
+    "thead",
+    "tr"
+])
+
+asciiLowercase = frozenset(string.ascii_lowercase)
+asciiUppercase = frozenset(string.ascii_uppercase)
+asciiLetters = frozenset(string.ascii_letters)
+digits = frozenset(string.digits)
+hexDigits = frozenset(string.hexdigits)
+
+asciiUpper2Lower = {ord(c): ord(c.lower()) for c in string.ascii_uppercase}
+
+# Heading elements need to be ordered
+headingElements = (
+    "h1",
+    "h2",
+    "h3",
+    "h4",
+    "h5",
+    "h6"
+)
+
+voidElements = frozenset([
+    "base",
+    "command",
+    "event-source",
+    "link",
+    "meta",
+    "hr",
+    "br",
+    "img",
+    "embed",
+    "param",
+    "area",
+    "col",
+    "input",
+    "source",
+    "track"
+])
+
+cdataElements = frozenset(['title', 'textarea'])
+
+rcdataElements = frozenset([
+    'style',
+    'script',
+    'xmp',
+    'iframe',
+    'noembed',
+    'noframes',
+    'noscript'
+])
"menu": frozenset(["autosubmit"]), + "fieldset": frozenset(["disabled", "readonly"]), + "option": frozenset(["disabled", "readonly", "selected"]), + "optgroup": frozenset(["disabled", "readonly"]), + "button": frozenset(["disabled", "autofocus"]), + "input": frozenset(["disabled", "readonly", "required", "autofocus", "checked", "ismap"]), + "select": frozenset(["disabled", "readonly", "autofocus", "multiple"]), + "output": frozenset(["disabled", "readonly"]), + "iframe": frozenset(["seamless"]), +} + +# entitiesWindows1252 has to be _ordered_ and needs to have an index. It +# therefore can't be a frozenset. +entitiesWindows1252 = ( + 8364, # 0x80 0x20AC EURO SIGN + 65533, # 0x81 UNDEFINED + 8218, # 0x82 0x201A SINGLE LOW-9 QUOTATION MARK + 402, # 0x83 0x0192 LATIN SMALL LETTER F WITH HOOK + 8222, # 0x84 0x201E DOUBLE LOW-9 QUOTATION MARK + 8230, # 0x85 0x2026 HORIZONTAL ELLIPSIS + 8224, # 0x86 0x2020 DAGGER + 8225, # 0x87 0x2021 DOUBLE DAGGER + 710, # 0x88 0x02C6 MODIFIER LETTER CIRCUMFLEX ACCENT + 8240, # 0x89 0x2030 PER MILLE SIGN + 352, # 0x8A 0x0160 LATIN CAPITAL LETTER S WITH CARON + 8249, # 0x8B 0x2039 SINGLE LEFT-POINTING ANGLE QUOTATION MARK + 338, # 0x8C 0x0152 LATIN CAPITAL LIGATURE OE + 65533, # 0x8D UNDEFINED + 381, # 0x8E 0x017D LATIN CAPITAL LETTER Z WITH CARON + 65533, # 0x8F UNDEFINED + 65533, # 0x90 UNDEFINED + 8216, # 0x91 0x2018 LEFT SINGLE QUOTATION MARK + 8217, # 0x92 0x2019 RIGHT SINGLE QUOTATION MARK + 8220, # 0x93 0x201C LEFT DOUBLE QUOTATION MARK + 8221, # 0x94 0x201D RIGHT DOUBLE QUOTATION MARK + 8226, # 0x95 0x2022 BULLET + 8211, # 0x96 0x2013 EN DASH + 8212, # 0x97 0x2014 EM DASH + 732, # 0x98 0x02DC SMALL TILDE + 8482, # 0x99 0x2122 TRADE MARK SIGN + 353, # 0x9A 0x0161 LATIN SMALL LETTER S WITH CARON + 8250, # 0x9B 0x203A SINGLE RIGHT-POINTING ANGLE QUOTATION MARK + 339, # 0x9C 0x0153 LATIN SMALL LIGATURE OE + 65533, # 0x9D UNDEFINED + 382, # 0x9E 0x017E LATIN SMALL LETTER Z WITH CARON + 376 # 0x9F 0x0178 LATIN CAPITAL LETTER Y WITH DIAERESIS +) + +xmlEntities = frozenset(['lt;', 'gt;', 'amp;', 'apos;', 'quot;']) + +entities = { + "AElig": "\xc6", + "AElig;": "\xc6", + "AMP": "&", + "AMP;": "&", + "Aacute": "\xc1", + "Aacute;": "\xc1", + "Abreve;": "\u0102", + "Acirc": "\xc2", + "Acirc;": "\xc2", + "Acy;": "\u0410", + "Afr;": "\U0001d504", + "Agrave": "\xc0", + "Agrave;": "\xc0", + "Alpha;": "\u0391", + "Amacr;": "\u0100", + "And;": "\u2a53", + "Aogon;": "\u0104", + "Aopf;": "\U0001d538", + "ApplyFunction;": "\u2061", + "Aring": "\xc5", + "Aring;": "\xc5", + "Ascr;": "\U0001d49c", + "Assign;": "\u2254", + "Atilde": "\xc3", + "Atilde;": "\xc3", + "Auml": "\xc4", + "Auml;": "\xc4", + "Backslash;": "\u2216", + "Barv;": "\u2ae7", + "Barwed;": "\u2306", + "Bcy;": "\u0411", + "Because;": "\u2235", + "Bernoullis;": "\u212c", + "Beta;": "\u0392", + "Bfr;": "\U0001d505", + "Bopf;": "\U0001d539", + "Breve;": "\u02d8", + "Bscr;": "\u212c", + "Bumpeq;": "\u224e", + "CHcy;": "\u0427", + "COPY": "\xa9", + "COPY;": "\xa9", + "Cacute;": "\u0106", + "Cap;": "\u22d2", + "CapitalDifferentialD;": "\u2145", + "Cayleys;": "\u212d", + "Ccaron;": "\u010c", + "Ccedil": "\xc7", + "Ccedil;": "\xc7", + "Ccirc;": "\u0108", + "Cconint;": "\u2230", + "Cdot;": "\u010a", + "Cedilla;": "\xb8", + "CenterDot;": "\xb7", + "Cfr;": "\u212d", + "Chi;": "\u03a7", + "CircleDot;": "\u2299", + "CircleMinus;": "\u2296", + "CirclePlus;": "\u2295", + "CircleTimes;": "\u2297", + "ClockwiseContourIntegral;": "\u2232", + "CloseCurlyDoubleQuote;": "\u201d", + "CloseCurlyQuote;": "\u2019", + "Colon;": "\u2237", 
+ "Colone;": "\u2a74", + "Congruent;": "\u2261", + "Conint;": "\u222f", + "ContourIntegral;": "\u222e", + "Copf;": "\u2102", + "Coproduct;": "\u2210", + "CounterClockwiseContourIntegral;": "\u2233", + "Cross;": "\u2a2f", + "Cscr;": "\U0001d49e", + "Cup;": "\u22d3", + "CupCap;": "\u224d", + "DD;": "\u2145", + "DDotrahd;": "\u2911", + "DJcy;": "\u0402", + "DScy;": "\u0405", + "DZcy;": "\u040f", + "Dagger;": "\u2021", + "Darr;": "\u21a1", + "Dashv;": "\u2ae4", + "Dcaron;": "\u010e", + "Dcy;": "\u0414", + "Del;": "\u2207", + "Delta;": "\u0394", + "Dfr;": "\U0001d507", + "DiacriticalAcute;": "\xb4", + "DiacriticalDot;": "\u02d9", + "DiacriticalDoubleAcute;": "\u02dd", + "DiacriticalGrave;": "`", + "DiacriticalTilde;": "\u02dc", + "Diamond;": "\u22c4", + "DifferentialD;": "\u2146", + "Dopf;": "\U0001d53b", + "Dot;": "\xa8", + "DotDot;": "\u20dc", + "DotEqual;": "\u2250", + "DoubleContourIntegral;": "\u222f", + "DoubleDot;": "\xa8", + "DoubleDownArrow;": "\u21d3", + "DoubleLeftArrow;": "\u21d0", + "DoubleLeftRightArrow;": "\u21d4", + "DoubleLeftTee;": "\u2ae4", + "DoubleLongLeftArrow;": "\u27f8", + "DoubleLongLeftRightArrow;": "\u27fa", + "DoubleLongRightArrow;": "\u27f9", + "DoubleRightArrow;": "\u21d2", + "DoubleRightTee;": "\u22a8", + "DoubleUpArrow;": "\u21d1", + "DoubleUpDownArrow;": "\u21d5", + "DoubleVerticalBar;": "\u2225", + "DownArrow;": "\u2193", + "DownArrowBar;": "\u2913", + "DownArrowUpArrow;": "\u21f5", + "DownBreve;": "\u0311", + "DownLeftRightVector;": "\u2950", + "DownLeftTeeVector;": "\u295e", + "DownLeftVector;": "\u21bd", + "DownLeftVectorBar;": "\u2956", + "DownRightTeeVector;": "\u295f", + "DownRightVector;": "\u21c1", + "DownRightVectorBar;": "\u2957", + "DownTee;": "\u22a4", + "DownTeeArrow;": "\u21a7", + "Downarrow;": "\u21d3", + "Dscr;": "\U0001d49f", + "Dstrok;": "\u0110", + "ENG;": "\u014a", + "ETH": "\xd0", + "ETH;": "\xd0", + "Eacute": "\xc9", + "Eacute;": "\xc9", + "Ecaron;": "\u011a", + "Ecirc": "\xca", + "Ecirc;": "\xca", + "Ecy;": "\u042d", + "Edot;": "\u0116", + "Efr;": "\U0001d508", + "Egrave": "\xc8", + "Egrave;": "\xc8", + "Element;": "\u2208", + "Emacr;": "\u0112", + "EmptySmallSquare;": "\u25fb", + "EmptyVerySmallSquare;": "\u25ab", + "Eogon;": "\u0118", + "Eopf;": "\U0001d53c", + "Epsilon;": "\u0395", + "Equal;": "\u2a75", + "EqualTilde;": "\u2242", + "Equilibrium;": "\u21cc", + "Escr;": "\u2130", + "Esim;": "\u2a73", + "Eta;": "\u0397", + "Euml": "\xcb", + "Euml;": "\xcb", + "Exists;": "\u2203", + "ExponentialE;": "\u2147", + "Fcy;": "\u0424", + "Ffr;": "\U0001d509", + "FilledSmallSquare;": "\u25fc", + "FilledVerySmallSquare;": "\u25aa", + "Fopf;": "\U0001d53d", + "ForAll;": "\u2200", + "Fouriertrf;": "\u2131", + "Fscr;": "\u2131", + "GJcy;": "\u0403", + "GT": ">", + "GT;": ">", + "Gamma;": "\u0393", + "Gammad;": "\u03dc", + "Gbreve;": "\u011e", + "Gcedil;": "\u0122", + "Gcirc;": "\u011c", + "Gcy;": "\u0413", + "Gdot;": "\u0120", + "Gfr;": "\U0001d50a", + "Gg;": "\u22d9", + "Gopf;": "\U0001d53e", + "GreaterEqual;": "\u2265", + "GreaterEqualLess;": "\u22db", + "GreaterFullEqual;": "\u2267", + "GreaterGreater;": "\u2aa2", + "GreaterLess;": "\u2277", + "GreaterSlantEqual;": "\u2a7e", + "GreaterTilde;": "\u2273", + "Gscr;": "\U0001d4a2", + "Gt;": "\u226b", + "HARDcy;": "\u042a", + "Hacek;": "\u02c7", + "Hat;": "^", + "Hcirc;": "\u0124", + "Hfr;": "\u210c", + "HilbertSpace;": "\u210b", + "Hopf;": "\u210d", + "HorizontalLine;": "\u2500", + "Hscr;": "\u210b", + "Hstrok;": "\u0126", + "HumpDownHump;": "\u224e", + "HumpEqual;": "\u224f", + "IEcy;": "\u0415", + 
"IJlig;": "\u0132", + "IOcy;": "\u0401", + "Iacute": "\xcd", + "Iacute;": "\xcd", + "Icirc": "\xce", + "Icirc;": "\xce", + "Icy;": "\u0418", + "Idot;": "\u0130", + "Ifr;": "\u2111", + "Igrave": "\xcc", + "Igrave;": "\xcc", + "Im;": "\u2111", + "Imacr;": "\u012a", + "ImaginaryI;": "\u2148", + "Implies;": "\u21d2", + "Int;": "\u222c", + "Integral;": "\u222b", + "Intersection;": "\u22c2", + "InvisibleComma;": "\u2063", + "InvisibleTimes;": "\u2062", + "Iogon;": "\u012e", + "Iopf;": "\U0001d540", + "Iota;": "\u0399", + "Iscr;": "\u2110", + "Itilde;": "\u0128", + "Iukcy;": "\u0406", + "Iuml": "\xcf", + "Iuml;": "\xcf", + "Jcirc;": "\u0134", + "Jcy;": "\u0419", + "Jfr;": "\U0001d50d", + "Jopf;": "\U0001d541", + "Jscr;": "\U0001d4a5", + "Jsercy;": "\u0408", + "Jukcy;": "\u0404", + "KHcy;": "\u0425", + "KJcy;": "\u040c", + "Kappa;": "\u039a", + "Kcedil;": "\u0136", + "Kcy;": "\u041a", + "Kfr;": "\U0001d50e", + "Kopf;": "\U0001d542", + "Kscr;": "\U0001d4a6", + "LJcy;": "\u0409", + "LT": "<", + "LT;": "<", + "Lacute;": "\u0139", + "Lambda;": "\u039b", + "Lang;": "\u27ea", + "Laplacetrf;": "\u2112", + "Larr;": "\u219e", + "Lcaron;": "\u013d", + "Lcedil;": "\u013b", + "Lcy;": "\u041b", + "LeftAngleBracket;": "\u27e8", + "LeftArrow;": "\u2190", + "LeftArrowBar;": "\u21e4", + "LeftArrowRightArrow;": "\u21c6", + "LeftCeiling;": "\u2308", + "LeftDoubleBracket;": "\u27e6", + "LeftDownTeeVector;": "\u2961", + "LeftDownVector;": "\u21c3", + "LeftDownVectorBar;": "\u2959", + "LeftFloor;": "\u230a", + "LeftRightArrow;": "\u2194", + "LeftRightVector;": "\u294e", + "LeftTee;": "\u22a3", + "LeftTeeArrow;": "\u21a4", + "LeftTeeVector;": "\u295a", + "LeftTriangle;": "\u22b2", + "LeftTriangleBar;": "\u29cf", + "LeftTriangleEqual;": "\u22b4", + "LeftUpDownVector;": "\u2951", + "LeftUpTeeVector;": "\u2960", + "LeftUpVector;": "\u21bf", + "LeftUpVectorBar;": "\u2958", + "LeftVector;": "\u21bc", + "LeftVectorBar;": "\u2952", + "Leftarrow;": "\u21d0", + "Leftrightarrow;": "\u21d4", + "LessEqualGreater;": "\u22da", + "LessFullEqual;": "\u2266", + "LessGreater;": "\u2276", + "LessLess;": "\u2aa1", + "LessSlantEqual;": "\u2a7d", + "LessTilde;": "\u2272", + "Lfr;": "\U0001d50f", + "Ll;": "\u22d8", + "Lleftarrow;": "\u21da", + "Lmidot;": "\u013f", + "LongLeftArrow;": "\u27f5", + "LongLeftRightArrow;": "\u27f7", + "LongRightArrow;": "\u27f6", + "Longleftarrow;": "\u27f8", + "Longleftrightarrow;": "\u27fa", + "Longrightarrow;": "\u27f9", + "Lopf;": "\U0001d543", + "LowerLeftArrow;": "\u2199", + "LowerRightArrow;": "\u2198", + "Lscr;": "\u2112", + "Lsh;": "\u21b0", + "Lstrok;": "\u0141", + "Lt;": "\u226a", + "Map;": "\u2905", + "Mcy;": "\u041c", + "MediumSpace;": "\u205f", + "Mellintrf;": "\u2133", + "Mfr;": "\U0001d510", + "MinusPlus;": "\u2213", + "Mopf;": "\U0001d544", + "Mscr;": "\u2133", + "Mu;": "\u039c", + "NJcy;": "\u040a", + "Nacute;": "\u0143", + "Ncaron;": "\u0147", + "Ncedil;": "\u0145", + "Ncy;": "\u041d", + "NegativeMediumSpace;": "\u200b", + "NegativeThickSpace;": "\u200b", + "NegativeThinSpace;": "\u200b", + "NegativeVeryThinSpace;": "\u200b", + "NestedGreaterGreater;": "\u226b", + "NestedLessLess;": "\u226a", + "NewLine;": "\n", + "Nfr;": "\U0001d511", + "NoBreak;": "\u2060", + "NonBreakingSpace;": "\xa0", + "Nopf;": "\u2115", + "Not;": "\u2aec", + "NotCongruent;": "\u2262", + "NotCupCap;": "\u226d", + "NotDoubleVerticalBar;": "\u2226", + "NotElement;": "\u2209", + "NotEqual;": "\u2260", + "NotEqualTilde;": "\u2242\u0338", + "NotExists;": "\u2204", + "NotGreater;": "\u226f", + "NotGreaterEqual;": "\u2271", + 
"NotGreaterFullEqual;": "\u2267\u0338", + "NotGreaterGreater;": "\u226b\u0338", + "NotGreaterLess;": "\u2279", + "NotGreaterSlantEqual;": "\u2a7e\u0338", + "NotGreaterTilde;": "\u2275", + "NotHumpDownHump;": "\u224e\u0338", + "NotHumpEqual;": "\u224f\u0338", + "NotLeftTriangle;": "\u22ea", + "NotLeftTriangleBar;": "\u29cf\u0338", + "NotLeftTriangleEqual;": "\u22ec", + "NotLess;": "\u226e", + "NotLessEqual;": "\u2270", + "NotLessGreater;": "\u2278", + "NotLessLess;": "\u226a\u0338", + "NotLessSlantEqual;": "\u2a7d\u0338", + "NotLessTilde;": "\u2274", + "NotNestedGreaterGreater;": "\u2aa2\u0338", + "NotNestedLessLess;": "\u2aa1\u0338", + "NotPrecedes;": "\u2280", + "NotPrecedesEqual;": "\u2aaf\u0338", + "NotPrecedesSlantEqual;": "\u22e0", + "NotReverseElement;": "\u220c", + "NotRightTriangle;": "\u22eb", + "NotRightTriangleBar;": "\u29d0\u0338", + "NotRightTriangleEqual;": "\u22ed", + "NotSquareSubset;": "\u228f\u0338", + "NotSquareSubsetEqual;": "\u22e2", + "NotSquareSuperset;": "\u2290\u0338", + "NotSquareSupersetEqual;": "\u22e3", + "NotSubset;": "\u2282\u20d2", + "NotSubsetEqual;": "\u2288", + "NotSucceeds;": "\u2281", + "NotSucceedsEqual;": "\u2ab0\u0338", + "NotSucceedsSlantEqual;": "\u22e1", + "NotSucceedsTilde;": "\u227f\u0338", + "NotSuperset;": "\u2283\u20d2", + "NotSupersetEqual;": "\u2289", + "NotTilde;": "\u2241", + "NotTildeEqual;": "\u2244", + "NotTildeFullEqual;": "\u2247", + "NotTildeTilde;": "\u2249", + "NotVerticalBar;": "\u2224", + "Nscr;": "\U0001d4a9", + "Ntilde": "\xd1", + "Ntilde;": "\xd1", + "Nu;": "\u039d", + "OElig;": "\u0152", + "Oacute": "\xd3", + "Oacute;": "\xd3", + "Ocirc": "\xd4", + "Ocirc;": "\xd4", + "Ocy;": "\u041e", + "Odblac;": "\u0150", + "Ofr;": "\U0001d512", + "Ograve": "\xd2", + "Ograve;": "\xd2", + "Omacr;": "\u014c", + "Omega;": "\u03a9", + "Omicron;": "\u039f", + "Oopf;": "\U0001d546", + "OpenCurlyDoubleQuote;": "\u201c", + "OpenCurlyQuote;": "\u2018", + "Or;": "\u2a54", + "Oscr;": "\U0001d4aa", + "Oslash": "\xd8", + "Oslash;": "\xd8", + "Otilde": "\xd5", + "Otilde;": "\xd5", + "Otimes;": "\u2a37", + "Ouml": "\xd6", + "Ouml;": "\xd6", + "OverBar;": "\u203e", + "OverBrace;": "\u23de", + "OverBracket;": "\u23b4", + "OverParenthesis;": "\u23dc", + "PartialD;": "\u2202", + "Pcy;": "\u041f", + "Pfr;": "\U0001d513", + "Phi;": "\u03a6", + "Pi;": "\u03a0", + "PlusMinus;": "\xb1", + "Poincareplane;": "\u210c", + "Popf;": "\u2119", + "Pr;": "\u2abb", + "Precedes;": "\u227a", + "PrecedesEqual;": "\u2aaf", + "PrecedesSlantEqual;": "\u227c", + "PrecedesTilde;": "\u227e", + "Prime;": "\u2033", + "Product;": "\u220f", + "Proportion;": "\u2237", + "Proportional;": "\u221d", + "Pscr;": "\U0001d4ab", + "Psi;": "\u03a8", + "QUOT": "\"", + "QUOT;": "\"", + "Qfr;": "\U0001d514", + "Qopf;": "\u211a", + "Qscr;": "\U0001d4ac", + "RBarr;": "\u2910", + "REG": "\xae", + "REG;": "\xae", + "Racute;": "\u0154", + "Rang;": "\u27eb", + "Rarr;": "\u21a0", + "Rarrtl;": "\u2916", + "Rcaron;": "\u0158", + "Rcedil;": "\u0156", + "Rcy;": "\u0420", + "Re;": "\u211c", + "ReverseElement;": "\u220b", + "ReverseEquilibrium;": "\u21cb", + "ReverseUpEquilibrium;": "\u296f", + "Rfr;": "\u211c", + "Rho;": "\u03a1", + "RightAngleBracket;": "\u27e9", + "RightArrow;": "\u2192", + "RightArrowBar;": "\u21e5", + "RightArrowLeftArrow;": "\u21c4", + "RightCeiling;": "\u2309", + "RightDoubleBracket;": "\u27e7", + "RightDownTeeVector;": "\u295d", + "RightDownVector;": "\u21c2", + "RightDownVectorBar;": "\u2955", + "RightFloor;": "\u230b", + "RightTee;": "\u22a2", + "RightTeeArrow;": "\u21a6", + 
"RightTeeVector;": "\u295b", + "RightTriangle;": "\u22b3", + "RightTriangleBar;": "\u29d0", + "RightTriangleEqual;": "\u22b5", + "RightUpDownVector;": "\u294f", + "RightUpTeeVector;": "\u295c", + "RightUpVector;": "\u21be", + "RightUpVectorBar;": "\u2954", + "RightVector;": "\u21c0", + "RightVectorBar;": "\u2953", + "Rightarrow;": "\u21d2", + "Ropf;": "\u211d", + "RoundImplies;": "\u2970", + "Rrightarrow;": "\u21db", + "Rscr;": "\u211b", + "Rsh;": "\u21b1", + "RuleDelayed;": "\u29f4", + "SHCHcy;": "\u0429", + "SHcy;": "\u0428", + "SOFTcy;": "\u042c", + "Sacute;": "\u015a", + "Sc;": "\u2abc", + "Scaron;": "\u0160", + "Scedil;": "\u015e", + "Scirc;": "\u015c", + "Scy;": "\u0421", + "Sfr;": "\U0001d516", + "ShortDownArrow;": "\u2193", + "ShortLeftArrow;": "\u2190", + "ShortRightArrow;": "\u2192", + "ShortUpArrow;": "\u2191", + "Sigma;": "\u03a3", + "SmallCircle;": "\u2218", + "Sopf;": "\U0001d54a", + "Sqrt;": "\u221a", + "Square;": "\u25a1", + "SquareIntersection;": "\u2293", + "SquareSubset;": "\u228f", + "SquareSubsetEqual;": "\u2291", + "SquareSuperset;": "\u2290", + "SquareSupersetEqual;": "\u2292", + "SquareUnion;": "\u2294", + "Sscr;": "\U0001d4ae", + "Star;": "\u22c6", + "Sub;": "\u22d0", + "Subset;": "\u22d0", + "SubsetEqual;": "\u2286", + "Succeeds;": "\u227b", + "SucceedsEqual;": "\u2ab0", + "SucceedsSlantEqual;": "\u227d", + "SucceedsTilde;": "\u227f", + "SuchThat;": "\u220b", + "Sum;": "\u2211", + "Sup;": "\u22d1", + "Superset;": "\u2283", + "SupersetEqual;": "\u2287", + "Supset;": "\u22d1", + "THORN": "\xde", + "THORN;": "\xde", + "TRADE;": "\u2122", + "TSHcy;": "\u040b", + "TScy;": "\u0426", + "Tab;": "\t", + "Tau;": "\u03a4", + "Tcaron;": "\u0164", + "Tcedil;": "\u0162", + "Tcy;": "\u0422", + "Tfr;": "\U0001d517", + "Therefore;": "\u2234", + "Theta;": "\u0398", + "ThickSpace;": "\u205f\u200a", + "ThinSpace;": "\u2009", + "Tilde;": "\u223c", + "TildeEqual;": "\u2243", + "TildeFullEqual;": "\u2245", + "TildeTilde;": "\u2248", + "Topf;": "\U0001d54b", + "TripleDot;": "\u20db", + "Tscr;": "\U0001d4af", + "Tstrok;": "\u0166", + "Uacute": "\xda", + "Uacute;": "\xda", + "Uarr;": "\u219f", + "Uarrocir;": "\u2949", + "Ubrcy;": "\u040e", + "Ubreve;": "\u016c", + "Ucirc": "\xdb", + "Ucirc;": "\xdb", + "Ucy;": "\u0423", + "Udblac;": "\u0170", + "Ufr;": "\U0001d518", + "Ugrave": "\xd9", + "Ugrave;": "\xd9", + "Umacr;": "\u016a", + "UnderBar;": "_", + "UnderBrace;": "\u23df", + "UnderBracket;": "\u23b5", + "UnderParenthesis;": "\u23dd", + "Union;": "\u22c3", + "UnionPlus;": "\u228e", + "Uogon;": "\u0172", + "Uopf;": "\U0001d54c", + "UpArrow;": "\u2191", + "UpArrowBar;": "\u2912", + "UpArrowDownArrow;": "\u21c5", + "UpDownArrow;": "\u2195", + "UpEquilibrium;": "\u296e", + "UpTee;": "\u22a5", + "UpTeeArrow;": "\u21a5", + "Uparrow;": "\u21d1", + "Updownarrow;": "\u21d5", + "UpperLeftArrow;": "\u2196", + "UpperRightArrow;": "\u2197", + "Upsi;": "\u03d2", + "Upsilon;": "\u03a5", + "Uring;": "\u016e", + "Uscr;": "\U0001d4b0", + "Utilde;": "\u0168", + "Uuml": "\xdc", + "Uuml;": "\xdc", + "VDash;": "\u22ab", + "Vbar;": "\u2aeb", + "Vcy;": "\u0412", + "Vdash;": "\u22a9", + "Vdashl;": "\u2ae6", + "Vee;": "\u22c1", + "Verbar;": "\u2016", + "Vert;": "\u2016", + "VerticalBar;": "\u2223", + "VerticalLine;": "|", + "VerticalSeparator;": "\u2758", + "VerticalTilde;": "\u2240", + "VeryThinSpace;": "\u200a", + "Vfr;": "\U0001d519", + "Vopf;": "\U0001d54d", + "Vscr;": "\U0001d4b1", + "Vvdash;": "\u22aa", + "Wcirc;": "\u0174", + "Wedge;": "\u22c0", + "Wfr;": "\U0001d51a", + "Wopf;": "\U0001d54e", + "Wscr;": 
"\U0001d4b2", + "Xfr;": "\U0001d51b", + "Xi;": "\u039e", + "Xopf;": "\U0001d54f", + "Xscr;": "\U0001d4b3", + "YAcy;": "\u042f", + "YIcy;": "\u0407", + "YUcy;": "\u042e", + "Yacute": "\xdd", + "Yacute;": "\xdd", + "Ycirc;": "\u0176", + "Ycy;": "\u042b", + "Yfr;": "\U0001d51c", + "Yopf;": "\U0001d550", + "Yscr;": "\U0001d4b4", + "Yuml;": "\u0178", + "ZHcy;": "\u0416", + "Zacute;": "\u0179", + "Zcaron;": "\u017d", + "Zcy;": "\u0417", + "Zdot;": "\u017b", + "ZeroWidthSpace;": "\u200b", + "Zeta;": "\u0396", + "Zfr;": "\u2128", + "Zopf;": "\u2124", + "Zscr;": "\U0001d4b5", + "aacute": "\xe1", + "aacute;": "\xe1", + "abreve;": "\u0103", + "ac;": "\u223e", + "acE;": "\u223e\u0333", + "acd;": "\u223f", + "acirc": "\xe2", + "acirc;": "\xe2", + "acute": "\xb4", + "acute;": "\xb4", + "acy;": "\u0430", + "aelig": "\xe6", + "aelig;": "\xe6", + "af;": "\u2061", + "afr;": "\U0001d51e", + "agrave": "\xe0", + "agrave;": "\xe0", + "alefsym;": "\u2135", + "aleph;": "\u2135", + "alpha;": "\u03b1", + "amacr;": "\u0101", + "amalg;": "\u2a3f", + "amp": "&", + "amp;": "&", + "and;": "\u2227", + "andand;": "\u2a55", + "andd;": "\u2a5c", + "andslope;": "\u2a58", + "andv;": "\u2a5a", + "ang;": "\u2220", + "ange;": "\u29a4", + "angle;": "\u2220", + "angmsd;": "\u2221", + "angmsdaa;": "\u29a8", + "angmsdab;": "\u29a9", + "angmsdac;": "\u29aa", + "angmsdad;": "\u29ab", + "angmsdae;": "\u29ac", + "angmsdaf;": "\u29ad", + "angmsdag;": "\u29ae", + "angmsdah;": "\u29af", + "angrt;": "\u221f", + "angrtvb;": "\u22be", + "angrtvbd;": "\u299d", + "angsph;": "\u2222", + "angst;": "\xc5", + "angzarr;": "\u237c", + "aogon;": "\u0105", + "aopf;": "\U0001d552", + "ap;": "\u2248", + "apE;": "\u2a70", + "apacir;": "\u2a6f", + "ape;": "\u224a", + "apid;": "\u224b", + "apos;": "'", + "approx;": "\u2248", + "approxeq;": "\u224a", + "aring": "\xe5", + "aring;": "\xe5", + "ascr;": "\U0001d4b6", + "ast;": "*", + "asymp;": "\u2248", + "asympeq;": "\u224d", + "atilde": "\xe3", + "atilde;": "\xe3", + "auml": "\xe4", + "auml;": "\xe4", + "awconint;": "\u2233", + "awint;": "\u2a11", + "bNot;": "\u2aed", + "backcong;": "\u224c", + "backepsilon;": "\u03f6", + "backprime;": "\u2035", + "backsim;": "\u223d", + "backsimeq;": "\u22cd", + "barvee;": "\u22bd", + "barwed;": "\u2305", + "barwedge;": "\u2305", + "bbrk;": "\u23b5", + "bbrktbrk;": "\u23b6", + "bcong;": "\u224c", + "bcy;": "\u0431", + "bdquo;": "\u201e", + "becaus;": "\u2235", + "because;": "\u2235", + "bemptyv;": "\u29b0", + "bepsi;": "\u03f6", + "bernou;": "\u212c", + "beta;": "\u03b2", + "beth;": "\u2136", + "between;": "\u226c", + "bfr;": "\U0001d51f", + "bigcap;": "\u22c2", + "bigcirc;": "\u25ef", + "bigcup;": "\u22c3", + "bigodot;": "\u2a00", + "bigoplus;": "\u2a01", + "bigotimes;": "\u2a02", + "bigsqcup;": "\u2a06", + "bigstar;": "\u2605", + "bigtriangledown;": "\u25bd", + "bigtriangleup;": "\u25b3", + "biguplus;": "\u2a04", + "bigvee;": "\u22c1", + "bigwedge;": "\u22c0", + "bkarow;": "\u290d", + "blacklozenge;": "\u29eb", + "blacksquare;": "\u25aa", + "blacktriangle;": "\u25b4", + "blacktriangledown;": "\u25be", + "blacktriangleleft;": "\u25c2", + "blacktriangleright;": "\u25b8", + "blank;": "\u2423", + "blk12;": "\u2592", + "blk14;": "\u2591", + "blk34;": "\u2593", + "block;": "\u2588", + "bne;": "=\u20e5", + "bnequiv;": "\u2261\u20e5", + "bnot;": "\u2310", + "bopf;": "\U0001d553", + "bot;": "\u22a5", + "bottom;": "\u22a5", + "bowtie;": "\u22c8", + "boxDL;": "\u2557", + "boxDR;": "\u2554", + "boxDl;": "\u2556", + "boxDr;": "\u2553", + "boxH;": "\u2550", + "boxHD;": "\u2566", + 
"boxHU;": "\u2569", + "boxHd;": "\u2564", + "boxHu;": "\u2567", + "boxUL;": "\u255d", + "boxUR;": "\u255a", + "boxUl;": "\u255c", + "boxUr;": "\u2559", + "boxV;": "\u2551", + "boxVH;": "\u256c", + "boxVL;": "\u2563", + "boxVR;": "\u2560", + "boxVh;": "\u256b", + "boxVl;": "\u2562", + "boxVr;": "\u255f", + "boxbox;": "\u29c9", + "boxdL;": "\u2555", + "boxdR;": "\u2552", + "boxdl;": "\u2510", + "boxdr;": "\u250c", + "boxh;": "\u2500", + "boxhD;": "\u2565", + "boxhU;": "\u2568", + "boxhd;": "\u252c", + "boxhu;": "\u2534", + "boxminus;": "\u229f", + "boxplus;": "\u229e", + "boxtimes;": "\u22a0", + "boxuL;": "\u255b", + "boxuR;": "\u2558", + "boxul;": "\u2518", + "boxur;": "\u2514", + "boxv;": "\u2502", + "boxvH;": "\u256a", + "boxvL;": "\u2561", + "boxvR;": "\u255e", + "boxvh;": "\u253c", + "boxvl;": "\u2524", + "boxvr;": "\u251c", + "bprime;": "\u2035", + "breve;": "\u02d8", + "brvbar": "\xa6", + "brvbar;": "\xa6", + "bscr;": "\U0001d4b7", + "bsemi;": "\u204f", + "bsim;": "\u223d", + "bsime;": "\u22cd", + "bsol;": "\\", + "bsolb;": "\u29c5", + "bsolhsub;": "\u27c8", + "bull;": "\u2022", + "bullet;": "\u2022", + "bump;": "\u224e", + "bumpE;": "\u2aae", + "bumpe;": "\u224f", + "bumpeq;": "\u224f", + "cacute;": "\u0107", + "cap;": "\u2229", + "capand;": "\u2a44", + "capbrcup;": "\u2a49", + "capcap;": "\u2a4b", + "capcup;": "\u2a47", + "capdot;": "\u2a40", + "caps;": "\u2229\ufe00", + "caret;": "\u2041", + "caron;": "\u02c7", + "ccaps;": "\u2a4d", + "ccaron;": "\u010d", + "ccedil": "\xe7", + "ccedil;": "\xe7", + "ccirc;": "\u0109", + "ccups;": "\u2a4c", + "ccupssm;": "\u2a50", + "cdot;": "\u010b", + "cedil": "\xb8", + "cedil;": "\xb8", + "cemptyv;": "\u29b2", + "cent": "\xa2", + "cent;": "\xa2", + "centerdot;": "\xb7", + "cfr;": "\U0001d520", + "chcy;": "\u0447", + "check;": "\u2713", + "checkmark;": "\u2713", + "chi;": "\u03c7", + "cir;": "\u25cb", + "cirE;": "\u29c3", + "circ;": "\u02c6", + "circeq;": "\u2257", + "circlearrowleft;": "\u21ba", + "circlearrowright;": "\u21bb", + "circledR;": "\xae", + "circledS;": "\u24c8", + "circledast;": "\u229b", + "circledcirc;": "\u229a", + "circleddash;": "\u229d", + "cire;": "\u2257", + "cirfnint;": "\u2a10", + "cirmid;": "\u2aef", + "cirscir;": "\u29c2", + "clubs;": "\u2663", + "clubsuit;": "\u2663", + "colon;": ":", + "colone;": "\u2254", + "coloneq;": "\u2254", + "comma;": ",", + "commat;": "@", + "comp;": "\u2201", + "compfn;": "\u2218", + "complement;": "\u2201", + "complexes;": "\u2102", + "cong;": "\u2245", + "congdot;": "\u2a6d", + "conint;": "\u222e", + "copf;": "\U0001d554", + "coprod;": "\u2210", + "copy": "\xa9", + "copy;": "\xa9", + "copysr;": "\u2117", + "crarr;": "\u21b5", + "cross;": "\u2717", + "cscr;": "\U0001d4b8", + "csub;": "\u2acf", + "csube;": "\u2ad1", + "csup;": "\u2ad0", + "csupe;": "\u2ad2", + "ctdot;": "\u22ef", + "cudarrl;": "\u2938", + "cudarrr;": "\u2935", + "cuepr;": "\u22de", + "cuesc;": "\u22df", + "cularr;": "\u21b6", + "cularrp;": "\u293d", + "cup;": "\u222a", + "cupbrcap;": "\u2a48", + "cupcap;": "\u2a46", + "cupcup;": "\u2a4a", + "cupdot;": "\u228d", + "cupor;": "\u2a45", + "cups;": "\u222a\ufe00", + "curarr;": "\u21b7", + "curarrm;": "\u293c", + "curlyeqprec;": "\u22de", + "curlyeqsucc;": "\u22df", + "curlyvee;": "\u22ce", + "curlywedge;": "\u22cf", + "curren": "\xa4", + "curren;": "\xa4", + "curvearrowleft;": "\u21b6", + "curvearrowright;": "\u21b7", + "cuvee;": "\u22ce", + "cuwed;": "\u22cf", + "cwconint;": "\u2232", + "cwint;": "\u2231", + "cylcty;": "\u232d", + "dArr;": "\u21d3", + "dHar;": "\u2965", + "dagger;": 
"\u2020", + "daleth;": "\u2138", + "darr;": "\u2193", + "dash;": "\u2010", + "dashv;": "\u22a3", + "dbkarow;": "\u290f", + "dblac;": "\u02dd", + "dcaron;": "\u010f", + "dcy;": "\u0434", + "dd;": "\u2146", + "ddagger;": "\u2021", + "ddarr;": "\u21ca", + "ddotseq;": "\u2a77", + "deg": "\xb0", + "deg;": "\xb0", + "delta;": "\u03b4", + "demptyv;": "\u29b1", + "dfisht;": "\u297f", + "dfr;": "\U0001d521", + "dharl;": "\u21c3", + "dharr;": "\u21c2", + "diam;": "\u22c4", + "diamond;": "\u22c4", + "diamondsuit;": "\u2666", + "diams;": "\u2666", + "die;": "\xa8", + "digamma;": "\u03dd", + "disin;": "\u22f2", + "div;": "\xf7", + "divide": "\xf7", + "divide;": "\xf7", + "divideontimes;": "\u22c7", + "divonx;": "\u22c7", + "djcy;": "\u0452", + "dlcorn;": "\u231e", + "dlcrop;": "\u230d", + "dollar;": "$", + "dopf;": "\U0001d555", + "dot;": "\u02d9", + "doteq;": "\u2250", + "doteqdot;": "\u2251", + "dotminus;": "\u2238", + "dotplus;": "\u2214", + "dotsquare;": "\u22a1", + "doublebarwedge;": "\u2306", + "downarrow;": "\u2193", + "downdownarrows;": "\u21ca", + "downharpoonleft;": "\u21c3", + "downharpoonright;": "\u21c2", + "drbkarow;": "\u2910", + "drcorn;": "\u231f", + "drcrop;": "\u230c", + "dscr;": "\U0001d4b9", + "dscy;": "\u0455", + "dsol;": "\u29f6", + "dstrok;": "\u0111", + "dtdot;": "\u22f1", + "dtri;": "\u25bf", + "dtrif;": "\u25be", + "duarr;": "\u21f5", + "duhar;": "\u296f", + "dwangle;": "\u29a6", + "dzcy;": "\u045f", + "dzigrarr;": "\u27ff", + "eDDot;": "\u2a77", + "eDot;": "\u2251", + "eacute": "\xe9", + "eacute;": "\xe9", + "easter;": "\u2a6e", + "ecaron;": "\u011b", + "ecir;": "\u2256", + "ecirc": "\xea", + "ecirc;": "\xea", + "ecolon;": "\u2255", + "ecy;": "\u044d", + "edot;": "\u0117", + "ee;": "\u2147", + "efDot;": "\u2252", + "efr;": "\U0001d522", + "eg;": "\u2a9a", + "egrave": "\xe8", + "egrave;": "\xe8", + "egs;": "\u2a96", + "egsdot;": "\u2a98", + "el;": "\u2a99", + "elinters;": "\u23e7", + "ell;": "\u2113", + "els;": "\u2a95", + "elsdot;": "\u2a97", + "emacr;": "\u0113", + "empty;": "\u2205", + "emptyset;": "\u2205", + "emptyv;": "\u2205", + "emsp13;": "\u2004", + "emsp14;": "\u2005", + "emsp;": "\u2003", + "eng;": "\u014b", + "ensp;": "\u2002", + "eogon;": "\u0119", + "eopf;": "\U0001d556", + "epar;": "\u22d5", + "eparsl;": "\u29e3", + "eplus;": "\u2a71", + "epsi;": "\u03b5", + "epsilon;": "\u03b5", + "epsiv;": "\u03f5", + "eqcirc;": "\u2256", + "eqcolon;": "\u2255", + "eqsim;": "\u2242", + "eqslantgtr;": "\u2a96", + "eqslantless;": "\u2a95", + "equals;": "=", + "equest;": "\u225f", + "equiv;": "\u2261", + "equivDD;": "\u2a78", + "eqvparsl;": "\u29e5", + "erDot;": "\u2253", + "erarr;": "\u2971", + "escr;": "\u212f", + "esdot;": "\u2250", + "esim;": "\u2242", + "eta;": "\u03b7", + "eth": "\xf0", + "eth;": "\xf0", + "euml": "\xeb", + "euml;": "\xeb", + "euro;": "\u20ac", + "excl;": "!", + "exist;": "\u2203", + "expectation;": "\u2130", + "exponentiale;": "\u2147", + "fallingdotseq;": "\u2252", + "fcy;": "\u0444", + "female;": "\u2640", + "ffilig;": "\ufb03", + "fflig;": "\ufb00", + "ffllig;": "\ufb04", + "ffr;": "\U0001d523", + "filig;": "\ufb01", + "fjlig;": "fj", + "flat;": "\u266d", + "fllig;": "\ufb02", + "fltns;": "\u25b1", + "fnof;": "\u0192", + "fopf;": "\U0001d557", + "forall;": "\u2200", + "fork;": "\u22d4", + "forkv;": "\u2ad9", + "fpartint;": "\u2a0d", + "frac12": "\xbd", + "frac12;": "\xbd", + "frac13;": "\u2153", + "frac14": "\xbc", + "frac14;": "\xbc", + "frac15;": "\u2155", + "frac16;": "\u2159", + "frac18;": "\u215b", + "frac23;": "\u2154", + "frac25;": "\u2156", + 
"frac34": "\xbe", + "frac34;": "\xbe", + "frac35;": "\u2157", + "frac38;": "\u215c", + "frac45;": "\u2158", + "frac56;": "\u215a", + "frac58;": "\u215d", + "frac78;": "\u215e", + "frasl;": "\u2044", + "frown;": "\u2322", + "fscr;": "\U0001d4bb", + "gE;": "\u2267", + "gEl;": "\u2a8c", + "gacute;": "\u01f5", + "gamma;": "\u03b3", + "gammad;": "\u03dd", + "gap;": "\u2a86", + "gbreve;": "\u011f", + "gcirc;": "\u011d", + "gcy;": "\u0433", + "gdot;": "\u0121", + "ge;": "\u2265", + "gel;": "\u22db", + "geq;": "\u2265", + "geqq;": "\u2267", + "geqslant;": "\u2a7e", + "ges;": "\u2a7e", + "gescc;": "\u2aa9", + "gesdot;": "\u2a80", + "gesdoto;": "\u2a82", + "gesdotol;": "\u2a84", + "gesl;": "\u22db\ufe00", + "gesles;": "\u2a94", + "gfr;": "\U0001d524", + "gg;": "\u226b", + "ggg;": "\u22d9", + "gimel;": "\u2137", + "gjcy;": "\u0453", + "gl;": "\u2277", + "glE;": "\u2a92", + "gla;": "\u2aa5", + "glj;": "\u2aa4", + "gnE;": "\u2269", + "gnap;": "\u2a8a", + "gnapprox;": "\u2a8a", + "gne;": "\u2a88", + "gneq;": "\u2a88", + "gneqq;": "\u2269", + "gnsim;": "\u22e7", + "gopf;": "\U0001d558", + "grave;": "`", + "gscr;": "\u210a", + "gsim;": "\u2273", + "gsime;": "\u2a8e", + "gsiml;": "\u2a90", + "gt": ">", + "gt;": ">", + "gtcc;": "\u2aa7", + "gtcir;": "\u2a7a", + "gtdot;": "\u22d7", + "gtlPar;": "\u2995", + "gtquest;": "\u2a7c", + "gtrapprox;": "\u2a86", + "gtrarr;": "\u2978", + "gtrdot;": "\u22d7", + "gtreqless;": "\u22db", + "gtreqqless;": "\u2a8c", + "gtrless;": "\u2277", + "gtrsim;": "\u2273", + "gvertneqq;": "\u2269\ufe00", + "gvnE;": "\u2269\ufe00", + "hArr;": "\u21d4", + "hairsp;": "\u200a", + "half;": "\xbd", + "hamilt;": "\u210b", + "hardcy;": "\u044a", + "harr;": "\u2194", + "harrcir;": "\u2948", + "harrw;": "\u21ad", + "hbar;": "\u210f", + "hcirc;": "\u0125", + "hearts;": "\u2665", + "heartsuit;": "\u2665", + "hellip;": "\u2026", + "hercon;": "\u22b9", + "hfr;": "\U0001d525", + "hksearow;": "\u2925", + "hkswarow;": "\u2926", + "hoarr;": "\u21ff", + "homtht;": "\u223b", + "hookleftarrow;": "\u21a9", + "hookrightarrow;": "\u21aa", + "hopf;": "\U0001d559", + "horbar;": "\u2015", + "hscr;": "\U0001d4bd", + "hslash;": "\u210f", + "hstrok;": "\u0127", + "hybull;": "\u2043", + "hyphen;": "\u2010", + "iacute": "\xed", + "iacute;": "\xed", + "ic;": "\u2063", + "icirc": "\xee", + "icirc;": "\xee", + "icy;": "\u0438", + "iecy;": "\u0435", + "iexcl": "\xa1", + "iexcl;": "\xa1", + "iff;": "\u21d4", + "ifr;": "\U0001d526", + "igrave": "\xec", + "igrave;": "\xec", + "ii;": "\u2148", + "iiiint;": "\u2a0c", + "iiint;": "\u222d", + "iinfin;": "\u29dc", + "iiota;": "\u2129", + "ijlig;": "\u0133", + "imacr;": "\u012b", + "image;": "\u2111", + "imagline;": "\u2110", + "imagpart;": "\u2111", + "imath;": "\u0131", + "imof;": "\u22b7", + "imped;": "\u01b5", + "in;": "\u2208", + "incare;": "\u2105", + "infin;": "\u221e", + "infintie;": "\u29dd", + "inodot;": "\u0131", + "int;": "\u222b", + "intcal;": "\u22ba", + "integers;": "\u2124", + "intercal;": "\u22ba", + "intlarhk;": "\u2a17", + "intprod;": "\u2a3c", + "iocy;": "\u0451", + "iogon;": "\u012f", + "iopf;": "\U0001d55a", + "iota;": "\u03b9", + "iprod;": "\u2a3c", + "iquest": "\xbf", + "iquest;": "\xbf", + "iscr;": "\U0001d4be", + "isin;": "\u2208", + "isinE;": "\u22f9", + "isindot;": "\u22f5", + "isins;": "\u22f4", + "isinsv;": "\u22f3", + "isinv;": "\u2208", + "it;": "\u2062", + "itilde;": "\u0129", + "iukcy;": "\u0456", + "iuml": "\xef", + "iuml;": "\xef", + "jcirc;": "\u0135", + "jcy;": "\u0439", + "jfr;": "\U0001d527", + "jmath;": "\u0237", + "jopf;": "\U0001d55b", 
+ "jscr;": "\U0001d4bf", + "jsercy;": "\u0458", + "jukcy;": "\u0454", + "kappa;": "\u03ba", + "kappav;": "\u03f0", + "kcedil;": "\u0137", + "kcy;": "\u043a", + "kfr;": "\U0001d528", + "kgreen;": "\u0138", + "khcy;": "\u0445", + "kjcy;": "\u045c", + "kopf;": "\U0001d55c", + "kscr;": "\U0001d4c0", + "lAarr;": "\u21da", + "lArr;": "\u21d0", + "lAtail;": "\u291b", + "lBarr;": "\u290e", + "lE;": "\u2266", + "lEg;": "\u2a8b", + "lHar;": "\u2962", + "lacute;": "\u013a", + "laemptyv;": "\u29b4", + "lagran;": "\u2112", + "lambda;": "\u03bb", + "lang;": "\u27e8", + "langd;": "\u2991", + "langle;": "\u27e8", + "lap;": "\u2a85", + "laquo": "\xab", + "laquo;": "\xab", + "larr;": "\u2190", + "larrb;": "\u21e4", + "larrbfs;": "\u291f", + "larrfs;": "\u291d", + "larrhk;": "\u21a9", + "larrlp;": "\u21ab", + "larrpl;": "\u2939", + "larrsim;": "\u2973", + "larrtl;": "\u21a2", + "lat;": "\u2aab", + "latail;": "\u2919", + "late;": "\u2aad", + "lates;": "\u2aad\ufe00", + "lbarr;": "\u290c", + "lbbrk;": "\u2772", + "lbrace;": "{", + "lbrack;": "[", + "lbrke;": "\u298b", + "lbrksld;": "\u298f", + "lbrkslu;": "\u298d", + "lcaron;": "\u013e", + "lcedil;": "\u013c", + "lceil;": "\u2308", + "lcub;": "{", + "lcy;": "\u043b", + "ldca;": "\u2936", + "ldquo;": "\u201c", + "ldquor;": "\u201e", + "ldrdhar;": "\u2967", + "ldrushar;": "\u294b", + "ldsh;": "\u21b2", + "le;": "\u2264", + "leftarrow;": "\u2190", + "leftarrowtail;": "\u21a2", + "leftharpoondown;": "\u21bd", + "leftharpoonup;": "\u21bc", + "leftleftarrows;": "\u21c7", + "leftrightarrow;": "\u2194", + "leftrightarrows;": "\u21c6", + "leftrightharpoons;": "\u21cb", + "leftrightsquigarrow;": "\u21ad", + "leftthreetimes;": "\u22cb", + "leg;": "\u22da", + "leq;": "\u2264", + "leqq;": "\u2266", + "leqslant;": "\u2a7d", + "les;": "\u2a7d", + "lescc;": "\u2aa8", + "lesdot;": "\u2a7f", + "lesdoto;": "\u2a81", + "lesdotor;": "\u2a83", + "lesg;": "\u22da\ufe00", + "lesges;": "\u2a93", + "lessapprox;": "\u2a85", + "lessdot;": "\u22d6", + "lesseqgtr;": "\u22da", + "lesseqqgtr;": "\u2a8b", + "lessgtr;": "\u2276", + "lesssim;": "\u2272", + "lfisht;": "\u297c", + "lfloor;": "\u230a", + "lfr;": "\U0001d529", + "lg;": "\u2276", + "lgE;": "\u2a91", + "lhard;": "\u21bd", + "lharu;": "\u21bc", + "lharul;": "\u296a", + "lhblk;": "\u2584", + "ljcy;": "\u0459", + "ll;": "\u226a", + "llarr;": "\u21c7", + "llcorner;": "\u231e", + "llhard;": "\u296b", + "lltri;": "\u25fa", + "lmidot;": "\u0140", + "lmoust;": "\u23b0", + "lmoustache;": "\u23b0", + "lnE;": "\u2268", + "lnap;": "\u2a89", + "lnapprox;": "\u2a89", + "lne;": "\u2a87", + "lneq;": "\u2a87", + "lneqq;": "\u2268", + "lnsim;": "\u22e6", + "loang;": "\u27ec", + "loarr;": "\u21fd", + "lobrk;": "\u27e6", + "longleftarrow;": "\u27f5", + "longleftrightarrow;": "\u27f7", + "longmapsto;": "\u27fc", + "longrightarrow;": "\u27f6", + "looparrowleft;": "\u21ab", + "looparrowright;": "\u21ac", + "lopar;": "\u2985", + "lopf;": "\U0001d55d", + "loplus;": "\u2a2d", + "lotimes;": "\u2a34", + "lowast;": "\u2217", + "lowbar;": "_", + "loz;": "\u25ca", + "lozenge;": "\u25ca", + "lozf;": "\u29eb", + "lpar;": "(", + "lparlt;": "\u2993", + "lrarr;": "\u21c6", + "lrcorner;": "\u231f", + "lrhar;": "\u21cb", + "lrhard;": "\u296d", + "lrm;": "\u200e", + "lrtri;": "\u22bf", + "lsaquo;": "\u2039", + "lscr;": "\U0001d4c1", + "lsh;": "\u21b0", + "lsim;": "\u2272", + "lsime;": "\u2a8d", + "lsimg;": "\u2a8f", + "lsqb;": "[", + "lsquo;": "\u2018", + "lsquor;": "\u201a", + "lstrok;": "\u0142", + "lt": "<", + "lt;": "<", + "ltcc;": "\u2aa6", + "ltcir;": "\u2a79", + 
"ltdot;": "\u22d6", + "lthree;": "\u22cb", + "ltimes;": "\u22c9", + "ltlarr;": "\u2976", + "ltquest;": "\u2a7b", + "ltrPar;": "\u2996", + "ltri;": "\u25c3", + "ltrie;": "\u22b4", + "ltrif;": "\u25c2", + "lurdshar;": "\u294a", + "luruhar;": "\u2966", + "lvertneqq;": "\u2268\ufe00", + "lvnE;": "\u2268\ufe00", + "mDDot;": "\u223a", + "macr": "\xaf", + "macr;": "\xaf", + "male;": "\u2642", + "malt;": "\u2720", + "maltese;": "\u2720", + "map;": "\u21a6", + "mapsto;": "\u21a6", + "mapstodown;": "\u21a7", + "mapstoleft;": "\u21a4", + "mapstoup;": "\u21a5", + "marker;": "\u25ae", + "mcomma;": "\u2a29", + "mcy;": "\u043c", + "mdash;": "\u2014", + "measuredangle;": "\u2221", + "mfr;": "\U0001d52a", + "mho;": "\u2127", + "micro": "\xb5", + "micro;": "\xb5", + "mid;": "\u2223", + "midast;": "*", + "midcir;": "\u2af0", + "middot": "\xb7", + "middot;": "\xb7", + "minus;": "\u2212", + "minusb;": "\u229f", + "minusd;": "\u2238", + "minusdu;": "\u2a2a", + "mlcp;": "\u2adb", + "mldr;": "\u2026", + "mnplus;": "\u2213", + "models;": "\u22a7", + "mopf;": "\U0001d55e", + "mp;": "\u2213", + "mscr;": "\U0001d4c2", + "mstpos;": "\u223e", + "mu;": "\u03bc", + "multimap;": "\u22b8", + "mumap;": "\u22b8", + "nGg;": "\u22d9\u0338", + "nGt;": "\u226b\u20d2", + "nGtv;": "\u226b\u0338", + "nLeftarrow;": "\u21cd", + "nLeftrightarrow;": "\u21ce", + "nLl;": "\u22d8\u0338", + "nLt;": "\u226a\u20d2", + "nLtv;": "\u226a\u0338", + "nRightarrow;": "\u21cf", + "nVDash;": "\u22af", + "nVdash;": "\u22ae", + "nabla;": "\u2207", + "nacute;": "\u0144", + "nang;": "\u2220\u20d2", + "nap;": "\u2249", + "napE;": "\u2a70\u0338", + "napid;": "\u224b\u0338", + "napos;": "\u0149", + "napprox;": "\u2249", + "natur;": "\u266e", + "natural;": "\u266e", + "naturals;": "\u2115", + "nbsp": "\xa0", + "nbsp;": "\xa0", + "nbump;": "\u224e\u0338", + "nbumpe;": "\u224f\u0338", + "ncap;": "\u2a43", + "ncaron;": "\u0148", + "ncedil;": "\u0146", + "ncong;": "\u2247", + "ncongdot;": "\u2a6d\u0338", + "ncup;": "\u2a42", + "ncy;": "\u043d", + "ndash;": "\u2013", + "ne;": "\u2260", + "neArr;": "\u21d7", + "nearhk;": "\u2924", + "nearr;": "\u2197", + "nearrow;": "\u2197", + "nedot;": "\u2250\u0338", + "nequiv;": "\u2262", + "nesear;": "\u2928", + "nesim;": "\u2242\u0338", + "nexist;": "\u2204", + "nexists;": "\u2204", + "nfr;": "\U0001d52b", + "ngE;": "\u2267\u0338", + "nge;": "\u2271", + "ngeq;": "\u2271", + "ngeqq;": "\u2267\u0338", + "ngeqslant;": "\u2a7e\u0338", + "nges;": "\u2a7e\u0338", + "ngsim;": "\u2275", + "ngt;": "\u226f", + "ngtr;": "\u226f", + "nhArr;": "\u21ce", + "nharr;": "\u21ae", + "nhpar;": "\u2af2", + "ni;": "\u220b", + "nis;": "\u22fc", + "nisd;": "\u22fa", + "niv;": "\u220b", + "njcy;": "\u045a", + "nlArr;": "\u21cd", + "nlE;": "\u2266\u0338", + "nlarr;": "\u219a", + "nldr;": "\u2025", + "nle;": "\u2270", + "nleftarrow;": "\u219a", + "nleftrightarrow;": "\u21ae", + "nleq;": "\u2270", + "nleqq;": "\u2266\u0338", + "nleqslant;": "\u2a7d\u0338", + "nles;": "\u2a7d\u0338", + "nless;": "\u226e", + "nlsim;": "\u2274", + "nlt;": "\u226e", + "nltri;": "\u22ea", + "nltrie;": "\u22ec", + "nmid;": "\u2224", + "nopf;": "\U0001d55f", + "not": "\xac", + "not;": "\xac", + "notin;": "\u2209", + "notinE;": "\u22f9\u0338", + "notindot;": "\u22f5\u0338", + "notinva;": "\u2209", + "notinvb;": "\u22f7", + "notinvc;": "\u22f6", + "notni;": "\u220c", + "notniva;": "\u220c", + "notnivb;": "\u22fe", + "notnivc;": "\u22fd", + "npar;": "\u2226", + "nparallel;": "\u2226", + "nparsl;": "\u2afd\u20e5", + "npart;": "\u2202\u0338", + "npolint;": "\u2a14", + "npr;": 
"\u2280", + "nprcue;": "\u22e0", + "npre;": "\u2aaf\u0338", + "nprec;": "\u2280", + "npreceq;": "\u2aaf\u0338", + "nrArr;": "\u21cf", + "nrarr;": "\u219b", + "nrarrc;": "\u2933\u0338", + "nrarrw;": "\u219d\u0338", + "nrightarrow;": "\u219b", + "nrtri;": "\u22eb", + "nrtrie;": "\u22ed", + "nsc;": "\u2281", + "nsccue;": "\u22e1", + "nsce;": "\u2ab0\u0338", + "nscr;": "\U0001d4c3", + "nshortmid;": "\u2224", + "nshortparallel;": "\u2226", + "nsim;": "\u2241", + "nsime;": "\u2244", + "nsimeq;": "\u2244", + "nsmid;": "\u2224", + "nspar;": "\u2226", + "nsqsube;": "\u22e2", + "nsqsupe;": "\u22e3", + "nsub;": "\u2284", + "nsubE;": "\u2ac5\u0338", + "nsube;": "\u2288", + "nsubset;": "\u2282\u20d2", + "nsubseteq;": "\u2288", + "nsubseteqq;": "\u2ac5\u0338", + "nsucc;": "\u2281", + "nsucceq;": "\u2ab0\u0338", + "nsup;": "\u2285", + "nsupE;": "\u2ac6\u0338", + "nsupe;": "\u2289", + "nsupset;": "\u2283\u20d2", + "nsupseteq;": "\u2289", + "nsupseteqq;": "\u2ac6\u0338", + "ntgl;": "\u2279", + "ntilde": "\xf1", + "ntilde;": "\xf1", + "ntlg;": "\u2278", + "ntriangleleft;": "\u22ea", + "ntrianglelefteq;": "\u22ec", + "ntriangleright;": "\u22eb", + "ntrianglerighteq;": "\u22ed", + "nu;": "\u03bd", + "num;": "#", + "numero;": "\u2116", + "numsp;": "\u2007", + "nvDash;": "\u22ad", + "nvHarr;": "\u2904", + "nvap;": "\u224d\u20d2", + "nvdash;": "\u22ac", + "nvge;": "\u2265\u20d2", + "nvgt;": ">\u20d2", + "nvinfin;": "\u29de", + "nvlArr;": "\u2902", + "nvle;": "\u2264\u20d2", + "nvlt;": "<\u20d2", + "nvltrie;": "\u22b4\u20d2", + "nvrArr;": "\u2903", + "nvrtrie;": "\u22b5\u20d2", + "nvsim;": "\u223c\u20d2", + "nwArr;": "\u21d6", + "nwarhk;": "\u2923", + "nwarr;": "\u2196", + "nwarrow;": "\u2196", + "nwnear;": "\u2927", + "oS;": "\u24c8", + "oacute": "\xf3", + "oacute;": "\xf3", + "oast;": "\u229b", + "ocir;": "\u229a", + "ocirc": "\xf4", + "ocirc;": "\xf4", + "ocy;": "\u043e", + "odash;": "\u229d", + "odblac;": "\u0151", + "odiv;": "\u2a38", + "odot;": "\u2299", + "odsold;": "\u29bc", + "oelig;": "\u0153", + "ofcir;": "\u29bf", + "ofr;": "\U0001d52c", + "ogon;": "\u02db", + "ograve": "\xf2", + "ograve;": "\xf2", + "ogt;": "\u29c1", + "ohbar;": "\u29b5", + "ohm;": "\u03a9", + "oint;": "\u222e", + "olarr;": "\u21ba", + "olcir;": "\u29be", + "olcross;": "\u29bb", + "oline;": "\u203e", + "olt;": "\u29c0", + "omacr;": "\u014d", + "omega;": "\u03c9", + "omicron;": "\u03bf", + "omid;": "\u29b6", + "ominus;": "\u2296", + "oopf;": "\U0001d560", + "opar;": "\u29b7", + "operp;": "\u29b9", + "oplus;": "\u2295", + "or;": "\u2228", + "orarr;": "\u21bb", + "ord;": "\u2a5d", + "order;": "\u2134", + "orderof;": "\u2134", + "ordf": "\xaa", + "ordf;": "\xaa", + "ordm": "\xba", + "ordm;": "\xba", + "origof;": "\u22b6", + "oror;": "\u2a56", + "orslope;": "\u2a57", + "orv;": "\u2a5b", + "oscr;": "\u2134", + "oslash": "\xf8", + "oslash;": "\xf8", + "osol;": "\u2298", + "otilde": "\xf5", + "otilde;": "\xf5", + "otimes;": "\u2297", + "otimesas;": "\u2a36", + "ouml": "\xf6", + "ouml;": "\xf6", + "ovbar;": "\u233d", + "par;": "\u2225", + "para": "\xb6", + "para;": "\xb6", + "parallel;": "\u2225", + "parsim;": "\u2af3", + "parsl;": "\u2afd", + "part;": "\u2202", + "pcy;": "\u043f", + "percnt;": "%", + "period;": ".", + "permil;": "\u2030", + "perp;": "\u22a5", + "pertenk;": "\u2031", + "pfr;": "\U0001d52d", + "phi;": "\u03c6", + "phiv;": "\u03d5", + "phmmat;": "\u2133", + "phone;": "\u260e", + "pi;": "\u03c0", + "pitchfork;": "\u22d4", + "piv;": "\u03d6", + "planck;": "\u210f", + "planckh;": "\u210e", + "plankv;": "\u210f", + "plus;": "+", + 
"plusacir;": "\u2a23", + "plusb;": "\u229e", + "pluscir;": "\u2a22", + "plusdo;": "\u2214", + "plusdu;": "\u2a25", + "pluse;": "\u2a72", + "plusmn": "\xb1", + "plusmn;": "\xb1", + "plussim;": "\u2a26", + "plustwo;": "\u2a27", + "pm;": "\xb1", + "pointint;": "\u2a15", + "popf;": "\U0001d561", + "pound": "\xa3", + "pound;": "\xa3", + "pr;": "\u227a", + "prE;": "\u2ab3", + "prap;": "\u2ab7", + "prcue;": "\u227c", + "pre;": "\u2aaf", + "prec;": "\u227a", + "precapprox;": "\u2ab7", + "preccurlyeq;": "\u227c", + "preceq;": "\u2aaf", + "precnapprox;": "\u2ab9", + "precneqq;": "\u2ab5", + "precnsim;": "\u22e8", + "precsim;": "\u227e", + "prime;": "\u2032", + "primes;": "\u2119", + "prnE;": "\u2ab5", + "prnap;": "\u2ab9", + "prnsim;": "\u22e8", + "prod;": "\u220f", + "profalar;": "\u232e", + "profline;": "\u2312", + "profsurf;": "\u2313", + "prop;": "\u221d", + "propto;": "\u221d", + "prsim;": "\u227e", + "prurel;": "\u22b0", + "pscr;": "\U0001d4c5", + "psi;": "\u03c8", + "puncsp;": "\u2008", + "qfr;": "\U0001d52e", + "qint;": "\u2a0c", + "qopf;": "\U0001d562", + "qprime;": "\u2057", + "qscr;": "\U0001d4c6", + "quaternions;": "\u210d", + "quatint;": "\u2a16", + "quest;": "?", + "questeq;": "\u225f", + "quot": "\"", + "quot;": "\"", + "rAarr;": "\u21db", + "rArr;": "\u21d2", + "rAtail;": "\u291c", + "rBarr;": "\u290f", + "rHar;": "\u2964", + "race;": "\u223d\u0331", + "racute;": "\u0155", + "radic;": "\u221a", + "raemptyv;": "\u29b3", + "rang;": "\u27e9", + "rangd;": "\u2992", + "range;": "\u29a5", + "rangle;": "\u27e9", + "raquo": "\xbb", + "raquo;": "\xbb", + "rarr;": "\u2192", + "rarrap;": "\u2975", + "rarrb;": "\u21e5", + "rarrbfs;": "\u2920", + "rarrc;": "\u2933", + "rarrfs;": "\u291e", + "rarrhk;": "\u21aa", + "rarrlp;": "\u21ac", + "rarrpl;": "\u2945", + "rarrsim;": "\u2974", + "rarrtl;": "\u21a3", + "rarrw;": "\u219d", + "ratail;": "\u291a", + "ratio;": "\u2236", + "rationals;": "\u211a", + "rbarr;": "\u290d", + "rbbrk;": "\u2773", + "rbrace;": "}", + "rbrack;": "]", + "rbrke;": "\u298c", + "rbrksld;": "\u298e", + "rbrkslu;": "\u2990", + "rcaron;": "\u0159", + "rcedil;": "\u0157", + "rceil;": "\u2309", + "rcub;": "}", + "rcy;": "\u0440", + "rdca;": "\u2937", + "rdldhar;": "\u2969", + "rdquo;": "\u201d", + "rdquor;": "\u201d", + "rdsh;": "\u21b3", + "real;": "\u211c", + "realine;": "\u211b", + "realpart;": "\u211c", + "reals;": "\u211d", + "rect;": "\u25ad", + "reg": "\xae", + "reg;": "\xae", + "rfisht;": "\u297d", + "rfloor;": "\u230b", + "rfr;": "\U0001d52f", + "rhard;": "\u21c1", + "rharu;": "\u21c0", + "rharul;": "\u296c", + "rho;": "\u03c1", + "rhov;": "\u03f1", + "rightarrow;": "\u2192", + "rightarrowtail;": "\u21a3", + "rightharpoondown;": "\u21c1", + "rightharpoonup;": "\u21c0", + "rightleftarrows;": "\u21c4", + "rightleftharpoons;": "\u21cc", + "rightrightarrows;": "\u21c9", + "rightsquigarrow;": "\u219d", + "rightthreetimes;": "\u22cc", + "ring;": "\u02da", + "risingdotseq;": "\u2253", + "rlarr;": "\u21c4", + "rlhar;": "\u21cc", + "rlm;": "\u200f", + "rmoust;": "\u23b1", + "rmoustache;": "\u23b1", + "rnmid;": "\u2aee", + "roang;": "\u27ed", + "roarr;": "\u21fe", + "robrk;": "\u27e7", + "ropar;": "\u2986", + "ropf;": "\U0001d563", + "roplus;": "\u2a2e", + "rotimes;": "\u2a35", + "rpar;": ")", + "rpargt;": "\u2994", + "rppolint;": "\u2a12", + "rrarr;": "\u21c9", + "rsaquo;": "\u203a", + "rscr;": "\U0001d4c7", + "rsh;": "\u21b1", + "rsqb;": "]", + "rsquo;": "\u2019", + "rsquor;": "\u2019", + "rthree;": "\u22cc", + "rtimes;": "\u22ca", + "rtri;": "\u25b9", + "rtrie;": "\u22b5", + 
"rtrif;": "\u25b8", + "rtriltri;": "\u29ce", + "ruluhar;": "\u2968", + "rx;": "\u211e", + "sacute;": "\u015b", + "sbquo;": "\u201a", + "sc;": "\u227b", + "scE;": "\u2ab4", + "scap;": "\u2ab8", + "scaron;": "\u0161", + "sccue;": "\u227d", + "sce;": "\u2ab0", + "scedil;": "\u015f", + "scirc;": "\u015d", + "scnE;": "\u2ab6", + "scnap;": "\u2aba", + "scnsim;": "\u22e9", + "scpolint;": "\u2a13", + "scsim;": "\u227f", + "scy;": "\u0441", + "sdot;": "\u22c5", + "sdotb;": "\u22a1", + "sdote;": "\u2a66", + "seArr;": "\u21d8", + "searhk;": "\u2925", + "searr;": "\u2198", + "searrow;": "\u2198", + "sect": "\xa7", + "sect;": "\xa7", + "semi;": ";", + "seswar;": "\u2929", + "setminus;": "\u2216", + "setmn;": "\u2216", + "sext;": "\u2736", + "sfr;": "\U0001d530", + "sfrown;": "\u2322", + "sharp;": "\u266f", + "shchcy;": "\u0449", + "shcy;": "\u0448", + "shortmid;": "\u2223", + "shortparallel;": "\u2225", + "shy": "\xad", + "shy;": "\xad", + "sigma;": "\u03c3", + "sigmaf;": "\u03c2", + "sigmav;": "\u03c2", + "sim;": "\u223c", + "simdot;": "\u2a6a", + "sime;": "\u2243", + "simeq;": "\u2243", + "simg;": "\u2a9e", + "simgE;": "\u2aa0", + "siml;": "\u2a9d", + "simlE;": "\u2a9f", + "simne;": "\u2246", + "simplus;": "\u2a24", + "simrarr;": "\u2972", + "slarr;": "\u2190", + "smallsetminus;": "\u2216", + "smashp;": "\u2a33", + "smeparsl;": "\u29e4", + "smid;": "\u2223", + "smile;": "\u2323", + "smt;": "\u2aaa", + "smte;": "\u2aac", + "smtes;": "\u2aac\ufe00", + "softcy;": "\u044c", + "sol;": "/", + "solb;": "\u29c4", + "solbar;": "\u233f", + "sopf;": "\U0001d564", + "spades;": "\u2660", + "spadesuit;": "\u2660", + "spar;": "\u2225", + "sqcap;": "\u2293", + "sqcaps;": "\u2293\ufe00", + "sqcup;": "\u2294", + "sqcups;": "\u2294\ufe00", + "sqsub;": "\u228f", + "sqsube;": "\u2291", + "sqsubset;": "\u228f", + "sqsubseteq;": "\u2291", + "sqsup;": "\u2290", + "sqsupe;": "\u2292", + "sqsupset;": "\u2290", + "sqsupseteq;": "\u2292", + "squ;": "\u25a1", + "square;": "\u25a1", + "squarf;": "\u25aa", + "squf;": "\u25aa", + "srarr;": "\u2192", + "sscr;": "\U0001d4c8", + "ssetmn;": "\u2216", + "ssmile;": "\u2323", + "sstarf;": "\u22c6", + "star;": "\u2606", + "starf;": "\u2605", + "straightepsilon;": "\u03f5", + "straightphi;": "\u03d5", + "strns;": "\xaf", + "sub;": "\u2282", + "subE;": "\u2ac5", + "subdot;": "\u2abd", + "sube;": "\u2286", + "subedot;": "\u2ac3", + "submult;": "\u2ac1", + "subnE;": "\u2acb", + "subne;": "\u228a", + "subplus;": "\u2abf", + "subrarr;": "\u2979", + "subset;": "\u2282", + "subseteq;": "\u2286", + "subseteqq;": "\u2ac5", + "subsetneq;": "\u228a", + "subsetneqq;": "\u2acb", + "subsim;": "\u2ac7", + "subsub;": "\u2ad5", + "subsup;": "\u2ad3", + "succ;": "\u227b", + "succapprox;": "\u2ab8", + "succcurlyeq;": "\u227d", + "succeq;": "\u2ab0", + "succnapprox;": "\u2aba", + "succneqq;": "\u2ab6", + "succnsim;": "\u22e9", + "succsim;": "\u227f", + "sum;": "\u2211", + "sung;": "\u266a", + "sup1": "\xb9", + "sup1;": "\xb9", + "sup2": "\xb2", + "sup2;": "\xb2", + "sup3": "\xb3", + "sup3;": "\xb3", + "sup;": "\u2283", + "supE;": "\u2ac6", + "supdot;": "\u2abe", + "supdsub;": "\u2ad8", + "supe;": "\u2287", + "supedot;": "\u2ac4", + "suphsol;": "\u27c9", + "suphsub;": "\u2ad7", + "suplarr;": "\u297b", + "supmult;": "\u2ac2", + "supnE;": "\u2acc", + "supne;": "\u228b", + "supplus;": "\u2ac0", + "supset;": "\u2283", + "supseteq;": "\u2287", + "supseteqq;": "\u2ac6", + "supsetneq;": "\u228b", + "supsetneqq;": "\u2acc", + "supsim;": "\u2ac8", + "supsub;": "\u2ad4", + "supsup;": "\u2ad6", + "swArr;": "\u21d9", + 
"swarhk;": "\u2926", + "swarr;": "\u2199", + "swarrow;": "\u2199", + "swnwar;": "\u292a", + "szlig": "\xdf", + "szlig;": "\xdf", + "target;": "\u2316", + "tau;": "\u03c4", + "tbrk;": "\u23b4", + "tcaron;": "\u0165", + "tcedil;": "\u0163", + "tcy;": "\u0442", + "tdot;": "\u20db", + "telrec;": "\u2315", + "tfr;": "\U0001d531", + "there4;": "\u2234", + "therefore;": "\u2234", + "theta;": "\u03b8", + "thetasym;": "\u03d1", + "thetav;": "\u03d1", + "thickapprox;": "\u2248", + "thicksim;": "\u223c", + "thinsp;": "\u2009", + "thkap;": "\u2248", + "thksim;": "\u223c", + "thorn": "\xfe", + "thorn;": "\xfe", + "tilde;": "\u02dc", + "times": "\xd7", + "times;": "\xd7", + "timesb;": "\u22a0", + "timesbar;": "\u2a31", + "timesd;": "\u2a30", + "tint;": "\u222d", + "toea;": "\u2928", + "top;": "\u22a4", + "topbot;": "\u2336", + "topcir;": "\u2af1", + "topf;": "\U0001d565", + "topfork;": "\u2ada", + "tosa;": "\u2929", + "tprime;": "\u2034", + "trade;": "\u2122", + "triangle;": "\u25b5", + "triangledown;": "\u25bf", + "triangleleft;": "\u25c3", + "trianglelefteq;": "\u22b4", + "triangleq;": "\u225c", + "triangleright;": "\u25b9", + "trianglerighteq;": "\u22b5", + "tridot;": "\u25ec", + "trie;": "\u225c", + "triminus;": "\u2a3a", + "triplus;": "\u2a39", + "trisb;": "\u29cd", + "tritime;": "\u2a3b", + "trpezium;": "\u23e2", + "tscr;": "\U0001d4c9", + "tscy;": "\u0446", + "tshcy;": "\u045b", + "tstrok;": "\u0167", + "twixt;": "\u226c", + "twoheadleftarrow;": "\u219e", + "twoheadrightarrow;": "\u21a0", + "uArr;": "\u21d1", + "uHar;": "\u2963", + "uacute": "\xfa", + "uacute;": "\xfa", + "uarr;": "\u2191", + "ubrcy;": "\u045e", + "ubreve;": "\u016d", + "ucirc": "\xfb", + "ucirc;": "\xfb", + "ucy;": "\u0443", + "udarr;": "\u21c5", + "udblac;": "\u0171", + "udhar;": "\u296e", + "ufisht;": "\u297e", + "ufr;": "\U0001d532", + "ugrave": "\xf9", + "ugrave;": "\xf9", + "uharl;": "\u21bf", + "uharr;": "\u21be", + "uhblk;": "\u2580", + "ulcorn;": "\u231c", + "ulcorner;": "\u231c", + "ulcrop;": "\u230f", + "ultri;": "\u25f8", + "umacr;": "\u016b", + "uml": "\xa8", + "uml;": "\xa8", + "uogon;": "\u0173", + "uopf;": "\U0001d566", + "uparrow;": "\u2191", + "updownarrow;": "\u2195", + "upharpoonleft;": "\u21bf", + "upharpoonright;": "\u21be", + "uplus;": "\u228e", + "upsi;": "\u03c5", + "upsih;": "\u03d2", + "upsilon;": "\u03c5", + "upuparrows;": "\u21c8", + "urcorn;": "\u231d", + "urcorner;": "\u231d", + "urcrop;": "\u230e", + "uring;": "\u016f", + "urtri;": "\u25f9", + "uscr;": "\U0001d4ca", + "utdot;": "\u22f0", + "utilde;": "\u0169", + "utri;": "\u25b5", + "utrif;": "\u25b4", + "uuarr;": "\u21c8", + "uuml": "\xfc", + "uuml;": "\xfc", + "uwangle;": "\u29a7", + "vArr;": "\u21d5", + "vBar;": "\u2ae8", + "vBarv;": "\u2ae9", + "vDash;": "\u22a8", + "vangrt;": "\u299c", + "varepsilon;": "\u03f5", + "varkappa;": "\u03f0", + "varnothing;": "\u2205", + "varphi;": "\u03d5", + "varpi;": "\u03d6", + "varpropto;": "\u221d", + "varr;": "\u2195", + "varrho;": "\u03f1", + "varsigma;": "\u03c2", + "varsubsetneq;": "\u228a\ufe00", + "varsubsetneqq;": "\u2acb\ufe00", + "varsupsetneq;": "\u228b\ufe00", + "varsupsetneqq;": "\u2acc\ufe00", + "vartheta;": "\u03d1", + "vartriangleleft;": "\u22b2", + "vartriangleright;": "\u22b3", + "vcy;": "\u0432", + "vdash;": "\u22a2", + "vee;": "\u2228", + "veebar;": "\u22bb", + "veeeq;": "\u225a", + "vellip;": "\u22ee", + "verbar;": "|", + "vert;": "|", + "vfr;": "\U0001d533", + "vltri;": "\u22b2", + "vnsub;": "\u2282\u20d2", + "vnsup;": "\u2283\u20d2", + "vopf;": "\U0001d567", + "vprop;": "\u221d", + 
"vrtri;": "\u22b3", + "vscr;": "\U0001d4cb", + "vsubnE;": "\u2acb\ufe00", + "vsubne;": "\u228a\ufe00", + "vsupnE;": "\u2acc\ufe00", + "vsupne;": "\u228b\ufe00", + "vzigzag;": "\u299a", + "wcirc;": "\u0175", + "wedbar;": "\u2a5f", + "wedge;": "\u2227", + "wedgeq;": "\u2259", + "weierp;": "\u2118", + "wfr;": "\U0001d534", + "wopf;": "\U0001d568", + "wp;": "\u2118", + "wr;": "\u2240", + "wreath;": "\u2240", + "wscr;": "\U0001d4cc", + "xcap;": "\u22c2", + "xcirc;": "\u25ef", + "xcup;": "\u22c3", + "xdtri;": "\u25bd", + "xfr;": "\U0001d535", + "xhArr;": "\u27fa", + "xharr;": "\u27f7", + "xi;": "\u03be", + "xlArr;": "\u27f8", + "xlarr;": "\u27f5", + "xmap;": "\u27fc", + "xnis;": "\u22fb", + "xodot;": "\u2a00", + "xopf;": "\U0001d569", + "xoplus;": "\u2a01", + "xotime;": "\u2a02", + "xrArr;": "\u27f9", + "xrarr;": "\u27f6", + "xscr;": "\U0001d4cd", + "xsqcup;": "\u2a06", + "xuplus;": "\u2a04", + "xutri;": "\u25b3", + "xvee;": "\u22c1", + "xwedge;": "\u22c0", + "yacute": "\xfd", + "yacute;": "\xfd", + "yacy;": "\u044f", + "ycirc;": "\u0177", + "ycy;": "\u044b", + "yen": "\xa5", + "yen;": "\xa5", + "yfr;": "\U0001d536", + "yicy;": "\u0457", + "yopf;": "\U0001d56a", + "yscr;": "\U0001d4ce", + "yucy;": "\u044e", + "yuml": "\xff", + "yuml;": "\xff", + "zacute;": "\u017a", + "zcaron;": "\u017e", + "zcy;": "\u0437", + "zdot;": "\u017c", + "zeetrf;": "\u2128", + "zeta;": "\u03b6", + "zfr;": "\U0001d537", + "zhcy;": "\u0436", + "zigrarr;": "\u21dd", + "zopf;": "\U0001d56b", + "zscr;": "\U0001d4cf", + "zwj;": "\u200d", + "zwnj;": "\u200c", +} + +replacementCharacters = { + 0x0: "\uFFFD", + 0x0d: "\u000D", + 0x80: "\u20AC", + 0x81: "\u0081", + 0x82: "\u201A", + 0x83: "\u0192", + 0x84: "\u201E", + 0x85: "\u2026", + 0x86: "\u2020", + 0x87: "\u2021", + 0x88: "\u02C6", + 0x89: "\u2030", + 0x8A: "\u0160", + 0x8B: "\u2039", + 0x8C: "\u0152", + 0x8D: "\u008D", + 0x8E: "\u017D", + 0x8F: "\u008F", + 0x90: "\u0090", + 0x91: "\u2018", + 0x92: "\u2019", + 0x93: "\u201C", + 0x94: "\u201D", + 0x95: "\u2022", + 0x96: "\u2013", + 0x97: "\u2014", + 0x98: "\u02DC", + 0x99: "\u2122", + 0x9A: "\u0161", + 0x9B: "\u203A", + 0x9C: "\u0153", + 0x9D: "\u009D", + 0x9E: "\u017E", + 0x9F: "\u0178", +} + +tokenTypes = { + "Doctype": 0, + "Characters": 1, + "SpaceCharacters": 2, + "StartTag": 3, + "EndTag": 4, + "EmptyTag": 5, + "Comment": 6, + "ParseError": 7 +} + +tagTokenTypes = frozenset([tokenTypes["StartTag"], tokenTypes["EndTag"], + tokenTypes["EmptyTag"]]) + + +prefixes = {v: k for k, v in namespaces.items()} +prefixes["http://www.w3.org/1998/Math/MathML"] = "math" + + +class DataLossWarning(UserWarning): + """Raised when the current tree is unable to represent the input data""" + pass + + +class _ReparseException(Exception): + pass diff --git a/lib/python3.11/site-packages/pip/_vendor/chardet/metadata/__init__.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/__init__.py similarity index 100% rename from lib/python3.11/site-packages/pip/_vendor/chardet/metadata/__init__.py rename to python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/__init__.py diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/alphabeticalattributes.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/alphabeticalattributes.py new file mode 100644 index 0000000..5ba926e --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/alphabeticalattributes.py @@ -0,0 +1,29 @@ +from __future__ import absolute_import, division, unicode_literals + +from . 
import base + + from collections import OrderedDict + + + def _attr_key(attr): + """Return an appropriate key for an attribute for sorting + + Attributes have a namespace that can be either ``None`` or a string. We + can't compare the two because they're different types, so we convert + ``None`` to an empty string first. + + """ + return (attr[0][0] or ''), attr[0][1] + + + class Filter(base.Filter): + """Alphabetizes attributes for elements""" + def __iter__(self): + for token in base.Filter.__iter__(self): + if token["type"] in ("StartTag", "EmptyTag"): + attrs = OrderedDict() + for name, value in sorted(token["data"].items(), + key=_attr_key): + attrs[name] = value + token["data"] = attrs + yield token diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/base.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/base.py new file mode 100644 index 0000000..c7dbaed --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/base.py @@ -0,0 +1,12 @@ +from __future__ import absolute_import, division, unicode_literals + + + class Filter(object): + def __init__(self, source): + self.source = source + + def __iter__(self): + return iter(self.source) + + def __getattr__(self, name): + return getattr(self.source, name) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/inject_meta_charset.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/inject_meta_charset.py new file mode 100644 index 0000000..aefb5c8 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/inject_meta_charset.py @@ -0,0 +1,73 @@ +from __future__ import absolute_import, division, unicode_literals + +from . import base + + + class Filter(base.Filter): + """Injects ``<meta charset=ENCODING>`` tag into head of document""" + def __init__(self, source, encoding): + """Creates a Filter + + :arg source: the source token stream + + :arg encoding: the encoding to set + + """ + base.Filter.__init__(self, source) + self.encoding = encoding + + def __iter__(self): + state = "pre_head" + meta_found = (self.encoding is None) + pending = [] + + for token in base.Filter.__iter__(self): + type = token["type"] + if type == "StartTag": + if token["name"].lower() == "head": + state = "in_head" + + elif type == "EmptyTag": + if token["name"].lower() == "meta": + # replace charset with actual encoding + has_http_equiv_content_type = False + for (namespace, name), value in token["data"].items(): + if namespace is not None: + continue + elif name.lower() == 'charset': + token["data"][(namespace, name)] = self.encoding + meta_found = True + break + elif name == 'http-equiv' and value.lower() == 'content-type': + has_http_equiv_content_type = True + else: + if has_http_equiv_content_type and (None, "content") in token["data"]: + token["data"][(None, "content")] = 'text/html; charset=%s' % self.encoding + meta_found = True + + elif token["name"].lower() == "head" and not meta_found: + # insert meta into empty head + yield {"type": "StartTag", "name": "head", + "data": token["data"]} + yield {"type": "EmptyTag", "name": "meta", + "data": {(None, "charset"): self.encoding}} + yield {"type": "EndTag", "name": "head"} + meta_found = True + continue + + elif type == "EndTag": + if token["name"].lower() == "head" and pending: + # insert meta into head (if necessary) and flush pending queue + yield pending.pop(0) + if not meta_found: + yield {"type": "EmptyTag", "name": "meta", + "data": {(None, "charset"): self.encoding}} + while pending: + yield
pending.pop(0) + meta_found = True + state = "post_head" + + if state == "in_head": + pending.append(token) + else: + yield token diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/lint.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/lint.py new file mode 100644 index 0000000..fcc07ee --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/lint.py @@ -0,0 +1,93 @@ +from __future__ import absolute_import, division, unicode_literals + +from pip._vendor.six import text_type + +from . import base +from ..constants import namespaces, voidElements + +from ..constants import spaceCharacters +spaceCharacters = "".join(spaceCharacters) + + +class Filter(base.Filter): + """Lints the token stream for errors + + If it finds any errors, it'll raise an ``AssertionError``. + + """ + def __init__(self, source, require_matching_tags=True): + """Creates a Filter + + :arg source: the source token stream + + :arg require_matching_tags: whether or not to require matching tags + + """ + super(Filter, self).__init__(source) + self.require_matching_tags = require_matching_tags + + def __iter__(self): + open_elements = [] + for token in base.Filter.__iter__(self): + type = token["type"] + if type in ("StartTag", "EmptyTag"): + namespace = token["namespace"] + name = token["name"] + assert namespace is None or isinstance(namespace, text_type) + assert namespace != "" + assert isinstance(name, text_type) + assert name != "" + assert isinstance(token["data"], dict) + if (not namespace or namespace == namespaces["html"]) and name in voidElements: + assert type == "EmptyTag" + else: + assert type == "StartTag" + if type == "StartTag" and self.require_matching_tags: + open_elements.append((namespace, name)) + for (namespace, name), value in token["data"].items(): + assert namespace is None or isinstance(namespace, text_type) + assert namespace != "" + assert isinstance(name, text_type) + assert name != "" + assert isinstance(value, text_type) + + elif type == "EndTag": + namespace = token["namespace"] + name = token["name"] + assert namespace is None or isinstance(namespace, text_type) + assert namespace != "" + assert isinstance(name, text_type) + assert name != "" + if (not namespace or namespace == namespaces["html"]) and name in voidElements: + assert False, "Void element reported as EndTag token: %(tag)s" % {"tag": name} + elif self.require_matching_tags: + start = open_elements.pop() + assert start == (namespace, name) + + elif type == "Comment": + data = token["data"] + assert isinstance(data, text_type) + + elif type in ("Characters", "SpaceCharacters"): + data = token["data"] + assert isinstance(data, text_type) + assert data != "" + if type == "SpaceCharacters": + assert data.strip(spaceCharacters) == "" + + elif type == "Doctype": + name = token["name"] + assert name is None or isinstance(name, text_type) + assert token["publicId"] is None or isinstance(name, text_type) + assert token["systemId"] is None or isinstance(name, text_type) + + elif type == "Entity": + assert isinstance(token["name"], text_type) + + elif type == "SerializerError": + assert isinstance(token["data"], text_type) + + else: + assert False, "Unknown token type: %(type)s" % {"type": type} + + yield token diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/optionaltags.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/optionaltags.py new file mode 100644 index 0000000..4a86501 --- /dev/null +++ 
b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/optionaltags.py @@ -0,0 +1,207 @@ +from __future__ import absolute_import, division, unicode_literals + +from . import base + + +class Filter(base.Filter): + """Removes optional tags from the token stream""" + def slider(self): + previous1 = previous2 = None + for token in self.source: + if previous1 is not None: + yield previous2, previous1, token + previous2 = previous1 + previous1 = token + if previous1 is not None: + yield previous2, previous1, None + + def __iter__(self): + for previous, token, next in self.slider(): + type = token["type"] + if type == "StartTag": + if (token["data"] or + not self.is_optional_start(token["name"], previous, next)): + yield token + elif type == "EndTag": + if not self.is_optional_end(token["name"], next): + yield token + else: + yield token + + def is_optional_start(self, tagname, previous, next): + type = next and next["type"] or None + if tagname in 'html': + # An html element's start tag may be omitted if the first thing + # inside the html element is not a space character or a comment. + return type not in ("Comment", "SpaceCharacters") + elif tagname == 'head': + # A head element's start tag may be omitted if the first thing + # inside the head element is an element. + # XXX: we also omit the start tag if the head element is empty + if type in ("StartTag", "EmptyTag"): + return True + elif type == "EndTag": + return next["name"] == "head" + elif tagname == 'body': + # A body element's start tag may be omitted if the first thing + # inside the body element is not a space character or a comment, + # except if the first thing inside the body element is a script + # or style element and the node immediately preceding the body + # element is a head element whose end tag has been omitted. + if type in ("Comment", "SpaceCharacters"): + return False + elif type == "StartTag": + # XXX: we do not look at the preceding event, so we never omit + # the body element's start tag if it's followed by a script or + # a style element. + return next["name"] not in ('script', 'style') + else: + return True + elif tagname == 'colgroup': + # A colgroup element's start tag may be omitted if the first thing + # inside the colgroup element is a col element, and if the element + # is not immediately preceded by another colgroup element whose + # end tag has been omitted. + if type in ("StartTag", "EmptyTag"): + # XXX: we do not look at the preceding event, so instead we never + # omit the colgroup element's end tag when it is immediately + # followed by another colgroup element. See is_optional_end. + return next["name"] == "col" + else: + return False + elif tagname == 'tbody': + # A tbody element's start tag may be omitted if the first thing + # inside the tbody element is a tr element, and if the element is + # not immediately preceded by a tbody, thead, or tfoot element + # whose end tag has been omitted. + if type == "StartTag": + # omit the thead and tfoot elements' end tag when they are + # immediately followed by a tbody element. See is_optional_end. + if previous and previous['type'] == 'EndTag' and \ + previous['name'] in ('tbody', 'thead', 'tfoot'): + return False + return next["name"] == 'tr' + else: + return False + return False + + def is_optional_end(self, tagname, next): + type = next and next["type"] or None + if tagname in ('html', 'head', 'body'): + # An html element's end tag may be omitted if the html element + # is not immediately followed by a space character or a comment. 
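+ # (a "next" of None, i.e. end of the stream, likewise allows omission)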
+ return type not in ("Comment", "SpaceCharacters") + elif tagname in ('li', 'optgroup', 'tr'): + # A li element's end tag may be omitted if the li element is + # immediately followed by another li element or if there is + # no more content in the parent element. + # An optgroup element's end tag may be omitted if the optgroup + # element is immediately followed by another optgroup element, + # or if there is no more content in the parent element. + # A tr element's end tag may be omitted if the tr element is + # immediately followed by another tr element, or if there is + # no more content in the parent element. + if type == "StartTag": + return next["name"] == tagname + else: + return type == "EndTag" or type is None + elif tagname in ('dt', 'dd'): + # A dt element's end tag may be omitted if the dt element is + # immediately followed by another dt element or a dd element. + # A dd element's end tag may be omitted if the dd element is + # immediately followed by another dd element or a dt element, + # or if there is no more content in the parent element. + if type == "StartTag": + return next["name"] in ('dt', 'dd') + elif tagname == 'dd': + return type == "EndTag" or type is None + else: + return False + elif tagname == 'p': + # A p element's end tag may be omitted if the p element is + # immediately followed by an address, article, aside, + # blockquote, datagrid, dialog, dir, div, dl, fieldset, + # footer, form, h1, h2, h3, h4, h5, h6, header, hr, menu, + # nav, ol, p, pre, section, table, or ul, element, or if + # there is no more content in the parent element. + if type in ("StartTag", "EmptyTag"): + return next["name"] in ('address', 'article', 'aside', + 'blockquote', 'datagrid', 'dialog', + 'dir', 'div', 'dl', 'fieldset', 'footer', + 'form', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', + 'header', 'hr', 'menu', 'nav', 'ol', + 'p', 'pre', 'section', 'table', 'ul') + else: + return type == "EndTag" or type is None + elif tagname == 'option': + # An option element's end tag may be omitted if the option + # element is immediately followed by another option element, + # or if it is immediately followed by an optgroup + # element, or if there is no more content in the parent + # element. + if type == "StartTag": + return next["name"] in ('option', 'optgroup') + else: + return type == "EndTag" or type is None + elif tagname in ('rt', 'rp'): + # An rt element's end tag may be omitted if the rt element is + # immediately followed by an rt or rp element, or if there is + # no more content in the parent element. + # An rp element's end tag may be omitted if the rp element is + # immediately followed by an rt or rp element, or if there is + # no more content in the parent element. + if type == "StartTag": + return next["name"] in ('rt', 'rp') + else: + return type == "EndTag" or type is None + elif tagname == 'colgroup': + # A colgroup element's end tag may be omitted if the colgroup + # element is not immediately followed by a space character or + # a comment. + if type in ("Comment", "SpaceCharacters"): + return False + elif type == "StartTag": + # XXX: we also look for an immediately following colgroup + # element. See is_optional_start. + return next["name"] != 'colgroup' + else: + return True + elif tagname in ('thead', 'tbody'): + # A thead element's end tag may be omitted if the thead element + # is immediately followed by a tbody or tfoot element. 
+ # A tbody element's end tag may be omitted if the tbody element + # is immediately followed by a tbody or tfoot element, or if + # there is no more content in the parent element. + # A tfoot element's end tag may be omitted if the tfoot element + # is immediately followed by a tbody element, or if there is no + # more content in the parent element. + # XXX: we never omit the end tag when the following element is + # a tbody. See is_optional_start. + if type == "StartTag": + return next["name"] in ['tbody', 'tfoot'] + elif tagname == 'tbody': + return type == "EndTag" or type is None + else: + return False + elif tagname == 'tfoot': + # A tfoot element's end tag may be omitted if the tfoot element + # is immediately followed by a tbody element, or if there is no + # more content in the parent element. + # XXX: we never omit the end tag when the following element is + # a tbody. See is_optional_start. + if type == "StartTag": + return next["name"] == 'tbody' + else: + return type == "EndTag" or type is None + elif tagname in ('td', 'th'): + # A td element's end tag may be omitted if the td element is + # immediately followed by a td or th element, or if there is + # no more content in the parent element. + # A th element's end tag may be omitted if the th element is + # immediately followed by a td or th element, or if there is + # no more content in the parent element. + if type == "StartTag": + return next["name"] in ('td', 'th') + else: + return type == "EndTag" or type is None + return False diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/sanitizer.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/sanitizer.py new file mode 100644 index 0000000..aa7431d --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/sanitizer.py @@ -0,0 +1,916 @@ +"""Deprecated from html5lib 1.1. + + See `here <https://github.com/html5lib/html5lib-python/issues/443>`_ for + information about its deprecation; `Bleach <https://github.com/mozilla/bleach>`_ + is recommended as a replacement. Please let us know in the aforementioned issue + if Bleach is unsuitable for your needs. + + """ + from __future__ import absolute_import, division, unicode_literals + + import re + import warnings + from xml.sax.saxutils import escape, unescape + + from pip._vendor.six.moves import urllib_parse as urlparse + + from .
import base +from ..constants import namespaces, prefixes + +__all__ = ["Filter"] + + +_deprecation_msg = ( + "html5lib's sanitizer is deprecated; see " + + "https://github.com/html5lib/html5lib-python/issues/443 and please let " + + "us know if Bleach is unsuitable for your needs" +) + +warnings.warn(_deprecation_msg, DeprecationWarning) + +allowed_elements = frozenset(( + (namespaces['html'], 'a'), + (namespaces['html'], 'abbr'), + (namespaces['html'], 'acronym'), + (namespaces['html'], 'address'), + (namespaces['html'], 'area'), + (namespaces['html'], 'article'), + (namespaces['html'], 'aside'), + (namespaces['html'], 'audio'), + (namespaces['html'], 'b'), + (namespaces['html'], 'big'), + (namespaces['html'], 'blockquote'), + (namespaces['html'], 'br'), + (namespaces['html'], 'button'), + (namespaces['html'], 'canvas'), + (namespaces['html'], 'caption'), + (namespaces['html'], 'center'), + (namespaces['html'], 'cite'), + (namespaces['html'], 'code'), + (namespaces['html'], 'col'), + (namespaces['html'], 'colgroup'), + (namespaces['html'], 'command'), + (namespaces['html'], 'datagrid'), + (namespaces['html'], 'datalist'), + (namespaces['html'], 'dd'), + (namespaces['html'], 'del'), + (namespaces['html'], 'details'), + (namespaces['html'], 'dfn'), + (namespaces['html'], 'dialog'), + (namespaces['html'], 'dir'), + (namespaces['html'], 'div'), + (namespaces['html'], 'dl'), + (namespaces['html'], 'dt'), + (namespaces['html'], 'em'), + (namespaces['html'], 'event-source'), + (namespaces['html'], 'fieldset'), + (namespaces['html'], 'figcaption'), + (namespaces['html'], 'figure'), + (namespaces['html'], 'footer'), + (namespaces['html'], 'font'), + (namespaces['html'], 'form'), + (namespaces['html'], 'header'), + (namespaces['html'], 'h1'), + (namespaces['html'], 'h2'), + (namespaces['html'], 'h3'), + (namespaces['html'], 'h4'), + (namespaces['html'], 'h5'), + (namespaces['html'], 'h6'), + (namespaces['html'], 'hr'), + (namespaces['html'], 'i'), + (namespaces['html'], 'img'), + (namespaces['html'], 'input'), + (namespaces['html'], 'ins'), + (namespaces['html'], 'keygen'), + (namespaces['html'], 'kbd'), + (namespaces['html'], 'label'), + (namespaces['html'], 'legend'), + (namespaces['html'], 'li'), + (namespaces['html'], 'm'), + (namespaces['html'], 'map'), + (namespaces['html'], 'menu'), + (namespaces['html'], 'meter'), + (namespaces['html'], 'multicol'), + (namespaces['html'], 'nav'), + (namespaces['html'], 'nextid'), + (namespaces['html'], 'ol'), + (namespaces['html'], 'output'), + (namespaces['html'], 'optgroup'), + (namespaces['html'], 'option'), + (namespaces['html'], 'p'), + (namespaces['html'], 'pre'), + (namespaces['html'], 'progress'), + (namespaces['html'], 'q'), + (namespaces['html'], 's'), + (namespaces['html'], 'samp'), + (namespaces['html'], 'section'), + (namespaces['html'], 'select'), + (namespaces['html'], 'small'), + (namespaces['html'], 'sound'), + (namespaces['html'], 'source'), + (namespaces['html'], 'spacer'), + (namespaces['html'], 'span'), + (namespaces['html'], 'strike'), + (namespaces['html'], 'strong'), + (namespaces['html'], 'sub'), + (namespaces['html'], 'sup'), + (namespaces['html'], 'table'), + (namespaces['html'], 'tbody'), + (namespaces['html'], 'td'), + (namespaces['html'], 'textarea'), + (namespaces['html'], 'time'), + (namespaces['html'], 'tfoot'), + (namespaces['html'], 'th'), + (namespaces['html'], 'thead'), + (namespaces['html'], 'tr'), + (namespaces['html'], 'tt'), + (namespaces['html'], 'u'), + (namespaces['html'], 'ul'), + (namespaces['html'], 'var'), + 
(namespaces['html'], 'video'), + (namespaces['mathml'], 'maction'), + (namespaces['mathml'], 'math'), + (namespaces['mathml'], 'merror'), + (namespaces['mathml'], 'mfrac'), + (namespaces['mathml'], 'mi'), + (namespaces['mathml'], 'mmultiscripts'), + (namespaces['mathml'], 'mn'), + (namespaces['mathml'], 'mo'), + (namespaces['mathml'], 'mover'), + (namespaces['mathml'], 'mpadded'), + (namespaces['mathml'], 'mphantom'), + (namespaces['mathml'], 'mprescripts'), + (namespaces['mathml'], 'mroot'), + (namespaces['mathml'], 'mrow'), + (namespaces['mathml'], 'mspace'), + (namespaces['mathml'], 'msqrt'), + (namespaces['mathml'], 'mstyle'), + (namespaces['mathml'], 'msub'), + (namespaces['mathml'], 'msubsup'), + (namespaces['mathml'], 'msup'), + (namespaces['mathml'], 'mtable'), + (namespaces['mathml'], 'mtd'), + (namespaces['mathml'], 'mtext'), + (namespaces['mathml'], 'mtr'), + (namespaces['mathml'], 'munder'), + (namespaces['mathml'], 'munderover'), + (namespaces['mathml'], 'none'), + (namespaces['svg'], 'a'), + (namespaces['svg'], 'animate'), + (namespaces['svg'], 'animateColor'), + (namespaces['svg'], 'animateMotion'), + (namespaces['svg'], 'animateTransform'), + (namespaces['svg'], 'clipPath'), + (namespaces['svg'], 'circle'), + (namespaces['svg'], 'defs'), + (namespaces['svg'], 'desc'), + (namespaces['svg'], 'ellipse'), + (namespaces['svg'], 'font-face'), + (namespaces['svg'], 'font-face-name'), + (namespaces['svg'], 'font-face-src'), + (namespaces['svg'], 'g'), + (namespaces['svg'], 'glyph'), + (namespaces['svg'], 'hkern'), + (namespaces['svg'], 'linearGradient'), + (namespaces['svg'], 'line'), + (namespaces['svg'], 'marker'), + (namespaces['svg'], 'metadata'), + (namespaces['svg'], 'missing-glyph'), + (namespaces['svg'], 'mpath'), + (namespaces['svg'], 'path'), + (namespaces['svg'], 'polygon'), + (namespaces['svg'], 'polyline'), + (namespaces['svg'], 'radialGradient'), + (namespaces['svg'], 'rect'), + (namespaces['svg'], 'set'), + (namespaces['svg'], 'stop'), + (namespaces['svg'], 'svg'), + (namespaces['svg'], 'switch'), + (namespaces['svg'], 'text'), + (namespaces['svg'], 'title'), + (namespaces['svg'], 'tspan'), + (namespaces['svg'], 'use'), +)) + +allowed_attributes = frozenset(( + # HTML attributes + (None, 'abbr'), + (None, 'accept'), + (None, 'accept-charset'), + (None, 'accesskey'), + (None, 'action'), + (None, 'align'), + (None, 'alt'), + (None, 'autocomplete'), + (None, 'autofocus'), + (None, 'axis'), + (None, 'background'), + (None, 'balance'), + (None, 'bgcolor'), + (None, 'bgproperties'), + (None, 'border'), + (None, 'bordercolor'), + (None, 'bordercolordark'), + (None, 'bordercolorlight'), + (None, 'bottompadding'), + (None, 'cellpadding'), + (None, 'cellspacing'), + (None, 'ch'), + (None, 'challenge'), + (None, 'char'), + (None, 'charoff'), + (None, 'choff'), + (None, 'charset'), + (None, 'checked'), + (None, 'cite'), + (None, 'class'), + (None, 'clear'), + (None, 'color'), + (None, 'cols'), + (None, 'colspan'), + (None, 'compact'), + (None, 'contenteditable'), + (None, 'controls'), + (None, 'coords'), + (None, 'data'), + (None, 'datafld'), + (None, 'datapagesize'), + (None, 'datasrc'), + (None, 'datetime'), + (None, 'default'), + (None, 'delay'), + (None, 'dir'), + (None, 'disabled'), + (None, 'draggable'), + (None, 'dynsrc'), + (None, 'enctype'), + (None, 'end'), + (None, 'face'), + (None, 'for'), + (None, 'form'), + (None, 'frame'), + (None, 'galleryimg'), + (None, 'gutter'), + (None, 'headers'), + (None, 'height'), + (None, 'hidefocus'), + (None, 'hidden'), + (None, 
'high'), + (None, 'href'), + (None, 'hreflang'), + (None, 'hspace'), + (None, 'icon'), + (None, 'id'), + (None, 'inputmode'), + (None, 'ismap'), + (None, 'keytype'), + (None, 'label'), + (None, 'leftspacing'), + (None, 'lang'), + (None, 'list'), + (None, 'longdesc'), + (None, 'loop'), + (None, 'loopcount'), + (None, 'loopend'), + (None, 'loopstart'), + (None, 'low'), + (None, 'lowsrc'), + (None, 'max'), + (None, 'maxlength'), + (None, 'media'), + (None, 'method'), + (None, 'min'), + (None, 'multiple'), + (None, 'name'), + (None, 'nohref'), + (None, 'noshade'), + (None, 'nowrap'), + (None, 'open'), + (None, 'optimum'), + (None, 'pattern'), + (None, 'ping'), + (None, 'point-size'), + (None, 'poster'), + (None, 'pqg'), + (None, 'preload'), + (None, 'prompt'), + (None, 'radiogroup'), + (None, 'readonly'), + (None, 'rel'), + (None, 'repeat-max'), + (None, 'repeat-min'), + (None, 'replace'), + (None, 'required'), + (None, 'rev'), + (None, 'rightspacing'), + (None, 'rows'), + (None, 'rowspan'), + (None, 'rules'), + (None, 'scope'), + (None, 'selected'), + (None, 'shape'), + (None, 'size'), + (None, 'span'), + (None, 'src'), + (None, 'start'), + (None, 'step'), + (None, 'style'), + (None, 'summary'), + (None, 'suppress'), + (None, 'tabindex'), + (None, 'target'), + (None, 'template'), + (None, 'title'), + (None, 'toppadding'), + (None, 'type'), + (None, 'unselectable'), + (None, 'usemap'), + (None, 'urn'), + (None, 'valign'), + (None, 'value'), + (None, 'variable'), + (None, 'volume'), + (None, 'vspace'), + (None, 'vrml'), + (None, 'width'), + (None, 'wrap'), + (namespaces['xml'], 'lang'), + # MathML attributes + (None, 'actiontype'), + (None, 'align'), + (None, 'columnalign'), + (None, 'columnalign'), + (None, 'columnalign'), + (None, 'columnlines'), + (None, 'columnspacing'), + (None, 'columnspan'), + (None, 'depth'), + (None, 'display'), + (None, 'displaystyle'), + (None, 'equalcolumns'), + (None, 'equalrows'), + (None, 'fence'), + (None, 'fontstyle'), + (None, 'fontweight'), + (None, 'frame'), + (None, 'height'), + (None, 'linethickness'), + (None, 'lspace'), + (None, 'mathbackground'), + (None, 'mathcolor'), + (None, 'mathvariant'), + (None, 'mathvariant'), + (None, 'maxsize'), + (None, 'minsize'), + (None, 'other'), + (None, 'rowalign'), + (None, 'rowalign'), + (None, 'rowalign'), + (None, 'rowlines'), + (None, 'rowspacing'), + (None, 'rowspan'), + (None, 'rspace'), + (None, 'scriptlevel'), + (None, 'selection'), + (None, 'separator'), + (None, 'stretchy'), + (None, 'width'), + (None, 'width'), + (namespaces['xlink'], 'href'), + (namespaces['xlink'], 'show'), + (namespaces['xlink'], 'type'), + # SVG attributes + (None, 'accent-height'), + (None, 'accumulate'), + (None, 'additive'), + (None, 'alphabetic'), + (None, 'arabic-form'), + (None, 'ascent'), + (None, 'attributeName'), + (None, 'attributeType'), + (None, 'baseProfile'), + (None, 'bbox'), + (None, 'begin'), + (None, 'by'), + (None, 'calcMode'), + (None, 'cap-height'), + (None, 'class'), + (None, 'clip-path'), + (None, 'color'), + (None, 'color-rendering'), + (None, 'content'), + (None, 'cx'), + (None, 'cy'), + (None, 'd'), + (None, 'dx'), + (None, 'dy'), + (None, 'descent'), + (None, 'display'), + (None, 'dur'), + (None, 'end'), + (None, 'fill'), + (None, 'fill-opacity'), + (None, 'fill-rule'), + (None, 'font-family'), + (None, 'font-size'), + (None, 'font-stretch'), + (None, 'font-style'), + (None, 'font-variant'), + (None, 'font-weight'), + (None, 'from'), + (None, 'fx'), + (None, 'fy'), + (None, 'g1'), + (None, 'g2'), + (None, 
'glyph-name'), + (None, 'gradientUnits'), + (None, 'hanging'), + (None, 'height'), + (None, 'horiz-adv-x'), + (None, 'horiz-origin-x'), + (None, 'id'), + (None, 'ideographic'), + (None, 'k'), + (None, 'keyPoints'), + (None, 'keySplines'), + (None, 'keyTimes'), + (None, 'lang'), + (None, 'marker-end'), + (None, 'marker-mid'), + (None, 'marker-start'), + (None, 'markerHeight'), + (None, 'markerUnits'), + (None, 'markerWidth'), + (None, 'mathematical'), + (None, 'max'), + (None, 'min'), + (None, 'name'), + (None, 'offset'), + (None, 'opacity'), + (None, 'orient'), + (None, 'origin'), + (None, 'overline-position'), + (None, 'overline-thickness'), + (None, 'panose-1'), + (None, 'path'), + (None, 'pathLength'), + (None, 'points'), + (None, 'preserveAspectRatio'), + (None, 'r'), + (None, 'refX'), + (None, 'refY'), + (None, 'repeatCount'), + (None, 'repeatDur'), + (None, 'requiredExtensions'), + (None, 'requiredFeatures'), + (None, 'restart'), + (None, 'rotate'), + (None, 'rx'), + (None, 'ry'), + (None, 'slope'), + (None, 'stemh'), + (None, 'stemv'), + (None, 'stop-color'), + (None, 'stop-opacity'), + (None, 'strikethrough-position'), + (None, 'strikethrough-thickness'), + (None, 'stroke'), + (None, 'stroke-dasharray'), + (None, 'stroke-dashoffset'), + (None, 'stroke-linecap'), + (None, 'stroke-linejoin'), + (None, 'stroke-miterlimit'), + (None, 'stroke-opacity'), + (None, 'stroke-width'), + (None, 'systemLanguage'), + (None, 'target'), + (None, 'text-anchor'), + (None, 'to'), + (None, 'transform'), + (None, 'type'), + (None, 'u1'), + (None, 'u2'), + (None, 'underline-position'), + (None, 'underline-thickness'), + (None, 'unicode'), + (None, 'unicode-range'), + (None, 'units-per-em'), + (None, 'values'), + (None, 'version'), + (None, 'viewBox'), + (None, 'visibility'), + (None, 'width'), + (None, 'widths'), + (None, 'x'), + (None, 'x-height'), + (None, 'x1'), + (None, 'x2'), + (namespaces['xlink'], 'actuate'), + (namespaces['xlink'], 'arcrole'), + (namespaces['xlink'], 'href'), + (namespaces['xlink'], 'role'), + (namespaces['xlink'], 'show'), + (namespaces['xlink'], 'title'), + (namespaces['xlink'], 'type'), + (namespaces['xml'], 'base'), + (namespaces['xml'], 'lang'), + (namespaces['xml'], 'space'), + (None, 'y'), + (None, 'y1'), + (None, 'y2'), + (None, 'zoomAndPan'), +)) + +attr_val_is_uri = frozenset(( + (None, 'href'), + (None, 'src'), + (None, 'cite'), + (None, 'action'), + (None, 'longdesc'), + (None, 'poster'), + (None, 'background'), + (None, 'datasrc'), + (None, 'dynsrc'), + (None, 'lowsrc'), + (None, 'ping'), + (namespaces['xlink'], 'href'), + (namespaces['xml'], 'base'), +)) + +svg_attr_val_allows_ref = frozenset(( + (None, 'clip-path'), + (None, 'color-profile'), + (None, 'cursor'), + (None, 'fill'), + (None, 'filter'), + (None, 'marker'), + (None, 'marker-start'), + (None, 'marker-mid'), + (None, 'marker-end'), + (None, 'mask'), + (None, 'stroke'), +)) + +svg_allow_local_href = frozenset(( + (None, 'altGlyph'), + (None, 'animate'), + (None, 'animateColor'), + (None, 'animateMotion'), + (None, 'animateTransform'), + (None, 'cursor'), + (None, 'feImage'), + (None, 'filter'), + (None, 'linearGradient'), + (None, 'pattern'), + (None, 'radialGradient'), + (None, 'textpath'), + (None, 'tref'), + (None, 'set'), + (None, 'use') +)) + +allowed_css_properties = frozenset(( + 'azimuth', + 'background-color', + 'border-bottom-color', + 'border-collapse', + 'border-color', + 'border-left-color', + 'border-right-color', + 'border-top-color', + 'clear', + 'color', + 'cursor', + 'direction', + 
'display', + 'elevation', + 'float', + 'font', + 'font-family', + 'font-size', + 'font-style', + 'font-variant', + 'font-weight', + 'height', + 'letter-spacing', + 'line-height', + 'overflow', + 'pause', + 'pause-after', + 'pause-before', + 'pitch', + 'pitch-range', + 'richness', + 'speak', + 'speak-header', + 'speak-numeral', + 'speak-punctuation', + 'speech-rate', + 'stress', + 'text-align', + 'text-decoration', + 'text-indent', + 'unicode-bidi', + 'vertical-align', + 'voice-family', + 'volume', + 'white-space', + 'width', +)) + +allowed_css_keywords = frozenset(( + 'auto', + 'aqua', + 'black', + 'block', + 'blue', + 'bold', + 'both', + 'bottom', + 'brown', + 'center', + 'collapse', + 'dashed', + 'dotted', + 'fuchsia', + 'gray', + 'green', + '!important', + 'italic', + 'left', + 'lime', + 'maroon', + 'medium', + 'none', + 'navy', + 'normal', + 'nowrap', + 'olive', + 'pointer', + 'purple', + 'red', + 'right', + 'solid', + 'silver', + 'teal', + 'top', + 'transparent', + 'underline', + 'white', + 'yellow', +)) + +allowed_svg_properties = frozenset(( + 'fill', + 'fill-opacity', + 'fill-rule', + 'stroke', + 'stroke-width', + 'stroke-linecap', + 'stroke-linejoin', + 'stroke-opacity', +)) + +allowed_protocols = frozenset(( + 'ed2k', + 'ftp', + 'http', + 'https', + 'irc', + 'mailto', + 'news', + 'gopher', + 'nntp', + 'telnet', + 'webcal', + 'xmpp', + 'callto', + 'feed', + 'urn', + 'aim', + 'rsync', + 'tag', + 'ssh', + 'sftp', + 'rtsp', + 'afs', + 'data', +)) + +allowed_content_types = frozenset(( + 'image/png', + 'image/jpeg', + 'image/gif', + 'image/webp', + 'image/bmp', + 'text/plain', +)) + + +data_content_type = re.compile(r''' + ^ + # Match a content type / + (?P[-a-zA-Z0-9.]+/[-a-zA-Z0-9.]+) + # Match any character set and encoding + (?:(?:;charset=(?:[-a-zA-Z0-9]+)(?:;(?:base64))?) + |(?:;(?:base64))?(?:;charset=(?:[-a-zA-Z0-9]+))?) + # Assume the rest is data + ,.* + $ + ''', + re.VERBOSE) + + +class Filter(base.Filter): + """Sanitizes token stream of XHTML+MathML+SVG and of inline style attributes""" + def __init__(self, + source, + allowed_elements=allowed_elements, + allowed_attributes=allowed_attributes, + allowed_css_properties=allowed_css_properties, + allowed_css_keywords=allowed_css_keywords, + allowed_svg_properties=allowed_svg_properties, + allowed_protocols=allowed_protocols, + allowed_content_types=allowed_content_types, + attr_val_is_uri=attr_val_is_uri, + svg_attr_val_allows_ref=svg_attr_val_allows_ref, + svg_allow_local_href=svg_allow_local_href): + """Creates a Filter + + :arg allowed_elements: set of elements to allow--everything else will + be escaped + + :arg allowed_attributes: set of attributes to allow in + elements--everything else will be stripped + + :arg allowed_css_properties: set of CSS properties to allow--everything + else will be stripped + + :arg allowed_css_keywords: set of CSS keywords to allow--everything + else will be stripped + + :arg allowed_svg_properties: set of SVG properties to allow--everything + else will be removed + + :arg allowed_protocols: set of allowed protocols for URIs + + :arg allowed_content_types: set of allowed content types for ``data`` URIs. 
+ + :arg attr_val_is_uri: set of attributes that have URI values--values + that have a scheme not listed in ``allowed_protocols`` are removed + + :arg svg_attr_val_allows_ref: set of SVG attributes that can have + references + + :arg svg_allow_local_href: set of SVG elements that can have local + hrefs--these are removed + + """ + super(Filter, self).__init__(source) + + warnings.warn(_deprecation_msg, DeprecationWarning) + + self.allowed_elements = allowed_elements + self.allowed_attributes = allowed_attributes + self.allowed_css_properties = allowed_css_properties + self.allowed_css_keywords = allowed_css_keywords + self.allowed_svg_properties = allowed_svg_properties + self.allowed_protocols = allowed_protocols + self.allowed_content_types = allowed_content_types + self.attr_val_is_uri = attr_val_is_uri + self.svg_attr_val_allows_ref = svg_attr_val_allows_ref + self.svg_allow_local_href = svg_allow_local_href + + def __iter__(self): + for token in base.Filter.__iter__(self): + token = self.sanitize_token(token) + if token: + yield token + + # Sanitize the +html+, escaping all elements not in ALLOWED_ELEMENTS, and + # stripping out all attributes not in ALLOWED_ATTRIBUTES. Style attributes + # are parsed, and a restricted set, specified by ALLOWED_CSS_PROPERTIES and + # ALLOWED_CSS_KEYWORDS, are allowed through. attributes in ATTR_VAL_IS_URI + # are scanned, and only URI schemes specified in ALLOWED_PROTOCOLS are + # allowed. + # + # sanitize_html('<script> do_nasty_stuff() </script>') + # => &lt;script> do_nasty_stuff() &lt;/script> + # sanitize_html('<a href="javascript: sucker();">Click here for $100</a>') + # => <a>Click here for $100</a> + def sanitize_token(self, token): + + # accommodate filters which use token_type differently + token_type = token["type"] + if token_type in ("StartTag", "EndTag", "EmptyTag"): + name = token["name"] + namespace = token["namespace"] + if ((namespace, name) in self.allowed_elements or + (namespace is None and + (namespaces["html"], name) in self.allowed_elements)): + return self.allowed_token(token) + else: + return self.disallowed_token(token) + elif token_type == "Comment": + pass + else: + return token + + def allowed_token(self, token): + if "data" in token: + attrs = token["data"] + attr_names = set(attrs.keys()) + + # Remove forbidden attributes + for to_remove in (attr_names - self.allowed_attributes): + del token["data"][to_remove] + attr_names.remove(to_remove) + + # Remove attributes with disallowed URL values + for attr in (attr_names & self.attr_val_is_uri): + assert attr in attrs + # I don't have a clue where this regexp comes from or why it matches those + # characters, nor why we call unescape. I just know it's always been here. + # Should you be worried by this comment in a sanitizer? Yes. On the other hand, all + # this will do is remove *more* than it otherwise would.
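+ # Net effect: backticks, whitespace, and control/C1 characters are stripped + # from the unescaped value and the result is lowercased, so a scheme broken + # up as e.g. "jav\tascript:" still normalizes to "javascript:" before the + # allowed_protocols check below.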
+ val_unescaped = re.sub("[`\x00-\x20\x7f-\xa0\\s]+", '', + unescape(attrs[attr])).lower() + # remove replacement characters from unescaped characters + val_unescaped = val_unescaped.replace("\ufffd", "") + try: + uri = urlparse.urlparse(val_unescaped) + except ValueError: + uri = None + del attrs[attr] + if uri and uri.scheme: + if uri.scheme not in self.allowed_protocols: + del attrs[attr] + if uri.scheme == 'data': + m = data_content_type.match(uri.path) + if not m: + del attrs[attr] + elif m.group('content_type') not in self.allowed_content_types: + del attrs[attr] + + for attr in self.svg_attr_val_allows_ref: + if attr in attrs: + attrs[attr] = re.sub(r'url\s*\(\s*[^#\s][^)]+?\)', + ' ', + unescape(attrs[attr])) + if (token["name"] in self.svg_allow_local_href and + (namespaces['xlink'], 'href') in attrs and re.search(r'^\s*[^#\s].*', + attrs[(namespaces['xlink'], 'href')])): + del attrs[(namespaces['xlink'], 'href')] + if (None, 'style') in attrs: + attrs[(None, 'style')] = self.sanitize_css(attrs[(None, 'style')]) + token["data"] = attrs + return token + + def disallowed_token(self, token): + token_type = token["type"] + if token_type == "EndTag": + token["data"] = "</%s>" % token["name"] + elif token["data"]: + assert token_type in ("StartTag", "EmptyTag") + attrs = [] + for (ns, name), v in token["data"].items(): + attrs.append(' %s="%s"' % (name if ns is None else "%s:%s" % (prefixes[ns], name), escape(v))) + token["data"] = "<%s%s>" % (token["name"], ''.join(attrs)) + else: + token["data"] = "<%s>" % token["name"] + if token.get("selfClosing"): + token["data"] = token["data"][:-1] + "/>" + + token["type"] = "Characters" + + del token["name"] + return token + + def sanitize_css(self, style): + # disallow urls + style = re.compile(r'url\s*\(\s*[^\s)]+?\s*\)\s*').sub(' ', style) + + # gauntlet + if not re.match(r"""^([:,;#%.\sa-zA-Z0-9!]|\w-\w|'[\s\w]+'|"[\s\w]+"|\([\d,\s]+\))*$""", style): + return '' + if not re.match(r"^\s*([-\w]+\s*:[^:;]*(;\s*|$))*$", style): + return '' + + clean = [] + for prop, value in re.findall(r"([-\w]+)\s*:\s*([^:;]*)", style): + if not value: + continue + if prop.lower() in self.allowed_css_properties: + clean.append(prop + ': ' + value + ';') + elif prop.split('-')[0].lower() in ['background', 'border', 'margin', + 'padding']: + for keyword in value.split(): + if keyword not in self.allowed_css_keywords and \ + not re.match(r"^(#[0-9a-fA-F]+|rgb\(\d+%?,\d*%?,?\d*%?\)?|\d{0,2}\.?\d{0,2}(cm|em|ex|in|mm|pc|pt|px|%|,|\))?)$", keyword): # noqa + break + else: + clean.append(prop + ': ' + value + ';') + elif prop.lower() in self.allowed_svg_properties: + clean.append(prop + ': ' + value + ';') + + return ' '.join(clean) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/whitespace.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/whitespace.py new file mode 100644 index 0000000..0d12584 --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/whitespace.py @@ -0,0 +1,38 @@ +from __future__ import absolute_import, division, unicode_literals + +import re + +from .
import base +from ..constants import rcdataElements, spaceCharacters +spaceCharacters = "".join(spaceCharacters) + +SPACES_REGEX = re.compile("[%s]+" % spaceCharacters) + + +class Filter(base.Filter): + """Collapses whitespace except in pre, textarea, and script elements""" + spacePreserveElements = frozenset(["pre", "textarea"] + list(rcdataElements)) + + def __iter__(self): + preserve = 0 + for token in base.Filter.__iter__(self): + type = token["type"] + if type == "StartTag" \ + and (preserve or token["name"] in self.spacePreserveElements): + preserve += 1 + + elif type == "EndTag" and preserve: + preserve -= 1 + + elif not preserve and type == "SpaceCharacters" and token["data"]: + # Test on token["data"] above to not introduce spaces where there were not + token["data"] = " " + + elif not preserve and type == "Characters": + token["data"] = collapse_spaces(token["data"]) + + yield token + + +def collapse_spaces(text): + return SPACES_REGEX.sub(' ', text) diff --git a/python/lib/python3.10/site-packages/pip/_vendor/html5lib/html5parser.py b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/html5parser.py new file mode 100644 index 0000000..d06784f --- /dev/null +++ b/python/lib/python3.10/site-packages/pip/_vendor/html5lib/html5parser.py @@ -0,0 +1,2795 @@ +from __future__ import absolute_import, division, unicode_literals +from pip._vendor.six import with_metaclass, viewkeys + +import types + +from . import _inputstream +from . import _tokenizer + +from . import treebuilders +from .treebuilders.base import Marker + +from . import _utils +from .constants import ( + spaceCharacters, asciiUpper2Lower, + specialElements, headingElements, cdataElements, rcdataElements, + tokenTypes, tagTokenTypes, + namespaces, + htmlIntegrationPointElements, mathmlTextIntegrationPointElements, + adjustForeignAttributes as adjustForeignAttributesMap, + adjustMathMLAttributes, adjustSVGAttributes, + E, + _ReparseException +) + + +def parse(doc, treebuilder="etree", namespaceHTMLElements=True, **kwargs): + """Parse an HTML document as a string or file-like object into a tree + + :arg doc: the document to parse as a string or file-like object + + :arg treebuilder: the treebuilder to use when parsing + + :arg namespaceHTMLElements: whether or not to namespace HTML elements + + :returns: parsed tree + + Example: + + >>> from html5lib.html5parser import parse + >>> parse('

<html><body><p>This is a doc</p></body></html>
') + + + """ + tb = treebuilders.getTreeBuilder(treebuilder) + p = HTMLParser(tb, namespaceHTMLElements=namespaceHTMLElements) + return p.parse(doc, **kwargs) + + +def parseFragment(doc, container="div", treebuilder="etree", namespaceHTMLElements=True, **kwargs): + """Parse an HTML fragment as a string or file-like object into a tree + + :arg doc: the fragment to parse as a string or file-like object + + :arg container: the container context to parse the fragment in + + :arg treebuilder: the treebuilder to use when parsing + + :arg namespaceHTMLElements: whether or not to namespace HTML elements + + :returns: parsed tree + + Example: + + >>> from html5lib.html5libparser import parseFragment + >>> parseFragment('this is a fragment') + + + """ + tb = treebuilders.getTreeBuilder(treebuilder) + p = HTMLParser(tb, namespaceHTMLElements=namespaceHTMLElements) + return p.parseFragment(doc, container=container, **kwargs) + + +def method_decorator_metaclass(function): + class Decorated(type): + def __new__(meta, classname, bases, classDict): + for attributeName, attribute in classDict.items(): + if isinstance(attribute, types.FunctionType): + attribute = function(attribute) + + classDict[attributeName] = attribute + return type.__new__(meta, classname, bases, classDict) + return Decorated + + +class HTMLParser(object): + """HTML parser + + Generates a tree structure from a stream of (possibly malformed) HTML. + + """ + + def __init__(self, tree=None, strict=False, namespaceHTMLElements=True, debug=False): + """ + :arg tree: a treebuilder class controlling the type of tree that will be + returned. Built in treebuilders can be accessed through + html5lib.treebuilders.getTreeBuilder(treeType) + + :arg strict: raise an exception when a parse error is encountered + + :arg namespaceHTMLElements: whether or not to namespace HTML elements + + :arg debug: whether or not to enable debug mode which logs things + + Example: + + >>> from html5lib.html5parser import HTMLParser + >>> parser = HTMLParser() # generates parser with etree builder + >>> parser = HTMLParser('lxml', strict=True) # generates parser with lxml builder which is strict + + """ + + # Raise an exception on the first error encountered + self.strict = strict + + if tree is None: + tree = treebuilders.getTreeBuilder("etree") + self.tree = tree(namespaceHTMLElements) + self.errors = [] + + self.phases = {name: cls(self, self.tree) for name, cls in + getPhases(debug).items()} + + def _parse(self, stream, innerHTML=False, container="div", scripting=False, **kwargs): + + self.innerHTMLMode = innerHTML + self.container = container + self.scripting = scripting + self.tokenizer = _tokenizer.HTMLTokenizer(stream, parser=self, **kwargs) + self.reset() + + try: + self.mainLoop() + except _ReparseException: + self.reset() + self.mainLoop() + + def reset(self): + self.tree.reset() + self.firstStartTag = False + self.errors = [] + self.log = [] # only used with debug mode + # "quirks" / "limited quirks" / "no quirks" + self.compatMode = "no quirks" + + if self.innerHTMLMode: + self.innerHTML = self.container.lower() + + if self.innerHTML in cdataElements: + self.tokenizer.state = self.tokenizer.rcdataState + elif self.innerHTML in rcdataElements: + self.tokenizer.state = self.tokenizer.rawtextState + elif self.innerHTML == 'plaintext': + self.tokenizer.state = self.tokenizer.plaintextState + else: + # state already is data state + # self.tokenizer.state = self.tokenizer.dataState + pass + self.phase = self.phases["beforeHtml"] + self.phase.insertHtmlElement() 
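+ # Fragment case: a bare html root element was just inserted; now pick the + # insertion phase implied by the container element (e.g. "td" -> "inCell").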
+ self.resetInsertionMode() + else: + self.innerHTML = False # pylint:disable=redefined-variable-type + self.phase = self.phases["initial"] + + self.lastPhase = None + + self.beforeRCDataPhase = None + + self.framesetOK = True + + @property + def documentEncoding(self): + """Name of the character encoding that was used to decode the input stream, or + :obj:`None` if that is not determined yet + + """ + if not hasattr(self, 'tokenizer'): + return None + return self.tokenizer.stream.charEncoding[0].name + + def isHTMLIntegrationPoint(self, element): + if (element.name == "annotation-xml" and + element.namespace == namespaces["mathml"]): + return ("encoding" in element.attributes and + element.attributes["encoding"].translate( + asciiUpper2Lower) in + ("text/html", "application/xhtml+xml")) + else: + return (element.namespace, element.name) in htmlIntegrationPointElements + + def isMathMLTextIntegrationPoint(self, element): + return (element.namespace, element.name) in mathmlTextIntegrationPointElements + + def mainLoop(self): + CharactersToken = tokenTypes["Characters"] + SpaceCharactersToken = tokenTypes["SpaceCharacters"] + StartTagToken = tokenTypes["StartTag"] + EndTagToken = tokenTypes["EndTag"] + CommentToken = tokenTypes["Comment"] + DoctypeToken = tokenTypes["Doctype"] + ParseErrorToken = tokenTypes["ParseError"] + + for token in self.tokenizer: + prev_token = None + new_token = token + while new_token is not None: + prev_token = new_token + currentNode = self.tree.openElements[-1] if self.tree.openElements else None + currentNodeNamespace = currentNode.namespace if currentNode else None + currentNodeName = currentNode.name if currentNode else None + + type = new_token["type"] + + if type == ParseErrorToken: + self.parseError(new_token["data"], new_token.get("datavars", {})) + new_token = None + else: + if (len(self.tree.openElements) == 0 or + currentNodeNamespace == self.tree.defaultNamespace or + (self.isMathMLTextIntegrationPoint(currentNode) and + ((type == StartTagToken and + token["name"] not in frozenset(["mglyph", "malignmark"])) or + type in (CharactersToken, SpaceCharactersToken))) or + (currentNodeNamespace == namespaces["mathml"] and + currentNodeName == "annotation-xml" and + type == StartTagToken and + token["name"] == "svg") or + (self.isHTMLIntegrationPoint(currentNode) and + type in (StartTagToken, CharactersToken, SpaceCharactersToken))): + phase = self.phase + else: + phase = self.phases["inForeignContent"] + + if type == CharactersToken: + new_token = phase.processCharacters(new_token) + elif type == SpaceCharactersToken: + new_token = phase.processSpaceCharacters(new_token) + elif type == StartTagToken: + new_token = phase.processStartTag(new_token) + elif type == EndTagToken: + new_token = phase.processEndTag(new_token) + elif type == CommentToken: + new_token = phase.processComment(new_token) + elif type == DoctypeToken: + new_token = phase.processDoctype(new_token) + + if (type == StartTagToken and prev_token["selfClosing"] and + not prev_token["selfClosingAcknowledged"]): + self.parseError("non-void-element-with-trailing-solidus", + {"name": prev_token["name"]}) + + # When the loop finishes it's EOF + reprocess = True + phases = [] + while reprocess: + phases.append(self.phase) + reprocess = self.phase.processEOF() + if reprocess: + assert self.phase not in phases + + def parse(self, stream, *args, **kwargs): + """Parse a HTML document into a well-formed tree + + :arg stream: a file-like object or string containing the HTML to be parsed + + The optional 
+            The optional encoding parameter must be a string that indicates
+            the encoding.  If specified, that encoding will be used,
+            regardless of any BOM or later declaration (such as in a meta
+            element).
+
+        :arg scripting: treat noscript elements as if JavaScript was turned on
+
+        :returns: parsed tree
+
+        Example:
+
+        >>> from html5lib.html5parser import HTMLParser
+        >>> parser = HTMLParser()
+        >>> parser.parse('<html><body><p>This is a doc</p></body></html>')
+        <Element u'{http://www.w3.org/1999/xhtml}html' at 0x7feac4909db0>
+
+        """
+        self._parse(stream, False, None, *args, **kwargs)
+        return self.tree.getDocument()
+
+    def parseFragment(self, stream, *args, **kwargs):
+        """Parse a HTML fragment into a well-formed tree fragment
+
+        :arg container: name of the element we're setting the innerHTML
+            property if set to None, default to 'div'
+
+        :arg stream: a file-like object or string containing the HTML to be parsed
+
+            The optional encoding parameter must be a string that indicates
+            the encoding.  If specified, that encoding will be used,
+            regardless of any BOM or later declaration (such as in a meta
+            element)
+
+        :arg scripting: treat noscript elements as if JavaScript was turned on
+
+        :returns: parsed tree
+
+        Example:
+
+        >>> from html5lib.html5libparser import HTMLParser
+        >>> parser = HTMLParser()
+        >>> parser.parseFragment('<b>this is a fragment</b>')
+        <Element u'DOCUMENT_FRAGMENT' at 0x7feac484b090>
+
+        """
+        self._parse(stream, True, *args, **kwargs)
+        return self.tree.getFragment()
+
+    def parseError(self, errorcode="XXX-undefined-error", datavars=None):
+        # XXX The idea is to make errorcode mandatory.
+        if datavars is None:
+            datavars = {}
+        self.errors.append((self.tokenizer.stream.position(), errorcode, datavars))
+        if self.strict:
+            raise ParseError(E[errorcode] % datavars)
+
+    def adjustMathMLAttributes(self, token):
+        adjust_attributes(token, adjustMathMLAttributes)
+
+    def adjustSVGAttributes(self, token):
+        adjust_attributes(token, adjustSVGAttributes)
+
+    def adjustForeignAttributes(self, token):
+        adjust_attributes(token, adjustForeignAttributesMap)
+
+    def reparseTokenNormal(self, token):
+        # pylint:disable=unused-argument
+        self.parser.phase()
+
+    def resetInsertionMode(self):
+        # The name of this method is mostly historical. (It's also used in the
+        # specification.)
+        last = False
+        newModes = {
+            "select": "inSelect",
+            "td": "inCell",
+            "th": "inCell",
+            "tr": "inRow",
+            "tbody": "inTableBody",
+            "thead": "inTableBody",
+            "tfoot": "inTableBody",
+            "caption": "inCaption",
+            "colgroup": "inColumnGroup",
+            "table": "inTable",
+            "head": "inBody",
+            "body": "inBody",
+            "frameset": "inFrameset",
+            "html": "beforeHead"
+        }
+        for node in self.tree.openElements[::-1]:
+            nodeName = node.name
+            new_phase = None
+            if node == self.tree.openElements[0]:
+                assert self.innerHTML
+                last = True
+                nodeName = self.innerHTML
+            # Check for conditions that should only happen in the innerHTML
+            # case
+            if nodeName in ("select", "colgroup", "head", "html"):
+                assert self.innerHTML
+
+            if not last and node.namespace != self.tree.defaultNamespace:
+                continue
+
+            if nodeName in newModes:
+                new_phase = self.phases[newModes[nodeName]]
+                break
+            elif last:
+                new_phase = self.phases["inBody"]
+                break
+
+        self.phase = new_phase
+
+    def parseRCDataRawtext(self, token, contentType):
+        # Generic RCDATA/RAWTEXT Parsing algorithm
+        assert contentType in ("RAWTEXT", "RCDATA")
+
+        self.tree.insertElement(token)
+
+        if contentType == "RAWTEXT":
+            self.tokenizer.state = self.tokenizer.rawtextState
+        else:
+            self.tokenizer.state = self.tokenizer.rcdataState
+
+        self.originalPhase = self.phase
+
+        self.phase = self.phases["text"]
+
+
+@_utils.memoize
+def getPhases(debug):
+    def log(function):
+        """Logger that records which phase processes each token"""
+        type_names = {value: key for key, value in tokenTypes.items()}
+
+        def wrapped(self, *args, **kwargs):
+            if function.__name__.startswith("process") and len(args) > 0:
+                token = args[0]
+                info = {"type": type_names[token['type']]}
+                if token['type'] in tagTokenTypes:
+                    info["name"] = token['name']
+
+                self.parser.log.append((self.parser.tokenizer.state.__name__,
+                                        self.parser.phase.__class__.__name__,
+                                        self.__class__.__name__,
+                                        function.__name__,
+                                        info))
+                return function(self, *args, **kwargs)
+            else:
+                return function(self, *args, **kwargs)
+        return wrapped
+
+    def getMetaclass(use_metaclass, metaclass_func):
+        if use_metaclass:
+            return method_decorator_metaclass(metaclass_func)
+        else:
+            return type
+
+    # pylint:disable=unused-argument
+    class Phase(with_metaclass(getMetaclass(debug, log))):
+        """Base class for helper object that implements each phase of processing
+        """
+        __slots__ = ("parser", "tree", "__startTagCache", "__endTagCache")
+
+        def __init__(self, parser, tree):
+            self.parser = parser
+            self.tree = tree
+            self.__startTagCache = {}
+            self.__endTagCache = {}
+
+        def processEOF(self):
+            raise NotImplementedError
+
+        def processComment(self, token):
+            # For most phases the following is correct. Where it's not it will be
+            # overridden.
+            self.tree.insertComment(token, self.tree.openElements[-1])
+
+        def processDoctype(self, token):
+            self.parser.parseError("unexpected-doctype")
+
+        def processCharacters(self, token):
+            self.tree.insertText(token["data"])
+
+        def processSpaceCharacters(self, token):
+            self.tree.insertText(token["data"])
+
+        def processStartTag(self, token):
+            # Note the caching is done here rather than BoundMethodDispatcher as doing it there
+            # requires a circular reference to the Phase, and this ends up with a significant
+            # (CPython 2.7, 3.8) GC cost when parsing many short inputs
+            name = token["name"]
+            # In Py2, using `in` is quicker in general than try/except KeyError
+            # In Py3, `in` is quicker when there are few cache hits (typically short inputs)
+            if name in self.__startTagCache:
+                func = self.__startTagCache[name]
+            else:
+                func = self.__startTagCache[name] = self.startTagHandler[name]
+                # bound the cache size in case we get loads of unknown tags
+                while len(self.__startTagCache) > len(self.startTagHandler) * 1.1:
+                    # this makes the eviction policy random on Py < 3.7 and FIFO >= 3.7
+                    self.__startTagCache.pop(next(iter(self.__startTagCache)))
+            return func(token)
+
+        def startTagHtml(self, token):
+            if not self.parser.firstStartTag and token["name"] == "html":
+                self.parser.parseError("non-html-root")
+            # XXX Need a check here to see if the first start tag token emitted is
+            # this token... If it's not, invoke self.parser.parseError().
+            for attr, value in token["data"].items():
+                if attr not in self.tree.openElements[0].attributes:
+                    self.tree.openElements[0].attributes[attr] = value
+            self.parser.firstStartTag = False
+
+        def processEndTag(self, token):
+            # Note the caching is done here rather than BoundMethodDispatcher as doing it there
+            # requires a circular reference to the Phase, and this ends up with a significant
+            # (CPython 2.7, 3.8) GC cost when parsing many short inputs
+            name = token["name"]
+            # In Py2, using `in` is quicker in general than try/except KeyError
+            # In Py3, `in` is quicker when there are few cache hits (typically short inputs)
+            if name in self.__endTagCache:
+                func = self.__endTagCache[name]
+            else:
+                func = self.__endTagCache[name] = self.endTagHandler[name]
+                # bound the cache size in case we get loads of unknown tags
+                while len(self.__endTagCache) > len(self.endTagHandler) * 1.1:
+                    # this makes the eviction policy random on Py < 3.7 and FIFO >= 3.7
+                    self.__endTagCache.pop(next(iter(self.__endTagCache)))
+            return func(token)
+
+    class InitialPhase(Phase):
+        __slots__ = tuple()
+
+        def processSpaceCharacters(self, token):
+            pass
+
+        def processComment(self, token):
+            self.tree.insertComment(token, self.tree.document)
+
+        def processDoctype(self, token):
+            name = token["name"]
+            publicId = token["publicId"]
+            systemId = token["systemId"]
+            correct = token["correct"]
+
+            if (name != "html" or publicId is not None or
+                    systemId is not None and systemId != "about:legacy-compat"):
+                self.parser.parseError("unknown-doctype")
+
+            if publicId is None:
+                publicId = ""
+
+            self.tree.insertDoctype(token)
+
+            if publicId != "":
+                publicId = publicId.translate(asciiUpper2Lower)
+
+            if (not correct or token["name"] != "html" or
+                    publicId.startswith(
+                        ("+//silmaril//dtd html pro v0r11 19970101//",
+                         "-//advasoft ltd//dtd html 3.0 aswedit + extensions//",
+                         "-//as//dtd html 3.0 aswedit + extensions//",
+                         "-//ietf//dtd html 2.0 level 1//",
+                         "-//ietf//dtd html 2.0 level 2//",
+                         "-//ietf//dtd html 2.0 strict level 1//",
+                         "-//ietf//dtd html 2.0 strict level 2//",
+                         "-//ietf//dtd html 2.0 strict//",
+                         "-//ietf//dtd html 2.0//",
+                         "-//ietf//dtd html 2.1e//",
+                         "-//ietf//dtd html 3.0//",
+                         "-//ietf//dtd html 3.2 final//",
+                         "-//ietf//dtd html 3.2//",
+                         "-//ietf//dtd html 3//",
+                         "-//ietf//dtd html level 0//",
+                         "-//ietf//dtd html level 1//",
+                         "-//ietf//dtd html level 2//",
+                         "-//ietf//dtd html level 3//",
+                         "-//ietf//dtd html strict level 0//",
+                         "-//ietf//dtd html strict level 1//",
+                         "-//ietf//dtd html strict level 2//",
+                         "-//ietf//dtd html strict level 3//",
+                         "-//ietf//dtd html strict//",
+                         "-//ietf//dtd html//",
+                         "-//metrius//dtd metrius presentational//",
+                         "-//microsoft//dtd internet explorer 2.0 html strict//",
+                         "-//microsoft//dtd internet explorer 2.0 html//",
+                         "-//microsoft//dtd internet explorer 2.0 tables//",
+                         "-//microsoft//dtd internet explorer 3.0 html strict//",
+                         "-//microsoft//dtd internet explorer 3.0 html//",
+                         "-//microsoft//dtd internet explorer 3.0 tables//",
+                         "-//netscape comm. corp.//dtd html//",
+                         "-//netscape comm. corp.//dtd strict html//",
+                         "-//o'reilly and associates//dtd html 2.0//",
+                         "-//o'reilly and associates//dtd html extended 1.0//",
+                         "-//o'reilly and associates//dtd html extended relaxed 1.0//",
+                         "-//softquad software//dtd hotmetal pro 6.0::19990601::extensions to html 4.0//",
+                         "-//softquad//dtd hotmetal pro 4.0::19971010::extensions to html 4.0//",
+                         "-//spyglass//dtd html 2.0 extended//",
+                         "-//sq//dtd html 2.0 hotmetal + extensions//",
+                         "-//sun microsystems corp.//dtd hotjava html//",
+                         "-//sun microsystems corp.//dtd hotjava strict html//",
+                         "-//w3c//dtd html 3 1995-03-24//",
+                         "-//w3c//dtd html 3.2 draft//",
+                         "-//w3c//dtd html 3.2 final//",
+                         "-//w3c//dtd html 3.2//",
+                         "-//w3c//dtd html 3.2s draft//",
+                         "-//w3c//dtd html 4.0 frameset//",
+                         "-//w3c//dtd html 4.0 transitional//",
+                         "-//w3c//dtd html experimental 19960712//",
+                         "-//w3c//dtd html experimental 970421//",
+                         "-//w3c//dtd w3 html//",
+                         "-//w3o//dtd w3 html 3.0//",
+                         "-//webtechs//dtd mozilla html 2.0//",
+                         "-//webtechs//dtd mozilla html//")) or
+                    publicId in ("-//w3o//dtd w3 html strict 3.0//en//",
+                                 "-/w3c/dtd html 4.0 transitional/en",
+                                 "html") or
+                    publicId.startswith(
+                        ("-//w3c//dtd html 4.01 frameset//",
+                         "-//w3c//dtd html 4.01 transitional//")) and
+                    systemId is None or
+                    systemId and systemId.lower() == "http://www.ibm.com/data/dtd/v11/ibmxhtml1-transitional.dtd"):
+                self.parser.compatMode = "quirks"
+            elif (publicId.startswith(
+                    ("-//w3c//dtd xhtml 1.0 frameset//",
+                     "-//w3c//dtd xhtml 1.0 transitional//")) or
+                    publicId.startswith(
+                        ("-//w3c//dtd html 4.01 frameset//",
+                         "-//w3c//dtd html 4.01 transitional//")) and
+                    systemId is not None):
+                self.parser.compatMode = "limited quirks"
+
+            self.parser.phase = self.parser.phases["beforeHtml"]
+
+        def anythingElse(self):
+            self.parser.compatMode = "quirks"
+            self.parser.phase = self.parser.phases["beforeHtml"]
+
+        def processCharacters(self, token):
+            self.parser.parseError("expected-doctype-but-got-chars")
+            self.anythingElse()
+            return token
+
+        def processStartTag(self, token):
+            self.parser.parseError("expected-doctype-but-got-start-tag",
+                                   {"name": token["name"]})
+            self.anythingElse()
+            return token
+
+        def processEndTag(self, token):
+            self.parser.parseError("expected-doctype-but-got-end-tag",
+                                   {"name": token["name"]})
+            self.anythingElse()
+            return token
+
+        def processEOF(self):
+            self.parser.parseError("expected-doctype-but-got-eof")
+            self.anythingElse()
+            return True
+
+    class BeforeHtmlPhase(Phase):
+        __slots__ = tuple()
+
+        # helper methods
+        def insertHtmlElement(self):
+            self.tree.insertRoot(impliedTagToken("html", "StartTag"))
+            self.parser.phase = self.parser.phases["beforeHead"]
+
+        # other
+        def processEOF(self):
+            self.insertHtmlElement()
+            return True
+
+        def processComment(self, token):
+            self.tree.insertComment(token, self.tree.document)
+
+        def processSpaceCharacters(self, token):
+            pass
+
+        def processCharacters(self, token):
+            self.insertHtmlElement()
+            return token
+
+        def processStartTag(self, token):
+            if token["name"] == "html":
+                self.parser.firstStartTag = True
+            self.insertHtmlElement()
+            return token
+
+        def processEndTag(self, token):
+            if token["name"] not in ("head", "body", "html", "br"):
+                self.parser.parseError("unexpected-end-tag-before-html",
+                                       {"name": token["name"]})
+            else:
+                self.insertHtmlElement()
+                return token
+
+    class BeforeHeadPhase(Phase):
+        __slots__ = tuple()
+
+        def processEOF(self):
+            self.startTagHead(impliedTagToken("head", "StartTag"))
+            return True
+
+        def processSpaceCharacters(self, token):
+            pass
+
+        def processCharacters(self, token):
+            self.startTagHead(impliedTagToken("head", "StartTag"))
+            return token
+
+        def startTagHtml(self, token):
+            return self.parser.phases["inBody"].processStartTag(token)
+
+        def startTagHead(self, token):
+            self.tree.insertElement(token)
+            self.tree.headPointer = self.tree.openElements[-1]
+            self.parser.phase = self.parser.phases["inHead"]
+
+        def startTagOther(self, token):
+            self.startTagHead(impliedTagToken("head", "StartTag"))
+            return token
+
+        def endTagImplyHead(self, token):
+            self.startTagHead(impliedTagToken("head", "StartTag"))
+            return token
+
+        def endTagOther(self, token):
+            self.parser.parseError("end-tag-after-implied-root",
+                                   {"name": token["name"]})
+
+        startTagHandler = _utils.MethodDispatcher([
+            ("html", startTagHtml),
+            ("head", startTagHead)
+        ])
+        startTagHandler.default = startTagOther
+
+        endTagHandler = _utils.MethodDispatcher([
+            (("head", "body", "html", "br"), endTagImplyHead)
+        ])
+        endTagHandler.default = endTagOther
+
+    class InHeadPhase(Phase):
+        __slots__ = tuple()
+
+        # the real thing
+        def processEOF(self):
+            self.anythingElse()
+            return True
+
+        def processCharacters(self, token):
+            self.anythingElse()
+            return token
+
+        def startTagHtml(self, token):
+            return self.parser.phases["inBody"].processStartTag(token)
+
+        def startTagHead(self, token):
+            self.parser.parseError("two-heads-are-not-better-than-one")
+
+        def startTagBaseLinkCommand(self, token):
+            self.tree.insertElement(token)
+            self.tree.openElements.pop()
+            token["selfClosingAcknowledged"] = True
+
+        def startTagMeta(self, token):
+            self.tree.insertElement(token)
+            self.tree.openElements.pop()
+            token["selfClosingAcknowledged"] = True
+
+            attributes = token["data"]
+            if self.parser.tokenizer.stream.charEncoding[1] == "tentative":
+                if "charset" in attributes:
+                    self.parser.tokenizer.stream.changeEncoding(attributes["charset"])
+                elif ("content" in attributes and
+                      "http-equiv" in attributes and
+                      attributes["http-equiv"].lower() == "content-type"):
+                    # Encoding it as UTF-8 here is a hack, as really we should pass
+                    # the abstract Unicode string, and just use the
+                    # ContentAttrParser on that, but using UTF-8 allows all chars
+                    # to be encoded and as a ASCII-superset works.
+                    data = _inputstream.EncodingBytes(attributes["content"].encode("utf-8"))
+                    parser = _inputstream.ContentAttrParser(data)
+                    codec = parser.parse()
+                    self.parser.tokenizer.stream.changeEncoding(codec)
+
+        def startTagTitle(self, token):
+            self.parser.parseRCDataRawtext(token, "RCDATA")
+
+        def startTagNoFramesStyle(self, token):
+            # Need to decide whether to implement the scripting-disabled case
+            self.parser.parseRCDataRawtext(token, "RAWTEXT")
+
+        def startTagNoscript(self, token):
+            if self.parser.scripting:
+                self.parser.parseRCDataRawtext(token, "RAWTEXT")
+            else:
+                self.tree.insertElement(token)
+                self.parser.phase = self.parser.phases["inHeadNoscript"]
+
+        def startTagScript(self, token):
+            self.tree.insertElement(token)
+            self.parser.tokenizer.state = self.parser.tokenizer.scriptDataState
+            self.parser.originalPhase = self.parser.phase
+            self.parser.phase = self.parser.phases["text"]
+
+        def startTagOther(self, token):
+            self.anythingElse()
+            return token
+
+        def endTagHead(self, token):
+            node = self.parser.tree.openElements.pop()
+            assert node.name == "head", "Expected head got %s" % node.name
+            self.parser.phase = self.parser.phases["afterHead"]
+
+        def endTagHtmlBodyBr(self, token):
+            self.anythingElse()
+            return token
+
+        def endTagOther(self, token):
+            self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
+
+        def anythingElse(self):
+            self.endTagHead(impliedTagToken("head"))
+
+        startTagHandler = _utils.MethodDispatcher([
+            ("html", startTagHtml),
+            ("title", startTagTitle),
+            (("noframes", "style"), startTagNoFramesStyle),
+            ("noscript", startTagNoscript),
+            ("script", startTagScript),
+            (("base", "basefont", "bgsound", "command", "link"),
+             startTagBaseLinkCommand),
+            ("meta", startTagMeta),
+            ("head", startTagHead)
+        ])
+        startTagHandler.default = startTagOther
+
+        endTagHandler = _utils.MethodDispatcher([
+            ("head", endTagHead),
+            (("br", "html", "body"), endTagHtmlBodyBr)
+        ])
+        endTagHandler.default = endTagOther
+
+    class InHeadNoscriptPhase(Phase):
+        __slots__ = tuple()
+
+        def processEOF(self):
+            self.parser.parseError("eof-in-head-noscript")
+            self.anythingElse()
+            return True
+
+        def processComment(self, token):
+            return self.parser.phases["inHead"].processComment(token)
+
+        def processCharacters(self, token):
+            self.parser.parseError("char-in-head-noscript")
+            self.anythingElse()
+            return token
+
+        def processSpaceCharacters(self, token):
+            return self.parser.phases["inHead"].processSpaceCharacters(token)
+
+        def startTagHtml(self, token):
+            return self.parser.phases["inBody"].processStartTag(token)
+
+        def startTagBaseLinkCommand(self, token):
+            return self.parser.phases["inHead"].processStartTag(token)
+
+        def startTagHeadNoscript(self, token):
+            self.parser.parseError("unexpected-start-tag", {"name": token["name"]})
+
+        def startTagOther(self, token):
+            self.parser.parseError("unexpected-inhead-noscript-tag", {"name": token["name"]})
+            self.anythingElse()
+            return token
+
+        def endTagNoscript(self, token):
+            node = self.parser.tree.openElements.pop()
+            assert node.name == "noscript", "Expected noscript got %s" % node.name
+            self.parser.phase = self.parser.phases["inHead"]
+
+        def endTagBr(self, token):
+            self.parser.parseError("unexpected-inhead-noscript-tag", {"name": token["name"]})
+            self.anythingElse()
+            return token
+
+        def endTagOther(self, token):
+            self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
+
+        def anythingElse(self):
+            # Caller must raise parse error first!
+            self.endTagNoscript(impliedTagToken("noscript"))
+
+        startTagHandler = _utils.MethodDispatcher([
+            ("html", startTagHtml),
+            (("basefont", "bgsound", "link", "meta", "noframes", "style"), startTagBaseLinkCommand),
+            (("head", "noscript"), startTagHeadNoscript),
+        ])
+        startTagHandler.default = startTagOther
+
+        endTagHandler = _utils.MethodDispatcher([
+            ("noscript", endTagNoscript),
+            ("br", endTagBr),
+        ])
+        endTagHandler.default = endTagOther
+
+    class AfterHeadPhase(Phase):
+        __slots__ = tuple()
+
+        def processEOF(self):
+            self.anythingElse()
+            return True
+
+        def processCharacters(self, token):
+            self.anythingElse()
+            return token
+
+        def startTagHtml(self, token):
+            return self.parser.phases["inBody"].processStartTag(token)
+
+        def startTagBody(self, token):
+            self.parser.framesetOK = False
+            self.tree.insertElement(token)
+            self.parser.phase = self.parser.phases["inBody"]
+
+        def startTagFrameset(self, token):
+            self.tree.insertElement(token)
+            self.parser.phase = self.parser.phases["inFrameset"]
+
+        def startTagFromHead(self, token):
+            self.parser.parseError("unexpected-start-tag-out-of-my-head",
+                                   {"name": token["name"]})
+            self.tree.openElements.append(self.tree.headPointer)
+            self.parser.phases["inHead"].processStartTag(token)
+            for node in self.tree.openElements[::-1]:
+                if node.name == "head":
+                    self.tree.openElements.remove(node)
+                    break
+
+        def startTagHead(self, token):
+            self.parser.parseError("unexpected-start-tag", {"name": token["name"]})
+
+        def startTagOther(self, token):
+            self.anythingElse()
+            return token
+
+        def endTagHtmlBodyBr(self, token):
+            self.anythingElse()
+            return token
+
+        def endTagOther(self, token):
+            self.parser.parseError("unexpected-end-tag", {"name": token["name"]})
+
+        def anythingElse(self):
+            self.tree.insertElement(impliedTagToken("body", "StartTag"))
+            self.parser.phase = self.parser.phases["inBody"]
+            self.parser.framesetOK = True
+
+        startTagHandler = _utils.MethodDispatcher([
+            ("html", startTagHtml),
+            ("body", startTagBody),
+            ("frameset", startTagFrameset),
+            (("base", "basefont", "bgsound", "link", "meta", "noframes", "script",
+              "style", "title"),
+             startTagFromHead),
+            ("head", startTagHead)
+        ])
+        startTagHandler.default = startTagOther
+        endTagHandler = _utils.MethodDispatcher([(("body", "html", "br"),
+                                                  endTagHtmlBodyBr)])
+        endTagHandler.default = endTagOther
+
+    class InBodyPhase(Phase):
+        # http://www.whatwg.org/specs/web-apps/current-work/#parsing-main-inbody
+        # the really-really-really-very crazy mode
+        __slots__ = ("processSpaceCharacters",)
+
+        def __init__(self, *args, **kwargs):
+            super(InBodyPhase, self).__init__(*args, **kwargs)
+            # Set this to the default handler
+            self.processSpaceCharacters = self.processSpaceCharactersNonPre
+
+        def isMatchingFormattingElement(self, node1, node2):
+            return (node1.name == node2.name and
+                    node1.namespace == node2.namespace and
+                    node1.attributes == node2.attributes)
+
+        # helper
+        def addFormattingElement(self, token):
+            self.tree.insertElement(token)
+            element = self.tree.openElements[-1]
+
+            matchingElements = []
+            for node in self.tree.activeFormattingElements[::-1]:
+                if node is Marker:
+                    break
+                elif self.isMatchingFormattingElement(node, element):
+                    matchingElements.append(node)
+
+            assert len(matchingElements) <= 3
+            if len(matchingElements) == 3:
+                self.tree.activeFormattingElements.remove(matchingElements[-1])
+            self.tree.activeFormattingElements.append(element)
+
+        # the real deal
+        def processEOF(self):
+            allowed_elements = frozenset(("dd", "dt", "li", "p", "tbody", "td",
"td", + "tfoot", "th", "thead", "tr", "body", + "html")) + for node in self.tree.openElements[::-1]: + if node.name not in allowed_elements: + self.parser.parseError("expected-closing-tag-but-got-eof") + break + # Stop parsing + + def processSpaceCharactersDropNewline(self, token): + # Sometimes (start of
, , and ', contains_pattern=r'\[(?s:.+)\]')
+        metainfo = {
+            'title': self._og_search_property('title', webpage, 'title', fatal=False),
+            'description': self._html_search_regex(
+                (rf']+\bid="album-desc-{suffix}"[^>]*>(.*?)' for suffix in ('more', 'dot')),
+                webpage, 'description', flags=re.S, fatal=False),
+            'thumbnail': self._og_search_property('image', webpage, 'thumbnail', fatal=False),
+            'upload_date': unified_strdate(self._html_search_meta('music:release_date', webpage, 'date', fatal=False)),
+        }
+        return self.playlist_result(self._get_entries(songs), album_id, **metainfo)
+
+
+class NetEaseMusicSingerIE(NetEaseMusicBaseIE):
+    IE_NAME = 'netease:singer'
+    IE_DESC = '网易云音乐 - 歌手'
+    _VALID_URL = r'https?://music\.163\.com/(#/)?artist\?id=(?P<id>[0-9]+)'
+    _TESTS = [{
+        'note': 'Singer has aliases.',
+        'url': 'http://music.163.com/#/artist?id=10559',
+        'info_dict': {
+            'id': '10559',
+            'title': '张惠妹 - aMEI;阿妹;阿密特',
+        },
+        'playlist_count': 50,
+    }, {
+        'note': 'Singer has translated name.',
+        'url': 'http://music.163.com/#/artist?id=124098',
+        'info_dict': {
+            'id': '124098',
+            'title': '李昇基 - 이승기',
+        },
+        'playlist_count': 50,
+    }, {
+        'note': 'Singer with both translated and alias',
+        'url': 'https://music.163.com/#/artist?id=159692',
+        'info_dict': {
+            'id': '159692',
+            'title': '初音ミク - 初音未来;Hatsune Miku',
+        },
+        'playlist_count': 50,
+    }]
+
+    def _real_extract(self, url):
+        singer_id = self._match_id(url)
+
+        info = self.query_api(
+            f'artist/{singer_id}?id={singer_id}', singer_id, note='Downloading singer data')
+
+        name = join_nonempty(
+            traverse_obj(info, ('artist', 'name', {str})),
+            join_nonempty(*traverse_obj(info, ('artist', ('trans', ('alias', ...)), {str})), delim=';'),
+            delim=' - ')
+
+        return self.playlist_result(self._get_entries(info, 'hotSongs'), singer_id, name)
+
+
+class NetEaseMusicListIE(NetEaseMusicBaseIE):
+    IE_NAME = 'netease:playlist'
+    IE_DESC = '网易云音乐 - 歌单'
+    _VALID_URL = r'https?://music\.163\.com/(#/)?(playlist|discover/toplist)\?id=(?P<id>[0-9]+)'
+    _TESTS = [{
+        'url': 'http://music.163.com/#/playlist?id=79177352',
+        'info_dict': {
+            'id': '79177352',
+            'title': 'Billboard 2007 Top 100',
+            'description': 'md5:12fd0819cab2965b9583ace0f8b7b022',
+            'tags': ['欧美'],
+            'uploader': '浑然破灭',
+            'uploader_id': '67549805',
+            'timestamp': int,
+            'upload_date': r're:\d{8}',
+        },
+        'playlist_mincount': 95,
+    }, {
+        'note': 'Toplist/Charts sample',
+        'url': 'https://music.163.com/#/discover/toplist?id=60198',
+        'info_dict': {
+            'id': '60198',
+            'title': 're:美国Billboard榜 [0-9]{4}-[0-9]{2}-[0-9]{2}',
+            'description': '美国Billboard排行榜',
+            'tags': ['流行', '欧美', '榜单'],
+            'uploader': 'Billboard公告牌',
+            'uploader_id': '48171',
+            'timestamp': int,
+            'upload_date': r're:\d{8}',
+        },
+        'playlist_count': 100,
+    }, {
+        'note': 'Toplist/Charts sample',
+        'url': 'http://music.163.com/#/discover/toplist?id=3733003',
+        'info_dict': {
+            'id': '3733003',
+            'title': 're:韩国Melon排行榜周榜 [0-9]{4}-[0-9]{2}-[0-9]{2}',
+            'description': 'md5:73ec782a612711cadc7872d9c1e134fc',
+        },
+        'playlist_count': 50,
+        'skip': 'Blocked outside Mainland China',
+    }]
+
+    def _real_extract(self, url):
+        list_id = self._match_id(url)
+
+        info = self._download_eapi_json(
+            '/v3/playlist/detail', list_id,
+            {'id': list_id, 't': '-1', 'n': '500', 's': '0'},
+            note="Downloading playlist info")
+
+        metainfo = traverse_obj(info, ('playlist', {
+            'title': ('name', {str}),
+            'description': ('description', {str}),
+            'tags': ('tags', ..., {str}),
+            'uploader': ('creator', 'nickname', {str}),
+            'uploader_id': ('creator', 'userId', {str_or_none}),
+            'timestamp': ('updateTime', {self.kilo_or_none}),
+        }))
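+        # specialType 10 marks official charts, whose titles roll over; the
+        # date of the latest update is appended so results stay distinguishable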
+        if traverse_obj(info, ('playlist', 'specialType')) == 10:
+            metainfo['title'] = f'{metainfo.get("title")} {strftime_or_none(metainfo.get("timestamp"), "%Y-%m-%d")}'
+
+        return self.playlist_result(self._get_entries(info, ('playlist', 'tracks')), list_id, **metainfo)
+
+
+class NetEaseMusicMvIE(NetEaseMusicBaseIE):
+    IE_NAME = 'netease:mv'
+    IE_DESC = '网易云音乐 - MV'
+    _VALID_URL = r'https?://music\.163\.com/(#/)?mv\?id=(?P<id>[0-9]+)'
+    _TESTS = [{
+        'url': 'https://music.163.com/#/mv?id=10958064',
+        'info_dict': {
+            'id': '10958064',
+            'ext': 'mp4',
+            'title': '交换余生',
+            'description': 'md5:e845872cff28820642a2b02eda428fea',
+            'creator': '林俊杰',
+            'upload_date': '20200916',
+            'thumbnail': r're:http.*\.jpg',
+            'duration': 364,
+            'view_count': int,
+            'like_count': int,
+            'comment_count': int,
+        },
+    }, {
+        'url': 'http://music.163.com/#/mv?id=415350',
+        'info_dict': {
+            'id': '415350',
+            'ext': 'mp4',
+            'title': '이럴거면 그러지말지',
+            'description': '白雅言自作曲唱甜蜜爱情',
+            'creator': '白娥娟',
+            'upload_date': '20150520',
+            'thumbnail': r're:http.*\.jpg',
+            'duration': 216,
+            'view_count': int,
+            'like_count': int,
+            'comment_count': int,
+        },
+    }]
+
+    def _real_extract(self, url):
+        mv_id = self._match_id(url)
+
+        info = self.query_api(
+            f'mv/detail?id={mv_id}&type=mp4', mv_id, 'Downloading mv info')['data']
+
+        formats = [
+            {'url': mv_url, 'ext': 'mp4', 'format_id': f'{brs}p', 'height': int_or_none(brs)}
+            for brs, mv_url in info['brs'].items()
+        ]
+
+        return {
+            'id': mv_id,
+            'formats': formats,
+            **traverse_obj(info, {
+                'title': ('name', {str}),
+                'description': (('desc', 'briefDesc'), {str}, {lambda x: x or None}),
+                'creator': ('artistName', {str}),
+                'upload_date': ('publishTime', {unified_strdate}),
+                'thumbnail': ('cover', {url_or_none}),
+                'duration': ('duration', {self.kilo_or_none}),
+                'view_count': ('playCount', {int_or_none}),
+                'like_count': ('likeCount', {int_or_none}),
+                'comment_count': ('commentCount', {int_or_none}),
+            }, get_all=False),
+        }
+
+
+class NetEaseMusicProgramIE(NetEaseMusicBaseIE):
+    IE_NAME = 'netease:program'
+    IE_DESC = '网易云音乐 - 电台节目'
+    _VALID_URL = r'https?://music\.163\.com/(#/?)program\?id=(?P<id>[0-9]+)'
+    _TESTS = [{
+        'url': 'http://music.163.com/#/program?id=10109055',
+        'info_dict': {
+            'id': '32593346',
+            'ext': 'mp3',
+            'title': '不丹足球背后的故事',
+            'description': '喜马拉雅人的足球梦 ...',
+            'creator': '大话西藏',
+            'timestamp': 1434179287,
+            'upload_date': '20150613',
+            'thumbnail': r're:http.*\.jpg',
+            'duration': 900,
+        },
+    }, {
+        'note': 'This program has accompanying songs.',
+        'url': 'http://music.163.com/#/program?id=10141022',
+        'info_dict': {
+            'id': '10141022',
+            'title': '滚滚电台的有声节目',
+            'description': 'md5:8d594db46cc3e6509107ede70a4aaa3b',
+            'creator': '滚滚电台ORZ',
+            'timestamp': 1434450733,
+            'upload_date': '20150616',
+            'thumbnail': r're:http.*\.jpg',
+        },
+        'playlist_count': 4,
+    }, {
+        'note': 'This program has accompanying songs.',
+        'url': 'http://music.163.com/#/program?id=10141022',
+        'info_dict': {
+            'id': '32647209',
+            'ext': 'mp3',
+            'title': '滚滚电台的有声节目',
+            'description': 'md5:8d594db46cc3e6509107ede70a4aaa3b',
+            'creator': '滚滚电台ORZ',
+            'timestamp': 1434450733,
+            'upload_date': '20150616',
+            'thumbnail': r're:http.*\.jpg',
+            'duration': 1104,
+        },
+        'params': {
+            'noplaylist': True
+        },
+    }]
+
+    def _real_extract(self, url):
+        program_id = self._match_id(url)
+
+        info = self.query_api(
+            f'dj/program/detail?id={program_id}', program_id, note='Downloading program info')['program']
+
+        metainfo = traverse_obj(info, {
+            'title': ('name', {str}),
+            'description': ('description', {str}),
+            'creator': ('dj', 'brand', {str}),
+            'thumbnail': ('coverUrl', {url_or_none}),
+            'timestamp': ('createTime', {self.kilo_or_none}),
+        })
+
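+        # if the program has no accompanying songs, or --no-playlist was given,
+        # extract only the program's main song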
+        if not self._yes_playlist(info['songs'] and program_id, info['mainSong']['id']):
+            formats = self.extract_formats(info['mainSong'])
+
+            return {
+                'id': str(info['mainSong']['id']),
+                'formats': formats,
+                'duration': traverse_obj(info, ('mainSong', 'duration', {self.kilo_or_none})),
+                **metainfo,
+            }
+
+        songs = traverse_obj(info, (('mainSong', ('songs', ...)),))
+        return self.playlist_result(self._get_entries(songs), program_id, **metainfo)
+
+
+class NetEaseMusicDjRadioIE(NetEaseMusicBaseIE):
+    IE_NAME = 'netease:djradio'
+    IE_DESC = '网易云音乐 - 电台'
+    _VALID_URL = r'https?://music\.163\.com/(#/)?djradio\?id=(?P<id>[0-9]+)'
+    _TEST = {
+        'url': 'http://music.163.com/#/djradio?id=42',
+        'info_dict': {
+            'id': '42',
+            'title': '声音蔓延',
+            'description': 'md5:c7381ebd7989f9f367668a5aee7d5f08'
+        },
+        'playlist_mincount': 40,
+    }
+    _PAGE_SIZE = 1000
+
+    def _real_extract(self, url):
+        dj_id = self._match_id(url)
+
+        metainfo = {}
+        entries = []
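+        # page through the radio's programs until the API reports no more pages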
+        for offset in itertools.count(start=0, step=self._PAGE_SIZE):
+            info = self.query_api(
+                f'dj/program/byradio?asc=false&limit={self._PAGE_SIZE}&radioId={dj_id}&offset={offset}',
+                dj_id, note=f'Downloading dj programs - {offset}')
+
+            entries.extend(self.url_result(
+                f'http://music.163.com/#/program?id={program["id"]}', NetEaseMusicProgramIE,
+                program['id'], program.get('name')) for program in info['programs'])
+            if not metainfo:
+                metainfo = traverse_obj(info, ('programs', 0, 'radio', {
+                    'title': ('name', {str}),
+                    'description': ('desc', {str}),
+                }))
+
+            if not info['more']:
+                break
+
+        return self.playlist_result(entries, dj_id, **metainfo)
diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/netverse.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/netverse.py
new file mode 100644
index 0000000..ef53e15
--- /dev/null
+++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/netverse.py
@@ -0,0 +1,281 @@
+import itertools
+
+from .common import InfoExtractor, SearchInfoExtractor
+from .dailymotion import DailymotionIE
+from ..utils import smuggle_url, traverse_obj
+
+
+class NetverseBaseIE(InfoExtractor):
+    _ENDPOINTS = {
+        'watch': 'watchvideo',
+        'video': 'watchvideo',
+        'webseries': 'webseries',
+        'season': 'webseason_videos',
+    }
+
+    def _call_api(self, slug, endpoint, query={}, season_id='', display_id=None):
+        return self._download_json(
+            f'https://api.netverse.id/medias/api/v2/{self._ENDPOINTS[endpoint]}/{slug}/{season_id}',
+            display_id or slug, query=query)
+
+    def _get_comments(self, video_id):
+        last_page_number = None
+        for i in itertools.count(1):
+            comment_data = self._download_json(
+                f'https://api.netverse.id/mediadetails/api/v3/videos/comments/{video_id}',
+                video_id, data=b'', fatal=False, query={'page': i},
+                note=f'Downloading JSON comment metadata page {i}') or {}
+            yield from traverse_obj(comment_data, ('response', 'comments', 'data', ..., {
+                'id': '_id',
+                'text': 'comment',
+                'author_id': 'customer_id',
+                'author': ('customer', 'name'),
+                'author_thumbnail': ('customer', 'profile_picture'),
+            }))
+
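+            # the first response reports the total page count; stop once the
+            # last page of comments has been yielded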
+            if not last_page_number:
+                last_page_number = traverse_obj(comment_data, ('response', 'comments', 'last_page'))
+            if i >= (last_page_number or 0):
+                break
+
+
+class NetverseIE(NetverseBaseIE):
+    _VALID_URL = r'https?://(?:\w+\.)?netverse\.id/(?P<type>watch|video)/(?P<display_id>[^/?#&]+)'
+    _TESTS = [{
+        # Watch video
+        'url': 'https://www.netverse.id/watch/waktu-indonesia-bercanda-edisi-spesial-lebaran-2016',
+        'info_dict': {
+            'id': 'k4yhqUwINAGtmHx3NkL',
+            'title': 'Waktu Indonesia Bercanda - Edisi Spesial Lebaran 2016',
+            'ext': 'mp4',
+            'season': 'Season 2016',
+            'description': 'md5:d41d8cd98f00b204e9800998ecf8427e',
+            'thumbnail': r're:https?://s\d+\.dmcdn\.net/v/[^/]+/x1080',
+            'episode_number': 22,
+            'episode': 'Episode 22',
+            'uploader_id': 'x2ir3vq',
+            'age_limit': 0,
+            'tags': [],
+            'view_count': int,
+            'display_id': 'waktu-indonesia-bercanda-edisi-spesial-lebaran-2016',
+            'duration': 2990,
+            'upload_date': '20210722',
+            'timestamp': 1626919804,
+            'like_count': int,
+            'uploader': 'Net Prime',
+        }
+    }, {
+        # series
+        'url': 'https://www.netverse.id/watch/jadoo-seorang-model',
+        'info_dict': {
+            'id': 'x88izwc',
+            'title': 'Jadoo Seorang Model',
+            'ext': 'mp4',
+            'season': 'Season 2',
+            'description': 'md5:8a74f70812cca267e19ee0635f0af835',
+            'thumbnail': r're:https?://s\d+\.dmcdn\.net/v/[^/]+/x1080',
+            'episode_number': 2,
+            'episode': 'Episode 2',
+            'view_count': int,
+            'like_count': int,
+            'display_id': 'jadoo-seorang-model',
+            'uploader_id': 'x2ir3vq',
+            'duration': 635,
+            'timestamp': 1646372927,
+            'tags': ['PG069497-hellojadooseason2eps2'],
+            'upload_date': '20220304',
+            'uploader': 'Net Prime',
+            'age_limit': 0,
+        },
+        'skip': 'Video gets geo-blocked in some countries'
+    }, {
+        # non www host
+        'url': 'https://netverse.id/watch/tetangga-baru',
+        'info_dict': {
+            'id': 'k4CNGz7V0HJ7vfwZbXy',
+            'ext': 'mp4',
+            'title': 'Tetangga Baru',
+            'season': 'Season 1',
+            'description': 'md5:23fcf70e97d461d3029d25d59b2ccfb9',
+            'thumbnail': r're:https?://s\d+\.dmcdn\.net/v/[^/]+/x1080',
+            'episode_number': 1,
+            'episode': 'Episode 1',
+            'timestamp': 1624538169,
+            'view_count': int,
+            'upload_date': '20210624',
+            'age_limit': 0,
+            'uploader_id': 'x2ir3vq',
+            'like_count': int,
+            'uploader': 'Net Prime',
+            'tags': ['PG008534', 'tetangga', 'Baru'],
+            'display_id': 'tetangga-baru',
+            'duration': 1406,
+        },
+    }, {
+        # /video url
+        'url': 'https://www.netverse.id/video/pg067482-hellojadoo-season1',
+        'title': 'Namaku Choi Jadoo',
+        'info_dict': {
+            'id': 'x887jzz',
+            'ext': 'mp4',
+            'thumbnail': r're:https?://s\d+\.dmcdn\.net/v/[^/]+/x1080',
+            'season': 'Season 1',
+            'episode_number': 1,
+            'description': 'md5:d4f627b3e7a3f9acdc55f6cdd5ea41d5',
+            'title': 'Namaku Choi Jadoo',
+            'episode': 'Episode 1',
+            'age_limit': 0,
+            'like_count': int,
+            'view_count': int,
+            'tags': ['PG067482', 'PG067482-HelloJadoo-season1'],
+            'duration': 780,
+            'display_id': 'pg067482-hellojadoo-season1',
+            'uploader_id': 'x2ir3vq',
+            'uploader': 'Net Prime',
+            'timestamp': 1645764984,
+            'upload_date': '20220225',
+        },
+        'skip': 'This video gets geo-blocked in some countries'
+    }, {
+        # video with comments
+        'url': 'https://netverse.id/video/episode-1-season-2016-ok-food',
+        'info_dict': {
+            'id': 'k6hetBPiQMljSxxvAy7',
+            'ext': 'mp4',
+            'thumbnail': r're:https?://s\d+\.dmcdn\.net/v/[^/]+/x1080',
+            'display_id': 'episode-1-season-2016-ok-food',
+            'like_count': int,
+            'description': '',
+            'duration': 1471,
+            'age_limit': 0,
+            'timestamp': 1642405848,
+            'episode_number': 1,
+            'season': 'Season 2016',
+            'uploader_id': 'x2ir3vq',
+            'title': 'Episode 1 - Season 2016 - Ok Food',
+            'upload_date': '20220117',
+            'tags': [],
+            'view_count': int,
+            'episode': 'Episode 1',
+            'uploader': 'Net Prime',
+            'comment_count': int,
+        },
+        'params': {
+            'getcomments': True
+        }
+    }, {
+        # video with multiple page comment
+        'url': 'https://netverse.id/video/match-island-eps-1-fix',
+        'info_dict': {
+            'id': 'x8aznjc',
+            'ext': 'mp4',
+            'like_count': int,
+            'tags': ['Match-Island', 'Pd00111'],
+            'display_id': 'match-island-eps-1-fix',
+            'view_count': int,
+            'episode': 'Episode 1',
+            'uploader': 'Net Prime',
+            'duration': 4070,
+            'timestamp': 1653068165,
+            'description': 'md5:e9cf3b480ad18e9c33b999e3494f223f',
+            'age_limit': 0,
+            'title': 'Welcome To Match Island',
+            'upload_date': '20220520',
+            'episode_number': 1,
+            'thumbnail': r're:https?://s\d+\.dmcdn\.net/v/[^/]+/x1080',
+            'uploader_id': 'x2ir3vq',
+            'season': 'Season 1',
+            'comment_count': int,
+        },
+        'params': {
+            'getcomments': True
+        }
+    }]
+
+    def _real_extract(self, url):
+        display_id, sites_type = self._match_valid_url(url).group('display_id', 'type')
+        program_json = self._call_api(display_id, sites_type)
+        videos = program_json['response']['videos']
+
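+        # playback is hosted on Dailymotion, so delegate to DailymotionIE via a
+        # transparent URL result while keeping Netverse's own metadata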
+        return {
+            '_type': 'url_transparent',
+            'ie_key': DailymotionIE.ie_key(),
+            'url': smuggle_url(videos['dailymotion_url'], {'query': {'embedder': 'https://www.netverse.id'}}),
+            'display_id': display_id,
+            'title': videos.get('title'),
+            'season': videos.get('season_name'),
+            'thumbnail': traverse_obj(videos, ('program_detail', 'thumbnail_image')),
+            'description': traverse_obj(videos, ('program_detail', 'description')),
+            'episode_number': videos.get('episode_order'),
+            '__post_extractor': self.extract_comments(display_id),
+        }
+
+
+class NetversePlaylistIE(NetverseBaseIE):
+    _VALID_URL = r'https?://(?:\w+\.)?netverse\.id/(?P<type>webseries)/(?P<display_id>[^/?#&]+)'
+    _TESTS = [{
+        # multiple season
+        'url': 'https://netverse.id/webseries/tetangga-masa-gitu',
+        'info_dict': {
+            'id': 'tetangga-masa-gitu',
+            'title': 'Tetangga Masa Gitu',
+        },
+        'playlist_count': 519,
+    }, {
+        # single season
+        'url': 'https://netverse.id/webseries/kelas-internasional',
+        'info_dict': {
+            'id': 'kelas-internasional',
+            'title': 'Kelas Internasional',
+        },
+        'playlist_count': 203,
+    }]
+
+    def parse_playlist(self, json_data, playlist_id):
+        slug_sample = traverse_obj(json_data, ('related', 'data', ..., 'slug'))[0]
+        for season in traverse_obj(json_data, ('seasons', ..., 'id')):
+            playlist_json = self._call_api(
+                slug_sample, 'season', display_id=playlist_id, season_id=season)
+
+            for current_page in range(playlist_json['response']['season_list']['last_page']):
+                playlist_json = self._call_api(slug_sample, 'season', query={'page': current_page + 1},
+                                               season_id=season, display_id=playlist_id)
+                for slug in traverse_obj(playlist_json, ('response', ..., 'data', ..., 'slug')):
+                    yield self.url_result(f'https://www.netverse.id/video/{slug}', NetverseIE)
+
+    def _real_extract(self, url):
+        playlist_id, sites_type = self._match_valid_url(url).group('display_id', 'type')
+        playlist_data = self._call_api(playlist_id, sites_type)
+
+        return self.playlist_result(
+            self.parse_playlist(playlist_data['response'], playlist_id),
+            traverse_obj(playlist_data, ('response', 'webseries_info', 'slug')),
+            traverse_obj(playlist_data, ('response', 'webseries_info', 'title')))
+
+
+class NetverseSearchIE(SearchInfoExtractor):
+    _SEARCH_KEY = 'netsearch'
+
+    _TESTS = [{
+        'url': 'netsearch10:tetangga',
+        'info_dict': {
+            'id': 'tetangga',
+            'title': 'tetangga',
+        },
+        'playlist_count': 10,
+    }]
+
+    def _search_results(self, query):
+        last_page = None
+        for i in itertools.count(1):
+            search_data = self._download_json(
+                'https://api.netverse.id/search/elastic/search', query,
+                query={'q': query, 'page': i}, note=f'Downloading page {i}')
+
+            videos = traverse_obj(search_data, ('response', 'data', ...))
+            for video in videos:
+                yield self.url_result(f'https://netverse.id/video/{video["slug"]}', NetverseIE)
+
+            last_page = last_page or traverse_obj(search_data, ('response', 'lastpage'))
+            if not videos or i >= (last_page or 0):
+                break
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/netzkino.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/netzkino.py
similarity index 100%
rename from lib/python3.11/site-packages/yt_dlp/extractor/netzkino.py
rename to python/lib/python3.10/site-packages/yt_dlp/extractor/netzkino.py
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/newgrounds.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/newgrounds.py
similarity index 100%
rename from lib/python3.11/site-packages/yt_dlp/extractor/newgrounds.py
rename to python/lib/python3.10/site-packages/yt_dlp/extractor/newgrounds.py
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/newspicks.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/newspicks.py
similarity index 100%
rename from lib/python3.11/site-packages/yt_dlp/extractor/newspicks.py
rename to python/lib/python3.10/site-packages/yt_dlp/extractor/newspicks.py
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/newstube.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/newstube.py
similarity index 100%
rename from lib/python3.11/site-packages/yt_dlp/extractor/newstube.py
rename to python/lib/python3.10/site-packages/yt_dlp/extractor/newstube.py
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/newsy.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/newsy.py
similarity index 100%
rename from lib/python3.11/site-packages/yt_dlp/extractor/newsy.py
rename to python/lib/python3.10/site-packages/yt_dlp/extractor/newsy.py
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nextmedia.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nextmedia.py
similarity index 100%
rename from lib/python3.11/site-packages/yt_dlp/extractor/nextmedia.py
rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nextmedia.py
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nexx.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nexx.py
similarity index 100%
rename from lib/python3.11/site-packages/yt_dlp/extractor/nexx.py
rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nexx.py
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nfb.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nfb.py
similarity index 100%
rename from lib/python3.11/site-packages/yt_dlp/extractor/nfb.py
rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nfb.py
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nfhsnetwork.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nfhsnetwork.py
similarity index 100%
rename from lib/python3.11/site-packages/yt_dlp/extractor/nfhsnetwork.py
rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nfhsnetwork.py
diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/nfl.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nfl.py
new file mode 100644
index 0000000..3f83cd2
--- /dev/null
+++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/nfl.py
@@ -0,0 +1,373 @@
+import base64
+import json
+import re
+import time
+import uuid
+
+from .anvato import AnvatoIE
+from .common import InfoExtractor
+from ..utils import (
+    ExtractorError,
+    clean_html,
+    determine_ext,
+    get_element_by_class,
+    traverse_obj,
+    urlencode_postdata,
+)
+
+
+class NFLBaseIE(InfoExtractor):
+    _VALID_URL_BASE = r'''(?x)
+                    https?://
+                        (?P<host>
+                            (?:www\.)?
+                            (?:
+                                (?:
+                                    nfl|
+                                    buffalobills|
+                                    miamidolphins|
+                                    patriots|
+                                    newyorkjets|
+                                    baltimoreravens|
+                                    bengals|
+                                    clevelandbrowns|
+                                    steelers|
+                                    houstontexans|
+                                    colts|
+                                    jaguars|
+                                    (?:titansonline|tennesseetitans)|
+                                    denverbroncos|
+                                    (?:kc)?chiefs|
+                                    raiders|
+                                    chargers|
+                                    dallascowboys|
+                                    giants|
+                                    philadelphiaeagles|
+                                    (?:redskins|washingtonfootball)|
+                                    chicagobears|
+                                    detroitlions|
+                                    packers|
+                                    vikings|
+                                    atlantafalcons|
+                                    panthers|
+                                    neworleanssaints|
+                                    buccaneers|
+                                    azcardinals|
+                                    (?:stlouis|the)rams|
+                                    49ers|
+                                    seahawks
+                                )\.com|
+                                .+?\.clubs\.nfl\.com
+                            )
+                        )/
+                    '''
+    _VIDEO_CONFIG_REGEX = r'<script[^>]+id="[^"]*video-config-[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12}[^"]*"[^>]*>\s*({.+});?\s*</script>'
+    _ANVATO_PREFIX = 'anvato:GXvEgwyJeWem8KCYXfeoHWknwP48Mboj:'
+
+    _CLIENT_DATA = {
+        'clientKey': '4cFUW6DmwJpzT9L7LrG3qRAcABG5s04g',
+        'clientSecret': 'CZuvCL49d9OwfGsR',
+        'deviceId': str(uuid.uuid4()),
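+        # deviceInfo is sent as base64-encoded, compact JSON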
+        'deviceInfo': base64.b64encode(json.dumps({
+            'model': 'desktop',
+            'version': 'Chrome',
+            'osName': 'Windows',
+            'osVersion': '10.0',
+        }, separators=(',', ':')).encode()).decode(),
+        'networkType': 'other',
+        'nflClaimGroupsToAdd': [],
+        'nflClaimGroupsToRemove': [],
+    }
+    _ACCOUNT_INFO = {}
+    _API_KEY = None
+
+    _TOKEN = None
+    _TOKEN_EXPIRY = 0
+
+    def _get_account_info(self, url, slug):
+        if not self._API_KEY:
+            webpage = self._download_webpage(url, slug, fatal=False) or ''
+            self._API_KEY = self._search_regex(
+                r'window\.gigyaApiKey\s*=\s*["\'](\w+)["\'];', webpage, 'API key',
+                fatal=False) or '3_Qa8TkWpIB8ESCBT8tY2TukbVKgO5F6BJVc7N1oComdwFzI7H2L9NOWdm11i_BY9f'
+
+        cookies = self._get_cookies('https://auth-id.nfl.com/')
+        login_token = traverse_obj(cookies, (
+            (f'glt_{self._API_KEY}', lambda k, _: k.startswith('glt_')), {lambda x: x.value}), get_all=False)
+        if not login_token:
+            self.raise_login_required()
+        if 'ucid' not in cookies:
+            raise ExtractorError(
+                'Required cookies for the auth-id.nfl.com domain were not found among passed cookies. '
+                'If using --cookies, these cookies must be exported along with .nfl.com cookies, '
+                'or else try using --cookies-from-browser instead', expected=True)
+
+        account = self._download_json(
+            'https://auth-id.nfl.com/accounts.getAccountInfo', slug,
+            note='Downloading account info', data=urlencode_postdata({
+                'include': 'profile,data',
+                'lang': 'en',
+                'APIKey': self._API_KEY,
+                'sdk': 'js_latest',
+                'login_token': login_token,
+                'authMode': 'cookie',
+                'pageURL': url,
+                'sdkBuild': traverse_obj(cookies, (
+                    'gig_canary_ver', {lambda x: x.value.partition('-')[0]}), default='15170'),
+                'format': 'json',
+            }), headers={'Content-Type': 'application/x-www-form-urlencoded'})
+
+        self._ACCOUNT_INFO = traverse_obj(account, {
+            'signatureTimestamp': 'signatureTimestamp',
+            'uid': 'UID',
+            'uidSignature': 'UIDSignature',
+        })
+
+        if len(self._ACCOUNT_INFO) != 3:
+            raise ExtractorError('Failed to retrieve account info with provided cookies', expected=True)
+
+    def _get_auth_token(self, url, slug):
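+        # reuse the cached token unless it expires within the next 30 seconds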
+        if self._TOKEN and self._TOKEN_EXPIRY > int(time.time() + 30):
+            return
+
+        if not self._ACCOUNT_INFO:
+            self._get_account_info(url, slug)
+
+        token = self._download_json(
+            'https://api.nfl.com/identity/v3/token%s' % (
+                '/refresh' if self._ACCOUNT_INFO.get('refreshToken') else ''),
+            slug, headers={'Content-Type': 'application/json'}, note='Downloading access token',
+            data=json.dumps({**self._CLIENT_DATA, **self._ACCOUNT_INFO}, separators=(',', ':')).encode())
+
+        self._TOKEN = token['accessToken']
+        self._TOKEN_EXPIRY = token['expiresIn']
+        self._ACCOUNT_INFO['refreshToken'] = token['refreshToken']
+
+    def _parse_video_config(self, video_config, display_id):
+        video_config = self._parse_json(video_config, display_id)
+        item = video_config['playlist'][0]
+        mcp_id = item.get('mcpID')
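+        # entries backed by Anvato (identified by an mcpID) are delegated to AnvatoIE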
+        if mcp_id:
+            info = self.url_result(f'{self._ANVATO_PREFIX}{mcp_id}', AnvatoIE, mcp_id)
+        else:
+            media_id = item.get('id') or item['entityId']
+            title = item.get('title')
+            item_url = item['url']
+            info = {'id': media_id}
+            ext = determine_ext(item_url)
+            if ext == 'm3u8':
+                info['formats'] = self._extract_m3u8_formats(item_url, media_id, 'mp4')
+            else:
+                info['url'] = item_url
+                if item.get('audio') is True:
+                    info['vcodec'] = 'none'
+            is_live = video_config.get('live') is True
+            thumbnails = None
+            image_url = item.get(item.get('imageSrc')) or item.get(item.get('posterImage'))
+            if image_url:
+                thumbnails = [{
+                    'url': image_url,
+                    'ext': determine_ext(image_url, 'jpg'),
+                }]
+            info.update({
+                'title': title,
+                'is_live': is_live,
+                'description': clean_html(item.get('description')),
+                'thumbnails': thumbnails,
+            })
+        return info
+
+
+class NFLIE(NFLBaseIE):
+    IE_NAME = 'nfl.com'
+    _VALID_URL = NFLBaseIE._VALID_URL_BASE + r'(?:videos?|listen|audio)/(?P<id>[^/#?&]+)'
+    _TESTS = [{
+        'url': 'https://www.nfl.com/videos/baker-mayfield-s-game-changing-plays-from-3-td-game-week-14',
+        'info_dict': {
+            'id': '899441',
+            'ext': 'mp4',
+            'title': "Baker Mayfield's game-changing plays from 3-TD game Week 14",
+            'description': 'md5:85e05a3cc163f8c344340f220521136d',
+            'upload_date': '20201215',
+            'timestamp': 1608009755,
+            'thumbnail': r're:^https?://.*\.jpg$',
+            'uploader': 'NFL',
+            'tags': 'count:6',
+            'duration': 157,
+            'categories': 'count:3',
+        }
+    }, {
+        'url': 'https://www.chiefs.com/listen/patrick-mahomes-travis-kelce-react-to-win-over-dolphins-the-breakdown',
+        'md5': '6886b32c24b463038c760ceb55a34566',
+        'info_dict': {
+            'id': 'd87e8790-3e14-11eb-8ceb-ff05c2867f99',
+            'ext': 'mp3',
+            'title': 'Patrick Mahomes, Travis Kelce React to Win Over Dolphins | The Breakdown',
+            'description': 'md5:12ada8ee70e6762658c30e223e095075',
+        },
+        'skip': 'HTTP Error 404: Not Found',
+    }, {
+        'url': 'https://www.buffalobills.com/video/buffalo-bills-military-recognition-week-14',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.raiders.com/audio/instant-reactions-raiders-week-14-loss-to-indianapolis-colts-espn-jason-fitz',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        return self._parse_video_config(self._search_regex(
+            self._VIDEO_CONFIG_REGEX, webpage, 'video config'), display_id)
+
+
+class NFLArticleIE(NFLBaseIE):
+    IE_NAME = 'nfl.com:article'
+    _VALID_URL = NFLBaseIE._VALID_URL_BASE + r'news/(?P<id>[^/#?&]+)'
+    _TEST = {
+        'url': 'https://www.buffalobills.com/news/the-only-thing-we-ve-earned-is-the-noise-bills-coaches-discuss-handling-rising-e',
+        'info_dict': {
+            'id': 'the-only-thing-we-ve-earned-is-the-noise-bills-coaches-discuss-handling-rising-e',
+            'title': "'The only thing we've earned is the noise' | Bills coaches discuss handling rising expectations",
+        },
+        'playlist_count': 4,
+    }
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        entries = []
+        for video_config in re.findall(self._VIDEO_CONFIG_REGEX, webpage):
+            entries.append(self._parse_video_config(video_config, display_id))
+        title = clean_html(get_element_by_class(
+            'nfl-c-article__title', webpage)) or self._html_search_meta(
+            ['og:title', 'twitter:title'], webpage)
+        return self.playlist_result(entries, display_id, title)
+
+
+class NFLPlusReplayIE(NFLBaseIE):
+    IE_NAME = 'nfl.com:plus:replay'
+    _VALID_URL = r'https?://(?:www\.)?nfl\.com/plus/games/(?P<slug>[\w-]+)(?:/(?P<id>\d+))?'
+    _TESTS = [{
+        'url': 'https://www.nfl.com/plus/games/giants-at-vikings-2022-post-1/1572108',
+        'info_dict': {
+            'id': '1572108',
+            'ext': 'mp4',
+            'title': 'New York Giants at Minnesota Vikings',
+            'description': 'New York Giants play the Minnesota Vikings at U.S. Bank Stadium on January 15, 2023',
+            'uploader': 'NFL',
+            'upload_date': '20230116',
+            'timestamp': 1673864520,
+            'duration': 7157,
+            'categories': ['Game Highlights'],
+            'tags': ['Minnesota Vikings', 'New York Giants', 'Minnesota Vikings vs. New York Giants'],
+            'thumbnail': r're:^https?://.*\.jpg',
+        },
+        'params': {'skip_download': 'm3u8'},
+    }, {
+        'note': 'Subscription required',
+        'url': 'https://www.nfl.com/plus/games/giants-at-vikings-2022-post-1',
+        'playlist_count': 4,
+        'info_dict': {
+            'id': 'giants-at-vikings-2022-post-1',
+        },
+    }, {
+        'note': 'Subscription required',
+        'url': 'https://www.nfl.com/plus/games/giants-at-patriots-2011-pre-4',
+        'playlist_count': 2,
+        'info_dict': {
+            'id': 'giants-at-patriots-2011-pre-4',
+        },
+    }, {
+        'note': 'Subscription required',
+        'url': 'https://www.nfl.com/plus/games/giants-at-patriots-2011-pre-4',
+        'info_dict': {
+            'id': '950701',
+            'ext': 'mp4',
+            'title': 'Giants @ Patriots',
+            'description': 'Giants at Patriots on September 01, 2011',
+            'uploader': 'NFL',
+            'upload_date': '20210724',
+            'timestamp': 1627085874,
+            'duration': 1532,
+            'categories': ['Game Highlights'],
+            'tags': ['play-by-play'],
+            'thumbnail': r're:^https?://.*\.jpg',
+        },
+        'params': {
+            'skip_download': 'm3u8',
+            'extractor_args': {'nflplusreplay': {'type': ['condensed_game']}},
+        },
+    }]
+
+    _REPLAY_TYPES = {
+        'full_game': 'Full Game',
+        'full_game_spanish': 'Full Game - Spanish',
+        'condensed_game': 'Condensed Game',
+        'all_22': 'All-22',
+    }
+
+    def _real_extract(self, url):
+        slug, video_id = self._match_valid_url(url).group('slug', 'id')
+        requested_types = self._configuration_arg('type', ['all'])
+        if 'all' in requested_types:
+            requested_types = list(self._REPLAY_TYPES.keys())
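+        # resolve the requested type keys to the API's human-readable subType labels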
+        requested_types = traverse_obj(self._REPLAY_TYPES, (None, requested_types))
+
+        if not video_id:
+            self._get_auth_token(url, slug)
+            headers = {'Authorization': f'Bearer {self._TOKEN}'}
+            game_id = self._download_json(
+                f'https://api.nfl.com/football/v2/games/externalId/slug/{slug}', slug,
+                'Downloading game ID', query={'withExternalIds': 'true'}, headers=headers)['id']
+            replays = self._download_json(
+                'https://api.nfl.com/content/v1/videos/replays', slug, 'Downloading replays JSON',
+                query={'gameId': game_id}, headers=headers)
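+            # a single requested type can be resolved directly to one video ID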
+            if len(requested_types) == 1:
+                video_id = traverse_obj(replays, (
+                    'items', lambda _, v: v['subType'] == requested_types[0], 'mcpPlaybackId'), get_all=False)
+
+        if video_id:
+            return self.url_result(f'{self._ANVATO_PREFIX}{video_id}', AnvatoIE, video_id)
+
+        def entries():
+            for replay in traverse_obj(
+                replays, ('items', lambda _, v: v['mcpPlaybackId'] and v['subType'] in requested_types)
+            ):
+                video_id = replay['mcpPlaybackId']
+                yield self.url_result(f'{self._ANVATO_PREFIX}{video_id}', AnvatoIE, video_id)
+
+        return self.playlist_result(entries(), slug)
+
+
+class NFLPlusEpisodeIE(NFLBaseIE):
+    IE_NAME = 'nfl.com:plus:episode'
+    _VALID_URL = r'https?://(?:www\.)?nfl\.com/plus/episodes/(?P<id>[\w-]+)'
+    _TESTS = [{
+        'note': 'Subscription required',
+        'url': 'https://www.nfl.com/plus/episodes/kurt-s-qb-insider-conference-championships',
+        'info_dict': {
+            'id': '1576832',
+            'ext': 'mp4',
+            'title': 'Conference Championships',
+            'description': 'md5:944f7fab56f7a37430bf8473f5473857',
+            'uploader': 'NFL',
+            'upload_date': '20230127',
+            'timestamp': 1674782760,
+            'duration': 730,
+            'categories': ['Analysis'],
+            'tags': ['Cincinnati Bengals at Kansas City Chiefs (2022-POST-3)'],
+            'thumbnail': r're:^https?://.*\.jpg',
+        },
+        'params': {'skip_download': 'm3u8'},
+    }]
+
+    def _real_extract(self, url):
+        slug = self._match_id(url)
+        self._get_auth_token(url, slug)
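+        # the episodes endpoint maps the slug to an Anvato playback ID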
+        video_id = self._download_json(
+            f'https://api.nfl.com/content/v1/videos/episodes/{slug}', slug, headers={
+                'Authorization': f'Bearer {self._TOKEN}',
+            })['mcpPlaybackId']
+
+        return self.url_result(f'{self._ANVATO_PREFIX}{video_id}', AnvatoIE, video_id)
diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/nhk.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nhk.py
new file mode 100644
index 0000000..f6b5c50
--- /dev/null
+++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/nhk.py
@@ -0,0 +1,619 @@
+import re
+
+from .common import InfoExtractor
+from ..utils import (
+    ExtractorError,
+    int_or_none,
+    join_nonempty,
+    parse_duration,
+    traverse_obj,
+    unescapeHTML,
+    unified_timestamp,
+    url_or_none,
+    urljoin,
+)
+
+
+class NhkBaseIE(InfoExtractor):
+    _API_URL_TEMPLATE = 'https://nwapi.nhk.jp/nhkworld/%sod%slist/v7b/%s/%s/%s/all%s.json'
+    _BASE_URL_REGEX = r'https?://www3\.nhk\.or\.jp/nhkworld/(?P<lang>[a-z]{2})/ondemand'
+    _TYPE_REGEX = r'/(?P<type>video|audio)/'
+
+    def _call_api(self, m_id, lang, is_video, is_episode, is_clip):
+        return self._download_json(
+            self._API_URL_TEMPLATE % (
+                'v' if is_video else 'r',
+                'clip' if is_clip else 'esd',
+                'episode' if is_episode else 'program',
+                m_id, lang, '/all' if is_video else ''),
+            m_id, query={'apikey': 'EJfK8jdS57GqlupFgAfAAwr573q01y6k'})['data']['episodes'] or []
+
+    def _get_api_info(self, refresh=True):
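+        # the stream API endpoint and token are scraped from the site's player JS
+        # and cached; refresh=False returns whatever copy is currently cached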
+        if not refresh:
+            return self.cache.load('nhk', 'api_info')
+
+        self.cache.store('nhk', 'api_info', {})
+        movie_player_js = self._download_webpage(
+            'https://movie-a.nhk.or.jp/world/player/js/movie-player.js', None,
+            note='Downloading stream API information')
+        api_info = {
+            'url': self._search_regex(
+                r'prod:[^;]+\bapiUrl:\s*[\'"]([^\'"]+)[\'"]', movie_player_js, None, 'stream API url'),
+            'token': self._search_regex(
+                r'prod:[^;]+\btoken:\s*[\'"]([^\'"]+)[\'"]', movie_player_js, None, 'stream API token'),
+        }
+        self.cache.store('nhk', 'api_info', api_info)
+        return api_info
+
+    def _extract_formats_and_subtitles(self, vod_id):
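+        # try the cached stream API info first, then re-scrape it once on failure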
+        for refresh in (False, True):
+            api_info = self._get_api_info(refresh)
+            if not api_info:
+                continue
+
+            api_url = api_info.pop('url')
+            stream_url = traverse_obj(
+                self._download_json(
+                    api_url, vod_id, 'Downloading stream url info', fatal=False, query={
+                        **api_info,
+                        'type': 'json',
+                        'optional_id': vod_id,
+                        'active_flg': 1,
+                    }),
+                ('meta', 0, 'movie_url', ('mb_auto', 'auto_sp', 'auto_pc'), {url_or_none}), get_all=False)
+            if stream_url:
+                return self._extract_m3u8_formats_and_subtitles(stream_url, vod_id)
+
+        raise ExtractorError('Unable to extract stream url')
+
+    def _extract_episode_info(self, url, episode=None):
+        fetch_episode = episode is None
+        lang, m_type, episode_id = NhkVodIE._match_valid_url(url).group('lang', 'type', 'id')
+        is_video = m_type == 'video'
+
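+        # video IDs in URLs run the 4-digit program code and the episode number
+        # together; the API expects them dash-separated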
+        if is_video:
+            episode_id = episode_id[:4] + '-' + episode_id[4:]
+
+        if fetch_episode:
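+            # program code 9999 hosts standalone clips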
+            episode = self._call_api(
+                episode_id, lang, is_video, True, episode_id[:4] == '9999')[0]
+        title = episode.get('sub_title_clean') or episode['sub_title']
+
+        def get_clean_field(key):
+            return episode.get(key + '_clean') or episode.get(key)
+
+        series = get_clean_field('title')
+
+        thumbnails = []
+        for s, w, h in [('', 640, 360), ('_l', 1280, 720)]:
+            img_path = episode.get('image' + s)
+            if not img_path:
+                continue
+            thumbnails.append({
+                'id': '%dp' % h,
+                'height': h,
+                'width': w,
+                'url': 'https://www3.nhk.or.jp' + img_path,
+            })
+
+        info = {
+            'id': episode_id + '-' + lang,
+            'title': '%s - %s' % (series, title) if series and title else title,
+            'description': get_clean_field('description'),
+            'thumbnails': thumbnails,
+            'series': series,
+            'episode': title,
+        }
+        if is_video:
+            vod_id = episode['vod_id']
+            formats, subs = self._extract_formats_and_subtitles(vod_id)
+
+            info.update({
+                'id': vod_id,
+                'formats': formats,
+                'subtitles': subs,
+            })
+
+        else:
+            if fetch_episode:
+                audio_path = episode['audio']['audio']
+                info['formats'] = self._extract_m3u8_formats(
+                    'https://nhkworld-vh.akamaihd.net/i%s/master.m3u8' % audio_path,
+                    episode_id, 'm4a', entry_protocol='m3u8_native',
+                    m3u8_id='hls', fatal=False)
+                for f in info['formats']:
+                    f['language'] = lang
+            else:
+                info.update({
+                    '_type': 'url_transparent',
+                    'ie_key': NhkVodIE.ie_key(),
+                    'url': url,
+                })
+        return info
+
+
+class NhkVodIE(NhkBaseIE):
+    # the 7-character IDs can have alphabetic chars too: assume [a-z] rather than just [a-f], e.g.
+    _VALID_URL = [rf'{NhkBaseIE._BASE_URL_REGEX}/(?P<type>video)/(?P<id>[0-9a-z]+)',
+                  rf'{NhkBaseIE._BASE_URL_REGEX}/(?P<type>audio)/(?P<id>[^/?#]+?-\d{{8}}-[0-9a-z]+)']
+    # Content available only for a limited period of time. Visit
+    # https://www3.nhk.or.jp/nhkworld/en/ondemand/ for working samples.
+    _TESTS = [{
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/video/2049126/',
+        'info_dict': {
+            'id': 'nw_vod_v_en_2049_126_20230413233000_01_1681398302',
+            'ext': 'mp4',
+            'title': 'Japan Railway Journal - The Tohoku Shinkansen: Full Speed Ahead',
+            'description': 'md5:49f7c5b206e03868a2fdf0d0814b92f6',
+            'thumbnail': 'md5:51bcef4a21936e7fea1ff4e06353f463',
+            'episode': 'The Tohoku Shinkansen: Full Speed Ahead',
+            'series': 'Japan Railway Journal',
+        },
+    }, {
+        # video clip
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/video/9999011/',
+        'md5': '153c3016dfd252ba09726588149cf0e7',
+        'info_dict': {
+            'id': 'lpZXIwaDE6_Z-976CPsFdxyICyWUzlT5',
+            'ext': 'mp4',
+            'title': 'Dining with the Chef - Chef Saito\'s Family recipe: MENCHI-KATSU',
+            'description': 'md5:5aee4a9f9d81c26281862382103b0ea5',
+            'thumbnail': 'md5:d6a4d9b6e9be90aaadda0bcce89631ed',
+            'series': 'Dining with the Chef',
+            'episode': 'Chef Saito\'s Family recipe: MENCHI-KATSU',
+        },
+    }, {
+        # radio
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/audio/livinginjapan-20231001-1/',
+        'info_dict': {
+            'id': 'livinginjapan-20231001-1-en',
+            'ext': 'm4a',
+            'title': 'Living in Japan - Tips for Travelers to Japan / Ramen Vending Machines',
+            'series': 'Living in Japan',
+            'description': 'md5:850611969932874b4a3309e0cae06c2f',
+            'thumbnail': 'md5:960622fb6e06054a4a1a0c97ea752545',
+            'episode': 'Tips for Travelers to Japan / Ramen Vending Machines'
+        },
+    }, {
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/video/2015173/',
+        'only_matching': True,
+    }, {
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/audio/plugin-20190404-1/',
+        'only_matching': True,
+    }, {
+        'url': 'https://www3.nhk.or.jp/nhkworld/fr/ondemand/audio/plugin-20190404-1/',
+        'only_matching': True,
+    }, {
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/audio/j_art-20150903-1/',
+        'only_matching': True,
+    }, {
+        # video, alphabetic character in ID #29670
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/video/9999a34/',
+        'info_dict': {
+            'id': 'qfjay6cg',
+            'ext': 'mp4',
+            'title': 'DESIGN TALKS plus - Fishermen’s Finery',
+            'description': 'md5:8a8f958aaafb0d7cb59d38de53f1e448',
+            'thumbnail': r're:^https?:/(/[a-z0-9.-]+)+\.jpg\?w=1920&h=1080$',
+            'upload_date': '20210615',
+            'timestamp': 1623722008,
+        },
+        'skip': '404 Not Found',
+    }, {
+        # japanese-language, longer id than english
+        'url': 'https://www3.nhk.or.jp/nhkworld/ja/ondemand/video/0020271111/',
+        'info_dict': {
+            'id': 'nw_ja_v_jvod_ohayou_20231008',
+            'ext': 'mp4',
+            'title': 'おはよう日本（7時台） - 10月8日放送',
+            'series': 'おはよう日本（7時台）',
+            'episode': '10月8日放送',
+            'thumbnail': 'md5:d733b1c8e965ab68fb02b2d347d0e9b4',
+            'description': 'md5:9c1d6cbeadb827b955b20e99ab920ff0',
+        },
+        'skip': 'expires 2023-10-15',
+    }]
+
+    def _real_extract(self, url):
+        return self._extract_episode_info(url)
+
+
+class NhkVodProgramIE(NhkBaseIE):
+    _VALID_URL = rf'{NhkBaseIE._BASE_URL_REGEX}/program{NhkBaseIE._TYPE_REGEX}(?P<id>\w+)(?:.+?\btype=(?P<episode_type>clip|(?:radio|tv)Episode))?'
+    _TESTS = [{
+        # video program episodes
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/program/video/sumo',
+        'info_dict': {
+            'id': 'sumo',
+            'title': 'GRAND SUMO Highlights',
+        },
+        'playlist_mincount': 12,
+    }, {
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/program/video/japanrailway',
+        'info_dict': {
+            'id': 'japanrailway',
+            'title': 'Japan Railway Journal',
+        },
+        'playlist_mincount': 12,
+    }, {
+        # video program clips
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/program/video/japanrailway/?type=clip',
+        'info_dict': {
+            'id': 'japanrailway',
+            'title': 'Japan Railway Journal',
+        },
+        'playlist_mincount': 5,
+    }, {
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/program/video/10yearshayaomiyazaki/',
+        'only_matching': True,
+    }, {
+        # audio program
+        'url': 'https://www3.nhk.or.jp/nhkworld/en/ondemand/program/audio/listener/',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        lang, m_type, program_id, episode_type = self._match_valid_url(url).group('lang', 'type', 'id', 'episode_type')
+        episodes = self._call_api(
+            program_id, lang, m_type == 'video', False, episode_type == 'clip')
+
+        entries = []
+        for episode in episodes:
+            episode_path = episode.get('url')
+            if not episode_path:
+                continue
+            entries.append(self._extract_episode_info(
+                urljoin(url, episode_path), episode))
+
+        program_title = None
+        if entries:
+            program_title = entries[0].get('series')
+
+        return self.playlist_result(entries, program_id, program_title)
+
+
+class NhkForSchoolBangumiIE(InfoExtractor):
+    _VALID_URL = r'https?://www2\.nhk\.or\.jp/school/movie/(?P<type>bangumi|clip)\.cgi\?das_id=(?P<id>[a-zA-Z0-9_-]+)'
+    _TESTS = [{
+        'url': 'https://www2.nhk.or.jp/school/movie/bangumi.cgi?das_id=D0005150191_00000',
+        'info_dict': {
+            'id': 'D0005150191_00003',
+            'title': 'にている かな',
+            'duration': 599.999,
+            'timestamp': 1396414800,
+            'upload_date': '20140402',
+            'ext': 'mp4',
+            'chapters': 'count:12'
+        },
+        'params': {
+            # m3u8 download
+            'skip_download': True,
+        },
+    }]
+
+    def _real_extract(self, url):
+        program_type, video_id = self._match_valid_url(url).groups()
+
+        webpage = self._download_webpage(
+            f'https://www2.nhk.or.jp/school/movie/{program_type}.cgi?das_id={video_id}', video_id)
+
+        # searches all variables
+        base_values = {g.group(1): g.group(2) for g in re.finditer(r'var\s+([a-zA-Z_]+)\s*=\s*"([^"]+?)";', webpage)}
+        # and programObj values too
+        program_values = {g.group(1): g.group(3) for g in re.finditer(r'(?:program|clip)Obj\.([a-zA-Z_]+)\s*=\s*(["\'])([^"]+?)\2;', webpage)}
+        # extract all chapters
+        chapter_durations = [parse_duration(g.group(1)) for g in re.finditer(r'chapterTime\.push\(\'([0-9:]+?)\'\);', webpage)]
+        chapter_titles = [' '.join([g.group(1) or '', unescapeHTML(g.group(2))]).strip() for g in re.finditer(r'<div class="cpTitle"><span>(scene\s*\d+)?</span>([^<]+?)</div>', webpage)]
+
+        # this is what player_core.js actually does (!)
+        version = base_values.get('r_version') or program_values.get('version')
+        if version:
+            video_id = f'{video_id.split("_")[0]}_{version}'
+
+        formats = self._extract_m3u8_formats(
+            f'https://nhks-vh.akamaihd.net/i/das/{video_id[0:8]}/{video_id}_V_000.f4v/master.m3u8',
+            video_id, ext='mp4', m3u8_id='hls')
+
+        duration = parse_duration(base_values.get('r_duration'))
+
+        chapters = None
+        if chapter_durations and chapter_titles and len(chapter_durations) == len(chapter_titles):
+            start_time = chapter_durations
+            end_time = chapter_durations[1:] + [duration]
+            chapters = [{
+                'start_time': s,
+                'end_time': e,
+                'title': t,
+            } for s, e, t in zip(start_time, end_time, chapter_titles)]
+
+        return {
+            'id': video_id,
+            'title': program_values.get('name'),
+            'duration': parse_duration(base_values.get('r_duration')),
+            'timestamp': unified_timestamp(base_values['r_upload']),
+            'formats': formats,
+            'chapters': chapters,
+        }
+
+
+class NhkForSchoolSubjectIE(InfoExtractor):
+    IE_DESC = 'Portal page for each school subject, like Japanese (kokugo, 国語) or math (sansuu/suugaku or 算数・数学)'
+    KNOWN_SUBJECTS = (
+        'rika', 'syakai', 'kokugo',
+        'sansuu', 'seikatsu', 'doutoku',
+        'ongaku', 'taiiku', 'zukou',
+        'gijutsu', 'katei', 'sougou',
+        'eigo', 'tokkatsu',
+        'tokushi', 'sonota',
+    )
+    _VALID_URL = r'https?://www\.nhk\.or\.jp/school/(?P<id>%s)/?(?:[\?#].*)?$' % '|'.join(re.escape(s) for s in KNOWN_SUBJECTS)
+
+    _TESTS = [{
+        'url': 'https://www.nhk.or.jp/school/sougou/',
+        'info_dict': {
+            'id': 'sougou',
+            'title': '総合的な学習の時間',
+        },
+        'playlist_mincount': 16,
+    }, {
+        'url': 'https://www.nhk.or.jp/school/rika/',
+        'info_dict': {
+            'id': 'rika',
+            'title': '理科',
+        },
+        'playlist_mincount': 15,
+    }]
+
+    def _real_extract(self, url):
+        subject_id = self._match_id(url)
+        webpage = self._download_webpage(url, subject_id)
+
+        return self.playlist_from_matches(
+            re.finditer(rf'href="((?:https?://www\.nhk\.or\.jp)?/school/{re.escape(subject_id)}/[^/]+/)"', webpage),
+            subject_id,
+            self._html_search_regex(r'(?s)<span\s+class="subjectName">\s*<img\s*[^<]+>\s*([^<]+?)</span>', webpage, 'title', fatal=False),
+            lambda g: urljoin(url, g.group(1)))
+
+
+class NhkForSchoolProgramListIE(InfoExtractor):
+    _VALID_URL = r'https?://www\.nhk\.or\.jp/school/(?P<id>(?:%s)/[a-zA-Z0-9_-]+)' % (
+        '|'.join(re.escape(s) for s in NhkForSchoolSubjectIE.KNOWN_SUBJECTS)
+    )
+    _TESTS = [{
+        'url': 'https://www.nhk.or.jp/school/sougou/q/',
+        'info_dict': {
+            'id': 'sougou/q',
+            'title': 'Q～こどものための哲学',
+        },
+        'playlist_mincount': 20,
+    }]
+
+    def _real_extract(self, url):
+        program_id = self._match_id(url)
+
+        webpage = self._download_webpage(f'https://www.nhk.or.jp/school/{program_id}/', program_id)
+
+        title = (self._generic_title('', webpage)
+                 or self._html_search_regex(r'<h3>([^<]+?)とは？\s*</h3>', webpage, 'title', fatal=False))
+        title = re.sub(r'\s*\|\s*NHK\s+for\s+School\s*$', '', title) if title else None
+        description = self._html_search_regex(
+            r'(?s)<div\s+class="programDetail\s*">\s*<p>[^<]+</p>',
+            webpage, 'description', fatal=False, group=0)
+
+        bangumi_list = self._download_json(
+            f'https://www.nhk.or.jp/school/{program_id}/meta/program.json', program_id)
+        # they're always bangumi
+        bangumis = [
+            self.url_result(f'https://www2.nhk.or.jp/school/movie/bangumi.cgi?das_id={x}')
+            for x in traverse_obj(bangumi_list, ('part', ..., 'part-video-dasid')) or []]
+
+        return self.playlist_result(bangumis, program_id, title, description)
+
+
+class NhkRadiruIE(InfoExtractor):
+    _GEO_COUNTRIES = ['JP']
+    IE_DESC = 'NHK らじる (Radiru/Rajiru)'
+    _VALID_URL = r'https?://www\.nhk\.or\.jp/radio/(?:player/ondemand|ondemand/detail)\.html\?p=(?P<site>[\da-zA-Z]+)_(?P<corner>[\da-zA-Z]+)(?:_(?P<headline>[\da-zA-Z]+))?'
+    _TESTS = [{
+        'url': 'https://www.nhk.or.jp/radio/player/ondemand.html?p=0449_01_3853544',
+        'skip': 'Episode expired on 2023-04-16',
+        'info_dict': {
+            'channel': 'NHK-FM',
+            'description': 'md5:94b08bdeadde81a97df4ec882acce3e9',
+            'ext': 'm4a',
+            'id': '0449_01_3853544',
+            'series': 'ジャズ・トゥナイト',
+            'thumbnail': 'https://www.nhk.or.jp/prog/img/449/g449.jpg',
+            'timestamp': 1680969600,
+            'title': 'ジャズ・トゥナイト NEWジャズ特集',
+            'upload_date': '20230408',
+            'release_timestamp': 1680962400,
+            'release_date': '20230408',
+            'was_live': True,
+        },
+    }, {
+        # playlist, airs every weekday so it should _hopefully_ be okay forever
+        'url': 'https://www.nhk.or.jp/radio/ondemand/detail.html?p=0458_01',
+        'info_dict': {
+            'id': '0458_01',
+            'title': 'ベストオブクラシック',
+            'description': '世界中の上質な演奏会をじっくり堪能する本格派クラシック番組。',
+            'channel': 'NHK-FM',
+            'thumbnail': 'https://www.nhk.or.jp/prog/img/458/g458.jpg',
+        },
+        'playlist_mincount': 3,
+    }, {
+        # one with letters in the id
+        'url': 'https://www.nhk.or.jp/radio/player/ondemand.html?p=F300_06_3738470',
+        'note': 'Expires on 2024-03-31',
+        'info_dict': {
+            'id': 'F300_06_3738470',
+            'ext': 'm4a',
+            'title': '有島武郎「一房のぶどう」',
+            'description': '朗読：川野一宇（ラジオ深夜便アンカー）\r\n\r\n（2016年12月8日放送「ラジオ深夜便『アンカー朗読シリーズ』」より）',
+            'channel': 'NHKラジオ第1、NHK-FM',
+            'timestamp': 1635757200,
+            'thumbnail': 'https://www.nhk.or.jp/radioondemand/json/F300/img/corner/box_109_thumbnail.jpg',
+            'release_date': '20161207',
+            'series': 'らじる文庫 by ラジオ深夜便 ',
+            'release_timestamp': 1481126700,
+            'upload_date': '20211101',
+        }
+    }, {
+        # news
+        'url': 'https://www.nhk.or.jp/radio/player/ondemand.html?p=F261_01_3855109',
+        'skip': 'Expires on 2023-04-17',
+        'info_dict': {
+            'id': 'F261_01_3855109',
+            'ext': 'm4a',
+            'channel': 'NHKラジオ第1',
+            'timestamp': 1681635900,
+            'release_date': '20230416',
+            'series': 'NHKラジオニュース',
+            'title': '午後６時のNHKニュース',
+            'thumbnail': 'https://www.nhk.or.jp/radioondemand/json/F261/img/RADIONEWS_640.jpg',
+            'upload_date': '20230416',
+            'release_timestamp': 1681635600,
+        },
+    }]
+
+    def _extract_episode_info(self, headline, programme_id, series_meta):
+        episode_id = f'{programme_id}_{headline["headline_id"]}'
+        episode = traverse_obj(headline, ('file_list', 0, {dict}))
+
+        return {
+            **series_meta,
+            'id': episode_id,
+            'formats': self._extract_m3u8_formats(episode.get('file_name'), episode_id, fatal=False),
+            'container': 'm4a_dash',  # force fixup, AAC-only HLS
+            'was_live': True,
+            'series': series_meta.get('title'),
+            'thumbnail': url_or_none(headline.get('headline_image')) or series_meta.get('thumbnail'),
+            **traverse_obj(episode, {
+                'title': 'file_title',
+                'description': 'file_title_sub',
+                'timestamp': ('open_time', {unified_timestamp}),
+                'release_timestamp': ('aa_vinfo4', {lambda x: x.split('_')[0]}, {unified_timestamp}),
+            }),
+        }
+
+    def _real_extract(self, url):
+        site_id, corner_id, headline_id = self._match_valid_url(url).group('site', 'corner', 'headline')
+        programme_id = f'{site_id}_{corner_id}'
+
+        if site_id == 'F261':
+            json_url = 'https://www.nhk.or.jp/s-media/news/news-site/list/v1/all.json'
+        else:
+            json_url = f'https://www.nhk.or.jp/radioondemand/json/{site_id}/bangumi_{programme_id}.json'
+
+        meta = self._download_json(json_url, programme_id)['main']
+
+        series_meta = traverse_obj(meta, {
+            'title': 'program_name',
+            'channel': 'media_name',
+            'thumbnail': (('thumbnail_c', 'thumbnail_p'), {url_or_none}),
+        }, get_all=False)
+
+        if headline_id:
+            return self._extract_episode_info(
+                traverse_obj(meta, (
+                    'detail_list', lambda _, v: v['headline_id'] == headline_id), get_all=False),
+                programme_id, series_meta)
+
+        def entries():
+            for headline in traverse_obj(meta, ('detail_list', ..., {dict})):
+                yield self._extract_episode_info(headline, programme_id, series_meta)
+
+        return self.playlist_result(
+            entries(), programme_id, playlist_description=meta.get('site_detail'), **series_meta)
+
+
+class NhkRadioNewsPageIE(InfoExtractor):
+    _VALID_URL = r'https?://www\.nhk\.or\.jp/radionews/?(?:$|[?#])'
+    _TESTS = [{
+        # airs daily, on-the-hour most hours
+        'url': 'https://www.nhk.or.jp/radionews/',
+        'playlist_mincount': 5,
+        'info_dict': {
+            'id': 'F261_01',
+            'thumbnail': 'https://www.nhk.or.jp/radioondemand/json/F261/img/RADIONEWS_640.jpg',
+            'description': 'md5:bf2c5b397e44bc7eb26de98d8f15d79d',
+            'channel': 'NHKラジオ第1',
+            'title': 'NHKラジオニュース',
+        }
+    }]
+
+    def _real_extract(self, url):
+        return self.url_result('https://www.nhk.or.jp/radio/ondemand/detail.html?p=F261_01', NhkRadiruIE)
+
+
+class NhkRadiruLiveIE(InfoExtractor):
+    _GEO_COUNTRIES = ['JP']
+    _VALID_URL = r'https?://www\.nhk\.or\.jp/radio/player/\?ch=(?P<id>r[12]|fm)'
+    _TESTS = [{
+        # radio 1, no area specified
+        'url': 'https://www.nhk.or.jp/radio/player/?ch=r1',
+        'info_dict': {
+            'id': 'r1-tokyo',
+            'title': 're:^NHKネットラジオ第1 東京.+$',
+            'ext': 'm4a',
+            'thumbnail': 'https://www.nhk.or.jp/common/img/media/r1-200x200.png',
+            'live_status': 'is_live',
+        },
+    }, {
+        # radio 2, area specified
+        # (the area doesn't actually matter, r2 is national)
+        'url': 'https://www.nhk.or.jp/radio/player/?ch=r2',
+        'params': {'extractor_args': {'nhkradirulive': {'area': ['fukuoka']}}},
+        'info_dict': {
+            'id': 'r2-fukuoka',
+            'title': 're:^NHKネットラジオ第2 福岡.+$',
+            'ext': 'm4a',
+            'thumbnail': 'https://www.nhk.or.jp/common/img/media/r2-200x200.png',
+            'live_status': 'is_live',
+        },
+    }, {
+        # fm, area specified
+        'url': 'https://www.nhk.or.jp/radio/player/?ch=fm',
+        'params': {'extractor_args': {'nhkradirulive': {'area': ['sapporo']}}},
+        'info_dict': {
+            'id': 'fm-sapporo',
+            'title': 're:^NHKネットラジオＦＭ 札幌.+$',
+            'ext': 'm4a',
+            'thumbnail': 'https://www.nhk.or.jp/common/img/media/fm-200x200.png',
+            'live_status': 'is_live',
+        }
+    }]
+
+    _NOA_STATION_IDS = {'r1': 'n1', 'r2': 'n2', 'fm': 'n3'}
+
+    def _real_extract(self, url):
+        station = self._match_id(url)
+        area = self._configuration_arg('area', ['tokyo'])[0]
+
+        config = self._download_xml(
+            'https://www.nhk.or.jp/radio/config/config_web.xml', station, 'Downloading area information')
+        data = config.find(f'.//data//area[.="{area}"]/..')
+
+        if not data:
+            raise ExtractorError('Invalid area. 
Valid areas are: %s' % ', '.join( + [i.text for i in config.findall('.//data//area')]), expected=True) + + noa_info = self._download_json( + f'https:{config.find(".//url_program_noa").text}'.format(area=data.find('areakey').text), + station, note=f'Downloading {area} station metadata') + present_info = traverse_obj(noa_info, ('nowonair_list', self._NOA_STATION_IDS.get(station), 'present')) + + return { + 'title': ' '.join(traverse_obj(present_info, (('service', 'area',), 'name', {str}))), + 'id': join_nonempty(station, area), + 'thumbnails': traverse_obj(present_info, ('service', 'images', ..., { + 'url': 'url', + 'width': ('width', {int_or_none}), + 'height': ('height', {int_or_none}), + })), + 'formats': self._extract_m3u8_formats(data.find(f'{station}hls').text, station), + 'is_live': True, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nhl.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nhl.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nhl.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nhl.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nick.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nick.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nick.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nick.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/niconico.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/niconico.py new file mode 100644 index 0000000..fa2d709 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/niconico.py @@ -0,0 +1,1058 @@ +import datetime +import functools +import itertools +import json +import re +import time + +from urllib.parse import urlparse + +from .common import InfoExtractor, SearchInfoExtractor +from ..dependencies import websockets +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + OnDemandPagedList, + WebSocketsWrapper, + bug_reports_message, + clean_html, + float_or_none, + int_or_none, + join_nonempty, + parse_duration, + parse_filesize, + parse_iso8601, + parse_resolution, + qualities, + remove_start, + str_or_none, + traverse_obj, + try_get, + unescapeHTML, + update_url_query, + url_or_none, + urlencode_postdata, + urljoin, +) + + +class NiconicoIE(InfoExtractor): + IE_NAME = 'niconico' + IE_DESC = 'ニコニコ動画' + + _TESTS = [{ + 'url': 'http://www.nicovideo.jp/watch/sm22312215', + 'md5': 'd1a75c0823e2f629128c43e1212760f9', + 'info_dict': { + 'id': 'sm22312215', + 'ext': 'mp4', + 'title': 'Big Buck Bunny', + 'thumbnail': r're:https?://.*', + 'uploader': 'takuya0301', + 'uploader_id': '2698420', + 'upload_date': '20131123', + 'timestamp': int, # timestamp is unstable + 'description': '(c) copyright 2008, Blender Foundation / www.bigbuckbunny.org', + 'duration': 33, + 'view_count': int, + 'comment_count': int, + }, + 'skip': 'Requires an account', + }, { + # File downloaded with and without credentials are different, so omit + # the md5 field + 'url': 'http://www.nicovideo.jp/watch/nm14296458', + 'info_dict': { + 'id': 'nm14296458', + 'ext': 'swf', + 'title': 'ã€é¡éŸ³ãƒªãƒ³ã€‘Dance on mediaã€ã‚ªãƒªã‚¸ãƒŠãƒ«ã€‘take2!', + 'description': 'md5:689f066d74610b3b22e0f1739add0f58', + 'thumbnail': r're:https?://.*', + 'uploader': 'りょã†ãŸ', + 'uploader_id': '18822557', + 'upload_date': '20110429', + 'timestamp': 1304065916, + 'duration': 209, + }, + 'skip': 'Requires an account', + }, { + # 'video exists but is marked as 
"deleted" + # md5 is unstable + 'url': 'http://www.nicovideo.jp/watch/sm10000', + 'info_dict': { + 'id': 'sm10000', + 'ext': 'unknown_video', + 'description': 'deleted', + 'title': 'ドラãˆã‚‚んエターナル第3話「決戦第3æ–°æ±äº¬å¸‚ã€ï¼œå‰ç·¨ï¼ž', + 'thumbnail': r're:https?://.*', + 'upload_date': '20071224', + 'timestamp': int, # timestamp field has different value if logged in + 'duration': 304, + 'view_count': int, + }, + 'skip': 'Requires an account', + }, { + 'url': 'http://www.nicovideo.jp/watch/so22543406', + 'info_dict': { + 'id': '1388129933', + 'ext': 'mp4', + 'title': 'ã€ç¬¬1回】RADIOアニメロミックス ラブライブï¼ï½žã®ãžãˆã‚ŠRadio Garden~', + 'description': 'md5:b27d224bb0ff53d3c8269e9f8b561cf1', + 'thumbnail': r're:https?://.*', + 'timestamp': 1388851200, + 'upload_date': '20140104', + 'uploader': 'アニメロãƒãƒ£ãƒ³ãƒãƒ«', + 'uploader_id': '312', + }, + 'skip': 'The viewing period of the video you were searching for has expired.', + }, { + # video not available via `getflv`; "old" HTML5 video + 'url': 'http://www.nicovideo.jp/watch/sm1151009', + 'md5': '8fa81c364eb619d4085354eab075598a', + 'info_dict': { + 'id': 'sm1151009', + 'ext': 'mp4', + 'title': 'マスターシステム本体内蔵ã®ã‚¹ãƒšãƒãƒªã®ãƒ¡ã‚¤ãƒ³ãƒ†ãƒ¼ãƒžï¼ˆï¼°ï¼³ï¼§ç‰ˆï¼‰', + 'description': 'md5:6ee077e0581ff5019773e2e714cdd0b7', + 'thumbnail': r're:https?://.*', + 'duration': 184, + 'timestamp': 1190868283, + 'upload_date': '20070927', + 'uploader': 'denden2', + 'uploader_id': '1392194', + 'view_count': int, + 'comment_count': int, + }, + 'skip': 'Requires an account', + }, { + # "New" HTML5 video + # md5 is unstable + 'url': 'http://www.nicovideo.jp/watch/sm31464864', + 'info_dict': { + 'id': 'sm31464864', + 'ext': 'mp4', + 'title': '新作TVアニメ「戦姫絶唱シンフォギアAXZã€PV 最高画質', + 'description': 'md5:e52974af9a96e739196b2c1ca72b5feb', + 'timestamp': 1498514060, + 'upload_date': '20170626', + 'uploader': 'ゲスト', + 'uploader_id': '40826363', + 'thumbnail': r're:https?://.*', + 'duration': 198, + 'view_count': int, + 'comment_count': int, + }, + 'skip': 'Requires an account', + }, { + # Video without owner + 'url': 'http://www.nicovideo.jp/watch/sm18238488', + 'md5': 'd265680a1f92bdcbbd2a507fc9e78a9e', + 'info_dict': { + 'id': 'sm18238488', + 'ext': 'mp4', + 'title': 'ã€å®Ÿå†™ç‰ˆã€‘ミュータントタートルズ', + 'description': 'md5:15df8988e47a86f9e978af2064bf6d8e', + 'timestamp': 1341160408, + 'upload_date': '20120701', + 'uploader': None, + 'uploader_id': None, + 'thumbnail': r're:https?://.*', + 'duration': 5271, + 'view_count': int, + 'comment_count': int, + }, + 'skip': 'Requires an account', + }, { + 'url': 'http://sp.nicovideo.jp/watch/sm28964488?ss_pos=1&cp_in=wt_tg', + 'only_matching': True, + }, { + 'note': 'a video that is only served as an ENCRYPTED HLS.', + 'url': 'https://www.nicovideo.jp/watch/so38016254', + 'only_matching': True, + }] + + _VALID_URL = r'https?://(?:(?:www\.|secure\.|sp\.)?nicovideo\.jp/watch|nico\.ms)/(?P(?:[a-z]{2})?[0-9]+)' + _NETRC_MACHINE = 'niconico' + _COMMENT_API_ENDPOINTS = ( + 'https://nvcomment.nicovideo.jp/legacy/api.json', + 'https://nmsg.nicovideo.jp/api.json',) + _API_HEADERS = { + 'X-Frontend-ID': '6', + 'X-Frontend-Version': '0', + 'X-Niconico-Language': 'en-us', + 'Referer': 'https://www.nicovideo.jp/', + 'Origin': 'https://www.nicovideo.jp', + } + + def _perform_login(self, username, password): + login_ok = True + login_form_strs = { + 'mail_tel': username, + 'password': password, + } + self._request_webpage( + 'https://account.nicovideo.jp/login', None, + note='Acquiring Login session') + page = self._download_webpage( + 
'https://account.nicovideo.jp/login/redirector?show_button_twitter=1&site=niconico&show_button_facebook=1', None, + note='Logging in', errnote='Unable to log in', + data=urlencode_postdata(login_form_strs), + headers={ + 'Referer': 'https://account.nicovideo.jp/login', + 'Content-Type': 'application/x-www-form-urlencoded', + }) + if 'oneTimePw' in page: + post_url = self._search_regex( + r']+action=(["\'])(?P.+?)\1', page, 'post url', group='url') + page = self._download_webpage( + urljoin('https://account.nicovideo.jp', post_url), None, + note='Performing MFA', errnote='Unable to complete MFA', + data=urlencode_postdata({ + 'otp': self._get_tfa_info('6 digits code') + }), headers={ + 'Content-Type': 'application/x-www-form-urlencoded', + }) + if 'oneTimePw' in page or 'formError' in page: + err_msg = self._html_search_regex( + r'formError["\']+>(.*?)', page, 'form_error', + default='There\'s an error but the message can\'t be parsed.', + flags=re.DOTALL) + self.report_warning(f'Unable to log in: MFA challenge failed, "{err_msg}"') + return False + login_ok = 'class="notice error"' not in page + if not login_ok: + self.report_warning('Unable to log in: bad username or password') + return login_ok + + def _get_heartbeat_info(self, info_dict): + video_id, video_src_id, audio_src_id = info_dict['url'].split(':')[1].split('/') + dmc_protocol = info_dict['expected_protocol'] + + api_data = ( + info_dict.get('_api_data') + or self._parse_json( + self._html_search_regex( + 'data-api-data="([^"]+)"', + self._download_webpage('https://www.nicovideo.jp/watch/' + video_id, video_id), + 'API data', default='{}'), + video_id)) + + session_api_data = try_get(api_data, lambda x: x['media']['delivery']['movie']['session']) + session_api_endpoint = try_get(session_api_data, lambda x: x['urls'][0]) + + def ping(): + tracking_id = traverse_obj(api_data, ('media', 'delivery', 'trackingId')) + if tracking_id: + tracking_url = update_url_query('https://nvapi.nicovideo.jp/v1/2ab0cbaa/watch', {'t': tracking_id}) + watch_request_response = self._download_json( + tracking_url, video_id, + note='Acquiring permission for downloading video', fatal=False, + headers=self._API_HEADERS) + if traverse_obj(watch_request_response, ('meta', 'status')) != 200: + self.report_warning('Failed to acquire permission for playing video. 
Video download may fail.') + + yesno = lambda x: 'yes' if x else 'no' + + if dmc_protocol == 'http': + protocol = 'http' + protocol_parameters = { + 'http_output_download_parameters': { + 'use_ssl': yesno(session_api_data['urls'][0]['isSsl']), + 'use_well_known_port': yesno(session_api_data['urls'][0]['isWellKnownPort']), + } + } + elif dmc_protocol == 'hls': + protocol = 'm3u8' + segment_duration = try_get(self._configuration_arg('segment_duration'), lambda x: int(x[0])) or 6000 + parsed_token = self._parse_json(session_api_data['token'], video_id) + encryption = traverse_obj(api_data, ('media', 'delivery', 'encryption')) + protocol_parameters = { + 'hls_parameters': { + 'segment_duration': segment_duration, + 'transfer_preset': '', + 'use_ssl': yesno(session_api_data['urls'][0]['isSsl']), + 'use_well_known_port': yesno(session_api_data['urls'][0]['isWellKnownPort']), + } + } + if 'hls_encryption' in parsed_token and encryption: + protocol_parameters['hls_parameters']['encryption'] = { + parsed_token['hls_encryption']: { + 'encrypted_key': encryption['encryptedKey'], + 'key_uri': encryption['keyUri'], + } + } + else: + protocol = 'm3u8_native' + else: + raise ExtractorError(f'Unsupported DMC protocol: {dmc_protocol}') + + session_response = self._download_json( + session_api_endpoint['url'], video_id, + query={'_format': 'json'}, + headers={'Content-Type': 'application/json'}, + note='Downloading JSON metadata for %s' % info_dict['format_id'], + data=json.dumps({ + 'session': { + 'client_info': { + 'player_id': session_api_data.get('playerId'), + }, + 'content_auth': { + 'auth_type': try_get(session_api_data, lambda x: x['authTypes'][session_api_data['protocols'][0]]), + 'content_key_timeout': session_api_data.get('contentKeyTimeout'), + 'service_id': 'nicovideo', + 'service_user_id': session_api_data.get('serviceUserId') + }, + 'content_id': session_api_data.get('contentId'), + 'content_src_id_sets': [{ + 'content_src_ids': [{ + 'src_id_to_mux': { + 'audio_src_ids': [audio_src_id], + 'video_src_ids': [video_src_id], + } + }] + }], + 'content_type': 'movie', + 'content_uri': '', + 'keep_method': { + 'heartbeat': { + 'lifetime': session_api_data.get('heartbeatLifetime') + } + }, + 'priority': session_api_data['priority'], + 'protocol': { + 'name': 'http', + 'parameters': { + 'http_parameters': { + 'parameters': protocol_parameters + } + } + }, + 'recipe_id': session_api_data.get('recipeId'), + 'session_operation_auth': { + 'session_operation_auth_by_signature': { + 'signature': session_api_data.get('signature'), + 'token': session_api_data.get('token'), + } + }, + 'timing_constraint': 'unlimited' + } + }).encode()) + + info_dict['url'] = session_response['data']['session']['content_uri'] + info_dict['protocol'] = protocol + + # get heartbeat info + heartbeat_info_dict = { + 'url': session_api_endpoint['url'] + '/' + session_response['data']['session']['id'] + '?_format=json&_method=PUT', + 'data': json.dumps(session_response['data']), + # interval, convert milliseconds to seconds, then halve to make a buffer. 
+ 'interval': float_or_none(session_api_data.get('heartbeatLifetime'), scale=3000), + 'ping': ping + } + + return info_dict, heartbeat_info_dict + + def _extract_format_for_quality(self, video_id, audio_quality, video_quality, dmc_protocol): + + if not audio_quality.get('isAvailable') or not video_quality.get('isAvailable'): + return None + + def extract_video_quality(video_quality): + return parse_filesize('%sB' % self._search_regex( + r'\| ([0-9]*\.?[0-9]*[MK])', video_quality, 'vbr', default='')) + + format_id = '-'.join( + [remove_start(s['id'], 'archive_') for s in (video_quality, audio_quality)] + [dmc_protocol]) + + vid_qual_label = traverse_obj(video_quality, ('metadata', 'label')) + vid_quality = traverse_obj(video_quality, ('metadata', 'bitrate')) + + return { + 'url': 'niconico_dmc:%s/%s/%s' % (video_id, video_quality['id'], audio_quality['id']), + 'format_id': format_id, + 'format_note': join_nonempty('DMC', vid_qual_label, dmc_protocol.upper(), delim=' '), + 'ext': 'mp4', # Session API are used in HTML5, which always serves mp4 + 'acodec': 'aac', + 'vcodec': 'h264', + 'abr': float_or_none(traverse_obj(audio_quality, ('metadata', 'bitrate')), 1000), + 'vbr': float_or_none(vid_quality if vid_quality > 0 else extract_video_quality(vid_qual_label), 1000), + 'height': traverse_obj(video_quality, ('metadata', 'resolution', 'height')), + 'width': traverse_obj(video_quality, ('metadata', 'resolution', 'width')), + 'quality': -2 if 'low' in video_quality['id'] else None, + 'protocol': 'niconico_dmc', + 'expected_protocol': dmc_protocol, # XXX: This is not a documented field + 'http_headers': { + 'Origin': 'https://www.nicovideo.jp', + 'Referer': 'https://www.nicovideo.jp/watch/' + video_id, + } + } + + def _real_extract(self, url): + video_id = self._match_id(url) + + try: + webpage, handle = self._download_webpage_handle( + 'https://www.nicovideo.jp/watch/' + video_id, video_id) + if video_id.startswith('so'): + video_id = self._match_id(handle.url) + + api_data = self._parse_json(self._html_search_regex( + 'data-api-data="([^"]+)"', webpage, + 'API data', default='{}'), video_id) + except ExtractorError as e: + try: + api_data = self._download_json( + 'https://www.nicovideo.jp/api/watch/v3/%s?_frontendId=6&_frontendVersion=0&actionTrackId=AAAAAAAAAA_%d' % (video_id, round(time.time() * 1000)), video_id, + note='Downloading API JSON', errnote='Unable to fetch data')['data'] + except ExtractorError: + if not isinstance(e.cause, HTTPError): + raise + webpage = e.cause.response.read().decode('utf-8', 'replace') + error_msg = self._html_search_regex( + r'(?s)(.+?)', + webpage, 'error reason', default=None) + if not error_msg: + raise + raise ExtractorError(re.sub(r'\s+', ' ', error_msg), expected=True) + + formats = [] + + def get_video_info(*items, get_first=True, **kwargs): + return traverse_obj(api_data, ('video', *items), get_all=not get_first, **kwargs) + + quality_info = api_data['media']['delivery']['movie'] + session_api_data = quality_info['session'] + for (audio_quality, video_quality, protocol) in itertools.product(quality_info['audios'], quality_info['videos'], session_api_data['protocols']): + fmt = self._extract_format_for_quality(video_id, audio_quality, video_quality, protocol) + if fmt: + formats.append(fmt) + + # Start extracting information + tags = None + if webpage: + # use og:video:tag (not logged in) + og_video_tags = re.finditer(r'', webpage) + tags = list(filter(None, (clean_html(x.group(1)) for x in og_video_tags))) + if not tags: + # use keywords and split with 
comma (not logged in) + kwds = self._html_search_meta('keywords', webpage, default=None) + if kwds: + tags = [x for x in kwds.split(',') if x] + if not tags: + # find in json (logged in) + tags = traverse_obj(api_data, ('tag', 'items', ..., 'name')) + + thumb_prefs = qualities(['url', 'middleUrl', 'largeUrl', 'player', 'ogp']) + + return { + 'id': video_id, + '_api_data': api_data, + 'title': get_video_info(('originalTitle', 'title')) or self._og_search_title(webpage, default=None), + 'formats': formats, + 'thumbnails': [{ + 'id': key, + 'url': url, + 'ext': 'jpg', + 'preference': thumb_prefs(key), + **parse_resolution(url, lenient=True), + } for key, url in (get_video_info('thumbnail') or {}).items() if url], + 'description': clean_html(get_video_info('description')), + 'uploader': traverse_obj(api_data, ('owner', 'nickname'), ('channel', 'name'), ('community', 'name')), + 'uploader_id': str_or_none(traverse_obj(api_data, ('owner', 'id'), ('channel', 'id'), ('community', 'id'))), + 'timestamp': parse_iso8601(get_video_info('registeredAt')) or parse_iso8601( + self._html_search_meta('video:release_date', webpage, 'date published', default=None)), + 'channel': traverse_obj(api_data, ('channel', 'name'), ('community', 'name')), + 'channel_id': traverse_obj(api_data, ('channel', 'id'), ('community', 'id')), + 'view_count': int_or_none(get_video_info('count', 'view')), + 'tags': tags, + 'genre': traverse_obj(api_data, ('genre', 'label'), ('genre', 'key')), + 'comment_count': get_video_info('count', 'comment', expected_type=int), + 'duration': ( + parse_duration(self._html_search_meta('video:duration', webpage, 'video duration', default=None)) + or get_video_info('duration')), + 'webpage_url': url_or_none(url) or f'https://www.nicovideo.jp/watch/{video_id}', + 'subtitles': self.extract_subtitles(video_id, api_data, session_api_data), + } + + def _get_subtitles(self, video_id, api_data, session_api_data): + comment_user_key = traverse_obj(api_data, ('comment', 'keys', 'userKey')) + user_id_str = session_api_data.get('serviceUserId') + + thread_ids = traverse_obj(api_data, ('comment', 'threads', lambda _, v: v['isActive'])) + legacy_danmaku = self._extract_legacy_comments(video_id, thread_ids, user_id_str, comment_user_key) or [] + + new_comments = traverse_obj(api_data, ('comment', 'nvComment')) + new_danmaku = self._extract_new_comments( + new_comments.get('server'), video_id, + new_comments.get('params'), new_comments.get('threadKey')) + + if not legacy_danmaku and not new_danmaku: + self.report_warning(f'Failed to get comments. 
{bug_reports_message()}') + return + + return { + 'comments': [{ + 'ext': 'json', + 'data': json.dumps(legacy_danmaku + new_danmaku), + }], + } + + def _extract_legacy_comments(self, video_id, threads, user_id, user_key): + auth_data = { + 'user_id': user_id, + 'userkey': user_key, + } if user_id and user_key else {'user_id': ''} + + api_url = traverse_obj(threads, (..., 'server'), get_all=False) + + # Request Start + post_data = [{'ping': {'content': 'rs:0'}}] + for i, thread in enumerate(threads): + thread_id = thread['id'] + thread_fork = thread['fork'] + # Post Start (2N) + post_data.append({'ping': {'content': f'ps:{i * 2}'}}) + post_data.append({'thread': { + 'fork': thread_fork, + 'language': 0, + 'nicoru': 3, + 'scores': 1, + 'thread': thread_id, + 'version': '20090904', + 'with_global': 1, + **auth_data, + }}) + # Post Final (2N) + post_data.append({'ping': {'content': f'pf:{i * 2}'}}) + + # Post Start (2N+1) + post_data.append({'ping': {'content': f'ps:{i * 2 + 1}'}}) + post_data.append({'thread_leaves': { + # format is '-:,\d+)' + + _TESTS = [{ + 'url': 'http://www.nicovideo.jp/mylist/27411728', + 'info_dict': { + 'id': '27411728', + 'title': 'AKB48ã®ã‚ªãƒ¼ãƒ«ãƒŠã‚¤ãƒˆãƒ‹ãƒƒãƒãƒ³', + 'description': 'md5:d89694c5ded4b6c693dea2db6e41aa08', + 'uploader': 'ã®ã£ã', + 'uploader_id': '805442', + }, + 'playlist_mincount': 291, + }, { + 'url': 'https://www.nicovideo.jp/user/805442/mylist/27411728', + 'only_matching': True, + }, { + 'url': 'https://www.nicovideo.jp/my/mylist/#/68048635', + 'only_matching': True, + }] + + def _call_api(self, list_id, resource, query): + return self._download_json( + f'https://nvapi.nicovideo.jp/v2/mylists/{list_id}', list_id, + f'Downloading {resource}', query=query, + headers=self._API_HEADERS)['data']['mylist'] + + def _real_extract(self, url): + list_id = self._match_id(url) + mylist = self._call_api(list_id, 'list', { + 'pageSize': 1, + }) + return self.playlist_result( + self._entries(list_id), list_id, + mylist.get('name'), mylist.get('description'), **self._parse_owner(mylist)) + + +class NiconicoSeriesIE(InfoExtractor): + IE_NAME = 'niconico:series' + _VALID_URL = r'https?://(?:(?:www\.|sp\.)?nicovideo\.jp(?:/user/\d+)?|nico\.ms)/series/(?P\d+)' + + _TESTS = [{ + 'url': 'https://www.nicovideo.jp/user/44113208/series/110226', + 'info_dict': { + 'id': '110226', + 'title': 'ã”立派ァï¼ã®ã‚·ãƒªãƒ¼ã‚º', + }, + 'playlist_mincount': 10, + }, { + 'url': 'https://www.nicovideo.jp/series/12312/', + 'info_dict': { + 'id': '12312', + 'title': 'ãƒãƒˆãƒ«ã‚¹ãƒ”リッツ ãŠå‹§ã‚カード紹介(調整中)', + }, + 'playlist_mincount': 103, + }, { + 'url': 'https://nico.ms/series/203559', + 'only_matching': True, + }] + + def _real_extract(self, url): + list_id = self._match_id(url) + webpage = self._download_webpage(url, list_id) + + title = self._search_regex( + (r'「(.+)(全', + r'<div class="TwitterShareButton"\s+data-text="(.+)\s+https:'), + webpage, 'title', fatal=False) + if title: + title = unescapeHTML(title) + json_data = next(self._yield_json_ld(webpage, None, fatal=False)) + return self.playlist_from_matches( + traverse_obj(json_data, ('itemListElement', ..., 'url')), list_id, title, ie=NiconicoIE) + + +class NiconicoHistoryIE(NiconicoPlaylistBaseIE): + IE_NAME = 'niconico:history' + IE_DESC = 'NicoNico user history or likes. Requires cookies.' 
+ _VALID_URL = r'https?://(?:www\.|sp\.)?nicovideo\.jp/my/(?P<id>history(?:/like)?)' + + _TESTS = [{ + 'note': 'PC page, with /video', + 'url': 'https://www.nicovideo.jp/my/history/video', + 'only_matching': True, + }, { + 'note': 'PC page, without /video', + 'url': 'https://www.nicovideo.jp/my/history', + 'only_matching': True, + }, { + 'note': 'mobile page, with /video', + 'url': 'https://sp.nicovideo.jp/my/history/video', + 'only_matching': True, + }, { + 'note': 'mobile page, without /video', + 'url': 'https://sp.nicovideo.jp/my/history', + 'only_matching': True, + }, { + 'note': 'PC page', + 'url': 'https://www.nicovideo.jp/my/history/like', + 'only_matching': True, + }, { + 'note': 'Mobile page', + 'url': 'https://sp.nicovideo.jp/my/history/like', + 'only_matching': True, + }] + + def _call_api(self, list_id, resource, query): + path = 'likes' if list_id == 'history/like' else 'watch/history' + return self._download_json( + f'https://nvapi.nicovideo.jp/v1/users/me/{path}', list_id, + f'Downloading {resource}', query=query, headers=self._API_HEADERS)['data'] + + def _real_extract(self, url): + list_id = self._match_id(url) + try: + mylist = self._call_api(list_id, 'list', {'pageSize': 1}) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + self.raise_login_required('You have to be logged in to get your history') + raise + return self.playlist_result(self._entries(list_id), list_id, **self._parse_owner(mylist)) + + +class NicovideoSearchBaseIE(InfoExtractor): + _SEARCH_TYPE = 'search' + + def _entries(self, url, item_id, query=None, note='Downloading page %(page)s'): + query = query or {} + pages = [query['page']] if 'page' in query else itertools.count(1) + for page_num in pages: + query['page'] = str(page_num) + webpage = self._download_webpage(url, item_id, query=query, note=note % {'page': page_num}) + results = re.findall(r'(?<=data-video-id=)["\']?(?P<videoid>.*?)(?=["\'])', webpage) + for item in results: + yield self.url_result(f'https://www.nicovideo.jp/watch/{item}', 'Niconico', item) + if not results: + break + + def _search_results(self, query): + return self._entries( + self._proto_relative_url(f'//www.nicovideo.jp/{self._SEARCH_TYPE}/{query}'), query) + + +class NicovideoSearchIE(NicovideoSearchBaseIE, SearchInfoExtractor): + IE_DESC = 'Nico video search' + IE_NAME = 'nicovideo:search' + _SEARCH_KEY = 'nicosearch' + + +class NicovideoSearchURLIE(NicovideoSearchBaseIE): + IE_NAME = f'{NicovideoSearchIE.IE_NAME}_url' + IE_DESC = 'Nico video search URLs' + _VALID_URL = r'https?://(?:www\.)?nicovideo\.jp/search/(?P<id>[^?#&]+)?' 
+ _TESTS = [{ + 'url': 'http://www.nicovideo.jp/search/sm9', + 'info_dict': { + 'id': 'sm9', + 'title': 'sm9' + }, + 'playlist_mincount': 40, + }, { + 'url': 'https://www.nicovideo.jp/search/sm9?sort=h&order=d&end=2020-12-31&start=2020-01-01', + 'info_dict': { + 'id': 'sm9', + 'title': 'sm9' + }, + 'playlist_count': 31, + }] + + def _real_extract(self, url): + query = self._match_id(url) + return self.playlist_result(self._entries(url, query), query, query) + + +class NicovideoSearchDateIE(NicovideoSearchBaseIE, SearchInfoExtractor): + IE_DESC = 'Nico video search, newest first' + IE_NAME = f'{NicovideoSearchIE.IE_NAME}:date' + _SEARCH_KEY = 'nicosearchdate' + _TESTS = [{ + 'url': 'nicosearchdateall:a', + 'info_dict': { + 'id': 'a', + 'title': 'a' + }, + 'playlist_mincount': 1610, + }] + + _START_DATE = datetime.date(2007, 1, 1) + _RESULTS_PER_PAGE = 32 + _MAX_PAGES = 50 + + def _entries(self, url, item_id, start_date=None, end_date=None): + start_date, end_date = start_date or self._START_DATE, end_date or datetime.datetime.now().date() + + # If the last page has a full page of videos, we need to break down the query interval further + last_page_len = len(list(self._get_entries_for_date( + url, item_id, start_date, end_date, self._MAX_PAGES, + note=f'Checking number of videos from {start_date} to {end_date}'))) + if (last_page_len == self._RESULTS_PER_PAGE and start_date != end_date): + midpoint = start_date + ((end_date - start_date) // 2) + yield from self._entries(url, item_id, midpoint, end_date) + yield from self._entries(url, item_id, start_date, midpoint) + else: + self.to_screen(f'{item_id}: Downloading results from {start_date} to {end_date}') + yield from self._get_entries_for_date( + url, item_id, start_date, end_date, note=' Downloading page %(page)s') + + def _get_entries_for_date(self, url, item_id, start_date, end_date=None, page_num=None, note=None): + query = { + 'start': str(start_date), + 'end': str(end_date or start_date), + 'sort': 'f', + 'order': 'd', + } + if page_num: + query['page'] = str(page_num) + + yield from super()._entries(url, item_id, query=query, note=note) + + +class NicovideoTagURLIE(NicovideoSearchBaseIE): + IE_NAME = 'niconico:tag' + IE_DESC = 'NicoNico video tag URLs' + _SEARCH_TYPE = 'tag' + _VALID_URL = r'https?://(?:www\.)?nicovideo\.jp/tag/(?P<id>[^?#&]+)?' 
+ _TESTS = [{ + 'url': 'https://www.nicovideo.jp/tag/ドキュメンタリー淫夢', + 'info_dict': { + 'id': 'ドキュメンタリー淫夢', + 'title': 'ドキュメンタリー淫夢' + }, + 'playlist_mincount': 400, + }] + + def _real_extract(self, url): + query = self._match_id(url) + return self.playlist_result(self._entries(url, query), query, query) + + +class NiconicoUserIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?nicovideo\.jp/user/(?P<id>\d+)/?(?:$|[#?])' + _TEST = { + 'url': 'https://www.nicovideo.jp/user/419948', + 'info_dict': { + 'id': '419948', + }, + 'playlist_mincount': 101, + } + _API_URL = "https://nvapi.nicovideo.jp/v1/users/%s/videos?sortKey=registeredAt&sortOrder=desc&pageSize=%s&page=%s" + _PAGE_SIZE = 100 + + _API_HEADERS = { + 'X-Frontend-ID': '6', + 'X-Frontend-Version': '0' + } + + def _entries(self, list_id): + total_count = 1 + count = page_num = 0 + while count < total_count: + json_parsed = self._download_json( + self._API_URL % (list_id, self._PAGE_SIZE, page_num + 1), list_id, + headers=self._API_HEADERS, + note='Downloading JSON metadata%s' % (' page %d' % page_num if page_num else '')) + if not page_num: + total_count = int_or_none(json_parsed['data'].get('totalCount')) + for entry in json_parsed["data"]["items"]: + count += 1 + yield self.url_result('https://www.nicovideo.jp/watch/%s' % entry['id']) + page_num += 1 + + def _real_extract(self, url): + list_id = self._match_id(url) + return self.playlist_result(self._entries(list_id), list_id, ie=NiconicoIE.ie_key()) + + +class NiconicoLiveIE(InfoExtractor): + IE_NAME = 'niconico:live' + IE_DESC = 'ニコニコ生放送' + _VALID_URL = r'https?://(?:sp\.)?live2?\.nicovideo\.jp/(?:watch|gate)/(?P<id>lv\d+)' + _TESTS = [{ + 'note': 'this test case includes invisible characters for title, pasting them as-is', + 'url': 'https://live.nicovideo.jp/watch/lv339533123', + 'info_dict': { + 'id': 'lv339533123', + 'title': '激辛ペヤング食べます‪( ;ᯅ; )‬(歌枠オーディション参加中)', + 'view_count': 1526, + 'comment_count': 1772, + 'description': '初めましてもかって言います❕\nのんびり自由に適当に暮らしてます', + 'uploader': 'もか', + 'channel': 'ゲストさんのコミュニティ', + 'channel_id': 'co5776900', + 'channel_url': 'https://com.nicovideo.jp/community/co5776900', + 'timestamp': 1670677328, + 'is_live': True, + }, + 'skip': 'livestream', + }, { + 'url': 'https://live2.nicovideo.jp/watch/lv339533123', + 'only_matching': True, + }, { + 'url': 'https://sp.live.nicovideo.jp/watch/lv339533123', + 'only_matching': True, + }, { + 'url': 'https://sp.live2.nicovideo.jp/watch/lv339533123', + 'only_matching': True, + }] + + _KNOWN_LATENCY = ('high', 'low') + + def _real_extract(self, url): + if not websockets: + raise ExtractorError('websockets library is not available. 
Please install it.', expected=True) + video_id = self._match_id(url) + webpage, urlh = self._download_webpage_handle(f'https://live.nicovideo.jp/watch/{video_id}', video_id) + + embedded_data = self._parse_json(unescapeHTML(self._search_regex( + r'<script\s+id="embedded-data"\s*data-props="(.+?)"', webpage, 'embedded data')), video_id) + + ws_url = traverse_obj(embedded_data, ('site', 'relive', 'webSocketUrl')) + if not ws_url: + raise ExtractorError('The live stream hasn\'t started yet or has already ended.', expected=True) + ws_url = update_url_query(ws_url, { + 'frontend_id': traverse_obj(embedded_data, ('site', 'frontendId')) or '9', + }) + + hostname = remove_start(urlparse(urlh.url).hostname, 'sp.') + cookies = try_get(urlh.url, self._downloader._calc_cookies) + latency = try_get(self._configuration_arg('latency'), lambda x: x[0]) + if latency not in self._KNOWN_LATENCY: + latency = 'high' + + ws = WebSocketsWrapper(ws_url, { + 'Cookies': str_or_none(cookies) or '', + 'Origin': f'https://{hostname}', + 'Accept': '*/*', + 'User-Agent': self.get_param('http_headers')['User-Agent'], + }) + + self.write_debug('Sending HLS server request') + ws.send(json.dumps({ + 'type': 'startWatching', + 'data': { + 'stream': { + 'quality': 'abr', + 'protocol': 'hls+fmp4', + 'latency': latency, + 'chasePlay': False + }, + 'room': { + 'protocol': 'webSocket', + 'commentable': True + }, + 'reconnect': False, + } + })) + + while True: + recv = ws.recv() + if not recv: + continue + data = json.loads(recv) + if not isinstance(data, dict): + continue + if data.get('type') == 'stream': + m3u8_url = data['data']['uri'] + qualities = data['data']['availableQualities'] + break + elif data.get('type') == 'disconnect': + self.write_debug(recv) + raise ExtractorError('Disconnected in the middle of extraction') + elif data.get('type') == 'error': + self.write_debug(recv) + message = traverse_obj(data, ('body', 'code')) or recv + raise ExtractorError(message) + elif self.get_param('verbose', False): + if len(recv) > 100: + recv = recv[:100] + '...' 
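# Messages other than 'stream', 'disconnect' and 'error' fall through to
# this verbose-only branch and are merely echoed (truncated to 100 chars
# above); 'stream' carries the m3u8 URI plus the quality list and breaks
# the loop, while 'disconnect'/'error' raise and abort the extraction.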
+ self.write_debug('Server said: %s' % recv) + + title = traverse_obj(embedded_data, ('program', 'title')) or self._html_search_meta( + ('og:title', 'twitter:title'), webpage, 'live title', fatal=False) + + raw_thumbs = traverse_obj(embedded_data, ('program', 'thumbnail')) or {} + thumbnails = [] + for name, value in raw_thumbs.items(): + if not isinstance(value, dict): + thumbnails.append({ + 'id': name, + 'url': value, + **parse_resolution(value, lenient=True), + }) + continue + + for k, img_url in value.items(): + res = parse_resolution(k, lenient=True) or parse_resolution(img_url, lenient=True) + width, height = res.get('width'), res.get('height') + + thumbnails.append({ + 'id': f'{name}_{width}x{height}', + 'url': img_url, + **res, + }) + + formats = self._extract_m3u8_formats(m3u8_url, video_id, ext='mp4', live=True) + for fmt, q in zip(formats, reversed(qualities[1:])): + fmt.update({ + 'format_id': q, + 'protocol': 'niconico_live', + 'ws': ws, + 'video_id': video_id, + 'cookies': cookies, + 'live_latency': latency, + 'origin': hostname, + }) + + return { + 'id': video_id, + 'title': title, + **traverse_obj(embedded_data, { + 'view_count': ('program', 'statistics', 'watchCount'), + 'comment_count': ('program', 'statistics', 'commentCount'), + 'uploader': ('program', 'supplier', 'name'), + 'channel': ('socialGroup', 'name'), + 'channel_id': ('socialGroup', 'id'), + 'channel_url': ('socialGroup', 'socialGroupPageUrl'), + }), + 'description': clean_html(traverse_obj(embedded_data, ('program', 'description'))), + 'timestamp': int_or_none(traverse_obj(embedded_data, ('program', 'openTime'))), + 'is_live': True, + 'thumbnails': thumbnails, + 'formats': formats, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/niconicochannelplus.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/niconicochannelplus.py new file mode 100644 index 0000000..89af3f7 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/niconicochannelplus.py @@ -0,0 +1,426 @@ +import functools +import json + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + OnDemandPagedList, + filter_dict, + int_or_none, + parse_qs, + str_or_none, + traverse_obj, + unified_timestamp, + url_or_none, +) + + +class NiconicoChannelPlusBaseIE(InfoExtractor): + _WEBPAGE_BASE_URL = 'https://nicochannel.jp' + + def _call_api(self, path, item_id, *args, **kwargs): + return self._download_json( + f'https://nfc-api.nicochannel.jp/fc/{path}', video_id=item_id, *args, **kwargs) + + def _find_fanclub_site_id(self, channel_name): + fanclub_list_json = self._call_api( + 'content_providers/channels', item_id=f'channels/{channel_name}', + note='Fetching channel list', errnote='Unable to fetch channel list', + )['data']['content_providers'] + fanclub_id = traverse_obj(fanclub_list_json, ( + lambda _, v: v['domain'] == f'{self._WEBPAGE_BASE_URL}/{channel_name}', 'id'), + get_all=False) + if not fanclub_id: + raise ExtractorError(f'Channel {channel_name} does not exist', expected=True) + return fanclub_id + + def _get_channel_base_info(self, fanclub_site_id): + return traverse_obj(self._call_api( + f'fanclub_sites/{fanclub_site_id}/page_base_info', item_id=f'fanclub_sites/{fanclub_site_id}', + note='Fetching channel base info', errnote='Unable to fetch channel base info', fatal=False, + ), ('data', 'fanclub_site', {dict})) or {} + + def _get_channel_user_info(self, fanclub_site_id): + return traverse_obj(self._call_api( + f'fanclub_sites/{fanclub_site_id}/user_info', 
item_id=f'fanclub_sites/{fanclub_site_id}', + note='Fetching channel user info', errnote='Unable to fetch channel user info', fatal=False, + data=json.dumps('null').encode('ascii'), + ), ('data', 'fanclub_site', {dict})) or {} + + +class NiconicoChannelPlusIE(NiconicoChannelPlusBaseIE): + IE_NAME = 'NiconicoChannelPlus' + IE_DESC = 'ニコニコチャンネルプラス' + _VALID_URL = r'https?://nicochannel\.jp/(?P<channel>[\w.-]+)/(?:video|live)/(?P<code>sm\w+)' + _TESTS = [{ + 'url': 'https://nicochannel.jp/kaorin/video/smsDd8EdFLcVZk9yyAhD6H7H', + 'info_dict': { + 'id': 'smsDd8EdFLcVZk9yyAhD6H7H', + 'title': '前田佳織里はニコ生がしたい!', + 'ext': 'mp4', + 'channel': '前田佳織里の世界攻略計画', + 'channel_id': 'kaorin', + 'channel_url': 'https://nicochannel.jp/kaorin', + 'live_status': 'not_live', + 'thumbnail': 'https://nicochannel.jp/public_html/contents/video_pages/74/thumbnail_path', + 'description': '2021年11月に放送された\n「前田佳織里はニコ生がしたい!」アーカイブになります。', + 'timestamp': 1641360276, + 'duration': 4097, + 'comment_count': int, + 'view_count': int, + 'tags': [], + 'upload_date': '20220105', + }, + 'params': { + 'skip_download': True, + }, + }, { + # age limited video; test purpose channel. + 'url': 'https://nicochannel.jp/testman/video/smDXbcrtyPNxLx9jc4BW69Ve', + 'info_dict': { + 'id': 'smDXbcrtyPNxLx9jc4BW69Ve', + 'title': 'test oshiro', + 'ext': 'mp4', + 'channel': '本番チャンネルプラステストマン', + 'channel_id': 'testman', + 'channel_url': 'https://nicochannel.jp/testman', + 'age_limit': 18, + 'live_status': 'was_live', + 'timestamp': 1666344616, + 'duration': 86465, + 'comment_count': int, + 'view_count': int, + 'tags': [], + 'upload_date': '20221021', + }, + 'params': { + 'skip_download': True, + }, + }] + + def _real_extract(self, url): + content_code, channel_id = self._match_valid_url(url).group('code', 'channel') + fanclub_site_id = self._find_fanclub_site_id(channel_id) + + data_json = self._call_api( + f'video_pages/{content_code}', item_id=content_code, headers={'fc_use_device': 'null'}, + note='Fetching video page info', errnote='Unable to fetch video page info', + )['data']['video_page'] + + live_status, session_id = self._get_live_status_and_session_id(content_code, data_json) + + release_timestamp_str = data_json.get('live_scheduled_start_at') + + formats = [] + + if live_status == 'is_upcoming': + if release_timestamp_str: + msg = f'This live event will begin at {release_timestamp_str} UTC' + else: + msg = 'This event has not started yet' + self.raise_no_formats(msg, expected=True, video_id=content_code) + else: + formats = self._extract_m3u8_formats( + # "authenticated_url" is a format string that contains "{session_id}". 
+ m3u8_url=data_json['video_stream']['authenticated_url'].format(session_id=session_id), + video_id=content_code) + + return { + 'id': content_code, + 'formats': formats, + '_format_sort_fields': ('tbr', 'vcodec', 'acodec'), + 'channel': self._get_channel_base_info(fanclub_site_id).get('fanclub_site_name'), + 'channel_id': channel_id, + 'channel_url': f'{self._WEBPAGE_BASE_URL}/{channel_id}', + 'age_limit': traverse_obj(self._get_channel_user_info(fanclub_site_id), ('content_provider', 'age_limit')), + 'live_status': live_status, + 'release_timestamp': unified_timestamp(release_timestamp_str), + **traverse_obj(data_json, { + 'title': ('title', {str}), + 'thumbnail': ('thumbnail_url', {url_or_none}), + 'description': ('description', {str}), + 'timestamp': ('released_at', {unified_timestamp}), + 'duration': ('active_video_filename', 'length', {int_or_none}), + 'comment_count': ('video_aggregate_info', 'number_of_comments', {int_or_none}), + 'view_count': ('video_aggregate_info', 'total_views', {int_or_none}), + 'tags': ('video_tags', ..., 'tag', {str}), + }), + '__post_extractor': self.extract_comments( + content_code=content_code, + comment_group_id=traverse_obj(data_json, ('video_comment_setting', 'comment_group_id'))), + } + + def _get_comments(self, content_code, comment_group_id): + item_id = f'{content_code}/comments' + + if not comment_group_id: + return None + + comment_access_token = self._call_api( + f'video_pages/{content_code}/comments_user_token', item_id, + note='Getting comment token', errnote='Unable to get comment token', + )['data']['access_token'] + + comment_list = self._download_json( + 'https://comm-api.sheeta.com/messages.history', video_id=item_id, + note='Fetching comments', errnote='Unable to fetch comments', + headers={'Content-Type': 'application/json'}, + query={ + 'sort_direction': 'asc', + 'limit': int_or_none(self._configuration_arg('max_comments', [''])[0]) or 120, + }, + data=json.dumps({ + 'token': comment_access_token, + 'group_id': comment_group_id, + }).encode('ascii')) + + for comment in traverse_obj(comment_list, ...): + yield traverse_obj(comment, { + 'author': ('nickname', {str}), + 'author_id': ('sender_id', {str_or_none}), + 'id': ('id', {str_or_none}), + 'text': ('message', {str}), + 'timestamp': (('updated_at', 'sent_at', 'created_at'), {unified_timestamp}), + 'author_is_uploader': ('sender_id', {lambda x: x == '-1'}), + }, get_all=False) + + def _get_live_status_and_session_id(self, content_code, data_json): + video_type = data_json.get('type') + live_finished_at = data_json.get('live_finished_at') + + payload = {} + if video_type == 'vod': + if live_finished_at: + live_status = 'was_live' + else: + live_status = 'not_live' + elif video_type == 'live': + if not data_json.get('live_started_at'): + return 'is_upcoming', '' + + if not live_finished_at: + live_status = 'is_live' + else: + live_status = 'was_live' + payload = {'broadcast_type': 'dvr'} + + video_allow_dvr_flg = traverse_obj(data_json, ('video', 'allow_dvr_flg')) + video_convert_to_vod_flg = traverse_obj(data_json, ('video', 'convert_to_vod_flg')) + + self.write_debug(f'allow_dvr_flg = {video_allow_dvr_flg}, convert_to_vod_flg = {video_convert_to_vod_flg}.') + + if not (video_allow_dvr_flg and video_convert_to_vod_flg): + raise ExtractorError( + 'The live stream has ended, and there is no video available for download.', video_id=content_code, expected=True) + else: + raise ExtractorError(f'Unknown type: {video_type}', video_id=content_code, expected=False) + + self.write_debug(f'{content_code}: 
video_type={video_type}, live_status={live_status}') + + session_id = self._call_api( + f'video_pages/{content_code}/session_ids', item_id=f'{content_code}/session', + data=json.dumps(payload).encode('ascii'), headers={ + 'Content-Type': 'application/json', + 'fc_use_device': 'null', + 'origin': 'https://nicochannel.jp', + }, + note='Getting session id', errnote='Unable to get session id', + )['data']['session_id'] + + return live_status, session_id + + +class NiconicoChannelPlusChannelBaseIE(NiconicoChannelPlusBaseIE): + _PAGE_SIZE = 12 + + def _fetch_paged_channel_video_list(self, path, query, channel_name, item_id, page): + response = self._call_api( + path, item_id, query={ + **query, + 'page': (page + 1), + 'per_page': self._PAGE_SIZE, + }, + headers={'fc_use_device': 'null'}, + note=f'Getting channel info (page {page + 1})', + errnote=f'Unable to get channel info (page {page + 1})') + + for content_code in traverse_obj(response, ('data', 'video_pages', 'list', ..., 'content_code')): + # "video/{content_code}" works for both VOD and live, but "live/{content_code}" doesn't work for VOD + yield self.url_result( + f'{self._WEBPAGE_BASE_URL}/{channel_name}/video/{content_code}', NiconicoChannelPlusIE) + + +class NiconicoChannelPlusChannelVideosIE(NiconicoChannelPlusChannelBaseIE): + IE_NAME = 'NiconicoChannelPlus:channel:videos' + IE_DESC = 'ニコニコチャンネルプラス - チャンネル - 動画リスト. nicochannel.jp/channel/videos' + _VALID_URL = r'https?://nicochannel\.jp/(?P<id>[a-z\d\._-]+)/videos(?:\?.*)?' + _TESTS = [{ + # query: None + 'url': 'https://nicochannel.jp/testman/videos', + 'info_dict': { + 'id': 'testman-videos', + 'title': '本番チャンネルプラステストマン-videos', + }, + 'playlist_mincount': 18, + }, { + # query: None + 'url': 'https://nicochannel.jp/testtarou/videos', + 'info_dict': { + 'id': 'testtarou-videos', + 'title': 'チャンネルプラステスト太郎-videos', + }, + 'playlist_mincount': 2, + }, { + # query: None + 'url': 'https://nicochannel.jp/testjirou/videos', + 'info_dict': { + 'id': 'testjirou-videos', + 'title': 'チャンネルプラステスト二郎-videos', + }, + 'playlist_mincount': 12, + }, { + # query: tag + 'url': 'https://nicochannel.jp/testman/videos?tag=%E6%A4%9C%E8%A8%BC%E7%94%A8', + 'info_dict': { + 'id': 'testman-videos', + 'title': '本番チャンネルプラステストマン-videos', + }, + 'playlist_mincount': 6, + }, { + # query: vodType + 'url': 'https://nicochannel.jp/testman/videos?vodType=1', + 'info_dict': { + 'id': 'testman-videos', + 'title': '本番チャンネルプラステストマン-videos', + }, + 'playlist_mincount': 18, + }, { + # query: sort + 'url': 'https://nicochannel.jp/testman/videos?sort=-released_at', + 'info_dict': { + 'id': 'testman-videos', + 'title': '本番チャンネルプラステストマン-videos', + }, + 'playlist_mincount': 18, + }, { + # query: tag, vodType + 'url': 'https://nicochannel.jp/testman/videos?tag=%E6%A4%9C%E8%A8%BC%E7%94%A8&vodType=1', + 'info_dict': { + 'id': 'testman-videos', + 'title': '本番チャンネルプラステストマン-videos', + }, + 'playlist_mincount': 6, + }, { + # query: tag, sort + 'url': 'https://nicochannel.jp/testman/videos?tag=%E6%A4%9C%E8%A8%BC%E7%94%A8&sort=-released_at', + 'info_dict': { + 'id': 'testman-videos', + 'title': '本番チャンネルプラステストマン-videos', + }, + 'playlist_mincount': 6, + }, { + # query: vodType, sort + 'url': 'https://nicochannel.jp/testman/videos?vodType=1&sort=-released_at', + 'info_dict': { + 'id': 'testman-videos', + 'title': '本番チャンネルプラステストマン-videos', + }, + 'playlist_mincount': 18, + }, { + # query: tag, vodType, sort + 'url': 
'https://nicochannel.jp/testman/videos?tag=%E6%A4%9C%E8%A8%BC%E7%94%A8&vodType=1&sort=-released_at', + 'info_dict': { + 'id': 'testman-videos', + 'title': '本番チャンネルプラステストマン-videos', + }, + 'playlist_mincount': 6, + }] + + def _real_extract(self, url): + """ + API parameters: + sort: + -released_at 公開日が新しい順 (newest to oldest) + released_at 公開日が古い順 (oldest to newest) + -number_of_vod_views 再生数が多い順 (most play count) + number_of_vod_views コメントが多い順 (most comments) + vod_type (is "vodType" in "url"): + 0 すべて (all) + 1 会員限定 (members only) + 2 一部無料 (partially free) + 3 レンタル (rental) + 4 生放送アーカイブ (live archives) + 5 アップロード動画 (uploaded videos) + """ + + channel_id = self._match_id(url) + fanclub_site_id = self._find_fanclub_site_id(channel_id) + channel_name = self._get_channel_base_info(fanclub_site_id).get('fanclub_site_name') + qs = parse_qs(url) + + return self.playlist_result( + OnDemandPagedList( + functools.partial( + self._fetch_paged_channel_video_list, f'fanclub_sites/{fanclub_site_id}/video_pages', + filter_dict({ + 'tag': traverse_obj(qs, ('tag', 0)), + 'sort': traverse_obj(qs, ('sort', 0), default='-released_at'), + 'vod_type': traverse_obj(qs, ('vodType', 0), default='0'), + }), + channel_id, f'{channel_id}/videos'), + self._PAGE_SIZE), + playlist_id=f'{channel_id}-videos', playlist_title=f'{channel_name}-videos') + + +class NiconicoChannelPlusChannelLivesIE(NiconicoChannelPlusChannelBaseIE): + IE_NAME = 'NiconicoChannelPlus:channel:lives' + IE_DESC = 'ニコニコチャンネルプラス - チャンネル - ライブリスト. nicochannel.jp/channel/lives' + _VALID_URL = r'https?://nicochannel\.jp/(?P<id>[a-z\d\._-]+)/lives' + _TESTS = [{ + 'url': 'https://nicochannel.jp/testman/lives', + 'info_dict': { + 'id': 'testman-lives', + 'title': '本番チャンネルプラステストマン-lives', + }, + 'playlist_mincount': 18, + }, { + 'url': 'https://nicochannel.jp/testtarou/lives', + 'info_dict': { + 'id': 'testtarou-lives', + 'title': 'チャンネルプラステスト太郎-lives', + }, + 'playlist_mincount': 2, + }, { + 'url': 'https://nicochannel.jp/testjirou/lives', + 'info_dict': { + 'id': 'testjirou-lives', + 'title': 'チャンネルプラステスト二郎-lives', + }, + 'playlist_mincount': 6, + }] + + def _real_extract(self, url): + """ + API parameters: + live_type: + 1 放送中 (on air) + 2 放送予定 (scheduled live streams, oldest to newest) + 3 過去の放送 - すべて (all ended live streams, newest to oldest) + 4 過去の放送 - 生放送アーカイブ (all archives for live streams, oldest to newest) + We use "4" instead of "3" because some recently ended live streams could not be downloaded. 
+ """ + + channel_id = self._match_id(url) + fanclub_site_id = self._find_fanclub_site_id(channel_id) + channel_name = self._get_channel_base_info(fanclub_site_id).get('fanclub_site_name') + + return self.playlist_result( + OnDemandPagedList( + functools.partial( + self._fetch_paged_channel_video_list, f'fanclub_sites/{fanclub_site_id}/live_pages', + { + 'live_type': 4, + }, + channel_id, f'{channel_id}/lives'), + self._PAGE_SIZE), + playlist_id=f'{channel_id}-lives', playlist_title=f'{channel_name}-lives') diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ninecninemedia.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ninecninemedia.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ninecninemedia.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ninecninemedia.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ninegag.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ninegag.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ninegag.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ninegag.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/ninenow.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ninenow.py new file mode 100644 index 0000000..c655b75 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/ninenow.py @@ -0,0 +1,122 @@ +from .common import InfoExtractor +from ..compat import compat_str +from ..utils import ( + ExtractorError, + int_or_none, + float_or_none, + smuggle_url, + str_or_none, + try_get, + unified_strdate, + unified_timestamp, +) + + +class NineNowIE(InfoExtractor): + IE_NAME = '9now.com.au' + _VALID_URL = r'https?://(?:www\.)?9now\.com\.au/(?:[^/]+/){2}(?P<id>[^/?#]+)' + _GEO_COUNTRIES = ['AU'] + _TESTS = [{ + # clip + 'url': 'https://www.9now.com.au/afl-footy-show/2016/clip-ciql02091000g0hp5oktrnytc', + 'md5': '17cf47d63ec9323e562c9957a968b565', + 'info_dict': { + 'id': '16801', + 'ext': 'mp4', + 'title': 'St. 
Kilda\'s Joey Montagna on the potential for a player\'s strike', + 'description': 'Is a boycott of the NAB Cup "on the table"?', + 'uploader_id': '4460760524001', + 'upload_date': '20160713', + 'timestamp': 1468421266, + }, + 'skip': 'Only available in Australia', + }, { + # episode + 'url': 'https://www.9now.com.au/afl-footy-show/2016/episode-19', + 'only_matching': True, + }, { + # DRM protected + 'url': 'https://www.9now.com.au/andrew-marrs-history-of-the-world/season-1/episode-1', + 'only_matching': True, + }, { + # episode of series + 'url': 'https://www.9now.com.au/lego-masters/season-3/episode-3', + 'info_dict': { + 'id': '6249614030001', + 'title': 'Episode 3', + 'ext': 'mp4', + 'season_number': 3, + 'episode_number': 3, + 'description': 'In the first elimination of the competition, teams will have 10 hours to build a world inside a snow globe.', + 'uploader_id': '4460760524001', + 'timestamp': 1619002200, + 'upload_date': '20210421', + }, + 'expected_warnings': ['Ignoring subtitle tracks'], + 'params': { + 'skip_download': True, + } + }] + BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/4460760524001/default_default/index.html?videoId=%s' + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + page_data = self._parse_json(self._search_regex( + r'window\.__data\s*=\s*({.*?});', webpage, + 'page data', default='{}'), display_id, fatal=False) + if not page_data: + page_data = self._parse_json(self._parse_json(self._search_regex( + r'window\.__data\s*=\s*JSON\.parse\s*\(\s*(".+?")\s*\)\s*;', + webpage, 'page data'), display_id), display_id) + + for kind in ('episode', 'clip'): + current_key = page_data.get(kind, {}).get( + 'current%sKey' % kind.capitalize()) + if not current_key: + continue + cache = page_data.get(kind, {}).get('%sCache' % kind, {}) + if not cache: + continue + common_data = { + 'episode': (cache.get(current_key) or list(cache.values())[0])[kind], + 'season': (cache.get(current_key) or list(cache.values())[0]).get('season', None) + } + break + else: + raise ExtractorError('Unable to find video data') + + if not self.get_param('allow_unplayable_formats') and try_get(common_data, lambda x: x['episode']['video']['drm'], bool): + self.report_drm(display_id) + brightcove_id = try_get( + common_data, lambda x: x['episode']['video']['brightcoveId'], compat_str) or 'ref:%s' % common_data['episode']['video']['referenceId'] + video_id = str_or_none(try_get(common_data, lambda x: x['episode']['video']['id'])) or brightcove_id + + title = try_get(common_data, lambda x: x['episode']['name'], compat_str) + season_number = try_get(common_data, lambda x: x['season']['seasonNumber'], int) + episode_number = try_get(common_data, lambda x: x['episode']['episodeNumber'], int) + timestamp = unified_timestamp(try_get(common_data, lambda x: x['episode']['airDate'], compat_str)) + release_date = unified_strdate(try_get(common_data, lambda x: x['episode']['availability'], compat_str)) + thumbnails_data = try_get(common_data, lambda x: x['episode']['image']['sizes'], dict) or {} + thumbnails = [{ + 'id': thumbnail_id, + 'url': thumbnail_url, + 'width': int_or_none(thumbnail_id[1:]), + } for thumbnail_id, thumbnail_url in thumbnails_data.items()] + + return { + '_type': 'url_transparent', + 'url': smuggle_url( + self.BRIGHTCOVE_URL_TEMPLATE % brightcove_id, + {'geo_countries': self._GEO_COUNTRIES}), + 'id': video_id, + 'title': title, + 'description': try_get(common_data, lambda x: x['episode']['description'], 
compat_str), + 'duration': float_or_none(try_get(common_data, lambda x: x['episode']['video']['duration'], float), 1000), + 'thumbnails': thumbnails, + 'ie_key': 'BrightcoveNew', + 'season_number': season_number, + 'episode_number': episode_number, + 'timestamp': timestamp, + 'release_date': release_date, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nintendo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nintendo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nintendo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nintendo.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/nitter.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nitter.py new file mode 100644 index 0000000..35d1311 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/nitter.py @@ -0,0 +1,360 @@ +from .common import InfoExtractor +from ..compat import compat_urlparse +from ..utils import ( + parse_count, + unified_timestamp, + remove_end, + determine_ext, +) +import re +import random + + +class NitterIE(InfoExtractor): + # Taken from https://github.com/zedeus/nitter/wiki/Instances + + NON_HTTP_INSTANCES = ( + '3nzoldnxplag42gqjs23xvghtzf6t6yzssrtytnntc6ppc7xxuoneoad.onion', + 'nitter.l4qlywnpwqsluw65ts7md3khrivpirse744un3x7mlskqauz5pyuzgqd.onion', + 'nitter7bryz3jv7e3uekphigvmoyoem4al3fynerxkj22dmoxoq553qd.onion', + 'npf37k3mtzwxreiw52ccs5ay4e6qt2fkcs2ndieurdyn2cuzzsfyfvid.onion', + 'nitter.v6vgyqpa7yefkorazmg5d5fimstmvm2vtbirt6676mt7qmllrcnwycqd.onion', + 'i23nv6w3juvzlw32xzoxcqzktegd4i4fu3nmnc2ewv4ggiu4ledwklad.onion', + '26oq3gioiwcmfojub37nz5gzbkdiqp7fue5kvye7d4txv4ny6fb4wwid.onion', + 'vfaomgh4jxphpbdfizkm5gbtjahmei234giqj4facbwhrfjtcldauqad.onion', + 'iwgu3cv7ywf3gssed5iqtavmrlszgsxazkmwwnt4h2kdait75thdyrqd.onion', + 'erpnncl5nhyji3c32dcfmztujtl3xaddqb457jsbkulq24zqq7ifdgad.onion', + 'ckzuw5misyahmg7j5t5xwwuj3bwy62jfolxyux4brfflramzsvvd3syd.onion', + 'jebqj47jgxleaiosfcxfibx2xdahjettuydlxbg64azd4khsxv6kawid.onion', + 'nttr2iupbb6fazdpr2rgbooon2tzbbsvvkagkgkwohhodjzj43stxhad.onion', + 'nitraeju2mipeziu2wtcrqsxg7h62v5y4eqgwi75uprynkj74gevvuqd.onion', + 'nitter.lqs5fjmajyp7rvp4qvyubwofzi6d4imua7vs237rkc4m5qogitqwrgyd.onion', + 'ibsboeui2im5o7dxnik3s5yghufumgy5abevtij5nbizequfpu4qi4ad.onion', + 'ec5nvbycpfa5k6ro77blxgkyrzbkv7uy6r5cngcbkadtjj2733nm3uyd.onion', + + 'nitter.i2p', + 'u6ikd6zndl3c4dsdq4mmujpntgeevdk5qzkfb57r4tnfeccrn2qa.b32.i2p', + + 'nitterlgj3n5fgwesu3vxc5h67ruku33nqaoeoocae2mvlzhsu6k7fqd.onion', + ) + + HTTP_INSTANCES = ( + 'nitter.lacontrevoie.fr', + 'nitter.fdn.fr', + 'nitter.1d4.us', + 'nitter.kavin.rocks', + 'nitter.unixfox.eu', + 'nitter.domain.glass', + 'nitter.namazso.eu', + 'birdsite.xanny.family', + 'nitter.moomoo.me', + 'bird.trom.tf', + 'nitter.it', + 'twitter.censors.us', + 'nitter.grimneko.de', + 'twitter.076.ne.jp', + 'nitter.fly.dev', + 'notabird.site', + 'nitter.weiler.rocks', + 'nitter.sethforprivacy.com', + 'nitter.cutelab.space', + 'nitter.nl', + 'nitter.mint.lgbt', + 'nitter.bus-hit.me', + 'nitter.esmailelbob.xyz', + 'tw.artemislena.eu', + 'nitter.winscloud.net', + 'nitter.tiekoetter.com', + 'nitter.spaceint.fr', + 'nitter.privacy.com.de', + 'nitter.poast.org', + 'nitter.bird.froth.zone', + 'nitter.dcs0.hu', + 'twitter.dr460nf1r3.org', + 'nitter.garudalinux.org', + 'twitter.femboy.hu', + 'nitter.cz', + 'nitter.privacydev.net', + 'nitter.evil.site', + 'tweet.lambda.dance', + 'nitter.kylrth.com', + 'nitter.foss.wtf', + 'nitter.priv.pw', + 'nitter.tokhmi.xyz', + 
'nitter.catalyst.sx', + 'unofficialbird.com', + 'nitter.projectsegfau.lt', + 'nitter.eu.projectsegfau.lt', + 'singapore.unofficialbird.com', + 'canada.unofficialbird.com', + 'india.unofficialbird.com', + 'nederland.unofficialbird.com', + 'uk.unofficialbird.com', + 'n.l5.ca', + 'nitter.slipfox.xyz', + 'nitter.soopy.moe', + 'nitter.qwik.space', + 'read.whatever.social', + 'nitter.rawbit.ninja', + 'nt.vern.cc', + 'ntr.odyssey346.dev', + 'nitter.ir', + 'nitter.privacytools.io', + 'nitter.sneed.network', + 'n.sneed.network', + 'nitter.manasiwibi.com', + 'nitter.smnz.de', + 'nitter.twei.space', + 'nitter.inpt.fr', + 'nitter.d420.de', + 'nitter.caioalonso.com', + 'nitter.at', + 'nitter.drivet.xyz', + 'nitter.pw', + 'nitter.nicfab.eu', + 'bird.habedieeh.re', + 'nitter.hostux.net', + 'nitter.adminforge.de', + 'nitter.platypush.tech', + 'nitter.mask.sh', + 'nitter.pufe.org', + 'nitter.us.projectsegfau.lt', + 'nitter.arcticfoxes.net', + 't.com.sb', + 'nitter.kling.gg', + 'nitter.ktachibana.party', + 'nitter.riverside.rocks', + 'nitter.girlboss.ceo', + 'nitter.lunar.icu', + 'twitter.moe.ngo', + 'nitter.freedit.eu', + 'ntr.frail.duckdns.org', + 'nitter.librenode.org', + 'n.opnxng.com', + 'nitter.plus.st', + ) + + DEAD_INSTANCES = ( + # maintenance + 'nitter.ethibox.fr', + + # official, rate limited + 'nitter.net', + # offline + 'is-nitter.resolv.ee', + 'lu-nitter.resolv.ee', + 'nitter.13ad.de', + 'nitter.40two.app', + 'nitter.cattube.org', + 'nitter.cc', + 'nitter.dark.fail', + 'nitter.himiko.cloud', + 'nitter.koyu.space', + 'nitter.mailstation.de', + 'nitter.mastodont.cat', + 'nitter.tedomum.net', + 'nitter.tokhmi.xyz', + 'nitter.weaponizedhumiliation.com', + 'nitter.vxempire.xyz', + 'tweet.lambda.dance', + 'nitter.ca', + 'nitter.42l.fr', + 'nitter.pussthecat.org', + 'nitter.nixnet.services', + 'nitter.eu', + 'nitter.actionsack.com', + 'nitter.hu', + 'twitr.gq', + 'nittereu.moomoo.me', + 'bird.from.tf', + 'twitter.grimneko.de', + 'nitter.alefvanoon.xyz', + 'n.hyperborea.cloud', + 'twitter.mstdn.social', + 'nitter.silkky.cloud', + 'nttr.stream', + 'fuckthesacklers.network', + 'nitter.govt.land', + 'nitter.datatunnel.xyz', + 'de.nttr.stream', + 'twtr.bch.bar', + 'nitter.exonip.de', + 'nitter.mastodon.pro', + 'nitter.notraxx.ch', + 'nitter.skrep.in', + 'nitter.snopyta.org', + ) + + INSTANCES = NON_HTTP_INSTANCES + HTTP_INSTANCES + DEAD_INSTANCES + + _INSTANCES_RE = f'(?:{"|".join(map(re.escape, INSTANCES))})' + _VALID_URL = fr'https?://{_INSTANCES_RE}/(?P<uploader_id>.+)/status/(?P<id>[0-9]+)(#.)?' 
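The instance tuples above are joined into one escaped alternation (_INSTANCES_RE) so that a single _VALID_URL can match a tweet URL on any known mirror, while current_instance (below) picks a random live mirror for the test URLs. A standalone sketch of that regex construction, using a made-up three-instance list:

import re

instances = ('nitter.net', 'nitter.it', 'n.l5.ca')
instances_re = '(?:%s)' % '|'.join(map(re.escape, instances))  # re.escape protects the dots
valid_url = re.compile(rf'https?://{instances_re}/(?P<uploader_id>.+)/status/(?P<id>[0-9]+)')

m = valid_url.match('https://nitter.it/firefox/status/1314279897502629888')
assert m and m.group('id') == '1314279897502629888'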
+ current_instance = random.choice(HTTP_INSTANCES) + + _TESTS = [ + { + # GIF (wrapped in mp4) + 'url': f'https://{current_instance}/firefox/status/1314279897502629888#m', + 'info_dict': { + 'id': '1314279897502629888', + 'ext': 'mp4', + 'title': 'md5:7890a9277da4639ab624dd899424c5d8', + 'description': 'md5:5fea96a4d3716c350f8b95b21b3111fe', + 'thumbnail': r're:^https?://.*\.jpg$', + 'uploader': 'Firefox 🔥', + 'uploader_id': 'firefox', + 'uploader_url': f'https://{current_instance}/firefox', + 'upload_date': '20201008', + 'timestamp': 1602183720, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + }, + }, { # normal video + 'url': f'https://{current_instance}/Le___Doc/status/1299715685392756737#m', + 'info_dict': { + 'id': '1299715685392756737', + 'ext': 'mp4', + 'title': 're:^.* - "Je ne prédis jamais rien"\nD Raoult, Août 2020...', + 'description': '"Je ne prédis jamais rien"\nD Raoult, Août 2020...', + 'thumbnail': r're:^https?://.*\.jpg$', + 'uploader': 're:^Le *Doc', + 'uploader_id': 'Le___Doc', + 'uploader_url': f'https://{current_instance}/Le___Doc', + 'upload_date': '20200829', + 'timestamp': 1598711340, + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + }, + }, { # video embed in a "Streaming Political Ads" box + 'url': f'https://{current_instance}/mozilla/status/1321147074491092994#m', + 'info_dict': { + 'id': '1321147074491092994', + 'ext': 'mp4', + 'title': 'md5:8290664aabb43b9189145c008386bf12', + 'description': 'md5:9cf2762d49674bc416a191a689fb2aaa', + 'thumbnail': r're:^https?://.*\.jpg$', + 'uploader': 'Mozilla', + 'uploader_id': 'mozilla', + 'uploader_url': f'https://{current_instance}/mozilla', + 'upload_date': '20201027', + 'timestamp': 1603820940, + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + }, + 'expected_warnings': ['Ignoring subtitle tracks found in the HLS manifest'], + }, { # not the first tweet but main-tweet + 'url': f'https://{current_instance}/firefox/status/1354848277481414657#m', + 'info_dict': { + 'id': '1354848277481414657', + 'ext': 'mp4', + 'title': 'md5:bef647f03bd1c6b15b687ea70dfc9700', + 'description': 'md5:5efba25e2f9dac85ebcd21160cb4341f', + 'thumbnail': r're:^https?://.*\.jpg$', + 'uploader': 'Firefox 🔥', + 'uploader_id': 'firefox', + 'uploader_url': f'https://{current_instance}/firefox', + 'upload_date': '20210128', + 'timestamp': 1611855960, + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + } + }, { # no OpenGraph title + 'url': f'https://{current_instance}/LocalBateman/status/1678455464038735895#m', + 'info_dict': { + 'id': '1678455464038735895', + 'ext': 'mp4', + 'title': 'Your Typical Local Man - Local man, what did Romanians ever do to you?', + 'description': 'Local man, what did Romanians ever do to you?', + 'thumbnail': r're:^https?://.*\.jpg$', + 'uploader': 'Your Typical Local Man', + 'uploader_id': 'LocalBateman', + 'uploader_url': f'https://{current_instance}/LocalBateman', + 'upload_date': '20230710', + 'timestamp': 1689009900, + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + }, + 'expected_warnings': ['Ignoring subtitle tracks found in the HLS manifest'], + 'params': {'skip_download': 'm3u8'}, + } + ] + + def _real_extract(self, url): + video_id, uploader_id = self._match_valid_url(url).group('id', 'uploader_id') + parsed_url = compat_urlparse.urlparse(url) + base_url = f'{parsed_url.scheme}://{parsed_url.netloc}' + + 
self._set_cookie(parsed_url.netloc, 'hlsPlayback', 'on') + full_webpage = webpage = self._download_webpage(url, video_id) + + main_tweet_start = full_webpage.find('class="main-tweet"') + if main_tweet_start > 0: + webpage = full_webpage[main_tweet_start:] + + video_url = '%s%s' % (base_url, self._html_search_regex( + r'(?:<video[^>]+data-url|<source[^>]+src)="([^"]+)"', webpage, 'video url')) + ext = determine_ext(video_url) + + if ext == 'unknown_video': + formats = self._extract_m3u8_formats(video_url, video_id, ext='mp4') + else: + formats = [{ + 'url': video_url, + 'ext': ext + }] + + title = description = self._og_search_description(full_webpage, default=None) or self._html_search_regex( + r'<div class="tweet-content[^>]+>([^<]+)</div>', webpage, 'title', fatal=False) + + uploader_id = self._html_search_regex( + r'<a class="username"[^>]+title="@([^"]+)"', webpage, 'uploader id', fatal=False) or uploader_id + + uploader = self._html_search_regex( + r'<a class="fullname"[^>]+title="([^"]+)"', webpage, 'uploader name', fatal=False) + if uploader: + title = f'{uploader} - {title}' + + counts = { + f'{x[0]}_count': self._html_search_regex( + fr'<span[^>]+class="icon-{x[1]}[^>]*></span>([^<]*)</div>', + webpage, f'{x[0]} count', fatal=False) + for x in (('view', 'play'), ('like', 'heart'), ('repost', 'retweet'), ('comment', 'comment')) + } + counts = {field: 0 if count == '' else parse_count(count) for field, count in counts.items()} + + thumbnail = ( + self._html_search_meta('og:image', full_webpage, 'thumbnail url') + or remove_end('%s%s' % (base_url, self._html_search_regex( + r'<video[^>]+poster="([^"]+)"', webpage, 'thumbnail url', fatal=False)), '%3Asmall')) + + thumbnails = [ + {'id': id, 'url': f'{thumbnail}%3A{id}'} + for id in ('thumb', 'small', 'large', 'medium', 'orig') + ] + + date = self._html_search_regex( + r'<span[^>]+class="tweet-date"[^>]*><a[^>]+title="([^"]+)"', + webpage, 'upload date', default='').replace('·', '') + + return { + 'id': video_id, + 'title': title, + 'description': description, + 'uploader': uploader, + 'timestamp': unified_timestamp(date), + 'uploader_id': uploader_id, + 'uploader_url': f'{base_url}/{uploader_id}', + 'formats': formats, + 'thumbnails': thumbnails, + 'thumbnail': thumbnail, + **counts, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/njpwworld.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/njpwworld.py new file mode 100644 index 0000000..6078381 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/njpwworld.py @@ -0,0 +1,82 @@ +import re + +from .common import InfoExtractor +from ..compat import compat_urlparse +from ..utils import ( + get_element_by_class, + urlencode_postdata, +) + + +class NJPWWorldIE(InfoExtractor): + _VALID_URL = r'https?://(front\.)?njpwworld\.com/p/(?P<id>[a-z0-9_]+)' + IE_DESC = '新日本プロレスワールド' + _NETRC_MACHINE = 'njpwworld' + + _TESTS = [{ + 'url': 'http://njpwworld.com/p/s_series_00155_1_9/', + 'info_dict': { + 'id': 's_series_00155_1_9', + 'ext': 'mp4', + 'title': '闘強導夢2000 2000年1月4日 東京ドーム 第9試合 ランディ・サベージ VS リック・スタイナー', + 'tags': list, + }, + 'params': { + 'skip_download': True, # AES-encrypted m3u8 + }, + 'skip': 'Requires login', + }, { + 'url': 'https://front.njpwworld.com/p/s_series_00563_16_bs', + 'info_dict': { + 'id': 's_series_00563_16_bs', + 'ext': 'mp4', + 'title': 'WORLD TAG LEAGUE 2020 & BEST OF THE SUPER Jr.27 2020年12月6日 福岡・福岡国際センター バックステージコメント(字幕あり)', + 'tags': 
["ç¦å²¡ãƒ»ç¦å²¡å›½éš›ã‚»ãƒ³ã‚¿ãƒ¼", "ãƒãƒƒã‚¯ã‚¹ãƒ†ãƒ¼ã‚¸ã‚³ãƒ¡ãƒ³ãƒˆ", "2020", "20年代"], + }, + 'params': { + 'skip_download': True, + }, + }] + + _LOGIN_URL = 'https://front.njpwworld.com/auth/login' + + def _perform_login(self, username, password): + # Setup session (will set necessary cookies) + self._request_webpage( + 'https://njpwworld.com/', None, note='Setting up session') + + webpage, urlh = self._download_webpage_handle( + self._LOGIN_URL, None, + note='Logging in', errnote='Unable to login', + data=urlencode_postdata({'login_id': username, 'pw': password}), + headers={'Referer': 'https://front.njpwworld.com/auth'}) + # /auth/login will return 302 for successful logins + if urlh.url == self._LOGIN_URL: + self.report_warning('unable to login') + return False + + return True + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage(url, video_id) + + formats = [] + for kind, vid in re.findall(r'if\s+\(\s*imageQualityType\s*==\s*\'([^\']+)\'\s*\)\s*{\s*video_id\s*=\s*"(\d+)"', webpage): + player_path = '/intent?id=%s&type=url' % vid + player_url = compat_urlparse.urljoin(url, player_path) + formats += self._extract_m3u8_formats( + player_url, video_id, 'mp4', 'm3u8_native', m3u8_id=kind, fatal=False, quality=int(kind == 'high')) + + tag_block = get_element_by_class('tag-block', webpage) + tags = re.findall( + r'<a[^>]+class="tag-[^"]+"[^>]*>([^<]+)</a>', tag_block + ) if tag_block else None + + return { + 'id': video_id, + 'title': get_element_by_class('article-title', webpage) or self._og_search_title(webpage), + 'formats': formats, + 'tags': tags, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nobelprize.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nobelprize.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nobelprize.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nobelprize.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/noice.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/noice.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/noice.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/noice.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nonktube.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nonktube.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nonktube.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nonktube.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/noodlemagazine.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/noodlemagazine.py new file mode 100644 index 0000000..1c1a763 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/noodlemagazine.py @@ -0,0 +1,80 @@ +from .common import InfoExtractor +from ..utils import ( + int_or_none, + parse_count, + parse_duration, + unified_strdate, + urljoin, +) +from ..utils.traversal import traverse_obj + + +class NoodleMagazineIE(InfoExtractor): + _VALID_URL = r'https?://(?:www|adult\.)?noodlemagazine\.com/watch/(?P<id>[0-9-_]+)' + _TEST = { + 'url': 'https://adult.noodlemagazine.com/watch/-67421364_456239604', + 'md5': '9e02aa763612929d0b4b850591a9248b', + 'info_dict': { + 'id': '-67421364_456239604', + 'title': 'Aria alexander manojob', + 'thumbnail': r're:^https://.*\.jpg', + 'ext': 'mp4', + 'duration': 903, + 'view_count': int, + 'like_count': int, + 'description': 'Aria alexander manojob', + 'tags': 
['aria', 'alexander', 'manojob'], + 'upload_date': '20190218', + 'age_limit': 18 + } + } + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + title = self._og_search_title(webpage) + duration = parse_duration(self._html_search_meta('video:duration', webpage, 'duration', default=None)) + description = self._og_search_property('description', webpage, default='').replace(' watch online hight quality video', '') + tags = self._html_search_meta('video:tag', webpage, default='').split(', ') + view_count = parse_count(self._html_search_meta('ya:ovs:views_total', webpage, default=None)) + like_count = parse_count(self._html_search_meta('ya:ovs:likes', webpage, default=None)) + upload_date = unified_strdate(self._html_search_meta('ya:ovs:upload_date', webpage, default='')) + + def build_url(url_or_path): + return urljoin('https://adult.noodlemagazine.com', url_or_path) + + headers = {'Referer': url} + player_path = self._html_search_regex( + r'<iframe[^>]+\bid="iplayer"[^>]+\bsrc="([^"]+)"', webpage, 'player path') + player_iframe = self._download_webpage( + build_url(player_path), video_id, 'Downloading iframe page', headers=headers) + playlist_url = self._search_regex( + r'window\.playlistUrl\s*=\s*["\']([^"\']+)["\']', player_iframe, 'playlist url') + playlist_info = self._download_json(build_url(playlist_url), video_id, headers=headers) + + formats = [] + for source in traverse_obj(playlist_info, ('sources', lambda _, v: v['file'])): + if source.get('type') == 'hls': + formats.extend(self._extract_m3u8_formats( + build_url(source['file']), video_id, 'mp4', fatal=False, m3u8_id='hls')) + else: + formats.append(traverse_obj(source, { + 'url': ('file', {build_url}), + 'format_id': 'label', + 'height': ('label', {int_or_none}), + 'ext': 'type', + })) + + return { + 'id': video_id, + 'formats': formats, + 'title': title, + 'thumbnail': self._og_search_property('image', webpage, default=None) or playlist_info.get('image'), + 'duration': duration, + 'description': description, + 'tags': tags, + 'view_count': view_count, + 'like_count': like_count, + 'upload_date': upload_date, + 'age_limit': 18 + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/noovo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/noovo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/noovo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/noovo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/normalboots.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/normalboots.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/normalboots.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/normalboots.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nosnl.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nosnl.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nosnl.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nosnl.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/nosvideo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nosvideo.py new file mode 100644 index 0000000..7e9688c --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/nosvideo.py @@ -0,0 +1,72 @@ +import re + +from .common import InfoExtractor +from ..networking import Request +from ..utils import ( + ExtractorError, + urlencode_postdata, + xpath_text, + 
xpath_with_ns, +) + +_x = lambda p: xpath_with_ns(p, {'xspf': 'http://xspf.org/ns/0/'}) + + +class NosVideoIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?nosvideo\.com/' + \ + r'(?:embed/|\?v=)(?P<id>[A-Za-z0-9]{12})/?' + _PLAYLIST_URL = 'http://nosvideo.com/xml/{xml_id:s}.xml' + _FILE_DELETED_REGEX = r'<b>File Not Found</b>' + _TEST = { + 'url': 'http://nosvideo.com/?v=mu8fle7g7rpq', + 'md5': '6124ed47130d8be3eacae635b071e6b6', + 'info_dict': { + 'id': 'mu8fle7g7rpq', + 'ext': 'mp4', + 'title': 'big_buck_bunny_480p_surround-fix.avi.mp4', + 'thumbnail': r're:^https?://.*\.jpg$', + } + } + + def _real_extract(self, url): + video_id = self._match_id(url) + + fields = { + 'id': video_id, + 'op': 'download1', + 'method_free': 'Continue to Video', + } + req = Request(url, urlencode_postdata(fields)) + req.headers['Content-type'] = 'application/x-www-form-urlencoded' + webpage = self._download_webpage(req, video_id, + 'Downloading download page') + if re.search(self._FILE_DELETED_REGEX, webpage) is not None: + raise ExtractorError('Video %s does not exist' % video_id, + expected=True) + + xml_id = self._search_regex(r'php\|([^\|]+)\|', webpage, 'XML ID') + playlist_url = self._PLAYLIST_URL.format(xml_id=xml_id) + playlist = self._download_xml(playlist_url, video_id) + + track = playlist.find(_x('.//xspf:track')) + if track is None: + raise ExtractorError( + 'XML playlist is missing the \'track\' element', + expected=True) + title = xpath_text(track, _x('./xspf:title'), 'title') + url = xpath_text(track, _x('./xspf:file'), 'URL', fatal=True) + thumbnail = xpath_text(track, _x('./xspf:image'), 'thumbnail') + if title is not None: + title = title.strip() + + formats = [{ + 'format_id': 'sd', + 'url': url, + }] + + return { + 'id': video_id, + 'title': title, + 'thumbnail': thumbnail, + 'formats': formats, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/nova.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nova.py new file mode 100644 index 0000000..bd0c4eb --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/nova.py @@ -0,0 +1,296 @@ +import re + +from .common import InfoExtractor +from ..utils import ( + clean_html, + determine_ext, + int_or_none, + js_to_json, + traverse_obj, + unified_strdate, + url_or_none, +) + + +class NovaEmbedIE(InfoExtractor): + _VALID_URL = r'https?://media\.cms\.nova\.cz/embed/(?P<id>[^/?#&]+)' + _TESTS = [{ + 'url': 'https://media.cms.nova.cz/embed/8o0n0r?autoplay=1', + 'info_dict': { + 'id': '8o0n0r', + 'title': '2180. díl', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 2578, + }, + 'params': { + 'skip_download': True, + 'ignore_no_formats_error': True, + }, + 'expected_warnings': ['DRM protected', 'Requested format is not available'], + }, { + 'url': 'https://media.cms.nova.cz/embed/KybpWYvcgOa', + 'info_dict': { + 'id': 'KybpWYvcgOa', + 'ext': 'mp4', + 'title': 'Borhyová oslavila 60? 
Soutěžící z pořadu odboural moderátora Ondřeje Sokola', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 114, + }, + 'params': {'skip_download': 'm3u8'}, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage(url, video_id) + + has_drm = False + duration = None + formats = [] + + def process_format_list(format_list, format_id=""): + nonlocal formats, has_drm + if not isinstance(format_list, list): + format_list = [format_list] + for format_dict in format_list: + if not isinstance(format_dict, dict): + continue + if (not self.get_param('allow_unplayable_formats') + and traverse_obj(format_dict, ('drm', 'keySystem'))): + has_drm = True + continue + format_url = url_or_none(format_dict.get('src')) + format_type = format_dict.get('type') + ext = determine_ext(format_url) + if (format_type == 'application/x-mpegURL' + or format_id == 'HLS' or ext == 'm3u8'): + formats.extend(self._extract_m3u8_formats( + format_url, video_id, 'mp4', + entry_protocol='m3u8_native', m3u8_id='hls', + fatal=False)) + elif (format_type == 'application/dash+xml' + or format_id == 'DASH' or ext == 'mpd'): + formats.extend(self._extract_mpd_formats( + format_url, video_id, mpd_id='dash', fatal=False)) + else: + formats.append({ + 'url': format_url, + }) + + player = self._search_json( + r'player:', webpage, 'player', video_id, fatal=False, end_pattern=r';\s*</script>') + if player: + for src in traverse_obj(player, ('lib', 'source', 'sources', ...)): + process_format_list(src) + duration = traverse_obj(player, ('sourceInfo', 'duration', {int_or_none})) + if not formats and not has_drm: + # older code path, in use before August 2023 + player = self._parse_json( + self._search_regex( + (r'(?:(?:replacePlaceholders|processAdTagModifier).*?:\s*)?(?:replacePlaceholders|processAdTagModifier)\s*\(\s*(?P<json>{.*?})\s*\)(?:\s*\))?\s*,', + r'Player\.init\s*\([^,]+,(?P<cndn>\s*\w+\s*\?)?\s*(?P<json>{(?(cndn).+?|.+)})\s*(?(cndn):|,\s*{.+?}\s*\)\s*;)'), + webpage, 'player', group='json'), video_id) + if player: + for format_id, format_list in player['tracks'].items(): + process_format_list(format_list, format_id) + duration = int_or_none(player.get('duration')) + + if not formats and has_drm: + self.report_drm(video_id) + + title = self._og_search_title( + webpage, default=None) or self._search_regex( + (r'<value>(?P<title>[^<]+)', + r'videoTitle\s*:\s*(["\'])(?P<value>(?:(?!\1).)+)\1'), webpage, + 'title', group='value') + thumbnail = self._og_search_thumbnail( + webpage, default=None) or self._search_regex( + r'poster\s*:\s*(["\'])(?P<value>(?:(?!\1).)+)\1', webpage, + 'thumbnail', fatal=False, group='value') + duration = int_or_none(self._search_regex( + r'videoDuration\s*:\s*(\d+)', webpage, 'duration', + default=duration)) + + return { + 'id': video_id, + 'title': title, + 'thumbnail': thumbnail, + 'duration': duration, + 'formats': formats, + } + + +class NovaIE(InfoExtractor): + IE_DESC = 'TN.cz, Prásk.tv, Nova.cz, Novaplus.cz, FANDA.tv, Krásná.cz and Doma.cz' + _VALID_URL = r'https?://(?:[^.]+\.)?(?P<site>tv(?:noviny)?|tn|novaplus|vymena|fanda|krasna|doma|prask)\.nova\.cz/(?:[^/]+/)+(?P<id>[^/]+?)(?:\.html|/|$)' + _TESTS = [{ + 'url': 'http://tn.nova.cz/clanek/tajemstvi-ukryte-v-podzemi-specialni-nemocnice-v-prazske-krci.html#player_13260', + 'md5': '249baab7d0104e186e78b0899c7d5f28', + 'info_dict': { + 'id': '1757139', + 'display_id': 'tajemstvi-ukryte-v-podzemi-specialni-nemocnice-v-prazske-krci', + 'ext': 'mp4', + 'title': 'Podzemní nemocnice v pražské Krči', 
+ 'description': 'md5:f0a42dd239c26f61c28f19e62d20ef53', + 'thumbnail': r're:^https?://.*\.(?:jpg)', + } + }, { + 'url': 'http://fanda.nova.cz/clanek/fun-and-games/krvavy-epos-zaklinac-3-divoky-hon-vychazi-vyhrajte-ho-pro-sebe.html', + 'info_dict': { + 'id': '1753621', + 'ext': 'mp4', + 'title': 'Zaklínač 3: Divoký hon', + 'description': 're:.*Pokud se stejně jako my nemůžete.*', + 'thumbnail': r're:https?://.*\.jpg(\?.*)?', + 'upload_date': '20150521', + }, + 'params': { + # rtmp download + 'skip_download': True, + }, + 'skip': 'gone', + }, { + # media.cms.nova.cz embed + 'url': 'https://novaplus.nova.cz/porad/ulice/epizoda/18760-2180-dil', + 'info_dict': { + 'id': '8o0n0r', + 'ext': 'mp4', + 'title': '2180. díl', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 2578, + }, + 'params': { + 'skip_download': True, + }, + 'add_ie': [NovaEmbedIE.ie_key()], + 'skip': 'CHYBA 404: STRÁNKA NENALEZENA', + }, { + 'url': 'http://sport.tn.nova.cz/clanek/sport/hokej/nhl/zivot-jde-dal-hodnotil-po-vyrazeni-z-playoff-jiri-sekac.html', + 'only_matching': True, + }, { + 'url': 'http://fanda.nova.cz/clanek/fun-and-games/krvavy-epos-zaklinac-3-divoky-hon-vychazi-vyhrajte-ho-pro-sebe.html', + 'only_matching': True, + }, { + 'url': 'http://doma.nova.cz/clanek/zdravi/prijdte-se-zapsat-do-registru-kostni-drene-jiz-ve-stredu-3-cervna.html', + 'only_matching': True, + }, { + 'url': 'http://prask.nova.cz/clanek/novinky/co-si-na-sobe-nase-hvezdy-nechaly-pojistit.html', + 'only_matching': True, + }, { + 'url': 'http://tv.nova.cz/clanek/novinky/zivot-je-zivot-bondovsky-trailer.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + display_id = mobj.group('id') + site = mobj.group('site') + + webpage = self._download_webpage(url, display_id) + + description = clean_html(self._og_search_description(webpage, default=None)) + if site == 'novaplus': + upload_date = unified_strdate(self._search_regex( + r'(\d{1,2}-\d{1,2}-\d{4})$', display_id, 'upload date', default=None)) + elif site == 'fanda': + upload_date = unified_strdate(self._search_regex( + r'<span class="date_time">(\d{1,2}\.\d{1,2}\.\d{4})', webpage, 'upload date', default=None)) + else: + upload_date = None + + # novaplus + embed_id = self._search_regex( + r'<iframe[^>]+\bsrc=["\'](?:https?:)?//media\.cms\.nova\.cz/embed/([^/?#&]+)', + webpage, 'embed url', default=None) + if embed_id: + return { + '_type': 'url_transparent', + 'url': 'https://media.cms.nova.cz/embed/%s' % embed_id, + 'ie_key': NovaEmbedIE.ie_key(), + 'id': embed_id, + 'description': description, + 'upload_date': upload_date + } + + video_id = self._search_regex( + [r"(?:media|video_id)\s*:\s*'(\d+)'", + r'media=(\d+)', + r'id="article_video_(\d+)"', + r'id="player_(\d+)"'], + webpage, 'video id') + + config_url = self._search_regex( + r'src="(https?://(?:tn|api)\.nova\.cz/bin/player/videojs/config\.php\?[^"]+)"', + webpage, 'config url', default=None) + config_params = {} + + if not config_url: + player = self._parse_json( + self._search_regex( + r'(?s)Player\s*\(.+?\s*,\s*({.+?\bmedia\b["\']?\s*:\s*["\']?\d+.+?})\s*\)', webpage, + 'player', default='{}'), + video_id, transform_source=js_to_json, fatal=False) + if player: + config_url = url_or_none(player.get('configUrl')) + params = player.get('configParams') + if isinstance(params, dict): + config_params = params + + if not config_url: + DEFAULT_SITE_ID = '23000' + SITES = { + 'tvnoviny': DEFAULT_SITE_ID, + 'novaplus': DEFAULT_SITE_ID, + 'vymena': DEFAULT_SITE_ID, + 'krasna': 
DEFAULT_SITE_ID, + 'fanda': '30', + 'tn': '30', + 'doma': '30', + } + + site_id = self._search_regex( + r'site=(\d+)', webpage, 'site id', default=None) or SITES.get( + site, DEFAULT_SITE_ID) + + config_url = 'https://api.nova.cz/bin/player/videojs/config.php' + config_params = { + 'site': site_id, + 'media': video_id, + 'quality': 3, + 'version': 1, + } + + config = self._download_json( + config_url, display_id, + 'Downloading config JSON', query=config_params, + transform_source=lambda s: s[s.index('{'):s.rindex('}') + 1]) + + mediafile = config['mediafile'] + video_url = mediafile['src'] + + m = re.search(r'^(?P<url>rtmpe?://[^/]+/(?P<app>[^/]+?))/&*(?P<playpath>.+)$', video_url) + if m: + formats = [{ + 'url': m.group('url'), + 'app': m.group('app'), + 'play_path': m.group('playpath'), + 'player_path': 'http://tvnoviny.nova.cz/static/shared/app/videojs/video-js.swf', + 'ext': 'flv', + }] + else: + formats = [{ + 'url': video_url, + }] + + title = mediafile.get('meta', {}).get('title') or self._og_search_title(webpage) + thumbnail = config.get('poster') + + return { + 'id': video_id, + 'display_id': display_id, + 'title': title, + 'description': description, + 'upload_date': upload_date, + 'thumbnail': thumbnail, + 'formats': formats, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/novaplay.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/novaplay.py new file mode 100644 index 0000000..d8849cd --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/novaplay.py @@ -0,0 +1,69 @@ +from .common import InfoExtractor +from ..utils import int_or_none, parse_duration, parse_iso8601 + + +class NovaPlayIE(InfoExtractor): + _VALID_URL = r'https://play\.nova\.bg/video/[^?#]+/(?P<id>\d+)' + _TESTS = [ + { + 'url': 'https://play.nova.bg/video/ochakvaite/season-0/ochakvaite-2022-07-22-sybudi-se-sat/606627', + 'md5': 'd79dff2d09d196c595a7290f48e33399', + 'info_dict': { + 'id': '606627', + 'ext': 'mp4', + 'title': 'Събуди се - събота по NOVA (23.07.2022)', + 'alt_title': 'ochakvaite/season-0/ochakvaite-2022-07-22-sybudi-se-sat', + 'duration': 29.0, + 'timestamp': 1658491547, + 'upload_date': '20220722', + 'thumbnail': 'https://nbg-img.fite.tv/img/606627_460x260.jpg', + 'description': '29 сек', + 'view_count': False + }, + }, + { + 'url': 'https://play.nova.bg/video/ochakvaite/season-0/ochakvaite-2022-07-22-cherry-tazi/606609', + 'md5': 'f3e973e2ed1a5b9b3f498b1ab82d01b3', + 'info_dict': { + 'id': '606609', + 'ext': 'mp4', + 'title': 'Черешката на тортата - тази вечер по NOVA (22.07.2022)', + 'alt_title': 'ochakvaite/season-0/ochakvaite-2022-07-22-cherry-tazi', + 'duration': 29.0, + 'timestamp': 1658476303, + 'upload_date': '20220722', + 'thumbnail': 'https://nbg-img.fite.tv/img/606609_460x260.jpg', + 'description': '29 сек', + 'view_count': False + }, + } + ] + + _access_token = None + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + self._access_token = self._access_token or self._download_json( + 'https://play.nova.bg/api/client', None, note='Fetching access token')['accessToken'] + video_props = self._search_nextjs_data(webpage, video_id)['props']['pageProps']['video'] + m3u8_url = self._download_json( + f'https://nbg-api.fite.tv/api/v2/videos/{video_id}/streams', + video_id, headers={ + 'x-flipps-user-agent': 'Flipps/75/9.7', + 'x-flipps-version': '2022-05-17', + 'Authorization': f'Bearer {self._access_token}' + })[0]['links']['play']['href'] + formats = 
self._extract_m3u8_formats(m3u8_url, video_id, 'mp4', m3u8_id='hls') + + return { + 'id': video_id, + 'title': video_props['title'], + 'alt_title': video_props.get('slug'), + 'thumbnail': self._og_search_thumbnail(webpage), + 'description': self._og_search_description(webpage), + 'formats': formats, + 'duration': parse_duration(video_props['duration']), + 'timestamp': parse_iso8601(video_props['published_at']), + 'view_count': int_or_none(video_props['view_count']), + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/nowness.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nowness.py new file mode 100644 index 0000000..a3c29f6 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/nowness.py @@ -0,0 +1,142 @@ +from .brightcove import ( + BrightcoveLegacyIE, + BrightcoveNewIE, +) +from .common import InfoExtractor +from ..compat import compat_str +from ..networking import Request +from ..utils import ExtractorError + + +class NownessBaseIE(InfoExtractor): + def _extract_url_result(self, post): + if post['type'] == 'video': + for media in post['media']: + if media['type'] == 'video': + video_id = media['content'] + source = media['source'] + if source == 'brightcove': + player_code = self._download_webpage( + 'http://www.nowness.com/iframe?id=%s' % video_id, video_id, + note='Downloading player JavaScript', + errnote='Unable to download player JavaScript') + bc_url = BrightcoveLegacyIE._extract_brightcove_url(player_code) + if bc_url: + return self.url_result(bc_url, BrightcoveLegacyIE.ie_key()) + bc_url = BrightcoveNewIE._extract_url(self, player_code) + if bc_url: + return self.url_result(bc_url, BrightcoveNewIE.ie_key()) + raise ExtractorError('Could not find player definition') + elif source == 'vimeo': + return self.url_result('http://vimeo.com/%s' % video_id, 'Vimeo') + elif source == 'youtube': + return self.url_result(video_id, 'Youtube') + elif source == 'cinematique': + # yt-dlp currently doesn't support cinematique + # return self.url_result('http://cinematique.com/embed/%s' % video_id, 'Cinematique') + pass + + def _api_request(self, url, request_path): + display_id = self._match_id(url) + request = Request( + 'http://api.nowness.com/api/' + request_path % display_id, + headers={ + 'X-Nowness-Language': 'zh-cn' if 'cn.nowness.com' in url else 'en-us', + }) + return display_id, self._download_json(request, display_id) + + +class NownessIE(NownessBaseIE): + IE_NAME = 'nowness' + _VALID_URL = r'https?://(?:(?:www|cn)\.)?nowness\.com/(?:story|(?:series|category)/[^/]+)/(?P<id>[^/]+?)(?:$|[?#])' + _TESTS = [{ + 'url': 'https://www.nowness.com/story/candor-the-art-of-gesticulation', + 'md5': '068bc0202558c2e391924cb8cc470676', + 'info_dict': { + 'id': '2520295746001', + 'ext': 'mp4', + 'title': 'Candor: The Art of Gesticulation', + 'description': 'Candor: The Art of Gesticulation', + 'thumbnail': r're:^https?://.*\.jpg', + 'timestamp': 1446745676, + 'upload_date': '20151105', + 'uploader_id': '2385340575001', + }, + 'add_ie': ['BrightcoveNew'], + }, { + 'url': 'https://cn.nowness.com/story/kasper-bjorke-ft-jaakko-eino-kalevi-tnr', + 'md5': 'e79cf125e387216f86b2e0a5b5c63aa3', + 'info_dict': { + 'id': '3716354522001', + 'ext': 'mp4', + 'title': 'Kasper Bjørke ft. Jaakko Eino Kalevi: TNR', + 'description': 'Kasper Bjørke ft. 
Jaakko Eino Kalevi: TNR', + 'thumbnail': r're:^https?://.*\.jpg', + 'timestamp': 1407315371, + 'upload_date': '20140806', + 'uploader_id': '2385340575001', + }, + 'add_ie': ['BrightcoveNew'], + }, { + # vimeo + 'url': 'https://www.nowness.com/series/nowness-picks/jean-luc-godard-supercut', + 'md5': '9a5a6a8edf806407e411296ab6bc2a49', + 'info_dict': { + 'id': '130020913', + 'ext': 'mp4', + 'title': 'Bleu, Blanc, Rouge - A Godard Supercut', + 'description': 'md5:f0ea5f1857dffca02dbd37875d742cec', + 'thumbnail': r're:^https?://.*\.jpg', + 'upload_date': '20150607', + 'uploader': 'Cinema Sem Lei', + 'uploader_id': 'cinemasemlei', + }, + 'add_ie': ['Vimeo'], + }] + + def _real_extract(self, url): + _, post = self._api_request(url, 'post/getBySlug/%s') + return self._extract_url_result(post) + + +class NownessPlaylistIE(NownessBaseIE): + IE_NAME = 'nowness:playlist' + _VALID_URL = r'https?://(?:(?:www|cn)\.)?nowness\.com/playlist/(?P<id>\d+)' + _TEST = { + 'url': 'https://www.nowness.com/playlist/3286/i-guess-thats-why-they-call-it-the-blues', + 'info_dict': { + 'id': '3286', + }, + 'playlist_mincount': 8, + } + + def _real_extract(self, url): + playlist_id, playlist = self._api_request(url, 'post?PlaylistId=%s') + entries = [self._extract_url_result(item) for item in playlist['items']] + return self.playlist_result(entries, playlist_id) + + +class NownessSeriesIE(NownessBaseIE): + IE_NAME = 'nowness:series' + _VALID_URL = r'https?://(?:(?:www|cn)\.)?nowness\.com/series/(?P<id>[^/]+?)(?:$|[?#])' + _TEST = { + 'url': 'https://www.nowness.com/series/60-seconds', + 'info_dict': { + 'id': '60', + 'title': '60 Seconds', + 'description': 'One-minute wisdom in a new NOWNESS series', + }, + 'playlist_mincount': 4, + } + + def _real_extract(self, url): + display_id, series = self._api_request(url, 'series/getBySlug/%s') + entries = [self._extract_url_result(post) for post in series['posts']] + series_title = None + series_description = None + translations = series.get('translations', []) + if translations: + series_title = translations[0].get('title') or translations[0]['seoTitle'] + series_description = translations[0].get('seoDescription') + return self.playlist_result( + entries, compat_str(series['id']), series_title, series_description) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/noz.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/noz.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/noz.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/noz.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/npo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/npo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/npo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/npo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/npr.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/npr.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/npr.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/npr.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/nrk.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nrk.py new file mode 100644 index 0000000..384865a --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/nrk.py @@ -0,0 +1,875 @@ +import itertools +import random +import re + +from .common import InfoExtractor +from ..compat import compat_str +from ..networking.exceptions import 
HTTPError +from ..utils import ( + ExtractorError, + determine_ext, + int_or_none, + parse_duration, + parse_iso8601, + str_or_none, + try_get, + url_or_none, + urljoin, +) + + +class NRKBaseIE(InfoExtractor): + _GEO_COUNTRIES = ['NO'] + _CDN_REPL_REGEX = r'''(?x):// + (?: + nrkod\d{1,2}-httpcache0-47115-cacheod0\.dna\.ip-only\.net/47115-cacheod0| + nrk-od-no\.telenorcdn\.net| + minicdn-od\.nrk\.no/od/nrkhd-osl-rr\.netwerk\.no/no + )/''' + + def _extract_nrk_formats(self, asset_url, video_id): + if re.match(r'https?://[^/]+\.akamaihd\.net/i/', asset_url): + return self._extract_akamai_formats(asset_url, video_id) + asset_url = re.sub(r'(?:bw_(?:low|high)=\d+|no_audio_only)&?', '', asset_url) + formats = self._extract_m3u8_formats( + asset_url, video_id, 'mp4', 'm3u8_native', fatal=False) + if not formats and re.search(self._CDN_REPL_REGEX, asset_url): + formats = self._extract_m3u8_formats( + re.sub(self._CDN_REPL_REGEX, '://nrk-od-%02d.akamaized.net/no/' % random.randint(0, 99), asset_url), + video_id, 'mp4', 'm3u8_native', fatal=False) + return formats + + def _raise_error(self, data): + MESSAGES = { + 'ProgramRightsAreNotReady': 'Du kan dessverre ikke se eller høre programmet', + 'ProgramRightsHasExpired': 'Programmet har gått ut', + 'NoProgramRights': 'Ikke tilgjengelig', + 'ProgramIsGeoBlocked': 'NRK har ikke rettigheter til å vise dette programmet utenfor Norge', + } + message_type = data.get('messageType', '') + # Can be ProgramIsGeoBlocked or ChannelIsGeoBlocked* + if 'IsGeoBlocked' in message_type or try_get(data, lambda x: x['usageRights']['isGeoBlocked']) is True: + self.raise_geo_restricted( + msg=MESSAGES.get('ProgramIsGeoBlocked'), + countries=self._GEO_COUNTRIES) + message = data.get('endUserMessage') or MESSAGES.get(message_type, message_type) + raise ExtractorError('%s said: %s' % (self.IE_NAME, message), expected=True) + + def _call_api(self, path, video_id, item=None, note=None, fatal=True, query=None): + return self._download_json( + urljoin('https://psapi.nrk.no/', path), + video_id, note or 'Downloading %s JSON' % item, + fatal=fatal, query=query) + + +class NRKIE(NRKBaseIE): + _VALID_URL = r'''(?x) + (?: + nrk:| + https?:// + (?: + (?:www\.)?nrk\.no/video/(?:PS\*|[^_]+_)| + v8[-.]psapi\.nrk\.no/mediaelement/ + ) + ) + (?P<id>[^?\#&]+) + ''' + + _TESTS = [{ + # video + 'url': 'http://www.nrk.no/video/PS*150533', + 'md5': 'f46be075326e23ad0e524edfcb06aeb6', + 'info_dict': { + 'id': '150533', + 'ext': 'mp4', + 'title': 'Dompap og andre fugler i Piip-Show', + 'description': 'md5:d9261ba34c43b61c812cb6b0269a5c8f', + 'duration': 262, + } + }, { + # audio + 'url': 'http://www.nrk.no/video/PS*154915', + # MD5 is unstable + 'info_dict': { + 'id': '154915', + 'ext': 'mp4', + 'title': 'Slik høres internett ut når du er blind', + 'description': 'md5:a621f5cc1bd75c8d5104cb048c6b8568', + 'duration': 20, + } + }, { + 'url': 'nrk:ecc1b952-96dc-4a98-81b9-5296dc7a98d9', + 'only_matching': True, + }, { + 'url': 'nrk:clip/7707d5a3-ebe7-434a-87d5-a3ebe7a34a70', + 'only_matching': True, + }, { + 'url': 'https://v8-psapi.nrk.no/mediaelement/ecc1b952-96dc-4a98-81b9-5296dc7a98d9', + 'only_matching': True, + }, { + 'url': 'https://www.nrk.no/video/dompap-og-andre-fugler-i-piip-show_150533', + 'only_matching': True, + }, { + 'url': 'https://www.nrk.no/video/humor/kommentatorboksen-reiser-til-sjos_d1fda11f-a4ad-437a-a374-0398bc84e999', + 'only_matching': True, + }, { + # podcast + 'url': 'nrk:l_96f4f1b0-de54-4e6a-b4f1-b0de54fe6af8', + 'only_matching': True, + }, {
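
(Aside - an illustrative sketch, not part of the diff: NRKBaseIE._extract_nrk_formats above retries a failed m3u8 fetch after rewriting known-flaky CDN hosts to a random nrk-od-NN.akamaized.net mirror. The rewrite reduced to plain re.sub, reusing the host list from _CDN_REPL_REGEX:)

    import random
    import re

    _CDN_REPL_REGEX = r'''(?x)://
        (?:
            nrkod\d{1,2}-httpcache0-47115-cacheod0\.dna\.ip-only\.net/47115-cacheod0|
            nrk-od-no\.telenorcdn\.net|
            minicdn-od\.nrk\.no/od/nrkhd-osl-rr\.netwerk\.no/no
        )/'''

    def rewrite_to_akamai(asset_url):
        # pick one of the 100 numbered akamaized.net mirrors at random
        return re.sub(_CDN_REPL_REGEX,
                      '://nrk-od-%02d.akamaized.net/no/' % random.randint(0, 99),
                      asset_url)

    print(rewrite_to_akamai('https://nrk-od-no.telenorcdn.net/abc/master.m3u8'))

+ 'url':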
'nrk:podcast/l_96f4f1b0-de54-4e6a-b4f1-b0de54fe6af8', + 'only_matching': True, + }, { + # clip + 'url': 'nrk:150533', + 'only_matching': True, + }, { + 'url': 'nrk:clip/150533', + 'only_matching': True, + }, { + # program + 'url': 'nrk:MDDP12000117', + 'only_matching': True, + }, { + 'url': 'nrk:program/ENRK10100318', + 'only_matching': True, + }, { + # direkte + 'url': 'nrk:nrk1', + 'only_matching': True, + }, { + 'url': 'nrk:channel/nrk1', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url).split('/')[-1] + + def call_playback_api(item, query=None): + try: + return self._call_api(f'playback/{item}/program/{video_id}', video_id, item, query=query) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 400: + return self._call_api(f'playback/{item}/{video_id}', video_id, item, query=query) + raise + + # known values for preferredCdn: akamai, iponly, minicdn and telenor + manifest = call_playback_api('manifest', {'preferredCdn': 'akamai'}) + + video_id = try_get(manifest, lambda x: x['id'], compat_str) or video_id + + if manifest.get('playability') == 'nonPlayable': + self._raise_error(manifest['nonPlayable']) + + playable = manifest['playable'] + + formats = [] + for asset in playable['assets']: + if not isinstance(asset, dict): + continue + if asset.get('encrypted'): + continue + format_url = url_or_none(asset.get('url')) + if not format_url: + continue + asset_format = (asset.get('format') or '').lower() + if asset_format == 'hls' or determine_ext(format_url) == 'm3u8': + formats.extend(self._extract_nrk_formats(format_url, video_id)) + elif asset_format == 'mp3': + formats.append({ + 'url': format_url, + 'format_id': asset_format, + 'vcodec': 'none', + }) + + data = call_playback_api('metadata') + + preplay = data['preplay'] + titles = preplay['titles'] + title = titles['title'] + alt_title = titles.get('subtitle') + + description = try_get(preplay, lambda x: x['description'].replace('\r', '\n')) + duration = parse_duration(playable.get('duration')) or parse_duration(data.get('duration')) + + thumbnails = [] + for image in try_get( + preplay, lambda x: x['poster']['images'], list) or []: + if not isinstance(image, dict): + continue + image_url = url_or_none(image.get('url')) + if not image_url: + continue + thumbnails.append({ + 'url': image_url, + 'width': int_or_none(image.get('pixelWidth')), + 'height': int_or_none(image.get('pixelHeight')), + }) + + subtitles = {} + for sub in try_get(playable, lambda x: x['subtitles'], list) or []: + if not isinstance(sub, dict): + continue + sub_url = url_or_none(sub.get('webVtt')) + if not sub_url: + continue + sub_key = str_or_none(sub.get('language')) or 'nb' + sub_type = str_or_none(sub.get('type')) + if sub_type: + sub_key += '-%s' % sub_type + subtitles.setdefault(sub_key, []).append({ + 'url': sub_url, + }) + + legal_age = try_get( + data, lambda x: x['legalAge']['body']['rating']['code'], compat_str) + # https://en.wikipedia.org/wiki/Norwegian_Media_Authority + age_limit = None + if legal_age: + if legal_age == 'A': + age_limit = 0 + elif legal_age.isdigit(): + age_limit = int_or_none(legal_age) + + is_series = try_get(data, lambda x: x['_links']['series']['name']) == 'series' + + info = { + 'id': video_id, + 'title': title, + 'alt_title': alt_title, + 'description': description, + 'duration': duration, + 'thumbnails': thumbnails, + 'age_limit': age_limit, + 'formats': formats, + 'subtitles': subtitles, + 'timestamp': parse_iso8601(try_get(manifest, lambda x: 
x['availability']['onDemand']['from'], str)) + } + + if is_series: + series = season_id = season_number = episode = episode_number = None + programs = self._call_api( + 'programs/%s' % video_id, video_id, 'programs', fatal=False) + if programs and isinstance(programs, dict): + series = str_or_none(programs.get('seriesTitle')) + season_id = str_or_none(programs.get('seasonId')) + season_number = int_or_none(programs.get('seasonNumber')) + episode = str_or_none(programs.get('episodeTitle')) + episode_number = int_or_none(programs.get('episodeNumber')) + if not series: + series = title + if alt_title: + title += ' - %s' % alt_title + if not season_number: + season_number = int_or_none(self._search_regex( + r'Sesong\s+(\d+)', description or '', 'season number', + default=None)) + if not episode: + episode = alt_title if is_series else None + if not episode_number: + episode_number = int_or_none(self._search_regex( + r'^(\d+)\.', episode or '', 'episode number', + default=None)) + if not episode_number: + episode_number = int_or_none(self._search_regex( + r'\((\d+)\s*:\s*\d+\)', description or '', + 'episode number', default=None)) + info.update({ + 'title': title, + 'series': series, + 'season_id': season_id, + 'season_number': season_number, + 'episode': episode, + 'episode_number': episode_number, + }) + + return info + + +class NRKTVIE(InfoExtractor): + IE_DESC = 'NRK TV and NRK Radio' + _EPISODE_RE = r'(?P<id>[a-zA-Z]{4}\d{8})' + _VALID_URL = r'https?://(?:tv|radio)\.nrk(?:super)?\.no/(?:[^/]+/)*%s' % _EPISODE_RE + _TESTS = [{ + 'url': 'https://tv.nrk.no/program/MDDP12000117', + 'md5': 'c4a5960f1b00b40d47db65c1064e0ab1', + 'info_dict': { + 'id': 'MDDP12000117', + 'ext': 'mp4', + 'title': 'Alarm Trolltunga', + 'description': 'md5:46923a6e6510eefcce23d5ef2a58f2ce', + 'duration': 2223.44, + 'age_limit': 6, + 'subtitles': { + 'nb-nor': [{ + 'ext': 'vtt', + }], + 'nb-ttv': [{ + 'ext': 'vtt', + }] + }, + }, + }, { + 'url': 'https://tv.nrk.no/serie/20-spoersmaal-tv/MUHH48000314/23-05-2014', + 'md5': '8d40dab61cea8ab0114e090b029a0565', + 'info_dict': { + 'id': 'MUHH48000314', + 'ext': 'mp4', + 'title': '20 spørsmål - 23. mai 2014', + 'alt_title': '23. mai 2014', + 'description': 'md5:bdea103bc35494c143c6a9acdd84887a', + 'duration': 1741, + 'series': '20 spørsmål', + 'episode': '23. 
mai 2014', + 'age_limit': 0, + }, + }, { + 'url': 'https://tv.nrk.no/program/mdfp15000514', + 'info_dict': { + 'id': 'MDFP15000514', + 'ext': 'mp4', + 'title': 'Kunnskapskanalen - Grunnlovsjubiléet - Stor ståhei for ingenting', + 'description': 'md5:89290c5ccde1b3a24bb8050ab67fe1db', + 'duration': 4605.08, + 'series': 'Kunnskapskanalen', + 'episode': 'Grunnlovsjubiléet - Stor ståhei for ingenting', + 'age_limit': 0, + }, + 'params': { + 'skip_download': True, + }, + }, { + # single playlist video + 'url': 'https://tv.nrk.no/serie/tour-de-ski/MSPO40010515/06-01-2015#del=2', + 'info_dict': { + 'id': 'MSPO40010515', + 'ext': 'mp4', + 'title': 'Sprint fri teknikk, kvinner og menn 06.01.2015', + 'description': 'md5:c03aba1e917561eface5214020551b7a', + 'age_limit': 0, + }, + 'params': { + 'skip_download': True, + }, + 'expected_warnings': ['Failed to download m3u8 information'], + 'skip': 'particular part is not supported currently', + }, { + 'url': 'https://tv.nrk.no/serie/tour-de-ski/MSPO40010515/06-01-2015', + 'info_dict': { + 'id': 'MSPO40010515', + 'ext': 'mp4', + 'title': 'Sprint fri teknikk, kvinner og menn 06.01.2015', + 'description': 'md5:c03aba1e917561eface5214020551b7a', + 'age_limit': 0, + }, + 'expected_warnings': ['Failed to download m3u8 information'], + 'skip': 'Ikke tilgjengelig utenfor Norge', + }, { + 'url': 'https://tv.nrk.no/serie/anno/KMTE50001317/sesong-3/episode-13', + 'info_dict': { + 'id': 'KMTE50001317', + 'ext': 'mp4', + 'title': 'Anno - 13. episode', + 'description': 'md5:11d9613661a8dbe6f9bef54e3a4cbbfa', + 'duration': 2340, + 'series': 'Anno', + 'episode': '13. episode', + 'season_number': 3, + 'episode_number': 13, + 'age_limit': 0, + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'https://tv.nrk.no/serie/nytt-paa-nytt/MUHH46000317/27-01-2017', + 'info_dict': { + 'id': 'MUHH46000317', + 'ext': 'mp4', + 'title': 'Nytt på Nytt 27.01.2017', + 'description': 'md5:5358d6388fba0ea6f0b6d11c48b9eb4b', + 'duration': 1796, + 'series': 'Nytt på nytt', + 'episode': '27.01.2017', + 'age_limit': 0, + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'ProgramRightsHasExpired', + }, { + 'url': 'https://radio.nrk.no/serie/dagsnytt/NPUB21019315/12-07-2015#', + 'only_matching': True, + }, { + 'url': 'https://tv.nrk.no/serie/lindmo/2018/MUHU11006318/avspiller', + 'only_matching': True, + }, { + 'url': 'https://radio.nrk.no/serie/dagsnytt/sesong/201507/NPUB21019315', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + return self.url_result( + 'nrk:%s' % video_id, ie=NRKIE.ie_key(), video_id=video_id) + + +class NRKTVEpisodeIE(InfoExtractor): + _VALID_URL = r'https?://tv\.nrk\.no/serie/(?P<id>[^/]+/sesong/(?P<season_number>\d+)/episode/(?P<episode_number>\d+))' + _TESTS = [{ + 'url': 'https://tv.nrk.no/serie/hellums-kro/sesong/1/episode/2', + 'info_dict': { + 'id': 'MUHH36005220', + 'ext': 'mp4', + 'title': 'Hellums kro - 2. Kro, krig og kjærlighet', + 'description': 'md5:ad92ddffc04cea8ce14b415deef81787', + 'duration': 1563.92, + 'series': 'Hellums kro', + 'season_number': 1, + 'episode_number': 2, + 'episode': '2. Kro, krig og kjærlighet', + 'age_limit': 6, + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'https://tv.nrk.no/serie/backstage/sesong/1/episode/8', + 'info_dict': { + 'id': 'MSUI14000816', + 'ext': 'mp4', + 'title': 'Backstage - 8. 
episode', + 'description': 'md5:de6ca5d5a2d56849e4021f2bf2850df4', + 'duration': 1320, + 'series': 'Backstage', + 'season_number': 1, + 'episode_number': 8, + 'episode': '8. episode', + 'age_limit': 0, + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'ProgramRightsHasExpired', + }] + + def _real_extract(self, url): + display_id, season_number, episode_number = self._match_valid_url(url).groups() + + webpage = self._download_webpage(url, display_id) + + info = self._search_json_ld(webpage, display_id, default={}) + nrk_id = info.get('@id') or self._html_search_meta( + 'nrk:program-id', webpage, default=None) or self._search_regex( + r'data-program-id=["\'](%s)' % NRKTVIE._EPISODE_RE, webpage, + 'nrk id') + assert re.match(NRKTVIE._EPISODE_RE, nrk_id) + + info.update({ + '_type': 'url', + 'id': nrk_id, + 'url': 'nrk:%s' % nrk_id, + 'ie_key': NRKIE.ie_key(), + 'season_number': int(season_number), + 'episode_number': int(episode_number), + }) + return info + + +class NRKTVSerieBaseIE(NRKBaseIE): + def _extract_entries(self, entry_list): + if not isinstance(entry_list, list): + return [] + entries = [] + for episode in entry_list: + nrk_id = episode.get('prfId') or episode.get('episodeId') + if not nrk_id or not isinstance(nrk_id, compat_str): + continue + entries.append(self.url_result( + 'nrk:%s' % nrk_id, ie=NRKIE.ie_key(), video_id=nrk_id)) + return entries + + _ASSETS_KEYS = ('episodes', 'instalments',) + + def _extract_assets_key(self, embedded): + for asset_key in self._ASSETS_KEYS: + if embedded.get(asset_key): + return asset_key + + @staticmethod + def _catalog_name(serie_kind): + return 'podcast' if serie_kind in ('podcast', 'podkast') else 'series' + + def _entries(self, data, display_id): + for page_num in itertools.count(1): + embedded = data.get('_embedded') or data + if not isinstance(embedded, dict): + break + assets_key = self._extract_assets_key(embedded) + if not assets_key: + break + # Extract entries + entries = try_get( + embedded, + (lambda x: x[assets_key]['_embedded'][assets_key], + lambda x: x[assets_key]), + list) + for e in self._extract_entries(entries): + yield e + # Find next URL + next_url_path = try_get( + data, + (lambda x: x['_links']['next']['href'], + lambda x: x['_embedded'][assets_key]['_links']['next']['href']), + compat_str) + if not next_url_path: + break + data = self._call_api( + next_url_path, display_id, + note='Downloading %s JSON page %d' % (assets_key, page_num), + fatal=False) + if not data: + break + + +class NRKTVSeasonIE(NRKTVSerieBaseIE): + _VALID_URL = r'''(?x) + https?:// + (?P<domain>tv|radio)\.nrk\.no/ + (?P<serie_kind>serie|pod[ck]ast)/ + (?P<serie>[^/]+)/ + (?: + (?:sesong/)?(?P<id>\d+)| + sesong/(?P<id_2>[^/?#&]+) + ) + ''' + _TESTS = [{ + 'url': 'https://tv.nrk.no/serie/backstage/sesong/1', + 'info_dict': { + 'id': 'backstage/1', + 'title': 'Sesong 1', + }, + 'playlist_mincount': 30, + }, { + # no /sesong/ in path + 'url': 'https://tv.nrk.no/serie/lindmo/2016', + 'info_dict': { + 'id': 'lindmo/2016', + 'title': '2016', + }, + 'playlist_mincount': 29, + }, { + # weird nested _embedded in catalog JSON response + 'url': 'https://radio.nrk.no/serie/dickie-dick-dickens/sesong/1', + 'info_dict': { + 'id': 'dickie-dick-dickens/1', + 'title': 'Sesong 1', + }, + 'playlist_mincount': 11, + }, { + # 841 entries, multi page + 'url': 'https://radio.nrk.no/serie/dagsnytt/sesong/201509', + 'info_dict': { + 'id': 'dagsnytt/201509', + 'title': 'September 2015', + }, + 'playlist_mincount': 841, + }, { + # 180 entries, single page + 'url': 
'https://tv.nrk.no/serie/spangas/sesong/1', + 'only_matching': True, + }, { + 'url': 'https://radio.nrk.no/podkast/hele_historien/sesong/diagnose-kverulant', + 'info_dict': { + 'id': 'hele_historien/diagnose-kverulant', + 'title': 'Diagnose kverulant', + }, + 'playlist_mincount': 3, + }, { + 'url': 'https://radio.nrk.no/podkast/loerdagsraadet/sesong/202101', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return (False if NRKTVIE.suitable(url) or NRKTVEpisodeIE.suitable(url) or NRKRadioPodkastIE.suitable(url) + else super(NRKTVSeasonIE, cls).suitable(url)) + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + domain = mobj.group('domain') + serie_kind = mobj.group('serie_kind') + serie = mobj.group('serie') + season_id = mobj.group('id') or mobj.group('id_2') + display_id = '%s/%s' % (serie, season_id) + + data = self._call_api( + '%s/catalog/%s/%s/seasons/%s' + % (domain, self._catalog_name(serie_kind), serie, season_id), + display_id, 'season', query={'pageSize': 50}) + + title = try_get(data, lambda x: x['titles']['title'], compat_str) or display_id + return self.playlist_result( + self._entries(data, display_id), + display_id, title) + + +class NRKTVSeriesIE(NRKTVSerieBaseIE): + _VALID_URL = r'https?://(?P<domain>(?:tv|radio)\.nrk|(?:tv\.)?nrksuper)\.no/(?P<serie_kind>serie|pod[ck]ast)/(?P<id>[^/]+)' + _TESTS = [{ + # new layout, instalments + 'url': 'https://tv.nrk.no/serie/groenn-glede', + 'info_dict': { + 'id': 'groenn-glede', + 'title': 'Grønn glede', + 'description': 'md5:7576e92ae7f65da6993cf90ee29e4608', + }, + 'playlist_mincount': 90, + }, { + # new layout, instalments, more entries + 'url': 'https://tv.nrk.no/serie/lindmo', + 'only_matching': True, + }, { + 'url': 'https://tv.nrk.no/serie/blank', + 'info_dict': { + 'id': 'blank', + 'title': 'Blank', + 'description': 'md5:7664b4e7e77dc6810cd3bca367c25b6e', + }, + 'playlist_mincount': 30, + }, { + # new layout, seasons + 'url': 'https://tv.nrk.no/serie/backstage', + 'info_dict': { + 'id': 'backstage', + 'title': 'Backstage', + 'description': 'md5:63692ceb96813d9a207e9910483d948b', + }, + 'playlist_mincount': 60, + }, { + # old layout + 'url': 'https://tv.nrksuper.no/serie/labyrint', + 'info_dict': { + 'id': 'labyrint', + 'title': 'Labyrint', + 'description': 'I Daidalos sin undersjøiske Labyrint venter spennende oppgaver, skumle robotskapninger og slim.', + }, + 'playlist_mincount': 3, + }, { + 'url': 'https://tv.nrk.no/serie/broedrene-dal-og-spektralsteinene', + 'only_matching': True, + }, { + 'url': 'https://tv.nrk.no/serie/saving-the-human-race', + 'only_matching': True, + }, { + 'url': 'https://tv.nrk.no/serie/postmann-pat', + 'only_matching': True, + }, { + 'url': 'https://radio.nrk.no/serie/dickie-dick-dickens', + 'info_dict': { + 'id': 'dickie-dick-dickens', + 'title': 'Dickie Dick Dickens', + 'description': 'md5:19e67411ffe57f7dce08a943d7a0b91f', + }, + 'playlist_mincount': 8, + }, { + 'url': 'https://nrksuper.no/serie/labyrint', + 'only_matching': True, + }, { + 'url': 'https://radio.nrk.no/podkast/ulrikkes_univers', + 'info_dict': { + 'id': 'ulrikkes_univers', + }, + 'playlist_mincount': 10, + }, { + 'url': 'https://radio.nrk.no/podkast/ulrikkes_univers/nrkno-poddkast-26588-134079-05042018030000', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return ( + False if any(ie.suitable(url) + for ie in (NRKTVIE, NRKTVEpisodeIE, NRKRadioPodkastIE, NRKTVSeasonIE)) + else super(NRKTVSeriesIE, cls).suitable(url)) + + def _real_extract(self, url): + site, 
serie_kind, series_id = self._match_valid_url(url).groups() + is_radio = site == 'radio.nrk' + domain = 'radio' if is_radio else 'tv' + + size_prefix = 'p' if is_radio else 'embeddedInstalmentsP' + series = self._call_api( + '%s/catalog/%s/%s' + % (domain, self._catalog_name(serie_kind), series_id), + series_id, 'serie', query={size_prefix + 'ageSize': 50}) + titles = try_get(series, [ + lambda x: x['titles'], + lambda x: x[x['type']]['titles'], + lambda x: x[x['seriesType']]['titles'], + ]) or {} + + entries = [] + entries.extend(self._entries(series, series_id)) + embedded = series.get('_embedded') or {} + linked_seasons = try_get(series, lambda x: x['_links']['seasons']) or [] + embedded_seasons = embedded.get('seasons') or [] + if len(linked_seasons) > len(embedded_seasons): + for season in linked_seasons: + season_url = urljoin(url, season.get('href')) + if not season_url: + season_name = season.get('name') + if season_name and isinstance(season_name, compat_str): + season_url = 'https://%s.nrk.no/serie/%s/sesong/%s' % (domain, series_id, season_name) + if season_url: + entries.append(self.url_result( + season_url, ie=NRKTVSeasonIE.ie_key(), + video_title=season.get('title'))) + else: + for season in embedded_seasons: + entries.extend(self._entries(season, series_id)) + entries.extend(self._entries( + embedded.get('extraMaterial') or {}, series_id)) + + return self.playlist_result( + entries, series_id, titles.get('title'), titles.get('subtitle')) + + +class NRKTVDirekteIE(NRKTVIE): # XXX: Do not subclass from concrete IE + IE_DESC = 'NRK TV Direkte and NRK Radio Direkte' + _VALID_URL = r'https?://(?:tv|radio)\.nrk\.no/direkte/(?P<id>[^/?#&]+)' + + _TESTS = [{ + 'url': 'https://tv.nrk.no/direkte/nrk1', + 'only_matching': True, + }, { + 'url': 'https://radio.nrk.no/direkte/p1_oslo_akershus', + 'only_matching': True, + }] + + +class NRKRadioPodkastIE(InfoExtractor): + _VALID_URL = r'https?://radio\.nrk\.no/pod[ck]ast/(?:[^/]+/)+(?P<id>l_[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})' + + _TESTS = [{ + 'url': 'https://radio.nrk.no/podkast/ulrikkes_univers/l_96f4f1b0-de54-4e6a-b4f1-b0de54fe6af8', + 'md5': '8d40dab61cea8ab0114e090b029a0565', + 'info_dict': { + 'id': 'MUHH48000314AA', + 'ext': 'mp4', + 'title': '20 spørsmål 23.05.2014', + 'description': 'md5:bdea103bc35494c143c6a9acdd84887a', + 'duration': 1741, + 'series': '20 spørsmål', + 'episode': '23.05.2014', + }, + }, { + 'url': 'https://radio.nrk.no/podcast/ulrikkes_univers/l_96f4f1b0-de54-4e6a-b4f1-b0de54fe6af8', + 'only_matching': True, + }, { + 'url': 'https://radio.nrk.no/podkast/ulrikkes_univers/sesong/1/l_96f4f1b0-de54-4e6a-b4f1-b0de54fe6af8', + 'only_matching': True, + }, { + 'url': 'https://radio.nrk.no/podkast/hele_historien/sesong/bortfoert-i-bergen/l_774d1a2c-7aa7-4965-8d1a-2c7aa7d9652c', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + return self.url_result( + 'nrk:%s' % video_id, ie=NRKIE.ie_key(), video_id=video_id) + + +class NRKPlaylistBaseIE(InfoExtractor): + def _extract_description(self, webpage): + pass + + def _real_extract(self, url): + playlist_id = self._match_id(url) + + webpage = self._download_webpage(url, playlist_id) + + entries = [ + self.url_result('nrk:%s' % video_id, NRKIE.ie_key()) + for video_id in re.findall(self._ITEM_RE, webpage) + ] + + playlist_title = self._extract_title(webpage) + playlist_description = self._extract_description(webpage) + + return self.playlist_result( + entries, playlist_id,
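
(Aside - an illustrative sketch, not part of the diff: the season/series classes above walk NRK's HAL-style catalog API - read the embedded asset list, follow _links.next, repeat. The loop from NRKTVSerieBaseIE._entries reduced to a stand-alone function with a stubbed page fetcher:)

    import itertools

    def entries(data, fetch_page):
        for page_num in itertools.count(1):
            embedded = data.get('_embedded') or data
            for episode in embedded.get('episodes') or []:
                yield episode['prfId']
            next_href = ((data.get('_links') or {}).get('next') or {}).get('href')
            if not next_href:
                break
            data = fetch_page(next_href, page_num)

    pages = {'/p2': {'episodes': [{'prfId': 'b'}]}}
    first = {'episodes': [{'prfId': 'a'}], '_links': {'next': {'href': '/p2'}}}
    assert list(entries(first, lambda href, n: pages[href])) == ['a', 'b']

playlist_title,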
playlist_description) + + +class NRKPlaylistIE(NRKPlaylistBaseIE): + _VALID_URL = r'https?://(?:www\.)?nrk\.no/(?!video|skole)(?:[^/]+/)+(?P<id>[^/]+)' + _ITEM_RE = r'class="[^"]*\brich\b[^"]*"[^>]+data-video-id="([^"]+)"' + _TESTS = [{ + 'url': 'http://www.nrk.no/troms/gjenopplev-den-historiske-solformorkelsen-1.12270763', + 'info_dict': { + 'id': 'gjenopplev-den-historiske-solformorkelsen-1.12270763', + 'title': 'Gjenopplev den historiske solformørkelsen', + 'description': 'md5:c2df8ea3bac5654a26fc2834a542feed', + }, + 'playlist_count': 2, + }, { + 'url': 'http://www.nrk.no/kultur/bok/rivertonprisen-til-karin-fossum-1.12266449', + 'info_dict': { + 'id': 'rivertonprisen-til-karin-fossum-1.12266449', + 'title': 'Rivertonprisen til Karin Fossum', + 'description': 'Første kvinne på 15 år til å vinne krimlitteraturprisen.', + }, + 'playlist_count': 2, + }] + + def _extract_title(self, webpage): + return self._og_search_title(webpage, fatal=False) + + def _extract_description(self, webpage): + return self._og_search_description(webpage) + + +class NRKTVEpisodesIE(NRKPlaylistBaseIE): + _VALID_URL = r'https?://tv\.nrk\.no/program/[Ee]pisodes/[^/]+/(?P<id>\d+)' + _ITEM_RE = r'data-episode=["\']%s' % NRKTVIE._EPISODE_RE + _TESTS = [{ + 'url': 'https://tv.nrk.no/program/episodes/nytt-paa-nytt/69031', + 'info_dict': { + 'id': '69031', + 'title': 'Nytt på nytt, sesong: 201210', + }, + 'playlist_count': 4, + }] + + def _extract_title(self, webpage): + return self._html_search_regex( + r'<h1>([^<]+)</h1>', webpage, 'title', fatal=False) + + +class NRKSkoleIE(InfoExtractor): + IE_DESC = 'NRK Skole' + _VALID_URL = r'https?://(?:www\.)?nrk\.no/skole/?\?.*\bmediaId=(?P<id>\d+)' + + _TESTS = [{ + 'url': 'https://www.nrk.no/skole/?page=search&q=&mediaId=14099', + 'md5': '18c12c3d071953c3bf8d54ef6b2587b7', + 'info_dict': { + 'id': '6021', + 'ext': 'mp4', + 'title': 'Genetikk og eneggede tvillinger', + 'description': 'md5:3aca25dcf38ec30f0363428d2b265f8d', + 'duration': 399, + }, + }, { + 'url': 'https://www.nrk.no/skole/?page=objectives&subject=naturfag&objective=K15114&mediaId=19355', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + nrk_id = self._download_json( + 'https://nrkno-skole-prod.kube.nrk.no/skole/api/media/%s' % video_id, + video_id)['psId'] + + return self.url_result('nrk:%s' % nrk_id) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nrl.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nrl.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nrl.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nrl.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ntvcojp.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ntvcojp.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ntvcojp.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ntvcojp.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ntvde.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ntvde.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ntvde.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ntvde.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ntvru.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ntvru.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ntvru.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ntvru.py
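
(Aside - an illustrative sketch, not part of the diff: NRKSkoleIE above is a pure two-step resolver - look the page's mediaId up in the Skole API, read its psId, and delegate the canonical nrk:<psId> URL to NRKIE. The lookup with plain stdlib I/O:)

    import json
    import urllib.request

    def resolve_skole_media(media_id):
        # endpoint shape copied from the extractor above
        url = f'https://nrkno-skole-prod.kube.nrk.no/skole/api/media/{media_id}'
        with urllib.request.urlopen(url) as resp:
            return 'nrk:%s' % json.load(resp)['psId']

diff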
--git a/python/lib/python3.10/site-packages/yt_dlp/extractor/nubilesporn.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nubilesporn.py new file mode 100644 index 0000000..1d630f5 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/nubilesporn.py @@ -0,0 +1,99 @@ +import re + +from .common import InfoExtractor +from ..utils import ( + clean_html, + float_or_none, + format_field, + get_element_by_class, + get_element_by_id, + get_element_html_by_class, + get_elements_by_class, + int_or_none, + try_call, + unified_timestamp, + urlencode_postdata, +) + + +class NubilesPornIE(InfoExtractor): + _NETRC_MACHINE = 'nubiles-porn' + _VALID_URL = r'''(?x) + https://members\.nubiles-porn\.com/video/watch/(?P<id>\d+) + (?:/(?P<display_id>[\w\-]+-s(?P<season>\d+)e(?P<episode>\d+)))? + ''' + + _TESTS = [{ + 'url': 'https://members.nubiles-porn.com/video/watch/165320/trying-to-focus-my-one-track-mind-s3e1', + 'md5': 'fa7f09da8027c35e4bdf0f94f55eac82', + 'info_dict': { + 'id': '165320', + 'title': 'Trying To Focus My One Track Mind - S3:E1', + 'ext': 'mp4', + 'display_id': 'trying-to-focus-my-one-track-mind-s3e1', + 'thumbnail': 'https://images.nubiles-porn.com/videos/trying_to_focus_my_one_track_mind/samples/cover1280.jpg', + 'description': 'md5:81f3d4372e0e39bff5c801da277a5141', + 'timestamp': 1676160000, + 'upload_date': '20230212', + 'channel': 'Younger Mommy', + 'channel_id': '64', + 'channel_url': 'https://members.nubiles-porn.com/video/website/64', + 'like_count': int, + 'average_rating': float, + 'age_limit': 18, + 'categories': ['Big Boobs', 'Big Naturals', 'Blowjob', 'Brunette', 'Cowgirl', 'Girl Orgasm', 'Girl-Boy', + 'Glasses', 'Hardcore', 'Milf', 'Shaved Pussy', 'Tattoos', 'YoungerMommy.com'], + 'tags': list, + 'cast': ['Kenzie Love'], + 'availability': 'needs_auth', + 'series': 'Younger Mommy', + 'series_id': '64', + 'season': 'Season 3', + 'season_number': 3, + 'episode': 'Episode 1', + 'episode_number': 1 + } + }] + + def _perform_login(self, username, password): + login_webpage = self._download_webpage('https://nubiles-porn.com/login', video_id=None) + inputs = self._hidden_inputs(login_webpage) + inputs.update({'username': username, 'password': password}) + self._request_webpage('https://nubiles-porn.com/authentication/login', None, data=urlencode_postdata(inputs)) + + def _real_extract(self, url): + url_match = self._match_valid_url(url) + video_id = url_match.group('id') + page = self._download_webpage(url, video_id) + + media_entries = self._parse_html5_media_entries( + url, get_element_by_class('watch-page-video-wrapper', page), video_id)[0] + + channel_id, channel_name = self._search_regex( + r'/video/website/(?P<id>\d+).+>(?P<name>\w+).com', get_element_html_by_class('site-link', page), + 'channel', fatal=False, group=('id', 'name')) or (None, None) + channel_name = re.sub(r'([^A-Z]+)([A-Z]+)', r'\1 \2', channel_name) + + return { + 'id': video_id, + 'title': self._search_regex('<h2>([^<]+)</h2>', page, 'title', fatal=False), + 'formats': media_entries.get('formats'), + 'display_id': url_match.group('display_id'), + 'thumbnail': media_entries.get('thumbnail'), + 'description': clean_html(get_element_html_by_class('content-pane-description', page)), + 'timestamp': unified_timestamp(get_element_by_class('date', page)), + 'channel': channel_name, + 'channel_id': channel_id, + 'channel_url': format_field(channel_id, None, 'https://members.nubiles-porn.com/video/website/%s'), + 'like_count': int_or_none(get_element_by_id('likecount', page)), + 
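
(Aside - an illustrative check, not part of the diff: the re.sub above de-camel-cases the site name - e.g. "YoungerMommy" becomes "Younger Mommy" - by inserting a space between each run of non-capitals and the following run of capitals:)

    import re

    def split_camel(name):
        return re.sub(r'([^A-Z]+)([A-Z]+)', r'\1 \2', name)

    assert split_camel('YoungerMommy') == 'Younger Mommy'
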
'average_rating': float_or_none(get_element_by_class('score', page)), + 'age_limit': 18, + 'categories': try_call(lambda: list(map(clean_html, get_elements_by_class('btn', get_element_by_class('categories', page))))), + 'tags': try_call(lambda: list(map(clean_html, get_elements_by_class('btn', get_elements_by_class('tags', page)[1])))), + 'cast': get_elements_by_class('content-pane-performer', page), + 'availability': 'needs_auth', + 'series': channel_name, + 'series_id': channel_id, + 'season_number': int_or_none(url_match.group('season')), + 'episode_number': int_or_none(url_match.group('episode')) + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nuevo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nuevo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nuevo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nuevo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nuvid.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nuvid.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nuvid.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nuvid.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nytimes.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nytimes.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nytimes.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nytimes.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nzherald.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nzherald.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nzherald.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nzherald.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nzonscreen.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nzonscreen.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nzonscreen.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nzonscreen.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/nzz.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/nzz.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/nzz.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/nzz.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/odatv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/odatv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/odatv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/odatv.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/odkmedia.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/odkmedia.py new file mode 100644 index 0000000..b852160 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/odkmedia.py @@ -0,0 +1,105 @@ +import json + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + GeoRestrictedError, + float_or_none, + traverse_obj, + try_call +) + + +class OnDemandChinaEpisodeIE(InfoExtractor): + _VALID_URL = r'https?://www\.ondemandchina\.com/\w+/watch/(?P<series>[\w-]+)/(?P<id>ep-(?P<ep>\d+))' + _TESTS = [{ + 'url': 'https://www.ondemandchina.com/en/watch/together-against-covid-19/ep-1', + 'info_dict': { + 'id': '264394', + 'ext': 'mp4', + 'duration': 3256.88, + 'title': 'EP 1 The Calling', + 
'alt_title': '第1集 令出如山', + 'thumbnail': 'https://d2y2efdi5wgkcl.cloudfront.net/fit-in/256x256/media-io/2020/9/11/image.d9816e81.jpg', + 'description': '疫情严峻,党政军民学、东西南北中协同应考', + 'tags': ['Social Humanities', 'Documentary', 'Medical', 'Social'], + } + }] + + _QUERY = ''' + query Episode($programSlug: String!, $episodeNumber: Int!) { + episode( + programSlug: $programSlug + episodeNumber: $episodeNumber + kind: "series" + part: null + ) { + id + title + titleEn + titleKo + titleZhHans + titleZhHant + synopsis + synopsisEn + synopsisKo + synopsisZhHans + synopsisZhHant + videoDuration + images { + thumbnail + } + } + }''' + + def _real_extract(self, url): + program_slug, display_id, ep_number = self._match_valid_url(url).group('series', 'id', 'ep') + webpage = self._download_webpage(url, display_id) + + video_info = self._download_json( + 'https://odc-graphql.odkmedia.io/graphql', display_id, + headers={'Content-type': 'application/json'}, + data=json.dumps({ + 'operationName': 'Episode', + 'query': self._QUERY, + 'variables': { + 'programSlug': program_slug, + 'episodeNumber': int(ep_number), + }, + }).encode())['data']['episode'] + + try: + source_json = self._download_json( + f'https://odkmedia.io/odc/api/v2/playback/{video_info["id"]}/', display_id, + headers={'Authorization': '', 'service-name': 'odc'}) + except ExtractorError as e: + if isinstance(e.cause, HTTPError): + error_data = self._parse_json(e.cause.response.read(), display_id)['detail'] + raise GeoRestrictedError(error_data) + + formats, subtitles = [], {} + for source in traverse_obj(source_json, ('sources', ...)): + if source.get('type') == 'hls': + fmts, subs = self._extract_m3u8_formats_and_subtitles(source.get('url'), display_id) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + else: + self.report_warning(f'Unsupported format {source.get("type")}', display_id) + + return { + 'id': str(video_info['id']), + 'duration': float_or_none(video_info.get('videoDuration'), 1000), + 'thumbnail': (traverse_obj(video_info, ('images', 'thumbnail')) + or self._html_search_meta(['og:image', 'twitter:image'], webpage)), + 'title': (traverse_obj(video_info, 'title', 'titleEn') + or self._html_search_meta(['og:title', 'twitter:title'], webpage) + or self._html_extract_title(webpage)), + 'alt_title': traverse_obj(video_info, 'titleKo', 'titleZhHans', 'titleZhHant'), + 'description': (traverse_obj( + video_info, 'synopsisEn', 'synopsisKo', 'synopsisZhHans', 'synopsisZhHant', 'synopisis') + or self._html_search_meta(['og:description', 'twitter:description', 'description'], webpage)), + 'formats': formats, + 'subtitles': subtitles, + 'tags': try_call(lambda: self._html_search_meta('keywords', webpage).split(', ')) + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/odnoklassniki.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/odnoklassniki.py new file mode 100644 index 0000000..1be45d8 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/odnoklassniki.py @@ -0,0 +1,464 @@ +import urllib.parse + +from .common import InfoExtractor +from ..compat import ( + compat_etree_fromstring, + compat_parse_qs, + compat_urllib_parse_unquote, + compat_urllib_parse_urlparse, +) +from ..networking import HEADRequest +from ..utils import ( + ExtractorError, + float_or_none, + int_or_none, + qualities, + smuggle_url, + traverse_obj, + unescapeHTML, + unified_strdate, + unsmuggle_url, + url_or_none, + urlencode_postdata, +) + + +class OdnoklassnikiIE(InfoExtractor):
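
(Aside - an illustrative sketch, not part of the diff: OnDemandChinaEpisodeIE above talks to a GraphQL endpoint with a single JSON POST carrying operationName, the query text and variables; a stdlib equivalent of that call shape, endpoint and fields assumed:)

    import json
    import urllib.request

    def graphql_post(endpoint, operation, query, variables):
        req = urllib.request.Request(
            endpoint, headers={'Content-type': 'application/json'},
            data=json.dumps({'operationName': operation, 'query': query,
                             'variables': variables}).encode())
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)['data']

+ _VALID_URL =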
r'''(?x) + https?:// + (?:(?:www|m|mobile)\.)? + (?:odnoklassniki|ok)\.ru/ + (?: + video(?P<embed>embed)?/| + web-api/video/moviePlayer/| + live/| + dk\?.*?st\.mvId= + ) + (?P<id>[\d-]+) + ''' + _EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//(?:odnoklassniki|ok)\.ru/videoembed/.+?)\1'] + _TESTS = [{ + 'note': 'Coub embedded', + 'url': 'http://ok.ru/video/1484130554189', + 'info_dict': { + 'id': '1keok9', + 'ext': 'mp4', + 'timestamp': 1545580896, + 'view_count': int, + 'thumbnail': r're:^https?://.*\.jpg$', + 'title': 'Народная забава', + 'uploader': 'Nevata', + 'upload_date': '20181223', + 'age_limit': 0, + 'uploader_id': 'nevata.s', + 'like_count': int, + 'duration': 8.08, + 'repost_count': int, + }, + }, { + 'note': 'vk.com embedded', + 'url': 'https://ok.ru/video/3568183087575', + 'info_dict': { + 'id': '-165101755_456243749', + 'ext': 'mp4', + 'uploader_id': '-165101755', + 'duration': 132, + 'timestamp': 1642869935, + 'upload_date': '20220122', + 'thumbnail': str, + 'title': str, + 'uploader': str, + }, + 'skip': 'vk extractor error', + }, { + # metadata in JSON, webm_dash with Firefox UA + 'url': 'http://ok.ru/video/20079905452', + 'md5': '8f477d8931c531374a3e36daec617b2c', + 'info_dict': { + 'id': '20079905452', + 'ext': 'webm', + 'title': 'Культура меняет нас (прекрасный ролик!))', + 'thumbnail': str, + 'duration': 100, + 'upload_date': '20141207', + 'uploader_id': '330537914540', + 'uploader': 'Виталий Добровольский', + 'like_count': int, + 'age_limit': 0, + }, + 'params': { + 'format': 'bv[ext=webm]', + 'http_headers': {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101 Firefox/102.0'}, + }, + }, { + # metadataUrl + 'url': 'http://ok.ru/video/63567059965189-0?fromTime=5', + 'md5': '2bae2f58eefe1b3d26f3926c4a64d2f3', + 'info_dict': { + 'id': '63567059965189-0', + 'ext': 'mp4', + 'title': 'Девушка без комплексов ...', + 'thumbnail': str, + 'duration': 191, + 'upload_date': '20150518', + 'uploader_id': '534380003155', + 'uploader': '☭ Андрей Мещанинов ☭', + 'like_count': int, + 'age_limit': 0, + 'start_time': 5, + }, + 'params': {'skip_download': 'm3u8'}, + }, { + # YouTube embed (metadataUrl, provider == USER_YOUTUBE) + 'url': 'https://ok.ru/video/3952212382174', + 'md5': '5fb5f83ce16cb212d6bf887282b5da53', + 'info_dict': { + 'id': '5axVgHHDBvU', + 'ext': 'mp4', + 'title': 'Youtube-dl 101: What is it and HOW to use it! 
Full Download Walkthrough and Guide', + 'description': 'md5:b57209eeb9d5c2f20c984dfb58862097', + 'uploader': 'Lod Mer', + 'uploader_id': '575186401502', + 'duration': 1529, + 'age_limit': 0, + 'upload_date': '20210405', + 'comment_count': int, + 'live_status': 'not_live', + 'view_count': int, + 'thumbnail': 'https://i.mycdn.me/i?r=AEHujHvw2RjEbemUCNEorZbxYpb_p_9AcN2FmGik64Krkcmz37YtlY093oAM5-HIEAt7Zi9s0CiBOSDmbngC-I-k&fn=external_8', + 'uploader_url': 'https://www.youtube.com/@MrKewlkid94', + 'channel_follower_count': int, + 'tags': ['youtube-dl', 'youtube playlists', 'download videos', 'download audio'], + 'channel_id': 'UCVGtvURtEURYHtJFUegdSug', + 'like_count': int, + 'availability': 'public', + 'channel_url': 'https://www.youtube.com/channel/UCVGtvURtEURYHtJFUegdSug', + 'categories': ['Education'], + 'playable_in_embed': True, + 'channel': 'BornToReact', + }, + }, { + # YouTube embed (metadata, provider == USER_YOUTUBE, no metadata.movie.title field) + 'url': 'http://ok.ru/video/62036049272859-0', + 'info_dict': { + 'id': '62036049272859-0', + 'ext': 'mp4', + 'title': 'МУЗЫКА ДОЖДЯ .', + 'description': 'md5:6f1867132bd96e33bf53eda1091e8ed0', + 'upload_date': '20120106', + 'uploader_id': '473534735899', + 'uploader': 'МARINA D', + 'age_limit': 0, + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'Video has not been found', + }, { + 'note': 'Only available in mobile webpage', + 'url': 'https://m.ok.ru/video/2361249957145', + 'info_dict': { + 'id': '2361249957145', + 'ext': 'mp4', + 'title': 'Быковское крещение', + 'duration': 3038.181, + 'thumbnail': r're:^https?://i\.mycdn\.me/videoPreview\?.+', + }, + }, { + 'note': 'subtitles', + 'url': 'https://ok.ru/video/4249587550747', + 'info_dict': { + 'id': '4249587550747', + 'ext': 'mp4', + 'title': 'Small Country An African Childhood (2020) (1080p) +subtitle', + 'uploader': 'Sunflower Movies', + 'uploader_id': '595802161179', + 'upload_date': '20220816', + 'duration': 6728, + 'age_limit': 0, + 'thumbnail': r're:^https?://i\.mycdn\.me/videoPreview\?.+', + 'like_count': int, + 'subtitles': dict, + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'http://ok.ru/web-api/video/moviePlayer/20079905452', + 'only_matching': True, + }, { + 'url': 'http://www.ok.ru/video/20648036891', + 'only_matching': True, + }, { + 'url': 'http://www.ok.ru/videoembed/20648036891', + 'only_matching': True, + }, { + 'url': 'http://m.ok.ru/video/20079905452', + 'only_matching': True, + }, { + 'url': 'http://mobile.ok.ru/video/20079905452', + 'only_matching': True, + }, { + 'url': 'https://www.ok.ru/live/484531969818', + 'only_matching': True, + }, { + 'url': 'https://m.ok.ru/dk?st.cmd=movieLayer&st.discId=863789452017&st.retLoc=friend&st.rtu=%2Fdk%3Fst.cmd%3DfriendMovies%26st.mode%3Down%26st.mrkId%3D%257B%2522uploadedMovieMarker%2522%253A%257B%2522marker%2522%253A%25221519410114503%2522%252C%2522hasMore%2522%253Atrue%257D%252C%2522sharedMovieMarker%2522%253A%257B%2522marker%2522%253Anull%252C%2522hasMore%2522%253Afalse%257D%257D%26st.friendId%3D561722190321%26st.frwd%3Don%26_prevCmd%3DfriendMovies%26tkn%3D7257&st.discType=MOVIE&st.mvId=863789452017&_prevCmd=friendMovies&tkn=3648#lst#', + 'only_matching': True, + }, { + # Paid video + 'url': 'https://ok.ru/video/954886983203', + 'only_matching': True, + }, { + 'url': 'https://ok.ru/videoembed/2932705602075', + 'info_dict': { + 'id': '2932705602075', + 'ext': 'mp4', + 'thumbnail': 'https://i.mycdn.me/videoPreview?id=1369902483995&type=37&idx=2&tkn=fqlnoQD_xwq5ovIlKfgNyU08qmM&fn=external_8',
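
(Aside - an illustrative sketch, not part of the diff: the _clear_cookies helper below exists because direct downloads from *.mycdn.me fail when stale CDN cookies are replayed, so the jar is wiped for that domain and for the host just probed; a stdlib version of the same idea:)

    import urllib.parse
    from http.cookiejar import CookieJar

    def clear_cdn_cookies(jar, cdn_url):
        for domain in ('.mycdn.me', urllib.parse.urlparse(cdn_url).hostname):
            if not domain:
                continue
            try:
                jar.clear(domain=domain)
            except KeyError:  # CookieJar.clear raises if no cookie matches
                pass
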
'title': 'Boosty для тебя!', + 'uploader_id': '597811038747', + 'like_count': 0, + 'duration': 35, + }, + }] + + _WEBPAGE_TESTS = [{ + 'url': 'https://boosty.to/ikakprosto/posts/56cedaca-b56a-4dfd-b3ed-98c79cfa0167', + 'info_dict': { + 'id': '3950343629563', + 'ext': 'mp4', + 'thumbnail': 'https://i.mycdn.me/videoPreview?id=2776238394107&type=37&idx=11&tkn=F3ejkUFcpuI4DnMRxrDGcH5YcmM&fn=external_8', + 'title': 'Заяц Бусти.mp4', + 'uploader_id': '571368965883', + 'like_count': 0, + 'duration': 10444, + }, + 'skip': 'Site no longer embeds', + }] + + def _clear_cookies(self, cdn_url): + # Direct http downloads will fail if CDN cookies are set + # so we need to reset them after each format extraction + self.cookiejar.clear(domain='.mycdn.me') + self.cookiejar.clear(domain=urllib.parse.urlparse(cdn_url).hostname) + + @classmethod + def _extract_embed_urls(cls, url, webpage): + for x in super()._extract_embed_urls(url, webpage): + yield smuggle_url(x, {'referrer': url}) + + def _real_extract(self, url): + try: + return self._extract_desktop(url) + except ExtractorError as e: + try: + return self._extract_mobile(url) + except ExtractorError: + # error message of desktop webpage is in English + raise e + + def _extract_desktop(self, url): + start_time = int_or_none(compat_parse_qs( + compat_urllib_parse_urlparse(url).query).get('fromTime', [None])[0]) + + url, smuggled = unsmuggle_url(url, {}) + video_id, is_embed = self._match_valid_url(url).group('id', 'embed') + mode = 'videoembed' if is_embed else 'video' + + webpage = self._download_webpage( + f'https://ok.ru/{mode}/{video_id}', video_id, + note='Downloading desktop webpage', + headers={'Referer': smuggled['referrer']} if smuggled.get('referrer') else {}) + + error = self._search_regex( + r'[^>]+class="vp_video_stub_txt"[^>]*>([^<]+)<', + webpage, 'error', default=None) + # Direct link from boosty + if (error == 'The author of this video has not been found or is blocked' + and not smuggled.get('referrer') and mode == 'videoembed'): + return self._extract_desktop(smuggle_url(url, {'referrer': 'https://boosty.to'})) + elif error: + raise ExtractorError(error, expected=True) + + player = self._parse_json( + unescapeHTML(self._search_regex( + r'data-options=(?P<quote>["\'])(?P<player>{.+?%s.+?})(?P=quote)' % video_id, + webpage, 'player', group='player')), + video_id) + + # embedded external player + if player.get('isExternalPlayer') and player.get('url'): + return self.url_result(player['url']) + + flashvars = player['flashvars'] + + metadata = flashvars.get('metadata') + if metadata: + metadata = self._parse_json(metadata, video_id) + else: + data = {} + st_location = flashvars.get('location') + if st_location: + data['st.location'] = st_location + metadata = self._download_json( + compat_urllib_parse_unquote(flashvars['metadataUrl']), + video_id, 'Downloading metadata JSON', + data=urlencode_postdata(data)) + + movie = metadata['movie'] + + # Some embedded videos may not contain title in movie dict (e.g. + # http://ok.ru/video/62036049272859-0) thus we allow missing title + # here and it's going to be extracted later by an extractor that + # will process the actual embed. 
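
(Aside - an illustrative check, not part of the diff: the format ranking below uses yt_dlp.utils.qualities, which maps a quality id to its position in a preference tuple, -1 when unknown, so ids later in the tuple sort higher; assumes yt-dlp is importable:)

    from yt_dlp.utils import qualities

    quality = qualities(('4', '0', '1', '2', '3', '5', '6', '7'))
    assert quality('7') > quality('4')   # later in the tuple wins
    assert quality('unknown') == -1      # unknown ids sort last
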
+ provider = metadata.get('provider') + title = movie['title'] if provider == 'UPLOADED_ODKL' else movie.get('title') + + thumbnail = movie.get('poster') + duration = int_or_none(movie.get('duration')) + + author = metadata.get('author', {}) + uploader_id = author.get('id') + uploader = author.get('name') + + upload_date = unified_strdate(self._html_search_meta( + 'ya:ovs:upload_date', webpage, 'upload date', default=None)) + + age_limit = None + adult = self._html_search_meta( + 'ya:ovs:adult', webpage, 'age limit', default=None) + if adult: + age_limit = 18 if adult == 'true' else 0 + + like_count = int_or_none(metadata.get('likeCount')) + + subtitles = {} + for sub in traverse_obj(metadata, ('movie', 'subtitleTracks', ...), expected_type=dict): + sub_url = sub.get('url') + if not sub_url: + continue + subtitles.setdefault(sub.get('language') or 'en', []).append({ + 'url': sub_url, + 'ext': 'vtt', + }) + + info = { + 'id': video_id, + 'title': title, + 'thumbnail': thumbnail, + 'duration': duration, + 'upload_date': upload_date, + 'uploader': uploader, + 'uploader_id': uploader_id, + 'like_count': like_count, + 'age_limit': age_limit, + 'start_time': start_time, + 'subtitles': subtitles, + } + + # pladform + if provider == 'OPEN_GRAPH': + info.update({ + '_type': 'url_transparent', + 'url': movie['contentId'], + }) + return info + + if provider == 'USER_YOUTUBE': + info.update({ + '_type': 'url_transparent', + 'url': movie['contentId'], + }) + return info + + assert title + if provider == 'LIVE_TV_APP': + info['title'] = title + + quality = qualities(('4', '0', '1', '2', '3', '5', '6', '7')) + + formats = [{ + 'url': f['url'], + 'ext': 'mp4', + 'format_id': f.get('name'), + } for f in traverse_obj(metadata, ('videos', lambda _, v: url_or_none(v['url'])))] + + m3u8_url = traverse_obj(metadata, 'hlsManifestUrl', 'ondemandHls') + if m3u8_url: + formats.extend(self._extract_m3u8_formats( + m3u8_url, video_id, 'mp4', 'm3u8_native', + m3u8_id='hls', fatal=False)) + self._clear_cookies(m3u8_url) + + for mpd_id, mpd_key in [('dash', 'ondemandDash'), ('webm', 'metadataWebmUrl')]: + mpd_url = metadata.get(mpd_key) + if mpd_url: + formats.extend(self._extract_mpd_formats( + mpd_url, video_id, mpd_id=mpd_id, fatal=False)) + self._clear_cookies(mpd_url) + + dash_manifest = metadata.get('metadataEmbedded') + if dash_manifest: + formats.extend(self._parse_mpd_formats( + compat_etree_fromstring(dash_manifest), 'mpd')) + + for fmt in formats: + fmt_type = self._search_regex( + r'\btype[/=](\d)', fmt['url'], + 'format type', default=None) + if fmt_type: + fmt['quality'] = quality(fmt_type) + + # Live formats + m3u8_url = metadata.get('hlsMasterPlaylistUrl') + if m3u8_url: + formats.extend(self._extract_m3u8_formats( + m3u8_url, video_id, 'mp4', m3u8_id='hls', fatal=False)) + self._clear_cookies(m3u8_url) + rtmp_url = metadata.get('rtmpUrl') + if rtmp_url: + formats.append({ + 'url': rtmp_url, + 'format_id': 'rtmp', + 'ext': 'flv', + }) + + if not formats: + payment_info = metadata.get('paymentInfo') + if payment_info: + self.raise_no_formats('This video is paid, subscribe to download it', expected=True) + + info['formats'] = formats + return info + + def _extract_mobile(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage( + 'http://m.ok.ru/video/%s' % video_id, video_id, + note='Downloading mobile webpage') + + error = self._search_regex( + r'видео</a>\s*<div\s+class="empty">(.+?)</div>', + webpage, 'error', default=None) + if error: + raise ExtractorError(error, 
expected=True) + + json_data = self._search_regex( + r'data-video="(.+?)"', webpage, 'json data') + json_data = self._parse_json(unescapeHTML(json_data), video_id) or {} + + redirect_url = self._request_webpage(HEADRequest( + json_data['videoSrc']), video_id, 'Requesting download URL').url + self._clear_cookies(redirect_url) + + return { + 'id': video_id, + 'title': json_data.get('videoName'), + 'duration': float_or_none(json_data.get('videoDuration'), scale=1000), + 'thumbnail': json_data.get('videoPosterSrc'), + 'formats': [{ + 'format_id': 'mobile', + 'url': redirect_url, + 'ext': 'mp4', + }] + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/oftv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/oftv.py new file mode 100644 index 0000000..4cac518 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/oftv.py @@ -0,0 +1,54 @@ +from .common import InfoExtractor +from .zype import ZypeIE +from ..utils import traverse_obj + + +class OfTVIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?of\.tv/video/(?P<id>\w+)' + _TESTS = [{ + 'url': 'https://of.tv/video/627d7d95b353db0001dadd1a', + 'md5': 'cb9cd5db3bb9ee0d32bfd7e373d6ef0a', + 'info_dict': { + 'id': '627d7d95b353db0001dadd1a', + 'ext': 'mp4', + 'title': 'E1: Jacky vs Eric', + 'thumbnail': r're:^https?://.*\.jpg', + 'average_rating': 0, + 'description': 'md5:dd16e3e2a8d27d922e7a989f85986853', + 'display_id': '', + 'duration': 1423, + 'timestamp': 1652391300, + 'upload_date': '20220512', + 'view_count': 0, + 'creator': 'This is Fire' + } + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + info = next(ZypeIE.extract_from_webpage(self._downloader, url, webpage)) + info['_type'] = 'url_transparent' + info['creator'] = self._search_regex(r'<a[^>]+class=\"creator-name\"[^>]+>([^<]+)', webpage, 'creator') + return info + + +class OfTVPlaylistIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?of\.tv/creators/(?P<id>[a-zA-Z0-9-]+)/?(?:$|[?#])' + _TESTS = [{ + 'url': 'https://of.tv/creators/this-is-fire/', + 'playlist_count': 8, + 'info_dict': { + 'id': 'this-is-fire' + } + }] + + def _real_extract(self, url): + playlist_id = self._match_id(url) + webpage = self._download_webpage(url, playlist_id) + + json_match = self._search_json( + r'var\s*remaining_videos\s*=', webpage, 'oftv playlists', playlist_id, contains_pattern=r'\[.+\]') + + return self.playlist_from_matches( + traverse_obj(json_match, (..., 'discovery_url')), playlist_id) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/oktoberfesttv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/oktoberfesttv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/oktoberfesttv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/oktoberfesttv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/olympics.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/olympics.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/olympics.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/olympics.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/on24.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/on24.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/on24.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/on24.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/once.py 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/once.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/once.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/once.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ondemandkorea.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ondemandkorea.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ondemandkorea.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ondemandkorea.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/onefootball.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/onefootball.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/onefootball.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/onefootball.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/onenewsnz.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/onenewsnz.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/onenewsnz.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/onenewsnz.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/oneplace.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/oneplace.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/oneplace.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/oneplace.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/onet.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/onet.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/onet.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/onet.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/onionstudios.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/onionstudios.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/onionstudios.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/onionstudios.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ooyala.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ooyala.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ooyala.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ooyala.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/opencast.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/opencast.py new file mode 100644 index 0000000..1fafd9a --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/opencast.py @@ -0,0 +1,183 @@ +import re + +from .common import InfoExtractor +from ..utils import ( + determine_ext, + ExtractorError, + int_or_none, + parse_iso8601, + traverse_obj, + variadic, +) + + +class OpencastBaseIE(InfoExtractor): + _INSTANCES_RE = r'''(?: + opencast\.informatik\.kit\.edu| + electures\.uni-muenster\.de| + oc-presentation\.ltcc\.tuwien\.ac\.at| + medien\.ph-noe\.ac\.at| + oc-video\.ruhr-uni-bochum\.de| + oc-video1\.ruhr-uni-bochum\.de| + opencast\.informatik\.uni-goettingen\.de| + heicast\.uni-heidelberg\.de| + opencast\.hawk\.de:8080| + opencast\.hs-osnabrueck\.de| + video[0-9]+\.virtuos\.uni-osnabrueck\.de| + opencast\.uni-koeln\.de| + media\.opencast\.hochschule-rhein-waal\.de| + matterhorn\.dce\.harvard\.edu| + hs-harz\.opencast\.uni-halle\.de| + videocampus\.urz\.uni-leipzig\.de| + media\.uct\.ac\.za| + vid\.igb\.illinois\.edu| + cursosabertos\.c3sl\.ufpr\.br| + 
mcmedia\.missioncollege\.org| + clases\.odon\.edu\.uy + )''' + _UUID_RE = r'[\da-fA-F]{8}-[\da-fA-F]{4}-[\da-fA-F]{4}-[\da-fA-F]{4}-[\da-fA-F]{12}' + + def _call_api(self, host, video_id, **kwargs): + return self._download_json(self._API_BASE % (host, video_id), video_id, **kwargs) + + def _parse_mediapackage(self, video): + video_id = video.get('id') + if video_id is None: + raise ExtractorError('Video id was not found') + + formats = [] + for track in variadic(traverse_obj(video, ('media', 'track')) or []): + href = track.get('url') + if href is None: + continue + ext = determine_ext(href, None) + + transport = track.get('transport') + + if transport == 'DASH' or ext == 'mpd': + formats.extend(self._extract_mpd_formats(href, video_id, mpd_id='dash', fatal=False)) + elif transport == 'HLS' or ext == 'm3u8': + formats.extend(self._extract_m3u8_formats( + href, video_id, m3u8_id='hls', entry_protocol='m3u8_native', fatal=False)) + elif transport == 'HDS' or ext == 'f4m': + formats.extend(self._extract_f4m_formats(href, video_id, f4m_id='hds', fatal=False)) + elif transport == 'SMOOTH': + formats.extend(self._extract_ism_formats(href, video_id, ism_id='smooth', fatal=False)) + elif ext == 'smil': + formats.extend(self._extract_smil_formats(href, video_id, fatal=False)) + else: + track_obj = { + 'url': href, + 'ext': ext, + 'format_note': track.get('transport'), + 'resolution': traverse_obj(track, ('video', 'resolution')), + 'fps': int_or_none(traverse_obj(track, ('video', 'framerate'))), + 'vbr': int_or_none(traverse_obj(track, ('video', 'bitrate')), scale=1000), + 'vcodec': traverse_obj(track, ('video', 'encoder', 'type')) if track.get('video') else 'none', + 'abr': int_or_none(traverse_obj(track, ('audio', 'bitrate')), scale=1000), + 'asr': int_or_none(traverse_obj(track, ('audio', 'samplingrate'))), + 'acodec': traverse_obj(track, ('audio', 'encoder', 'type')) if track.get('audio') else 'none', + } + + if transport == 'RTMP': + m_obj = re.search(r'(?:rtmp://[^/]+/(?P<app>[^/]+))/(?P<ext>.+):(?P<playpath>.+)', href) + if not m_obj: + continue + track_obj.update({ + 'app': m_obj.group('app'), + 'ext': m_obj.group('ext'), + 'play_path': m_obj.group('ext') + ':' + m_obj.group('playpath'), + 'rtmp_live': True, + 'preference': -2, + }) + formats.append(track_obj) + + return { + 'id': video_id, + 'formats': formats, + 'title': video.get('title'), + 'series': video.get('seriestitle'), + 'season_id': video.get('series'), + 'creator': traverse_obj(video, ('creators', 'creator')), + 'timestamp': parse_iso8601(video.get('start')), + 'thumbnail': traverse_obj(video, ('attachments', 'attachment', ..., 'url'), get_all=False), + } + + +class OpencastIE(OpencastBaseIE): + _VALID_URL = rf'''(?x) + https?://(?P<host>{OpencastBaseIE._INSTANCES_RE})/paella/ui/watch\.html\? 
+ (?:[^#]+&)?id=(?P<id>{OpencastBaseIE._UUID_RE})''' + + _API_BASE = 'https://%s/search/episode.json?id=%s' + + _TESTS = [ + { + 'url': 'https://oc-video1.ruhr-uni-bochum.de/paella/ui/watch.html?id=ed063cd5-72c8-46b5-a60a-569243edcea8', + 'md5': '554c8e99a90f7be7e874619fcf2a3bc9', + 'info_dict': { + 'id': 'ed063cd5-72c8-46b5-a60a-569243edcea8', + 'ext': 'mp4', + 'title': '11 - Kryptographie - 24.11.2015', + 'thumbnail': r're:^https?://.*\.jpg$', + 'timestamp': 1606208400, + 'upload_date': '20201124', + 'season_id': 'cf68a4a1-36b1-4a53-a6ba-61af5705a0d0', + 'series': 'Kryptographie - WiSe 15/16', + 'creator': 'Alexander May', + }, + } + ] + + def _real_extract(self, url): + host, video_id = self._match_valid_url(url).group('host', 'id') + return self._parse_mediapackage( + self._call_api(host, video_id)['search-results']['result']['mediapackage']) + + +class OpencastPlaylistIE(OpencastBaseIE): + _VALID_URL = rf'''(?x) + https?://(?P<host>{OpencastBaseIE._INSTANCES_RE})(?: + /engage/ui/index\.html\?(?:[^#]+&)?epFrom=| + /ltitools/index\.html\?(?:[^#]+&)?series= + )(?P<id>{OpencastBaseIE._UUID_RE})''' + + _API_BASE = 'https://%s/search/episode.json?sid=%s' + + _TESTS = [ + { + 'url': 'https://oc-video1.ruhr-uni-bochum.de/engage/ui/index.html?epFrom=cf68a4a1-36b1-4a53-a6ba-61af5705a0d0', + 'info_dict': { + 'id': 'cf68a4a1-36b1-4a53-a6ba-61af5705a0d0', + 'title': 'Kryptographie - WiSe 15/16', + }, + 'playlist_mincount': 29, + }, + { + 'url': 'https://oc-video1.ruhr-uni-bochum.de/ltitools/index.html?subtool=series&series=cf68a4a1-36b1-4a53-a6ba-61af5705a0d0&lng=de', + 'info_dict': { + 'id': 'cf68a4a1-36b1-4a53-a6ba-61af5705a0d0', + 'title': 'Kryptographie - WiSe 15/16', + }, + 'playlist_mincount': 29, + }, + { + 'url': 'https://electures.uni-muenster.de/engage/ui/index.html?e=1&p=1&epFrom=39391d10-a711-4d23-b21d-afd2ed7d758c', + 'info_dict': { + 'id': '39391d10-a711-4d23-b21d-afd2ed7d758c', + 'title': '021670 Theologische Themen bei Hans Blumenberg WiSe 2017/18', + }, + 'playlist_mincount': 13, + }, + ] + + def _real_extract(self, url): + host, video_id = self._match_valid_url(url).group('host', 'id') + + entries = [ + self._parse_mediapackage(episode['mediapackage']) + for episode in variadic(self._call_api(host, video_id)['search-results']['result']) + if episode.get('mediapackage') + ] + + return self.playlist_result(entries, video_id, traverse_obj(entries, (0, 'series'))) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/openload.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/openload.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/openload.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/openload.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/openrec.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/openrec.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/openrec.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/openrec.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ora.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ora.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ora.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ora.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/orf.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/orf.py new file mode 100644 index 0000000..cc3c003 --- /dev/null +++ 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/orf.py @@ -0,0 +1,526 @@ +import functools +import re + +from .common import InfoExtractor +from ..networking import HEADRequest +from ..utils import ( + clean_html, + determine_ext, + float_or_none, + InAdvancePagedList, + int_or_none, + join_nonempty, + orderedSet, + remove_end, + make_archive_id, + smuggle_url, + strip_jsonp, + try_call, + unescapeHTML, + unified_strdate, + unsmuggle_url, + url_or_none, +) + + +class ORFTVthekIE(InfoExtractor): + IE_NAME = 'orf:tvthek' + IE_DESC = 'ORF TVthek' + _VALID_URL = r'(?P<url>https?://tvthek\.orf\.at/(?:(?:[^/]+/){2}){1,2}(?P<id>\d+))(/[^/]+/(?P<vid>\d+))?(?:$|[?#])' + + _TESTS = [{ + 'url': 'https://tvthek.orf.at/profile/ZIB-2/1211/ZIB-2/14121079', + 'info_dict': { + 'id': '14121079', + }, + 'playlist_count': 11, + 'params': {'noplaylist': True} + }, { + 'url': 'https://tvthek.orf.at/profile/ZIB-2/1211/ZIB-2/14121079/Umfrage-Welches-Tier-ist-Sebastian-Kurz/15083150', + 'info_dict': { + 'id': '14121079', + }, + 'playlist_count': 1, + 'params': {'playlist_items': '5'} + }, { + 'url': 'https://tvthek.orf.at/profile/ZIB-2/1211/ZIB-2/14121079/Umfrage-Welches-Tier-ist-Sebastian-Kurz/15083150', + 'info_dict': { + 'id': '14121079', + 'playlist_count': 1 + }, + 'playlist': [{ + 'info_dict': { + 'id': '15083150', + 'ext': 'mp4', + 'description': 'md5:7be1c485425f5f255a5e4e4815e77d04', + 'thumbnail': 'https://api-tvthek.orf.at/uploads/media/segments/0130/59/824271ea35cd8931a0fb08ab316a5b0a1562342c.jpeg', + 'title': 'Umfrage: Welches Tier ist Sebastian Kurz?', + } + }], + 'playlist_count': 1, + 'params': {'noplaylist': True, 'skip_download': 'm3u8'} + }, { + 'url': 'http://tvthek.orf.at/program/Aufgetischt/2745173/Aufgetischt-Mit-der-Steirischen-Tafelrunde/8891389', + 'playlist': [{ + 'md5': '2942210346ed779588f428a92db88712', + 'info_dict': { + 'id': '8896777', + 'ext': 'mp4', + 'title': 'Aufgetischt: Mit der Steirischen Tafelrunde', + 'description': 'md5:c1272f0245537812d4e36419c207b67d', + 'duration': 2668, + 'upload_date': '20141208', + }, + }], + 'skip': 'Blocked outside of Austria / Germany', + }, { + 'url': 'http://tvthek.orf.at/topic/Im-Wandel-der-Zeit/8002126/Best-of-Ingrid-Thurnher/7982256', + 'info_dict': { + 'id': '7982259', + 'ext': 'mp4', + 'title': 'Best of Ingrid Thurnher', + 'upload_date': '20140527', + 'description': 'Viele Jahre war Ingrid Thurnher das "Gesicht" der ZIB 2. 
Vor ihrem Wechsel zur ZIB 2 im Jahr 1995 moderierte sie unter anderem "Land und Leute", "Österreich-Bild" und "Niederösterreich heute".', + }, + 'params': { + 'skip_download': True, # rtsp downloads + }, + 'skip': 'Blocked outside of Austria / Germany', + }, { + 'url': 'http://tvthek.orf.at/topic/Fluechtlingskrise/10463081/Heimat-Fremde-Heimat/13879132/Senioren-betreuen-Migrantenkinder/13879141', + 'only_matching': True, + }, { + 'url': 'http://tvthek.orf.at/profile/Universum/35429', + 'only_matching': True, + }] + + def _pagefunc(self, url, data_jsb, n, *, image=None): + sd = data_jsb[n] + video_id, title = str(sd['id']), sd['title'] + formats = [] + for fd in sd['sources']: + src = url_or_none(fd.get('src')) + if not src: + continue + format_id = join_nonempty('delivery', 'quality', 'quality_string', from_dict=fd) + ext = determine_ext(src) + if ext == 'm3u8': + m3u8_formats = self._extract_m3u8_formats( + src, video_id, 'mp4', m3u8_id=format_id, fatal=False, note=f'Downloading {format_id} m3u8 manifest') + if any('/geoprotection' in f['url'] for f in m3u8_formats): + self.raise_geo_restricted() + formats.extend(m3u8_formats) + elif ext == 'f4m': + formats.extend(self._extract_f4m_formats( + src, video_id, f4m_id=format_id, fatal=False)) + elif ext == 'mpd': + formats.extend(self._extract_mpd_formats( + src, video_id, mpd_id=format_id, fatal=False, note=f'Downloading {format_id} mpd manifest')) + else: + formats.append({ + 'format_id': format_id, + 'url': src, + 'protocol': fd.get('protocol'), + }) + + # Check for geoblocking. + # There is a property is_geoprotection, but that's always false + geo_str = sd.get('geoprotection_string') + http_url = next( + (f['url'] for f in formats if re.match(r'^https?://.*\.mp4$', f['url'])), + None) if geo_str else None + if http_url: + self._request_webpage( + HEADRequest(http_url), video_id, fatal=False, note='Testing for geoblocking', + errnote=f'This video seems to be blocked outside of {geo_str}. 
You may want to try the streaming-* formats') + + subtitles = {} + for sub in sd.get('subtitles', []): + sub_src = sub.get('src') + if not sub_src: + continue + subtitles.setdefault(sub.get('lang', 'de-AT'), []).append({ + 'url': sub_src, + }) + + upload_date = unified_strdate(sd.get('created_date')) + + thumbnails = [] + preview = sd.get('preview_image_url') + if preview: + thumbnails.append({ + 'id': 'preview', + 'url': preview, + 'preference': 0, + }) + image = sd.get('image_full_url') or image + if image: + thumbnails.append({ + 'id': 'full', + 'url': image, + 'preference': 1, + }) + + yield { + 'id': video_id, + 'title': title, + 'webpage_url': smuggle_url(f'{url}/part/{video_id}', {'force_noplaylist': True}), + 'formats': formats, + 'subtitles': subtitles, + 'description': sd.get('description'), + 'duration': int_or_none(sd.get('duration_in_seconds')), + 'upload_date': upload_date, + 'thumbnails': thumbnails, + } + + def _real_extract(self, url): + url, smuggled_data = unsmuggle_url(url) + playlist_id, video_id, base_url = self._match_valid_url(url).group('id', 'vid', 'url') + webpage = self._download_webpage(url, playlist_id) + + data_jsb = self._parse_json( + self._search_regex( + r'<div[^>]+class=(["\']).*?VideoPlaylist.*?\1[^>]+data-jsb=(["\'])(?P<json>.+?)\2', + webpage, 'playlist', group='json'), + playlist_id, transform_source=unescapeHTML)['playlist']['videos'] + + if not self._yes_playlist(playlist_id, video_id, smuggled_data): + data_jsb = [sd for sd in data_jsb if str(sd.get('id')) == video_id] + + playlist_count = len(data_jsb) + image = self._og_search_thumbnail(webpage) if playlist_count == 1 else None + + page_func = functools.partial(self._pagefunc, base_url, data_jsb, image=image) + return { + '_type': 'playlist', + 'entries': InAdvancePagedList(page_func, playlist_count, 1), + 'id': playlist_id, + } + + +class ORFRadioIE(InfoExtractor): + IE_NAME = 'orf:radio' + + STATION_INFO = { + 'fm4': ('fm4', 'fm4', 'orffm4'), + 'noe': ('noe', 'oe2n', 'orfnoe'), + 'wien': ('wie', 'oe2w', 'orfwie'), + 'burgenland': ('bgl', 'oe2b', 'orfbgl'), + 'ooe': ('ooe', 'oe2o', 'orfooe'), + 'steiermark': ('stm', 'oe2st', 'orfstm'), + 'kaernten': ('ktn', 'oe2k', 'orfktn'), + 'salzburg': ('sbg', 'oe2s', 'orfsbg'), + 'tirol': ('tir', 'oe2t', 'orftir'), + 'vorarlberg': ('vbg', 'oe2v', 'orfvbg'), + 'oe3': ('oe3', 'oe3', 'orfoe3'), + 'oe1': ('oe1', 'oe1', 'orfoe1'), + } + _STATION_RE = '|'.join(map(re.escape, STATION_INFO.keys())) + + _VALID_URL = rf'''(?x) + https?://(?: + (?P<station>{_STATION_RE})\.orf\.at/player| + radiothek\.orf\.at/(?P<station2>{_STATION_RE}) + )/(?P<date>[0-9]+)/(?P<show>\w+)''' + + _TESTS = [{ + 'url': 'https://radiothek.orf.at/ooe/20220801/OGMO', + 'info_dict': { + 'id': 'OGMO', + 'title': 'Guten Morgen OÖ', + 'description': 'md5:a3f6083399ef92b8cbe2d421b180835a', + }, + 'playlist': [{ + 'md5': 'f33147d954a326e338ea52572c2810e8', + 'info_dict': { + 'id': '2022-08-01_0459_tl_66_7DaysMon1_319062', + 'ext': 'mp3', + 'title': 'Guten Morgen OÖ', + 'upload_date': '20220801', + 'duration': 18000, + 'timestamp': 1659322789, + 'description': 'md5:a3f6083399ef92b8cbe2d421b180835a', + } + }] + }, { + 'url': 'https://ooe.orf.at/player/20220801/OGMO', + 'info_dict': { + 'id': 'OGMO', + 'title': 'Guten Morgen OÖ', + 'description': 'md5:a3f6083399ef92b8cbe2d421b180835a', + }, + 'playlist': [{ + 'md5': 'f33147d954a326e338ea52572c2810e8', + 'info_dict': { + 'id': '2022-08-01_0459_tl_66_7DaysMon1_319062', + 'ext': 'mp3', + 'title': 'Guten Morgen OÖ', + 'upload_date': '20220801', + 
'duration': 18000, + 'timestamp': 1659322789, + 'description': 'md5:a3f6083399ef92b8cbe2d421b180835a', + } + }] + }, { + 'url': 'http://fm4.orf.at/player/20170107/4CC', + 'only_matching': True, + }, { + 'url': 'https://noe.orf.at/player/20200423/NGM', + 'only_matching': True, + }, { + 'url': 'https://wien.orf.at/player/20200423/WGUM', + 'only_matching': True, + }, { + 'url': 'https://burgenland.orf.at/player/20200423/BGM', + 'only_matching': True, + }, { + 'url': 'https://steiermark.orf.at/player/20200423/STGMS', + 'only_matching': True, + }, { + 'url': 'https://kaernten.orf.at/player/20200423/KGUMO', + 'only_matching': True, + }, { + 'url': 'https://salzburg.orf.at/player/20200423/SGUM', + 'only_matching': True, + }, { + 'url': 'https://tirol.orf.at/player/20200423/TGUMO', + 'only_matching': True, + }, { + 'url': 'https://vorarlberg.orf.at/player/20200423/VGUM', + 'only_matching': True, + }, { + 'url': 'https://oe3.orf.at/player/20200424/3WEK', + 'only_matching': True, + }, { + 'url': 'http://oe1.orf.at/player/20170108/456544', + 'md5': '34d8a6e67ea888293741c86a099b745b', + 'info_dict': { + 'id': '2017-01-08_0759_tl_51_7DaysSun6_256141', + 'ext': 'mp3', + 'title': 'Morgenjournal', + 'duration': 609, + 'timestamp': 1483858796, + 'upload_date': '20170108', + }, + 'skip': 'Shows from ORF radios are only available for 7 days.' + }] + + def _entries(self, data, station): + _, loop_station, old_ie = self.STATION_INFO[station] + for info in data['streams']: + item_id = info.get('loopStreamId') + if not item_id: + continue + video_id = item_id.replace('.mp3', '') + yield { + 'id': video_id, + 'ext': 'mp3', + 'url': f'https://loopstream01.apa.at/?channel={loop_station}&id={item_id}', + '_old_archive_ids': [make_archive_id(old_ie, video_id)], + 'title': data.get('title'), + 'description': clean_html(data.get('subtitle')), + 'duration': try_call(lambda: (info['end'] - info['start']) / 1000), + 'timestamp': int_or_none(info.get('start'), scale=1000), + 'series': data.get('programTitle'), + } + + def _real_extract(self, url): + station, station2, show_date, show_id = self._match_valid_url(url).group('station', 'station2', 'date', 'show') + api_station, _, _ = self.STATION_INFO[station or station2] + data = self._download_json( + f'http://audioapi.orf.at/{api_station}/api/json/current/broadcast/{show_id}/{show_date}', show_id) + + return self.playlist_result( + self._entries(data, station or station2), show_id, data.get('title'), clean_html(data.get('subtitle'))) + + +class ORFIPTVIE(InfoExtractor): + IE_NAME = 'orf:iptv' + IE_DESC = 'iptv.ORF.at' + _VALID_URL = r'https?://iptv\.orf\.at/(?:#/)?stories/(?P<id>\d+)' + + _TEST = { + 'url': 'http://iptv.orf.at/stories/2275236/', + 'md5': 'c8b22af4718a4b4af58342529453e3e5', + 'info_dict': { + 'id': '350612', + 'ext': 'flv', + 'title': 'Weitere Evakuierungen um Vulkan Calbuco', + 'description': 'md5:d689c959bdbcf04efeddedbf2299d633', + 'duration': 68.197, + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20150425', + }, + } + + def _real_extract(self, url): + story_id = self._match_id(url) + + webpage = self._download_webpage( + 'http://iptv.orf.at/stories/%s' % story_id, story_id) + + video_id = self._search_regex( + r'data-video(?:id)?="(\d+)"', webpage, 'video id') + + data = self._download_json( + 'http://bits.orf.at/filehandler/static-api/json/current/data.json?file=%s' % video_id, + video_id)[0] + + duration = float_or_none(data['duration'], 1000) + + video = data['sources']['default'] + load_balancer_url = video['loadBalancerUrl'] + abr = 
int_or_none(video.get('audioBitrate')) + vbr = int_or_none(video.get('bitrate')) + fps = int_or_none(video.get('videoFps')) + width = int_or_none(video.get('videoWidth')) + height = int_or_none(video.get('videoHeight')) + thumbnail = video.get('preview') + + rendition = self._download_json( + load_balancer_url, video_id, transform_source=strip_jsonp) + + f = { + 'abr': abr, + 'vbr': vbr, + 'fps': fps, + 'width': width, + 'height': height, + } + + formats = [] + for format_id, format_url in rendition['redirect'].items(): + if format_id == 'rtmp': + ff = f.copy() + ff.update({ + 'url': format_url, + 'format_id': format_id, + }) + formats.append(ff) + elif determine_ext(format_url) == 'f4m': + formats.extend(self._extract_f4m_formats( + format_url, video_id, f4m_id=format_id)) + elif determine_ext(format_url) == 'm3u8': + formats.extend(self._extract_m3u8_formats( + format_url, video_id, 'mp4', m3u8_id=format_id)) + else: + continue + + title = remove_end(self._og_search_title(webpage), ' - iptv.ORF.at') + description = self._og_search_description(webpage) + upload_date = unified_strdate(self._html_search_meta( + 'dc.date', webpage, 'upload date')) + + return { + 'id': video_id, + 'title': title, + 'description': description, + 'duration': duration, + 'thumbnail': thumbnail, + 'upload_date': upload_date, + 'formats': formats, + } + + +class ORFFM4StoryIE(InfoExtractor): + IE_NAME = 'orf:fm4:story' + IE_DESC = 'fm4.orf.at stories' + _VALID_URL = r'https?://fm4\.orf\.at/stories/(?P<id>\d+)' + + _TEST = { + 'url': 'http://fm4.orf.at/stories/2865738/', + 'playlist': [{ + 'md5': 'e1c2c706c45c7b34cf478bbf409907ca', + 'info_dict': { + 'id': '547792', + 'ext': 'flv', + 'title': 'Manu Delago und Inner Tongue live', + 'description': 'Manu Delago und Inner Tongue haben bei der FM4 Soundpark Session live alles gegeben. Hier gibt es Fotos und die gesamte Session als Video.', + 'duration': 1748.52, + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20170913', + }, + }, { + 'md5': 'c6dd2179731f86f4f55a7b49899d515f', + 'info_dict': { + 'id': '547798', + 'ext': 'flv', + 'title': 'Manu Delago und Inner Tongue live (2)', + 'duration': 1504.08, + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20170913', + 'description': 'Manu Delago und Inner Tongue haben bei der FM4 Soundpark Session live alles gegeben. 
Hier gibt es Fotos und die gesamte Session als Video.', + }, + }], + } + + def _real_extract(self, url): + story_id = self._match_id(url) + webpage = self._download_webpage(url, story_id) + + entries = [] + all_ids = orderedSet(re.findall(r'data-video(?:id)?="(\d+)"', webpage)) + for idx, video_id in enumerate(all_ids): + data = self._download_json( + 'http://bits.orf.at/filehandler/static-api/json/current/data.json?file=%s' % video_id, + video_id)[0] + + duration = float_or_none(data['duration'], 1000) + + video = data['sources']['q8c'] + load_balancer_url = video['loadBalancerUrl'] + abr = int_or_none(video.get('audioBitrate')) + vbr = int_or_none(video.get('bitrate')) + fps = int_or_none(video.get('videoFps')) + width = int_or_none(video.get('videoWidth')) + height = int_or_none(video.get('videoHeight')) + thumbnail = video.get('preview') + + rendition = self._download_json( + load_balancer_url, video_id, transform_source=strip_jsonp) + + f = { + 'abr': abr, + 'vbr': vbr, + 'fps': fps, + 'width': width, + 'height': height, + } + + formats = [] + for format_id, format_url in rendition['redirect'].items(): + if format_id == 'rtmp': + ff = f.copy() + ff.update({ + 'url': format_url, + 'format_id': format_id, + }) + formats.append(ff) + elif determine_ext(format_url) == 'f4m': + formats.extend(self._extract_f4m_formats( + format_url, video_id, f4m_id=format_id)) + elif determine_ext(format_url) == 'm3u8': + formats.extend(self._extract_m3u8_formats( + format_url, video_id, 'mp4', m3u8_id=format_id)) + else: + continue + + title = remove_end(self._og_search_title(webpage), ' - fm4.ORF.at') + if idx >= 1: + # Titles are duplicates, make them unique + title += ' (' + str(idx + 1) + ')' + description = self._og_search_description(webpage) + upload_date = unified_strdate(self._html_search_meta( + 'dc.date', webpage, 'upload date')) + + entries.append({ + 'id': video_id, + 'title': title, + 'description': description, + 'duration': duration, + 'thumbnail': thumbnail, + 'upload_date': upload_date, + 'formats': formats, + }) + + return self.playlist_result(entries) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/outsidetv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/outsidetv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/outsidetv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/outsidetv.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/owncloud.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/owncloud.py new file mode 100644 index 0000000..79fd830 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/owncloud.py @@ -0,0 +1,80 @@ +import re +import urllib.parse + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + determine_ext, + url_or_none, + urlencode_postdata, +) + + +class OwnCloudIE(InfoExtractor): + _INSTANCES_RE = '|'.join(( + r'(?:[^\.]+\.)?sciebo\.de', + r'cloud\.uni-koblenz-landau\.de', + )) + _VALID_URL = rf'https?://(?:{_INSTANCES_RE})/s/(?P<id>[\w.-]+)' + + _TESTS = [ + { + 'url': 'https://ruhr-uni-bochum.sciebo.de/s/wWhqZzh9jTumVFN', + 'info_dict': { + 'id': 'wWhqZzh9jTumVFN', + 'ext': 'mp4', + 'title': 'CmvpJST.mp4', + }, + }, + { + 'url': 'https://ruhr-uni-bochum.sciebo.de/s/WNDuFu0XuFtmm3f', + 'info_dict': { + 'id': 'WNDuFu0XuFtmm3f', + 'ext': 'mp4', + 'title': 'CmvpJST.mp4', + }, + 'params': { + 'videopassword': '12345', + }, + }, + ] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage, urlh = 
self._download_webpage_handle(url, video_id) + + if re.search(r'<label[^>]+for="password"', webpage): + webpage = self._verify_video_password(webpage, urlh.url, video_id) + + hidden_inputs = self._hidden_inputs(webpage) + title = hidden_inputs.get('filename') + parsed_url = urllib.parse.urlparse(url) + + return { + 'id': video_id, + 'title': title, + 'url': url_or_none(hidden_inputs.get('downloadURL')) or parsed_url._replace( + path=urllib.parse.urljoin(parsed_url.path, 'download')).geturl(), + 'ext': determine_ext(title), + } + + def _verify_video_password(self, webpage, url, video_id): + password = self.get_param('videopassword') + if password is None: + raise ExtractorError( + 'This video is protected by a password, use the --video-password option', + expected=True) + + validation_response = self._download_webpage( + url, video_id, 'Validating Password', 'Wrong password?', + data=urlencode_postdata({ + 'requesttoken': self._hidden_inputs(webpage)['requesttoken'], + 'password': password, + })) + + if re.search(r'<label[^>]+for="password"', validation_response): + warning = self._search_regex( + r'<div[^>]+class="warning">([^<]*)</div>', validation_response, + 'warning', default='The password is wrong') + raise ExtractorError(f'Opening the video failed, {self.IE_NAME} said: {warning!r}', expected=True) + return validation_response diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/packtpub.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/packtpub.py new file mode 100644 index 0000000..5620330 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/packtpub.py @@ -0,0 +1,155 @@ +import json + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + clean_html, + ExtractorError, + # remove_end, + str_or_none, + strip_or_none, + unified_timestamp, + # urljoin, +) + + +class PacktPubBaseIE(InfoExtractor): + # _PACKT_BASE = 'https://www.packtpub.com' + _STATIC_PRODUCTS_BASE = 'https://static.packt-cdn.com/products/' + + +class PacktPubIE(PacktPubBaseIE): + _VALID_URL = r'https?://(?:(?:www\.)?packtpub\.com/mapt|subscription\.packtpub\.com)/video/[^/]+/(?P<course_id>\d+)/(?P<chapter_id>[^/]+)/(?P<id>[^/]+)(?:/(?P<display_id>[^/?&#]+))?' 
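+    # Sketch of how the groups above decompose a URL, using values from the
+    # first test below:
+    #   .../mapt/video/web-development/9781787122215/20528/20530/Project+Intro
+    #   -> course_id='9781787122215', chapter_id='20528', id='20530',
+    #      display_id='Project+Intro'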
+ + _TESTS = [{ + 'url': 'https://www.packtpub.com/mapt/video/web-development/9781787122215/20528/20530/Project+Intro', + 'md5': '1e74bd6cfd45d7d07666f4684ef58f70', + 'info_dict': { + 'id': '20530', + 'ext': 'mp4', + 'title': 'Project Intro', + 'thumbnail': r're:(?i)^https?://.*\.jpg', + 'timestamp': 1490918400, + 'upload_date': '20170331', + }, + }, { + 'url': 'https://subscription.packtpub.com/video/web_development/9781787122215/20528/20530/project-intro', + 'only_matching': True, + }, { + 'url': 'https://subscription.packtpub.com/video/programming/9781838988906/p1/video1_1/business-card-project', + 'only_matching': True, + }] + _NETRC_MACHINE = 'packtpub' + _TOKEN = None + + def _perform_login(self, username, password): + try: + self._TOKEN = self._download_json( + 'https://services.packtpub.com/auth-v1/users/tokens', None, + 'Downloading Authorization Token', data=json.dumps({ + 'username': username, + 'password': password, + }).encode())['data']['access'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status in (400, 401, 404): + message = self._parse_json(e.cause.response.read().decode(), None)['message'] + raise ExtractorError(message, expected=True) + raise + + def _real_extract(self, url): + course_id, chapter_id, video_id, display_id = self._match_valid_url(url).groups() + + headers = {} + if self._TOKEN: + headers['Authorization'] = 'Bearer ' + self._TOKEN + try: + video_url = self._download_json( + 'https://services.packtpub.com/products-v1/products/%s/%s/%s' % (course_id, chapter_id, video_id), video_id, + 'Downloading JSON video', headers=headers)['data'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 400: + self.raise_login_required('This video is locked') + raise + + # TODO: find a better way to avoid duplicating course requests + # metadata = self._download_json( + # '%s/products/%s/chapters/%s/sections/%s/metadata' + # % (self._MAPT_REST, course_id, chapter_id, video_id), + # video_id)['data'] + + # title = metadata['pageTitle'] + # course_title = metadata.get('title') + # if course_title: + # title = remove_end(title, ' - %s' % course_title) + # timestamp = unified_timestamp(metadata.get('publicationDate')) + # thumbnail = urljoin(self._PACKT_BASE, metadata.get('filepath')) + + return { + 'id': video_id, + 'url': video_url, + 'title': display_id or video_id, # title, + # 'thumbnail': thumbnail, + # 'timestamp': timestamp, + } + + +class PacktPubCourseIE(PacktPubBaseIE): + _VALID_URL = r'(?P<url>https?://(?:(?:www\.)?packtpub\.com/mapt|subscription\.packtpub\.com)/video/[^/]+/(?P<id>\d+))' + _TESTS = [{ + 'url': 'https://www.packtpub.com/mapt/video/web-development/9781787122215', + 'info_dict': { + 'id': '9781787122215', + 'title': 'Learn Nodejs by building 12 projects [Video]', + 'description': 'md5:489da8d953f416e51927b60a1c7db0aa', + }, + 'playlist_count': 90, + }, { + 'url': 'https://subscription.packtpub.com/video/web_development/9781787122215', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if PacktPubIE.suitable(url) else super( + PacktPubCourseIE, cls).suitable(url) + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + url, course_id = mobj.group('url', 'id') + + course = self._download_json( + self._STATIC_PRODUCTS_BASE + '%s/toc' % course_id, course_id) + metadata = self._download_json( + self._STATIC_PRODUCTS_BASE + '%s/summary' % course_id, + course_id, fatal=False) or {} + + entries = [] + for chapter_num, chapter in 
enumerate(course['chapters'], 1): + chapter_id = str_or_none(chapter.get('id')) + sections = chapter.get('sections') + if not chapter_id or not isinstance(sections, list): + continue + chapter_info = { + 'chapter': chapter.get('title'), + 'chapter_number': chapter_num, + 'chapter_id': chapter_id, + } + for section in sections: + section_id = str_or_none(section.get('id')) + if not section_id or section.get('contentType') != 'video': + continue + entry = { + '_type': 'url_transparent', + 'url': '/'.join([url, chapter_id, section_id]), + 'title': strip_or_none(section.get('title')), + 'description': clean_html(section.get('summary')), + 'thumbnail': metadata.get('coverImage'), + 'timestamp': unified_timestamp(metadata.get('publicationDate')), + 'ie_key': PacktPubIE.ie_key(), + } + entry.update(chapter_info) + entries.append(entry) + + return self.playlist_result( + entries, course_id, metadata.get('title'), + clean_html(metadata.get('about'))) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/palcomp3.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/palcomp3.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/palcomp3.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/palcomp3.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pandoratv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pandoratv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pandoratv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pandoratv.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/panopto.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/panopto.py new file mode 100644 index 0000000..5ab2b2b --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/panopto.py @@ -0,0 +1,600 @@ +import calendar +import json +import functools +from datetime import datetime, timezone +from random import random + +from .common import InfoExtractor +from ..compat import ( + compat_urllib_parse_urlparse, + compat_urlparse +) + +from ..utils import ( + bug_reports_message, + ExtractorError, + get_first, + int_or_none, + OnDemandPagedList, + parse_qs, + srt_subtitles_timecode, + traverse_obj, +) + + +class PanoptoBaseIE(InfoExtractor): + BASE_URL_RE = r'(?P<base_url>https?://[\w.-]+\.panopto.(?:com|eu)/Panopto)' + + # see panopto core.js + _SUB_LANG_MAPPING = { + 0: 'en-US', + 1: 'en-GB', + 2: 'es-MX', + 3: 'es-ES', + 4: 'de-DE', + 5: 'fr-FR', + 6: 'nl-NL', + 7: 'th-TH', + 8: 'zh-CN', + 9: 'zh-TW', + 10: 'ko-KR', + 11: 'ja-JP', + 12: 'ru-RU', + 13: 'pt-PT', + 14: 'pl-PL', + 15: 'en-AU', + 16: 'da-DK', + 17: 'fi-FI', + 18: 'hu-HU', + 19: 'nb-NO', + 20: 'sv-SE', + 21: 'it-IT' + } + + def _call_api(self, base_url, path, video_id, data=None, fatal=True, **kwargs): + response = self._download_json( + base_url + path, video_id, data=json.dumps(data).encode('utf8') if data else None, + fatal=fatal, headers={'accept': 'application/json', 'content-type': 'application/json'}, **kwargs) + if not response: + return + error_code = traverse_obj(response, 'ErrorCode') + if error_code == 2: + self.raise_login_required(method='cookies') + elif error_code is not None: + msg = f'Panopto said: {response.get("ErrorMessage")}' + if fatal: + raise ExtractorError(msg, video_id=video_id, expected=True) + else: + self.report_warning(msg, video_id=video_id) + return response + + @staticmethod + def _parse_fragment(url): + return {k: json.loads(v[0]) for k, v in 
compat_urlparse.parse_qs(compat_urllib_parse_urlparse(url).fragment).items()} + + +class PanoptoIE(PanoptoBaseIE): + _VALID_URL = PanoptoBaseIE.BASE_URL_RE + r'/Pages/(Viewer|Embed)\.aspx.*(?:\?|&)id=(?P<id>[a-f0-9-]+)' + _EMBED_REGEX = [rf'<iframe[^>]+src=["\'](?P<url>{PanoptoBaseIE.BASE_URL_RE}/Pages/(Viewer|Embed|Sessions/List)\.aspx[^"\']+)'] + _TESTS = [ + { + 'url': 'https://demo.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=26b3ae9e-4a48-4dcc-96ba-0befba08a0fb', + 'info_dict': { + 'id': '26b3ae9e-4a48-4dcc-96ba-0befba08a0fb', + 'title': 'Panopto for Business - Use Cases', + 'timestamp': 1459184200, + 'thumbnail': r're:https://demo\.hosted\.panopto\.com/.+', + 'upload_date': '20160328', + 'ext': 'mp4', + 'cast': [], + 'chapters': [], + 'duration': 88.17099999999999, + 'average_rating': int, + 'uploader_id': '2db6b718-47a0-4b0b-9e17-ab0b00f42b1e', + 'channel_id': 'e4c6a2fc-1214-4ca0-8fb7-aef2e29ff63a', + 'channel': 'Showcase Videos' + }, + }, + { + 'url': 'https://demo.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=ed01b077-c9e5-4c7b-b8ff-15fa306d7a59', + 'info_dict': { + 'id': 'ed01b077-c9e5-4c7b-b8ff-15fa306d7a59', + 'title': 'Overcoming Top 4 Challenges of Enterprise Video', + 'uploader': 'Panopto Support', + 'timestamp': 1449409251, + 'thumbnail': r're:https://demo\.hosted\.panopto\.com/.+', + 'upload_date': '20151206', + 'ext': 'mp4', + 'chapters': 'count:12', + 'cast': ['Panopto Support'], + 'uploader_id': 'a96d1a31-b4de-489b-9eee-b4a5b414372c', + 'average_rating': int, + 'description': 'md5:4391837802b3fc856dadf630c4b375d1', + 'duration': 1088.2659999999998, + 'channel_id': '9f3c1921-43bb-4bda-8b3a-b8d2f05a8546', + 'channel': 'Webcasts', + }, + }, + { + # Extra params in URL + 'url': 'https://howtovideos.hosted.panopto.com/Panopto/Pages/Viewer.aspx?randomparam=thisisnotreal&id=5fa74e93-3d87-4694-b60e-aaa4012214ed&advance=true', + 'info_dict': { + 'id': '5fa74e93-3d87-4694-b60e-aaa4012214ed', + 'ext': 'mp4', + 'duration': 129.513, + 'cast': ['Kathryn Kelly'], + 'uploader_id': '316a0a58-7fa2-4cd9-be1c-64270d284a56', + 'timestamp': 1569845768, + 'tags': ['Viewer', 'Enterprise'], + 'chapters': [], + 'upload_date': '20190930', + 'thumbnail': r're:https://howtovideos\.hosted\.panopto\.com/.+', + 'description': 'md5:2d844aaa1b1a14ad0e2601a0993b431f', + 'title': 'Getting Started: View a Video', + 'average_rating': int, + 'uploader': 'Kathryn Kelly', + 'channel_id': 'fb93bc3c-6750-4b80-a05b-a921013735d3', + 'channel': 'Getting Started', + } + }, + { + # Does not allow normal Viewer.aspx. AUDIO livestream has no url, so should be skipped and only give one stream. 
+ 'url': 'https://unisa.au.panopto.com/Panopto/Pages/Embed.aspx?id=9d9a0fa3-e99a-4ebd-a281-aac2017f4da4', + 'info_dict': { + 'id': '9d9a0fa3-e99a-4ebd-a281-aac2017f4da4', + 'ext': 'mp4', + 'cast': ['LTS CLI Script'], + 'chapters': [], + 'duration': 2178.45, + 'description': 'md5:ee5cf653919f55b72bce2dbcf829c9fa', + 'channel_id': 'b23e673f-c287-4cb1-8344-aae9005a69f8', + 'average_rating': int, + 'uploader_id': '38377323-6a23-41e2-9ff6-a8e8004bf6f7', + 'uploader': 'LTS CLI Script', + 'timestamp': 1572458134, + 'title': 'WW2 Vets Interview 3 Ronald Stanley George', + 'thumbnail': r're:https://unisa\.au\.panopto\.com/.+', + 'channel': 'World War II Veteran Interviews', + 'upload_date': '20191030', + }, + }, + { + # Slides/storyboard + 'url': 'https://demo.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=a7f12f1d-3872-4310-84b0-f8d8ab15326b', + 'info_dict': { + 'id': 'a7f12f1d-3872-4310-84b0-f8d8ab15326b', + 'ext': 'mhtml', + 'timestamp': 1448798857, + 'duration': 4712.681, + 'title': 'Cache Memory - CompSci 15-213, Lecture 12', + 'channel_id': 'e4c6a2fc-1214-4ca0-8fb7-aef2e29ff63a', + 'uploader_id': 'a96d1a31-b4de-489b-9eee-b4a5b414372c', + 'upload_date': '20151129', + 'average_rating': 0, + 'uploader': 'Panopto Support', + 'channel': 'Showcase Videos', + 'description': 'md5:55e51d54233ddb0e6c2ed388ca73822c', + 'cast': ['ISR Videographer', 'Panopto Support'], + 'chapters': 'count:28', + 'thumbnail': r're:https://demo\.hosted\.panopto\.com/.+', + }, + 'params': {'format': 'mhtml', 'skip_download': True} + }, + { + 'url': 'https://na-training-1.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=8285224a-9a2b-4957-84f2-acb0000c4ea9', + 'info_dict': { + 'id': '8285224a-9a2b-4957-84f2-acb0000c4ea9', + 'ext': 'mp4', + 'chapters': [], + 'title': 'Company Policy', + 'average_rating': 0, + 'timestamp': 1615058901, + 'channel': 'Human Resources', + 'tags': ['HumanResources'], + 'duration': 1604.243, + 'thumbnail': r're:https://na-training-1\.hosted\.panopto\.com/.+', + 'uploader_id': '8e8ba0a3-424f-40df-a4f1-ab3a01375103', + 'uploader': 'Cait M.', + 'upload_date': '20210306', + 'cast': ['Cait M.'], + 'subtitles': {'en-US': [{'ext': 'srt', 'data': 'md5:a3f4d25963fdeace838f327097c13265'}], + 'es-ES': [{'ext': 'srt', 'data': 'md5:57e9dad365fd0fbaf0468eac4949f189'}]}, + }, + 'params': {'writesubtitles': True, 'skip_download': True} + }, { + # On Panopto there are two subs: "Default" and en-US. en-US is blank and should be skipped. 
+ 'url': 'https://na-training-1.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=940cbd41-f616-4a45-b13e-aaf1000c915b', + 'info_dict': { + 'id': '940cbd41-f616-4a45-b13e-aaf1000c915b', + 'ext': 'mp4', + 'subtitles': 'count:1', + 'title': 'HR Benefits Review Meeting*', + 'cast': ['Panopto Support'], + 'chapters': [], + 'timestamp': 1575024251, + 'thumbnail': r're:https://na-training-1\.hosted\.panopto\.com/.+', + 'channel': 'Zoom', + 'description': 'md5:04f90a9c2c68b7828144abfb170f0106', + 'uploader': 'Panopto Support', + 'average_rating': 0, + 'duration': 409.34499999999997, + 'uploader_id': 'b6ac04ad-38b8-4724-a004-a851004ea3df', + 'upload_date': '20191129', + + }, + 'params': {'writesubtitles': True, 'skip_download': True} + }, + { + 'url': 'https://ucc.cloud.panopto.eu/Panopto/Pages/Viewer.aspx?id=0e8484a4-4ceb-4d98-a63f-ac0200b455cb', + 'only_matching': True + }, + { + 'url': 'https://brown.hosted.panopto.com/Panopto/Pages/Embed.aspx?id=0b3ff73b-36a0-46c5-8455-aadf010a3638', + 'only_matching': True + }, + ] + + @classmethod + def suitable(cls, url): + return False if PanoptoPlaylistIE.suitable(url) else super().suitable(url) + + def _mark_watched(self, base_url, video_id, delivery_info): + duration = traverse_obj(delivery_info, ('Delivery', 'Duration'), expected_type=float) + invocation_id = delivery_info.get('InvocationId') + stream_id = traverse_obj(delivery_info, ('Delivery', 'Streams', ..., 'PublicID'), get_all=False, expected_type=str) + if invocation_id and stream_id and duration: + timestamp_str = f'/Date({calendar.timegm(datetime.now(timezone.utc).timetuple())}000)/' + data = { + 'streamRequests': [ + { + 'ClientTimeStamp': timestamp_str, + 'ID': 0, + 'InvocationID': invocation_id, + 'PlaybackSpeed': 1, + 'SecondsListened': duration - 1, + 'SecondsRejected': 0, + 'StartPosition': 0, + 'StartReason': 2, + 'StopReason': None, + 'StreamID': stream_id, + 'TimeStamp': timestamp_str, + 'UpdatesRejected': 0 + }, + ]} + + self._download_webpage( + base_url + '/Services/Analytics.svc/AddStreamRequests', video_id, + fatal=False, data=json.dumps(data).encode('utf8'), headers={'content-type': 'application/json'}, + note='Marking watched', errnote='Unable to mark watched') + + @staticmethod + def _extract_chapters(timestamps): + chapters = [] + for timestamp in timestamps or []: + caption = timestamp.get('Caption') + start, duration = int_or_none(timestamp.get('Time')), int_or_none(timestamp.get('Duration')) + if not caption or start is None or duration is None: + continue + chapters.append({ + 'start_time': start, + 'end_time': start + duration, + 'title': caption + }) + return chapters + + @staticmethod + def _extract_mhtml_formats(base_url, timestamps): + image_frags = {} + for timestamp in timestamps or []: + duration = timestamp.get('Duration') + obj_id, obj_sn = timestamp.get('ObjectIdentifier'), timestamp.get('ObjectSequenceNumber'), + if timestamp.get('EventTargetType') == 'PowerPoint' and obj_id is not None and obj_sn is not None: + image_frags.setdefault('slides', []).append({ + 'url': base_url + f'/Pages/Viewer/Image.aspx?id={obj_id}&number={obj_sn}', + 'duration': duration + }) + + obj_pid, session_id, abs_time = timestamp.get('ObjectPublicIdentifier'), timestamp.get('SessionID'), timestamp.get('AbsoluteTime') + if None not in (obj_pid, session_id, abs_time): + image_frags.setdefault('chapter', []).append({ + 'url': base_url + f'/Pages/Viewer/Thumb.aspx?eventTargetPID={obj_pid}&sessionPID={session_id}&number={obj_sn}&isPrimary=false&absoluteTime={abs_time}', + 'duration': 
duration, + }) + for name, fragments in image_frags.items(): + yield { + 'format_id': name, + 'ext': 'mhtml', + 'protocol': 'mhtml', + 'acodec': 'none', + 'vcodec': 'none', + 'url': 'about:invalid', + 'fragments': fragments + } + + @staticmethod + def _json2srt(data, delivery): + def _gen_lines(): + for i, line in enumerate(data): + start_time = line['Time'] + duration = line.get('Duration') + if duration: + end_time = start_time + duration + else: + end_time = traverse_obj(data, (i + 1, 'Time')) or delivery['Duration'] + yield f'{i + 1}\n{srt_subtitles_timecode(start_time)} --> {srt_subtitles_timecode(end_time)}\n{line["Caption"]}' + return '\n\n'.join(_gen_lines()) + + def _get_subtitles(self, base_url, video_id, delivery): + subtitles = {} + for lang in delivery.get('AvailableLanguages') or []: + response = self._call_api( + base_url, '/Pages/Viewer/DeliveryInfo.aspx', video_id, fatal=False, + note='Downloading captions JSON metadata', query={ + 'deliveryId': video_id, + 'getCaptions': True, + 'language': str(lang), + 'responseType': 'json' + } + ) + if not isinstance(response, list): + continue + subtitles.setdefault(self._SUB_LANG_MAPPING.get(lang) or 'default', []).append({ + 'ext': 'srt', + 'data': self._json2srt(response, delivery), + }) + return subtitles + + def _extract_streams_formats_and_subtitles(self, video_id, streams, **fmt_kwargs): + formats = [] + subtitles = {} + for stream in streams or []: + stream_formats = [] + http_stream_url = stream.get('StreamHttpUrl') + stream_url = stream.get('StreamUrl') + + if http_stream_url: + stream_formats.append({'url': http_stream_url}) + + if stream_url: + media_type = stream.get('ViewerMediaFileTypeName') + if media_type in ('hls', ): + m3u8_formats, stream_subtitles = self._extract_m3u8_formats_and_subtitles(stream_url, video_id) + stream_formats.extend(m3u8_formats) + subtitles = self._merge_subtitles(subtitles, stream_subtitles) + else: + stream_formats.append({ + 'url': stream_url + }) + for fmt in stream_formats: + fmt.update({ + 'format_note': stream.get('Tag'), + **fmt_kwargs + }) + formats.extend(stream_formats) + + return formats, subtitles + + def _real_extract(self, url): + base_url, video_id = self._match_valid_url(url).group('base_url', 'id') + delivery_info = self._call_api( + base_url, '/Pages/Viewer/DeliveryInfo.aspx', video_id, + query={ + 'deliveryId': video_id, + 'invocationId': '', + 'isLiveNotes': 'false', + 'refreshAuthCookie': 'true', + 'isActiveBroadcast': 'false', + 'isEditing': 'false', + 'isKollectiveAgentInstalled': 'false', + 'isEmbed': 'false', + 'responseType': 'json', + } + ) + + delivery = delivery_info['Delivery'] + session_start_time = int_or_none(delivery.get('SessionStartTime')) + timestamps = delivery.get('Timestamps') + + # Podcast stream is usually the combined streams. We will prefer that by default. 
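+        # (hedged note) The preference is expressed through format fields, not
+        # list order: the per-source streams are requested with preference=-10
+        # below, while the podcast formats keep the default preference, so the
+        # format sorter should rank the combined podcast rendition first.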
+ podcast_formats, podcast_subtitles = self._extract_streams_formats_and_subtitles( + video_id, delivery.get('PodcastStreams'), format_note='PODCAST') + + streams_formats, streams_subtitles = self._extract_streams_formats_and_subtitles( + video_id, delivery.get('Streams'), preference=-10) + + formats = podcast_formats + streams_formats + formats.extend(self._extract_mhtml_formats(base_url, timestamps)) + subtitles = self._merge_subtitles( + podcast_subtitles, streams_subtitles, self.extract_subtitles(base_url, video_id, delivery)) + + self.mark_watched(base_url, video_id, delivery_info) + + return { + 'id': video_id, + 'title': delivery.get('SessionName'), + 'cast': traverse_obj(delivery, ('Contributors', ..., 'DisplayName'), expected_type=lambda x: x or None), + 'timestamp': session_start_time - 11640000000 if session_start_time else None, + 'duration': delivery.get('Duration'), + 'thumbnail': base_url + f'/Services/FrameGrabber.svc/FrameRedirect?objectId={video_id}&mode=Delivery&random={random()}', + 'average_rating': delivery.get('AverageRating'), + 'chapters': self._extract_chapters(timestamps), + 'uploader': delivery.get('OwnerDisplayName') or None, + 'uploader_id': delivery.get('OwnerId'), + 'description': delivery.get('SessionAbstract'), + 'tags': traverse_obj(delivery, ('Tags', ..., 'Content')), + 'channel_id': delivery.get('SessionGroupPublicID'), + 'channel': traverse_obj(delivery, 'SessionGroupLongName', 'SessionGroupShortName', get_all=False), + 'formats': formats, + 'subtitles': subtitles + } + + +class PanoptoPlaylistIE(PanoptoBaseIE): + _VALID_URL = PanoptoBaseIE.BASE_URL_RE + r'/Pages/(Viewer|Embed)\.aspx.*(?:\?|&)pid=(?P<id>[a-f0-9-]+)' + _TESTS = [ + { + 'url': 'https://howtovideos.hosted.panopto.com/Panopto/Pages/Viewer.aspx?pid=f3b39fcf-882f-4849-93d6-a9f401236d36&id=5fa74e93-3d87-4694-b60e-aaa4012214ed&advance=true', + 'info_dict': { + 'title': 'Featured Video Tutorials', + 'id': 'f3b39fcf-882f-4849-93d6-a9f401236d36', + 'description': '', + }, + 'playlist_mincount': 36 + }, + { + 'url': 'https://utsa.hosted.panopto.com/Panopto/Pages/Viewer.aspx?pid=e2900555-3ad4-4bdb-854d-ad2401686190', + 'info_dict': { + 'title': 'Library Website Introduction Playlist', + 'id': 'e2900555-3ad4-4bdb-854d-ad2401686190', + 'description': 'md5:f958bca50a1cbda15fdc1e20d32b3ecb', + }, + 'playlist_mincount': 4 + }, + + ] + + def _entries(self, base_url, playlist_id, session_list_id): + session_list_info = self._call_api( + base_url, f'/Api/SessionLists/{session_list_id}?collections[0].maxCount=500&collections[0].name=items', playlist_id) + + items = session_list_info['Items'] + for item in items: + if item.get('TypeName') != 'Session': + self.report_warning('Got an item in the playlist that is not a Session' + bug_reports_message(), only_once=True) + continue + yield { + '_type': 'url', + 'id': item.get('Id'), + 'url': item.get('ViewerUri'), + 'title': item.get('Name'), + 'description': item.get('Description'), + 'duration': item.get('Duration'), + 'channel': traverse_obj(item, ('Parent', 'Name')), + 'channel_id': traverse_obj(item, ('Parent', 'Id')) + } + + def _real_extract(self, url): + base_url, playlist_id = self._match_valid_url(url).group('base_url', 'id') + + video_id = get_first(parse_qs(url), 'id') + if video_id: + if self.get_param('noplaylist'): + self.to_screen('Downloading just video %s because of --no-playlist' % video_id) + return self.url_result(base_url + f'/Pages/Viewer.aspx?id={video_id}', ie_key=PanoptoIE.ie_key(), video_id=video_id) + else: + 
self.to_screen(f'Downloading playlist {playlist_id}; add --no-playlist to just download video {video_id}') + + playlist_info = self._call_api(base_url, f'/Api/Playlists/{playlist_id}', playlist_id) + return self.playlist_result( + self._entries(base_url, playlist_id, playlist_info['SessionListId']), + playlist_id=playlist_id, playlist_title=playlist_info.get('Name'), + playlist_description=playlist_info.get('Description')) + + +class PanoptoListIE(PanoptoBaseIE): + _VALID_URL = PanoptoBaseIE.BASE_URL_RE + r'/Pages/Sessions/List\.aspx' + _PAGE_SIZE = 250 + _TESTS = [ + { + 'url': 'https://demo.hosted.panopto.com/Panopto/Pages/Sessions/List.aspx#folderID=%22e4c6a2fc-1214-4ca0-8fb7-aef2e29ff63a%22', + 'info_dict': { + 'id': 'e4c6a2fc-1214-4ca0-8fb7-aef2e29ff63a', + 'title': 'Showcase Videos' + }, + 'playlist_mincount': 140 + + }, + { + 'url': 'https://demo.hosted.panopto.com/Panopto/Pages/Sessions/List.aspx#view=2&maxResults=250', + 'info_dict': { + 'id': 'panopto_list', + 'title': 'panopto_list' + }, + 'playlist_mincount': 300 + }, + { + # Folder that contains 8 folders and a playlist + 'url': 'https://howtovideos.hosted.panopto.com/Panopto/Pages/Sessions/List.aspx?noredirect=true#folderID=%224b9de7ae-0080-4158-8496-a9ba01692c2e%22', + 'info_dict': { + 'id': '4b9de7ae-0080-4158-8496-a9ba01692c2e', + 'title': 'Video Tutorials' + }, + 'playlist_mincount': 9 + } + + ] + + def _fetch_page(self, base_url, query_params, display_id, page): + + params = { + 'sortColumn': 1, + 'getFolderData': True, + 'includePlaylists': True, + **query_params, + 'page': page, + 'maxResults': self._PAGE_SIZE, + } + + response = self._call_api( + base_url, '/Services/Data.svc/GetSessions', f'{display_id} page {page+1}', + data={'queryParameters': params}, fatal=False) + + for result in get_first(response, 'Results', default=[]): + # This could be a video, playlist (or maybe something else) + item_id = result.get('DeliveryID') + yield { + '_type': 'url', + 'id': item_id, + 'title': result.get('SessionName'), + 'url': traverse_obj(result, 'ViewerUrl', 'EmbedUrl', get_all=False) or (base_url + f'/Pages/Viewer.aspx?id={item_id}'), + 'duration': result.get('Duration'), + 'channel': result.get('FolderName'), + 'channel_id': result.get('FolderID'), + } + + for folder in get_first(response, 'Subfolders', default=[]): + folder_id = folder.get('ID') + yield self.url_result( + base_url + f'/Pages/Sessions/List.aspx#folderID="{folder_id}"', + ie_key=PanoptoListIE.ie_key(), video_id=folder_id, title=folder.get('Name')) + + def _extract_folder_metadata(self, base_url, folder_id): + response = self._call_api( + base_url, '/Services/Data.svc/GetFolderInfo', folder_id, + data={'folderID': folder_id}, fatal=False) + return { + 'title': get_first(response, 'Name') + } + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + base_url = mobj.group('base_url') + + query_params = self._parse_fragment(url) + folder_id, display_id = query_params.get('folderID'), 'panopto_list' + + if query_params.get('isSubscriptionsPage'): + display_id = 'subscriptions' + if not query_params.get('subscribableTypes'): + query_params['subscribableTypes'] = [0, 1, 2] + elif query_params.get('isSharedWithMe'): + display_id = 'sharedwithme' + elif folder_id: + display_id = folder_id + + query = query_params.get('query') + if query: + display_id += f': query "{query}"' + + info = { + '_type': 'playlist', + 'id': display_id, + 'title': display_id, + } + if folder_id: + info.update(self._extract_folder_metadata(base_url, folder_id)) + + 
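+        # (descriptive note) The OnDemandPagedList below keeps the listing
+        # lazy: _fetch_page runs only for the pages that are actually
+        # consumed, so e.g. `--playlist-items 1-10` should need just one
+        # GetSessions request even for folders with thousands of entries.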
info['entries'] = OnDemandPagedList( + functools.partial(self._fetch_page, base_url, query_params, display_id), self._PAGE_SIZE) + + return info diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/paramountplus.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/paramountplus.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/paramountplus.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/paramountplus.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/parler.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/parler.py new file mode 100644 index 0000000..2af805e --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/parler.py @@ -0,0 +1,91 @@ +import functools + +from .common import InfoExtractor +from .youtube import YoutubeIE +from ..utils import ( + clean_html, + int_or_none, + strip_or_none, + traverse_obj, + unified_timestamp, + urljoin, +) + + +class ParlerIE(InfoExtractor): + IE_DESC = 'Posts on parler.com' + _VALID_URL = r'https://parler\.com/feed/(?P<id>[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12})' + _TESTS = [ + { + 'url': 'https://parler.com/feed/df79fdba-07cc-48fe-b085-3293897520d7', + 'md5': '16e0f447bf186bb3cf64de5bbbf4d22d', + 'info_dict': { + 'id': 'df79fdba-07cc-48fe-b085-3293897520d7', + 'ext': 'mp4', + 'thumbnail': 'https://bl-images.parler.com/videos/6ce7cdf3-a27a-4d72-bf9c-d3e17ce39a66/thumbnail.jpeg', + 'title': 'Parler video #df79fdba-07cc-48fe-b085-3293897520d7', + 'description': 'md5:6f220bde2df4a97cbb89ac11f1fd8197', + 'timestamp': 1659785481, + 'upload_date': '20220806', + 'uploader': 'Tulsi Gabbard', + 'uploader_id': 'TulsiGabbard', + 'uploader_url': 'https://parler.com/TulsiGabbard', + 'view_count': int, + 'comment_count': int, + 'repost_count': int, + }, + }, + { + 'url': 'https://parler.com/feed/f23b85c1-6558-470f-b9ff-02c145f28da5', + 'md5': 'eaba1ff4a10fe281f5ce74e930ab2cb4', + 'info_dict': { + 'id': 'r5vkSaz8PxQ', + 'ext': 'mp4', + 'live_status': 'not_live', + 'comment_count': int, + 'duration': 1267, + 'like_count': int, + 'channel_follower_count': int, + 'channel_id': 'UCox6YeMSY1PQInbCtTaZj_w', + 'upload_date': '20220716', + 'thumbnail': 'https://i.ytimg.com/vi/r5vkSaz8PxQ/maxresdefault.jpg', + 'tags': 'count:17', + 'availability': 'public', + 'categories': ['Entertainment'], + 'playable_in_embed': True, + 'channel': 'Who Knows What! With Mahesh & Friends', + 'title': 'Tom MacDonald Names Reaction', + 'uploader': 'Who Knows What! 
With Mahesh & Friends', + 'uploader_id': '@maheshchookolingo', + 'age_limit': 0, + 'description': 'md5:33c21f0d35ae6dc2edf3007d6696baea', + 'channel_url': 'https://www.youtube.com/channel/UCox6YeMSY1PQInbCtTaZj_w', + 'view_count': int, + 'uploader_url': 'http://www.youtube.com/@maheshchookolingo', + }, + }, + ] + + def _real_extract(self, url): + video_id = self._match_id(url) + data = self._download_json(f'https://api.parler.com/v0/public/parleys/{video_id}', + video_id)['data'] + if data.get('link'): + return self.url_result(data['link'], YoutubeIE) + + return { + 'id': video_id, + 'title': strip_or_none(data.get('title')) or '', + **traverse_obj(data, { + 'url': ('video', 'videoSrc'), + 'thumbnail': ('video', 'thumbnailUrl'), + 'description': ('body', {clean_html}), + 'timestamp': ('date_created', {unified_timestamp}), + 'uploader': ('user', 'name', {strip_or_none}), + 'uploader_id': ('user', 'username', {str}), + 'uploader_url': ('user', 'username', {functools.partial(urljoin, 'https://parler.com/')}), + 'view_count': ('views', {int_or_none}), + 'comment_count': ('total_comments', {int_or_none}), + 'repost_count': ('echos', {int_or_none}), + }) + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/parlview.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/parlview.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/parlview.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/parlview.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/patreon.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/patreon.py new file mode 100644 index 0000000..9316789 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/patreon.py @@ -0,0 +1,454 @@ +import itertools + +from .common import InfoExtractor +from .vimeo import VimeoIE +from ..compat import compat_urllib_parse_unquote +from ..networking.exceptions import HTTPError +from ..utils import ( + KNOWN_EXTENSIONS, + ExtractorError, + clean_html, + determine_ext, + int_or_none, + mimetype2ext, + parse_iso8601, + str_or_none, + traverse_obj, + try_get, + url_or_none, + urljoin, +) + + +class PatreonBaseIE(InfoExtractor): + USER_AGENT = 'Patreon/7.6.28 (Android; Android 11; Scale/2.10)' + + def _call_api(self, ep, item_id, query=None, headers=None, fatal=True, note=None): + if headers is None: + headers = {} + if 'User-Agent' not in headers: + headers['User-Agent'] = self.USER_AGENT + if query: + query.update({'json-api-version': 1.0}) + + try: + return self._download_json( + f'https://www.patreon.com/api/{ep}', + item_id, note='Downloading API JSON' if not note else note, + query=query, fatal=fatal, headers=headers) + except ExtractorError as e: + if not isinstance(e.cause, HTTPError) or mimetype2ext(e.cause.response.headers.get('Content-Type')) != 'json': + raise + err_json = self._parse_json(self._webpage_read_content(e.cause.response, None, item_id), item_id, fatal=False) + err_message = traverse_obj(err_json, ('errors', ..., 'detail'), get_all=False) + if err_message: + raise ExtractorError(f'Patreon said: {err_message}', expected=True) + raise + + +class PatreonIE(PatreonBaseIE): + _VALID_URL = r'https?://(?:www\.)?patreon\.com/(?:creation\?hid=|posts/(?:[\w-]+-)?)(?P<id>\d+)' + _TESTS = [{ + 'url': 'http://www.patreon.com/creation?hid=743933', + 'md5': 'e25505eec1053a6e6813b8ed369875cc', + 'info_dict': { + 'id': '743933', + 'ext': 'mp3', + 'title': 'Episode 166: David Smalley of Dogma Debate', + 'description': 
'md5:34d207dd29aa90e24f1b3f58841b81c7', + 'uploader': 'Cognitive Dissonance Podcast', + 'thumbnail': 're:^https?://.*$', + 'timestamp': 1406473987, + 'upload_date': '20140727', + 'uploader_id': '87145', + 'like_count': int, + 'comment_count': int, + 'uploader_url': 'https://www.patreon.com/dissonancepod', + 'channel_id': '80642', + 'channel_url': 'https://www.patreon.com/dissonancepod', + 'channel_follower_count': int, + }, + }, { + 'url': 'http://www.patreon.com/creation?hid=754133', + 'md5': '3eb09345bf44bf60451b8b0b81759d0a', + 'info_dict': { + 'id': '754133', + 'ext': 'mp3', + 'title': 'CD 167 Extra', + 'uploader': 'Cognitive Dissonance Podcast', + 'thumbnail': 're:^https?://.*$', + 'like_count': int, + 'comment_count': int, + 'uploader_url': 'https://www.patreon.com/dissonancepod', + }, + 'skip': 'Patron-only content', + }, { + 'url': 'https://www.patreon.com/creation?hid=1682498', + 'info_dict': { + 'id': 'SU4fj_aEMVw', + 'ext': 'mp4', + 'title': 'I\'m on Patreon!', + 'uploader': 'TraciJHines', + 'thumbnail': 're:^https?://.*$', + 'upload_date': '20150211', + 'description': 'md5:8af6425f50bd46fbf29f3db0fc3a8364', + 'uploader_id': 'TraciJHines', + 'categories': ['Entertainment'], + 'duration': 282, + 'view_count': int, + 'tags': 'count:39', + 'age_limit': 0, + 'channel': 'TraciJHines', + 'channel_url': 'https://www.youtube.com/channel/UCGLim4T2loE5rwCMdpCIPVg', + 'live_status': 'not_live', + 'like_count': int, + 'channel_id': 'UCGLim4T2loE5rwCMdpCIPVg', + 'availability': 'public', + 'channel_follower_count': int, + 'playable_in_embed': True, + 'uploader_url': 'http://www.youtube.com/user/TraciJHines', + 'comment_count': int, + }, + 'params': { + 'noplaylist': True, + 'skip_download': True, + } + }, { + 'url': 'https://www.patreon.com/posts/episode-166-of-743933', + 'only_matching': True, + }, { + 'url': 'https://www.patreon.com/posts/743933', + 'only_matching': True, + }, { + 'url': 'https://www.patreon.com/posts/kitchen-as-seen-51706779', + 'md5': '96656690071f6d64895866008484251b', + 'info_dict': { + 'id': '555089736', + 'ext': 'mp4', + 'title': 'KITCHEN AS SEEN ON DEEZ NUTS EXTENDED!', + 'uploader': 'Cold Ones', + 'thumbnail': 're:^https?://.*$', + 'upload_date': '20210526', + 'description': 'md5:557a409bd79d3898689419094934ba79', + 'uploader_id': '14936315', + }, + 'skip': 'Patron-only content' + }, { + # m3u8 video (https://github.com/yt-dlp/yt-dlp/issues/2277) + 'url': 'https://www.patreon.com/posts/video-sketchbook-32452882', + 'info_dict': { + 'id': '32452882', + 'ext': 'mp4', + 'comment_count': int, + 'uploader_id': '4301314', + 'like_count': int, + 'timestamp': 1576696962, + 'upload_date': '20191218', + 'thumbnail': r're:^https?://.*$', + 'uploader_url': 'https://www.patreon.com/loish', + 'description': 'md5:e2693e97ee299c8ece47ffdb67e7d9d2', + 'title': 'VIDEO // sketchbook flipthrough', + 'uploader': 'Loish ', + 'tags': ['sketchbook', 'video'], + 'channel_id': '1641751', + 'channel_url': 'https://www.patreon.com/loish', + 'channel_follower_count': int, + } + }, { + # bad videos under media (if media is included). 
Real one is under post_file + 'url': 'https://www.patreon.com/posts/premium-access-70282931', + 'info_dict': { + 'id': '70282931', + 'ext': 'mp4', + 'title': '[Premium Access + Uncut] The Office - 2x6 The Fight - Group Reaction', + 'channel_url': 'https://www.patreon.com/thenormies', + 'channel_id': '573397', + 'uploader_id': '2929435', + 'uploader': 'The Normies', + 'description': 'md5:79c9fd8778e2cef84049a94c058a5e23', + 'comment_count': int, + 'upload_date': '20220809', + 'thumbnail': r're:^https?://.*$', + 'channel_follower_count': int, + 'like_count': int, + 'timestamp': 1660052820, + 'tags': ['The Office', 'early access', 'uncut'], + 'uploader_url': 'https://www.patreon.com/thenormies', + }, + 'skip': 'Patron-only content', + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + post = self._call_api( + f'posts/{video_id}', video_id, query={ + 'fields[media]': 'download_url,mimetype,size_bytes', + 'fields[post]': 'comment_count,content,embed,image,like_count,post_file,published_at,title,current_user_can_view', + 'fields[user]': 'full_name,url', + 'fields[post_tag]': 'value', + 'fields[campaign]': 'url,name,patron_count', + 'json-api-use-default-includes': 'false', + 'include': 'audio,user,user_defined_tags,campaign,attachments_media', + }) + attributes = post['data']['attributes'] + title = attributes['title'].strip() + image = attributes.get('image') or {} + info = { + 'id': video_id, + 'title': title, + 'description': clean_html(attributes.get('content')), + 'thumbnail': image.get('large_url') or image.get('url'), + 'timestamp': parse_iso8601(attributes.get('published_at')), + 'like_count': int_or_none(attributes.get('like_count')), + 'comment_count': int_or_none(attributes.get('comment_count')), + } + can_view_post = traverse_obj(attributes, 'current_user_can_view') + if can_view_post and info['comment_count']: + info['__post_extractor'] = self.extract_comments(video_id) + + for i in post.get('included', []): + i_type = i.get('type') + if i_type == 'media': + media_attributes = i.get('attributes') or {} + download_url = media_attributes.get('download_url') + ext = mimetype2ext(media_attributes.get('mimetype')) + + # if size_bytes is None, this media file is likely unavailable + # See: https://github.com/yt-dlp/yt-dlp/issues/4608 + size_bytes = int_or_none(media_attributes.get('size_bytes')) + if download_url and ext in KNOWN_EXTENSIONS and size_bytes is not None: + # XXX: what happens if there are multiple attachments? 
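+ # As implemented, the first media item that passes these checks wins and
+ # the function returns immediately; any further attachments are ignored.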
+ return { + **info, + 'ext': ext, + 'filesize': size_bytes, + 'url': download_url, + } + elif i_type == 'user': + user_attributes = i.get('attributes') + if user_attributes: + info.update({ + 'uploader': user_attributes.get('full_name'), + 'uploader_id': str_or_none(i.get('id')), + 'uploader_url': user_attributes.get('url'), + }) + + elif i_type == 'post_tag': + info.setdefault('tags', []).append(traverse_obj(i, ('attributes', 'value'))) + + elif i_type == 'campaign': + info.update({ + 'channel': traverse_obj(i, ('attributes', 'title')), + 'channel_id': str_or_none(i.get('id')), + 'channel_url': traverse_obj(i, ('attributes', 'url')), + 'channel_follower_count': int_or_none(traverse_obj(i, ('attributes', 'patron_count'))), + }) + + # handle Vimeo embeds + if try_get(attributes, lambda x: x['embed']['provider']) == 'Vimeo': + embed_html = try_get(attributes, lambda x: x['embed']['html']) + v_url = url_or_none(compat_urllib_parse_unquote( + self._search_regex(r'(https(?:%3A%2F%2F|://)player\.vimeo\.com.+app_id(?:=|%3D)+\d+)', embed_html, 'vimeo url', fatal=False))) + if v_url: + return { + **info, + '_type': 'url_transparent', + 'url': VimeoIE._smuggle_referrer(v_url, 'https://patreon.com'), + 'ie_key': 'Vimeo', + } + + embed_url = try_get(attributes, lambda x: x['embed']['url']) + if embed_url: + return { + **info, + '_type': 'url', + 'url': embed_url, + } + + post_file = traverse_obj(attributes, 'post_file') + if post_file: + name = post_file.get('name') + ext = determine_ext(name) + if ext in KNOWN_EXTENSIONS: + return { + **info, + 'ext': ext, + 'url': post_file['url'], + } + elif name == 'video': + formats, subtitles = self._extract_m3u8_formats_and_subtitles(post_file['url'], video_id) + return { + **info, + 'formats': formats, + 'subtitles': subtitles, + } + + if can_view_post is False: + self.raise_no_formats('You do not have access to this post', video_id=video_id, expected=True) + else: + self.raise_no_formats('No supported media found in this post', video_id=video_id, expected=True) + return info + + def _get_comments(self, post_id): + cursor = None + count = 0 + params = { + 'page[count]': 50, + 'include': 'parent.commenter.campaign,parent.post.user,parent.post.campaign.creator,parent.replies.parent,parent.replies.commenter.campaign,parent.replies.post.user,parent.replies.post.campaign.creator,commenter.campaign,post.user,post.campaign.creator,replies.parent,replies.commenter.campaign,replies.post.user,replies.post.campaign.creator,on_behalf_of_campaign', + 'fields[comment]': 'body,created,is_by_creator', + 'fields[user]': 'image_url,full_name,url', + 'filter[flair]': 'image_tiny_url,name', + 'sort': '-created', + 'json-api-version': 1.0, + 'json-api-use-default-includes': 'false', + } + + for page in itertools.count(1): + + params.update({'page[cursor]': cursor} if cursor else {}) + response = self._call_api( + f'posts/{post_id}/comments', post_id, query=params, note='Downloading comments page %d' % page) + + cursor = None + for comment in traverse_obj(response, (('data', ('included', lambda _, v: v['type'] == 'comment')), ...)): + count += 1 + comment_id = comment.get('id') + attributes = comment.get('attributes') or {} + if comment_id is None: + continue + author_id = traverse_obj(comment, ('relationships', 'commenter', 'data', 'id')) + author_info = traverse_obj( + response, ('included', lambda _, v: v['id'] == author_id and v['type'] == 'user', 'attributes'), + get_all=False, expected_type=dict, default={}) + + yield { + 'id': comment_id, + 'text': attributes.get('body'), + 
'timestamp': parse_iso8601(attributes.get('created')), + 'parent': traverse_obj(comment, ('relationships', 'parent', 'data', 'id'), default='root'), + 'author_is_uploader': attributes.get('is_by_creator'), + 'author_id': author_id, + 'author': author_info.get('full_name'), + 'author_thumbnail': author_info.get('image_url'), + } + + if count < traverse_obj(response, ('meta', 'count')): + cursor = traverse_obj(response, ('data', -1, 'id')) + + if cursor is None: + break + + +class PatreonCampaignIE(PatreonBaseIE): + + _VALID_URL = r'https?://(?:www\.)?patreon\.com/(?!rss)(?:(?:m/(?P<campaign_id>\d+))|(?P<vanity>[-\w]+))' + _TESTS = [{ + 'url': 'https://www.patreon.com/dissonancepod/', + 'info_dict': { + 'title': 'Cognitive Dissonance Podcast', + 'channel_url': 'https://www.patreon.com/dissonancepod', + 'id': '80642', + 'description': 'md5:eb2fa8b83da7ab887adeac34da6b7af7', + 'channel_id': '80642', + 'channel': 'Cognitive Dissonance Podcast', + 'age_limit': 0, + 'channel_follower_count': int, + 'uploader_id': '87145', + 'uploader_url': 'https://www.patreon.com/dissonancepod', + 'uploader': 'Cognitive Dissonance Podcast', + 'thumbnail': r're:^https?://.*$', + }, + 'playlist_mincount': 68, + }, { + 'url': 'https://www.patreon.com/m/4767637/posts', + 'info_dict': { + 'title': 'Not Just Bikes', + 'channel_follower_count': int, + 'id': '4767637', + 'channel_id': '4767637', + 'channel_url': 'https://www.patreon.com/notjustbikes', + 'description': 'md5:595c6e7dca76ae615b1d38c298a287a1', + 'age_limit': 0, + 'channel': 'Not Just Bikes', + 'uploader_url': 'https://www.patreon.com/notjustbikes', + 'uploader': 'Not Just Bikes', + 'uploader_id': '37306634', + 'thumbnail': r're:^https?://.*$', + }, + 'playlist_mincount': 71 + }, { + 'url': 'https://www.patreon.com/dissonancepod/posts', + 'only_matching': True + }, { + 'url': 'https://www.patreon.com/m/5932659', + 'only_matching': True + }] + + @classmethod + def suitable(cls, url): + return False if PatreonIE.suitable(url) else super(PatreonCampaignIE, cls).suitable(url) + + def _entries(self, campaign_id): + cursor = None + params = { + 'fields[post]': 'patreon_url,url', + 'filter[campaign_id]': campaign_id, + 'filter[is_draft]': 'false', + 'sort': '-published_at', + 'json-api-use-default-includes': 'false', + } + + for page in itertools.count(1): + + params.update({'page[cursor]': cursor} if cursor else {}) + posts_json = self._call_api('posts', campaign_id, query=params, note='Downloading posts page %d' % page) + + cursor = traverse_obj(posts_json, ('meta', 'pagination', 'cursors', 'next')) + for post_url in traverse_obj(posts_json, ('data', ..., 'attributes', 'patreon_url')): + yield self.url_result(urljoin('https://www.patreon.com/', post_url), PatreonIE) + + if cursor is None: + break + + def _real_extract(self, url): + + campaign_id, vanity = self._match_valid_url(url).group('campaign_id', 'vanity') + if campaign_id is None: + webpage = self._download_webpage(url, vanity, headers={'User-Agent': self.USER_AGENT}) + campaign_id = self._search_regex(r'https://www.patreon.com/api/campaigns/(\d+)/?', webpage, 'Campaign ID') + + params = { + 'json-api-use-default-includes': 'false', + 'fields[user]': 'full_name,url', + 'fields[campaign]': 'name,summary,url,patron_count,creation_count,is_nsfw,avatar_photo_url', + 'include': 'creator' + } + + campaign_response = self._call_api( + f'campaigns/{campaign_id}', campaign_id, + note='Downloading campaign info', fatal=False, + query=params) or {} + + campaign_info = campaign_response.get('data') or {} + 
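# The creator's user record comes back in 'included' (requested via
+ # include=creator); it is matched by type below rather than by position.
+ 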
channel_name = traverse_obj(campaign_info, ('attributes', 'name')) + user_info = traverse_obj( + campaign_response, ('included', lambda _, v: v['type'] == 'user'), + default={}, expected_type=dict, get_all=False) + + return { + '_type': 'playlist', + 'id': campaign_id, + 'title': channel_name, + 'entries': self._entries(campaign_id), + 'description': clean_html(traverse_obj(campaign_info, ('attributes', 'summary'))), + 'channel_url': traverse_obj(campaign_info, ('attributes', 'url')), + 'channel_follower_count': int_or_none(traverse_obj(campaign_info, ('attributes', 'patron_count'))), + 'channel_id': campaign_id, + 'channel': channel_name, + 'uploader_url': traverse_obj(user_info, ('attributes', 'url')), + 'uploader_id': str_or_none(user_info.get('id')), + 'uploader': traverse_obj(user_info, ('attributes', 'full_name')), + 'playlist_count': traverse_obj(campaign_info, ('attributes', 'creation_count')), + 'age_limit': 18 if traverse_obj(campaign_info, ('attributes', 'is_nsfw')) else 0, + 'thumbnail': url_or_none(traverse_obj(campaign_info, ('attributes', 'avatar_photo_url'))), + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/pbs.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pbs.py new file mode 100644 index 0000000..2bb2ea9 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/pbs.py @@ -0,0 +1,757 @@ +import re + +from .common import InfoExtractor +from ..compat import compat_str +from ..utils import ( + ExtractorError, + determine_ext, + int_or_none, + float_or_none, + js_to_json, + orderedSet, + strip_jsonp, + strip_or_none, + traverse_obj, + unified_strdate, + url_or_none, + US_RATINGS, +) + + +class PBSIE(InfoExtractor): + _STATIONS = ( + (r'(?:video|www|player)\.pbs\.org', 'PBS: Public Broadcasting Service'), # http://www.pbs.org/ + (r'video\.aptv\.org', 'APT - Alabama Public Television (WBIQ)'), # http://aptv.org/ + (r'video\.gpb\.org', 'GPB/Georgia Public Broadcasting (WGTV)'), # http://www.gpb.org/ + (r'video\.mpbonline\.org', 'Mississippi Public Broadcasting (WMPN)'), # http://www.mpbonline.org + (r'video\.wnpt\.org', 'Nashville Public Television (WNPT)'), # http://www.wnpt.org + (r'video\.wfsu\.org', 'WFSU-TV (WFSU)'), # http://wfsu.org/ + (r'video\.wsre\.org', 'WSRE (WSRE)'), # http://www.wsre.org + (r'video\.wtcitv\.org', 'WTCI (WTCI)'), # http://www.wtcitv.org + (r'video\.pba\.org', 'WPBA/Channel 30 (WPBA)'), # http://pba.org/ + (r'video\.alaskapublic\.org', 'Alaska Public Media (KAKM)'), # http://alaskapublic.org/kakm + # (r'kuac\.org', 'KUAC (KUAC)'), # http://kuac.org/kuac-tv/ + # (r'ktoo\.org', '360 North (KTOO)'), # http://www.ktoo.org/ + # (r'azpm\.org', 'KUAT 6 (KUAT)'), # http://www.azpm.org/ + (r'video\.azpbs\.org', 'Arizona PBS (KAET)'), # http://www.azpbs.org + (r'portal\.knme\.org', 'KNME-TV/Channel 5 (KNME)'), # http://www.newmexicopbs.org/ + (r'video\.vegaspbs\.org', 'Vegas PBS (KLVX)'), # http://vegaspbs.org/ + (r'watch\.aetn\.org', 'AETN/ARKANSAS ETV NETWORK (KETS)'), # http://www.aetn.org/ + (r'video\.ket\.org', 'KET (WKLE)'), # http://www.ket.org/ + (r'video\.wkno\.org', 'WKNO/Channel 10 (WKNO)'), # http://www.wkno.org/ + (r'video\.lpb\.org', 'LPB/LOUISIANA PUBLIC BROADCASTING (WLPB)'), # http://www.lpb.org/ + (r'videos\.oeta\.tv', 'OETA (KETA)'), # http://www.oeta.tv + (r'video\.optv\.org', 'Ozarks Public Television (KOZK)'), # http://www.optv.org/ + (r'watch\.wsiu\.org', 'WSIU Public Broadcasting (WSIU)'), # http://www.wsiu.org/ + (r'video\.keet\.org', 'KEET TV (KEET)'), # http://www.keet.org + 
(r'pbs\.kixe\.org', 'KIXE/Channel 9 (KIXE)'), # http://kixe.org/ + (r'video\.kpbs\.org', 'KPBS San Diego (KPBS)'), # http://www.kpbs.org/ + (r'video\.kqed\.org', 'KQED (KQED)'), # http://www.kqed.org + (r'vids\.kvie\.org', 'KVIE Public Television (KVIE)'), # http://www.kvie.org + (r'video\.pbssocal\.org', 'PBS SoCal/KOCE (KOCE)'), # http://www.pbssocal.org/ + (r'video\.valleypbs\.org', 'ValleyPBS (KVPT)'), # http://www.valleypbs.org/ + (r'video\.cptv\.org', 'CONNECTICUT PUBLIC TELEVISION (WEDH)'), # http://cptv.org + (r'watch\.knpb\.org', 'KNPB Channel 5 (KNPB)'), # http://www.knpb.org/ + (r'video\.soptv\.org', 'SOPTV (KSYS)'), # http://www.soptv.org + # (r'klcs\.org', 'KLCS/Channel 58 (KLCS)'), # http://www.klcs.org + # (r'krcb\.org', 'KRCB Television & Radio (KRCB)'), # http://www.krcb.org + # (r'kvcr\.org', 'KVCR TV/DT/FM :: Vision for the Future (KVCR)'), # http://kvcr.org + (r'video\.rmpbs\.org', 'Rocky Mountain PBS (KRMA)'), # http://www.rmpbs.org + (r'video\.kenw\.org', 'KENW-TV3 (KENW)'), # http://www.kenw.org + (r'video\.kued\.org', 'KUED Channel 7 (KUED)'), # http://www.kued.org + (r'video\.wyomingpbs\.org', 'Wyoming PBS (KCWC)'), # http://www.wyomingpbs.org + (r'video\.cpt12\.org', 'Colorado Public Television / KBDI 12 (KBDI)'), # http://www.cpt12.org/ + (r'video\.kbyueleven\.org', 'KBYU-TV (KBYU)'), # http://www.kbyutv.org/ + (r'video\.thirteen\.org', 'Thirteen/WNET New York (WNET)'), # http://www.thirteen.org + (r'video\.wgbh\.org', 'WGBH/Channel 2 (WGBH)'), # http://wgbh.org + (r'video\.wgby\.org', 'WGBY (WGBY)'), # http://www.wgby.org + (r'watch\.njtvonline\.org', 'NJTV Public Media NJ (WNJT)'), # http://www.njtvonline.org/ + # (r'ripbs\.org', 'Rhode Island PBS (WSBE)'), # http://www.ripbs.org/home/ + (r'watch\.wliw\.org', 'WLIW21 (WLIW)'), # http://www.wliw.org/ + (r'video\.mpt\.tv', 'mpt/Maryland Public Television (WMPB)'), # http://www.mpt.org + (r'watch\.weta\.org', 'WETA Television and Radio (WETA)'), # http://www.weta.org + (r'video\.whyy\.org', 'WHYY (WHYY)'), # http://www.whyy.org + (r'video\.wlvt\.org', 'PBS 39 (WLVT)'), # http://www.wlvt.org/ + (r'video\.wvpt\.net', 'WVPT - Your Source for PBS and More! 
(WVPT)'), # http://www.wvpt.net + (r'video\.whut\.org', 'Howard University Television (WHUT)'), # http://www.whut.org + (r'video\.wedu\.org', 'WEDU PBS (WEDU)'), # http://www.wedu.org + (r'video\.wgcu\.org', 'WGCU Public Media (WGCU)'), # http://www.wgcu.org/ + # (r'wjct\.org', 'WJCT Public Broadcasting (WJCT)'), # http://www.wjct.org + (r'video\.wpbt2\.org', 'WPBT2 (WPBT)'), # http://www.wpbt2.org + (r'video\.wucftv\.org', 'WUCF TV (WUCF)'), # http://wucftv.org + (r'video\.wuft\.org', 'WUFT/Channel 5 (WUFT)'), # http://www.wuft.org + (r'watch\.wxel\.org', 'WXEL/Channel 42 (WXEL)'), # http://www.wxel.org/home/ + (r'video\.wlrn\.org', 'WLRN/Channel 17 (WLRN)'), # http://www.wlrn.org/ + (r'video\.wusf\.usf\.edu', 'WUSF Public Broadcasting (WUSF)'), # http://wusf.org/ + (r'video\.scetv\.org', 'ETV (WRLK)'), # http://www.scetv.org + (r'video\.unctv\.org', 'UNC-TV (WUNC)'), # http://www.unctv.org/ + # (r'pbsguam\.org', 'PBS Guam (KGTF)'), # http://www.pbsguam.org/ + (r'video\.pbshawaii\.org', 'PBS Hawaii - Oceanic Cable Channel 10 (KHET)'), # http://www.pbshawaii.org/ + (r'video\.idahoptv\.org', 'Idaho Public Television (KAID)'), # http://idahoptv.org + (r'video\.ksps\.org', 'KSPS (KSPS)'), # http://www.ksps.org/home/ + (r'watch\.opb\.org', 'OPB (KOPB)'), # http://www.opb.org + (r'watch\.nwptv\.org', 'KWSU/Channel 10 & KTNW/Channel 31 (KWSU)'), # http://www.kwsu.org + (r'video\.will\.illinois\.edu', 'WILL-TV (WILL)'), # http://will.illinois.edu/ + (r'video\.networkknowledge\.tv', 'Network Knowledge - WSEC/Springfield (WSEC)'), # http://www.wsec.tv + (r'video\.wttw\.com', 'WTTW11 (WTTW)'), # http://www.wttw.com/ + # (r'wtvp\.org', 'WTVP & WTVP.org, Public Media for Central Illinois (WTVP)'), # http://www.wtvp.org/ + (r'video\.iptv\.org', 'Iowa Public Television/IPTV (KDIN)'), # http://www.iptv.org/ + (r'video\.ninenet\.org', 'Nine Network (KETC)'), # http://www.ninenet.org + (r'video\.wfwa\.org', 'PBS39 Fort Wayne (WFWA)'), # http://wfwa.org/ + (r'video\.wfyi\.org', 'WFYI Indianapolis (WFYI)'), # http://www.wfyi.org + (r'video\.mptv\.org', 'Milwaukee Public Television (WMVS)'), # http://www.mptv.org + (r'video\.wnin\.org', 'WNIN (WNIN)'), # http://www.wnin.org/ + (r'video\.wnit\.org', 'WNIT Public Television (WNIT)'), # http://www.wnit.org/ + (r'video\.wpt\.org', 'WPT (WPNE)'), # http://www.wpt.org/ + (r'video\.wvut\.org', 'WVUT/Channel 22 (WVUT)'), # http://wvut.org/ + (r'video\.weiu\.net', 'WEIU/Channel 51 (WEIU)'), # http://www.weiu.net + (r'video\.wqpt\.org', 'WQPT-TV (WQPT)'), # http://www.wqpt.org + (r'video\.wycc\.org', 'WYCC PBS Chicago (WYCC)'), # http://www.wycc.org + # (r'lakeshorepublicmedia\.org', 'Lakeshore Public Television (WYIN)'), # http://lakeshorepublicmedia.org/ + (r'video\.wipb\.org', 'WIPB-TV (WIPB)'), # http://wipb.org + (r'video\.indianapublicmedia\.org', 'WTIU (WTIU)'), # http://indianapublicmedia.org/tv/ + (r'watch\.cetconnect\.org', 'CET (WCET)'), # http://www.cetconnect.org + (r'video\.thinktv\.org', 'ThinkTVNetwork (WPTD)'), # http://www.thinktv.org + (r'video\.wbgu\.org', 'WBGU-TV (WBGU)'), # http://wbgu.org + (r'video\.wgvu\.org', 'WGVU TV (WGVU)'), # http://www.wgvu.org/ + (r'video\.netnebraska\.org', 'NET1 (KUON)'), # http://netnebraska.org + (r'video\.pioneer\.org', 'Pioneer Public Television (KWCM)'), # http://www.pioneer.org + (r'watch\.sdpb\.org', 'SDPB Television (KUSD)'), # http://www.sdpb.org + (r'video\.tpt\.org', 'TPT (KTCA)'), # http://www.tpt.org + (r'watch\.ksmq\.org', 'KSMQ (KSMQ)'), # http://www.ksmq.org/ + (r'watch\.kpts\.org', 'KPTS/Channel 8 
(KPTS)'), # http://www.kpts.org/ + (r'watch\.ktwu\.org', 'KTWU/Channel 11 (KTWU)'), # http://ktwu.org + # (r'shptv\.org', 'Smoky Hills Public Television (KOOD)'), # http://www.shptv.org + # (r'kcpt\.org', 'KCPT Kansas City Public Television (KCPT)'), # http://kcpt.org/ + # (r'blueridgepbs\.org', 'Blue Ridge PBS (WBRA)'), # http://www.blueridgepbs.org/ + (r'watch\.easttennesseepbs\.org', 'East Tennessee PBS (WSJK)'), # http://easttennesseepbs.org + (r'video\.wcte\.tv', 'WCTE-TV (WCTE)'), # http://www.wcte.org + (r'video\.wljt\.org', 'WLJT, Channel 11 (WLJT)'), # http://wljt.org/ + (r'video\.wosu\.org', 'WOSU TV (WOSU)'), # http://wosu.org/ + (r'video\.woub\.org', 'WOUB/WOUC (WOUB)'), # http://woub.org/tv/index.php?section=5 + (r'video\.wvpublic\.org', 'WVPB (WVPB)'), # http://wvpublic.org/ + (r'video\.wkyupbs\.org', 'WKYU-PBS (WKYU)'), # http://www.wkyupbs.org + # (r'wyes\.org', 'WYES-TV/New Orleans (WYES)'), # http://www.wyes.org + (r'video\.kera\.org', 'KERA 13 (KERA)'), # http://www.kera.org/ + (r'video\.mpbn\.net', 'MPBN (WCBB)'), # http://www.mpbn.net/ + (r'video\.mountainlake\.org', 'Mountain Lake PBS (WCFE)'), # http://www.mountainlake.org/ + (r'video\.nhptv\.org', 'NHPTV (WENH)'), # http://nhptv.org/ + (r'video\.vpt\.org', 'Vermont PBS (WETK)'), # http://www.vpt.org + (r'video\.witf\.org', 'witf (WITF)'), # http://www.witf.org + (r'watch\.wqed\.org', 'WQED Multimedia (WQED)'), # http://www.wqed.org/ + (r'video\.wmht\.org', 'WMHT Educational Telecommunications (WMHT)'), # http://www.wmht.org/home/ + (r'video\.deltabroadcasting\.org', 'Q-TV (WDCQ)'), # http://www.deltabroadcasting.org + (r'video\.dptv\.org', 'WTVS Detroit Public TV (WTVS)'), # http://www.dptv.org/ + (r'video\.wcmu\.org', 'CMU Public Television (WCMU)'), # http://www.wcmu.org + (r'video\.wkar\.org', 'WKAR-TV (WKAR)'), # http://wkar.org/ + (r'wnmuvideo\.nmu\.edu', 'WNMU-TV Public TV 13 (WNMU)'), # http://wnmutv.nmu.edu + (r'video\.wdse\.org', 'WDSE - WRPT (WDSE)'), # http://www.wdse.org/ + (r'video\.wgte\.org', 'WGTE TV (WGTE)'), # http://www.wgte.org + (r'video\.lptv\.org', 'Lakeland Public Television (KAWE)'), # http://www.lakelandptv.org + # (r'prairiepublic\.org', 'PRAIRIE PUBLIC (KFME)'), # http://www.prairiepublic.org/ + (r'video\.kmos\.org', 'KMOS-TV - Channels 6.1, 6.2 and 6.3 (KMOS)'), # http://www.kmos.org/ + (r'watch\.montanapbs\.org', 'MontanaPBS (KUSM)'), # http://montanapbs.org + (r'video\.krwg\.org', 'KRWG/Channel 22 (KRWG)'), # http://www.krwg.org + (r'video\.kacvtv\.org', 'KACV (KACV)'), # http://www.panhandlepbs.org/home/ + (r'video\.kcostv\.org', 'KCOS/Channel 13 (KCOS)'), # www.kcostv.org + (r'video\.wcny\.org', 'WCNY/Channel 24 (WCNY)'), # http://www.wcny.org + (r'video\.wned\.org', 'WNED (WNED)'), # http://www.wned.org/ + (r'watch\.wpbstv\.org', 'WPBS (WPBS)'), # http://www.wpbstv.org + (r'video\.wskg\.org', 'WSKG Public TV (WSKG)'), # http://wskg.org + (r'video\.wxxi\.org', 'WXXI (WXXI)'), # http://wxxi.org + (r'video\.wpsu\.org', 'WPSU (WPSU)'), # http://www.wpsu.org + # (r'wqln\.org', 'WQLN/Channel 54 (WQLN)'), # http://www.wqln.org + (r'on-demand\.wvia\.org', 'WVIA Public Media Studios (WVIA)'), # http://www.wvia.org/ + (r'video\.wtvi\.org', 'WTVI (WTVI)'), # http://www.wtvi.org/ + # (r'whro\.org', 'WHRO (WHRO)'), # http://whro.org + (r'video\.westernreservepublicmedia\.org', 'Western Reserve PBS (WNEO)'), # http://www.WesternReservePublicMedia.org/ + (r'video\.ideastream\.org', 'WVIZ/PBS ideastream (WVIZ)'), # http://www.wviz.org/ + (r'video\.kcts9\.org', 'KCTS 9 (KCTS)'), # http://kcts9.org/ 
+ (r'video\.basinpbs\.org', 'Basin PBS (KPBT)'), # http://www.basinpbs.org + (r'video\.houstonpbs\.org', 'KUHT / Channel 8 (KUHT)'), # http://www.houstonpublicmedia.org/ + # (r'tamu\.edu', 'KAMU - TV (KAMU)'), # http://KAMU.tamu.edu + # (r'kedt\.org', 'KEDT/Channel 16 (KEDT)'), # http://www.kedt.org + (r'video\.klrn\.org', 'KLRN (KLRN)'), # http://www.klrn.org + (r'video\.klru\.tv', 'KLRU (KLRU)'), # http://www.klru.org + # (r'kmbh\.org', 'KMBH-TV (KMBH)'), # http://www.kmbh.org + # (r'knct\.org', 'KNCT (KNCT)'), # http://www.knct.org + # (r'ktxt\.org', 'KTTZ-TV (KTXT)'), # http://www.ktxt.org + (r'video\.wtjx\.org', 'WTJX Channel 12 (WTJX)'), # http://www.wtjx.org/ + (r'video\.ideastations\.org', 'WCVE PBS (WCVE)'), # http://ideastations.org/ + (r'video\.kbtc\.org', 'KBTC Public Television (KBTC)'), # http://kbtc.org + ) + + IE_NAME = 'pbs' + IE_DESC = 'Public Broadcasting Service (PBS) and member stations: %s' % ', '.join(list(zip(*_STATIONS))[1]) + + _VALID_URL = r'''(?x)https?:// + (?: + # Direct video URL + (?:%s)/(?:(?:vir|port)alplayer|video)/(?P<id>[0-9]+)(?:[?/]|$) | + # Article with embedded player (or direct video) + (?:www\.)?pbs\.org/(?:[^/]+/){1,5}(?P<presumptive_id>[^/]+?)(?:\.html)?/?(?:$|[?\#]) | + # Player + (?:video|player)\.pbs\.org/(?:widget/)?partnerplayer/(?P<player_id>[^/]+) + ) + ''' % '|'.join(list(zip(*_STATIONS))[0]) + + _GEO_COUNTRIES = ['US'] + + _TESTS = [ + { + 'url': 'http://www.pbs.org/tpt/constitution-usa-peter-sagal/watch/a-more-perfect-union/', + 'md5': '173dc391afd361fa72eab5d3d918968d', + 'info_dict': { + 'id': '2365006249', + 'ext': 'mp4', + 'title': 'Constitution USA with Peter Sagal - A More Perfect Union', + 'description': 'md5:31b664af3c65fd07fa460d306b837d00', + 'duration': 3190, + }, + }, + { + 'url': 'http://www.pbs.org/wgbh/pages/frontline/losing-iraq/', + 'md5': '6f722cb3c3982186d34b0f13374499c7', + 'info_dict': { + 'id': '2365297690', + 'ext': 'mp4', + 'title': 'FRONTLINE - Losing Iraq', + 'description': 'md5:5979a4d069b157f622d02bff62fbe654', + 'duration': 5050, + }, + }, + { + 'url': 'http://www.pbs.org/newshour/bb/education-jan-june12-cyberschools_02-23/', + 'md5': 'b19856d7f5351b17a5ab1dc6a64be633', + 'info_dict': { + 'id': '2201174722', + 'ext': 'mp4', + 'title': 'PBS NewsHour - Cyber Schools Gain Popularity, but Quality Questions Persist', + 'description': 'md5:86ab9a3d04458b876147b355788b8781', + 'duration': 801, + }, + }, + { + 'url': 'http://www.pbs.org/wnet/gperf/dudamel-conducts-verdi-requiem-hollywood-bowl-full-episode/3374/', + 'md5': 'c62859342be2a0358d6c9eb306595978', + 'info_dict': { + 'id': '2365297708', + 'ext': 'mp4', + 'title': 'Great Performances - Dudamel Conducts Verdi Requiem at the Hollywood Bowl - Full', + 'description': 'md5:657897370e09e2bc6bf0f8d2cd313c6b', + 'duration': 6559, + 'thumbnail': r're:^https?://.*\.jpg$', + }, + }, + { + 'url': 'http://www.pbs.org/wgbh/nova/earth/killer-typhoon.html', + 'md5': '908f3e5473a693b266b84e25e1cf9703', + 'info_dict': { + 'id': '2365160389', + 'display_id': 'killer-typhoon', + 'ext': 'mp4', + 'description': 'md5:c741d14e979fc53228c575894094f157', + 'title': 'NOVA - Killer Typhoon', + 'duration': 3172, + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20140122', + 'age_limit': 10, + }, + }, + { + 'url': 'http://www.pbs.org/wgbh/pages/frontline/united-states-of-secrets/', + 'info_dict': { + 'id': 'united-states-of-secrets', + }, + 'playlist_count': 2, + }, + { + 'url': 'http://www.pbs.org/wgbh/americanexperience/films/great-war/', + 'info_dict': { + 'id': 'great-war', 
+ }, + 'playlist_count': 3, + }, + { + 'url': 'http://www.pbs.org/wgbh/americanexperience/films/death/player/', + 'info_dict': { + 'id': '2276541483', + 'display_id': 'player', + 'ext': 'mp4', + 'title': 'American Experience - Death and the Civil War, Chapter 1', + 'description': 'md5:67fa89a9402e2ee7d08f53b920674c18', + 'duration': 682, + 'thumbnail': r're:^https?://.*\.jpg$', + }, + 'params': { + 'skip_download': True, # requires ffmpeg + }, + }, + { + 'url': 'http://www.pbs.org/video/2365245528/', + 'md5': '115223d41bd55cda8ae5cd5ed4e11497', + 'info_dict': { + 'id': '2365245528', + 'display_id': '2365245528', + 'ext': 'mp4', + 'title': 'FRONTLINE - United States of Secrets (Part One)', + 'description': 'md5:55756bd5c551519cc4b7703e373e217e', + 'duration': 6851, + 'thumbnail': r're:^https?://.*\.jpg$', + }, + }, + { + # Video embedded in iframe containing angle brackets as attribute's value (e.g. + # "<iframe style='position: absolute;<br />\ntop: 0; left: 0;' ...", see + # https://github.com/ytdl-org/youtube-dl/issues/7059) + 'url': 'http://www.pbs.org/food/features/a-chefs-life-season-3-episode-5-prickly-business/', + 'md5': '59b0ef5009f9ac8a319cc5efebcd865e', + 'info_dict': { + 'id': '2365546844', + 'display_id': 'a-chefs-life-season-3-episode-5-prickly-business', + 'ext': 'mp4', + 'title': "A Chef's Life - Season 3, Ep. 5: Prickly Business", + 'description': 'md5:c0ff7475a4b70261c7e58f493c2792a5', + 'duration': 1480, + 'thumbnail': r're:^https?://.*\.jpg$', + }, + }, + { + # Frontline video embedded via flp2012.js + 'url': 'http://www.pbs.org/wgbh/pages/frontline/the-atomic-artists', + 'info_dict': { + 'id': '2070868960', + 'display_id': 'the-atomic-artists', + 'ext': 'mp4', + 'title': 'FRONTLINE - The Atomic Artists', + 'description': 'md5:f677e4520cfacb4a5ce1471e31b57800', + 'duration': 723, + 'thumbnail': r're:^https?://.*\.jpg$', + }, + 'params': { + 'skip_download': True, # requires ffmpeg + }, + }, + { + # Serves hd only via widget/partnerplayer page + 'url': 'http://www.pbs.org/video/2365641075/', + 'md5': 'fdf907851eab57211dd589cf12006666', + 'info_dict': { + 'id': '2365641075', + 'ext': 'mp4', + 'title': 'FRONTLINE - Netanyahu at War', + 'duration': 6852, + 'thumbnail': r're:^https?://.*\.jpg$', + 'formats': 'mincount:8', + }, + }, + { + # https://github.com/ytdl-org/youtube-dl/issues/13801 + 'url': 'https://www.pbs.org/video/pbs-newshour-full-episode-july-31-2017-1501539057/', + 'info_dict': { + 'id': '3003333873', + 'ext': 'mp4', + 'title': 'PBS NewsHour - full episode July 31, 2017', + 'description': 'md5:d41d8cd98f00b204e9800998ecf8427e', + 'duration': 3265, + 'thumbnail': r're:^https?://.*\.jpg$', + }, + 'params': { + 'skip_download': True, + }, + }, + { + 'url': 'http://www.pbs.org/wgbh/roadshow/watch/episode/2105-indianapolis-hour-2/', + 'info_dict': { + 'id': '2365936247', + 'ext': 'mp4', + 'title': 'Antiques Roadshow - Indianapolis, Hour 2', + 'description': 'md5:524b32249db55663e7231b6b8d1671a2', + 'duration': 3180, + 'thumbnail': r're:^https?://.*\.jpg$', + }, + 'params': { + 'skip_download': True, + }, + 'expected_warnings': ['HTTP Error 403: Forbidden'], + }, + { + 'url': 'https://www.pbs.org/wgbh/masterpiece/episodes/victoria-s2-e1/', + 'info_dict': { + 'id': '3007193718', + 'ext': 'mp4', + 'title': "Victoria - A Soldier's Daughter / The Green-Eyed Monster", + 'description': 'md5:37efbac85e0c09b009586523ec143652', + 'duration': 6292, + 'thumbnail': r're:^https?://.*\.(?:jpg|JPG)$', + }, + 'params': { + 'skip_download': True, + }, + 'expected_warnings': ['HTTP 
Error 403: Forbidden'], + }, + { + 'url': 'https://player.pbs.org/partnerplayer/tOz9tM5ljOXQqIIWke53UA==/', + 'info_dict': { + 'id': '3011407934', + 'ext': 'mp4', + 'title': 'Stories from the Stage - Road Trip', + 'duration': 1619, + 'thumbnail': r're:^https?://.*\.(?:jpg|JPG)$', + }, + 'params': { + 'skip_download': True, + }, + 'expected_warnings': ['HTTP Error 403: Forbidden'], + }, + { + 'url': 'http://player.pbs.org/widget/partnerplayer/2365297708/?start=0&end=0&chapterbar=false&endscreen=false&topbar=true', + 'only_matching': True, + }, + { + 'url': 'http://watch.knpb.org/video/2365616055/', + 'only_matching': True, + }, + { + 'url': 'https://player.pbs.org/portalplayer/3004638221/?uid=', + 'only_matching': True, + } + ] + _ERRORS = { + 101: 'We\'re sorry, but this video is not yet available.', + 403: 'We\'re sorry, but this video is not available in your region due to right restrictions.', + 404: 'We are experiencing technical difficulties that are preventing us from playing the video at this time. Please check back again soon.', + 410: 'This video has expired and is no longer available for online streaming.', + } + + def _real_initialize(self): + cookie = (self._download_json( + 'http://localization.services.pbs.org/localize/auto/cookie/', + None, headers=self.geo_verification_headers(), fatal=False) or {}).get('cookie') + if cookie: + station = self._search_regex(r'#?s=\["([^"]+)"', cookie, 'station') + if station: + self._set_cookie('.pbs.org', 'pbsol.station', station) + + def _extract_webpage(self, url): + mobj = self._match_valid_url(url) + + description = None + + presumptive_id = mobj.group('presumptive_id') + display_id = presumptive_id + if presumptive_id: + webpage = self._download_webpage(url, display_id) + + description = strip_or_none(self._og_search_description( + webpage, default=None) or self._html_search_meta( + 'description', webpage, default=None)) + upload_date = unified_strdate(self._search_regex( + r'<input type="hidden" id="air_date_[0-9]+" value="([^"]+)"', + webpage, 'upload date', default=None)) + + # tabbed frontline videos + MULTI_PART_REGEXES = ( + r'<div[^>]+class="videotab[^"]*"[^>]+vid="(\d+)"', + r'<a[^>]+href=["\']#(?:video-|part)\d+["\'][^>]+data-cove[Ii]d=["\'](\d+)', + ) + for p in MULTI_PART_REGEXES: + tabbed_videos = orderedSet(re.findall(p, webpage)) + if tabbed_videos: + return tabbed_videos, presumptive_id, upload_date, description + + MEDIA_ID_REGEXES = [ + r"div\s*:\s*'videoembed'\s*,\s*mediaid\s*:\s*'(\d+)'", # frontline video embed + r'class="coveplayerid">([^<]+)<', # coveplayer + r'<section[^>]+data-coveid="(\d+)"', # coveplayer from http://www.pbs.org/wgbh/frontline/film/real-csi/ + r'<input type="hidden" id="pbs_video_id_[0-9]+" value="([0-9]+)"/>', # jwplayer + r"(?s)window\.PBS\.playerConfig\s*=\s*{.*?id\s*:\s*'([0-9]+)',", + r'<div[^>]+\bdata-cove-id=["\'](\d+)"', # http://www.pbs.org/wgbh/roadshow/watch/episode/2105-indianapolis-hour-2/ + r'<iframe[^>]+\bsrc=["\'](?:https?:)?//video\.pbs\.org/widget/partnerplayer/(\d+)', # https://www.pbs.org/wgbh/masterpiece/episodes/victoria-s2-e1/ + ] + + media_id = self._search_regex( + MEDIA_ID_REGEXES, webpage, 'media ID', fatal=False, default=None) + if media_id: + return media_id, presumptive_id, upload_date, description + + # Frontline video embedded via flp + video_id = self._search_regex( + r'videoid\s*:\s*"([\d+a-z]{7,})"', webpage, 'videoid', default=None) + if video_id: + # pkg_id calculation is reverse engineered from + # http://www.pbs.org/wgbh/pages/frontline/js/flp2012.js + 
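# For a hypothetical videoid such as "junk123q1a2b": [7:] leaves "q1a2b",
+ # the part after 'q' is "1a2b", and int('1a2b', 16) == 6699, so
+ # getdir6699.json would be fetched below.
+ 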
prg_id = self._search_regex( + r'videoid\s*:\s*"([\d+a-z]{7,})"', webpage, 'videoid')[7:] + if 'q' in prg_id: + prg_id = prg_id.split('q')[1] + prg_id = int(prg_id, 16) + getdir = self._download_json( + 'http://www.pbs.org/wgbh/pages/frontline/.json/getdir/getdir%d.json' % prg_id, + presumptive_id, 'Downloading getdir JSON', + transform_source=strip_jsonp) + return getdir['mid'], presumptive_id, upload_date, description + + for iframe in re.findall(r'(?s)<iframe(.+?)></iframe>', webpage): + url = self._search_regex( + r'src=(["\'])(?P<url>.+?partnerplayer.+?)\1', iframe, + 'player URL', default=None, group='url') + if url: + break + + if not url: + url = self._og_search_url(webpage) + + mobj = re.match( + self._VALID_URL, self._proto_relative_url(url.strip())) + + player_id = mobj.group('player_id') + if not display_id: + display_id = player_id + if player_id: + player_page = self._download_webpage( + url, display_id, note='Downloading player page', + errnote='Could not download player page') + video_id = self._search_regex( + r'<div\s+id=["\']video_(\d+)', player_page, 'video ID', + default=None) + if not video_id: + video_info = self._extract_video_data( + player_page, 'video data', display_id) + video_id = compat_str( + video_info.get('id') or video_info['contentID']) + else: + video_id = mobj.group('id') + display_id = video_id + + return video_id, display_id, None, description + + def _extract_video_data(self, string, name, video_id, fatal=True): + return self._parse_json( + self._search_regex( + [r'(?s)PBS\.videoData\s*=\s*({.+?});\n', + r'window\.videoBridge\s*=\s*({.+?});'], + string, name, default='{}'), + video_id, transform_source=js_to_json, fatal=fatal) + + def _real_extract(self, url): + video_id, display_id, upload_date, description = self._extract_webpage(url) + + if isinstance(video_id, list): + entries = [self.url_result( + 'http://video.pbs.org/video/%s' % vid_id, 'PBS', vid_id) + for vid_id in video_id] + return self.playlist_result(entries, display_id) + + info = {} + redirects = [] + redirect_urls = set() + + def extract_redirect_urls(info): + for encoding_name in ('recommended_encoding', 'alternate_encoding'): + redirect = info.get(encoding_name) + if not redirect: + continue + redirect_url = redirect.get('url') + if redirect_url and redirect_url not in redirect_urls: + redirects.append(redirect) + redirect_urls.add(redirect_url) + encodings = info.get('encodings') + if isinstance(encodings, list): + for encoding in encodings: + encoding_url = url_or_none(encoding) + if encoding_url and encoding_url not in redirect_urls: + redirects.append({'url': encoding_url}) + redirect_urls.add(encoding_url) + + chapters = [] + # Player pages may also serve different qualities + for page in ('widget/partnerplayer', 'portalplayer'): + player = self._download_webpage( + 'http://player.pbs.org/%s/%s' % (page, video_id), + display_id, 'Downloading %s page' % page, fatal=False) + if player: + video_info = self._extract_video_data( + player, '%s video data' % page, display_id, fatal=False) + if video_info: + extract_redirect_urls(video_info) + if not info: + info = video_info + if not chapters: + raw_chapters = video_info.get('chapters') or [] + if not raw_chapters: + for chapter_data in re.findall(r'(?s)chapters\.push\(({.*?})\)', player): + chapter = self._parse_json(chapter_data, video_id, js_to_json, fatal=False) + if not chapter: + continue + raw_chapters.append(chapter) + for chapter in raw_chapters: + start_time = float_or_none(chapter.get('start_time'), 1000) + duration = 
float_or_none(chapter.get('duration'), 1000) + if start_time is None or duration is None: + continue + chapters.append({ + 'start_time': start_time, + 'end_time': start_time + duration, + 'title': chapter.get('title'), + }) + + formats = [] + http_url = None + hls_subs = {} + for num, redirect in enumerate(redirects): + redirect_id = redirect.get('eeid') + + redirect_info = self._download_json( + '%s?format=json' % redirect['url'], display_id, + 'Downloading %s video url info' % (redirect_id or num), + headers=self.geo_verification_headers()) + + if redirect_info['status'] == 'error': + message = self._ERRORS.get( + redirect_info['http_code'], redirect_info['message']) + if redirect_info['http_code'] == 403: + self.raise_geo_restricted( + msg=message, countries=self._GEO_COUNTRIES) + raise ExtractorError( + '%s said: %s' % (self.IE_NAME, message), expected=True) + + format_url = redirect_info.get('url') + if not format_url: + continue + + if determine_ext(format_url) == 'm3u8': + hls_formats, hls_subs = self._extract_m3u8_formats_and_subtitles( + format_url, display_id, 'mp4', m3u8_id='hls', fatal=False) + formats.extend(hls_formats) + else: + formats.append({ + 'url': format_url, + 'format_id': redirect_id, + }) + if re.search(r'^https?://.*(?:\d+k|baseline)', format_url): + http_url = format_url + self._remove_duplicate_formats(formats) + m3u8_formats = list(filter( + lambda f: f.get('protocol') == 'm3u8' and f.get('vcodec') != 'none', + formats)) + if http_url: + for m3u8_format in m3u8_formats: + bitrate = self._search_regex(r'(\d+)k', m3u8_format['url'], 'bitrate', default=None) + # Lower qualities (150k and 192k) are not available as HTTP formats (see [1]), + # we won't try extracting them. + # Since summer 2016 higher quality formats (4500k and 6500k) are also available + # albeit they are not documented in [2]. + # 1. https://github.com/ytdl-org/youtube-dl/commit/cbc032c8b70a038a69259378c92b4ba97b42d491#commitcomment-17313656 + # 2. https://projects.pbs.org/confluence/display/coveapi/COVE+Video+Specifications + if not bitrate or int(bitrate) < 400: + continue + f_url = re.sub(r'\d+k|baseline', bitrate + 'k', http_url) + # This may produce invalid links sometimes (e.g. + # http://www.pbs.org/wgbh/frontline/film/suicide-plan) + if not self._is_valid_url(f_url, display_id, 'http-%sk video' % bitrate): + continue + f = m3u8_format.copy() + f.update({ + 'url': f_url, + 'format_id': m3u8_format['format_id'].replace('hls', 'http'), + 'protocol': 'http', + }) + formats.append(f) + for f in formats: + if (f.get('format_note') or '').endswith(' AD'): # Audio description + f['language_preference'] = -10 + + rating_str = info.get('rating') + if rating_str is not None: + rating_str = rating_str.rpartition('-')[2] + age_limit = US_RATINGS.get(rating_str) + + subtitles = {} + captions = info.get('cc') or {} + for caption_url in captions.values(): + subtitles.setdefault('en', []).append({ + 'url': caption_url + }) + subtitles = self._merge_subtitles(subtitles, hls_subs) + + # info['title'] is often incomplete (e.g. 
'Full Episode', 'Episode 5', etc) + # Try turning it to 'program - title' naming scheme if possible + alt_title = info.get('program', {}).get('title') + if alt_title: + info['title'] = alt_title + ' - ' + re.sub(r'^' + alt_title + r'[\s\-:]+', '', info['title']) + + description = info.get('description') or info.get( + 'program', {}).get('description') or description + + return { + 'id': video_id, + 'display_id': display_id, + 'title': info['title'], + 'description': description, + 'thumbnail': info.get('image_url'), + 'duration': int_or_none(info.get('duration')), + 'age_limit': age_limit, + 'upload_date': upload_date, + 'formats': formats, + 'subtitles': subtitles, + 'chapters': chapters, + } + + +class PBSKidsIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?pbskids\.org/video/[\w-]+/(?P<id>\d+)' + _TESTS = [ + { + 'url': 'https://pbskids.org/video/molly-of-denali/3030407927', + 'md5': '1ded20a017cc6b53446238f1804ce4c7', + 'info_dict': { + 'id': '3030407927', + 'title': 'Bird in the Hand/Bye-Bye Birdie', + 'channel': 'molly-of-denali', + 'duration': 1540, + 'ext': 'mp4', + 'series': 'Molly of Denali', + 'description': 'md5:d006b2211633685d8ebc8d03b6d5611e', + 'categories': ['Episode'], + 'upload_date': '20190718', + } + }, + { + 'url': 'https://pbskids.org/video/plum-landing/2365205059', + 'md5': '92e5d189851a64ae1d0237a965be71f5', + 'info_dict': { + 'id': '2365205059', + 'title': 'Cooper\'s Favorite Place in Nature', + 'channel': 'plum-landing', + 'duration': 67, + 'ext': 'mp4', + 'series': 'Plum Landing', + 'description': 'md5:657e5fc4356a84ead1c061eb280ff05d', + 'categories': ['Episode'], + 'upload_date': '20140302', + } + } + ] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + meta = self._search_json(r'window\._PBS_KIDS_DEEPLINK\s*=', webpage, 'video info', video_id) + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + traverse_obj(meta, ('video_obj', 'URI', {url_or_none})), video_id, ext='mp4') + + return { + 'id': video_id, + 'formats': formats, + 'subtitles': subtitles, + **traverse_obj(meta, { + 'categories': ('video_obj', 'video_type', {str}, {lambda x: [x] if x else None}), + 'channel': ('show_slug', {str}), + 'description': ('video_obj', 'description', {str}), + 'duration': ('video_obj', 'duration', {int_or_none}), + 'series': ('video_obj', 'program_title', {str}), + 'title': ('video_obj', 'title', {str}), + 'upload_date': ('video_obj', 'air_date', {unified_strdate}), + }) + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pearvideo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pearvideo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pearvideo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pearvideo.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/peekvids.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/peekvids.py new file mode 100644 index 0000000..41f591b --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/peekvids.py @@ -0,0 +1,190 @@ +import re + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + get_element_by_class, + int_or_none, + merge_dicts, + url_or_none, +) + + +class PeekVidsBaseIE(InfoExtractor): + def _real_extract(self, url): + domain, video_id = self._match_valid_url(url).group('domain', 'id') + webpage = self._download_webpage(url, video_id, expected_status=429) + if '>Rate Limit Exceeded' in webpage: + raise 
ExtractorError( + f'You are suspected as a bot. Wait, or pass the captcha on the site and provide cookies. {self._login_hint()}', + video_id=video_id, expected=True) + + title = self._html_search_regex(r'(?s)<h1\b[^>]*>(.+?)</h1>', webpage, 'title') + + display_id = video_id + video_id = self._search_regex(r'(?s)<video\b[^>]+\bdata-id\s*=\s*["\']?([\w-]+)', webpage, 'short video ID') + srcs = self._download_json( + f'https://www.{domain}/v-alt/{video_id}', video_id, + note='Downloading list of source files') + + formats = [] + for k, v in srcs.items(): + f_url = url_or_none(v) + if not f_url: + continue + + height = self._search_regex(r'^data-src(\d{3,})$', k, 'height', default=None) + if not height: + continue + + formats.append({ + 'url': f_url, + 'format_id': height, + 'height': int_or_none(height), + }) + + if not formats: + formats = [{'url': url} for url in srcs.values()] + + info = self._search_json_ld(webpage, video_id, expected_type='VideoObject', default={}) + info.pop('url', None) + + # may not have found the thumbnail if it was in a list in the ld+json + info.setdefault('thumbnail', self._og_search_thumbnail(webpage)) + detail = (get_element_by_class('detail-video-block', webpage) + or get_element_by_class('detail-block', webpage) or '') + info['description'] = self._html_search_regex( + rf'(?s)(.+?)(?:{re.escape(info.get("description", ""))}\s*<|<ul\b)', + detail, 'description', default=None) or None + info['title'] = re.sub(r'\s*[,-][^,-]+$', '', info.get('title') or title) or self._generic_title(url) + + def cat_tags(name, html): + l = self._html_search_regex( + rf'(?s)<span\b[^>]*>\s*{re.escape(name)}\s*:\s*</span>(.+?)</li>', + html, name, default='') + return list(filter(None, re.split(r'\s+', l))) + + return merge_dicts({ + 'id': video_id, + 'display_id': display_id, + 'age_limit': 18, + 'formats': formats, + 'categories': cat_tags('Categories', detail), + 'tags': cat_tags('Tags', detail), + 'uploader': self._html_search_regex(r'[Uu]ploaded\s+by\s(.+?)"', webpage, 'uploader', default=None), + }, info) + + +class PeekVidsIE(PeekVidsBaseIE): + _VALID_URL = r'''(?x) + https?://(?:www\.)?(?P<domain>peekvids\.com)/ + (?:(?:[^/?#]+/){2}|embed/?\?(?:[^#]*&)?v=) + (?P<id>[^/?&#]*) + ''' + _TESTS = [{ + 'url': 'https://peekvids.com/pc/dane-jones-cute-redhead-with-perfect-tits-with-mini-vamp/BSyLMbN0YCd', + 'md5': '2ff6a357a9717dc9dc9894b51307e9a2', + 'info_dict': { + 'id': '1262717', + 'display_id': 'BSyLMbN0YCd', + 'title': ' Dane Jones - Cute redhead with perfect tits with Mini Vamp', + 'ext': 'mp4', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'md5:0a61df3620de26c0af8963b1a730cd69', + 'timestamp': 1642579329, + 'upload_date': '20220119', + 'duration': 416, + 'view_count': int, + 'age_limit': 18, + 'uploader': 'SEXYhub.com', + 'categories': list, + 'tags': list, + }, + }] + + +class PlayVidsIE(PeekVidsBaseIE): + _VALID_URL = r'https?://(?:www\.)?(?P<domain>playvids\.com)/(?:embed/|\w\w?/)?(?P<id>[^/?#]*)' + _TESTS = [{ + 'url': 'https://www.playvids.com/U3pBrYhsjXM/pc/dane-jones-cute-redhead-with-perfect-tits-with-mini-vamp', + 'md5': '2f12e50213dd65f142175da633c4564c', + 'info_dict': { + 'id': '1978030', + 'display_id': 'U3pBrYhsjXM', + 'title': ' Dane Jones - Cute redhead with perfect tits with Mini Vamp', + 'ext': 'mp4', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'md5:0a61df3620de26c0af8963b1a730cd69', + 'timestamp': 1640435839, + 'upload_date': '20211225', + 'duration': 416, + 'view_count': int, + 'age_limit': 18, + 'uploader': 'SEXYhub.com', + 
'categories': list, + 'tags': list, + }, + }, { + 'url': 'https://www.playvids.com/es/U3pBrYhsjXM/pc/dane-jones-cute-redhead-with-perfect-tits-with-mini-vamp', + 'only_matching': True, + }, { + 'url': 'https://www.playvids.com/embed/U3pBrYhsjXM', + 'only_matching': True, + }, { + 'url': 'https://www.playvids.com/bKmGLe3IwjZ/sv/brazzers-800-phone-sex-madison-ivy-always-on-the-line', + 'md5': 'e783986e596cafbf46411a174ab42ba6', + 'info_dict': { + 'id': '762385', + 'display_id': 'bKmGLe3IwjZ', + 'ext': 'mp4', + 'title': 'Brazzers - 1 800 Phone Sex: Madison Ivy Always On The Line 6', + 'description': 'md5:bdcd2db2b8ad85831a491d7c8605dcef', + 'timestamp': 1516958544, + 'upload_date': '20180126', + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 480, + 'uploader': 'Brazzers', + 'age_limit': 18, + 'view_count': int, + 'categories': list, + 'tags': list, + }, + }, { + 'url': 'https://www.playvids.com/v/47iUho33toY', + 'md5': 'b056b5049d34b648c1e86497cf4febce', + 'info_dict': { + 'id': '700621', + 'display_id': '47iUho33toY', + 'ext': 'mp4', + 'title': 'KATEE OWEN STRIPTIASE IN SEXY RED LINGERIE', + 'description': None, + 'timestamp': 1507052209, + 'upload_date': '20171003', + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 332, + 'uploader': 'Cacerenele', + 'age_limit': 18, + 'view_count': int, + 'categories': list, + 'tags': list, + }, + }, { + 'url': 'https://www.playvids.com/z3_7iwWCmqt/sexy-teen-filipina-striptease-beautiful-pinay-bargirl-strips-and-dances', + 'md5': 'efa09be9f031314b7b7e3bc6510cd0df', + 'info_dict': { + 'id': '1523518', + 'display_id': 'z3_7iwWCmqt', + 'ext': 'mp4', + 'title': 'SEXY TEEN FILIPINA STRIPTEASE - Beautiful Pinay Bargirl Strips and Dances', + 'description': None, + 'timestamp': 1607470323, + 'upload_date': '20201208', + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 593, + 'uploader': 'yorours', + 'age_limit': 18, + 'view_count': int, + 'categories': list, + 'tags': list, + }, + }] diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/peertube.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/peertube.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/peertube.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/peertube.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/peertv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/peertv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/peertv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/peertv.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/peloton.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/peloton.py new file mode 100644 index 0000000..7864299 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/peloton.py @@ -0,0 +1,215 @@ +import json +import re +import urllib.parse + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + float_or_none, + str_or_none, + traverse_obj, + url_or_none, +) + + +class PelotonIE(InfoExtractor): + IE_NAME = 'peloton' + _NETRC_MACHINE = 'peloton' + _VALID_URL = r'https?://members\.onepeloton\.com/classes/player/(?P<id>[a-f0-9]+)' + _TESTS = [{ + 'url': 'https://members.onepeloton.com/classes/player/0e9653eb53544eeb881298c8d7a87b86', + 'info_dict': { + 'id': '0e9653eb53544eeb881298c8d7a87b86', + 'title': '20 min Chest & Back Strength', + 'ext': 'mp4', + 'thumbnail': r're:^https?://.+\.jpg', + 'description': 
'md5:fcd5be9b9eda0194b470e13219050a66', + 'creator': 'Chase Tucker', + 'release_timestamp': 1556141400, + 'timestamp': 1556141400, + 'upload_date': '20190424', + 'duration': 1389, + 'categories': ['Strength'], + 'tags': ['Workout Mat', 'Light Weights', 'Medium Weights'], + 'is_live': False, + 'chapters': 'count:1', + 'subtitles': {'en': [{ + 'url': r're:^https?://.+', + 'ext': 'vtt' + }]}, + }, 'params': { + 'skip_download': 'm3u8', + }, + '_skip': 'Account needed' + }, { + 'url': 'https://members.onepeloton.com/classes/player/26603d53d6bb4de1b340514864a6a6a8', + 'info_dict': { + 'id': '26603d53d6bb4de1b340514864a6a6a8', + 'title': '30 min Earth Day Run', + 'ext': 'm4a', + 'thumbnail': r're:https://.+\.jpg', + 'description': 'md5:adc065a073934d7ee0475d217afe0c3d', + 'creator': 'Selena Samuela', + 'release_timestamp': 1587567600, + 'timestamp': 1587567600, + 'upload_date': '20200422', + 'duration': 1802, + 'categories': ['Running'], + 'is_live': False, + 'chapters': 'count:3' + }, 'params': { + 'skip_download': 'm3u8', + }, + '_skip': 'Account needed' + }] + + _MANIFEST_URL_TEMPLATE = '%s?hdnea=%s' + + def _start_session(self, video_id): + self._download_webpage('https://api.onepeloton.com/api/started_client_session', video_id, note='Starting session') + + def _login(self, video_id): + username, password = self._get_login_info() + if not (username and password): + self.raise_login_required() + try: + self._download_json( + 'https://api.onepeloton.com/auth/login', video_id, note='Logging in', + data=json.dumps({ + 'username_or_email': username, + 'password': password, + 'with_pubsub': False + }).encode(), + headers={'Content-Type': 'application/json', 'User-Agent': 'web'}) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + json_string = self._webpage_read_content(e.cause.response, None, video_id) + res = self._parse_json(json_string, video_id) + raise ExtractorError(res['message'], expected=res['message'] == 'Login failed') + else: + raise + + def _get_token(self, video_id): + try: + subscription = self._download_json( + 'https://api.onepeloton.com/api/subscription/stream', video_id, note='Downloading token', + data=json.dumps({}).encode(), headers={'Content-Type': 'application/json'}) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + json_string = self._webpage_read_content(e.cause.response, None, video_id) + res = self._parse_json(json_string, video_id) + raise ExtractorError(res['message'], expected=res['message'] == 'Stream limit reached') + else: + raise + return subscription['token'] + + def _real_extract(self, url): + video_id = self._match_id(url) + try: + self._start_session(video_id) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + self._login(video_id) + self._start_session(video_id) + else: + raise + + metadata = self._download_json('https://api.onepeloton.com/api/ride/%s/details?stream_source=multichannel' % video_id, video_id) + ride_data = metadata.get('ride') + if not ride_data: + raise ExtractorError('Missing stream metadata') + token = self._get_token(video_id) + + is_live = False + if ride_data.get('content_format') == 'audio': + url = self._MANIFEST_URL_TEMPLATE % (ride_data.get('vod_stream_url'), urllib.parse.quote(token)) + formats = [{ + 'url': url, + 'ext': 'm4a', + 'format_id': 'audio', + 'vcodec': 'none', + }] + subtitles = {} + else: + if ride_data.get('vod_stream_url'): + url = 
'https://members.onepeloton.com/.netlify/functions/m3u8-proxy?displayLanguage=en&acceptedSubtitles=%s&url=%s?hdnea=%s' % ( + ','.join([re.sub('^([a-z]+)-([A-Z]+)$', r'\1', caption) for caption in ride_data['captions']]), + ride_data['vod_stream_url'], + urllib.parse.quote(urllib.parse.quote(token))) + elif ride_data.get('live_stream_url'): + url = self._MANIFEST_URL_TEMPLATE % (ride_data.get('live_stream_url'), urllib.parse.quote(token)) + is_live = True + else: + raise ExtractorError('Missing video URL') + formats, subtitles = self._extract_m3u8_formats_and_subtitles(url, video_id, 'mp4') + + if metadata.get('instructor_cues'): + subtitles['cues'] = [{ + 'data': json.dumps(metadata.get('instructor_cues')), + 'ext': 'json' + }] + + category = ride_data.get('fitness_discipline_display_name') + chapters = [{ + 'start_time': segment.get('start_time_offset'), + 'end_time': segment.get('start_time_offset') + segment.get('length'), + 'title': segment.get('name') + } for segment in traverse_obj(metadata, ('segments', 'segment_list'))] + + return { + 'id': video_id, + 'title': ride_data.get('title'), + 'formats': formats, + 'thumbnail': url_or_none(ride_data.get('image_url')), + 'description': str_or_none(ride_data.get('description')), + 'creator': traverse_obj(ride_data, ('instructor', 'name')), + 'release_timestamp': ride_data.get('original_air_time'), + 'timestamp': ride_data.get('original_air_time'), + 'subtitles': subtitles, + 'duration': float_or_none(ride_data.get('length')), + 'categories': [category] if category else None, + 'tags': traverse_obj(ride_data, ('equipment_tags', ..., 'name')), + 'is_live': is_live, + 'chapters': chapters + } + + +class PelotonLiveIE(InfoExtractor): + IE_NAME = 'peloton:live' + IE_DESC = 'Peloton Live' + _VALID_URL = r'https?://members\.onepeloton\.com/player/live/(?P<id>[a-f0-9]+)' + _TEST = { + 'url': 'https://members.onepeloton.com/player/live/eedee2d19f804a9788f53aa8bd38eb1b', + 'info_dict': { + 'id': '32edc92d28044be5bf6c7b6f1f8d1cbc', + 'title': '30 min HIIT Ride: Live from Home', + 'ext': 'mp4', + 'thumbnail': r're:^https?://.+\.png', + 'description': 'md5:f0d7d8ed3f901b7ee3f62c1671c15817', + 'creator': 'Alex Toussaint', + 'release_timestamp': 1587736620, + 'timestamp': 1587736620, + 'upload_date': '20200424', + 'duration': 2014, + 'categories': ['Cycling'], + 'is_live': False, + 'chapters': 'count:3' + }, + 'params': { + 'skip_download': 'm3u8', + }, + '_skip': 'Account needed' + } + + def _real_extract(self, url): + workout_id = self._match_id(url) + peloton = self._download_json(f'https://api.onepeloton.com/api/peloton/{workout_id}', workout_id) + + if peloton.get('ride_id'): + if not peloton.get('is_live') or peloton.get('is_encore') or peloton.get('status') != 'PRE_START': + return self.url_result('https://members.onepeloton.com/classes/player/%s' % peloton['ride_id']) + else: + raise ExtractorError('Ride has not started', expected=True) + else: + raise ExtractorError('Missing video ID') diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/people.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/people.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/people.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/people.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/performgroup.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/performgroup.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/performgroup.py rename to 
python/lib/python3.10/site-packages/yt_dlp/extractor/performgroup.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/periscope.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/periscope.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/periscope.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/periscope.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/pgatour.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pgatour.py new file mode 100644 index 0000000..36c2c62 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/pgatour.py @@ -0,0 +1,47 @@ +from .brightcove import BrightcoveNewIE +from .common import InfoExtractor + + +class PGATourIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?pgatour\.com/video/[\w-]+/(?P<tc>T)?(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.pgatour.com/video/competition/T6322447785112/adam-hadwin-2023-the-players-round-4-18th-hole-shot-1', + 'info_dict': { + 'id': '6322447785112', + 'ext': 'mp4', + 'title': 'Adam Hadwin | 2023 THE PLAYERS | Round 4 | 18th hole | Shot 1', + 'uploader_id': '6116716431001', + 'upload_date': '20230312', + 'timestamp': 1678653136, + 'duration': 20.011, + 'thumbnail': r're:^https://.+\.jpg', + 'tags': 'count:7', + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'url': 'https://www.pgatour.com/video/features/6322506425112/follow-the-players-trophy-on-championship-sunday', + 'info_dict': { + 'id': '6322506425112', + 'ext': 'mp4', + 'title': 'Follow THE PLAYERS trophy on Championship Sunday', + 'description': 'md5:4d29e4bdfa03694a0ebfd08950398568', + 'uploader_id': '6082840763001', + 'upload_date': '20230313', + 'timestamp': 1678739835, + 'duration': 123.435, + 'thumbnail': r're:^https://.+\.jpg', + 'tags': 'count:8', + }, + 'params': {'skip_download': 'm3u8'}, + }] + + def _real_extract(self, url): + video_id, is_tourcast = self._match_valid_url(url).group('id', 'tc') + + # From https://www.pgatour.com/_next/static/chunks/pages/_app-8bcf849560daf38d.js + account_id = '6116716431001' if is_tourcast else '6082840763001' + player_id = 'Vsd5Umu8r' if is_tourcast else 'FWIBYMBPj' + + return self.url_result( + f'https://players.brightcove.net/{account_id}/{player_id}_default/index.html?videoId={video_id}', + BrightcoveNewIE) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/philharmoniedeparis.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/philharmoniedeparis.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/philharmoniedeparis.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/philharmoniedeparis.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/phoenix.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/phoenix.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/phoenix.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/phoenix.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/photobucket.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/photobucket.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/photobucket.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/photobucket.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/piapro.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/piapro.py new file mode 100644 index 0000000..5f39e06 --- /dev/null +++ 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/piapro.py @@ -0,0 +1,118 @@ +from .common import InfoExtractor +from ..compat import compat_urlparse +from ..utils import ( + ExtractorError, + parse_duration, + parse_filesize, + str_to_int, + unified_timestamp, + urlencode_postdata, +) + + +class PiaproIE(InfoExtractor): + _NETRC_MACHINE = 'piapro' + _VALID_URL = r'https?://piapro\.jp/(?:t|content)/(?P<id>\w+)/?' + _TESTS = [{ + 'url': 'https://piapro.jp/t/NXYR', + 'md5': 'f7c0f760913fb1d44a1c45a4af793909', + 'info_dict': { + 'id': 'NXYR', + 'ext': 'mp3', + 'uploader': 'wowaka', + 'uploader_id': 'wowaka', + 'title': '裏表ラバーズ', + 'description': 'http://www.nicovideo.jp/watch/sm8082467', + 'duration': 189.0, + 'timestamp': 1251785475, + 'thumbnail': r're:^https?://.*\.(?:png|jpg)$', + 'upload_date': '20090901', + 'view_count': int, + } + }, { + 'note': 'There are break lines in description, mandating (?s) flag', + 'url': 'https://piapro.jp/t/9cSd', + 'md5': '952bb6d1e8de95050206408a87790676', + 'info_dict': { + 'id': '9cSd', + 'ext': 'mp3', + 'title': '青に溶けた風船 / 初音ミク', + 'description': 'md5:d395a9bd151447631a5a1460bc7f9132', + 'uploader': 'シアン・キノ', + 'duration': 229.0, + 'timestamp': 1644030039, + 'upload_date': '20220205', + 'view_count': int, + 'thumbnail': r're:^https?://.*\.(?:png|jpg)$', + 'uploader_id': 'cyankino', + } + }, { + 'url': 'https://piapro.jp/content/hcw0z3a169wtemz6', + 'only_matching': True + }] + + _login_status = False + + def _perform_login(self, username, password): + login_ok = True + login_form_strs = { + '_username': username, + '_password': password, + '_remember_me': 'on', + 'login': 'ログイン' + } + self._request_webpage('https://piapro.jp/login/', None) + urlh = self._request_webpage( + 'https://piapro.jp/login/exe', None, + note='Logging in', errnote='Unable to log in', + data=urlencode_postdata(login_form_strs)) + if urlh is False: + login_ok = False + else: + parts = compat_urlparse.urlparse(urlh.url) + if parts.path != '/': + login_ok = False + if not login_ok: + self.report_warning( + 'unable to log in: bad username or password') + self._login_status = login_ok + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + category_id = self._search_regex(r'categoryId=(.+)">', webpage, 'category ID') + if category_id not in ('1', '2', '21', '22', '23', '24', '25'): + raise ExtractorError('The URL does not contain audio.', expected=True) + + str_duration, str_filesize = self._search_regex( + r'サイズ:</span>(.+?)/\(([0-9,]+?[KMG]?B)\)', webpage, 'duration and size', + group=(1, 2), default=(None, None)) + str_viewcount = self._search_regex(r'閲覧数:</span>([0-9,]+)\s+', webpage, 'view count', fatal=False) + + uploader_id, uploader = self._search_regex( + r'<a\s+class="cd_user-name"\s+href="/(.*)">([^<]+)さん<', webpage, 'uploader', + group=(1, 2), default=(None, None)) + content_id = self._search_regex(r'contentId\:\'(.+)\'', webpage, 'content ID') + create_date = self._search_regex(r'createDate\:\'(.+)\'', webpage, 'timestamp') + + player_webpage = self._download_webpage( + f'https://piapro.jp/html5_player_popup/?id={content_id}&cdate={create_date}', + video_id, note='Downloading player webpage') + + return { + 'id': video_id, + 'title': self._html_search_regex(r'<h1\s+class="cd_works-title">(.+?)</h1>', webpage, 'title', fatal=False), + 'description': self._html_search_regex(r'(?s)<p\s+class="cd_dtl_cap">(.+?)</p>\s*<div', webpage, 'description', fatal=False), + 'uploader': 
uploader, + 'uploader_id': uploader_id, + 'timestamp': unified_timestamp(create_date, False), + 'duration': parse_duration(str_duration), + 'view_count': str_to_int(str_viewcount), + 'thumbnail': self._html_search_meta('twitter:image', webpage), + + 'filesize_approx': parse_filesize(str_filesize.replace(',', '')), + 'url': self._search_regex(r'mp3:\s*\'(.*?)\'\}', player_webpage, 'url'), + 'ext': 'mp3', + 'vcodec': 'none', + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/piaulizaportal.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/piaulizaportal.py new file mode 100644 index 0000000..1eb6d92 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/piaulizaportal.py @@ -0,0 +1,70 @@ +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + int_or_none, + parse_qs, + time_seconds, + traverse_obj, +) + + +class PIAULIZAPortalIE(InfoExtractor): + IE_DESC = 'ulizaportal.jp - PIA LIVE STREAM' + _VALID_URL = r'https?://(?:www\.)?ulizaportal\.jp/pages/(?P<id>[\da-f]{8}-(?:[\da-f]{4}-){3}[\da-f]{12})' + _TESTS = [{ + 'url': 'https://ulizaportal.jp/pages/005f18b7-e810-5618-cb82-0987c5755d44', + 'info_dict': { + 'id': '005f18b7-e810-5618-cb82-0987c5755d44', + 'title': 'プレゼンテーションプレイヤーのサンプル', + 'live_status': 'not_live', + }, + 'params': { + 'skip_download': True, + 'ignore_no_formats_error': True, + }, + }, { + 'url': 'https://ulizaportal.jp/pages/005e1b23-fe93-5780-19a0-98e917cc4b7d?expires=4102412400&signature=f422a993b683e1068f946caf406d211c17d1ef17da8bef3df4a519502155aa91&version=1', + 'info_dict': { + 'id': '005e1b23-fe93-5780-19a0-98e917cc4b7d', + 'title': '【確認用】視聴サンプルページ(ULIZA)', + 'live_status': 'not_live', + }, + 'params': { + 'skip_download': True, + 'ignore_no_formats_error': True, + }, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + expires = int_or_none(traverse_obj(parse_qs(url), ('expires', 0))) + if expires and expires <= time_seconds(): + raise ExtractorError('The link is expired.', video_id=video_id, expected=True) + + webpage = self._download_webpage(url, video_id) + + player_data = self._download_webpage( + self._search_regex( + r'<script [^>]*\bsrc="(https://player-api\.p\.uliza\.jp/v1/players/[^"]+)"', + webpage, 'player data url'), + video_id, headers={'Referer': 'https://ulizaportal.jp/'}, + note='Fetching player data', errnote='Unable to fetch player data') + + formats = self._extract_m3u8_formats( + self._search_regex( + r'["\'](https://vms-api\.p\.uliza\.jp/v1/prog-index\.m3u8[^"\']+)', player_data, + 'm3u8 url', default=None), + video_id, fatal=False) + m3u8_type = self._search_regex( + r'/hls/(dvr|video)/', traverse_obj(formats, (0, 'url')), 'm3u8 type', default=None) + + return { + 'id': video_id, + 'title': self._html_extract_title(webpage), + 'formats': formats, + 'live_status': { + 'video': 'is_live', + 'dvr': 'was_live', # short-term archives + }.get(m3u8_type, 'not_live'), # VOD or long-term archives + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/picarto.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/picarto.py new file mode 100644 index 0000000..d415ba2 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/picarto.py @@ -0,0 +1,152 @@ +import urllib.parse + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + str_or_none, + traverse_obj, +) + + +class PicartoIE(InfoExtractor): + _VALID_URL = r'https?://(?:www.)?picarto\.tv/(?P<id>[a-zA-Z0-9]+)' + _TEST = { + 'url': 
'https://picarto.tv/Setz', + 'info_dict': { + 'id': 'Setz', + 'ext': 'mp4', + 'title': 're:^Setz [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$', + 'timestamp': int, + 'is_live': True + }, + 'skip': 'Stream is offline', + } + + @classmethod + def suitable(cls, url): + return False if PicartoVodIE.suitable(url) else super(PicartoIE, cls).suitable(url) + + def _real_extract(self, url): + channel_id = self._match_id(url) + + data = self._download_json( + 'https://ptvintern.picarto.tv/ptvapi', channel_id, query={ + 'query': '''{ + channel(name: "%s") { + adult + id + online + stream_name + title + } + getLoadBalancerUrl(channel_name: "%s") { + url + } +}''' % (channel_id, channel_id), + })['data'] + metadata = data['channel'] + + if metadata.get('online') == 0: + raise ExtractorError('Stream is offline', expected=True) + title = metadata['title'] + + cdn_data = self._download_json( + data['getLoadBalancerUrl']['url'] + '/stream/json_' + metadata['stream_name'] + '.js', + channel_id, 'Downloading load balancing info') + + formats = [] + for source in (cdn_data.get('source') or []): + source_url = source.get('url') + if not source_url: + continue + source_type = source.get('type') + if source_type == 'html5/application/vnd.apple.mpegurl': + formats.extend(self._extract_m3u8_formats( + source_url, channel_id, 'mp4', m3u8_id='hls', fatal=False)) + elif source_type == 'html5/video/mp4': + formats.append({ + 'url': source_url, + }) + + mature = metadata.get('adult') + if mature is None: + age_limit = None + else: + age_limit = 18 if mature is True else 0 + + return { + 'id': channel_id, + 'title': title.strip(), + 'is_live': True, + 'channel': channel_id, + 'channel_id': metadata.get('id'), + 'channel_url': 'https://picarto.tv/%s' % channel_id, + 'age_limit': age_limit, + 'formats': formats, + } + + +class PicartoVodIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?picarto\.tv/(?:videopopout|\w+/videos)/(?P<id>[^/?#&]+)' + _TESTS = [{ + 'url': 'https://picarto.tv/videopopout/ArtofZod_2017.12.12.00.13.23.flv', + 'md5': '3ab45ba4352c52ee841a28fb73f2d9ca', + 'info_dict': { + 'id': 'ArtofZod_2017.12.12.00.13.23.flv', + 'ext': 'mp4', + 'title': 'ArtofZod_2017.12.12.00.13.23.flv', + 'thumbnail': r're:^https?://.*\.jpg' + }, + 'skip': 'The VOD does not exist', + }, { + 'url': 'https://picarto.tv/ArtofZod/videos/772650', + 'md5': '00067a0889f1f6869cc512e3e79c521b', + 'info_dict': { + 'id': '772650', + 'ext': 'mp4', + 'title': 'Art of Zod - Drawing and Painting', + 'thumbnail': r're:^https?://.*\.jpg', + 'channel': 'ArtofZod', + 'age_limit': 18, + } + }, { + 'url': 'https://picarto.tv/videopopout/Plague', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + data = self._download_json( + 'https://ptvintern.picarto.tv/ptvapi', video_id, query={ + 'query': f'''{{ + video(id: "{video_id}") {{ + id + title + adult + file_name + video_recording_image_url + channel {{ + name + }} + }} +}}''' + })['data']['video'] + + file_name = data['file_name'] + netloc = urllib.parse.urlparse(data['video_recording_image_url']).netloc + + formats = self._extract_m3u8_formats( + f'https://{netloc}/stream/hls/{file_name}/index.m3u8', video_id, 'mp4', m3u8_id='hls') + + return { + 'id': video_id, + **traverse_obj(data, { + 'id': ('id', {str_or_none}), + 'title': ('title', {str}), + 'thumbnail': 'video_recording_image_url', + 'channel': ('channel', 'name', {str}), + 'age_limit': ('adult', {lambda x: 18 if x else 0}), + }), + 'formats': formats, + } diff --git 
a/python/lib/python3.10/site-packages/yt_dlp/extractor/piksel.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/piksel.py new file mode 100644 index 0000000..97a9bf5 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/piksel.py @@ -0,0 +1,174 @@ +import re + +from .common import InfoExtractor +from ..utils import ( + dict_get, + ExtractorError, + int_or_none, + join_nonempty, + parse_iso8601, + traverse_obj, + try_get, + unescapeHTML, + urljoin, +) + + +class PikselIE(InfoExtractor): + _VALID_URL = r'''(?x)https?:// + (?: + (?: + player\. + (?: + olympusattelecom| + vibebyvista + )| + (?:api|player)\.multicastmedia| + (?:api-ovp|player)\.piksel + )\.com| + (?: + mz-edge\.stream\.co| + movie-s\.nhk\.or + )\.jp| + vidego\.baltimorecity\.gov + )/v/(?:refid/(?P<refid>[^/]+)/prefid/)?(?P<id>[\w-]+)''' + _EMBED_REGEX = [r'<iframe[^>]+src=["\'](?P<url>(?:https?:)?//player\.piksel\.com/v/[a-z0-9]+)'] + _TESTS = [ + { + 'url': 'http://player.piksel.com/v/ums2867l', + 'md5': '34e34c8d89dc2559976a6079db531e85', + 'info_dict': { + 'id': 'ums2867l', + 'ext': 'mp4', + 'title': 'GX-005 with Caption', + 'timestamp': 1481335659, + 'upload_date': '20161210' + } + }, + { + # Original source: http://www.uscourts.gov/cameras-courts/state-washington-vs-donald-j-trump-et-al + 'url': 'https://player.piksel.com/v/v80kqp41', + 'md5': '753ddcd8cc8e4fa2dda4b7be0e77744d', + 'info_dict': { + 'id': 'v80kqp41', + 'ext': 'mp4', + 'title': 'WAW- State of Washington vs. Donald J. Trump, et al', + 'description': 'State of Washington vs. Donald J. Trump, et al, Case Number 17-CV-00141-JLR, TRO Hearing, Civil Rights Case, 02/3/2017, 1:00 PM (PST), Seattle Federal Courthouse, Seattle, WA, Judge James L. Robart presiding.', + 'timestamp': 1486171129, + 'upload_date': '20170204' + } + }, + { + # https://www3.nhk.or.jp/nhkworld/en/ondemand/video/2019240/ + 'url': 'http://player.piksel.com/v/refid/nhkworld/prefid/nw_vod_v_en_2019_240_20190823233000_02_1566873477', + 'only_matching': True, + } + ] + + def _call_api(self, app_token, resource, display_id, query, host='https://player.piksel.com', fatal=True): + url = urljoin(host, f'/ws/ws_{resource}/api/{app_token}/mode/json/apiv/5') + response = traverse_obj( + self._download_json(url, display_id, query=query, fatal=fatal), ('response', {dict})) or {} + failure = traverse_obj(response, ('failure', 'reason')) if response else 'Empty response from API' + if failure: + if fatal: + raise ExtractorError(failure, expected=True) + self.report_warning(failure) + return response + + def _real_extract(self, url): + ref_id, display_id = self._match_valid_url(url).groups() + webpage = self._download_webpage(url, display_id) + app_token = self._search_regex([ + r'clientAPI\s*:\s*"([^"]+)"', + r'data-de-api-key\s*=\s*"([^"]+)"' + ], webpage, 'app token') + query = {'refid': ref_id, 'prefid': display_id} if ref_id else {'v': display_id} + program = self._call_api( + app_token, 'program', display_id, query, url)['WsProgramResponse']['program'] + video_id = program['uuid'] + video_data = program['asset'] + title = video_data['title'] + asset_type = dict_get(video_data, ['assetType', 'asset_type']) + + formats = [] + + def process_asset_file(asset_file): + if not asset_file: + return + # TODO: extract rtmp formats + http_url = asset_file.get('http_url') + if not http_url: + return + tbr = None + vbr = int_or_none(asset_file.get('videoBitrate'), 1024) + abr = int_or_none(asset_file.get('audioBitrate'), 1024) + if asset_type == 'video': + tbr = vbr + abr + elif asset_type == 
'audio': + tbr = abr + + formats.append({ + 'format_id': join_nonempty('http', tbr), + 'url': unescapeHTML(http_url), + 'vbr': vbr, + 'abr': abr, + 'width': int_or_none(asset_file.get('videoWidth')), + 'height': int_or_none(asset_file.get('videoHeight')), + 'filesize': int_or_none(asset_file.get('filesize')), + 'tbr': tbr, + }) + + def process_asset_files(asset_files): + for asset_file in (asset_files or []): + process_asset_file(asset_file) + + process_asset_files(video_data.get('assetFiles')) + process_asset_file(video_data.get('referenceFile')) + if not formats: + asset_id = video_data.get('assetid') or program.get('assetid') + if asset_id: + process_asset_files(try_get(self._call_api( + app_token, 'asset_file', display_id, { + 'assetid': asset_id, + }, url, False), lambda x: x['WsAssetFileResponse']['AssetFiles'])) + + m3u8_url = dict_get(video_data, [ + 'm3u8iPadURL', + 'ipadM3u8Url', + 'm3u8AndroidURL', + 'm3u8iPhoneURL', + 'iphoneM3u8Url']) + if m3u8_url: + formats.extend(self._extract_m3u8_formats( + m3u8_url, video_id, 'mp4', 'm3u8_native', + m3u8_id='hls', fatal=False)) + + smil_url = dict_get(video_data, ['httpSmil', 'hdSmil', 'rtmpSmil']) + if smil_url: + transform_source = None + if ref_id == 'nhkworld': + # TODO: figure out if this is something to be fixed in urljoin, + # _parse_smil_formats or keep it here + transform_source = lambda x: x.replace('src="/', 'src="').replace('/media"', '/media/"') + formats.extend(self._extract_smil_formats( + re.sub(r'/od/[^/]+/', '/od/http/', smil_url), video_id, + transform_source=transform_source, fatal=False)) + + subtitles = {} + for caption in video_data.get('captions', []): + caption_url = caption.get('url') + if caption_url: + subtitles.setdefault(caption.get('locale', 'en'), []).append({ + 'url': caption_url}) + + return { + 'id': video_id, + 'title': title, + 'description': video_data.get('description'), + 'thumbnail': video_data.get('thumbnailUrl'), + 'timestamp': parse_iso8601(video_data.get('dateadd')), + 'formats': formats, + 'subtitles': subtitles, + '_format_sort_fields': ('tbr', ), # Incomplete resolution information + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pinkbike.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pinkbike.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pinkbike.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pinkbike.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pinterest.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pinterest.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pinterest.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pinterest.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pixivsketch.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pixivsketch.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pixivsketch.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pixivsketch.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/pladform.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pladform.py new file mode 100644 index 0000000..0050068 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/pladform.py @@ -0,0 +1,136 @@ +from .common import InfoExtractor +from ..utils import ( + determine_ext, + ExtractorError, + int_or_none, + parse_qs, + xpath_text, + qualities, +) + + +class PladformIE(InfoExtractor): + _VALID_URL = 
r'''(?x) + https?:// + (?: + (?: + out\.pladform\.ru/player| + static\.pladform\.ru/player\.swf + ) + \?.*\bvideoid=| + video\.pladform\.ru/catalog/video/videoid/ + ) + (?P<id>\d+) + ''' + _EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//out\.pladform\.ru/player\?.+?)\1'] + _TESTS = [{ + 'url': 'http://out.pladform.ru/player?pl=18079&type=html5&videoid=100231282', + 'info_dict': { + 'id': '6216d548e755edae6e8280667d774791', + 'ext': 'mp4', + 'timestamp': 1406117012, + 'title': 'Гарик Мартиросян и Гарик Харламов - Кастинг на концерт ко Дню милиции', + 'age_limit': 0, + 'upload_date': '20140723', + 'thumbnail': str, + 'view_count': int, + 'description': str, + 'category': list, + 'uploader_id': '12082', + 'uploader': 'Comedy Club', + 'duration': 367, + }, + 'expected_warnings': ['HTTP Error 404: Not Found'] + }, { + 'url': 'https://out.pladform.ru/player?pl=64471&videoid=3777899&vk_puid15=0&vk_puid34=0', + 'md5': '53362fac3a27352da20fa2803cc5cd6f', + 'info_dict': { + 'id': '3777899', + 'ext': 'mp4', + 'title': 'СТУДИЯ СОЮЗ • Шоу Студия Союз, 24 выпуск (01.02.2018) Нурлан Сабуров и Слава Комиссаренко', + 'description': 'md5:05140e8bf1b7e2d46e7ba140be57fd95', + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 3190, + }, + }, { + 'url': 'http://static.pladform.ru/player.swf?pl=21469&videoid=100183293&vkcid=0', + 'only_matching': True, + }, { + 'url': 'http://video.pladform.ru/catalog/video/videoid/100183293/vkcid/0', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + qs = parse_qs(url) + pl = qs.get('pl', ['1'])[0] + + video = self._download_xml( + 'http://out.pladform.ru/getVideo', video_id, query={ + 'pl': pl, + 'videoid': video_id, + }, fatal=False) + + def fail(text): + raise ExtractorError( + '%s returned error: %s' % (self.IE_NAME, text), + expected=True) + + if not video: + targetUrl = self._request_webpage(url, video_id, note='Resolving final URL').url + if targetUrl == url: + raise ExtractorError('Can\'t parse page') + return self.url_result(targetUrl) + + if video.tag == 'error': + fail(video.text) + + quality = qualities(('ld', 'sd', 'hd')) + + formats = [] + for src in video.findall('./src'): + if src is None: + continue + format_url = src.text + if not format_url: + continue + if src.get('type') == 'hls' or determine_ext(format_url) == 'm3u8': + formats.extend(self._extract_m3u8_formats( + format_url, video_id, 'mp4', entry_protocol='m3u8_native', + m3u8_id='hls', fatal=False)) + else: + formats.append({ + 'url': src.text, + 'format_id': src.get('quality'), + 'quality': quality(src.get('quality')), + }) + + if not formats: + error = xpath_text(video, './cap', 'error', default=None) + if error: + fail(error) + + webpage = self._download_webpage( + 'http://video.pladform.ru/catalog/video/videoid/%s' % video_id, + video_id) + + title = self._og_search_title(webpage, fatal=False) or xpath_text( + video, './/title', 'title', fatal=True) + description = self._search_regex( + r'</h3>\s*<p>([^<]+)</p>', webpage, 'description', fatal=False) + thumbnail = self._og_search_thumbnail(webpage) or xpath_text( + video, './/cover', 'cover') + + duration = int_or_none(xpath_text(video, './/time', 'duration')) + age_limit = int_or_none(xpath_text(video, './/age18', 'age limit')) + + return { + 'id': video_id, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'duration': duration, + 'age_limit': age_limit, + 'formats': formats, + } diff --git 
a/lib/python3.11/site-packages/yt_dlp/extractor/planetmarathi.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/planetmarathi.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/planetmarathi.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/planetmarathi.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/platzi.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/platzi.py new file mode 100644 index 0000000..166b98c --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/platzi.py @@ -0,0 +1,213 @@ +from .common import InfoExtractor +from ..compat import ( + compat_b64decode, + compat_str, +) +from ..utils import ( + clean_html, + ExtractorError, + int_or_none, + str_or_none, + try_get, + url_or_none, + urlencode_postdata, + urljoin, +) + + +class PlatziBaseIE(InfoExtractor): + _LOGIN_URL = 'https://platzi.com/login/' + _NETRC_MACHINE = 'platzi' + + def _perform_login(self, username, password): + login_page = self._download_webpage( + self._LOGIN_URL, None, 'Downloading login page') + + login_form = self._hidden_inputs(login_page) + + login_form.update({ + 'email': username, + 'password': password, + }) + + urlh = self._request_webpage( + self._LOGIN_URL, None, 'Logging in', + data=urlencode_postdata(login_form), + headers={'Referer': self._LOGIN_URL}) + + # login succeeded + if 'platzi.com/login' not in urlh.url: + return + + login_error = self._webpage_read_content( + urlh, self._LOGIN_URL, None, 'Downloading login error page') + + login = self._parse_json( + self._search_regex( + r'login\s*=\s*({.+?})(?:\s*;|\s*</script)', login_error, 'login'), + None) + + for kind in ('error', 'password', 'nonFields'): + error = str_or_none(login.get('%sError' % kind)) + if error: + raise ExtractorError( + 'Unable to login: %s' % error, expected=True) + raise ExtractorError('Unable to log in') + + +class PlatziIE(PlatziBaseIE): + _VALID_URL = r'''(?x) + https?:// + (?: + platzi\.com/clases| # es version + courses\.platzi\.com/classes # en version + )/[^/]+/(?P<id>\d+)-[^/?\#&]+ + ''' + + _TESTS = [{ + 'url': 'https://platzi.com/clases/1311-next-js/12074-creando-nuestra-primera-pagina/', + 'md5': '8f56448241005b561c10f11a595b37e3', + 'info_dict': { + 'id': '12074', + 'ext': 'mp4', + 'title': 'Creando nuestra primera página', + 'description': 'md5:4c866e45034fc76412fbf6e60ae008bc', + 'duration': 420, + }, + 'skip': 'Requires platzi account credentials', + }, { + 'url': 'https://courses.platzi.com/classes/1367-communication-codestream/13430-background/', + 'info_dict': { + 'id': '13430', + 'ext': 'mp4', + 'title': 'Background', + 'description': 'md5:49c83c09404b15e6e71defaf87f6b305', + 'duration': 360, + }, + 'skip': 'Requires platzi account credentials', + 'params': { + 'skip_download': True, + }, + }] + + def _real_extract(self, url): + lecture_id = self._match_id(url) + + webpage = self._download_webpage(url, lecture_id) + + data = self._parse_json( + self._search_regex( + # client_data may contain "};" so that we have to try more + # strict regex first + (r'client_data\s*=\s*({.+?})\s*;\s*\n', + r'client_data\s*=\s*({.+?})\s*;'), + webpage, 'client data'), + lecture_id) + + material = data['initialState']['material'] + desc = material['description'] + title = desc['title'] + + formats = [] + for server_id, server in material['videos'].items(): + if not isinstance(server, dict): + continue + for format_id in ('hls', 'dash'): + format_url = url_or_none(server.get(format_id)) + if not format_url: + continue + 
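# each server entry may carry both an HLS and a DASH manifest URL; both are extracted so format selection can choose across protocols + 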
if format_id == 'hls': + formats.extend(self._extract_m3u8_formats( + format_url, lecture_id, 'mp4', + entry_protocol='m3u8_native', m3u8_id=format_id, + note='Downloading %s m3u8 information' % server_id, + fatal=False)) + elif format_id == 'dash': + formats.extend(self._extract_mpd_formats( + format_url, lecture_id, mpd_id=format_id, + note='Downloading %s MPD manifest' % server_id, + fatal=False)) + + content = str_or_none(desc.get('content')) + description = (clean_html(compat_b64decode(content).decode('utf-8')) + if content else None) + duration = int_or_none(material.get('duration'), invscale=60) + + return { + 'id': lecture_id, + 'title': title, + 'description': description, + 'duration': duration, + 'formats': formats, + } + + +class PlatziCourseIE(PlatziBaseIE): + _VALID_URL = r'''(?x) + https?:// + (?: + platzi\.com/clases| # es version + courses\.platzi\.com/classes # en version + )/(?P<id>[^/?\#&]+) + ''' + _TESTS = [{ + 'url': 'https://platzi.com/clases/next-js/', + 'info_dict': { + 'id': '1311', + 'title': 'Curso de Next.js', + }, + 'playlist_count': 22, + }, { + 'url': 'https://courses.platzi.com/classes/communication-codestream/', + 'info_dict': { + 'id': '1367', + 'title': 'Codestream Course', + }, + 'playlist_count': 14, + }] + + @classmethod + def suitable(cls, url): + return False if PlatziIE.suitable(url) else super(PlatziCourseIE, cls).suitable(url) + + def _real_extract(self, url): + course_name = self._match_id(url) + + webpage = self._download_webpage(url, course_name) + + props = self._parse_json( + self._search_regex(r'data\s*=\s*({.+?})\s*;', webpage, 'data'), + course_name)['initialProps'] + + entries = [] + for chapter_num, chapter in enumerate(props['concepts'], 1): + if not isinstance(chapter, dict): + continue + materials = chapter.get('materials') + if not materials or not isinstance(materials, list): + continue + chapter_title = chapter.get('title') + chapter_id = str_or_none(chapter.get('id')) + for material in materials: + if not isinstance(material, dict): + continue + if material.get('material_type') != 'video': + continue + video_url = urljoin(url, material.get('url')) + if not video_url: + continue + entries.append({ + '_type': 'url_transparent', + 'url': video_url, + 'title': str_or_none(material.get('name')), + 'id': str_or_none(material.get('id')), + 'ie_key': PlatziIE.ie_key(), + 'chapter': chapter_title, + 'chapter_number': chapter_num, + 'chapter_id': chapter_id, + }) + + course_id = compat_str(try_get(props, lambda x: x['course']['id'])) + course_title = try_get(props, lambda x: x['course']['name'], compat_str) + + return self.playlist_result(entries, course_id, course_title) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/playfm.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/playfm.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/playfm.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/playfm.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/playplustv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/playplustv.py new file mode 100644 index 0000000..a4439c8 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/playplustv.py @@ -0,0 +1,100 @@ +import json + +from .common import InfoExtractor +from ..networking import PUTRequest +from ..networking.exceptions import HTTPError +from ..utils import ExtractorError, clean_html, int_or_none + + +class PlayPlusTVIE(InfoExtractor): + _VALID_URL = 
r'https?://(?:www\.)?playplus\.(?:com|tv)/VOD/(?P<project_id>[0-9]+)/(?P<id>[0-9a-f]{32})' + _TEST = { + 'url': 'https://www.playplus.tv/VOD/7572/db8d274a5163424e967f35a30ddafb8e', + 'md5': 'd078cb89d7ab6b9df37ce23c647aef72', + 'info_dict': { + 'id': 'db8d274a5163424e967f35a30ddafb8e', + 'ext': 'mp4', + 'title': 'Capítulo 179 - Final', + 'description': 'md5:01085d62d8033a1e34121d3c3cabc838', + 'timestamp': 1529992740, + 'upload_date': '20180626', + }, + 'skip': 'Requires account credential', + } + _NETRC_MACHINE = 'playplustv' + _GEO_COUNTRIES = ['BR'] + _token = None + _profile_id = None + + def _call_api(self, resource, video_id=None, query=None): + return self._download_json('https://api.playplus.tv/api/media/v2/get' + resource, video_id, headers={ + 'Authorization': 'Bearer ' + self._token, + }, query=query) + + def _perform_login(self, username, password): + req = PUTRequest( + 'https://api.playplus.tv/api/web/login', json.dumps({ + 'email': username, + 'password': password, + }).encode(), { + 'Content-Type': 'application/json; charset=utf-8', + }) + + try: + self._token = self._download_json(req, None)['token'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + raise ExtractorError(self._parse_json( + e.cause.response.read(), None)['errorMessage'], expected=True) + raise + + self._profile = self._call_api('Profiles')['list'][0]['_id'] + + def _real_initialize(self): + if not self._token: + self.raise_login_required(method='password') + + def _real_extract(self, url): + project_id, media_id = self._match_valid_url(url).groups() + media = self._call_api( + 'Media', media_id, { + 'profileId': self._profile, + 'projectId': project_id, + 'mediaId': media_id, + })['obj'] + title = media['title'] + + formats = [] + for f in media.get('files', []): + f_url = f.get('url') + if not f_url: + continue + file_info = f.get('fileInfo') or {} + formats.append({ + 'url': f_url, + 'width': int_or_none(file_info.get('width')), + 'height': int_or_none(file_info.get('height')), + }) + + thumbnails = [] + for thumb in media.get('thumbs', []): + thumb_url = thumb.get('url') + if not thumb_url: + continue + thumbnails.append({ + 'url': thumb_url, + 'width': int_or_none(thumb.get('width')), + 'height': int_or_none(thumb.get('height')), + }) + + return { + 'id': media_id, + 'title': title, + 'formats': formats, + 'thumbnails': thumbnails, + 'description': clean_html(media.get('description')) or media.get('shortDescription'), + 'timestamp': int_or_none(media.get('publishDate'), 1000), + 'view_count': int_or_none(media.get('numberOfViews')), + 'comment_count': int_or_none(media.get('numberOfComments')), + 'tags': media.get('tags'), + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/plays.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/plays.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/plays.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/plays.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/playstuff.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/playstuff.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/playstuff.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/playstuff.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/playsuisse.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/playsuisse.py new file mode 100644 index 0000000..76288c7 --- /dev/null +++ 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/playsuisse.py @@ -0,0 +1,187 @@ +import json + +from .common import InfoExtractor +from ..utils import int_or_none, traverse_obj + + +class PlaySuisseIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?playsuisse\.ch/(?:watch|detail)/(?:[^#]*[?&]episodeId=)?(?P<id>[0-9]+)' + _TESTS = [ + { + # Old URL + 'url': 'https://www.playsuisse.ch/watch/763211/0', + 'only_matching': True, + }, + { + # episode in a series + 'url': 'https://www.playsuisse.ch/watch/763182?episodeId=763211', + 'md5': '82df2a470b2dfa60c2d33772a8a60cf8', + 'info_dict': { + 'id': '763211', + 'ext': 'mp4', + 'title': 'Knochen', + 'description': 'md5:8ea7a8076ba000cd9e8bc132fd0afdd8', + 'duration': 3344, + 'series': 'Wilder', + 'season': 'Season 1', + 'season_number': 1, + 'episode': 'Knochen', + 'episode_number': 1, + 'thumbnail': 're:https://playsuisse-img.akamaized.net/', + } + }, { + # film + 'url': 'https://www.playsuisse.ch/watch/808675', + 'md5': '818b94c1d2d7c4beef953f12cb8f3e75', + 'info_dict': { + 'id': '808675', + 'ext': 'mp4', + 'title': 'Der Läufer', + 'description': 'md5:9f61265c7e6dcc3e046137a792b275fd', + 'duration': 5280, + 'thumbnail': 're:https://playsuisse-img.akamaized.net/', + } + }, { + # series (treated as a playlist) + 'url': 'https://www.playsuisse.ch/detail/1115687', + 'info_dict': { + 'description': 'md5:e4a2ae29a8895823045b5c3145a02aa3', + 'id': '1115687', + 'series': 'They all came out to Montreux', + 'title': 'They all came out to Montreux', + }, + 'playlist': [{ + 'info_dict': { + 'description': 'md5:f2462744834b959a31adc6292380cda2', + 'duration': 3180, + 'episode': 'Folge 1', + 'episode_number': 1, + 'id': '1112663', + 'season': 'Season 1', + 'season_number': 1, + 'series': 'They all came out to Montreux', + 'thumbnail': 're:https://playsuisse-img.akamaized.net/', + 'title': 'Folge 1', + 'ext': 'mp4' + }, + }, { + 'info_dict': { + 'description': 'md5:9dfd308699fe850d3bce12dc1bad9b27', + 'duration': 2935, + 'episode': 'Folge 2', + 'episode_number': 2, + 'id': '1112661', + 'season': 'Season 1', + 'season_number': 1, + 'series': 'They all came out to Montreux', + 'thumbnail': 're:https://playsuisse-img.akamaized.net/', + 'title': 'Folge 2', + 'ext': 'mp4' + }, + }, { + 'info_dict': { + 'description': 'md5:14a93a3356b2492a8f786ab2227ef602', + 'duration': 2994, + 'episode': 'Folge 3', + 'episode_number': 3, + 'id': '1112664', + 'season': 'Season 1', + 'season_number': 1, + 'series': 'They all came out to Montreux', + 'thumbnail': 're:https://playsuisse-img.akamaized.net/', + 'title': 'Folge 3', + 'ext': 'mp4' + } + }], + } + ] + + _GRAPHQL_QUERY = ''' + query AssetWatch($assetId: ID!) { + assetV2(id: $assetId) { + ...Asset + episodes { + ...Asset + } + } + } + fragment Asset on AssetV2 { + id + name + description + duration + episodeNumber + seasonNumber + seriesName + medias { + type + url + } + thumbnail16x9 { + ...ImageDetails + } + thumbnail2x3 { + ...ImageDetails + } + thumbnail16x9WithTitle { + ...ImageDetails + } + thumbnail2x3WithTitle { + ...ImageDetails + } + } + fragment ImageDetails on AssetImage { + id + url + }''' + + def _get_media_data(self, media_id): + # NOTE In the web app, the "locale" header is used to switch between languages, + # However this doesn't seem to take effect when passing the header here. 
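+ # The AssetWatch GraphQL query returns the asset together with any episodes; + # _real_extract below turns a non-empty episode list into a playlist.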
+ response = self._download_json( + 'https://4bbepzm4ef.execute-api.eu-central-1.amazonaws.com/prod/graphql', + media_id, data=json.dumps({ + 'operationName': 'AssetWatch', + 'query': self._GRAPHQL_QUERY, + 'variables': {'assetId': media_id} + }).encode('utf-8'), + headers={'Content-Type': 'application/json', 'locale': 'de'}) + + return response['data']['assetV2'] + + def _real_extract(self, url): + media_id = self._match_id(url) + media_data = self._get_media_data(media_id) + info = self._extract_single(media_data) + if media_data.get('episodes'): + info.update({ + '_type': 'playlist', + 'entries': map(self._extract_single, media_data['episodes']), + }) + return info + + def _extract_single(self, media_data): + thumbnails = traverse_obj(media_data, lambda k, _: k.startswith('thumbnail')) + + formats, subtitles = [], {} + for media in traverse_obj(media_data, 'medias', default=[]): + if not media.get('url') or media.get('type') != 'HLS': + continue + f, subs = self._extract_m3u8_formats_and_subtitles( + media['url'], media_data['id'], 'mp4', m3u8_id='HLS', fatal=False) + formats.extend(f) + self._merge_subtitles(subs, target=subtitles) + + return { + 'id': media_data['id'], + 'title': media_data.get('name'), + 'description': media_data.get('description'), + 'thumbnails': thumbnails, + 'duration': int_or_none(media_data.get('duration')), + 'formats': formats, + 'subtitles': subtitles, + 'series': media_data.get('seriesName'), + 'season_number': int_or_none(media_data.get('seasonNumber')), + 'episode': media_data.get('name') if media_data.get('episodeNumber') else None, + 'episode_number': int_or_none(media_data.get('episodeNumber')), + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/playtvak.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/playtvak.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/playtvak.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/playtvak.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/playvid.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/playvid.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/playvid.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/playvid.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/playwire.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/playwire.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/playwire.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/playwire.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pluralsight.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pluralsight.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pluralsight.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pluralsight.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/plutotv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/plutotv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/plutotv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/plutotv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/podbayfm.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/podbayfm.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/podbayfm.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/podbayfm.py diff --git 
a/lib/python3.11/site-packages/yt_dlp/extractor/podchaser.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/podchaser.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/podchaser.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/podchaser.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/podomatic.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/podomatic.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/podomatic.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/podomatic.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pokemon.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pokemon.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pokemon.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pokemon.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pokergo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pokergo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pokergo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pokergo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/polsatgo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/polsatgo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/polsatgo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/polsatgo.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/polskieradio.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/polskieradio.py new file mode 100644 index 0000000..5bf92b9 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/polskieradio.py @@ -0,0 +1,610 @@ +import itertools +import json +import math +import re +import urllib.parse + +from .common import InfoExtractor +from ..compat import compat_str +from ..utils import ( + ExtractorError, + InAdvancePagedList, + determine_ext, + extract_attributes, + int_or_none, + js_to_json, + parse_iso8601, + strip_or_none, + traverse_obj, + unescapeHTML, + unified_timestamp, + url_or_none, + urljoin, +) + + +class PolskieRadioBaseExtractor(InfoExtractor): + def _extract_webpage_player_entries(self, webpage, playlist_id, base_data): + media_urls = set() + + for data_media in re.findall(r'<[^>]+data-media="?({[^>]+})"?', webpage): + media = self._parse_json(data_media, playlist_id, transform_source=unescapeHTML, fatal=False) + if not media.get('file') or not media.get('desc'): + continue + media_url = self._proto_relative_url(media['file']) + if media_url in media_urls: + continue + media_urls.add(media_url) + entry = base_data.copy() + entry.update({ + 'id': compat_str(media['id']), + 'url': media_url, + 'duration': int_or_none(media.get('length')), + 'vcodec': 'none' if media.get('provider') == 'audio' else None, + }) + entry_title = urllib.parse.unquote(media['desc']) + if entry_title: + entry['title'] = entry_title + yield entry + + +class PolskieRadioLegacyIE(PolskieRadioBaseExtractor): + # legacy sites + IE_NAME = 'polskieradio:legacy' + _VALID_URL = r'https?://(?:www\.)?polskieradio(?:24)?\.pl/\d+/\d+/[Aa]rtykul/(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.polskieradio.pl/8/2382/Artykul/2534482,Zagarysci-Poezja-jak-spoiwo', + 'info_dict': { + 'id': '2534482', + 'title': 'Żagaryści. 
Poezja jak spoiwo', + 'description': 'md5:f18d95d5dcba747a09b635e21a4c0695', + }, + 'playlist': [{ + 'md5': 'd07559829f61d5a93a75755987ded760', + 'info_dict': { + 'id': '2516679', + 'ext': 'mp3', + 'title': 'md5:c6e1234e0b747ad883cb91b7ad06b98c', + 'timestamp': 1592654400, + 'upload_date': '20200620', + 'duration': 1430, + 'thumbnail': r're:^https?://static\.prsa\.pl/images/.*\.jpg$' + }, + }], + }, { + # PR4 audition - other frontend + 'url': 'https://www.polskieradio.pl/10/6071/Artykul/2610977,Poglos-29-pazdziernika-godz-2301', + 'info_dict': { + 'id': '2610977', + 'ext': 'mp3', + 'title': 'Pogłos 29 października godz. 23:01', + }, + }, { + 'url': 'https://polskieradio24.pl/130/4503/Artykul/2621876,Narusza-nasza-suwerennosc-Publicysci-o-uzaleznieniu-funduszy-UE-od-praworzadnosci', + 'only_matching': True, + }] + + def _real_extract(self, url): + playlist_id = self._match_id(url) + + webpage, urlh = self._download_webpage_handle(url, playlist_id) + if PolskieRadioIE.suitable(urlh.url): + return self.url_result(urlh.url, PolskieRadioIE, playlist_id) + + content = self._search_regex( + r'(?s)<div[^>]+class="\s*this-article\s*"[^>]*>(.+?)<div[^>]+class="tags"[^>]*>', + webpage, 'content', default=None) + + timestamp = unified_timestamp(self._html_search_regex( + r'(?s)<span[^>]+id="datetime2"[^>]*>(.+?)</span>', + webpage, 'timestamp', default=None)) + + thumbnail_url = self._og_search_thumbnail(webpage, default=None) + + title = self._og_search_title(webpage).strip() + + description = strip_or_none(self._og_search_description(webpage, default=None)) + description = description.replace('\xa0', ' ') if description is not None else None + + if not content: + return { + 'id': playlist_id, + 'url': self._proto_relative_url( + self._search_regex( + r"source:\s*'(//static\.prsa\.pl/[^']+)'", + webpage, 'audition record url')), + 'title': title, + 'description': description, + 'timestamp': timestamp, + 'thumbnail': thumbnail_url, + } + + entries = self._extract_webpage_player_entries(content, playlist_id, { + 'title': title, + 'timestamp': timestamp, + 'thumbnail': thumbnail_url, + }) + + return self.playlist_result(entries, playlist_id, title, description) + + +class PolskieRadioIE(PolskieRadioBaseExtractor): + # new next.js sites + _VALID_URL = r'https?://(?:[^/]+\.)?(?:polskieradio(?:24)?|radiokierowcow)\.pl/artykul/(?P<id>\d+)' + _TESTS = [{ + # articleData, attachments + 'url': 'https://jedynka.polskieradio.pl/artykul/1587943', + 'info_dict': { + 'id': '1587943', + 'title': 'Prof. 
Andrzej Nowak: o historii nie da się myśleć beznamiętnie', + 'description': 'md5:12f954edbf3120c5e7075e17bf9fc5c5', + }, + 'playlist': [{ + 'md5': '2984ee6ce9046d91fc233bc1a864a09a', + 'info_dict': { + 'id': '7a85d429-5356-4def-a347-925e4ae7406b', + 'ext': 'mp3', + 'title': 'md5:d4623290d4ac983bf924061c75c23a0d', + }, + }], + }, { + # post, legacy html players + 'url': 'https://trojka.polskieradio.pl/artykul/2589163,Czy-wciaz-otrzymujemy-zdjecia-z-sond-Voyager', + 'info_dict': { + 'id': '2589163', + 'title': 'Czy wciąż otrzymujemy zdjęcia z sond Voyager?', + 'description': 'md5:cf1a7f348d63a2db9c0d7a63d1669473', + }, + 'playlist': [{ + 'info_dict': { + 'id': '2577880', + 'ext': 'mp3', + 'title': 'md5:a57d10a0c02abd34dd675cb33707ad5a', + 'duration': 321, + }, + }], + }, { + # data, legacy + 'url': 'https://radiokierowcow.pl/artykul/2694529', + 'info_dict': { + 'id': '2694529', + 'title': 'Zielona fala reliktem przeszłości?', + 'description': 'md5:f20a9a7ed9cb58916c54add94eae3bc0', + }, + 'playlist_count': 3, + }, { + 'url': 'https://trojka.polskieradio.pl/artykul/1632955', + 'only_matching': True, + }, { + # with mp4 video + 'url': 'https://trojka.polskieradio.pl/artykul/1634903', + 'only_matching': True, + }, { + 'url': 'https://jedynka.polskieradio.pl/artykul/3042436,Polityka-wschodnia-ojca-i-syna-Wladyslawa-Lokietka-i-Kazimierza-Wielkiego', + 'only_matching': True, + }] + + def _real_extract(self, url): + playlist_id = self._match_id(url) + + webpage = self._download_webpage(url, playlist_id) + + article_data = traverse_obj( + self._search_nextjs_data(webpage, playlist_id), ( + 'props', 'pageProps', (('data', 'articleData'), 'post', 'data')), get_all=False) + + title = strip_or_none(article_data['title']) + + description = strip_or_none(article_data.get('lead')) + + entries = [{ + 'url': entry['file'], + 'ext': determine_ext(entry.get('fileName')), + 'id': self._search_regex( + r'([a-f\d]{8}-(?:[a-f\d]{4}-){3}[a-f\d]{12})', entry['file'], 'entry id'), + 'title': strip_or_none(entry.get('description')) or title, + } for entry in article_data.get('attachments') or () if entry.get('fileType') in ('Audio', )] + + if not entries: + # some legacy articles have no json attachments, but players in body + entries = self._extract_webpage_player_entries(article_data['content'], playlist_id, { + 'title': title, + }) + + return self.playlist_result(entries, playlist_id, title, description) + + +class PolskieRadioAuditionIE(InfoExtractor): + # new next.js sites + IE_NAME = 'polskieradio:audition' + _VALID_URL = r'https?://(?:[^/]+\.)?polskieradio\.pl/audycj[ae]/(?P<id>\d+)' + _TESTS = [{ + # articles, PR1 + 'url': 'https://jedynka.polskieradio.pl/audycje/5102', + 'info_dict': { + 'id': '5102', + 'title': 'Historia żywa', + 'thumbnail': r're:https://static\.prsa\.pl/images/.+', + }, + 'playlist_mincount': 38, + }, { + # episodes, PR1 + 'url': 'https://jedynka.polskieradio.pl/audycje/5769', + 'info_dict': { + 'id': '5769', + 'title': 'AgroFakty', + 'thumbnail': r're:https://static\.prsa\.pl/images/.+', + }, + 'playlist_mincount': 269, + }, { + # both episodes and articles, PR3 + 'url': 'https://trojka.polskieradio.pl/audycja/8906', + 'info_dict': { + 'id': '8906', + 'title': 'Trójka budzi', + 'thumbnail': r're:https://static\.prsa\.pl/images/.+', + }, + 'playlist_mincount': 722, + }, { + # some articles were "promoted to main page" and thus link to old frontend + 'url': 'https://trojka.polskieradio.pl/audycja/305', + 'info_dict': { + 'id': '305', + 'title': 'Co w mowie piszczy?', + 'thumbnail': 
r're:https://static\.prsa\.pl/images/.+', + }, + 'playlist_count': 1523, + }] + + def _call_lp3(self, path, query, video_id, note): + return self._download_json( + f'https://lp3test.polskieradio.pl/{path}', video_id, note, + query=query, headers={'x-api-key': '9bf6c5a2-a7d0-4980-9ed7-a3f7291f2a81'}) + + def _entries(self, playlist_id, has_episodes, has_articles): + for i in itertools.count(1) if has_episodes else []: + page = self._call_lp3( + 'AudioArticle/GetListByCategoryId', { + 'categoryId': playlist_id, + 'PageSize': 10, + 'skip': i, + 'format': 400, + }, playlist_id, f'Downloading episode list page {i}') + if not traverse_obj(page, 'data'): + break + for episode in page['data']: + yield { + 'id': str(episode['id']), + 'url': episode['file'], + 'title': episode.get('title'), + 'duration': int_or_none(episode.get('duration')), + 'timestamp': parse_iso8601(episode.get('datePublic')), + } + + for i in itertools.count(1) if has_articles else []: + page = self._call_lp3( + 'Article/GetListByCategoryId', { + 'categoryId': playlist_id, + 'PageSize': 9, + 'skip': i, + 'format': 400, + }, playlist_id, f'Downloading article list page {i}') + if not traverse_obj(page, 'data'): + break + for article in page['data']: + yield { + '_type': 'url_transparent', + 'id': str(article['id']), + 'url': article['url'], + 'title': article.get('shortTitle'), + 'description': traverse_obj(article, ('description', 'lead')), + 'timestamp': parse_iso8601(article.get('datePublic')), + } + + def _real_extract(self, url): + playlist_id = self._match_id(url) + + page_props = traverse_obj( + self._search_nextjs_data(self._download_webpage(url, playlist_id), playlist_id), + ('props', 'pageProps', ('data', None)), get_all=False) + + has_episodes = bool(traverse_obj(page_props, 'episodes', 'audios')) + has_articles = bool(traverse_obj(page_props, 'articles')) + + return self.playlist_result( + self._entries(playlist_id, has_episodes, has_articles), playlist_id, + title=traverse_obj(page_props, ('details', 'name')), + description=traverse_obj(page_props, ('details', 'description', 'lead')), + thumbnail=traverse_obj(page_props, ('details', 'photo'))) + + +class PolskieRadioCategoryIE(InfoExtractor): + # legacy sites + IE_NAME = 'polskieradio:category' + _VALID_URL = r'https?://(?:www\.)?polskieradio\.pl/(?:\d+(?:,[^/]+)?/|[^/]+/Tag)(?P<id>\d+)' + _TESTS = [{ + 'url': 'http://www.polskieradio.pl/37,RedakcjaKatolicka/4143,Kierunek-Krakow', + 'info_dict': { + 'id': '4143', + 'title': 'Kierunek Kraków', + }, + 'playlist_mincount': 61 + }, { + 'url': 'http://www.polskieradio.pl/10,czworka/214,muzyka', + 'info_dict': { + 'id': '214', + 'title': 'Muzyka', + }, + 'playlist_mincount': 61 + }, { + # billennium tabs + 'url': 'https://www.polskieradio.pl/8/2385', + 'info_dict': { + 'id': '2385', + 'title': 'Droga przez mąkę', + }, + 'playlist_mincount': 111, + }, { + 'url': 'https://www.polskieradio.pl/10/4930', + 'info_dict': { + 'id': '4930', + 'title': 'Teraz K-pop!', + }, + 'playlist_mincount': 392, + }, { + # post back pages, audio content directly without articles + 'url': 'https://www.polskieradio.pl/8,dwojka/7376,nowa-mowa', + 'info_dict': { + 'id': '7376', + 'title': 'Nowa mowa', + }, + 'playlist_mincount': 244, + }, { + 'url': 'https://www.polskieradio.pl/Krzysztof-Dziuba/Tag175458', + 'info_dict': { + 'id': '175458', + 'title': 'Krzysztof Dziuba', + }, + 'playlist_mincount': 420, + }, { + 'url': 'http://www.polskieradio.pl/8,Dwojka/196,Publicystyka', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): 
+ return False if PolskieRadioLegacyIE.suitable(url) else super().suitable(url) + + def _entries(self, url, page, category_id): + content = page + is_billennium_tabs = 'onclick="TB_LoadTab(' in page + is_post_back = 'onclick="__doPostBack(' in page + pagination = page if is_billennium_tabs else None + for page_num in itertools.count(2): + for a_entry, entry_id in re.findall( + r'(?s)<article[^>]+>.*?(<a[^>]+href=["\'](?:(?:https?)?://[^/]+)?/\d+/\d+/Artykul/(\d+)[^>]+>).*?</article>', + content): + entry = extract_attributes(a_entry) + if entry.get('href'): + yield self.url_result( + urljoin(url, entry['href']), PolskieRadioLegacyIE, entry_id, entry.get('title')) + for a_entry in re.findall(r'<span data-media=({[^ ]+})', content): + yield traverse_obj(self._parse_json(a_entry, category_id), { + 'url': 'file', + 'id': 'uid', + 'duration': 'length', + 'title': ('title', {urllib.parse.unquote}), + 'description': ('desc', {urllib.parse.unquote}), + }) + if is_billennium_tabs: + params = self._search_json( + r'<div[^>]+class=["\']next["\'][^>]*>\s*<a[^>]+onclick=["\']TB_LoadTab\(', + pagination, 'next page params', category_id, default=None, close_objects=1, + contains_pattern='.+', transform_source=lambda x: '[%s' % js_to_json(unescapeHTML(x))) + if not params: + break + tab_content = self._download_json( + 'https://www.polskieradio.pl/CMS/TemplateBoxesManagement/TemplateBoxTabContent.aspx/GetTabContent', + category_id, f'Downloading page {page_num}', headers={'content-type': 'application/json'}, + data=json.dumps(dict(zip(( + 'boxInstanceId', 'tabId', 'categoryType', 'sectionId', 'categoryId', 'pagerMode', + 'subjectIds', 'tagIndexId', 'queryString', 'name', 'openArticlesInParentTemplate', + 'idSectionFromUrl', 'maxDocumentAge', 'showCategoryForArticle', 'pageNumber' + ), params))).encode())['d'] + content, pagination = tab_content['Content'], tab_content.get('PagerContent') + elif is_post_back: + target = self._search_regex( + r'onclick=(?:["\'])__doPostBack\((?P<q1>["\'])(?P<target>[\w$]+)(?P=q1)\s*,\s*(?P<q2>["\'])Next(?P=q2)', + content, 'pagination postback target', group='target', default=None) + if not target: + break + content = self._download_webpage( + url, category_id, f'Downloading page {page_num}', + data=urllib.parse.urlencode({ + **self._hidden_inputs(content), + '__EVENTTARGET': target, + '__EVENTARGUMENT': 'Next', + }).encode()) + else: + next_url = urljoin(url, self._search_regex( + r'<div[^>]+class=["\']next["\'][^>]*>\s*<a[^>]+href=(["\'])(?P<url>(?:(?!\1).)+)\1', + content, 'next page url', group='url', default=None)) + if not next_url: + break + content = self._download_webpage(next_url, category_id, f'Downloading page {page_num}') + + def _real_extract(self, url): + category_id = self._match_id(url) + webpage, urlh = self._download_webpage_handle(url, category_id) + if PolskieRadioAuditionIE.suitable(urlh.url): + return self.url_result(urlh.url, PolskieRadioAuditionIE, category_id) + title = self._html_search_regex( + r'<title>([^<]+)(?: - [^<]+ - [^<]+| w [Pp]olskie[Rr]adio\.pl\s*)', + webpage, 'title', fatal=False) + return self.playlist_result( + self._entries(url, webpage, category_id), + category_id, title) + + +class PolskieRadioPlayerIE(InfoExtractor): + IE_NAME = 'polskieradio:player' + _VALID_URL = r'https?://player\.polskieradio\.pl/anteny/(?P<id>[^/]+)' + + _BASE_URL = 'https://player.polskieradio.pl' + _PLAYER_URL = 'https://player.polskieradio.pl/main.bundle.js' + _STATIONS_API_URL = 'https://apipr.polskieradio.pl/api/stacje' + + _TESTS = [{ + 'url': 
'https://player.polskieradio.pl/anteny/trojka', + 'info_dict': { + 'id': '3', + 'ext': 'm4a', + 'title': 'Trójka', + }, + 'params': { + 'format': 'bestaudio', + 'skip_download': 'endless stream', + }, + }] + + def _get_channel_list(self, channel_url='no_channel'): + player_code = self._download_webpage( + self._PLAYER_URL, channel_url, + note='Downloading js player') + channel_list = js_to_json(self._search_regex( + r';var r="anteny",a=(\[.+?\])},', player_code, 'channel list')) + return self._parse_json(channel_list, channel_url) + + def _real_extract(self, url): + channel_url = self._match_id(url) + channel_list = self._get_channel_list(channel_url) + + channel = next((c for c in channel_list if c.get('url') == channel_url), None) + + if not channel: + raise ExtractorError('Channel not found') + + station_list = self._download_json(self._STATIONS_API_URL, channel_url, + note='Downloading stream url list', + headers={ + 'Accept': 'application/json', + 'Referer': url, + 'Origin': self._BASE_URL, + }) + station = next((s for s in station_list + if s.get('Name') == (channel.get('streamName') or channel.get('name'))), None) + if not station: + raise ExtractorError('Station not found even though we extracted channel') + + formats = [] + for stream_url in station['Streams']: + stream_url = self._proto_relative_url(stream_url) + if stream_url.endswith('/playlist.m3u8'): + formats.extend(self._extract_m3u8_formats(stream_url, channel_url, live=True)) + elif stream_url.endswith('/manifest.f4m'): + formats.extend(self._extract_mpd_formats(stream_url, channel_url)) + elif stream_url.endswith('/Manifest'): + formats.extend(self._extract_ism_formats(stream_url, channel_url)) + else: + formats.append({ + 'url': stream_url, + }) + + return { + 'id': compat_str(channel['id']), + 'formats': formats, + 'title': channel.get('name') or channel.get('streamName'), + 'display_id': channel_url, + 'thumbnail': f'{self._BASE_URL}/images/{channel_url}-color-logo.png', + 'is_live': True, + } + + +class PolskieRadioPodcastBaseExtractor(InfoExtractor): + _API_BASE = 'https://apipodcasts.polskieradio.pl/api' + + def _parse_episode(self, data): + return { + 'id': data['guid'], + 'formats': [{ + 'url': data['url'], + 'filesize': int_or_none(data.get('fileSize')), + }], + 'title': data['title'], + 'description': data.get('description'), + 'duration': int_or_none(data.get('length')), + 'timestamp': parse_iso8601(data.get('publishDate')), + 'thumbnail': url_or_none(data.get('image')), + 'series': data.get('podcastTitle'), + 'episode': data['title'], + } + + +class PolskieRadioPodcastListIE(PolskieRadioPodcastBaseExtractor): + IE_NAME = 'polskieradio:podcast:list' + _VALID_URL = r'https?://podcasty\.polskieradio\.pl/podcast/(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://podcasty.polskieradio.pl/podcast/8/', + 'info_dict': { + 'id': '8', + 'title': 'Śniadanie w Trójce', + 'description': 'md5:57abcc27bc4c6a6b25baa3061975b9ef', + 'uploader': 'Beata Michniewicz', + }, + 'playlist_mincount': 714, + }] + _PAGE_SIZE = 10 + + def _call_api(self, podcast_id, page): + return self._download_json( + f'{self._API_BASE}/Podcasts/{podcast_id}/?pageSize={self._PAGE_SIZE}&page={page}', + podcast_id, f'Downloading page {page}') + + def _real_extract(self, url): + podcast_id = self._match_id(url) + data = self._call_api(podcast_id, 1) + + def get_page(page_num): + page_data = self._call_api(podcast_id, page_num + 1) if page_num else data + yield from (self._parse_episode(ep) for ep in page_data['items']) + + return { + '_type': 'playlist', + 
'entries': InAdvancePagedList( + get_page, math.ceil(data['itemCount'] / self._PAGE_SIZE), self._PAGE_SIZE), + 'id': str(data['id']), + 'title': data.get('title'), + 'description': data.get('description'), + 'uploader': data.get('announcer'), + } + + +class PolskieRadioPodcastIE(PolskieRadioPodcastBaseExtractor): + IE_NAME = 'polskieradio:podcast' + _VALID_URL = r'https?://podcasty\.polskieradio\.pl/track/(?P<id>[a-f\d]{8}(?:-[a-f\d]{4}){4}[a-f\d]{8})' + _TESTS = [{ + 'url': 'https://podcasty.polskieradio.pl/track/6eafe403-cb8f-4756-b896-4455c3713c32', + 'info_dict': { + 'id': '6eafe403-cb8f-4756-b896-4455c3713c32', + 'ext': 'mp3', + 'title': 'Theresa May rezygnuje. Co dalej z brexitem?', + 'description': 'md5:e41c409a29d022b70ef0faa61dbded60', + 'episode': 'Theresa May rezygnuje. Co dalej z brexitem?', + 'duration': 2893, + 'thumbnail': 'https://static.prsa.pl/images/58649376-c8a0-4ba2-a714-78b383285f5f.jpg', + 'series': 'Raport o stanie świata', + }, + }] + + def _real_extract(self, url): + podcast_id = self._match_id(url) + data = self._download_json( + f'{self._API_BASE}/audio', + podcast_id, 'Downloading podcast metadata', + data=json.dumps({ + 'guids': [podcast_id], + }).encode('utf-8'), + headers={ + 'Content-Type': 'application/json', + }) + return self._parse_episode(data[0]) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/popcorntimes.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/popcorntimes.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/popcorntimes.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/popcorntimes.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/popcorntv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/popcorntv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/popcorntv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/popcorntv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/porn91.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/porn91.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/porn91.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/porn91.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/pornbox.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornbox.py new file mode 100644 index 0000000..c381382 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornbox.py @@ -0,0 +1,113 @@ +from .common import InfoExtractor +from ..compat import functools +from ..utils import ( + int_or_none, + parse_duration, + parse_iso8601, + qualities, + str_or_none, + traverse_obj, + url_or_none, +) + + +class PornboxIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?pornbox\.com/application/watch-page/(?P<id>[0-9]+)' + _TESTS = [{ + 'url': 'https://pornbox.com/application/watch-page/212108', + 'md5': '3ff6b6e206f263be4c5e987a3162ac6e', + 'info_dict': { + 'id': '212108', + 'ext': 'mp4', + 'title': 'md5:ececc5c6e6c9dd35d290c45fed05fd49', + 'uploader': 'Lily Strong', + 'timestamp': 1665871200, + 'upload_date': '20221015', + 'age_limit': 18, + 'availability': 'needs_auth', + 'duration': 1505, + 'cast': ['Lily Strong', 'John Strong'], + 'tags': 'count:11', + 'description': 'md5:589c7f33e183aa8aa939537300efb859', + 'thumbnail': r're:^https?://cdn-image\.gtflixtv\.com.*\.jpg.*$' + } + }, { + 'url': 'https://pornbox.com/application/watch-page/216045', + 'info_dict': { + 'id': '216045', + 'title': 
'md5:3e48528e73a9a2b12f7a2772ed0b26a2', + 'description': 'md5:3e631dcaac029f15ed434e402d1b06c7', + 'uploader': 'VK Studio', + 'timestamp': 1618264800, + 'upload_date': '20210412', + 'age_limit': 18, + 'availability': 'premium_only', + 'duration': 2710, + 'cast': 'count:3', + 'tags': 'count:29', + 'thumbnail': r're:^https?://cdn-image\.gtflixtv\.com.*\.jpg.*$', + 'subtitles': 'count:6' + }, + 'params': { + 'skip_download': True, + 'ignore_no_formats_error': True + }, + 'expected_warnings': [ + 'You are either not logged in or do not have access to this scene', + 'No video formats found', 'Requested format is not available'] + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + public_data = self._download_json(f'https://pornbox.com/contents/{video_id}', video_id) + + subtitles = {country_code: [{ + 'url': f'https://pornbox.com/contents/{video_id}/subtitles/{country_code}', + 'ext': 'srt' + }] for country_code in traverse_obj(public_data, ('subtitles', ..., {str}))} + + is_free_scene = traverse_obj( + public_data, ('price', 'is_available_for_free', {bool}), default=False) + + metadata = { + 'id': video_id, + **traverse_obj(public_data, { + 'title': ('scene_name', {str.strip}), + 'description': ('small_description', {str.strip}), + 'uploader': 'studio', + 'duration': ('runtime', {parse_duration}), + 'cast': (('models', 'male_models'), ..., 'model_name'), + 'thumbnail': ('player_poster', {url_or_none}), + 'tags': ('niches', ..., 'niche'), + }), + 'age_limit': 18, + 'timestamp': parse_iso8601(traverse_obj( + public_data, ('studios', 'release_date'), 'publish_date')), + 'availability': self._availability(needs_auth=True, needs_premium=not is_free_scene), + 'subtitles': subtitles, + } + + if not public_data.get('is_purchased') and not is_free_scene: + self.raise_login_required( + 'You are either not logged in or do not have access to this scene', metadata_available=True) + return metadata + + media_id = traverse_obj(public_data, ( + 'medias', lambda _, v: v['title'] == 'Full video', 'media_id', {int}), get_all=False) + if not media_id: + self.raise_no_formats('Could not find stream id', video_id=video_id) + + stream_data = self._download_json( + f'https://pornbox.com/media/{media_id}/stream', video_id=video_id, note='Getting manifest urls') + + get_quality = qualities(['web', 'vga', 'hd', '1080p', '4k', '8k']) + metadata['formats'] = traverse_obj(stream_data, ('qualities', lambda _, v: v['src'], { + 'url': 'src', + 'vbr': ('bitrate', {functools.partial(int_or_none, scale=1000)}), + 'format_id': ('quality', {str_or_none}), + 'quality': ('quality', {get_quality}), + 'width': ('size', {lambda x: int(x[:-1])}), + })) + + return metadata diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/porncom.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/porncom.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/porncom.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/porncom.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/pornez.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornez.py new file mode 100644 index 0000000..bc45f86 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornez.py @@ -0,0 +1,60 @@ +from .common import InfoExtractor +from ..utils import ( + clean_html, + int_or_none, + get_element_by_class, + urljoin, +) + + +class PornezIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?pornez\.net/(?:video(?P<id>\w+)|watch)/' + _TESTS = [{ + 'url': 
'https://pornez.net/video344819/mistresst-funny_penis_names-wmv/', + 'info_dict': { + 'id': '344819', + 'ext': 'mp4', + 'title': 'mistresst funny_penis_names wmv', + 'thumbnail': r're:^https?://.*\.jpg$', + 'age_limit': 18, + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'url': 'https://pornez.net/watch/leana+lovings+stiff+for+stepdaughter/', + 'info_dict': { + 'id': '156161', + 'ext': 'mp4', + 'title': 'Watch leana lovings stiff for stepdaughter porn video.', + 'age_limit': 18, + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'url': 'https://pornez.net/videovzs27fj/tutor4k-e14-blue-wave-1080p-nbq-tutor4k-e14-blue-wave/', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + if not video_id: + video_id = self._search_regex( + r'<link[^>]+\bhref=["\']https?://pornez.net/\?p=(\w+)["\']', webpage, 'id') + + iframe_src = self._html_search_regex(r'<iframe[^>]+src="([^"]+)"', webpage, 'iframe') + iframe = self._download_webpage(urljoin('https://pornez.net', iframe_src), video_id) + + entries = self._parse_html5_media_entries(iframe_src, iframe, video_id)[0] + for fmt in entries['formats']: + height = self._search_regex(r'_(\d+)\.m3u8', fmt['url'], 'height') + fmt['format_id'] = '%sp' % height + fmt['height'] = int_or_none(height) + + entries.update({ + 'id': video_id, + 'title': (clean_html(get_element_by_class('video-title', webpage)) + or self._html_search_meta( + ['twitter:title', 'og:title', 'description'], webpage, 'title', default=None)), + 'thumbnail': self._html_search_meta(['thumbnailUrl'], webpage, 'thumb', default=None), + 'age_limit': 18, + }) + return entries diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pornflip.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornflip.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pornflip.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pornflip.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pornhd.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornhd.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pornhd.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pornhd.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/pornhub.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornhub.py new file mode 100644 index 0000000..999d038 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornhub.py @@ -0,0 +1,825 @@ +import functools +import itertools +import math +import operator +import re + +from .common import InfoExtractor +from .openload import PhantomJSwrapper +from ..compat import compat_str +from ..networking import Request +from ..networking.exceptions import HTTPError +from ..utils import ( + NO_DEFAULT, + ExtractorError, + clean_html, + determine_ext, + format_field, + int_or_none, + merge_dicts, + orderedSet, + remove_quotes, + remove_start, + str_to_int, + update_url_query, + url_or_none, + urlencode_postdata, +) + + +class PornHubBaseIE(InfoExtractor): + _NETRC_MACHINE = 'pornhub' + _PORNHUB_HOST_RE = r'(?:(?P<host>pornhub(?:premium)?\.(?:com|net|org))|pornhubvybmsymdol4iibwgwtkpwmeyd6luq2gxajgjzfjvotyt5zhyd\.onion)' + + def _download_webpage_handle(self, *args, **kwargs): + def dl(*args, **kwargs): + return super(PornHubBaseIE, self)._download_webpage_handle(*args, **kwargs) + + ret = dl(*args, **kwargs) + + if not ret: + return ret + + webpage, urlh = ret + + if 
any(re.search(p, webpage) for p in ( + r'<body\b[^>]+\bonload=["\']go\(\)', + r'document\.cookie\s*=\s*["\']RNKEY=', + r'document\.location\.reload\(true\)')): + url_or_request = args[0] + url = (url_or_request.url + if isinstance(url_or_request, Request) + else url_or_request) + phantom = PhantomJSwrapper(self, required_version='2.0') + phantom.get(url, html=webpage) + webpage, urlh = dl(*args, **kwargs) + + return webpage, urlh + + def _real_initialize(self): + self._logged_in = False + + def _set_age_cookies(self, host): + self._set_cookie(host, 'age_verified', '1') + self._set_cookie(host, 'accessAgeDisclaimerPH', '1') + self._set_cookie(host, 'accessAgeDisclaimerUK', '1') + self._set_cookie(host, 'accessPH', '1') + + def _login(self, host): + if self._logged_in: + return + + site = host.split('.')[0] + + # Both sites pornhub and pornhubpremium have separate accounts + # so there should be an option to provide credentials for both. + # At the same time some videos are available under the same video id + # on both sites so that we have to identify them as the same video. + # For that purpose we have to keep both in the same extractor + # but under different netrc machines. + username, password = self._get_login_info(netrc_machine=site) + if username is None: + return + + login_url = 'https://www.%s/%slogin' % (host, 'premium/' if 'premium' in host else '') + login_page = self._download_webpage( + login_url, None, 'Downloading %s login page' % site) + + def is_logged(webpage): + return any(re.search(p, webpage) for p in ( + r'class=["\']signOut', + r'>Sign\s+[Oo]ut\s*<')) + + if is_logged(login_page): + self._logged_in = True + return + + login_form = self._hidden_inputs(login_page) + + login_form.update({ + 'username': username, + 'password': password, + }) + + response = self._download_json( + 'https://www.%s/front/authenticate' % host, None, + 'Logging in to %s' % site, + data=urlencode_postdata(login_form), + headers={ + 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8', + 'Referer': login_url, + 'X-Requested-With': 'XMLHttpRequest', + }) + + if response.get('success') == '1': + self._logged_in = True + return + + message = response.get('message') + if message is not None: + raise ExtractorError( + 'Unable to login: %s' % message, expected=True) + + raise ExtractorError('Unable to log in') + + +class PornHubIE(PornHubBaseIE): + IE_DESC = 'PornHub and Thumbzilla' + _VALID_URL = r'''(?x) + https?:// + (?: + (?:[^/]+\.)? 
+ %s + /(?:(?:view_video\.php|video/show)\?viewkey=|embed/)| + (?:www\.)?thumbzilla\.com/video/ + ) + (?P<id>[\da-z]+) + ''' % PornHubBaseIE._PORNHUB_HOST_RE + _EMBED_REGEX = [r'<iframe[^>]+?src=["\'](?P<url>(?:https?:)?//(?:www\.)?pornhub(?:premium)?\.(?:com|net|org)/embed/[\da-z]+)'] + _TESTS = [{ + 'url': 'http://www.pornhub.com/view_video.php?viewkey=648719015', + 'md5': 'a6391306d050e4547f62b3f485dd9ba9', + 'info_dict': { + 'id': '648719015', + 'ext': 'mp4', + 'title': 'Seductive Indian beauty strips down and fingers her pink pussy', + 'uploader': 'Babes', + 'upload_date': '20130628', + 'timestamp': 1372447216, + 'duration': 361, + 'view_count': int, + 'like_count': int, + 'dislike_count': int, + 'comment_count': int, + 'age_limit': 18, + 'tags': list, + 'categories': list, + 'cast': list, + }, + }, { + # non-ASCII title + 'url': 'http://www.pornhub.com/view_video.php?viewkey=1331683002', + 'info_dict': { + 'id': '1331683002', + 'ext': 'mp4', + 'title': '重庆婷婷女王足交', + 'upload_date': '20150213', + 'timestamp': 1423804862, + 'duration': 1753, + 'view_count': int, + 'like_count': int, + 'dislike_count': int, + 'comment_count': int, + 'age_limit': 18, + 'tags': list, + 'categories': list, + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'Video has been flagged for verification in accordance with our trust and safety policy', + }, { + # subtitles + 'url': 'https://www.pornhub.com/view_video.php?viewkey=ph5af5fef7c2aa7', + 'info_dict': { + 'id': 'ph5af5fef7c2aa7', + 'ext': 'mp4', + 'title': 'BFFS - Cute Teen Girls Share Cock On the Floor', + 'uploader': 'BFFs', + 'duration': 622, + 'view_count': int, + 'like_count': int, + 'dislike_count': int, + 'comment_count': int, + 'age_limit': 18, + 'tags': list, + 'categories': list, + 'subtitles': { + 'en': [{ + "ext": 'srt' + }] + }, + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'This video has been disabled', + }, { + 'url': 'http://www.pornhub.com/view_video.php?viewkey=ph601dc30bae19a', + 'info_dict': { + 'id': 'ph601dc30bae19a', + 'uploader': 'Projekt Melody', + 'uploader_id': 'projekt-melody', + 'upload_date': '20210205', + 'title': '"Welcome to My Pussy Mansion" - CB Stream (02/03/21)', + 'thumbnail': r're:https?://.+', + }, + }, { + 'url': 'http://www.pornhub.com/view_video.php?viewkey=ph557bbb6676d2d', + 'only_matching': True, + }, { + # removed at the request of cam4.com + 'url': 'http://fr.pornhub.com/view_video.php?viewkey=ph55ca2f9760862', + 'only_matching': True, + }, { + # removed at the request of the copyright owner + 'url': 'http://www.pornhub.com/view_video.php?viewkey=788152859', + 'only_matching': True, + }, { + # removed by uploader + 'url': 'http://www.pornhub.com/view_video.php?viewkey=ph572716d15a111', + 'only_matching': True, + }, { + # private video + 'url': 'http://www.pornhub.com/view_video.php?viewkey=ph56fd731fce6b7', + 'only_matching': True, + }, { + 'url': 'https://www.thumbzilla.com/video/ph56c6114abd99a/horny-girlfriend-sex', + 'only_matching': True, + }, { + 'url': 'http://www.pornhub.com/video/show?viewkey=648719015', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.net/view_video.php?viewkey=203640933', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.org/view_video.php?viewkey=203640933', + 'only_matching': True, + }, { + 'url': 'https://www.pornhubpremium.com/view_video.php?viewkey=ph5e4acdae54a82', + 'only_matching': True, + }, { + # Some videos are available with the same id on both premium + # and non-premium sites (e.g. 
this and the following test) + 'url': 'https://www.pornhub.com/view_video.php?viewkey=ph5f75b0f4b18e3', + 'only_matching': True, + }, { + 'url': 'https://www.pornhubpremium.com/view_video.php?viewkey=ph5f75b0f4b18e3', + 'only_matching': True, + }, { + # geo restricted + 'url': 'https://www.pornhub.com/view_video.php?viewkey=ph5a9813bfa7156', + 'only_matching': True, + }, { + 'url': 'http://pornhubvybmsymdol4iibwgwtkpwmeyd6luq2gxajgjzfjvotyt5zhyd.onion/view_video.php?viewkey=ph5a9813bfa7156', + 'only_matching': True, + }] + + def _extract_count(self, pattern, webpage, name): + return str_to_int(self._search_regex(pattern, webpage, '%s count' % name, default=None)) + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + host = mobj.group('host') or 'pornhub.com' + video_id = mobj.group('id') + + self._login(host) + self._set_age_cookies(host) + + def dl_webpage(platform): + self._set_cookie(host, 'platform', platform) + return self._download_webpage( + 'https://www.%s/view_video.php?viewkey=%s' % (host, video_id), + video_id, 'Downloading %s webpage' % platform) + + webpage = dl_webpage('pc') + + error_msg = self._html_search_regex( + (r'(?s)<div[^>]+class=(["\'])(?:(?!\1).)*\b(?:removed|userMessageSection)\b(?:(?!\1).)*\1[^>]*>(?P<error>.+?)</div>', + r'(?s)<section[^>]+class=["\']noVideo["\'][^>]*>(?P<error>.+?)</section>'), + webpage, 'error message', default=None, group='error') + if error_msg: + error_msg = re.sub(r'\s+', ' ', error_msg) + raise ExtractorError( + 'PornHub said: %s' % error_msg, + expected=True, video_id=video_id) + + if any(re.search(p, webpage) for p in ( + r'class=["\']geoBlocked["\']', + r'>\s*This content is unavailable in your country')): + self.raise_geo_restricted() + + # video_title from flashvars contains whitespace instead of non-ASCII (see + # http://www.pornhub.com/view_video.php?viewkey=1331683002), not relying + # on that anymore. 
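+ # Illustrative aside (not part of the upstream file): for the non-ASCII test above, + # flashvars would carry only a whitespace-mangled title, e.g. + # flashvars_1331683002 = {"video_title": "       ", ...} + # while the twitter:title meta still holds '重庆婷婷女王足交', which is why the + # meta/markup lookups below are preferred over flashvars.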
+ title = self._html_search_meta( + 'twitter:title', webpage, default=None) or self._html_search_regex( + (r'(?s)<h1[^>]+class=["\']title["\'][^>]*>(?P<title>.+?)</h1>', + r'<div[^>]+data-video-title=(["\'])(?P<title>(?:(?!\1).)+)\1', + r'shareTitle["\']\s*[=:]\s*(["\'])(?P<title>(?:(?!\1).)+)\1'), + webpage, 'title', group='title') + + video_urls = [] + video_urls_set = set() + subtitles = {} + + flashvars = self._parse_json( + self._search_regex( + r'var\s+flashvars_\d+\s*=\s*({.+?});', webpage, 'flashvars', default='{}'), + video_id) + if flashvars: + subtitle_url = url_or_none(flashvars.get('closedCaptionsFile')) + if subtitle_url: + subtitles.setdefault('en', []).append({ + 'url': subtitle_url, + 'ext': 'srt', + }) + thumbnail = flashvars.get('image_url') + duration = int_or_none(flashvars.get('video_duration')) + media_definitions = flashvars.get('mediaDefinitions') + if isinstance(media_definitions, list): + for definition in media_definitions: + if not isinstance(definition, dict): + continue + video_url = definition.get('videoUrl') + if not video_url or not isinstance(video_url, compat_str): + continue + if video_url in video_urls_set: + continue + video_urls_set.add(video_url) + video_urls.append( + (video_url, int_or_none(definition.get('quality')))) + else: + thumbnail, duration = [None] * 2 + + def extract_js_vars(webpage, pattern, default=NO_DEFAULT): + assignments = self._search_regex( + pattern, webpage, 'encoded url', default=default) + if not assignments: + return {} + + assignments = assignments.split(';') + + js_vars = {} + + def parse_js_value(inp): + inp = re.sub(r'/\*(?:(?!\*/).)*?\*/', '', inp) + if '+' in inp: + inps = inp.split('+') + return functools.reduce( + operator.concat, map(parse_js_value, inps)) + inp = inp.strip() + if inp in js_vars: + return js_vars[inp] + return remove_quotes(inp) + + for assn in assignments: + assn = assn.strip() + if not assn: + continue + assn = re.sub(r'var\s+', '', assn) + vname, value = assn.split('=', 1) + js_vars[vname] = parse_js_value(value) + return js_vars + + def add_video_url(video_url): + v_url = url_or_none(video_url) + if not v_url: + return + if v_url in video_urls_set: + return + video_urls.append((v_url, None)) + video_urls_set.add(v_url) + + def parse_quality_items(quality_items): + q_items = self._parse_json(quality_items, video_id, fatal=False) + if not isinstance(q_items, list): + return + for item in q_items: + if isinstance(item, dict): + add_video_url(item.get('url')) + + if not video_urls: + FORMAT_PREFIXES = ('media', 'quality', 'qualityItems') + js_vars = extract_js_vars( + webpage, r'(var\s+(?:%s)_.+)' % '|'.join(FORMAT_PREFIXES), + default=None) + if js_vars: + for key, format_url in js_vars.items(): + if key.startswith(FORMAT_PREFIXES[-1]): + parse_quality_items(format_url) + elif any(key.startswith(p) for p in FORMAT_PREFIXES[:2]): + add_video_url(format_url) + if not video_urls and re.search( + r'<[^>]+\bid=["\']lockedPlayer', webpage): + raise ExtractorError( + 'Video %s is locked' % video_id, expected=True) + + if not video_urls: + js_vars = extract_js_vars( + dl_webpage('tv'), r'(var.+?mediastring.+?)</script>') + add_video_url(js_vars['mediastring']) + + for mobj in re.finditer( + r'<a[^>]+\bclass=["\']downloadBtn\b[^>]+\bhref=(["\'])(?P<url>(?:(?!\1).)+)\1', + webpage): + video_url = mobj.group('url') + if video_url not in video_urls_set: + video_urls.append((video_url, None)) + video_urls_set.add(video_url) + + upload_date = None + formats = [] + + def add_format(format_url, height=None): + ext = 
determine_ext(format_url) + if ext == 'mpd': + formats.extend(self._extract_mpd_formats( + format_url, video_id, mpd_id='dash', fatal=False)) + return + if ext == 'm3u8': + formats.extend(self._extract_m3u8_formats( + format_url, video_id, 'mp4', entry_protocol='m3u8_native', + m3u8_id='hls', fatal=False)) + return + if not height: + height = int_or_none(self._search_regex( + r'(?P<height>\d+)[pP]?_\d+[kK]', format_url, 'height', + default=None)) + formats.append({ + 'url': format_url, + 'format_id': format_field(height, None, '%dp'), + 'height': height, + }) + + for video_url, height in video_urls: + if not upload_date: + upload_date = self._search_regex( + r'/(\d{6}/\d{2})/', video_url, 'upload data', default=None) + if upload_date: + upload_date = upload_date.replace('/', '') + if '/video/get_media' in video_url: + medias = self._download_json(video_url, video_id, fatal=False) + if isinstance(medias, list): + for media in medias: + if not isinstance(media, dict): + continue + video_url = url_or_none(media.get('videoUrl')) + if not video_url: + continue + height = int_or_none(media.get('quality')) + add_format(video_url, height) + continue + add_format(video_url) + + model_profile = self._search_json( + r'var\s+MODEL_PROFILE\s*=', webpage, 'model profile', video_id, fatal=False) + video_uploader = self._html_search_regex( + r'(?s)From: .+?<(?:a\b[^>]+\bhref=["\']/(?:(?:user|channel)s|model|pornstar)/|span\b[^>]+\bclass=["\']username)[^>]+>(.+?)<', + webpage, 'uploader', default=None) or model_profile.get('username') + + def extract_vote_count(kind, name): + return self._extract_count( + (r'<span[^>]+\bclass="votes%s"[^>]*>([\d,\.]+)</span>' % kind, + r'<span[^>]+\bclass=["\']votes%s["\'][^>]*\bdata-rating=["\'](\d+)' % kind), + webpage, name) + + view_count = self._extract_count( + r'<span class="count">([\d,\.]+)</span> [Vv]iews', webpage, 'view') + like_count = extract_vote_count('Up', 'like') + dislike_count = extract_vote_count('Down', 'dislike') + comment_count = self._extract_count( + r'All Comments\s*<span>\(([\d,.]+)\)', webpage, 'comment') + + def extract_list(meta_key): + div = self._search_regex( + r'(?s)<div[^>]+\bclass=["\'].*?\b%sWrapper[^>]*>(.+?)</div>' + % meta_key, webpage, meta_key, default=None) + if div: + return [clean_html(x).strip() for x in re.findall(r'(?s)<a[^>]+\bhref=[^>]+>.+?</a>', div)] + + info = self._search_json_ld(webpage, video_id, default={}) + # description provided in JSON-LD is irrelevant + info['description'] = None + + return merge_dicts({ + 'id': video_id, + 'uploader': video_uploader, + 'uploader_id': remove_start(model_profile.get('modelProfileLink'), '/model/'), + 'upload_date': upload_date, + 'title': title, + 'thumbnail': thumbnail, + 'duration': duration, + 'view_count': view_count, + 'like_count': like_count, + 'dislike_count': dislike_count, + 'comment_count': comment_count, + 'formats': formats, + 'age_limit': 18, + 'tags': extract_list('tags'), + 'categories': extract_list('categories'), + 'cast': extract_list('pornstars'), + 'subtitles': subtitles, + }, info) + + +class PornHubPlaylistBaseIE(PornHubBaseIE): + def _extract_page(self, url): + return int_or_none(self._search_regex( + r'\bpage=(\d+)', url, 'page', default=None)) + + def _extract_entries(self, webpage, host): + # Only process container div with main playlist content skipping + # drop-down menu that uses similar pattern for videos (see + # https://github.com/ytdl-org/youtube-dl/issues/11594). 
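+ # Illustrative aside (hypothetical, simplified markup): the page's drop-down menu + # reuses the same link shape as real playlist entries, e.g. + # <li><a href="/view_video.php?viewkey=ph0123456789abc" title="Recommended">...</a></li> + # so matching against the whole page would surface menu links as bogus videos; + # anchoring the search at the first <div class="container..."> below keeps only + # the main playlist content.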
+ container = self._search_regex( + r'(?s)(<div[^>]+class=["\']container.+)', webpage, + 'container', default=webpage) + + return [ + self.url_result( + 'http://www.%s/%s' % (host, video_url), + PornHubIE.ie_key(), video_title=title) + for video_url, title in orderedSet(re.findall( + r'href="/?(view_video\.php\?.*\bviewkey=[\da-z]+[^"]*)"[^>]*\s+title="([^"]+)"', + container)) + ] + + +class PornHubUserIE(PornHubPlaylistBaseIE): + _VALID_URL = r'(?P<url>https?://(?:[^/]+\.)?%s/(?:(?:user|channel)s|model|pornstar)/(?P<id>[^/?#&]+))(?:[?#&]|/(?!videos)|$)' % PornHubBaseIE._PORNHUB_HOST_RE + _TESTS = [{ + 'url': 'https://www.pornhub.com/model/zoe_ph', + 'playlist_mincount': 118, + }, { + 'url': 'https://www.pornhub.com/pornstar/liz-vicious', + 'info_dict': { + 'id': 'liz-vicious', + }, + 'playlist_mincount': 118, + }, { + 'url': 'https://www.pornhub.com/users/russianveet69', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/channels/povd', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/model/zoe_ph?abc=1', + 'only_matching': True, + }, { + # Unavailable via /videos page, but available with direct pagination + # on pornstar page (see [1]), requires premium + # 1. https://github.com/ytdl-org/youtube-dl/issues/27853 + 'url': 'https://www.pornhubpremium.com/pornstar/sienna-west', + 'only_matching': True, + }, { + # Same as before, multi page + 'url': 'https://www.pornhubpremium.com/pornstar/lily-labeau', + 'only_matching': True, + }, { + 'url': 'https://pornhubvybmsymdol4iibwgwtkpwmeyd6luq2gxajgjzfjvotyt5zhyd.onion/model/zoe_ph', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + user_id = mobj.group('id') + videos_url = '%s/videos' % mobj.group('url') + self._set_age_cookies(mobj.group('host')) + page = self._extract_page(url) + if page: + videos_url = update_url_query(videos_url, {'page': page}) + return self.url_result( + videos_url, ie=PornHubPagedVideoListIE.ie_key(), video_id=user_id) + + +class PornHubPagedPlaylistBaseIE(PornHubPlaylistBaseIE): + @staticmethod + def _has_more(webpage): + return re.search( + r'''(?x) + <li[^>]+\bclass=["\']page_next| + <link[^>]+\brel=["\']next| + <button[^>]+\bid=["\']moreDataBtn + ''', webpage) is not None + + def _entries(self, url, host, item_id): + page = self._extract_page(url) + + VIDEOS = '/videos' + + def download_page(base_url, num, fallback=False): + note = 'Downloading page %d%s' % (num, ' (switch to fallback)' if fallback else '') + return self._download_webpage( + base_url, item_id, note, query={'page': num}) + + def is_404(e): + return isinstance(e.cause, HTTPError) and e.cause.status == 404 + + base_url = url + has_page = page is not None + first_page = page if has_page else 1 + for page_num in (first_page, ) if has_page else itertools.count(first_page): + try: + try: + webpage = download_page(base_url, page_num) + except ExtractorError as e: + # Some sources may not be available via /videos page, + # trying to fallback to main page pagination (see [1]) + # 1. 
https://github.com/ytdl-org/youtube-dl/issues/27853 + if is_404(e) and page_num == first_page and VIDEOS in base_url: + base_url = base_url.replace(VIDEOS, '') + webpage = download_page(base_url, page_num, fallback=True) + else: + raise + except ExtractorError as e: + if is_404(e) and page_num != first_page: + break + raise + page_entries = self._extract_entries(webpage, host) + if not page_entries: + break + for e in page_entries: + yield e + if not self._has_more(webpage): + break + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + host = mobj.group('host') + item_id = mobj.group('id') + + self._login(host) + self._set_age_cookies(host) + + return self.playlist_result(self._entries(url, host, item_id), item_id) + + +class PornHubPagedVideoListIE(PornHubPagedPlaylistBaseIE): + _VALID_URL = r'https?://(?:[^/]+\.)?%s/(?!playlist/)(?P<id>(?:[^/]+/)*[^/?#&]+)' % PornHubBaseIE._PORNHUB_HOST_RE + _TESTS = [{ + 'url': 'https://www.pornhub.com/model/zoe_ph/videos', + 'only_matching': True, + }, { + 'url': 'http://www.pornhub.com/users/rushandlia/videos', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/pornstar/jenny-blighe/videos', + 'info_dict': { + 'id': 'pornstar/jenny-blighe/videos', + }, + 'playlist_mincount': 149, + }, { + 'url': 'https://www.pornhub.com/pornstar/jenny-blighe/videos?page=3', + 'info_dict': { + 'id': 'pornstar/jenny-blighe/videos', + }, + 'playlist_mincount': 40, + }, { + # default sorting as Top Rated Videos + 'url': 'https://www.pornhub.com/channels/povd/videos', + 'info_dict': { + 'id': 'channels/povd/videos', + }, + 'playlist_mincount': 293, + }, { + # Top Rated Videos + 'url': 'https://www.pornhub.com/channels/povd/videos?o=ra', + 'only_matching': True, + }, { + # Most Recent Videos + 'url': 'https://www.pornhub.com/channels/povd/videos?o=da', + 'only_matching': True, + }, { + # Most Viewed Videos + 'url': 'https://www.pornhub.com/channels/povd/videos?o=vi', + 'only_matching': True, + }, { + 'url': 'http://www.pornhub.com/users/zoe_ph/videos/public', + 'only_matching': True, + }, { + # Most Viewed Videos + 'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=mv', + 'only_matching': True, + }, { + # Top Rated Videos + 'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=tr', + 'only_matching': True, + }, { + # Longest Videos + 'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=lg', + 'only_matching': True, + }, { + # Newest Videos + 'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=cm', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos/paid', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos/fanonly', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/video', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/video?page=3', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/video/search?search=123', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/categories/teen', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/categories/teen?page=3', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/hd', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/hd?page=3', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/described-video', + 'only_matching': True, + }, { + 'url': 'https://www.pornhub.com/described-video?page=2', + 'only_matching': True, + }, { + 'url': 
'https://www.pornhub.com/video/incategories/60fps-1/hd-porn', + 'only_matching': True, + }, { + 'url': 'https://pornhubvybmsymdol4iibwgwtkpwmeyd6luq2gxajgjzfjvotyt5zhyd.onion/model/zoe_ph/videos', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return (False + if PornHubIE.suitable(url) or PornHubUserIE.suitable(url) or PornHubUserVideosUploadIE.suitable(url) + else super(PornHubPagedVideoListIE, cls).suitable(url)) + + +class PornHubUserVideosUploadIE(PornHubPagedPlaylistBaseIE): + _VALID_URL = r'(?P<url>https?://(?:[^/]+\.)?%s/(?:(?:user|channel)s|model|pornstar)/(?P<id>[^/]+)/videos/upload)' % PornHubBaseIE._PORNHUB_HOST_RE + _TESTS = [{ + 'url': 'https://www.pornhub.com/pornstar/jenny-blighe/videos/upload', + 'info_dict': { + 'id': 'jenny-blighe', + }, + 'playlist_mincount': 129, + }, { + 'url': 'https://www.pornhub.com/model/zoe_ph/videos/upload', + 'only_matching': True, + }, { + 'url': 'http://pornhubvybmsymdol4iibwgwtkpwmeyd6luq2gxajgjzfjvotyt5zhyd.onion/pornstar/jenny-blighe/videos/upload', + 'only_matching': True, + }] + + +class PornHubPlaylistIE(PornHubPlaylistBaseIE): + _VALID_URL = r'(?P<url>https?://(?:[^/]+\.)?%s/playlist/(?P<id>[^/?#&]+))' % PornHubBaseIE._PORNHUB_HOST_RE + _TESTS = [{ + 'url': 'https://www.pornhub.com/playlist/44121572', + 'info_dict': { + 'id': '44121572', + }, + 'playlist_count': 77, + }, { + 'url': 'https://www.pornhub.com/playlist/4667351', + 'only_matching': True, + }, { + 'url': 'https://de.pornhub.com/playlist/4667351', + 'only_matching': True, + }, { + 'url': 'https://de.pornhub.com/playlist/4667351?page=2', + 'only_matching': True, + }] + + def _entries(self, url, host, item_id): + webpage = self._download_webpage(url, item_id, 'Downloading page 1') + playlist_id = self._search_regex(r'var\s+playlistId\s*=\s*"([^"]+)"', webpage, 'playlist_id') + video_count = int_or_none( + self._search_regex(r'var\s+itemsCount\s*=\s*([0-9]+)\s*\|\|', webpage, 'video_count')) + token = self._search_regex(r'var\s+token\s*=\s*"([^"]+)"', webpage, 'token') + page_count = math.ceil((video_count - 36) / 40.) 
+ 1 + page_entries = self._extract_entries(webpage, host) + + def download_page(page_num): + note = 'Downloading page {}'.format(page_num) + page_url = 'https://www.{}/playlist/viewChunked'.format(host) + return self._download_webpage(page_url, item_id, note, query={ + 'id': playlist_id, + 'page': page_num, + 'token': token, + }) + + for page_num in range(1, page_count + 1): + if page_num > 1: + webpage = download_page(page_num) + page_entries = self._extract_entries(webpage, host) + if not page_entries: + break + for e in page_entries: + yield e + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + host = mobj.group('host') + item_id = mobj.group('id') + + self._login(host) + self._set_age_cookies(host) + + return self.playlist_result(self._entries(mobj.group('url'), host, item_id), item_id) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pornotube.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornotube.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pornotube.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pornotube.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pornovoisines.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornovoisines.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pornovoisines.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pornovoisines.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pornoxo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pornoxo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pornoxo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pornoxo.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/pr0gramm.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pr0gramm.py new file mode 100644 index 0000000..c8e0bb4 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/pr0gramm.py @@ -0,0 +1,155 @@ +import json +from datetime import date +from urllib.parse import unquote + +from .common import InfoExtractor +from ..compat import functools +from ..utils import ExtractorError, make_archive_id, urljoin +from ..utils.traversal import traverse_obj + + +class Pr0grammIE(InfoExtractor): + _VALID_URL = r'https?://pr0gramm\.com\/(?:[^/?#]+/)+(?P<id>[\d]+)(?:[/?#:]|$)' + _TESTS = [{ + # Tags require account + 'url': 'https://pr0gramm.com/new/video/5466437', + 'info_dict': { + 'id': '5466437', + 'ext': 'mp4', + 'title': 'pr0gramm-5466437 by g11st', + 'tags': ['Neon Genesis Evangelion', 'Touhou Project', 'Fly me to the Moon', 'Marisad', 'Marisa Kirisame', 'video', 'sound', 'Marisa', 'Anime'], + 'uploader': 'g11st', + 'uploader_id': 394718, + 'upload_timestamp': 1671590240, + 'upload_date': '20221221', + 'like_count': int, + 'dislike_count': int, + 'age_limit': 0, + 'thumbnail': r're:^https://thumb\.pr0gramm\.com/.*\.jpg', + }, + }, { + # Tags require account + 'url': 'https://pr0gramm.com/new/3052805:comment28391322', + 'info_dict': { + 'id': '3052805', + 'ext': 'mp4', + 'title': 'pr0gramm-3052805 by Hansking1', + 'tags': 'count:15', + 'uploader': 'Hansking1', + 'uploader_id': 385563, + 'upload_timestamp': 1552930408, + 'upload_date': '20190318', + 'like_count': int, + 'dislike_count': int, + 'age_limit': 0, + 'thumbnail': r're:^https://thumb\.pr0gramm\.com/.*\.jpg', + }, + }, { + # Requires verified account + 'url': 'https://pr0gramm.com/new/Gianna%20Michaels/5848332', + 'info_dict': { 
+ 'id': '5848332', + 'ext': 'mp4', + 'title': 'pr0gramm-5848332 by erd0pfel', + 'tags': 'count:18', + 'uploader': 'erd0pfel', + 'uploader_id': 349094, + 'upload_timestamp': 1694489652, + 'upload_date': '20230912', + 'like_count': int, + 'dislike_count': int, + 'age_limit': 18, + 'thumbnail': r're:^https://thumb\.pr0gramm\.com/.*\.jpg', + }, + }, { + 'url': 'https://pr0gramm.com/static/5466437', + 'only_matching': True, + }, { + 'url': 'https://pr0gramm.com/new/rowan%20atkinson%20herr%20bohne/3052805', + 'only_matching': True, + }, { + 'url': 'https://pr0gramm.com/user/froschler/dafur-ist-man-hier/5091290', + 'only_matching': True, + }] + + BASE_URL = 'https://pr0gramm.com' + + @functools.cached_property + def _is_logged_in(self): + return 'pp' in self._get_cookies(self.BASE_URL) + + @functools.cached_property + def _maximum_flags(self): + # We need to guess the flags for the content otherwise the api will raise an error + # We can guess the maximum allowed flags for the account from the cookies + # Bitflags are (msbf): nsfp, nsfl, nsfw, sfw + flags = 0b0001 + if self._is_logged_in: + flags |= 0b1000 + cookies = self._get_cookies(self.BASE_URL) + if 'me' not in cookies: + self._download_webpage(self.BASE_URL, None, 'Refreshing verification information') + if traverse_obj(cookies, ('me', {lambda x: x.value}, {unquote}, {json.loads}, 'verified')): + flags |= 0b0110 + + return flags + + def _call_api(self, endpoint, video_id, query={}, note='Downloading API json'): + data = self._download_json( + f'https://pr0gramm.com/api/items/{endpoint}', + video_id, note, query=query, expected_status=403) + + error = traverse_obj(data, ('error', {str})) + if error in ('nsfwRequired', 'nsflRequired', 'nsfpRequired', 'verificationRequired'): + if not self._is_logged_in: + self.raise_login_required() + raise ExtractorError(f'Unverified account cannot access NSFW/NSFL ({error})', expected=True) + elif error: + message = traverse_obj(data, ('msg', {str})) or error + raise ExtractorError(f'API returned error: {message}', expected=True) + + return data + + def _real_extract(self, url): + video_id = self._match_id(url) + video_info = traverse_obj( + self._call_api('get', video_id, {'id': video_id, 'flags': self._maximum_flags}), + ('items', 0, {dict})) + + source = urljoin('https://img.pr0gramm.com', video_info.get('image')) + if not source or not source.endswith('mp4'): + self.raise_no_formats('Could not extract a video', expected=bool(source), video_id=video_id) + + tags = None + if self._is_logged_in: + metadata = self._call_api('info', video_id, {'itemId': video_id}) + tags = traverse_obj(metadata, ('tags', ..., 'tag', {str})) + # Sorted by "confidence", higher confidence = earlier in list + confidences = traverse_obj(metadata, ('tags', ..., 'confidence', ({int}, {float}))) + if confidences: + tags = [tag for _, tag in sorted(zip(confidences, tags), reverse=True)] + + return { + 'id': video_id, + 'title': f'pr0gramm-{video_id} by {video_info.get("user")}', + 'formats': [{ + 'url': source, + 'ext': 'mp4', + **traverse_obj(video_info, { + 'width': ('width', {int}), + 'height': ('height', {int}), + }), + }], + 'tags': tags, + 'age_limit': 18 if traverse_obj(video_info, ('flags', {0b110.__and__})) else 0, + '_old_archive_ids': [make_archive_id('Pr0grammStatic', video_id)], + **traverse_obj(video_info, { + 'uploader': ('user', {str}), + 'uploader_id': ('userId', {int}), + 'like_count': ('up', {int}), + 'dislike_count': ('down', {int}), + 'upload_timestamp': ('created', {int}), + 'upload_date': ('created', {int}, 
{date.fromtimestamp}, {lambda x: x.strftime('%Y%m%d')}), + 'thumbnail': ('thumb', {lambda x: urljoin('https://thumb.pr0gramm.com', x)}) + }), + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/prankcast.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/prankcast.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/prankcast.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/prankcast.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/premiershiprugby.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/premiershiprugby.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/premiershiprugby.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/premiershiprugby.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/presstv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/presstv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/presstv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/presstv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/projectveritas.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/projectveritas.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/projectveritas.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/projectveritas.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/prosiebensat1.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/prosiebensat1.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/prosiebensat1.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/prosiebensat1.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/prx.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/prx.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/prx.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/prx.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/puhutv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/puhutv.py new file mode 100644 index 0000000..4b8e5e9 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/puhutv.py @@ -0,0 +1,233 @@ +from .common import InfoExtractor +from ..compat import compat_str +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + int_or_none, + float_or_none, + parse_resolution, + str_or_none, + try_get, + unified_timestamp, + url_or_none, + urljoin, +) + + +class PuhuTVIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?puhutv\.com/(?P<id>[^/?#&]+)-izle' + IE_NAME = 'puhutv' + _TESTS = [{ + # film + 'url': 'https://puhutv.com/sut-kardesler-izle', + 'md5': 'a347470371d56e1585d1b2c8dab01c96', + 'info_dict': { + 'id': '5085', + 'display_id': 'sut-kardesler', + 'ext': 'mp4', + 'title': 'Süt Kardeşler', + 'description': 'md5:ca09da25b7e57cbb5a9280d6e48d17aa', + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 4832.44, + 'creator': 'Arzu Film', + 'timestamp': 1561062602, + 'upload_date': '20190620', + 'release_year': 1976, + 'view_count': int, + 'tags': list, + }, + }, { + # episode, geo restricted, bypassable with --geo-verification-proxy + 'url': 'https://puhutv.com/jet-sosyete-1-bolum-izle', + 'only_matching': True, + }, { + # 4k, with subtitles + 'url': 'https://puhutv.com/dip-1-bolum-izle', + 'only_matching': True, + }] + _SUBTITLE_LANGS = { + 'English': 'en', + 'Deutsch': 'de', 
+ 'عربى': 'ar' + } + + def _real_extract(self, url): + display_id = self._match_id(url) + + info = self._download_json( + urljoin(url, '/api/slug/%s-izle' % display_id), + display_id)['data'] + + video_id = compat_str(info['id']) + show = info.get('title') or {} + title = info.get('name') or show['name'] + if info.get('display_name'): + title = '%s %s' % (title, info['display_name']) + + try: + videos = self._download_json( + 'https://puhutv.com/api/assets/%s/videos' % video_id, + display_id, 'Downloading video JSON', + headers=self.geo_verification_headers()) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + self.raise_geo_restricted() + raise + + urls = [] + formats = [] + + for video in videos['data']['videos']: + media_url = url_or_none(video.get('url')) + if not media_url or media_url in urls: + continue + urls.append(media_url) + + playlist = video.get('is_playlist') + if (video.get('stream_type') == 'hls' and playlist is True) or 'playlist.m3u8' in media_url: + formats.extend(self._extract_m3u8_formats( + media_url, video_id, 'mp4', entry_protocol='m3u8_native', + m3u8_id='hls', fatal=False)) + continue + + quality = int_or_none(video.get('quality')) + f = { + 'url': media_url, + 'ext': 'mp4', + 'height': quality + } + video_format = video.get('video_format') + is_hls = (video_format == 'hls' or '/hls/' in media_url or '/chunklist.m3u8' in media_url) and playlist is False + if is_hls: + format_id = 'hls' + f['protocol'] = 'm3u8_native' + elif video_format == 'mp4': + format_id = 'http' + else: + continue + if quality: + format_id += '-%sp' % quality + f['format_id'] = format_id + formats.append(f) + + creator = try_get( + show, lambda x: x['producer']['name'], compat_str) + + content = info.get('content') or {} + + images = try_get( + content, lambda x: x['images']['wide'], dict) or {} + thumbnails = [] + for image_id, image_url in images.items(): + if not isinstance(image_url, compat_str): + continue + if not image_url.startswith(('http', '//')): + image_url = 'https://%s' % image_url + t = parse_resolution(image_id) + t.update({ + 'id': image_id, + 'url': image_url + }) + thumbnails.append(t) + + tags = [] + for genre in show.get('genres') or []: + if not isinstance(genre, dict): + continue + genre_name = genre.get('name') + if genre_name and isinstance(genre_name, compat_str): + tags.append(genre_name) + + subtitles = {} + for subtitle in content.get('subtitles') or []: + if not isinstance(subtitle, dict): + continue + lang = subtitle.get('language') + sub_url = url_or_none(subtitle.get('url') or subtitle.get('file')) + if not lang or not isinstance(lang, compat_str) or not sub_url: + continue + subtitles[self._SUBTITLE_LANGS.get(lang, lang)] = [{ + 'url': sub_url + }] + + return { + 'id': video_id, + 'display_id': display_id, + 'title': title, + 'description': info.get('description') or show.get('description'), + 'season_id': str_or_none(info.get('season_id')), + 'season_number': int_or_none(info.get('season_number')), + 'episode_number': int_or_none(info.get('episode_number')), + 'release_year': int_or_none(show.get('released_at')), + 'timestamp': unified_timestamp(info.get('created_at')), + 'creator': creator, + 'view_count': int_or_none(content.get('watch_count')), + 'duration': float_or_none(content.get('duration_in_ms'), 1000), + 'tags': tags, + 'subtitles': subtitles, + 'thumbnails': thumbnails, + 'formats': formats + } + + +class PuhuTVSerieIE(InfoExtractor): + _VALID_URL = 
r'https?://(?:www\.)?puhutv\.com/(?P<id>[^/?#&]+)-detay' + IE_NAME = 'puhutv:serie' + _TESTS = [{ + 'url': 'https://puhutv.com/deniz-yildizi-detay', + 'info_dict': { + 'title': 'Deniz Yıldızı', + 'id': 'deniz-yildizi', + }, + 'playlist_mincount': 205, + }, { + # a film detail page which is using same url with serie page + 'url': 'https://puhutv.com/kaybedenler-kulubu-detay', + 'only_matching': True, + }] + + def _extract_entries(self, seasons): + for season in seasons: + season_id = season.get('id') + if not season_id: + continue + page = 1 + has_more = True + while has_more is True: + season = self._download_json( + 'https://galadriel.puhutv.com/seasons/%s' % season_id, + season_id, 'Downloading page %s' % page, query={ + 'page': page, + 'per': 40, + }) + episodes = season.get('episodes') + if isinstance(episodes, list): + for ep in episodes: + slug_path = str_or_none(ep.get('slugPath')) + if not slug_path: + continue + video_id = str_or_none(int_or_none(ep.get('id'))) + yield self.url_result( + 'https://puhutv.com/%s' % slug_path, + ie=PuhuTVIE.ie_key(), video_id=video_id, + video_title=ep.get('name') or ep.get('eventLabel')) + page += 1 + has_more = season.get('hasMore') + + def _real_extract(self, url): + playlist_id = self._match_id(url) + + info = self._download_json( + urljoin(url, '/api/slug/%s-detay' % playlist_id), + playlist_id)['data'] + + seasons = info.get('seasons') + if seasons: + return self.playlist_result( + self._extract_entries(seasons), playlist_id, info.get('name')) + + # For films, these are using same url with series + video_id = info.get('slug') or info['assets'][0]['slug'] + return self.url_result( + 'https://puhutv.com/%s-izle' % video_id, + PuhuTVIE.ie_key(), video_id) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/puls4.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/puls4.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/puls4.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/puls4.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/pyvideo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/pyvideo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/pyvideo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/pyvideo.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/qdance.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/qdance.py new file mode 100644 index 0000000..d817677 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/qdance.py @@ -0,0 +1,150 @@ +import json +import time + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + int_or_none, + jwt_decode_hs256, + str_or_none, + traverse_obj, + try_call, + url_or_none, +) + + +class QDanceIE(InfoExtractor): + _NETRC_MACHINE = 'qdance' + _VALID_URL = r'https?://(?:www\.)?q-dance\.com/network/(?:library|live)/(?P<id>\d+)' + _TESTS = [{ + 'note': 'vod', + 'url': 'https://www.q-dance.com/network/library/146542138', + 'info_dict': { + 'id': '146542138', + 'ext': 'mp4', + 'title': 'Sound Rush [LIVE] | Defqon.1 Weekend Festival 2022 | Friday | RED', + 'display_id': 'sound-rush-live-v3-defqon-1-weekend-festival-2022-friday-red', + 'description': 'Relive Defqon.1 - Primal Energy 2022 with the sounds of Sound Rush LIVE at the RED on Friday! 
🔥', + 'season': 'Defqon.1 Weekend Festival 2022', + 'season_id': '31840632', + 'series': 'Defqon.1', + 'series_id': '31840378', + 'thumbnail': 'https://images.q-dance.network/1674829540-20220624171509-220624171509_delio_dn201093-2.jpg', + 'availability': 'premium_only', + 'duration': 1829, + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'note': 'livestream', + 'url': 'https://www.q-dance.com/network/live/149170353', + 'info_dict': { + 'id': '149170353', + 'ext': 'mp4', + 'title': r're:^Defqon\.1 2023 - Friday - RED', + 'display_id': 'defqon-1-2023-friday-red', + 'description': 'md5:3c73fbbd4044e578e696adfc64019163', + 'season': 'Defqon.1 Weekend Festival 2023', + 'season_id': '141735599', + 'series': 'Defqon.1', + 'series_id': '31840378', + 'thumbnail': 'https://images.q-dance.network/1686849069-area-thumbs_red.png', + 'availability': 'subscriber_only', + 'live_status': 'is_live', + 'channel_id': 'qdancenetwork.video_149170353', + }, + 'skip': 'Completed livestream', + }] + + _access_token = None + _refresh_token = None + + def _call_login_api(self, data, note='Logging in'): + login = self._download_json( + 'https://members.id-t.com/api/auth/login', None, note, headers={ + 'content-type': 'application/json', + 'brand': 'qdance', + 'origin': 'https://www.q-dance.com', + 'referer': 'https://www.q-dance.com/', + }, data=json.dumps(data, separators=(',', ':')).encode(), + expected_status=lambda x: True) + + tokens = traverse_obj(login, ('data', { + '_id-t-accounts-token': ('accessToken', {str}), + '_id-t-accounts-refresh': ('refreshToken', {str}), + '_id-t-accounts-id-token': ('idToken', {str}), + })) + + if not tokens.get('_id-t-accounts-token'): + error = ': '.join(traverse_obj(login, ('error', ('code', 'message'), {str}))) + if 'validation_error' not in error: + raise ExtractorError(f'Q-Dance API said "{error}"') + msg = 'Invalid username or password' if 'email' in data else 'Refresh token has expired' + raise ExtractorError(msg, expected=True) + + for name, value in tokens.items(): + self._set_cookie('.q-dance.com', name, value) + + def _perform_login(self, username, password): + self._call_login_api({'email': username, 'password': password}) + + def _real_initialize(self): + cookies = self._get_cookies('https://www.q-dance.com/') + self._refresh_token = try_call(lambda: cookies['_id-t-accounts-refresh'].value) + self._access_token = try_call(lambda: cookies['_id-t-accounts-token'].value) + if not self._access_token: + self.raise_login_required() + + def _get_auth(self): + if (try_call(lambda: jwt_decode_hs256(self._access_token)['exp']) or 0) <= int(time.time() - 120): + if not self._refresh_token: + raise ExtractorError( + 'Cannot refresh access token, login with yt-dlp or refresh cookies in browser') + self._call_login_api({'refreshToken': self._refresh_token}, note='Refreshing access token') + self._real_initialize() + + return {'Authorization': self._access_token} + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + data = self._search_nuxt_data(webpage, video_id, traverse=('data', 0, 'data')) + + def extract_availability(level): + level = int_or_none(level) or 0 + return self._availability( + needs_premium=(level >= 20), needs_subscription=(level >= 15), needs_auth=True) + + info = traverse_obj(data, { + 'title': ('title', {str.strip}), + 'description': ('description', {str.strip}), + 'display_id': ('slug', {str}), + 'thumbnail': ('thumbnail', {url_or_none}), + 'duration': ('durationInSeconds', {int_or_none}, 
{lambda x: x or None}), + 'availability': ('subscription', 'level', {extract_availability}), + 'is_live': ('type', {lambda x: x.lower() == 'live'}), + 'artist': ('acts', ..., {str}), + 'series': ('event', 'title', {str.strip}), + 'series_id': ('event', 'id', {str_or_none}), + 'season': ('eventEdition', 'title', {str.strip}), + 'season_id': ('eventEdition', 'id', {str_or_none}), + 'channel_id': ('pubnub', 'channelName', {str}), + }) + + stream = self._download_json( + f'https://dc9h6qmsoymbq.cloudfront.net/api/content/videos/{video_id}/url', + video_id, headers=self._get_auth(), expected_status=401) + + m3u8_url = traverse_obj(stream, ('data', 'url', {url_or_none})) + if not m3u8_url and traverse_obj(stream, ('error', 'code')) == 'unauthorized': + raise ExtractorError('Your account does not have access to this content', expected=True) + + formats = self._extract_m3u8_formats( + m3u8_url, video_id, fatal=False, live=True) if m3u8_url else [] + if not formats: + self.raise_no_formats('No active streams found', expected=bool(info.get('is_live'))) + + return { + **info, + 'id': video_id, + 'formats': formats, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/qingting.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/qingting.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/qingting.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/qingting.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/qqmusic.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/qqmusic.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/qqmusic.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/qqmusic.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/r7.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/r7.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/r7.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/r7.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/radiko.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiko.py new file mode 100644 index 0000000..c363d9b --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiko.py @@ -0,0 +1,252 @@ +import base64 +import random +import urllib.parse + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + clean_html, + time_seconds, + try_call, + unified_timestamp, + update_url_query, +) + + +class RadikoBaseIE(InfoExtractor): + _GEO_BYPASS = False + _FULL_KEY = None + _HOSTS_FOR_TIME_FREE_FFMPEG_UNSUPPORTED = ( + 'https://c-rpaa.smartstream.ne.jp', + 'https://si-c-radiko.smartstream.ne.jp', + 'https://tf-f-rpaa-radiko.smartstream.ne.jp', + 'https://tf-c-rpaa-radiko.smartstream.ne.jp', + 'https://si-f-radiko.smartstream.ne.jp', + 'https://rpaa.smartstream.ne.jp', + ) + _HOSTS_FOR_TIME_FREE_FFMPEG_SUPPORTED = ( + 'https://rd-wowza-radiko.radiko-cf.com', + 'https://radiko.jp', + 'https://f-radiko.smartstream.ne.jp', + ) + # Following URL forcibly connects not Time Free but Live + _HOSTS_FOR_LIVE = ( + 'https://c-radiko.smartstream.ne.jp', + ) + + def _negotiate_token(self): + _, auth1_handle = self._download_webpage_handle( + 'https://radiko.jp/v2/api/auth1', None, 'Downloading authentication page', + headers={ + 'x-radiko-app': 'pc_html5', + 'x-radiko-app-version': '0.0.1', + 'x-radiko-device': 'pc', + 'x-radiko-user': 'dummy_user', + }) + auth1_header = auth1_handle.headers + + auth_token = 
auth1_header['X-Radiko-AuthToken'] + kl = int(auth1_header['X-Radiko-KeyLength']) + ko = int(auth1_header['X-Radiko-KeyOffset']) + raw_partial_key = self._extract_full_key()[ko:ko + kl] + partial_key = base64.b64encode(raw_partial_key).decode() + + area_id = self._download_webpage( + 'https://radiko.jp/v2/api/auth2', None, 'Authenticating', + headers={ + 'x-radiko-device': 'pc', + 'x-radiko-user': 'dummy_user', + 'x-radiko-authtoken': auth_token, + 'x-radiko-partialkey': partial_key, + }).split(',')[0] + + if area_id == 'OUT': + self.raise_geo_restricted(countries=['JP']) + + auth_data = (auth_token, area_id) + self.cache.store('radiko', 'auth_data', auth_data) + return auth_data + + def _auth_client(self): + cachedata = self.cache.load('radiko', 'auth_data') + if cachedata is not None: + response = self._download_webpage( + 'https://radiko.jp/v2/api/auth_check', None, 'Checking cached token', expected_status=401, + headers={'X-Radiko-AuthToken': cachedata[0], 'X-Radiko-AreaId': cachedata[1]}) + if response == 'OK': + return cachedata + return self._negotiate_token() + + def _extract_full_key(self): + if self._FULL_KEY: + return self._FULL_KEY + + jscode = self._download_webpage( + 'https://radiko.jp/apps/js/playerCommon.js', None, + note='Downloading player js code') + full_key = self._search_regex( + (r"RadikoJSPlayer\([^,]*,\s*(['\"])pc_html5\1,\s*(['\"])(?P<fullkey>[0-9a-f]+)\2,\s*{"), + jscode, 'full key', fatal=False, group='fullkey') + + if full_key: + full_key = full_key.encode() + else: # use only full key ever known + full_key = b'bcd151073c03b352e1ef2fd66c32209da9ca0afa' + + self._FULL_KEY = full_key + return full_key + + def _find_program(self, video_id, station, cursor): + station_program = self._download_xml( + 'https://radiko.jp/v3/program/station/weekly/%s.xml' % station, video_id, + note='Downloading radio program for %s station' % station) + + prog = None + for p in station_program.findall('.//prog'): + ft_str, to_str = p.attrib['ft'], p.attrib['to'] + ft = unified_timestamp(ft_str, False) + to = unified_timestamp(to_str, False) + if ft <= cursor and cursor < to: + prog = p + break + if not prog: + raise ExtractorError('Cannot identify radio program to download!') + assert ft, to + return prog, station_program, ft, ft_str, to_str + + def _extract_formats(self, video_id, station, is_onair, ft, cursor, auth_token, area_id, query): + m3u8_playlist_data = self._download_xml( + f'https://radiko.jp/v3/station/stream/pc_html5/{station}.xml', video_id, + note='Downloading stream information') + + formats = [] + found = set() + + timefree_int = 0 if is_onair else 1 + + for element in m3u8_playlist_data.findall(f'.//url[@timefree="{timefree_int}"]/playlist_create_url'): + pcu = element.text + if pcu in found: + continue + found.add(pcu) + playlist_url = update_url_query(pcu, { + 'station_id': station, + **query, + 'l': '15', + 'lsid': ''.join(random.choices('0123456789abcdef', k=32)), + 'type': 'b', + }) + + time_to_skip = None if is_onair else cursor - ft + + domain = urllib.parse.urlparse(playlist_url).netloc + subformats = self._extract_m3u8_formats( + playlist_url, video_id, ext='m4a', + live=True, fatal=False, m3u8_id=domain, + note=f'Downloading m3u8 information from {domain}', + headers={ + 'X-Radiko-AreaId': area_id, + 'X-Radiko-AuthToken': auth_token, + }) + for sf in subformats: + if (is_onair ^ pcu.startswith(self._HOSTS_FOR_LIVE)) or ( + not is_onair and pcu.startswith(self._HOSTS_FOR_TIME_FREE_FFMPEG_UNSUPPORTED)): + sf['preference'] = -100 + sf['format_note'] = 'not 
preferred' + if not is_onair and timefree_int == 1 and time_to_skip: + sf['downloader_options'] = {'ffmpeg_args': ['-ss', str(time_to_skip)]} + formats.extend(subformats) + + return formats + + +class RadikoIE(RadikoBaseIE): + _VALID_URL = r'https?://(?:www\.)?radiko\.jp/#!/ts/(?P<station>[A-Z0-9-]+)/(?P<id>\d+)' + + _TESTS = [{ + # QRR (文化放送) station provides <desc> + 'url': 'https://radiko.jp/#!/ts/QRR/20210425101300', + 'only_matching': True, + }, { + # FMT (TOKYO FM) station does not provide <desc> + 'url': 'https://radiko.jp/#!/ts/FMT/20210810150000', + 'only_matching': True, + }, { + 'url': 'https://radiko.jp/#!/ts/JOAK-FM/20210509090000', + 'only_matching': True, + }] + + def _real_extract(self, url): + station, video_id = self._match_valid_url(url).groups() + vid_int = unified_timestamp(video_id, False) + prog, station_program, ft, radio_begin, radio_end = self._find_program(video_id, station, vid_int) + + auth_token, area_id = self._auth_client() + + return { + 'id': video_id, + 'title': try_call(lambda: prog.find('title').text), + 'description': clean_html(try_call(lambda: prog.find('info').text)), + 'uploader': try_call(lambda: station_program.find('.//name').text), + 'uploader_id': station, + 'timestamp': vid_int, + 'is_live': True, + 'formats': self._extract_formats( + video_id=video_id, station=station, is_onair=False, + ft=ft, cursor=vid_int, auth_token=auth_token, area_id=area_id, + query={ + 'start_at': radio_begin, + 'ft': radio_begin, + 'end_at': radio_end, + 'to': radio_end, + 'seek': video_id + } + ), + } + + +class RadikoRadioIE(RadikoBaseIE): + _VALID_URL = r'https?://(?:www\.)?radiko\.jp/#!/live/(?P<id>[A-Z0-9-]+)' + + _TESTS = [{ + # QRR (文化放送) station provides <desc> + 'url': 'https://radiko.jp/#!/live/QRR', + 'only_matching': True, + }, { + # FMT (TOKYO FM) station does not provide <desc> + 'url': 'https://radiko.jp/#!/live/FMT', + 'only_matching': True, + }, { + 'url': 'https://radiko.jp/#!/live/JOAK-FM', + 'only_matching': True, + }] + + def _real_extract(self, url): + station = self._match_id(url) + self.report_warning('Downloader will not stop at the end of the program! 
Press Ctrl+C to stop') + + auth_token, area_id = self._auth_client() + # get current time in JST (GMT+9:00 w/o DST) + vid_now = time_seconds(hours=9) + + prog, station_program, ft, _, _ = self._find_program(station, station, vid_now) + + title = prog.find('title').text + description = clean_html(prog.find('info').text) + station_name = station_program.find('.//name').text + + formats = self._extract_formats( + video_id=station, station=station, is_onair=True, + ft=ft, cursor=vid_now, auth_token=auth_token, area_id=area_id, + query={}) + + return { + 'id': station, + 'title': title, + 'description': description, + 'uploader': station_name, + 'uploader_id': station, + 'timestamp': ft, + 'formats': formats, + 'is_live': True, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/radiobremen.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiobremen.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/radiobremen.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/radiobremen.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/radiocanada.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiocanada.py new file mode 100644 index 0000000..1a5a635 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiocanada.py @@ -0,0 +1,165 @@ +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + determine_ext, + ExtractorError, + int_or_none, + unified_strdate, +) + + +class RadioCanadaIE(InfoExtractor): + IE_NAME = 'radiocanada' + _VALID_URL = r'(?:radiocanada:|https?://ici\.radio-canada\.ca/widgets/mediaconsole/)(?P<app_code>[^:/]+)[:/](?P<id>[0-9]+)' + _TESTS = [ + { + 'url': 'http://ici.radio-canada.ca/widgets/mediaconsole/medianet/7184272', + 'info_dict': { + 'id': '7184272', + 'ext': 'mp4', + 'title': 'Le parcours du tireur capté sur vidéo', + 'description': 'Images des caméras de surveillance fournies par la GRC montrant le parcours du tireur d\'Ottawa', + 'upload_date': '20141023', + }, + 'params': { + # m3u8 download + 'skip_download': True, + } + }, + { + # empty Title + 'url': 'http://ici.radio-canada.ca/widgets/mediaconsole/medianet/7754998/', + 'info_dict': { + 'id': '7754998', + 'ext': 'mp4', + 'title': 'letelejournal22h', + 'description': 'INTEGRALE WEB 22H-TJ', + 'upload_date': '20170720', + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + }, + { + # with protectionType but not actually DRM protected + 'url': 'radiocanada:toutv:140872', + 'info_dict': { + 'id': '140872', + 'title': 'Épisode 1', + 'series': 'District 31', + }, + 'only_matching': True, + } + ] + _GEO_COUNTRIES = ['CA'] + _access_token = None + _claims = None + + def _call_api(self, path, video_id=None, app_code=None, query=None): + if not query: + query = {} + query.update({ + 'client_key': '773aea60-0e80-41bb-9c7f-e6d7c3ad17fb', + 'output': 'json', + }) + if video_id: + query.update({ + 'appCode': app_code, + 'idMedia': video_id, + }) + if self._access_token: + query['access_token'] = self._access_token + try: + return self._download_json( + 'https://services.radio-canada.ca/media/' + path, video_id, query=query) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status in (401, 422): + data = self._parse_json(e.cause.response.read().decode(), None) + error = data.get('error_description') or data['errorMessage']['text'] + raise ExtractorError(error, expected=True) + raise + + def _extract_info(self, app_code, video_id): + 
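+        # Two-step flow: meta/v1/index.ashx returns the descriptive metadata
+        # ('Metas'); validation/v2/ below then resolves the playable stream
+        # URL for the requested device profile.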
metas = self._call_api('meta/v1/index.ashx', video_id, app_code)['Metas'] + + def get_meta(name): + for meta in metas: + if meta.get('name') == name: + text = meta.get('text') + if text: + return text + + # protectionType does not necessarily mean the video is DRM protected (see + # https://github.com/ytdl-org/youtube-dl/pull/18609). + if get_meta('protectionType'): + self.report_warning('This video is probably DRM protected.') + + query = { + 'connectionType': 'hd', + 'deviceType': 'ipad', + 'multibitrate': 'true', + } + if self._claims: + query['claims'] = self._claims + v_data = self._call_api('validation/v2/', video_id, app_code, query) + v_url = v_data.get('url') + if not v_url: + error = v_data['message'] + if error == "Le contenu sélectionné n'est pas disponible dans votre pays": + raise self.raise_geo_restricted(error, self._GEO_COUNTRIES) + if error == 'Le contenu sélectionné est disponible seulement en premium': + self.raise_login_required(error) + raise ExtractorError( + '%s said: %s' % (self.IE_NAME, error), expected=True) + formats = self._extract_m3u8_formats(v_url, video_id, 'mp4') + + subtitles = {} + closed_caption_url = get_meta('closedCaption') or get_meta('closedCaptionHTML5') + if closed_caption_url: + subtitles['fr'] = [{ + 'url': closed_caption_url, + 'ext': determine_ext(closed_caption_url, 'vtt'), + }] + + return { + 'id': video_id, + 'title': get_meta('Title') or get_meta('AV-nomEmission'), + 'description': get_meta('Description') or get_meta('ShortDescription'), + 'thumbnail': get_meta('imageHR') or get_meta('imageMR') or get_meta('imageBR'), + 'duration': int_or_none(get_meta('length')), + 'series': get_meta('Emission'), + 'season_number': int_or_none(get_meta('SrcSaison')), + 'episode_number': int_or_none(get_meta('SrcEpisode')), + 'upload_date': unified_strdate(get_meta('Date')), + 'subtitles': subtitles, + 'formats': formats, + } + + def _real_extract(self, url): + return self._extract_info(*self._match_valid_url(url).groups()) + + +class RadioCanadaAudioVideoIE(InfoExtractor): + IE_NAME = 'radiocanada:audiovideo' + _VALID_URL = r'https?://ici\.radio-canada\.ca/([^/]+/)*media-(?P<id>[0-9]+)' + _TESTS = [{ + 'url': 'http://ici.radio-canada.ca/audio-video/media-7527184/barack-obama-au-vietnam', + 'info_dict': { + 'id': '7527184', + 'ext': 'mp4', + 'title': 'Barack Obama au Vietnam', + 'description': 'Les États-Unis lèvent l\'embargo sur la vente d\'armes qui datait de la guerre du Vietnam', + 'upload_date': '20160523', + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + }, { + 'url': 'https://ici.radio-canada.ca/info/videos/media-7527184/barack-obama-au-vietnam', + 'only_matching': True, + }] + + def _real_extract(self, url): + return self.url_result('radiocanada:medianet:%s' % self._match_id(url)) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/radiode.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiode.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/radiode.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/radiode.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/radiofrance.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiofrance.py new file mode 100644 index 0000000..ec1b976 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiofrance.py @@ -0,0 +1,473 @@ +import itertools +import re +import urllib.parse + +from .common import InfoExtractor +from ..utils import ( + int_or_none, + join_nonempty, + js_to_json, + parse_duration, + 
strftime_or_none, + traverse_obj, + unified_strdate, + urljoin, +) + + +class RadioFranceIE(InfoExtractor): + _VALID_URL = r'^https?://maison\.radiofrance\.fr/radiovisions/(?P<id>[^?#]+)' + IE_NAME = 'radiofrance' + + _TEST = { + 'url': 'http://maison.radiofrance.fr/radiovisions/one-one', + 'md5': 'bdbb28ace95ed0e04faab32ba3160daf', + 'info_dict': { + 'id': 'one-one', + 'ext': 'ogg', + 'title': 'One to one', + 'description': "Plutôt que d'imaginer la radio de demain comme technologie ou comme création de contenu, je veux montrer que quelles que soient ses évolutions, j'ai l'intime conviction que la radio continuera d'être un grand média de proximité pour les auditeurs.", + 'uploader': 'Thomas Hercouët', + }, + } + + def _real_extract(self, url): + m = self._match_valid_url(url) + video_id = m.group('id') + + webpage = self._download_webpage(url, video_id) + title = self._html_search_regex(r'<h1>(.*?)</h1>', webpage, 'title') + description = self._html_search_regex( + r'<div class="bloc_page_wrapper"><div class="text">(.*?)</div>', + webpage, 'description', fatal=False) + uploader = self._html_search_regex( + r'<div class="credit">  © (.*?)</div>', + webpage, 'uploader', fatal=False) + + formats_str = self._html_search_regex( + r'class="jp-jplayer[^"]*" data-source="([^"]+)">', + webpage, 'audio URLs') + formats = [ + { + 'format_id': fm[0], + 'url': fm[1], + 'vcodec': 'none', + 'quality': i, + } + for i, fm in + enumerate(re.findall(r"([a-z0-9]+)\s*:\s*'([^']+)'", formats_str)) + ] + + return { + 'id': video_id, + 'title': title, + 'formats': formats, + 'description': description, + 'uploader': uploader, + } + + +class RadioFranceBaseIE(InfoExtractor): + _VALID_URL_BASE = r'https?://(?:www\.)?radiofrance\.fr' + + _STATIONS_RE = '|'.join(map(re.escape, ( + 'franceculture', + 'franceinfo', + 'franceinter', + 'francemusique', + 'fip', + 'mouv', + ))) + + def _extract_data_from_webpage(self, webpage, display_id, key): + return traverse_obj(self._search_json( + r'\bconst\s+data\s*=', webpage, key, display_id, + contains_pattern=r'\[\{(?s:.+)\}\]', transform_source=js_to_json), + (..., 'data', key, {dict}), get_all=False) or {} + + +class FranceCultureIE(RadioFranceBaseIE): + _VALID_URL = rf'''(?x) + {RadioFranceBaseIE._VALID_URL_BASE} + /(?:{RadioFranceBaseIE._STATIONS_RE}) + /podcasts/(?:[^?#]+/)?(?P<display_id>[^?#]+)-(?P<id>\d{{6,}})(?:$|[?#]) + ''' + + _TESTS = [ + { + 'url': 'https://www.radiofrance.fr/franceculture/podcasts/science-en-questions/la-physique-d-einstein-aiderait-elle-a-comprendre-le-cerveau-8440487', + 'info_dict': { + 'id': '8440487', + 'display_id': 'la-physique-d-einstein-aiderait-elle-a-comprendre-le-cerveau', + 'ext': 'mp3', + 'title': 'La physique d’Einstein aiderait-elle à comprendre le cerveau ?', + 'description': 'Existerait-il un pont conceptuel entre la physique de l’espace-temps et les neurosciences ?', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'upload_date': '20220514', + 'duration': 2750, + }, + }, + { + 'url': 'https://www.radiofrance.fr/franceinter/podcasts/le-7-9-30/le-7-9-30-du-vendredi-10-mars-2023-2107675', + 'info_dict': { + 'id': '2107675', + 'display_id': 'le-7-9-30-du-vendredi-10-mars-2023', + 'title': 'Inflation alimentaire : comment en sortir ? 
- Régis Debray et Claude Grange - Cybèle Idelot', + 'description': 'md5:36ee74351ede77a314fdebb94026b916', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'upload_date': '20230310', + 'duration': 8977, + 'ext': 'mp3', + }, + }, + { + 'url': 'https://www.radiofrance.fr/franceinter/podcasts/la-rafle-du-vel-d-hiv-une-affaire-d-etat/les-racines-du-crime-episode-1-3715507', + 'only_matching': True, + }, { + 'url': 'https://www.radiofrance.fr/franceinfo/podcasts/le-billet-sciences/sante-bientot-un-vaccin-contre-l-asthme-allergique-3057200', + 'only_matching': True, + } + ] + + def _real_extract(self, url): + video_id, display_id = self._match_valid_url(url).group('id', 'display_id') + webpage = self._download_webpage(url, display_id) + + # _search_json_ld doesn't correctly handle this. See https://github.com/yt-dlp/yt-dlp/pull/3874#discussion_r891903846 + video_data = self._search_json('', webpage, 'audio data', display_id, contains_pattern=r'{\s*"@type"\s*:\s*"AudioObject".+}') + + return { + 'id': video_id, + 'display_id': display_id, + 'url': video_data['contentUrl'], + 'vcodec': 'none' if video_data.get('encodingFormat') == 'mp3' else None, + 'duration': parse_duration(video_data.get('duration')), + 'title': self._html_search_regex(r'(?s)<h1[^>]*itemprop="[^"]*name[^"]*"[^>]*>(.+?)</h1>', + webpage, 'title', default=self._og_search_title(webpage)), + 'description': self._html_search_regex( + r'(?s)<meta name="description"\s*content="([^"]+)', webpage, 'description', default=None), + 'thumbnail': self._og_search_thumbnail(webpage), + 'uploader': self._html_search_regex( + r'(?s)<span class="author">(.*?)</span>', webpage, 'uploader', default=None), + 'upload_date': unified_strdate(self._search_regex( + r'"datePublished"\s*:\s*"([^"]+)', webpage, 'timestamp', fatal=False)) + } + + +class RadioFranceLiveIE(RadioFranceBaseIE): + _VALID_URL = rf'''(?x) + https?://(?:www\.)?radiofrance\.fr + /(?P<id>{RadioFranceBaseIE._STATIONS_RE}) + /?(?P<substation_id>radio-[\w-]+)?(?:[#?]|$) + ''' + + _TESTS = [{ + 'url': 'https://www.radiofrance.fr/franceinter/', + 'info_dict': { + 'id': 'franceinter', + 'title': str, + 'live_status': 'is_live', + 'ext': 'aac', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }, { + 'url': 'https://www.radiofrance.fr/franceculture', + 'info_dict': { + 'id': 'franceculture', + 'title': str, + 'live_status': 'is_live', + 'ext': 'aac', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }, { + 'url': 'https://www.radiofrance.fr/mouv/radio-musique-kids-family', + 'info_dict': { + 'id': 'mouv-radio-musique-kids-family', + 'title': str, + 'live_status': 'is_live', + 'ext': 'aac', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }, { + 'url': 'https://www.radiofrance.fr/mouv/radio-rnb-soul', + 'info_dict': { + 'id': 'mouv-radio-rnb-soul', + 'title': str, + 'live_status': 'is_live', + 'ext': 'aac', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }, { + 'url': 'https://www.radiofrance.fr/mouv/radio-musique-mix', + 'info_dict': { + 'id': 'mouv-radio-musique-mix', + 'title': str, + 'live_status': 'is_live', + 'ext': 'aac', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }, { + 'url': 'https://www.radiofrance.fr/fip/radio-rock', + 'info_dict': { + 'id': 'fip-radio-rock', + 'title': str, + 'live_status': 'is_live', + 'ext': 'aac', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }, { + 'url': 'https://www.radiofrance.fr/mouv', + 'only_matching': True, + }] + + def _real_extract(self, url): + station_id, substation_id = 
self._match_valid_url(url).group('id', 'substation_id') + + if substation_id: + webpage = self._download_webpage(url, station_id) + api_response = self._extract_data_from_webpage(webpage, station_id, 'webRadioData') + else: + api_response = self._download_json( + f'https://www.radiofrance.fr/{station_id}/api/live', station_id) + + formats, subtitles = [], {} + for media_source in traverse_obj(api_response, (('now', None), 'media', 'sources', lambda _, v: v['url'])): + if media_source.get('format') == 'hls': + fmts, subs = self._extract_m3u8_formats_and_subtitles(media_source['url'], station_id, fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + else: + formats.append({ + 'url': media_source['url'], + 'abr': media_source.get('bitrate'), + }) + + return { + 'id': join_nonempty(station_id, substation_id), + 'title': traverse_obj(api_response, ('visual', 'legend')) or join_nonempty( + ('now', 'firstLine', 'title'), ('now', 'secondLine', 'title'), from_dict=api_response, delim=' - '), + 'formats': formats, + 'subtitles': subtitles, + 'is_live': True, + } + + +class RadioFrancePlaylistBase(RadioFranceBaseIE): + """Subclasses must set _METADATA_KEY""" + + def _call_api(self, content_id, cursor, page_num): + raise NotImplementedError('This method must be implemented by subclasses') + + def _generate_playlist_entries(self, content_id, content_response): + for page_num in itertools.count(2): + for entry in content_response['items']: + yield self.url_result( + f'https://www.radiofrance.fr/{entry["path"]}', url_transparent=True, **traverse_obj(entry, { + 'title': 'title', + 'description': 'standFirst', + 'timestamp': ('publishedDate', {int_or_none}), + 'thumbnail': ('visual', 'src'), + })) + + next_cursor = traverse_obj(content_response, (('pagination', None), 'next'), get_all=False) + if not next_cursor: + break + + content_response = self._call_api(content_id, next_cursor, page_num) + + def _real_extract(self, url): + display_id = self._match_id(url) + + metadata = self._download_json( + 'https://www.radiofrance.fr/api/v2.1/path', display_id, + query={'value': urllib.parse.urlparse(url).path})['content'] + + content_id = metadata['id'] + + return self.playlist_result( + self._generate_playlist_entries(content_id, metadata[self._METADATA_KEY]), content_id, + display_id=display_id, **{**traverse_obj(metadata, { + 'title': 'title', + 'description': 'standFirst', + 'thumbnail': ('visual', 'src'), + }), **traverse_obj(metadata, { + 'title': 'name', + 'description': 'role', + })}) + + +class RadioFrancePodcastIE(RadioFrancePlaylistBase): + _VALID_URL = rf'''(?x) + {RadioFranceBaseIE._VALID_URL_BASE} + /(?:{RadioFranceBaseIE._STATIONS_RE}) + /podcasts/(?P<id>[\w-]+)/?(?:[?#]|$) + ''' + + _TESTS = [{ + 'url': 'https://www.radiofrance.fr/franceinfo/podcasts/le-billet-vert', + 'info_dict': { + 'id': 'eaf6ef81-a980-4f1c-a7d1-8a75ecd54b17', + 'display_id': 'le-billet-vert', + 'title': 'Le billet sciences', + 'description': 'md5:eb1007b34b0c0a680daaa71525bbd4c1', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + }, + 'playlist_mincount': 11, + }, { + 'url': 'https://www.radiofrance.fr/franceinter/podcasts/jean-marie-le-pen-l-obsession-nationale', + 'info_dict': { + 'id': '566fd524-3074-4fbc-ac69-8696f2152a54', + 'display_id': 'jean-marie-le-pen-l-obsession-nationale', + 'title': 'Jean-Marie Le Pen, l\'obsession nationale', + 'description': 'md5:a07c0cfb894f6d07a62d0ad12c4b7d73', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + }, + 'playlist_count': 7, + }, { + 'url': 
'https://www.radiofrance.fr/franceculture/podcasts/serie-thomas-grjebine', + 'info_dict': { + 'id': '63c1ddc9-9f15-457a-98b2-411bac63f48d', + 'display_id': 'serie-thomas-grjebine', + 'title': 'Thomas Grjebine', + }, + 'playlist_count': 1, + }, { + 'url': 'https://www.radiofrance.fr/fip/podcasts/certains-l-aiment-fip', + 'info_dict': { + 'id': '143dff38-e956-4a5d-8576-1c0b7242b99e', + 'display_id': 'certains-l-aiment-fip', + 'title': 'Certains l’aiment Fip', + 'description': 'md5:ff974672ba00d4fd5be80fb001c5b27e', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + }, + 'playlist_mincount': 321, + }, { + 'url': 'https://www.radiofrance.fr/franceinter/podcasts/le-7-9', + 'only_matching': True, + }, { + 'url': 'https://www.radiofrance.fr/mouv/podcasts/dirty-mix', + 'only_matching': True, + }] + + _METADATA_KEY = 'expressions' + + def _call_api(self, podcast_id, cursor, page_num): + return self._download_json( + f'https://www.radiofrance.fr/api/v2.1/concepts/{podcast_id}/expressions', podcast_id, + note=f'Downloading page {page_num}', query={'pageCursor': cursor}) + + +class RadioFranceProfileIE(RadioFrancePlaylistBase): + _VALID_URL = rf'{RadioFranceBaseIE._VALID_URL_BASE}/personnes/(?P<id>[\w-]+)' + + _TESTS = [{ + 'url': 'https://www.radiofrance.fr/personnes/thomas-pesquet?p=3', + 'info_dict': { + 'id': '86c62790-e481-11e2-9f7b-782bcb6744eb', + 'display_id': 'thomas-pesquet', + 'title': 'Thomas Pesquet', + 'description': 'Astronaute à l\'agence spatiale européenne', + }, + 'playlist_mincount': 212, + }, { + 'url': 'https://www.radiofrance.fr/personnes/eugenie-bastie', + 'info_dict': { + 'id': '9593050b-0183-4972-a0b5-d8f699079e02', + 'display_id': 'eugenie-bastie', + 'title': 'Eugénie Bastié', + 'description': 'Journaliste et essayiste', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + }, + 'playlist_mincount': 39, + }, { + 'url': 'https://www.radiofrance.fr/personnes/lea-salame', + 'only_matching': True, + }] + + _METADATA_KEY = 'documents' + + def _call_api(self, profile_id, cursor, page_num): + resp = self._download_json( + f'https://www.radiofrance.fr/api/v2.1/taxonomy/{profile_id}/documents', profile_id, + note=f'Downloading page {page_num}', query={ + 'relation': 'personality', + 'cursor': cursor, + }) + + resp['next'] = traverse_obj(resp, ('pagination', 'next')) + return resp + + +class RadioFranceProgramScheduleIE(RadioFranceBaseIE): + _VALID_URL = rf'''(?x) + {RadioFranceBaseIE._VALID_URL_BASE} + /(?P<station>{RadioFranceBaseIE._STATIONS_RE}) + /grille-programmes(?:\?date=(?P<date>[\d-]+))? 
+ ''' + + _TESTS = [{ + 'url': 'https://www.radiofrance.fr/franceinter/grille-programmes?date=17-02-2023', + 'info_dict': { + 'id': 'franceinter-program-20230217', + 'upload_date': '20230217', + }, + 'playlist_count': 25, + }, { + 'url': 'https://www.radiofrance.fr/franceculture/grille-programmes?date=01-02-2023', + 'info_dict': { + 'id': 'franceculture-program-20230201', + 'upload_date': '20230201', + }, + 'playlist_count': 25, + }, { + 'url': 'https://www.radiofrance.fr/mouv/grille-programmes?date=19-03-2023', + 'info_dict': { + 'id': 'mouv-program-20230319', + 'upload_date': '20230319', + }, + 'playlist_count': 3, + }, { + 'url': 'https://www.radiofrance.fr/francemusique/grille-programmes?date=18-03-2023', + 'info_dict': { + 'id': 'francemusique-program-20230318', + 'upload_date': '20230318', + }, + 'playlist_count': 15, + }, { + 'url': 'https://www.radiofrance.fr/franceculture/grille-programmes', + 'only_matching': True, + }] + + def _generate_playlist_entries(self, webpage_url, api_response): + for entry in traverse_obj(api_response, ('steps', lambda _, v: v['expression']['path'])): + yield self.url_result( + urljoin(webpage_url, f'/{entry["expression"]["path"]}'), ie=FranceCultureIE, + url_transparent=True, **traverse_obj(entry, { + 'title': ('expression', 'title'), + 'thumbnail': ('expression', 'visual', 'src'), + 'timestamp': ('startTime', {int_or_none}), + 'series_id': ('concept', 'id'), + 'series': ('concept', 'title'), + })) + + def _real_extract(self, url): + station, date = self._match_valid_url(url).group('station', 'date') + webpage = self._download_webpage(url, station) + grid_data = self._extract_data_from_webpage(webpage, station, 'grid') + upload_date = strftime_or_none(grid_data.get('date'), '%Y%m%d') + + return self.playlist_result( + self._generate_playlist_entries(url, grid_data), + join_nonempty(station, 'program', upload_date), upload_date=upload_date) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/radiojavan.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiojavan.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/radiojavan.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/radiojavan.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/radiokapital.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiokapital.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/radiokapital.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/radiokapital.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/radiozet.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/radiozet.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/radiozet.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/radiozet.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/radlive.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/radlive.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/radlive.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/radlive.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rai.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rai.py new file mode 100644 index 0000000..df4102a --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rai.py @@ -0,0 +1,783 @@ +import re + +from .common import InfoExtractor +from ..utils import ( + clean_html, + determine_ext, + ExtractorError, + 
filter_dict, + GeoRestrictedError, + int_or_none, + join_nonempty, + parse_duration, + remove_start, + strip_or_none, + traverse_obj, + try_get, + unified_strdate, + unified_timestamp, + update_url_query, + urljoin, + xpath_text, +) + + +class RaiBaseIE(InfoExtractor): + _UUID_RE = r'[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}' + _GEO_COUNTRIES = ['IT'] + _GEO_BYPASS = False + + def _extract_relinker_info(self, relinker_url, video_id, audio_only=False): + def fix_cdata(s): + # remove \r\n\t before and after <![CDATA[ ]]> to avoid + # polluted text with xpath_text + s = re.sub(r'(\]\]>)[\r\n\t]+(</)', '\\1\\2', s) + return re.sub(r'(>)[\r\n\t]+(<!\[CDATA\[)', '\\1\\2', s) + + if not re.match(r'https?://', relinker_url): + return {'formats': [{'url': relinker_url}]} + + # set User-Agent to generic 'Rai' to avoid quality filtering from + # the media server and get the maximum qualities available + relinker = self._download_xml( + relinker_url, video_id, note='Downloading XML metadata', + transform_source=fix_cdata, query={'output': 64}, + headers={**self.geo_verification_headers(), 'User-Agent': 'Rai'}) + + if xpath_text(relinker, './license_url', default='{}') != '{}': + self.report_drm(video_id) + + is_live = xpath_text(relinker, './is_live', default='N') == 'Y' + duration = parse_duration(xpath_text(relinker, './duration', default=None)) + media_url = xpath_text(relinker, './url[@type="content"]', default=None) + + if not media_url: + self.raise_no_formats('The relinker returned no media url') + + # geo flag is a bit unreliable and not properly set all the time + geoprotection = xpath_text(relinker, './geoprotection', default='N') == 'Y' + + ext = determine_ext(media_url) + formats = [] + + if ext == 'mp3': + formats.append({ + 'url': media_url, + 'vcodec': 'none', + 'acodec': 'mp3', + 'format_id': 'https-mp3', + }) + elif ext == 'm3u8' or 'format=m3u8' in media_url: + formats.extend(self._extract_m3u8_formats( + media_url, video_id, 'mp4', m3u8_id='hls', fatal=False)) + elif ext == 'f4m': + # very likely no longer needed. Cannot find any url that uses it. 
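+            # The '#live_hds' fragment form of the manifest URL is rewritten to a
+            # plain manifest.f4m URL so the generic HDS parser can still derive
+            # formats from it.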
+ manifest_url = update_url_query( + media_url.replace('manifest#live_hds.f4m', 'manifest.f4m'), + {'hdcore': '3.7.0', 'plugin': 'aasp-3.7.0.39.44'}) + formats.extend(self._extract_f4m_formats( + manifest_url, video_id, f4m_id='hds', fatal=False)) + elif ext == 'mp4': + bitrate = int_or_none(xpath_text(relinker, './bitrate')) + formats.append({ + 'url': media_url, + 'tbr': bitrate if bitrate and bitrate > 0 else None, + 'format_id': join_nonempty('https', bitrate, delim='-'), + }) + else: + raise ExtractorError('Unrecognized media file found') + + if (not formats and geoprotection is True) or '/video_no_available.mp4' in media_url: + self.raise_geo_restricted(countries=self._GEO_COUNTRIES, metadata_available=True) + + if not audio_only and not is_live: + formats.extend(self._create_http_urls(media_url, relinker_url, formats)) + + return filter_dict({ + 'is_live': is_live, + 'duration': duration, + 'formats': formats, + }) + + def _create_http_urls(self, manifest_url, relinker_url, fmts): + _MANIFEST_REG = r'/(?P<id>\w+)(?:_(?P<quality>[\d\,]+))?(?:\.mp4)?(?:\.csmil)?/playlist\.m3u8' + _MP4_TMPL = '%s&overrideUserAgentRule=mp4-%s' + _QUALITY = { + # tbr: w, h + 250: [352, 198], + 400: [512, 288], + 600: [512, 288], + 700: [512, 288], + 800: [700, 394], + 1200: [736, 414], + 1500: [920, 518], + 1800: [1024, 576], + 2400: [1280, 720], + 3200: [1440, 810], + 3600: [1440, 810], + 5000: [1920, 1080], + 10000: [1920, 1080], + } + + def percentage(number, target, pc=20, roof=125): + '''check if the target is in the range of number +/- percent''' + if not number or number < 0: + return False + return abs(target - number) < min(float(number) * float(pc) / 100.0, roof) + + def get_format_info(tbr): + import math + br = int_or_none(tbr) + if len(fmts) == 1 and not br: + br = fmts[0].get('tbr') + if br and br > 300: + tbr = math.floor(br / 100) * 100 + else: + tbr = 250 + + # try extracting info from available m3u8 formats + format_copy = [None, None] + for f in fmts: + if f.get('tbr'): + if percentage(tbr, f['tbr']): + format_copy[0] = f.copy() + if [f.get('width'), f.get('height')] == _QUALITY.get(tbr): + format_copy[1] = f.copy() + format_copy[1]['tbr'] = tbr + + # prefer format with similar bitrate because there might be + # multiple video with the same resolution but different bitrate + format_copy = format_copy[0] or format_copy[1] or {} + return { + 'format_id': f'https-{tbr}', + 'width': format_copy.get('width'), + 'height': format_copy.get('height'), + 'tbr': format_copy.get('tbr'), + 'vcodec': format_copy.get('vcodec'), + 'acodec': format_copy.get('acodec'), + 'fps': format_copy.get('fps'), + } if format_copy else { + 'format_id': f'https-{tbr}', + 'width': _QUALITY[tbr][0], + 'height': _QUALITY[tbr][1], + 'tbr': tbr, + 'vcodec': 'avc1', + 'acodec': 'mp4a', + 'fps': 25, + } + + # filter out single-stream formats + fmts = [f for f in fmts + if not f.get('vcodec') == 'none' and not f.get('acodec') == 'none'] + + mobj = re.search(_MANIFEST_REG, manifest_url) + if not mobj: + return [] + available_qualities = mobj.group('quality').split(',') if mobj.group('quality') else ['*'] + + formats = [] + for q in filter(None, available_qualities): + self.write_debug(f'Creating https format for quality {q}') + formats.append({ + 'url': _MP4_TMPL % (relinker_url, q), + 'protocol': 'https', + 'ext': 'mp4', + **get_format_info(q) + }) + return formats + + @staticmethod + def _get_thumbnails_list(thumbs, url): + return [{ + 'url': urljoin(url, thumb_url), + } for thumb_url in (thumbs or {}).values() if thumb_url] + 
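A note on _create_http_urls above: it synthesizes direct-MP4 formats from the CSMIL manifest path, then borrows width/height/codec metadata from whichever m3u8 variant has a similar bitrate. A minimal, self-contained sketch of that fuzzy match (the function and variable names below are illustrative, not part of this diff):

    def within_tolerance(number, target, pc=20, roof=125):
        # True when target lies within number +/- pc percent, with the
        # absolute window capped at roof kbps (mirrors percentage() above)
        if not number or number < 0:
            return False
        return abs(target - number) < min(number * pc / 100.0, roof)

    assert within_tolerance(1500, 1400)        # |1400 - 1500| = 100 < min(300, 125)
    assert not within_tolerance(1500, 1200)    # 300 >= 125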
@staticmethod + def _extract_subtitles(url, video_data): + STL_EXT = 'stl' + SRT_EXT = 'srt' + subtitles = {} + subtitles_array = video_data.get('subtitlesArray') or video_data.get('subtitleList') or [] + for k in ('subtitles', 'subtitlesUrl'): + subtitles_array.append({'url': video_data.get(k)}) + for subtitle in subtitles_array: + sub_url = subtitle.get('url') + if sub_url and isinstance(sub_url, str): + sub_lang = subtitle.get('language') or 'it' + sub_url = urljoin(url, sub_url) + sub_ext = determine_ext(sub_url, SRT_EXT) + subtitles.setdefault(sub_lang, []).append({ + 'ext': sub_ext, + 'url': sub_url, + }) + if STL_EXT == sub_ext: + subtitles[sub_lang].append({ + 'ext': SRT_EXT, + 'url': sub_url[:-len(STL_EXT)] + SRT_EXT, + }) + return subtitles + + +class RaiPlayIE(RaiBaseIE): + _VALID_URL = rf'(?P<base>https?://(?:www\.)?raiplay\.it/.+?-(?P<id>{RaiBaseIE._UUID_RE}))\.(?:html|json)' + _TESTS = [{ + 'url': 'https://www.raiplay.it/video/2014/04/Report-del-07042014-cb27157f-9dd0-4aee-b788-b1f67643a391.html', + 'md5': '8970abf8caf8aef4696e7b1f2adfc696', + 'info_dict': { + 'id': 'cb27157f-9dd0-4aee-b788-b1f67643a391', + 'ext': 'mp4', + 'title': 'Report del 07/04/2014', + 'alt_title': 'St 2013/14 - Report - Espresso nel caffè - 07/04/2014', + 'description': 'md5:d730c168a58f4bb35600fc2f881ec04e', + 'thumbnail': r're:^https?://www\.raiplay\.it/.+\.jpg', + 'uploader': 'Rai 3', + 'creator': 'Rai 3', + 'duration': 6160, + 'series': 'Report', + 'season': '2013/14', + 'subtitles': {'it': 'count:4'}, + 'release_year': 2022, + 'episode': 'Espresso nel caffè - 07/04/2014', + 'timestamp': 1396919880, + 'upload_date': '20140408', + 'formats': 'count:4', + }, + 'params': {'skip_download': True}, + }, { + # 1080p direct mp4 url + 'url': 'https://www.raiplay.it/video/2021/11/Blanca-S1E1-Senza-occhi-b1255a4a-8e72-4a2f-b9f3-fc1308e00736.html', + 'md5': 'aeda7243115380b2dd5e881fd42d949a', + 'info_dict': { + 'id': 'b1255a4a-8e72-4a2f-b9f3-fc1308e00736', + 'ext': 'mp4', + 'title': 'Blanca - S1E1 - Senza occhi', + 'alt_title': 'St 1 Ep 1 - Blanca - Senza occhi', + 'description': 'md5:75f95d5c030ec8bac263b1212322e28c', + 'thumbnail': r're:^https://www\.raiplay\.it/dl/img/.+\.jpg', + 'uploader': 'Rai Premium', + 'creator': 'Rai Fiction', + 'duration': 6493, + 'series': 'Blanca', + 'season': 'Season 1', + 'episode_number': 1, + 'release_year': 2021, + 'season_number': 1, + 'episode': 'Senza occhi', + 'timestamp': 1637318940, + 'upload_date': '20211119', + 'formats': 'count:12', + }, + 'params': {'skip_download': True}, + 'expected_warnings': ['Video not available. 
Likely due to geo-restriction.'] + }, { + # 1500 quality + 'url': 'https://www.raiplay.it/video/2012/09/S1E11---Tutto-cio-che-luccica-0cab3323-732e-45d6-8e86-7704acab6598.html', + 'md5': 'a634d20e8ab2d43724c273563f6bf87a', + 'info_dict': { + 'id': '0cab3323-732e-45d6-8e86-7704acab6598', + 'ext': 'mp4', + 'title': 'Mia and Me - S1E11 - Tutto ciò che luccica', + 'alt_title': 'St 1 Ep 11 - Mia and Me - Tutto ciò che luccica', + 'description': 'md5:4969e594184b1920c4c1f2b704da9dea', + 'thumbnail': r're:^https?://.*\.jpg$', + 'uploader': 'Rai Gulp', + 'series': 'Mia and Me', + 'season': 'Season 1', + 'episode_number': 11, + 'release_year': 2015, + 'season_number': 1, + 'episode': 'Tutto ciò che luccica', + 'timestamp': 1348495020, + 'upload_date': '20120924', + }, + }, { + 'url': 'http://www.raiplay.it/video/2016/11/gazebotraindesi-efebe701-969c-4593-92f3-285f0d1ce750.html?', + 'only_matching': True, + }, { + # subtitles at 'subtitlesArray' key (see #27698) + 'url': 'https://www.raiplay.it/video/2020/12/Report---04-01-2021-2e90f1de-8eee-4de4-ac0e-78d21db5b600.html', + 'only_matching': True, + }, { + # DRM protected + 'url': 'https://www.raiplay.it/video/2021/06/Lo-straordinario-mondo-di-Zoey-S2E1-Lo-straordinario-ritorno-di-Zoey-3ba992de-2332-41ad-9214-73e32ab209f4.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + base, video_id = self._match_valid_url(url).groups() + + media = self._download_json( + f'{base}.json', video_id, 'Downloading video JSON') + + if not self.get_param('allow_unplayable_formats'): + if traverse_obj(media, (('program_info', None), 'rights_management', 'rights', 'drm')): + self.report_drm(video_id) + + video = media['video'] + relinker_info = self._extract_relinker_info(video['content_url'], video_id) + date_published = join_nonempty( + media.get('date_published'), media.get('time_published'), delim=' ') + season = media.get('season') + alt_title = join_nonempty(media.get('subtitle'), media.get('toptitle'), delim=' - ') + + return { + 'id': remove_start(media.get('id'), 'ContentItem-') or video_id, + 'display_id': video_id, + 'title': media.get('name'), + 'alt_title': strip_or_none(alt_title or None), + 'description': media.get('description'), + 'uploader': strip_or_none( + traverse_obj(media, ('program_info', 'channel')) + or media.get('channel') or None), + 'creator': strip_or_none( + traverse_obj(media, ('program_info', 'editor')) + or media.get('editor') or None), + 'duration': parse_duration(video.get('duration')), + 'timestamp': unified_timestamp(date_published), + 'thumbnails': self._get_thumbnails_list(media.get('images'), url), + 'series': traverse_obj(media, ('program_info', 'name')), + 'season_number': int_or_none(season), + 'season': season if (season and not season.isdigit()) else None, + 'episode': media.get('episode_title'), + 'episode_number': int_or_none(media.get('episode')), + 'subtitles': self._extract_subtitles(url, video), + 'release_year': int_or_none(traverse_obj(media, ('track_info', 'edit_year'))), + **relinker_info + } + + +class RaiPlayLiveIE(RaiPlayIE): # XXX: Do not subclass from concrete IE + _VALID_URL = r'(?P<base>https?://(?:www\.)?raiplay\.it/dirette/(?P<id>[^/?#&]+))' + _TESTS = [{ + 'url': 'http://www.raiplay.it/dirette/rainews24', + 'info_dict': { + 'id': 'd784ad40-e0ae-4a69-aa76-37519d238a9c', + 'display_id': 'rainews24', + 'ext': 'mp4', + 'title': 're:^Diretta di Rai News 24 [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$', + 'description': 'md5:4d00bcf6dc98b27c6ec480de329d1497', + 'uploader': 'Rai News 24', + 
'creator': 'Rai News 24', + 'is_live': True, + 'live_status': 'is_live', + 'upload_date': '20090502', + 'timestamp': 1241276220, + 'formats': 'count:3', + }, + 'params': {'skip_download': True}, + }] + + +class RaiPlayPlaylistIE(InfoExtractor): + _VALID_URL = r'(?P<base>https?://(?:www\.)?raiplay\.it/programmi/(?P<id>[^/?#&]+))(?:/(?P<extra_id>[^?#&]+))?' + _TESTS = [{ + # entire series episodes + extras... + 'url': 'https://www.raiplay.it/programmi/nondirloalmiocapo/', + 'info_dict': { + 'id': 'nondirloalmiocapo', + 'title': 'Non dirlo al mio capo', + 'description': 'md5:98ab6b98f7f44c2843fd7d6f045f153b', + }, + 'playlist_mincount': 30, + }, { + # single season + 'url': 'https://www.raiplay.it/programmi/nondirloalmiocapo/episodi/stagione-2/', + 'info_dict': { + 'id': 'nondirloalmiocapo', + 'title': 'Non dirlo al mio capo - Stagione 2', + 'description': 'md5:98ab6b98f7f44c2843fd7d6f045f153b', + }, + 'playlist_count': 12, + }] + + def _real_extract(self, url): + base, playlist_id, extra_id = self._match_valid_url(url).groups() + + program = self._download_json( + f'{base}.json', playlist_id, 'Downloading program JSON') + + if extra_id: + extra_id = extra_id.upper().rstrip('/') + + playlist_title = program.get('name') + entries = [] + for b in (program.get('blocks') or []): + for s in (b.get('sets') or []): + if extra_id: + if extra_id != join_nonempty( + b.get('name'), s.get('name'), delim='/').replace(' ', '-').upper(): + continue + playlist_title = join_nonempty(playlist_title, s.get('name'), delim=' - ') + + s_id = s.get('id') + if not s_id: + continue + medias = self._download_json( + f'{base}/{s_id}.json', s_id, + 'Downloading content set JSON', fatal=False) + if not medias: + continue + for m in (medias.get('items') or []): + path_id = m.get('path_id') + if not path_id: + continue + video_url = urljoin(url, path_id) + entries.append(self.url_result( + video_url, ie=RaiPlayIE.ie_key(), + video_id=RaiPlayIE._match_id(video_url))) + + return self.playlist_result( + entries, playlist_id, playlist_title, + try_get(program, lambda x: x['program_info']['description'])) + + +class RaiPlaySoundIE(RaiBaseIE): + _VALID_URL = rf'(?P<base>https?://(?:www\.)?raiplaysound\.it/.+?-(?P<id>{RaiBaseIE._UUID_RE}))\.(?:html|json)' + _TESTS = [{ + 'url': 'https://www.raiplaysound.it/audio/2021/12/IL-RUGGITO-DEL-CONIGLIO-1ebae2a7-7cdb-42bb-842e-fe0d193e9707.html', + 'md5': '8970abf8caf8aef4696e7b1f2adfc696', + 'info_dict': { + 'id': '1ebae2a7-7cdb-42bb-842e-fe0d193e9707', + 'ext': 'mp3', + 'title': 'Il Ruggito del Coniglio del 10/12/2021', + 'alt_title': 'md5:0e6476cd57858bb0f3fcc835d305b455', + 'description': 'md5:2a17d2107e59a4a8faa0e18334139ee2', + 'thumbnail': r're:^https?://.+\.jpg$', + 'uploader': 'rai radio 2', + 'duration': 5685, + 'series': 'Il Ruggito del Coniglio', + 'episode': 'Il Ruggito del Coniglio del 10/12/2021', + 'creator': 'rai radio 2', + 'timestamp': 1638346620, + 'upload_date': '20211201', + }, + 'params': {'skip_download': True}, + }] + + def _real_extract(self, url): + base, audio_id = self._match_valid_url(url).group('base', 'id') + media = self._download_json(f'{base}.json', audio_id, 'Downloading audio JSON') + uid = try_get(media, lambda x: remove_start(remove_start(x['uniquename'], 'ContentItem-'), 'Page-')) + + info = {} + formats = [] + relinkers = set(traverse_obj(media, (('downloadable_audio', 'audio', ('live', 'cards', 0, 'audio')), 'url'))) + for r in relinkers: + info = self._extract_relinker_info(r, audio_id, True) + formats.extend(info.get('formats')) + + 
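+        # Each relinker URL contributes its own format list; the loop above
+        # merges them all, while info retains the duration/is_live fields of
+        # the last relinker for the final result dict.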
date_published = try_get(media, (lambda x: f'{x["create_date"]} {x.get("create_time") or ""}', + lambda x: x['live']['create_date'])) + + podcast_info = traverse_obj(media, 'podcast_info', ('live', 'cards', 0)) or {} + + return { + **info, + 'id': uid or audio_id, + 'display_id': audio_id, + 'title': traverse_obj(media, 'title', 'episode_title'), + 'alt_title': traverse_obj(media, ('track_info', 'media_name'), expected_type=strip_or_none), + 'description': media.get('description'), + 'uploader': traverse_obj(media, ('track_info', 'channel'), expected_type=strip_or_none), + 'creator': traverse_obj(media, ('track_info', 'editor'), expected_type=strip_or_none), + 'timestamp': unified_timestamp(date_published), + 'thumbnails': self._get_thumbnails_list(podcast_info.get('images'), url), + 'series': podcast_info.get('title'), + 'season_number': int_or_none(media.get('season')), + 'episode': media.get('episode_title'), + 'episode_number': int_or_none(media.get('episode')), + 'formats': formats, + } + + +class RaiPlaySoundLiveIE(RaiPlaySoundIE): # XXX: Do not subclass from concrete IE + _VALID_URL = r'(?P<base>https?://(?:www\.)?raiplaysound\.it/(?P<id>[^/?#&]+)$)' + _TESTS = [{ + 'url': 'https://www.raiplaysound.it/radio2', + 'info_dict': { + 'id': 'b00a50e6-f404-4af6-8f8c-ff3b9af73a44', + 'display_id': 'radio2', + 'ext': 'mp4', + 'title': r're:Rai Radio 2 \d+-\d+-\d+ \d+:\d+', + 'thumbnail': r're:^https://www\.raiplaysound\.it/dl/img/.+\.png', + 'uploader': 'rai radio 2', + 'series': 'Rai Radio 2', + 'creator': 'raiplaysound', + 'is_live': True, + 'live_status': 'is_live', + }, + 'params': {'skip_download': True}, + }] + + +class RaiPlaySoundPlaylistIE(InfoExtractor): + _VALID_URL = r'(?P<base>https?://(?:www\.)?raiplaysound\.it/(?:programmi|playlist|audiolibri)/(?P<id>[^/?#&]+))(?:/(?P<extra_id>[^?#&]+))?' 
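The RaiPlaySound extractors (like the Radio France ones earlier in this diff) lean on yt-dlp's traverse_obj for tolerant nested lookups such as ('track_info', 'channel'). A rough, stdlib-only approximation of the plain key-path case, for readers unfamiliar with the helper (a sketch only; the real utility also supports branching tuples, slices and filter callables):

    def traverse_simple(obj, *path, default=None):
        # Follow a chain of dict keys / sequence indices, returning default
        # instead of raising when any step is missing or of the wrong type.
        for key in path:
            try:
                obj = obj[key]
            except (KeyError, IndexError, TypeError):
                return default
        return obj

    media = {'track_info': {'channel': 'rai radio 2'},
             'live': {'cards': [{'audio': {'url': 'https://example.invalid/a.m3u8'}}]}}
    assert traverse_simple(media, 'track_info', 'channel') == 'rai radio 2'
    assert traverse_simple(media, 'live', 'cards', 0, 'audio', 'url') == 'https://example.invalid/a.m3u8'
    assert traverse_simple(media, 'missing', 'key') is None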
+ _TESTS = [{ + # entire show + 'url': 'https://www.raiplaysound.it/programmi/ilruggitodelconiglio', + 'info_dict': { + 'id': 'ilruggitodelconiglio', + 'title': 'Il Ruggito del Coniglio', + 'description': 'md5:48cff6972435964284614d70474132e6', + }, + 'playlist_mincount': 65, + }, { + # single season + 'url': 'https://www.raiplaysound.it/programmi/ilruggitodelconiglio/puntate/prima-stagione-1995', + 'info_dict': { + 'id': 'ilruggitodelconiglio_puntate_prima-stagione-1995', + 'title': 'Prima Stagione 1995', + }, + 'playlist_count': 1, + }] + + def _real_extract(self, url): + base, playlist_id, extra_id = self._match_valid_url(url).group('base', 'id', 'extra_id') + url = f'{base}.json' + program = self._download_json(url, playlist_id, 'Downloading program JSON') + + if extra_id: + extra_id = extra_id.rstrip('/') + playlist_id += '_' + extra_id.replace('/', '_') + path = next(c['path_id'] for c in program.get('filters') or [] if extra_id in c.get('weblink')) + program = self._download_json( + urljoin('https://www.raiplaysound.it', path), playlist_id, 'Downloading program secondary JSON') + + entries = [ + self.url_result(urljoin(base, c['path_id']), ie=RaiPlaySoundIE.ie_key()) + for c in traverse_obj(program, 'cards', ('block', 'cards')) or [] + if c.get('path_id')] + + return self.playlist_result(entries, playlist_id, program.get('title'), + traverse_obj(program, ('podcast_info', 'description'))) + + +class RaiIE(RaiBaseIE): + _VALID_URL = rf'https?://[^/]+\.(?:rai\.(?:it|tv))/.+?-(?P<id>{RaiBaseIE._UUID_RE})(?:-.+?)?\.html' + _TESTS = [{ + 'url': 'https://www.raisport.rai.it/dl/raiSport/media/rassegna-stampa-04a9f4bd-b563-40cf-82a6-aad3529cb4a9.html', + 'info_dict': { + 'id': '04a9f4bd-b563-40cf-82a6-aad3529cb4a9', + 'ext': 'mp4', + 'title': 'TG PRIMO TEMPO', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 1758, + 'upload_date': '20140612', + }, + 'params': {'skip_download': True}, + 'expected_warnings': ['Video not available. Likely due to geo-restriction.'] + }, { + 'url': 'https://www.rai.it/dl/RaiTV/programmi/media/ContentItem-efb17665-691c-45d5-a60c-5301333cbb0c.html', + 'info_dict': { + 'id': 'efb17665-691c-45d5-a60c-5301333cbb0c', + 'ext': 'mp4', + 'title': 'TG1 ore 20:00 del 03/11/2016', + 'description': 'TG1 edizione integrale ore 20:00 del giorno 03/11/2016', + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 2214, + 'upload_date': '20161103' + }, + 'params': {'skip_download': True}, + }, { + # Direct MMS: Media URL no longer works. 
+ 'url': 'http://www.rai.it/dl/RaiTV/programmi/media/ContentItem-b63a4089-ac28-48cf-bca5-9f5b5bc46df5.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + content_id = self._match_id(url) + media = self._download_json( + f'https://www.rai.tv/dl/RaiTV/programmi/media/ContentItem-{content_id}.html?json', + content_id, 'Downloading video JSON', fatal=False, expected_status=404) + + if media is None: + return None + + if 'Audio' in media['type']: + relinker_info = { + 'formats': [{ + 'format_id': join_nonempty('https', media.get('formatoAudio'), delim='-'), + 'url': media['audioUrl'], + 'ext': media.get('formatoAudio'), + 'vcodec': 'none', + 'acodec': media.get('formatoAudio'), + }] + } + elif 'Video' in media['type']: + relinker_info = self._extract_relinker_info(media['mediaUri'], content_id) + else: + raise ExtractorError('not a media file') + + thumbnails = self._get_thumbnails_list( + {image_type: media.get(image_type) for image_type in ( + 'image', 'image_medium', 'image_300')}, url) + + return { + 'id': content_id, + 'title': strip_or_none(media.get('name') or media.get('title')), + 'description': strip_or_none(media.get('desc')) or None, + 'thumbnails': thumbnails, + 'uploader': strip_or_none(media.get('author')) or None, + 'upload_date': unified_strdate(media.get('date')), + 'duration': parse_duration(media.get('length')), + 'subtitles': self._extract_subtitles(url, media), + **relinker_info + } + + +class RaiNewsIE(RaiIE): # XXX: Do not subclass from concrete IE + _VALID_URL = rf'https?://(www\.)?rainews\.it/(?!articoli)[^?#]+-(?P<id>{RaiBaseIE._UUID_RE})(?:-[^/?#]+)?\.html' + _EMBED_REGEX = [rf'<iframe[^>]+data-src="(?P<url>/iframe/[^?#]+?{RaiBaseIE._UUID_RE}\.html)'] + _TESTS = [{ + # new rainews player (#3911) + 'url': 'https://www.rainews.it/rubriche/24mm/video/2022/05/24mm-del-29052022-12cf645d-1ffd-4220-b27c-07c226dbdecf.html', + 'info_dict': { + 'id': '12cf645d-1ffd-4220-b27c-07c226dbdecf', + 'ext': 'mp4', + 'title': 'Puntata del 29/05/2022', + 'duration': 1589, + 'upload_date': '20220529', + 'uploader': 'rainews', + }, + 'params': {'skip_download': True}, + }, { + # old content with fallback method to extract media urls + 'url': 'https://www.rainews.it/dl/rainews/media/Weekend-al-cinema-da-Hollywood-arriva-il-thriller-di-Tate-Taylor-La-ragazza-del-treno-1632c009-c843-4836-bb65-80c33084a64b.html', + 'info_dict': { + 'id': '1632c009-c843-4836-bb65-80c33084a64b', + 'ext': 'mp4', + 'title': 'Weekend al cinema, da Hollywood arriva il thriller di Tate Taylor "La ragazza del treno"', + 'description': 'I film in uscita questa settimana.', + 'thumbnail': r're:^https?://.*\.png$', + 'duration': 833, + 'upload_date': '20161103' + }, + 'params': {'skip_download': True}, + 'expected_warnings': ['unable to extract player_data'], + }, { + # iframe + drm + 'url': 'https://www.rainews.it/iframe/video/2022/07/euro2022-europei-calcio-femminile-italia-belgio-gol-0-1-video-4de06a69-de75-4e32-a657-02f0885f8118.html', + 'only_matching': True, + }] + _PLAYER_TAG = 'news' + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage(url, video_id) + + player_data = self._search_json( + rf'<rai{self._PLAYER_TAG}-player\s*data=\'', webpage, 'player_data', video_id, + transform_source=clean_html, default={}) + track_info = player_data.get('track_info') + relinker_url = traverse_obj(player_data, 'mediapolis', 'content_url') + + if not relinker_url: + # fallback on old implementation for some old content + try: + return 
self._extract_from_content_id(video_id, url) + except GeoRestrictedError: + raise + except ExtractorError as e: + raise ExtractorError('Relinker URL not found', cause=e) + + relinker_info = self._extract_relinker_info(urljoin(url, relinker_url), video_id) + + return { + 'id': video_id, + 'title': player_data.get('title') or track_info.get('title') or self._og_search_title(webpage), + 'upload_date': unified_strdate(track_info.get('date')), + 'uploader': strip_or_none(track_info.get('editor') or None), + **relinker_info + } + + +class RaiCulturaIE(RaiNewsIE): # XXX: Do not subclass from concrete IE + _VALID_URL = rf'https?://(www\.)?raicultura\.it/(?!articoli)[^?#]+-(?P<id>{RaiBaseIE._UUID_RE})(?:-[^/?#]+)?\.html' + _EMBED_REGEX = [rf'<iframe[^>]+data-src="(?P<url>/iframe/[^?#]+?{RaiBaseIE._UUID_RE}\.html)'] + _TESTS = [{ + 'url': 'https://www.raicultura.it/letteratura/articoli/2018/12/Alberto-Asor-Rosa-Letteratura-e-potere-05ba8775-82b5-45c5-a89d-dd955fbde1fb.html', + 'info_dict': { + 'id': '05ba8775-82b5-45c5-a89d-dd955fbde1fb', + 'ext': 'mp4', + 'title': 'Alberto Asor Rosa: Letteratura e potere', + 'duration': 1756, + 'upload_date': '20181206', + 'uploader': 'raicultura', + 'formats': 'count:2', + }, + 'params': {'skip_download': True}, + }] + _PLAYER_TAG = 'cultura' + + +class RaiSudtirolIE(RaiBaseIE): + _VALID_URL = r'https?://raisudtirol\.rai\.it/.+media=(?P<id>\w+)' + _TESTS = [{ + # mp4 file + 'url': 'https://raisudtirol.rai.it/la/index.php?media=Ptv1619729460', + 'info_dict': { + 'id': 'Ptv1619729460', + 'ext': 'mp4', + 'title': 'Euro: trasmisciun d\'economia - 29-04-2021 20:51', + 'series': 'Euro: trasmisciun d\'economia', + 'upload_date': '20210429', + 'thumbnail': r're:https://raisudtirol\.rai\.it/img/.+\.jpg', + 'uploader': 'raisudtirol', + 'formats': 'count:1', + }, + 'params': {'skip_download': True}, + }, { + # m3u manifest + 'url': 'https://raisudtirol.rai.it/it/kidsplayer.php?lang=it&media=GUGGUG_P1.smil', + 'info_dict': { + 'id': 'GUGGUG_P1', + 'ext': 'mp4', + 'title': 'GUGGUG! 
La Prospettiva - Die Perspektive', + 'uploader': 'raisudtirol', + 'formats': 'count:6', + }, + 'params': {'skip_download': True}, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + video_date = self._html_search_regex( + r'<span class="med_data">(.+?)</span>', webpage, 'video_date', default=None) + video_title = self._html_search_regex([ + r'<span class="med_title">(.+?)</span>', r'title: \'(.+?)\','], + webpage, 'video_title', default=None) + video_url = self._html_search_regex([ + r'sources:\s*\[\{file:\s*"(.+?)"\}\]', + r'<source\s+src="(.+?)"\s+type="application/x-mpegURL"'], + webpage, 'video_url', default=None) + + ext = determine_ext(video_url) + if ext == 'm3u8': + formats = self._extract_m3u8_formats(video_url, video_id) + elif ext == 'mp4': + formats = [{ + 'format_id': 'https-mp4', + 'url': self._proto_relative_url(video_url), + 'width': 1024, + 'height': 576, + 'fps': 25, + 'vcodec': 'avc1', + 'acodec': 'mp4a', + }] + else: + formats = [] + self.raise_no_formats(f'Unrecognized media file: {video_url}') + + return { + 'id': video_id, + 'title': join_nonempty(video_title, video_date, delim=' - '), + 'series': video_title if video_date else None, + 'upload_date': unified_strdate(video_date), + 'thumbnail': urljoin('https://raisudtirol.rai.it/', self._html_search_regex( + r'image: \'(.+?)\'', webpage, 'video_thumb', default=None)), + 'uploader': 'raisudtirol', + 'formats': formats, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/raywenderlich.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/raywenderlich.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/raywenderlich.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/raywenderlich.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rbgtum.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rbgtum.py new file mode 100644 index 0000000..c8a331f --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rbgtum.py @@ -0,0 +1,142 @@ +import re + +from .common import InfoExtractor +from ..utils import parse_qs, remove_start, traverse_obj, ExtractorError + + +class RbgTumIE(InfoExtractor): + _VALID_URL = r'https://(?:live\.rbg\.tum\.de|tum\.live)/w/(?P<id>[^?#]+)' + _TESTS = [{ + # Combined view + 'url': 'https://live.rbg.tum.de/w/cpp/22128', + 'md5': '53a5e7b3e07128e33bbf36687fe1c08f', + 'info_dict': { + 'id': 'cpp/22128', + 'ext': 'mp4', + 'title': 'Lecture: October 18. 
2022', + 'series': 'Concepts of C++ programming (IN2377)', + } + }, { + # Presentation only + 'url': 'https://live.rbg.tum.de/w/I2DL/12349/PRES', + 'md5': '36c584272179f3e56b0db5d880639cba', + 'info_dict': { + 'id': 'I2DL/12349/PRES', + 'ext': 'mp4', + 'title': 'Lecture 3: Introduction to Neural Networks', + 'series': 'Introduction to Deep Learning (IN2346)', + } + }, { + # Camera only + 'url': 'https://live.rbg.tum.de/w/fvv-info/16130/CAM', + 'md5': 'e04189d92ff2f56aedf5cede65d37aad', + 'info_dict': { + 'id': 'fvv-info/16130/CAM', + 'ext': 'mp4', + 'title': 'Fachschaftsvollversammlung', + 'series': 'Fachschaftsvollversammlung Informatik', + } + }, { + 'url': 'https://tum.live/w/linalginfo/27102', + 'only_matching': True, + }, ] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + m3u8 = self._html_search_regex(r'"(https://[^"]+\.m3u8[^"]*)', webpage, 'm3u8') + lecture_title = self._html_search_regex(r'<h1[^>]*>([^<]+)</h1>', webpage, 'title', fatal=False) + lecture_series_title = remove_start(self._html_extract_title(webpage), 'TUM-Live | ') + + formats = self._extract_m3u8_formats(m3u8, video_id, 'mp4', entry_protocol='m3u8_native', m3u8_id='hls') + + return { + 'id': video_id, + 'title': lecture_title, + 'series': lecture_series_title, + 'formats': formats, + } + + +class RbgTumCourseIE(InfoExtractor): + _VALID_URL = r'https://(?P<hostname>(?:live\.rbg\.tum\.de|tum\.live))/old/course/(?P<id>(?P<year>\d+)/(?P<term>\w+)/(?P<slug>[^/?#]+))' + _TESTS = [{ + 'url': 'https://live.rbg.tum.de/old/course/2022/S/fpv', + 'info_dict': { + 'title': 'Funktionale Programmierung und Verifikation (IN0003)', + 'id': '2022/S/fpv', + }, + 'params': { + 'noplaylist': False, + }, + 'playlist_count': 13, + }, { + 'url': 'https://live.rbg.tum.de/old/course/2022/W/set', + 'info_dict': { + 'title': 'SET FSMPIC', + 'id': '2022/W/set', + }, + 'params': { + 'noplaylist': False, + }, + 'playlist_count': 6, + }, { + 'url': 'https://tum.live/old/course/2023/S/linalginfo', + 'only_matching': True, + }, ] + + def _real_extract(self, url): + course_id, hostname, year, term, slug = self._match_valid_url(url).group('id', 'hostname', 'year', 'term', 'slug') + meta = self._download_json( + f'https://{hostname}/api/courses/{slug}/', course_id, fatal=False, + query={'year': year, 'term': term}) or {} + lecture_series_title = meta.get('Name') + lectures = [self.url_result(f'https://{hostname}/w/{slug}/{stream_id}', RbgTumIE) + for stream_id in traverse_obj(meta, ('Streams', ..., 'ID'))] + + if not lectures: + webpage = self._download_webpage(url, course_id) + lecture_series_title = remove_start(self._html_extract_title(webpage), 'TUM-Live | ') + lectures = [self.url_result(f'https://{hostname}{lecture_path}', RbgTumIE) + for lecture_path in re.findall(r'href="(/w/[^/"]+/[^/"]+)"', webpage)] + + return self.playlist_result(lectures, course_id, lecture_series_title) + + +class RbgTumNewCourseIE(InfoExtractor): + _VALID_URL = r'https://(?P<hostname>(?:live\.rbg\.tum\.de|tum\.live))/\?' 
+ _TESTS = [{ + 'url': 'https://live.rbg.tum.de/?year=2022&term=S&slug=fpv&view=3', + 'info_dict': { + 'title': 'Funktionale Programmierung und Verifikation (IN0003)', + 'id': '2022/S/fpv', + }, + 'params': { + 'noplaylist': False, + }, + 'playlist_count': 13, + }, { + 'url': 'https://live.rbg.tum.de/?year=2022&term=W&slug=set&view=3', + 'info_dict': { + 'title': 'SET FSMPIC', + 'id': '2022/W/set', + }, + 'params': { + 'noplaylist': False, + }, + 'playlist_count': 6, + }, { + 'url': 'https://tum.live/?year=2023&term=S&slug=linalginfo&view=3', + 'only_matching': True, + }] + + def _real_extract(self, url): + query = parse_qs(url) + errors = [key for key in ('year', 'term', 'slug') if not query.get(key)] + if errors: + raise ExtractorError(f'Input URL is missing query parameters: {", ".join(errors)}') + year, term, slug = query['year'][0], query['term'][0], query['slug'][0] + hostname = self._match_valid_url(url).group('hostname') + + return self.url_result(f'https://{hostname}/old/course/{year}/{term}/{slug}', RbgTumCourseIE) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rbmaradio.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rbmaradio.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rbmaradio.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rbmaradio.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rcs.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rcs.py new file mode 100644 index 0000000..b865f63 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rcs.py @@ -0,0 +1,372 @@ +import re + +from .common import InfoExtractor +from ..networking import HEADRequest +from ..utils import ( + ExtractorError, + base_url, + clean_html, + extract_attributes, + get_element_html_by_class, + get_element_html_by_id, + int_or_none, + js_to_json, + mimetype2ext, + sanitize_url, + traverse_obj, + try_call, + url_basename, + urljoin, +) + + +class RCSBaseIE(InfoExtractor): + # based on VideoPlayerLoader.prototype.getVideoSrc + # and VideoPlayerLoader.prototype.transformSrc from + # https://js2.corriereobjects.it/includes2013/LIBS/js/corriere_video.sjs + _UUID_RE = r'[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}' + _RCS_ID_RE = r'[\w-]+-\d{10}' + _MIGRATION_MAP = { + 'videoamica-vh.akamaihd': 'amica', + 'media2-amica-it.akamaized': 'amica', + 'corrierevam-vh.akamaihd': 'corriere', + 'media2vam-corriere-it.akamaized': 'corriere', + 'cormezzogiorno-vh.akamaihd': 'corrieredelmezzogiorno', + 'media2vam-mezzogiorno-corriere-it.akamaized': 'corrieredelmezzogiorno', + 'corveneto-vh.akamaihd': 'corrieredelveneto', + 'media2vam-veneto-corriere-it.akamaized': 'corrieredelveneto', + 'corbologna-vh.akamaihd': 'corrieredibologna', + 'media2vam-bologna-corriere-it.akamaized': 'corrieredibologna', + 'corfiorentino-vh.akamaihd': 'corrierefiorentino', + 'media2vam-fiorentino-corriere-it.akamaized': 'corrierefiorentino', + 'corinnovazione-vh.akamaihd': 'corriereinnovazione', + 'media2-gazzanet-gazzetta-it.akamaized': 'gazzanet', + 'videogazzanet-vh.akamaihd': 'gazzanet', + 'videogazzaworld-vh.akamaihd': 'gazzaworld', + 'gazzettavam-vh.akamaihd': 'gazzetta', + 'media2vam-gazzetta-it.akamaized': 'gazzetta', + 'videoiodonna-vh.akamaihd': 'iodonna', + 'media2-leitv-it.akamaized': 'leitv', + 'videoleitv-vh.akamaihd': 'leitv', + 'videoliving-vh.akamaihd': 'living', + 'media2-living-corriere-it.akamaized': 'living', + 'media2-oggi-it.akamaized': 'oggi', + 'videooggi-vh.akamaihd': 'oggi', + 
'media2-quimamme-it.akamaized': 'quimamme', + 'quimamme-vh.akamaihd': 'quimamme', + 'videorunning-vh.akamaihd': 'running', + 'media2-style-corriere-it.akamaized': 'style', + 'style-vh.akamaihd': 'style', + 'videostyle-vh.akamaihd': 'style', + 'media2-stylepiccoli-it.akamaized': 'stylepiccoli', + 'stylepiccoli-vh.akamaihd': 'stylepiccoli', + 'doveviaggi-vh.akamaihd': 'viaggi', + 'media2-doveviaggi-it.akamaized': 'viaggi', + 'media2-vivimilano-corriere-it.akamaized': 'vivimilano', + 'vivimilano-vh.akamaihd': 'vivimilano', + 'media2-youreporter-it.akamaized': 'youreporter' + } + + def _get_video_src(self, video): + for source in traverse_obj(video, ( + 'mediaProfile', 'mediaFile', lambda _, v: v.get('mimeType'))): + url = source['value'] + for s, r in ( + ('media2vam.corriere.it.edgesuite.net', 'media2vam-corriere-it.akamaized.net'), + ('media.youreporter.it.edgesuite.net', 'media-youreporter-it.akamaized.net'), + ('corrierepmd.corriere.it.edgesuite.net', 'corrierepmd-corriere-it.akamaized.net'), + ('media2vam-corriere-it.akamaized.net/fcs.quotidiani/vr/videos/', 'video.corriere.it/vr360/videos/'), + ('http://', 'https://'), + ): + url = url.replace(s, r) + + type_ = mimetype2ext(source['mimeType']) + if type_ == 'm3u8' and '-vh.akamaihd' in url: + # still needed for some old content: see _TESTS #3 + matches = re.search(r'(?:https?:)?//(?P<host>[\w\.\-]+)\.net/i(?P<path>.+)$', url) + if matches: + url = f'https://vod.rcsobjects.it/hls/{self._MIGRATION_MAP[matches.group("host")]}{matches.group("path")}' + if traverse_obj(video, ('mediaProfile', 'geoblocking')) or ( + type_ == 'm3u8' and 'fcs.quotidiani_!' in url): + url = url.replace('vod.rcsobjects', 'vod-it.rcsobjects') + if type_ == 'm3u8' and 'vod' in url: + url = url.replace('.csmil', '.urlset') + if type_ == 'mp3': + url = url.replace('media2vam-corriere-it.akamaized.net', 'vod.rcsobjects.it/corriere') + + yield { + 'type': type_, + 'url': url, + 'bitrate': source.get('bitrate') + } + + def _create_http_formats(self, m3u8_formats, video_id): + for f in m3u8_formats: + if f['vcodec'] == 'none': + continue + http_url = re.sub(r'(https?://[^/]+)/hls/([^?#]+?\.mp4).+', r'\g<1>/\g<2>', f['url']) + if http_url == f['url']: + continue + + http_f = f.copy() + del http_f['manifest_url'] + format_id = try_call(lambda: http_f['format_id'].replace('hls-', 'https-')) + urlh = self._request_webpage(HEADRequest(http_url), video_id, fatal=False, + note=f'Check filesize for {format_id}') + if not urlh: + continue + + http_f.update({ + 'format_id': format_id, + 'url': http_url, + 'protocol': 'https', + 'filesize_approx': int_or_none(urlh.headers.get('Content-Length', None)), + }) + yield http_f + + def _create_formats(self, sources, video_id): + for source in sources: + if source['type'] == 'm3u8': + m3u8_formats = self._extract_m3u8_formats( + source['url'], video_id, 'mp4', m3u8_id='hls', fatal=False) + yield from m3u8_formats + yield from self._create_http_formats(m3u8_formats, video_id) + elif source['type'] == 'mp3': + yield { + 'format_id': 'https-mp3', + 'ext': 'mp3', + 'acodec': 'mp3', + 'vcodec': 'none', + 'abr': source.get('bitrate'), + 'url': source['url'], + } + + def _real_extract(self, url): + cdn, video_id = self._match_valid_url(url).group('cdn', 'id') + display_id, video_data = None, None + + if re.match(self._UUID_RE, video_id) or re.match(self._RCS_ID_RE, video_id): + url = f'https://video.{cdn}/video-json/{video_id}' + else: + webpage = self._download_webpage(url, video_id) + data_config = get_element_html_by_id('divVideoPlayer', 
webpage) or get_element_html_by_class('divVideoPlayer', webpage) + + if data_config: + data_config = self._parse_json( + extract_attributes(data_config).get('data-config'), + video_id, fatal=False) or {} + if data_config.get('newspaper'): + cdn = f'{data_config["newspaper"]}.it' + display_id, video_id = video_id, data_config.get('uuid') or video_id + url = f'https://video.{cdn}/video-json/{video_id}' + else: + json_url = self._search_regex( + r'''(?x)url\s*=\s*(["']) + (?P<url> + (?:https?:)?//video\.rcs\.it + /fragment-includes/video-includes/[^"']+?\.json + )\1;''', + webpage, video_id, group='url', default=None) + if json_url: + video_data = self._download_json(sanitize_url(json_url, scheme='https'), video_id) + display_id, video_id = video_id, video_data.get('id') or video_id + + if not video_data: + webpage = self._download_webpage(url, video_id) + + video_data = self._search_json( + '##start-video##', webpage, 'video data', video_id, default=None, + end_pattern='##end-video##', transform_source=js_to_json) + + if not video_data: + # try search for iframes + emb = RCSEmbedsIE._extract_url(webpage) + if emb: + return { + '_type': 'url_transparent', + 'url': emb, + 'ie_key': RCSEmbedsIE.ie_key() + } + + if not video_data: + raise ExtractorError('Video data not found in the page') + + return { + 'id': video_id, + 'display_id': display_id, + 'title': video_data.get('title'), + 'description': (clean_html(video_data.get('description')) + or clean_html(video_data.get('htmlDescription')) + or self._html_search_meta('description', webpage)), + 'uploader': video_data.get('provider') or cdn, + 'formats': list(self._create_formats(self._get_video_src(video_data), video_id)), + } + + +class RCSEmbedsIE(RCSBaseIE): + _VALID_URL = r'''(?x) + https?://(?P<vid>video)\. + (?P<cdn> + (?: + rcs| + (?:corriere\w+\.)?corriere| + (?:gazzanet\.)?gazzetta + )\.it) + /video-embed/(?P<id>[^/=&\?]+?)(?:$|\?)''' + _EMBED_REGEX = [r'''(?x) + (?: + data-frame-src=| + <iframe[^\n]+src= + ) + (["']) + (?P<url>(?:https?:)?//video\. + (?: + rcs| + (?:corriere\w+\.)?corriere| + (?:gazzanet\.)?gazzetta + ) + \.it/video-embed/.+?) + \1'''] + _TESTS = [{ + 'url': 'https://video.rcs.it/video-embed/iodonna-0001585037', + 'md5': '0faca97df525032bb9847f690bc3720c', + 'info_dict': { + 'id': 'iodonna-0001585037', + 'ext': 'mp4', + 'title': 'Sky Arte racconta Madonna nella serie "Artist to icon"', + 'description': 'md5:65b09633df9ffee57f48b39e34c9e067', + 'uploader': 'rcs.it', + } + }, { + 'url': 'https://video.gazzanet.gazzetta.it/video-embed/gazzanet-mo05-0000260789', + 'only_matching': True + }, { + 'url': 'https://video.gazzetta.it/video-embed/49612410-00ca-11eb-bcd8-30d4253e0140', + 'only_matching': True + }] + _WEBPAGE_TESTS = [{ + 'url': 'https://www.iodonna.it/video-iodonna/personaggi-video/monica-bellucci-piu-del-lavoro-oggi-per-me-sono-importanti-lamicizia-e-la-famiglia/', + 'info_dict': { + 'id': 'iodonna-0002033648', + 'ext': 'mp4', + 'title': 'Monica Bellucci: «Più del lavoro, oggi per me sono importanti l\'amicizia e la famiglia»', + 'description': 'md5:daea6d9837351e56b1ab615c06bebac1', + 'uploader': 'rcs.it', + } + }] + + @staticmethod + def _sanitize_url(url): + url = sanitize_url(url, scheme='https') + return urljoin(base_url(url), url_basename(url)) + + @classmethod + def _extract_embed_urls(cls, url, webpage): + return map(cls._sanitize_url, super()._extract_embed_urls(url, webpage)) + + +class RCSIE(RCSBaseIE): + _VALID_URL = r'''(?x)https?://(?P<vid>video|viaggi)\. + (?P<cdn> + (?: + corrieredelmezzogiorno\. 
+ |corrieredelveneto\. + |corrieredibologna\. + |corrierefiorentino\. + )?corriere\.it + |(?:gazzanet\.)?gazzetta\.it) + /(?!video-embed/)[^?#]+?/(?P<id>[^/\?]+)(?=\?|/$|$)''' + _TESTS = [{ + # json iframe directly from id + 'url': 'https://video.corriere.it/sport/formula-1/vettel-guida-ferrari-sf90-mugello-suo-fianco-c-elecrerc-bendato-video-esilarante/b727632a-f9d0-11ea-91b0-38d50a849abb', + 'md5': '14946840dec46ecfddf66ba4eea7d2b2', + 'info_dict': { + 'id': 'b727632a-f9d0-11ea-91b0-38d50a849abb', + 'ext': 'mp4', + 'title': 'Vettel guida la Ferrari SF90 al Mugello e al suo fianco c\'è Leclerc (bendato): il video è esilarante', + 'description': 'md5:3915ce5ebb3d2571deb69a5eb85ac9b5', + 'uploader': 'Corriere Tv', + } + }, { + # search for video id inside the page + 'url': 'https://viaggi.corriere.it/video/norvegia-il-nuovo-ponte-spettacolare-sopra-la-cascata-di-voringsfossen/', + 'md5': 'f22a92d9e666e80f2fffbf2825359c81', + 'info_dict': { + 'id': '5b7cd134-e2c1-11ea-89b3-b56dd0df2aa2', + 'display_id': 'norvegia-il-nuovo-ponte-spettacolare-sopra-la-cascata-di-voringsfossen', + 'ext': 'mp4', + 'title': 'La nuova spettacolare attrazione in Norvegia: il ponte sopra Vøringsfossen', + 'description': 'md5:18b35a291f6746c0c8dacd16e5f5f4f8', + 'uploader': 'DOVE Viaggi', + } + }, { + # only audio format https://github.com/yt-dlp/yt-dlp/issues/5683 + 'url': 'https://video.corriere.it/cronaca/audio-telefonata-il-papa-becciu-santita-lettera-che-mi-ha-inviato-condanna/b94c0d20-70c2-11ed-9572-e4b947a0ebd2', + 'md5': 'aaffb08d02f2ce4292a4654694c78150', + 'info_dict': { + 'id': 'b94c0d20-70c2-11ed-9572-e4b947a0ebd2', + 'ext': 'mp3', + 'title': 'L\'audio della telefonata tra il Papa e Becciu: «Santità, la lettera che mi ha inviato è una condanna»', + 'description': 'md5:c0ddb61bd94a8d4e0d4bb9cda50a689b', + 'uploader': 'Corriere Tv', + 'formats': [{'format_id': 'https-mp3', 'ext': 'mp3'}], + } + }, { + # old content still needs cdn migration + 'url': 'https://viaggi.corriere.it/video/milano-varallo-sesia-sul-treno-a-vapore/', + 'md5': '2dfdce7af249654ad27eeba03fe1e08d', + 'info_dict': { + 'id': 'd8f6c8d0-f7d7-11e8-bfca-f74cf4634191', + 'display_id': 'milano-varallo-sesia-sul-treno-a-vapore', + 'ext': 'mp4', + 'title': 'Milano-Varallo Sesia sul treno a vapore', + 'description': 'md5:6348f47aac230397fe341a74f7678d53', + 'uploader': 'DOVE Viaggi', + } + }, { + 'url': 'https://video.corriere.it/video-360/metro-copenaghen-tutta-italiana/a248a7f0-e2db-11e9-9830-af2de6b1f945', + 'only_matching': True + }] + + +class RCSVariousIE(RCSBaseIE): + _VALID_URL = r'''(?x)https?://www\. 
+ (?P<cdn> + leitv\.it| + youreporter\.it| + amica\.it + )/(?:[^/]+/)?(?P<id>[^/]+?)(?:$|\?|/)''' + _TESTS = [{ + 'url': 'https://www.leitv.it/benessere/mal-di-testa/', + 'md5': '3b7a683d105a7313ec7513b014443631', + 'info_dict': { + 'id': 'leitv-0000125151', + 'display_id': 'mal-di-testa', + 'ext': 'mp4', + 'title': 'Cervicalgia e mal di testa, il video con i suggerimenti dell\'esperto', + 'description': 'md5:ae21418f34cee0b8d02a487f55bcabb5', + 'uploader': 'leitv.it', + } + }, { + 'url': 'https://www.youreporter.it/fiume-sesia-3-ottobre-2020/', + 'md5': '3989b6d603482611a2abd2f32b79f739', + 'info_dict': { + 'id': 'youreporter-0000332574', + 'display_id': 'fiume-sesia-3-ottobre-2020', + 'ext': 'mp4', + 'title': 'Fiume Sesia 3 ottobre 2020', + 'description': 'md5:0070eef1cc884d13c970a4125063de55', + 'uploader': 'youreporter.it', + } + }, { + 'url': 'https://www.amica.it/video-post/saint-omer-al-cinema-il-film-leone-dargento-che-ribalta-gli-stereotipi/', + 'md5': '187cce524dfd0343c95646c047375fc4', + 'info_dict': { + 'id': 'amica-0001225365', + 'display_id': 'saint-omer-al-cinema-il-film-leone-dargento-che-ribalta-gli-stereotipi', + 'ext': 'mp4', + 'title': '"Saint Omer": al cinema il film Leone d\'argento che ribalta gli stereotipi', + 'description': 'md5:b1c8869c2dcfd6073a2a311ba0008aa8', + 'uploader': 'rcs.it', + } + }] diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rcti.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rcti.py new file mode 100644 index 0000000..79d9c8e --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rcti.py @@ -0,0 +1,373 @@ +import json +import random +import time + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + dict_get, + ExtractorError, + strip_or_none, + traverse_obj, + try_get +) + + +class RCTIPlusBaseIE(InfoExtractor): + def _real_initialize(self): + self._AUTH_KEY = self._download_json( + 'https://api.rctiplus.com/api/v1/visitor?platform=web', # platform can be web, mweb, android, ios + None, 'Fetching authorization key')['data']['access_token'] + + def _call_api(self, url, video_id, note=None): + json = self._download_json( + url, video_id, note=note, headers={'Authorization': self._AUTH_KEY}) + if json.get('status', {}).get('code', 0) != 0: + raise ExtractorError(f'{self.IE_NAME} said: {json["status"]["message_client"]}', cause=json) + return json.get('data'), json.get('meta') + + +class RCTIPlusIE(RCTIPlusBaseIE): + _VALID_URL = r'https://www\.rctiplus\.com/(?:programs/\d+?/.*?/)?(?P<type>episode|clip|extra|live-event|missed-event)/(?P<id>\d+)/(?P<display_id>[^/?#&]+)' + _TESTS = [{ + 'url': 'https://www.rctiplus.com/programs/1259/kiko-untuk-lola/episode/22124/untuk-lola', + 'md5': '56ed45affad45fa18d5592a1bc199997', + 'info_dict': { + 'id': 'v_e22124', + 'title': 'Untuk Lola', + 'display_id': 'untuk-lola', + 'description': 'md5:2b809075c0b1e071e228ad6d13e41deb', + 'ext': 'mp4', + 'duration': 1400, + 'timestamp': 1615978800, + 'upload_date': '20210317', + 'series': 'Kiko : Untuk Lola', + 'season_number': 1, + 'episode_number': 1, + 'channel': 'RCTI', + }, + 'params': { + 'fixup': 'never', + }, + }, { # Clip; Series title doesn't appear on metadata JSON + 'url': 'https://www.rctiplus.com/programs/316/cahaya-terindah/clip/3921/make-a-wish', + 'md5': 'd179b2ff356f0e91a53bcc6a4d8504f0', + 'info_dict': { + 'id': 'v_c3921', + 'title': 'Make A Wish', + 'display_id': 'make-a-wish', + 'description': 'Make A Wish', + 'ext': 'mp4', + 'duration': 288, + 
'timestamp': 1571652600, + 'upload_date': '20191021', + 'series': 'Cahaya Terindah', + 'channel': 'RCTI', + }, + 'params': { + 'fixup': 'never', + }, + }, { # Extra + 'url': 'https://www.rctiplus.com/programs/616/inews-malam/extra/9438/diungkapkan-melalui-surat-terbuka-ceo-ruangguru-belva-devara-mundur-dari-staf-khusus-presiden', + 'md5': 'c48106afdbce609749f5e0c007d9278a', + 'info_dict': { + 'id': 'v_ex9438', + 'title': 'md5:2ede828c0f8bde249e0912be150314ca', + 'display_id': 'md5:62b8d4e9ff096db527a1ad797e8a9933', + 'description': 'md5:2ede828c0f8bde249e0912be150314ca', + 'ext': 'mp4', + 'duration': 93, + 'timestamp': 1587561540, + 'upload_date': '20200422', + 'series': 'iNews Malam', + 'channel': 'INews', + }, + }, { # Missed event/replay + 'url': 'https://www.rctiplus.com/missed-event/2507/mou-signing-ceremony-27-juli-2021-1400-wib', + 'md5': '649c5f27250faed1452ca8b91e06922d', + 'info_dict': { + 'id': 'v_pe2507', + 'title': 'MOU Signing Ceremony | 27 Juli 2021 | 14.00 WIB', + 'display_id': 'mou-signing-ceremony-27-juli-2021-1400-wib', + 'ext': 'mp4', + 'timestamp': 1627142400, + 'upload_date': '20210724', + 'was_live': True, + 'release_timestamp': 1627369200, + }, + 'params': { + 'fixup': 'never', + }, + }, { # Live event; Cloudfront CDN + 'url': 'https://www.rctiplus.com/live-event/2530/dai-muda-charging-imun-dengan-iman-4-agustus-2021-1600-wib', + 'info_dict': { + 'id': 'v_le2530', + 'title': 'Dai Muda : Charging Imun dengan Iman | 4 Agustus 2021 | 16.00 WIB', + 'display_id': 'dai-muda-charging-imun-dengan-iman-4-agustus-2021-1600-wib', + 'ext': 'mp4', + 'timestamp': 1627898400, + 'upload_date': '20210802', + 'release_timestamp': 1628067600, + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'This live event has ended.', + }, { # TV; live_at is null + 'url': 'https://www.rctiplus.com/live-event/1/rcti', + 'info_dict': { + 'id': 'v_lt1', + 'title': 'RCTI', + 'display_id': 'rcti', + 'ext': 'mp4', + 'timestamp': 1546344000, + 'upload_date': '20190101', + 'is_live': True, + }, + 'params': { + 'skip_download': True, + }, + }] + _CONVIVA_JSON_TEMPLATE = { + 't': 'CwsSessionHb', + 'cid': 'ff84ae928c3b33064b76dec08f12500465e59a6f', + 'clid': '0', + 'sid': 0, + 'seq': 0, + 'caps': 0, + 'sf': 7, + 'sdk': True, + } + + def _real_extract(self, url): + match = self._match_valid_url(url).groupdict() + video_type, video_id, display_id = match['type'], match['id'], match['display_id'] + + url_api_version = 'v2' if video_type == 'missed-event' else 'v1' + appier_id = '23984824_' + str(random.randint(0, 10000000000)) # Based on the webpage's uuidRandom generator + video_json = self._call_api( + f'https://api.rctiplus.com/api/{url_api_version}/{video_type}/{video_id}/url?appierid={appier_id}', display_id, 'Downloading video URL JSON')[0] + video_url = video_json['url'] + + is_upcoming = try_get(video_json, lambda x: x['current_date'] < x['live_at']) + if is_upcoming is None: + is_upcoming = try_get(video_json, lambda x: x['current_date'] < x['start_date']) + if is_upcoming: + self.raise_no_formats( + 'This event will start at %s.' 
% video_json['live_label'] if video_json.get('live_label') else 'This event has not started yet.', expected=True) + if 'akamaized' in video_url: + # For some videos hosted on Akamai's CDN (possibly AES-encrypted ones?), a session needs to at least be made via Conviva's API + conviva_json_data = { + **self._CONVIVA_JSON_TEMPLATE, + 'url': video_url, + 'sst': int(time.time()) + } + conviva_json_res = self._download_json( + 'https://ff84ae928c3b33064b76dec08f12500465e59a6f.cws.conviva.com/0/wsg', display_id, + 'Creating Conviva session', 'Failed to create Conviva session', + fatal=False, data=json.dumps(conviva_json_data).encode('utf-8')) + if conviva_json_res and conviva_json_res.get('err') != 'ok': + self.report_warning('Conviva said: %s' % str(conviva_json_res.get('err'))) + + video_meta, meta_paths = self._call_api( + 'https://api.rctiplus.com/api/v1/%s/%s' % (video_type, video_id), display_id, 'Downloading video metadata') + + thumbnails, image_path = [], meta_paths.get('image_path', 'https://rstatic.akamaized.net/media/') + if video_meta.get('portrait_image'): + thumbnails.append({ + 'id': 'portrait_image', + 'url': '%s%d%s' % (image_path, 2000, video_meta['portrait_image']) # 2000px seems to be the highest resolution that can be given + }) + if video_meta.get('landscape_image'): + thumbnails.append({ + 'id': 'landscape_image', + 'url': '%s%d%s' % (image_path, 2000, video_meta['landscape_image']) + }) + try: + formats = self._extract_m3u8_formats(video_url, display_id, 'mp4', headers={'Referer': 'https://www.rctiplus.com/'}) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + self.raise_geo_restricted(countries=['ID'], metadata_available=True) + else: + raise e + for f in formats: + if 'akamaized' in f['url'] or 'cloudfront' in f['url']: + f.setdefault('http_headers', {})['Referer'] = 'https://www.rctiplus.com/' # Referer header is required for akamai/cloudfront CDNs + + return { + 'id': video_meta.get('product_id') or video_json.get('product_id'), + 'title': dict_get(video_meta, ('title', 'name')) or dict_get(video_json, ('content_name', 'assets_name')), + 'display_id': display_id, + 'description': video_meta.get('summary'), + 'timestamp': video_meta.get('release_date') or video_json.get('start_date'), + 'duration': video_meta.get('duration'), + 'categories': [video_meta['genre']] if video_meta.get('genre') else None, + 'average_rating': video_meta.get('star_rating'), + 'series': video_meta.get('program_title') or video_json.get('program_title'), + 'season_number': video_meta.get('season'), + 'episode_number': video_meta.get('episode'), + 'channel': video_json.get('tv_name'), + 'channel_id': video_json.get('tv_id'), + 'formats': formats, + 'thumbnails': thumbnails, + 'is_live': video_type == 'live-event' and not is_upcoming, + 'was_live': video_type == 'missed-event', + 'live_status': 'is_upcoming' if is_upcoming else None, + 'release_timestamp': video_json.get('live_at'), + } + + +class RCTIPlusSeriesIE(RCTIPlusBaseIE): + _VALID_URL = r'https://www\.rctiplus\.com/programs/(?P<id>\d+)/(?P<display_id>[^/?#&]+)(?:/(?P<type>episodes|extras|clips))?' 
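+ # Note on the Akamai branch above: the Conviva "session" is a single
+ # CwsSessionHb heartbeat, i.e. the static _CONVIVA_JSON_TEMPLATE plus the
+ # media URL and a unix timestamp, POSTed as JSON to
+ # https://<cid>.cws.conviva.com/0/wsg, where <cid> is the customer id
+ # hard-coded in the template. Sketch of the payload as built above:
+ #
+ #   {**self._CONVIVA_JSON_TEMPLATE, 'url': video_url, 'sst': int(time.time())}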
+ _TESTS = [{ + 'url': 'https://www.rctiplus.com/programs/829/putri-untuk-pangeran', + 'playlist_mincount': 1019, + 'info_dict': { + 'id': '829', + 'title': 'Putri Untuk Pangeran', + 'description': 'md5:aca7b54d05bd95a67d4f4613cc1d622d', + 'age_limit': 2, + 'cast': ['Verrel Bramasta', 'Ranty Maria', 'Riza Syah', 'Ivan Fadilla', 'Nicole Parham', 'Dll', 'Aviv Elham'], + 'display_id': 'putri-untuk-pangeran', + 'tag': 'count:18', + }, + }, { # No episodes + 'url': 'https://www.rctiplus.com/programs/615/inews-pagi', + 'playlist_mincount': 388, + 'info_dict': { + 'id': '615', + 'title': 'iNews Pagi', + 'description': 'md5:f18ee3d4643cfb41c358e5a9b693ee04', + 'age_limit': 2, + 'tag': 'count:11', + 'display_id': 'inews-pagi', + } + }] + _AGE_RATINGS = { # Based off https://id.wikipedia.org/wiki/Sistem_rating_konten_televisi with additional ratings + 'S-SU': 2, + 'SU': 2, + 'P': 2, + 'A': 7, + 'R': 13, + 'R-R/1': 17, # Labelled as 17+ despite being R + 'D': 18, + } + + @classmethod + def suitable(cls, url): + return False if RCTIPlusIE.suitable(url) else super(RCTIPlusSeriesIE, cls).suitable(url) + + def _entries(self, url, display_id=None, note='Downloading entries JSON', metadata={}): + total_pages = 0 + try: + total_pages = self._call_api( + '%s&length=20&page=0' % url, + display_id, note)[1]['pagination']['total_page'] + except ExtractorError as e: + if 'not found' in str(e): + return [] + raise e + if total_pages <= 0: + return [] + + for page_num in range(1, total_pages + 1): + episode_list = self._call_api( + '%s&length=20&page=%s' % (url, page_num), + display_id, '%s page %s' % (note, page_num))[0] or [] + + for video_json in episode_list: + yield { + '_type': 'url', + 'url': video_json['share_link'], + 'ie_key': RCTIPlusIE.ie_key(), + 'id': video_json.get('product_id'), + 'title': video_json.get('title'), + 'display_id': video_json.get('title_code').replace('_', '-'), + 'description': video_json.get('summary'), + 'timestamp': video_json.get('release_date'), + 'duration': video_json.get('duration'), + 'season_number': video_json.get('season'), + 'episode_number': video_json.get('episode'), + **metadata + } + + def _series_entries(self, series_id, display_id=None, video_type=None, metadata={}): + if not video_type or video_type in 'episodes': + try: + seasons_list = self._call_api( + f'https://api.rctiplus.com/api/v1/program/{series_id}/season', + display_id, 'Downloading seasons list JSON')[0] + except ExtractorError as e: + if 'not found' not in str(e): + raise + seasons_list = [] + for season in seasons_list: + yield from self._entries( + f'https://api.rctiplus.com/api/v2/program/{series_id}/episode?season={season["season"]}', + display_id, f'Downloading season {season["season"]} episode entries', metadata) + if not video_type or video_type in 'extras': + yield from self._entries( + f'https://api.rctiplus.com/api/v2/program/{series_id}/extra?content_id=0', + display_id, 'Downloading extra entries', metadata) + if not video_type or video_type in 'clips': + yield from self._entries( + f'https://api.rctiplus.com/api/v2/program/{series_id}/clip?content_id=0', + display_id, 'Downloading clip entries', metadata) + + def _real_extract(self, url): + series_id, display_id, video_type = self._match_valid_url(url).group('id', 'display_id', 'type') + if video_type: + self.report_warning( + f'Only {video_type} will be downloaded. 
' + f'To download everything from the series, remove "/{video_type}" from the URL') + + series_meta, meta_paths = self._call_api( + f'https://api.rctiplus.com/api/v1/program/{series_id}/detail', display_id, 'Downloading series metadata') + metadata = { + 'age_limit': try_get(series_meta, lambda x: self._AGE_RATINGS[x['age_restriction'][0]['code']]), + 'cast': traverse_obj(series_meta, (('starring', 'creator', 'writer'), ..., 'name'), + expected_type=lambda x: strip_or_none(x) or None), + 'tag': traverse_obj(series_meta, ('tag', ..., 'name'), + expected_type=lambda x: strip_or_none(x) or None), + } + return self.playlist_result( + self._series_entries(series_id, display_id, video_type, metadata), series_id, + series_meta.get('title'), series_meta.get('summary'), display_id=display_id, **metadata) + + +class RCTIPlusTVIE(RCTIPlusBaseIE): + _VALID_URL = r'https://www\.rctiplus\.com/((tv/(?P<tvname>\w+))|(?P<eventname>live-event|missed-event))' + _TESTS = [{ + 'url': 'https://www.rctiplus.com/tv/rcti', + 'info_dict': { + 'id': 'v_lt1', + 'title': 'RCTI', + 'ext': 'mp4', + 'timestamp': 1546344000, + 'upload_date': '20190101', + }, + 'params': { + 'skip_download': True, + } + }, { + # Returned video will always change + 'url': 'https://www.rctiplus.com/live-event', + 'only_matching': True, + }, { + # Returned video will also always change + 'url': 'https://www.rctiplus.com/missed-event', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if RCTIPlusIE.suitable(url) else super(RCTIPlusTVIE, cls).suitable(url) + + def _real_extract(self, url): + match = self._match_valid_url(url).groupdict() + tv_id = match.get('tvname') or match.get('eventname') + webpage = self._download_webpage(url, tv_id) + video_type, video_id = self._search_regex( + r'url\s*:\s*["\']https://api\.rctiplus\.com/api/v./(?P<type>[^/]+)/(?P<id>\d+)/url', + webpage, 'video link', group=('type', 'id')) + return self.url_result(f'https://www.rctiplus.com/{video_type}/{video_id}/{tv_id}', 'RCTIPlus') diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rds.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rds.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rds.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rds.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/recurbate.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/recurbate.py new file mode 100644 index 0000000..d7294cb --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/recurbate.py @@ -0,0 +1,42 @@ +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ExtractorError, merge_dicts + + +class RecurbateIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?recurbate\.com/play\.php\?video=(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://recurbate.com/play.php?video=39161415', + 'md5': 'dd2b4ec57aa3e3572cb5cf0997fca99f', + 'info_dict': { + 'id': '39161415', + 'ext': 'mp4', + 'description': 'md5:db48d09e4d93fc715f47fd3d6b7edd51', + 'title': 'Performer zsnicole33 show on 2022-10-25 20:23, Chaturbate Archive – Recurbate', + 'age_limit': 18, + }, + 'skip': 'Website require membership.', + }] + + def _real_extract(self, url): + SUBSCRIPTION_MISSING_MESSAGE = 'This video is only available for registered users; Set your authenticated browser user agent via the --user-agent parameter.' 
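+ # Flow below: scrape the per-page "data-token" from the player HTML, trade
+ # it at /api/get.php for an HTML5 <video> fragment, and fail closed with
+ # raise_login_required whenever the site signals a missing subscription
+ # (HTTP 403 on the page itself, or the literal 'shall_subscribe' body).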
+ video_id = self._match_id(url) + try: + webpage = self._download_webpage(url, video_id) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + self.raise_login_required(msg=SUBSCRIPTION_MISSING_MESSAGE, method='cookies') + raise + token = self._html_search_regex(r'data-token="([^"]+)"', webpage, 'token') + video_url = f'https://recurbate.com/api/get.php?video={video_id}&token={token}' + + video_webpage = self._download_webpage(video_url, video_id) + if video_webpage == 'shall_subscribe': + self.raise_login_required(msg=SUBSCRIPTION_MISSING_MESSAGE, method='cookies') + entries = self._parse_html5_media_entries(video_url, video_webpage, video_id) + return merge_dicts({ + 'id': video_id, + 'title': self._html_extract_title(webpage, 'title'), + 'description': self._og_search_description(webpage), + 'age_limit': self._rta_search(webpage), + }, entries[0]) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/redbee.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/redbee.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/redbee.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/redbee.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/redbulltv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/redbulltv.py new file mode 100644 index 0000000..d1de249 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/redbulltv.py @@ -0,0 +1,224 @@ +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + float_or_none, + ExtractorError, +) + + +class RedBullTVIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?redbull(?:\.tv|\.com(?:/[^/]+)?(?:/tv)?)(?:/events/[^/]+)?/(?:videos?|live|(?:film|episode)s)/(?P<id>AP-\w+)' + _TESTS = [{ + # film + 'url': 'https://www.redbull.tv/video/AP-1Q6XCDTAN1W11', + 'md5': 'fb0445b98aa4394e504b413d98031d1f', + 'info_dict': { + 'id': 'AP-1Q6XCDTAN1W11', + 'ext': 'mp4', + 'title': 'ABC of... WRC - ABC of... 
S1E6', + 'description': 'md5:5c7ed8f4015c8492ecf64b6ab31e7d31', + 'duration': 1582.04, + }, + }, { + # episode + 'url': 'https://www.redbull.tv/video/AP-1PMHKJFCW1W11', + 'info_dict': { + 'id': 'AP-1PMHKJFCW1W11', + 'ext': 'mp4', + 'title': 'Grime - Hashtags S2E4', + 'description': 'md5:5546aa612958c08a98faaad4abce484d', + 'duration': 904, + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'https://www.redbull.com/int-en/tv/video/AP-1UWHCAR9S1W11/rob-meets-sam-gaze?playlist=playlists::3f81040a-2f31-4832-8e2e-545b1d39d173', + 'only_matching': True, + }, { + 'url': 'https://www.redbull.com/us-en/videos/AP-1YM9QCYE52111', + 'only_matching': True, + }, { + 'url': 'https://www.redbull.com/us-en/events/AP-1XV2K61Q51W11/live/AP-1XUJ86FDH1W11', + 'only_matching': True, + }, { + 'url': 'https://www.redbull.com/int-en/films/AP-1ZSMAW8FH2111', + 'only_matching': True, + }, { + 'url': 'https://www.redbull.com/int-en/episodes/AP-1TQWK7XE11W11', + 'only_matching': True, + }] + + def extract_info(self, video_id): + session = self._download_json( + 'https://api.redbull.tv/v3/session', video_id, + note='Downloading access token', query={ + 'category': 'personal_computer', + 'os_family': 'http', + }) + if session.get('code') == 'error': + raise ExtractorError('%s said: %s' % ( + self.IE_NAME, session['message'])) + token = session['token'] + + try: + video = self._download_json( + 'https://api.redbull.tv/v3/products/' + video_id, + video_id, note='Downloading video information', + headers={'Authorization': token} + ) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 404: + error_message = self._parse_json( + e.cause.response.read().decode(), video_id)['error'] + raise ExtractorError('%s said: %s' % ( + self.IE_NAME, error_message), expected=True) + raise + + title = video['title'].strip() + + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + 'https://dms.redbull.tv/v3/%s/%s/playlist.m3u8' % (video_id, token), + video_id, 'mp4', entry_protocol='m3u8_native', m3u8_id='hls') + + for resource in video.get('resources', []): + if resource.startswith('closed_caption_'): + splitted_resource = resource.split('_') + if splitted_resource[2]: + subtitles.setdefault('en', []).append({ + 'url': 'https://resources.redbull.tv/%s/%s' % (video_id, resource), + 'ext': splitted_resource[2], + }) + + subheading = video.get('subheading') + if subheading: + title += ' - %s' % subheading + + return { + 'id': video_id, + 'title': title, + 'description': video.get('long_description') or video.get( + 'short_description'), + 'duration': float_or_none(video.get('duration'), scale=1000), + 'formats': formats, + 'subtitles': subtitles, + } + + def _real_extract(self, url): + video_id = self._match_id(url) + return self.extract_info(video_id) + + +class RedBullEmbedIE(RedBullTVIE): # XXX: Do not subclass from concrete IE + _VALID_URL = r'https?://(?:www\.)?redbull\.com/embed/(?P<id>rrn:content:[^:]+:[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}:[a-z]{2}-[A-Z]{2,3})' + _TESTS = [{ + # HLS manifest accessible only using assetId + 'url': 'https://www.redbull.com/embed/rrn:content:episode-videos:f3021f4f-3ed4-51ac-915a-11987126e405:en-INT', + 'only_matching': True, + }] + _VIDEO_ESSENSE_TMPL = '''... 
on %s { + videoEssence { + attributes + } + }''' + + def _real_extract(self, url): + rrn_id = self._match_id(url) + asset_id = self._download_json( + 'https://edge-graphql.crepo-production.redbullaws.com/v1/graphql', + rrn_id, headers={ + 'Accept': 'application/json', + 'API-KEY': 'e90a1ff11335423998b100c929ecc866', + }, query={ + 'query': '''{ + resource(id: "%s", enforceGeoBlocking: false) { + %s + %s + } +}''' % (rrn_id, self._VIDEO_ESSENSE_TMPL % 'LiveVideo', self._VIDEO_ESSENSE_TMPL % 'VideoResource'), + })['data']['resource']['videoEssence']['attributes']['assetId'] + return self.extract_info(asset_id) + + +class RedBullTVRrnContentIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?redbull\.com/(?P<region>[a-z]{2,3})-(?P<lang>[a-z]{2})/tv/(?:video|live|film)/(?P<id>rrn:content:[^:]+:[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})' + _TESTS = [{ + 'url': 'https://www.redbull.com/int-en/tv/video/rrn:content:live-videos:e3e6feb4-e95f-50b7-962a-c70f8fd13c73/mens-dh-finals-fort-william', + 'only_matching': True, + }, { + 'url': 'https://www.redbull.com/int-en/tv/video/rrn:content:videos:a36a0f36-ff1b-5db8-a69d-ee11a14bf48b/tn-ts-style?playlist=rrn:content:event-profiles:83f05926-5de8-5389-b5e4-9bb312d715e8:extras', + 'only_matching': True, + }, { + 'url': 'https://www.redbull.com/int-en/tv/film/rrn:content:films:d1f4d00e-4c04-5d19-b510-a805ffa2ab83/follow-me', + 'only_matching': True, + }] + + def _real_extract(self, url): + region, lang, rrn_id = self._match_valid_url(url).groups() + rrn_id += ':%s-%s' % (lang, region.upper()) + return self.url_result( + 'https://www.redbull.com/embed/' + rrn_id, + RedBullEmbedIE.ie_key(), rrn_id) + + +class RedBullIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?redbull\.com/(?P<region>[a-z]{2,3})-(?P<lang>[a-z]{2})/(?P<type>(?:episode|film|(?:(?:recap|trailer)-)?video)s|live)/(?!AP-|rrn:content:)(?P<id>[^/?#&]+)' + _TESTS = [{ + 'url': 'https://www.redbull.com/int-en/episodes/grime-hashtags-s02-e04', + 'md5': 'db8271a7200d40053a1809ed0dd574ff', + 'info_dict': { + 'id': 'AA-1MT8DQWA91W14', + 'ext': 'mp4', + 'title': 'Grime - Hashtags S2E4', + 'description': 'md5:5546aa612958c08a98faaad4abce484d', + }, + }, { + 'url': 'https://www.redbull.com/int-en/films/kilimanjaro-mountain-of-greatness', + 'only_matching': True, + }, { + 'url': 'https://www.redbull.com/int-en/recap-videos/uci-mountain-bike-world-cup-2017-mens-xco-finals-from-vallnord', + 'only_matching': True, + }, { + 'url': 'https://www.redbull.com/int-en/trailer-videos/kings-of-content', + 'only_matching': True, + }, { + 'url': 'https://www.redbull.com/int-en/videos/tnts-style-red-bull-dance-your-style-s1-e12', + 'only_matching': True, + }, { + 'url': 'https://www.redbull.com/int-en/live/mens-dh-finals-fort-william', + 'only_matching': True, + }, { + # only available on the int-en website so a fallback is need for the API + # https://www.redbull.com/v3/api/graphql/v1/v3/query/en-GB>en-INT?filter[uriSlug]=fia-wrc-saturday-recap-estonia&rb3Schema=v1:hero + 'url': 'https://www.redbull.com/gb-en/live/fia-wrc-saturday-recap-estonia', + 'only_matching': True, + }] + _INT_FALLBACK_LIST = ['de', 'en', 'es', 'fr'] + _LAT_FALLBACK_MAP = ['ar', 'bo', 'car', 'cl', 'co', 'mx', 'pe'] + + def _real_extract(self, url): + region, lang, filter_type, display_id = self._match_valid_url(url).groups() + if filter_type == 'episodes': + filter_type = 'episode-videos' + elif filter_type == 'live': + filter_type = 'live-videos' + + regions = [region.upper()] + if region != 'int': + if region in 
self._LAT_FALLBACK_MAP: + regions.append('LAT') + if lang in self._INT_FALLBACK_LIST: + regions.append('INT') + locale = '>'.join(['%s-%s' % (lang, reg) for reg in regions]) + + rrn_id = self._download_json( + 'https://www.redbull.com/v3/api/graphql/v1/v3/query/' + locale, + display_id, query={ + 'filter[type]': filter_type, + 'filter[uriSlug]': display_id, + 'rb3Schema': 'v1:hero', + })['data']['id'] + + return self.url_result( + 'https://www.redbull.com/embed/' + rrn_id, + RedBullEmbedIE.ie_key(), rrn_id) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/reddit.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/reddit.py new file mode 100644 index 0000000..62f669f --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/reddit.py @@ -0,0 +1,353 @@ +import urllib.parse + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + float_or_none, + int_or_none, + traverse_obj, + try_get, + unescapeHTML, + urlencode_postdata, + url_or_none, +) + + +class RedditIE(InfoExtractor): + _NETRC_MACHINE = 'reddit' + _VALID_URL = r'https?://(?P<host>(?:\w+\.)?reddit(?:media)?\.com)/(?P<slug>(?:(?:r|user)/[^/]+/)?comments/(?P<id>[^/?#&]+))' + _TESTS = [{ + 'url': 'https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/', + 'info_dict': { + 'id': 'zv89llsvexdz', + 'ext': 'mp4', + 'display_id': '6rrwyj', + 'title': 'That small heart attack.', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'thumbnails': 'count:4', + 'timestamp': 1501941939, + 'upload_date': '20170805', + 'uploader': 'Antw87', + 'duration': 12, + 'like_count': int, + 'dislike_count': int, + 'comment_count': int, + 'age_limit': 0, + 'channel_id': 'videos', + }, + 'params': { + 'skip_download': True, + }, + }, { + # 1080p fallback format + 'url': 'https://www.reddit.com/r/aww/comments/90bu6w/heat_index_was_110_degrees_so_we_offered_him_a/', + 'md5': '8b5902cfda3006bf90faea7adf765a49', + 'info_dict': { + 'id': 'gyh95hiqc0b11', + 'ext': 'mp4', + 'display_id': '90bu6w', + 'title': 'Heat index was 110 degrees so we offered him a cold drink. He went for a full body soak instead', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'thumbnails': 'count:7', + 'timestamp': 1532051078, + 'upload_date': '20180720', + 'uploader': 'FootLoosePickleJuice', + 'duration': 14, + 'like_count': int, + 'dislike_count': int, + 'comment_count': int, + 'age_limit': 0, + 'channel_id': 'aww', + }, + }, { + # User post + 'url': 'https://www.reddit.com/user/creepyt0es/comments/nip71r/i_plan_to_make_more_stickers_and_prints_check/', + 'info_dict': { + 'id': 'zasobba6wp071', + 'ext': 'mp4', + 'display_id': 'nip71r', + 'title': 'I plan to make more stickers and prints! Check them out on my Etsy! Or get them through my Patreon. 
Links below.', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'thumbnails': 'count:5', + 'timestamp': 1621709093, + 'upload_date': '20210522', + 'uploader': 'creepyt0es', + 'duration': 6, + 'like_count': int, + 'dislike_count': int, + 'comment_count': int, + 'age_limit': 0, + 'channel_id': 'u_creepyt0es', + }, + 'params': { + 'skip_download': True, + }, + }, { + # videos embedded in reddit text post + 'url': 'https://www.reddit.com/r/KamenRider/comments/wzqkxp/finale_kamen_rider_revice_episode_50_family_to/', + 'playlist_count': 2, + 'info_dict': { + 'id': 'wzqkxp', + 'title': 'md5:72d3d19402aa11eff5bd32fc96369b37', + }, + }, { + # crossposted reddit-hosted media + 'url': 'https://www.reddit.com/r/dumbfuckers_club/comments/zjjw82/cringe/', + 'md5': '746180895c7b75a9d6b05341f507699a', + 'info_dict': { + 'id': 'a1oneun6pa5a1', + 'ext': 'mp4', + 'display_id': 'zjjw82', + 'title': 'Cringe', + 'uploader': 'Otaku-senpai69420', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'upload_date': '20221212', + 'timestamp': 1670812309, + 'duration': 16, + 'like_count': int, + 'dislike_count': int, + 'comment_count': int, + 'age_limit': 0, + 'channel_id': 'dumbfuckers_club', + }, + }, { + # post link without subreddit + 'url': 'https://www.reddit.com/comments/124pp33', + 'md5': '15eec9d828adcef4468b741a7e45a395', + 'info_dict': { + 'id': 'antsenjc2jqa1', + 'ext': 'mp4', + 'display_id': '124pp33', + 'title': 'Harmless prank of some old friends', + 'uploader': 'Dudezila', + 'channel_id': 'ContagiousLaughter', + 'duration': 17, + 'upload_date': '20230328', + 'timestamp': 1680012043, + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'age_limit': 0, + 'comment_count': int, + 'dislike_count': int, + 'like_count': int, + }, + }, { + # quarantined subreddit post + 'url': 'https://old.reddit.com/r/GenZedong/comments/12fujy3/based_hasan/', + 'md5': '3156ea69e3c1f1b6259683c5abd36e71', + 'info_dict': { + 'id': '8bwtclfggpsa1', + 'ext': 'mp4', + 'display_id': '12fujy3', + 'title': 'Based Hasan?', + 'uploader': 'KingNigelXLII', + 'channel_id': 'GenZedong', + 'duration': 16, + 'upload_date': '20230408', + 'timestamp': 1680979138, + 'age_limit': 0, + 'comment_count': int, + 'dislike_count': int, + 'like_count': int, + }, + 'skip': 'Requires account that has opted-in to the GenZedong subreddit', + }, { + 'url': 'https://www.reddit.com/r/videos/comments/6rrwyj', + 'only_matching': True, + }, { + # imgur + 'url': 'https://www.reddit.com/r/MadeMeSmile/comments/6t7wi5/wait_for_it/', + 'only_matching': True, + }, { + # imgur @ old reddit + 'url': 'https://old.reddit.com/r/MadeMeSmile/comments/6t7wi5/wait_for_it/', + 'only_matching': True, + }, { + # streamable + 'url': 'https://www.reddit.com/r/videos/comments/6t7sg9/comedians_hilarious_joke_about_the_guam_flag/', + 'only_matching': True, + }, { + # youtube + 'url': 'https://www.reddit.com/r/videos/comments/6t75wq/southern_man_tries_to_speak_without_an_accent/', + 'only_matching': True, + }, { + # reddit video @ nm reddit + 'url': 'https://nm.reddit.com/r/Cricket/comments/8idvby/lousy_cameraman_finds_himself_in_cairns_line_of/', + 'only_matching': True, + }, { + 'url': 'https://www.redditmedia.com/r/serbia/comments/pu9wbx/ako_vu%C4%8Di%C4%87_izgubi_izbore_ja_%C4%87u_da_crknem/', + 'only_matching': True, + }] + + def _perform_login(self, username, password): + captcha = self._download_json( + 'https://www.reddit.com/api/requires_captcha/login.json', None, + 'Checking login requirement')['required'] + if captcha: + raise ExtractorError('Reddit is requiring captcha before 
login', expected=True) + login = self._download_json( + f'https://www.reddit.com/api/login/{username}', None, data=urlencode_postdata({ + 'op': 'login-main', + 'user': username, + 'passwd': password, + 'api_type': 'json', + }), note='Logging in', errnote='Login request failed') + errors = '; '.join(traverse_obj(login, ('json', 'errors', ..., 1))) + if errors: + raise ExtractorError(f'Unable to login, Reddit API says {errors}', expected=True) + elif not traverse_obj(login, ('json', 'data', 'cookie', {str})): + raise ExtractorError('Unable to login, no cookie was returned') + + def _real_extract(self, url): + host, slug, video_id = self._match_valid_url(url).group('host', 'slug', 'id') + + data = self._download_json( + f'https://{host}/{slug}/.json', video_id, fatal=False, expected_status=403) + if not data: + fallback_host = 'old.reddit.com' if host != 'old.reddit.com' else 'www.reddit.com' + self.to_screen(f'{host} request failed, retrying with {fallback_host}') + data = self._download_json( + f'https://{fallback_host}/{slug}/.json', video_id, expected_status=403) + + if traverse_obj(data, 'error') == 403: + reason = data.get('reason') + if reason == 'quarantined': + self.raise_login_required('Quarantined subreddit; an account that has opted in is required') + elif reason == 'private': + self.raise_login_required('Private subreddit; an account that has been approved is required') + else: + raise ExtractorError(f'HTTP Error 403 Forbidden; reason given: {reason}') + + data = data[0]['data']['children'][0]['data'] + video_url = data['url'] + + over_18 = data.get('over_18') + if over_18 is True: + age_limit = 18 + elif over_18 is False: + age_limit = 0 + else: + age_limit = None + + thumbnails = [] + + def add_thumbnail(src): + if not isinstance(src, dict): + return + thumbnail_url = url_or_none(src.get('url')) + if not thumbnail_url: + return + thumbnails.append({ + 'url': unescapeHTML(thumbnail_url), + 'width': int_or_none(src.get('width')), + 'height': int_or_none(src.get('height')), + 'http_headers': {'Accept': '*/*'}, + }) + + for image in try_get(data, lambda x: x['preview']['images']) or []: + if not isinstance(image, dict): + continue + add_thumbnail(image.get('source')) + resolutions = image.get('resolutions') + if isinstance(resolutions, list): + for resolution in resolutions: + add_thumbnail(resolution) + + info = { + 'title': data.get('title'), + 'thumbnails': thumbnails, + 'timestamp': float_or_none(data.get('created_utc')), + 'uploader': data.get('author'), + 'channel_id': data.get('subreddit'), + 'like_count': int_or_none(data.get('ups')), + 'dislike_count': int_or_none(data.get('downs')), + 'comment_count': int_or_none(data.get('num_comments')), + 'age_limit': age_limit, + } + + parsed_url = urllib.parse.urlparse(video_url) + + # Check for embeds in text posts, or else raise to avoid recursing into the same reddit URL + if 'reddit.com' in parsed_url.netloc and f'/{video_id}/' in parsed_url.path: + entries = [] + for media in traverse_obj(data, ('media_metadata', ...), expected_type=dict): + if not media.get('id') or media.get('e') != 'RedditVideo': + continue + formats = [] + if media.get('hlsUrl'): + formats.extend(self._extract_m3u8_formats( + unescapeHTML(media['hlsUrl']), video_id, 'mp4', m3u8_id='hls', fatal=False)) + if media.get('dashUrl'): + formats.extend(self._extract_mpd_formats( + unescapeHTML(media['dashUrl']), video_id, mpd_id='dash', fatal=False)) + if formats: + entries.append({ + 'id': media['id'], + 'display_id': video_id, + 'formats': formats, + **info, + }) 
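+ # Each RedditVideo entry in media_metadata carries its own HLS and DASH
+ # manifests; both are extracted non-fatally and merged per media id, and
+ # any surviving entries are returned as a playlist keyed on the post id.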
+ if entries: + return self.playlist_result(entries, video_id, info.get('title')) + raise ExtractorError('No media found', expected=True) + + # Check if media is hosted on reddit: + reddit_video = traverse_obj(data, ( + (None, ('crosspost_parent_list', ...)), ('secure_media', 'media'), 'reddit_video'), get_all=False) + if reddit_video: + playlist_urls = [ + try_get(reddit_video, lambda x: unescapeHTML(x[y])) + for y in ('dash_url', 'hls_url') + ] + + # Update video_id + display_id = video_id + video_id = self._search_regex( + r'https?://v\.redd\.it/(?P<id>[^/?#&]+)', reddit_video['fallback_url'], + 'video_id', default=display_id) + + dash_playlist_url = playlist_urls[0] or f'https://v.redd.it/{video_id}/DASHPlaylist.mpd' + hls_playlist_url = playlist_urls[1] or f'https://v.redd.it/{video_id}/HLSPlaylist.m3u8' + + formats = [{ + 'url': unescapeHTML(reddit_video['fallback_url']), + 'height': int_or_none(reddit_video.get('height')), + 'width': int_or_none(reddit_video.get('width')), + 'tbr': int_or_none(reddit_video.get('bitrate_kbps')), + 'acodec': 'none', + 'vcodec': 'h264', + 'ext': 'mp4', + 'format_id': 'fallback', + 'format_note': 'DASH video, mp4_dash', + }] + hls_fmts, subtitles = self._extract_m3u8_formats_and_subtitles( + hls_playlist_url, display_id, 'mp4', m3u8_id='hls', fatal=False) + formats.extend(hls_fmts) + dash_fmts, dash_subs = self._extract_mpd_formats_and_subtitles( + dash_playlist_url, display_id, mpd_id='dash', fatal=False) + formats.extend(dash_fmts) + self._merge_subtitles(dash_subs, target=subtitles) + + return { + **info, + 'id': video_id, + 'display_id': display_id, + 'formats': formats, + 'subtitles': subtitles, + 'duration': int_or_none(reddit_video.get('duration')), + } + + if parsed_url.netloc == 'v.redd.it': + self.raise_no_formats('This video is processing', expected=True, video_id=video_id) + return { + **info, + 'id': parsed_url.path.split('/')[1], + 'display_id': video_id, + } + + # Not hosted on reddit, must continue extraction + return { + **info, + 'display_id': video_id, + '_type': 'url_transparent', + 'url': video_url, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/redgifs.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/redgifs.py new file mode 100644 index 0000000..f945320 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/redgifs.py @@ -0,0 +1,260 @@ +import functools + +from .common import InfoExtractor +from ..compat import compat_parse_qs +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + int_or_none, + qualities, + try_get, + OnDemandPagedList, +) + + +class RedGifsBaseInfoExtractor(InfoExtractor): + _FORMATS = { + 'gif': 250, + 'sd': 480, + 'hd': None, + } + + _API_HEADERS = { + 'referer': 'https://www.redgifs.com/', + 'origin': 'https://www.redgifs.com', + 'content-type': 'application/json', + } + + def _parse_gif_data(self, gif_data): + video_id = gif_data.get('id') + quality = qualities(tuple(self._FORMATS.keys())) + + orig_height = int_or_none(gif_data.get('height')) + aspect_ratio = try_get(gif_data, lambda x: orig_height / x['width']) + + formats = [] + for format_id, height in self._FORMATS.items(): + video_url = gif_data['urls'].get(format_id) + if not video_url: + continue + height = min(orig_height, height or orig_height) + formats.append({ + 'url': video_url, + 'format_id': format_id, + 'width': height * aspect_ratio if aspect_ratio else None, + 'height': height, + 'quality': quality(format_id), + }) + + return { + 'id': video_id, + 
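# 'extractor'/'extractor_key' are pinned so entries yielded directly from the
+ # search and user playlists below resolve to RedGifsIE without re-extraction. +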
'webpage_url': f'https://redgifs.com/watch/{video_id}', + 'extractor_key': RedGifsIE.ie_key(), + 'extractor': 'RedGifs', + 'title': ' '.join(gif_data.get('tags') or []) or 'RedGifs', + 'timestamp': int_or_none(gif_data.get('createDate')), + 'uploader': gif_data.get('userName'), + 'duration': int_or_none(gif_data.get('duration')), + 'view_count': int_or_none(gif_data.get('views')), + 'like_count': int_or_none(gif_data.get('likes')), + 'categories': gif_data.get('tags') or [], + 'tags': gif_data.get('tags'), + 'age_limit': 18, + 'formats': formats, + } + + def _fetch_oauth_token(self, video_id): + # https://github.com/Redgifs/api/wiki/Temporary-tokens + auth = self._download_json('https://api.redgifs.com/v2/auth/temporary', + video_id, note='Fetching temporary token') + if not auth.get('token'): + raise ExtractorError('Unable to get temporary token') + self._API_HEADERS['authorization'] = f'Bearer {auth["token"]}' + + def _call_api(self, ep, video_id, *args, **kwargs): + for first_attempt in True, False: + if 'authorization' not in self._API_HEADERS: + self._fetch_oauth_token(video_id) + try: + headers = dict(self._API_HEADERS) + headers['x-customheader'] = f'https://www.redgifs.com/watch/{video_id}' + data = self._download_json( + f'https://api.redgifs.com/v2/{ep}', video_id, headers=headers, *args, **kwargs) + break + except ExtractorError as e: + if first_attempt and isinstance(e.cause, HTTPError) and e.cause.status == 401: + del self._API_HEADERS['authorization'] # refresh the token + continue + raise + + if 'error' in data: + raise ExtractorError(f'RedGifs said: {data["error"]}', expected=True, video_id=video_id) + return data + + def _fetch_page(self, ep, video_id, query, page): + query['page'] = page + 1 + data = self._call_api( + ep, video_id, query=query, note=f'Downloading JSON metadata page {page + 1}') + + for entry in data['gifs']: + yield self._parse_gif_data(entry) + + def _prepare_api_query(self, query, fields): + api_query = [ + (field_name, query.get(field_name, (default,))[0]) + for field_name, default in fields.items()] + + return {key: val for key, val in api_query if val is not None} + + def _paged_entries(self, ep, item_id, query, fields): + page = int_or_none(query.get('page', (None,))[0]) + page_fetcher = functools.partial( + self._fetch_page, ep, item_id, self._prepare_api_query(query, fields)) + return page_fetcher(page) if page else OnDemandPagedList(page_fetcher, self._PAGE_SIZE) + + +class RedGifsIE(RedGifsBaseInfoExtractor): + _VALID_URL = r'https?://(?:(?:www\.)?redgifs\.com/watch/|thumbs2\.redgifs\.com/)(?P<id>[^-/?#\.]+)' + _TESTS = [{ + 'url': 'https://www.redgifs.com/watch/squeakyhelplesswisent', + 'info_dict': { + 'id': 'squeakyhelplesswisent', + 'ext': 'mp4', + 'title': 'Hotwife Legs Thick', + 'timestamp': 1636287915, + 'upload_date': '20211107', + 'uploader': 'ignored52', + 'duration': 16, + 'view_count': int, + 'like_count': int, + 'categories': list, + 'age_limit': 18, + 'tags': list, + } + }, { + 'url': 'https://thumbs2.redgifs.com/SqueakyHelplessWisent-mobile.mp4#t=0', + 'info_dict': { + 'id': 'squeakyhelplesswisent', + 'ext': 'mp4', + 'title': 'Hotwife Legs Thick', + 'timestamp': 1636287915, + 'upload_date': '20211107', + 'uploader': 'ignored52', + 'duration': 16, + 'view_count': int, + 'like_count': int, + 'categories': list, + 'age_limit': 18, + 'tags': list, + } + }] + + def _real_extract(self, url): + video_id = self._match_id(url).lower() + video_info = self._call_api( + f'gifs/{video_id}?views=yes', video_id, note='Downloading video info') + 
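# The v2 API wraps the media object in a 'gif' key; format selection and
+ # metadata mapping are delegated to _parse_gif_data above. +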
return self._parse_gif_data(video_info['gif']) + + +class RedGifsSearchIE(RedGifsBaseInfoExtractor): + IE_DESC = 'Redgifs search' + _VALID_URL = r'https?://(?:www\.)?redgifs\.com/browse\?(?P<query>[^#]+)' + _PAGE_SIZE = 80 + _TESTS = [ + { + 'url': 'https://www.redgifs.com/browse?tags=Lesbian', + 'info_dict': { + 'id': 'tags=Lesbian', + 'title': 'Lesbian', + 'description': 'RedGifs search for Lesbian, ordered by trending' + }, + 'playlist_mincount': 100, + }, + { + 'url': 'https://www.redgifs.com/browse?type=g&order=latest&tags=Lesbian', + 'info_dict': { + 'id': 'type=g&order=latest&tags=Lesbian', + 'title': 'Lesbian', + 'description': 'RedGifs search for Lesbian, ordered by latest' + }, + 'playlist_mincount': 100, + }, + { + 'url': 'https://www.redgifs.com/browse?type=g&order=latest&tags=Lesbian&page=2', + 'info_dict': { + 'id': 'type=g&order=latest&tags=Lesbian&page=2', + 'title': 'Lesbian', + 'description': 'RedGifs search for Lesbian, ordered by latest' + }, + 'playlist_count': 80, + } + ] + + def _real_extract(self, url): + query_str = self._match_valid_url(url).group('query') + query = compat_parse_qs(query_str) + if not query.get('tags'): + raise ExtractorError('Invalid query tags', expected=True) + + tags = query.get('tags')[0] + order = query.get('order', ('trending',))[0] + + query['search_text'] = [tags] + entries = self._paged_entries('gifs/search', query_str, query, { + 'search_text': None, + 'order': 'trending', + 'type': None, + }) + + return self.playlist_result( + entries, query_str, tags, f'RedGifs search for {tags}, ordered by {order}') + + +class RedGifsUserIE(RedGifsBaseInfoExtractor): + IE_DESC = 'Redgifs user' + _VALID_URL = r'https?://(?:www\.)?redgifs\.com/users/(?P<username>[^/?#]+)(?:\?(?P<query>[^#]+))?' + _PAGE_SIZE = 30 + _TESTS = [ + { + 'url': 'https://www.redgifs.com/users/lamsinka89', + 'info_dict': { + 'id': 'lamsinka89', + 'title': 'lamsinka89', + 'description': 'RedGifs user lamsinka89, ordered by recent' + }, + 'playlist_mincount': 100, + }, + { + 'url': 'https://www.redgifs.com/users/lamsinka89?page=3', + 'info_dict': { + 'id': 'lamsinka89?page=3', + 'title': 'lamsinka89', + 'description': 'RedGifs user lamsinka89, ordered by recent' + }, + 'playlist_count': 30, + }, + { + 'url': 'https://www.redgifs.com/users/lamsinka89?order=best&type=g', + 'info_dict': { + 'id': 'lamsinka89?order=best&type=g', + 'title': 'lamsinka89', + 'description': 'RedGifs user lamsinka89, ordered by best' + }, + 'playlist_mincount': 100, + } + ] + + def _real_extract(self, url): + username, query_str = self._match_valid_url(url).group('username', 'query') + playlist_id = f'{username}?{query_str}' if query_str else username + + query = compat_parse_qs(query_str) + order = query.get('order', ('recent',))[0] + + entries = self._paged_entries(f'users/{username}/search', playlist_id, query, { + 'order': 'recent', + 'type': None, + }) + + return self.playlist_result( + entries, playlist_id, username, f'RedGifs user {username}, ordered by {order}') diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/redtube.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/redtube.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/redtube.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/redtube.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/regiotv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/regiotv.py new file mode 100644 index 0000000..edb6ae5 --- /dev/null +++ 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/regiotv.py @@ -0,0 +1,55 @@ +from .common import InfoExtractor +from ..networking import Request +from ..utils import xpath_text, xpath_with_ns + + +class RegioTVIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?regio-tv\.de/video/(?P<id>[0-9]+)' + _TESTS = [{ + 'url': 'http://www.regio-tv.de/video/395808.html', + 'info_dict': { + 'id': '395808', + 'ext': 'mp4', + 'title': 'Wir in Ludwigsburg', + 'description': 'Mit unseren zuckersüßen Adventskindern, außerdem besuchen wir die Abendsterne!', + } + }, { + 'url': 'http://www.regio-tv.de/video/395808', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage(url, video_id) + + key = self._search_regex( + r'key\s*:\s*(["\'])(?P<key>.+?)\1', webpage, 'key', group='key') + title = self._og_search_title(webpage) + + SOAP_TEMPLATE = '<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><{0} xmlns="http://v.telvi.de/"><key xsi:type="xsd:string">{1}</key></{0}></soap:Body></soap:Envelope>' + + request = Request( + 'http://v.telvi.de/', + SOAP_TEMPLATE.format('GetHTML5VideoData', key).encode('utf-8')) + video_data = self._download_xml(request, video_id, 'Downloading video XML') + + NS_MAP = { + 'xsi': 'http://www.w3.org/2001/XMLSchema-instance', + 'soap': 'http://schemas.xmlsoap.org/soap/envelope/', + } + + video_url = xpath_text( + video_data, xpath_with_ns('.//video', NS_MAP), 'video url', fatal=True) + thumbnail = xpath_text( + video_data, xpath_with_ns('.//image', NS_MAP), 'thumbnail') + description = self._og_search_description( + webpage) or self._html_search_meta('description', webpage) + + return { + 'id': video_id, + 'url': video_url, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rentv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rentv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rentv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rentv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/restudy.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/restudy.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/restudy.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/restudy.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/reuters.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/reuters.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/reuters.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/reuters.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/reverbnation.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/reverbnation.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/reverbnation.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/reverbnation.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rheinmaintv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rheinmaintv.py new file mode 100644 index 0000000..c3b352d --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rheinmaintv.py @@ -0,0 +1,94 @@ +from .common import InfoExtractor +from ..utils 
import extract_attributes, merge_dicts, remove_end + + +class RheinMainTVIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?rheinmaintv\.de/sendungen/(?:[\w-]+/)*(?P<video_id>(?P<display_id>[\w-]+)/vom-\d{2}\.\d{2}\.\d{4}(?:/\d+)?)' + _TESTS = [{ + 'url': 'https://www.rheinmaintv.de/sendungen/beitrag-video/auf-dem-weg-zur-deutschen-meisterschaft/vom-07.11.2022/', + 'info_dict': { + 'id': 'auf-dem-weg-zur-deutschen-meisterschaft-vom-07.11.2022', + 'ext': 'ismv', # ismv+isma will be merged into mp4 + 'alt_title': 'Auf dem Weg zur Deutschen Meisterschaft', + 'title': 'Auf dem Weg zur Deutschen Meisterschaft', + 'upload_date': '20221108', + 'view_count': int, + 'display_id': 'auf-dem-weg-zur-deutschen-meisterschaft', + 'thumbnail': r're:^https://.+\.jpg', + 'description': 'md5:48c59b74192bc819a9b34af1d5ed1eb9', + 'timestamp': 1667933057, + 'duration': 243.0, + }, + 'params': {'skip_download': 'ism'}, + }, { + 'url': 'https://www.rheinmaintv.de/sendungen/beitrag-video/formationsgemeinschaft-rhein-main-bei-den-deutschen-meisterschaften/vom-14.11.2022/', + 'info_dict': { + 'id': 'formationsgemeinschaft-rhein-main-bei-den-deutschen-meisterschaften-vom-14.11.2022', + 'ext': 'ismv', + 'title': 'Formationsgemeinschaft Rhein-Main bei den Deutschen Meisterschaften', + 'timestamp': 1668526214, + 'display_id': 'formationsgemeinschaft-rhein-main-bei-den-deutschen-meisterschaften', + 'alt_title': 'Formationsgemeinschaft Rhein-Main bei den Deutschen Meisterschaften', + 'view_count': int, + 'thumbnail': r're:^https://.+\.jpg', + 'duration': 345.0, + 'description': 'md5:9370ba29526984006c2cba1372e5c5a0', + 'upload_date': '20221115', + }, + 'params': {'skip_download': 'ism'}, + }, { + 'url': 'https://www.rheinmaintv.de/sendungen/beitrag-video/casino-mainz-bei-den-deutschen-meisterschaften/vom-14.11.2022/', + 'info_dict': { + 'id': 'casino-mainz-bei-den-deutschen-meisterschaften-vom-14.11.2022', + 'ext': 'ismv', + 'title': 'Casino Mainz bei den Deutschen Meisterschaften', + 'view_count': int, + 'timestamp': 1668527402, + 'alt_title': 'Casino Mainz bei den Deutschen Meisterschaften', + 'upload_date': '20221115', + 'display_id': 'casino-mainz-bei-den-deutschen-meisterschaften', + 'duration': 348.0, + 'thumbnail': r're:^https://.+\.jpg', + 'description': 'md5:70fc1660eeba96da17199e5bdff4c0aa', + }, + 'params': {'skip_download': 'ism'}, + }, { + 'url': 'https://www.rheinmaintv.de/sendungen/beitrag-video/bricks4kids/vom-22.06.2022/', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + display_id = mobj.group('display_id') + video_id = mobj.group('video_id').replace('/', '-') + webpage = self._download_webpage(url, video_id) + + source, img = self._search_regex(r'(?s)(?P<source><source[^>]*>)(?P<img><img[^>]*>)', + webpage, 'video', group=('source', 'img')) + source = extract_attributes(source) + img = extract_attributes(img) + + raw_json_ld = list(self._yield_json_ld(webpage, video_id)) + json_ld = self._json_ld(raw_json_ld, video_id) + json_ld.pop('url', None) + + ism_manifest_url = ( + source.get('src') + or next(json_ld.get('embedUrl') for json_ld in raw_json_ld if json_ld.get('@type') == 'VideoObject') + ) + formats, subtitles = self._extract_ism_formats_and_subtitles(ism_manifest_url, video_id) + + return merge_dicts({ + 'id': video_id, + 'display_id': display_id, + 'title': + self._html_search_regex(r'<h1><span class="title">([^<]*)</span>', + webpage, 'headline', default=None) + or img.get('title') or json_ld.get('title') or self._og_search_title(webpage) 
+ or remove_end(self._html_extract_title(webpage), ' -'), + 'alt_title': img.get('alt'), + 'description': json_ld.get('description') or self._og_search_description(webpage), + 'formats': formats, + 'subtitles': subtitles, + 'thumbnails': [{'url': img['src']}] if 'src' in img else json_ld.get('thumbnails'), + }, json_ld) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rice.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rice.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rice.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rice.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rmcdecouverte.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rmcdecouverte.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rmcdecouverte.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rmcdecouverte.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rockstargames.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rockstargames.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rockstargames.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rockstargames.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rokfin.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rokfin.py new file mode 100644 index 0000000..cad76f0 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rokfin.py @@ -0,0 +1,456 @@ +import itertools +import json +import re +import urllib.parse +from datetime import datetime + +from .common import InfoExtractor, SearchInfoExtractor +from ..utils import ( + ExtractorError, + determine_ext, + float_or_none, + format_field, + int_or_none, + str_or_none, + traverse_obj, + try_get, + unescapeHTML, + unified_timestamp, + url_or_none, + urlencode_postdata, +) + +_API_BASE_URL = 'https://prod-api-v2.production.rokfin.com/api/v2/public/' + + +class RokfinIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?rokfin\.com/(?P<id>(?P<type>post|stream)/\d+)' + _NETRC_MACHINE = 'rokfin' + _AUTH_BASE = 'https://secure.rokfin.com/auth/realms/rokfin-web/protocol/openid-connect' + _access_mgmt_tokens = {} # OAuth 2.0: RFC 6749, Sec. 
1.4-5 + _TESTS = [{ + 'url': 'https://www.rokfin.com/post/57548/Mitt-Romneys-Crazy-Solution-To-Climate-Change', + 'info_dict': { + 'id': 'post/57548', + 'ext': 'mp4', + 'title': 'Mitt Romney\'s Crazy Solution To Climate Change', + 'thumbnail': r're:https://img\.production\.rokfin\.com/.+', + 'upload_date': '20211023', + 'timestamp': 1634998029, + 'channel': 'Jimmy Dore', + 'channel_id': 65429, + 'channel_url': 'https://rokfin.com/TheJimmyDoreShow', + 'availability': 'public', + 'live_status': 'not_live', + 'dislike_count': int, + 'like_count': int, + 'duration': 213, + } + }, { + 'url': 'https://rokfin.com/post/223/Julian-Assange-Arrested-Streaming-In-Real-Time', + 'info_dict': { + 'id': 'post/223', + 'ext': 'mp4', + 'title': 'Julian Assange Arrested: Streaming In Real Time', + 'thumbnail': r're:https://img\.production\.rokfin\.com/.+', + 'upload_date': '20190412', + 'timestamp': 1555052644, + 'channel': 'Ron Placone', + 'channel_id': 10, + 'channel_url': 'https://rokfin.com/RonPlacone', + 'availability': 'public', + 'live_status': 'not_live', + 'dislike_count': int, + 'like_count': int, + 'tags': ['FreeThinkingMedia^', 'RealProgressives^'], + } + }, { + 'url': 'https://www.rokfin.com/stream/10543/Its-A-Crazy-Mess-Regional-Director-Blows-Whistle-On-Pfizers-Vaccine-Trial-Data', + 'info_dict': { + 'id': 'stream/10543', + 'ext': 'mp4', + 'title': '"It\'s A Crazy Mess" Regional Director Blows Whistle On Pfizer\'s Vaccine Trial Data', + 'thumbnail': r're:https://img\.production\.rokfin\.com/.+', + 'description': 'md5:324ce2d3e3b62e659506409e458b9d8e', + 'channel': 'TLAVagabond', + 'channel_id': 53856, + 'channel_url': 'https://rokfin.com/TLAVagabond', + 'availability': 'public', + 'is_live': False, + 'was_live': True, + 'live_status': 'was_live', + 'timestamp': 1635874720, + 'release_timestamp': 1635874720, + 'release_date': '20211102', + 'upload_date': '20211102', + 'dislike_count': int, + 'like_count': int, + 'tags': ['FreeThinkingMedia^'], + 'duration': None, + } + }, { + 'url': 'https://rokfin.com/post/126703/Brave-New-World--Aldous-Huxley-DEEPDIVE--Chpts-13--Quite-Frankly--Jay-Dyer', + 'info_dict': { + 'id': 'post/126703', + 'ext': 'mp4', + 'title': 'Brave New World - Aldous Huxley DEEPDIVE! 
(Chpts 1-3) - Quite Frankly & Jay Dyer', + 'thumbnail': r're:https://img\.production\.rokfin\.com/.+', + 'channel': 'Jay Dyer', + 'channel_id': 186881, + 'channel_url': 'https://rokfin.com/jaydyer', + 'availability': 'premium_only', + 'live_status': 'not_live', + 'dislike_count': int, + 'like_count': int, + 'timestamp': 1678213357, + 'upload_date': '20230307', + 'tags': ['FreeThinkingMedia^', 'OpenMind^'], + 'description': 'md5:cb04e32e68326c9b2b251b297bacff35', + 'duration': 3100, + } + }, { + 'url': 'https://rokfin.com/stream/31332/The-Grayzone-live-on-Nordstream-blame-game', + 'info_dict': { + 'id': 'stream/31332', + 'ext': 'mp4', + 'title': 'The Grayzone live on Nordstream blame game', + 'thumbnail': r're:https://image\.v\.rokfin\.com/.+', + 'channel': 'Max Blumenthal', + 'channel_id': 248902, + 'channel_url': 'https://rokfin.com/MaxBlumenthal', + 'availability': 'premium_only', + 'live_status': 'was_live', + 'dislike_count': int, + 'like_count': int, + 'timestamp': 1678475166, + 'release_timestamp': 1678475166.0, + 'release_date': '20230310', + 'upload_date': '20230310', + 'tags': ['FreeThinkingMedia^'], + } + }] + + def _real_extract(self, url): + video_id, video_type = self._match_valid_url(url).group('id', 'type') + metadata = self._download_json_using_access_token(f'{_API_BASE_URL}{video_id}', video_id) + + scheduled = unified_timestamp(metadata.get('scheduledAt')) + live_status = ('was_live' if metadata.get('stoppedAt') + else 'is_upcoming' if scheduled + else 'is_live' if video_type == 'stream' + else 'not_live') + + video_url = traverse_obj(metadata, 'url', ('content', 'contentUrl'), expected_type=url_or_none) + if video_url in (None, 'fake.m3u8'): + video_url = format_field(self._search_regex( + r'https?://[^/]+/([^/]+)/storyboard.vtt', + traverse_obj(metadata, 'timelineUrl', ('content', 'timelineUrl'), expected_type=url_or_none), + video_id, default=None), None, 'https://stream.v.rokfin.com/%s.m3u8') + + formats, subtitles = [{'url': video_url}] if video_url else [], {} + if determine_ext(video_url) == 'm3u8': + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + video_url, video_id, fatal=False, live=live_status == 'is_live') + + if not formats: + if traverse_obj(metadata, 'premiumPlan', 'premium'): + self.raise_login_required('This video is only available to premium users', True, method='cookies') + elif scheduled: + self.raise_no_formats( + f'Stream is offline; scheduled for {datetime.fromtimestamp(scheduled).strftime("%Y-%m-%d %H:%M:%S")}', + video_id=video_id, expected=True) + + uploader = traverse_obj(metadata, ('createdBy', 'username'), ('creator', 'username')) + timestamp = (scheduled or float_or_none(metadata.get('postedAtMilli'), 1000) + or unified_timestamp(metadata.get('creationDateTime'))) + return { + 'id': video_id, + 'formats': formats, + 'subtitles': subtitles, + 'title': str_or_none(traverse_obj(metadata, 'title', ('content', 'contentTitle'))), + 'duration': float_or_none(traverse_obj(metadata, ('content', 'duration'))), + 'thumbnail': url_or_none(traverse_obj(metadata, 'thumbnail', ('content', 'thumbnailUrl1'))), + 'description': str_or_none(traverse_obj(metadata, 'description', ('content', 'contentDescription'))), + 'like_count': int_or_none(metadata.get('likeCount')), + 'dislike_count': int_or_none(metadata.get('dislikeCount')), + 'channel': str_or_none(traverse_obj(metadata, ('createdBy', 'name'), ('creator', 'name'))), + 'channel_id': traverse_obj(metadata, ('createdBy', 'id'), ('creator', 'id')), + 'channel_url': 
url_or_none(f'https://rokfin.com/{uploader}') if uploader else None, + 'timestamp': timestamp, + 'release_timestamp': timestamp if live_status != 'not_live' else None, + 'tags': traverse_obj(metadata, ('tags', ..., 'title'), expected_type=str_or_none), + 'live_status': live_status, + 'availability': self._availability( + needs_premium=bool(traverse_obj(metadata, 'premiumPlan', 'premium')), + is_private=False, needs_subscription=False, needs_auth=False, is_unlisted=False), + # 'comment_count': metadata.get('numComments'), # Data provided by website is wrong + '__post_extractor': self.extract_comments(video_id) if video_type == 'post' else None, + } + + def _get_comments(self, video_id): + pages_total = None + for page_n in itertools.count(): + raw_comments = self._download_json( + f'{_API_BASE_URL}comment?postId={video_id[5:]}&page={page_n}&size=50', + video_id, note=f'Downloading viewer comments page {page_n + 1}{format_field(pages_total, None, " of %s")}', + fatal=False) or {} + + for comment in raw_comments.get('content') or []: + yield { + 'text': str_or_none(comment.get('comment')), + 'author': str_or_none(comment.get('name')), + 'id': comment.get('commentId'), + 'author_id': comment.get('userId'), + 'parent': 'root', + 'like_count': int_or_none(comment.get('numLikes')), + 'dislike_count': int_or_none(comment.get('numDislikes')), + 'timestamp': unified_timestamp(comment.get('postedAt')) + } + + pages_total = int_or_none(raw_comments.get('totalPages')) or None + is_last = raw_comments.get('last') + if not raw_comments.get('content') or is_last or (page_n > pages_total if pages_total else is_last is not False): + return + + def _perform_login(self, username, password): + # https://openid.net/specs/openid-connect-core-1_0.html#CodeFlowAuth (Sec. 3.1) + login_page = self._download_webpage( + f'{self._AUTH_BASE}/auth?client_id=web&redirect_uri=https%3A%2F%2Frokfin.com%2Ffeed&response_mode=fragment&response_type=code&scope=openid', + None, note='loading login page', errnote='error loading login page') + authentication_point_url = unescapeHTML(self._search_regex( + r'<form\s+[^>]+action\s*=\s*"(https://secure\.rokfin\.com/auth/realms/rokfin-web/login-actions/authenticate\?[^"]+)"', + login_page, name='Authentication URL')) + + resp_body = self._download_webpage( + authentication_point_url, None, note='logging in', fatal=False, expected_status=404, + data=urlencode_postdata({'username': username, 'password': password, 'rememberMe': 'off', 'credentialId': ''})) + if not self._authentication_active(): + if re.search(r'(?i)(invalid\s+username\s+or\s+password)', resp_body or ''): + raise ExtractorError('invalid username/password', expected=True) + raise ExtractorError('Login failed') + + urlh = self._request_webpage( + f'{self._AUTH_BASE}/auth', None, + note='granting user authorization', errnote='user authorization rejected by Rokfin', + query={ + 'client_id': 'web', + 'prompt': 'none', + 'redirect_uri': 'https://rokfin.com/silent-check-sso.html', + 'response_mode': 'fragment', + 'response_type': 'code', + 'scope': 'openid', + }) + self._access_mgmt_tokens = self._download_json( + f'{self._AUTH_BASE}/token', None, + note='getting access credentials', errnote='error getting access credentials', + data=urlencode_postdata({ + 'code': urllib.parse.parse_qs(urllib.parse.urldefrag(urlh.url).fragment).get('code')[0], + 'client_id': 'web', + 'grant_type': 'authorization_code', + 'redirect_uri': 'https://rokfin.com/silent-check-sso.html' + })) + + def _authentication_active(self): + return not ( + 
{'KEYCLOAK_IDENTITY', 'KEYCLOAK_IDENTITY_LEGACY', 'KEYCLOAK_SESSION', 'KEYCLOAK_SESSION_LEGACY'} + - set(self._get_cookies(self._AUTH_BASE))) + + def _get_auth_token(self): + return try_get(self._access_mgmt_tokens, lambda x: ' '.join([x['token_type'], x['access_token']])) + + def _download_json_using_access_token(self, url_or_request, video_id, headers={}, query={}): + assert 'authorization' not in headers + headers = headers.copy() + auth_token = self._get_auth_token() + refresh_token = self._access_mgmt_tokens.get('refresh_token') + if auth_token: + headers['authorization'] = auth_token + + json_string, urlh = self._download_webpage_handle( + url_or_request, video_id, headers=headers, query=query, expected_status=401) + if not auth_token or urlh.status != 401 or refresh_token is None: + return self._parse_json(json_string, video_id) + + self._access_mgmt_tokens = self._download_json( + f'{self._AUTH_BASE}/token', video_id, + note='User authorization expired or canceled by Rokfin. Re-authorizing ...', errnote='Failed to re-authorize', + data=urlencode_postdata({ + 'grant_type': 'refresh_token', + 'refresh_token': refresh_token, + 'client_id': 'web' + })) + headers['authorization'] = self._get_auth_token() + if headers['authorization'] is None: + raise ExtractorError('User authorization lost', expected=True) + + return self._download_json(url_or_request, video_id, headers=headers, query=query) + + +class RokfinPlaylistBaseIE(InfoExtractor): + _TYPES = { + 'video': 'post', + 'audio': 'post', + 'stream': 'stream', + 'dead_stream': 'stream', + 'stack': 'stack', + } + + def _get_video_data(self, metadata): + for content in metadata.get('content') or []: + media_type = self._TYPES.get(content.get('mediaType')) + video_id = content.get('id') if media_type == 'post' else content.get('mediaId') + if not media_type or not video_id: + continue + + yield self.url_result(f'https://rokfin.com/{media_type}/{video_id}', video_id=f'{media_type}/{video_id}', + video_title=str_or_none(traverse_obj(content, ('content', 'contentTitle')))) + + +class RokfinStackIE(RokfinPlaylistBaseIE): + IE_NAME = 'rokfin:stack' + IE_DESC = 'Rokfin Stacks' + _VALID_URL = r'https?://(?:www\.)?rokfin\.com/stack/(?P<id>[^/]+)' + _TESTS = [{ + 'url': 'https://www.rokfin.com/stack/271/Tulsi-Gabbard-Portsmouth-Townhall-FULL--Feb-9-2020', + 'playlist_count': 8, + 'info_dict': { + 'id': '271', + }, + }] + + def _real_extract(self, url): + list_id = self._match_id(url) + return self.playlist_result(self._get_video_data( + self._download_json(f'{_API_BASE_URL}stack/{list_id}', list_id)), list_id) + + +class RokfinChannelIE(RokfinPlaylistBaseIE): + IE_NAME = 'rokfin:channel' + IE_DESC = 'Rokfin Channels' + _VALID_URL = r'https?://(?:www\.)?rokfin\.com/(?!((feed/?)|(discover/?)|(channels/?))$)(?P<id>[^/]+)/?$' + _TESTS = [{ + 'url': 'https://rokfin.com/TheConvoCouch', + 'playlist_mincount': 100, + 'info_dict': { + 'id': '12071-new', + 'title': 'TheConvoCouch - New', + 'description': 'md5:bb622b1bca100209b91cd685f7847f06', + }, + }] + + _TABS = { + 'new': 'posts', + 'top': 'top', + 'videos': 'video', + 'podcasts': 'audio', + 'streams': 'stream', + 'stacks': 'stack', + } + + def _real_initialize(self): + self._validate_extractor_args() + + def _validate_extractor_args(self): + requested_tabs = self._configuration_arg('tab', None) + if requested_tabs is not None and (len(requested_tabs) > 1 or requested_tabs[0] not in self._TABS): + raise ExtractorError(f'Invalid extractor-arg "tab". 
Must be one of {", ".join(self._TABS)}', expected=True) + + def _entries(self, channel_id, channel_name, tab): + pages_total = None + for page_n in itertools.count(0): + if tab in ('posts', 'top'): + data_url = f'{_API_BASE_URL}user/{channel_name}/{tab}?page={page_n}&size=50' + else: + data_url = f'{_API_BASE_URL}post/search/{tab}?page={page_n}&size=50&creator={channel_id}' + metadata = self._download_json( + data_url, channel_name, + note=f'Downloading video metadata page {page_n + 1}{format_field(pages_total, None, " of %s")}') + + yield from self._get_video_data(metadata) + pages_total = int_or_none(metadata.get('totalPages')) or None + is_last = metadata.get('last') + if is_last or (page_n > pages_total if pages_total else is_last is not False): + return + + def _real_extract(self, url): + channel_name = self._match_id(url) + channel_info = self._download_json(f'{_API_BASE_URL}user/{channel_name}', channel_name) + channel_id = channel_info['id'] + tab = self._configuration_arg('tab', default=['new'])[0] + + return self.playlist_result( + self._entries(channel_id, channel_name, self._TABS[tab]), + f'{channel_id}-{tab}', f'{channel_name} - {tab.title()}', str_or_none(channel_info.get('description'))) + + +class RokfinSearchIE(SearchInfoExtractor): + IE_NAME = 'rokfin:search' + IE_DESC = 'Rokfin Search' + _SEARCH_KEY = 'rkfnsearch' + _TYPES = { + 'video': (('id', 'raw'), 'post'), + 'audio': (('id', 'raw'), 'post'), + 'stream': (('content_id', 'raw'), 'stream'), + 'dead_stream': (('content_id', 'raw'), 'stream'), + 'stack': (('content_id', 'raw'), 'stack'), + } + _TESTS = [{ + 'url': 'rkfnsearch5:"zelenko"', + 'playlist_count': 5, + 'info_dict': { + 'id': '"zelenko"', + 'title': '"zelenko"', + } + }] + _db_url = None + _db_access_key = None + + def _real_initialize(self): + self._db_url, self._db_access_key = self.cache.load(self.ie_key(), 'auth', default=(None, None)) + if not self._db_url: + self._get_db_access_credentials() + + def _search_results(self, query): + total_pages = None + for page_number in itertools.count(1): + search_results = self._run_search_query( + query, data={'query': query, 'page': {'size': 100, 'current': page_number}}, + note=f'Downloading page {page_number}{format_field(total_pages, None, " of ~%s")}') + total_pages = traverse_obj(search_results, ('meta', 'page', 'total_pages'), expected_type=int_or_none) + + for result in search_results.get('results') or []: + video_id_key, video_type = self._TYPES.get(traverse_obj(result, ('content_type', 'raw')), (None, None)) + video_id = traverse_obj(result, video_id_key, expected_type=int_or_none) + if video_id and video_type: + yield self.url_result(url=f'https://rokfin.com/{video_type}/{video_id}') + if not search_results.get('results'): + return + + def _run_search_query(self, video_id, data, **kwargs): + data = json.dumps(data).encode() + for attempt in range(2): + search_results = self._download_json( + self._db_url, video_id, data=data, fatal=(attempt == 1), + headers={'authorization': self._db_access_key}, **kwargs) + if search_results: + return search_results + self.write_debug('Updating access credentials') + self._get_db_access_credentials(video_id) + + def _get_db_access_credentials(self, video_id=None): + auth_data = {'SEARCH_KEY': None, 'ENDPOINT_BASE': None} + notfound_err_page = self._download_webpage( + 'https://rokfin.com/discover', video_id, expected_status=404, note='Downloading home page') + for js_file_path in re.findall(r'<script\b[^>]*\ssrc\s*=\s*"(/static/js/[^">]+)"', notfound_err_page): + 
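# The search endpoint and access key ship as REACT_APP_ENDPOINT_BASE and
+ # REACT_APP_SEARCH_KEY constants in the site's JS bundles; scan each bundle
+ # until both values are found, then cache them for later runs. +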
js_content = self._download_webpage( + f'https://rokfin.com{js_file_path}', video_id, note='Downloading JavaScript file', fatal=False) + auth_data.update(re.findall( + rf'REACT_APP_({"|".join(auth_data.keys())})\s*:\s*"([^"]+)"', js_content or '')) + if not all(auth_data.values()): + continue + + self._db_url = url_or_none(f'{auth_data["ENDPOINT_BASE"]}/api/as/v1/engines/rokfin-search/search.json') + self._db_access_key = f'Bearer {auth_data["SEARCH_KEY"]}' + self.cache.store(self.ie_key(), 'auth', (self._db_url, self._db_access_key)) + return + raise ExtractorError('Unable to extract access credentials') diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/roosterteeth.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/roosterteeth.py new file mode 100644 index 0000000..94e673b --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/roosterteeth.py @@ -0,0 +1,211 @@ +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + int_or_none, + join_nonempty, + LazyList, + parse_qs, + str_or_none, + traverse_obj, + url_or_none, + urlencode_postdata, + urljoin, + update_url_query, +) + + +class RoosterTeethBaseIE(InfoExtractor): + _NETRC_MACHINE = 'roosterteeth' + _API_BASE = 'https://svod-be.roosterteeth.com' + _API_BASE_URL = f'{_API_BASE}/api/v1' + + def _perform_login(self, username, password): + if self._get_cookies(self._API_BASE_URL).get('rt_access_token'): + return + + try: + self._download_json( + 'https://auth.roosterteeth.com/oauth/token', + None, 'Logging in', data=urlencode_postdata({ + 'client_id': '4338d2b4bdc8db1239360f28e72f0d9ddb1fd01e7a38fbb07b4b1f4ba4564cc5', + 'grant_type': 'password', + 'username': username, + 'password': password, + })) + except ExtractorError as e: + msg = 'Unable to login' + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + resp = self._parse_json(e.cause.response.read().decode(), None, fatal=False) + if resp: + error = resp.get('extra_info') or resp.get('error_description') or resp.get('error') + if error: + msg += ': ' + error + self.report_warning(msg) + + def _extract_video_info(self, data): + thumbnails = [] + for image in traverse_obj(data, ('included', 'images')): + if image.get('type') not in ('episode_image', 'bonus_feature_image'): + continue + thumbnails.extend([{ + 'id': name, + 'url': url, + } for name, url in (image.get('attributes') or {}).items() if url_or_none(url)]) + + attributes = data.get('attributes') or {} + title = traverse_obj(attributes, 'title', 'display_title') + sub_only = attributes.get('is_sponsors_only') + + return { + 'id': str(data.get('id')), + 'display_id': attributes.get('slug'), + 'title': title, + 'description': traverse_obj(attributes, 'description', 'caption'), + 'series': attributes.get('show_title'), + 'season_number': int_or_none(attributes.get('season_number')), + 'season_id': attributes.get('season_id'), + 'episode': title, + 'episode_number': int_or_none(attributes.get('number')), + 'episode_id': str_or_none(data.get('uuid')), + 'channel_id': attributes.get('channel_id'), + 'duration': int_or_none(attributes.get('length')), + 'thumbnails': thumbnails, + 'availability': self._availability( + needs_premium=sub_only, needs_subscription=sub_only, needs_auth=sub_only, + is_private=False, is_unlisted=False), + 'tags': attributes.get('genres') + } + + +class RoosterTeethIE(RoosterTeethBaseIE): + _VALID_URL = r'https?://(?:.+?\.)?roosterteeth\.com/(?:episode|watch)/(?P<id>[^/?#&]+)' + _TESTS = [{ + 
'url': 'http://roosterteeth.com/episode/million-dollars-but-season-2-million-dollars-but-the-game-announcement', + 'info_dict': { + 'id': '9156', + 'display_id': 'million-dollars-but-season-2-million-dollars-but-the-game-announcement', + 'ext': 'mp4', + 'title': 'Million Dollars, But... The Game Announcement', + 'description': 'md5:168a54b40e228e79f4ddb141e89fe4f5', + 'thumbnail': r're:^https?://.*\.png$', + 'series': 'Million Dollars, But...', + 'episode': 'Million Dollars, But... The Game Announcement', + }, + 'params': {'skip_download': True}, + }, { + 'url': 'https://roosterteeth.com/watch/rwby-bonus-25', + 'info_dict': { + 'id': '40432', + 'display_id': 'rwby-bonus-25', + 'title': 'Grimm', + 'description': 'md5:f30ff570741213418a8d2c19868b93ab', + 'episode': 'Grimm', + 'channel_id': '92f780eb-ebfe-4bf5-a3b5-c6ad5460a5f1', + 'thumbnail': r're:^https?://.*\.(png|jpe?g)$', + 'ext': 'mp4', + }, + 'params': {'skip_download': True}, + }, { + 'url': 'http://achievementhunter.roosterteeth.com/episode/off-topic-the-achievement-hunter-podcast-2016-i-didn-t-think-it-would-pass-31', + 'only_matching': True, + }, { + 'url': 'http://funhaus.roosterteeth.com/episode/funhaus-shorts-2016-austin-sucks-funhaus-shorts', + 'only_matching': True, + }, { + 'url': 'http://screwattack.roosterteeth.com/episode/death-battle-season-3-mewtwo-vs-shadow', + 'only_matching': True, + }, { + 'url': 'http://theknow.roosterteeth.com/episode/the-know-game-news-season-1-boring-steam-sales-are-better', + 'only_matching': True, + }, { + # only available for FIRST members + 'url': 'http://roosterteeth.com/episode/rt-docs-the-world-s-greatest-head-massage-the-world-s-greatest-head-massage-an-asmr-journey-part-one', + 'only_matching': True, + }, { + 'url': 'https://roosterteeth.com/watch/million-dollars-but-season-2-million-dollars-but-the-game-announcement', + 'only_matching': True, + }] + + def _real_extract(self, url): + display_id = self._match_id(url) + api_episode_url = f'{self._API_BASE_URL}/watch/{display_id}' + + try: + video_data = self._download_json( + api_episode_url + '/videos', display_id, + 'Downloading video JSON metadata')['data'][0] + m3u8_url = video_data['attributes']['url'] + # XXX: additional URL at video_data['links']['download'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + if self._parse_json(e.cause.response.read().decode(), display_id).get('access') is False: + self.raise_login_required( + '%s is only available for FIRST members' % display_id) + raise + + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + m3u8_url, display_id, 'mp4', 'm3u8_native', m3u8_id='hls') + + episode = self._download_json( + api_episode_url, display_id, + 'Downloading episode JSON metadata')['data'][0] + + return { + 'display_id': display_id, + 'formats': formats, + 'subtitles': subtitles, + **self._extract_video_info(episode) + } + + +class RoosterTeethSeriesIE(RoosterTeethBaseIE): + _VALID_URL = r'https?://(?:.+?\.)?roosterteeth\.com/series/(?P<id>[^/?#&]+)' + _TESTS = [{ + 'url': 'https://roosterteeth.com/series/rwby?season=7', + 'playlist_count': 13, + 'info_dict': { + 'id': 'rwby-7', + 'title': 'RWBY - Season 7', + } + }, { + 'url': 'https://roosterteeth.com/series/role-initiative', + 'playlist_mincount': 16, + 'info_dict': { + 'id': 'role-initiative', + 'title': 'Role Initiative', + } + }, { + 'url': 'https://roosterteeth.com/series/let-s-play-minecraft?season=9', + 'playlist_mincount': 50, + 'info_dict': { + 'id': 'let-s-play-minecraft-9', + 'title': 
'Let\'s Play Minecraft - Season 9', + } + }] + + def _entries(self, series_id, season_number): + display_id = join_nonempty(series_id, season_number) + # TODO: extract bonus material + for data in self._download_json( + f'{self._API_BASE_URL}/shows/{series_id}/seasons?order=asc&order_by', display_id)['data']: + idx = traverse_obj(data, ('attributes', 'number')) + if season_number and idx != season_number: + continue + season_url = update_url_query(urljoin(self._API_BASE, data['links']['episodes']), {'per_page': 1000}) + season = self._download_json(season_url, display_id, f'Downloading season {idx} JSON metadata')['data'] + for episode in season: + yield self.url_result( + f'https://www.roosterteeth.com{episode["canonical_links"]["self"]}', + RoosterTeethIE.ie_key(), + **self._extract_video_info(episode)) + + def _real_extract(self, url): + series_id = self._match_id(url) + season_number = traverse_obj(parse_qs(url), ('season', 0), expected_type=int_or_none) + + entries = LazyList(self._entries(series_id, season_number)) + return self.playlist_result( + entries, + join_nonempty(series_id, season_number), + join_nonempty(entries[0].get('series'), season_number, delim=' - Season ')) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rottentomatoes.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rottentomatoes.py new file mode 100644 index 0000000..e357175 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rottentomatoes.py @@ -0,0 +1,80 @@ +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + clean_html, + float_or_none, + get_element_by_class, + join_nonempty, + traverse_obj, + url_or_none, +) + + +class RottenTomatoesIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?rottentomatoes\.com/m/(?P<playlist>[^/]+)(?:/(?P<tr>trailers)(?:/(?P<id>\w+))?)?' + + _TESTS = [{ + 'url': 'http://www.rottentomatoes.com/m/toy_story_3/trailers/11028566/', + 'info_dict': { + 'id': '11028566', + 'ext': 'mp4', + 'title': 'Toy Story 3', + 'description': 'From the creators of the beloved TOY STORY films, comes a story that will reunite the gang in a whole new way.' 
+ }, + 'skip': 'No longer available', + }, { + 'url': 'https://www.rottentomatoes.com/m/toy_story_3/trailers/VycaVoBKhGuk', + 'info_dict': { + 'id': 'VycaVoBKhGuk', + 'ext': 'mp4', + 'title': 'Toy Story 3: Trailer 2', + 'description': '', + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 149.941 + }, + }, { + 'url': 'http://www.rottentomatoes.com/m/toy_story_3', + 'info_dict': { + 'id': 'toy_story_3', + 'title': 'Toy Story 3', + }, + 'playlist_mincount': 4, + }, { + 'url': 'http://www.rottentomatoes.com/m/toy_story_3/trailers', + 'info_dict': { + 'id': 'toy_story_3-trailers', + }, + 'playlist_mincount': 5, + }] + + def _extract_videos(self, data, display_id): + for video in traverse_obj(data, (lambda _, v: v['publicId'] and v['file'] and v['type'] == 'hls')): + yield { + 'formats': self._extract_m3u8_formats( + video['file'], display_id, 'mp4', m3u8_id='hls', fatal=False), + **traverse_obj(video, { + 'id': 'publicId', + 'title': 'title', + 'description': 'description', + 'duration': ('durationInSeconds', {float_or_none}), + 'thumbnail': ('image', {url_or_none}), + }), + } + + def _real_extract(self, url): + playlist_id, trailers, video_id = self._match_valid_url(url).group('playlist', 'tr', 'id') + playlist_id = join_nonempty(playlist_id, trailers) + webpage = self._download_webpage(url, playlist_id) + data = self._search_json( + r'<script[^>]+\bid=["\'](?:heroV|v)ideos["\'][^>]*>', webpage, + 'data', playlist_id, contains_pattern=r'\[{(?s:.+)}\]') + + if video_id: + video_data = traverse_obj(data, lambda _, v: v['publicId'] == video_id) + if not video_data: + raise ExtractorError('Unable to extract video from webpage') + return next(self._extract_videos(video_data, video_id)) + + return self.playlist_result( + self._extract_videos(data, playlist_id), playlist_id, + clean_html(get_element_by_class('scoreboard__title', webpage))) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rozhlas.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rozhlas.py new file mode 100644 index 0000000..6313432 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rozhlas.py @@ -0,0 +1,343 @@ +import itertools + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + extract_attributes, + int_or_none, + remove_start, + str_or_none, + traverse_obj, + unified_timestamp, + url_or_none, +) + + +class RozhlasIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?prehravac\.rozhlas\.cz/audio/(?P<id>[0-9]+)' + _TESTS = [{ + 'url': 'http://prehravac.rozhlas.cz/audio/3421320', + 'md5': '504c902dbc9e9a1fd50326eccf02a7e2', + 'info_dict': { + 'id': '3421320', + 'ext': 'mp3', + 'title': 'Echo Pavla Klusáka (30.06.2015 21:00)', + 'description': 'Osmdesátiny Terryho Rileyho jsou skvělou příležitostí proletět se elektronickými i akustickými díly zakladatatele minimalismu, který je aktivní už přes padesát let' + } + }, { + 'url': 'http://prehravac.rozhlas.cz/audio/3421320/embed', + 'only_matching': True, + }] + + def _real_extract(self, url): + audio_id = self._match_id(url) + + webpage = self._download_webpage( + 'http://prehravac.rozhlas.cz/audio/%s' % audio_id, audio_id) + + title = self._html_search_regex( + r'<h3>(.+?)</h3>\s*<p[^>]*>.*?</p>\s*<div[^>]+id=["\']player-track', + webpage, 'title', default=None) or remove_start( + self._og_search_title(webpage), 'Radio Wave - ') + description = self._html_search_regex(
r'<p[^>]+title=(["\'])(?P<url>(?:(?!\1).)+)\1[^>]*>.*?</p>\s*<div[^>]+id=["\']player-track', + webpage, 'description', fatal=False, group='url') + duration = int_or_none(self._search_regex( + r'data-duration=["\'](\d+)', webpage, 'duration', default=None)) + + return { + 'id': audio_id, + 'url': 'http://media.rozhlas.cz/_audio/%s.mp3' % audio_id, + 'title': title, + 'description': description, + 'duration': duration, + 'vcodec': 'none', + } + + +class RozhlasBaseIE(InfoExtractor): + def _extract_formats(self, entry, audio_id): + formats = [] + for audio in traverse_obj(entry, ('audioLinks', lambda _, v: url_or_none(v['url']))): + ext = audio.get('variant') + for retry in self.RetryManager(): + if retry.attempt > 1: + self._sleep(1, audio_id) + try: + if ext == 'dash': + formats.extend(self._extract_mpd_formats( + audio['url'], audio_id, mpd_id=ext)) + elif ext == 'hls': + formats.extend(self._extract_m3u8_formats( + audio['url'], audio_id, 'm4a', m3u8_id=ext)) + else: + formats.append({ + 'url': audio['url'], + 'ext': ext, + 'format_id': ext, + 'abr': int_or_none(audio.get('bitrate')), + 'acodec': ext, + 'vcodec': 'none', + }) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 429: + retry.error = e.cause + else: + self.report_warning(e.msg) + + return formats + + +class RozhlasVltavaIE(RozhlasBaseIE): + _VALID_URL = r'https?://(?:\w+\.rozhlas|english\.radio)\.cz/[\w-]+-(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://wave.rozhlas.cz/papej-masicko-porcujeme-a-bilancujeme-filmy-a-serialy-ktere-letos-zabily-8891337', + 'md5': 'ba2fdbc1242fc16771c7695d271ec355', + 'info_dict': { + 'id': '8891337', + 'title': 'md5:21f99739d04ab49d8c189ec711eef4ec', + }, + 'playlist_count': 1, + 'playlist': [{ + 'md5': 'ba2fdbc1242fc16771c7695d271ec355', + 'info_dict': { + 'id': '10520988', + 'ext': 'mp3', + 'title': 'Papej masíčko! Porcujeme a bilancujeme filmy a seriály, které to letos zabily', + 'description': 'md5:1c6d29fb9564e1f17fc1bb83ae7da0bc', + 'duration': 1574, + 'artist': 'Aleš Stuchlý', + 'channel_id': 'radio-wave', + }, + }] + }, { + 'url': 'https://wave.rozhlas.cz/poslechnete-si-neklid-podcastovy-thriller-o-vine-strachu-a-vztahu-ktery-zasel-8554744', + 'info_dict': { + 'id': '8554744', + 'title': 'Poslechněte si Neklid. Podcastový thriller o vině, strachu a vztahu, který zašel příliš daleko', + }, + 'playlist_count': 5, + 'playlist': [{ + 'md5': '93d4109cf8f40523699ae9c1d4600bdd', + 'info_dict': { + 'id': '9890713', + 'ext': 'mp3', + 'title': 'Neklid #1', + 'description': '1. díl: Neklid: 1. díl', + 'duration': 1025, + 'artist': 'Josef Kokta', + 'channel_id': 'radio-wave', + 'chapter': 'Neklid #1', + 'chapter_number': 1, + }, + }, { + 'md5': 'e9763235be4a6dcf94bc8a5bac1ca126', + 'info_dict': { + 'id': '9890716', + 'ext': 'mp3', + 'title': 'Neklid #2', + 'description': '2. díl: Neklid: 2. díl', + 'duration': 768, + 'artist': 'Josef Kokta', + 'channel_id': 'radio-wave', + 'chapter': 'Neklid #2', + 'chapter_number': 2, + }, + }, { + 'md5': '00b642ea94b78cc949ac84da09f87895', + 'info_dict': { + 'id': '9890722', + 'ext': 'mp3', + 'title': 'Neklid #3', + 'description': '3. díl: Neklid: 3. díl', + 'duration': 607, + 'artist': 'Josef Kokta', + 'channel_id': 'radio-wave', + 'chapter': 'Neklid #3', + 'chapter_number': 3, + }, + }, { + 'md5': 'faef97b1b49da7df874740f118c19dea', + 'info_dict': { + 'id': '9890728', + 'ext': 'mp3', + 'title': 'Neklid #4', + 'description': '4. díl: Neklid: 4.
díl', + 'duration': 621, + 'artist': 'Josef Kokta', + 'channel_id': 'radio-wave', + 'chapter': 'Neklid #4', + 'chapter_number': 4, + }, + }, { + 'md5': '6e729fa39b647325b868d419c76f3efa', + 'info_dict': { + 'id': '9890734', + 'ext': 'mp3', + 'title': 'Neklid #5', + 'description': '5. díl: Neklid: 5. díl', + 'duration': 908, + 'artist': 'Josef Kokta', + 'channel_id': 'radio-wave', + 'chapter': 'Neklid #5', + 'chapter_number': 5, + }, + }] + }, { + 'url': 'https://dvojka.rozhlas.cz/karel-siktanc-cerny-jezdec-bily-kun-napinava-pohadka-o-tajemnem-prizraku-8946969', + 'info_dict': { + 'id': '8946969', + 'title': 'Karel Šiktanc: Černý jezdec, bílý kůň. Napínavá pohádka o tajemném přízraku', + }, + 'playlist_count': 1, + 'playlist': [{ + 'info_dict': { + 'id': '10631121', + 'ext': 'm4a', + 'title': 'Karel Šiktanc: Černý jezdec, bílý kůň. Napínavá pohádka o tajemném přízraku', + 'description': 'Karel Šiktanc: Černý jezdec, bílý kůň', + 'duration': 2656, + 'artist': 'Tvůrčí skupina Drama a literatura', + 'channel_id': 'dvojka', + }, + }], + 'params': {'skip_download': 'dash'}, + }] + + def _extract_video(self, entry): + audio_id = entry['meta']['ga']['contentId'] + chapter_number = traverse_obj(entry, ('meta', 'ga', 'contentSerialPart', {int_or_none})) + + return { + 'id': audio_id, + 'chapter': traverse_obj(entry, ('meta', 'ga', 'contentNameShort')) if chapter_number else None, + 'chapter_number': chapter_number, + 'formats': self._extract_formats(entry, audio_id), + **traverse_obj(entry, { + 'title': ('meta', 'ga', 'contentName'), + 'description': 'title', + 'duration': ('duration', {int_or_none}), + 'artist': ('meta', 'ga', 'contentAuthor'), + 'channel_id': ('meta', 'ga', 'contentCreator'), + }) + } + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + # FIXME: Use get_element_text_and_html_by_tag when it accepts less strict html + data = self._parse_json(extract_attributes(self._search_regex( + r'(<div class="mujRozhlasPlayer" data-player=\'[^\']+\'>)', + webpage, 'player'))['data-player'], video_id)['data'] + + return { + '_type': 'playlist', + 'id': str_or_none(data.get('embedId')) or video_id, + 'title': traverse_obj(data, ('series', 'title')), + 'entries': map(self._extract_video, data['playlist']), + } + + +class MujRozhlasIE(RozhlasBaseIE): + _VALID_URL = r'https?://(?:www\.)?mujrozhlas\.cz/(?:[^/]+/)*(?P<id>[^/?#&]+)' + _TESTS = [{ + # single episode extraction + 'url': 'https://www.mujrozhlas.cz/vykopavky/ach-jo-zase-teleci-rizek-je-mnohem-min-cesky-nez-jsme-si-mysleli', + 'md5': '6f8fd68663e64936623e67c152a669e0', + 'info_dict': { + 'id': '10739193', + 'ext': 'mp3', + 'title': 'Ach jo, zase to telecí! Řízek je mnohem míň český, než jsme si mysleli', + 'description': 'md5:db7141e9caaedc9041ec7cefb9a62908', + 'timestamp': 1684915200, + 'modified_timestamp': 1684922446, + 'series': 'Vykopávky', + 'thumbnail': 'https://portal.rozhlas.cz/sites/default/files/images/84377046610af6ddc54d910b1dd7a22b.jpg', + 'channel_id': 'radio-wave', + 'upload_date': '20230524', + 'modified_date': '20230524', + }, + }, { + # serial extraction + 'url': 'https://www.mujrozhlas.cz/radiokniha/jaroslava-janackova-pribeh-tajemneho-psani-o-pramenech-genezi-babicky', + 'playlist_mincount': 7, + 'info_dict': { + 'id': 'bb2b5f4e-ffb4-35a6-a34a-046aa62d6f6b', + 'title': 'Jaroslava Janáčková: Příběh tajemného psaní.
O pramenech a genezi Babičky', + 'description': 'md5:7434d8fac39ac9fee6df098e11dfb1be', + }, + }, { + # show extraction + 'url': 'https://www.mujrozhlas.cz/nespavci', + 'playlist_mincount': 14, + 'info_dict': { + 'id': '09db9b37-d0f4-368c-986a-d3439f741f08', + 'title': 'Nespavci', + 'description': 'md5:c430adcbf9e2b9eac88b745881e814dc', + }, + }] + + def _call_api(self, path, item_id, msg='API JSON'): + return self._download_json( + f'https://api.mujrozhlas.cz/{path}/{item_id}', item_id, + note=f'Downloading {msg}', errnote=f'Failed to download {msg}')['data'] + + def _extract_audio_entry(self, entry): + audio_id = entry['meta']['ga']['contentId'] + + return { + 'id': audio_id, + 'formats': self._extract_formats(entry['attributes'], audio_id), + **traverse_obj(entry, { + 'title': ('attributes', 'title'), + 'description': ('attributes', 'description'), + 'episode_number': ('attributes', 'part'), + 'series': ('attributes', 'mirroredShow', 'title'), + 'chapter': ('attributes', 'mirroredSerial', 'title'), + 'artist': ('meta', 'ga', 'contentAuthor'), + 'channel_id': ('meta', 'ga', 'contentCreator'), + 'timestamp': ('attributes', 'since', {unified_timestamp}), + 'modified_timestamp': ('attributes', 'updated', {unified_timestamp}), + 'thumbnail': ('attributes', 'asset', 'url', {url_or_none}), + }) + } + + def _entries(self, api_url, playlist_id): + for page in itertools.count(1): + episodes = self._download_json( + api_url, playlist_id, note=f'Downloading episodes page {page}', + errnote=f'Failed to download episodes page {page}', fatal=False) + for episode in traverse_obj(episodes, ('data', lambda _, v: v['meta']['ga']['contentId'])): + yield self._extract_audio_entry(episode) + api_url = traverse_obj(episodes, ('links', 'next', {url_or_none})) + if not api_url: + break + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + info = self._search_json(r'\bvar\s+dl\s*=', webpage, 'info json', display_id) + + entity = info['siteEntityBundle'] + + if entity == 'episode': + return self._extract_audio_entry(self._call_api( + 'episodes', info['contentId'], 'episode info API JSON')) + + elif entity in ('show', 'serial'): + playlist_id = info['contentShow'].split(':')[0] if entity == 'show' else info['contentId'] + data = self._call_api(f'{entity}s', playlist_id, f'{entity} playlist JSON') + api_url = data['relationships']['episodes']['links']['related'] + return self.playlist_result( + self._entries(api_url, playlist_id), playlist_id, + **traverse_obj(data, ('attributes', { + 'title': 'title', + 'description': 'description', + }))) + + else: + # `entity == 'person'` not implemented yet by API, ref: + # https://api.mujrozhlas.cz/persons/8367e456-2a57-379a-91bb-e699619bea49/participation + raise ExtractorError(f'Unsupported entity type "{entity}"') diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rte.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rte.py new file mode 100644 index 0000000..7ba80d4 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rte.py @@ -0,0 +1,162 @@ +import re + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + float_or_none, + parse_iso8601, + str_or_none, + try_get, + unescapeHTML, + url_or_none, + ExtractorError, +) + + +class RteBaseIE(InfoExtractor): + def _real_extract(self, url): + item_id = self._match_id(url) + + info_dict = {} + formats = [] + + ENDPOINTS = (
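+ # Try the newer feeds.rasset.ie playlist endpoint first, with the legacy
+ # www.rte.ie endpoint as fallback; the loop below moves on to the next
+ # entry on failure. +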
'https://feeds.rasset.ie/rteavgen/player/playlist?type=iptv&format=json&showId=', + 'http://www.rte.ie/rteavgen/getplaylist/?type=web&format=json&id=', + ) + + for num, ep_url in enumerate(ENDPOINTS, start=1): + try: + data = self._download_json(ep_url + item_id, item_id) + except ExtractorError as ee: + if num < len(ENDPOINTS) or formats: + continue + if isinstance(ee.cause, HTTPError) and ee.cause.status == 404: + error_info = self._parse_json(ee.cause.response.read().decode(), item_id, fatal=False) + if error_info: + raise ExtractorError( + '%s said: %s' % (self.IE_NAME, error_info['message']), + expected=True) + raise + + # NB the string values in the JSON are stored using XML escaping(!) + show = try_get(data, lambda x: x['shows'][0], dict) + if not show: + continue + + if not info_dict: + title = unescapeHTML(show['title']) + description = unescapeHTML(show.get('description')) + thumbnail = show.get('thumbnail') + duration = float_or_none(show.get('duration'), 1000) + timestamp = parse_iso8601(show.get('published')) + info_dict = { + 'id': item_id, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'timestamp': timestamp, + 'duration': duration, + } + + mg = try_get(show, lambda x: x['media:group'][0], dict) + if not mg: + continue + + if mg.get('url'): + m = re.match(r'(?P<url>rtmpe?://[^/]+)/(?P<app>.+)/(?P<playpath>mp4:.*)', mg['url']) + if m: + m = m.groupdict() + formats.append({ + 'url': m['url'] + '/' + m['app'], + 'app': m['app'], + 'play_path': m['playpath'], + 'player_url': url, + 'ext': 'flv', + 'format_id': 'rtmp', + }) + + if mg.get('hls_server') and mg.get('hls_url'): + formats.extend(self._extract_m3u8_formats( + mg['hls_server'] + mg['hls_url'], item_id, 'mp4', + entry_protocol='m3u8_native', m3u8_id='hls', fatal=False)) + + if mg.get('hds_server') and mg.get('hds_url'): + formats.extend(self._extract_f4m_formats( + mg['hds_server'] + mg['hds_url'], item_id, + f4m_id='hds', fatal=False)) + + mg_rte_server = str_or_none(mg.get('rte:server')) + mg_url = str_or_none(mg.get('url')) + if mg_rte_server and mg_url: + hds_url = url_or_none(mg_rte_server + mg_url) + if hds_url: + formats.extend(self._extract_f4m_formats( + hds_url, item_id, f4m_id='hds', fatal=False)) + + info_dict['formats'] = formats + return info_dict + + +class RteIE(RteBaseIE): + IE_NAME = 'rte' + IE_DESC = 'Raidió Teilifís Éireann TV' + _VALID_URL = r'https?://(?:www\.)?rte\.ie/player/[^/]{2,3}/show/[^/]+/(?P<id>[0-9]+)' + _TEST = { + 'url': 'http://www.rte.ie/player/ie/show/iwitness-862/10478715/', + 'md5': '4a76eb3396d98f697e6e8110563d2604', + 'info_dict': { + 'id': '10478715', + 'ext': 'mp4', + 'title': 'iWitness', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'The spirit of Ireland, one voice and one minute at a time.', + 'duration': 60.046, + 'upload_date': '20151012', + 'timestamp': 1444694160, + }, + } + + +class RteRadioIE(RteBaseIE): + IE_NAME = 'rte:radio' + IE_DESC = 'Raidió Teilifís Éireann radio' + # Radioplayer URLs have two distinct specifier formats, + # the old format #!rii=<channel_id>:<id>:<playable_item_id>:<date>: + # the new format #!rii=b<channel_id>_<id>_<playable_item_id>_<date>_ + # where the IDs are int/empty, the date is DD-MM-YYYY, and the specifier may be truncated. + # An <id> uniquely defines an individual recording, and is the only part we require. 
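+    # e.g. (specifiers taken from the tests below):
+    #   old: rteradioweb.html#!rii=16:10507902:2414:27-12-2015:  -> id '10507902'
+    #   new: rteradioweb.html#!rii=b16_3250678_8861_06-04-2012_  -> id '3250678'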
+ _VALID_URL = r'https?://(?:www\.)?rte\.ie/radio/utils/radioplayer/rteradioweb\.html#!rii=(?:b?[0-9]*)(?:%3A|:|%5F|_)(?P<id>[0-9]+)' + + _TESTS = [{ + # Old-style player URL; HLS and RTMPE formats + 'url': 'http://www.rte.ie/radio/utils/radioplayer/rteradioweb.html#!rii=16:10507902:2414:27-12-2015:', + 'md5': 'c79ccb2c195998440065456b69760411', + 'info_dict': { + 'id': '10507902', + 'ext': 'mp4', + 'title': 'Gloria', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'md5:9ce124a7fb41559ec68f06387cabddf0', + 'timestamp': 1451203200, + 'upload_date': '20151227', + 'duration': 7230.0, + }, + }, { + # New-style player URL; RTMPE formats only + 'url': 'http://rte.ie/radio/utils/radioplayer/rteradioweb.html#!rii=b16_3250678_8861_06-04-2012_', + 'info_dict': { + 'id': '3250678', + 'ext': 'flv', + 'title': 'The Lyric Concert with Paul Herriott', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': '', + 'timestamp': 1333742400, + 'upload_date': '20120406', + 'duration': 7199.016, + }, + 'params': { + # rtmp download + 'skip_download': True, + }, + }] diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rtl2.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtl2.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rtl2.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rtl2.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rtlnl.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtlnl.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rtlnl.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rtlnl.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rtnews.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtnews.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rtnews.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rtnews.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rtp.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtp.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rtp.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rtp.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rtrfm.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtrfm.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rtrfm.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rtrfm.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rts.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rts.py new file mode 100644 index 0000000..9f73d18 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rts.py @@ -0,0 +1,231 @@ +import re + +from .srgssr import SRGSSRIE +from ..compat import compat_str +from ..utils import ( + determine_ext, + int_or_none, + parse_duration, + parse_iso8601, + unescapeHTML, + urljoin, +) + + +class RTSIE(SRGSSRIE): # XXX: Do not subclass from concrete IE + IE_DESC = 'RTS.ch' + _VALID_URL = r'rts:(?P<rts_id>\d+)|https?://(?:.+?\.)?rts\.ch/(?:[^/]+/){2,}(?P<id>[0-9]+)-(?P<display_id>.+?)\.html' + + _TESTS = [ + { + 'url': 'http://www.rts.ch/archives/tv/divers/3449373-les-enfants-terribles.html', + 'md5': '753b877968ad8afaeddccc374d4256a5', + 'info_dict': { + 'id': '3449373', + 'display_id': 'les-enfants-terribles', + 'ext': 'mp4', + 'duration': 1488, + 'title': 'Les Enfants Terribles', + 'description': 'France Pommier et sa soeur 
Luce Feral, les deux filles de ce groupe de 5.', + 'uploader': 'Divers', + 'upload_date': '19680921', + 'timestamp': -40280400, + 'thumbnail': r're:^https?://.*\.image', + 'view_count': int, + }, + 'expected_warnings': ['Unable to download f4m manifest', 'Failed to download m3u8 information'], + }, + { + 'url': 'http://www.rts.ch/emissions/passe-moi-les-jumelles/5624067-entre-ciel-et-mer.html', + 'info_dict': { + 'id': '5624065', + 'title': 'Passe-moi les jumelles', + }, + 'playlist_mincount': 4, + }, + { + 'url': 'http://www.rts.ch/video/sport/hockey/5745975-1-2-kloten-fribourg-5-2-second-but-pour-gotteron-par-kwiatowski.html', + 'info_dict': { + 'id': '5745975', + 'display_id': '1-2-kloten-fribourg-5-2-second-but-pour-gotteron-par-kwiatowski', + 'ext': 'mp4', + 'duration': 48, + 'title': '1/2, Kloten - Fribourg (5-2): second but pour Gottéron par Kwiatowski', + 'description': 'Hockey - Playoff', + 'uploader': 'Hockey', + 'upload_date': '20140403', + 'timestamp': 1396556882, + 'thumbnail': r're:^https?://.*\.image', + 'view_count': int, + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + 'expected_warnings': ['Unable to download f4m manifest', 'Failed to download m3u8 information'], + 'skip': 'Blocked outside Switzerland', + }, + { + 'url': 'http://www.rts.ch/video/info/journal-continu/5745356-londres-cachee-par-un-epais-smog.html', + 'md5': '9bb06503773c07ce83d3cbd793cebb91', + 'info_dict': { + 'id': '5745356', + 'display_id': 'londres-cachee-par-un-epais-smog', + 'ext': 'mp4', + 'duration': 33, + 'title': 'Londres cachée par un épais smog', + 'description': 'Un important voile de smog recouvre Londres depuis mercredi, provoqué par la pollution et du sable du Sahara.', + 'uploader': 'L\'actu en vidéo', + 'upload_date': '20140403', + 'timestamp': 1396537322, + 'thumbnail': r're:^https?://.*\.image', + 'view_count': int, + }, + 'expected_warnings': ['Unable to download f4m manifest', 'Failed to download m3u8 information'], + }, + { + 'url': 'http://www.rts.ch/audio/couleur3/programmes/la-belle-video-de-stephane-laurenceau/5706148-urban-hippie-de-damien-krisl-03-04-2014.html', + 'md5': 'dd8ef6a22dff163d063e2a52bc8adcae', + 'info_dict': { + 'id': '5706148', + 'display_id': 'urban-hippie-de-damien-krisl-03-04-2014', + 'ext': 'mp3', + 'duration': 123, + 'title': '"Urban Hippie", de Damien Krisl', + 'description': 'Des Hippies super glam.', + 'upload_date': '20140403', + 'timestamp': 1396551600, + }, + }, + { + # article with videos on rhs + 'url': 'http://www.rts.ch/sport/hockey/6693917-hockey-davos-decroche-son-31e-titre-de-champion-de-suisse.html', + 'info_dict': { + 'id': '6693917', + 'title': 'Hockey: Davos décroche son 31e titre de champion de Suisse', + }, + 'playlist_mincount': 5, + }, + { + 'url': 'http://pages.rts.ch/emissions/passe-moi-les-jumelles/5624065-entre-ciel-et-mer.html', + 'only_matching': True, + } + ] + + def _real_extract(self, url): + m = self._match_valid_url(url) + media_id = m.group('rts_id') or m.group('id') + display_id = m.group('display_id') or media_id + + def download_json(internal_id): + return self._download_json( + 'http://www.rts.ch/a/%s.html?f=json/article' % internal_id, + display_id) + + all_info = download_json(media_id) + + # media_id extracted out of URL is not always a real id + if 'video' not in all_info and 'audio' not in all_info: + entries = [] + + for item in all_info.get('items', []): + item_url = item.get('url') + if not item_url: + continue + entries.append(self.url_result(item_url, 'RTS')) + + if not entries: + page, urlh = 
self._download_webpage_handle(url, display_id) + if re.match(self._VALID_URL, urlh.url).group('id') != media_id: + return self.url_result(urlh.url, 'RTS') + + # article with videos on rhs + videos = re.findall( + r'<article[^>]+class="content-item"[^>]*>\s*<a[^>]+data-video-urn="urn:([^"]+)"', + page) + if not videos: + videos = re.findall( + r'(?s)<iframe[^>]+class="srg-player"[^>]+src="[^"]+urn:([^"]+)"', + page) + if videos: + entries = [self.url_result('srgssr:%s' % video_urn, 'SRGSSR') for video_urn in videos] + + if entries: + return self.playlist_result(entries, media_id, all_info.get('title')) + + internal_id = self._html_search_regex( + r'<(?:video|audio) data-id="([0-9]+)"', page, + 'internal video id') + all_info = download_json(internal_id) + + media_type = 'video' if 'video' in all_info else 'audio' + + # check for errors + self._get_media_data('rts', media_type, media_id) + + info = all_info['video']['JSONinfo'] if 'video' in all_info else all_info['audio'] + + title = info['title'] + + def extract_bitrate(url): + return int_or_none(self._search_regex( + r'-([0-9]+)k\.', url, 'bitrate', default=None)) + + formats = [] + streams = info.get('streams', {}) + for format_id, format_url in streams.items(): + if format_id == 'hds_sd' and 'hds' in streams: + continue + if format_id == 'hls_sd' and 'hls' in streams: + continue + ext = determine_ext(format_url) + if ext in ('m3u8', 'f4m'): + format_url = self._get_tokenized_src(format_url, media_id, format_id) + if ext == 'f4m': + formats.extend(self._extract_f4m_formats( + format_url + ('?' if '?' not in format_url else '&') + 'hdcore=3.4.0', + media_id, f4m_id=format_id, fatal=False)) + else: + formats.extend(self._extract_m3u8_formats( + format_url, media_id, 'mp4', 'm3u8_native', m3u8_id=format_id, fatal=False)) + else: + formats.append({ + 'format_id': format_id, + 'url': format_url, + 'tbr': extract_bitrate(format_url), + }) + + download_base = 'http://rtsww%s-d.rts.ch/' % ('-a' if media_type == 'audio' else '') + for media in info.get('media', []): + media_url = media.get('url') + if not media_url or re.match(r'https?://', media_url): + continue + rate = media.get('rate') + ext = media.get('ext') or determine_ext(media_url, 'mp4') + format_id = ext + if rate: + format_id += '-%dk' % rate + formats.append({ + 'format_id': format_id, + 'url': urljoin(download_base, media_url), + 'tbr': rate or extract_bitrate(media_url), + }) + + self._check_formats(formats, media_id) + + duration = info.get('duration') or info.get('cutout') or info.get('cutduration') + if isinstance(duration, compat_str): + duration = parse_duration(duration) + + return { + 'id': media_id, + 'display_id': display_id, + 'formats': formats, + 'title': title, + 'description': info.get('intro'), + 'duration': duration, + 'view_count': int_or_none(info.get('plays')), + 'uploader': info.get('programName'), + 'timestamp': parse_iso8601(info.get('broadcast_date')), + 'thumbnail': unescapeHTML(info.get('preview_image_url')), + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rtvcplay.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtvcplay.py new file mode 100644 index 0000000..741c472 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtvcplay.py @@ -0,0 +1,285 @@ +import re + +from .common import InfoExtractor, ExtractorError +from ..utils import ( + clean_html, + determine_ext, + int_or_none, + float_or_none, + js_to_json, + mimetype2ext, + traverse_obj, + urljoin, + url_or_none, +) + + +class 
RTVCPlayBaseIE(InfoExtractor): + _BASE_VALID_URL = r'https?://(?:www\.)?rtvcplay\.co' + + def _extract_player_config(self, webpage, video_id): + return self._search_json( + r'<script\b[^>]*>[^<]*(?:var|let|const)\s+config\s*=', re.sub(r'"\s*\+\s*"', '', webpage), + 'player_config', video_id, transform_source=js_to_json) + + def _extract_formats_and_subtitles_player_config(self, player_config, video_id): + formats, subtitles = [], {} + for source in traverse_obj(player_config, ('sources', ..., lambda _, v: url_or_none(v['url']))): + ext = mimetype2ext(source.get('mimetype'), default=determine_ext(source['url'])) + if ext == 'm3u8': + fmts, subs = self._extract_m3u8_formats_and_subtitles( + source['url'], video_id, 'mp4', fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + else: + formats.append({ + 'url': source['url'], + 'ext': ext, + }) + + return formats, subtitles + + +class RTVCPlayIE(RTVCPlayBaseIE): + _VALID_URL = RTVCPlayBaseIE._BASE_VALID_URL + r'/(?P<category>(?!embed)[^/]+)/(?:[^?#]+/)?(?P<id>[\w-]+)' + + _TESTS = [{ + 'url': 'https://www.rtvcplay.co/en-vivo/canal-institucional', + 'info_dict': { + 'id': 'canal-institucional', + 'title': r're:^Canal Institucional', + 'description': 'md5:eff9e548394175928059320c006031ea', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'live_status': 'is_live', + 'ext': 'mp4', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }, { + 'url': 'https://www.rtvcplay.co/en-vivo/senal-colombia', + 'info_dict': { + 'id': 'senal-colombia', + 'title': r're:^Señal Colombia', + 'description': 'md5:799f16a401d97f40c33a2c6a3e2a507b', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'live_status': 'is_live', + 'ext': 'mp4', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }, { + 'url': 'https://www.rtvcplay.co/en-vivo/radio-nacional', + 'info_dict': { + 'id': 'radio-nacional', + 'title': r're:^Radio Nacional', + 'description': 'md5:5de009bc6a9fa79d2a6cf0b73f977d53', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'live_status': 'is_live', + 'ext': 'mp4', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }, { + 'url': 'https://www.rtvcplay.co/peliculas-ficcion/senoritas', + 'md5': '1288ee6f6d1330d880f98bff2ed710a3', + 'info_dict': { + 'id': 'senoritas', + 'title': 'Señoritas', + 'description': 'md5:f095a2bb52cb6cf279daf6302f86fb32', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'ext': 'mp4', + }, + }, { + 'url': 'https://www.rtvcplay.co/competencias-basicas-ciudadanas-y-socioemocionales/profe-en-tu-casa/james-regresa-clases-28022022', + 'md5': 'f040a7380a269ad633cf837384d5e9fc', + 'info_dict': { + 'id': 'james-regresa-clases-28022022', + 'title': 'James regresa a clases - 28/02/2022', + 'description': 'md5:c5dcdf757c7ab29305e8763c6007e675', + 'ext': 'mp4', + }, + }, { + 'url': 'https://www.rtvcplay.co/peliculas-documentales/llinas-el-cerebro-y-el-universo', + 'info_dict': { + 'id': 'llinas-el-cerebro-y-el-universo', + 'title': 'Llinás, el cerebro y el universo', + 'description': 'md5:add875bf2309bb52b3e8b9b06116d9b0', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + }, + 'playlist_mincount': 3, + }, { + 'url': 'https://www.rtvcplay.co/competencias-basicas-ciudadanas-y-socioemocionales/profe-en-tu-casa', + 'info_dict': { + 'id': 'profe-en-tu-casa', + 'title': 'Profe en tu casa', + 'description': 'md5:47dbe20e263194413b1db2a2805a4f2e', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + }, + 'playlist_mincount': 537, + }, { + 'url': 
'https://www.rtvcplay.co/series-al-oido/relato-de-un-naufrago-una-travesia-del-periodismo-a-la-literatura', + 'info_dict': { + 'id': 'relato-de-un-naufrago-una-travesia-del-periodismo-a-la-literatura', + 'title': 'Relato de un náufrago: una travesía del periodismo a la literatura', + 'description': 'md5:6da28fdca4a5a568ea47ef65ef775603', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + }, + 'playlist_mincount': 5, + }, { + 'url': 'https://www.rtvcplay.co/series-al-oido/diez-versiones', + 'info_dict': { + 'id': 'diez-versiones', + 'title': 'Diez versiones', + 'description': 'md5:997471ed971cb3fd8e41969457675306', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + }, + 'playlist_mincount': 20, + }] + + def _real_extract(self, url): + video_id, category = self._match_valid_url(url).group('id', 'category') + webpage = self._download_webpage(url, video_id) + + hydration = self._search_json( + r'window\.__RTVCPLAY_STATE__\s*=', webpage, 'hydration', + video_id, transform_source=js_to_json)['content']['currentContent'] + + asset_id = traverse_obj(hydration, ('video', 'assetid')) + if asset_id: + hls_url = hydration['base_url_hls'].replace('[node:field_asset_id]', asset_id) + else: + hls_url = traverse_obj(hydration, ('channel', 'hls')) + + metadata = traverse_obj(hydration, { + 'title': 'title', + 'description': 'description', + 'thumbnail': ((('channel', 'image', 'logo'), ('resource', 'image', 'cover_desktop')), 'path'), + }, get_all=False) + + # Probably it's a program's page + if not hls_url: + seasons = traverse_obj( + hydration, ('widgets', lambda _, y: y['type'] == 'seasonList', 'contents'), + get_all=False) + if not seasons: + podcast_episodes = hydration.get('audios') + if not podcast_episodes: + raise ExtractorError('Could not find asset_id nor program playlist nor podcast episodes') + + return self.playlist_result([ + self.url_result(episode['file'], url_transparent=True, **traverse_obj(episode, { + 'title': 'title', + 'description': ('description', {clean_html}), + 'episode_number': ('chapter_number', {float_or_none}, {int_or_none}), + 'season_number': ('season', {int_or_none}), + })) for episode in podcast_episodes], video_id, **metadata) + + entries = [self.url_result( + urljoin(url, episode['slug']), url_transparent=True, + **traverse_obj(season, { + 'season': 'title', + 'season_number': ('season', {int_or_none}), + }), **traverse_obj(episode, { + 'title': 'title', + 'thumbnail': ('image', 'cover', 'path'), + 'episode_number': ('chapter_number', {int_or_none}), + })) for season in seasons for episode in traverse_obj(season, ('contents', ...))] + + return self.playlist_result(entries, video_id, **metadata) + + formats, subtitles = self._extract_m3u8_formats_and_subtitles(hls_url, video_id, 'mp4') + + return { + 'id': video_id, + 'formats': formats, + 'subtitles': subtitles, + 'is_live': category == 'en-vivo', + **metadata, + } + + +class RTVCPlayEmbedIE(RTVCPlayBaseIE): + _VALID_URL = RTVCPlayBaseIE._BASE_VALID_URL + r'/embed/(?P<id>[\w-]+)' + + _TESTS = [{ + 'url': 'https://www.rtvcplay.co/embed/72b0e699-248b-4929-a4a8-3782702fa7f9', + 'md5': 'ed529aeaee7aa2a72afe91ac7d1177a8', + 'info_dict': { + 'id': '72b0e699-248b-4929-a4a8-3782702fa7f9', + 'title': 'Tráiler: Señoritas', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'ext': 'mp4', + } + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + player_config = self._extract_player_config(webpage, video_id) + formats, subtitles = 
self._extract_formats_and_subtitles_player_config(player_config, video_id) + + asset_id = traverse_obj(player_config, ('rtvcplay', 'assetid')) + metadata = {} if not asset_id else self._download_json( + f'https://cms.rtvcplay.co/api/v1/video/asset-id/{asset_id}', video_id, fatal=False) + + return { + 'id': video_id, + 'formats': formats, + 'subtitles': subtitles, + **traverse_obj(metadata, { + 'title': 'title', + 'description': 'description', + 'thumbnail': ('image', ..., 'thumbnail', 'path'), + }, get_all=False) + } + + +class RTVCKalturaIE(RTVCPlayBaseIE): + _VALID_URL = r'https?://media\.rtvc\.gov\.co/kalturartvc/(?P<id>[\w-]+)' + + _TESTS = [{ + 'url': 'https://media.rtvc.gov.co/kalturartvc/indexSC.html', + 'info_dict': { + 'id': 'indexSC', + 'title': r're:^Señal Colombia', + 'description': 'md5:799f16a401d97f40c33a2c6a3e2a507b', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'live_status': 'is_live', + 'ext': 'mp4', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + player_config = self._extract_player_config(webpage, video_id) + formats, subtitles = self._extract_formats_and_subtitles_player_config(player_config, video_id) + + channel_id = traverse_obj(player_config, ('rtvcplay', 'channelId')) + metadata = {} if not channel_id else self._download_json( + f'https://cms.rtvcplay.co/api/v1/taxonomy_term/streaming/{channel_id}', video_id, fatal=False) + + fmts, subs = self._extract_m3u8_formats_and_subtitles( + traverse_obj(metadata, ('channel', 'hls')), video_id, 'mp4', fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + + return { + 'id': video_id, + 'formats': formats, + 'subtitles': subtitles, + 'is_live': True, + **traverse_obj(metadata, { + 'title': 'title', + 'description': 'description', + 'thumbnail': ('channel', 'image', 'logo', 'path'), + }) + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rtve.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtve.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rtve.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rtve.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rtvnh.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtvnh.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rtvnh.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rtvnh.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rtvs.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtvs.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rtvs.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rtvs.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rtvslo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtvslo.py new file mode 100644 index 0000000..39ace7c --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rtvslo.py @@ -0,0 +1,166 @@ +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + int_or_none, + parse_duration, + traverse_obj, + unified_timestamp, + url_or_none, +) + + +class RTVSLOIE(InfoExtractor): + IE_NAME = 'rtvslo.si' + _VALID_URL = r'''(?x) + https?://(?: + (?:365|4d)\.rtvslo.si/arhiv/[^/?#&;]+| + (?:www\.)?rtvslo\.si/rtv365/arhiv + )/(?P<id>\d+)''' + _GEO_COUNTRIES = ['SI'] + + _API_BASE = 
'https://api.rtvslo.si/ava/{}/{}?client_id=82013fb3a531d5414f478747c1aca622' + SUB_LANGS_MAP = {'Slovenski': 'sl'} + + _TESTS = [ + { + 'url': 'https://www.rtvslo.si/rtv365/arhiv/174842550?s=tv', + 'info_dict': { + 'id': '174842550', + 'ext': 'mp4', + 'release_timestamp': 1643140032, + 'upload_date': '20220125', + 'series': 'Dnevnik', + 'thumbnail': 'https://img.rtvcdn.si/_up/ava/ava_misc/show_logos/92/dnevnik_3_wide2.jpg', + 'description': 'md5:76a18692757aeb8f0f51221106277dd2', + 'timestamp': 1643137046, + 'title': 'Dnevnik', + 'series_id': '92', + 'release_date': '20220125', + 'duration': 1789, + }, + }, { + 'url': 'https://365.rtvslo.si/arhiv/utrip/174843754', + 'info_dict': { + 'id': '174843754', + 'ext': 'mp4', + 'series_id': '94', + 'release_date': '20220129', + 'timestamp': 1643484455, + 'title': 'Utrip', + 'duration': 813, + 'thumbnail': 'https://img.rtvcdn.si/_up/ava/ava_misc/show_logos/94/utrip_1_wide2.jpg', + 'description': 'md5:77f2892630c7b17bb7a5bb84319020c9', + 'release_timestamp': 1643485825, + 'upload_date': '20220129', + 'series': 'Utrip', + }, + }, { + 'url': 'https://365.rtvslo.si/arhiv/il-giornale-della-sera/174844609', + 'info_dict': { + 'id': '174844609', + 'ext': 'mp3', + 'series_id': '106615841', + 'title': 'Il giornale della sera', + 'duration': 1328, + 'series': 'Il giornale della sera', + 'timestamp': 1643743800, + 'release_timestamp': 1643745424, + 'thumbnail': 'https://img.rtvcdn.si/_up/ava/ava_misc/show_logos/il-giornale-della-sera_wide2.jpg', + 'upload_date': '20220201', + 'tbr': 128000, + 'release_date': '20220201', + }, + }, { + 'url': 'https://365.rtvslo.si/arhiv/razred-zase/148350750', + 'info_dict': { + 'id': '148350750', + 'ext': 'mp4', + 'title': 'Prvi šolski dan, mozaična oddaja za mlade', + 'series': 'Razred zase', + 'series_id': '148185730', + 'duration': 1481, + 'upload_date': '20121019', + 'timestamp': 1350672122, + 'release_date': '20121019', + 'release_timestamp': 1350672122, + 'thumbnail': 'https://img.rtvcdn.si/_up/ava/ava_misc/show_logos/148185730/razred_zase_2014_logo_4d_wide2.jpg', + }, + }, { + 'url': 'https://4d.rtvslo.si/arhiv/dnevnik/174842550', + 'only_matching': True + } + ] + + def _real_extract(self, url): + v_id = self._match_id(url) + meta = self._download_json(self._API_BASE.format('getRecordingDrm', v_id), v_id)['response'] + + thumbs = [{'id': k, 'url': v, 'http_headers': {'Accept': 'image/jpeg'}} + for k, v in (meta.get('images') or {}).items()] + + subs = {} + for s in traverse_obj(meta, 'subs', 'subtitles', default=[]): + lang = self.SUB_LANGS_MAP.get(s.get('language'), s.get('language') or 'und') + subs.setdefault(lang, []).append({ + 'url': s.get('file'), + 'ext': traverse_obj(s, 'format', expected_type=str.lower), + }) + + jwt = meta.get('jwt') + if not jwt: + raise ExtractorError('Site did not provide an authentication token, cannot proceed.') + + media = self._download_json(self._API_BASE.format('getMedia', v_id), v_id, query={'jwt': jwt})['response'] + + formats = [] + skip_protocols = ['smil', 'f4m', 'dash'] + adaptive_url = traverse_obj(media, ('addaptiveMedia', 'hls_sec'), expected_type=url_or_none) + if adaptive_url: + formats = self._extract_wowza_formats(adaptive_url, v_id, skip_protocols=skip_protocols) + + adaptive_url = traverse_obj(media, ('addaptiveMedia_sl', 'hls_sec'), expected_type=url_or_none) + if adaptive_url: + for f in self._extract_wowza_formats(adaptive_url, v_id, skip_protocols=skip_protocols): + formats.append({ + **f, + 'format_id': 'sign-' + f['format_id'], + 'format_note': 'Sign language interpretation',
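+                    # Sign-language renditions are de-prioritised; audio tracks
+                    # the API apparently mislabels as English are remapped to
+                    # Slovenian by the 'language' override below.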
'preference': -10, + 'language': ( + 'slv' if f.get('language') == 'eng' and f.get('acodec') != 'none' + else f.get('language')) + }) + + for mediafile in traverse_obj(media, ('mediaFiles', lambda _, v: url_or_none(v['streams']['https']))): + formats.append(traverse_obj(mediafile, { + 'url': ('streams', 'https'), + 'ext': ('mediaType', {str.lower}), + 'width': ('width', {int_or_none}), + 'height': ('height', {int_or_none}), + 'tbr': ('bitrate', {int_or_none}), + 'filesize': ('filesize', {int_or_none}), + })) + + for mediafile in traverse_obj(media, ('mediaFiles', lambda _, v: url_or_none(v['streams']['hls_sec']))): + formats.extend(self._extract_wowza_formats( + mediafile['streams']['hls_sec'], v_id, skip_protocols=skip_protocols)) + + if any('intermission.mp4' in x['url'] for x in formats): + self.raise_geo_restricted(countries=self._GEO_COUNTRIES, metadata_available=True) + if any('dummy_720p.mp4' in x.get('manifest_url', '') for x in formats) and meta.get('stub') == 'error': + raise ExtractorError(f'{self.IE_NAME} said: Clip not available', expected=True) + + return { + 'id': v_id, + 'webpage_url': ''.join(traverse_obj(meta, ('canonical', ('domain', 'path')))), + 'title': meta.get('title'), + 'formats': formats, + 'subtitles': subs, + 'thumbnails': thumbs, + 'description': meta.get('description'), + 'timestamp': unified_timestamp(traverse_obj(meta, 'broadcastDate', ('broadcastDates', 0))), + 'release_timestamp': unified_timestamp(meta.get('recordingDate')), + 'duration': meta.get('duration') or parse_duration(meta.get('length')), + 'tags': meta.get('genre'), + 'series': meta.get('showName'), + 'series_id': meta.get('showId'), + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ruhd.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ruhd.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ruhd.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ruhd.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rule34video.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rule34video.py new file mode 100644 index 0000000..f3250b5 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rule34video.py @@ -0,0 +1,65 @@ +import re + +from ..utils import parse_duration, unescapeHTML +from .common import InfoExtractor + + +class Rule34VideoIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?rule34video\.com/videos/(?P<id>\d+)' + _TESTS = [ + { + 'url': 'https://rule34video.com/videos/3065157/shot-it-mmd-hmv/', + 'md5': 'ffccac2c23799dabbd192621ae4d04f3', + 'info_dict': { + 'id': '3065157', + 'ext': 'mp4', + 'title': 'Shot It-(mmd hmv)', + 'thumbnail': 'https://rule34video.com/contents/videos_screenshots/3065000/3065157/preview.jpg', + 'duration': 347.0, + 'age_limit': 18, + 'tags': 'count:14' + } + }, + { + 'url': 'https://rule34video.com/videos/3065296/lara-in-trouble-ep-7-wildeerstudio/', + 'md5': '6bb5169f9f6b38cd70882bf2e64f6b86', + 'info_dict': { + 'id': '3065296', + 'ext': 'mp4', + 'title': 'Lara in Trouble Ep.
7 [WildeerStudio]', + 'thumbnail': 'https://rule34video.com/contents/videos_screenshots/3065000/3065296/preview.jpg', + 'duration': 938.0, + 'age_limit': 18, + 'tags': 'count:50' + } + }, + ] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + formats = [] + + for mobj in re.finditer(r'<a[^>]+href="(?P<video_url>[^"]+download=true[^"]+)".*>(?P<ext>[^\s]+) (?P<quality>[^<]+)p</a>', webpage): + url, ext, quality = mobj.groups() + formats.append({ + 'url': url, + 'ext': ext.lower(), + 'quality': quality, + }) + + title = self._html_extract_title(webpage) + thumbnail = self._html_search_regex(r'preview_url:\s+\'([^\']+)\'', webpage, 'thumbnail', default=None) + duration = self._html_search_regex(r'"icon-clock"></i>\s+<span>((?:\d+:?)+)', webpage, 'duration', default=None) + + return { + 'id': video_id, + 'formats': formats, + 'title': title, + 'thumbnail': thumbnail, + 'duration': parse_duration(duration), + 'age_limit': 18, + 'tags': list(map(unescapeHTML, re.findall( + r'<a class="tag_item"[^>]+\bhref="https://rule34video\.com/tags/\d+/"[^>]*>(?P<tag>[^>]*)</a>', webpage))), + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/rumble.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rumble.py new file mode 100644 index 0000000..85567d9 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/rumble.py @@ -0,0 +1,391 @@ +import itertools +import re + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + UnsupportedError, + clean_html, + determine_ext, + format_field, + get_element_by_class, + int_or_none, + join_nonempty, + parse_count, + parse_iso8601, + traverse_obj, + unescapeHTML, +) + + +class RumbleEmbedIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?rumble\.com/embed/(?:[0-9a-z]+\.)?(?P<id>[0-9a-z]+)' + _EMBED_REGEX = [fr'(?:<(?:script|iframe)[^>]+\bsrc=|["\']embedUrl["\']\s*:\s*)["\'](?P<url>{_VALID_URL})'] + _TESTS = [{ + 'url': 'https://rumble.com/embed/v5pv5f', + 'md5': '36a18a049856720189f30977ccbb2c34', + 'info_dict': { + 'id': 'v5pv5f', + 'ext': 'mp4', + 'title': 'WMAR 2 News Latest Headlines | October 20, 6pm', + 'timestamp': 1571611968, + 'upload_date': '20191020', + 'channel_url': 'https://rumble.com/c/WMAR', + 'channel': 'WMAR', + 'thumbnail': 'https://sp.rmbl.ws/s8/1/5/M/z/1/5Mz1a.qR4e-small-WMAR-2-News-Latest-Headline.jpg', + 'duration': 234, + 'uploader': 'WMAR', + 'live_status': 'not_live', + } + }, { + 'url': 'https://rumble.com/embed/vslb7v', + 'md5': '7418035de1a30a178b8af34dc2b6a52b', + 'info_dict': { + 'id': 'vslb7v', + 'ext': 'mp4', + 'title': 'Defense Sec. 
says US Commitment to NATO Defense \'Ironclad\'', + 'timestamp': 1645142135, + 'upload_date': '20220217', + 'channel_url': 'https://rumble.com/c/CyberTechNews', + 'channel': 'CTNews', + 'thumbnail': 'https://sp.rmbl.ws/s8/6/7/i/9/h/7i9hd.OvCc.jpg', + 'duration': 901, + 'uploader': 'CTNews', + 'live_status': 'not_live', + } + }, { + 'url': 'https://rumble.com/embed/vunh1h', + 'info_dict': { + 'id': 'vunh1h', + 'ext': 'mp4', + 'title': '‘Gideon, op zoek naar de waarheid’ including ENG SUBS', + 'timestamp': 1647197663, + 'upload_date': '20220313', + 'channel_url': 'https://rumble.com/user/BLCKBX', + 'channel': 'BLCKBX', + 'thumbnail': r're:https://.+\.jpg', + 'duration': 5069, + 'uploader': 'BLCKBX', + 'live_status': 'not_live', + 'subtitles': { + 'en': [ + { + 'url': r're:https://.+\.vtt', + 'name': 'English', + 'ext': 'vtt' + } + ] + }, + }, + 'params': {'skip_download': True} + }, { + 'url': 'https://rumble.com/embed/v1essrt', + 'info_dict': { + 'id': 'v1essrt', + 'ext': 'mp4', + 'title': 'startswith:lofi hip hop radio 📚 - beats to relax/study to', + 'timestamp': 1661519399, + 'upload_date': '20220826', + 'channel_url': 'https://rumble.com/c/LofiGirl', + 'channel': 'Lofi Girl', + 'thumbnail': r're:https://.+\.jpg', + 'duration': None, + 'uploader': 'Lofi Girl', + 'live_status': 'is_live', + }, + 'params': {'skip_download': True} + }, { + 'url': 'https://rumble.com/embed/v1amumr', + 'info_dict': { + 'id': 'v1amumr', + 'ext': 'mp4', + 'fps': 60, + 'title': 'Turning Point USA 2022 Student Action Summit DAY 1 - Rumble Exclusive Live', + 'timestamp': 1658518457, + 'upload_date': '20220722', + 'channel_url': 'https://rumble.com/c/RumbleEvents', + 'channel': 'Rumble Events', + 'thumbnail': r're:https://.+\.jpg', + 'duration': 16427, + 'uploader': 'Rumble Events', + 'live_status': 'was_live', + }, + 'params': {'skip_download': True} + }, { + 'url': 'https://rumble.com/embed/ufe9n.v5pv5f', + 'only_matching': True, + }] + + _WEBPAGE_TESTS = [ + { + 'note': 'Rumble JS embed', + 'url': 'https://therightscoop.com/what-does-9-plus-1-plus-1-equal-listen-to-this-audio-of-attempted-kavanaugh-assassins-call-and-youll-get-it', + 'md5': '4701209ac99095592e73dbba21889690', + 'info_dict': { + 'id': 'v15eqxl', + 'ext': 'mp4', + 'channel': 'Mr Producer Media', + 'duration': 92, + 'title': '911 Audio From The Man Who Wanted To Kill Supreme Court Justice Kavanaugh', + 'channel_url': 'https://rumble.com/c/RichSementa', + 'thumbnail': 'https://sp.rmbl.ws/s8/1/P/j/f/A/PjfAe.qR4e-small-911-Audio-From-The-Man-Who-.jpg', + 'timestamp': 1654892716, + 'uploader': 'Mr Producer Media', + 'upload_date': '20220610', + 'live_status': 'not_live', + } + }, + ] + + @classmethod + def _extract_embed_urls(cls, url, webpage): + embeds = tuple(super()._extract_embed_urls(url, webpage)) + if embeds: + return embeds + return [f'https://rumble.com/embed/{mobj.group("id")}' for mobj in re.finditer( + r'<script>[^<]*\bRumble\(\s*"play"\s*,\s*{[^}]*[\'"]?video[\'"]?\s*:\s*[\'"](?P<id>[0-9a-z]+)[\'"]', webpage)] + + def _real_extract(self, url): + video_id = self._match_id(url) + video = self._download_json( + 'https://rumble.com/embedJS/u3/', video_id, + query={'request': 'video', 'ver': 2, 'v': video_id}) + + sys_msg = traverse_obj(video, ('sys', 'msg')) + if sys_msg: + self.report_warning(sys_msg, video_id=video_id) + + if video.get('live') == 0: + live_status = 'not_live' if video.get('livestream_has_dvr') is None else 'was_live' + elif video.get('live') == 1: + live_status = 'is_upcoming' if video.get('livestream_has_dvr') else 
'was_live' + elif video.get('live') == 2: + live_status = 'is_live' + else: + live_status = None + + formats = [] + for ext, ext_info in (video.get('ua') or {}).items(): + if isinstance(ext_info, dict): + for height, video_info in ext_info.items(): + if not traverse_obj(video_info, ('meta', 'h', {int_or_none})): + video_info.setdefault('meta', {})['h'] = height + ext_info = ext_info.values() + + for video_info in ext_info: + meta = video_info.get('meta') or {} + if not video_info.get('url'): + continue + if ext == 'hls': + if meta.get('live') is True and video.get('live') == 1: + live_status = 'post_live' + formats.extend(self._extract_m3u8_formats( + video_info['url'], video_id, + ext='mp4', m3u8_id='hls', fatal=False, live=live_status == 'is_live')) + continue + timeline = ext == 'timeline' + if timeline: + ext = determine_ext(video_info['url']) + formats.append({ + 'ext': ext, + 'acodec': 'none' if timeline else None, + 'url': video_info['url'], + 'format_id': join_nonempty(ext, format_field(meta, 'h', '%sp')), + 'format_note': 'Timeline' if timeline else None, + 'fps': None if timeline else video.get('fps'), + **traverse_obj(meta, { + 'tbr': 'bitrate', + 'filesize': 'size', + 'width': 'w', + 'height': 'h', + }, expected_type=lambda x: int(x) or None) + }) + + subtitles = { + lang: [{ + 'url': sub_info['path'], + 'name': sub_info.get('language') or '', + }] for lang, sub_info in (video.get('cc') or {}).items() if sub_info.get('path') + } + + author = video.get('author') or {} + thumbnails = traverse_obj(video, ('t', ..., {'url': 'i', 'width': 'w', 'height': 'h'})) + if not thumbnails and video.get('i'): + thumbnails = [{'url': video['i']}] + + if live_status in {'is_live', 'post_live'}: + duration = None + else: + duration = int_or_none(video.get('duration')) + + return { + 'id': video_id, + 'title': unescapeHTML(video.get('title')), + 'formats': formats, + 'subtitles': subtitles, + 'thumbnails': thumbnails, + 'timestamp': parse_iso8601(video.get('pubDate')), + 'channel': author.get('name'), + 'channel_url': author.get('url'), + 'duration': duration, + 'uploader': author.get('name'), + 'live_status': live_status, + } + + +class RumbleIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?rumble\.com/(?P<id>v(?!ideos)[\w.-]+)[^/]*$' + _EMBED_REGEX = [ + r'<a class=video-item--a href=(?P<url>/v[\w.-]+\.html)>', + r'<a[^>]+class="videostream__link link"[^>]+href=(?P<url>/v[\w.-]+\.html)[^>]*>'] + _TESTS = [{ + 'add_ie': ['RumbleEmbed'], + 'url': 'https://rumble.com/vdmum1-moose-the-dog-helps-girls-dig-a-snow-fort.html', + 'md5': '53af34098a7f92c4e51cf0bd1c33f009', + 'info_dict': { + 'id': 'vb0ofn', + 'ext': 'mp4', + 'timestamp': 1612662578, + 'uploader': 'LovingMontana', + 'channel': 'LovingMontana', + 'upload_date': '20210207', + 'title': 'Winter-loving dog helps girls dig a snow fort ', + 'description': 'Moose the dog is more than happy to help with digging out this epic snow fort. 
Great job, Moose!', + 'channel_url': 'https://rumble.com/c/c-546523', + 'thumbnail': r're:https://.+\.jpg', + 'duration': 103, + 'like_count': int, + 'dislike_count': int, + 'view_count': int, + 'live_status': 'not_live', + } + }, { + 'url': 'http://www.rumble.com/vDMUM1?key=value', + 'only_matching': True, + }, { + 'note': 'timeline format', + 'url': 'https://rumble.com/v2ea9qb-the-u.s.-cannot-hide-this-in-ukraine-anymore-redacted-with-natali-and-clayt.html', + 'md5': '40d61fec6c0945bca3d0e1dc1aa53d79', + 'params': {'format': 'wv'}, + 'info_dict': { + 'id': 'v2bou5f', + 'ext': 'mp4', + 'uploader': 'Redacted News', + 'upload_date': '20230322', + 'timestamp': 1679445010, + 'title': 'The U.S. CANNOT hide this in Ukraine anymore | Redacted with Natali and Clayton Morris', + 'duration': 892, + 'channel': 'Redacted News', + 'description': 'md5:aaad0c5c3426d7a361c29bdaaced7c42', + 'channel_url': 'https://rumble.com/c/Redacted', + 'live_status': 'not_live', + 'thumbnail': 'https://sp.rmbl.ws/s8/1/d/x/2/O/dx2Oi.qR4e-small-The-U.S.-CANNOT-hide-this-i.jpg', + 'like_count': int, + 'dislike_count': int, + 'view_count': int, + }, + }, { + 'url': 'https://rumble.com/v2e7fju-the-covid-twitter-files-drop-protecting-fauci-while-censoring-the-truth-wma.html', + 'info_dict': { + 'id': 'v2blzyy', + 'ext': 'mp4', + 'live_status': 'was_live', + 'release_timestamp': 1679446804, + 'description': 'md5:2ac4908ccfecfb921f8ffa4b30c1e636', + 'release_date': '20230322', + 'timestamp': 1679445692, + 'duration': 4435, + 'upload_date': '20230322', + 'title': 'The Covid Twitter Files Drop: Protecting Fauci While Censoring The Truth w/Matt Taibbi', + 'uploader': 'Kim Iversen', + 'channel_url': 'https://rumble.com/c/KimIversen', + 'channel': 'Kim Iversen', + 'thumbnail': 'https://sp.rmbl.ws/s8/1/6/b/w/O/6bwOi.qR4e-small-The-Covid-Twitter-Files-Dro.jpg', + 'like_count': int, + 'dislike_count': int, + 'view_count': int, + }, + }] + + _WEBPAGE_TESTS = [{ + 'url': 'https://rumble.com/videos?page=2', + 'playlist_mincount': 24, + 'info_dict': { + 'id': 'videos?page=2', + 'title': 'All videos', + 'description': 'Browse videos uploaded to Rumble.com', + 'age_limit': 0, + }, + }, { + 'url': 'https://rumble.com/browse/live', + 'playlist_mincount': 25, + 'info_dict': { + 'id': 'live', + 'title': 'Browse', + 'age_limit': 0, + }, + }, { + 'url': 'https://rumble.com/search/video?q=rumble&sort=views', + 'playlist_mincount': 24, + 'info_dict': { + 'id': 'video?q=rumble&sort=views', + 'title': 'Search results for: rumble', + 'age_limit': 0, + }, + }] + + def _real_extract(self, url): + page_id = self._match_id(url) + webpage = self._download_webpage(url, page_id) + url_info = next(RumbleEmbedIE.extract_from_webpage(self._downloader, url, webpage), None) + if not url_info: + raise UnsupportedError(url) + + return { + '_type': 'url_transparent', + 'ie_key': url_info['ie_key'], + 'url': url_info['url'], + 'release_timestamp': parse_iso8601(self._search_regex( + r'(?:Livestream begins|Streamed on):\s+<time datetime="([^"]+)', webpage, 'release date', default=None)), + 'view_count': int_or_none(self._search_regex( + r'"userInteractionCount"\s*:\s*(\d+)', webpage, 'view count', default=None)), + 'like_count': parse_count(self._search_regex( + r'<span data-js="rumbles_up_votes">\s*([\d,.KM]+)', webpage, 'like count', default=None)), + 'dislike_count': parse_count(self._search_regex( + r'<span data-js="rumbles_down_votes">\s*([\d,.KM]+)', webpage, 'dislike count', default=None)), + 'description': clean_html(get_element_by_class('media-description', 
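+            # The vote-count regexes above capture human-readable values such
+            # as '1.2K'; parse_count() converts them to plain integers.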
webpage)) + } + + +class RumbleChannelIE(InfoExtractor): + _VALID_URL = r'(?P<url>https?://(?:www\.)?rumble\.com/(?:c|user)/(?P<id>[^&?#$/]+))' + + _TESTS = [{ + 'url': 'https://rumble.com/c/Styxhexenhammer666', + 'playlist_mincount': 1160, + 'info_dict': { + 'id': 'Styxhexenhammer666', + }, + }, { + 'url': 'https://rumble.com/user/goldenpoodleharleyeuna', + 'playlist_mincount': 4, + 'info_dict': { + 'id': 'goldenpoodleharleyeuna', + }, + }] + + def entries(self, url, playlist_id): + for page in itertools.count(1): + try: + webpage = self._download_webpage(f'{url}?page={page}', playlist_id, note='Downloading page %d' % page) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 404: + break + raise + for video_url in re.findall(r'class=video-item--a\s?href=([^>]+\.html)', webpage): + yield self.url_result('https://rumble.com' + video_url) + + def _real_extract(self, url): + url, playlist_id = self._match_valid_url(url).groups() + return self.playlist_result(self.entries(url, playlist_id), playlist_id=playlist_id) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rutube.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rutube.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rutube.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rutube.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/rutv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/rutv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/rutv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/rutv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ruutu.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ruutu.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ruutu.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ruutu.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ruv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ruv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ruv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ruv.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/s4c.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/s4c.py new file mode 100644 index 0000000..67eff72 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/s4c.py @@ -0,0 +1,103 @@ +from .common import InfoExtractor +from ..utils import traverse_obj, url_or_none + + +class S4CIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?s4c\.cymru/clic/programme/(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.s4c.cymru/clic/programme/861362209', + 'info_dict': { + 'id': '861362209', + 'ext': 'mp4', + 'title': 'Y Swn', + 'description': 'md5:f7681a30e4955b250b3224aa9fe70cf0', + 'duration': 5340, + 'thumbnail': 'https://www.s4c.cymru/amg/1920x1080/Y_Swn_2023S4C_099_ii.jpg' + }, + }, { + 'url': 'https://www.s4c.cymru/clic/programme/856636948', + 'info_dict': { + 'id': '856636948', + 'ext': 'mp4', + 'title': 'Am Dro', + 'duration': 2880, + 'description': 'md5:100d8686fc9a632a0cb2db52a3433ffe', + 'thumbnail': 'https://www.s4c.cymru/amg/1920x1080/Am_Dro_2022-23S4C_P6_4005.jpg' + }, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + details = self._download_json( + f'https://www.s4c.cymru/df/full_prog_details?lang=e&programme_id={video_id}', + video_id, fatal=False) + + player_config = 
self._download_json( + 'https://player-api.s4c-cdn.co.uk/player-configuration/prod', video_id, query={ + 'programme_id': video_id, + 'signed': '0', + 'lang': 'en', + 'mode': 'od', + 'appId': 'clic', + 'streamName': '', + }, note='Downloading player config JSON') + subtitles = {} + for sub in traverse_obj(player_config, ('subtitles', lambda _, v: url_or_none(v['0']))): + subtitles.setdefault(sub.get('3', 'en'), []).append({ + 'url': sub['0'], + 'name': sub.get('1'), + }) + m3u8_url = self._download_json( + 'https://player-api.s4c-cdn.co.uk/streaming-urls/prod', video_id, query={ + 'mode': 'od', + 'application': 'clic', + 'region': 'WW', + 'extra': 'false', + 'thirdParty': 'false', + 'filename': player_config['filename'], + }, note='Downloading streaming urls JSON')['hls'] + + return { + 'id': video_id, + 'formats': self._extract_m3u8_formats(m3u8_url, video_id, 'mp4', m3u8_id='hls'), + 'subtitles': subtitles, + 'thumbnail': url_or_none(player_config.get('poster')), + **traverse_obj(details, ('full_prog_details', 0, { + 'title': (('programme_title', 'series_title'), {str}), + 'description': ('full_billing', {str.strip}), + 'duration': ('duration', {lambda x: int(x) * 60}), + }), get_all=False), + } + + +class S4CSeriesIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?s4c\.cymru/clic/series/(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.s4c.cymru/clic/series/864982911', + 'playlist_mincount': 6, + 'info_dict': { + 'id': '864982911', + 'title': 'Iaith ar Daith', + }, + }, { + 'url': 'https://www.s4c.cymru/clic/series/866852587', + 'playlist_mincount': 8, + 'info_dict': { + 'id': '866852587', + 'title': 'FFIT Cymru', + }, + }] + + def _real_extract(self, url): + series_id = self._match_id(url) + series_details = self._download_json( + 'https://www.s4c.cymru/df/series_details', series_id, query={ + 'lang': 'e', + 'series_id': series_id, + 'show_prog_in_series': 'Y' + }, note='Downloading series details JSON') + + return self.playlist_result( + [self.url_result(f'https://www.s4c.cymru/clic/programme/{episode_id}', S4CIE, episode_id) + for episode_id in traverse_obj(series_details, ('other_progs_in_series', ..., 'id'))], + series_id, traverse_obj(series_details, ('full_prog_details', 0, 'series_title', {str}))) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/safari.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/safari.py new file mode 100644 index 0000000..8d322d7 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/safari.py @@ -0,0 +1,259 @@ +import json +import re + +from .common import InfoExtractor + +from ..compat import ( + compat_parse_qs, + compat_urlparse, +) +from ..utils import ( + ExtractorError, + update_url_query, +) + + +class SafariBaseIE(InfoExtractor): + _LOGIN_URL = 'https://learning.oreilly.com/accounts/login/' + _NETRC_MACHINE = 'safari' + + _API_BASE = 'https://learning.oreilly.com/api/v1' + _API_FORMAT = 'json' + + LOGGED_IN = False + + def _perform_login(self, username, password): + _, urlh = self._download_webpage_handle( + 'https://learning.oreilly.com/accounts/login-check/', None, + 'Downloading login page') + + def is_logged(urlh): + return 'learning.oreilly.com/home/' in urlh.url + + if is_logged(urlh): + self.LOGGED_IN = True + return + + redirect_url = urlh.url + parsed_url = compat_urlparse.urlparse(redirect_url) + qs = compat_parse_qs(parsed_url.query) + next_uri = compat_urlparse.urljoin( + 'https://api.oreilly.com', qs['next'][0]) + + auth, urlh = self._download_json_handle( + 
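+            # Credentials are POSTed as JSON; a 400 response still carries a
+            # JSON body with the failure reason, hence expected_status=400
+            # below instead of treating it as a hard error.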
'https://www.oreilly.com/member/auth/login/', None, 'Logging in', + data=json.dumps({ + 'email': username, + 'password': password, + 'redirect_uri': next_uri, + }).encode(), headers={ + 'Content-Type': 'application/json', + 'Referer': redirect_url, + }, expected_status=400) + + credentials = auth.get('credentials') + if (not auth.get('logged_in') and not auth.get('redirect_uri') + and credentials): + raise ExtractorError( + 'Unable to login: %s' % credentials, expected=True) + + # oreilly serves two same instances of the following cookies + # in Set-Cookie header and expects first one to be actually set + for cookie in ('groot_sessionid', 'orm-jwt', 'orm-rt'): + self._apply_first_set_cookie_header(urlh, cookie) + + _, urlh = self._download_webpage_handle( + auth.get('redirect_uri') or next_uri, None, 'Completing login',) + + if is_logged(urlh): + self.LOGGED_IN = True + return + + raise ExtractorError('Unable to log in') + + +class SafariIE(SafariBaseIE): + IE_NAME = 'safari' + IE_DESC = 'safaribooksonline.com online video' + _VALID_URL = r'''(?x) + https?:// + (?:www\.)?(?:safaribooksonline|(?:learning\.)?oreilly)\.com/ + (?: + library/view/[^/]+/(?P<course_id>[^/]+)/(?P<part>[^/?\#&]+)\.html| + videos/[^/]+/[^/]+/(?P<reference_id>[^-]+-[^/?\#&]+) + ) + ''' + + _TESTS = [{ + 'url': 'https://www.safaribooksonline.com/library/view/hadoop-fundamentals-livelessons/9780133392838/part00.html', + 'md5': 'dcc5a425e79f2564148652616af1f2a3', + 'info_dict': { + 'id': '0_qbqx90ic', + 'ext': 'mp4', + 'title': 'Introduction to Hadoop Fundamentals LiveLessons', + 'timestamp': 1437758058, + 'upload_date': '20150724', + 'uploader_id': 'stork', + }, + }, { + # non-digits in course id + 'url': 'https://www.safaribooksonline.com/library/view/create-a-nodejs/100000006A0210/part00.html', + 'only_matching': True, + }, { + 'url': 'https://www.safaribooksonline.com/library/view/learning-path-red/9780134664057/RHCE_Introduction.html', + 'only_matching': True, + }, { + 'url': 'https://www.safaribooksonline.com/videos/python-programming-language/9780134217314/9780134217314-PYMC_13_00', + 'only_matching': True, + }, { + 'url': 'https://learning.oreilly.com/videos/hadoop-fundamentals-livelessons/9780133392838/9780133392838-00_SeriesIntro', + 'only_matching': True, + }, { + 'url': 'https://www.oreilly.com/library/view/hadoop-fundamentals-livelessons/9780133392838/00_SeriesIntro.html', + 'only_matching': True, + }] + + _PARTNER_ID = '1926081' + _UICONF_ID = '29375172' + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + + reference_id = mobj.group('reference_id') + if reference_id: + video_id = reference_id + partner_id = self._PARTNER_ID + ui_id = self._UICONF_ID + else: + video_id = '%s-%s' % (mobj.group('course_id'), mobj.group('part')) + + webpage, urlh = self._download_webpage_handle(url, video_id) + + mobj = re.match(self._VALID_URL, urlh.url) + reference_id = mobj.group('reference_id') + if not reference_id: + reference_id = self._search_regex( + r'data-reference-id=(["\'])(?P<id>(?:(?!\1).)+)\1', + webpage, 'kaltura reference id', group='id') + partner_id = self._search_regex( + r'data-partner-id=(["\'])(?P<id>(?:(?!\1).)+)\1', + webpage, 'kaltura widget id', default=self._PARTNER_ID, + group='id') + ui_id = self._search_regex( + r'data-ui-id=(["\'])(?P<id>(?:(?!\1).)+)\1', + webpage, 'kaltura uiconf id', default=self._UICONF_ID, + group='id') + + query = { + 'wid': '_%s' % partner_id, + 'uiconf_id': ui_id, + 'flashvars[referenceId]': reference_id, + } + + if self.LOGGED_IN: + kaltura_session 
= self._download_json( + '%s/player/kaltura_session/?reference_id=%s' % (self._API_BASE, reference_id), + video_id, 'Downloading kaltura session JSON', + 'Unable to download kaltura session JSON', fatal=False, + headers={'Accept': 'application/json'}) + if kaltura_session: + session = kaltura_session.get('session') + if session: + query['flashvars[ks]'] = session + + return self.url_result(update_url_query( + 'https://cdnapisec.kaltura.com/html5/html5lib/v2.37.1/mwEmbedFrame.php', query), + 'Kaltura') + + +class SafariApiIE(SafariBaseIE): + IE_NAME = 'safari:api' + _VALID_URL = r'https?://(?:www\.)?(?:safaribooksonline|(?:learning\.)?oreilly)\.com/api/v1/book/(?P<course_id>[^/]+)/chapter(?:-content)?/(?P<part>[^/?#&]+)\.html' + + _TESTS = [{ + 'url': 'https://www.safaribooksonline.com/api/v1/book/9780133392838/chapter/part00.html', + 'only_matching': True, + }, { + 'url': 'https://www.safaribooksonline.com/api/v1/book/9780134664057/chapter/RHCE_Introduction.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + part = self._download_json( + url, '%s/%s' % (mobj.group('course_id'), mobj.group('part')), + 'Downloading part JSON') + web_url = part['web_url'] + if 'library/view' in web_url: + web_url = web_url.replace('library/view', 'videos') + natural_keys = part['natural_key'] + web_url = f'{web_url.rsplit("/", 1)[0]}/{natural_keys[0]}-{natural_keys[1][:-5]}' + return self.url_result(web_url, SafariIE.ie_key()) + + +class SafariCourseIE(SafariBaseIE): + IE_NAME = 'safari:course' + IE_DESC = 'safaribooksonline.com online courses' + + _VALID_URL = r'''(?x) + https?:// + (?: + (?:www\.)?(?:safaribooksonline|(?:learning\.)?oreilly)\.com/ + (?: + library/view/[^/]+| + api/v1/book| + videos/[^/]+ + )| + techbus\.safaribooksonline\.com + ) + /(?P<id>[^/]+) + ''' + + _TESTS = [{ + 'url': 'https://www.safaribooksonline.com/library/view/hadoop-fundamentals-livelessons/9780133392838/', + 'info_dict': { + 'id': '9780133392838', + 'title': 'Hadoop Fundamentals LiveLessons', + }, + 'playlist_count': 22, + 'skip': 'Requires safaribooksonline account credentials', + }, { + 'url': 'https://www.safaribooksonline.com/api/v1/book/9781449396459/?override_format=json', + 'only_matching': True, + }, { + 'url': 'http://techbus.safaribooksonline.com/9780134426365', + 'only_matching': True, + }, { + 'url': 'https://www.safaribooksonline.com/videos/python-programming-language/9780134217314', + 'only_matching': True, + }, { + 'url': 'https://learning.oreilly.com/videos/hadoop-fundamentals-livelessons/9780133392838', + 'only_matching': True, + }, { + 'url': 'https://www.oreilly.com/library/view/hadoop-fundamentals-livelessons/9780133392838/', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return (False if SafariIE.suitable(url) or SafariApiIE.suitable(url) + else super(SafariCourseIE, cls).suitable(url)) + + def _real_extract(self, url): + course_id = self._match_id(url) + + course_json = self._download_json( + '%s/book/%s/?override_format=%s' % (self._API_BASE, course_id, self._API_FORMAT), + course_id, 'Downloading course JSON') + + if 'chapters' not in course_json: + raise ExtractorError( + 'No chapters found for course %s' % course_id, expected=True) + + entries = [ + self.url_result(chapter, SafariApiIE.ie_key()) + for chapter in course_json['chapters']] + + course_title = course_json['title'] + + return self.playlist_result(entries, course_id, course_title) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/saitosan.py 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/saitosan.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/saitosan.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/saitosan.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/samplefocus.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/samplefocus.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/samplefocus.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/samplefocus.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sapo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sapo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sapo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sapo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/savefrom.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/savefrom.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/savefrom.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/savefrom.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/sbs.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sbs.py new file mode 100644 index 0000000..7a91150 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/sbs.py @@ -0,0 +1,158 @@ +from .common import InfoExtractor +from ..networking import HEADRequest +from ..utils import ( + float_or_none, + int_or_none, + parse_duration, + parse_iso8601, + traverse_obj, + update_url_query, + url_or_none, +) + + +class SBSIE(InfoExtractor): + IE_DESC = 'sbs.com.au' + _VALID_URL = r'''(?x) + https?://(?:www\.)?sbs\.com\.au/(?: + ondemand(?: + /video/(?:single/)?| + /(?:movie|tv-program)/[^/]+/| + /(?:tv|news)-series/(?:[^/]+/){3}| + .*?\bplay=|/watch/ + )|news/(?:embeds/)?video/ + )(?P<id>[0-9]+)''' + _EMBED_REGEX = [r'''(?x) + (?: + <meta\s+property="og:video"\s+content=| + <iframe[^>]+?src= + ) + (["\'])(?P<url>https?://(?:www\.)?sbs\.com\.au/ondemand/video/.+?)\1'''] + + _TESTS = [{ + # Original URL is handled by the generic IE which finds the iframe: + # http://www.sbs.com.au/thefeed/blog/2014/08/21/dingo-conservation + 'url': 'http://www.sbs.com.au/ondemand/video/single/320403011771/?source=drupal&vertical=thefeed', + 'md5': '31f84a7a19b53635db63c73f8ab0c4a7', + 'info_dict': { + 'id': '320403011771', # '_rFBPRPO4pMR', + 'ext': 'mp4', + 'title': 'Dingo Conservation (The Feed)', + 'description': 'md5:f250a9856fca50d22dec0b5b8015f8a5', + 'thumbnail': r're:https?://.*\.jpg', + 'duration': 308, + 'timestamp': 1408613220, + 'upload_date': '20140821', + 'uploader': 'SBSC', + 'tags': None, + 'categories': None, + }, + 'expected_warnings': ['Unable to download JSON metadata'], + }, { + 'url': 'http://www.sbs.com.au/ondemand/video/320403011771/Dingo-Conservation-The-Feed', + 'only_matching': True, + }, { + 'url': 'http://www.sbs.com.au/news/video/471395907773/The-Feed-July-9', + 'only_matching': True, + }, { + 'url': 'https://www.sbs.com.au/ondemand/?play=1836638787723', + 'only_matching': True, + }, { + 'url': 'https://www.sbs.com.au/ondemand/program/inside-windsor-castle?play=1283505731842', + 'only_matching': True, + }, { + 'url': 'https://www.sbs.com.au/news/embeds/video/1840778819866', + 'only_matching': True, + }, { + 'url': 'https://www.sbs.com.au/ondemand/watch/1698704451971', + 'only_matching': True, + }, { + 'url': 'https://www.sbs.com.au/ondemand/movie/coherence/1469404227931',
'only_matching': True, + }, { + 'note': 'Live stream', + 'url': 'https://www.sbs.com.au/ondemand/video/1726824003663/sbs-24x7-live-stream-nsw', + 'only_matching': True, + }, { + 'url': 'https://www.sbs.com.au/ondemand/news-series/dateline/dateline-2022/dateline-s2022-ep26/2072245827515', + 'only_matching': True, + }, { + 'url': 'https://www.sbs.com.au/ondemand/tv-series/the-handmaids-tale/season-5/the-handmaids-tale-s5-ep1/2065631811776', + 'only_matching': True, + }, { + 'url': 'https://www.sbs.com.au/ondemand/tv-program/autun-romes-forgotten-sister/2116212803602', + 'only_matching': True, + }] + + _GEO_COUNTRIES = ['AU'] + _AUS_TV_PARENTAL_GUIDELINES = { + 'P': 0, + 'C': 7, + 'G': 0, + 'PG': 0, + 'M': 14, + 'MA15+': 15, + 'MAV15+': 15, + 'R18+': 18, + } + _PLAYER_API = 'https://www.sbs.com.au/api/v3' + + def _real_extract(self, url): + video_id = self._match_id(url) + formats, subtitles = self._extract_smil_formats_and_subtitles( + update_url_query(f'{self._PLAYER_API}/video_smil', {'id': video_id}), video_id) + + if not formats: + urlh = self._request_webpage( + HEADRequest('https://sbs-vod-prod-01.akamaized.net/'), video_id, + note='Checking geo-restriction', fatal=False, expected_status=403) + if urlh: + error_reasons = urlh.headers.get_all('x-error-reason') or [] + if 'geo-blocked' in error_reasons: + self.raise_geo_restricted(countries=['AU']) + self.raise_no_formats('No formats are available', video_id=video_id) + + media = traverse_obj(self._download_json( + f'{self._PLAYER_API}/video_stream', video_id, fatal=False, + query={'id': video_id, 'context': 'tv'}), ('video_object', {dict})) or {} + + media.update(self._download_json( + f'https://catalogue.pr.sbsod.com/mpx-media/{video_id}', + video_id, fatal=not media) or {}) + + # For named episodes, use the catalogue's title to set episode, rather than generic 'Episode N'. 
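+ # ('epName' feeds the 'episode' field in the traverse_obj mapping below, so this + # overwrite only applies when the item belongs to a series ('partOfSeries').)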
+ if traverse_obj(media, ('partOfSeries', {dict})): + media['epName'] = traverse_obj(media, ('title', {str})) + + return { + 'id': video_id, + **traverse_obj(media, { + 'title': ('name', {str}), + 'description': ('description', {str}), + 'channel': ('taxonomy', 'channel', 'name', {str}), + 'series': ((('partOfSeries', 'name'), 'seriesTitle'), {str}), + 'series_id': ((('partOfSeries', 'uuid'), 'seriesID'), {str}), + 'season_number': ('seasonNumber', {int_or_none}), + 'episode': ('epName', {str}), + 'episode_number': ('episodeNumber', {int_or_none}), + 'timestamp': (('datePublished', ('publication', 'startDate')), {parse_iso8601}), + 'release_year': ('releaseYear', {int_or_none}), + 'duration': ('duration', ({float_or_none}, {parse_duration})), + 'is_live': ('liveStream', {bool}), + 'age_limit': (('classificationID', 'contentRating'), {str.upper}, { + lambda x: self._AUS_TV_PARENTAL_GUIDELINES.get(x)}), # dict.get is unhashable in py3.7 + }, get_all=False), + **traverse_obj(media, { + 'categories': (('genres', ...), ('taxonomy', ('genre', 'subgenre'), 'name'), {str}), + 'tags': (('consumerAdviceTexts', ('sbsSubCertification', 'consumerAdvice')), ..., {str}), + 'thumbnails': ('thumbnails', lambda _, v: url_or_none(v['contentUrl']), { + 'id': ('name', {str}), + 'url': 'contentUrl', + 'width': ('width', {int_or_none}), + 'height': ('height', {int_or_none}), + }), + }), + 'formats': formats, + 'subtitles': subtitles, + 'uploader': 'SBSC', + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/screen9.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/screen9.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/screen9.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/screen9.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/screencast.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/screencast.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/screencast.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/screencast.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/screencastify.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/screencastify.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/screencastify.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/screencastify.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/screencastomatic.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/screencastomatic.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/screencastomatic.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/screencastomatic.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/scrippsnetworks.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/scrippsnetworks.py new file mode 100644 index 0000000..7f0bc96 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/scrippsnetworks.py @@ -0,0 +1,153 @@ +import json +import hashlib + +from .aws import AWSIE +from .anvato import AnvatoIE +from .common import InfoExtractor +from ..utils import ( + smuggle_url, + urlencode_postdata, + xpath_text, +) + + +class ScrippsNetworksWatchIE(AWSIE): + IE_NAME = 'scrippsnetworks:watch' + _VALID_URL = r'''(?x) + https?:// + watch\. 
+ (?P<site>geniuskitchen)\.com/ + (?: + player\.[A-Z0-9]+\.html\#| + show/(?:[^/]+/){2}| + player/ + ) + (?P<id>\d+) + ''' + _TESTS = [{ + 'url': 'http://watch.geniuskitchen.com/player/3787617/Ample-Hills-Ice-Cream-Bike/', + 'info_dict': { + 'id': '4194875', + 'ext': 'mp4', + 'title': 'Ample Hills Ice Cream Bike', + 'description': 'Courtney Rada churns up a signature GK Now ice cream with The Scoopmaster.', + 'uploader': 'ANV', + 'upload_date': '20171011', + 'timestamp': 1507698000, + }, + 'params': { + 'skip_download': True, + }, + 'add_ie': [AnvatoIE.ie_key()], + 'skip': '404 Not Found', + }] + + _SNI_TABLE = { + 'geniuskitchen': 'genius', + } + + _AWS_API_KEY = 'E7wSQmq0qK6xPrF13WmzKiHo4BQ7tip4pQcSXVl1' + _AWS_PROXY_HOST = 'web.api.video.snidigital.com' + + _AWS_USER_AGENT = 'aws-sdk-js/2.80.0 callback' + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + site_id, video_id = mobj.group('site', 'id') + + aws_identity_id_json = json.dumps({ + 'IdentityId': '%s:7655847c-0ae7-4d9b-80d6-56c062927eb3' % self._AWS_REGION + }).encode('utf-8') + token = self._download_json( + 'https://cognito-identity.%s.amazonaws.com/' % self._AWS_REGION, video_id, + data=aws_identity_id_json, + headers={ + 'Accept': '*/*', + 'Content-Type': 'application/x-amz-json-1.1', + 'Referer': url, + 'X-Amz-Content-Sha256': hashlib.sha256(aws_identity_id_json).hexdigest(), + 'X-Amz-Target': 'AWSCognitoIdentityService.GetOpenIdToken', + 'X-Amz-User-Agent': self._AWS_USER_AGENT, + })['Token'] + + sts = self._download_xml( + 'https://sts.amazonaws.com/', video_id, data=urlencode_postdata({ + 'Action': 'AssumeRoleWithWebIdentity', + 'RoleArn': 'arn:aws:iam::710330595350:role/Cognito_WebAPIUnauth_Role', + 'RoleSessionName': 'web-identity', + 'Version': '2011-06-15', + 'WebIdentityToken': token, + }), headers={ + 'Referer': url, + 'X-Amz-User-Agent': self._AWS_USER_AGENT, + 'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', + }) + + def get(key): + return xpath_text( + sts, './/{https://sts.amazonaws.com/doc/2011-06-15/}%s' % key, + fatal=True) + + mcp_id = self._aws_execute_api({ + 'uri': '/1/web/brands/%s/episodes/scrid/%s' % (self._SNI_TABLE[site_id], video_id), + 'access_key': get('AccessKeyId'), + 'secret_key': get('SecretAccessKey'), + 'session_token': get('SessionToken'), + }, video_id)['results'][0]['mcpId'] + + return self.url_result( + smuggle_url( + 'anvato:anvato_scripps_app_web_prod_0837996dbe373629133857ae9eb72e740424d80a:%s' % mcp_id, + {'geo_countries': ['US']}), + AnvatoIE.ie_key(), video_id=mcp_id) + + +class ScrippsNetworksIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?(?P<site>cookingchanneltv|discovery|(?:diy|food)network|hgtv|travelchannel)\.com/videos/[0-9a-z-]+-(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.cookingchanneltv.com/videos/the-best-of-the-best-0260338', + 'info_dict': { + 'id': '0260338', + 'ext': 'mp4', + 'title': 'The Best of the Best', + 'description': 'Catch a new episode of MasterChef Canada Tuedsay at 9/8c.', + 'timestamp': 1475678834, + 'upload_date': '20161005', + 'uploader': 'SCNI-SCND', + 'duration': 29.995, + 'chapters': [{'start_time': 0.0, 'end_time': 29.995, 'title': '<Untitled Chapter 1>'}], + 'thumbnail': 'https://images.dds.discovery.com/up/tp/Scripps_-_Food_Category_Prod/122/987/0260338_630x355.jpg', + }, + 'add_ie': ['ThePlatform'], + 'expected_warnings': ['No HLS formats found'], + }, { + 'url': 'https://www.diynetwork.com/videos/diy-barnwood-tablet-stand-0265790', + 'only_matching': True, + }, { + 'url': 
'https://www.foodnetwork.com/videos/chocolate-strawberry-cake-roll-7524591', + 'only_matching': True, + }, { + 'url': 'https://www.hgtv.com/videos/cookie-decorating-101-0301929', + 'only_matching': True, + }, { + 'url': 'https://www.travelchannel.com/videos/two-climates-one-bag-5302184', + 'only_matching': True, + }, { + 'url': 'https://www.discovery.com/videos/guardians-of-the-glades-cooking-with-tom-cobb-5578368', + 'only_matching': True, + }] + _ACCOUNT_MAP = { + 'cookingchanneltv': 2433005105, + 'discovery': 2706091867, + 'diynetwork': 2433004575, + 'foodnetwork': 2433005105, + 'hgtv': 2433004575, + 'travelchannel': 2433005739, + } + _TP_TEMPL = 'https://link.theplatform.com/s/ip77QC/media/guid/%d/%s?mbr=true' + + def _real_extract(self, url): + site, guid = self._match_valid_url(url).groups() + return self.url_result(smuggle_url( + self._TP_TEMPL % (self._ACCOUNT_MAP[site], guid), + {'force_smil_url': True}), 'ThePlatform', guid) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/scrolller.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/scrolller.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/scrolller.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/scrolller.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/scte.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/scte.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/scte.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/scte.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/seeker.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/seeker.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/seeker.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/seeker.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/senalcolombia.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/senalcolombia.py new file mode 100644 index 0000000..f3c066d --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/senalcolombia.py @@ -0,0 +1,31 @@ +from .common import InfoExtractor +from .rtvcplay import RTVCKalturaIE + + +class SenalColombiaLiveIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?senalcolombia\.tv/(?P<id>senal-en-vivo)' + + _TESTS = [{ + 'url': 'https://www.senalcolombia.tv/senal-en-vivo', + 'info_dict': { + 'id': 'indexSC', + 'title': 're:^Señal Colombia', + 'description': 'md5:799f16a401d97f40c33a2c6a3e2a507b', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'live_status': 'is_live', + 'ext': 'mp4', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }] + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + + hydration = self._search_json( + r'<script\b[^>]*data-drupal-selector\s*=\s*"[^"]*drupal-settings-json[^"]*"[^>]*>', + webpage, 'hydration', display_id) + + return self.url_result(hydration['envivosrc'], RTVCKalturaIE, display_id) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/senategov.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/senategov.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/senategov.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/senategov.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sendtonews.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sendtonews.py similarity index 100% rename from 
lib/python3.11/site-packages/yt_dlp/extractor/sendtonews.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sendtonews.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/servus.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/servus.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/servus.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/servus.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/sevenplus.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sevenplus.py new file mode 100644 index 0000000..6c688d1 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/sevenplus.py @@ -0,0 +1,132 @@ +import json +import re + +from .brightcove import BrightcoveNewBaseIE +from ..compat import compat_str +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + try_get, + update_url_query, +) + + +class SevenPlusIE(BrightcoveNewBaseIE): + IE_NAME = '7plus' + _VALID_URL = r'https?://(?:www\.)?7plus\.com\.au/(?P<path>[^?]+\?.*?\bepisode-id=(?P<id>[^&#]+))' + _TESTS = [{ + 'url': 'https://7plus.com.au/MTYS?episode-id=MTYS7-003', + 'info_dict': { + 'id': 'MTYS7-003', + 'ext': 'mp4', + 'title': 'S7 E3 - Wind Surf', + 'description': 'md5:29c6a69f21accda7601278f81b46483d', + 'uploader_id': '5303576322001', + 'upload_date': '20171201', + 'timestamp': 1512106377, + 'series': 'Mighty Ships', + 'season_number': 7, + 'episode_number': 3, + 'episode': 'Wind Surf', + }, + 'params': { + 'skip_download': True, + } + }, { + 'url': 'https://7plus.com.au/UUUU?episode-id=AUMS43-001', + 'only_matching': True, + }] + + def _real_initialize(self): + self.token = None + + cookies = self._get_cookies('https://7plus.com.au') + api_key = next((x for x in cookies if x.startswith('glt_')), '')[4:] + if not api_key: # Cookies are signed out, skip login + return + + login_resp = self._download_json( + 'https://login.7plus.com.au/accounts.getJWT', None, 'Logging in', fatal=False, + query={ + 'APIKey': api_key, + 'sdk': 'js_latest', + 'login_token': cookies[f'glt_{api_key}'].value, + 'authMode': 'cookie', + 'pageURL': 'https://7plus.com.au/', + 'sdkBuild': '12471', + 'format': 'json', + }) or {} + + if 'errorMessage' in login_resp: + self.report_warning(f'Unable to login: 7plus said: {login_resp["errorMessage"]}') + return + id_token = login_resp.get('id_token') + if not id_token: + self.report_warning('Unable to login: Could not extract id token') + return + + token_resp = self._download_json( + 'https://7plus.com.au/auth/token', None, 'Getting auth token', fatal=False, + headers={'Content-Type': 'application/json'}, data=json.dumps({ + 'idToken': id_token, + 'platformId': 'web', + 'regSource': '7plus', + }).encode('utf-8')) or {} + self.token = token_resp.get('token') + if not self.token: + self.report_warning('Unable to log in: Could not extract auth token') + + def _real_extract(self, url): + path, episode_id = self._match_valid_url(url).groups() + + headers = {} + if self.token: + headers['Authorization'] = f'Bearer {self.token}' + + try: + media = self._download_json( + 'https://videoservice.swm.digital/playback', episode_id, query={ + 'appId': '7plus', + 'deviceType': 'web', + 'platformType': 'web', + 'accountId': 5303576322001, + 'referenceId': 'ref:' + episode_id, + 'deliveryId': 'csai', + 'videoType': 'vod', + }, headers=headers)['media'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + raise 
ExtractorError(self._parse_json( + e.cause.response.read().decode(), episode_id)[0]['error_code'], expected=True) + raise + + for source in media.get('sources', {}): + src = source.get('src') + if not src: + continue + source['src'] = update_url_query(src, {'rule': ''}) + + info = self._parse_brightcove_metadata(media, episode_id) + + content = self._download_json( + 'https://component-cdn.swm.digital/content/' + path, + episode_id, headers={ + 'market-id': 4, + }, fatal=False) or {} + for item in content.get('items', {}): + if item.get('componentData', {}).get('componentType') == 'infoPanel': + for src_key, dst_key in [('title', 'title'), ('shortSynopsis', 'description')]: + value = item.get(src_key) + if value: + info[dst_key] = value + info['series'] = try_get( + item, lambda x: x['seriesLogo']['name'], compat_str) + mobj = re.search(r'^S(\d+)\s+E(\d+)\s+-\s+(.+)$', info['title']) + if mobj: + info.update({ + 'season_number': int(mobj.group(1)), + 'episode_number': int(mobj.group(2)), + 'episode': mobj.group(3), + }) + + return info diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sexu.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sexu.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sexu.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sexu.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/seznamzpravy.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/seznamzpravy.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/seznamzpravy.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/seznamzpravy.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/shahid.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/shahid.py new file mode 100644 index 0000000..d509e88 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/shahid.py @@ -0,0 +1,217 @@ +import json +import math +import re + +from .aws import AWSIE +from ..networking.exceptions import HTTPError +from ..utils import ( + clean_html, + ExtractorError, + InAdvancePagedList, + int_or_none, + parse_iso8601, + str_or_none, + urlencode_postdata, +) + + +class ShahidBaseIE(AWSIE): + _AWS_PROXY_HOST = 'api2.shahid.net' + _AWS_API_KEY = '2RRtuMHx95aNI1Kvtn2rChEuwsCogUd4samGPjLh' + _VALID_URL_BASE = r'https?://shahid\.mbc\.net/[a-z]{2}/' + + def _handle_error(self, e): + fail_data = self._parse_json( + e.cause.response.read().decode('utf-8'), None, fatal=False) + if fail_data: + faults = fail_data.get('faults', []) + faults_message = ', '.join([clean_html(fault['userMessage']) for fault in faults if fault.get('userMessage')]) + if faults_message: + raise ExtractorError(faults_message, expected=True) + + def _call_api(self, path, video_id, request=None): + query = {} + if request: + query['request'] = json.dumps(request) + try: + return self._aws_execute_api({ + 'uri': '/proxy/v2/' + path, + 'access_key': 'AKIAI6X4TYCIXM2B7MUQ', + 'secret_key': '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', + }, video_id, query) + except ExtractorError as e: + if isinstance(e.cause, HTTPError): + self._handle_error(e) + raise + + +class ShahidIE(ShahidBaseIE): + _NETRC_MACHINE = 'shahid' + _VALID_URL = ShahidBaseIE._VALID_URL_BASE + r'(?:serie|show|movie)s/[^/]+/(?P<type>episode|clip|movie)-(?P<id>\d+)' + _TESTS = [{ + 'url': 
'https://shahid.mbc.net/ar/shows/%D9%85%D8%AA%D8%AD%D9%81-%D8%A7%D9%84%D8%AF%D8%AD%D9%8A%D8%AD-%D8%A7%D9%84%D9%85%D9%88%D8%B3%D9%85-1-%D9%83%D9%84%D9%8A%D8%A8-1/clip-816924', + 'info_dict': { + 'id': '816924', + 'ext': 'mp4', + 'title': 'متحف الدحيح الموسم 1 كليب 1', + 'timestamp': 1602806400, + 'upload_date': '20201016', + 'description': 'برومو', + 'duration': 22, + 'categories': ['كوميديا'], + }, + 'params': { + # m3u8 download + 'skip_download': True, + } + }, { + 'url': 'https://shahid.mbc.net/ar/movies/%D8%A7%D9%84%D9%82%D9%86%D8%A7%D8%B5%D8%A9/movie-151746', + 'only_matching': True + }, { + # shahid plus subscriber only + 'url': 'https://shahid.mbc.net/ar/series/%D9%85%D8%B1%D8%A7%D9%8A%D8%A7-2011-%D8%A7%D9%84%D9%85%D9%88%D8%B3%D9%85-1-%D8%A7%D9%84%D8%AD%D9%84%D9%82%D8%A9-1/episode-90511', + 'only_matching': True + }, { + 'url': 'https://shahid.mbc.net/en/shows/Ramez-Fi-Al-Shallal-season-1-episode-1/episode-359319', + 'only_matching': True + }] + + def _perform_login(self, username, password): + try: + user_data = self._download_json( + 'https://shahid.mbc.net/wd/service/users/login', + None, 'Logging in', data=json.dumps({ + 'email': username, + 'password': password, + 'basic': 'false', + }).encode('utf-8'), headers={ + 'Content-Type': 'application/json; charset=UTF-8', + })['user'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError): + self._handle_error(e) + raise + + self._download_webpage( + 'https://shahid.mbc.net/populateContext', + None, 'Populate Context', data=urlencode_postdata({ + 'firstName': user_data['firstName'], + 'lastName': user_data['lastName'], + 'userName': user_data['email'], + 'csg_user_name': user_data['email'], + 'subscriberId': user_data['id'], + 'sessionId': user_data['sessionId'], + })) + + def _real_extract(self, url): + page_type, video_id = self._match_valid_url(url).groups() + if page_type == 'clip': + page_type = 'episode' + + playout = self._call_api( + 'playout/new/url/' + video_id, video_id)['playout'] + + if not self.get_param('allow_unplayable_formats') and playout.get('drm'): + self.report_drm(video_id) + + formats = self._extract_m3u8_formats(re.sub( + # https://docs.aws.amazon.com/mediapackage/latest/ug/manifest-filtering.html + r'aws\.manifestfilter=[\w:;,-]+&?', + '', playout['url']), video_id, 'mp4') + + # video = self._call_api( + # 'product/id', video_id, { + # 'id': video_id, + # 'productType': 'ASSET', + # 'productSubType': page_type.upper() + # })['productModel'] + + response = self._download_json( + 'http://api.shahid.net/api/v1_1/%s/%s' % (page_type, video_id), + video_id, 'Downloading video JSON', query={ + 'apiKey': 'sh@hid0nlin3', + 'hash': 'b2wMCTHpSmyxGqQjJFOycRmLSex+BpTK/ooxy6vHaqs=', + }) + data = response.get('data', {}) + error = data.get('error') + if error: + raise ExtractorError( + '%s returned error: %s' % (self.IE_NAME, '\n'.join(error.values())), + expected=True) + + video = data[page_type] + title = video['title'] + categories = [ + category['name'] + for category in video.get('genres', []) if 'name' in category] + + return { + 'id': video_id, + 'title': title, + 'description': video.get('description'), + 'thumbnail': video.get('thumbnailUrl'), + 'duration': int_or_none(video.get('duration')), + 'timestamp': parse_iso8601(video.get('referenceDate')), + 'categories': categories, + 'series': video.get('showTitle') or video.get('showName'), + 'season': video.get('seasonTitle'), + 'season_number': int_or_none(video.get('seasonNumber')), + 'season_id': str_or_none(video.get('seasonId')), + 'episode_number':
int_or_none(video.get('number')), + 'episode_id': video_id, + 'formats': formats, + } + + +class ShahidShowIE(ShahidBaseIE): + _VALID_URL = ShahidBaseIE._VALID_URL_BASE + r'(?:show|serie)s/[^/]+/(?:show|series)-(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://shahid.mbc.net/ar/shows/%D8%B1%D8%A7%D9%85%D8%B2-%D9%82%D8%B1%D8%B4-%D8%A7%D9%84%D8%A8%D8%AD%D8%B1/show-79187', + 'info_dict': { + 'id': '79187', + 'title': 'رامز قرش البحر', + 'description': 'md5:c88fa7e0f02b0abd39d417aee0d046ff', + }, + 'playlist_mincount': 32, + }, { + 'url': 'https://shahid.mbc.net/ar/series/How-to-live-Longer-(The-Big-Think)/series-291861', + 'only_matching': True + }] + _PAGE_SIZE = 30 + + def _real_extract(self, url): + show_id = self._match_id(url) + + product = self._call_api( + 'playableAsset', show_id, {'showId': show_id})['productModel'] + playlist = product['playlist'] + playlist_id = playlist['id'] + show = product.get('show', {}) + + def page_func(page_num): + playlist = self._call_api( + 'product/playlist', show_id, { + 'playListId': playlist_id, + 'pageNumber': page_num, + 'pageSize': 30, + 'sorts': [{ + 'order': 'DESC', + 'type': 'SORTDATE' + }], + }) + for product in playlist.get('productList', {}).get('products', []): + product_url = product.get('productUrl', []).get('url') + if not product_url: + continue + yield self.url_result( + product_url, 'Shahid', + str_or_none(product.get('id')), + product.get('title')) + + entries = InAdvancePagedList( + page_func, + math.ceil(playlist['count'] / self._PAGE_SIZE), + self._PAGE_SIZE) + + return self.playlist_result( + entries, show_id, show.get('title'), show.get('description')) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/shared.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/shared.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/shared.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/shared.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sharevideos.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sharevideos.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sharevideos.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sharevideos.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/shemaroome.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/shemaroome.py new file mode 100644 index 0000000..ec9938b --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/shemaroome.py @@ -0,0 +1,102 @@ +from .common import InfoExtractor +from ..aes import aes_cbc_decrypt, unpad_pkcs7 +from ..compat import ( + compat_b64decode, +) +from ..utils import ( + bytes_to_intlist, + ExtractorError, + intlist_to_bytes, + unified_strdate, +) + + +class ShemarooMeIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?shemaroome\.com/(?:movies|shows)/(?P<id>[^?#]+)' + _TESTS = [{ + 'url': 'https://www.shemaroome.com/movies/dil-hai-tumhaara', + 'info_dict': { + 'id': 'dil-hai-tumhaara', + 'ext': 'mp4', + 'title': 'Dil Hai Tumhaara', + 'release_date': '20020906', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'md5:2782c4127807103cf5a6ae2ca33645ce', + }, + 'params': { + 'skip_download': True + } + }, { + 'url': 'https://www.shemaroome.com/shows/jurm-aur-jazbaat/laalach', + 'info_dict': { + 'id': 'jurm-aur-jazbaat_laalach', + 'ext': 'mp4', + 'title': 'Laalach', + 'description': 'md5:92b79c2dcb539b0ab53f9fa5a048f53c', + 'thumbnail': r're:^https?://.*\.jpg$', + 'release_date': '20210507', + }, + 
'params': { + 'skip_download': True + }, + 'skip': 'Premium videos cannot be downloaded yet.' + }, { + 'url': 'https://www.shemaroome.com/shows/jai-jai-jai-bajrang-bali/jai-jai-jai-bajrang-bali-episode-99', + 'info_dict': { + 'id': 'jai-jai-jai-bajrang-bali_jai-jai-jai-bajrang-bali-episode-99', + 'ext': 'mp4', + 'title': 'Jai Jai Jai Bajrang Bali Episode 99', + 'description': 'md5:850d127a18ee3f9529d7fbde2f49910d', + 'thumbnail': r're:^https?://.*\.jpg$', + 'release_date': '20110101', + }, + 'params': { + 'skip_download': True + } + }] + + def _real_extract(self, url): + video_id = self._match_id(url).replace('/', '_') + webpage = self._download_webpage(url, video_id) + title = self._search_regex(r'id=\"ma_title\" value=\"([^\"]+)', webpage, 'title') + thumbnail = self._og_search_thumbnail(webpage) + content_def = self._search_regex(r'id=\"content_definition\" value=\"([^\"]+)', webpage, 'content_def') + catalog_id = self._search_regex(r'id=\"catalog_id\" value=\"([^\"]+)', webpage, 'catalog_id') + item_category = self._search_regex(r'id=\"item_category\" value=\"([^\"]+)', webpage, 'item_category') + content_id = self._search_regex(r'id=\"content_id\" value=\"([^\"]+)', webpage, 'content_id') + + data = f'catalog_id={catalog_id}&content_id={content_id}&category={item_category}&content_def={content_def}' + data_json = self._download_json('https://www.shemaroome.com/users/user_all_lists', video_id, data=data.encode()) + if not data_json.get('status'): + raise ExtractorError('Premium videos cannot be downloaded yet.', expected=True) + url_data = bytes_to_intlist(compat_b64decode(data_json['new_play_url'])) + key = bytes_to_intlist(compat_b64decode(data_json['key'])) + iv = [0] * 16 + m3u8_url = unpad_pkcs7(intlist_to_bytes(aes_cbc_decrypt(url_data, key, iv))).decode('ascii') + headers = {'stream_key': data_json['stream_key']} + formats, m3u8_subs = self._extract_m3u8_formats_and_subtitles(m3u8_url, video_id, fatal=False, headers=headers) + for fmt in formats: + fmt['http_headers'] = headers + + release_date = self._html_search_regex( + (r'itemprop="uploadDate">\s*([\d-]+)', r'id="release_date" value="([\d-]+)'), + webpage, 'release date', fatal=False) + + subtitles = {} + sub_url = data_json.get('subtitle') + if sub_url: + subtitles.setdefault('EN', []).append({ + 'url': self._proto_relative_url(sub_url), + }) + subtitles = self._merge_subtitles(subtitles, m3u8_subs) + description = self._html_search_regex(r'(?s)>Synopsis(</.+?)</', webpage, 'description', fatal=False) + + return { + 'id': video_id, + 'formats': formats, + 'title': title, + 'thumbnail': thumbnail, + 'release_date': unified_strdate(release_date), + 'description': description, + 'subtitles': subtitles, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/showroomlive.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/showroomlive.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/showroomlive.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/showroomlive.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sibnet.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sibnet.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sibnet.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sibnet.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/simplecast.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/simplecast.py similarity index 100% rename from 
lib/python3.11/site-packages/yt_dlp/extractor/simplecast.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/simplecast.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/sina.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sina.py new file mode 100644 index 0000000..eeb9ebb --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/sina.py @@ -0,0 +1,109 @@ +from .common import InfoExtractor +from ..networking import HEADRequest +from ..utils import ( + ExtractorError, + clean_html, + get_element_by_attribute, + int_or_none, + qualities, + update_url_query, +) + + +class SinaIE(InfoExtractor): + _VALID_URL = r'''(?x)https?://(?:[^/?#]+\.)?video\.sina\.com\.cn/ + (?: + (?:view/|.*\#)(?P<id>\d+)| + .+?/(?P<pseudo_id>[^/?#]+)(?:\.s?html)| + # This is used by external sites like Weibo + api/sinawebApi/outplay.php/(?P<token>.+?)\.swf + ) + ''' + + _TESTS = [ + { + 'url': 'http://video.sina.com.cn/news/spj/topvideoes20160504/?opsubject_id=top1#250576622', + 'md5': 'd38433e2fc886007729735650ae4b3e9', + 'info_dict': { + 'id': '250576622', + 'ext': 'mp4', + 'title': '现场:克鲁兹宣布退选 特朗普将稳获提名', + } + }, + { + 'url': 'http://video.sina.com.cn/v/b/101314253-1290078633.html', + 'info_dict': { + 'id': '101314253', + 'ext': 'flv', + 'title': '军方提高对朝情报监视级别', + }, + 'skip': 'the page does not exist or has been deleted', + }, + { + 'url': 'http://video.sina.com.cn/view/250587748.html', + 'md5': '3d1807a25c775092aab3bc157fff49b4', + 'info_dict': { + 'id': '250587748', + 'ext': 'mp4', + 'title': '瞬间泪目:8年前汶川地震珍贵视频首曝光', + }, + }, + ] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + + video_id = mobj.group('id') + if not video_id: + if mobj.group('token') is not None: + # The video id is in the redirected url + self.to_screen('Getting video id') + request = HEADRequest(url) + _, urlh = self._download_webpage_handle(request, 'NA', False) + return self._real_extract(urlh.url) + else: + pseudo_id = mobj.group('pseudo_id') + webpage = self._download_webpage(url, pseudo_id) + error = get_element_by_attribute('class', 'errtitle', webpage) + if error: + raise ExtractorError('%s said: %s' % ( + self.IE_NAME, clean_html(error)), expected=True) + video_id = self._search_regex( + r"video_id\s*:\s*'(\d+)'", webpage, 'video id') + + video_data = self._download_json( + 'http://s.video.sina.com.cn/video/h5play', + video_id, query={'video_id': video_id}) + if video_data['code'] != 1: + raise ExtractorError('%s said: %s' % ( + self.IE_NAME, video_data['message']), expected=True) + else: + video_data = video_data['data'] + title = video_data['title'] + description = video_data.get('description') + if description: + description = description.strip() + + preference = qualities(['cif', 'sd', 'hd', 'fhd', 'ffd']) + formats = [] + for quality_id, quality in video_data.get('videos', {}).get('mp4', {}).items(): + file_api = quality.get('file_api') + file_id = quality.get('file_id') + if not file_api or not file_id: + continue + formats.append({ + 'format_id': quality_id, + 'url': update_url_query(file_api, {'vid': file_id}), + 'quality': preference(quality_id), + 'ext': 'mp4', + }) + + return { + 'id': video_id, + 'title': title, + 'description': description, + 'thumbnail': video_data.get('image'), + 'duration': int_or_none(video_data.get('length')), + 'timestamp': int_or_none(video_data.get('create_time')), + 'formats': formats, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/sixplay.py
b/python/lib/python3.10/site-packages/yt_dlp/extractor/sixplay.py new file mode 100644 index 0000000..ef93b92 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/sixplay.py @@ -0,0 +1,122 @@ +from .common import InfoExtractor +from ..compat import ( + compat_str, +) +from ..utils import ( + determine_ext, + int_or_none, + parse_qs, + try_get, + qualities, +) + + +class SixPlayIE(InfoExtractor): + IE_NAME = '6play' + _VALID_URL = r'(?:6play:|https?://(?:www\.)?(?P<domain>6play\.fr|rtlplay\.be|play\.rtl\.hr|rtlmost\.hu)/.+?-c_)(?P<id>[0-9]+)' + _TESTS = [{ + 'url': 'https://www.6play.fr/minute-par-minute-p_9533/le-but-qui-a-marque-lhistoire-du-football-francais-c_12041051', + 'md5': '31fcd112637baa0c2ab92c4fcd8baf27', + 'info_dict': { + 'id': '12041051', + 'ext': 'mp4', + 'title': 'Le but qui a marqué l\'histoire du football français !', + 'description': 'md5:b59e7e841d646ef1eb42a7868eb6a851', + }, + }, { + 'url': 'https://www.rtlplay.be/rtl-info-13h-p_8551/les-titres-du-rtlinfo-13h-c_12045869', + 'only_matching': True, + }, { + 'url': 'https://play.rtl.hr/pj-masks-p_9455/epizoda-34-sezona-1-catboyevo-cudo-na-dva-kotaca-c_11984989', + 'only_matching': True, + }, { + 'url': 'https://www.rtlmost.hu/megtorve-p_14167/megtorve-6-resz-c_12397787', + 'only_matching': True, + }] + + def _real_extract(self, url): + domain, video_id = self._match_valid_url(url).groups() + service, consumer_name = { + '6play.fr': ('6play', 'm6web'), + 'rtlplay.be': ('rtlbe_rtl_play', 'rtlbe'), + 'play.rtl.hr': ('rtlhr_rtl_play', 'rtlhr'), + 'rtlmost.hu': ('rtlhu_rtl_most', 'rtlhu'), + }.get(domain, ('6play', 'm6web')) + + data = self._download_json( + 'https://pc.middleware.6play.fr/6play/v2/platforms/m6group_web/services/%s/videos/clip_%s' % (service, video_id), + video_id, headers={ + 'x-customer-name': consumer_name + }, query={ + 'csa': 5, + 'with': 'clips', + }) + + clip_data = data['clips'][0] + title = clip_data['title'] + + urls = [] + quality_key = qualities(['lq', 'sd', 'hq', 'hd']) + formats = [] + subtitles = {} + assets = clip_data.get('assets') or [] + for asset in assets: + asset_url = asset.get('full_physical_path') + protocol = asset.get('protocol') + if not asset_url or ((protocol == 'primetime' or asset.get('type') == 'usp_hlsfp_h264') and not ('_drmnp.ism/' in asset_url or '_unpnp.ism/' in asset_url)) or asset_url in urls: + continue + urls.append(asset_url) + container = asset.get('video_container') + ext = determine_ext(asset_url) + if protocol == 'http_subtitle' or ext == 'vtt': + subtitles.setdefault('fr', []).append({'url': asset_url}) + continue + if container == 'm3u8' or ext == 'm3u8': + if protocol == 'usp': + if parse_qs(asset_url).get('token', [None])[0]: + urlh = self._request_webpage( + asset_url, video_id, fatal=False, + headers=self.geo_verification_headers()) + if not urlh: + continue + asset_url = urlh.url + asset_url = asset_url.replace('_drmnp.ism/', '_unpnp.ism/') + for i in range(3, 0, -1): + asset_url = asset_url.replace('_sd1/', '_sd%d/' % i) + m3u8_formats = self._extract_m3u8_formats( + asset_url, video_id, 'mp4', 'm3u8_native', + m3u8_id='hls', fatal=False) + formats.extend(m3u8_formats) + formats.extend(self._extract_mpd_formats( + asset_url.replace('.m3u8', '.mpd'), + video_id, mpd_id='dash', fatal=False)) + if m3u8_formats: + break + else: + formats.extend(self._extract_m3u8_formats( + asset_url, video_id, 'mp4', 'm3u8_native', + m3u8_id='hls', fatal=False)) + elif container == 'mp4' or ext == 'mp4': + quality = asset.get('video_quality')
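+ # 'video_quality' is a textual label; quality_key (built above via + # qualities(['lq', 'sd', 'hq', 'hd'])) converts it into a sort preference so + # these progressive MP4s rank correctly against the HLS/DASH formats.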
+ formats.append({ + 'url': asset_url, + 'format_id': quality, + 'quality': quality_key(quality), + 'ext': ext, + }) + + def get(getter): + for src in (data, clip_data): + v = try_get(src, getter, compat_str) + if v: + return v + + return { + 'id': video_id, + 'title': title, + 'description': get(lambda x: x['description']), + 'duration': int_or_none(clip_data.get('duration')), + 'series': get(lambda x: x['program']['title']), + 'formats': formats, + 'subtitles': subtitles, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/skeb.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/skeb.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/skeb.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/skeb.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sky.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sky.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sky.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sky.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/skyit.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/skyit.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/skyit.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/skyit.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/skylinewebcams.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/skylinewebcams.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/skylinewebcams.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/skylinewebcams.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/skynewsarabia.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/skynewsarabia.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/skynewsarabia.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/skynewsarabia.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/skynewsau.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/skynewsau.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/skynewsau.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/skynewsau.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/slideshare.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/slideshare.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/slideshare.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/slideshare.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/slideslive.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/slideslive.py new file mode 100644 index 0000000..25f867a --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/slideslive.py @@ -0,0 +1,567 @@ +import re +import urllib.parse + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + int_or_none, + parse_qs, + smuggle_url, + traverse_obj, + unified_timestamp, + update_url_query, + url_or_none, + xpath_text, +) + + +class SlidesLiveIE(InfoExtractor): + _VALID_URL = r'https?://slideslive\.com/(?:embed/(?:presentation/)?)?(?P<id>[0-9]+)' + _TESTS = [{ + # service_name = yoda, only XML slides info + 'url': 'https://slideslive.com/38902413/gcc-ia16-backend', + 'info_dict': { + 'id': '38902413', + 'ext': 'mp4', + 'title': 'GCC IA16 backend', + 'timestamp': 1648189972, + 'upload_date': '20220325', + 
'thumbnail': r're:^https?://.*\.jpg', + 'thumbnails': 'count:42', + 'chapters': 'count:41', + 'duration': 1638, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # service_name = yoda, /v7/ slides + 'url': 'https://slideslive.com/38935785', + 'info_dict': { + 'id': '38935785', + 'ext': 'mp4', + 'title': 'Offline Reinforcement Learning: From Algorithms to Practical Challenges', + 'upload_date': '20211115', + 'timestamp': 1636996003, + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'thumbnails': 'count:640', + 'chapters': 'count:639', + 'duration': 9832, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # service_name = yoda, /v1/ slides + 'url': 'https://slideslive.com/38973182/how-should-a-machine-learning-researcher-think-about-ai-ethics', + 'info_dict': { + 'id': '38973182', + 'ext': 'mp4', + 'title': 'How Should a Machine Learning Researcher Think About AI Ethics?', + 'upload_date': '20220201', + 'thumbnail': r're:^https?://.*\.jpg', + 'timestamp': 1643728135, + 'thumbnails': 'count:3', + 'chapters': 'count:2', + 'duration': 5889, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # service_name = youtube, only XML slides info + 'url': 'https://slideslive.com/38897546/special-metaprednaska-petra-ludwiga-hodnoty-pro-lepsi-spolecnost', + 'md5': '8a79b5e3d700837f40bd2afca3c8fa01', + 'info_dict': { + 'id': 'jmg02wCJD5M', + 'display_id': '38897546', + 'ext': 'mp4', + 'title': 'SPECIÁL: Meta-přednáška Petra Ludwiga - Hodnoty pro lepší společnost', + 'description': 'Watch full version of this video at https://slideslive.com/38897546.', + 'channel_url': 'https://www.youtube.com/channel/UCZWdAkNYFncuX0khyvhqnxw', + 'channel': 'SlidesLive Videos - G1', + 'channel_id': 'UCZWdAkNYFncuX0khyvhqnxw', + 'uploader_id': 'UCZWdAkNYFncuX0khyvhqnxw', + 'uploader': 'SlidesLive Videos - G1', + 'uploader_url': 'http://www.youtube.com/channel/UCZWdAkNYFncuX0khyvhqnxw', + 'live_status': 'not_live', + 'upload_date': '20160710', + 'timestamp': 1618786715, + 'duration': 6827, + 'like_count': int, + 'view_count': int, + 'comment_count': int, + 'channel_follower_count': int, + 'age_limit': 0, + 'thumbnail': r're:^https?://.*\.(?:jpg|webp)', + 'thumbnails': 'count:169', + 'playable_in_embed': True, + 'availability': 'unlisted', + 'tags': [], + 'categories': ['People & Blogs'], + 'chapters': 'count:168', + }, + }, { + # embed-only presentation, only XML slides info + 'url': 'https://slideslive.com/embed/presentation/38925850', + 'info_dict': { + 'id': '38925850', + 'ext': 'mp4', + 'title': 'Towards a Deep Network Architecture for Structured Smoothness', + 'thumbnail': r're:^https?://.*\.jpg', + 'thumbnails': 'count:8', + 'timestamp': 1629671508, + 'upload_date': '20210822', + 'chapters': 'count:7', + 'duration': 326, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # embed-only presentation, only JSON slides info, /v5/ slides (.png) + 'url': 'https://slideslive.com/38979920/', + 'info_dict': { + 'id': '38979920', + 'ext': 'mp4', + 'title': 'MoReL: Multi-omics Relational Learning', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'thumbnails': 'count:7', + 'timestamp': 1654714970, + 'upload_date': '20220608', + 'chapters': 'count:6', + 'duration': 171, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # /v2/ slides (.jpg) + 'url': 'https://slideslive.com/38954074', + 'info_dict': { + 'id': '38954074', + 'ext': 'mp4', + 'title': 'Decentralized Attribution of Generative Models', + 'thumbnail': r're:^https?://.*\.jpg', + 'thumbnails': 'count:16', + 'timestamp':
1622806321, + 'upload_date': '20210604', + 'chapters': 'count:15', + 'duration': 306, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # /v4/ slides (.png) + 'url': 'https://slideslive.com/38979570/', + 'info_dict': { + 'id': '38979570', + 'ext': 'mp4', + 'title': 'Efficient Active Search for Combinatorial Optimization Problems', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'thumbnails': 'count:9', + 'timestamp': 1654714896, + 'upload_date': '20220608', + 'chapters': 'count:8', + 'duration': 295, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # /v10/ slides + 'url': 'https://slideslive.com/embed/presentation/38979880?embed_parent_url=https%3A%2F%2Fedit.videoken.com%2F', + 'info_dict': { + 'id': '38979880', + 'ext': 'mp4', + 'title': 'The Representation Power of Neural Networks', + 'timestamp': 1654714962, + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'thumbnails': 'count:22', + 'upload_date': '20220608', + 'chapters': 'count:21', + 'duration': 294, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # /v7/ slides, 2 video slides + 'url': 'https://slideslive.com/embed/presentation/38979682?embed_container_origin=https%3A%2F%2Fedit.videoken.com', + 'playlist_count': 3, + 'info_dict': { + 'id': '38979682-playlist', + 'title': 'LoRA: Low-Rank Adaptation of Large Language Models', + }, + 'playlist': [{ + 'info_dict': { + 'id': '38979682', + 'ext': 'mp4', + 'title': 'LoRA: Low-Rank Adaptation of Large Language Models', + 'timestamp': 1654714920, + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'thumbnails': 'count:30', + 'upload_date': '20220608', + 'chapters': 'count:31', + 'duration': 272, + }, + }, { + 'info_dict': { + 'id': '38979682-021', + 'ext': 'mp4', + 'title': 'LoRA: Low-Rank Adaptation of Large Language Models - Slide 021', + 'duration': 3, + 'timestamp': 1654714920, + 'upload_date': '20220608', + }, + }, { + 'info_dict': { + 'id': '38979682-024', + 'ext': 'mp4', + 'title': 'LoRA: Low-Rank Adaptation of Large Language Models - Slide 024', + 'duration': 4, + 'timestamp': 1654714920, + 'upload_date': '20220608', + }, + }], + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # /v6/ slides, 1 video slide, edit.videoken.com embed + 'url': 'https://slideslive.com/38979481/', + 'playlist_count': 2, + 'info_dict': { + 'id': '38979481-playlist', + 'title': 'How to Train Your MAML to Excel in Few-Shot Classification', + }, + 'playlist': [{ + 'info_dict': { + 'id': '38979481', + 'ext': 'mp4', + 'title': 'How to Train Your MAML to Excel in Few-Shot Classification', + 'timestamp': 1654714877, + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'thumbnails': 'count:43', + 'upload_date': '20220608', + 'chapters': 'count:43', + 'duration': 315, + }, + }, { + 'info_dict': { + 'id': '38979481-013', + 'ext': 'mp4', + 'title': 'How to Train Your MAML to Excel in Few-Shot Classification - Slide 013', + 'duration': 3, + 'timestamp': 1654714877, + 'upload_date': '20220608', + }, + }], + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # /v3/ slides, .jpg and .png, service_name = youtube + 'url': 'https://slideslive.com/embed/38932460/', + 'info_dict': { + 'id': 'RTPdrgkyTiE', + 'display_id': '38932460', + 'ext': 'mp4', + 'title': 'Active Learning for Hierarchical Multi-Label Classification', + 'description': 'Watch full version of this video at https://slideslive.com/38932460.', + 'channel': 'SlidesLive Videos - A', + 'channel_id': 'UC62SdArr41t_-_fX40QCLRw', + 'channel_url': 'https://www.youtube.com/channel/UC62SdArr41t_-_fX40QCLRw', + 'uploader': 'SlidesLive 
Videos - A', + 'uploader_id': 'UC62SdArr41t_-_fX40QCLRw', + 'uploader_url': 'http://www.youtube.com/channel/UC62SdArr41t_-_fX40QCLRw', + 'upload_date': '20200903', + 'timestamp': 1602599092, + 'duration': 942, + 'age_limit': 0, + 'live_status': 'not_live', + 'playable_in_embed': True, + 'availability': 'unlisted', + 'categories': ['People & Blogs'], + 'tags': [], + 'channel_follower_count': int, + 'like_count': int, + 'view_count': int, + 'thumbnail': r're:^https?://.*\.(?:jpg|png|webp)', + 'thumbnails': 'count:21', + 'chapters': 'count:20', + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # /v3/ slides, .png only, service_name = yoda + 'url': 'https://slideslive.com/38983994', + 'info_dict': { + 'id': '38983994', + 'ext': 'mp4', + 'title': 'Zero-Shot AutoML with Pretrained Models', + 'timestamp': 1662384834, + 'upload_date': '20220905', + 'thumbnail': r're:^https?://.*\.(?:jpg|png)', + 'thumbnails': 'count:23', + 'chapters': 'count:22', + 'duration': 295, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # service_name = yoda + 'url': 'https://slideslive.com/38903721/magic-a-scientific-resurrection-of-an-esoteric-legend', + 'only_matching': True, + }, { + # dead link, service_name = url + 'url': 'https://slideslive.com/38922070/learning-transferable-skills-1', + 'only_matching': True, + }, { + # dead link, service_name = vimeo + 'url': 'https://slideslive.com/38921896/retrospectives-a-venue-for-selfreflection-in-ml-research-3', + 'only_matching': True, + }] + + _WEBPAGE_TESTS = [{ + # only XML slides info + 'url': 'https://iclr.cc/virtual_2020/poster_Hklr204Fvr.html', + 'info_dict': { + 'id': '38925850', + 'ext': 'mp4', + 'title': 'Towards a Deep Network Architecture for Structured Smoothness', + 'thumbnail': r're:^https?://.*\.jpg', + 'thumbnails': 'count:8', + 'timestamp': 1629671508, + 'upload_date': '20210822', + 'chapters': 'count:7', + 'duration': 326, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }] + + @classmethod + def _extract_embed_urls(cls, url, webpage): + # Reference: https://slideslive.com/embed_presentation.js + for embed_id in re.findall(r'(?s)new\s+SlidesLiveEmbed\s*\([^)]+\bpresentationId:\s*["\'](\d+)["\']', webpage): + url_parsed = urllib.parse.urlparse(url) + origin = f'{url_parsed.scheme}://{url_parsed.netloc}' + yield update_url_query( + f'https://slideslive.com/embed/presentation/{embed_id}', { + 'embed_parent_url': url, + 'embed_container_origin': origin, + }) + + def _download_embed_webpage_handle(self, video_id, headers): + return self._download_webpage_handle( + f'https://slideslive.com/embed/presentation/{video_id}', video_id, + headers=headers, query=traverse_obj(headers, { + 'embed_parent_url': 'Referer', + 'embed_container_origin': 'Origin', + })) + + def _extract_custom_m3u8_info(self, m3u8_data): + m3u8_dict = {} + + lookup = { + 'PRESENTATION-TITLE': 'title', + 'PRESENTATION-UPDATED-AT': 'timestamp', + 'PRESENTATION-THUMBNAIL': 'thumbnail', + 'PLAYLIST-TYPE': 'playlist_type', + 'VOD-VIDEO-SERVICE-NAME': 'service_name', + 'VOD-VIDEO-ID': 'service_id', + 'VOD-VIDEO-SERVERS': 'video_servers', + 'VOD-SUBTITLES': 'subtitles', + 'VOD-SLIDES-JSON-URL': 'slides_json_url', + 'VOD-SLIDES-XML-URL': 'slides_xml_url', + } + + for line in m3u8_data.splitlines(): + if not line.startswith('#EXT-SL-'): + continue + tag, _, value = line.partition(':') + key = lookup.get(tag.lstrip('#EXT-SL-')) + if not key: + continue + m3u8_dict[key] = value + + # Some values are stringified JSON arrays + for key in ('video_servers', 'subtitles'): + if key 
in m3u8_dict: + m3u8_dict[key] = self._parse_json(m3u8_dict[key], None, fatal=False) or [] + + return m3u8_dict + + def _extract_formats_and_duration(self, cdn_hostname, path, video_id, skip_duration=False): + formats, duration = [], None + + hls_formats = self._extract_m3u8_formats( + f'https://{cdn_hostname}/{path}/master.m3u8', + video_id, 'mp4', m3u8_id='hls', fatal=False, live=True) + if hls_formats: + if not skip_duration: + duration = self._extract_m3u8_vod_duration( + hls_formats[0]['url'], video_id, note='Extracting duration from HLS manifest') + formats.extend(hls_formats) + + dash_formats = self._extract_mpd_formats( + f'https://{cdn_hostname}/{path}/master.mpd', video_id, mpd_id='dash', fatal=False) + if dash_formats: + if not duration and not skip_duration: + duration = self._extract_mpd_vod_duration( + f'https://{cdn_hostname}/{path}/master.mpd', video_id, + note='Extracting duration from DASH manifest') + formats.extend(dash_formats) + + return formats, duration + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage, urlh = self._download_embed_webpage_handle( + video_id, headers=traverse_obj(parse_qs(url), { + 'Referer': ('embed_parent_url', -1), + 'Origin': ('embed_container_origin', -1)})) + redirect_url = urlh.url + if 'domain_not_allowed' in redirect_url: + domain = traverse_obj(parse_qs(redirect_url), ('allowed_domains[]', ...), get_all=False) + if not domain: + raise ExtractorError( + 'This is an embed-only presentation. Try passing --referer', expected=True) + webpage, _ = self._download_embed_webpage_handle(video_id, headers={ + 'Referer': f'https://{domain}/', + 'Origin': f'https://{domain}', + }) + + player_token = self._search_regex(r'data-player-token="([^"]+)"', webpage, 'player token') + player_data = self._download_webpage( + f'https://ben.slideslive.com/player/{video_id}', video_id, + note='Downloading player info', query={'player_token': player_token}) + player_info = self._extract_custom_m3u8_info(player_data) + + service_name = player_info['service_name'].lower() + assert service_name in ('url', 'yoda', 'vimeo', 'youtube') + service_id = player_info['service_id'] + + slide_url_template = 'https://slides.slideslive.com/%s/slides/original/%s%s' + slides, slides_info = {}, [] + + if player_info.get('slides_json_url'): + slides = self._download_json( + player_info['slides_json_url'], video_id, fatal=False, + note='Downloading slides JSON', errnote=False) or {} + slide_ext_default = '.png' + slide_quality = traverse_obj(slides, ('slide_qualities', 0)) + if slide_quality: + slide_ext_default = '.jpg' + slide_url_template = f'https://cdn.slideslive.com/data/presentations/%s/slides/{slide_quality}/%s%s' + for slide_id, slide in enumerate(traverse_obj(slides, ('slides', ...), expected_type=dict), 1): + slides_info.append(( + slide_id, traverse_obj(slide, ('image', 'name')), + traverse_obj(slide, ('image', 'extname'), default=slide_ext_default), + int_or_none(slide.get('time'), scale=1000))) + + if not slides and player_info.get('slides_xml_url'): + slides = self._download_xml( + player_info['slides_xml_url'], video_id, fatal=False, + note='Downloading slides XML', errnote='Failed to download slides info') + slide_url_template = 'https://cdn.slideslive.com/data/presentations/%s/slides/big/%s%s' + for slide_id, slide in enumerate(slides.findall('./slide') if slides else [], 1): + slides_info.append(( + slide_id, xpath_text(slide, './slideName', 'name'), '.jpg', + int_or_none(xpath_text(slide, './timeSec', 'time')))) + + chapters, thumbnails = 
[], [] + if url_or_none(player_info.get('thumbnail')): + thumbnails.append({'id': 'cover', 'url': player_info['thumbnail']}) + for slide_id, slide_path, slide_ext, start_time in slides_info: + if slide_path: + thumbnails.append({ + 'id': f'{slide_id:03d}', + 'url': slide_url_template % (video_id, slide_path, slide_ext), + }) + chapters.append({ + 'title': f'Slide {slide_id:03d}', + 'start_time': start_time, + }) + + subtitles = {} + for sub in traverse_obj(player_info, ('subtitles', ...), expected_type=dict): + webvtt_url = url_or_none(sub.get('webvtt_url')) + if not webvtt_url: + continue + subtitles.setdefault(sub.get('language') or 'en', []).append({ + 'url': webvtt_url, + 'ext': 'vtt', + }) + + info = { + 'id': video_id, + 'title': player_info.get('title') or self._html_search_meta('title', webpage, default=''), + 'timestamp': unified_timestamp(player_info.get('timestamp')), + 'is_live': player_info.get('playlist_type') != 'vod', + 'thumbnails': thumbnails, + 'chapters': chapters, + 'subtitles': subtitles, + } + + if service_name == 'url': + info['url'] = service_id + elif service_name == 'yoda': + formats, duration = self._extract_formats_and_duration( + player_info['video_servers'][0], service_id, video_id) + info.update({ + 'duration': duration, + 'formats': formats, + }) + else: + info.update({ + '_type': 'url_transparent', + 'url': service_id, + 'ie_key': service_name.capitalize(), + 'display_id': video_id, + }) + if service_name == 'vimeo': + info['url'] = smuggle_url( + f'https://player.vimeo.com/video/{service_id}', + {'http_headers': {'Referer': url}}) + + video_slides = traverse_obj(slides, ('slides', ..., 'video', 'id')) + if not video_slides: + return info + + def entries(): + yield info + + service_data = self._download_json( + f'https://ben.slideslive.com/player/{video_id}/slides_video_service_data', + video_id, fatal=False, query={ + 'player_token': player_token, + 'videos': ','.join(video_slides), + }, note='Downloading video slides info', errnote='Failed to download video slides info') or {} + + for slide_id, slide in enumerate(traverse_obj(slides, ('slides', ...)), 1): + if not traverse_obj(slide, ('video', 'service')) == 'yoda': + continue + video_path = traverse_obj(slide, ('video', 'id')) + cdn_hostname = traverse_obj(service_data, ( + video_path, 'video_servers', ...), get_all=False) + if not cdn_hostname or not video_path: + continue + formats, _ = self._extract_formats_and_duration( + cdn_hostname, video_path, video_id, skip_duration=True) + if not formats: + continue + yield { + 'id': f'{video_id}-{slide_id:03d}', + 'title': f'{info["title"]} - Slide {slide_id:03d}', + 'timestamp': info['timestamp'], + 'duration': int_or_none(traverse_obj(slide, ('video', 'duration_ms')), scale=1000), + 'formats': formats, + } + + return self.playlist_result(entries(), f'{video_id}-playlist', info['title']) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/slutload.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/slutload.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/slutload.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/slutload.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/smotrim.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/smotrim.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/smotrim.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/smotrim.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/snotr.py 
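Editor's note: in the SlidesLive extractor above, each slide doubles as a thumbnail and as a chapter marker. A small sketch of that fan-out from (slide_id, name, ext, start_time) tuples, mirroring the loop in `_real_extract`; the sample data is invented:

slides_info = [(1, 'slide-001', '.png', 0), (2, 'slide-002', '.png', 42)]
video_id = '38983994'
thumbnails, chapters = [], []
for slide_id, name, ext, start_time in slides_info:
    thumbnails.append({
        'id': f'{slide_id:03d}',
        'url': f'https://slides.slideslive.com/{video_id}/slides/original/{name}{ext}',
    })
    chapters.append({'title': f'Slide {slide_id:03d}', 'start_time': start_time})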
b/python/lib/python3.10/site-packages/yt_dlp/extractor/snotr.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/snotr.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/snotr.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/sohu.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sohu.py new file mode 100644 index 0000000..c0ff4f9 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/sohu.py @@ -0,0 +1,293 @@ +import base64 +import re + +from .common import InfoExtractor +from ..compat import ( + compat_str, + compat_urllib_parse_urlencode, +) +from ..utils import ( + ExtractorError, + int_or_none, + float_or_none, + url_or_none, + unified_timestamp, + try_get, + urljoin, + traverse_obj, +) + + +class SohuIE(InfoExtractor): + _VALID_URL = r'https?://(?P<mytv>my\.)?tv\.sohu\.com/.+?/(?(mytv)|n)(?P<id>\d+)\.shtml.*?' + + # Sohu videos give different MD5 sums on Travis CI and my machine + _TESTS = [{ + 'note': 'This video is available only in Mainland China', + 'url': 'http://tv.sohu.com/20130724/n382479172.shtml#super', + 'info_dict': { + 'id': '382479172', + 'ext': 'mp4', + 'title': 'MV:Far East Movement《The Illest》', + }, + 'skip': 'Only available in China', + }, { + 'url': 'http://tv.sohu.com/20150305/n409385080.shtml', + 'info_dict': { + 'id': '409385080', + 'ext': 'mp4', + 'title': '《2015湖南卫视羊年元宵晚会》唐嫣《花好月圆》', + }, + 'skip': 'no longer available', + }, { + 'url': 'http://my.tv.sohu.com/us/232799889/78693464.shtml', + 'info_dict': { + 'id': '78693464', + 'ext': 'mp4', + 'title': '【爱范品】第31期:MWC见不到的奇葩手机', + 'uploader': '爱范儿视频', + 'duration': 213, + 'timestamp': 1425519600, + 'upload_date': '20150305', + 'thumbnail': 'http://e3f49eaa46b57.cdn.sohucs.com//group1/M10/83/FA/MTAuMTAuODguODA=/6_14cbccdde5eg104SysCutcloud_78693464_7_0b.jpg', + 'tags': ['爱范儿', '爱范品', 'MWC', '手机'], + } + }, { + 'note': 'Multipart video', + 'url': 'http://my.tv.sohu.com/pl/8384802/78910339.shtml', + 'info_dict': { + 'id': '78910339', + 'title': '【神探苏实战秘籍】第13期 战争之影 赫卡里姆', + 'uploader': '小苏cany', + 'duration': 744.0, + 'timestamp': 1426269360, + 'upload_date': '20150313', + 'thumbnail': 'http://e3f49eaa46b57.cdn.sohucs.com//group1/M11/89/57/MTAuMTAuODguODA=/6_14cea022a1dg102SysCutcloud_78910339_8_0b.jpg', + 'tags': ['小苏MM', '英雄联盟', '实战秘籍'], + }, + 'playlist': [{ + 'info_dict': { + 'id': '78910339_part1', + 'ext': 'mp4', + 'duration': 294, + 'title': '【神探苏实战秘籍】第13期 战争之影 赫卡里姆', + } + }, { + 'info_dict': { + 'id': '78910339_part2', + 'ext': 'mp4', + 'duration': 300, + 'title': '【神探苏实战秘籍】第13期 战争之影 赫卡里姆', + } + }, { + 'info_dict': { + 'id': '78910339_part3', + 'ext': 'mp4', + 'duration': 150, + 'title': '【神探苏实战秘籍】第13期 战争之影 赫卡里姆', + } + }] + }, { + 'note': 'Video with title containing dash', + 'url': 'http://my.tv.sohu.com/us/249884221/78932792.shtml', + 'info_dict': { + 'id': '78932792', + 'ext': 'mp4', + 'title': 'youtube-dl testing video', + 'duration': 360, + 'timestamp': 1426348620, + 'upload_date': '20150314', + 'thumbnail': 'http://e3f49eaa46b57.cdn.sohucs.com//group1/M02/8A/00/MTAuMTAuODguNzk=/6_14cee1be192g102SysCutcloud_78932792_7_7b.jpg', + 'tags': [], + }, + 'params': { + 'skip_download': True + } + }] + + def _real_extract(self, url): + + def _fetch_data(vid_id, mytv=False): + if mytv: + base_data_url = 'http://my.tv.sohu.com/play/videonew.do?vid=' + else: + base_data_url =
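Editor's note: Sohu's `_VALID_URL` above relies on a regex conditional group: `(?(mytv)|n)` requires the literal `n` before the numeric id only when the `my.` subdomain group did not match. A quick demonstration with the two test URLs:

import re

pattern = r'https?://(?P<mytv>my\.)?tv\.sohu\.com/.+?/(?(mytv)|n)(?P<id>\d+)\.shtml'
for url in ('http://tv.sohu.com/20130724/n382479172.shtml',
            'http://my.tv.sohu.com/us/232799889/78693464.shtml'):
    m = re.match(pattern, url)
    print(m.group('id'))  # 382479172, then 78693464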
'http://hot.vrs.sohu.com/vrs_flash.action?vid=' + + return self._download_json( + base_data_url + vid_id, video_id, + 'Downloading JSON data for %s' % vid_id, + headers=self.geo_verification_headers()) + + mobj = self._match_valid_url(url) + video_id = mobj.group('id') + mytv = mobj.group('mytv') is not None + + webpage = self._download_webpage(url, video_id) + + title = re.sub(r'( - 高清正版在线观看)? - 搜狐视频$', '', self._og_search_title(webpage)) + + vid = self._html_search_regex( + r'var vid ?= ?["\'](\d+)["\']', + webpage, 'video path') + vid_data = _fetch_data(vid, mytv) + if vid_data['play'] != 1: + if vid_data.get('status') == 12: + raise ExtractorError( + '%s said: There\'s something wrong in the video.' % self.IE_NAME, + expected=True) + else: + self.raise_geo_restricted( + '%s said: The video is only licensed to users in Mainland China.' % self.IE_NAME) + + formats_json = {} + for format_id in ('nor', 'high', 'super', 'ori', 'h2644k', 'h2654k'): + vid_id = vid_data['data'].get('%sVid' % format_id) + if not vid_id: + continue + vid_id = compat_str(vid_id) + formats_json[format_id] = vid_data if vid == vid_id else _fetch_data(vid_id, mytv) + + part_count = vid_data['data']['totalBlocks'] + + playlist = [] + for i in range(part_count): + formats = [] + for format_id, format_data in formats_json.items(): + allot = format_data['allot'] + + data = format_data['data'] + clip_url = traverse_obj(data, (('clipsURL', 'mp4PlayUrl'), i, {url_or_none}), get_all=False) + if not clip_url: + raise ExtractorError(f'Unable to extract url for clip {i}') + su = data['su'] + + video_url = 'newflv.sohu.ccgslb.net' + cdnId = None + retries = 0 + + while 'newflv.sohu.ccgslb.net' in video_url: + params = { + 'prot': 9, + 'file': clip_url, + 'new': su[i], + 'prod': 'h5n', + 'rb': 1, + } + + if cdnId is not None: + params['idc'] = cdnId + + download_note = 'Downloading %s video URL part %d of %d' % ( + format_id, i + 1, part_count) + + if retries > 0: + download_note += ' (retry #%d)' % retries + part_info = self._parse_json(self._download_webpage( + 'http://%s/?%s' % (allot, compat_urllib_parse_urlencode(params)), + video_id, download_note), video_id) + + video_url = part_info['url'] + cdnId = part_info.get('nid') + + retries += 1 + if retries > 5: + raise ExtractorError('Failed to get video URL') + + formats.append({ + 'url': video_url, + 'format_id': format_id, + 'filesize': int_or_none( + try_get(data, lambda x: x['clipsBytes'][i])), + 'width': int_or_none(data.get('width')), + 'height': int_or_none(data.get('height')), + 'fps': int_or_none(data.get('fps')), + }) + + playlist.append({ + 'id': '%s_part%d' % (video_id, i + 1), + 'title': title, + 'duration': vid_data['data']['clipsDuration'][i], + 'formats': formats, + }) + + if len(playlist) == 1: + info = playlist[0] + info['id'] = video_id + else: + info = { + '_type': 'multi_video', + 'entries': playlist, + 'id': video_id, + 'title': title, + 'duration': traverse_obj(vid_data, ('data', 'totalDuration', {float_or_none})), + } + + if mytv: + publish_time = unified_timestamp(self._search_regex( + r'publishTime:\s*["\'](\d+-\d+-\d+ \d+:\d+)["\']', webpage, 'publish time', fatal=False)) + else: + publish_time = traverse_obj(vid_data, ('tv_application_time', {unified_timestamp})) + + return { + 'timestamp': publish_time - 8 * 3600 if publish_time else None, + **traverse_obj(vid_data, { + 'alt_title': ('data', 'subName', {str}), + 'uploader': ('wm_data', 'wm_username', {str}), + 'thumbnail': ('data', 'coverImg', {url_or_none}), + 'tags': ('data', 'tag',
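Editor's note: the inner while-loop above keeps re-requesting the clip URL until the CDN stops returning the `newflv.sohu.ccgslb.net` placeholder host, feeding the returned `nid` back as the `idc` parameter on each pass. A hedged sketch of that retry shape; `fetch_part` is a hypothetical stand-in for the real JSON request to the allotted CDN host:

def resolve_video_url(fetch_part, max_retries=5):
    # fetch_part(idc) -> {'url': ..., 'nid': ...}  (hypothetical stand-in)
    video_url, cdn_id, retries = 'newflv.sohu.ccgslb.net', None, 0
    while 'newflv.sohu.ccgslb.net' in video_url:
        part_info = fetch_part(cdn_id)
        video_url = part_info['url']
        cdn_id = part_info.get('nid')  # replayed as the 'idc' query parameter
        retries += 1
        if retries > max_retries:
            raise RuntimeError('Failed to get video URL')
    return video_url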
{str.split}), + }), + **info, + } + + +class SohuVIE(InfoExtractor): + _VALID_URL = r'https?://tv\.sohu\.com/v/(?P<id>[\w=-]+)\.html(?:$|[#?])' + + _TESTS = [{ + 'note': 'Multipart video', + 'url': 'https://tv.sohu.com/v/MjAyMzA2MTQvbjYwMTMxNTE5Mi5zaHRtbA==.html', + 'info_dict': { + 'id': '601315192', + 'title': '《淬火丹心》第1集', + 'alt_title': '“点天灯”发生事故', + 'duration': 2701.692, + 'timestamp': 1686758040, + 'upload_date': '20230614', + 'thumbnail': 'http://photocdn.tv.sohu.com/img/20230614/vrsa_hor_1686738763256_454010551.jpg', + }, + 'playlist_mincount': 9, + 'skip': 'Only available in China', + }, { + 'url': 'https://tv.sohu.com/v/dXMvMjMyNzk5ODg5Lzc4NjkzNDY0LnNodG1s.html', + 'info_dict': { + 'id': '78693464', + 'ext': 'mp4', + 'title': '【爱范品】第31期:MWC见不到的奇葩手机', + 'uploader': '爱范儿视频', + 'duration': 213, + 'timestamp': 1425519600, + 'upload_date': '20150305', + 'thumbnail': 'http://e3f49eaa46b57.cdn.sohucs.com//group1/M10/83/FA/MTAuMTAuODguODA=/6_14cbccdde5eg104SysCutcloud_78693464_7_0b.jpg', + 'tags': ['爱范儿', '爱范品', 'MWC', '手机'], + } + }, { + 'note': 'Multipart video', + 'url': 'https://tv.sohu.com/v/dXMvMjQyNTYyMTYzLzc4OTEwMzM5LnNodG1s.html?src=pl', + 'info_dict': { + 'id': '78910339', + 'title': '【神探苏实战秘籍】第13期 战争之影 赫卡里姆', + 'uploader': '小苏cany', + 'duration': 744.0, + 'timestamp': 1426269360, + 'upload_date': '20150313', + 'thumbnail': 'http://e3f49eaa46b57.cdn.sohucs.com//group1/M11/89/57/MTAuMTAuODguODA=/6_14cea022a1dg102SysCutcloud_78910339_8_0b.jpg', + 'tags': ['小苏MM', '英雄联盟', '实战秘籍'], + }, + 'playlist_mincount': 3, + }] + + def _real_extract(self, url): + encoded_id = self._match_id(url) + path = base64.urlsafe_b64decode(encoded_id).decode() + subdomain = 'tv' if re.match(r'\d+/n\d+\.shtml', path) else 'my.tv' + return self.url_result(urljoin(f'http://{subdomain}.sohu.com/', path), SohuIE) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/sonyliv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sonyliv.py new file mode 100644 index 0000000..4379572 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/sonyliv.py @@ -0,0 +1,220 @@ +import datetime +import json +import math +import random +import time +import uuid + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + int_or_none, + jwt_decode_hs256, + try_call, + try_get, +) + + +class SonyLIVIE(InfoExtractor): + _VALID_URL = r'''(?x) + (?: + sonyliv:| + https?://(?:www\.)?sonyliv\.com/(?:s(?:how|port)s/[^/]+|movies|clip|trailer|music-videos)/[^/?#&]+- + ) + (?P<id>\d+) + ''' + _TESTS = [{ + 'url': 'https://www.sonyliv.com/shows/bachelors-delight-1700000113/achaari-cheese-toast-1000022678?watch=true', + 'info_dict': { + 'title': 'Achaari Cheese Toast', + 'id': '1000022678', + 'ext': 'mp4', + 'upload_date': '20200411', + 'description': 'md5:3957fa31d9309bf336ceb3f37ad5b7cb', + 'timestamp': 1586632091, + 'duration': 185, + 'season_number': 1, + 'series': 'Bachelors Delight', + 'episode_number': 1, + 'release_year': 2016, + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'https://www.sonyliv.com/movies/tahalka-1000050121?watch=true', + 'only_matching': True, + }, { + 'url': 'https://www.sonyliv.com/clip/jigarbaaz-1000098925', + 'only_matching': True, + }, { + 'url': 'https://www.sonyliv.com/trailer/sandwiched-forever-1000100286?watch=true', + 'only_matching': True, + }, { + 'url':
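Editor's note: SohuVIE's opaque ids above are just URL-safe base64 of the legacy path, and the decoded path decides the subdomain. Decoding the id from the second test URL:

import base64, re

encoded_id = 'dXMvMjMyNzk5ODg5Lzc4NjkzNDY0LnNodG1s'  # from the test URL above
path = base64.urlsafe_b64decode(encoded_id).decode()
print(path)  # us/232799889/78693464.shtml
# numeric ".../n<id>.shtml" paths live on tv.sohu.com, the rest on my.tv.sohu.com
subdomain = 'tv' if re.match(r'\d+/n\d+\.shtml', path) else 'my.tv'
print(f'http://{subdomain}.sohu.com/{path}')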
'https://www.sonyliv.com/sports/india-tour-of-australia-2020-21-1700000286/cricket-hls-day-3-1st-test-aus-vs-ind-19-dec-2020-1000100959?watch=true', + 'only_matching': True, + }, { + 'url': 'https://www.sonyliv.com/music-videos/yeh-un-dinon-ki-baat-hai-1000018779', + 'only_matching': True, + }] + _GEO_COUNTRIES = ['IN'] + _HEADERS = {} + _LOGIN_HINT = 'Use "--username <mobile_number>" to login using OTP or "--username token --password <auth_token>" to login using auth token.' + _NETRC_MACHINE = 'sonyliv' + + def _get_device_id(self): + e = int(time.time() * 1000) + t = list('xxxxxxxxxxxx4xxxyxxxxxxxxxxxxxxx') + for i, c in enumerate(t): + n = int((e + 16 * random.random()) % 16) | 0 + e = math.floor(e / 16) + if c == 'x': + t[i] = str(n) + elif c == 'y': + t[i] = '{:x}'.format(3 & n | 8) + return ''.join(t) + '-' + str(int(time.time() * 1000)) + + def _perform_login(self, username, password): + self._HEADERS['device_id'] = self._get_device_id() + self._HEADERS['content-type'] = 'application/json' + + if username.lower() == 'token' and try_call(lambda: jwt_decode_hs256(password)): + self._HEADERS['authorization'] = password + self.report_login() + return + elif len(username) != 10 or not username.isdigit(): + raise ExtractorError(f'Invalid username/password; {self._LOGIN_HINT}') + + self.report_login() + otp_request_json = self._download_json( + 'https://apiv2.sonyliv.com/AGL/1.6/A/ENG/WEB/IN/HR/CREATEOTP-V2', + None, note='Sending OTP', headers=self._HEADERS, data=json.dumps({ + 'mobileNumber': username, + 'channelPartnerID': 'MSMIND', + 'country': 'IN', + 'timestamp': datetime.datetime.now().strftime('%Y-%m-%dT%H:%M:%S.%MZ'), + 'otpSize': 6, + 'loginType': 'REGISTERORSIGNIN', + 'isMobileMandatory': True, + }).encode()) + if otp_request_json['resultCode'] == 'KO': + raise ExtractorError(otp_request_json['message'], expected=True) + + otp_verify_json = self._download_json( + 'https://apiv2.sonyliv.com/AGL/2.0/A/ENG/WEB/IN/HR/CONFIRMOTP-V2', + None, note='Verifying OTP', headers=self._HEADERS, data=json.dumps({ + 'channelPartnerID': 'MSMIND', + 'mobileNumber': username, + 'country': 'IN', + 'otp': self._get_tfa_info('OTP'), + 'dmaId': 'IN', + 'ageConfirmation': True, + 'timestamp': datetime.datetime.now().strftime('%Y-%m-%dT%H:%M:%S.%MZ'), + 'isMobileMandatory': True, + }).encode()) + if otp_verify_json['resultCode'] == 'KO': + raise ExtractorError(otp_request_json['message'], expected=True) + self._HEADERS['authorization'] = otp_verify_json['resultObj']['accessToken'] + + def _call_api(self, version, path, video_id): + try: + return self._download_json( + 'https://apiv2.sonyliv.com/AGL/%s/A/ENG/WEB/%s' % (version, path), + video_id, headers=self._HEADERS)['resultObj'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 406 and self._parse_json( + e.cause.response.read().decode(), video_id)['message'] == 'Please subscribe to watch this content': + self.raise_login_required(self._LOGIN_HINT, method=None) + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + message = self._parse_json( + e.cause.response.read().decode(), video_id)['message'] + if message == 'Geoblocked Country': + self.raise_geo_restricted(countries=self._GEO_COUNTRIES) + raise ExtractorError(message) + raise + + def _initialize_pre_login(self): + self._HEADERS['security_token'] = self._call_api('1.4', 'ALL/GETTOKEN', None) + + def _real_extract(self, url): + video_id = self._match_id(url) + content = self._call_api( + '1.5', 'IN/CONTENT/VIDEOURL/VOD/' + video_id, video_id) + if 
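Editor's note: the token login path above only sanity-checks that the supplied value parses as a JWT (via yt-dlp's jwt_decode_hs256 helper). Decoding a JWT payload needs nothing beyond base64 and json; a minimal sketch of the idea, not the library's implementation:

import base64, json

def jwt_payload(token):
    # decode the middle (payload) segment of a JWT without verifying it
    payload_b64 = token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# e.g. jwt_payload(auth_token).get('exp') could be compared against time.time()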
not self.get_param('allow_unplayable_formats') and content.get('isEncrypted'): + self.report_drm(video_id) + dash_url = content['videoURL'] + headers = { + 'x-playback-session-id': '%s-%d' % (uuid.uuid4().hex, time.time() * 1000) + } + formats = self._extract_mpd_formats( + dash_url, video_id, mpd_id='dash', headers=headers, fatal=False) + formats.extend(self._extract_m3u8_formats( + dash_url.replace('.mpd', '.m3u8').replace('/DASH/', '/HLS/'), + video_id, 'mp4', m3u8_id='hls', headers=headers, fatal=False)) + for f in formats: + f.setdefault('http_headers', {}).update(headers) + + metadata = self._call_api( + '1.6', 'IN/DETAIL/' + video_id, video_id)['containers'][0]['metadata'] + title = metadata['episodeTitle'] + subtitles = {} + for sub in content.get('subtitle', []): + sub_url = sub.get('subtitleUrl') + if not sub_url: + continue + subtitles.setdefault(sub.get('subtitleLanguageName', 'ENG'), []).append({ + 'url': sub_url, + }) + return { + 'id': video_id, + 'title': title, + 'formats': formats, + 'thumbnail': content.get('posterURL'), + 'description': metadata.get('longDescription') or metadata.get('shortDescription'), + 'timestamp': int_or_none(metadata.get('creationDate'), 1000), + 'duration': int_or_none(metadata.get('duration')), + 'season_number': int_or_none(metadata.get('season')), + 'series': metadata.get('title'), + 'episode_number': int_or_none(metadata.get('episodeNumber')), + 'release_year': int_or_none(metadata.get('year')), + 'subtitles': subtitles, + } + + +class SonyLIVSeriesIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?sonyliv\.com/shows/[^/?#&]+-(?P<id>\d{10})$' + _TESTS = [{ + 'url': 'https://www.sonyliv.com/shows/adaalat-1700000091', + 'playlist_mincount': 456, + 'info_dict': { + 'id': '1700000091', + }, + }] + _API_SHOW_URL = "https://apiv2.sonyliv.com/AGL/1.9/R/ENG/WEB/IN/DL/DETAIL/{}?kids_safe=false&from=0&to=49" + _API_EPISODES_URL = "https://apiv2.sonyliv.com/AGL/1.4/R/ENG/WEB/IN/CONTENT/DETAIL/BUNDLE/{}?from=0&to=1000&orderBy=episodeNumber&sortOrder=asc" + _API_SECURITY_URL = 'https://apiv2.sonyliv.com/AGL/1.4/A/ENG/WEB/ALL/GETTOKEN' + + def _entries(self, show_id): + headers = { + 'Accept': 'application/json, text/plain, */*', + 'Referer': 'https://www.sonyliv.com', + } + headers['security_token'] = self._download_json( + self._API_SECURITY_URL, video_id=show_id, headers=headers, + note='Downloading security token')['resultObj'] + seasons = try_get( + self._download_json(self._API_SHOW_URL.format(show_id), video_id=show_id, headers=headers), + lambda x: x['resultObj']['containers'][0]['containers'], list) + for season in seasons or []: + season_id = season['id'] + episodes = try_get( + self._download_json(self._API_EPISODES_URL.format(season_id), video_id=season_id, headers=headers), + lambda x: x['resultObj']['containers'][0]['containers'], list) + for episode in episodes or []: + video_id = episode.get('id') + yield self.url_result('sonyliv:%s' % video_id, ie=SonyLIVIE.ie_key(), video_id=video_id) + + def _real_extract(self, url): + show_id = self._match_id(url) + return self.playlist_result(self._entries(show_id), playlist_id=show_id) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/soundcloud.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/soundcloud.py new file mode 100644 index 0000000..a7c2afd --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/soundcloud.py @@ -0,0 +1,948 @@ +import itertools +import re +import json +# import random + +from .common import ( + InfoExtractor, + 
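Editor's note: SonyLIV publishes parallel DASH and HLS manifests, so the HLS URL above is derived from the DASH one by plain string substitution, and every format carries the same playback-session header. A sketch; the URL shape is illustrative only:

import time, uuid

dash_url = 'https://example.sonyliv.com/DASH/1000022678/master.mpd'  # illustrative
hls_url = dash_url.replace('.mpd', '.m3u8').replace('/DASH/', '/HLS/')
headers = {'x-playback-session-id': '%s-%d' % (uuid.uuid4().hex, time.time() * 1000)}
print(hls_url)  # https://example.sonyliv.com/HLS/1000022678/master.m3u8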
SearchInfoExtractor +) +from ..compat import compat_str +from ..networking import HEADRequest, Request +from ..networking.exceptions import HTTPError +from ..utils import ( + error_to_compat_str, + ExtractorError, + float_or_none, + int_or_none, + KNOWN_EXTENSIONS, + mimetype2ext, + parse_qs, + str_or_none, + try_get, + unified_timestamp, + update_url_query, + url_or_none, + urlhandle_detect_ext, +) + + +class SoundcloudEmbedIE(InfoExtractor): + _VALID_URL = r'https?://(?:w|player|p)\.soundcloud\.com/player/?.*?\burl=(?P<id>.+)' + _EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?://)?(?:w\.)?soundcloud\.com/player.+?)\1'] + _TEST = { + # from https://www.soundi.fi/uutiset/ennakkokuuntelussa-timo-kaukolammen-station-to-station-to-station-julkaisua-juhlitaan-tanaan-g-livelabissa/ + 'url': 'https://w.soundcloud.com/player/?visual=true&url=https%3A%2F%2Fapi.soundcloud.com%2Fplaylists%2F922213810&show_artwork=true&maxwidth=640&maxheight=960&dnt=1&secret_token=s-ziYey', + 'only_matching': True, + } + + def _real_extract(self, url): + query = parse_qs(url) + api_url = query['url'][0] + secret_token = query.get('secret_token') + if secret_token: + api_url = update_url_query(api_url, {'secret_token': secret_token[0]}) + return self.url_result(api_url) + + +class SoundcloudBaseIE(InfoExtractor): + _NETRC_MACHINE = 'soundcloud' + + _API_V2_BASE = 'https://api-v2.soundcloud.com/' + _BASE_URL = 'https://soundcloud.com/' + _USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36' + _API_AUTH_QUERY_TEMPLATE = '?client_id=%s' + _API_AUTH_URL_PW = 'https://api-auth.soundcloud.com/web-auth/sign-in/password%s' + _API_VERIFY_AUTH_TOKEN = 'https://api-auth.soundcloud.com/connect/session%s' + _access_token = None + _HEADERS = {} + + _IMAGE_REPL_RE = r'-([0-9a-z]+)\.jpg' + + _ARTWORK_MAP = { + 'mini': 16, + 'tiny': 20, + 'small': 32, + 'badge': 47, + 't67x67': 67, + 'large': 100, + 't300x300': 300, + 'crop': 400, + 't500x500': 500, + 'original': 0, + } + + def _store_client_id(self, client_id): + self.cache.store('soundcloud', 'client_id', client_id) + + def _update_client_id(self): + webpage = self._download_webpage('https://soundcloud.com/', None) + for src in reversed(re.findall(r'<script[^>]+src="([^"]+)"', webpage)): + script = self._download_webpage(src, None, fatal=False) + if script: + client_id = self._search_regex( + r'client_id\s*:\s*"([0-9a-zA-Z]{32})"', + script, 'client id', default=None) + if client_id: + self._CLIENT_ID = client_id + self._store_client_id(client_id) + return + raise ExtractorError('Unable to extract client id') + + def _download_json(self, *args, **kwargs): + non_fatal = kwargs.get('fatal') is False + if non_fatal: + del kwargs['fatal'] + query = kwargs.get('query', {}).copy() + for _ in range(2): + query['client_id'] = self._CLIENT_ID + kwargs['query'] = query + try: + return super()._download_json(*args, **kwargs) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status in (401, 403): + self._store_client_id(None) + self._update_client_id() + continue + elif non_fatal: + self.report_warning(error_to_compat_str(e)) + return False + raise + + def _initialize_pre_login(self): + self._CLIENT_ID = self.cache.load('soundcloud', 'client_id') or 'a3e059563d7fd3372b49b37f00a00bcf' + + def _perform_login(self, username, password): + if username != 'oauth': + self.report_warning( + 'Login using username and password is not currently supported. 
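Editor's note: SoundCloud's anonymous client_id is not documented anywhere stable, so `_update_client_id` above recovers it by scanning the site's JS bundles for a 32-character token. The core matching step, shown against an invented script snippet:

import re

script = 'var o={client_id:"a3e059563d7fd3372b49b37f00a00bcf"};'  # invented sample
client_id = re.search(r'client_id\s*:\s*"([0-9a-zA-Z]{32})"', script).group(1)
print(client_id)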
' + 'Use "--username oauth --password <oauth_token>" to login using an oauth token') + self._access_token = password + query = self._API_AUTH_QUERY_TEMPLATE % self._CLIENT_ID + payload = {'session': {'access_token': self._access_token}} + token_verification = Request(self._API_VERIFY_AUTH_TOKEN % query, json.dumps(payload).encode('utf-8')) + response = self._download_json(token_verification, None, note='Verifying login token...', fatal=False) + if response is not False: + self._HEADERS = {'Authorization': 'OAuth ' + self._access_token} + self.report_login() + else: + self.report_warning('Provided authorization token seems to be invalid. Continue as guest') + + r''' + def genDevId(): + def genNumBlock(): + return ''.join([str(random.randrange(10)) for i in range(6)]) + return '-'.join([genNumBlock() for i in range(4)]) + + payload = { + 'client_id': self._CLIENT_ID, + 'recaptcha_pubkey': 'null', + 'recaptcha_response': 'null', + 'credentials': { + 'identifier': username, + 'password': password + }, + 'signature': self.sign(username, password, self._CLIENT_ID), + 'device_id': genDevId(), + 'user_agent': self._USER_AGENT + } + + query = self._API_AUTH_QUERY_TEMPLATE % self._CLIENT_ID + login = sanitized_Request(self._API_AUTH_URL_PW % query, json.dumps(payload).encode('utf-8')) + response = self._download_json(login, None) + self._access_token = response.get('session').get('access_token') + if not self._access_token: + self.report_warning('Unable to get access token, login may has failed') + else: + self._HEADERS = {'Authorization': 'OAuth ' + self._access_token} + ''' + + # signature generation + def sign(self, user, pw, clid): + a = 33 + i = 1 + s = 440123 + w = 117 + u = 1800000 + l = 1042 + b = 37 + k = 37 + c = 5 + n = '0763ed7314c69015fd4a0dc16bbf4b90' # _KEY + y = '8' # _REV + r = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36' # _USER_AGENT + e = user # _USERNAME + t = clid # _CLIENT_ID + + d = '-'.join([str(mInt) for mInt in [a, i, s, w, u, l, b, k]]) + p = n + y + d + r + e + t + d + n + h = p + + m = 8011470 + f = 0 + + for f in range(f, len(h)): + m = (m >> 1) + ((1 & m) << 23) + m += ord(h[f]) + m &= 16777215 + + # c is not even needed + out = str(y) + ':' + str(d) + ':' + format(m, 'x') + ':' + str(c) + + return out + + def _extract_info_dict(self, info, full_title=None, secret_token=None, extract_flat=False): + track_id = compat_str(info['id']) + title = info['title'] + + format_urls = set() + formats = [] + query = {'client_id': self._CLIENT_ID} + if secret_token: + query['secret_token'] = secret_token + + if not extract_flat and info.get('downloadable') and info.get('has_downloads_left'): + download_url = update_url_query( + self._API_V2_BASE + 'tracks/' + track_id + '/download', query) + redirect_url = (self._download_json(download_url, track_id, fatal=False) or {}).get('redirectUri') + if redirect_url: + urlh = self._request_webpage( + HEADRequest(redirect_url), track_id, fatal=False) + if urlh: + format_url = urlh.url + format_urls.add(format_url) + formats.append({ + 'format_id': 'download', + 'ext': urlhandle_detect_ext(urlh) or 'mp3', + 'filesize': int_or_none(urlh.headers.get('Content-Length')), + 'url': format_url, + 'quality': 10, + }) + + def invalid_url(url): + return not url or url in format_urls + + def add_format(f, protocol, is_preview=False): + mobj = re.search(r'\.(?P<abr>\d+)\.(?P<ext>[0-9a-z]{3,4})(?=[/?])', stream_url) + if mobj: + for k, v in mobj.groupdict().items(): + if not f.get(k): 
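Editor's note: the overridden `_download_json` above wraps every API call in a refresh-and-retry: on a 401/403 it drops the cached client_id, scrapes a fresh one, and retries once. The control flow as a generic sketch; `fetch` and `refresh` are hypothetical stand-ins for the real HTTP call and client-id scrape:

def download_json(fetch, refresh, query):
    # fetch(query) raises PermissionError on 401/403 (hypothetical stand-in)
    for attempt in range(2):
        try:
            return fetch(query)
        except PermissionError:
            if attempt == 1:
                raise  # second failure: give up
            query['client_id'] = refresh()  # renew the id and retry once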
+ f[k] = v + format_id_list = [] + if protocol: + format_id_list.append(protocol) + ext = f.get('ext') + if ext == 'aac': + f['abr'] = '256' + for k in ('ext', 'abr'): + v = f.get(k) + if v: + format_id_list.append(v) + preview = is_preview or re.search(r'/(?:preview|playlist)/0/30/', f['url']) + if preview: + format_id_list.append('preview') + abr = f.get('abr') + if abr: + f['abr'] = int(abr) + if protocol == 'hls': + protocol = 'm3u8' if ext == 'aac' else 'm3u8_native' + else: + protocol = 'http' + f.update({ + 'format_id': '_'.join(format_id_list), + 'protocol': protocol, + 'preference': -10 if preview else None, + }) + formats.append(f) + + # New API + transcodings = try_get( + info, lambda x: x['media']['transcodings'], list) or [] + for t in transcodings: + if not isinstance(t, dict): + continue + format_url = url_or_none(t.get('url')) + if not format_url: + continue + stream = None if extract_flat else self._download_json( + format_url, track_id, query=query, fatal=False, headers=self._HEADERS) + if not isinstance(stream, dict): + continue + stream_url = url_or_none(stream.get('url')) + if invalid_url(stream_url): + continue + format_urls.add(stream_url) + stream_format = t.get('format') or {} + protocol = stream_format.get('protocol') + if protocol != 'hls' and '/hls' in format_url: + protocol = 'hls' + ext = None + preset = str_or_none(t.get('preset')) + if preset: + ext = preset.split('_')[0] + if ext not in KNOWN_EXTENSIONS: + ext = mimetype2ext(stream_format.get('mime_type')) + add_format({ + 'url': stream_url, + 'ext': ext, + }, 'http' if protocol == 'progressive' else protocol, + t.get('snipped') or '/preview/' in format_url) + + for f in formats: + f['vcodec'] = 'none' + + if not formats and info.get('policy') == 'BLOCK': + self.raise_geo_restricted(metadata_available=True) + + user = info.get('user') or {} + + thumbnails = [] + artwork_url = info.get('artwork_url') + thumbnail = artwork_url or user.get('avatar_url') + if isinstance(thumbnail, compat_str): + if re.search(self._IMAGE_REPL_RE, thumbnail): + for image_id, size in self._ARTWORK_MAP.items(): + i = { + 'id': image_id, + 'url': re.sub(self._IMAGE_REPL_RE, '-%s.jpg' % image_id, thumbnail), + } + if image_id == 'tiny' and not artwork_url: + size = 18 + elif image_id == 'original': + i['preference'] = 10 + if size: + i.update({ + 'width': size, + 'height': size, + }) + thumbnails.append(i) + else: + thumbnails = [{'url': thumbnail}] + + def extract_count(key): + return int_or_none(info.get('%s_count' % key)) + + return { + 'id': track_id, + 'uploader': user.get('username'), + 'uploader_id': str_or_none(user.get('id')) or user.get('permalink'), + 'uploader_url': user.get('permalink_url'), + 'timestamp': unified_timestamp(info.get('created_at')), + 'title': title, + 'description': info.get('description'), + 'thumbnails': thumbnails, + 'duration': float_or_none(info.get('duration'), 1000), + 'webpage_url': info.get('permalink_url'), + 'license': info.get('license'), + 'view_count': extract_count('playback'), + 'like_count': extract_count('favoritings') or extract_count('likes'), + 'comment_count': extract_count('comment'), + 'repost_count': extract_count('reposts'), + 'genre': info.get('genre'), + 'formats': formats if not extract_flat else None + } + + @classmethod + def _resolv_url(cls, url): + return cls._API_V2_BASE + 'resolve?url=' + url + + +class SoundcloudIE(SoundcloudBaseIE): + """Information extractor for soundcloud.com + To access the media, the uid of the song and a stream token + must be extracted from the 
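Editor's note: SoundCloud serves every artwork size from one URL template, so the thumbnail list built further down is generated by rewriting the size suffix with `_IMAGE_REPL_RE`. The substitution on its own, against an illustrative URL:

import re

artwork = 'https://i1.sndcdn.com/artworks-000123-abc-large.jpg'  # illustrative
for size_id in ('t500x500', 'original', 'tiny'):
    print(re.sub(r'-([0-9a-z]+)\.jpg', f'-{size_id}.jpg', artwork))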
page source and the script must make + a request to media.soundcloud.com/crossdomain.xml. Then + the media can be grabbed by requesting from an url composed + of the stream token and uid + """ + + _VALID_URL = r'''(?x)^(?:https?://)? + (?:(?:(?:www\.|m\.)?soundcloud\.com/ + (?!stations/track) + (?P<uploader>[\w\d-]+)/ + (?!(?:tracks|albums|sets(?:/.+?)?|reposts|likes|spotlight)/?(?:$|[?#])) + (?P<title>[\w\d-]+) + (?:/(?P<token>(?!(?:albums|sets|recommended))[^?]+?))? + (?:[?].*)?$) + |(?:api(?:-v2)?\.soundcloud\.com/tracks/(?P<track_id>\d+) + (?:/?\?secret_token=(?P<secret_token>[^&]+))?) + ) + ''' + IE_NAME = 'soundcloud' + _TESTS = [ + { + 'url': 'http://soundcloud.com/ethmusic/lostin-powers-she-so-heavy', + 'md5': 'ebef0a451b909710ed1d7787dddbf0d7', + 'info_dict': { + 'id': '62986583', + 'ext': 'mp3', + 'title': 'Lostin Powers - She so Heavy (SneakPreview) Adrian Ackers Blueprint 1', + 'description': 'No Downloads untill we record the finished version this weekend, i was too pumped n i had to post it , earl is prolly gonna b hella p.o\'d', + 'uploader': 'E.T. ExTerrestrial Music', + 'uploader_id': '1571244', + 'timestamp': 1349920598, + 'upload_date': '20121011', + 'duration': 143.216, + 'license': 'all-rights-reserved', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'repost_count': int, + } + }, + # geo-restricted + { + 'url': 'https://soundcloud.com/the-concept-band/goldrushed-mastered?in=the-concept-band/sets/the-royal-concept-ep', + 'info_dict': { + 'id': '47127627', + 'ext': 'mp3', + 'title': 'Goldrushed', + 'description': 'From Stockholm Sweden\r\nPovel / Magnus / Filip / David\r\nwww.theroyalconcept.com', + 'uploader': 'The Royal Concept', + 'uploader_id': '9615865', + 'timestamp': 1337635207, + 'upload_date': '20120521', + 'duration': 227.155, + 'license': 'all-rights-reserved', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'repost_count': int, + }, + }, + # private link + { + 'url': 'https://soundcloud.com/jaimemf/youtube-dl-test-video-a-y-baw/s-8Pjrp', + 'md5': 'aa0dd32bfea9b0c5ef4f02aacd080604', + 'info_dict': { + 'id': '123998367', + 'ext': 'mp3', + 'title': 'Youtube - Dl Test Video \'\' Ä↭', + 'description': 'test chars: \"\'/\\ä↭', + 'uploader': 'jaimeMF', + 'uploader_id': '69767071', + 'timestamp': 1386604920, + 'upload_date': '20131209', + 'duration': 9.927, + 'license': 'all-rights-reserved', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'repost_count': int, + }, + }, + # private link (alt format) + { + 'url': 'https://api.soundcloud.com/tracks/123998367?secret_token=s-8Pjrp', + 'md5': 'aa0dd32bfea9b0c5ef4f02aacd080604', + 'info_dict': { + 'id': '123998367', + 'ext': 'mp3', + 'title': 'Youtube - Dl Test Video \'\' Ä↭', + 'description': 'test chars: \"\'/\\ä↭', + 'uploader': 'jaimeMF', + 'uploader_id': '69767071', + 'timestamp': 1386604920, + 'upload_date': '20131209', + 'duration': 9.927, + 'license': 'all-rights-reserved', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'repost_count': int, + }, + }, + # downloadable song + { + 'url': 'https://soundcloud.com/the80m/the-following', + 'md5': '9ffcddb08c87d74fb5808a3c183a1d04', + 'info_dict': { + 'id': '343609555', + 'ext': 'wav', + }, + }, + # private link, downloadable format + { + 'url': 'https://soundcloud.com/oriuplift/uponly-238-no-talking-wav/s-AyZUd', + 'md5': '64a60b16e617d41d0bef032b7f55441e', + 'info_dict': { + 'id': '340344461', + 'ext': 'wav', + 'title': 'Uplifting Only 238 [No Talking] (incl. 
Alex Feed Guestmix) (Aug 31, 2017) [wav]', + 'description': 'md5:fa20ee0fca76a3d6df8c7e57f3715366', + 'uploader': 'Ori Uplift Music', + 'uploader_id': '12563093', + 'timestamp': 1504206263, + 'upload_date': '20170831', + 'duration': 7449.096, + 'license': 'all-rights-reserved', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'repost_count': int, + }, + }, + # no album art, use avatar pic for thumbnail + { + 'url': 'https://soundcloud.com/garyvee/sideways-prod-mad-real', + 'md5': '59c7872bc44e5d99b7211891664760c2', + 'info_dict': { + 'id': '309699954', + 'ext': 'mp3', + 'title': 'Sideways (Prod. Mad Real)', + 'description': 'md5:d41d8cd98f00b204e9800998ecf8427e', + 'uploader': 'garyvee', + 'uploader_id': '2366352', + 'timestamp': 1488152409, + 'upload_date': '20170226', + 'duration': 207.012, + 'thumbnail': r're:https?://.*\.jpg', + 'license': 'all-rights-reserved', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'repost_count': int, + }, + 'params': { + 'skip_download': True, + }, + }, + { + 'url': 'https://soundcloud.com/giovannisarani/mezzo-valzer', + 'md5': 'e22aecd2bc88e0e4e432d7dcc0a1abf7', + 'info_dict': { + 'id': '583011102', + 'ext': 'mp3', + 'title': 'Mezzo Valzer', + 'description': 'md5:4138d582f81866a530317bae316e8b61', + 'uploader': 'Micronie', + 'uploader_id': '3352531', + 'timestamp': 1551394171, + 'upload_date': '20190228', + 'duration': 180.157, + 'thumbnail': r're:https?://.*\.jpg', + 'license': 'all-rights-reserved', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'repost_count': int, + }, + }, + { + # AAC HQ format available (account with active subscription needed) + 'url': 'https://soundcloud.com/wandw/the-chainsmokers-ft-daya-dont-let-me-down-ww-remix-1', + 'only_matching': True, + }, + { + # Go+ (account with active subscription needed) + 'url': 'https://soundcloud.com/taylorswiftofficial/look-what-you-made-me-do', + 'only_matching': True, + }, + ] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + + track_id = mobj.group('track_id') + + query = {} + if track_id: + info_json_url = self._API_V2_BASE + 'tracks/' + track_id + full_title = track_id + token = mobj.group('secret_token') + if token: + query['secret_token'] = token + else: + full_title = resolve_title = '%s/%s' % mobj.group('uploader', 'title') + token = mobj.group('token') + if token: + resolve_title += '/%s' % token + info_json_url = self._resolv_url(self._BASE_URL + resolve_title) + + info = self._download_json( + info_json_url, full_title, 'Downloading info JSON', query=query, headers=self._HEADERS) + + return self._extract_info_dict(info, full_title, token) + + +class SoundcloudPlaylistBaseIE(SoundcloudBaseIE): + def _extract_set(self, playlist, token=None): + playlist_id = compat_str(playlist['id']) + tracks = playlist.get('tracks') or [] + if not all([t.get('permalink_url') for t in tracks]) and token: + tracks = self._download_json( + self._API_V2_BASE + 'tracks', playlist_id, + 'Downloading tracks', query={ + 'ids': ','.join([compat_str(t['id']) for t in tracks]), + 'playlistId': playlist_id, + 'playlistSecretToken': token, + }, headers=self._HEADERS) + entries = [] + for track in tracks: + track_id = str_or_none(track.get('id')) + url = track.get('permalink_url') + if not url: + if not track_id: + continue + url = self._API_V2_BASE + 'tracks/' + track_id + if token: + url += '?secret_token=' + token + entries.append(self.url_result( + url, SoundcloudIE.ie_key(), track_id)) + return self.playlist_result( + entries, 
playlist_id, + playlist.get('title'), + playlist.get('description')) + + +class SoundcloudSetIE(SoundcloudPlaylistBaseIE): + _VALID_URL = r'https?://(?:(?:www|m)\.)?soundcloud\.com/(?P<uploader>[\w\d-]+)/sets/(?P<slug_title>[:\w\d-]+)(?:/(?P<token>[^?/]+))?' + IE_NAME = 'soundcloud:set' + _TESTS = [{ + 'url': 'https://soundcloud.com/the-concept-band/sets/the-royal-concept-ep', + 'info_dict': { + 'id': '2284613', + 'title': 'The Royal Concept EP', + 'description': 'md5:71d07087c7a449e8941a70a29e34671e', + }, + 'playlist_mincount': 5, + }, { + 'url': 'https://soundcloud.com/the-concept-band/sets/the-royal-concept-ep/token', + 'only_matching': True, + }, { + 'url': 'https://soundcloud.com/discover/sets/weekly::flacmatic', + 'only_matching': True, + }, { + 'url': 'https://soundcloud.com/discover/sets/charts-top:all-music:de', + 'only_matching': True, + }, { + 'url': 'https://soundcloud.com/discover/sets/charts-top:hiphoprap:kr', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + + full_title = '%s/sets/%s' % mobj.group('uploader', 'slug_title') + token = mobj.group('token') + if token: + full_title += '/' + token + + info = self._download_json(self._resolv_url( + self._BASE_URL + full_title), full_title, headers=self._HEADERS) + + if 'errors' in info: + msgs = (compat_str(err['error_message']) for err in info['errors']) + raise ExtractorError('unable to download video webpage: %s' % ','.join(msgs)) + + return self._extract_set(info, token) + + +class SoundcloudPagedPlaylistBaseIE(SoundcloudBaseIE): + def _extract_playlist(self, base_url, playlist_id, playlist_title): + return { + '_type': 'playlist', + 'id': playlist_id, + 'title': playlist_title, + 'entries': self._entries(base_url, playlist_id), + } + + def _entries(self, url, playlist_id): + # Per the SoundCloud documentation, the maximum limit for a linked partitioning query is 200. + # https://developers.soundcloud.com/blog/offset-pagination-deprecated + query = { + 'limit': 200, + 'linked_partitioning': '1', + 'offset': 0, + } + + for i in itertools.count(): + for retry in self.RetryManager(): + try: + response = self._download_json( + url, playlist_id, query=query, headers=self._HEADERS, + note=f'Downloading track page {i + 1}') + break + except ExtractorError as e: + # Downloading page may result in intermittent 502 HTTP error + # See https://github.com/yt-dlp/yt-dlp/issues/872 + if not isinstance(e.cause, HTTPError) or e.cause.status != 502: + raise + retry.error = e + continue + + def resolve_entry(*candidates): + for cand in candidates: + if not isinstance(cand, dict): + continue + permalink_url = url_or_none(cand.get('permalink_url')) + if permalink_url: + return self.url_result( + permalink_url, + SoundcloudIE.ie_key() if SoundcloudIE.suitable(permalink_url) else None, + str_or_none(cand.get('id')), cand.get('title')) + + for e in response['collection'] or []: + yield resolve_entry(e, e.get('track'), e.get('playlist')) + + url = response.get('next_href') + if not url: + break + query.pop('offset', None) + + +class SoundcloudUserIE(SoundcloudPagedPlaylistBaseIE): + _VALID_URL = r'''(?x) + https?:// + (?:(?:www|m)\.)?soundcloud\.com/ + (?P<user>[^/]+) + (?:/ + (?P<rsrc>tracks|albums|sets|reposts|likes|spotlight) + )? 
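Editor's note: the paging helper above follows SoundCloud's linked partitioning: request up to 200 items, then keep following `next_href` until it disappears. The skeleton of that cursor walk; `fetch` is a hypothetical stand-in for the JSON request:

def paged_entries(fetch, url):
    # fetch(url, query) -> {'collection': [...], 'next_href': ...}  (hypothetical)
    query = {'limit': 200, 'linked_partitioning': '1', 'offset': 0}
    while url:
        page = fetch(url, query)
        yield from page.get('collection') or []
        url = page.get('next_href')  # next_href already encodes the cursor
        query.pop('offset', None)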
+ /?(?:[?#].*)?$ + ''' + IE_NAME = 'soundcloud:user' + _TESTS = [{ + 'url': 'https://soundcloud.com/soft-cell-official', + 'info_dict': { + 'id': '207965082', + 'title': 'Soft Cell (All)', + }, + 'playlist_mincount': 28, + }, { + 'url': 'https://soundcloud.com/soft-cell-official/tracks', + 'info_dict': { + 'id': '207965082', + 'title': 'Soft Cell (Tracks)', + }, + 'playlist_mincount': 27, + }, { + 'url': 'https://soundcloud.com/soft-cell-official/albums', + 'info_dict': { + 'id': '207965082', + 'title': 'Soft Cell (Albums)', + }, + 'playlist_mincount': 1, + }, { + 'url': 'https://soundcloud.com/jcv246/sets', + 'info_dict': { + 'id': '12982173', + 'title': 'Jordi / cv (Sets)', + }, + 'playlist_mincount': 2, + }, { + 'url': 'https://soundcloud.com/jcv246/reposts', + 'info_dict': { + 'id': '12982173', + 'title': 'Jordi / cv (Reposts)', + }, + 'playlist_mincount': 6, + }, { + 'url': 'https://soundcloud.com/clalberg/likes', + 'info_dict': { + 'id': '11817582', + 'title': 'clalberg (Likes)', + }, + 'playlist_mincount': 5, + }, { + 'url': 'https://soundcloud.com/grynpyret/spotlight', + 'info_dict': { + 'id': '7098329', + 'title': 'Grynpyret (Spotlight)', + }, + 'playlist_mincount': 1, + }] + + _BASE_URL_MAP = { + 'all': 'stream/users/%s', + 'tracks': 'users/%s/tracks', + 'albums': 'users/%s/albums', + 'sets': 'users/%s/playlists', + 'reposts': 'stream/users/%s/reposts', + 'likes': 'users/%s/likes', + 'spotlight': 'users/%s/spotlight', + } + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + uploader = mobj.group('user') + + user = self._download_json( + self._resolv_url(self._BASE_URL + uploader), + uploader, 'Downloading user info', headers=self._HEADERS) + + resource = mobj.group('rsrc') or 'all' + + return self._extract_playlist( + self._API_V2_BASE + self._BASE_URL_MAP[resource] % user['id'], + str_or_none(user.get('id')), + '%s (%s)' % (user['username'], resource.capitalize())) + + +class SoundcloudUserPermalinkIE(SoundcloudPagedPlaylistBaseIE): + _VALID_URL = r'https?://api\.soundcloud\.com/users/(?P<id>\d+)' + IE_NAME = 'soundcloud:user:permalink' + _TESTS = [{ + 'url': 'https://api.soundcloud.com/users/30909869', + 'info_dict': { + 'id': '30909869', + 'title': 'neilcic', + }, + 'playlist_mincount': 23, + }] + + def _real_extract(self, url): + user_id = self._match_id(url) + user = self._download_json( + self._resolv_url(url), user_id, 'Downloading user info', headers=self._HEADERS) + + return self._extract_playlist( + f'{self._API_V2_BASE}stream/users/{user["id"]}', str(user['id']), user.get('username')) + + +class SoundcloudTrackStationIE(SoundcloudPagedPlaylistBaseIE): + _VALID_URL = r'https?://(?:(?:www|m)\.)?soundcloud\.com/stations/track/[^/]+/(?P<id>[^/?#&]+)' + IE_NAME = 'soundcloud:trackstation' + _TESTS = [{ + 'url': 'https://soundcloud.com/stations/track/officialsundial/your-text', + 'info_dict': { + 'id': '286017854', + 'title': 'Track station: your text', + }, + 'playlist_mincount': 47, + }] + + def _real_extract(self, url): + track_name = self._match_id(url) + + track = self._download_json(self._resolv_url(url), track_name, headers=self._HEADERS) + track_id = self._search_regex( + r'soundcloud:track-stations:(\d+)', track['id'], 'track id') + + return self._extract_playlist( + self._API_V2_BASE + 'stations/%s/tracks' % track['id'], + track_id, 'Track station: %s' % track['title']) + + +class SoundcloudRelatedIE(SoundcloudPagedPlaylistBaseIE): + _VALID_URL = 
r'https?://(?:(?:www|m)\.)?soundcloud\.com/(?P<slug>[\w\d-]+/[\w\d-]+)/(?P<relation>albums|sets|recommended)' + IE_NAME = 'soundcloud:related' + _TESTS = [{ + 'url': 'https://soundcloud.com/wajang/sexapil-pingers-5/recommended', + 'info_dict': { + 'id': '1084577272', + 'title': 'Sexapil - Pingers 5 (Recommended)', + }, + 'playlist_mincount': 50, + }, { + 'url': 'https://soundcloud.com/wajang/sexapil-pingers-5/albums', + 'info_dict': { + 'id': '1084577272', + 'title': 'Sexapil - Pingers 5 (Albums)', + }, + 'playlist_mincount': 1, + }, { + 'url': 'https://soundcloud.com/wajang/sexapil-pingers-5/sets', + 'info_dict': { + 'id': '1084577272', + 'title': 'Sexapil - Pingers 5 (Sets)', + }, + 'playlist_mincount': 4, + }] + + _BASE_URL_MAP = { + 'albums': 'tracks/%s/albums', + 'sets': 'tracks/%s/playlists_without_albums', + 'recommended': 'tracks/%s/related', + } + + def _real_extract(self, url): + slug, relation = self._match_valid_url(url).group('slug', 'relation') + + track = self._download_json( + self._resolv_url(self._BASE_URL + slug), + slug, 'Downloading track info', headers=self._HEADERS) + + if track.get('errors'): + raise ExtractorError(f'{self.IE_NAME} said: %s' % ','.join( + str(err['error_message']) for err in track['errors']), expected=True) + + return self._extract_playlist( + self._API_V2_BASE + self._BASE_URL_MAP[relation] % track['id'], str(track['id']), + '%s (%s)' % (track.get('title') or slug, relation.capitalize())) + + +class SoundcloudPlaylistIE(SoundcloudPlaylistBaseIE): + _VALID_URL = r'https?://api(?:-v2)?\.soundcloud\.com/playlists/(?P<id>[0-9]+)(?:/?\?secret_token=(?P<token>[^&]+?))?$' + IE_NAME = 'soundcloud:playlist' + _TESTS = [{ + 'url': 'https://api.soundcloud.com/playlists/4110309', + 'info_dict': { + 'id': '4110309', + 'title': 'TILT Brass - Bowery Poetry Club, August \'03 [Non-Site SCR 02]', + 'description': 're:.*?TILT Brass - Bowery Poetry Club', + }, + 'playlist_count': 6, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + playlist_id = mobj.group('id') + + query = {} + token = mobj.group('token') + if token: + query['secret_token'] = token + + data = self._download_json( + self._API_V2_BASE + 'playlists/' + playlist_id, + playlist_id, 'Downloading playlist', query=query, headers=self._HEADERS) + + return self._extract_set(data, token) + + +class SoundcloudSearchIE(SoundcloudBaseIE, SearchInfoExtractor): + IE_NAME = 'soundcloud:search' + IE_DESC = 'Soundcloud search' + _SEARCH_KEY = 'scsearch' + _TESTS = [{ + 'url': 'scsearch15:post-avant jazzcore', + 'info_dict': { + 'id': 'post-avant jazzcore', + 'title': 'post-avant jazzcore', + }, + 'playlist_count': 15, + }] + + _MAX_RESULTS_PER_PAGE = 200 + _DEFAULT_RESULTS_PER_PAGE = 50 + + def _get_collection(self, endpoint, collection_id, **query): + limit = min( + query.get('limit', self._DEFAULT_RESULTS_PER_PAGE), + self._MAX_RESULTS_PER_PAGE) + query.update({ + 'limit': limit, + 'linked_partitioning': 1, + 'offset': 0, + }) + next_url = update_url_query(self._API_V2_BASE + endpoint, query) + + for i in itertools.count(1): + response = self._download_json( + next_url, collection_id, f'Downloading page {i}', + 'Unable to download API page', headers=self._HEADERS) + + for item in response.get('collection') or []: + if item: + yield self.url_result( + item['uri'], SoundcloudIE.ie_key(), **self._extract_info_dict(item, extract_flat=True)) + + next_url = response.get('next_href') + if not next_url: + break + + def _get_n_results(self, query, n): + return self.playlist_result(itertools.islice( 
+ self._get_collection('search/tracks', query, limit=n, q=query), + 0, None if n == float('inf') else n), query, query) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/soundgasm.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/soundgasm.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/soundgasm.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/soundgasm.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/southpark.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/southpark.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/southpark.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/southpark.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/sovietscloset.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sovietscloset.py new file mode 100644 index 0000000..493eea2 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/sovietscloset.py @@ -0,0 +1,207 @@ +from .common import InfoExtractor +from ..utils import ( + try_get, + unified_timestamp +) + + +class SovietsClosetBaseIE(InfoExtractor): + MEDIADELIVERY_REFERER = {'Referer': 'https://iframe.mediadelivery.net/'} + + def parse_nuxt_jsonp(self, nuxt_jsonp_url, video_id, name): + nuxt_jsonp = self._download_webpage(nuxt_jsonp_url, video_id, note=f'Downloading {name} __NUXT_JSONP__') + return self._search_nuxt_data(nuxt_jsonp, video_id, '__NUXT_JSONP__') + + def video_meta(self, video_id, game_name, category_name, episode_number, stream_date): + title = game_name + if category_name and category_name != 'Misc': + title += f' - {category_name}' + if episode_number: + title += f' #{episode_number}' + + timestamp = unified_timestamp(stream_date) + + return { + 'id': video_id, + 'title': title, + 'http_headers': self.MEDIADELIVERY_REFERER, + 'uploader': 'SovietWomble', + 'creator': 'SovietWomble', + 'release_timestamp': timestamp, + 'timestamp': timestamp, + 'uploader_id': 'SovietWomble', + 'uploader_url': 'https://www.twitch.tv/SovietWomble', + 'was_live': True, + 'availability': 'public', + 'series': game_name, + 'season': category_name, + 'episode_number': episode_number, + } + + +class SovietsClosetIE(SovietsClosetBaseIE): + _VALID_URL = r'https?://(?:www\.)?sovietscloset\.com/video/(?P<id>[0-9]+)/?' 
+ _TESTS = [ + { + 'url': 'https://sovietscloset.com/video/1337', + 'md5': 'bd012b04b261725510ca5383074cdd55', + 'info_dict': { + 'id': '1337', + 'ext': 'mp4', + 'title': 'The Witcher #13', + 'thumbnail': r're:^https?://.*\.b-cdn\.net/2f0cfbf4-3588-43a9-a7d6-7c9ea3755e67/thumbnail\.jpg$', + 'uploader': 'SovietWomble', + 'creator': 'SovietWomble', + 'release_timestamp': 1492091580, + 'release_date': '20170413', + 'timestamp': 1492091580, + 'upload_date': '20170413', + 'uploader_id': 'SovietWomble', + 'uploader_url': 'https://www.twitch.tv/SovietWomble', + 'duration': 7007, + 'was_live': True, + 'availability': 'public', + 'series': 'The Witcher', + 'season': 'Misc', + 'episode_number': 13, + 'episode': 'Episode 13', + }, + }, + { + 'url': 'https://sovietscloset.com/video/1105', + 'md5': '89fa928f183893cb65a0b7be846d8a90', + 'info_dict': { + 'id': '1105', + 'ext': 'mp4', + 'title': 'Arma 3 - Zeus Games #5', + 'uploader': 'SovietWomble', + 'thumbnail': r're:^https?://.*\.b-cdn\.net/c0e5e76f-3a93-40b4-bf01-12343c2eec5d/thumbnail\.jpg$', + 'creator': 'SovietWomble', + 'release_timestamp': 1461157200, + 'release_date': '20160420', + 'timestamp': 1461157200, + 'upload_date': '20160420', + 'uploader_id': 'SovietWomble', + 'uploader_url': 'https://www.twitch.tv/SovietWomble', + 'duration': 8804, + 'was_live': True, + 'availability': 'public', + 'series': 'Arma 3', + 'season': 'Zeus Games', + 'episode_number': 5, + 'episode': 'Episode 5', + }, + }, + ] + + def _extract_bunnycdn_iframe(self, video_id, bunnycdn_id): + iframe = self._download_webpage( + f'https://iframe.mediadelivery.net/embed/5105/{bunnycdn_id}', + video_id, note='Downloading BunnyCDN iframe', headers=self.MEDIADELIVERY_REFERER) + + m3u8_url = self._search_regex(r'(https?://.*?\.m3u8)', iframe, 'm3u8 url') + thumbnail_url = self._search_regex(r'(https?://.*?thumbnail\.jpg)', iframe, 'thumbnail url') + + m3u8_formats = self._extract_m3u8_formats(m3u8_url, video_id, headers=self.MEDIADELIVERY_REFERER) + + if not m3u8_formats: + duration = None + else: + duration = self._extract_m3u8_vod_duration( + m3u8_formats[0]['url'], video_id, headers=self.MEDIADELIVERY_REFERER) + + return { + 'formats': m3u8_formats, + 'thumbnail': thumbnail_url, + 'duration': duration, + } + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + static_assets_base = self._search_regex(r'(/_nuxt/static/\d+)', webpage, 'staticAssetsBase') + static_assets_base = f'https://sovietscloset.com{static_assets_base}' + + stream = self.parse_nuxt_jsonp(f'{static_assets_base}/video/{video_id}/payload.js', video_id, 'video')['stream'] + + return { + **self.video_meta( + video_id=video_id, game_name=stream['game']['name'], + category_name=try_get(stream, lambda x: x['subcategory']['name'], str), + episode_number=stream.get('number'), stream_date=stream.get('date')), + **self._extract_bunnycdn_iframe(video_id, stream['bunnyId']), + } + + +class SovietsClosetPlaylistIE(SovietsClosetBaseIE): + _VALID_URL = r'https?://(?:www\.)?sovietscloset\.com/(?!video)(?P<id>[^#?]+)' + _TESTS = [ + + { + 'url': 'https://sovietscloset.com/The-Witcher', + 'info_dict': { + 'id': 'The-Witcher', + 'title': 'The Witcher', + }, + 'playlist_mincount': 31, + }, + { + 'url': 'https://sovietscloset.com/Arma-3/Zeus-Games', + 'info_dict': { + 'id': 'Arma-3/Zeus-Games', + 'title': 'Arma 3 - Zeus Games', + }, + 'playlist_mincount': 3, + }, + { + 'url': 'https://sovietscloset.com/arma-3/zeus-games/', + 'info_dict': { + 'id': 'arma-3/zeus-games', 
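Editor's note: the BunnyCDN embed above only plays with the right Referer, so both the iframe fetch and the m3u8 requests send it; the stream and thumbnail URLs are then pulled out of the iframe body with plain regexes. The matching step against an invented iframe body:

import re

iframe = ('var playlist = "https://vz-x.b-cdn.net/guid/playlist.m3u8", '
          'preview = "https://vz-x.b-cdn.net/guid/thumbnail.jpg";')  # invented
m3u8_url = re.search(r'(https?://.*?\.m3u8)', iframe).group(1)
thumbnail = re.search(r'(https?://.*?thumbnail\.jpg)', iframe).group(1)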
+ 'title': 'Arma 3 - Zeus Games', + }, + 'playlist_mincount': 3, + }, + { + 'url': 'https://sovietscloset.com/Total-War-Warhammer', + 'info_dict': { + 'id': 'Total-War-Warhammer', + 'title': 'Total War: Warhammer - Greenskins', + }, + 'playlist_mincount': 33, + }, + ] + + def _real_extract(self, url): + playlist_id = self._match_id(url) + if playlist_id.endswith('/'): + playlist_id = playlist_id[:-1] + + webpage = self._download_webpage(url, playlist_id) + + static_assets_base = self._search_regex(r'(/_nuxt/static/\d+)', webpage, 'staticAssetsBase') + static_assets_base = f'https://sovietscloset.com{static_assets_base}' + + sovietscloset = self.parse_nuxt_jsonp(f'{static_assets_base}/payload.js', playlist_id, 'global')['games'] + + if '/' in playlist_id: + game_slug, category_slug = playlist_id.lower().split('/') + else: + game_slug = playlist_id.lower() + category_slug = 'misc' + + game = next(game for game in sovietscloset if game['slug'].lower() == game_slug) + category = next((cat for cat in game['subcategories'] if cat.get('slug', '').lower() == category_slug), + game['subcategories'][0]) + category_slug = category.get('slug', '').lower() or category_slug + playlist_title = game.get('name') or game_slug + if category_slug != 'misc': + playlist_title += f' - {category.get("name") or category_slug}' + entries = [{ + **self.url_result(f'https://sovietscloset.com/video/{stream["id"]}', ie=SovietsClosetIE.ie_key()), + **self.video_meta( + video_id=stream['id'], game_name=game['name'], category_name=category.get('name'), + episode_number=i + 1, stream_date=stream.get('date')), + } for i, stream in enumerate(category['streams'])] + + return self.playlist_result(entries, playlist_id, playlist_title) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/spankbang.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/spankbang.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/spankbang.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/spankbang.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/spankwire.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/spankwire.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/spankwire.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/spankwire.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/spiegel.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/spiegel.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/spiegel.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/spiegel.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/spike.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/spike.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/spike.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/spike.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sport5.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sport5.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sport5.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sport5.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sportbox.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sportbox.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sportbox.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sportbox.py diff --git 
a/lib/python3.11/site-packages/yt_dlp/extractor/sportdeutschland.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sportdeutschland.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sportdeutschland.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sportdeutschland.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/spotify.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/spotify.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/spotify.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/spotify.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/spreaker.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/spreaker.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/spreaker.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/spreaker.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/springboardplatform.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/springboardplatform.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/springboardplatform.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/springboardplatform.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sprout.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sprout.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sprout.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sprout.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/srgssr.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/srgssr.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/srgssr.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/srgssr.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/srmediathek.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/srmediathek.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/srmediathek.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/srmediathek.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/stacommu.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/stacommu.py new file mode 100644 index 0000000..6f58f06 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/stacommu.py @@ -0,0 +1,148 @@ +import time + +from .wrestleuniverse import WrestleUniverseBaseIE +from ..utils import ( + int_or_none, + traverse_obj, + url_or_none, +) + + +class StacommuBaseIE(WrestleUniverseBaseIE): + _NETRC_MACHINE = 'stacommu' + _API_HOST = 'api.stacommu.jp' + _LOGIN_QUERY = {'key': 'AIzaSyCR9czxhH2eWuijEhTNWBZ5MCcOYEUTAhg'} + _LOGIN_HEADERS = { + 'Accept': '*/*', + 'Content-Type': 'application/json', + 'X-Client-Version': 'Chrome/JsCore/9.9.4/FirebaseCore-web', + 'Referer': 'https://www.stacommu.jp/', + 'Origin': 'https://www.stacommu.jp', + } + + @WrestleUniverseBaseIE._TOKEN.getter + def _TOKEN(self): + if self._REAL_TOKEN and self._TOKEN_EXPIRY <= int(time.time()): + self._refresh_token() + + return self._REAL_TOKEN + + def _get_formats(self, data, path, video_id=None): + if not traverse_obj(data, path) and not data.get('canWatch') and not self._TOKEN: + self.raise_login_required(method='password') + return super()._get_formats(data, path, video_id) + + def _extract_hls_key(self, data, path, decrypt): + encryption_data = traverse_obj(data, path) + if 
traverse_obj(encryption_data, ('encryptType', {int})) == 0: + return None + return traverse_obj(encryption_data, {'key': ('key', {decrypt}), 'iv': ('iv', {decrypt})}) + + +class StacommuVODIE(StacommuBaseIE): + _VALID_URL = r'https?://www\.stacommu\.jp/videos/episodes/(?P<id>[\da-zA-Z]+)' + _TESTS = [{ + # not encrypted + 'url': 'https://www.stacommu.jp/videos/episodes/aXcVKjHyAENEjard61soZZ', + 'info_dict': { + 'id': 'aXcVKjHyAENEjard61soZZ', + 'ext': 'mp4', + 'title': 'スタコミュAWARDの裏側、ほぼ全部見せます！〜晴れ舞台の直前ドキドキ編〜', + 'description': 'md5:6400275c57ae75c06da36b06f96beb1c', + 'timestamp': 1679652000, + 'upload_date': '20230324', + 'thumbnail': 'https://image.stacommu.jp/6eLobQan8PFtBoU4RL4uGg/6eLobQan8PFtBoU4RL4uGg', + 'cast': 'count:11', + 'duration': 250, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + # encrypted; requires a premium account + 'url': 'https://www.stacommu.jp/videos/episodes/3hybMByUvzMEqndSeu5LpD', + 'info_dict': { + 'id': '3hybMByUvzMEqndSeu5LpD', + 'ext': 'mp4', + 'title': 'スタプラフェス2023〜裏側ほぼ全部見せます〜＃10', + 'description': 'md5:85494488ccf1dfa1934accdeadd7b340', + 'timestamp': 1682506800, + 'upload_date': '20230426', + 'thumbnail': 'https://image.stacommu.jp/eMdXtEefR4kEyJJMpAFi7x/eMdXtEefR4kEyJJMpAFi7x', + 'cast': 'count:55', + 'duration': 312, + 'hls_aes': { + 'key': '6bbaf241b8e1fd9f59ecf546a70e4ae7', + 'iv': '1fc9002a23166c3bb1d240b953d09de9', + }, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }] + + _API_PATH = 'videoEpisodes' + + def _real_extract(self, url): + video_id = self._match_id(url) + video_info = self._download_metadata( + url, video_id, 'ja', ('dehydratedState', 'queries', 0, 'state', 'data')) + hls_info, decrypt = self._call_encrypted_api( + video_id, ':watch', 'stream information', data={'method': 1}) + + return { + 'id': video_id, + 'formats': self._get_formats(hls_info, ('protocolHls', 'url', {url_or_none}), video_id), + 'hls_aes': self._extract_hls_key(hls_info, 'protocolHls', decrypt), + **traverse_obj(video_info, { + 'title': ('displayName', {str}), + 'description': ('description', {str}), + 'timestamp': ('watchStartTime', {int_or_none}), + 'thumbnail': ('keyVisualUrl', {url_or_none}), + 'cast': ('casts', ..., 'displayName', {str}), + 'duration': ('duration', {int}), + }), + } + + +class StacommuLiveIE(StacommuBaseIE): + _VALID_URL = r'https?://www\.stacommu\.jp/live/(?P<id>[\da-zA-Z]+)' + _TESTS = [{ + 'url': 'https://www.stacommu.jp/live/d2FJ3zLnndegZJCAEzGM3m', + 'info_dict': { + 'id': 'd2FJ3zLnndegZJCAEzGM3m', + 'ext': 'mp4', + 'title': '仲村悠菜 2023/05/04', + 'timestamp': 1683195647, + 'upload_date': '20230504', + 'thumbnail': 'https://image.stacommu.jp/pHGF57SPEHE2ke83FS92FN/pHGF57SPEHE2ke83FS92FN', + 'duration': 5322, + 'hls_aes': { + 'key': 'efbb3ec0b8246f61adf1764c5a51213a', + 'iv': '80621d19a1f19167b64cedb415b05d1c', + }, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }] + + _API_PATH = 'events' + + def _real_extract(self, url): + video_id = self._match_id(url) + video_info = self._call_api(video_id, msg='video information', query={'al': 'ja'}, auth=False) + hls_info, decrypt = self._call_encrypted_api( + video_id, ':watchArchive', 'stream information', data={'method': 1}) + + return { + 'id': video_id, + 'formats': self._get_formats(hls_info, ('hls', 'urls', ..., {url_or_none}), video_id), + 'hls_aes': self._extract_hls_key(hls_info, 'hls', decrypt), + **traverse_obj(video_info, { + 'title': ('displayName', {str}), + 'timestamp': ('startTime', {int_or_none}), +
'thumbnail': ('keyVisualUrl', {url_or_none}), + 'duration': ('duration', {int_or_none}), + }), + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/stageplus.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/stageplus.py new file mode 100644 index 0000000..4bed4d6 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/stageplus.py @@ -0,0 +1,515 @@ +import json +import uuid + +from .common import InfoExtractor +from ..utils import ( + float_or_none, + traverse_obj, + try_call, + unified_timestamp, + url_or_none, +) + + +class StagePlusVODConcertIE(InfoExtractor): + _NETRC_MACHINE = 'stageplus' + _VALID_URL = r'https?://(?:www\.)?stage-plus\.com/video/(?P<id>vod_concert_\w+)' + _TESTS = [{ + 'url': 'https://www.stage-plus.com/video/vod_concert_APNM8GRFDPHMASJKBSPJACG', + 'playlist_count': 6, + 'info_dict': { + 'id': 'vod_concert_APNM8GRFDPHMASJKBSPJACG', + 'title': 'Yuja Wang plays Rachmaninoff\'s Piano Concerto No. 2 – from Odeonsplatz', + 'description': 'md5:50f78ec180518c9bdb876bac550996fc', + 'artist': ['Yuja Wang', 'Lorenzo Viotti'], + 'upload_date': '20230331', + 'timestamp': 1680249600, + 'release_date': '20210709', + 'release_timestamp': 1625788800, + 'thumbnails': 'count:3', + }, + 'playlist': [{ + 'info_dict': { + 'id': 'performance_work_A1IN4PJFE9MM2RJ3CLBMUSJBBSOJAD9O', + 'ext': 'mp4', + 'title': 'Piano Concerto No. 2 in C Minor, Op. 18', + 'description': 'md5:50f78ec180518c9bdb876bac550996fc', + 'upload_date': '20230331', + 'timestamp': 1680249600, + 'release_date': '20210709', + 'release_timestamp': 1625788800, + 'duration': 2207, + 'chapters': 'count:5', + 'artist': ['Yuja Wang'], + 'composer': ['Sergei Rachmaninoff'], + 'album': 'Yuja Wang plays Rachmaninoff\'s Piano Concerto No. 2 – from Odeonsplatz', + 'album_artist': ['Yuja Wang', 'Lorenzo Viotti'], + 'track': 'Piano Concerto No. 2 in C Minor, Op. 18', + 'track_number': 1, + 'genre': 'Instrumental Concerto', + }, + }], + 'params': {'skip_download': 'm3u8'}, + }] + + # TODO: Prune this after livestream and/or album extractors are added + _GRAPHQL_QUERY = '''query videoDetailPage($videoId: ID!, $sliderItemsFirst: Int = 24) { + node(id: $videoId) { + __typename + ...LiveConcertFields + ... on LiveConcert { + artists { + edges { + role { + ...RoleFields + } + node { + id + name + sortName + } + } + } + isAtmos + maxResolution + groups { + id + name + typeDisplayName + } + shortDescription + performanceWorks { + ...livePerformanceWorkFields + } + totalDuration + sliders { + ...contentContainerFields + } + vodConcert { + __typename + id + } + } + ...VideoFields + ... on Video { + artists { + edges { + role { + ...RoleFields + } + node { + id + name + sortName + } + } + } + isAtmos + maxResolution + isLossless + description + productionDate + takedownDate + sliders { + ...contentContainerFields + } + } + ...VodConcertFields + ... 
on VodConcert { + artists { + edges { + role { + ...RoleFields + } + node { + id + name + sortName + } + } + } + isAtmos + maxResolution + groups { + id + name + typeDisplayName + } + performanceWorks { + ...PerformanceWorkFields + } + shortDescription + productionDate + takedownDate + sliders { + ...contentContainerFields + } + } + } +} + +fragment LiveConcertFields on LiveConcert { + endTime + id + pictures { + ...PictureFields + } + reruns { + ...liveConcertRerunFields + } + publicationLevel + startTime + streamStartTime + subtitle + title + typeDisplayName + stream { + ...liveStreamFields + } + trailerStream { + ...streamFields + } + geoAccessCountries + geoAccessMode +} + +fragment PictureFields on Picture { + id + url + type +} + +fragment liveConcertRerunFields on LiveConcertRerun { + streamStartTime + endTime + startTime + stream { + ...rerunStreamFields + } +} + +fragment rerunStreamFields on RerunStream { + publicationLevel + streamType + url +} + +fragment liveStreamFields on LiveStream { + publicationLevel + streamType + url +} + +fragment streamFields on Stream { + publicationLevel + streamType + url +} + +fragment RoleFields on Role { + __typename + id + type + displayName +} + +fragment livePerformanceWorkFields on LivePerformanceWork { + __typename + id + artists { + ...artistWithRoleFields + } + groups { + edges { + node { + id + name + typeDisplayName + } + } + } + work { + ...workFields + } +} + +fragment artistWithRoleFields on ArtistWithRoleConnection { + edges { + role { + ...RoleFields + } + node { + id + name + sortName + } + } +} + +fragment workFields on Work { + id + title + movements { + id + title + } + composers { + id + name + } + genre { + id + title + } +} + +fragment contentContainerFields on CuratedContentContainer { + __typename + ...SliderFields + ...BannerFields +} + +fragment SliderFields on Slider { + id + headline + items(first: $sliderItemsFirst) { + edges { + node { + id + __typename + ...AlbumFields + ...ArtistFields + ...EpochFields + ...GenreFields + ...GroupFields + ...LiveConcertFields + ...PartnerFields + ...PerformanceWorkFields + ...VideoFields + ...VodConcertFields + } + } + } +} + +fragment AlbumFields on Album { + artistAndGroupDisplayInfo + id + pictures { + ...PictureFields + } + title +} + +fragment ArtistFields on Artist { + id + name + roles { + ...RoleFields + } + pictures { + ...PictureFields + } +} + +fragment EpochFields on Epoch { + id + endYear + pictures { + ...PictureFields + } + startYear + title +} + +fragment GenreFields on Genre { + id + pictures { + ...PictureFields + } + title +} + +fragment GroupFields on Group { + id + name + typeDisplayName + pictures { + ...PictureFields + } +} + +fragment PartnerFields on Partner { + id + name + typeDisplayName + subtypeDisplayName + pictures { + ...PictureFields + } +} + +fragment PerformanceWorkFields on PerformanceWork { + __typename + id + artists { + ...artistWithRoleFields + } + groups { + edges { + node { + id + name + typeDisplayName + } + } + } + work { + ...workFields + } + stream { + ...streamFields + } + vodConcert { + __typename + id + } + duration + cuePoints { + mark + title + } +} + +fragment VideoFields on Video { + id + archiveReleaseDate + title + subtitle + pictures { + ...PictureFields + } + stream { + ...streamFields + } + trailerStream { + ...streamFields + } + duration + typeDisplayName + duration + geoAccessCountries + geoAccessMode + publicationLevel + takedownDate +} + +fragment VodConcertFields on VodConcert { + id + archiveReleaseDate + pictures { + 
...PictureFields + } + subtitle + title + typeDisplayName + totalDuration + geoAccessCountries + geoAccessMode + trailerStream { + ...streamFields + } + publicationLevel + takedownDate +} + +fragment BannerFields on Banner { + description + link + pictures { + ...PictureFields + } + title +}''' + + _TOKEN = None + + def _perform_login(self, username, password): + auth = self._download_json('https://audience.api.stageplus.io/oauth/token', None, headers={ + 'Content-Type': 'application/json', + 'Origin': 'https://www.stage-plus.com', + }, data=json.dumps({ + 'grant_type': 'password', + 'username': username, + 'password': password, + 'device_info': 'Chrome (Windows)', + 'client_device_id': str(uuid.uuid4()), + }, separators=(',', ':')).encode(), note='Logging in') + + if auth.get('access_token'): + self._TOKEN = auth['access_token'] + + def _real_initialize(self): + if self._TOKEN: + return + + self._TOKEN = try_call( + lambda: self._get_cookies('https://www.stage-plus.com/')['dgplus_access_token'].value) + if not self._TOKEN: + self.raise_login_required() + + def _real_extract(self, url): + concert_id = self._match_id(url) + + data = self._download_json('https://audience.api.stageplus.io/graphql', concert_id, headers={ + 'authorization': f'Bearer {self._TOKEN}', + 'content-type': 'application/json', + 'Origin': 'https://www.stage-plus.com', + }, data=json.dumps({ + 'query': self._GRAPHQL_QUERY, + 'variables': {'videoId': concert_id}, + 'operationName': 'videoDetailPage' + }, separators=(',', ':')).encode())['data']['node'] + + metadata = traverse_obj(data, { + 'title': 'title', + 'description': ('shortDescription', {str}), + 'artist': ('artists', 'edges', ..., 'node', 'name'), + 'timestamp': ('archiveReleaseDate', {unified_timestamp}), + 'release_timestamp': ('productionDate', {unified_timestamp}), + }) + + thumbnails = traverse_obj(data, ('pictures', lambda _, v: url_or_none(v['url']), { + 'id': 'name', + 'url': 'url', + })) or None + + entries = [] + for idx, video in enumerate(traverse_obj(data, ( + 'performanceWorks', lambda _, v: v['id'] and url_or_none(v['stream']['url']))), 1): + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + video['stream']['url'], video['id'], 'mp4', m3u8_id='hls', query={'token': self._TOKEN}) + entries.append({ + 'id': video['id'], + 'formats': formats, + 'subtitles': subtitles, + 'album': metadata.get('title'), + 'album_artist': metadata.get('artist'), + 'track_number': idx, + **metadata, + **traverse_obj(video, { + 'title': ('work', 'title'), + 'track': ('work', 'title'), + 'duration': ('duration', {float_or_none}), + 'chapters': ( + 'cuePoints', lambda _, v: float_or_none(v['mark']) is not None, { + 'title': 'title', + 'start_time': ('mark', {float_or_none}), + }), + 'artist': ('artists', 'edges', ..., 'node', 'name'), + 'composer': ('work', 'composers', ..., 'name'), + 'genre': ('work', 'genre', 'title'), + }), + }) + + return self.playlist_result(entries, concert_id, thumbnails=thumbnails, **metadata) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/stanfordoc.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/stanfordoc.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/stanfordoc.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/stanfordoc.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/startrek.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/startrek.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/startrek.py rename 
to python/lib/python3.10/site-packages/yt_dlp/extractor/startrek.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/startv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/startv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/startv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/startv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/steam.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/steam.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/steam.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/steam.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/stitcher.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/stitcher.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/stitcher.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/stitcher.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/storyfire.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/storyfire.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/storyfire.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/storyfire.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/streamable.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/streamable.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/streamable.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/streamable.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/streamcloud.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/streamcloud.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/streamcloud.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/streamcloud.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/streamcz.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/streamcz.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/streamcz.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/streamcz.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/streamff.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/streamff.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/streamff.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/streamff.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/streetvoice.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/streetvoice.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/streetvoice.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/streetvoice.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/stretchinternet.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/stretchinternet.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/stretchinternet.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/stretchinternet.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/stripchat.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/stripchat.py new file mode 100644 index 0000000..b9523c8 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/stripchat.py @@ -0,0 +1,66 @@ +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + UserNotLive, + 
lowercase_escape, + traverse_obj +) + + +class StripchatIE(InfoExtractor): + _VALID_URL = r'https?://stripchat\.com/(?P<id>[^/?#]+)' + _TESTS = [{ + 'url': 'https://stripchat.com/Joselin_Flower', + 'info_dict': { + 'id': 'Joselin_Flower', + 'ext': 'mp4', + 'title': 're:^Joselin_Flower [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$', + 'description': str, + 'is_live': True, + 'age_limit': 18, + }, + 'skip': 'Room is offline', + }, { + 'url': 'https://stripchat.com/Rakhijaan@xh', + 'only_matching': True + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id, headers=self.geo_verification_headers()) + + data = self._parse_json( + self._search_regex( + r'<script\b[^>]*>\s*window\.__PRELOADED_STATE__\s*=(?P<value>.*?)<\/script>', + webpage, 'data', default='{}', group='value'), + video_id, transform_source=lowercase_escape, fatal=False) + if not data: + raise ExtractorError('Unable to find configuration for stream.') + + if traverse_obj(data, ('viewCam', 'show'), expected_type=dict): + raise ExtractorError('Model is in private show', expected=True) + elif not traverse_obj(data, ('viewCam', 'model', 'isLive'), expected_type=bool): + raise UserNotLive(video_id=video_id) + + model_id = traverse_obj(data, ('viewCam', 'model', 'id'), expected_type=int) + + formats = [] + for host in traverse_obj(data, ('config', 'data', ( + (('features', 'featuresV2'), 'hlsFallback', 'fallbackDomains', ...), 'hlsStreamHost'))): + formats = self._extract_m3u8_formats( + f'https://edge-hls.{host}/hls/{model_id}/master/{model_id}_auto.m3u8', + video_id, ext='mp4', m3u8_id='hls', fatal=False, live=True) + if formats: + break + if not formats: + self.raise_no_formats('No active streams found', expected=True) + + return { + 'id': video_id, + 'title': video_id, + 'description': self._og_search_description(webpage), + 'is_live': True, + 'formats': formats, + # Stripchat declares the RTA meta-tag, but in an non-standard format so _rta_search() can't be used + 'age_limit': 18, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/stv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/stv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/stv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/stv.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/substack.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/substack.py new file mode 100644 index 0000000..6ee3f75 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/substack.py @@ -0,0 +1,108 @@ +import re +import urllib.parse + +from .common import InfoExtractor +from ..utils import js_to_json, str_or_none, traverse_obj + + +class SubstackIE(InfoExtractor): + _VALID_URL = r'https?://(?P<username>[\w-]+)\.substack\.com/p/(?P<id>[\w-]+)' + _TESTS = [{ + 'url': 'https://haleynahman.substack.com/p/i-made-a-vlog?s=r', + 'md5': 'f27e4fc6252001d48d479f45e65cdfd5', + 'info_dict': { + 'id': '47660949', + 'ext': 'mp4', + 'title': 'I MADE A VLOG', + 'description': 'md5:9248af9a759321e1027226f988f54d96', + 'thumbnail': 'md5:bec758a34d8ee9142d43bcebdf33af18', + 'uploader': 'Maybe Baby', + 'uploader_id': '33628', + } + }, { + 'url': 'https://haleynahman.substack.com/p/-dear-danny-i-found-my-boyfriends?s=r', + 'md5': '0a63eacec877a1171a62cfa69710fcea', + 'info_dict': { + 'id': '51045592', + 'ext': 'mpga', + 'title': "🎧 Dear Danny: I found my boyfriend's secret Twitter account", + 'description': 
'md5:a57f2439319e56e0af92dd0c95d75797', + 'thumbnail': 'md5:daa40b6b79249417c14ff8103db29639', + 'uploader': 'Maybe Baby', + 'uploader_id': '33628', + } + }, { + 'url': 'https://andrewzimmern.substack.com/p/mussels-with-black-bean-sauce-recipe', + 'md5': 'fd3c07077b02444ff0130715b5f632bb', + 'info_dict': { + 'id': '47368578', + 'ext': 'mp4', + 'title': 'Mussels with Black Bean Sauce: Recipe of the Week #7', + 'description': 'md5:b96234a2906c7d854d5229818d889515', + 'thumbnail': 'md5:e30bfaa9da40e82aa62354263a9dd232', + 'uploader': "Andrew Zimmern's Spilled Milk ", + 'uploader_id': '577659', + } + }] + + @classmethod + def _extract_embed_urls(cls, url, webpage): + if not re.search(r'<script[^>]+src=["\']https://substackcdn.com/[^"\']+\.js', webpage): + return + + mobj = re.search(r'{[^}]*\\?["\']subdomain\\?["\']\s*:\s*\\?["\'](?P<subdomain>[^\\"\']+)', webpage) + if mobj: + parsed = urllib.parse.urlparse(url) + yield parsed._replace(netloc=f'{mobj.group("subdomain")}.substack.com').geturl() + raise cls.StopExtraction() + + def _extract_video_formats(self, video_id, url): + formats, subtitles = [], {} + for video_format in ('hls', 'mp4'): + video_url = urllib.parse.urljoin(url, f'/api/v1/video/upload/{video_id}/src?type={video_format}') + + if video_format == 'hls': + fmts, subs = self._extract_m3u8_formats_and_subtitles(video_url, video_id, 'mp4', fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + else: + formats.append({ + 'url': video_url, + 'ext': video_format, + }) + + return formats, subtitles + + def _real_extract(self, url): + display_id, username = self._match_valid_url(url).group('id', 'username') + webpage = self._download_webpage(url, display_id) + + webpage_info = self._parse_json(self._search_json( + r'window\._preloads\s*=\s*JSON\.parse\(', webpage, 'json string', + display_id, transform_source=js_to_json, contains_pattern=r'"{(?s:.+)}"'), display_id) + + canonical_url = url + domain = traverse_obj(webpage_info, ('domainInfo', 'customDomain', {str})) + if domain: + canonical_url = urllib.parse.urlparse(url)._replace(netloc=domain).geturl() + + post_type = webpage_info['post']['type'] + formats, subtitles = [], {} + if post_type == 'podcast': + formats, subtitles = [{'url': webpage_info['post']['podcast_url']}], {} + elif post_type == 'video': + formats, subtitles = self._extract_video_formats(webpage_info['post']['videoUpload']['id'], canonical_url) + else: + self.raise_no_formats(f'Page type "{post_type}" is not supported') + + return { + 'id': str(webpage_info['post']['id']), + 'formats': formats, + 'subtitles': subtitles, + 'title': traverse_obj(webpage_info, ('post', 'title')), + 'description': traverse_obj(webpage_info, ('post', 'description')), + 'thumbnail': traverse_obj(webpage_info, ('post', 'cover_image')), + 'uploader': traverse_obj(webpage_info, ('pub', 'name')), + 'uploader_id': str_or_none(traverse_obj(webpage_info, ('post', 'publication_id'))), + 'webpage_url': canonical_url, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sunporno.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sunporno.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sunporno.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sunporno.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/sverigesradio.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/sverigesradio.py new file mode 100644 index 0000000..01a07b3 --- /dev/null +++ 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/sverigesradio.py @@ -0,0 +1,149 @@ +from .common import InfoExtractor +from ..utils import ( + determine_ext, + extract_attributes, + get_element_by_id, + get_element_html_by_class, + int_or_none, + str_or_none, + traverse_obj, + url_or_none, +) + + +class SverigesRadioBaseIE(InfoExtractor): + _BASE_URL = 'https://sverigesradio.se/sida/playerajax/' + _QUALITIES = ['low', 'medium', 'high'] + _EXT_TO_CODEC_MAP = { + 'mp3': 'mp3', + 'm4a': 'aac', + } + _CODING_FORMAT_TO_ABR_MAP = { + 5: 128, + 11: 192, + 12: 32, + 13: 96, + } + + def _real_extract(self, url): + audio_id, display_id = self._match_valid_url(url).group('id', 'slug') + if not audio_id: + webpage = self._download_webpage(url, display_id) + audio_id = ( + traverse_obj( + get_element_html_by_class('audio-button', webpage), + ({extract_attributes}, ('data-audio-id', 'data-publication-id')), get_all=False) + or self._parse_json(get_element_by_id('gtm-metadata', webpage), display_id)['pageId']) + + query = { + 'id': audio_id, + 'type': self._AUDIO_TYPE, + } + + item = self._download_json( + self._BASE_URL + 'audiometadata', audio_id, + 'Downloading audio JSON metadata', query=query)['items'][0] + + query['format'] = 'iis' + urls = [] + formats = [] + for quality in self._QUALITIES: + query['quality'] = quality + audio_url_data = self._download_json( + self._BASE_URL + 'getaudiourl', audio_id, + 'Downloading %s format JSON metadata' % quality, + fatal=False, query=query) or {} + audio_url = audio_url_data.get('audioUrl') + if not audio_url or audio_url in urls: + continue + urls.append(audio_url) + ext = determine_ext(audio_url) + coding_format = audio_url_data.get('codingFormat') + abr = int_or_none(self._search_regex( + r'_a(\d+)\.m4a', audio_url, 'audio bitrate', + default=None)) or self._CODING_FORMAT_TO_ABR_MAP.get(coding_format) + formats.append({ + 'abr': abr, + 'acodec': self._EXT_TO_CODEC_MAP.get(ext), + 'ext': ext, + 'format_id': str_or_none(coding_format), + 'vcodec': 'none', + 'url': audio_url, + }) + + return { + 'id': audio_id, + 'formats': formats, + **traverse_obj(item, { + 'title': 'subtitle', + 'series': 'title', + 'duration': ('duration', {int_or_none}), + 'thumbnail': ('displayimageurl', {url_or_none}), + 'description': 'description', + }), + } + + +class SverigesRadioPublicationIE(SverigesRadioBaseIE): + IE_NAME = 'sverigesradio:publication' + _VALID_URL = r'https?://(?:www\.)?sverigesradio\.se/(?:sida/)?(?:artikel|gruppsida)(?:\.aspx\?.*?\bartikel=(?P<id>[0-9]+)|/(?P<slug>[\w-]+))' + _TESTS = [{ + 'url': 'https://sverigesradio.se/sida/artikel.aspx?programid=83&artikel=7038546', + 'md5': '6a4917e1923fccb080e5a206a5afa542', + 'info_dict': { + 'id': '7038546', + 'ext': 'm4a', + 'duration': 132, + 'series': 'Nyheter (Ekot)', + 'title': 'Esa Teittinen: Sanningen har inte kommit fram', + 'description': 'md5:daf7ce66a8f0a53d5465a5984d3839df', + 'thumbnail': r're:^https?://.*\.jpg', + }, + }, { + 'url': 'https://sverigesradio.se/artikel/tysk-fotbollsfeber-bayern-munchens-10-ariga-segersvit-kan-brytas', + 'md5': 'f8a914ad50f491bb74eed403ab4bfef6', + 'info_dict': { + 'id': '8360345', + 'ext': 'm4a', + 'title': 'Tysk fotbollsfeber när Bayern Münchens 10-åriga segersvit kan brytas', + 'series': 'Radiosporten', + 'description': 'md5:5254610e20ce527ecb3a6102a06dcc5f', + 'duration': 72, + 'thumbnail': r're:^https?://.*\.jpg', + }, + }, { + 'url': 'https://sverigesradio.se/sida/gruppsida.aspx?programid=3304&grupp=6247&artikel=7146887', + 'only_matching': True, + }] +
_AUDIO_TYPE = 'publication' + + +class SverigesRadioEpisodeIE(SverigesRadioBaseIE): + IE_NAME = 'sverigesradio:episode' + _VALID_URL = r'https?://(?:www\.)?sverigesradio\.se/(?:sida/)?avsnitt/(?:(?P<id>\d+)|(?P<slug>[\w-]+))(?:$|[#?])' + _TESTS = [{ + 'url': 'https://sverigesradio.se/avsnitt/1140922?programid=1300', + 'md5': '20dc4d8db24228f846be390b0c59a07c', + 'info_dict': { + 'id': '1140922', + 'ext': 'mp3', + 'duration': 3307, + 'series': 'Konflikt', + 'title': 'Metoo och valen', + 'description': 'md5:fcb5c1f667f00badcc702b196f10a27e', + 'thumbnail': r're:^https?://.*\.jpg', + }, + }, { + 'url': 'https://sverigesradio.se/avsnitt/p4-live-med-first-aid-kit-scandinavium-mars-2023', + 'md5': 'ce17fb82520a8033dbb846993d5589fe', + 'info_dict': { + 'id': '2160416', + 'ext': 'm4a', + 'title': 'P4 Live med First Aid Kit', + 'description': 'md5:6d5b78eed3d2b65f6de04daa45e9285d', + 'thumbnail': r're:^https?://.*\.jpg', + 'series': 'P4 Live', + 'duration': 5640, + }, + }] + _AUDIO_TYPE = 'episode' diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/svt.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/svt.py new file mode 100644 index 0000000..18da875 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/svt.py @@ -0,0 +1,452 @@ +import json +import re + +from .common import InfoExtractor +from ..compat import compat_str +from ..utils import ( + determine_ext, + dict_get, + int_or_none, + str_or_none, + strip_or_none, + traverse_obj, + try_get, + unified_timestamp, +) + + +class SVTBaseIE(InfoExtractor): + _GEO_COUNTRIES = ['SE'] + + def _extract_video(self, video_info, video_id): + is_live = dict_get(video_info, ('live', 'simulcast'), default=False) + m3u8_protocol = 'm3u8' if is_live else 'm3u8_native' + formats = [] + subtitles = {} + for vr in video_info['videoReferences']: + player_type = vr.get('playerType') or vr.get('format') + vurl = vr['url'] + ext = determine_ext(vurl) + if ext == 'm3u8': + fmts, subs = self._extract_m3u8_formats_and_subtitles( + vurl, video_id, + ext='mp4', entry_protocol=m3u8_protocol, + m3u8_id=player_type, fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + elif ext == 'f4m': + formats.extend(self._extract_f4m_formats( + vurl + '?hdcore=3.3.0', video_id, + f4m_id=player_type, fatal=False)) + elif ext == 'mpd': + fmts, subs = self._extract_mpd_formats_and_subtitles( + vurl, video_id, mpd_id=player_type, fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + else: + formats.append({ + 'format_id': player_type, + 'url': vurl, + }) + rights = try_get(video_info, lambda x: x['rights'], dict) or {} + if not formats and rights.get('geoBlockedSweden'): + self.raise_geo_restricted( + 'This video is only available in Sweden', + countries=self._GEO_COUNTRIES, metadata_available=True) + + subtitle_references = dict_get(video_info, ('subtitles', 'subtitleReferences')) + if isinstance(subtitle_references, list): + for sr in subtitle_references: + subtitle_url = sr.get('url') + subtitle_lang = sr.get('language', 'sv') + if subtitle_url: + sub = { + 'url': subtitle_url, + } + if determine_ext(subtitle_url) == 'm3u8': + # XXX: no way of testing, is it ever hit? 
+ sub['ext'] = 'vtt' + subtitles.setdefault(subtitle_lang, []).append(sub) + + title = video_info.get('title') + + series = video_info.get('programTitle') + season_number = int_or_none(video_info.get('season')) + episode = video_info.get('episodeTitle') + episode_number = int_or_none(video_info.get('episodeNumber')) + + timestamp = unified_timestamp(rights.get('validFrom')) + duration = int_or_none(dict_get(video_info, ('materialLength', 'contentDuration'))) + age_limit = None + adult = dict_get( + video_info, ('inappropriateForChildren', 'blockedForChildren'), + skip_false_values=False) + if adult is not None: + age_limit = 18 if adult else 0 + + return { + 'id': video_id, + 'title': title, + 'formats': formats, + 'subtitles': subtitles, + 'duration': duration, + 'timestamp': timestamp, + 'age_limit': age_limit, + 'series': series, + 'season_number': season_number, + 'episode': episode, + 'episode_number': episode_number, + 'is_live': is_live, + } + + +class SVTIE(SVTBaseIE): + _VALID_URL = r'https?://(?:www\.)?svt\.se/wd\?(?:.*?&)?widgetId=(?P<widget_id>\d+)&.*?\barticleId=(?P<id>\d+)' + _EMBED_REGEX = [r'(?:<iframe src|href)="(?P<url>%s[^"]*)"' % _VALID_URL] + _TEST = { + 'url': 'http://www.svt.se/wd?widgetId=23991&sectionId=541&articleId=2900353&type=embed&contextSectionId=123&autostart=false', + 'md5': '33e9a5d8f646523ce0868ecfb0eed77d', + 'info_dict': { + 'id': '2900353', + 'ext': 'mp4', + 'title': 'Stjärnorna skojar till det - under SVT-intervjun', + 'duration': 27, + 'age_limit': 0, + }, + } + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + widget_id = mobj.group('widget_id') + article_id = mobj.group('id') + + info = self._download_json( + 'http://www.svt.se/wd?widgetId=%s&articleId=%s&format=json&type=embed&output=json' % (widget_id, article_id), + article_id) + + info_dict = self._extract_video(info['video'], article_id) + info_dict['title'] = info['context']['title'] + return info_dict + + +class SVTPlayBaseIE(SVTBaseIE): + _SVTPLAY_RE = r'root\s*\[\s*(["\'])_*svtplay\1\s*\]\s*=\s*(?P<json>{.+?})\s*;\s*\n' + + +class SVTPlayIE(SVTPlayBaseIE): + IE_DESC = 'SVT Play and Öppet arkiv' + _VALID_URL = r'''(?x) + (?: + (?: + svt:| + https?://(?:www\.)?svt\.se/barnkanalen/barnplay/[^/]+/ + ) + (?P<svt_id>[^/?#&]+)| + https?://(?:www\.)?(?:svtplay|oppetarkiv)\.se/(?:video|klipp|kanaler)/(?P<id>[^/?#&]+) + (?:.*?(?:modalId|id)=(?P<modal_id>[\da-zA-Z-]+))? + ) + ''' + _TESTS = [{ + 'url': 'https://www.svtplay.se/video/30479064', + 'md5': '2382036fd6f8c994856c323fe51c426e', + 'info_dict': { + 'id': '8zVbDPA', + 'ext': 'mp4', + 'title': 'Designdrömmar i Stenungsund', + 'timestamp': 1615770000, + 'upload_date': '20210315', + 'duration': 3519, + 'thumbnail': r're:^https?://(?:.*[\.-]jpg|www.svtstatic.se/image/.*)$', + 'age_limit': 0, + 'subtitles': { + 'sv': [{ + 'ext': 'vtt', + }] + }, + }, + 'params': { + 'skip_download': 'm3u8', + }, + 'skip': 'Episode is no longer available', + }, { + 'url': 'https://www.svtplay.se/video/emBxBQj', + 'md5': '2382036fd6f8c994856c323fe51c426e', + 'info_dict': { + 'id': 'eyBd9aj', + 'ext': 'mp4', + 'title': '1. Farlig kryssning', + 'timestamp': 1491019200, + 'upload_date': '20170401', + 'duration': 2566, + 'thumbnail': r're:^https?://(?:.*[\.-]jpg|www.svtstatic.se/image/.*)$', + 'age_limit': 0, + 'episode': '1.
Farlig kryssning', + 'series': 'Rederiet', + 'subtitles': { + 'sv': 'count:3' + }, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + 'url': 'https://www.svtplay.se/video/jz2rYz7/anders-hansen-moter/james-fallon?info=visa', + 'info_dict': { + 'id': 'jvXAGVb', + 'ext': 'mp4', + 'title': 'James Fallon', + 'timestamp': 1673917200, + 'upload_date': '20230117', + 'duration': 1081, + 'thumbnail': r're:^https?://(?:.*[\.-]jpg|www.svtstatic.se/image/.*)$', + 'age_limit': 0, + 'episode': 'James Fallon', + 'series': 'Anders Hansen möter...', + }, + 'params': { + 'skip_download': 'dash', + }, + }, { + 'url': 'https://www.svtplay.se/video/30479064/husdrommar/husdrommar-sasong-8-designdrommar-i-stenungsund?modalId=8zVbDPA', + 'only_matching': True, + }, { + 'url': 'https://www.svtplay.se/video/30684086/rapport/rapport-24-apr-18-00-7?id=e72gVpa', + 'only_matching': True, + }, { + # geo restricted to Sweden + 'url': 'http://www.oppetarkiv.se/video/5219710/trollflojten', + 'only_matching': True, + }, { + 'url': 'http://www.svtplay.se/klipp/9023742/stopptid-om-bjorn-borg', + 'only_matching': True, + }, { + 'url': 'https://www.svtplay.se/kanaler/svt1', + 'only_matching': True, + }, { + 'url': 'svt:1376446-003A', + 'only_matching': True, + }, { + 'url': 'svt:14278044', + 'only_matching': True, + }, { + 'url': 'https://www.svt.se/barnkanalen/barnplay/kar/eWv5MLX/', + 'only_matching': True, + }, { + 'url': 'svt:eWv5MLX', + 'only_matching': True, + }] + + def _extract_by_video_id(self, video_id, webpage=None): + data = self._download_json( + 'https://api.svt.se/videoplayer-api/video/%s' % video_id, + video_id, headers=self.geo_verification_headers()) + info_dict = self._extract_video(data, video_id) + if not info_dict.get('title'): + title = dict_get(info_dict, ('episode', 'series')) + if not title and webpage: + title = re.sub( + r'\s*\|\s*.+?$', '', self._og_search_title(webpage)) + if not title: + title = video_id + info_dict['title'] = title + return info_dict + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + video_id = mobj.group('id') + svt_id = mobj.group('svt_id') or mobj.group('modal_id') + + if svt_id: + return self._extract_by_video_id(svt_id) + + webpage = self._download_webpage(url, video_id) + + data = self._parse_json( + self._search_regex( + self._SVTPLAY_RE, webpage, 'embedded data', default='{}', + group='json'), + video_id, fatal=False) + + thumbnail = self._og_search_thumbnail(webpage) + + if data: + video_info = try_get( + data, lambda x: x['context']['dispatcher']['stores']['VideoTitlePageStore']['data']['video'], + dict) + if video_info: + info_dict = self._extract_video(video_info, video_id) + info_dict.update({ + 'title': data['context']['dispatcher']['stores']['MetaStore']['title'], + 'thumbnail': thumbnail, + }) + return info_dict + + svt_id = try_get( + data, lambda x: x['statistics']['dataLake']['content']['id'], + compat_str) + + if not svt_id: + nextjs_data = self._search_nextjs_data(webpage, video_id, fatal=False) + svt_id = traverse_obj(nextjs_data, ( + 'props', 'urqlState', ..., 'data', {json.loads}, 'detailsPageByPath', + 'video', 'svtId', {str}), get_all=False) + + if not svt_id: + svt_id = self._search_regex( + (r'<video[^>]+data-video-id=["\']([\da-zA-Z-]+)', + r'<[^>]+\bdata-rt=["\']top-area-play-button["\'][^>]+\bhref=["\'][^"\']*video/[\w-]+/[^"\']*\b(?:modalId|id)=([\w-]+)'), + webpage, 'video id') + + info_dict = self._extract_by_video_id(svt_id, webpage) + info_dict['thumbnail'] = thumbnail + + return info_dict + + +class 
SVTSeriesIE(SVTPlayBaseIE): + _VALID_URL = r'https?://(?:www\.)?svtplay\.se/(?P<id>[^/?&#]+)(?:.+?\btab=(?P<season_slug>[^&#]+))?' + _TESTS = [{ + 'url': 'https://www.svtplay.se/rederiet', + 'info_dict': { + 'id': '14445680', + 'title': 'Rederiet', + 'description': 'md5:d9fdfff17f5d8f73468176ecd2836039', + }, + 'playlist_mincount': 318, + }, { + 'url': 'https://www.svtplay.se/rederiet?tab=season-2-14445680', + 'info_dict': { + 'id': 'season-2-14445680', + 'title': 'Rederiet - Säsong 2', + 'description': 'md5:d9fdfff17f5d8f73468176ecd2836039', + }, + 'playlist_mincount': 12, + }] + + @classmethod + def suitable(cls, url): + return False if SVTIE.suitable(url) or SVTPlayIE.suitable(url) else super(SVTSeriesIE, cls).suitable(url) + + def _real_extract(self, url): + series_slug, season_id = self._match_valid_url(url).groups() + + series = self._download_json( + 'https://api.svt.se/contento/graphql', series_slug, + 'Downloading series page', query={ + 'query': '''{ + listablesBySlug(slugs: ["%s"]) { + associatedContent(include: [productionPeriod, season]) { + items { + item { + ... on Episode { + videoSvtId + } + } + } + id + name + } + id + longDescription + name + shortDescription + } +}''' % series_slug, + })['data']['listablesBySlug'][0] + + season_name = None + + entries = [] + for season in series['associatedContent']: + if not isinstance(season, dict): + continue + if season_id: + if season.get('id') != season_id: + continue + season_name = season.get('name') + items = season.get('items') + if not isinstance(items, list): + continue + for item in items: + video = item.get('item') or {} + content_id = video.get('videoSvtId') + if not content_id or not isinstance(content_id, compat_str): + continue + entries.append(self.url_result( + 'svt:' + content_id, SVTPlayIE.ie_key(), content_id)) + + title = series.get('name') + season_name = season_name or season_id + + if title and season_name: + title = '%s - %s' % (title, season_name) + elif season_id: + title = season_id + + return self.playlist_result( + entries, season_id or series.get('id'), title, + dict_get(series, ('longDescription', 'shortDescription'))) + + +class SVTPageIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?svt\.se/(?P<path>(?:[^/]+/)*(?P<id>[^/?&#]+))' + _TESTS = [{ + 'url': 'https://www.svt.se/sport/ishockey/bakom-masken-lehners-kamp-mot-mental-ohalsa', + 'info_dict': { + 'id': '25298267', + 'title': 'Bakom masken – Lehners kamp mot mental ohälsa', + }, + 'playlist_count': 4, + }, { + 'url': 'https://www.svt.se/nyheter/utrikes/svenska-andrea-ar-en-mil-fran-branderna-i-kalifornien', + 'info_dict': { + 'id': '24243746', + 'title': 'Svenska Andrea redo att fly sitt hem i Kalifornien', + }, + 'playlist_count': 2, + }, { + # only programTitle + 'url': 'http://www.svt.se/sport/ishockey/jagr-tacklar-giroux-under-intervjun', + 'info_dict': { + 'id': '8439V2K', + 'ext': 'mp4', + 'title': 'Stjärnorna skojar till det - under SVT-intervjun', + 'duration': 27, + 'age_limit': 0, + }, + }, { + 'url': 'https://www.svt.se/nyheter/lokalt/vast/svt-testar-tar-nagon-upp-skrapet-1', + 'only_matching': True, + }, { + 'url': 'https://www.svt.se/vader/manadskronikor/maj2018', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if SVTIE.suitable(url) or SVTPlayIE.suitable(url) else super(SVTPageIE, cls).suitable(url) + + def _real_extract(self, url): + path, display_id = self._match_valid_url(url).groups() + + article = self._download_json( + 'https://api.svt.se/nss-api/page/' + path, display_id, + 
query={'q': 'articles'})['articles']['content'][0] + + entries = [] + + def _process_content(content): + if content.get('_type') in ('VIDEOCLIP', 'VIDEOEPISODE'): + video_id = compat_str(content['image']['svtId']) + entries.append(self.url_result( + 'svt:' + video_id, SVTPlayIE.ie_key(), video_id)) + + for media in article.get('media', []): + _process_content(media) + + for obj in article.get('structuredBody', []): + _process_content(obj.get('content') or {}) + + return self.playlist_result( + entries, str_or_none(article.get('id')), + strip_or_none(article.get('title'))) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/swearnet.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/swearnet.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/swearnet.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/swearnet.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/swrmediathek.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/swrmediathek.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/swrmediathek.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/swrmediathek.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/syfy.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/syfy.py new file mode 100644 index 0000000..afcdbf7 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/syfy.py @@ -0,0 +1,57 @@ +from .adobepass import AdobePassIE +from ..utils import ( + update_url_query, + smuggle_url, +) + + +class SyfyIE(AdobePassIE): + _VALID_URL = r'https?://(?:www\.)?syfy\.com/(?:[^/]+/)?videos/(?P<id>[^/?#]+)' + _TESTS = [{ + 'url': 'http://www.syfy.com/theinternetruinedmylife/videos/the-internet-ruined-my-life-season-1-trailer', + 'info_dict': { + 'id': '2968097', + 'ext': 'mp4', + 'title': 'The Internet Ruined My Life: Season 1 Trailer', + 'description': 'One tweet, one post, one click, can destroy everything.', + 'uploader': 'NBCU-MPAT', + 'upload_date': '20170113', + 'timestamp': 1484345640, + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + 'add_ie': ['ThePlatform'], + 'skip': 'Redirects to main page', + }] + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + syfy_mpx = list(self._parse_json(self._search_regex( + r'jQuery\.extend\(Drupal\.settings\s*,\s*({.+?})\);', webpage, 'drupal settings'), + display_id)['syfy']['syfy_mpx'].values())[0] + video_id = syfy_mpx['mpxGUID'] + title = syfy_mpx['episodeTitle'] + query = { + 'mbr': 'true', + 'manifest': 'm3u', + } + if syfy_mpx.get('entitlement') == 'auth': + resource = self._get_mvpd_resource( + 'syfy', title, video_id, + syfy_mpx.get('mpxRating', 'TV-14')) + query['auth'] = self._extract_mvpd_auth( + url, video_id, 'syfy', resource) + + return { + '_type': 'url_transparent', + 'ie_key': 'ThePlatform', + 'url': smuggle_url(update_url_query( + self._proto_relative_url(syfy_mpx['releaseURL']), query), + {'force_smil_url': True}), + 'title': title, + 'id': video_id, + 'display_id': display_id, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/syvdk.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/syvdk.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/syvdk.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/syvdk.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/sztvhu.py 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/sztvhu.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/sztvhu.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/sztvhu.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tagesschau.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tagesschau.py new file mode 100644 index 0000000..e23b490 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tagesschau.py @@ -0,0 +1,163 @@ +import re + +from .common import InfoExtractor +from ..utils import ( + UnsupportedError, + extract_attributes, + int_or_none, + js_to_json, + parse_iso8601, + try_get, +) + + +class TagesschauIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?tagesschau\.de/(?P<path>[^/]+/(?:[^/]+/)*?(?P<id>[^/#?]+?(?:-?[0-9]+)?))(?:~_?[^/#?]+?)?\.html' + + _TESTS = [{ + 'url': 'http://www.tagesschau.de/multimedia/video/video-102143.html', + 'md5': 'ccb9359bf8c4795836e43759f3408a93', + 'info_dict': { + 'id': 'video-102143-1', + 'ext': 'mp4', + 'title': 'Regierungsumbildung in Athen: Neue Minister in Griechenland vereidigt', + 'duration': 138, + }, + }, { + 'url': 'http://www.tagesschau.de/multimedia/sendung/ts-5727.html', + 'md5': '5c15e8f3da049e48829ec9786d835536', + 'info_dict': { + 'id': 'ts-5727-1', + 'ext': 'mp4', + 'title': 'Ganze Sendung', + 'duration': 932, + }, + }, { + # exclusive audio + 'url': 'http://www.tagesschau.de/multimedia/audio/audio-29417.html', + 'md5': '4bff8f23504df56a0d86ed312d654182', + 'info_dict': { + 'id': 'audio-29417-1', + 'ext': 'mp3', + 'title': 'EU-Gipfel: Im Verbrennerstreit hat Deutschland maximalen Schaden angerichtet', + }, + }, { + 'url': 'http://www.tagesschau.de/inland/bnd-303.html', + 'md5': 'f049fa1698d7564e9ca4c3325108f034', + 'info_dict': { + 'id': 'bnd-303-1', + 'ext': 'mp3', + 'title': 'Das Siegel des Bundesnachrichtendienstes | dpa', + }, + }, { + 'url': 'http://www.tagesschau.de/inland/afd-parteitag-135.html', + 'info_dict': { + 'id': 'afd-parteitag-135', + 'title': 'AfD', + }, + 'playlist_mincount': 15, + }, { + 'url': 'https://www.tagesschau.de/multimedia/audio/audio-29417~player.html', + 'info_dict': { + 'id': 'audio-29417-1', + 'ext': 'mp3', + 'title': 'EU-Gipfel: Im Verbrennerstreit hat Deutschland maximalen Schaden angerichtet', + }, + }, { + 'url': 'https://www.tagesschau.de/multimedia/audio/podcast-11km-327.html', + 'info_dict': { + 'id': 'podcast-11km-327', + 'ext': 'mp3', + 'title': 'Gewalt in der Kita – Wenn Erzieher:innen schweigen', + 'upload_date': '20230322', + 'timestamp': 1679482808, + 'thumbnail': 'https://www.tagesschau.de/multimedia/audio/podcast-11km-329~_v-original.jpg', + 'description': 'md5:dad059931fe4b3693e3656e93a249848', + }, + }, { + 'url': 'http://www.tagesschau.de/multimedia/sendung/tsg-3771.html', + 'only_matching': True, + }, { + 'url': 'http://www.tagesschau.de/multimedia/sendung/tt-3827.html', + 'only_matching': True, + }, { + 'url': 'http://www.tagesschau.de/multimedia/sendung/nm-3475.html', + 'only_matching': True, + }, { + 'url': 'http://www.tagesschau.de/multimedia/sendung/weltspiegel-3167.html', + 'only_matching': True, + }, { + 'url': 'http://www.tagesschau.de/multimedia/tsvorzwanzig-959.html', + 'only_matching': True, + }, { + 'url': 'http://www.tagesschau.de/multimedia/sendung/bab/bab-3299~_bab-sendung-209.html', + 'only_matching': True, + }, { + 'url': 'http://www.tagesschau.de/multimedia/video/video-102303~_bab-sendung-211.html', + 'only_matching': True, + }, { + 'url': 
'http://www.tagesschau.de/100sekunden/index.html', + 'only_matching': True, + }, { + # playlist article with collapsing sections + 'url': 'http://www.tagesschau.de/wirtschaft/faq-freihandelszone-eu-usa-101.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + video_id = mobj.group('id') or mobj.group('path') + display_id = video_id.lstrip('-') + + webpage = self._download_webpage(url, display_id) + + title = self._html_search_regex( + r'<span[^>]*class="headline"[^>]*>(.+?)</span>', + webpage, 'title', default=None) or self._og_search_title(webpage, fatal=False) + + entries = [] + videos = re.findall(r'<div[^>]+>', webpage) + num = 0 + for video in videos: + video = extract_attributes(video).get('data-config') + if not video: + continue + video = self._parse_json(video, video_id, transform_source=js_to_json, fatal=False) + video_formats = try_get(video, lambda x: x['mc']['_mediaArray'][0]['_mediaStreamArray']) + if not video_formats: + continue + num += 1 + for video_format in video_formats: + media_url = video_format.get('_stream') or '' + formats = [] + if media_url.endswith('master.m3u8'): + formats = self._extract_m3u8_formats(media_url, video_id, 'mp4', m3u8_id='hls') + elif media_url.endswith('.mp3'): + formats = [{ + 'url': media_url, + 'vcodec': 'none', + }] + if not formats: + continue + entries.append({ + 'id': '%s-%d' % (display_id, num), + 'title': try_get(video, lambda x: x['mc']['_title']), + 'duration': int_or_none(try_get(video, lambda x: x['mc']['_duration'])), + 'formats': formats + }) + + if not entries: + raise UnsupportedError(url) + + if len(entries) > 1: + return self.playlist_result(entries, display_id, title) + + return { + 'id': display_id, + 'title': title, + 'thumbnail': self._og_search_thumbnail(webpage), + 'formats': entries[0]['formats'], + 'timestamp': parse_iso8601(self._html_search_meta('date', webpage)), + 'description': self._og_search_description(webpage), + 'duration': entries[0]['duration'], + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tass.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tass.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tass.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tass.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tbs.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tbs.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tbs.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tbs.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tbsjp.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tbsjp.py new file mode 100644 index 0000000..77ddeca --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tbsjp.py @@ -0,0 +1,152 @@ +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + clean_html, + get_element_text_and_html_by_tag, + int_or_none, + str_or_none, + traverse_obj, + try_call, + unified_timestamp, + urljoin, +) + + +class TBSJPEpisodeIE(InfoExtractor): + _VALID_URL = r'https?://cu\.tbs\.co\.jp/episode/(?P<id>[\d_]+)' + _GEO_BYPASS = False + _TESTS = [{ + 'url': 'https://cu.tbs.co.jp/episode/23613_2044134_1000049010', + 'skip': 'streams geo-restricted, Japan only. 
Also, will likely expire eventually', + 'info_dict': { + 'title': 'VIVANT 第三話 誤送金完結へ!絶体絶命の反撃開始', + 'id': '23613_2044134_1000049010', + 'ext': 'mp4', + 'upload_date': '20230728', + 'duration': 3517, + 'release_timestamp': 1691118230, + 'episode': '第三話 誤送金完結へ!絶体絶命の反撃開始', + 'release_date': '20230804', + 'categories': 'count:11', + 'episode_number': 3, + 'timestamp': 1690522538, + 'description': 'md5:2b796341af1ef772034133174ba4a895', + 'series': 'VIVANT', + }, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + meta = self._search_json(r'window\.app\s*=', webpage, 'episode info', video_id, fatal=False) + episode = traverse_obj(meta, ('falcorCache', 'catalog', 'episode', video_id, 'value')) + + tf_path = self._search_regex( + r'<script[^>]+src=["\'](/assets/tf\.[^"\']+\.js)["\']', webpage, 'stream API config') + tf_js = self._download_webpage(urljoin(url, tf_path), video_id, note='Downloading stream API config') + video_url = self._search_regex(r'videoPlaybackUrl:\s*[\'"]([^\'"]+)[\'"]', tf_js, 'stream API url') + api_key = self._search_regex(r'api_key:\s*[\'"]([^\'"]+)[\'"]', tf_js, 'stream API key') + + try: + source_meta = self._download_json(f'{video_url}ref:{video_id}', video_id, + headers={'X-Streaks-Api-Key': api_key}, + note='Downloading stream metadata') + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + self.raise_geo_restricted(countries=['JP']) + raise + + formats, subtitles = [], {} + for src in traverse_obj(source_meta, ('sources', ..., 'src')): + fmts, subs = self._extract_m3u8_formats_and_subtitles(src, video_id, fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + + return { + 'title': try_call(lambda: clean_html(get_element_text_and_html_by_tag('h3', webpage)[0])), + 'id': video_id, + **traverse_obj(episode, { + 'categories': ('keywords', {list}), + 'id': ('content_id', {str}), + 'description': ('description', 0, 'value'), + 'timestamp': ('created_at', {unified_timestamp}), + 'release_timestamp': ('pub_date', {unified_timestamp}), + 'duration': ('tv_episode_info', 'duration', {int_or_none}), + 'episode_number': ('tv_episode_info', 'episode_number', {int_or_none}), + 'episode': ('title', lambda _, v: not v.get('is_phonetic'), 'value'), + 'series': ('custom_data', 'program_name'), + }, get_all=False), + 'formats': formats, + 'subtitles': subtitles, + } + + +class TBSJPProgramIE(InfoExtractor): + _VALID_URL = r'https?://cu\.tbs\.co\.jp/program/(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://cu.tbs.co.jp/program/23601', + 'playlist_mincount': 4, + 'info_dict': { + 'id': '23601', + 'categories': ['エンタメ', 'ミライカプセル', '会社', '働く', 'バラエティ', '動画'], + 'description': '幼少期の夢は大人になって、どう成長したのだろうか？\nそしてその夢は今後、どのように広がっていくのか？\nいま話題の会社で働く人の「夢の成長」を描く', + 'series': 'ミライカプセル -I have a dream-', + 'title': 'ミライカプセル -I have a dream-' + } + }] + + def _real_extract(self, url): + programme_id = self._match_id(url) + webpage = self._download_webpage(url, programme_id) + meta = self._search_json(r'window\.app\s*=', webpage, 'programme info', programme_id) + + programme = traverse_obj(meta, ('falcorCache', 'catalog', 'program', programme_id, 'false', 'value')) + + return { + '_type': 'playlist', + 'entries': [self.url_result(f'https://cu.tbs.co.jp/episode/{video_id}', TBSJPEpisodeIE, video_id) + for video_id in traverse_obj(programme, ('custom_data',
'seriesList', 'episodeCode', ...))], + 'id': programme_id, + **traverse_obj(programme, { + 'categories': ('keywords', ...), + 'id': ('tv_episode_info', 'show_content_id', {str_or_none}), + 'description': ('custom_data', 'program_description'), + 'series': ('custom_data', 'program_name'), + 'title': ('custom_data', 'program_name'), + }), + } + + +class TBSJPPlaylistIE(InfoExtractor): + _VALID_URL = r'https?://cu\.tbs\.co\.jp/playlist/(?P<id>[\da-f]+)' + _TESTS = [{ + 'url': 'https://cu.tbs.co.jp/playlist/184f9970e7ba48e4915f1b252c55015e', + 'playlist_mincount': 4, + 'info_dict': { + 'title': 'まもなく配信終了', + 'id': '184f9970e7ba48e4915f1b252c55015e', + } + }] + + def _real_extract(self, url): + playlist_id = self._match_id(url) + page = self._download_webpage(url, playlist_id) + meta = self._search_json(r'window\.app\s*=', page, 'playlist info', playlist_id) + playlist = traverse_obj(meta, ('falcorCache', 'playList', playlist_id)) + + def entries(): + for entry in traverse_obj(playlist, ('catalogs', 'value', lambda _, v: v['content_id'])): + # TODO: it's likely possible to get all metadata from the playlist page json instead + content_id = entry['content_id'] + content_type = entry.get('content_type') + if content_type == 'tv_show': + yield self.url_result( + f'https://cu.tbs.co.jp/program/{content_id}', TBSJPProgramIE, content_id) + elif content_type == 'tv_episode': + yield self.url_result( + f'https://cu.tbs.co.jp/episode/{content_id}', TBSJPEpisodeIE, content_id) + else: + self.report_warning(f'Skipping "{content_id}" with unsupported content_type "{content_type}"') + + return self.playlist_result(entries(), playlist_id, traverse_obj(playlist, ('display_name', 'value'))) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tdslifeway.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tdslifeway.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tdslifeway.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tdslifeway.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/teachable.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/teachable.py new file mode 100644 index 0000000..01906bd --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/teachable.py @@ -0,0 +1,295 @@ +import re + +from .common import InfoExtractor +from .wistia import WistiaIE +from ..utils import ( + clean_html, + ExtractorError, + int_or_none, + get_element_by_class, + strip_or_none, + urlencode_postdata, + urljoin, +) + + +class TeachableBaseIE(InfoExtractor): + _NETRC_MACHINE = 'teachable' + _URL_PREFIX = 'teachable:' + + _SITES = { + # Only notable ones here + 'v1.upskillcourses.com': 'upskill', + 'gns3.teachable.com': 'gns3', + 'academyhacker.com': 'academyhacker', + 'stackskills.com': 'stackskills', + 'market.saleshacker.com': 'saleshacker', + 'learnability.org': 'learnability', + 'edurila.com': 'edurila', + 'courses.workitdaily.com': 'workitdaily', + } + + _VALID_URL_SUB_TUPLE = (_URL_PREFIX, '|'.join(re.escape(site) for site in _SITES.keys())) + + def _real_initialize(self): + self._logged_in = False + + def _login(self, site): + if self._logged_in: + return + + username, password = self._get_login_info(netrc_machine=self._SITES.get(site, site)) + if username is None: + return + + login_page, urlh = self._download_webpage_handle( + 'https://%s/sign_in' % site, None, + 'Downloading %s login page' % site) + + def is_logged(webpage): + return any(re.search(p, webpage) for p in ( + r'class=["\']user-signout',
r'<a[^>]+\bhref=["\']/sign_out', + r'Log\s+[Oo]ut\s*<')) + + if is_logged(login_page): + self._logged_in = True + return + + login_url = urlh.url + + login_form = self._hidden_inputs(login_page) + + login_form.update({ + 'user[email]': username, + 'user[password]': password, + }) + + post_url = self._search_regex( + r'<form[^>]+action=(["\'])(?P<url>(?:(?!\1).)+)\1', login_page, + 'post url', default=login_url, group='url') + + if not post_url.startswith('http'): + post_url = urljoin(login_url, post_url) + + response = self._download_webpage( + post_url, None, 'Logging in to %s' % site, + data=urlencode_postdata(login_form), + headers={ + 'Content-Type': 'application/x-www-form-urlencoded', + 'Referer': login_url, + }) + + if '>I accept the new Privacy Policy<' in response: + raise ExtractorError( + 'Unable to login: %s asks you to accept new Privacy Policy. ' + 'Go to https://%s/ and accept.' % (site, site), expected=True) + + # Successful login + if is_logged(response): + self._logged_in = True + return + + message = get_element_by_class('alert', response) + if message is not None: + raise ExtractorError( + 'Unable to login: %s' % clean_html(message), expected=True) + + raise ExtractorError('Unable to log in') + + +class TeachableIE(TeachableBaseIE): + _VALID_URL = r'''(?x) + (?: + %shttps?://(?P<site_t>[^/]+)| + https?://(?:www\.)?(?P<site>%s) + ) + /courses/[^/]+/lectures/(?P<id>\d+) + ''' % TeachableBaseIE._VALID_URL_SUB_TUPLE + + _TESTS = [{ + 'url': 'https://gns3.teachable.com/courses/gns3-certified-associate/lectures/6842364', + 'info_dict': { + 'id': 'untlgzk1v7', + 'ext': 'bin', + 'title': 'Overview', + 'description': 'md5:071463ff08b86c208811130ea1c2464c', + 'duration': 736.4, + 'timestamp': 1542315762, + 'upload_date': '20181115', + 'chapter': 'Welcome', + 'chapter_number': 1, + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'http://v1.upskillcourses.com/courses/119763/lectures/1747100', + 'only_matching': True, + }, { + 'url': 'https://gns3.teachable.com/courses/423415/lectures/6885939', + 'only_matching': True, + }, { + 'url': 'teachable:https://v1.upskillcourses.com/courses/essential-web-developer-course/lectures/1747100', + 'only_matching': True, + }] + + @staticmethod + def _is_teachable(webpage): + return 'teachableTracker.linker:autoLink' in webpage and re.search( + r'<link[^>]+href=["\']https?://(?:process\.fs|assets)\.teachablecdn\.com', + webpage) + + @classmethod + def _extract_embed_urls(cls, url, webpage): + if cls._is_teachable(webpage): + if re.match(r'https?://[^/]+/(?:courses|p)', url): + yield f'{cls._URL_PREFIX}{url}' + raise cls.StopExtraction() + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + site = mobj.group('site') or mobj.group('site_t') + video_id = mobj.group('id') + + self._login(site) + + prefixed = url.startswith(self._URL_PREFIX) + if prefixed: + url = url[len(self._URL_PREFIX):] + + webpage = self._download_webpage(url, video_id) + + wistia_urls = WistiaIE._extract_embed_urls(url, webpage) + if not wistia_urls: + if any(re.search(p, webpage) for p in ( + r'class=["\']lecture-contents-locked', + r'>\s*Lecture contents locked', + r'id=["\']lecture-locked', + # https://academy.tailoredtutors.co.uk/courses/108779/lectures/1955313 + r'class=["\'](?:inner-)?lesson-locked', + r'>LESSON LOCKED<')): + self.raise_login_required('Lecture contents locked') + raise ExtractorError('Unable to find video URL') + + title = self._og_search_title(webpage, default=None) + + chapter = None + chapter_number = None + section_item 
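# (The _login flow above is a plain hidden-input form POST; a sketch of the same
# steps with requests standing in for yt-dlp's downloader, field values illustrative.)
import requests

session = requests.Session()
login_html = session.get('https://gns3.teachable.com/sign_in').text
form = {'authenticity_token': 'value-scraped-from-a-hidden-input'}  # cf. _hidden_inputs()
form.update({'user[email]': 'me@example.com', 'user[password]': 'secret'})
resp = session.post('https://gns3.teachable.com/sign_in', data=form,
                    headers={'Referer': 'https://gns3.teachable.com/sign_in'})
logged_in = 'sign_out' in resp.text  # cf. is_logged() above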
= self._search_regex( + r'(?s)(?P<li><li[^>]+\bdata-lecture-id=["\']%s[^>]+>.+?</li>)' % video_id, + webpage, 'section item', default=None, group='li') + if section_item: + chapter_number = int_or_none(self._search_regex( + r'data-ss-position=["\'](\d+)', section_item, 'section id', + default=None)) + if chapter_number is not None: + sections = [] + for s in re.findall( + r'(?s)<div[^>]+\bclass=["\']section-title[^>]+>(.+?)</div>', webpage): + section = strip_or_none(clean_html(s)) + if not section: + sections = [] + break + sections.append(section) + if chapter_number <= len(sections): + chapter = sections[chapter_number - 1] + + entries = [{ + '_type': 'url_transparent', + 'url': wistia_url, + 'ie_key': WistiaIE.ie_key(), + 'title': title, + 'chapter': chapter, + 'chapter_number': chapter_number, + } for wistia_url in wistia_urls] + + return self.playlist_result(entries, video_id, title) + + +class TeachableCourseIE(TeachableBaseIE): + _VALID_URL = r'''(?x) + (?: + %shttps?://(?P<site_t>[^/]+)| + https?://(?:www\.)?(?P<site>%s) + ) + /(?:courses|p)/(?:enrolled/)?(?P<id>[^/?#&]+) + ''' % TeachableBaseIE._VALID_URL_SUB_TUPLE + _TESTS = [{ + 'url': 'http://v1.upskillcourses.com/courses/essential-web-developer-course/', + 'info_dict': { + 'id': 'essential-web-developer-course', + 'title': 'The Essential Web Developer Course (Free)', + }, + 'playlist_count': 192, + }, { + 'url': 'http://v1.upskillcourses.com/courses/119763/', + 'only_matching': True, + }, { + 'url': 'http://v1.upskillcourses.com/courses/enrolled/119763', + 'only_matching': True, + }, { + 'url': 'https://gns3.teachable.com/courses/enrolled/423415', + 'only_matching': True, + }, { + 'url': 'teachable:https://learn.vrdev.school/p/gear-vr-developer-mini', + 'only_matching': True, + }, { + 'url': 'teachable:https://filmsimplified.com/p/davinci-resolve-15-crash-course', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if TeachableIE.suitable(url) else super( + TeachableCourseIE, cls).suitable(url) + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + site = mobj.group('site') or mobj.group('site_t') + course_id = mobj.group('id') + + self._login(site) + + prefixed = url.startswith(self._URL_PREFIX) + if prefixed: + prefix = self._URL_PREFIX + url = url[len(prefix):] + + webpage = self._download_webpage(url, course_id) + + url_base = 'https://%s/' % site + + entries = [] + + for mobj in re.finditer( + r'(?s)(?P<li><li[^>]+class=(["\'])(?:(?!\2).)*?section-item[^>]+>.+?</li>)', + webpage): + li = mobj.group('li') + if 'fa-youtube-play' not in li and not re.search(r'\d{1,2}:\d{2}', li): + continue + lecture_url = self._search_regex( + r'<a[^>]+href=(["\'])(?P<url>(?:(?!\1).)+)\1', li, + 'lecture url', default=None, group='url') + if not lecture_url: + continue + lecture_id = self._search_regex( + r'/lectures/(\d+)', lecture_url, 'lecture id', default=None) + title = self._html_search_regex( + r'<span[^>]+class=["\']lecture-name[^>]+>([^<]+)', li, + 'title', default=None) + entry_url = urljoin(url_base, lecture_url) + if prefixed: + entry_url = self._URL_PREFIX + entry_url + entries.append( + self.url_result( + entry_url, + ie=TeachableIE.ie_key(), video_id=lecture_id, + video_title=clean_html(title))) + + course_title = self._html_search_regex( + (r'(?s)<img[^>]+class=["\']course-image[^>]+>\s*<h\d>(.+?)</h', + r'(?s)<h\d[^>]+class=["\']course-title[^>]+>(.+?)</h'), + webpage, 'course title', fatal=False) + + return self.playlist_result(entries, course_id, course_title) diff 
--git a/lib/python3.11/site-packages/yt_dlp/extractor/teachertube.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/teachertube.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/teachertube.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/teachertube.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/teachingchannel.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/teachingchannel.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/teachingchannel.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/teachingchannel.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/teamcoco.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/teamcoco.py new file mode 100644 index 0000000..d32f812 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/teamcoco.py @@ -0,0 +1,280 @@ +import json +import re + +from .turner import TurnerBaseIE +from ..utils import ( + ExtractorError, + clean_html, + determine_ext, + make_archive_id, + merge_dicts, + mimetype2ext, + parse_duration, + parse_qs, + traverse_obj, + unified_timestamp, + urljoin, + url_or_none, +) + + +class TeamcocoBaseIE(TurnerBaseIE): + _QUALITIES = { + 'low': (480, 272), + 'sd': (640, 360), + 'hd': (1280, 720), + 'uhd': (1920, 1080), + } + + def _get_formats_and_subtitles(self, info, video_id): + formats, subtitles = [], {} + + for src in traverse_obj(info, ('src', ..., {dict})): + format_id = src.get('label') + src_url = src.get('src') + if re.match(r'https?:/[^/]', src_url): + src_url = src_url.replace(':/', '://', 1) + ext = determine_ext(src_url, mimetype2ext(src.get('type'))) + + if not format_id or not src_url: + continue + elif format_id == 'hls' or ext == 'm3u8': + fmts, subs = self._extract_m3u8_formats_and_subtitles( + src_url, video_id, 'mp4', m3u8_id=format_id, fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + + elif format_id in self._QUALITIES: + if src_url.startswith('/mp4:protected/'): + # TODO: Correct extraction for these files + continue + formats.append({ + 'url': src_url, + 'ext': ext, + 'format_id': format_id, + 'width': self._QUALITIES[format_id][0], + 'height': self._QUALITIES[format_id][1], + }) + + return formats, subtitles + + +class TeamcocoIE(TeamcocoBaseIE): + _VALID_URL = r'https?://(?:www\.)?teamcoco\.com/(?P<id>([^/]+/)*[^/?#]+)' + _TESTS = [ + { + 'url': 'http://teamcoco.com/video/mary-kay-remote', + 'info_dict': { + 'id': '80187', + 'display_id': 'video_mary-kay-remote', + 'ext': 'mp4', + 'title': 'Conan Becomes A Mary Kay Beauty Consultant', + 'description': 'md5:9fb64e45b5aef6b2af1b67612b36c162', + 'thumbnail': 'https://teamcoco.com/image/thumb?id=80187', + 'upload_date': '20140402', + 'timestamp': 1396440000, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + 'url': 'http://teamcoco.com/video/louis-ck-interview-george-w-bush', + 'info_dict': { + 'id': '19705', + 'display_id': 'video_louis-ck-interview-george-w-bush', + 'ext': 'mp4', + 'title': 'Louis C.K. Interview Pt. 1 11/3/11', + 'description': 'Louis C.K. got starstruck by George W. Bush, so what? 
Part one.', + 'thumbnail': 'https://teamcoco.com/image/thumb?id=19705', + 'upload_date': '20111104', + 'timestamp': 1320408000, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + 'url': 'http://teamcoco.com/video/timothy-olyphant-drinking-whiskey', + 'info_dict': { + 'id': '88748', + 'display_id': 'video_timothy-olyphant-drinking-whiskey', + 'ext': 'mp4', + 'title': 'Timothy Olyphant Raises A Toast To “Justified”', + 'description': 'md5:15501f23f020e793aeca761205e42c24', + 'upload_date': '20150415', + 'timestamp': 1429099200, + 'thumbnail': 'https://teamcoco.com/image/thumb?id=88748', + }, + }, { + 'url': 'http://teamcoco.com/video/full-episode-mon-6-1-joel-mchale-jake-tapper-and-musical-guest-courtney-barnett?playlist=x;eyJ0eXBlIjoidGFnIiwiaWQiOjl9', + 'info_dict': { + 'id': '89341', + 'ext': 'mp4', + 'title': 'Full Episode - Mon. 6/1 - Joel McHale, Jake Tapper, And Musical Guest Courtney Barnett', + 'description': 'Guests: Joel McHale, Jake Tapper, And Musical Guest Courtney Barnett', + }, + 'skip': 'This video is no longer available.', + }, { + 'url': 'http://teamcoco.com/video/the-conan-audiencey-awards-for-04/25/18', + 'only_matching': True, + }, { + 'url': 'http://teamcoco.com/italy/conan-jordan-schlansky-hit-the-streets-of-florence', + 'only_matching': True, + }, { + 'url': 'http://teamcoco.com/haiti/conan-s-haitian-history-lesson', + 'only_matching': True, + }, { + 'url': 'http://teamcoco.com/israel/conan-hits-the-streets-beaches-of-tel-aviv', + 'only_matching': True, + }, + ] + + def _real_extract(self, url): + display_id = self._match_id(url).replace('/', '_') + webpage = self._download_webpage(url, display_id) + data = self._search_nextjs_data(webpage, display_id)['props']['pageProps']['pageData'] + info = merge_dicts(*traverse_obj(data, ( + 'blocks', lambda _, v: v['name'] in ('meta-tags', 'video-player', 'video-info'), 'props', {dict}))) + + thumbnail = traverse_obj( + info, (('image', 'poster'), {lambda x: urljoin('https://teamcoco.com/', x)}), get_all=False) + video_id = traverse_obj(parse_qs(thumbnail), ('id', 0)) or display_id + + formats, subtitles = self._get_formats_and_subtitles(info, video_id) + + return { + 'id': video_id, + 'display_id': display_id, + 'formats': formats, + 'subtitles': subtitles, + 'thumbnail': thumbnail, + **traverse_obj(info, { + 'title': 'title', + 'description': (('descriptionHtml', 'description'), {clean_html}), + 'timestamp': ('publishedOn', {lambda x: f'{x} 12:00AM'}, {unified_timestamp}), + }, get_all=False), + } + + +class ConanClassicIE(TeamcocoBaseIE): + _VALID_URL = r'https?://(?:(?:www\.)?conanclassic|conan25\.teamcoco)\.com/(?P<id>([^/]+/)*[^/?#]+)' + _TESTS = [{ + 'url': 'https://conanclassic.com/video/ice-cube-kevin-hart-conan-share-lyft', + 'info_dict': { + 'id': '74709', + 'ext': 'mp4', + 'title': 'Ice Cube, Kevin Hart, & Conan Share A Lyft Car', + 'display_id': 'video/ice-cube-kevin-hart-conan-share-lyft', + 'description': 'The stars of "Ride Along" teach Conan how to roll around Hollywood.', + 'thumbnail': 'http://cdn.teamcococdn.com/image/640x360/lyft-5bd75f82b616c.png', + 'duration': 570.0, + 'upload_date': '20131211', + 'timestamp': 1386721620, + '_old_archive_ids': ['teamcoco 74709'], + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'url': 'https://conan25.teamcoco.com/video/ice-cube-kevin-hart-conan-share-lyft', + 'only_matching': True, + }] + + _GRAPHQL_QUERY = '''query find($id: ID!) { + findRecord(id: $id) { + +... on MetaInterface { + id + title + teaser + publishOn + slug + thumb { + +...
on FileInterface { + id + path + preview + mime +} + + } +} + +... on Video { + videoType + duration + isLive + youtubeId + turnerMediaId + turnerMediaAuthToken + airDate +} + +... on Episode { + airDate + seasonNumber + episodeNumber + guestNames +} + + } + findRecordVideoMetadata(id: $id) { + turnerMediaId + turnerMediaAuthToken + duration + src + } +}''' + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + data = self._search_nextjs_data(webpage, display_id)['props']['pageProps']['pageData'] + video_id = traverse_obj( + data, ('blocks', ..., 'props', 'fieldDefs', lambda _, v: v['name'] == 'incomingVideoId', 'value'), + ('blocks', ..., 'props', 'fields', 'incomingVideoRecord', 'id'), get_all=False) + if not video_id: + self.raise_no_formats('Unable to extract video ID from webpage', expected=True) + + response = self._download_json( + 'https://conanclassic.com/api/legacy/graphql', video_id, data=json.dumps({ + 'query': self._GRAPHQL_QUERY, + 'variables': {'id': video_id}, + }, separators=(',', ':')).encode(), headers={ + 'Content-Type': 'application/json', + }) + + info = traverse_obj(response, ('data', 'findRecord', { + 'title': 'title', + 'description': 'teaser', + 'thumbnail': ('thumb', 'preview', {url_or_none}), + 'duration': ('duration', {parse_duration}), + 'timestamp': ('publishOn', {unified_timestamp}), + })) + + media_id = traverse_obj( + response, ('data', ('findRecord', 'findRecordVideoMetadata'), 'turnerMediaId'), get_all=False) + if media_id: + token = traverse_obj( + response, ('data', ('findRecord', 'findRecordVideoMetadata'), 'turnerMediaAuthToken'), get_all=False) + if not token: + raise ExtractorError('No Turner Media auth token found in API response') + self._initialize_geo_bypass({ + 'countries': ['US'], + }) + info.update(self._extract_ngtv_info(media_id, { + 'accessToken': token, + 'accessTokenType': 'jws', + })) + else: + formats, subtitles = self._get_formats_and_subtitles( + traverse_obj(response, ('data', 'findRecordVideoMetadata')), video_id) + info.update({ + 'formats': formats, + 'subtitles': subtitles, + }) + + return { + 'id': video_id, + 'display_id': display_id, + '_old_archive_ids': [make_archive_id('Teamcoco', video_id)], + **info, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/teamtreehouse.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/teamtreehouse.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/teamtreehouse.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/teamtreehouse.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/techtalks.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/techtalks.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/techtalks.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/techtalks.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ted.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ted.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ted.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ted.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tele13.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tele13.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tele13.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tele13.py diff --git 
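# (ConanClassicIE above drives its GraphQL endpoint with a single JSON POST;
# a minimal equivalent using only the standard library, query trimmed for brevity.)
import json
import urllib.request

payload = json.dumps({
    'query': 'query find($id: ID!) { findRecord(id: $id) { id title } }',
    'variables': {'id': '74709'},
}, separators=(',', ':')).encode()
req = urllib.request.Request('https://conanclassic.com/api/legacy/graphql',
                             data=payload, headers={'Content-Type': 'application/json'})
# data = json.load(urllib.request.urlopen(req))  # network call left commented out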
a/lib/python3.11/site-packages/yt_dlp/extractor/tele5.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tele5.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tele5.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tele5.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/telebruxelles.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/telebruxelles.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/telebruxelles.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/telebruxelles.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/telecaribe.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/telecaribe.py new file mode 100644 index 0000000..91118a1 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/telecaribe.py @@ -0,0 +1,91 @@ +import re + +from .common import InfoExtractor +from ..utils import traverse_obj + + +class TelecaribePlayIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?play\.telecaribe\.co/(?P<id>[\w-]+)' + _TESTS = [{ + 'url': 'https://www.play.telecaribe.co/breicok', + 'info_dict': { + 'id': 'breicok', + 'title': 'Breicok', + }, + 'playlist_count': 7, + }, { + 'url': 'https://www.play.telecaribe.co/si-fue-gol-de-yepes', + 'info_dict': { + 'id': 'si-fue-gol-de-yepes', + 'title': 'Sí Fue Gol de Yepes', + }, + 'playlist_count': 6, + }, { + 'url': 'https://www.play.telecaribe.co/ciudad-futura', + 'info_dict': { + 'id': 'ciudad-futura', + 'title': 'Ciudad Futura', + }, + 'playlist_count': 10, + }, { + 'url': 'https://www.play.telecaribe.co/live', + 'info_dict': { + 'id': 'live', + 'title': r're:^Señal en vivo', + 'live_status': 'is_live', + 'ext': 'mp4', + }, + 'params': { + 'skip_download': 'Livestream', + } + }, { + 'url': 'https://www.play.telecaribe.co/liveplus', + 'info_dict': { + 'id': 'liveplus', + 'title': r're:^Señal en vivo Plus', + 'live_status': 'is_live', + 'ext': 'mp4', + }, + 'params': { + 'skip_download': 'Livestream', + }, + 'skip': 'Geo-restricted to Colombia', + }] + + def _download_player_webpage(self, webpage, display_id): + page_id = self._search_regex( + (r'window\.firstPageId\s*=\s*["\']([^"\']+)', r'<div[^>]+id\s*=\s*"pageBackground_([^"]+)'), + webpage, 'page_id') + + props = self._download_json(self._search_regex( + rf'<link[^>]+href\s*=\s*"([^"]+)"[^>]+id\s*=\s*"features_{page_id}"', + webpage, 'json_props_url'), display_id)['props']['render']['compProps'] + + return self._download_webpage(traverse_obj(props, (..., 'url'))[-1], display_id) + + def _get_clean_title(self, title): + return re.sub(r'\s*\|\s*Telecaribe\s*VOD', '', title or '').strip() or None + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + player = self._download_player_webpage(webpage, display_id) + + livestream_url = self._search_regex( + r'(?:let|const|var)\s+source\s*=\s*["\']([^"\']+)', player, 'm3u8 url', default=None) + + if not livestream_url: + return self.playlist_from_matches( + re.findall(r'<a[^>]+href\s*=\s*"([^"]+\.mp4)', player), display_id, + self._get_clean_title(self._og_search_title(webpage))) + + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + livestream_url, display_id, 'mp4', live=True) + + return { + 'id': display_id, + 'title': self._get_clean_title(self._og_search_title(webpage)), + 'formats': formats, + 'subtitles': subtitles, + 'is_live': True, + } diff --git
a/lib/python3.11/site-packages/yt_dlp/extractor/telecinco.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/telecinco.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/telecinco.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/telecinco.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/telegraaf.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/telegraaf.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/telegraaf.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/telegraaf.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/telegram.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/telegram.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/telegram.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/telegram.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/telemb.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/telemb.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/telemb.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/telemb.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/telemundo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/telemundo.py new file mode 100644 index 0000000..54e74a6 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/telemundo.py @@ -0,0 +1,50 @@ +from .common import InfoExtractor +from ..networking import HEADRequest +from ..utils import try_get, unified_timestamp + + +class TelemundoIE(InfoExtractor): + + _VALID_URL = r'https?:\/\/(?:www\.)?telemundo\.com\/.+?video\/[^\/]+(?P<id>tmvo\d{7})' + _TESTS = [{ + 'url': 'https://www.telemundo.com/noticias/noticias-telemundo-en-la-noche/empleo/video/esta-aplicacion-gratuita-esta-ayudando-los-latinos-encontrar-trabajo-en-estados-unidos-tmvo9829325', + 'info_dict': { + 'id': 'tmvo9829325', + 'timestamp': 1621396800, + 'title': 'Esta aplicación gratuita está ayudando a los latinos a encontrar trabajo en Estados Unidos', + 'uploader': 'Telemundo', + 'uploader_id': 'NBCU_Telemundo', + 'ext': 'mp4', + 'upload_date': '20210519', + }, + 'params': { + 'skip_download': True, + } + }, { + 'url': 'https://www.telemundo.com/shows/al-rojo-vivo/empleo/video/personajes-de-times-square-piden-que-la-ciudad-de-nueva-york-los-deje-volver-trabajar-tmvo9816272', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + metadata = self._search_nextjs_data(webpage, video_id) + redirect_url = try_get( + metadata, + lambda x: x['props']['initialState']['video']['associatedPlaylists'][0]['videos'][0]['videoAssets'][0]['publicUrl']) + + m3u8_url = self._request_webpage(HEADRequest( + redirect_url + '?format=redirect&manifest=m3u&format=redirect&Tracking=true&Embedded=true&formats=MPEG4'), + video_id, 'Processing m3u8').url + formats = self._extract_m3u8_formats(m3u8_url, video_id, 'mp4') + date = unified_timestamp(try_get( + metadata, lambda x: x['props']['initialState']['video']['associatedPlaylists'][0]['videos'][0]['datePublished'].split(' ', 1)[1])) + return { + 'url': url, + 'id': video_id, + 'title': self._search_regex(r'<h1[^>]+>([^<]+)', webpage, 'title', fatal=False), + 'formats': formats, + 'timestamp': date, + 'uploader': 'Telemundo', + 'uploader_id': self._search_regex(r'https?:\/\/(?:[^/]+\/){3}video\/(?P<id>[^\/]+)', m3u8_url, 'Akamai
account', fatal=False) + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/telequebec.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/telequebec.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/telequebec.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/telequebec.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/teletask.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/teletask.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/teletask.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/teletask.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/telewebion.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/telewebion.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/telewebion.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/telewebion.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tempo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tempo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tempo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tempo.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tencent.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tencent.py new file mode 100644 index 0000000..6618ea4 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tencent.py @@ -0,0 +1,490 @@ +import functools +import random +import re +import string +import time + +from .common import InfoExtractor +from ..aes import aes_cbc_encrypt_bytes +from ..utils import ( + ExtractorError, + float_or_none, + determine_ext, + int_or_none, + js_to_json, + traverse_obj, + urljoin, +) + + +class TencentBaseIE(InfoExtractor): + """Subclasses must set _API_URL, _APP_VERSION, _PLATFORM, _HOST, _REFERER""" + + def _check_api_response(self, api_response): + msg = api_response.get('msg') + if api_response.get('code') != '0.0' and msg is not None: + if msg in ( + '您所在区域暂无此内容版权（如设置VPN请关闭后重试）', + 'This content is not available in your area due to copyright restrictions. Please choose other videos.'
+ ): + self.raise_geo_restricted() + raise ExtractorError(f'Tencent said: {msg}') + + def _get_ckey(self, video_id, url, guid): + ua = self.get_param('http_headers')['User-Agent'] + + payload = (f'{video_id}|{int(time.time())}|mg3c3b04ba|{self._APP_VERSION}|{guid}|' + f'{self._PLATFORM}|{url[:48]}|{ua.lower()[:48]}||Mozilla|Netscape|Windows x86_64|00|') + + return aes_cbc_encrypt_bytes( + bytes(f'|{sum(map(ord, payload))}|{payload}', 'utf-8'), + b'Ok\xda\xa3\x9e/\x8c\xb0\x7f^r-\x9e\xde\xf3\x14', + b'\x01PJ\xf3V\xe6\x19\xcf.B\xbb\xa6\x8c?p\xf9', + padding_mode='whitespace').hex().upper() + + def _get_video_api_response(self, video_url, video_id, series_id, subtitle_format, video_format, video_quality): + guid = ''.join(random.choices(string.digits + string.ascii_lowercase, k=16)) + ckey = self._get_ckey(video_id, video_url, guid) + query = { + 'vid': video_id, + 'cid': series_id, + 'cKey': ckey, + 'encryptVer': '8.1', + 'spcaptiontype': '1' if subtitle_format == 'vtt' else '0', + 'sphls': '2' if video_format == 'hls' else '0', + 'dtype': '3' if video_format == 'hls' else '0', + 'defn': video_quality, + 'spsrt': '2', # Enable subtitles + 'sphttps': '1', # Enable HTTPS + 'otype': 'json', + 'spwm': '1', + 'hevclv': '28', # Enable HEVC + 'drm': '40', # Enable DRM + # For HDR + 'spvideo': '4', + 'spsfrhdr': '100', + # For SHD + 'host': self._HOST, + 'referer': self._REFERER, + 'ehost': video_url, + 'appVer': self._APP_VERSION, + 'platform': self._PLATFORM, + # For VQQ + 'guid': guid, + 'flowid': ''.join(random.choices(string.digits + string.ascii_lowercase, k=32)), + } + + return self._search_json(r'QZOutputJson=', self._download_webpage( + self._API_URL, video_id, query=query), 'api_response', video_id) + + def _extract_video_formats_and_subtitles(self, api_response, video_id): + video_response = api_response['vl']['vi'][0] + + formats, subtitles = [], {} + for video_format in video_response['ul']['ui']: + if video_format.get('hls') or determine_ext(video_format['url']) == 'm3u8': + fmts, subs = self._extract_m3u8_formats_and_subtitles( + video_format['url'] + traverse_obj(video_format, ('hls', 'pt'), default=''), + video_id, 'mp4', fatal=False) + + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + else: + formats.append({ + 'url': f'{video_format["url"]}{video_response["fn"]}?vkey={video_response["fvkey"]}', + 'ext': 'mp4', + }) + + identifier = video_response.get('br') + format_response = traverse_obj( + api_response, ('fl', 'fi', lambda _, v: v['br'] == identifier), + expected_type=dict, get_all=False) or {} + common_info = { + 'width': video_response.get('vw'), + 'height': video_response.get('vh'), + 'abr': float_or_none(format_response.get('audiobandwidth'), scale=1000), + 'vbr': float_or_none(format_response.get('bandwidth'), scale=1000), + 'fps': format_response.get('vfps'), + 'format': format_response.get('sname'), + 'format_id': format_response.get('name'), + 'format_note': format_response.get('resolution'), + 'dynamic_range': {'hdr10': 'hdr10'}.get(format_response.get('name'), 'sdr'), + 'has_drm': format_response.get('drm', 0) != 0, + } + for f in formats: + f.update(common_info) + + return formats, subtitles + + def _extract_video_native_subtitles(self, api_response): + subtitles = {} + for subtitle in traverse_obj(api_response, ('sfl', 'fi')) or (): + subtitles.setdefault(subtitle['lang'].lower(), []).append({ + 'url': subtitle['url'], + 'ext': 'srt' if subtitle.get('captionType') == 1 else 'vtt', + 'protocol': 'm3u8_native' if determine_ext(subtitle['url']) == 
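# (Sketch of the ckey construction implemented by _get_ckey above: a '|'-joined
# client fingerprint, prefixed with an ord()-sum checksum, AES-CBC encrypted and
# hex-uppercased. The key/IV below are dummies, not the real ones from the code.)
from yt_dlp.aes import aes_cbc_encrypt_bytes

fingerprint = 'vid|1700000000|mg3c3b04ba|3.5.57|guid|10901|url[:48]|ua[:48]||Mozilla|Netscape|Windows x86_64|00|'
payload = f'|{sum(map(ord, fingerprint))}|{fingerprint}'  # checksum prefix, as in _get_ckey
ckey = aes_cbc_encrypt_bytes(payload.encode('utf-8'), b'0123456789abcdef',
                             b'fedcba9876543210', padding_mode='whitespace').hex().upper()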
'm3u8' else 'http', + }) + + return subtitles + + def _extract_all_video_formats_and_subtitles(self, url, video_id, series_id): + api_responses = [self._get_video_api_response(url, video_id, series_id, 'srt', 'hls', 'hd')] + self._check_api_response(api_responses[0]) + qualities = traverse_obj(api_responses, (0, 'fl', 'fi', ..., 'name')) or ('shd', 'fhd') + for q in qualities: + if q not in ('ld', 'sd', 'hd'): + api_responses.append(self._get_video_api_response( + url, video_id, series_id, 'vtt', 'hls', q)) + self._check_api_response(api_responses[-1]) + + formats, subtitles = [], {} + for api_response in api_responses: + fmts, subs = self._extract_video_formats_and_subtitles(api_response, video_id) + native_subtitles = self._extract_video_native_subtitles(api_response) + + formats.extend(fmts) + self._merge_subtitles(subs, native_subtitles, target=subtitles) + + return formats, subtitles + + def _get_clean_title(self, title): + return re.sub( + r'\s*[_\-]\s*(?:Watch online|Watch HD Video Online|WeTV|腾讯视频|(?:高清)?1080P在线观看平台).*?$', + '', title or '').strip() or None + + +class VQQBaseIE(TencentBaseIE): + _VALID_URL_BASE = r'https?://v\.qq\.com' + + _API_URL = 'https://h5vv6.video.qq.com/getvinfo' + _APP_VERSION = '3.5.57' + _PLATFORM = '10901' + _HOST = 'v.qq.com' + _REFERER = 'v.qq.com' + + def _get_webpage_metadata(self, webpage, video_id): + return self._search_json( + r'<script[^>]*>[^<]*window\.__(?:pinia|PINIA__)\s*=', + webpage, 'pinia data', video_id, transform_source=js_to_json, fatal=False) + + +class VQQVideoIE(VQQBaseIE): + IE_NAME = 'vqq:video' + _VALID_URL = VQQBaseIE._VALID_URL_BASE + r'/x/(?:page|cover/(?P<series_id>\w+))/(?P<id>\w+)' + + _TESTS = [{ + 'url': 'https://v.qq.com/x/page/q326831cny0.html', + 'md5': 'b11c9cb781df710d686b950376676e2a', + 'info_dict': { + 'id': 'q326831cny0', + 'ext': 'mp4', + 'title': '我是选手：雷霆裂阵，终极时刻', + 'description': 'md5:e7ed70be89244017dac2a835a10aeb1e', + 'thumbnail': r're:^https?://[^?#]+q326831cny0', + 'format_id': r're:^shd', + }, + }, { + 'url': 'https://v.qq.com/x/page/o3013za7cse.html', + 'md5': 'a1bcf42c6d28c189bd2fe2d468abb287', + 'info_dict': { + 'id': 'o3013za7cse', + 'ext': 'mp4', + 'title': '欧阳娜娜VLOG', + 'description': 'md5:29fe847497a98e04a8c3826e499edd2e', + 'thumbnail': r're:^https?://[^?#]+o3013za7cse', + 'format_id': r're:^shd', + }, + }, { + 'url': 'https://v.qq.com/x/cover/7ce5noezvafma27/a00269ix3l8.html', + 'md5': '87968df6238a65d2478f19c25adf850b', + 'info_dict': { + 'id': 'a00269ix3l8', + 'ext': 'mp4', + 'title': '鸡毛飞上天 第01集', + 'description': 'md5:8cae3534327315b3872fbef5e51b5c5b', + 'thumbnail': r're:^https?://[^?#]+7ce5noezvafma27', + 'series': '鸡毛飞上天', + 'format_id': r're:^shd', + }, + 'skip': '404', + }, { + 'url': 'https://v.qq.com/x/cover/mzc00200p29k31e/s0043cwsgj0.html', + 'md5': 'fadd10bf88aec3420f06f19ee1d24c5b', + 'info_dict': { + 'id': 's0043cwsgj0', + 'ext': 'mp4', + 'title': '第1集:如何快乐吃糖?', + 'description': 'md5:1d8c3a0b8729ae3827fa5b2d3ebd5213', + 'thumbnail': r're:^https?://[^?#]+s0043cwsgj0', + 'series': '青年理工工作者生活研究所', + 'format_id': r're:^shd', + }, + 'params': {'skip_download': 'm3u8'}, + }, { + # Geo-restricted to China + 'url': 'https://v.qq.com/x/cover/mcv8hkc8zk8lnov/x0036x5qqsr.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id, series_id = self._match_valid_url(url).group('id', 'series_id') + webpage = self._download_webpage(url, video_id) + webpage_metadata = self._get_webpage_metadata(webpage, video_id) + + formats,
subtitles = self._extract_all_video_formats_and_subtitles(url, video_id, series_id) + return { + 'id': video_id, + 'title': self._get_clean_title(self._og_search_title(webpage) + or traverse_obj(webpage_metadata, ('global', 'videoInfo', 'title'))), + 'description': (self._og_search_description(webpage) + or traverse_obj(webpage_metadata, ('global', 'videoInfo', 'desc'))), + 'formats': formats, + 'subtitles': subtitles, + 'thumbnail': (self._og_search_thumbnail(webpage) + or traverse_obj(webpage_metadata, ('global', 'videoInfo', 'pic160x90'))), + 'series': traverse_obj(webpage_metadata, ('global', 'coverInfo', 'title')), + } + + +class VQQSeriesIE(VQQBaseIE): + IE_NAME = 'vqq:series' + _VALID_URL = VQQBaseIE._VALID_URL_BASE + r'/x/cover/(?P<id>\w+)\.html/?(?:[?#]|$)' + + _TESTS = [{ + 'url': 'https://v.qq.com/x/cover/7ce5noezvafma27.html', + 'info_dict': { + 'id': '7ce5noezvafma27', + 'title': '鸡毛飞上天', + 'description': 'md5:8cae3534327315b3872fbef5e51b5c5b', + }, + 'playlist_count': 55, + }, { + 'url': 'https://v.qq.com/x/cover/oshd7r0vy9sfq8e.html', + 'info_dict': { + 'id': 'oshd7r0vy9sfq8e', + 'title': '恋爱细胞2', + 'description': 'md5:9d8a2245679f71ca828534b0f95d2a03', + }, + 'playlist_count': 12, + }] + + def _real_extract(self, url): + series_id = self._match_id(url) + webpage = self._download_webpage(url, series_id) + webpage_metadata = self._get_webpage_metadata(webpage, series_id) + + episode_paths = [f'/x/cover/{series_id}/{video_id}.html' for video_id in re.findall( + r'<div[^>]+data-vid="(?P<video_id>[^"]+)"[^>]+class="[^"]+episode-item-rect--number', + webpage)] + + return self.playlist_from_matches( + episode_paths, series_id, ie=VQQVideoIE, getter=functools.partial(urljoin, url), + title=self._get_clean_title(traverse_obj(webpage_metadata, ('coverInfo', 'title')) + or self._og_search_title(webpage)), + description=(traverse_obj(webpage_metadata, ('coverInfo', 'description')) + or self._og_search_description(webpage))) + + +class WeTvBaseIE(TencentBaseIE): + _VALID_URL_BASE = r'https?://(?:www\.)?wetv\.vip/(?:[^?#]+/)?play' + + _API_URL = 'https://play.wetv.vip/getvinfo' + _APP_VERSION = '3.5.57' + _PLATFORM = '4830201' + _HOST = 'wetv.vip' + _REFERER = 'wetv.vip' + + def _get_webpage_metadata(self, webpage, video_id): + return self._parse_json( + traverse_obj(self._search_nextjs_data(webpage, video_id), ('props', 'pageProps', 'data')), + video_id, fatal=False) + + def _extract_episode(self, url): + video_id, series_id = self._match_valid_url(url).group('id', 'series_id') + webpage = self._download_webpage(url, video_id) + webpage_metadata = self._get_webpage_metadata(webpage, video_id) + + formats, subtitles = self._extract_all_video_formats_and_subtitles(url, video_id, series_id) + return { + 'id': video_id, + 'title': self._get_clean_title(self._og_search_title(webpage) + or traverse_obj(webpage_metadata, ('coverInfo', 'title'))), + 'description': (traverse_obj(webpage_metadata, ('coverInfo', 'description')) + or self._og_search_description(webpage)), + 'formats': formats, + 'subtitles': subtitles, + 'thumbnail': self._og_search_thumbnail(webpage), + 'duration': int_or_none(traverse_obj(webpage_metadata, ('videoInfo', 'duration'))), + 'series': traverse_obj(webpage_metadata, ('coverInfo', 'title')), + 'episode_number': int_or_none(traverse_obj(webpage_metadata, ('videoInfo', 'episode'))), + } + + def _extract_series(self, url, ie): + series_id = self._match_id(url) + webpage = self._download_webpage(url, series_id) + webpage_metadata = self._get_webpage_metadata(webpage,
series_id) + + episode_paths = ([f'/play/{series_id}/{episode["vid"]}' for episode in webpage_metadata.get('videoList')] + or re.findall(r'<a[^>]+class="play-video__link"[^>]+href="(?P<path>[^"]+)', webpage)) + + return self.playlist_from_matches( + episode_paths, series_id, ie=ie, getter=functools.partial(urljoin, url), + title=self._get_clean_title(traverse_obj(webpage_metadata, ('coverInfo', 'title')) + or self._og_search_title(webpage)), + description=(traverse_obj(webpage_metadata, ('coverInfo', 'description')) + or self._og_search_description(webpage))) + + +class WeTvEpisodeIE(WeTvBaseIE): + IE_NAME = 'wetv:episode' + _VALID_URL = WeTvBaseIE._VALID_URL_BASE + r'/(?P<series_id>\w+)(?:-[^?#]+)?/(?P<id>\w+)(?:-[^?#]+)?' + + _TESTS = [{ + 'url': 'https://wetv.vip/en/play/air11ooo2rdsdi3-Cute-Programmer/v0040pr89t9-EP1-Cute-Programmer', + 'md5': '0c70fdfaa5011ab022eebc598e64bbbe', + 'info_dict': { + 'id': 'v0040pr89t9', + 'ext': 'mp4', + 'title': 'EP1: Cute Programmer', + 'description': 'md5:e87beab3bf9f392d6b9e541a63286343', + 'thumbnail': r're:^https?://[^?#]+air11ooo2rdsdi3', + 'series': 'Cute Programmer', + 'episode': 'Episode 1', + 'episode_number': 1, + 'duration': 2835, + 'format_id': r're:^shd', + }, + }, { + 'url': 'https://wetv.vip/en/play/u37kgfnfzs73kiu/p0039b9nvik', + 'md5': '3b3c15ca4b9a158d8d28d5aa9d7c0a49', + 'info_dict': { + 'id': 'p0039b9nvik', + 'ext': 'mp4', + 'title': 'EP1: You Are My Glory', + 'description': 'md5:831363a4c3b4d7615e1f3854be3a123b', + 'thumbnail': r're:^https?://[^?#]+u37kgfnfzs73kiu', + 'series': 'You Are My Glory', + 'episode': 'Episode 1', + 'episode_number': 1, + 'duration': 2454, + 'format_id': r're:^shd', + }, + }, { + 'url': 'https://wetv.vip/en/play/lcxgwod5hapghvw-WeTV-PICK-A-BOO/i0042y00lxp-Zhao-Lusi-Describes-The-First-Experiences-She-Had-In-Who-Rules-The-World-%7C-WeTV-PICK-A-BOO', + 'md5': '71133f5c2d5d6cad3427e1b010488280', + 'info_dict': { + 'id': 'i0042y00lxp', + 'ext': 'mp4', + 'title': 'md5:f7a0857dbe5fbbe2e7ad630b92b54e6a', + 'description': 'md5:76260cb9cdc0ef76826d7ca9d92fadfa', + 'thumbnail': r're:^https?://[^?#]+i0042y00lxp', + 'series': 'WeTV PICK-A-BOO', + 'episode': 'Episode 0', + 'episode_number': 0, + 'duration': 442, + 'format_id': r're:^shd', + }, + }] + + def _real_extract(self, url): + return self._extract_episode(url) + + +class WeTvSeriesIE(WeTvBaseIE): + _VALID_URL = WeTvBaseIE._VALID_URL_BASE + r'/(?P<id>\w+)(?:-[^/?#]+)?/?(?:[?#]|$)' + + _TESTS = [{ + 'url': 'https://wetv.vip/play/air11ooo2rdsdi3-Cute-Programmer', + 'info_dict': { + 'id': 'air11ooo2rdsdi3', + 'title': 'Cute Programmer', + 'description': 'md5:e87beab3bf9f392d6b9e541a63286343', + }, + 'playlist_count': 30, + }, { + 'url': 'https://wetv.vip/en/play/u37kgfnfzs73kiu-You-Are-My-Glory', + 'info_dict': { + 'id': 'u37kgfnfzs73kiu', + 'title': 'You Are My Glory', + 'description': 'md5:831363a4c3b4d7615e1f3854be3a123b', + }, + 'playlist_count': 32, + }] + + def _real_extract(self, url): + return self._extract_series(url, WeTvEpisodeIE) + + +class IflixBaseIE(WeTvBaseIE): + _VALID_URL_BASE = r'https?://(?:www\.)?iflix\.com/(?:[^?#]+/)?play' + + _API_URL = 'https://vplay.iflix.com/getvinfo' + _APP_VERSION = '3.5.57' + _PLATFORM = '330201' + _HOST = 'www.iflix.com' + _REFERER = 'www.iflix.com' + + +class IflixEpisodeIE(IflixBaseIE): + IE_NAME = 'iflix:episode' + _VALID_URL = IflixBaseIE._VALID_URL_BASE + r'/(?P<series_id>\w+)(?:-[^?#]+)?/(?P<id>\w+)(?:-[^?#]+)?' 
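# (IflixBaseIE above shows how little a new Tencent-backed platform needs: every
# site-specific detail lives in a handful of class constants, so a hypothetical
# sibling site would be just another subclass — all values below are illustrative.)
class ExampleTencentSiteBaseIE(WeTvBaseIE):
    _VALID_URL_BASE = r'https?://(?:www\.)?video\.example\.com/(?:[^?#]+/)?play'

    _API_URL = 'https://vplay.video.example.com/getvinfo'
    _APP_VERSION = '3.5.57'
    _PLATFORM = '000000'
    _HOST = 'www.video.example.com'
    _REFERER = 'www.video.example.com'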
+ + _TESTS = [{ + 'url': 'https://www.iflix.com/en/play/daijrxu03yypu0s/a0040kvgaza', + 'md5': '9740f9338c3a2105290d16b68fb3262f', + 'info_dict': { + 'id': 'a0040kvgaza', + 'ext': 'mp4', + 'title': 'EP1: Put Your Head On My Shoulder 2021', + 'description': 'md5:c095a742d3b7da6dfedd0c8170727a42', + 'thumbnail': r're:^https?://[^?#]+daijrxu03yypu0s', + 'series': 'Put Your Head On My Shoulder 2021', + 'episode': 'Episode 1', + 'episode_number': 1, + 'duration': 2639, + 'format_id': r're:^shd', + }, + }, { + 'url': 'https://www.iflix.com/en/play/fvvrcc3ra9lbtt1-Take-My-Brother-Away/i0029sd3gm1-EP1%EF%BC%9ATake-My-Brother-Away', + 'md5': '375c9b8478fdedca062274b2c2f53681', + 'info_dict': { + 'id': 'i0029sd3gm1', + 'ext': 'mp4', + 'title': 'EP1:Take My Brother Away', + 'description': 'md5:f0f7be1606af51cd94d5627de96b0c76', + 'thumbnail': r're:^https?://[^?#]+fvvrcc3ra9lbtt1', + 'series': 'Take My Brother Away', + 'episode': 'Episode 1', + 'episode_number': 1, + 'duration': 228, + 'format_id': r're:^shd', + }, + }] + + def _real_extract(self, url): + return self._extract_episode(url) + + +class IflixSeriesIE(IflixBaseIE): + _VALID_URL = IflixBaseIE._VALID_URL_BASE + r'/(?P<id>\w+)(?:-[^/?#]+)?/?(?:[?#]|$)' + + _TESTS = [{ + 'url': 'https://www.iflix.com/en/play/g21a6qk4u1s9x22-You-Are-My-Hero', + 'info_dict': { + 'id': 'g21a6qk4u1s9x22', + 'title': 'You Are My Hero', + 'description': 'md5:9c4d844bc0799cd3d2b5aed758a2050a', + }, + 'playlist_count': 40, + }, { + 'url': 'https://www.iflix.com/play/0s682hc45t0ohll', + 'info_dict': { + 'id': '0s682hc45t0ohll', + 'title': 'Miss Gu Who Is Silent', + 'description': 'md5:a9651d0236f25af06435e845fa2f8c78', + }, + 'playlist_count': 20, + }] + + def _real_extract(self, url): + return self._extract_series(url, IflixEpisodeIE) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tennistv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tennistv.py new file mode 100644 index 0000000..c1b4a33 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tennistv.py @@ -0,0 +1,155 @@ +import urllib.parse + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + random_uuidv4, + unified_timestamp, + urlencode_postdata, +) + + +class TennisTVIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?tennistv\.com/videos/(?P<id>[-a-z0-9]+)' + _TESTS = [{ + 'url': 'https://www.tennistv.com/videos/indian-wells-2018-verdasco-fritz', + 'info_dict': { + 'id': 'indian-wells-2018-verdasco-fritz', + 'ext': 'mp4', + 'title': 'Fernando Verdasco v Taylor Fritz', + 'description': 're:^After his stunning victory.{174}$', + 'thumbnail': 'https://atp-prod.akamaized.net/api/images/v1/images/112831/landscape/1242/0', + 'timestamp': 1521017381, + 'upload_date': '20180314', + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'Requires email and password of a subscribed account', + }, { + 'url': 'https://www.tennistv.com/videos/2650480/best-matches-of-2022-part-5', + 'info_dict': { + 'id': '2650480', + 'ext': 'mp4', + 'title': 'Best Matches of 2022 - Part 5', + 'description': 'md5:36dec3bfae7ed74bd79e48045b17264c', + 'thumbnail': 'https://open.http.mp.streamamg.com/p/3001482/sp/300148200/thumbnail/entry_id/0_myef18pd/version/100001/height/1920', + }, + 'params': {'skip_download': 'm3u8'}, + 'skip': 'Requires email and password of a subscribed account', + }] + _NETRC_MACHINE = 'tennistv' + + access_token, refresh_token = None, None + _PARTNER_ID = 3001482 + _FORMAT_URL = 
'https://open.http.mp.streamamg.com/p/{partner}/sp/{partner}00/playManifest/entryId/{entry}/format/applehttp/protocol/https/a.m3u8?ks={session}' + _AUTH_BASE_URL = 'https://sso.tennistv.com/auth/realms/TennisTV/protocol/openid-connect' + _HEADERS = { + 'origin': 'https://www.tennistv.com', + 'referer': 'https://www.tennistv.com/', + 'content-Type': 'application/x-www-form-urlencoded' + } + + def _perform_login(self, username, password): + login_page = self._download_webpage( + f'{self._AUTH_BASE_URL}/auth', None, 'Downloading login page', + query={ + 'client_id': 'tennis-tv-web', + 'redirect_uri': 'https://tennistv.com', + 'response_mode': 'fragment', + 'response_type': 'code', + 'scope': 'openid' + }) + + post_url = self._html_search_regex(r'action=["\']([^"\']+?)["\']\s+method=["\']post["\']', login_page, 'login POST url') + temp_page = self._download_webpage( + post_url, None, 'Sending login data', 'Unable to send login data', + headers=self._HEADERS, data=urlencode_postdata({ + 'username': username, + 'password': password, + 'submitAction': 'Log In' + })) + if 'Your username or password was incorrect' in temp_page: + raise ExtractorError('Your username or password was incorrect', expected=True) + + handle = self._request_webpage( + f'{self._AUTH_BASE_URL}/auth', None, 'Logging in', headers=self._HEADERS, + query={ + 'client_id': 'tennis-tv-web', + 'redirect_uri': 'https://www.tennistv.com/resources/v1.1.10/html/silent-check-sso.html', + 'state': random_uuidv4(), + 'response_mode': 'fragment', + 'response_type': 'code', + 'scope': 'openid', + 'nonce': random_uuidv4(), + 'prompt': 'none' + }) + + self.get_token(None, { + 'code': urllib.parse.parse_qs(handle.url)['code'][-1], + 'grant_type': 'authorization_code', + 'client_id': 'tennis-tv-web', + 'redirect_uri': 'https://www.tennistv.com/resources/v1.1.10/html/silent-check-sso.html' + }) + + def get_token(self, video_id, payload): + res = self._download_json( + f'{self._AUTH_BASE_URL}/token', video_id, 'Fetching tokens', + 'Unable to fetch tokens', headers=self._HEADERS, data=urlencode_postdata(payload)) + + self.access_token = res.get('access_token') or self.access_token + self.refresh_token = res.get('refresh_token') or self.refresh_token + + def _real_initialize(self): + if self.access_token and self.refresh_token: + return + + cookies = self._get_cookies('https://www.tennistv.com/') + if not cookies.get('access_token') or not cookies.get('refresh_token'): + self.raise_login_required() + self.access_token, self.refresh_token = cookies['access_token'].value, cookies['refresh_token'].value + + def _download_session_json(self, video_id, entryid,): + return self._download_json( + f'https://atppayments.streamamg.com/api/v1/session/ksession/?lang=en&apijwttoken={self.access_token}&entryId={entryid}', + video_id, 'Downloading ksession token', 'Failed to download ksession token', headers=self._HEADERS) + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + entryid = self._search_regex(r'data-entry-id=["\']([^"\']+)', webpage, 'entryID') + session_json = self._download_session_json(video_id, entryid) + + k_session = session_json.get('KSession') + if k_session is None: + self.get_token(video_id, { + 'grant_type': 'refresh_token', + 'refresh_token': self.refresh_token, + 'client_id': 'tennis-tv-web' + }) + k_session = self._download_session_json(video_id, entryid).get('KSession') + if k_session is None: + raise ExtractorError('Failed to get KSession, possibly a premium video', 
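# (The token refresh used above is a standard OpenID Connect refresh_token grant,
# i.e. one form-encoded POST to the realm's token endpoint; the token value here
# is a placeholder, and the network call is left commented out.)
import urllib.parse
import urllib.request

data = urllib.parse.urlencode({
    'grant_type': 'refresh_token',
    'refresh_token': '<refresh token from cookies or the login flow>',
    'client_id': 'tennis-tv-web',
}).encode()
# urllib.request.urlopen(
#     'https://sso.tennistv.com/auth/realms/TennisTV/protocol/openid-connect/token', data)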
expected=True) + + if session_json.get('ErrorMessage'): + self.report_warning(session_json['ErrorMessage']) + + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + self._FORMAT_URL.format(partner=self._PARTNER_ID, entry=entryid, session=k_session), video_id) + + return { + 'id': video_id, + 'title': self._generic_title('', webpage), + 'description': self._html_search_regex( + (r'<span itemprop="description" content=["\']([^"\']+)["\']>', *self._og_regexes('description')), + webpage, 'description', fatal=False), + 'thumbnail': f'https://open.http.mp.streamamg.com/p/{self._PARTNER_ID}/sp/{self._PARTNER_ID}00/thumbnail/entry_id/{entryid}/version/100001/height/1920', + 'timestamp': unified_timestamp(self._html_search_regex( + r'<span itemprop="uploadDate" content=["\']([^"\']+)["\']>', webpage, 'upload time', fatal=False)), + 'series': self._html_search_regex(r'data-series\s*?=\s*?"(.*?)"', webpage, 'series', fatal=False) or None, + 'season': self._html_search_regex(r'data-tournament-city\s*?=\s*?"(.*?)"', webpage, 'season', fatal=False) or None, + 'episode': self._html_search_regex(r'data-round\s*?=\s*?"(.*?)"', webpage, 'round', fatal=False) or None, + 'formats': formats, + 'subtitles': subtitles, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tenplay.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tenplay.py new file mode 100644 index 0000000..7ce7cbf --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tenplay.py @@ -0,0 +1,169 @@ +import base64 +import functools +import itertools +from datetime import datetime + +from .common import InfoExtractor +from ..networking import HEADRequest +from ..utils import int_or_none, traverse_obj, urlencode_postdata, urljoin + + +class TenPlayIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?10play\.com\.au/(?:[^/]+/)+(?P<id>tpv\d{6}[a-z]{5})' + _NETRC_MACHINE = '10play' + _TESTS = [{ + 'url': 'https://10play.com.au/neighbours/web-extras/season-39/nathan-borg-is-the-first-aussie-actor-with-a-cochlear-implant-to-join-neighbours/tpv210128qupwd', + 'info_dict': { + 'id': '6226844312001', + 'ext': 'mp4', + 'title': 'Nathan Borg Is The First Aussie Actor With A Cochlear Implant To Join Neighbours', + 'alt_title': 'Nathan Borg Is The First Aussie Actor With A Cochlear Implant To Join Neighbours', + 'description': 'md5:a02d0199c901c2dd4c796f1e7dd0de43', + 'duration': 186, + 'season': 39, + 'series': 'Neighbours', + 'thumbnail': r're:https://.*\.jpg', + 'uploader': 'Channel 10', + 'age_limit': 15, + 'timestamp': 1611810000, + 'upload_date': '20210128', + 'uploader_id': '2199827728001', + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'Only available in Australia', + }, { + 'url': 'https://10play.com.au/todd-sampsons-body-hack/episodes/season-4/episode-7/tpv200921kvngh', + 'info_dict': { + 'id': '6192880312001', + 'ext': 'mp4', + 'title': "Todd Sampson's Body Hack - S4 Ep. 
2", + 'description': 'md5:fa278820ad90f08ea187f9458316ac74', + 'age_limit': 15, + 'timestamp': 1600770600, + 'upload_date': '20200922', + 'uploader': 'Channel 10', + 'uploader_id': '2199827728001' + }, + 'params': { + 'skip_download': True, + } + }, { + 'url': 'https://10play.com.au/how-to-stay-married/web-extras/season-1/terrys-talks-ep-1-embracing-change/tpv190915ylupc', + 'only_matching': True, + }] + _GEO_BYPASS = False + + _AUS_AGES = { + 'G': 0, + 'PG': 15, + 'M': 15, + 'MA': 15, + 'MA15+': 15, + 'R': 18, + 'X': 18 + } + + def _get_bearer_token(self, video_id): + username, password = self._get_login_info() + if username is None or password is None: + self.raise_login_required('Your 10play account\'s details must be provided with --username and --password.') + _timestamp = datetime.now().strftime('%Y%m%d000000') + _auth_header = base64.b64encode(_timestamp.encode('ascii')).decode('ascii') + data = self._download_json('https://10play.com.au/api/user/auth', video_id, 'Getting bearer token', headers={ + 'X-Network-Ten-Auth': _auth_header, + }, data=urlencode_postdata({ + 'email': username, + 'password': password, + })) + return 'Bearer ' + data['jwt']['accessToken'] + + def _real_extract(self, url): + content_id = self._match_id(url) + data = self._download_json( + 'https://10play.com.au/api/v1/videos/' + content_id, content_id) + headers = {} + + if data.get('memberGated') is True: + _token = self._get_bearer_token(content_id) + headers = {'Authorization': _token} + + _video_url = self._download_json( + data.get('playbackApiEndpoint'), content_id, 'Downloading video JSON', + headers=headers).get('source') + m3u8_url = self._request_webpage(HEADRequest( + _video_url), content_id).url + if '10play-not-in-oz' in m3u8_url: + self.raise_geo_restricted(countries=['AU']) + formats = self._extract_m3u8_formats(m3u8_url, content_id, 'mp4') + + return { + 'formats': formats, + 'subtitles': {'en': [{'url': data.get('captionUrl')}]} if data.get('captionUrl') else None, + 'id': data.get('altId') or content_id, + 'duration': data.get('duration'), + 'title': data.get('subtitle'), + 'alt_title': data.get('title'), + 'description': data.get('description'), + 'age_limit': self._AUS_AGES.get(data.get('classification')), + 'series': data.get('tvShow'), + 'season': int_or_none(data.get('season')), + 'episode_number': int_or_none(data.get('episode')), + 'timestamp': data.get('published'), + 'thumbnail': data.get('imageUrl'), + 'uploader': 'Channel 10', + 'uploader_id': '2199827728001', + } + + +class TenPlaySeasonIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?10play\.com\.au/(?P<show>[^/?#]+)/episodes/(?P<season>[^/?#]+)/?(?:$|[?#])' + _TESTS = [{ + 'url': 'https://10play.com.au/masterchef/episodes/season-14', + 'info_dict': { + 'title': 'Season 14', + 'id': 'MjMyOTIy', + }, + 'playlist_mincount': 64, + }, { + 'url': 'https://10play.com.au/the-bold-and-the-beautiful-fast-tracked/episodes/season-2022', + 'info_dict': { + 'title': 'Season 2022', + 'id': 'Mjc0OTIw', + }, + 'playlist_mincount': 256, + }] + + def _entries(self, load_more_url, display_id=None): + skip_ids = [] + for page in itertools.count(1): + episodes_carousel = self._download_json( + load_more_url, display_id, query={'skipIds[]': skip_ids}, + note=f'Fetching episodes page {page}') + + episodes_chunk = episodes_carousel['items'] + skip_ids.extend(ep['id'] for ep in episodes_chunk) + + for ep in episodes_chunk: + yield ep['cardLink'] + if not episodes_carousel['hasMore']: + break + + def _real_extract(self, url): + show, season = 
self._match_valid_url(url).group('show', 'season') + season_info = self._download_json( + f'https://10play.com.au/api/shows/{show}/episodes/{season}', f'{show}/{season}') + + episodes_carousel = traverse_obj(season_info, ( + 'content', 0, 'components', ( + lambda _, v: v['title'].lower() == 'episodes', + (..., {dict}), + )), get_all=False) or {} + + playlist_id = episodes_carousel['tpId'] + + return self.playlist_from_matches( + self._entries(urljoin(url, episodes_carousel['loadMoreUrl']), playlist_id), + playlist_id, traverse_obj(season_info, ('content', 0, 'title', {str})), + getter=functools.partial(urljoin, url)) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/testurl.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/testurl.py new file mode 100644 index 0000000..3cf0017 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/testurl.py @@ -0,0 +1,50 @@ +import re + +from .common import InfoExtractor +from ..utils import ExtractorError + + +class TestURLIE(InfoExtractor): + """ Allows addressing of the test cases as test:yout.*be_1 """ + + IE_DESC = False # Do not list + _VALID_URL = r'test(?:url)?:(?P<extractor>.*?)(?:_(?P<num>\d+|all))?$' + + def _real_extract(self, url): + from . import gen_extractor_classes + + extractor_id, num = self._match_valid_url(url).group('extractor', 'num') + if not extractor_id: + return {'id': ':test', 'title': '', 'url': url} + + rex = re.compile(extractor_id, flags=re.IGNORECASE) + matching_extractors = [e for e in gen_extractor_classes() if rex.search(e.IE_NAME)] + + if len(matching_extractors) == 0: + raise ExtractorError(f'No extractors matching {extractor_id!r} found', expected=True) + elif len(matching_extractors) > 1: + extractor = next(( # Check for exact match + ie for ie in matching_extractors if ie.IE_NAME.lower() == extractor_id.lower() + ), None) or next(( # Check for exact match without plugin suffix + ie for ie in matching_extractors if ie.IE_NAME.split('+')[0].lower() == extractor_id.lower() + ), None) + if not extractor: + raise ExtractorError( + 'Found multiple matching extractors: %s' % ' '.join(ie.IE_NAME for ie in matching_extractors), + expected=True) + else: + extractor = matching_extractors[0] + + testcases = tuple(extractor.get_testcases(True)) + if num == 'all': + return self.playlist_result( + [self.url_result(tc['url'], extractor) for tc in testcases], + url, f'{extractor.IE_NAME} tests') + try: + tc = testcases[int(num or 0)] + except IndexError: + raise ExtractorError( + f'Test case {num or 0} not found, got only {len(testcases)} tests', expected=True) + + self.to_screen(f'Test URL: {tc["url"]}') + return self.url_result(tc['url'], extractor) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tf1.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tf1.py new file mode 100644 index 0000000..aba4927 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tf1.py @@ -0,0 +1,101 @@ +import json + +from .common import InfoExtractor +from ..utils import ( + int_or_none, + parse_iso8601, + try_get, +) + + +class TF1IE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?tf1\.fr/[^/]+/(?P<program_slug>[^/]+)/videos/(?P<id>[^/?&#]+)\.html' + _TESTS = [{ + 'url': 'https://www.tf1.fr/tmc/quotidien-avec-yann-barthes/videos/quotidien-premiere-partie-11-juin-2019.html', + 'info_dict': { + 'id': '13641379', + 'ext': 'mp4', + 'title': 'md5:f392bc52245dc5ad43771650c96fb620', + 'description': 'md5:a02cdb217141fb2d469d6216339b052f', + 'upload_date': '20190611', + 
'timestamp': 1560273989, + 'duration': 1738, + 'series': 'Quotidien avec Yann Barthès', + 'tags': ['intégrale', 'quotidien', 'Replay'], + }, + 'params': { + # Sometimes wat serves the whole file with the --test option + 'skip_download': True, + }, + }, { + 'url': 'https://www.tf1.fr/tmc/burger-quiz/videos/burger-quiz-du-19-aout-2023-s03-episode-21-85585666.html', + 'info_dict': { + 'id': '14010600', + 'ext': 'mp4', + 'title': 'Burger Quiz - S03 EP21 avec Eye Haidara, Anne Depétrini, Jonathan Zaccaï et Pio Marmaï', + 'thumbnail': 'https://photos.tf1.fr/1280/720/burger-quiz-11-9adb79-0@1x.jpg', + 'description': 'Manu Payet recevra Eye Haidara, Anne Depétrini, Jonathan Zaccaï et Pio Marmaï.', + 'upload_date': '20230819', + 'timestamp': 1692469471, + 'season_number': 3, + 'series': 'Burger Quiz', + 'episode_number': 21, + 'season': 'Season 3', + 'tags': 'count:13', + 'episode': 'Episode 21', + 'duration': 2312 + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'url': 'http://www.tf1.fr/tf1/koh-lanta/videos/replay-koh-lanta-22-mai-2015.html', + 'only_matching': True, + }, { + 'url': 'http://www.tf1.fr/hd1/documentaire/videos/mylene-farmer-d-une-icone.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + program_slug, slug = self._match_valid_url(url).groups() + video = self._download_json( + 'https://www.tf1.fr/graphql/web', slug, query={ + 'id': '9b80783950b85247541dd1d851f9cc7fa36574af015621f853ab111a679ce26f', + 'variables': json.dumps({ + 'programSlug': program_slug, + 'slug': slug, + }) + })['data']['videoBySlug'] + wat_id = video['streamId'] + + tags = [] + for tag in (video.get('tags') or []): + label = tag.get('label') + if not label: + continue + tags.append(label) + + decoration = video.get('decoration') or {} + + thumbnails = [] + for source in (try_get(decoration, lambda x: x['image']['sources'], list) or []): + source_url = source.get('url') + if not source_url: + continue + thumbnails.append({ + 'url': source_url, + 'width': int_or_none(source.get('width')), + }) + + return { + '_type': 'url_transparent', + 'id': wat_id, + 'url': 'wat:' + wat_id, + 'title': video.get('title'), + 'thumbnails': thumbnails, + 'description': decoration.get('description'), + 'timestamp': parse_iso8601(video.get('date')), + 'duration': int_or_none(try_get(video, lambda x: x['publicPlayingInfos']['duration'])), + 'tags': tags, + 'series': decoration.get('programLabel'), + 'season_number': int_or_none(video.get('season')), + 'episode_number': int_or_none(video.get('episode')), + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tfo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tfo.py new file mode 100644 index 0000000..d417f50 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tfo.py @@ -0,0 +1,48 @@ +import json + +from .common import InfoExtractor +from ..networking import HEADRequest +from ..utils import ExtractorError, clean_html, int_or_none + + +class TFOIE(InfoExtractor): + _GEO_COUNTRIES = ['CA'] + _VALID_URL = r'https?://(?:www\.)?tfo\.org/(?:en|fr)/(?:[^/]+/){2}(?P<id>\d+)' + _TEST = { + 'url': 'http://www.tfo.org/en/universe/tfo-247/100463871/video-game-hackathon', + 'md5': 'cafbe4f47a8dae0ca0159937878100d6', + 'info_dict': { + 'id': '7da3d50e495c406b8fc0b997659cc075', + 'ext': 'mp4', + 'title': 'Video Game Hackathon', + 'description': 'md5:558afeba217c6c8d96c60e5421795c07', + } + } + + def _real_extract(self, url): + video_id = self._match_id(url) + self._request_webpage(HEADRequest('http://www.tfo.org/'), video_id) + 
infos = self._download_json( + 'http://www.tfo.org/api/web/video/get_infos', video_id, data=json.dumps({ + 'product_id': video_id, + }).encode(), headers={ + 'X-tfo-session': self._get_cookies('http://www.tfo.org/')['tfo-session'].value, + }) + if infos.get('success') == 0: + if infos.get('code') == 'ErrGeoBlocked': + self.raise_geo_restricted(countries=self._GEO_COUNTRIES) + raise ExtractorError('%s said: %s' % (self.IE_NAME, clean_html(infos['msg'])), expected=True) + video_data = infos['data'] + + return { + '_type': 'url_transparent', + 'id': video_id, + 'url': 'limelight:media:' + video_data['llid'], + 'title': video_data['title'], + 'description': video_data.get('description'), + 'series': video_data.get('collection'), + 'season_number': int_or_none(video_data.get('season')), + 'episode_number': int_or_none(video_data.get('episode')), + 'duration': int_or_none(video_data.get('duration')), + 'ie_key': 'LimelightMedia', + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/theholetv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/theholetv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/theholetv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/theholetv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/theintercept.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/theintercept.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/theintercept.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/theintercept.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/theplatform.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/theplatform.py new file mode 100644 index 0000000..433ce84 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/theplatform.py @@ -0,0 +1,417 @@ +import re +import time +import hmac +import binascii +import hashlib + + +from .once import OnceIE +from .adobepass import AdobePassIE +from ..networking import Request +from ..utils import ( + determine_ext, + ExtractorError, + float_or_none, + int_or_none, + parse_qs, + unsmuggle_url, + update_url_query, + xpath_with_ns, + mimetype2ext, + find_xpath_attr, + traverse_obj, + update_url, + urlhandle_detect_ext, +) +from ..networking import HEADRequest + +default_ns = 'http://www.w3.org/2005/SMIL21/Language' +_x = lambda p: xpath_with_ns(p, {'smil': default_ns}) + + +class ThePlatformBaseIE(OnceIE): + _TP_TLD = 'com' + + def _extract_theplatform_smil(self, smil_url, video_id, note='Downloading SMIL data'): + meta = self._download_xml( + smil_url, video_id, note=note, query={'format': 'SMIL'}, + headers=self.geo_verification_headers()) + error_element = find_xpath_attr(meta, _x('.//smil:ref'), 'src') + if error_element is not None: + exception = find_xpath_attr( + error_element, _x('.//smil:param'), 'name', 'exception') + if exception is not None: + if exception.get('value') == 'GeoLocationBlocked': + self.raise_geo_restricted(error_element.attrib['abstract']) + elif error_element.attrib['src'].startswith( + 'http://link.theplatform.%s/s/errorFiles/Unavailable.' 
+ % self._TP_TLD): + raise ExtractorError( + error_element.attrib['abstract'], expected=True) + + smil_formats, subtitles = self._parse_smil_formats_and_subtitles( + meta, smil_url, video_id, namespace=default_ns, + # the parameters are from syfy.com, other sites may use others, + # they also work for nbc.com + f4m_params={'g': 'UXWGVKRWHFSP', 'hdcore': '3.0.3'}, + transform_rtmp_url=lambda streamer, src: (streamer, 'mp4:' + src)) + + formats = [] + for _format in smil_formats: + if OnceIE.suitable(_format['url']): + formats.extend(self._extract_once_formats(_format['url'])) + else: + media_url = _format['url'] + if determine_ext(media_url) == 'm3u8': + hdnea2 = self._get_cookies(media_url).get('hdnea2') + if hdnea2: + _format['url'] = update_url_query(media_url, {'hdnea3': hdnea2.value}) + + formats.append(_format) + + return formats, subtitles + + def _download_theplatform_metadata(self, path, video_id): + info_url = 'http://link.theplatform.%s/s/%s?format=preview' % (self._TP_TLD, path) + return self._download_json(info_url, video_id) + + def _parse_theplatform_metadata(self, info): + subtitles = {} + captions = info.get('captions') + if isinstance(captions, list): + for caption in captions: + lang, src, mime = caption.get('lang', 'en'), caption.get('src'), caption.get('type') + subtitles.setdefault(lang, []).append({ + 'ext': mimetype2ext(mime), + 'url': src, + }) + + duration = info.get('duration') + tp_chapters = info.get('chapters', []) + chapters = [] + if tp_chapters: + def _add_chapter(start_time, end_time): + start_time = float_or_none(start_time, 1000) + end_time = float_or_none(end_time, 1000) + if start_time is None or end_time is None: + return + chapters.append({ + 'start_time': start_time, + 'end_time': end_time, + }) + + for chapter in tp_chapters[:-1]: + _add_chapter(chapter.get('startTime'), chapter.get('endTime')) + _add_chapter(tp_chapters[-1].get('startTime'), tp_chapters[-1].get('endTime') or duration) + + return { + 'title': info['title'], + 'subtitles': subtitles, + 'description': info['description'], + 'thumbnail': info['defaultThumbnailUrl'], + 'duration': float_or_none(duration, 1000), + 'timestamp': int_or_none(info.get('pubDate'), 1000) or None, + 'uploader': info.get('billingCode'), + 'chapters': chapters, + } + + def _extract_theplatform_metadata(self, path, video_id): + info = self._download_theplatform_metadata(path, video_id) + return self._parse_theplatform_metadata(info) + + +class ThePlatformIE(ThePlatformBaseIE, AdobePassIE): + _VALID_URL = r'''(?x) + (?:https?://(?:link|player)\.theplatform\.com/[sp]/(?P<provider_id>[^/]+)/ + (?:(?:(?:[^/]+/)+select/)?(?P<media>media/(?:guid/\d+/)?)?|(?P<config>(?:[^/\?]+/(?:swf|config)|onsite)/select/))? 
+ |theplatform:)(?P<id>[^/\?&]+)''' + _EMBED_REGEX = [ + r'''(?x) + <meta\s+ + property=(["'])(?:og:video(?::(?:secure_)?url)?|twitter:player)\1\s+ + content=(["'])(?P<url>https?://player\.theplatform\.com/p/.+?)\2''', + r'(?s)<(?:iframe|script)[^>]+src=(["\'])(?P<url>(?:https?:)?//player\.theplatform\.com/p/.+?)\1' + ] + + _TESTS = [{ + # from http://www.metacafe.com/watch/cb-e9I_cZgTgIPd/blackberrys_big_bold_z30/ + 'url': 'http://link.theplatform.com/s/dJ5BDC/e9I_cZgTgIPd/meta.smil?format=smil&Tracking=true&mbr=true', + 'info_dict': { + 'id': 'e9I_cZgTgIPd', + 'ext': 'flv', + 'title': 'Blackberry\'s big, bold Z30', + 'description': 'The Z30 is Blackberry\'s biggest, baddest mobile messaging device yet.', + 'duration': 247, + 'timestamp': 1383239700, + 'upload_date': '20131031', + 'uploader': 'CBSI-NEW', + }, + 'params': { + # rtmp download + 'skip_download': True, + }, + 'skip': '404 Not Found', + }, { + # from http://www.cnet.com/videos/tesla-model-s-a-second-step-towards-a-cleaner-motoring-future/ + 'url': 'http://link.theplatform.com/s/kYEXFC/22d_qsQ6MIRT', + 'info_dict': { + 'id': '22d_qsQ6MIRT', + 'ext': 'flv', + 'description': 'md5:ac330c9258c04f9d7512cf26b9595409', + 'title': 'Tesla Model S: A second step towards a cleaner motoring future', + 'timestamp': 1426176191, + 'upload_date': '20150312', + 'uploader': 'CBSI-NEW', + }, + 'params': { + # rtmp download + 'skip_download': True, + }, + 'skip': 'CNet no longer uses ThePlatform', + }, { + 'url': 'https://player.theplatform.com/p/D6x-PC/pulse_preview/embed/select/media/yMBg9E8KFxZD', + 'info_dict': { + 'id': 'yMBg9E8KFxZD', + 'ext': 'mp4', + 'description': 'md5:644ad9188d655b742f942bf2e06b002d', + 'title': 'HIGHLIGHTS: USA bag first ever series Cup win', + 'uploader': 'EGSM', + }, + 'skip': 'Dead link', + }, { + 'url': 'http://player.theplatform.com/p/NnzsPC/widget/select/media/4Y0TlYUr_ZT7', + 'only_matching': True, + }, { + 'url': 'http://player.theplatform.com/p/2E2eJC/nbcNewsOffsite?guid=tdy_or_siri_150701', + 'md5': 'fb96bb3d85118930a5b055783a3bd992', + 'info_dict': { + 'id': 'tdy_or_siri_150701', + 'ext': 'mp4', + 'title': 'iPhone Siri’s sassy response to a math question has people talking', + 'description': 'md5:a565d1deadd5086f3331d57298ec6333', + 'duration': 83.0, + 'thumbnail': r're:^https?://.*\.jpg$', + 'timestamp': 1435752600, + 'upload_date': '20150701', + 'uploader': 'NBCU-NEWS', + }, + 'skip': 'Error: Player PID "nbcNewsOffsite" is disabled', + }, { + # From http://www.nbc.com/the-blacklist/video/sir-crispin-crandall/2928790?onid=137781#vc137781=1 + # geo-restricted (US), HLS encrypted with AES-128 + 'url': 'http://player.theplatform.com/p/NnzsPC/onsite_universal/select/media/guid/2410887629/2928790?fwsitesection=nbc_the_blacklist_video_library&autoPlay=true&carouselID=137781', + 'only_matching': True, + }] + + @classmethod + def _extract_embed_urls(cls, url, webpage): + # Are whitespaces ignored in URLs? 
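+ # Editorial note, not upstream code: embed URLs captured from HTML attributes can carry stray whitespace or newlines, + # e.g. a hypothetical 'https://player.theplatform .com/p/AB12/x' is normalised to 'https://player.theplatform.com/p/AB12/x' + # by the re.sub(r'\s', '', ...) below, so the matching extractor still recognises it.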
+ # https://github.com/ytdl-org/youtube-dl/issues/12044 + for embed_url in super()._extract_embed_urls(url, webpage): + yield re.sub(r'\s', '', embed_url) + + @staticmethod + def _sign_url(url, sig_key, sig_secret, life=600, include_qs=False): + flags = '10' if include_qs else '00' + expiration_date = '%x' % (int(time.time()) + life) + + def str_to_hex(str): + return binascii.b2a_hex(str.encode('ascii')).decode('ascii') + + def hex_to_bytes(hex): + return binascii.a2b_hex(hex.encode('ascii')) + + relative_path = re.match(r'https?://link\.theplatform\.com/s/([^?]+)', url).group(1) + clear_text = hex_to_bytes(flags + expiration_date + str_to_hex(relative_path)) + checksum = hmac.new(sig_key.encode('ascii'), clear_text, hashlib.sha1).hexdigest() + sig = flags + expiration_date + checksum + str_to_hex(sig_secret) + return '%s&sig=%s' % (url, sig) + + def _real_extract(self, url): + url, smuggled_data = unsmuggle_url(url, {}) + self._initialize_geo_bypass({ + 'countries': smuggled_data.get('geo_countries'), + }) + + mobj = self._match_valid_url(url) + provider_id = mobj.group('provider_id') + video_id = mobj.group('id') + + if not provider_id: + provider_id = 'dJ5BDC' + + path = provider_id + '/' + if mobj.group('media'): + path += mobj.group('media') + path += video_id + + qs_dict = parse_qs(url) + if 'guid' in qs_dict: + webpage = self._download_webpage(url, video_id) + scripts = re.findall(r'<script[^>]+src="([^"]+)"', webpage) + feed_id = None + # feed id usually locates in the last script. + # Seems there's no pattern for the interested script filename, so + # I try one by one + for script in reversed(scripts): + feed_script = self._download_webpage( + self._proto_relative_url(script, 'http:'), + video_id, 'Downloading feed script') + feed_id = self._search_regex( + r'defaultFeedId\s*:\s*"([^"]+)"', feed_script, + 'default feed id', default=None) + if feed_id is not None: + break + if feed_id is None: + raise ExtractorError('Unable to find feed id') + return self.url_result('http://feed.theplatform.com/f/%s/%s?byGuid=%s' % ( + provider_id, feed_id, qs_dict['guid'][0])) + + if smuggled_data.get('force_smil_url', False): + smil_url = url + # Explicitly specified SMIL (see https://github.com/ytdl-org/youtube-dl/issues/7385) + elif '/guid/' in url: + headers = {} + source_url = smuggled_data.get('source_url') + if source_url: + headers['Referer'] = source_url + request = Request(url, headers=headers) + webpage = self._download_webpage(request, video_id) + smil_url = self._search_regex( + r'<link[^>]+href=(["\'])(?P<url>.+?)\1[^>]+type=["\']application/smil\+xml', + webpage, 'smil url', group='url') + path = self._search_regex( + r'link\.theplatform\.com/s/((?:[^/?#&]+/)+[^/?#&]+)', smil_url, 'path') + smil_url += '?' if '?' 
not in smil_url else '&' + 'formats=m3u,mpeg4' + elif mobj.group('config'): + config_url = url + '&form=json' + config_url = config_url.replace('swf/', 'config/') + config_url = config_url.replace('onsite/', 'onsite/config/') + config = self._download_json(config_url, video_id, 'Downloading config') + if 'releaseUrl' in config: + release_url = config['releaseUrl'] + else: + release_url = 'http://link.theplatform.com/s/%s?mbr=true' % path + smil_url = release_url + '&formats=MPEG4&manifest=f4m' + else: + smil_url = 'http://link.theplatform.com/s/%s?mbr=true' % path + + sig = smuggled_data.get('sig') + if sig: + smil_url = self._sign_url(smil_url, sig['key'], sig['secret']) + + formats, subtitles = self._extract_theplatform_smil(smil_url, video_id) + + # With some sites, manifest URL must be forced to extract HLS formats + if not traverse_obj(formats, lambda _, v: v['format_id'].startswith('hls')): + m3u8_url = update_url(url, query='mbr=true&manifest=m3u', fragment=None) + urlh = self._request_webpage( + HEADRequest(m3u8_url), video_id, 'Checking for HLS formats', 'No HLS formats found', fatal=False) + if urlh and urlhandle_detect_ext(urlh) == 'm3u8': + m3u8_fmts, m3u8_subs = self._extract_m3u8_formats_and_subtitles( + m3u8_url, video_id, m3u8_id='hls', fatal=False) + formats.extend(m3u8_fmts) + self._merge_subtitles(m3u8_subs, target=subtitles) + + ret = self._extract_theplatform_metadata(path, video_id) + combined_subtitles = self._merge_subtitles(ret.get('subtitles', {}), subtitles) + ret.update({ + 'id': video_id, + 'formats': formats, + 'subtitles': combined_subtitles, + }) + + return ret + + +class ThePlatformFeedIE(ThePlatformBaseIE): + _URL_TEMPLATE = '%s//feed.theplatform.com/f/%s/%s?form=json&%s' + _VALID_URL = r'https?://feed\.theplatform\.com/f/(?P<provider_id>[^/]+)/(?P<feed_id>[^?/]+)\?(?:[^&]+&)*(?P<filter>by(?:Gui|I)d=(?P<id>[^&]+))' + _TESTS = [{ + # From http://player.theplatform.com/p/7wvmTC/MSNBCEmbeddedOffSite?guid=n_hardball_5biden_140207 + 'url': 'http://feed.theplatform.com/f/7wvmTC/msnbc_video-p-test?form=json&pretty=true&range=-40&byGuid=n_hardball_5biden_140207', + 'md5': '6e32495b5073ab414471b615c5ded394', + 'info_dict': { + 'id': 'n_hardball_5biden_140207', + 'ext': 'mp4', + 'title': 'The Biden factor: will Joe run in 2016?', + 'description': 'Could Vice President Joe Biden be preparing a 2016 campaign? 
Mark Halperin and Sam Stein weigh in.', + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20140208', + 'timestamp': 1391824260, + 'duration': 467.0, + 'categories': ['MSNBC/Issues/Democrats', 'MSNBC/Issues/Elections/Election 2016'], + 'uploader': 'NBCU-NEWS', + }, + }, { + 'url': 'http://feed.theplatform.com/f/2E2eJC/nnd_NBCNews?byGuid=nn_netcast_180306.Copy.01', + 'only_matching': True, + }] + + def _extract_feed_info(self, provider_id, feed_id, filter_query, video_id, custom_fields=None, asset_types_query={}, account_id=None): + real_url = self._URL_TEMPLATE % (self.http_scheme(), provider_id, feed_id, filter_query) + entry = self._download_json(real_url, video_id)['entries'][0] + main_smil_url = 'http://link.theplatform.com/s/%s/media/guid/%d/%s' % (provider_id, account_id, entry['guid']) if account_id else entry.get('plmedia$publicUrl') + + formats = [] + subtitles = {} + first_video_id = None + duration = None + asset_types = [] + for item in entry['media$content']: + smil_url = item['plfile$url'] + cur_video_id = ThePlatformIE._match_id(smil_url) + if first_video_id is None: + first_video_id = cur_video_id + duration = float_or_none(item.get('plfile$duration')) + file_asset_types = item.get('plfile$assetTypes') or parse_qs(smil_url)['assetTypes'] + for asset_type in file_asset_types: + if asset_type in asset_types: + continue + asset_types.append(asset_type) + query = { + 'mbr': 'true', + 'formats': item['plfile$format'], + 'assetTypes': asset_type, + } + if asset_type in asset_types_query: + query.update(asset_types_query[asset_type]) + cur_formats, cur_subtitles = self._extract_theplatform_smil(update_url_query( + main_smil_url or smil_url, query), video_id, 'Downloading SMIL data for %s' % asset_type) + formats.extend(cur_formats) + subtitles = self._merge_subtitles(subtitles, cur_subtitles) + + thumbnails = [{ + 'url': thumbnail['plfile$url'], + 'width': int_or_none(thumbnail.get('plfile$width')), + 'height': int_or_none(thumbnail.get('plfile$height')), + } for thumbnail in entry.get('media$thumbnails', [])] + + timestamp = int_or_none(entry.get('media$availableDate'), scale=1000) + categories = [item['media$name'] for item in entry.get('media$categories', [])] + + ret = self._extract_theplatform_metadata('%s/%s' % (provider_id, first_video_id), video_id) + subtitles = self._merge_subtitles(subtitles, ret['subtitles']) + ret.update({ + 'id': video_id, + 'formats': formats, + 'subtitles': subtitles, + 'thumbnails': thumbnails, + 'duration': duration, + 'timestamp': timestamp, + 'categories': categories, + }) + if custom_fields: + ret.update(custom_fields(entry)) + + return ret + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + + video_id = mobj.group('id') + provider_id = mobj.group('provider_id') + feed_id = mobj.group('feed_id') + filter_query = mobj.group('filter') + + return self._extract_feed_info(provider_id, feed_id, filter_query, video_id) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/thestar.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/thestar.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/thestar.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/thestar.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/thesun.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/thesun.py new file mode 100644 index 0000000..5edcf1c --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/thesun.py @@ -0,0 +1,43 @@ +import re + +from .common 
import InfoExtractor +from ..utils import extract_attributes + + +class TheSunIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?the-?sun(\.co\.uk|\.com)/[^/]+/(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.thesun.co.uk/tvandshowbiz/2261604/orlando-bloom-and-katy-perry-post-adorable-instagram-video-together-celebrating-thanksgiving-after-split-rumours/', + 'info_dict': { + 'id': '2261604', + 'title': 'md5:cba22f48bad9218b64d5bbe0e16afddf', + }, + 'playlist_count': 2, + }, { + 'url': 'https://www.the-sun.com/entertainment/7611415/1000lb-sisters-fans-rip-amy-dangerous-health-decision/', + 'info_dict': { + 'id': '7611415', + 'title': 'md5:e0b9b976f79dc770e5c80f22f40bb844', + }, + 'playlist_count': 1, + }] + BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s' + + def _real_extract(self, url): + article_id = self._match_id(url) + + webpage = self._download_webpage(url, article_id) + + entries = [] + for video in re.findall( + r'<video[^>]+data-video-id-pending=[^>]+>', + webpage): + attrs = extract_attributes(video) + video_id = attrs['data-video-id-pending'] + account_id = attrs.get('data-account', '5067014667001') + entries.append(self.url_result( + self.BRIGHTCOVE_URL_TEMPLATE % (account_id, video_id), + 'BrightcoveNew', video_id)) + + return self.playlist_result( + entries, article_id, self._og_search_title(webpage, fatal=False)) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/theweatherchannel.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/theweatherchannel.py new file mode 100644 index 0000000..d1921e4 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/theweatherchannel.py @@ -0,0 +1,99 @@ +import json + +from .theplatform import ThePlatformIE +from ..utils import ( + determine_ext, + parse_duration, + parse_iso8601, +) + + +class TheWeatherChannelIE(ThePlatformIE): # XXX: Do not subclass from concrete IE + _VALID_URL = r'https?://(?:www\.)?weather\.com(?P<asset_name>(?:/(?P<locale>[a-z]{2}-[A-Z]{2}))?/(?:[^/]+/)*video/(?P<id>[^/?#]+))' + _TESTS = [{ + 'url': 'https://weather.com/storms/hurricane/video/invest-95l-in-atlantic-has-a-medium-chance-of-development', + 'md5': '68f0cf616435683f27ce36bd9c927394', + 'info_dict': { + 'id': '81acef2d-ee8c-4545-ba83-bff3cc80db97', + 'ext': 'mp4', + 'title': 'Invest 95L In Atlantic Has A Medium Chance Of Development', + 'description': 'md5:0de720fd5f0d0e32207bd4c270fff824', + 'uploader': 'TWC - Digital', + 'uploader_id': 'b5a999e0-9e04-11e1-9ee2-001d092f5a10', + 'upload_date': '20230721', + 'timestamp': 1689967343, + 'display_id': 'invest-95l-in-atlantic-has-a-medium-chance-of-development', + 'duration': 34.0, + } + }, { + 'url': 'https://weather.com/en-CA/international/videos/video/unidentified-object-falls-from-sky-in-india', + 'only_matching': True, + }] + + def _real_extract(self, url): + asset_name, locale, display_id = self._match_valid_url(url).groups() + if not locale: + locale = 'en-US' + video_data = list(self._download_json( + 'https://weather.com/api/v1/p/redux-dal', display_id, data=json.dumps([{ + 'name': 'getCMSAssetsUrlConfig', + 'params': { + 'language': locale.replace('-', '_'), + 'query': { + 'assetName': { + '$in': asset_name, + }, + }, + } + }]).encode(), headers={ + 'Content-Type': 'application/json', + })['dal']['getCMSAssetsUrlConfig'].values())[0]['data'][0] + video_id = video_data['id'] + seo_meta = video_data.get('seometa', {}) + title = video_data.get('title') or seo_meta['title'] + + urls = [] + thumbnails = [] + formats 
= [] + for variant_id, variant_url in video_data.get('variants', []).items(): + variant_url = variant_url.strip() + if not variant_url or variant_url in urls: + continue + urls.append(variant_url) + ext = determine_ext(variant_url) + if ext == 'jpg': + thumbnails.append({ + 'url': variant_url, + 'id': variant_id, + }) + elif ThePlatformIE.suitable(variant_url): + tp_formats, _ = self._extract_theplatform_smil(variant_url, video_id) + formats.extend(tp_formats) + elif ext == 'm3u8': + formats.extend(self._extract_m3u8_formats( + variant_url, video_id, 'mp4', 'm3u8_native', + m3u8_id=variant_id, fatal=False)) + elif ext == 'f4m': + formats.extend(self._extract_f4m_formats( + variant_url, video_id, f4m_id=variant_id, fatal=False)) + else: + formats.append({ + 'url': variant_url, + 'format_id': variant_id, + }) + + cc_url = video_data.get('cc_url') + + return { + 'id': video_id, + 'display_id': display_id, + 'title': title, + 'description': video_data.get('description') or seo_meta.get('description') or seo_meta.get('og:description'), + 'duration': parse_duration(video_data.get('duration')), + 'uploader': video_data.get('providername'), + 'uploader_id': video_data.get('providerid'), + 'timestamp': parse_iso8601(video_data.get('publishdate')), + 'subtitles': {locale[:2]: [{'url': cc_url}]} if cc_url else None, + 'thumbnails': thumbnails, + 'formats': formats, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/thisamericanlife.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/thisamericanlife.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/thisamericanlife.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/thisamericanlife.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/thisav.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/thisav.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/thisav.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/thisav.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/thisoldhouse.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/thisoldhouse.py new file mode 100644 index 0000000..cc7beee --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/thisoldhouse.py @@ -0,0 +1,55 @@ +from .common import InfoExtractor +from ..networking import HEADRequest + + +class ThisOldHouseIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?thisoldhouse\.com/(?:watch|how-to|tv-episode|(?:[^/]+/)?\d+)/(?P<id>[^/?#]+)' + _TESTS = [{ + 'url': 'https://www.thisoldhouse.com/how-to/how-to-build-storage-bench', + 'info_dict': { + 'id': '5dcdddf673c3f956ef5db202', + 'ext': 'mp4', + 'title': 'How to Build a Storage Bench', + 'description': 'In the workshop, Tom Silva and Kevin O\'Connor build a storage bench for an entryway.', + 'timestamp': 1442548800, + 'upload_date': '20150918', + 'duration': 674, + 'view_count': int, + 'average_rating': 0, + 'thumbnail': r're:^https?://.*\.jpg\?\d+$', + 'display_id': 'how-to-build-a-storage-bench', + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'https://www.thisoldhouse.com/watch/arlington-arts-crafts-arts-and-crafts-class-begins', + 'only_matching': True, + }, { + 'url': 'https://www.thisoldhouse.com/tv-episode/ask-toh-shelf-rough-electric', + 'only_matching': True, + }, { + 'url': 'https://www.thisoldhouse.com/furniture/21017078/how-to-build-a-storage-bench', + 'only_matching': True, + }, { + 'url': 
'https://www.thisoldhouse.com/21113884/s41-e13-paradise-lost', + 'only_matching': True, + }, { + # iframe www.thisoldhouse.com + 'url': 'https://www.thisoldhouse.com/21083431/seaside-transformation-the-westerly-project', + 'only_matching': True, + }] + _ZYPE_TMPL = 'https://player.zype.com/embed/%s.html?api_key=hsOk_yMSPYNrT22e9pu8hihLXjaZf0JW5jsOWv4ZqyHJFvkJn6rtToHl09tbbsbe' + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + if 'To Unlock This content' in webpage: + self.raise_login_required(method='cookies') + video_url = self._search_regex( + r'<iframe[^>]+src=[\'"]((?:https?:)?//(?:www\.)?thisoldhouse\.(?:chorus\.build|com)/videos/zype/([0-9a-f]{24})[^\'"]*)[\'"]', + webpage, 'video url') + if 'subscription_required=true' in video_url or 'c-entry-group-labels__image' in webpage: + return self.url_result(self._request_webpage(HEADRequest(video_url), display_id).url, 'Zype', display_id) + video_id = self._search_regex(r'(?:https?:)?//(?:www\.)?thisoldhouse\.(?:chorus\.build|com)/videos/zype/([0-9a-f]{24})', video_url, 'video id') + return self.url_result(self._ZYPE_TMPL % video_id, 'Zype', video_id) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/thisvid.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/thisvid.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/thisvid.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/thisvid.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/threeqsdn.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/threeqsdn.py new file mode 100644 index 0000000..7841f8d --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/threeqsdn.py @@ -0,0 +1,156 @@ +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + determine_ext, + ExtractorError, + float_or_none, + int_or_none, + join_nonempty, + parse_iso8601, +) + + +class ThreeQSDNIE(InfoExtractor): + IE_NAME = '3qsdn' + IE_DESC = '3Q SDN' + _VALID_URL = r'https?://playout\.3qsdn\.com/(?P<id>[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})' + _EMBED_REGEX = [r'<iframe[^>]+\b(?:data-)?src=(["\'])(?P<url>%s.*?)\1' % _VALID_URL] + _TESTS = [{ + # https://player.3qsdn.com/demo.html + 'url': 'https://playout.3qsdn.com/7201c779-6b3c-11e7-a40e-002590c750be', + 'md5': '64a57396b16fa011b15e0ea60edce918', + 'info_dict': { + 'id': '7201c779-6b3c-11e7-a40e-002590c750be', + 'ext': 'mp4', + 'title': 'Video Ads', + 'is_live': False, + 'description': 'Video Ads Demo', + 'timestamp': 1500334803, + 'upload_date': '20170717', + 'duration': 888.032, + 'subtitles': { + 'eng': 'count:1', + }, + }, + 'expected_warnings': ['Unknown MIME type application/mp4 in DASH manifest'], + }, { + # live video stream + 'url': 'https://playout.3qsdn.com/66e68995-11ca-11e8-9273-002590c750be', + 'info_dict': { + 'id': '66e68995-11ca-11e8-9273-002590c750be', + 'ext': 'mp4', + 'title': 're:^66e68995-11ca-11e8-9273-002590c750be [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$', + 'is_live': True, + }, + 'params': { + 'skip_download': True, # m3u8 downloads + }, + }, { + # live audio stream + 'url': 'http://playout.3qsdn.com/9edf36e0-6bf2-11e2-a16a-9acf09e2db48', + 'only_matching': True, + }, { + # live audio stream with some 404 URLs + 'url': 'http://playout.3qsdn.com/ac5c3186-777a-11e2-9c30-9acf09e2db48', + 'only_matching': True, + }, { + # geo restricted with 'This content is not available in your country' + 'url': 
'http://playout.3qsdn.com/d63a3ffe-75e8-11e2-9c30-9acf09e2db48', + 'only_matching': True, + }, { + # geo restricted with 'playout.3qsdn.com/forbidden' + 'url': 'http://playout.3qsdn.com/8e330f26-6ae2-11e2-a16a-9acf09e2db48', + 'only_matching': True, + }, { + # live video with rtmp link + 'url': 'https://playout.3qsdn.com/6092bb9e-8f72-11e4-a173-002590c750be', + 'only_matching': True, + }, { + # ondemand from http://www.philharmonie.tv/veranstaltung/26/ + 'url': 'http://playout.3qsdn.com/0280d6b9-1215-11e6-b427-0cc47a188158?protocol=http', + 'only_matching': True, + }, { + # live video stream + 'url': 'https://playout.3qsdn.com/d755d94b-4ab9-11e3-9162-0025907ad44f?js=true', + 'only_matching': True, + }] + + def _extract_from_webpage(self, url, webpage): + for res in super()._extract_from_webpage(url, webpage): + yield { + **res, + '_type': 'url_transparent', + 'uploader': self._search_regex(r'^(?:https?://)?([^/]*)/.*', url, 'video uploader'), + } + + def _real_extract(self, url): + video_id = self._match_id(url) + + try: + config = self._download_json( + url.replace('://playout.3qsdn.com/', '://playout.3qsdn.com/config/'), video_id) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + self.raise_geo_restricted() + raise + + live = config.get('streamContent') == 'live' + aspect = float_or_none(config.get('aspect')) + + formats = [] + subtitles = {} + for source_type, source in (config.get('sources') or {}).items(): + if not source: + continue + if source_type == 'dash': + fmts, subs = self._extract_mpd_formats_and_subtitles( + source, video_id, mpd_id='mpd', fatal=False) + formats.extend(fmts) + subtitles = self._merge_subtitles(subtitles, subs) + elif source_type == 'hls': + fmts, subs = self._extract_m3u8_formats_and_subtitles( + source, video_id, 'mp4', live=live, m3u8_id='hls', fatal=False) + formats.extend(fmts) + subtitles = self._merge_subtitles(subtitles, subs) + elif source_type == 'progressive': + for s in source: + src = s.get('src') + if not (src and self._is_valid_url(src, video_id)): + continue + ext = determine_ext(src) + height = int_or_none(s.get('height')) + formats.append({ + 'ext': ext, + 'format_id': join_nonempty('http', ext, height and '%dp' % height), + 'height': height, + 'source_preference': 0, + 'url': src, + 'vcodec': 'none' if height == 0 else None, + 'width': int(height * aspect) if height and aspect else None, + }) + + for subtitle in (config.get('subtitles') or []): + src = subtitle.get('src') + if not src: + continue + subtitles.setdefault(subtitle.get('label') or 'eng', []).append({ + 'url': src, + }) + + title = config.get('title') or video_id + + return { + 'id': video_id, + 'title': title, + 'thumbnail': config.get('poster') or None, + 'description': config.get('description') or None, + 'timestamp': parse_iso8601(config.get('upload_date')), + 'duration': float_or_none(config.get('vlength')) or None, + 'is_live': live, + 'formats': formats, + 'subtitles': subtitles, + # It seems like this would be correctly handled by default + # However, unless someone can confirm this, the old + # behaviour is being kept as-is + '_format_sort_fields': ('res', 'source_preference') + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/threespeak.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/threespeak.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/threespeak.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/threespeak.py diff --git 
a/python/lib/python3.10/site-packages/yt_dlp/extractor/tiktok.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tiktok.py new file mode 100644 index 0000000..f26972c --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tiktok.py @@ -0,0 +1,1289 @@ +import itertools +import json +import random +import re +import string +import time + +from .common import InfoExtractor +from ..compat import compat_urllib_parse_unquote, compat_urllib_parse_urlparse +from ..networking import HEADRequest +from ..utils import ( + ExtractorError, + LazyList, + UnsupportedError, + UserNotLive, + determine_ext, + format_field, + get_first, + int_or_none, + join_nonempty, + merge_dicts, + qualities, + remove_start, + srt_subtitles_timecode, + str_or_none, + traverse_obj, + try_call, + try_get, + url_or_none, +) + + +class TikTokBaseIE(InfoExtractor): + _APP_VERSIONS = [('26.1.3', '260103'), ('26.1.2', '260102'), ('26.1.1', '260101'), ('25.6.2', '250602')] + _WORKING_APP_VERSION = None + _APP_NAME = 'trill' + _AID = 1180 + _UPLOADER_URL_FORMAT = 'https://www.tiktok.com/@%s' + _WEBPAGE_HOST = 'https://www.tiktok.com/' + QUALITIES = ('360p', '540p', '720p', '1080p') + + @property + def _API_HOSTNAME(self): + return self._configuration_arg( + 'api_hostname', ['api16-normal-c-useast1a.tiktokv.com'], ie_key=TikTokIE)[0] + + @staticmethod + def _create_url(user_id, video_id): + return f'https://www.tiktok.com/@{user_id or "_"}/video/{video_id}' + + def _get_sigi_state(self, webpage, display_id): + return self._search_json( + r'<script[^>]+\bid="(?:SIGI_STATE|sigi-persisted-data)"[^>]*>', webpage, + 'sigi state', display_id, end_pattern=r'</script>') + + def _call_api_impl(self, ep, query, manifest_app_version, video_id, fatal=True, + note='Downloading API JSON', errnote='Unable to download API page'): + self._set_cookie(self._API_HOSTNAME, 'odin_tt', ''.join(random.choices('0123456789abcdef', k=160))) + webpage_cookies = self._get_cookies(self._WEBPAGE_HOST) + if webpage_cookies.get('sid_tt'): + self._set_cookie(self._API_HOSTNAME, 'sid_tt', webpage_cookies['sid_tt'].value) + return self._download_json( + 'https://%s/aweme/v1/%s/' % (self._API_HOSTNAME, ep), video_id=video_id, + fatal=fatal, note=note, errnote=errnote, headers={ + 'User-Agent': f'com.ss.android.ugc.{self._APP_NAME}/{manifest_app_version} (Linux; U; Android 13; en_US; Pixel 7; Build/TD1A.220804.031; Cronet/58.0.2991.0)', + 'Accept': 'application/json', + }, query=query) + + def _build_api_query(self, query, app_version, manifest_app_version): + return { + **query, + 'version_name': app_version, + 'version_code': manifest_app_version, + 'build_number': app_version, + 'manifest_version_code': manifest_app_version, + 'update_version_code': manifest_app_version, + 'openudid': ''.join(random.choices('0123456789abcdef', k=16)), + 'uuid': ''.join(random.choices(string.digits, k=16)), + '_rticket': int(time.time() * 1000), + 'ts': int(time.time()), + 'device_brand': 'Google', + 'device_type': 'Pixel 7', + 'device_platform': 'android', + 'resolution': '1080*2400', + 'dpi': 420, + 'os_version': '13', + 'os_api': '29', + 'carrier_region': 'US', + 'sys_region': 'US', + 'region': 'US', + 'app_name': self._APP_NAME, + 'app_language': 'en', + 'language': 'en', + 'timezone_name': 'America/New_York', + 'timezone_offset': '-14400', + 'channel': 'googleplay', + 'ac': 'wifi', + 'mcc_mnc': '310260', + 'is_my_cn': 0, + 'aid': self._AID, + 'ssmix': 'a', + 'as': 'a1qwert123', + 'cp': 'cbfhckdckkde1', + } + + def _call_api(self, ep, query, video_id, 
fatal=True, + note='Downloading API JSON', errnote='Unable to download API page'): + if not self._WORKING_APP_VERSION: + app_version = self._configuration_arg('app_version', [''], ie_key=TikTokIE.ie_key())[0] + manifest_app_version = self._configuration_arg('manifest_app_version', [''], ie_key=TikTokIE.ie_key())[0] + if app_version and manifest_app_version: + self._WORKING_APP_VERSION = (app_version, manifest_app_version) + self.write_debug('Imported app version combo from extractor arguments') + elif app_version or manifest_app_version: + self.report_warning('Only one of the two required version params are passed as extractor arguments', only_once=True) + + if self._WORKING_APP_VERSION: + app_version, manifest_app_version = self._WORKING_APP_VERSION + real_query = self._build_api_query(query, app_version, manifest_app_version) + return self._call_api_impl(ep, real_query, manifest_app_version, video_id, fatal, note, errnote) + + for count, (app_version, manifest_app_version) in enumerate(self._APP_VERSIONS, start=1): + real_query = self._build_api_query(query, app_version, manifest_app_version) + try: + res = self._call_api_impl(ep, real_query, manifest_app_version, video_id, fatal, note, errnote) + self._WORKING_APP_VERSION = (app_version, manifest_app_version) + return res + except ExtractorError as e: + if isinstance(e.cause, json.JSONDecodeError) and e.cause.pos == 0: + if count == len(self._APP_VERSIONS): + if fatal: + raise e + else: + self.report_warning(str(e.cause or e.msg)) + return + self.report_warning('%s. Retrying... (attempt %s of %s)' % (str(e.cause or e.msg), count, len(self._APP_VERSIONS))) + continue + raise e + + def _extract_aweme_app(self, aweme_id): + feed_list = self._call_api( + 'feed', {'aweme_id': aweme_id}, aweme_id, note='Downloading video feed', + errnote='Unable to download video feed').get('aweme_list') or [] + aweme_detail = next((aweme for aweme in feed_list if str(aweme.get('aweme_id')) == aweme_id), None) + if not aweme_detail: + raise ExtractorError('Unable to find video in feed', video_id=aweme_id) + return self._parse_aweme_video_app(aweme_detail) + + def _get_subtitles(self, aweme_detail, aweme_id): + # TODO: Extract text positioning info + subtitles = {} + # aweme/detail endpoint subs + captions_info = traverse_obj( + aweme_detail, ('interaction_stickers', ..., 'auto_video_caption_info', 'auto_captions', ...), expected_type=dict) + for caption in captions_info: + caption_url = traverse_obj(caption, ('url', 'url_list', ...), expected_type=url_or_none, get_all=False) + if not caption_url: + continue + caption_json = self._download_json( + caption_url, aweme_id, note='Downloading captions', errnote='Unable to download captions', fatal=False) + if not caption_json: + continue + subtitles.setdefault(caption.get('language', 'en'), []).append({ + 'ext': 'srt', + 'data': '\n\n'.join( + f'{i + 1}\n{srt_subtitles_timecode(line["start_time"] / 1000)} --> {srt_subtitles_timecode(line["end_time"] / 1000)}\n{line["text"]}' + for i, line in enumerate(caption_json['utterances']) if line.get('text')) + }) + # feed endpoint subs + if not subtitles: + for caption in traverse_obj(aweme_detail, ('video', 'cla_info', 'caption_infos', ...), expected_type=dict): + if not caption.get('url'): + continue + subtitles.setdefault(caption.get('lang') or 'en', []).append({ + 'ext': remove_start(caption.get('caption_format'), 'web'), + 'url': caption['url'], + }) + # webpage subs + if not subtitles: + for caption in traverse_obj(aweme_detail, ('video', 'subtitleInfos', ...), 
expected_type=dict): + if not caption.get('Url'): + continue + subtitles.setdefault(caption.get('LanguageCodeName') or 'en', []).append({ + 'ext': remove_start(caption.get('Format'), 'web'), + 'url': caption['Url'], + }) + return subtitles + + def _parse_aweme_video_app(self, aweme_detail): + aweme_id = aweme_detail['aweme_id'] + video_info = aweme_detail['video'] + + def parse_url_key(url_key): + format_id, codec, res, bitrate = self._search_regex( + r'v[^_]+_(?P<id>(?P<codec>[^_]+)_(?P<res>\d+p)_(?P<bitrate>\d+))', url_key, + 'url key', default=(None, None, None, None), group=('id', 'codec', 'res', 'bitrate')) + if not format_id: + return {}, None + return { + 'format_id': format_id, + 'vcodec': 'h265' if codec == 'bytevc1' else codec, + 'tbr': int_or_none(bitrate, scale=1000) or None, + 'quality': qualities(self.QUALITIES)(res), + }, res + + known_resolutions = {} + + def audio_meta(url): + ext = determine_ext(url, default_ext='m4a') + return { + 'format_note': 'Music track', + 'ext': ext, + 'acodec': 'aac' if ext == 'm4a' else ext, + 'vcodec': 'none', + 'width': None, + 'height': None, + } if ext == 'mp3' or '-music-' in url else {} + + def extract_addr(addr, add_meta={}): + parsed_meta, res = parse_url_key(addr.get('url_key', '')) + if res: + known_resolutions.setdefault(res, {}).setdefault('height', add_meta.get('height') or addr.get('height')) + known_resolutions[res].setdefault('width', add_meta.get('width') or addr.get('width')) + parsed_meta.update(known_resolutions.get(res, {})) + add_meta.setdefault('height', int_or_none(res[:-1])) + return [{ + 'url': url, + 'filesize': int_or_none(addr.get('data_size')), + 'ext': 'mp4', + 'acodec': 'aac', + 'source_preference': -2 if 'aweme/v1' in url else -1, # Downloads from API might get blocked + **add_meta, **parsed_meta, + 'format_note': join_nonempty( + add_meta.get('format_note'), '(API)' if 'aweme/v1' in url else None, delim=' '), + **audio_meta(url), + } for url in addr.get('url_list') or []] + + # Hack: Add direct video links first to prioritize them when removing duplicate formats + formats = [] + if video_info.get('play_addr'): + formats.extend(extract_addr(video_info['play_addr'], { + 'format_id': 'play_addr', + 'format_note': 'Direct video', + 'vcodec': 'h265' if traverse_obj( + video_info, 'is_bytevc1', 'is_h265') else 'h264', # TODO: Check for "direct iOS" videos, like https://www.tiktok.com/@cookierun_dev/video/7039716639834656002 + 'width': video_info.get('width'), + 'height': video_info.get('height'), + })) + if video_info.get('download_addr'): + formats.extend(extract_addr(video_info['download_addr'], { + 'format_id': 'download_addr', + 'format_note': 'Download video%s' % (', watermarked' if video_info.get('has_watermark') else ''), + 'vcodec': 'h264', + 'width': video_info.get('width'), + 'height': video_info.get('height'), + 'preference': -2 if video_info.get('has_watermark') else -1, + })) + if video_info.get('play_addr_h264'): + formats.extend(extract_addr(video_info['play_addr_h264'], { + 'format_id': 'play_addr_h264', + 'format_note': 'Direct video', + 'vcodec': 'h264', + })) + if video_info.get('play_addr_bytevc1'): + formats.extend(extract_addr(video_info['play_addr_bytevc1'], { + 'format_id': 'play_addr_bytevc1', + 'format_note': 'Direct video', + 'vcodec': 'h265', + })) + + for bitrate in video_info.get('bit_rate', []): + if bitrate.get('play_addr'): + formats.extend(extract_addr(bitrate['play_addr'], { + 'format_id': bitrate.get('gear_name'), + 'format_note': 'Playback video', + 'tbr': try_get(bitrate, lambda 
x: x['bit_rate'] / 1000), + 'vcodec': 'h265' if traverse_obj( + bitrate, 'is_bytevc1', 'is_h265') else 'h264', + 'fps': bitrate.get('FPS'), + })) + + self._remove_duplicate_formats(formats) + auth_cookie = self._get_cookies(self._WEBPAGE_HOST).get('sid_tt') + if auth_cookie: + for f in formats: + self._set_cookie(compat_urllib_parse_urlparse(f['url']).hostname, 'sid_tt', auth_cookie.value) + + thumbnails = [] + for cover_id in ('cover', 'ai_dynamic_cover', 'animated_cover', 'ai_dynamic_cover_bak', + 'origin_cover', 'dynamic_cover'): + for cover_url in traverse_obj(video_info, (cover_id, 'url_list', ...)): + thumbnails.append({ + 'id': cover_id, + 'url': cover_url, + }) + + stats_info = aweme_detail.get('statistics') or {} + author_info = aweme_detail.get('author') or {} + music_info = aweme_detail.get('music') or {} + user_url = self._UPLOADER_URL_FORMAT % (traverse_obj(author_info, + 'sec_uid', 'id', 'uid', 'unique_id', + expected_type=str_or_none, get_all=False)) + labels = traverse_obj(aweme_detail, ('hybrid_label', ..., 'text'), expected_type=str) + + contained_music_track = traverse_obj( + music_info, ('matched_song', 'title'), ('matched_pgc_sound', 'title'), expected_type=str) + contained_music_author = traverse_obj( + music_info, ('matched_song', 'author'), ('matched_pgc_sound', 'author'), 'author', expected_type=str) + + is_generic_og_trackname = music_info.get('is_original_sound') and music_info.get('title') == 'original sound - %s' % music_info.get('owner_handle') + if is_generic_og_trackname: + music_track, music_author = contained_music_track or 'original sound', contained_music_author + else: + music_track, music_author = music_info.get('title'), music_info.get('author') + + return { + 'id': aweme_id, + 'extractor_key': TikTokIE.ie_key(), + 'extractor': TikTokIE.IE_NAME, + 'webpage_url': self._create_url(author_info.get('uid'), aweme_id), + **traverse_obj(aweme_detail, { + 'title': ('desc', {str}), + 'description': ('desc', {str}), + 'timestamp': ('create_time', {int_or_none}), + }), + **traverse_obj(stats_info, { + 'view_count': 'play_count', + 'like_count': 'digg_count', + 'repost_count': 'share_count', + 'comment_count': 'comment_count', + }, expected_type=int_or_none), + **traverse_obj(author_info, { + 'uploader': 'unique_id', + 'uploader_id': 'uid', + 'creator': 'nickname', + 'channel_id': 'sec_uid', + }, expected_type=str_or_none), + 'uploader_url': user_url, + 'track': music_track, + 'album': str_or_none(music_info.get('album')) or None, + 'artist': music_author or None, + 'formats': formats, + 'subtitles': self.extract_subtitles(aweme_detail, aweme_id), + 'thumbnails': thumbnails, + 'duration': int_or_none(traverse_obj(video_info, 'duration', ('download_addr', 'duration')), scale=1000), + 'availability': self._availability( + is_private='Private' in labels, + needs_subscription='Friends only' in labels, + is_unlisted='Followers only' in labels), + '_format_sort_fields': ('quality', 'codec', 'size', 'br'), + } + + def _parse_aweme_video_web(self, aweme_detail, webpage_url, video_id): + video_info = aweme_detail['video'] + author_info = traverse_obj(aweme_detail, 'authorInfo', 'author', expected_type=dict, default={}) + music_info = aweme_detail.get('music') or {} + stats_info = aweme_detail.get('stats') or {} + channel_id = traverse_obj(author_info or aweme_detail, (('authorSecId', 'secUid'), {str}), get_all=False) + user_url = self._UPLOADER_URL_FORMAT % channel_id if channel_id else None + + formats = [] + width = int_or_none(video_info.get('width')) + height = 
int_or_none(video_info.get('height')) + + for play_url in traverse_obj(video_info, ('playAddr', ((..., 'src'), None), {url_or_none})): + formats.append({ + 'url': self._proto_relative_url(play_url), + 'ext': 'mp4', + 'width': width, + 'height': height, + }) + + for download_url in traverse_obj(video_info, (('downloadAddr', ('download', 'url')), {url_or_none})): + formats.append({ + 'format_id': 'download', + 'url': self._proto_relative_url(download_url), + 'ext': 'mp4', + 'width': width, + 'height': height, + }) + + self._remove_duplicate_formats(formats) + + thumbnails = [] + for thumb_url in traverse_obj(aweme_detail, ( + (None, 'video'), ('thumbnail', 'cover', 'dynamicCover', 'originCover'), {url_or_none})): + thumbnails.append({ + 'url': self._proto_relative_url(thumb_url), + 'width': width, + 'height': height, + }) + + return { + 'id': video_id, + **traverse_obj(aweme_detail, { + 'title': ('desc', {str}), + 'description': ('desc', {str}), + 'duration': ('video', 'duration', {int_or_none}), + 'timestamp': ('createTime', {int_or_none}), + }), + **traverse_obj(author_info or aweme_detail, { + 'creator': ('nickname', {str}), + 'uploader': (('uniqueId', 'author'), {str}), + 'uploader_id': (('authorId', 'uid', 'id'), {str_or_none}), + }, get_all=False), + **traverse_obj(stats_info, { + 'view_count': 'playCount', + 'like_count': 'diggCount', + 'repost_count': 'shareCount', + 'comment_count': 'commentCount', + }, expected_type=int_or_none), + **traverse_obj(music_info, { + 'track': 'title', + 'album': ('album', {lambda x: x or None}), + 'artist': 'authorName', + }, expected_type=str), + 'channel_id': channel_id, + 'uploader_url': user_url, + 'formats': formats, + 'thumbnails': thumbnails, + 'http_headers': { + 'Referer': webpage_url, + } + } + + +class TikTokIE(TikTokBaseIE): + _VALID_URL = r'https?://www\.tiktok\.com/(?:embed|@(?P<user_id>[\w\.-]+)?/video)/(?P<id>\d+)' + _EMBED_REGEX = [rf'<(?:script|iframe)[^>]+\bsrc=(["\'])(?P<url>{_VALID_URL})'] + + _TESTS = [{ + 'url': 'https://www.tiktok.com/@leenabhushan/video/6748451240264420610', + 'md5': '736bb7a466c6f0a6afeb597da1e6f5b7', + 'info_dict': { + 'id': '6748451240264420610', + 'ext': 'mp4', + 'title': '#jassmanak #lehanga #leenabhushan', + 'description': '#jassmanak #lehanga #leenabhushan', + 'duration': 13, + 'height': 1024, + 'width': 576, + 'uploader': 'leenabhushan', + 'uploader_id': '6691488002098119685', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAA_Eb4t1vodM1IuTy_cvp9CY22RAb59xqrO0Xtz9CYQJvgXaDvZxYnZYRzDWhhgJmy', + 'creator': 'facestoriesbyleenabh', + 'thumbnail': r're:^https?://[\w\/\.\-]+(~[\w\-]+\.image)?', + 'upload_date': '20191016', + 'timestamp': 1571246252, + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'artist': 'Ysrbeats', + 'album': 'Lehanga', + 'track': 'Lehanga', + }, + 'skip': '404 Not Found', + }, { + 'url': 'https://www.tiktok.com/@patroxofficial/video/6742501081818877190?langCountry=en', + 'md5': '6f3cf8cdd9b28cb8363fe0a9a160695b', + 'info_dict': { + 'id': '6742501081818877190', + 'ext': 'mp4', + 'title': 'md5:5e2a23877420bb85ce6521dbee39ba94', + 'description': 'md5:5e2a23877420bb85ce6521dbee39ba94', + 'duration': 27, + 'height': 960, + 'width': 540, + 'uploader': 'patrox', + 'uploader_id': '18702747', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAAiFnldaILebi5heDoVU6bn4jBWWycX6-9U3xuNPqZ8Ws', + 'channel_id': 'MS4wLjABAAAAiFnldaILebi5heDoVU6bn4jBWWycX6-9U3xuNPqZ8Ws', + 'creator': 'patroX', + 'thumbnail': r're:^https?://[\w\/\.\-]+(~[\w\-]+\.image)?', + 
'upload_date': '20190930', + 'timestamp': 1569860870, + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'artist': 'Evan Todd, Jessica Keenan Wynn, Alice Lee, Barrett Wilbert Weed & Jon Eidson', + 'track': 'Big Fun', + }, + }, { + # Banned audio, only available on the app + 'url': 'https://www.tiktok.com/@barudakhb_/video/6984138651336838402', + 'info_dict': { + 'id': '6984138651336838402', + 'ext': 'mp4', + 'title': 'Balas @yolaaftwsr hayu yu ? #SquadRandom_ 🔥', + 'description': 'Balas @yolaaftwsr hayu yu ? #SquadRandom_ 🔥', + 'uploader': 'barudakhb_', + 'creator': 'md5:29f238c49bc0c176cb3cef1a9cea9fa6', + 'uploader_id': '6974687867511718913', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAAbhBwQC-R1iKoix6jDFsF-vBdfx2ABoDjaZrM9fX6arU3w71q3cOWgWuTXn1soZ7d', + 'channel_id': 'MS4wLjABAAAAbhBwQC-R1iKoix6jDFsF-vBdfx2ABoDjaZrM9fX6arU3w71q3cOWgWuTXn1soZ7d', + 'track': 'Boka Dance', + 'artist': 'md5:29f238c49bc0c176cb3cef1a9cea9fa6', + 'timestamp': 1626121503, + 'duration': 18, + 'thumbnail': r're:^https?://[\w\/\.\-]+(~[\w\-]+\.image)?', + 'upload_date': '20210712', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + }, + }, { + # Sponsored video, only available with feed workaround + 'url': 'https://www.tiktok.com/@MS4wLjABAAAATh8Vewkn0LYM7Fo03iec3qKdeCUOcBIouRk1mkiag6h3o_pQu_dUXvZ2EZlGST7_/video/7042692929109986561', + 'info_dict': { + 'id': '7042692929109986561', + 'ext': 'mp4', + 'title': 'Slap and Run!', + 'description': 'Slap and Run!', + 'uploader': 'user440922249', + 'creator': 'Slap And Run', + 'uploader_id': '7036055384943690754', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAATh8Vewkn0LYM7Fo03iec3qKdeCUOcBIouRk1mkiag6h3o_pQu_dUXvZ2EZlGST7_', + 'channel_id': 'MS4wLjABAAAATh8Vewkn0LYM7Fo03iec3qKdeCUOcBIouRk1mkiag6h3o_pQu_dUXvZ2EZlGST7_', + 'track': 'Promoted Music', + 'timestamp': 1639754738, + 'duration': 30, + 'thumbnail': r're:^https?://[\w\/\.\-]+(~[\w\-]+\.image)?', + 'upload_date': '20211217', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + }, + 'params': {'skip_download': True}, # XXX: unable to download video data: HTTP Error 403: Forbidden + }, { + # Video without title and description + 'url': 'https://www.tiktok.com/@pokemonlife22/video/7059698374567611694', + 'info_dict': { + 'id': '7059698374567611694', + 'ext': 'mp4', + 'title': 'TikTok video #7059698374567611694', + 'description': '', + 'uploader': 'pokemonlife22', + 'creator': 'Pokemon', + 'uploader_id': '6820838815978423302', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAA0tF1nBwQVVMyrGu3CqttkNgM68Do1OXUFuCY0CRQk8fEtSVDj89HqoqvbSTmUP2W', + 'channel_id': 'MS4wLjABAAAA0tF1nBwQVVMyrGu3CqttkNgM68Do1OXUFuCY0CRQk8fEtSVDj89HqoqvbSTmUP2W', + 'track': 'original sound', + 'timestamp': 1643714123, + 'duration': 6, + 'thumbnail': r're:^https?://[\w\/\.\-]+(~[\w\-]+\.image)?', + 'upload_date': '20220201', + 'artist': 'Pokemon', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + }, + }, { + # hydration JSON is sent in a <script> element + 'url': 'https://www.tiktok.com/@denidil6/video/7065799023130643713', + 'info_dict': { + 'id': '7065799023130643713', + 'ext': 'mp4', + 'title': '#denidil#денидил', + 'description': '#denidil#денидил', + 'uploader': 'denidil6', + 'uploader_id': '7046664115636405250', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAAsvMSzFdQ4ikl3uR2TEJwMBbB2yZh2Zxwhx-WCo3rbDpAharE3GQCrFuJArI3C8QJ', + 'artist': 'Holocron Music', + 
'album': 'Wolf Sounds (1 Hour) Enjoy the Company of the Animal That Is the Majestic King of the Night', + 'track': 'Wolf Sounds (1 Hour) Enjoy the Company of the Animal That Is the Majestic King of the Night', + 'timestamp': 1645134536, + 'duration': 26, + 'upload_date': '20220217', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + }, + 'skip': 'This video is unavailable', + }, { + # slideshow audio-only mp3 format + 'url': 'https://www.tiktok.com/@_le_cannibale_/video/7139980461132074283', + 'info_dict': { + 'id': '7139980461132074283', + 'ext': 'mp3', + 'title': 'TikTok video #7139980461132074283', + 'description': '', + 'creator': 'Antaura', + 'uploader': '_le_cannibale_', + 'uploader_id': '6604511138619654149', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAAoShJqaw_5gvy48y3azFeFcT4jeyKWbB0VVYasOCt2tTLwjNFIaDcHAM4D-QGXFOP', + 'channel_id': 'MS4wLjABAAAAoShJqaw_5gvy48y3azFeFcT4jeyKWbB0VVYasOCt2tTLwjNFIaDcHAM4D-QGXFOP', + 'artist': 'nathan !', + 'track': 'grahamscott canon', + 'upload_date': '20220905', + 'timestamp': 1662406249, + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'thumbnail': r're:^https://.+\.webp', + }, + }, { + # only available via web + 'url': 'https://www.tiktok.com/@moxypatch/video/7206382937372134662', + 'md5': '6aba7fad816e8709ff2c149679ace165', + 'info_dict': { + 'id': '7206382937372134662', + 'ext': 'mp4', + 'title': 'md5:1d95c0b96560ca0e8a231af4172b2c0a', + 'description': 'md5:1d95c0b96560ca0e8a231af4172b2c0a', + 'creator': 'MoxyPatch', + 'uploader': 'moxypatch', + 'uploader_id': '7039142049363379205', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAAFhqKnngMHJSsifL0w1vFOP5kn3Ndo1ODp0XuIBkNMBCkALTvwILdpu12g3pTtL4V', + 'channel_id': 'MS4wLjABAAAAFhqKnngMHJSsifL0w1vFOP5kn3Ndo1ODp0XuIBkNMBCkALTvwILdpu12g3pTtL4V', + 'artist': 'your worst nightmare', + 'track': 'original sound', + 'upload_date': '20230303', + 'timestamp': 1677866781, + 'duration': 10, + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'thumbnail': r're:^https://.+', + 'thumbnails': 'count:3', + }, + 'expected_warnings': ['Unable to find video in feed'], + }, { + # 1080p format + 'url': 'https://www.tiktok.com/@tatemcrae/video/7107337212743830830', + 'md5': '982512017a8a917124d5a08c8ae79621', + 'info_dict': { + 'id': '7107337212743830830', + 'ext': 'mp4', + 'title': 'new music video 4 don’t come backkkk🧸🖤 i hope u enjoy !! @musicontiktok', + 'description': 'new music video 4 don’t come backkkk🧸🖤 i hope u enjoy !! 
@musicontiktok', + 'uploader': 'tatemcrae', + 'uploader_id': '86328792343818240', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAA-0bQT0CqebTRr6I4IkYvMDMKSRSJHLNPBo5HrSklJwyA2psXLSZG5FP-LMNpHnJd', + 'channel_id': 'MS4wLjABAAAA-0bQT0CqebTRr6I4IkYvMDMKSRSJHLNPBo5HrSklJwyA2psXLSZG5FP-LMNpHnJd', + 'creator': 'tate mcrae', + 'artist': 'tate mcrae', + 'track': 'original sound', + 'upload_date': '20220609', + 'timestamp': 1654805899, + 'duration': 150, + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'thumbnail': r're:^https://.+\.webp', + }, + 'params': {'format': 'bytevc1_1080p_808907-0'}, + }, { + # Slideshow, audio-only m4a format + 'url': 'https://www.tiktok.com/@hara_yoimiya/video/7253412088251534594', + 'md5': '2ff8fe0174db2dbf49c597a7bef4e47d', + 'info_dict': { + 'id': '7253412088251534594', + 'ext': 'm4a', + 'title': 'я ред флаг простите #переписка #щитпост #тревожныйтиппривязанности #рекомендации ', + 'description': 'я ред флаг простите #переписка #щитпост #тревожныйтиппривязанности #рекомендации ', + 'uploader': 'hara_yoimiya', + 'uploader_id': '6582536342634676230', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAAIAlDxriiPWLE-p8p1R_0Bx8qWKfi-7zwmGhzU8Mv25W8sNxjfIKrol31qTczzuLB', + 'channel_id': 'MS4wLjABAAAAIAlDxriiPWLE-p8p1R_0Bx8qWKfi-7zwmGhzU8Mv25W8sNxjfIKrol31qTczzuLB', + 'creator': 'лампочка', + 'artist': 'Øneheart', + 'album': 'watching the stars', + 'track': 'watching the stars', + 'upload_date': '20230708', + 'timestamp': 1688816612, + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'repost_count': int, + 'thumbnail': r're:^https://.+\.webp', + }, + }, { + # Auto-captions available + 'url': 'https://www.tiktok.com/@hankgreen1/video/7047596209028074758', + 'only_matching': True + }] + + def _real_extract(self, url): + video_id, user_id = self._match_valid_url(url).group('id', 'user_id') + try: + return self._extract_aweme_app(video_id) + except ExtractorError as e: + self.report_warning(f'{e}; trying with webpage') + + url = self._create_url(user_id, video_id) + webpage = self._download_webpage(url, video_id, headers={'User-Agent': 'Mozilla/5.0'}) + next_data = self._search_nextjs_data(webpage, video_id, default='{}') + if next_data: + status = traverse_obj(next_data, ('props', 'pageProps', 'statusCode'), expected_type=int) or 0 + video_data = traverse_obj(next_data, ('props', 'pageProps', 'itemInfo', 'itemStruct'), expected_type=dict) + else: + sigi_data = self._get_sigi_state(webpage, video_id) + status = traverse_obj(sigi_data, ('VideoPage', 'statusCode'), expected_type=int) or 0 + video_data = traverse_obj(sigi_data, ('ItemModule', video_id), expected_type=dict) + + if status == 0: + return self._parse_aweme_video_web(video_data, url, video_id) + elif status == 10216: + raise ExtractorError('This video is private', expected=True) + raise ExtractorError('Video not available', video_id=video_id) + + +class TikTokUserIE(TikTokBaseIE): + IE_NAME = 'tiktok:user' + _VALID_URL = r'https?://(?:www\.)?tiktok\.com/@(?P<id>[\w\.-]+)/?(?:$|[#?])' + _WORKING = False + _TESTS = [{ + 'url': 'https://tiktok.com/@corgibobaa?lang=en', + 'playlist_mincount': 45, + 'info_dict': { + 'id': '6935371178089399301', + 'title': 'corgibobaa', + 'thumbnail': r're:https://.+_1080x1080\.webp' + }, + 'expected_warnings': ['Retrying'] + }, { + 'url': 'https://www.tiktok.com/@6820838815978423302', + 'playlist_mincount': 5, + 'info_dict': { + 'id': '6820838815978423302', + 'title': '6820838815978423302', + 'thumbnail':
r're:https://.+_1080x1080\.webp' + }, + 'expected_warnings': ['Retrying'] + }, { + 'url': 'https://www.tiktok.com/@meme', + 'playlist_mincount': 593, + 'info_dict': { + 'id': '79005827461758976', + 'title': 'meme', + 'thumbnail': r're:https://.+_1080x1080\.webp' + }, + 'expected_warnings': ['Retrying'] + }] + + r''' # TODO: Fix by adding _signature to api_url + def _entries(self, webpage, user_id, username): + secuid = self._search_regex(r'\"secUid\":\"(?P<secUid>[^\"]+)', webpage, username) + verifyfp_cookie = self._get_cookies('https://www.tiktok.com').get('s_v_web_id') + if not verifyfp_cookie: + raise ExtractorError('Improper cookies (missing s_v_web_id).', expected=True) + api_url = f'https://m.tiktok.com/api/post/item_list/?aid=1988&cookie_enabled=true&count=30&verifyFp={verifyfp_cookie.value}&secUid={secuid}&cursor=' + cursor = '0' + for page in itertools.count(): + data_json = self._download_json(api_url + cursor, username, note='Downloading Page %d' % page) + for video in data_json.get('itemList', []): + video_id = video['id'] + video_url = f'https://www.tiktok.com/@{user_id}/video/{video_id}' + yield self._url_result(video_url, 'TikTok', video_id, str_or_none(video.get('desc'))) + if not data_json.get('hasMore'): + break + cursor = data_json['cursor'] + ''' + + def _video_entries_api(self, webpage, user_id, username): + query = { + 'user_id': user_id, + 'count': 21, + 'max_cursor': 0, + 'min_cursor': 0, + 'retry_type': 'no_retry', + 'device_id': ''.join(random.choices(string.digits, k=19)), # Some endpoints don't like randomized device_id, so it isn't directly set in _call_api. + } + + for page in itertools.count(1): + for retry in self.RetryManager(): + try: + post_list = self._call_api( + 'aweme/post', query, username, note=f'Downloading user video list page {page}', + errnote='Unable to download user video list') + except ExtractorError as e: + if isinstance(e.cause, json.JSONDecodeError) and e.cause.pos == 0: + retry.error = e + continue + raise + yield from post_list.get('aweme_list', []) + if not post_list.get('has_more'): + break + query['max_cursor'] = post_list['max_cursor'] + + def _entries_api(self, user_id, videos): + for video in videos: + yield { + **self._parse_aweme_video_app(video), + 'extractor_key': TikTokIE.ie_key(), + 'extractor': 'TikTok', + 'webpage_url': f'https://tiktok.com/@{user_id}/video/{video["aweme_id"]}', + } + + def _real_extract(self, url): + user_name = self._match_id(url) + webpage = self._download_webpage(url, user_name, headers={ + 'User-Agent': 'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)' + }) + user_id = self._html_search_regex(r'snssdk\d*://user/profile/(\d+)', webpage, 'user ID', default=None) or user_name + + videos = LazyList(self._video_entries_api(webpage, user_id, user_name)) + thumbnail = traverse_obj(videos, (0, 'author', 'avatar_larger', 'url_list', 0)) + + return self.playlist_result(self._entries_api(user_id, videos), user_id, user_name, thumbnail=thumbnail) + + +class TikTokBaseListIE(TikTokBaseIE): # XXX: Conventionally, base classes should end with BaseIE/InfoExtractor + def _entries(self, list_id, display_id): + query = { + self._QUERY_NAME: list_id, + 'cursor': 0, + 'count': 20, + 'type': 5, + 'device_id': ''.join(random.choices(string.digits, k=19)) + } + + for page in itertools.count(1): + for retry in self.RetryManager(): + try: + post_list = self._call_api( + self._API_ENDPOINT, query, display_id, note=f'Downloading video list page {page}', + errnote='Unable to download video list') + 
except ExtractorError as e: + if isinstance(e.cause, json.JSONDecodeError) and e.cause.pos == 0: + retry.error = e + continue + raise + for video in post_list.get('aweme_list', []): + yield { + **self._parse_aweme_video_app(video), + 'extractor_key': TikTokIE.ie_key(), + 'extractor': 'TikTok', + 'webpage_url': f'https://tiktok.com/@_/video/{video["aweme_id"]}', + } + if not post_list.get('has_more'): + break + query['cursor'] = post_list['cursor'] + + def _real_extract(self, url): + list_id = self._match_id(url) + return self.playlist_result(self._entries(list_id, list_id), list_id) + + +class TikTokSoundIE(TikTokBaseListIE): + IE_NAME = 'tiktok:sound' + _VALID_URL = r'https?://(?:www\.)?tiktok\.com/music/[\w\.-]+-(?P<id>[\d]+)[/?#&]?' + _WORKING = False + _QUERY_NAME = 'music_id' + _API_ENDPOINT = 'music/aweme' + _TESTS = [{ + 'url': 'https://www.tiktok.com/music/Build-a-Btch-6956990112127585029?lang=en', + 'playlist_mincount': 100, + 'info_dict': { + 'id': '6956990112127585029' + }, + 'expected_warnings': ['Retrying'] + }, { + # Actual entries are less than listed video count + 'url': 'https://www.tiktok.com/music/jiefei-soap-remix-7036843036118469381', + 'playlist_mincount': 2182, + 'info_dict': { + 'id': '7036843036118469381' + }, + 'expected_warnings': ['Retrying'] + }] + + +class TikTokEffectIE(TikTokBaseListIE): + IE_NAME = 'tiktok:effect' + _VALID_URL = r'https?://(?:www\.)?tiktok\.com/sticker/[\w\.-]+-(?P<id>[\d]+)[/?#&]?' + _WORKING = False + _QUERY_NAME = 'sticker_id' + _API_ENDPOINT = 'sticker/aweme' + _TESTS = [{ + 'url': 'https://www.tiktok.com/sticker/MATERIAL-GWOOORL-1258156', + 'playlist_mincount': 100, + 'info_dict': { + 'id': '1258156', + }, + 'expected_warnings': ['Retrying'] + }, { + # Different entries between mobile and web, depending on region + 'url': 'https://www.tiktok.com/sticker/Elf-Friend-479565', + 'only_matching': True + }] + + +class TikTokTagIE(TikTokBaseListIE): + IE_NAME = 'tiktok:tag' + _VALID_URL = r'https?://(?:www\.)?tiktok\.com/tag/(?P<id>[^/?#&]+)' + _WORKING = False + _QUERY_NAME = 'ch_id' + _API_ENDPOINT = 'challenge/aweme' + _TESTS = [{ + 'url': 'https://tiktok.com/tag/hello2018', + 'playlist_mincount': 39, + 'info_dict': { + 'id': '46294678', + 'title': 'hello2018', + }, + 'expected_warnings': ['Retrying'] + }, { + 'url': 'https://tiktok.com/tag/fypシ?is_copy_url=0&is_from_webapp=v1', + 'only_matching': True + }] + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id, headers={ + 'User-Agent': 'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)' + }) + tag_id = self._html_search_regex(r'snssdk\d*://challenge/detail/(\d+)', webpage, 'tag ID') + return self.playlist_result(self._entries(tag_id, display_id), tag_id, display_id) + + +class DouyinIE(TikTokBaseIE): + _VALID_URL = r'https?://(?:www\.)?douyin\.com/video/(?P<id>[0-9]+)' + _TESTS = [{ + 'url': 'https://www.douyin.com/video/6961737553342991651', + 'md5': 'a97db7e3e67eb57bf40735c022ffa228', + 'info_dict': { + 'id': '6961737553342991651', + 'ext': 'mp4', + 'title': '#杨超越 小小水手带你去远航❤️', + 'description': '#杨超越 小小水手带你去远航❤️', + 'uploader_id': '110403406559', + 'uploader_url': 'https://www.douyin.com/user/MS4wLjABAAAAEKnfa654JAJ_N5lgZDQluwsxmY0lhfmEYNQBBkwGG98', + 'channel_id': 'MS4wLjABAAAAEKnfa654JAJ_N5lgZDQluwsxmY0lhfmEYNQBBkwGG98', + 'creator': '杨超越', + 'duration': 19782, + 'timestamp': 1620905839, + 'upload_date': '20210513', + 'track': '@杨超越创作的原声', +
'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'thumbnail': r're:https?://.+\.jpe?g', + }, + }, { + 'url': 'https://www.douyin.com/video/6982497745948921092', + 'md5': '34a87ebff3833357733da3fe17e37c0e', + 'info_dict': { + 'id': '6982497745948921092', + 'ext': 'mp4', + 'title': '这个夏日和小羊@杨超越 一起遇见白色幻想', + 'description': '这个夏日和小羊@杨超越 一起遇见白色幻想', + 'uploader_id': '408654318141572', + 'uploader_url': 'https://www.douyin.com/user/MS4wLjABAAAAZJpnglcjW2f_CMVcnqA_6oVBXKWMpH0F8LIHuUu8-lA', + 'channel_id': 'MS4wLjABAAAAZJpnglcjW2f_CMVcnqA_6oVBXKWMpH0F8LIHuUu8-lA', + 'creator': '杨超越工作室', + 'duration': 42479, + 'timestamp': 1625739481, + 'upload_date': '20210708', + 'track': '@杨超越工作室创作的原声', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'thumbnail': r're:https?://.+\.jpe?g', + }, + }, { + 'url': 'https://www.douyin.com/video/6953975910773099811', + 'md5': 'dde3302460f19db59c47060ff013b902', + 'info_dict': { + 'id': '6953975910773099811', + 'ext': 'mp4', + 'title': '#一起看海 出现在你的夏日里', + 'description': '#一起看海 出现在你的夏日里', + 'uploader_id': '110403406559', + 'uploader_url': 'https://www.douyin.com/user/MS4wLjABAAAAEKnfa654JAJ_N5lgZDQluwsxmY0lhfmEYNQBBkwGG98', + 'channel_id': 'MS4wLjABAAAAEKnfa654JAJ_N5lgZDQluwsxmY0lhfmEYNQBBkwGG98', + 'creator': '杨超越', + 'duration': 17343, + 'timestamp': 1619098692, + 'upload_date': '20210422', + 'track': '@杨超越创作的原声', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'thumbnail': r're:https?://.+\.jpe?g', + }, + }, { + 'url': 'https://www.douyin.com/video/6950251282489675042', + 'md5': 'b4db86aec367ef810ddd38b1737d2fed', + 'info_dict': { + 'id': '6950251282489675042', + 'ext': 'mp4', + 'title': '哈哈哈,成功了哈哈哈哈哈哈', + 'uploader': '杨超越', + 'upload_date': '20210412', + 'timestamp': 1618231483, + 'uploader_id': '110403406559', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + }, + 'skip': 'No longer available', + }, { + 'url': 'https://www.douyin.com/video/6963263655114722595', + 'md5': 'cf9f11f0ec45d131445ec2f06766e122', + 'info_dict': { + 'id': '6963263655114722595', + 'ext': 'mp4', + 'title': '#哪个爱豆的105度最甜 换个角度看看我哈哈', + 'description': '#哪个爱豆的105度最甜 换个角度看看我哈哈', + 'uploader_id': '110403406559', + 'uploader_url': 'https://www.douyin.com/user/MS4wLjABAAAAEKnfa654JAJ_N5lgZDQluwsxmY0lhfmEYNQBBkwGG98', + 'channel_id': 'MS4wLjABAAAAEKnfa654JAJ_N5lgZDQluwsxmY0lhfmEYNQBBkwGG98', + 'creator': '杨超越', + 'duration': 15115, + 'timestamp': 1621261163, + 'upload_date': '20210517', + 'track': '@杨超越创作的原声', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'thumbnail': r're:https?://.+\.jpe?g', + }, + }] + _APP_VERSIONS = [('23.3.0', '230300')] + _APP_NAME = 'aweme' + _AID = 1128 + _API_HOSTNAME = 'aweme.snssdk.com' + _UPLOADER_URL_FORMAT = 'https://www.douyin.com/user/%s' + _WEBPAGE_HOST = 'https://www.douyin.com/' + + def _real_extract(self, url): + video_id = self._match_id(url) + + try: + return self._extract_aweme_app(video_id) + except ExtractorError as e: + e.expected = True + self.to_screen(f'{e}; trying with webpage') + + webpage = self._download_webpage(url, video_id) + render_data = self._search_json( + r'<script [^>]*\bid=[\'"]RENDER_DATA[\'"][^>]*>', webpage, 'render data', video_id, + contains_pattern=r'%7B(?s:.+)%7D', fatal=False,
transform_source=compat_urllib_parse_unquote) + if not render_data: + # TODO: Run verification challenge code to generate signature cookies + cookies = self._get_cookies(self._WEBPAGE_HOST) + expected = not cookies.get('s_v_web_id') or not cookies.get('ttwid') + raise ExtractorError( + 'Fresh cookies (not necessarily logged in) are needed', expected=expected) + + return self._parse_aweme_video_web(get_first(render_data, ('aweme', 'detail')), url, video_id) + + +class TikTokVMIE(InfoExtractor): + _VALID_URL = r'https?://(?:(?:vm|vt)\.tiktok\.com|(?:www\.)tiktok\.com/t)/(?P<id>\w+)' + IE_NAME = 'vm.tiktok' + + _TESTS = [{ + 'url': 'https://www.tiktok.com/t/ZTRC5xgJp', + 'info_dict': { + 'id': '7170520270497680683', + 'ext': 'mp4', + 'title': 'md5:c64f6152330c2efe98093ccc8597871c', + 'uploader_id': '6687535061741700102', + 'upload_date': '20221127', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAAObqu3WCTXxmw2xwZ3iLEHnEecEIw7ks6rxWqOqOhaPja9BI7gqUQnjw8_5FSoDXX', + 'album': 'Wave of Mutilation: Best of Pixies', + 'thumbnail': r're:https://.+\.webp.*', + 'duration': 5, + 'timestamp': 1669516858, + 'repost_count': int, + 'artist': 'Pixies', + 'track': 'Where Is My Mind?', + 'description': 'md5:c64f6152330c2efe98093ccc8597871c', + 'uploader': 'sigmachaddeus', + 'creator': 'SigmaChad', + }, + }, { + 'url': 'https://vm.tiktok.com/ZTR45GpSF/', + 'info_dict': { + 'id': '7106798200794926362', + 'ext': 'mp4', + 'title': 'md5:edc3e7ea587847f8537468f2fe51d074', + 'uploader_id': '6997695878846268418', + 'upload_date': '20220608', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'thumbnail': r're:https://.+\.webp.*', + 'uploader_url': 'https://www.tiktok.com/@MS4wLjABAAAAdZ_NcPPgMneaGrW0hN8O_J_bwLshwNNERRF5DxOw2HKIzk0kdlLrR8RkVl1ksrMO', + 'duration': 29, + 'timestamp': 1654680400, + 'repost_count': int, + 'artist': 'Akihitoko', + 'track': 'original sound', + 'description': 'md5:edc3e7ea587847f8537468f2fe51d074', + 'uploader': 'akihitoko1', + 'creator': 'Akihitoko', + }, + }, { + 'url': 'https://vt.tiktok.com/ZSe4FqkKd', + 'only_matching': True, + }] + + def _real_extract(self, url): + new_url = self._request_webpage( + HEADRequest(url), self._match_id(url), headers={'User-Agent': 'facebookexternalhit/1.1'}).url + if self.suitable(new_url): # Prevent infinite loop in case redirect fails + raise UnsupportedError(new_url) + return self.url_result(new_url) + + +class TikTokLiveIE(TikTokBaseIE): + _VALID_URL = r'''(?x)https?://(?: + (?:www\.)?tiktok\.com/@(?P<uploader>[\w.-]+)/live| + m\.tiktok\.com/share/live/(?P<id>\d+) + )''' + IE_NAME = 'tiktok:live' + + _TESTS = [{ + 'url': 'https://www.tiktok.com/@weathernewslive/live', + 'info_dict': { + 'id': '7210809319192726273', + 'ext': 'mp4', + 'title': r're:ウェザーニュースLiVE[\d\s:-]*', + 'creator': 'ウェザーニュースLiVE', + 'uploader': 'weathernewslive', + 'uploader_id': '6621496731283095554', + 'uploader_url': 'https://www.tiktok.com/@weathernewslive', + 'live_status': 'is_live', + 'concurrent_view_count': int, + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'url': 'https://www.tiktok.com/@pilarmagenta/live', + 'info_dict': { + 'id': '7209423610325322522', + 'ext': 'mp4', + 'title': str, + 'creator': 'Pilarmagenta', + 'uploader': 'pilarmagenta', + 'uploader_id': '6624846890674683909', + 'uploader_url': 'https://www.tiktok.com/@pilarmagenta', + 'live_status': 'is_live', + 'concurrent_view_count': int, + }, + 'skip': 'Livestream', + }, { + 'url': 
'https://m.tiktok.com/share/live/7209423610325322522/?language=en', + 'only_matching': True, + }, { + 'url': 'https://www.tiktok.com/@iris04201/live', + 'only_matching': True, + }] + + def _call_api(self, url, param, room_id, uploader, key=None): + response = traverse_obj(self._download_json( + url, room_id, fatal=False, query={ + 'aid': '1988', + param: room_id, + }), (key, {dict}), default={}) + + # status == 2 if live else 4 + if int_or_none(response.get('status')) == 2: + return response + # If room_id is obtained via mobile share URL and cannot be refreshed, do not wait for live + elif not uploader: + raise ExtractorError('This livestream has ended', expected=True) + raise UserNotLive(video_id=uploader) + + def _real_extract(self, url): + uploader, room_id = self._match_valid_url(url).group('uploader', 'id') + webpage = self._download_webpage( + url, uploader or room_id, headers={'User-Agent': 'Mozilla/5.0'}, fatal=not room_id) + + if webpage: + data = try_call(lambda: self._get_sigi_state(webpage, uploader or room_id)) + room_id = (traverse_obj(data, ('UserModule', 'users', ..., 'roomId', {str_or_none}), get_all=False) + or self._search_regex(r'snssdk\d*://live\?room_id=(\d+)', webpage, 'room ID', default=None) + or room_id) + uploader = uploader or traverse_obj( + data, ('LiveRoom', 'liveRoomUserInfo', 'user', 'uniqueId'), + ('UserModule', 'users', ..., 'uniqueId'), get_all=False, expected_type=str) + + if not room_id: + raise UserNotLive(video_id=uploader) + + formats = [] + live_info = self._call_api( + 'https://webcast.tiktok.com/webcast/room/info', 'room_id', room_id, uploader, key='data') + + get_quality = qualities(('SD1', 'ld', 'SD2', 'sd', 'HD1', 'hd', 'FULL_HD1', 'uhd', 'ORIGION', 'origin')) + parse_inner = lambda x: self._parse_json(x, None) + + for quality, stream in traverse_obj(live_info, ( + 'stream_url', 'live_core_sdk_data', 'pull_data', 'stream_data', + {parse_inner}, 'data', {dict}), default={}).items(): + + sdk_params = traverse_obj(stream, ('main', 'sdk_params', {parse_inner}, { + 'vcodec': ('VCodec', {str}), + 'tbr': ('vbitrate', {lambda x: int_or_none(x, 1000)}), + 'resolution': ('resolution', {lambda x: re.match(r'(?i)\d+x\d+|\d+p', x).group().lower()}), + })) + + flv_url = traverse_obj(stream, ('main', 'flv', {url_or_none})) + if flv_url: + formats.append({ + 'url': flv_url, + 'ext': 'flv', + 'format_id': f'flv-{quality}', + 'quality': get_quality(quality), + **sdk_params, + }) + + hls_url = traverse_obj(stream, ('main', 'hls', {url_or_none})) + if hls_url: + formats.append({ + 'url': hls_url, + 'ext': 'mp4', + 'protocol': 'm3u8_native', + 'format_id': f'hls-{quality}', + 'quality': get_quality(quality), + **sdk_params, + }) + + def get_vcodec(*keys): + return traverse_obj(live_info, ( + 'stream_url', *keys, {parse_inner}, 'VCodec', {str})) + + for stream in ('hls', 'rtmp'): + stream_url = traverse_obj(live_info, ('stream_url', f'{stream}_pull_url', {url_or_none})) + if stream_url: + formats.append({ + 'url': stream_url, + 'ext': 'mp4' if stream == 'hls' else 'flv', + 'protocol': 'm3u8_native' if stream == 'hls' else 'https', + 'format_id': f'{stream}-pull', + 'vcodec': get_vcodec(f'{stream}_pull_url_params'), + 'quality': get_quality('ORIGION'), + }) + + for f_id, f_url in traverse_obj(live_info, ('stream_url', 'flv_pull_url', {dict}), default={}).items(): + if not url_or_none(f_url): + continue + formats.append({ + 'url': f_url, + 'ext': 'flv', + 'format_id': f'flv-{f_id}'.lower(), + 'vcodec': get_vcodec('flv_pull_url_params', f_id), + 'quality': 
get_quality(f_id), + }) + + # If uploader is a guest on another's livestream, primary endpoint will not have m3u8 URLs + if not traverse_obj(formats, lambda _, v: v['ext'] == 'mp4'): + live_info = merge_dicts(live_info, self._call_api( + 'https://www.tiktok.com/api/live/detail/', 'roomID', room_id, uploader, key='LiveRoomInfo')) + if url_or_none(live_info.get('liveUrl')): + formats.append({ + 'url': live_info['liveUrl'], + 'ext': 'mp4', + 'protocol': 'm3u8_native', + 'format_id': 'hls-fallback', + 'vcodec': 'h264', + 'quality': get_quality('origin'), + }) + + uploader = uploader or traverse_obj(live_info, ('ownerInfo', 'uniqueId'), ('owner', 'display_id')) + + return { + 'id': room_id, + 'uploader': uploader, + 'uploader_url': format_field(uploader, None, self._UPLOADER_URL_FORMAT) or None, + 'is_live': True, + 'formats': formats, + '_format_sort_fields': ('quality', 'ext'), + **traverse_obj(live_info, { + 'title': 'title', + 'uploader_id': (('ownerInfo', 'owner'), 'id', {str_or_none}), + 'creator': (('ownerInfo', 'owner'), 'nickname'), + 'concurrent_view_count': (('user_count', ('liveRoomStats', 'userCount')), {int_or_none}), + }, get_all=False), + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tinypic.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tinypic.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tinypic.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tinypic.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tmz.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tmz.py new file mode 100644 index 0000000..edd16bc --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tmz.py @@ -0,0 +1,193 @@ +import re + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + get_element_by_attribute, +) + + +class TMZIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?tmz\.com/.*' + _TESTS = [ + { + 'url': 'http://www.tmz.com/videos/0-cegprt2p/', + 'info_dict': { + 'id': 'http://www.tmz.com/videos/0-cegprt2p/', + 'ext': 'mp4', + 'title': 'No Charges Against Hillary Clinton? Harvey Says It Ain\'t Over Yet', + 'description': 'Harvey talks about Director Comey’s decision not to prosecute Hillary Clinton.', + 'timestamp': 1467831837, + 'uploader': 'TMZ Staff', + 'upload_date': '20160706', + 'thumbnail': 'https://imagez.tmz.com/image/5e/4by3/2016/07/06/5eea7dc01baa5c2e83eb06930c170e46_xl.jpg', + 'duration': 772.0, + }, + }, + { + 'url': 'https://www.tmz.com/videos/071119-chris-morgan-women-4590005-0-zcsejvcr/', + 'info_dict': { + 'id': 'https://www.tmz.com/videos/071119-chris-morgan-women-4590005-0-zcsejvcr/', + 'ext': 'mp4', + 'title': 'Angry Bagel Shop Guy Says He Doesn\'t Trust Women', + 'description': 'The enraged man who went viral for ranting about women on dating sites before getting ragdolled in a bagel shop is defending his misogyny ... he says it\'s women\'s fault in the first place.', + 'timestamp': 1562889485, + 'uploader': 'TMZ Staff', + 'upload_date': '20190711', + 'thumbnail': 'https://imagez.tmz.com/image/a8/4by3/2019/07/12/a85480d27b2f50a7bfea2322151d67a5_xl.jpg', + 'duration': 123.0, + }, + }, + { + 'url': 'http://www.tmz.com/2015/04/19/bobby-brown-bobbi-kristina-awake-video-concert', + 'md5': '5429c85db8bde39a473a56ca8c4c5602', + 'info_dict': { + 'id': 'http://www.tmz.com/2015/04/19/bobby-brown-bobbi-kristina-awake-video-concert', + 'ext': 'mp4', + 'title': 'Bobby Brown Tells Crowd ... 
Bobbi Kristina is Awake', + 'description': 'Bobby Brown stunned his audience during a concert Saturday night, when he told the crowd, "Bobbi is awake. She\'s watching me."', + 'timestamp': 1429467813, + 'uploader': 'TMZ Staff', + 'upload_date': '20150419', + 'duration': 29.0, + 'thumbnail': 'https://imagez.tmz.com/image/15/4by3/2015/04/20/1539c7ae136359fc979236fa6a9449dd_xl.jpg', + }, + }, + { + 'url': 'http://www.tmz.com/2015/09/19/patti-labelle-concert-fan-stripping-kicked-out-nicki-minaj/', + 'info_dict': { + 'id': 'http://www.tmz.com/2015/09/19/patti-labelle-concert-fan-stripping-kicked-out-nicki-minaj/', + 'ext': 'mp4', + 'title': 'Patti LaBelle -- Goes Nuclear On Stripping Fan', + 'description': 'Patti LaBelle made it known loud and clear last night ... NO ' + 'ONE gets on her stage and strips down.', + 'timestamp': 1442683746, + 'uploader': 'TMZ Staff', + 'upload_date': '20150919', + 'duration': 104.0, + 'thumbnail': 'https://imagez.tmz.com/image/5e/4by3/2015/09/20/5e57d7575062528082994e18ac3f0f48_xl.jpg', + }, + }, + { + 'url': 'http://www.tmz.com/2016/01/28/adam-silver-sting-drake-blake-griffin/', + 'info_dict': { + 'id': 'http://www.tmz.com/2016/01/28/adam-silver-sting-drake-blake-griffin/', + 'ext': 'mp4', + 'title': 'NBA\'s Adam Silver -- Blake Griffin\'s a Great Guy ... He\'ll Learn from This', + 'description': 'Two pretty parts of this video with NBA Commish Adam Silver.', + 'timestamp': 1454010989, + 'uploader': 'TMZ Staff', + 'upload_date': '20160128', + 'duration': 59.0, + 'thumbnail': 'https://imagez.tmz.com/image/38/4by3/2016/01/29/3856e83e0beb57059ec412122b842fb1_xl.jpg', + }, + }, + { + 'url': 'http://www.tmz.com/2016/10/27/donald-trump-star-vandal-arrested-james-otis/', + 'info_dict': { + 'id': 'http://www.tmz.com/2016/10/27/donald-trump-star-vandal-arrested-james-otis/', + 'ext': 'mp4', + 'title': 'Trump Star Vandal -- I\'m Not Afraid of Donald or the Cops!', + 'description': 'James Otis is the the guy who took a pickaxe to Donald Trump\'s star on the Walk of Fame, and he tells TMZ .. 
he\'s ready and willing to go to jail for the crime.', + 'timestamp': 1477500095, + 'uploader': 'TMZ Staff', + 'upload_date': '20161026', + 'thumbnail': 'https://imagez.tmz.com/image/0d/4by3/2016/10/27/0d904814d4a75dcf9cc3b8cfd1edc1a3_xl.jpg', + 'duration': 128.0, + }, + }, + { + 'url': 'https://www.tmz.com/videos/2020-10-31-103120-beverly-hills-protest-4878209/', + 'info_dict': { + 'id': 'https://www.tmz.com/videos/2020-10-31-103120-beverly-hills-protest-4878209/', + 'ext': 'mp4', + 'title': 'Cops Use Billy Clubs Against Pro-Trump and Anti-Fascist ' + 'Demonstrators', + 'description': 'Beverly Hills may be an omen of what\'s coming next week, ' + 'because things got crazy on the streets and cops started ' + 'swinging their billy clubs at both Anti-Fascist and Pro-Trump ' + 'demonstrators.', + 'timestamp': 1604182772, + 'uploader': 'TMZ Staff', + 'upload_date': '20201031', + 'duration': 96.0, + 'thumbnail': 'https://imagez.tmz.com/image/f3/4by3/2020/10/31/f37bd5a8aef84497866f425130c58be3_xl.jpg', + }, + }, + { + 'url': 'https://www.tmz.com/2020/11/05/gervonta-davis-car-crash-hit-and-run-police/', + 'info_dict': { + 'id': 'Dddb6IGe-ws', + 'ext': 'mp4', + 'title': 'SICK LAMBO GERVONTA DAVIS IN HIS NEW RIDE RIGHT AFTER KO AFTER LEO EsNews Boxing', + 'uploader': 'ESNEWS', + 'description': 'md5:49675bc58883ccf80474b8aa701e1064', + 'upload_date': '20201102', + 'uploader_id': '@ESNEWS', + 'uploader_url': 'https://www.youtube.com/@ESNEWS', + 'like_count': int, + 'channel_id': 'UCI-Oq7oFGakzSzHFlTtsUsQ', + 'channel': 'ESNEWS', + 'view_count': int, + 'duration': 225, + 'live_status': 'not_live', + 'thumbnail': 'https://i.ytimg.com/vi_webp/Dddb6IGe-ws/maxresdefault.webp', + 'channel_url': 'https://www.youtube.com/channel/UCI-Oq7oFGakzSzHFlTtsUsQ', + 'channel_follower_count': int, + 'playable_in_embed': True, + 'categories': ['Sports'], + 'age_limit': 0, + 'tags': 'count:10', + 'availability': 'public', + 'comment_count': int, + }, + }, + { + 'url': 'https://www.tmz.com/2020/11/19/conor-mcgregor-dustin-poirier-contract-fight-ufc-257-fight-island/', + 'info_dict': { + 'id': '1329448013937471491', + 'ext': 'mp4', + 'title': 'The Mac Life - BREAKING: Conor McGregor (@thenotoriousmma) has signed his bout agreement for his rematch with Dustin Poirier for January 23.', + 'uploader': 'The Mac Life', + 'description': 'md5:56e6009bbc3d12498e10d08a8e1f1c69', + 'upload_date': '20201119', + 'display_id': '1329450007125225473', + 'uploader_id': 'TheMacLife', + 'timestamp': 1605800556, + 'thumbnail': 'https://pbs.twimg.com/media/EnMmfT8XYAExgxJ.jpg?name=small', + 'like_count': int, + 'duration': 11.812, + 'uploader_url': 'https://twitter.com/TheMacLife', + 'age_limit': 0, + 'repost_count': int, + 'tags': [], + 'comment_count': int, + }, + }, + ] + + def _real_extract(self, url): + webpage = self._download_webpage(url, url) + jsonld = self._search_json_ld(webpage, url) + if not jsonld or 'url' not in jsonld: + # try to extract from YouTube Player API + # see https://developers.google.com/youtube/iframe_api_reference#Video_Queueing_Functions + match_obj = re.search(r'\.cueVideoById\(\s*(?P<quote>[\'"])(?P<id>.*?)(?P=quote)', webpage) + if match_obj: + res = self.url_result(match_obj.group('id')) + return res + # try to extract from twitter + blockquote_el = get_element_by_attribute('class', 'twitter-tweet', webpage) + if blockquote_el: + matches = re.findall( + r'<a[^>]+href=\s*(?P<quote>[\'"])(?P<link>.*?)(?P=quote)', + blockquote_el) + if matches: + for _, match in matches: + if '/status/' in match: + res = 
self.url_result(match) + return res + raise ExtractorError('No video found!') + if 'id' not in jsonld: + jsonld['id'] = url + return jsonld diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tnaflix.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tnaflix.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tnaflix.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tnaflix.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/toggle.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/toggle.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/toggle.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/toggle.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/toggo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/toggo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/toggo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/toggo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tokentube.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tokentube.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tokentube.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tokentube.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tonline.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tonline.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tonline.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tonline.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/toongoggles.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/toongoggles.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/toongoggles.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/toongoggles.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/toutv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/toutv.py new file mode 100644 index 0000000..ced1224 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/toutv.py @@ -0,0 +1,87 @@ +import json + +from .radiocanada import RadioCanadaIE +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + int_or_none, + merge_dicts, +) + + +class TouTvIE(RadioCanadaIE): # XXX: Do not subclass from concrete IE + _NETRC_MACHINE = 'toutv' + IE_NAME = 'tou.tv' + _VALID_URL = r'https?://ici\.tou\.tv/(?P<id>[a-zA-Z0-9_-]+(?:/S[0-9]+[EC][0-9]+)?)' + + _TESTS = [{ + 'url': 'http://ici.tou.tv/garfield-tout-court/S2015E17', + 'info_dict': { + 'id': '122017', + 'ext': 'mp4', + 'title': 'Saison 2015 Épisode 17', + 'description': 'La photo de famille 2', + 'upload_date': '20100717', + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + 'skip': '404 Not Found', + }, { + 'url': 'http://ici.tou.tv/hackers', + 'only_matching': True, + }, { + 'url': 'https://ici.tou.tv/l-age-adulte/S01C501', + 'only_matching': True, + }] + _CLIENT_KEY = '90505c8d-9c34-4f34-8da1-3a85bdc6d4f4' + + def _perform_login(self, username, password): + try: + self._access_token = self._download_json( + 'https://services.radio-canada.ca/toutv/profiling/accounts/login', + None, 'Logging in', data=json.dumps({ + 'ClientId': self._CLIENT_KEY, + 'ClientSecret': '34026772-244b-49b6-8b06-317b30ac9a20', + 'Email': username, + 'Password': password, + 'Scope': 'id.write media-validation.read',
+ }).encode(), headers={ + 'Authorization': 'client-key ' + self._CLIENT_KEY, + 'Content-Type': 'application/json;charset=utf-8', + })['access_token'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + error = self._parse_json(e.cause.response.read().decode(), None)['Message'] + raise ExtractorError(error, expected=True) + raise + self._claims = self._call_api('validation/v2/getClaims')['claims'] + + def _real_extract(self, url): + path = self._match_id(url) + metadata = self._download_json( + 'https://services.radio-canada.ca/toutv/presentation/%s' % path, path, query={ + 'client_key': self._CLIENT_KEY, + 'device': 'web', + 'version': 4, + }) + # IsDrm does not necessarily mean the video is DRM protected (see + # https://github.com/ytdl-org/youtube-dl/issues/13994). + if not self.get_param('allow_unplayable_formats') and metadata.get('IsDrm'): + self.report_warning('This video is probably DRM protected.', path) + video_id = metadata['IdMedia'] + details = metadata['Details'] + + return merge_dicts({ + 'id': video_id, + 'title': details.get('OriginalTitle'), + 'description': details.get('Description'), + 'thumbnail': details.get('ImageUrl'), + 'duration': int_or_none(details.get('LengthInSeconds')), + 'series': metadata.get('ProgramTitle'), + 'season_number': int_or_none(metadata.get('SeasonNumber')), + 'season': metadata.get('SeasonTitle'), + 'episode_number': int_or_none(metadata.get('EpisodeNumber')), + 'episode': metadata.get('EpisodeTitle'), + }, self._extract_info(metadata.get('AppCode', 'toutv'), video_id)) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/toypics.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/toypics.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/toypics.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/toypics.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/traileraddict.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/traileraddict.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/traileraddict.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/traileraddict.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/triller.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/triller.py new file mode 100644 index 0000000..56e51fe --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/triller.py @@ -0,0 +1,329 @@ +import itertools +import json +import re + +from .common import InfoExtractor +from ..networking import HEADRequest +from ..utils import ( + ExtractorError, + UnsupportedError, + determine_ext, + int_or_none, + parse_resolution, + str_or_none, + traverse_obj, + unified_timestamp, + url_basename, + urljoin, + url_or_none, +) + + +class TrillerBaseIE(InfoExtractor): + _NETRC_MACHINE = 'triller' + _API_BASE_URL = 'https://social.triller.co/v1.5' + _API_HEADERS = {'Origin': 'https://triller.co'} + + def _perform_login(self, username, password): + if self._API_HEADERS.get('Authorization'): + return + + headers = {**self._API_HEADERS, 'Content-Type': 'application/json'} + user_check = traverse_obj(self._download_json( + f'{self._API_BASE_URL}/api/user/is-valid-username', None, note='Checking username', + fatal=False, expected_status=400, headers=headers, + data=json.dumps({'username': username}, separators=(',', ':')).encode()), 'status') + + if user_check: # endpoint returns `"status":false` if username exists + raise ExtractorError('Unable to 
login: Invalid username', expected=True) + + login = self._download_json( + f'{self._API_BASE_URL}/user/auth', None, note='Logging in', fatal=False, + expected_status=400, headers=headers, data=json.dumps({ + 'username': username, + 'password': password, + }, separators=(',', ':')).encode()) or {} + + if not login.get('auth_token'): + if login.get('error') == 1008: + raise ExtractorError('Unable to login: Incorrect password', expected=True) + raise ExtractorError('Unable to login') + + self._API_HEADERS['Authorization'] = f'Bearer {login["auth_token"]}' + + def _get_comments(self, video_id, limit=15): + comment_info = self._download_json( + f'{self._API_BASE_URL}/api/videos/{video_id}/comments_v2', + video_id, fatal=False, note='Downloading comments API JSON', + headers=self._API_HEADERS, query={'limit': limit}) or {} + if not comment_info.get('comments'): + return + yield from traverse_obj(comment_info, ('comments', ..., { + 'id': ('id', {str_or_none}), + 'text': 'body', + 'author': ('author', 'username'), + 'author_id': ('author', 'user_id'), + 'timestamp': ('timestamp', {unified_timestamp}), + })) + + def _parse_video_info(self, video_info, username, user_id, display_id=None): + video_id = str(video_info['id']) + display_id = display_id or video_info.get('video_uuid') + + if traverse_obj(video_info, ( + None, ('transcoded_url', 'video_url', 'stream_url', 'audio_url'), + {lambda x: re.search(r'/copyright/', x)}), get_all=False): + self.raise_no_formats('This video has been removed due to licensing restrictions', expected=True) + + def format_info(url): + return { + 'url': url, + 'ext': determine_ext(url), + 'format_id': url_basename(url).split('.')[0], + } + + formats = [] + + if determine_ext(video_info.get('transcoded_url')) == 'm3u8': + formats.extend(self._extract_m3u8_formats( + video_info['transcoded_url'], video_id, 'mp4', m3u8_id='hls', fatal=False)) + + for video in traverse_obj(video_info, ('video_set', lambda _, v: url_or_none(v['url']))): + formats.append({ + **format_info(video['url']), + **parse_resolution(video.get('resolution')), + 'vcodec': video.get('codec'), + 'vbr': int_or_none(video.get('bitrate'), 1000), + }) + + video_url = traverse_obj(video_info, 'video_url', 'stream_url', expected_type=url_or_none) + if video_url: + formats.append({ + **format_info(video_url), + 'vcodec': 'h264', + **traverse_obj(video_info, { + 'width': 'width', + 'height': 'height', + 'filesize': 'filesize', + }, expected_type=int_or_none), + }) + + audio_url = url_or_none(video_info.get('audio_url')) + if audio_url: + formats.append(format_info(audio_url)) + + comment_count = traverse_obj(video_info, ('comment_count', {int_or_none})) + + return { + 'id': video_id, + 'display_id': display_id, + 'uploader': username, + 'uploader_id': user_id or traverse_obj(video_info, ('user', 'user_id', {str_or_none})), + 'webpage_url': urljoin(f'https://triller.co/@{username}/video/', display_id), + 'uploader_url': f'https://triller.co/@{username}', + 'extractor_key': TrillerIE.ie_key(), + 'extractor': TrillerIE.IE_NAME, + 'formats': formats, + 'comment_count': comment_count, + '__post_extractor': self.extract_comments(video_id, comment_count), + **traverse_obj(video_info, { + 'title': ('description', {lambda x: x.replace('\r\n', ' ')}), + 'description': 'description', + 'creator': ((('user'), ('users', lambda _, v: str(v['user_id']) == user_id)), 'name'), + 'thumbnail': ('thumbnail_url', {url_or_none}), + 'timestamp': ('timestamp', {unified_timestamp}), + 'duration': ('duration', {int_or_none}), + 
'view_count': ('play_count', {int_or_none}), + 'like_count': ('likes_count', {int_or_none}), + 'artist': 'song_artist', + 'track': 'song_title', + }, get_all=False), + } + + +class TrillerIE(TrillerBaseIE): + _VALID_URL = r'''(?x) + https?://(?:www\.)?triller\.co/ + @(?P<username>[\w.]+)/video/(?P<id>[\da-f]{8}-(?:[\da-f]{4}-){3}[\da-f]{12}) + ''' + _TESTS = [{ + 'url': 'https://triller.co/@theestallion/video/2358fcd7-3df2-4c77-84c8-1d091610a6cf', + 'md5': '228662d783923b60d78395fedddc0a20', + 'info_dict': { + 'id': '71595734', + 'ext': 'mp4', + 'title': 'md5:9a2bf9435c5c4292678996a464669416', + 'thumbnail': r're:^https://uploads\.cdn\.triller\.co/.+\.jpg$', + 'description': 'md5:9a2bf9435c5c4292678996a464669416', + 'uploader': 'theestallion', + 'uploader_id': '18992236', + 'creator': 'Megan Thee Stallion', + 'timestamp': 1660598222, + 'upload_date': '20220815', + 'duration': 47, + 'view_count': int, + 'like_count': int, + 'artist': 'Megan Thee Stallion', + 'track': 'Her', + 'uploader_url': 'https://triller.co/@theestallion', + 'comment_count': int, + }, + 'skip': 'This video has been removed due to licensing restrictions', + }, { + 'url': 'https://triller.co/@charlidamelio/video/46c6fcfa-aa9e-4503-a50c-68444f44cddc', + 'md5': '874055f462af5b0699b9dbb527a505a0', + 'info_dict': { + 'id': '71621339', + 'ext': 'mp4', + 'title': 'md5:4c91ea82760fe0fffb71b8c3aa7295fc', + 'display_id': '46c6fcfa-aa9e-4503-a50c-68444f44cddc', + 'thumbnail': r're:^https://uploads\.cdn\.triller\.co/.+\.jpg$', + 'description': 'md5:4c91ea82760fe0fffb71b8c3aa7295fc', + 'uploader': 'charlidamelio', + 'uploader_id': '1875551', + 'creator': 'charli damelio', + 'timestamp': 1660773354, + 'upload_date': '20220817', + 'duration': 16, + 'view_count': int, + 'like_count': int, + 'artist': 'Dixie', + 'track': 'Someone to Blame', + 'uploader_url': 'https://triller.co/@charlidamelio', + 'comment_count': int, + }, + }, { + 'url': 'https://triller.co/@theestallion/video/07f35f38-1f51-48e2-8c5f-f7a8e829988f', + 'md5': 'af7b3553e4b8bfca507636471ee2eb41', + 'info_dict': { + 'id': '71837829', + 'ext': 'mp4', + 'title': 'UNGRATEFUL VIDEO OUT NOW 👏🏾👏🏾👏🏾 💙💙 link my bio #womeninhiphop', + 'display_id': '07f35f38-1f51-48e2-8c5f-f7a8e829988f', + 'thumbnail': r're:^https://uploads\.cdn\.triller\.co/.+\.jpg$', + 'description': 'UNGRATEFUL VIDEO OUT NOW 👏🏾👏🏾👏🏾 💙💙 link my bio\r\n #womeninhiphop', + 'uploader': 'theestallion', + 'uploader_id': '18992236', + 'creator': 'Megan Thee Stallion', + 'timestamp': 1662486178, + 'upload_date': '20220906', + 'duration': 30, + 'view_count': int, + 'like_count': int, + 'artist': 'Unknown', + 'track': 'Unknown', + 'uploader_url': 'https://triller.co/@theestallion', + 'comment_count': int, + }, + }] + + def _real_extract(self, url): + username, display_id = self._match_valid_url(url).group('username', 'id') + + video_info = self._download_json( + f'{self._API_BASE_URL}/api/videos/{display_id}', display_id, + headers=self._API_HEADERS)['videos'][0] + + return self._parse_video_info(video_info, username, None, display_id) + + +class TrillerUserIE(TrillerBaseIE): + _VALID_URL = r'https?://(?:www\.)?triller\.co/@(?P<id>[\w.]+)/?(?:$|[#?])' + _TESTS = [{ + 'url': 'https://triller.co/@theestallion', + 'playlist_mincount': 12, + 'info_dict': { + 'id': '18992236', + 'title': 'theestallion', + 'thumbnail': r're:^https://uploads\.cdn\.triller\.co/.+\.jpg$', + }, + }, { + 'url': 'https://triller.co/@charlidamelio', + 'playlist_mincount': 150, + 'info_dict': { + 'id': '1875551', + 'title':
'charlidamelio', + 'thumbnail': r're:^https://uploads\.cdn\.triller\.co/.+\.jpg$', + }, + }] + + def _real_initialize(self): + if not self._API_HEADERS.get('Authorization'): + guest = self._download_json( + f'{self._API_BASE_URL}/user/create_guest', None, + note='Creating guest session', data=b'', headers=self._API_HEADERS, query={ + 'platform': 'Web', + 'app_version': '', + }) + if not guest.get('auth_token'): + raise ExtractorError('Unable to fetch required auth token for user extraction') + + self._API_HEADERS['Authorization'] = f'Bearer {guest["auth_token"]}' + + def _entries(self, username, user_id, limit=6): + query = {'limit': limit} + for page in itertools.count(1): + videos = self._download_json( + f'{self._API_BASE_URL}/api/users/{user_id}/videos', + username, note=f'Downloading user video list page {page}', + headers=self._API_HEADERS, query=query) + + for video in traverse_obj(videos, ('videos', ...)): + yield self._parse_video_info(video, username, user_id) + + query['before_time'] = traverse_obj(videos, ('videos', -1, 'timestamp')) + if not query['before_time']: + break + + def _real_extract(self, url): + username = self._match_id(url) + + user_info = traverse_obj(self._download_json( + f'{self._API_BASE_URL}/api/users/by_username/{username}', + username, note='Downloading user info', headers=self._API_HEADERS), ('user', {dict})) or {} + + if user_info.get('private') and user_info.get('followed_by_me') not in (True, 'true'): + raise ExtractorError('This user profile is private', expected=True) + elif traverse_obj(user_info, (('blocked_by_user', 'blocking_user'), {bool}), get_all=False): + raise ExtractorError('The author of the video is blocked', expected=True) + + user_id = str_or_none(user_info.get('user_id')) + if not user_id: + raise ExtractorError('Unable to extract user ID') + + return self.playlist_result( + self._entries(username, user_id), user_id, username, thumbnail=user_info.get('avatar_url')) + + +class TrillerShortIE(InfoExtractor): + _VALID_URL = r'https?://v\.triller\.co/(?P<id>\w+)' + _TESTS = [{ + 'url': 'https://v.triller.co/WWZNWk', + 'md5': '5eb8dc2c971bd8cd794ec9e8d5e9d101', + 'info_dict': { + 'id': '66210052', + 'ext': 'mp4', + 'title': 'md5:2dfc89d154cd91a4a18cd9582ba03e16', + 'display_id': 'f4480e1f-fb4e-45b9-a44c-9e6c679ce7eb', + 'thumbnail': r're:^https://uploads\.cdn\.triller\.co/.+\.jpg$', + 'description': 'md5:2dfc89d154cd91a4a18cd9582ba03e16', + 'uploader': 'statefairent', + 'uploader_id': '487545193', + 'creator': 'Official Summer Fair of LA', + 'timestamp': 1629655457, + 'upload_date': '20210822', + 'duration': 19, + 'view_count': int, + 'like_count': int, + 'artist': 'Unknown', + 'track': 'Unknown', + 'uploader_url': 'https://triller.co/@statefairent', + 'comment_count': int, + }, + }] + + def _real_extract(self, url): + real_url = self._request_webpage(HEADRequest(url), self._match_id(url)).url + if self.suitable(real_url): # Prevent infinite loop in case redirect fails + raise UnsupportedError(real_url) + return self.url_result(real_url) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/trilulilu.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/trilulilu.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/trilulilu.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/trilulilu.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/trovo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/trovo.py similarity index 100% rename from 
lib/python3.11/site-packages/yt_dlp/extractor/trovo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/trovo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/trtcocuk.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/trtcocuk.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/trtcocuk.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/trtcocuk.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/trueid.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/trueid.py new file mode 100644 index 0000000..86f0990 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/trueid.py @@ -0,0 +1,136 @@ +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + determine_ext, + ExtractorError, + int_or_none, + parse_age_limit, + traverse_obj, + unified_timestamp, + url_or_none +) + + +class TrueIDIE(InfoExtractor): + _VALID_URL = r'https?://(?P<domain>vn\.trueid\.net|trueid\.(?:id|ph))/(?:movie|series/[^/]+)/(?P<id>[^/?#&]+)' + _TESTS = [{ + 'url': 'https://trueid.id/movie/XYNlDOZZJzL6/pengabdi-setan/', + 'md5': '2552c7535125885901f1a2a4bcf32ca3', + 'info_dict': { + 'id': 'XYNlDOZZJzL6', + 'ext': 'mp4', + 'title': 'Pengabdi Setan', + 'display_id': 'pengabdi-setan', + 'description': 'md5:b0b41df08601e85e5291496c9bbe52cd', + 'timestamp': 1600243511, + 'categories': ['Film Indonesia', 'Horror', 'Mystery'], + 'release_timestamp': 1593536400, + 'release_year': 1982, + 'cast': list, + 'thumbnail': 'https://cms.dmpcdn.com/movie/2020/09/18/8b6e35c0-f97f-11ea-81fe-c52fc9dd314f_original.png', + 'upload_date': '20200916', + 'release_date': '20200630', + }, + 'expected_warnings': ['Video is geo restricted.'] + }, { + 'url': 'https://trueid.id/series/zZOBVPb62EwR/qXY73rwyl7oj/one-piece-ep-1/', + 'md5': '1c6d976049bc3c89a8a25aed2c3fb081', + 'info_dict': { + 'id': 'qXY73rwyl7oj', + 'ext': 'mp4', + 'title': 'One Piece Ep. 
1', + 'display_id': 'one-piece-ep-1', + 'description': 'md5:13226d603bd03c4150a1cf5758e842ea', + 'timestamp': 1610421085, + 'categories': ['Animation & Cartoon', 'Kids & Family', 'Adventure'], + 'release_timestamp': 1612112400, + 'release_year': 1999, + 'age_limit': 7, + 'cast': ['Kounosuke Uda', 'Junji Shimizu'], + 'thumbnail': 'https://cms.dmpcdn.com/movie/2021/01/13/f84e9e70-5562-11eb-9fe2-dd6c2099a468_original.png', + 'upload_date': '20210112', + 'release_date': '20210131', + }, + 'expected_warnings': ['Video is geo restricted.'] + }, { + 'url': 'https://vn.trueid.net/series/7DNPM7Bpa9wv/pwLgEQ4Xbda2/haikyu-vua-bong-chuyen-phan-1/', + 'info_dict': { + 'id': 'pwLgEQ4Xbda2', + 'ext': 'mp4', + 'title': 'Haikyu!!: Vua Bóng Chuyền Phần 1 - Tập 1', + 'display_id': 'haikyu-vua-bong-chuyen-phan-1-tap-1', + 'description': 'md5:0374dd44d247799169449ee30cca963a', + 'timestamp': 1629270901, + 'categories': ['Anime', 'Phim Hài', 'Phim Học Đường', 'Phim Thể Thao', 'Shounen'], + 'release_timestamp': 1629270720, + 'release_year': 2014, + 'age_limit': 13, + 'thumbnail': 'https://cms.dmpcdn.com/movie/2021/09/28/b6e7ec00-2039-11ec-8436-974544e5841f_webp_original.jpg', + 'upload_date': '20210818', + 'release_date': '20210818', + }, + 'expected_warnings': ['Video is geo restricted.'] + }, { + 'url': 'https://trueid.ph/series/l8rvvAw7Jwv8/l8rvvAw7Jwv8/naruto-trailer/', + 'only_matching': True, + }] + _CUSTOM_RATINGS = { + 'PG': 7, + } + + def _real_extract(self, url): + domain, video_id = self._match_valid_url(url).group('domain', 'id') + webpage = self._download_webpage(url, video_id) + initial_data = traverse_obj( + self._search_nextjs_data(webpage, video_id, fatal=False), ('props', 'pageProps', 'initialContentData'), default={}) + + try: + stream_data = self._download_json( + f'https://{domain}/cmsPostProxy/contents/video/{video_id}/streamer?os=android', video_id, data=b'')['data'] + except ExtractorError as e: + if not isinstance(e.cause, HTTPError): + raise e + errmsg = self._parse_json(e.cause.response.read().decode(), video_id)['meta']['message'] + if 'country' in errmsg: + self.raise_geo_restricted( + errmsg, [initial_data['display_country']] if initial_data.get('display_country') else None, True) + else: + self.raise_no_formats(errmsg, video_id=video_id) + + if stream_data: + stream_url = stream_data['stream']['stream_url'] + stream_ext = determine_ext(stream_url) + if stream_ext == 'm3u8': + formats, subs = self._extract_m3u8_formats_and_subtitles(stream_url, video_id, 'mp4') + elif stream_ext == 'mpd': + formats, subs = self._extract_mpd_formats_and_subtitles(stream_url, video_id) + else: + formats = [{'url': stream_url}] + + thumbnails = [ + {'id': thumb_key, 'url': thumb_url} + for thumb_key, thumb_url in (initial_data.get('thumb_list') or {}).items() + if url_or_none(thumb_url)] + + return { + 'id': video_id, + 'title': initial_data.get('title') or self._html_search_regex( + [r'Nonton (?P<name>.+) Gratis', + r'Xem (?P<name>.+) Miễn phí', + r'Watch (?P<name>.+) Free'], webpage, 'title', group='name'), + 'display_id': initial_data.get('slug_title'), + 'description': initial_data.get('synopsis'), + 'timestamp': unified_timestamp(initial_data.get('create_date')), + # 'duration': int_or_none(initial_data.get('duration'), invscale=60), # duration field must at least be accurate to the second + 'categories': traverse_obj(initial_data, ('article_category_details', ..., 'name')), + 'release_timestamp': unified_timestamp(initial_data.get('publish_date')), + 'release_year':
int_or_none(initial_data.get('release_year')), + 'formats': formats, + 'subtitles': subs, + 'thumbnails': thumbnails, + 'age_limit': self._CUSTOM_RATINGS.get(initial_data.get('rate')) or parse_age_limit(initial_data.get('rate')), + 'cast': traverse_obj(initial_data, (('actor', 'director'), ...)), + 'view_count': int_or_none(initial_data.get('count_views')), + 'like_count': int_or_none(initial_data.get('count_likes')), + 'average_rating': int_or_none(initial_data.get('count_ratings')), + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/trunews.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/trunews.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/trunews.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/trunews.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/truth.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/truth.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/truth.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/truth.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/trutv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/trutv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/trutv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/trutv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tube8.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tube8.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tube8.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tube8.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tubetugraz.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tubetugraz.py new file mode 100644 index 0000000..a351e4e --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tubetugraz.py @@ -0,0 +1,252 @@ +from .common import InfoExtractor +from ..utils import ( + float_or_none, + parse_resolution, + traverse_obj, + urlencode_postdata, + variadic, +) + + +class TubeTuGrazBaseIE(InfoExtractor): + _NETRC_MACHINE = 'tubetugraz' + + _API_EPISODE = 'https://tube.tugraz.at/search/episode.json' + _FORMAT_TYPES = ('presentation', 'presenter') + + def _perform_login(self, username, password): + urlh = self._request_webpage( + 'https://tube.tugraz.at/Shibboleth.sso/Login?target=/paella/ui/index.html', + None, fatal=False, note='downloading login page', errnote='unable to fetch login page') + if not urlh: + return + + content, urlh = self._download_webpage_handle( + urlh.url, None, fatal=False, headers={'referer': urlh.url}, + note='logging in', errnote='unable to log in', + data=urlencode_postdata({ + 'lang': 'de', + '_eventId_proceed': '', + 'j_username': username, + 'j_password': password + })) + if not urlh or urlh.url == 'https://tube.tugraz.at/paella/ui/index.html': + return + + if not self._html_search_regex( + r'<p\b[^>]*>(Bitte geben Sie einen OTP-Wert ein:)</p>', + content, 'TFA prompt', default=None): + self.report_warning('unable to login: incorrect password') + return + + content, urlh = self._download_webpage_handle( + urlh.url, None, fatal=False, headers={'referer': urlh.url}, + note='logging in with TFA', errnote='unable to log in with TFA', + data=urlencode_postdata({ + 'lang': 'de', + '_eventId_proceed': '', + 'j_tokenNumber': self._get_tfa_info(), + })) + if not urlh or urlh.url == 'https://tube.tugraz.at/paella/ui/index.html': + return + + 
self.report_warning('unable to login: incorrect TFA code') + + def _extract_episode(self, episode_info): + id = episode_info.get('id') + formats = list(self._extract_formats( + traverse_obj(episode_info, ('mediapackage', 'media', 'track')), id)) + + title = traverse_obj(episode_info, ('mediapackage', 'title'), 'dcTitle') + series_title = traverse_obj(episode_info, ('mediapackage', 'seriestitle')) + creator = ', '.join(variadic(traverse_obj( + episode_info, ('mediapackage', 'creators', 'creator'), 'dcCreator', default=''))) + return { + 'id': id, + 'title': title, + 'creator': creator or None, + 'duration': traverse_obj(episode_info, ('mediapackage', 'duration'), 'dcExtent'), + 'series': series_title, + 'series_id': traverse_obj(episode_info, ('mediapackage', 'series'), 'dcIsPartOf'), + 'episode': series_title and title, + 'formats': formats + } + + def _set_format_type(self, formats, type): + for f in formats: + f['format_note'] = type + if not type.startswith(self._FORMAT_TYPES[0]): + f['preference'] = -2 + return formats + + def _extract_formats(self, format_list, id): + has_hls, has_dash = False, False + + for format_info in format_list or []: + url = traverse_obj(format_info, ('tags', 'url'), 'url') + if url is None: + continue + + type = format_info.get('type') or 'unknown' + transport = (format_info.get('transport') or 'https').lower() + + if transport == 'https': + formats = [{ + 'url': url, + 'abr': float_or_none(traverse_obj(format_info, ('audio', 'bitrate')), 1000), + 'vbr': float_or_none(traverse_obj(format_info, ('video', 'bitrate')), 1000), + 'fps': traverse_obj(format_info, ('video', 'framerate')), + **parse_resolution(traverse_obj(format_info, ('video', 'resolution'))), + }] + elif transport == 'hls': + has_hls, formats = True, self._extract_m3u8_formats( + url, id, 'mp4', fatal=False, note=f'downloading {type} HLS manifest') + elif transport == 'dash': + has_dash, formats = True, self._extract_mpd_formats( + url, id, fatal=False, note=f'downloading {type} DASH manifest') + else: + # RTMP, HDS, SMOOTH, and unknown formats + # - RTMP url fails on every tested entry until now + # - HDS url 404's on every tested entry until now + # - SMOOTH url 404's on every tested entry until now + continue + + yield from self._set_format_type(formats, type) + + # TODO: Add test for these + for type in self._FORMAT_TYPES: + if not has_hls: + hls_formats = self._extract_m3u8_formats( + f'https://wowza.tugraz.at/matterhorn_engage/smil:engage-player_{id}_{type}.smil/playlist.m3u8', + id, 'mp4', fatal=False, note=f'Downloading {type} HLS manifest', errnote=False) or [] + yield from self._set_format_type(hls_formats, type) + + if not has_dash: + dash_formats = self._extract_mpd_formats( + f'https://wowza.tugraz.at/matterhorn_engage/smil:engage-player_{id}_{type}.smil/manifest_mpm4sav_mvlist.mpd', + id, fatal=False, note=f'Downloading {type} DASH manifest', errnote=False) + yield from self._set_format_type(dash_formats, type) + + +class TubeTuGrazIE(TubeTuGrazBaseIE): + IE_DESC = 'tube.tugraz.at' + + _VALID_URL = r'''(?x) + https?://tube\.tugraz\.at/paella/ui/watch.html\?id= + (?P<id>[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}) + ''' + _TESTS = [ + { + 'url': 'https://tube.tugraz.at/paella/ui/watch.html?id=f2634392-e40e-4ac7-9ddc-47764aa23d40', + 'md5': 'a23a3d5c9aaca2b84932fdba66e17145', + 'info_dict': { + 'id': 'f2634392-e40e-4ac7-9ddc-47764aa23d40', + 'ext': 'mp4', + 'title': '#6 (23.11.2017)', + 'episode': '#6 (23.11.2017)', + 'series': '[INB03001UF] Einführung in die strukturierte 
Programmierung', + 'creator': 'Safran C', + 'duration': 3295818, + 'series_id': 'b1192fff-2aa7-4bf0-a5cf-7b15c3bd3b34', + } + }, { + 'url': 'https://tube.tugraz.at/paella/ui/watch.html?id=2df6d787-e56a-428d-8ef4-d57f07eef238', + 'md5': 'de0d854a56bf7318d2b693fe1adb89a5', + 'info_dict': { + 'id': '2df6d787-e56a-428d-8ef4-d57f07eef238', + 'title': 'TubeTuGraz video #2df6d787-e56a-428d-8ef4-d57f07eef238', + 'ext': 'mp4', + }, + 'expected_warnings': ['Extractor failed to obtain "title"'], + } + ] + + def _real_extract(self, url): + video_id = self._match_id(url) + episode_data = self._download_json( + self._API_EPISODE, video_id, query={'id': video_id, 'limit': 1}, note='Downloading episode metadata') + + episode_info = traverse_obj(episode_data, ('search-results', 'result'), default={'id': video_id}) + return self._extract_episode(episode_info) + + +class TubeTuGrazSeriesIE(TubeTuGrazBaseIE): + _VALID_URL = r'''(?x) + https?://tube\.tugraz\.at/paella/ui/browse\.html\?series= + (?P<id>[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}) + ''' + _TESTS = [{ + 'url': 'https://tube.tugraz.at/paella/ui/browse.html?series=0e6351b7-c372-491e-8a49-2c9b7e21c5a6', + 'id': '0e6351b7-c372-491e-8a49-2c9b7e21c5a6', + 'info_dict': { + 'id': '0e6351b7-c372-491e-8a49-2c9b7e21c5a6', + 'title': '[209351] Strassenwesen', + }, + 'playlist': [ + { + 'info_dict': { + 'id': 'ee17ce5d-34e2-48b7-a76a-fed148614e11', + 'series_id': '0e6351b7-c372-491e-8a49-2c9b7e21c5a6', + 'ext': 'mp4', + 'title': '#4 Detailprojekt', + 'episode': '#4 Detailprojekt', + 'series': '[209351] Strassenwesen', + 'creator': 'Neuhold R', + 'duration': 6127024, + } + }, + { + 'info_dict': { + 'id': '87350498-799a-44d3-863f-d1518a98b114', + 'series_id': '0e6351b7-c372-491e-8a49-2c9b7e21c5a6', + 'ext': 'mp4', + 'title': '#3 Generelles Projekt', + 'episode': '#3 Generelles Projekt', + 'series': '[209351] Strassenwesen', + 'creator': 'Neuhold R', + 'duration': 5374422, + } + }, + { + 'info_dict': { + 'id': '778599ea-489e-4189-9e05-3b4888e19bcd', + 'series_id': '0e6351b7-c372-491e-8a49-2c9b7e21c5a6', + 'ext': 'mp4', + 'title': '#2 Vorprojekt', + 'episode': '#2 Vorprojekt', + 'series': '[209351] Strassenwesen', + 'creator': 'Neuhold R', + 'duration': 5566404, + } + }, + { + 'info_dict': { + 'id': '75e4c71c-d99d-4e56-b0e6-4f2bcdf11f29', + 'series_id': '0e6351b7-c372-491e-8a49-2c9b7e21c5a6', + 'ext': 'mp4', + 'title': '#1 Variantenstudium', + 'episode': '#1 Variantenstudium', + 'series': '[209351] Strassenwesen', + 'creator': 'Neuhold R', + 'duration': 5420200, + } + } + ], + 'min_playlist_count': 4 + }] + + def _real_extract(self, url): + id = self._match_id(url) + episodes_data = self._download_json(self._API_EPISODE, id, query={'sid': id}, note='Downloading episode list') + series_data = self._download_json( + 'https://tube.tugraz.at/series/series.json', id, fatal=False, + note='downloading series metadata', errnote='failed to download series metadata', + query={ + 'seriesId': id, + 'count': 1, + 'sort': 'TITLE' + }) + + return self.playlist_result( + map(self._extract_episode, episodes_data['search-results']['result']), id, + traverse_obj(series_data, ('catalogs', 0, 'http://purl.org/dc/terms/', 'title', 0, 'value'))) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tubitv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tubitv.py new file mode 100644 index 0000000..bd46bc3 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tubitv.py @@ -0,0 +1,168 @@ +import re + +from .common import InfoExtractor 
+from ..networking import Request +from ..utils import ( + ExtractorError, + int_or_none, + js_to_json, + traverse_obj, + urlencode_postdata, +) + + +class TubiTvIE(InfoExtractor): + _VALID_URL = r'''(?x) + (?: + tubitv:| + https?://(?:www\.)?tubitv\.com/(?:video|movies|tv-shows)/ + ) + (?P<id>[0-9]+)''' + _LOGIN_URL = 'http://tubitv.com/login' + _NETRC_MACHINE = 'tubitv' + _GEO_COUNTRIES = ['US'] + _TESTS = [{ + 'url': 'https://tubitv.com/movies/383676/tracker', + 'md5': '566fa0f76870302d11af0de89511d3f0', + 'info_dict': { + 'id': '383676', + 'ext': 'mp4', + 'title': 'Tracker', + 'description': 'md5:ff320baf43d0ad2655e538c1d5cd9706', + 'uploader_id': 'f866e2677ea2f0dff719788e4f7f9195', + 'release_year': 2010, + 'thumbnail': r're:^https?://.+\.(jpe?g|png)$', + 'duration': 6122, + }, + }, { + 'url': 'http://tubitv.com/video/283829/the_comedian_at_the_friday', + 'md5': '43ac06be9326f41912dc64ccf7a80320', + 'info_dict': { + 'id': '283829', + 'ext': 'mp4', + 'title': 'The Comedian at The Friday', + 'description': 'A stand up comedian is forced to look at the decisions in his life while on a one week trip to the west coast.', + 'uploader_id': 'bc168bee0d18dd1cb3b86c68706ab434', + }, + 'skip': 'Content Unavailable' + }, { + 'url': 'http://tubitv.com/tv-shows/321886/s01_e01_on_nom_stories', + 'only_matching': True, + }, { + 'url': 'https://tubitv.com/movies/560057/penitentiary?start=true', + 'info_dict': { + 'id': '560057', + 'ext': 'mp4', + 'title': 'Penitentiary', + 'description': 'md5:8d2fc793a93cc1575ff426fdcb8dd3f9', + 'uploader_id': 'd8fed30d4f24fcb22ec294421b9defc2', + 'release_year': 1979, + }, + 'skip': 'Content Unavailable' + }] + + # DRM formats are included only to raise appropriate error + _UNPLAYABLE_FORMATS = ('hlsv6_widevine', 'hlsv6_widevine_nonclearlead', 'hlsv6_playready_psshv0', + 'hlsv6_fairplay', 'dash_widevine', 'dash_widevine_nonclearlead') + + def _perform_login(self, username, password): + self.report_login() + form_data = { + 'username': username, + 'password': password, + } + payload = urlencode_postdata(form_data) + request = Request(self._LOGIN_URL, payload) + request.headers['Content-Type'] = 'application/x-www-form-urlencoded' + login_page = self._download_webpage( + request, None, False, 'Wrong login info') + if not re.search(r'id="tubi-logout"', login_page): + raise ExtractorError( + 'Login failed (invalid username/password)', expected=True) + + def _real_extract(self, url): + video_id = self._match_id(url) + video_data = self._download_json(f'https://tubitv.com/oz/videos/{video_id}/content', video_id, query={ + 'video_resources': ['dash', 'hlsv3', 'hlsv6', *self._UNPLAYABLE_FORMATS], + }) + title = video_data['title'] + + formats = [] + drm_formats = False + + for resource in video_data['video_resources']: + if resource['type'] in ('dash', ): + formats += self._extract_mpd_formats(resource['manifest']['url'], video_id, mpd_id=resource['type'], fatal=False) + elif resource['type'] in ('hlsv3', 'hlsv6'): + formats += self._extract_m3u8_formats(resource['manifest']['url'], video_id, 'mp4', m3u8_id=resource['type'], fatal=False) + elif resource['type'] in self._UNPLAYABLE_FORMATS: + drm_formats = True + + if not formats and drm_formats: + self.report_drm(video_id) + elif not formats and not video_data.get('policy_match'): # policy_match is False if content was removed + raise ExtractorError('This content is currently unavailable', expected=True) + + thumbnails = [] + for thumbnail_url in video_data.get('thumbnails', []): + if not thumbnail_url: + continue + 
thumbnails.append({ + 'url': self._proto_relative_url(thumbnail_url), + }) + + subtitles = {} + for sub in video_data.get('subtitles', []): + sub_url = sub.get('url') + if not sub_url: + continue + subtitles.setdefault(sub.get('lang', 'English'), []).append({ + 'url': self._proto_relative_url(sub_url), + }) + + season_number, episode_number, episode_title = self._search_regex( + r'^S(\d+):E(\d+) - (.+)', title, 'episode info', fatal=False, group=(1, 2, 3), default=(None, None, None)) + + return { + 'id': video_id, + 'title': title, + 'formats': formats, + 'subtitles': subtitles, + 'thumbnails': thumbnails, + 'description': video_data.get('description'), + 'duration': int_or_none(video_data.get('duration')), + 'uploader_id': video_data.get('publisher_id'), + 'release_year': int_or_none(video_data.get('year')), + 'season_number': int_or_none(season_number), + 'episode_number': int_or_none(episode_number), + 'episode_title': episode_title + } + + +class TubiTvShowIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?tubitv\.com/series/[0-9]+/(?P<show_name>[^/?#]+)' + _TESTS = [{ + 'url': 'https://tubitv.com/series/3936/the-joy-of-painting-with-bob-ross?start=true', + 'playlist_mincount': 390, + 'info_dict': { + 'id': 'the-joy-of-painting-with-bob-ross', + } + }] + + def _entries(self, show_url, show_name): + show_webpage = self._download_webpage(show_url, show_name) + + show_json = self._parse_json(self._search_regex( + r'window\.__data\s*=\s*({[^<]+});\s*</script>', + show_webpage, 'data'), show_name, transform_source=js_to_json)['video'] + + for episode_id in show_json['fullContentById'].keys(): + if traverse_obj(show_json, ('byId', episode_id, 'type')) == 's': + continue + yield self.url_result( + 'tubitv:%s' % episode_id, + ie=TubiTvIE.ie_key(), video_id=episode_id) + + def _real_extract(self, url): + show_name = self._match_valid_url(url).group('show_name') + return self.playlist_result(self._entries(url, show_name), playlist_id=show_name) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tumblr.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tumblr.py new file mode 100644 index 0000000..a26bdca --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tumblr.py @@ -0,0 +1,387 @@ +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + int_or_none, + traverse_obj, + urlencode_postdata +) + + +class TumblrIE(InfoExtractor): + _VALID_URL = r'https?://(?P<blog_name>[^/?#&]+)\.tumblr\.com/(?:post|video)/(?P<id>[0-9]+)(?:$|[/?#])' + _NETRC_MACHINE = 'tumblr' + _LOGIN_URL = 'https://www.tumblr.com/login' + _OAUTH_URL = 'https://www.tumblr.com/api/v2/oauth2/token' + _TESTS = [{ + 'url': 'http://tatianamaslanydaily.tumblr.com/post/54196191430/orphan-black-dvd-extra-behind-the-scenes', + 'md5': '479bb068e5b16462f5176a6828829767', + 'info_dict': { + 'id': '54196191430', + 'ext': 'mp4', + 'title': 'md5:dfac39636969fe6bf1caa2d50405f069', + 'description': 'md5:390ab77358960235b6937ab3b8528956', + 'uploader_id': 'tatianamaslanydaily', + 'uploader_url': 'https://tatianamaslanydaily.tumblr.com/', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 127, + 'like_count': int, + 'repost_count': int, + 'age_limit': 0, + 'tags': ['Orphan Black', 'Tatiana Maslany', 'Interview', 'Video', 'OB S1 DVD Extras'], + } + }, { + 'note': 'multiple formats', + 'url': 'https://maskofthedragon.tumblr.com/post/626907179849564160/mona-talking-in-english', + 'md5': 'f43ff8a8861712b6cf0e0c2bd84cfc68', + 'info_dict': { + 'id': '626907179849564160', + 'ext': 'mp4', 
+ 'title': 'Mona\xa0“talking” in\xa0“english”', + 'description': 'md5:082a3a621530cb786ad2b7592a6d9e2c', + 'uploader_id': 'maskofthedragon', + 'uploader_url': 'https://maskofthedragon.tumblr.com/', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 7, + 'like_count': int, + 'repost_count': int, + 'age_limit': 0, + 'tags': 'count:19', + }, + 'params': { + 'format': 'hd', + }, + }, { + 'note': 'non-iframe video (with related posts)', + 'url': 'https://shieldfoss.tumblr.com/post/675519763813908480', + 'md5': '12bdb75661ef443bffe5a4dac1dbf118', + 'info_dict': { + 'id': '675519763813908480', + 'ext': 'mp4', + 'title': 'Shieldfoss', + 'uploader_id': 'nerviovago', + 'uploader_url': 'https://nerviovago.tumblr.com/', + 'thumbnail': r're:^https?://.*\.jpg', + 'like_count': int, + 'repost_count': int, + 'age_limit': 0, + 'tags': [], + } + }, { + 'note': 'dashboard only (original post)', + 'url': 'https://jujanon.tumblr.com/post/159704441298/my-baby-eating', + 'md5': '029f7c91ab386701b211e3d494d2d95e', + 'info_dict': { + 'id': '159704441298', + 'ext': 'mp4', + 'title': 'md5:ba79365861101f4911452728d2950561', + 'description': 'md5:773738196cea76b6996ec71e285bdabc', + 'uploader_id': 'jujanon', + 'uploader_url': 'https://jujanon.tumblr.com/', + 'thumbnail': r're:^https?://.*\.jpg', + 'like_count': int, + 'repost_count': int, + 'age_limit': 0, + 'tags': ['crabs', 'my video', 'my pets'], + } + }, { + 'note': 'dashboard only (reblog)', + 'url': 'https://bartlebyshop.tumblr.com/post/180294460076/duality-of-bird', + 'md5': '04334e7cadb1af680d162912559f51a5', + 'info_dict': { + 'id': '180294460076', + 'ext': 'mp4', + 'title': 'duality of bird', + 'description': 'duality of bird', + 'uploader_id': 'todaysbird', + 'uploader_url': 'https://todaysbird.tumblr.com/', + 'thumbnail': r're:^https?://.*\.jpg', + 'like_count': int, + 'repost_count': int, + 'age_limit': 0, + 'tags': [], + } + }, { + 'note': 'dashboard only (external)', + 'url': 'https://afloweroutofstone.tumblr.com/post/675661759168823296/the-blues-remembers-everything-the-country-forgot', + 'info_dict': { + 'id': 'q67_fd7b8SU', + 'ext': 'mp4', + 'title': 'The Blues Remembers Everything the Country Forgot', + 'alt_title': 'The Blues Remembers Everything the Country Forgot', + 'description': 'md5:1a6b4097e451216835a24c1023707c79', + 'release_date': '20201224', + 'creator': 'md5:c2239ba15430e87c3b971ba450773272', + 'uploader': 'Moor Mother - Topic', + 'upload_date': '20201223', + 'uploader_id': 'UCxrMtFBRkFvQJ_vVM4il08w', + 'uploader_url': 'http://www.youtube.com/channel/UCxrMtFBRkFvQJ_vVM4il08w', + 'thumbnail': r're:^https?://i.ytimg.com/.*', + 'channel': 'Moor Mother - Topic', + 'channel_id': 'UCxrMtFBRkFvQJ_vVM4il08w', + 'channel_url': 'https://www.youtube.com/channel/UCxrMtFBRkFvQJ_vVM4il08w', + 'channel_follower_count': int, + 'duration': 181, + 'view_count': int, + 'like_count': int, + 'age_limit': 0, + 'categories': ['Music'], + 'tags': 'count:7', + 'live_status': 'not_live', + 'playable_in_embed': True, + 'availability': 'public', + 'track': 'The Blues Remembers Everything the Country Forgot', + 'artist': 'md5:c2239ba15430e87c3b971ba450773272', + 'album': 'Brass', + 'release_year': 2020, + }, + 'add_ie': ['Youtube'], + }, { + 'url': 'http://naked-yogi.tumblr.com/post/118312946248/naked-smoking-stretching', + 'md5': 'de07e5211d60d4f3a2c3df757ea9f6ab', + 'info_dict': { + 'id': 'Wmur', + 'ext': 'mp4', + 'title': 'naked smoking & stretching', + 'upload_date': '20150506', + 'timestamp': 1430931613, + 'age_limit': 18, + 'uploader_id': '1638622', +
'uploader': 'naked-yogi', + }, + # 'add_ie': ['Vidme'], + 'skip': 'dead embedded video host' + }, { + 'url': 'https://prozdvoices.tumblr.com/post/673201091169681408/what-recording-voice-acting-sounds-like', + 'md5': 'a0063fc8110e6c9afe44065b4ea68177', + 'info_dict': { + 'id': 'eomhW5MLGWA', + 'ext': 'mp4', + 'title': 'what recording voice acting sounds like', + 'description': 'md5:1da3faa22d0e0b1d8b50216c284ee798', + 'uploader': 'ProZD', + 'upload_date': '20220112', + 'uploader_id': 'ProZD', + 'uploader_url': 'http://www.youtube.com/user/ProZD', + 'thumbnail': r're:^https?://i.ytimg.com/.*', + 'channel': 'ProZD', + 'channel_id': 'UC6MFZAOHXlKK1FI7V0XQVeA', + 'channel_url': 'https://www.youtube.com/channel/UC6MFZAOHXlKK1FI7V0XQVeA', + 'channel_follower_count': int, + 'duration': 20, + 'view_count': int, + 'like_count': int, + 'age_limit': 0, + 'categories': ['Film & Animation'], + 'tags': [], + 'live_status': 'not_live', + 'playable_in_embed': True, + 'availability': 'public', + }, + 'add_ie': ['Youtube'], + }, { + 'url': 'https://dominustempori.tumblr.com/post/673572712813297664/youtubes-all-right-for-some-pretty-cool', + 'md5': '203e9eb8077e3f45bfaeb4c86c1467b8', + 'info_dict': { + 'id': '87816359', + 'ext': 'mov', + 'title': 'Harold Ramis', + 'description': 'md5:be8e68cbf56ce0785c77f0c6c6dfaf2c', + 'uploader': 'Resolution Productions Group', + 'uploader_id': 'resolutionproductions', + 'uploader_url': 'https://vimeo.com/resolutionproductions', + 'upload_date': '20140227', + 'thumbnail': r're:^https?://i.vimeocdn.com/video/.*', + 'timestamp': 1393523719, + 'duration': 291, + }, + 'add_ie': ['Vimeo'], + }, { + 'url': 'http://sutiblr.tumblr.com/post/139638707273', + 'md5': '2dd184b3669e049ba40563a7d423f95c', + 'info_dict': { + 'id': 'ir7qBEIKqvq', + 'ext': 'mp4', + 'title': 'Vine by sutiblr', + 'alt_title': 'Vine by sutiblr', + 'uploader': 'sutiblr', + 'uploader_id': '1198993975374495744', + 'upload_date': '20160220', + 'like_count': int, + 'comment_count': int, + 'repost_count': int, + 'thumbnail': r're:^https?://.*\.jpg', + 'timestamp': 1455940159, + 'view_count': int, + }, + 'add_ie': ['Vine'], + }, { + 'url': 'https://silami.tumblr.com/post/84250043974/my-bad-river-flows-in-you-impression-on-maschine', + 'md5': '3c92d7c3d867f14ccbeefa2119022277', + 'info_dict': { + 'id': 'nYtvtTPuTl', + 'ext': 'mp4', + 'title': 'Video by silbulterman', + 'description': '#maschine', + 'uploader_id': '242859024', + 'thumbnail': r're:^https?://.*\.jpg', + 'timestamp': 1398801174, + 'like_count': int, + 'uploader': 'Sil', + 'channel': 'silbulterman', + 'comment_count': int, + 'upload_date': '20140429', + }, + 'add_ie': ['Instagram'], + }] + + _providers = { + 'instagram': 'Instagram', + 'vimeo': 'Vimeo', + 'vine': 'Vine', + 'youtube': 'Youtube', + } + + _ACCESS_TOKEN = None + + def _initialize_pre_login(self): + login_page = self._download_webpage( + self._LOGIN_URL, None, 'Downloading login page', fatal=False) + if login_page: + self._ACCESS_TOKEN = self._search_regex( + r'"API_TOKEN":\s*"(\w+)"', login_page, 'API access token', fatal=False) + if not self._ACCESS_TOKEN: + self.report_warning('Failed to get access token; metadata will be missing and some videos may not work') + + def _perform_login(self, username, password): + if not self._ACCESS_TOKEN: + return + + self._download_json( + self._OAUTH_URL, None, 'Logging in', + data=urlencode_postdata({ + 'password': password, + 'grant_type': 'password', + 'username': username, + }), headers={ + 'Content-Type': 'application/x-www-form-urlencoded', + 
'Authorization': f'Bearer {self._ACCESS_TOKEN}', + }, + errnote='Login failed', fatal=False) + + def _real_extract(self, url): + blog, video_id = self._match_valid_url(url).groups() + + url = f'http://{blog}.tumblr.com/post/{video_id}/' + webpage, urlh = self._download_webpage_handle(url, video_id) + + redirect_url = urlh.url + + api_only = bool(self._search_regex( + r'(tumblr.com|^)/(safe-mode|login_required|blog/view)', + redirect_url, 'redirect', default=None)) + + if api_only and not self._ACCESS_TOKEN: + raise ExtractorError('Cannot get data for dashboard-only post without access token') + + post_json = {} + if self._ACCESS_TOKEN: + post_json = traverse_obj( + self._download_json( + f'https://www.tumblr.com/api/v2/blog/{blog}/posts/{video_id}/permalink', + video_id, headers={'Authorization': f'Bearer {self._ACCESS_TOKEN}'}, fatal=False), + ('response', 'timeline', 'elements', 0)) or {} + content_json = traverse_obj(post_json, ('trail', 0, 'content'), ('content')) or [] + video_json = next( + (item for item in content_json if item.get('type') == 'video'), {}) + media_json = video_json.get('media') or {} + if api_only and not media_json.get('url') and not video_json.get('url'): + raise ExtractorError('Failed to find video data for dashboard-only post') + + if not media_json.get('url') and video_json.get('url'): + # external video host + return self.url_result( + video_json['url'], + self._providers.get(video_json.get('provider'), 'Generic')) + + video_url = self._og_search_video_url(webpage, default=None) + duration = None + formats = [] + + # iframes can supply duration and sometimes additional formats, so check for one + iframe_url = self._search_regex( + fr'src=\'(https?://www\.tumblr\.com/video/{blog}/{video_id}/[^\']+)\'', + webpage, 'iframe url', default=None) + if iframe_url: + iframe = self._download_webpage( + iframe_url, video_id, 'Downloading iframe page', + headers={'Referer': redirect_url}) + + options = self._parse_json( + self._search_regex( + r'data-crt-options=(["\'])(?P<options>.+?)\1', iframe, + 'hd video url', default='', group='options'), + video_id, fatal=False) + if options: + duration = int_or_none(options.get('duration')) + + hd_url = options.get('hdUrl') + if hd_url: + # there are multiple formats; extract them + # ignore other sources of width/height data as they may be wrong + sources = [] + sd_url = self._search_regex( + r'<source[^>]+src=(["\'])(?P<url>.+?)\1', iframe, + 'sd video url', default=None, group='url') + if sd_url: + sources.append((sd_url, 'sd')) + sources.append((hd_url, 'hd')) + + formats = [{ + 'url': video_url, + 'format_id': format_id, + 'height': int_or_none(self._search_regex( + r'_(\d+)\.\w+$', video_url, 'height', default=None)), + 'quality': quality, + } for quality, (video_url, format_id) in enumerate(sources)] + + if not media_json.get('url') and not video_url and not iframe_url: + # external video host (but we weren't able to figure it out from the api) + iframe_url = self._search_regex( + r'src=["\'](https?://safe\.txmblr\.com/svc/embed/inline/[^"\']+)["\']', + webpage, 'embed iframe url', default=None) + return self.url_result(iframe_url or redirect_url, 'Generic') + + formats = formats or [{ + 'url': media_json.get('url') or video_url, + 'width': int_or_none( + media_json.get('width') or self._og_search_property('video:width', webpage, default=None)), + 'height': int_or_none( + media_json.get('height') or self._og_search_property('video:height', webpage, default=None)), + }] + + # the url we're extracting from might be an 
original post or it might be a reblog. + # if it's a reblog, og:description will be the reblogger's comment, not the uploader's. + # content_json is always the op, so if it exists but has no text, there's no description + if content_json: + description = '\n\n'.join(( + item.get('text') for item in content_json if item.get('type') == 'text')) or None + else: + description = self._og_search_description(webpage, default=None) + uploader_id = traverse_obj(post_json, 'reblogged_root_name', 'blog_name') + + return { + 'id': video_id, + 'title': post_json.get('summary') or (blog if api_only else self._html_search_regex( + r'(?s)<title>(?P<title>.*?)(?: \| Tumblr)?</title>', webpage, 'title')), + 'description': description, + 'thumbnail': (traverse_obj(video_json, ('poster', 0, 'url')) + or self._og_search_thumbnail(webpage, default=None)), + 'uploader_id': uploader_id, + 'uploader_url': f'https://{uploader_id}.tumblr.com/' if uploader_id else None, + 'duration': duration, + 'like_count': post_json.get('like_count'), + 'repost_count': post_json.get('reblog_count'), + 'age_limit': {True: 18, False: 0}.get(post_json.get('is_nsfw')), + 'tags': post_json.get('tags'), + 'formats': formats, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tunein.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tunein.py new file mode 100644 index 0000000..fd2fe13 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tunein.py @@ -0,0 +1,234 @@ +import urllib.parse + +from .common import InfoExtractor +from ..utils import ( + OnDemandPagedList, + determine_ext, + parse_iso8601, + traverse_obj, +) + + +class TuneInBaseIE(InfoExtractor): + _VALID_URL_BASE = r'https?://(?:www\.)?tunein\.com' + + def _extract_metadata(self, webpage, content_id): + return self._search_json(r'window.INITIAL_STATE=', webpage, 'hydration', content_id, fatal=False) + + def _extract_formats_and_subtitles(self, content_id): + streams = self._download_json( + f'https://opml.radiotime.com/Tune.ashx?render=json&formats=mp3,aac,ogg,flash,hls&id={content_id}', + content_id)['body'] + + formats, subtitles = [], {} + for stream in streams: + if stream.get('media_type') == 'hls': + fmts, subs = self._extract_m3u8_formats_and_subtitles(stream['url'], content_id, fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + elif determine_ext(stream['url']) == 'pls': + playlist_content = self._download_webpage(stream['url'], content_id) + formats.append({ + 'url': self._search_regex(r'File1=(.*)', playlist_content, 'url', fatal=False), + 'abr': stream.get('bitrate'), + 'ext': stream.get('media_type'), + }) + else: + formats.append({ + 'url': stream['url'], + 'abr': stream.get('bitrate'), + 'ext': stream.get('media_type'), + }) + + return formats, subtitles
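The Tune.ashx lookup above is the whole of TuneIn's stream resolution: a single JSON endpoint returns a list of stream descriptors, and only HLS entries and .pls playlists need extra handling. A rough standalone sketch of the same lookup using only the stdlib (the function name and the omission of error handling are illustrative, not part of the extractor):

import json
import re
import urllib.request

def tunein_stream_urls(content_id):
    # Same opml.radiotime.com endpoint TuneInBaseIE queries above.
    api = ('https://opml.radiotime.com/Tune.ashx'
           f'?render=json&formats=mp3,aac,ogg,flash,hls&id={content_id}')
    with urllib.request.urlopen(api) as resp:
        streams = json.load(resp)['body']
    for stream in streams:
        if stream['url'].endswith('.pls'):
            # .pls is an INI-style playlist; like the extractor, just take File1.
            with urllib.request.urlopen(stream['url']) as resp:
                match = re.search(r'File1=(.*)', resp.read().decode())
            if match:
                yield match.group(1)
        else:
            yield stream['url']  # HLS manifests and direct mp3/aac streams

Fed a station id such as 's34682', this should yield the same URLs the extractor's formats are built from.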
+ + +class TuneInStationIE(TuneInBaseIE): + _VALID_URL = TuneInBaseIE._VALID_URL_BASE + r'(?:/radio/[^?#]+-|/embed/player/)(?P<id>s\d+)' + _EMBED_REGEX = [r'<iframe[^>]+src=["\'](?P<url>(?:https?://)?tunein\.com/embed/player/s\d+)'] + + _TESTS = [{ + 'url': 'https://tunein.com/radio/Jazz24-885-s34682/', + 'info_dict': { + 'id': 's34682', + 'title': 're:^Jazz24', + 'description': 'md5:d6d0b89063fd68d529fa7058ee98619b', + 'thumbnail': 're:^https?://[^?&]+/s34682', + 'location': 'Seattle-Tacoma, US', + 'ext': 'mp3', + 'live_status': 'is_live', + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'https://tunein.com/embed/player/s6404/', + 'only_matching': True, + }, { + 'url': 'https://tunein.com/radio/BBC-Radio-1-988-s24939/', + 'info_dict': { + 'id': 's24939', + 'title': 're:^BBC Radio 1', + 'description': 'md5:f3f75f7423398d87119043c26e7bfb84', + 'thumbnail': 're:^https?://[^?&]+/s24939', + 'location': 'London, UK', + 'ext': 'mp3', + 'live_status': 'is_live', + }, + 'params': { + 'skip_download': True, + }, + }] + + def _real_extract(self, url): + station_id = self._match_id(url) + + webpage = self._download_webpage(url, station_id) + metadata = self._extract_metadata(webpage, station_id) + + formats, subtitles = self._extract_formats_and_subtitles(station_id) + return { + 'id': station_id, + 'title': traverse_obj(metadata, ('profiles', station_id, 'title')), + 'description': traverse_obj(metadata, ('profiles', station_id, 'description')), + 'thumbnail': traverse_obj(metadata, ('profiles', station_id, 'image')), + 'timestamp': parse_iso8601( + traverse_obj(metadata, ('profiles', station_id, 'actions', 'play', 'publishTime'))), + 'location': traverse_obj( + metadata, ('profiles', station_id, 'metadata', 'properties', 'location', 'displayName'), + ('profiles', station_id, 'properties', 'location', 'displayName')), + 'formats': formats, + 'subtitles': subtitles, + 'is_live': traverse_obj(metadata, ('profiles', station_id, 'actions', 'play', 'isLive')), + } + + +class TuneInPodcastIE(TuneInBaseIE): + _VALID_URL = TuneInBaseIE._VALID_URL_BASE + r'/(?:podcasts/[^?#]+-|embed/player/)(?P<id>p\d+)/?(?:#|$)' + _EMBED_REGEX = [r'<iframe[^>]+src=["\'](?P<url>(?:https?://)?tunein\.com/embed/player/p\d+)'] + + _TESTS = [{ + 'url': 'https://tunein.com/podcasts/Technology-Podcasts/Artificial-Intelligence-p1153019', + 'info_dict': { + 'id': 'p1153019', + 'title': 'Lex Fridman Podcast', + 'description': 'md5:bedc4e5f1c94f7dec6e4317b5654b00d', + }, + 'playlist_mincount': 200, + }, { + 'url': 'https://tunein.com/embed/player/p191660/', + 'only_matching': True + }, { + 'url': 'https://tunein.com/podcasts/World-News/BBC-News-p14/', + 'info_dict': { + 'id': 'p14', + 'title': 'BBC News', + 'description': 'md5:1218e575eeaff75f48ed978261fa2068', + }, + 'playlist_mincount': 200, + }] + + _PAGE_SIZE = 30 + + def _real_extract(self, url): + podcast_id = self._match_id(url) + + webpage = self._download_webpage(url, podcast_id, fatal=False) + metadata = self._extract_metadata(webpage, podcast_id) + + def page_func(page_num): + api_response = self._download_json( + f'https://api.tunein.com/profiles/{podcast_id}/contents', podcast_id, + note=f'Downloading page {page_num + 1}', query={ + 'filter': 't:free', + 'offset': page_num * self._PAGE_SIZE, + 'limit': self._PAGE_SIZE, + }) + + return [ + self.url_result( + f'https://tunein.com/podcasts/{podcast_id}?topicId={episode["GuideId"][1:]}', + TuneInPodcastEpisodeIE, title=episode.get('Title')) + for episode in api_response['Items']] + + entries = OnDemandPagedList(page_func, self._PAGE_SIZE) + return self.playlist_result( + entries, playlist_id=podcast_id, title=traverse_obj(metadata, ('profiles', podcast_id, 'title')), + description=traverse_obj(metadata, ('profiles', podcast_id, 'description')))
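The podcast listing above is paged: page_func fetches 30 items per call and OnDemandPagedList only invokes it for pages a consumer actually touches. A toy version of that lazy-paging contract, with a stand-in fetcher instead of the TuneIn API:

def paged(page_func, page_size):
    # Keep requesting pages until one comes back short (or empty).
    page_num = 0
    while True:
        page = page_func(page_num)
        yield from page
        if len(page) < page_size:
            return
        page_num += 1

# Hypothetical fetcher standing in for the api.tunein.com call above:
fetch = lambda n: [f'episode-{n * 3 + i}' for i in range(3)] if n < 2 else []
print(list(paged(fetch, 3)))  # two full pages, then an empty third stops it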
+ + +class TuneInPodcastEpisodeIE(TuneInBaseIE): + _VALID_URL = TuneInBaseIE._VALID_URL_BASE + r'/podcasts/(?:[^?&]+-)?(?P<podcast_id>p\d+)/?\?topicId=(?P<id>\w\d+)' + + _TESTS = [{ + 'url': 'https://tunein.com/podcasts/Technology-Podcasts/Artificial-Intelligence-p1153019/?topicId=236404354', + 'info_dict': { + 'id': 't236404354', + 'title': '#351 \u2013 MrBeast: Future of YouTube, Twitter, TikTok, and Instagram', + 'description': 'md5:e1734db6f525e472c0c290d124a2ad77', + 'thumbnail': 're:^https?://[^?&]+/p1153019', + 'timestamp': 1673458571, + 'upload_date': '20230111', + 'series_id': 'p1153019', + 'series': 'Lex Fridman Podcast', + 'ext': 'mp3', + }, + }] + + def _real_extract(self, url): + podcast_id, episode_id = self._match_valid_url(url).group('podcast_id', 'id') + episode_id = f't{episode_id}' + + webpage = self._download_webpage(url, episode_id) + metadata = self._extract_metadata(webpage, episode_id) + + formats, subtitles = self._extract_formats_and_subtitles(episode_id) + return { + 'id': episode_id, + 'title': traverse_obj(metadata, ('profiles', episode_id, 'title')), + 'description': traverse_obj(metadata, ('profiles', episode_id, 'description')), + 'thumbnail': traverse_obj(metadata, ('profiles', episode_id, 'image')), + 'timestamp': parse_iso8601( + traverse_obj(metadata, ('profiles', episode_id, 'actions', 'play', 'publishTime'))), + 'series_id': podcast_id, + 'series': traverse_obj(metadata, ('profiles', podcast_id, 'title')), + 'formats': formats, + 'subtitles': subtitles, + } + + +class TuneInShortenerIE(InfoExtractor): + IE_NAME = 'tunein:shortener' + IE_DESC = False # Do not list + _VALID_URL = r'https?://tun\.in/(?P<id>[A-Za-z0-9]+)' + + _TEST = { + # test redirection + 'url': 'http://tun.in/ser7s', + 'info_dict': { + 'id': 's34682', + 'title': 're:^Jazz24', + 'description': 'md5:d6d0b89063fd68d529fa7058ee98619b', + 'thumbnail': 're:^https?://[^?&]+/s34682', + 'location': 'Seattle-Tacoma, US', + 'ext': 'mp3', + 'live_status': 'is_live', + }, + 'params': { + 'skip_download': True, # live stream + }, + } + + def _real_extract(self, url): + redirect_id = self._match_id(url) + # The server doesn't support HEAD requests + urlh = self._request_webpage( + url, redirect_id, note='Downloading redirect page') + + url = urlh.url + url_parsed = urllib.parse.urlparse(url) + if url_parsed.port == 443: + url = url_parsed._replace(netloc=url_parsed.hostname).geturl() + + self.to_screen('Following redirect: %s' % url) + return self.url_result(url)
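One subtlety in the shortener: a tun.in redirect can resolve to a URL with an explicit :443 port, which the other TuneIn extractors' _VALID_URL patterns would not match, so the port is stripped before re-dispatching. A minimal sketch of that normalization:

import urllib.parse

def drop_explicit_443(url):
    parsed = urllib.parse.urlparse(url)
    if parsed.port == 443:
        # netloc 'tunein.com:443' -> 'tunein.com'
        parsed = parsed._replace(netloc=parsed.hostname)
    return parsed.geturl()

print(drop_explicit_443('https://tunein.com:443/radio/Jazz24-885-s34682/'))
# -> https://tunein.com/radio/Jazz24-885-s34682/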
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tunepk.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tunepk.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tunepk.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tunepk.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/turbo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/turbo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/turbo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/turbo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/turner.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/turner.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/turner.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/turner.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tv2.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tv2.py new file mode 100644 index 0000000..f6b452d --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tv2.py @@ -0,0 +1,322 @@ +import re + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + determine_ext, + ExtractorError, + int_or_none, + float_or_none, + js_to_json, + parse_iso8601, + remove_end, + strip_or_none, + try_get, +) + + +class TV2IE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?tv2\.no/v(?:ideo)?\d*/(?:[^?#]+/)*(?P<id>\d+)' + _TESTS = [{ + 'url': 'http://www.tv2.no/v/1791207/', + 'info_dict': { + 'id': '1791207', + 'ext': 'mp4', + 'title': 'Her kolliderer romsonden med asteroiden ', + 'description': 'En romsonde har krasjet inn i en asteroide i verdensrommet. Kollisjonen skjedde klokken 01:14 natt til tirsdag 27. september norsk tid. \n\nNasa kaller det sitt første forsøk på planetforsvar.', + 'timestamp': 1664238190, + 'upload_date': '20220927', + 'duration': 146, + 'thumbnail': r're:^https://.*$', + 'view_count': int, + 'categories': list, + }, + }, { + 'url': 'http://www.tv2.no/v2/916509', + 'only_matching': True, + }, { + 'url': 'https://www.tv2.no/video/nyhetene/her-kolliderer-romsonden-med-asteroiden/1791207/', + 'only_matching': True, + }] + _PROTOCOLS = ('HLS', 'DASH') + _GEO_COUNTRIES = ['NO'] + + def _real_extract(self, url): + video_id = self._match_id(url) + asset = self._download_json('https://sumo.tv2.no/rest/assets/' + video_id, video_id, + 'Downloading metadata JSON') + title = asset['title'] + is_live = asset.get('live') is True + + formats = [] + format_urls = [] + for protocol in self._PROTOCOLS: + try: + data = self._download_json('https://api.sumo.tv2.no/play/%s?stream=%s' % (video_id, protocol), + video_id, 'Downloading playback JSON', + headers={'content-type': 'application/json'}, + data='{"device":{"id":"1-1-1","name":"Nettleser (HTML)"}}'.encode())['playback'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + error = self._parse_json(e.cause.response.read().decode(), video_id)['error'] + error_code = error.get('code') + if error_code == 'ASSET_PLAYBACK_INVALID_GEO_LOCATION': + self.raise_geo_restricted(countries=self._GEO_COUNTRIES) + elif error_code == 'SESSION_NOT_AUTHENTICATED': + self.raise_login_required() + raise ExtractorError(error['description']) + raise + items = data.get('streams', []) + for item in items: + video_url = item.get('url') + if not video_url or video_url in format_urls: + continue + format_id = '%s-%s' % (protocol.lower(), item.get('type')) + if not self._is_valid_url(video_url, video_id, format_id): + continue + format_urls.append(video_url) + ext = determine_ext(video_url) + if ext == 'f4m': + formats.extend(self._extract_f4m_formats( + video_url, video_id, f4m_id=format_id, fatal=False)) + elif ext == 'm3u8': + if not data.get('drmProtected'): + formats.extend(self._extract_m3u8_formats( + video_url, video_id, 'mp4', live=is_live, m3u8_id=format_id, fatal=False)) + elif ext == 'mpd': + formats.extend(self._extract_mpd_formats( + video_url, video_id, format_id, fatal=False)) + elif ext == 'ism' or video_url.endswith('.ism/Manifest'): + pass + else: + formats.append({ + 'url': video_url, + 'format_id': format_id, + }) + if not formats and data.get('drmProtected'): + self.report_drm(video_id) + + thumbnails = [{ + 'id': type, + 'url': thumb_url, + } for type, thumb_url in (asset.get('images') or {}).items()] + + return { + 'id': video_id, + 'url': video_url, + 'title': title, + 'description': strip_or_none(asset.get('description')), + 'thumbnails': thumbnails, + 'timestamp': parse_iso8601(asset.get('live_broadcast_time') or asset.get('update_time')), + 'duration': float_or_none(asset.get('accurateDuration') or asset.get('duration')), + 'view_count': int_or_none(asset.get('views')), + 'categories': asset.get('tags', '').split(','), + 'formats': formats, + 'is_live': is_live, + }
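TV2IE tries each protocol in turn and relies on the play endpoint's 401 body to tell geo blocks apart from missing logins; anything else is surfaced as a plain error. A small sketch of just that dispatch, with illustrative names (the real code raises via raise_geo_restricted / raise_login_required):

import json

def classify_play_error(status, body):
    if status != 401:
        return 'unexpected'  # re-raised unchanged by the extractor
    code = json.loads(body)['error'].get('code')
    if code == 'ASSET_PLAYBACK_INVALID_GEO_LOCATION':
        return 'geo_restricted'
    if code == 'SESSION_NOT_AUTHENTICATED':
        return 'login_required'
    return 'other'  # raised with the error description

print(classify_play_error(401, '{"error": {"code": "SESSION_NOT_AUTHENTICATED"}}'))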
+ + +class TV2ArticleIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?tv2\.no/(?!v(?:ideo)?\d*/)[^?#]+/(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.tv2.no/underholdning/forraeder/katarina-flatland-angrer-etter-forraeder-exit/15095188/', + 'info_dict': { + 'id': '15095188', + 'title': 'Katarina Flatland angrer etter Forræder-exit', + 'description': 'SANDEFJORD (TV 2): Katarina Flatland (33) måtte følge i sine fars fotspor, da hun ble forvist fra Forræder.', + }, + 'playlist_count': 2, + }, { + 'url': 'http://www.tv2.no/a/6930542', + 'only_matching': True, + }] + + def _real_extract(self, url): + playlist_id = self._match_id(url) + + webpage = self._download_webpage(url, playlist_id) + + # Old embed pattern (looks unused nowadays) + assets = re.findall(r'data-assetid=["\'](\d+)', webpage) + + if not assets: + # New embed pattern + for v in re.findall(r'(?s)(?:TV2ContentboxVideo|TV2\.TV2Video)\(({.+?})\)', webpage): + video = self._parse_json( + v, playlist_id, transform_source=js_to_json, fatal=False) + if not video: + continue + asset = video.get('assetId') + if asset: + assets.append(asset) + + entries = [ + self.url_result('http://www.tv2.no/v/%s' % asset_id, 'TV2') + for asset_id in assets] + + title = remove_end(self._og_search_title(webpage), ' - TV2.no') + description = remove_end(self._og_search_description(webpage), ' - TV2.no') + + return self.playlist_result(entries, playlist_id, title, description) + + +class KatsomoIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?(?:katsomo|mtv(uutiset)?)\.fi/(?:sarja/[0-9a-z-]+-\d+/[0-9a-z-]+-|(?:#!/)?jakso/(?:\d+/[^/]+/)?|video/prog)(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.mtv.fi/sarja/mtv-uutiset-live-33001002003/lahden-pelicans-teki-kovan-ratkaisun-ville-nieminen-pihalle-1181321', + 'info_dict': { + 'id': '1181321', + 'ext': 'mp4', + 'title': 'Lahden Pelicans teki kovan ratkaisun – Ville Nieminen pihalle', + 'description': 'Päätöksen teki Pelicansin hallitus.', + 'timestamp': 1575116484, + 'upload_date': '20191130', + 'duration': 37.12, + 'view_count': int, + 'categories': list, + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + }, { + 'url': 'http://www.katsomo.fi/#!/jakso/33001005/studio55-fi/658521/jukka-kuoppamaki-tekee-yha-lauluja-vaikka-lentokoneessa', + 'only_matching': True, + }, { + 'url': 'https://www.mtvuutiset.fi/video/prog1311159', + 'only_matching': True, + }, { + 'url': 'https://www.katsomo.fi/#!/jakso/1311159', + 'only_matching': True, + }] + _API_DOMAIN = 'api.katsomo.fi' + _PROTOCOLS = ('HLS', 'MPD') + _GEO_COUNTRIES = ['FI'] + + def _real_extract(self, url): + video_id = self._match_id(url) + api_base = 'http://%s/api/web/asset/%s' % (self._API_DOMAIN, video_id) + + asset = self._download_json( + api_base + '.json', video_id, + 'Downloading metadata JSON')['asset'] + title = asset.get('subtitle') or asset['title'] + is_live = asset.get('live') is True + + formats = [] + format_urls = [] + for protocol in self._PROTOCOLS: + try: + data = self._download_json( + api_base + '/play.json?protocol=%s&videoFormat=SMIL+ISMUSP' % protocol, + video_id, 'Downloading play JSON')['playback'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + error = self._parse_json(e.cause.response.read().decode(), video_id)['error'] + error_code = error.get('code') + if error_code == 'ASSET_PLAYBACK_INVALID_GEO_LOCATION': + self.raise_geo_restricted(countries=self._GEO_COUNTRIES) + elif error_code == 'SESSION_NOT_AUTHENTICATED': + self.raise_login_required() + raise ExtractorError(error['description']) + raise + items = try_get(data, lambda x: x['items']['item']) + if not items: + continue + if not
isinstance(items, list): + items = [items] + for item in items: + if not isinstance(item, dict): + continue + video_url = item.get('url') + if not video_url or video_url in format_urls: + continue + format_id = '%s-%s' % (protocol.lower(), item.get('mediaFormat')) + if not self._is_valid_url(video_url, video_id, format_id): + continue + format_urls.append(video_url) + ext = determine_ext(video_url) + if ext == 'f4m': + formats.extend(self._extract_f4m_formats( + video_url, video_id, f4m_id=format_id, fatal=False)) + elif ext == 'm3u8': + if not data.get('drmProtected'): + formats.extend(self._extract_m3u8_formats( + video_url, video_id, 'mp4', live=is_live, m3u8_id=format_id, fatal=False)) + elif ext == 'mpd': + formats.extend(self._extract_mpd_formats( + video_url, video_id, format_id, fatal=False)) + elif ext == 'ism' or video_url.endswith('.ism/Manifest'): + pass + else: + formats.append({ + 'url': video_url, + 'format_id': format_id, + 'tbr': int_or_none(item.get('bitrate')), + 'filesize': int_or_none(item.get('fileSize')), + }) + if not formats and data.get('drmProtected'): + self.report_drm(video_id) + + thumbnails = [{ + 'id': thumbnail.get('@type'), + 'url': thumbnail.get('url'), + } for _, thumbnail in (asset.get('imageVersions') or {}).items()] + + return { + 'id': video_id, + 'url': video_url, + 'title': title, + 'description': strip_or_none(asset.get('description')), + 'thumbnails': thumbnails, + 'timestamp': parse_iso8601(asset.get('createTime')), + 'duration': float_or_none(asset.get('accurateDuration') or asset.get('duration')), + 'view_count': int_or_none(asset.get('views')), + 'categories': asset.get('keywords', '').split(','), + 'formats': formats, + 'is_live': is_live, + } + + +class MTVUutisetArticleIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)mtvuutiset\.fi/artikkeli/[^/]+/(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.mtvuutiset.fi/artikkeli/tallaisia-vaurioita-viking-amorellassa-on-useamman-osaston-alla-vetta/7931384', + 'info_dict': { + 'id': '1311159', + 'ext': 'mp4', + 'title': 'Viking Amorellan matkustajien evakuointi on alkanut – tältä operaatio näyttää laivalla', + 'description': 'Viking Amorellan matkustajien evakuointi on alkanut – tältä operaatio näyttää laivalla', + 'timestamp': 1600608966, + 'upload_date': '20200920', + 'duration': 153.7886666, + 'view_count': int, + 'categories': list, + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + }, { + # multiple Youtube embeds + 'url': 'https://www.mtvuutiset.fi/artikkeli/50-vuotta-subarun-vastaiskua/6070962', + 'only_matching': True, + }] + + def _real_extract(self, url): + article_id = self._match_id(url) + article = self._download_json( + 'http://api.mtvuutiset.fi/mtvuutiset/api/json/' + article_id, + article_id) + + def entries(): + for video in (article.get('videos') or []): + video_type = video.get('videotype') + video_url = video.get('url') + if not (video_url and video_type in ('katsomo', 'youtube')): + continue + yield self.url_result( + video_url, video_type.capitalize(), video.get('video_id')) + + return self.playlist_result( + entries(), article_id, article.get('title'), article.get('description')))
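MTVUutisetArticleIE never downloads media itself: it walks the article JSON and re-dispatches each embedded video to the Katsomo or YouTube extractor by videotype. A toy version of just that filter (the sample data is made up):

def article_entries(article):
    for video in article.get('videos') or []:
        if video.get('url') and video.get('videotype') in ('katsomo', 'youtube'):
            yield video['url']

demo = {'videos': [
    {'videotype': 'katsomo', 'url': 'https://www.katsomo.fi/#!/jakso/1311159'},
    {'videotype': 'image', 'url': 'https://example.invalid/cover.jpg'},
]}
print(list(article_entries(demo)))  # only the katsomo embed survives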
diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tv24ua.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tv24ua.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tv24ua.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tv24ua.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tv2dk.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tv2dk.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tv2dk.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tv2dk.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tv2hu.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tv2hu.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tv2hu.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tv2hu.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tv4.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tv4.py new file mode 100644 index 0000000..10a2fe6 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tv4.py @@ -0,0 +1,149 @@ +import re + +from .common import InfoExtractor +from ..utils import ( + bool_or_none, + int_or_none, + parse_iso8601, + traverse_obj, + url_or_none, +) + + +class TV4IE(InfoExtractor): + IE_DESC = 'tv4.se and tv4play.se' + _VALID_URL = r'''(?x)https?://(?:www\.)? + (?: + tv4\.se/(?:[^/]+)/klipp/(?:.*)-| + tv4play\.se/ + (?: + (?:program|barn)/(?:(?:[^/]+/){1,2}|(?:[^\?]+)\?video_id=)| + iframe/video/| + film/| + sport/| + ) + )(?P<id>[0-9]+)''' + _GEO_BYPASS = False + _TESTS = [ + { + # not geo-restricted + 'url': 'http://www.tv4.se/kalla-fakta/klipp/kalla-fakta-5-english-subtitles-2491650', + 'md5': 'cb837212f342d77cec06e6dad190e96d', + 'info_dict': { + 'id': '2491650', + 'ext': 'mp4', + 'title': 'Kalla Fakta 5 (english subtitles)', + 'description': '2491650', + 'series': 'Kalla fakta', + 'duration': 1335, + 'thumbnail': r're:^https?://[^/?#]+/api/v2/img/', + 'timestamp': 1385373240, + 'upload_date': '20131125', + }, + 'params': {'skip_download': 'm3u8'}, + 'expected_warnings': ['Unable to download f4m manifest'], + }, + { + 'url': 'http://www.tv4play.se/iframe/video/3054113', + 'md5': 'cb837212f342d77cec06e6dad190e96d', + 'info_dict': { + 'id': '3054113', + 'ext': 'mp4', + 'title': 'Så här jobbar ficktjuvarna - se avslöjande bilder', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'Unika bilder avslöjar hur turisternas fickor vittjas mitt på Stockholms central.
Två experter på ficktjuvarna avslöjar knepen du ska se upp för.', + 'timestamp': int, + 'upload_date': '20150130', + }, + 'skip': '404 Not Found', + }, + { + 'url': 'http://www.tv4play.se/sport/3060959', + 'only_matching': True, + }, + { + 'url': 'http://www.tv4play.se/film/2378136', + 'only_matching': True, + }, + { + 'url': 'http://www.tv4play.se/barn/looney-tunes?video_id=3062412', + 'only_matching': True, + }, + { + 'url': 'http://www.tv4play.se/program/farang/3922081', + 'only_matching': True, + }, + { + 'url': 'https://www.tv4play.se/program/nyheterna/avsnitt/13315940', + 'only_matching': True, + } + ] + + def _call_api(self, endpoint, video_id, headers=None, query={}): + return self._download_json( + f'https://playback2.a2d.tv/{endpoint}/{video_id}', video_id, + f'Downloading {endpoint} API JSON', headers=headers, query={ + 'service': 'tv4', + 'device': 'browser', + 'protocol': 'hls', + **query, + }) + + def _real_extract(self, url): + video_id = self._match_id(url) + + info = traverse_obj(self._call_api('asset', video_id, query={ + 'protocol': 'hls,dash', + 'drm': 'widevine', + }), ('metadata', {dict})) or {} + + manifest_url = self._call_api( + 'play', video_id, headers=self.geo_verification_headers())['playbackItem']['manifestUrl'] + + formats, subtitles = [], {} + + fmts, subs = self._extract_m3u8_formats_and_subtitles( + manifest_url, video_id, 'mp4', + 'm3u8_native', m3u8_id='hls', fatal=False) + formats.extend(fmts) + subtitles = self._merge_subtitles(subtitles, subs) + + fmts, subs = self._extract_mpd_formats_and_subtitles( + manifest_url.replace('.m3u8', '.mpd'), + video_id, mpd_id='dash', fatal=False) + formats.extend(fmts) + subtitles = self._merge_subtitles(subtitles, subs) + + fmts = self._extract_f4m_formats( + manifest_url.replace('.m3u8', '.f4m'), + video_id, f4m_id='hds', fatal=False) + formats.extend(fmts) + + fmts, subs = self._extract_ism_formats_and_subtitles( + re.sub(r'\.ism/.*?\.m3u8', r'.ism/Manifest', manifest_url), + video_id, ism_id='mss', fatal=False) + formats.extend(fmts) + subtitles = self._merge_subtitles(subtitles, subs) + + if not formats and info.get('is_geo_restricted'): + self.raise_geo_restricted( + 'This video is not available from your location due to geo-restriction, or not being authenticated', + countries=['SE']) + + return { + 'id': video_id, + 'formats': formats, + 'subtitles': subtitles, + **traverse_obj(info, { + 'title': ('title', {str}), + 'description': ('description', {str}), + 'timestamp': (('broadcast_date_time', 'broadcastDateTime'), {parse_iso8601}), + 'duration': ('duration', {int_or_none}), + 'thumbnail': ('image', {url_or_none}), + 'is_live': ('isLive', {bool_or_none}), + 'series': ('seriesTitle', {str}), + 'season_number': ('seasonNumber', {int_or_none}), + 'episode': ('episodeTitle', {str}), + 'episode_number': ('episodeNumber', {int_or_none}), + }, get_all=False), + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tv5mondeplus.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tv5mondeplus.py new file mode 100644 index 0000000..4da1b26 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tv5mondeplus.py @@ -0,0 +1,181 @@ +import urllib.parse + +from .common import InfoExtractor +from ..utils import ( + determine_ext, + extract_attributes, + int_or_none, + parse_duration, + traverse_obj, + try_get, + url_or_none, +) + + +class TV5MondePlusIE(InfoExtractor): + IE_DESC = 'TV5MONDE+' + _VALID_URL = 
r'https?://(?:www\.)?(?:tv5mondeplus|revoir\.tv5monde)\.com/toutes-les-videos/[^/]+/(?P<id>[^/?#]+)' + _TESTS = [{ + # movie + 'url': 'https://revoir.tv5monde.com/toutes-les-videos/cinema/les-novices', + 'md5': 'c86f60bf8b75436455b1b205f9745955', + 'info_dict': { + 'id': 'ZX0ipMyFQq_6D4BA7b', + 'display_id': 'les-novices', + 'ext': 'mp4', + 'title': 'Les novices', + 'description': 'md5:2e7c33ba3ad48dabfcc2a956b88bde2b', + 'upload_date': '20230821', + 'thumbnail': 'https://revoir.tv5monde.com/uploads/media/video_thumbnail/0738/60/01e952b7ccf36b7c6007ec9131588954ab651de9.jpeg', + 'duration': 5177, + 'episode': 'Les novices', + }, + }, { + # series episode + 'url': 'https://revoir.tv5monde.com/toutes-les-videos/series-fictions/opj-les-dents-de-la-terre-2', + 'info_dict': { + 'id': 'wJ0eeEPozr_6D4BA7b', + 'display_id': 'opj-les-dents-de-la-terre-2', + 'ext': 'mp4', + 'title': "OPJ - Les dents de la Terre (2)", + 'description': 'md5:288f87fd68d993f814e66e60e5302d9d', + 'upload_date': '20230823', + 'series': 'OPJ', + 'episode': 'Les dents de la Terre (2)', + 'duration': 2877, + 'thumbnail': 'https://dl-revoir.tv5monde.com/images/1a/5753448.jpg' + }, + }, { + # movie + 'url': 'https://revoir.tv5monde.com/toutes-les-videos/cinema/ceux-qui-travaillent', + 'md5': '32fa0cde16a4480d1251502a66856d5f', + 'info_dict': { + 'id': 'dc57a011-ec4b-4648-2a9a-4f03f8352ed3', + 'display_id': 'ceux-qui-travaillent', + 'ext': 'mp4', + 'title': 'Ceux qui travaillent', + 'description': 'md5:570e8bb688036ace873b2d50d24c026d', + 'upload_date': '20210819', + }, + 'skip': 'no longer available', + }, { + # series episode + 'url': 'https://revoir.tv5monde.com/toutes-les-videos/series-fictions/vestiaires-caro-actrice', + 'info_dict': { + 'id': '9e9d599e-23af-6915-843e-ecbf62e97925', + 'display_id': 'vestiaires-caro-actrice', + 'ext': 'mp4', + 'title': "Vestiaires - Caro actrice", + 'description': 'md5:db15d2e1976641e08377f942778058ea', + 'upload_date': '20210819', + 'series': "Vestiaires", + 'episode': 'Caro actrice', + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'no longer available', + }, { + 'url': 'https://revoir.tv5monde.com/toutes-les-videos/series-fictions/neuf-jours-en-hiver-neuf-jours-en-hiver', + 'only_matching': True, + }, { + 'url': 'https://revoir.tv5monde.com/toutes-les-videos/info-societe/le-journal-de-la-rts-edition-du-30-01-20-19h30', + 'only_matching': True, + }] + _GEO_BYPASS = False + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + + if ">Ce programme n'est malheureusement pas disponible pour votre zone géographique.<" in webpage: + self.raise_geo_restricted(countries=['FR']) + + title = episode = self._html_search_regex(r'<h1>
([^<]+)', webpage, 'title') + vpl_data = extract_attributes(self._search_regex( + r'(<[^>]+class="video_player_loader"[^>]+>)', + webpage, 'video player loader')) + + video_files = self._parse_json( + vpl_data['data-broadcast'], display_id) + formats = [] + video_id = None + + def process_video_files(v): + nonlocal video_id + for video_file in v: + v_url = video_file.get('url') + if not v_url: + continue + if video_file.get('type') == 'application/deferred': + d_param = urllib.parse.quote(v_url) + token = video_file.get('token') + if not token: + continue + deferred_json = self._download_json( + f'https://api.tv5monde.com/player/asset/{d_param}/resolve?condenseKS=true', display_id, + note='Downloading deferred info', headers={'Authorization': f'Bearer {token}'}, fatal=False) + v_url = traverse_obj(deferred_json, (0, 'url', {url_or_none})) + if not v_url: + continue + # data-guid from the webpage isn't stable, use the material id from the json urls + video_id = self._search_regex( + r'materials/([\da-zA-Z]{10}_[\da-fA-F]{7})/', v_url, 'video id', default=None) + process_video_files(deferred_json) + + video_format = video_file.get('format') or determine_ext(v_url) + if video_format == 'm3u8': + formats.extend(self._extract_m3u8_formats( + v_url, display_id, 'mp4', 'm3u8_native', + m3u8_id='hls', fatal=False)) + elif video_format == 'mpd': + formats.extend(self._extract_mpd_formats( + v_url, display_id, fatal=False)) + else: + formats.append({ + 'url': v_url, + 'format_id': video_format, + }) + + process_video_files(video_files) + + metadata = self._parse_json( + vpl_data['data-metadata'], display_id) + duration = (int_or_none(try_get(metadata, lambda x: x['content']['duration'])) + or parse_duration(self._html_search_meta('duration', webpage))) + + description = self._html_search_regex( + r'(?s)<div[^>]+class=["\']episode-texte[^>]+>(.+?)</div>', webpage, + 'description', fatal=False) + + series = self._html_search_regex( + r'<p[^>]+class=["\']episode-emission[^>]+>([^<]+)', webpage, + 'series', default=None) + + if series and series != title: + title = '%s - %s' % (series, title) + + upload_date = self._search_regex( + r'(?:date_publication|publish_date)["\']\s*:\s*["\'](\d{4}_\d{2}_\d{2})', + webpage, 'upload date', default=None) + if upload_date: + upload_date = upload_date.replace('_', '') + + if not video_id: + video_id = self._search_regex( + (r'data-guid=["\']([\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})', + r'id_contenu["\']\s:\s*(\d+)'), webpage, 'video id', + default=display_id) + + return { + 'id': video_id, + 'display_id': display_id, + 'title': title, + 'description': description, + 'thumbnail': vpl_data.get('data-image'), + 'duration': duration, + 'upload_date': upload_date, + 'formats': formats, + 'series': series, + 'episode': episode, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tv5unis.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tv5unis.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tv5unis.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tv5unis.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tva.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tva.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tva.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tva.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tvanouvelles.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvanouvelles.py similarity index 100%
rename from lib/python3.11/site-packages/yt_dlp/extractor/tvanouvelles.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tvanouvelles.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tvc.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvc.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tvc.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tvc.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tver.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tver.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tver.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tver.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tvigle.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvigle.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tvigle.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tvigle.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tviplayer.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tviplayer.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tviplayer.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tviplayer.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tvland.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvland.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tvland.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tvland.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tvn24.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvn24.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tvn24.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tvn24.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tvnet.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvnet.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tvnet.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tvnet.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tvnoe.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvnoe.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tvnoe.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tvnoe.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tvnow.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvnow.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tvnow.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tvnow.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tvopengr.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvopengr.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tvopengr.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tvopengr.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tvp.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvp.py new file mode 100644 index 0000000..2aa0dd8 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvp.py @@ -0,0 +1,629 @@ +import itertools +import random +import re + +from .common import InfoExtractor +from ..utils import ( + clean_html, + determine_ext, + dict_get, + ExtractorError, + 
int_or_none, + js_to_json, + str_or_none, + strip_or_none, + traverse_obj, + try_get, + url_or_none, +) + + +class TVPIE(InfoExtractor): + IE_NAME = 'tvp' + IE_DESC = 'Telewizja Polska' + _VALID_URL = r'https?://(?:[^/]+\.)?(?:tvp(?:parlament)?\.(?:pl|info)|tvpworld\.com|swipeto\.pl)/(?:(?!\d+/)[^/]+/)*(?P<id>\d+)' + + _TESTS = [{ + # TVPlayer 2 in js wrapper + 'url': 'https://swipeto.pl/64095316/uliczny-foxtrot-wypozyczalnia-kaset-kto-pamieta-dvdvideo', + 'info_dict': { + 'id': '64095316', + 'ext': 'mp4', + 'title': 'Uliczny Foxtrot — Wypożyczalnia kaset. Kto pamięta DVD-Video?', + 'age_limit': 0, + 'duration': 374, + 'thumbnail': r're:https://.+', + }, + 'expected_warnings': [ + 'Failed to download ISM manifest: HTTP Error 404: Not Found', + 'Failed to download m3u8 information: HTTP Error 404: Not Found', + ], + }, { + # TVPlayer legacy + 'url': 'https://www.tvp.pl/polska-press-video-uploader/wideo/62042351', + 'info_dict': { + 'id': '62042351', + 'ext': 'mp4', + 'title': 'Wideo', + 'description': 'Wideo Kamera', + 'duration': 24, + 'age_limit': 0, + 'thumbnail': r're:https://.+', + }, + }, { + # TVPlayer 2 in iframe + 'url': 'https://wiadomosci.tvp.pl/50725617/dzieci-na-sprzedaz-dla-homoseksualistow', + 'info_dict': { + 'id': '50725617', + 'ext': 'mp4', + 'title': 'Dzieci na sprzedaż dla homoseksualistów', + 'description': 'md5:7d318eef04e55ddd9f87a8488ac7d590', + 'age_limit': 12, + 'duration': 259, + 'thumbnail': r're:https://.+', + }, + }, { + # TVPlayer 2 in client-side rendered website (regional; window.__newsData) + 'url': 'https://warszawa.tvp.pl/25804446/studio-yayo', + 'info_dict': { + 'id': '25804446', + 'ext': 'mp4', + 'title': 'Studio Yayo', + 'upload_date': '20160616', + 'timestamp': 1466075700, + 'age_limit': 0, + 'duration': 20, + 'thumbnail': r're:https://.+', + }, + 'skip': 'Geo-blocked outside PL', + }, { + # TVPlayer 2 in client-side rendered website (tvp.info; window.__videoData) + 'url': 'https://www.tvp.info/52880236/09042021-0800', + 'info_dict': { + 'id': '52880236', + 'ext': 'mp4', + 'title': '09.04.2021, 08:00', + 'age_limit': 0, + 'thumbnail': r're:https://.+', + }, + 'skip': 'Geo-blocked outside PL', + }, { + # client-side rendered (regional) program (playlist) page + 'url': 'https://opole.tvp.pl/9660819/rozmowa-dnia', + 'info_dict': { + 'id': '9660819', + 'description': 'Od poniedziałku do piątku o 18:55', + 'title': 'Rozmowa dnia', + }, + 'playlist_mincount': 1800, + 'params': { + 'skip_download': True, + } + }, { + # ABC-specific video embedding + # moved to https://bajkowakraina.tvp.pl/wideo/50981130,teleranek,51027049,zubr,51116450 + 'url': 'https://abc.tvp.pl/48636269/zubry-odc-124', + 'info_dict': { + 'id': '48320456', + 'ext': 'mp4', + 'title': 'Teleranek, Żubr', + }, + 'skip': 'unavailable', + }, { + # yet another vue page + 'url': 'https://jp2.tvp.pl/46925618/filmy', + 'info_dict': { + 'id': '46925618', + 'title': 'Filmy', + }, + 'playlist_mincount': 19, + }, { + 'url': 'http://vod.tvp.pl/seriale/obyczajowe/na-sygnale/sezon-2-27-/odc-39/17834272', + 'only_matching': True, + }, { + 'url': 'http://wiadomosci.tvp.pl/25169746/24052016-1200', + 'only_matching': True, + }, { + 'url': 'http://krakow.tvp.pl/25511623/25lecie-mck-wyjatkowe-miejsce-na-mapie-krakowa', + 'only_matching': True, + }, { + 'url': 'http://teleexpress.tvp.pl/25522307/wierni-wzieli-udzial-w-procesjach', + 'only_matching': True, + }, { + 'url': 'http://sport.tvp.pl/25522165/krychowiak-uspokaja-w-sprawie-kontuzji-dwa-tygodnie-to-maksimum', + 'only_matching': True, + }, { + 'url':
'http://www.tvp.info/25511919/trwa-rewolucja-wladza-zdecydowala-sie-na-pogwalcenie-konstytucji', + 'only_matching': True, + }, { + 'url': 'https://tvp.info/49193823/teczowe-flagi-na-pomnikach-prokuratura-wszczela-postepowanie-wieszwiecej', + 'only_matching': True, + }, { + 'url': 'https://www.tvpparlament.pl/retransmisje-vod/inne/wizyta-premiera-mateusza-morawieckiego-w-firmie-berotu-sp-z-oo/48857277', + 'only_matching': True, + }, { + 'url': 'https://tvpworld.com/48583640/tescos-polish-business-bought-by-danish-chain-netto', + 'only_matching': True, + }] + + def _parse_vue_website_data(self, webpage, page_id): + website_data = self._search_regex([ + # website - regiony, tvp.info + # directory - jp2.tvp.pl + r'window\.__(?:website|directory)Data\s*=\s*({(?:.|\s)+?});', + ], webpage, 'website data') + if not website_data: + return None + return self._parse_json(website_data, page_id, transform_source=js_to_json) + + def _extract_vue_video(self, video_data, page_id=None): + if isinstance(video_data, str): + video_data = self._parse_json(video_data, page_id, transform_source=js_to_json) + thumbnails = [] + image = video_data.get('image') + if image: + for thumb in (image if isinstance(image, list) else [image]): + thmb_url = str_or_none(thumb.get('url')) + if thmb_url: + thumbnails.append({ + 'url': thmb_url, + }) + is_website = video_data.get('type') == 'website' + if is_website: + url = video_data['url'] + else: + url = 'tvp:' + str_or_none(video_data.get('_id') or page_id) + return { + '_type': 'url_transparent', + 'id': str_or_none(video_data.get('_id') or page_id), + 'url': url, + 'ie_key': (TVPIE if is_website else TVPEmbedIE).ie_key(), + 'title': str_or_none(video_data.get('title')), + 'description': str_or_none(video_data.get('lead')), + 'timestamp': int_or_none(video_data.get('release_date_long')), + 'duration': int_or_none(video_data.get('duration')), + 'thumbnails': thumbnails, + } + + def _handle_vuejs_page(self, url, webpage, page_id): + # vue client-side rendered sites (all regional pages + tvp.info) + video_data = self._search_regex([ + r'window\.__(?:news|video)Data\s*=\s*({(?:.|\s)+?})\s*;', + ], webpage, 'video data', default=None) + if video_data: + return self._extract_vue_video(video_data, page_id=page_id) + # paged playlists + website_data = self._parse_vue_website_data(webpage, page_id) + if website_data: + entries = self._vuejs_entries(url, website_data, page_id) + + return { + '_type': 'playlist', + 'id': page_id, + 'title': str_or_none(website_data.get('title')), + 'description': str_or_none(website_data.get('lead')), + 'entries': entries, + } + raise ExtractorError('Could not extract video/website data') + + def _vuejs_entries(self, url, website_data, page_id): + + def extract_videos(wd): + if wd.get('latestVideo'): + yield self._extract_vue_video(wd['latestVideo']) + for video in wd.get('videos') or []: + yield self._extract_vue_video(video) + for video in wd.get('items') or []: + yield self._extract_vue_video(video) + + yield from extract_videos(website_data) + + if website_data.get('items_total_count') > website_data.get('items_per_page'): + for page in itertools.count(2): + page_website_data = self._parse_vue_website_data( + self._download_webpage(url, page_id, note='Downloading page #%d' % page, + query={'page': page}), + page_id) + if not page_website_data.get('videos') and not page_website_data.get('items'): + break + yield from extract_videos(page_website_data) + + def _real_extract(self, url): + page_id = self._match_id(url) + webpage, urlh = 
self._download_webpage_handle(url, page_id) + + # The URL may redirect to a VOD + # example: https://vod.tvp.pl/48463890/wadowickie-spotkania-z-janem-pawlem-ii + for ie_cls in (TVPVODSeriesIE, TVPVODVideoIE): + if ie_cls.suitable(urlh.url): + return self.url_result(urlh.url, ie=ie_cls.ie_key(), video_id=page_id) + + if re.search( + r'window\.__(?:video|news|website|directory)Data\s*=', + webpage): + return self._handle_vuejs_page(url, webpage, page_id) + + # classic server-side rendered sites + video_id = self._search_regex([ + r']+src="[^"]*?embed\.php\?(?:[^&]+&)*ID=(\d+)', + r']+src="[^"]*?object_id=(\d+)', + r"object_id\s*:\s*'(\d+)'", + r'data-video-id="(\d+)"', + + # abc.tvp.pl - somehow there are more than one video IDs that seem to be the same video? + # the first one is referenced to as "copyid", and seems to be unused by the website + r'', + ], webpage, 'video id', default=page_id) + return { + '_type': 'url_transparent', + 'url': 'tvp:' + video_id, + 'description': self._og_search_description( + webpage, default=None) or (self._html_search_meta( + 'description', webpage, default=None) + if '//s.tvp.pl/files/portal/v' in webpage else None), + 'thumbnail': self._og_search_thumbnail(webpage, default=None), + 'ie_key': 'TVPEmbed', + } + + +class TVPStreamIE(InfoExtractor): + IE_NAME = 'tvp:stream' + _VALID_URL = r'(?:tvpstream:|https?://(?:tvpstream\.vod|stream)\.tvp\.pl/(?:\?(?:[^&]+[&;])*channel_id=)?)(?P\d*)' + _TESTS = [{ + 'url': 'https://stream.tvp.pl/?channel_id=56969941', + 'only_matching': True, + }, { + # untestable as "video" id changes many times across a day + 'url': 'https://tvpstream.vod.tvp.pl/?channel_id=1455', + 'only_matching': True, + }, { + 'url': 'tvpstream:39821455', + 'only_matching': True, + }, { + # the default stream when you provide no channel_id, most probably TVP Info + 'url': 'tvpstream:', + 'only_matching': True, + }, { + 'url': 'https://tvpstream.vod.tvp.pl/', + 'only_matching': True, + }] + + def _real_extract(self, url): + channel_id = self._match_id(url) + channel_url = self._proto_relative_url('//stream.tvp.pl/?channel_id=%s' % channel_id or 'default') + webpage = self._download_webpage(channel_url, channel_id or 'default', 'Downloading channel webpage') + channels = self._search_json( + r'window\.__channels\s*=', webpage, 'channel list', channel_id, + contains_pattern=r'\[\s*{(?s:.+)}\s*]') + channel = traverse_obj(channels, (lambda _, v: channel_id == str(v['id'])), get_all=False) if channel_id else channels[0] + audition = traverse_obj(channel, ('items', lambda _, v: v['is_live'] is True), get_all=False) + return { + '_type': 'url_transparent', + 'id': channel_id or channel['id'], + 'url': 'tvp:%s' % audition['video_id'], + 'title': audition.get('title'), + 'alt_title': channel.get('title'), + 'is_live': True, + 'ie_key': 'TVPEmbed', + } + + +class TVPEmbedIE(InfoExtractor): + IE_NAME = 'tvp:embed' + IE_DESC = 'Telewizja Polska' + _GEO_BYPASS = False + _VALID_URL = r'''(?x) + (?: + tvp: + |https?:// + (?:[^/]+\.)? + (?:tvp(?:parlament)?\.pl|tvp\.info|tvpworld\.com|swipeto\.pl)/ + (?:sess/ + (?:tvplayer\.php\?.*?object_id + |TVPlayer2/(?:embed|api)\.php\?.*[Ii][Dd]) + |shared/details\.php\?.*?object_id) + =) + (?P\d+) + ''' + _EMBED_REGEX = [rf'(?x)]+?src=(["\'])(?P{_VALID_URL[4:]})'] + + _TESTS = [{ + 'url': 'tvp:194536', + 'info_dict': { + 'id': '194536', + 'ext': 'mp4', + 'title': 'Czas honoru, odc. 
13 – WÅ‚adek', + 'description': 'md5:76649d2014f65c99477be17f23a4dead', + 'age_limit': 12, + 'duration': 2652, + 'series': 'Czas honoru', + 'episode': 'Episode 13', + 'episode_number': 13, + 'season': 'sezon 1', + 'thumbnail': r're:https://.+', + }, + }, { + 'url': 'https://www.tvp.pl/sess/tvplayer.php?object_id=51247504&autoplay=false', + 'info_dict': { + 'id': '51247504', + 'ext': 'mp4', + 'title': 'Razmova 091220', + 'duration': 876, + 'age_limit': 0, + 'thumbnail': r're:https://.+', + }, + }, { + # TVPlayer2 embed URL + 'url': 'https://tvp.info/sess/TVPlayer2/embed.php?ID=50595757', + 'only_matching': True, + }, { + 'url': 'https://wiadomosci.tvp.pl/sess/TVPlayer2/api.php?id=51233452', + 'only_matching': True, + }, { + # pulsembed on dziennik.pl + 'url': 'https://www.tvp.pl/shared/details.php?copy_id=52205981&object_id=52204505&autoplay=false&is_muted=false&allowfullscreen=true&template=external-embed/video/iframe-video.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + # it could be anything that is a valid JS function name + callback = random.choice(( + 'jebac_pis', + 'jebacpis', + 'ziobro', + 'sasin70', + 'sasin_przejebal_70_milionow_PLN', + 'tvp_is_a_state_propaganda_service', + )) + + webpage = self._download_webpage( + ('https://www.tvp.pl/sess/TVPlayer2/api.php?id=%s' + + '&@method=getTvpConfig&@callback=%s') % (video_id, callback), video_id) + + # stripping JSONP padding + datastr = webpage[15 + len(callback):-3] + if datastr.startswith('null,'): + error = self._parse_json(datastr[5:], video_id, fatal=False) + error_desc = traverse_obj(error, (0, 'desc')) + + if error_desc == 'Obiekt wymaga pÅ‚atnoÅ›ci': + raise ExtractorError('Video requires payment and log-in, but log-in is not implemented') + + raise ExtractorError(error_desc or 'unexpected JSON error') + + content = self._parse_json(datastr, video_id)['content'] + info = content['info'] + is_live = try_get(info, lambda x: x['isLive'], bool) + + if info.get('isGeoBlocked'): + # actual country list is not provided, we just assume it's always available in PL + self.raise_geo_restricted(countries=['PL']) + + formats = [] + for file in content['files']: + video_url = url_or_none(file.get('url')) + if not video_url: + continue + ext = determine_ext(video_url, None) + if ext == 'm3u8': + formats.extend(self._extract_m3u8_formats(video_url, video_id, m3u8_id='hls', fatal=False, live=is_live)) + elif ext == 'mpd': + if is_live: + # doesn't work with either ffmpeg or native downloader + continue + formats.extend(self._extract_mpd_formats(video_url, video_id, mpd_id='dash', fatal=False)) + elif ext == 'f4m': + formats.extend(self._extract_f4m_formats(video_url, video_id, f4m_id='hds', fatal=False)) + elif video_url.endswith('.ism/manifest'): + formats.extend(self._extract_ism_formats(video_url, video_id, ism_id='mss', fatal=False)) + else: + formats.append({ + 'format_id': 'direct', + 'url': video_url, + 'ext': ext or file.get('type'), + 'fps': int_or_none(traverse_obj(file, ('quality', 'fps'))), + 'tbr': int_or_none(traverse_obj(file, ('quality', 'bitrate')), scale=1000), + 'width': int_or_none(traverse_obj(file, ('quality', 'width'))), + 'height': int_or_none(traverse_obj(file, ('quality', 'height'))), + }) + + title = dict_get(info, ('subtitle', 'title', 'seoTitle')) + description = dict_get(info, ('description', 'seoDescription')) + thumbnails = [] + for thumb in content.get('posters') or (): + thumb_url = thumb.get('src') + if not thumb_url or '{width}' in thumb_url or 
'{height}' in thumb_url: + continue + thumbnails.append({ + 'url': thumb.get('src'), + 'width': thumb.get('width'), + 'height': thumb.get('height'), + }) + age_limit = try_get(info, lambda x: x['ageGroup']['minAge'], int) + if age_limit == 1: + age_limit = 0 + duration = try_get(info, lambda x: x['duration'], int) if not is_live else None + + subtitles = {} + for sub in content.get('subtitles') or []: + if not sub.get('url'): + continue + subtitles.setdefault(sub['lang'], []).append({ + 'url': sub['url'], + 'ext': sub.get('type'), + }) + + info_dict = { + 'id': video_id, + 'title': title, + 'description': description, + 'thumbnails': thumbnails, + 'age_limit': age_limit, + 'is_live': is_live, + 'duration': duration, + 'formats': formats, + 'subtitles': subtitles, + } + + # vod.tvp.pl + if info.get('vortalName') == 'vod': + info_dict.update({ + 'title': '%s, %s' % (info.get('title'), info.get('subtitle')), + 'series': info.get('title'), + 'season': info.get('season'), + 'episode_number': info.get('episode'), + }) + + return info_dict + + +class TVPVODBaseIE(InfoExtractor): + _API_BASE_URL = 'https://vod.tvp.pl/api/products' + + def _call_api(self, resource, video_id, query={}, **kwargs): + is_valid = lambda x: 200 <= x < 300 + document, urlh = self._download_json_handle( + f'{self._API_BASE_URL}/{resource}', video_id, + query={'lang': 'pl', 'platform': 'BROWSER', **query}, + expected_status=lambda x: is_valid(x) or 400 <= x < 500, **kwargs) + if is_valid(urlh.status): + return document + raise ExtractorError(f'Woronicza said: {document.get("code")} (HTTP {urlh.status})') + + def _parse_video(self, video, with_url=True): + info_dict = traverse_obj(video, { + 'id': ('id', {str_or_none}), + 'title': 'title', + 'age_limit': ('rating', {int_or_none}), + 'duration': ('duration', {int_or_none}), + 'episode_number': ('number', {int_or_none}), + 'series': ('season', 'serial', 'title', {str_or_none}), + 'thumbnails': ('images', ..., ..., {'url': ('url', {url_or_none})}), + }) + info_dict['description'] = clean_html(dict_get(video, ('lead', 'description'))) + if with_url: + info_dict.update({ + '_type': 'url', + 'url': video['webUrl'], + 'ie_key': TVPVODVideoIE.ie_key(), + }) + return info_dict + + +class TVPVODVideoIE(TVPVODBaseIE): + IE_NAME = 'tvp:vod' + _VALID_URL = r'https?://vod\.tvp\.pl/[a-z\d-]+,\d+/[a-z\d-]+(?<!-odcinki),(?P<id>\d+)(?:\?[^#]+)?(?:#.+)?$' + + _TESTS = [{ + 'url': 'https://vod.tvp.pl/dla-dzieci,24/laboratorium-alchemika-odcinki,309338/odcinek-24,S01E24,311357', + 'info_dict': { + 'id': '311357', + 'ext': 'mp4', + 'title': 'Tusze termiczne. Jak zobaczyć niewidoczne. Odcinek 24', + 'description': 'md5:1d4098d3e537092ccbac1abf49b7cd4c', + 'duration': 300, + 'episode_number': 24, + 'episode': 'Episode 24', + 'age_limit': 0, + 'series': 'Laboratorium alchemika', + 'thumbnail': 're:https?://.+', + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'url': 'https://vod.tvp.pl/filmy-dokumentalne,163/ukrainski-sluga-narodu,339667', + 'info_dict': { + 'id': '339667', + 'ext': 'mp4', + 'title': 'Ukraiński sługa narodu', + 'description': 'md5:b7940c0a8e439b0c81653a986f544ef3', + 'age_limit': 12, + 'duration': 3051, + 'thumbnail': 're:https?://.+', + 'subtitles': 'count:2', + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'note': 'embed fails with "payment required"', + 'url': 'https://vod.tvp.pl/seriale,18/polowanie-na-cmy-odcinki,390116/odcinek-7,S01E07,398869', + 'info_dict': { + 'id': '398869', + 'ext': 'mp4', + 'title': 'odc.
7', + 'description': 'md5:dd2bb33f023dc5c2fbaddfbe4cb5dba0', + 'duration': 2750, + 'age_limit': 16, + 'series': 'Polowanie na ćmy', + 'episode_number': 7, + 'episode': 'Episode 7', + 'thumbnail': 're:https?://.+', + }, + 'params': {'skip_download': 'm3u8'}, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + info_dict = self._parse_video(self._call_api(f'vods/{video_id}', video_id), with_url=False) + + playlist = self._call_api(f'{video_id}/videos/playlist', video_id, query={'videoType': 'MOVIE'}) + + info_dict['formats'] = [] + for manifest_url in traverse_obj(playlist, ('sources', 'HLS', ..., 'src')): + info_dict['formats'].extend(self._extract_m3u8_formats(manifest_url, video_id, fatal=False)) + for manifest_url in traverse_obj(playlist, ('sources', 'DASH', ..., 'src')): + info_dict['formats'].extend(self._extract_mpd_formats(manifest_url, video_id, fatal=False)) + + info_dict['subtitles'] = {} + for sub in playlist.get('subtitles') or []: + info_dict['subtitles'].setdefault(sub.get('language') or 'und', []).append({ + 'url': sub['url'], + 'ext': 'ttml', + }) + + return info_dict + + +class TVPVODSeriesIE(TVPVODBaseIE): + IE_NAME = 'tvp:vod:series' + _VALID_URL = r'https?://vod\.tvp\.pl/[a-z\d-]+,\d+/[a-z\d-]+-odcinki,(?P\d+)(?:\?[^#]+)?(?:#.+)?$' + + _TESTS = [{ + 'url': 'https://vod.tvp.pl/seriale,18/ranczo-odcinki,316445', + 'info_dict': { + 'id': '316445', + 'title': 'Ranczo', + 'age_limit': 12, + 'categories': ['seriale'], + }, + 'playlist_count': 130, + }, { + 'url': 'https://vod.tvp.pl/programy,88/rolnik-szuka-zony-odcinki,284514', + 'only_matching': True, + }, { + 'url': 'https://vod.tvp.pl/dla-dzieci,24/laboratorium-alchemika-odcinki,309338', + 'only_matching': True, + }] + + def _entries(self, seasons, playlist_id): + for season in seasons: + episodes = self._call_api( + f'vods/serials/{playlist_id}/seasons/{season["id"]}/episodes', playlist_id, + note=f'Downloading episode list for {season["title"]}') + yield from map(self._parse_video, episodes) + + def _real_extract(self, url): + playlist_id = self._match_id(url) + metadata = self._call_api( + f'vods/serials/{playlist_id}', playlist_id, + note='Downloading serial metadata') + seasons = self._call_api( + f'vods/serials/{playlist_id}/seasons', playlist_id, + note='Downloading season list') + return self.playlist_result( + self._entries(seasons, playlist_id), playlist_id, strip_or_none(metadata.get('title')), + clean_html(traverse_obj(metadata, ('description', 'lead'), expected_type=strip_or_none)), + categories=[traverse_obj(metadata, ('mainCategory', 'name'))], + age_limit=int_or_none(metadata.get('rating')), + ) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tvplay.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvplay.py new file mode 100644 index 0000000..48a6efe --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvplay.py @@ -0,0 +1,306 @@ +import re + +from .common import InfoExtractor +from ..compat import compat_urlparse +from ..networking.exceptions import HTTPError +from ..utils import ( + determine_ext, + ExtractorError, + int_or_none, + parse_iso8601, + qualities, + traverse_obj, + try_get, + update_url_query, + url_or_none, + urljoin, +) + + +class TVPlayIE(InfoExtractor): + IE_NAME = 'mtg' + IE_DESC = 'MTG services' + _VALID_URL = r'''(?x) + (?: + mtg:| + https?:// + (?:www\.)? 
+ (?: + tvplay(?:\.skaties)?\.lv(?:/parraides)?| + (?:tv3play|play\.tv3)\.lt(?:/programos)?| + tv3play(?:\.tv3)?\.ee/sisu + ) + /(?:[^/]+/)+ + ) + (?P<id>\d+) + ''' + _TESTS = [ + { + 'url': 'http://www.tvplay.lv/parraides/vinas-melo-labak/418113?autostart=true', + 'md5': 'a1612fe0849455423ad8718fe049be21', + 'info_dict': { + 'id': '418113', + 'ext': 'mp4', + 'title': 'Kādi ir īri? - Viņas melo labāk', + 'description': 'Baiba apsmej īrus, kādi tie ir un ko viņi dara.', + 'series': 'Viņas melo labāk', + 'season': '2.sezona', + 'season_number': 2, + 'duration': 25, + 'timestamp': 1406097056, + 'upload_date': '20140723', + }, + }, + { + 'url': 'http://play.tv3.lt/programos/moterys-meluoja-geriau/409229?autostart=true', + 'info_dict': { + 'id': '409229', + 'ext': 'flv', + 'title': 'Moterys meluoja geriau', + 'description': 'md5:9aec0fc68e2cbc992d2a140bd41fa89e', + 'series': 'Moterys meluoja geriau', + 'episode_number': 47, + 'season': '1 sezonas', + 'season_number': 1, + 'duration': 1330, + 'timestamp': 1403769181, + 'upload_date': '20140626', + }, + 'params': { + # rtmp download + 'skip_download': True, + }, + }, + { + 'url': 'http://www.tv3play.ee/sisu/kodu-keset-linna/238551?autostart=true', + 'info_dict': { + 'id': '238551', + 'ext': 'flv', + 'title': 'Kodu keset linna 398537', + 'description': 'md5:7df175e3c94db9e47c0d81ffa5d68701', + 'duration': 1257, + 'timestamp': 1292449761, + 'upload_date': '20101215', + }, + 'params': { + # rtmp download + 'skip_download': True, + }, + }, + { + 'url': 'http://tvplay.skaties.lv/parraides/vinas-melo-labak/418113?autostart=true', + 'only_matching': True, + }, + { + 'url': 'https://tvplay.skaties.lv/vinas-melo-labak/418113/?autostart=true', + 'only_matching': True, + }, + { + # views is null + 'url': 'http://tvplay.skaties.lv/parraides/tv3-zinas/760183', + 'only_matching': True, + }, + { + 'url': 'http://tv3play.tv3.ee/sisu/kodu-keset-linna/238551?autostart=true', + 'only_matching': True, + }, + { + 'url': 'mtg:418113', + 'only_matching': True, + } + ] + + def _real_extract(self, url): + video_id = self._match_id(url) + geo_country = self._search_regex( + r'https?://[^/]+\.([a-z]{2})', url, + 'geo country', default=None) + if geo_country: + self._initialize_geo_bypass({'countries': [geo_country.upper()]}) + video = self._download_json( + 'http://playapi.mtgx.tv/v3/videos/%s' % video_id, video_id, 'Downloading video JSON') + + title = video['title'] + + try: + streams = self._download_json( + 'http://playapi.mtgx.tv/v3/videos/stream/%s' % video_id, + video_id, 'Downloading streams JSON') + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + msg = self._parse_json(e.cause.response.read().decode('utf-8'), video_id) + raise ExtractorError(msg['msg'], expected=True) + raise + + quality = qualities(['hls', 'medium', 'high']) + formats = [] + for format_id, video_url in streams.get('streams', {}).items(): + video_url = url_or_none(video_url) + if not video_url: + continue + ext = determine_ext(video_url) + if ext == 'f4m': + formats.extend(self._extract_f4m_formats( + update_url_query(video_url, { + 'hdcore': '3.5.0', + 'plugin': 'aasp-3.5.0.151.81' + }), video_id, f4m_id='hds', fatal=False)) + elif ext == 'm3u8': + formats.extend(self._extract_m3u8_formats( + video_url, video_id, 'mp4', 'm3u8_native', + m3u8_id='hls', fatal=False)) + else: + fmt = { + 'format_id': format_id, + 'quality': quality(format_id), + 'ext': ext, + } + if video_url.startswith('rtmp'): + m = re.search( + r'^(?P<url>rtmp://[^/]+/(?P<app>[^/]+))/(?P<playpath>.+)$',
video_url) + if not m: + continue + fmt.update({ + 'ext': 'flv', + 'url': m.group('url'), + 'app': m.group('app'), + 'play_path': m.group('playpath'), + 'preference': -1, + }) + else: + fmt.update({ + 'url': video_url, + }) + formats.append(fmt) + + if not formats and video.get('is_geo_blocked'): + self.raise_geo_restricted( + 'This content might not be available in your country due to copyright reasons', + metadata_available=True) + + # TODO: webvtt in m3u8 + subtitles = {} + sami_path = video.get('sami_path') + if sami_path: + lang = self._search_regex( + r'_([a-z]{2})\.xml', sami_path, 'lang', + default=compat_urlparse.urlparse(url).netloc.rsplit('.', 1)[-1]) + subtitles[lang] = [{ + 'url': sami_path, + }] + + series = video.get('format_title') + episode_number = int_or_none(video.get('format_position', {}).get('episode')) + season = video.get('_embedded', {}).get('season', {}).get('title') + season_number = int_or_none(video.get('format_position', {}).get('season')) + + return { + 'id': video_id, + 'title': title, + 'description': video.get('description'), + 'series': series, + 'episode_number': episode_number, + 'season': season, + 'season_number': season_number, + 'duration': int_or_none(video.get('duration')), + 'timestamp': parse_iso8601(video.get('created_at')), + 'view_count': try_get(video, lambda x: x['views']['total'], int), + 'age_limit': int_or_none(video.get('age_limit', 0)), + 'formats': formats, + 'subtitles': subtitles, + } + + +class TVPlayHomeIE(InfoExtractor): + _VALID_URL = r'''(?x) + https?:// + (?:tv3?)? + play\.(?:tv3|skaties)\.(?Plv|lt|ee)/ + (?Plives/)? + [^?#&]+(?:episode|programme|clip)-(?P\d+) + ''' + _TESTS = [{ + 'url': 'https://play.tv3.lt/series/gauju-karai-karveliai,serial-2343791/serija-8,episode-2343828', + 'info_dict': { + 'id': '2343828', + 'ext': 'mp4', + 'title': 'Gaujų karai. Karveliai (2021) | S01E08: Serija 8', + 'description': 'md5:f6fcfbb236429f05531131640dfa7c81', + 'duration': 2710, + 'season': 'Gaujų karai. 
Karveliai', + 'season_number': 1, + 'release_year': 2021, + 'episode': 'Serija 8', + 'episode_number': 8, + }, + 'params': { + 'skip_download': 'm3u8', + }, + }, { + 'url': 'https://play.tv3.lt/series/moterys-meluoja-geriau-n-7,serial-2574652/serija-25,episode-3284937', + 'info_dict': { + 'id': '3284937', + 'ext': 'mp4', + 'season': 'Moterys meluoja geriau [N-7]', + 'season_number': 14, + 'release_year': 2021, + 'episode': 'Serija 25', + 'episode_number': 25, + 'title': 'Moterys meluoja geriau [N-7] (2021) | S14|E25: Serija 25', + 'description': 'md5:c6926e9710f1a126f028fbe121eddb79', + 'duration': 2440, + }, + 'skip': '404' + }, { + 'url': 'https://play.tv3.lt/lives/tv6-lt,live-2838694/optibet-a-lygos-rungtynes-marijampoles-suduva--vilniaus-riteriai,programme-3422014', + 'only_matching': True, + }, { + 'url': 'https://tv3play.skaties.lv/series/women-lie-better-lv,serial-1024464/women-lie-better-lv,episode-1038762', + 'only_matching': True, + }, { + 'url': 'https://play.tv3.ee/series/_,serial-2654462/_,episode-2654474', + 'only_matching': True, + }, { + 'url': 'https://tv3play.skaties.lv/clips/tv3-zinas-valsti-lidz-15novembrim-bus-majsede,clip-3464509', + 'only_matching': True, + }] + + def _real_extract(self, url): + country, is_live, video_id = self._match_valid_url(url).groups() + + api_path = 'lives/programmes' if is_live else 'vods' + data = self._download_json( + urljoin(url, f'/api/products/{api_path}/{video_id}?platform=BROWSER&lang={country.upper()}'), + video_id) + + video_type = 'CATCHUP' if is_live else 'MOVIE' + stream_id = data['programRecordingId'] if is_live else video_id + stream = self._download_json( + urljoin(url, f'/api/products/{stream_id}/videos/playlist?videoType={video_type}&platform=BROWSER'), video_id) + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + stream['sources']['HLS'][0]['src'], video_id, 'mp4', 'm3u8_native', m3u8_id='hls') + + thumbnails = set(traverse_obj( + data, (('galary', 'images', 'artworks'), ..., ..., ('miniUrl', 'mainUrl')), expected_type=url_or_none)) + + return { + 'id': video_id, + 'title': self._resolve_title(data), + 'description': traverse_obj(data, 'description', 'lead'), + 'duration': int_or_none(data.get('duration')), + 'season': traverse_obj(data, ('season', 'serial', 'title')), + 'season_number': int_or_none(traverse_obj(data, ('season', 'number'))), + 'episode': data.get('title'), + 'episode_number': int_or_none(data.get('episode')), + 'release_year': int_or_none(traverse_obj(data, ('season', 'serial', 'year'))), + 'thumbnails': [{'url': url, 'ext': 'jpg'} for url in thumbnails], + 'formats': formats, + 'subtitles': subtitles, + } + + @staticmethod + def _resolve_title(data): + return try_get(data, lambda x: ( + f'{data["season"]["serial"]["title"]} ({data["season"]["serial"]["year"]}) | ' + f'S{data["season"]["number"]:02d}E{data["episode"]:02d}: {data["title"]}' + )) or data.get('title') diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/tvplayer.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvplayer.py new file mode 100644 index 0000000..228c236 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/tvplayer.py @@ -0,0 +1,80 @@ +from .common import InfoExtractor +from ..compat import compat_str +from ..networking.exceptions import HTTPError +from ..utils import ( + extract_attributes, + try_get, + urlencode_postdata, + ExtractorError, +) + + +class TVPlayerIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?tvplayer\.com/watch/(?P[^/?#]+)' + _TEST = { + 'url': 
'http://tvplayer.com/watch/bbcone', + 'info_dict': { + 'id': '89', + 'ext': 'mp4', + 'title': r're:^BBC One [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$', + }, + 'params': { + # m3u8 download + 'skip_download': True, + } + } + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + + current_channel = extract_attributes(self._search_regex( + r'(]+class="[^"]*current-channel[^"]*"[^>]*>)', + webpage, 'channel element')) + title = current_channel['data-name'] + + resource_id = current_channel['data-id'] + + token = self._search_regex( + r'data-token=(["\'])(?P(?!\1).+)\1', webpage, + 'token', group='token') + + context = self._download_json( + 'https://tvplayer.com/watch/context', display_id, + 'Downloading JSON context', query={ + 'resource': resource_id, + 'gen': token, + }) + + validate = context['validate'] + platform = try_get( + context, lambda x: x['platform']['key'], compat_str) or 'firefox' + + try: + response = self._download_json( + 'http://api.tvplayer.com/api/v2/stream/live', + display_id, 'Downloading JSON stream', headers={ + 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8', + }, data=urlencode_postdata({ + 'id': resource_id, + 'service': 1, + 'platform': platform, + 'validate': validate, + }))['tvplayer']['response'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError): + response = self._parse_json( + e.cause.response.read().decode(), resource_id)['tvplayer']['response'] + raise ExtractorError( + '%s said: %s' % (self.IE_NAME, response['error']), expected=True) + raise + + formats = self._extract_m3u8_formats(response['stream'], display_id, 'mp4') + + return { + 'id': resource_id, + 'display_id': display_id, + 'title': title, + 'formats': formats, + 'is_live': True, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/tweakers.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/tweakers.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/tweakers.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/tweakers.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/twentyfourvideo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/twentyfourvideo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/twentyfourvideo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/twentyfourvideo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/twentymin.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/twentymin.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/twentymin.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/twentymin.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/twentythreevideo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/twentythreevideo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/twentythreevideo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/twentythreevideo.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/twitcasting.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/twitcasting.py new file mode 100644 index 0000000..540e217 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/twitcasting.py @@ -0,0 +1,306 @@ +import base64 +import itertools +import re + +from .common import InfoExtractor +from ..dependencies import websockets +from ..utils import ( + 
ExtractorError, + UserNotLive, + clean_html, + float_or_none, + get_element_by_class, + get_element_by_id, + parse_duration, + qualities, + str_to_int, + traverse_obj, + try_get, + unified_timestamp, + urlencode_postdata, + urljoin, +) + + +class TwitCastingIE(InfoExtractor): + _VALID_URL = r'https?://(?:[^/?#]+\.)?twitcasting\.tv/(?P<uploader_id>[^/?#]+)/(?:movie|twplayer)/(?P<id>\d+)' + _M3U8_HEADERS = { + 'Origin': 'https://twitcasting.tv', + 'Referer': 'https://twitcasting.tv/', + } + _TESTS = [{ + 'url': 'https://twitcasting.tv/ivetesangalo/movie/2357609', + 'md5': '745243cad58c4681dc752490f7540d7f', + 'info_dict': { + 'id': '2357609', + 'ext': 'mp4', + 'title': 'Live #2357609', + 'uploader_id': 'ivetesangalo', + 'description': 'Twitter Oficial da cantora brasileira Ivete Sangalo.', + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20110822', + 'timestamp': 1313978424, + 'duration': 32, + 'view_count': int, + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'https://twitcasting.tv/mttbernardini/movie/3689740', + 'info_dict': { + 'id': '3689740', + 'ext': 'mp4', + 'title': 'Live playing something #3689740', + 'uploader_id': 'mttbernardini', + 'description': 'md5:1dc7efa2f1ab932fcd119265cebeec69', + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20120211', + 'timestamp': 1328995624, + 'duration': 681, + 'view_count': int, + }, + 'params': { + 'skip_download': True, + 'videopassword': 'abc', + }, + }, { + 'url': 'https://twitcasting.tv/loft_heaven/movie/685979292', + 'info_dict': { + 'id': '685979292', + 'ext': 'mp4', + 'title': '【無料配信】南波一海のhear/here “ナタリー望月哲さんに聞く編集と「渋谷系狂騒曲」”', + 'uploader_id': 'loft_heaven', + 'description': 'md5:3a0c7b53019df987ce545c935538bacf', + 'upload_date': '20210604', + 'timestamp': 1622802114, + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 6964, + 'view_count': int, + }, + 'params': { + 'skip_download': True, + }, + }] + + def _parse_data_movie_playlist(self, dmp, video_id): + # attempt 1: parse as JSON directly + try: + return self._parse_json(dmp, video_id) + except ExtractorError: + pass + # attempt 2: decode reversed base64 + decoded = base64.b64decode(dmp[::-1]) + return self._parse_json(decoded, video_id) + + def _real_extract(self, url): + uploader_id, video_id = self._match_valid_url(url).groups() + + webpage, urlh = self._download_webpage_handle(url, video_id) + video_password = self.get_param('videopassword') + request_data = None + if video_password: + request_data = urlencode_postdata({ + 'password': video_password, + **self._hidden_inputs(webpage), + }, encoding='utf-8') + webpage, urlh = self._download_webpage_handle( + url, video_id, data=request_data, + headers={'Origin': 'https://twitcasting.tv'}, + note='Trying video password') + if urlh.url != url and request_data: + webpage = self._download_webpage( + urlh.url, video_id, data=request_data, + headers={'Origin': 'https://twitcasting.tv'}, + note='Retrying authentication') + # has to check here as the first request can contain password input form even if the password is correct + if re.search(r'<form\s+method="POST">\s*<input\s+[^>]+?name="password"', webpage): + raise ExtractorError('This video is protected by a password, use the --video-password option', expected=True) + + title = (clean_html(get_element_by_id('movietitle', webpage)) + or self._html_search_meta(['og:title', 'twitter:title'], webpage, fatal=True)) + + video_js_data = try_get( + webpage, + lambda x: self._parse_data_movie_playlist(self._search_regex( + r'data-movie-playlist=\'([^\']+?)\'', + x, 'movie
playlist', default=None), video_id)['2'], list) + + thumbnail = traverse_obj(video_js_data, (0, 'thumbnailUrl')) or self._og_search_thumbnail(webpage) + description = clean_html(get_element_by_id( + 'authorcomment', webpage)) or self._html_search_meta( + ['description', 'og:description', 'twitter:description'], webpage) + duration = (try_get(video_js_data, lambda x: sum(float_or_none(y.get('duration')) for y in x) / 1000) + or parse_duration(clean_html(get_element_by_class('tw-player-duration-time', webpage)))) + view_count = str_to_int(self._search_regex( + (r'Total\s*:\s*Views\s*([\d,]+)', r'ç·è¦–è´è€…\s*:\s*([\d,]+)\s*]+datetime="([^"]+)"', + webpage, 'datetime', None)) + + stream_server_data = self._download_json( + 'https://twitcasting.tv/streamserver.php?target=%s&mode=client' % uploader_id, video_id, + 'Downloading live info', fatal=False) + + is_live = 'data-status="online"' in webpage + if not traverse_obj(stream_server_data, 'llfmp4') and is_live: + self.raise_login_required(method='cookies') + + base_dict = { + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'timestamp': timestamp, + 'uploader_id': uploader_id, + 'duration': duration, + 'view_count': view_count, + 'is_live': is_live, + } + + def find_dmu(x): + data_movie_url = self._search_regex( + r'data-movie-url=(["\'])(?P(?:(?!\1).)+)\1', + x, 'm3u8 url', group='url', default=None) + if data_movie_url: + return [data_movie_url] + + m3u8_urls = (try_get(webpage, find_dmu, list) + or traverse_obj(video_js_data, (..., 'source', 'url')) + or ([f'https://twitcasting.tv/{uploader_id}/metastream.m3u8'] if is_live else None)) + if not m3u8_urls: + raise ExtractorError('Failed to get m3u8 playlist') + + if is_live: + m3u8_url = m3u8_urls[0] + formats = self._extract_m3u8_formats( + m3u8_url, video_id, ext='mp4', m3u8_id='hls', + live=True, headers=self._M3U8_HEADERS) + + if traverse_obj(stream_server_data, ('hls', 'source')): + formats.extend(self._extract_m3u8_formats( + m3u8_url, video_id, ext='mp4', m3u8_id='source', + live=True, query={'mode': 'source'}, + note='Downloading source quality m3u8', + headers=self._M3U8_HEADERS, fatal=False)) + + if websockets: + qq = qualities(['base', 'mobilesource', 'main']) + streams = traverse_obj(stream_server_data, ('llfmp4', 'streams')) or {} + for mode, ws_url in streams.items(): + formats.append({ + 'url': ws_url, + 'format_id': 'ws-%s' % mode, + 'ext': 'mp4', + 'quality': qq(mode), + 'source_preference': -10, + # TwitCasting simply sends moof atom directly over WS + 'protocol': 'websocket_frag', + }) + + infodict = { + 'formats': formats, + '_format_sort_fields': ('source', ), + } + elif len(m3u8_urls) == 1: + formats = self._extract_m3u8_formats( + m3u8_urls[0], video_id, 'mp4', headers=self._M3U8_HEADERS) + infodict = { + # No problem here since there's only one manifest + 'formats': formats, + 'http_headers': self._M3U8_HEADERS, + } + else: + infodict = { + '_type': 'multi_video', + 'entries': [{ + 'id': f'{video_id}-{num}', + 'url': m3u8_url, + 'ext': 'mp4', + # Requesting the manifests here will cause download to fail. + # So use ffmpeg instead. 
See: https://github.com/yt-dlp/yt-dlp/issues/382 + 'protocol': 'm3u8', + 'http_headers': self._M3U8_HEADERS, + **base_dict, + } for (num, m3u8_url) in enumerate(m3u8_urls)], + } + + return { + 'id': video_id, + **base_dict, + **infodict, + } + + +class TwitCastingLiveIE(InfoExtractor): + _VALID_URL = r'https?://(?:[^/?#]+\.)?twitcasting\.tv/(?P[^/?#]+)/?(?:[#?]|$)' + _TESTS = [{ + 'url': 'https://twitcasting.tv/ivetesangalo', + 'only_matching': True, + }, { + 'url': 'https://twitcasting.tv/c:unusedlive', + 'expected_exception': 'UserNotLive', + }] + + def _real_extract(self, url): + uploader_id = self._match_id(url) + self.to_screen( + 'Downloading live video of user {0}. ' + 'Pass "https://twitcasting.tv/{0}/show" to download the history'.format(uploader_id)) + + webpage = self._download_webpage(url, uploader_id) + current_live = self._search_regex( + (r'data-type="movie" data-id="(\d+)">', + r'tw-sound-flag-open-link" data-id="(\d+)" style=',), + webpage, 'current live ID', default=None) + if not current_live: + # fetch unfiltered /show to find running livestreams; we can't get ID of the password-protected livestream above + webpage = self._download_webpage( + f'https://twitcasting.tv/{uploader_id}/show/', uploader_id, + note='Downloading live history') + is_live = self._search_regex(r'(?s)(\s*LIVE)', webpage, 'is live?', default=None) + if is_live: + # get the first live; running live is always at the first + current_live = self._search_regex( + r'(?s)\d+)"\s*>.+?', + webpage, 'current live ID 2', default=None, group='video_id') + if not current_live: + raise UserNotLive(video_id=uploader_id) + return self.url_result('https://twitcasting.tv/%s/movie/%s' % (uploader_id, current_live)) + + +class TwitCastingUserIE(InfoExtractor): + _VALID_URL = r'https?://(?:[^/?#]+\.)?twitcasting\.tv/(?P[^/?#]+)/(:?show|archive)/?(?:[#?]|$)' + _TESTS = [{ + 'url': 'https://twitcasting.tv/natsuiromatsuri/archive/', + 'info_dict': { + 'id': 'natsuiromatsuri', + 'title': 'natsuiromatsuri - Live History', + }, + 'playlist_mincount': 235, + }, { + 'url': 'https://twitcasting.tv/noriyukicas/show', + 'only_matching': True, + }] + + def _entries(self, uploader_id): + base_url = next_url = 'https://twitcasting.tv/%s/show' % uploader_id + for page_num in itertools.count(1): + webpage = self._download_webpage( + next_url, uploader_id, query={'filter': 'watchable'}, note='Downloading page %d' % page_num) + matches = re.finditer( + r'''(?isx)/[^/]+/movie/\d+)"\s*>.+?''', + webpage) + for mobj in matches: + yield self.url_result(urljoin(base_url, mobj.group('url'))) + + next_url = self._search_regex( + r']+action=(["\'])(?P.+?)\1', page, + 'post url', default=self._LOGIN_POST_URL, group='url') + post_url = urljoin(page_url, post_url) + + headers = { + 'Referer': page_url, + 'Origin': 'https://www.twitch.tv', + 'Content-Type': 'text/plain;charset=UTF-8', + } + + response = self._download_json( + post_url, None, note, data=json.dumps(form).encode(), + headers=headers, expected_status=400) + error = dict_get(response, ('error', 'error_description', 'error_code')) + if error: + fail(error) + + if 'Authenticated successfully' in response.get('message', ''): + return None, None + + redirect_url = urljoin( + post_url, + response.get('redirect') or response['redirect_path']) + return self._download_webpage_handle( + redirect_url, None, 'Downloading login redirect page', + headers=headers) + + login_page, handle = self._download_webpage_handle( + self._LOGIN_FORM_URL, None, 'Downloading login page') + + # Some TOR nodes and 
public proxies are blocked completely + if 'blacklist_message' in login_page: + fail(clean_html(login_page)) + + redirect_page, handle = login_step( + login_page, handle, 'Logging in', { + 'username': username, + 'password': password, + 'client_id': self._CLIENT_ID, + }) + + # Successful login + if not redirect_page: + return + + if re.search(r'(?i)]+id="two-factor-submit"', redirect_page) is not None: + # TODO: Add mechanism to request an SMS or phone call + tfa_token = self._get_tfa_info('two-factor authentication token') + login_step(redirect_page, handle, 'Submitting TFA token', { + 'authy_token': tfa_token, + 'remember_2fa': 'true', + }) + + def _prefer_source(self, formats): + try: + source = next(f for f in formats if f['format_id'] == 'Source') + source['quality'] = 10 + except StopIteration: + for f in formats: + if '/chunked/' in f['url']: + f.update({ + 'quality': 10, + 'format_note': 'Source', + }) + + def _download_base_gql(self, video_id, ops, note, fatal=True): + headers = { + 'Content-Type': 'text/plain;charset=UTF-8', + 'Client-ID': self._CLIENT_ID, + } + gql_auth = self._get_cookies('https://gql.twitch.tv').get('auth-token') + if gql_auth: + headers['Authorization'] = 'OAuth ' + gql_auth.value + return self._download_json( + 'https://gql.twitch.tv/gql', video_id, note, + data=json.dumps(ops).encode(), + headers=headers, fatal=fatal) + + def _download_gql(self, video_id, ops, note, fatal=True): + for op in ops: + op['extensions'] = { + 'persistedQuery': { + 'version': 1, + 'sha256Hash': self._OPERATION_HASHES[op['operationName']], + } + } + return self._download_base_gql(video_id, ops, note) + + def _download_access_token(self, video_id, token_kind, param_name): + method = '%sPlaybackAccessToken' % token_kind + ops = { + 'query': '''{ + %s( + %s: "%s", + params: { + platform: "web", + playerBackend: "mediaplayer", + playerType: "site" + } + ) + { + value + signature + } + }''' % (method, param_name, video_id), + } + return self._download_base_gql( + video_id, ops, + 'Downloading %s access token GraphQL' % token_kind)['data'][method] + + def _get_thumbnails(self, thumbnail): + return [{ + 'url': re.sub(r'\d+x\d+(\.\w+)($|(?=[?#]))', r'0x0\g<1>', thumbnail), + 'preference': 1, + }, { + 'url': thumbnail, + }] if thumbnail else None + + +class TwitchVodIE(TwitchBaseIE): + IE_NAME = 'twitch:vod' + _VALID_URL = r'''(?x) + https?:// + (?: + (?:(?:www|go|m)\.)?twitch\.tv/(?:[^/]+/v(?:ideo)?|videos)/| + player\.twitch\.tv/\?.*?\bvideo=v?| + www\.twitch\.tv/[^/]+/schedule\?vodID= + ) + (?P\d+) + ''' + + _TESTS = [{ + 'url': 'http://www.twitch.tv/riotgames/v/6528877?t=5m10s', + 'info_dict': { + 'id': 'v6528877', + 'ext': 'mp4', + 'title': 'LCK Summer Split - Week 6 Day 1', + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 17208, + 'timestamp': 1435131734, + 'upload_date': '20150624', + 'uploader': 'Riot Games', + 'uploader_id': 'riotgames', + 'view_count': int, + 'start_time': 310, + 'chapters': [ + { + 'start_time': 0, + 'end_time': 17208, + 'title': 'League of Legends' + } + ], + 'live_status': 'was_live', + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + }, { + # Untitled broadcast (title is None) + 'url': 'http://www.twitch.tv/belkao_o/v/11230755', + 'info_dict': { + 'id': 'v11230755', + 'ext': 'mp4', + 'title': 'Untitled Broadcast', + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 1638, + 'timestamp': 1439746708, + 'upload_date': '20150816', + 'uploader': 'BelkAO_o', + 'uploader_id': 'belkao_o', + 'view_count': int, + }, + 'params': { + # m3u8 
download + 'skip_download': True, + }, + 'skip': 'HTTP Error 404: Not Found', + }, { + 'url': 'http://player.twitch.tv/?t=5m10s&video=v6528877', + 'only_matching': True, + }, { + 'url': 'https://www.twitch.tv/videos/6528877', + 'only_matching': True, + }, { + 'url': 'https://m.twitch.tv/beagsandjam/v/247478721', + 'only_matching': True, + }, { + 'url': 'https://www.twitch.tv/northernlion/video/291940395', + 'only_matching': True, + }, { + 'url': 'https://player.twitch.tv/?video=480452374', + 'only_matching': True, + }, { + 'url': 'https://www.twitch.tv/videos/635475444', + 'info_dict': { + 'id': 'v635475444', + 'ext': 'mp4', + 'title': 'Riot Games', + 'duration': 11643, + 'uploader': 'Riot Games', + 'uploader_id': 'riotgames', + 'timestamp': 1590770569, + 'upload_date': '20200529', + 'chapters': [ + { + 'start_time': 0, + 'end_time': 573, + 'title': 'League of Legends' + }, + { + 'start_time': 573, + 'end_time': 3922, + 'title': 'Legends of Runeterra' + }, + { + 'start_time': 3922, + 'end_time': 11643, + 'title': 'Art' + } + ], + 'live_status': 'was_live', + 'thumbnail': r're:^https?://.*\.jpg$', + 'view_count': int, + }, + 'params': { + 'skip_download': True + }, + }, { + 'note': 'Storyboards', + 'url': 'https://www.twitch.tv/videos/635475444', + 'info_dict': { + 'id': 'v635475444', + 'format_id': 'sb0', + 'ext': 'mhtml', + 'title': 'Riot Games', + 'duration': 11643, + 'uploader': 'Riot Games', + 'uploader_id': 'riotgames', + 'timestamp': 1590770569, + 'upload_date': '20200529', + 'chapters': [ + { + 'start_time': 0, + 'end_time': 573, + 'title': 'League of Legends' + }, + { + 'start_time': 573, + 'end_time': 3922, + 'title': 'Legends of Runeterra' + }, + { + 'start_time': 3922, + 'end_time': 11643, + 'title': 'Art' + } + ], + 'live_status': 'was_live', + 'thumbnail': r're:^https?://.*\.jpg$', + 'view_count': int, + 'columns': int, + 'rows': int, + }, + 'params': { + 'format': 'mhtml', + 'skip_download': True + } + }, { + 'note': 'VOD with single chapter', + 'url': 'https://www.twitch.tv/videos/1536751224', + 'info_dict': { + 'id': 'v1536751224', + 'ext': 'mp4', + 'title': 'Porter Robinson Star Guardian Stream Tour with LilyPichu', + 'duration': 8353, + 'uploader': 'Riot Games', + 'uploader_id': 'riotgames', + 'timestamp': 1658267731, + 'upload_date': '20220719', + 'chapters': [ + { + 'start_time': 0, + 'end_time': 8353, + 'title': 'League of Legends' + } + ], + 'live_status': 'was_live', + 'thumbnail': r're:^https?://.*\.jpg$', + 'view_count': int, + }, + 'params': { + 'skip_download': True + }, + 'expected_warnings': ['Unable to download JSON metadata: HTTP Error 403: Forbidden'] + }, { + 'url': 'https://www.twitch.tv/tangotek/schedule?vodID=1822395420', + 'only_matching': True, + }] + + def _download_info(self, item_id): + data = self._download_gql( + item_id, [{ + 'operationName': 'VideoMetadata', + 'variables': { + 'channelLogin': '', + 'videoID': item_id, + }, + }, { + 'operationName': 'VideoPlayer_ChapterSelectButtonVideo', + 'variables': { + 'includePrivate': False, + 'videoID': item_id, + }, + }, { + 'operationName': 'VideoPlayer_VODSeekbarPreviewVideo', + 'variables': { + 'includePrivate': False, + 'videoID': item_id, + }, + }], + 'Downloading stream metadata GraphQL') + + video = traverse_obj(data, (..., 'data', 'video'), get_all=False) + if video is None: + raise ExtractorError(f'Video {item_id} does not exist', expected=True) + + video['moments'] = traverse_obj(data, (..., 'data', 'video', 'moments', 'edges', ..., 'node')) + video['storyboard'] = traverse_obj( + data, (..., 
'data', 'video', 'seekPreviewsURL', {url_or_none}), get_all=False) + + return video + + def _extract_info(self, info): + status = info.get('status') + if status == 'recording': + is_live = True + elif status == 'recorded': + is_live = False + else: + is_live = None + _QUALITIES = ('small', 'medium', 'large') + quality_key = qualities(_QUALITIES) + thumbnails = [] + preview = info.get('preview') + if isinstance(preview, dict): + for thumbnail_id, thumbnail_url in preview.items(): + thumbnail_url = url_or_none(thumbnail_url) + if not thumbnail_url: + continue + if thumbnail_id not in _QUALITIES: + continue + thumbnails.append({ + 'url': thumbnail_url, + 'preference': quality_key(thumbnail_id), + }) + return { + 'id': info['_id'], + 'title': info.get('title') or 'Untitled Broadcast', + 'description': info.get('description'), + 'duration': int_or_none(info.get('length')), + 'thumbnails': thumbnails, + 'uploader': info.get('channel', {}).get('display_name'), + 'uploader_id': info.get('channel', {}).get('name'), + 'timestamp': parse_iso8601(info.get('recorded_at')), + 'view_count': int_or_none(info.get('views')), + 'is_live': is_live, + 'was_live': True, + } + + def _extract_chapters(self, info, item_id): + if not info.get('moments'): + game = traverse_obj(info, ('game', 'displayName')) + if game: + yield {'title': game} + return + + for moment in info['moments']: + start_time = int_or_none(moment.get('positionMilliseconds'), 1000) + duration = int_or_none(moment.get('durationMilliseconds'), 1000) + name = str_or_none(moment.get('description')) + + if start_time is None or duration is None: + self.report_warning(f'Important chapter information missing for chapter {name}', item_id) + continue + yield { + 'start_time': start_time, + 'end_time': start_time + duration, + 'title': name, + } + + def _extract_info_gql(self, info, item_id): + vod_id = info.get('id') or item_id + # id backward compatibility for download archives + if vod_id[0] != 'v': + vod_id = 'v%s' % vod_id + thumbnail = url_or_none(info.get('previewThumbnailURL')) + is_live = None + if thumbnail: + if re.findall(r'/404_processing_[^.?#]+\.png', thumbnail): + is_live, thumbnail = True, None + else: + is_live = False + + return { + 'id': vod_id, + 'title': info.get('title') or 'Untitled Broadcast', + 'description': info.get('description'), + 'duration': int_or_none(info.get('lengthSeconds')), + 'thumbnails': self._get_thumbnails(thumbnail), + 'uploader': try_get(info, lambda x: x['owner']['displayName'], compat_str), + 'uploader_id': try_get(info, lambda x: x['owner']['login'], compat_str), + 'timestamp': unified_timestamp(info.get('publishedAt')), + 'view_count': int_or_none(info.get('viewCount')), + 'chapters': list(self._extract_chapters(info, item_id)), + 'is_live': is_live, + 'was_live': True, + } + + def _extract_storyboard(self, item_id, storyboard_json_url, duration): + if not duration or not storyboard_json_url: + return + spec = self._download_json(storyboard_json_url, item_id, 'Downloading storyboard metadata JSON', fatal=False) or [] + # sort from highest quality to lowest + # This makes sb0 the highest-quality format, sb1 - lower, etc which is consistent with youtube sb ordering + spec.sort(key=lambda x: int_or_none(x.get('width')) or 0, reverse=True) + base = base_url(storyboard_json_url) + for i, s in enumerate(spec): + count = int_or_none(s.get('count')) + images = s.get('images') + if not (images and count): + continue + fragment_duration = duration / len(images) + yield { + 'format_id': f'sb{i}', + 'format_note': 
'storyboard', + 'ext': 'mhtml', + 'protocol': 'mhtml', + 'acodec': 'none', + 'vcodec': 'none', + 'url': urljoin(base, images[0]), + 'width': int_or_none(s.get('width')), + 'height': int_or_none(s.get('height')), + 'fps': count / duration, + 'rows': int_or_none(s.get('rows')), + 'columns': int_or_none(s.get('cols')), + 'fragments': [{ + 'url': urljoin(base, path), + 'duration': fragment_duration, + } for path in images], + } + + def _real_extract(self, url): + vod_id = self._match_id(url) + + video = self._download_info(vod_id) + info = self._extract_info_gql(video, vod_id) + access_token = self._download_access_token(vod_id, 'video', 'id') + + formats = self._extract_m3u8_formats( + '%s/vod/%s.m3u8?%s' % ( + self._USHER_BASE, vod_id, + compat_urllib_parse_urlencode({ + 'allow_source': 'true', + 'allow_audio_only': 'true', + 'allow_spectre': 'true', + 'player': 'twitchweb', + 'playlist_include_framerate': 'true', + 'nauth': access_token['value'], + 'nauthsig': access_token['signature'], + })), + vod_id, 'mp4', entry_protocol='m3u8_native') + + formats.extend(self._extract_storyboard(vod_id, video.get('storyboard'), info.get('duration'))) + + self._prefer_source(formats) + info['formats'] = formats + + parsed_url = compat_urllib_parse_urlparse(url) + query = compat_parse_qs(parsed_url.query) + if 't' in query: + info['start_time'] = parse_duration(query['t'][0]) + + if info.get('timestamp') is not None: + info['subtitles'] = { + 'rechat': [{ + 'url': update_url_query( + 'https://api.twitch.tv/v5/videos/%s/comments' % vod_id, { + 'client_id': self._CLIENT_ID, + }), + 'ext': 'json', + }], + } + + return info + + +def _make_video_result(node): + assert isinstance(node, dict) + video_id = node.get('id') + if not video_id: + return + return { + '_type': 'url_transparent', + 'ie_key': TwitchVodIE.ie_key(), + 'id': 'v' + video_id, + 'url': 'https://www.twitch.tv/videos/%s' % video_id, + 'title': node.get('title'), + 'thumbnail': node.get('previewThumbnailURL'), + 'duration': float_or_none(node.get('lengthSeconds')), + 'view_count': int_or_none(node.get('viewCount')), + } + + +class TwitchCollectionIE(TwitchBaseIE): + _VALID_URL = r'https?://(?:(?:www|go|m)\.)?twitch\.tv/collections/(?P<id>[^/]+)' + + _TESTS = [{ + 'url': 'https://www.twitch.tv/collections/wlDCoH0zEBZZbQ', + 'info_dict': { + 'id': 'wlDCoH0zEBZZbQ', + 'title': 'Overthrow Nook, capitalism for children', + }, + 'playlist_mincount': 13, + }] + + _OPERATION_NAME = 'CollectionSideBar' + + def _real_extract(self, url): + collection_id = self._match_id(url) + collection = self._download_gql( + collection_id, [{ + 'operationName': self._OPERATION_NAME, + 'variables': {'collectionID': collection_id}, + }], + 'Downloading collection GraphQL')[0]['data']['collection'] + title = collection.get('title') + entries = [] + for edge in collection['items']['edges']: + if not isinstance(edge, dict): + continue + node = edge.get('node') + if not isinstance(node, dict): + continue + video = _make_video_result(node) + if video: + entries.append(video) + return self.playlist_result( + entries, playlist_id=collection_id, playlist_title=title) + + +class TwitchPlaylistBaseIE(TwitchBaseIE): + _PAGE_LIMIT = 100 + + def _entries(self, channel_name, *args): + cursor = None + variables_common = self._make_variables(channel_name, *args) + entries_key = '%ss' % self._ENTRY_KIND + for page_num in itertools.count(1): + variables = variables_common.copy() + variables['limit'] = self._PAGE_LIMIT + if cursor: + variables['cursor'] = cursor + page = self._download_gql( + 
channel_name, [{ + 'operationName': self._OPERATION_NAME, + 'variables': variables, + }], + 'Downloading %ss GraphQL page %s' % (self._NODE_KIND, page_num), + fatal=False) + if not page: + break + edges = try_get( + page, lambda x: x[0]['data']['user'][entries_key]['edges'], list) + if not edges: + break + for edge in edges: + if not isinstance(edge, dict): + continue + if edge.get('__typename') != self._EDGE_KIND: + continue + node = edge.get('node') + if not isinstance(node, dict): + continue + if node.get('__typename') != self._NODE_KIND: + continue + entry = self._extract_entry(node) + if entry: + cursor = edge.get('cursor') + yield entry + if not cursor or not isinstance(cursor, compat_str): + break + + +class TwitchVideosIE(TwitchPlaylistBaseIE): + _VALID_URL = r'https?://(?:(?:www|go|m)\.)?twitch\.tv/(?P<id>[^/]+)/(?:videos|profile)' + + _TESTS = [{ + # All Videos sorted by Date + 'url': 'https://www.twitch.tv/spamfish/videos?filter=all', + 'info_dict': { + 'id': 'spamfish', + 'title': 'spamfish - All Videos sorted by Date', + }, + 'playlist_mincount': 924, + }, { + # All Videos sorted by Popular + 'url': 'https://www.twitch.tv/spamfish/videos?filter=all&sort=views', + 'info_dict': { + 'id': 'spamfish', + 'title': 'spamfish - All Videos sorted by Popular', + }, + 'playlist_mincount': 931, + }, { + # Past Broadcasts sorted by Date + 'url': 'https://www.twitch.tv/spamfish/videos?filter=archives', + 'info_dict': { + 'id': 'spamfish', + 'title': 'spamfish - Past Broadcasts sorted by Date', + }, + 'playlist_mincount': 27, + }, { + # Highlights sorted by Date + 'url': 'https://www.twitch.tv/spamfish/videos?filter=highlights', + 'info_dict': { + 'id': 'spamfish', + 'title': 'spamfish - Highlights sorted by Date', + }, + 'playlist_mincount': 901, + }, { + # Uploads sorted by Date + 'url': 'https://www.twitch.tv/esl_csgo/videos?filter=uploads&sort=time', + 'info_dict': { + 'id': 'esl_csgo', + 'title': 'esl_csgo - Uploads sorted by Date', + }, + 'playlist_mincount': 5, + }, { + # Past Premieres sorted by Date + 'url': 'https://www.twitch.tv/spamfish/videos?filter=past_premieres', + 'info_dict': { + 'id': 'spamfish', + 'title': 'spamfish - Past Premieres sorted by Date', + }, + 'playlist_mincount': 1, + }, { + 'url': 'https://www.twitch.tv/spamfish/videos/all', + 'only_matching': True, + }, { + 'url': 'https://m.twitch.tv/spamfish/videos/all', + 'only_matching': True, + }, { + 'url': 'https://www.twitch.tv/spamfish/videos', + 'only_matching': True, + }] + + Broadcast = collections.namedtuple('Broadcast', ['type', 'label']) + + _DEFAULT_BROADCAST = Broadcast(None, 'All Videos') + _BROADCASTS = { + 'archives': Broadcast('ARCHIVE', 'Past Broadcasts'), + 'highlights': Broadcast('HIGHLIGHT', 'Highlights'), + 'uploads': Broadcast('UPLOAD', 'Uploads'), + 'past_premieres': Broadcast('PAST_PREMIERE', 'Past Premieres'), + 'all': _DEFAULT_BROADCAST, + } + + _DEFAULT_SORTED_BY = 'Date' + _SORTED_BY = { + 'time': _DEFAULT_SORTED_BY, + 'views': 'Popular', + } + + _OPERATION_NAME = 'FilterableVideoTower_Videos' + _ENTRY_KIND = 'video' + _EDGE_KIND = 'VideoEdge' + _NODE_KIND = 'Video' + + @classmethod + def suitable(cls, url): + return (False + if any(ie.suitable(url) for ie in ( + TwitchVideosClipsIE, + TwitchVideosCollectionsIE)) + else super(TwitchVideosIE, cls).suitable(url)) + + @staticmethod + def _make_variables(channel_name, broadcast_type, sort): + return { + 'channelOwnerLogin': channel_name, + 'broadcastType': broadcast_type, + 'videoSort': sort.upper(), + } + + @staticmethod + def 
_extract_entry(node): + return _make_video_result(node) + + def _real_extract(self, url): + channel_name = self._match_id(url) + qs = parse_qs(url) + filter = qs.get('filter', ['all'])[0] + sort = qs.get('sort', ['time'])[0] + broadcast = self._BROADCASTS.get(filter, self._DEFAULT_BROADCAST) + return self.playlist_result( + self._entries(channel_name, broadcast.type, sort), + playlist_id=channel_name, + playlist_title='%s - %s sorted by %s' + % (channel_name, broadcast.label, + self._SORTED_BY.get(sort, self._DEFAULT_SORTED_BY))) + + +class TwitchVideosClipsIE(TwitchPlaylistBaseIE): + _VALID_URL = r'https?://(?:(?:www|go|m)\.)?twitch\.tv/(?P<id>[^/]+)/(?:clips|videos/*?\?.*?\bfilter=clips)' + + _TESTS = [{ + # Clips + 'url': 'https://www.twitch.tv/vanillatv/clips?filter=clips&range=all', + 'info_dict': { + 'id': 'vanillatv', + 'title': 'vanillatv - Clips Top All', + }, + 'playlist_mincount': 1, + }, { + 'url': 'https://www.twitch.tv/dota2ruhub/videos?filter=clips&range=7d', + 'only_matching': True, + }] + + Clip = collections.namedtuple('Clip', ['filter', 'label']) + + _DEFAULT_CLIP = Clip('LAST_WEEK', 'Top 7D') + _RANGE = { + '24hr': Clip('LAST_DAY', 'Top 24H'), + '7d': _DEFAULT_CLIP, + '30d': Clip('LAST_MONTH', 'Top 30D'), + 'all': Clip('ALL_TIME', 'Top All'), + } + + # NB: values other than 20 result in skipped videos + _PAGE_LIMIT = 20 + + _OPERATION_NAME = 'ClipsCards__User' + _ENTRY_KIND = 'clip' + _EDGE_KIND = 'ClipEdge' + _NODE_KIND = 'Clip' + + @staticmethod + def _make_variables(channel_name, filter): + return { + 'login': channel_name, + 'criteria': { + 'filter': filter, + }, + } + + @staticmethod + def _extract_entry(node): + assert isinstance(node, dict) + clip_url = url_or_none(node.get('url')) + if not clip_url: + return + return { + '_type': 'url_transparent', + 'ie_key': TwitchClipsIE.ie_key(), + 'id': node.get('id'), + 'url': clip_url, + 'title': node.get('title'), + 'thumbnail': node.get('thumbnailURL'), + 'duration': float_or_none(node.get('durationSeconds')), + 'timestamp': unified_timestamp(node.get('createdAt')), + 'view_count': int_or_none(node.get('viewCount')), + 'language': node.get('language'), + } + + def _real_extract(self, url): + channel_name = self._match_id(url) + qs = parse_qs(url) + range = qs.get('range', ['7d'])[0] + clip = self._RANGE.get(range, self._DEFAULT_CLIP) + return self.playlist_result( + self._entries(channel_name, clip.filter), + playlist_id=channel_name, + playlist_title='%s - Clips %s' % (channel_name, clip.label)) + + +class TwitchVideosCollectionsIE(TwitchPlaylistBaseIE): + _VALID_URL = r'https?://(?:(?:www|go|m)\.)?twitch\.tv/(?P<id>[^/]+)/videos/*?\?.*?\bfilter=collections' + + _TESTS = [{ + # Collections + 'url': 'https://www.twitch.tv/spamfish/videos?filter=collections', + 'info_dict': { + 'id': 'spamfish', + 'title': 'spamfish - Collections', + }, + 'playlist_mincount': 3, + }, { + 'url': 'https://www.twitch.tv/monstercat/videos?filter=collections', + 'info_dict': { + 'id': 'monstercat', + 'title': 'monstercat - Collections', + }, + 'playlist_mincount': 13, + }] + + _OPERATION_NAME = 'ChannelCollectionsContent' + _ENTRY_KIND = 'collection' + _EDGE_KIND = 'CollectionsItemEdge' + _NODE_KIND = 'Collection' + + @staticmethod + def _make_variables(channel_name): + return { + 'ownerLogin': channel_name, + } + + @staticmethod + def _extract_entry(node): + assert isinstance(node, dict) + collection_id = node.get('id') + if not collection_id: + return + return { + '_type': 'url_transparent', + 'ie_key': TwitchCollectionIE.ie_key(), + 'id': 
collection_id, + 'url': 'https://www.twitch.tv/collections/%s' % collection_id, + 'title': node.get('title'), + 'thumbnail': node.get('thumbnailURL'), + 'duration': float_or_none(node.get('lengthSeconds')), + 'timestamp': unified_timestamp(node.get('updatedAt')), + 'view_count': int_or_none(node.get('viewCount')), + } + + def _real_extract(self, url): + channel_name = self._match_id(url) + return self.playlist_result( + self._entries(channel_name), playlist_id=channel_name, + playlist_title='%s - Collections' % channel_name) + + +class TwitchStreamIE(TwitchBaseIE): + IE_NAME = 'twitch:stream' + _VALID_URL = r'''(?x) + https?:// + (?: + (?:(?:www|go|m)\.)?twitch\.tv/| + player\.twitch\.tv/\?.*?\bchannel= + ) + (?P<id>[^/#?]+) + ''' + + _TESTS = [{ + 'url': 'http://www.twitch.tv/shroomztv', + 'info_dict': { + 'id': '12772022048', + 'display_id': 'shroomztv', + 'ext': 'mp4', + 'title': 're:^ShroomzTV [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$', + 'description': 'H1Z1 - lonewolfing with ShroomzTV | A3 Battle Royale later - @ShroomzTV', + 'is_live': True, + 'timestamp': 1421928037, + 'upload_date': '20150122', + 'uploader': 'ShroomzTV', + 'uploader_id': 'shroomztv', + 'view_count': int, + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + 'skip': 'User does not exist', + }, { + 'url': 'http://www.twitch.tv/miracle_doto#profile-0', + 'only_matching': True, + }, { + 'url': 'https://player.twitch.tv/?channel=lotsofs', + 'only_matching': True, + }, { + 'url': 'https://go.twitch.tv/food', + 'only_matching': True, + }, { + 'url': 'https://m.twitch.tv/food', + 'only_matching': True, + }, { + 'url': 'https://www.twitch.tv/monstercat', + 'info_dict': { + 'id': '40500071752', + 'display_id': 'monstercat', + 'title': 're:Monstercat', + 'description': 'md5:0945ad625e615bc8f0469396537d87d9', + 'is_live': True, + 'timestamp': 1677107190, + 'upload_date': '20230222', + 'uploader': 'Monstercat', + 'uploader_id': 'monstercat', + 'live_status': 'is_live', + 'thumbnail': 're:https://.*.jpg', + 'ext': 'mp4', + }, + 'params': { + 'skip_download': 'Livestream', + }, + }] + + @classmethod + def suitable(cls, url): + return (False + if any(ie.suitable(url) for ie in ( + TwitchVodIE, + TwitchCollectionIE, + TwitchVideosIE, + TwitchVideosClipsIE, + TwitchVideosCollectionsIE, + TwitchClipsIE)) + else super(TwitchStreamIE, cls).suitable(url)) + + def _real_extract(self, url): + channel_name = self._match_id(url).lower() + + gql = self._download_gql( + channel_name, [{ + 'operationName': 'StreamMetadata', + 'variables': {'channelLogin': channel_name}, + }, { + 'operationName': 'ComscoreStreamingQuery', + 'variables': { + 'channel': channel_name, + 'clipSlug': '', + 'isClip': False, + 'isLive': True, + 'isVodOrCollection': False, + 'vodID': '', + }, + }, { + 'operationName': 'VideoPreviewOverlay', + 'variables': {'login': channel_name}, + }], + 'Downloading stream GraphQL') + + user = gql[0]['data']['user'] + + if not user: + raise ExtractorError( + '%s does not exist' % channel_name, expected=True) + + stream = user['stream'] + + if not stream: + raise UserNotLive(video_id=channel_name) + + access_token = self._download_access_token( + channel_name, 'stream', 'channelName') + token = access_token['value'] + + stream_id = stream.get('id') or channel_name + query = { + 'allow_source': 'true', + 'allow_audio_only': 'true', + 'allow_spectre': 'true', + 'p': random.randint(1000000, 10000000), + 'player': 'twitchweb', + 'playlist_include_framerate': 'true', + 'segment_preference': '4', + 'sig': 
access_token['signature'].encode('utf-8'), + 'token': token.encode('utf-8'), + } + formats = self._extract_m3u8_formats( + '%s/api/channel/hls/%s.m3u8' % (self._USHER_BASE, channel_name), + stream_id, 'mp4', query=query) + self._prefer_source(formats) + + view_count = stream.get('viewers') + timestamp = unified_timestamp(stream.get('createdAt')) + + sq_user = try_get(gql, lambda x: x[1]['data']['user'], dict) or {} + uploader = sq_user.get('displayName') + description = try_get( + sq_user, lambda x: x['broadcastSettings']['title'], compat_str) + + thumbnail = url_or_none(try_get( + gql, lambda x: x[2]['data']['user']['stream']['previewImageURL'], + compat_str)) + + title = uploader or channel_name + stream_type = stream.get('type') + if stream_type in ['rerun', 'live']: + title += ' (%s)' % stream_type + + return { + 'id': stream_id, + 'display_id': channel_name, + 'title': title, + 'description': description, + 'thumbnails': self._get_thumbnails(thumbnail), + 'uploader': uploader, + 'uploader_id': channel_name, + 'timestamp': timestamp, + 'view_count': view_count, + 'formats': formats, + 'is_live': stream_type == 'live', + } + + +class TwitchClipsIE(TwitchBaseIE): + IE_NAME = 'twitch:clips' + _VALID_URL = r'''(?x) + https?:// + (?: + clips\.twitch\.tv/(?:embed\?.*?\bclip=|(?:[^/]+/)*)| + (?:(?:www|go|m)\.)?twitch\.tv/(?:[^/]+/)?clip/ + ) + (?P<id>[^/?#&]+) + ''' + + _TESTS = [{ + 'url': 'https://clips.twitch.tv/FaintLightGullWholeWheat', + 'md5': '761769e1eafce0ffebfb4089cb3847cd', + 'info_dict': { + 'id': '42850523', + 'display_id': 'FaintLightGullWholeWheat', + 'ext': 'mp4', + 'title': 'EA Play 2016 Live from the Novo Theatre', + 'thumbnail': r're:^https?://.*\.jpg', + 'timestamp': 1465767393, + 'upload_date': '20160612', + 'creator': 'EA', + 'uploader': 'stereotype_', + 'uploader_id': '43566419', + }, + }, { + # multiple formats + 'url': 'https://clips.twitch.tv/rflegendary/UninterestedBeeDAESuppy', + 'only_matching': True, + }, { + 'url': 'https://www.twitch.tv/sergeynixon/clip/StormyThankfulSproutFutureMan', + 'only_matching': True, + }, { + 'url': 'https://clips.twitch.tv/embed?clip=InquisitiveBreakableYogurtJebaited', + 'only_matching': True, + }, { + 'url': 'https://m.twitch.tv/rossbroadcast/clip/ConfidentBraveHumanChefFrank', + 'only_matching': True, + }, { + 'url': 'https://go.twitch.tv/rossbroadcast/clip/ConfidentBraveHumanChefFrank', + 'only_matching': True, + }, { + 'url': 'https://m.twitch.tv/clip/FaintLightGullWholeWheat', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + clip = self._download_gql( + video_id, [{ + 'operationName': 'VideoAccessToken_Clip', + 'variables': { + 'slug': video_id, + }, + }], + 'Downloading clip access token GraphQL')[0]['data']['clip'] + + if not clip: + raise ExtractorError( + 'This clip is no longer available', expected=True) + + access_query = { + 'sig': clip['playbackAccessToken']['signature'], + 'token': clip['playbackAccessToken']['value'], + } + + data = self._download_base_gql( + video_id, { + 'query': '''{ + clip(slug: "%s") { + broadcaster { + displayName + } + createdAt + curator { + displayName + id + } + durationSeconds + id + tiny: thumbnailURL(width: 86, height: 45) + small: thumbnailURL(width: 260, height: 147) + medium: thumbnailURL(width: 480, height: 272) + title + videoQualities { + frameRate + quality + sourceURL + } + viewCount + } +}''' % video_id}, 'Downloading clip GraphQL', fatal=False) + + if data: + clip = try_get(data, lambda x: x['data']['clip'], dict) or clip + + 
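+        # A descriptive note (not in upstream yt-dlp): each 'videoQualities' entry carries a direct MP4 'sourceURL', and the loop below builds one progressive-HTTP format per entry, attaching the playback access token ('sig'/'token') via update_url_query so the resulting URLs are authorized for download.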
formats = [] + for option in clip.get('videoQualities', []): + if not isinstance(option, dict): + continue + source = url_or_none(option.get('sourceURL')) + if not source: + continue + formats.append({ + 'url': update_url_query(source, access_query), + 'format_id': option.get('quality'), + 'height': int_or_none(option.get('quality')), + 'fps': int_or_none(option.get('frameRate')), + }) + + thumbnails = [] + for thumbnail_id in ('tiny', 'small', 'medium'): + thumbnail_url = clip.get(thumbnail_id) + if not thumbnail_url: + continue + thumb = { + 'id': thumbnail_id, + 'url': thumbnail_url, + } + mobj = re.search(r'-(\d+)x(\d+)\.', thumbnail_url) + if mobj: + thumb.update({ + 'height': int(mobj.group(2)), + 'width': int(mobj.group(1)), + }) + thumbnails.append(thumb) + + old_id = self._search_regex(r'%7C(\d+)(?:-\d+)?.mp4', formats[-1]['url'], 'old id', default=None) + + return { + 'id': clip.get('id') or video_id, + '_old_archive_ids': [make_archive_id(self, old_id)] if old_id else None, + 'display_id': video_id, + 'title': clip.get('title'), + 'formats': formats, + 'duration': int_or_none(clip.get('durationSeconds')), + 'view_count': int_or_none(clip.get('viewCount')), + 'timestamp': unified_timestamp(clip.get('createdAt')), + 'thumbnails': thumbnails, + 'creator': try_get(clip, lambda x: x['broadcaster']['displayName'], compat_str), + 'uploader': try_get(clip, lambda x: x['curator']['displayName'], compat_str), + 'uploader_id': try_get(clip, lambda x: x['curator']['id'], compat_str), + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/twitter.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/twitter.py new file mode 100644 index 0000000..b638621 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/twitter.py @@ -0,0 +1,1757 @@ +import json +import random +import re + +from .common import InfoExtractor +from .periscope import PeriscopeBaseIE, PeriscopeIE +from ..compat import functools # isort: split +from ..compat import ( + compat_parse_qs, + compat_urllib_parse_unquote, + compat_urllib_parse_urlparse, +) +from ..utils import ( + ExtractorError, + dict_get, + filter_dict, + float_or_none, + format_field, + int_or_none, + make_archive_id, + remove_end, + str_or_none, + strip_or_none, + traverse_obj, + try_call, + try_get, + unified_timestamp, + update_url_query, + url_or_none, + xpath_text, +) + + +class TwitterBaseIE(InfoExtractor): + _NETRC_MACHINE = 'twitter' + _API_BASE = 'https://api.twitter.com/1.1/' + _GRAPHQL_API_BASE = 'https://twitter.com/i/api/graphql/' + _BASE_REGEX = r'https?://(?:(?:www|m(?:obile)?)\.)?(?:twitter\.com|twitter3e4tixl4xyajtrzo62zg5vztmjuricljdp2c5kshju4avyoid\.onion)/' + _AUTH = 'AAAAAAAAAAAAAAAAAAAAANRILgAAAAAAnNwIzUejRCOuH5E6I8xnZz4puTs%3D1Zv7ttfk8LF81IUq16cHjhLTvJu4FA33AGWWjCpTnA' + _LEGACY_AUTH = 'AAAAAAAAAAAAAAAAAAAAAIK1zgAAAAAA2tUWuhGZ2JceoId5GwYWU5GspY4%3DUq7gzFoCZs1QfwGoVdvSac3IniczZEYXIcDyumCauIXpcAPorE' + _flow_token = None + + _LOGIN_INIT_DATA = json.dumps({ + 'input_flow_data': { + 'flow_context': { + 'debug_overrides': {}, + 'start_location': { + 'location': 'unknown' + } + } + }, + 'subtask_versions': { + 'action_list': 2, + 'alert_dialog': 1, + 'app_download_cta': 1, + 'check_logged_in_account': 1, + 'choice_selection': 3, + 'contacts_live_sync_permission_prompt': 0, + 'cta': 7, + 'email_verification': 2, + 'end_flow': 1, + 'enter_date': 1, + 'enter_email': 2, + 'enter_password': 5, + 'enter_phone': 2, + 'enter_recaptcha': 1, + 'enter_text': 5, + 'enter_username': 2, + 'generic_urt': 3, + 
'in_app_notification': 1, + 'interest_picker': 3, + 'js_instrumentation': 1, + 'menu_dialog': 1, + 'notifications_permission_prompt': 2, + 'open_account': 2, + 'open_home_timeline': 1, + 'open_link': 1, + 'phone_verification': 4, + 'privacy_options': 1, + 'security_key': 3, + 'select_avatar': 4, + 'select_banner': 2, + 'settings_list': 7, + 'show_code': 1, + 'sign_up': 2, + 'sign_up_review': 4, + 'tweet_selection_urt': 1, + 'update_users': 1, + 'upload_media': 1, + 'user_recommendations_list': 4, + 'user_recommendations_urt': 1, + 'wait_spinner': 3, + 'web_modal': 1 + } + }, separators=(',', ':')).encode() + + def _extract_variant_formats(self, variant, video_id): + variant_url = variant.get('url') + if not variant_url: + return [], {} + elif '.m3u8' in variant_url: + return self._extract_m3u8_formats_and_subtitles( + variant_url, video_id, 'mp4', 'm3u8_native', + m3u8_id='hls', fatal=False) + else: + tbr = int_or_none(dict_get(variant, ('bitrate', 'bit_rate')), 1000) or None + f = { + 'url': variant_url, + 'format_id': 'http' + ('-%d' % tbr if tbr else ''), + 'tbr': tbr, + } + self._search_dimensions_in_video_url(f, variant_url) + return [f], {} + + def _extract_formats_from_vmap_url(self, vmap_url, video_id): + vmap_url = url_or_none(vmap_url) + if not vmap_url: + return [], {} + vmap_data = self._download_xml(vmap_url, video_id) + formats = [] + subtitles = {} + urls = [] + for video_variant in vmap_data.findall('.//{http://twitter.com/schema/videoVMapV2.xsd}videoVariant'): + video_variant.attrib['url'] = compat_urllib_parse_unquote( + video_variant.attrib['url']) + urls.append(video_variant.attrib['url']) + fmts, subs = self._extract_variant_formats( + video_variant.attrib, video_id) + formats.extend(fmts) + subtitles = self._merge_subtitles(subtitles, subs) + video_url = strip_or_none(xpath_text(vmap_data, './/MediaFile')) + if video_url not in urls: + fmts, subs = self._extract_variant_formats({'url': video_url}, video_id) + formats.extend(fmts) + subtitles = self._merge_subtitles(subtitles, subs) + return formats, subtitles + + @staticmethod + def _search_dimensions_in_video_url(a_format, video_url): + m = re.search(r'/(?P<width>\d+)x(?P<height>\d+)/', video_url) + if m: + a_format.update({ + 'width': int(m.group('width')), + 'height': int(m.group('height')), + }) + + @property + def is_logged_in(self): + return bool(self._get_cookies(self._API_BASE).get('auth_token')) + + @functools.cached_property + def _selected_api(self): + return self._configuration_arg('api', ['graphql'], ie_key='Twitter')[0] + + def _fetch_guest_token(self, display_id): + guest_token = traverse_obj(self._download_json( + f'{self._API_BASE}guest/activate.json', display_id, 'Downloading guest token', data=b'', + headers=self._set_base_headers(legacy=display_id and self._selected_api == 'legacy')), + ('guest_token', {str})) + if not guest_token: + raise ExtractorError('Could not retrieve guest token') + return guest_token + + def _set_base_headers(self, legacy=False): + bearer_token = self._LEGACY_AUTH if legacy and not self.is_logged_in else self._AUTH + return filter_dict({ + 'Authorization': f'Bearer {bearer_token}', + 'x-csrf-token': try_call(lambda: self._get_cookies(self._API_BASE)['ct0'].value), + }) + + def _call_login_api(self, note, headers, query={}, data=None): + response = self._download_json( + f'{self._API_BASE}onboarding/task.json', None, note, + headers=headers, query=query, data=data, expected_status=400) + error = traverse_obj(response, ('errors', 0, 'message', {str})) + if error: + raise 
ExtractorError(f'Login failed, Twitter API says: {error}', expected=True) + elif traverse_obj(response, 'status') != 'success': + raise ExtractorError('Login was unsuccessful') + + subtask = traverse_obj( + response, ('subtasks', ..., 'subtask_id', {str}), get_all=False) + if not subtask: + raise ExtractorError('Twitter API did not return next login subtask') + + self._flow_token = response['flow_token'] + + return subtask + + def _perform_login(self, username, password): + if self.is_logged_in: + return + + webpage = self._download_webpage('https://twitter.com/', None, 'Downloading login page') + guest_token = self._search_regex( + r'\.cookie\s*=\s*["\']gt=(\d+);', webpage, 'gt', default=None) or self._fetch_guest_token(None) + headers = { + **self._set_base_headers(), + 'content-type': 'application/json', + 'x-guest-token': guest_token, + 'x-twitter-client-language': 'en', + 'x-twitter-active-user': 'yes', + 'Referer': 'https://twitter.com/', + 'Origin': 'https://twitter.com', + } + + def build_login_json(*subtask_inputs): + return json.dumps({ + 'flow_token': self._flow_token, + 'subtask_inputs': subtask_inputs + }, separators=(',', ':')).encode() + + def input_dict(subtask_id, text): + return { + 'subtask_id': subtask_id, + 'enter_text': { + 'text': text, + 'link': 'next_link' + } + } + + next_subtask = self._call_login_api( + 'Downloading flow token', headers, query={'flow_name': 'login'}, data=self._LOGIN_INIT_DATA) + + while not self.is_logged_in: + if next_subtask == 'LoginJsInstrumentationSubtask': + next_subtask = self._call_login_api( + 'Submitting JS instrumentation response', headers, data=build_login_json({ + 'subtask_id': next_subtask, + 'js_instrumentation': { + 'response': '{}', + 'link': 'next_link' + } + })) + + elif next_subtask == 'LoginEnterUserIdentifierSSO': + next_subtask = self._call_login_api( + 'Submitting username', headers, data=build_login_json({ + 'subtask_id': next_subtask, + 'settings_list': { + 'setting_responses': [{ + 'key': 'user_identifier', + 'response_data': { + 'text_data': { + 'result': username + } + } + }], + 'link': 'next_link' + } + })) + + elif next_subtask == 'LoginEnterAlternateIdentifierSubtask': + next_subtask = self._call_login_api( + 'Submitting alternate identifier', headers, + data=build_login_json(input_dict(next_subtask, self._get_tfa_info( + 'one of username, phone number or email that was not used as --username')))) + + elif next_subtask == 'LoginEnterPassword': + next_subtask = self._call_login_api( + 'Submitting password', headers, data=build_login_json({ + 'subtask_id': next_subtask, + 'enter_password': { + 'password': password, + 'link': 'next_link' + } + })) + + elif next_subtask == 'AccountDuplicationCheck': + next_subtask = self._call_login_api( + 'Submitting account duplication check', headers, data=build_login_json({ + 'subtask_id': next_subtask, + 'check_logged_in_account': { + 'link': 'AccountDuplicationCheck_false' + } + })) + + elif next_subtask == 'LoginTwoFactorAuthChallenge': + next_subtask = self._call_login_api( + 'Submitting 2FA token', headers, data=build_login_json(input_dict( + next_subtask, self._get_tfa_info('two-factor authentication token')))) + + elif next_subtask == 'LoginAcid': + next_subtask = self._call_login_api( + 'Submitting confirmation code', headers, data=build_login_json(input_dict( + next_subtask, self._get_tfa_info('confirmation code sent to your email or phone')))) + + elif next_subtask == 'ArkoseLogin': + self.raise_login_required('Twitter is requiring captcha for this login attempt', 
method='cookies') + + elif next_subtask == 'DenyLoginSubtask': + self.raise_login_required('Twitter rejected this login attempt as suspicious', method='cookies') + + elif next_subtask == 'LoginSuccessSubtask': + raise ExtractorError('Twitter API did not grant auth token cookie') + + else: + raise ExtractorError(f'Unrecognized subtask ID "{next_subtask}"') + + self.report_login() + + def _call_api(self, path, video_id, query={}, graphql=False): + headers = self._set_base_headers(legacy=not graphql and self._selected_api == 'legacy') + headers.update({ + 'x-twitter-auth-type': 'OAuth2Session', + 'x-twitter-client-language': 'en', + 'x-twitter-active-user': 'yes', + } if self.is_logged_in else { + 'x-guest-token': self._fetch_guest_token(video_id) + }) + allowed_status = {400, 401, 403, 404} if graphql else {403} + result = self._download_json( + (self._GRAPHQL_API_BASE if graphql else self._API_BASE) + path, + video_id, headers=headers, query=query, expected_status=allowed_status, + note=f'Downloading {"GraphQL" if graphql else "legacy API"} JSON') + + if result.get('errors'): + errors = ', '.join(set(traverse_obj(result, ('errors', ..., 'message', {str})))) + if errors and 'not authorized' in errors: + self.raise_login_required(remove_end(errors, '.')) + raise ExtractorError(f'Error(s) while querying API: {errors or "Unknown error"}') + + return result + + def _build_graphql_query(self, media_id): + raise NotImplementedError('Method must be implemented to support GraphQL') + + def _call_graphql_api(self, endpoint, media_id): + data = self._build_graphql_query(media_id) + query = {key: json.dumps(value, separators=(',', ':')) for key, value in data.items()} + return traverse_obj(self._call_api(endpoint, media_id, query=query, graphql=True), 'data') + + +class TwitterCardIE(InfoExtractor): + IE_NAME = 'twitter:card' + _VALID_URL = TwitterBaseIE._BASE_REGEX + r'i/(?:cards/tfw/v1|videos(?:/tweet)?)/(?P<id>\d+)' + _TESTS = [ + { + 'url': 'https://twitter.com/i/cards/tfw/v1/560070183650213889', + # MD5 checksums are different in different places + 'info_dict': { + 'id': '560070131976392705', + 'ext': 'mp4', + 'title': "Twitter - You can now shoot, edit and share video on Twitter. Capture life's most moving moments from your perspective.", + 'description': 'md5:18d3e24bb4f6e5007487dd546e53bd96', + 'uploader': 'Twitter', + 'uploader_id': 'Twitter', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 30.033, + 'timestamp': 1422366112, + 'upload_date': '20150127', + 'age_limit': 0, + 'comment_count': int, + 'tags': [], + 'repost_count': int, + 'like_count': int, + 'display_id': '560070183650213889', + 'uploader_url': 'https://twitter.com/Twitter', + }, + }, + { + 'url': 'https://twitter.com/i/cards/tfw/v1/623160978427936768', + 'md5': '7137eca597f72b9abbe61e5ae0161399', + 'info_dict': { + 'id': '623160978427936768', + 'ext': 'mp4', + 'title': "NASA - Fly over Pluto's icy Norgay Mountains and Sputnik Plain in this @NASANewHorizons #PlutoFlyby video.", + 'description': "Fly over Pluto's icy Norgay Mountains and Sputnik Plain in this @NASANewHorizons #PlutoFlyby video. 
https://t.co/BJYgOjSeGA", + 'uploader': 'NASA', + 'uploader_id': 'NASA', + 'timestamp': 1437408129, + 'upload_date': '20150720', + 'uploader_url': 'https://twitter.com/NASA', + 'age_limit': 0, + 'comment_count': int, + 'like_count': int, + 'repost_count': int, + 'tags': ['PlutoFlyby'], + }, + 'params': {'format': '[protocol=https]'} + }, + { + 'url': 'https://twitter.com/i/cards/tfw/v1/654001591733886977', + 'md5': 'b6d9683dd3f48e340ded81c0e917ad46', + 'info_dict': { + 'id': 'dq4Oj5quskI', + 'ext': 'mp4', + 'title': 'Ubuntu 11.10 Overview', + 'description': 'md5:a831e97fa384863d6e26ce48d1c43376', + 'upload_date': '20111013', + 'uploader': 'OMG! UBUNTU!', + 'uploader_id': 'omgubuntu', + 'channel_url': 'https://www.youtube.com/channel/UCIiSwcm9xiFb3Y4wjzR41eQ', + 'channel_id': 'UCIiSwcm9xiFb3Y4wjzR41eQ', + 'channel_follower_count': int, + 'chapters': 'count:8', + 'uploader_url': 'http://www.youtube.com/user/omgubuntu', + 'duration': 138, + 'categories': ['Film & Animation'], + 'age_limit': 0, + 'comment_count': int, + 'availability': 'public', + 'like_count': int, + 'thumbnail': 'https://i.ytimg.com/vi/dq4Oj5quskI/maxresdefault.jpg', + 'view_count': int, + 'tags': 'count:12', + 'channel': 'OMG! UBUNTU!', + 'playable_in_embed': True, + }, + 'add_ie': ['Youtube'], + }, + { + 'url': 'https://twitter.com/i/cards/tfw/v1/665289828897005568', + 'info_dict': { + 'id': 'iBb2x00UVlv', + 'ext': 'mp4', + 'upload_date': '20151113', + 'uploader_id': '1189339351084113920', + 'uploader': 'ArsenalTerje', + 'title': 'Vine by ArsenalTerje', + 'timestamp': 1447451307, + 'alt_title': 'Vine by ArsenalTerje', + 'comment_count': int, + 'like_count': int, + 'thumbnail': r're:^https?://[^?#]+\.jpg', + 'view_count': int, + 'repost_count': int, + }, + 'add_ie': ['Vine'], + 'params': {'skip_download': 'm3u8'}, + }, + { + 'url': 'https://twitter.com/i/videos/tweet/705235433198714880', + 'md5': '884812a2adc8aaf6fe52b15ccbfa3b88', + 'info_dict': { + 'id': '705235433198714880', + 'ext': 'mp4', + 'title': "Brent Yarina - Khalil Iverson's missed highlight dunk. And made highlight dunk. In one highlight.", + 'description': "Khalil Iverson's missed highlight dunk. And made highlight dunk. In one highlight. https://t.co/OrxcJ28Bns", + 'uploader': 'Brent Yarina', + 'uploader_id': 'BTNBrentYarina', + 'timestamp': 1456976204, + 'upload_date': '20160303', + }, + 'skip': 'This content is no longer available.', + }, + { + 'url': 'https://twitter.com/i/videos/752274308186120192', + 'only_matching': True, + }, + ] + + def _real_extract(self, url): + status_id = self._match_id(url) + return self.url_result( + 'https://twitter.com/statuses/' + status_id, + TwitterIE.ie_key(), status_id) + + +class TwitterIE(TwitterBaseIE): + IE_NAME = 'twitter' + _VALID_URL = TwitterBaseIE._BASE_REGEX + r'(?:(?:i/web|[^/]+)/status|statuses)/(?P<id>\d+)(?:/(?:video|photo)/(?P<index>\d+))?' + + _TESTS = [{ + 'url': 'https://twitter.com/freethenipple/status/643211948184596480', + 'info_dict': { + 'id': '643211870443208704', + 'display_id': '643211948184596480', + 'ext': 'mp4', + 'title': 'FREE THE NIPPLE - FTN supporters on Hollywood Blvd today!', + 'thumbnail': r're:^https?://.*\.jpg', + 'description': 'FTN supporters on Hollywood Blvd today! 
http://t.co/c7jHH749xJ', + 'uploader': 'FREE THE NIPPLE', + 'uploader_id': 'freethenipple', + 'duration': 12.922, + 'timestamp': 1442188653, + 'upload_date': '20150913', + 'uploader_url': 'https://twitter.com/freethenipple', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'view_count': int, + 'tags': [], + 'age_limit': 18, + }, + }, { + 'url': 'https://twitter.com/giphz/status/657991469417025536/photo/1', + 'md5': 'f36dcd5fb92bf7057f155e7d927eeb42', + 'info_dict': { + 'id': '657991469417025536', + 'ext': 'mp4', + 'title': 'Gifs - tu vai cai tu vai cai tu nao eh capaz disso tu vai cai', + 'description': 'Gifs on Twitter: "tu vai cai tu vai cai tu nao eh capaz disso tu vai cai https://t.co/tM46VHFlO5"', + 'thumbnail': r're:^https?://.*\.png', + 'uploader': 'Gifs', + 'uploader_id': 'giphz', + }, + 'expected_warnings': ['height', 'width'], + 'skip': 'Account suspended', + }, { + 'url': 'https://twitter.com/starwars/status/665052190608723968', + 'info_dict': { + 'id': '665052190608723968', + 'display_id': '665052190608723968', + 'ext': 'mp4', + 'title': r're:Star Wars.*A new beginning is coming December 18.*', + 'description': 'A new beginning is coming December 18. Watch the official 60 second #TV spot for #StarWars: #TheForceAwakens. https://t.co/OkSqT2fjWJ', + 'uploader_id': 'starwars', + 'uploader': r're:Star Wars.*', + 'timestamp': 1447395772, + 'upload_date': '20151113', + 'uploader_url': 'https://twitter.com/starwars', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'tags': ['TV', 'StarWars', 'TheForceAwakens'], + 'age_limit': 0, + }, + }, { + 'url': 'https://twitter.com/BTNBrentYarina/status/705235433198714880', + 'info_dict': { + 'id': '705235433198714880', + 'ext': 'mp4', + 'title': "Brent Yarina - Khalil Iverson's missed highlight dunk. And made highlight dunk. In one highlight.", + 'description': "Khalil Iverson's missed highlight dunk. And made highlight dunk. In one highlight. 
https://t.co/OrxcJ28Bns", + 'uploader_id': 'BTNBrentYarina', + 'uploader': 'Brent Yarina', + 'timestamp': 1456976204, + 'upload_date': '20160303', + 'uploader_url': 'https://twitter.com/BTNBrentYarina', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'tags': [], + 'age_limit': 0, + }, + 'params': { + # The same video as https://twitter.com/i/videos/tweet/705235433198714880 + # Test case of TwitterCardIE + 'skip_download': True, + }, + 'skip': 'Dead external link', + }, { + 'url': 'https://twitter.com/jaydingeer/status/700207533655363584', + 'info_dict': { + 'id': '700207414000242688', + 'display_id': '700207533655363584', + 'ext': 'mp4', + 'title': 'jaydin donte geer - BEAT PROD: @suhmeduh #Damndaniel', + 'description': 'BEAT PROD: @suhmeduh https://t.co/HBrQ4AfpvZ #Damndaniel https://t.co/byBooq2ejZ', + 'thumbnail': r're:^https?://.*\.jpg', + 'uploader': 'jaydin donte geer', + 'uploader_id': 'jaydingeer', + 'duration': 30.0, + 'timestamp': 1455777459, + 'upload_date': '20160218', + 'uploader_url': 'https://twitter.com/jaydingeer', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'view_count': int, + 'tags': ['Damndaniel'], + 'age_limit': 0, + }, + }, { + 'url': 'https://twitter.com/Filmdrunk/status/713801302971588609', + 'md5': '89a15ed345d13b86e9a5a5e051fa308a', + 'info_dict': { + 'id': 'MIOxnrUteUd', + 'ext': 'mp4', + 'title': 'Dr.Pepperの飲み方 #japanese #バカ #ドクペ #電動ガン', + 'uploader': 'TAKUMA', + 'uploader_id': '1004126642786242560', + 'timestamp': 1402826626, + 'upload_date': '20140615', + 'thumbnail': r're:^https?://.*\.jpg', + 'alt_title': 'Vine by TAKUMA', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'view_count': int, + }, + 'add_ie': ['Vine'], + }, { + 'url': 'https://twitter.com/captainamerica/status/719944021058060289', + 'info_dict': { + 'id': '717462543795523584', + 'display_id': '719944021058060289', + 'ext': 'mp4', + 'title': 'Captain America - @King0fNerd Are you sure you made the right choice? Find out in theaters.', + 'description': '@King0fNerd Are you sure you made the right choice? Find out in theaters. https://t.co/GpgYi9xMJI', + 'uploader_id': 'CaptainAmerica', + 'uploader': 'Captain America', + 'duration': 3.17, + 'timestamp': 1460483005, + 'upload_date': '20160412', + 'uploader_url': 'https://twitter.com/CaptainAmerica', + 'thumbnail': r're:^https?://.*\.jpg', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'view_count': int, + 'tags': [], + 'age_limit': 0, + }, + }, { + 'url': 'https://twitter.com/OPP_HSD/status/779210622571536384', + 'info_dict': { + 'id': '1zqKVVlkqLaKB', + 'ext': 'mp4', + 'title': 'Sgt Kerry Schmidt - Ontario Provincial Police - Road rage, mischief, assault, rollover and fire in one occurrence', + 'upload_date': '20160923', + 'uploader_id': '1PmKqpJdOJQoY', + 'uploader': 'Sgt Kerry Schmidt - Ontario Provincial Police', + 'timestamp': 1474613214, + 'thumbnail': r're:^https?://.*\.jpg', + }, + 'add_ie': ['Periscope'], + }, { + # has mp4 formats via mobile API + 'url': 'https://twitter.com/news_al3alm/status/852138619213144067', + 'info_dict': { + 'id': '852077943283097602', + 'ext': 'mp4', + 'title': 'عالم الأخبار - كلمة تاريخية بجلسة الجناسي التاريخية.. النائب خالد مؤنس العتيبي للمعارضين : اتقوا الله .. الظلم ظلمات يوم القيامة', + 'description': 'كلمة تاريخية بجلسة الجناسي التاريخية.. النائب خالد مؤنس العتيبي للمعارضين : اتقوا الله .. 
الظلم ظلمات يوم القيامة https://t.co/xg6OhpyKfN', + 'uploader': 'عالم الأخبار', + 'uploader_id': 'news_al3alm', + 'duration': 277.4, + 'timestamp': 1492000653, + 'upload_date': '20170412', + 'display_id': '852138619213144067', + 'age_limit': 0, + 'uploader_url': 'https://twitter.com/news_al3alm', + 'thumbnail': r're:^https?://.*\.jpg', + 'tags': [], + 'repost_count': int, + 'view_count': int, + 'like_count': int, + 'comment_count': int, + }, + }, { + 'url': 'https://twitter.com/i/web/status/910031516746514432', + 'info_dict': { + 'id': '910030238373089285', + 'display_id': '910031516746514432', + 'ext': 'mp4', + 'title': 'Préfet de Guadeloupe - [Direct] #Maria Le centre se trouve actuellement au sud de Basse-Terre. Restez confinés. Réfugiez-vous dans la pièce la + sûre.', + 'thumbnail': r're:^https?://.*\.jpg', + 'description': '[Direct] #Maria Le centre se trouve actuellement au sud de Basse-Terre. Restez confinés. Réfugiez-vous dans la pièce la + sûre. https://t.co/mwx01Rs4lo', + 'uploader': 'Préfet de Guadeloupe', + 'uploader_id': 'Prefet971', + 'duration': 47.48, + 'timestamp': 1505803395, + 'upload_date': '20170919', + 'uploader_url': 'https://twitter.com/Prefet971', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'view_count': int, + 'tags': ['Maria'], + 'age_limit': 0, + }, + 'params': { + 'skip_download': True, # requires ffmpeg + }, + }, { + # card via api.twitter.com/1.1/videos/tweet/config + 'url': 'https://twitter.com/LisPower1/status/1001551623938805763', + 'info_dict': { + 'id': '1001551417340022785', + 'display_id': '1001551623938805763', + 'ext': 'mp4', + 'title': 're:.*?Shep is on a roll today.*?', + 'thumbnail': r're:^https?://.*\.jpg', + 'description': 'md5:37b9f2ff31720cef23b2bd42ee8a0f09', + 'uploader': 'Lis Power', + 'uploader_id': 'LisPower1', + 'duration': 111.278, + 'timestamp': 1527623489, + 'upload_date': '20180529', + 'uploader_url': 'https://twitter.com/LisPower1', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'view_count': int, + 'tags': [], + 'age_limit': 0, + }, + 'params': { + 'skip_download': True, # requires ffmpeg + }, + }, { + 'url': 'https://twitter.com/foobar/status/1087791357756956680', + 'info_dict': { + 'id': '1087791272830607360', + 'display_id': '1087791357756956680', + 'ext': 'mp4', + 'title': 'X - A new is coming. Some of you got an opt-in to try it now. Check out the emoji button, quick keyboard shortcuts, upgraded trends, advanced search, and more. 
Let us know your thoughts!', + 'thumbnail': r're:^https?://.*\.jpg', + 'description': 'md5:6dfd341a3310fb97d80d2bf7145df976', + 'uploader': 'X', + 'uploader_id': 'X', + 'duration': 61.567, + 'timestamp': 1548184644, + 'upload_date': '20190122', + 'uploader_url': 'https://twitter.com/X', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'view_count': int, + 'tags': [], + 'age_limit': 0, + }, + 'skip': 'This Tweet is unavailable', + }, { + # not available in Periscope + 'url': 'https://twitter.com/ViviEducation/status/1136534865145286656', + 'info_dict': { + 'id': '1vOGwqejwoWxB', + 'ext': 'mp4', + 'title': 'Vivi - Vivi founder @lior_rauchy announcing our new student feedback tool live at @EduTECH_AU #EduTECH2019', + 'uploader': 'Vivi', + 'uploader_id': '1eVjYOLGkGrQL', + 'thumbnail': r're:^https?://.*\.jpg', + 'tags': ['EduTECH2019'], + 'view_count': int, + }, + 'add_ie': ['TwitterBroadcast'], + 'skip': 'Broadcast no longer exists', + }, { + # unified card + 'url': 'https://twitter.com/BrooklynNets/status/1349794411333394432?s=20', + 'info_dict': { + 'id': '1349774757969989634', + 'display_id': '1349794411333394432', + 'ext': 'mp4', + 'title': 'md5:d1c4941658e4caaa6cb579260d85dcba', + 'thumbnail': r're:^https?://.*\.jpg', + 'description': 'md5:71ead15ec44cee55071547d6447c6a3e', + 'uploader': 'Brooklyn Nets', + 'uploader_id': 'BrooklynNets', + 'duration': 324.484, + 'timestamp': 1610651040, + 'upload_date': '20210114', + 'uploader_url': 'https://twitter.com/BrooklynNets', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'tags': [], + 'age_limit': 0, + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'https://twitter.com/oshtru/status/1577855540407197696', + 'info_dict': { + 'id': '1577855447914409984', + 'display_id': '1577855540407197696', + 'ext': 'mp4', + 'title': 'md5:9d198efb93557b8f8d5b78c480407214', + 'description': 'md5:b9c3699335447391d11753ab21c70a74', + 'upload_date': '20221006', + 'uploader': 'oshtru', + 'uploader_id': 'oshtru', + 'uploader_url': 'https://twitter.com/oshtru', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 30.03, + 'timestamp': 1665025050, + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'view_count': int, + 'tags': [], + 'age_limit': 0, + }, + 'params': {'skip_download': True}, + }, { + 'url': 'https://twitter.com/UltimaShadowX/status/1577719286659006464', + 'info_dict': { + 'id': '1577719286659006464', + 'title': 'Ultima📛| New Era - Test', + 'description': 'Test https://t.co/Y3KEZD7Dad', + 'uploader': 'Ultima📛| New Era', + 'uploader_id': 'UltimaShadowX', + 'uploader_url': 'https://twitter.com/UltimaShadowX', + 'upload_date': '20221005', + 'timestamp': 1664992565, + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'tags': [], + 'age_limit': 0, + }, + 'playlist_count': 4, + 'params': {'skip_download': True}, + }, { + 'url': 'https://twitter.com/MesoMax919/status/1575560063510810624', + 'info_dict': { + 'id': '1575559336759263233', + 'display_id': '1575560063510810624', + 'ext': 'mp4', + 'title': 'md5:eec26382babd0f7c18f041db8ae1c9c9', + 'thumbnail': r're:^https?://.*\.jpg', + 'description': 'md5:95aea692fda36a12081b9629b02daa92', + 'uploader': 'Max Olson', + 'uploader_id': 'MesoMax919', + 'uploader_url': 'https://twitter.com/MesoMax919', + 'duration': 21.321, + 'timestamp': 1664477766, + 'upload_date': '20220929', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'view_count': int, + 'tags': ['HurricaneIan'], + 'age_limit': 0, + }, + }, { + # 
Adult content, fails if not logged in + 'url': 'https://twitter.com/Rizdraws/status/1575199173472927762', + 'info_dict': { + 'id': '1575199163847000068', + 'display_id': '1575199173472927762', + 'ext': 'mp4', + 'title': str, + 'description': str, + 'uploader': str, + 'uploader_id': 'Rizdraws', + 'uploader_url': 'https://twitter.com/Rizdraws', + 'upload_date': '20220928', + 'timestamp': 1664391723, + 'thumbnail': r're:^https?://.+\.jpg', + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'age_limit': 18, + 'tags': [] + }, + 'params': {'skip_download': 'The media could not be played'}, + 'skip': 'Requires authentication', + }, { + # Playlist result only with graphql API + 'url': 'https://twitter.com/Srirachachau/status/1395079556562706435', + 'playlist_mincount': 2, + 'info_dict': { + 'id': '1395079556562706435', + 'title': str, + 'tags': [], + 'uploader': str, + 'like_count': int, + 'upload_date': '20210519', + 'age_limit': 0, + 'repost_count': int, + 'description': 'Here it is! Finished my gothic western cartoon. Pretty proud of it. It\'s got some goofs and lots of splashy over the top violence, something for everyone, hope you like it https://t.co/fOsG5glUnw', + 'uploader_id': 'Srirachachau', + 'comment_count': int, + 'uploader_url': 'https://twitter.com/Srirachachau', + 'timestamp': 1621447860, + }, + }, { + 'url': 'https://twitter.com/DavidToons_/status/1578353380363501568', + 'playlist_mincount': 2, + 'info_dict': { + 'id': '1578353380363501568', + 'title': str, + 'uploader_id': 'DavidToons_', + 'repost_count': int, + 'like_count': int, + 'uploader': str, + 'timestamp': 1665143744, + 'uploader_url': 'https://twitter.com/DavidToons_', + 'description': 'Chris sounds like Linda from Bob\'s Burgers, so as an animator: this had to be done. 
https://t.co/WgJauwIW1w', + 'tags': [], + 'comment_count': int, + 'upload_date': '20221007', + 'age_limit': 0, + }, + }, { + 'url': 'https://twitter.com/primevideouk/status/1578401165338976258', + 'playlist_count': 2, + 'info_dict': { + 'id': '1578401165338976258', + 'title': str, + 'description': 'md5:659a6b517a034b4cee5d795381a2dc41', + 'uploader': str, + 'uploader_id': 'primevideouk', + 'timestamp': 1665155137, + 'upload_date': '20221007', + 'age_limit': 0, + 'uploader_url': 'https://twitter.com/primevideouk', + 'comment_count': int, + 'repost_count': int, + 'like_count': int, + 'tags': ['TheRingsOfPower'], + }, + }, { + # Twitter Spaces + 'url': 'https://twitter.com/MoniqueCamarra/status/1550101959377551360', + 'info_dict': { + 'id': '1lPJqmBeeNAJb', + 'ext': 'm4a', + 'title': 'EuroFile@6 Ukraine Up-date-Draghi Defenestration-the West', + 'uploader': r're:Monique Camarra.+?', + 'uploader_id': 'MoniqueCamarra', + 'live_status': 'was_live', + 'release_timestamp': 1658417414, + 'description': 'md5:acce559345fd49f129c20dbcda3f1201', + 'timestamp': 1658407771, + 'release_date': '20220721', + 'upload_date': '20220721', + }, + 'add_ie': ['TwitterSpaces'], + 'params': {'skip_download': 'm3u8'}, + 'skip': 'Requires authentication', + }, { + # URL specifies video number but --yes-playlist + 'url': 'https://twitter.com/CTVJLaidlaw/status/1600649710662213632/video/1', + 'playlist_mincount': 2, + 'info_dict': { + 'id': '1600649710662213632', + 'title': 'md5:be05989b0722e114103ed3851a0ffae2', + 'timestamp': 1670459604.0, + 'description': 'md5:591c19ce66fadc2359725d5cd0d1052c', + 'comment_count': int, + 'uploader_id': 'CTVJLaidlaw', + 'repost_count': int, + 'tags': ['colorectalcancer', 'cancerjourney', 'imnotaquitter'], + 'upload_date': '20221208', + 'age_limit': 0, + 'uploader': 'Jocelyn Laidlaw', + 'uploader_url': 'https://twitter.com/CTVJLaidlaw', + 'like_count': int, + }, + }, { + # URL specifies video number and --no-playlist + 'url': 'https://twitter.com/CTVJLaidlaw/status/1600649710662213632/video/2', + 'info_dict': { + 'id': '1600649511827013632', + 'ext': 'mp4', + 'title': 'md5:7662a0a27ce6faa3e5b160340f3cfab1', + 'thumbnail': r're:^https?://.+\.jpg', + 'timestamp': 1670459604.0, + 'uploader_id': 'CTVJLaidlaw', + 'uploader': 'Jocelyn Laidlaw', + 'repost_count': int, + 'comment_count': int, + 'tags': ['colorectalcancer', 'cancerjourney', 'imnotaquitter'], + 'duration': 102.226, + 'uploader_url': 'https://twitter.com/CTVJLaidlaw', + 'display_id': '1600649710662213632', + 'like_count': int, + 'view_count': int, + 'description': 'md5:591c19ce66fadc2359725d5cd0d1052c', + 'upload_date': '20221208', + 'age_limit': 0, + }, + 'params': {'noplaylist': True}, + }, { + # id pointing to TweetWithVisibilityResults type entity which wraps the actual Tweet over + # note the id different between extraction and url + 'url': 'https://twitter.com/s2FAKER/status/1621117700482416640', + 'info_dict': { + 'id': '1621117577354424321', + 'display_id': '1621117700482416640', + 'ext': 'mp4', + 'title': '뽀 - 아 최우제 이동속도 봐', + 'description': '아 최우제 이동속도 봐 https://t.co/dxu2U5vXXB', + 'duration': 24.598, + 'uploader': '뽀', + 'uploader_id': 's2FAKER', + 'uploader_url': 'https://twitter.com/s2FAKER', + 'upload_date': '20230202', + 'timestamp': 1675339553.0, + 'thumbnail': r're:https?://pbs\.twimg\.com/.+', + 'age_limit': 18, + 'tags': [], + 'like_count': int, + 'repost_count': int, + 'comment_count': int, + 'view_count': int, + }, + }, { + 'url': 
'https://twitter.com/hlo_again/status/1599108751385972737/video/2', + 'info_dict': { + 'id': '1599108643743473680', + 'display_id': '1599108751385972737', + 'ext': 'mp4', + 'title': '\u06ea - \U0001F48B', + 'uploader_url': 'https://twitter.com/hlo_again', + 'like_count': int, + 'uploader_id': 'hlo_again', + 'thumbnail': 'https://pbs.twimg.com/ext_tw_video_thumb/1599108643743473680/pu/img/UG3xjov4rgg5sbYM.jpg?name=orig', + 'repost_count': int, + 'duration': 9.531, + 'comment_count': int, + 'view_count': int, + 'upload_date': '20221203', + 'age_limit': 0, + 'timestamp': 1670092210.0, + 'tags': [], + 'uploader': '\u06ea', + 'description': '\U0001F48B https://t.co/bTj9Qz7vQP', + }, + 'params': {'noplaylist': True}, + }, { + 'url': 'https://twitter.com/MunTheShinobi/status/1600009574919962625', + 'info_dict': { + 'id': '1600009362759733248', + 'display_id': '1600009574919962625', + 'ext': 'mp4', + 'uploader_url': 'https://twitter.com/MunTheShinobi', + 'description': 'This is a genius ad by Apple. \U0001f525\U0001f525\U0001f525\U0001f525\U0001f525 https://t.co/cNsA0MoOml', + 'view_count': int, + 'thumbnail': 'https://pbs.twimg.com/ext_tw_video_thumb/1600009362759733248/pu/img/XVhFQivj75H_YxxV.jpg?name=orig', + 'age_limit': 0, + 'uploader': 'Mün', + 'repost_count': int, + 'upload_date': '20221206', + 'title': 'Mün - This is a genius ad by Apple. \U0001f525\U0001f525\U0001f525\U0001f525\U0001f525', + 'comment_count': int, + 'like_count': int, + 'tags': [], + 'uploader_id': 'MunTheShinobi', + 'duration': 139.987, + 'timestamp': 1670306984.0, + }, + }, { + # retweeted_status (private) + 'url': 'https://twitter.com/liberdalau/status/1623739803874349067', + 'info_dict': { + 'id': '1623274794488659969', + 'display_id': '1623739803874349067', + 'ext': 'mp4', + 'title': 'Johnny Bullets - Me after going viral to over 30million people: Whoopsie-daisy', + 'description': 'md5:b06864cd3dc2554821cc327f5348485a', + 'uploader': 'Johnny Bullets', + 'uploader_id': 'Johnnybull3ts', + 'uploader_url': 'https://twitter.com/Johnnybull3ts', + 'age_limit': 0, + 'tags': [], + 'duration': 8.033, + 'timestamp': 1675853859.0, + 'upload_date': '20230208', + 'thumbnail': r're:https://pbs\.twimg\.com/ext_tw_video_thumb/.+', + 'like_count': int, + 'repost_count': int, + }, + 'skip': 'Protected tweet', + }, { + # retweeted_status + 'url': 'https://twitter.com/playstrumpcard/status/1695424220702888009', + 'info_dict': { + 'id': '1694928337846538240', + 'ext': 'mp4', + 'display_id': '1695424220702888009', + 'title': 'md5:e8daa9527bc2b947121395494f786d9d', + 'description': 'md5:004f2d37fd58737724ec75bc7e679938', + 'uploader': 'Benny Johnson', + 'uploader_id': 'bennyjohnson', + 'uploader_url': 'https://twitter.com/bennyjohnson', + 'age_limit': 0, + 'tags': [], + 'duration': 45.001, + 'timestamp': 1692962814.0, + 'upload_date': '20230825', + 'thumbnail': r're:https://pbs\.twimg\.com/amplify_video_thumb/.+', + 'like_count': int, + 'repost_count': int, + 'view_count': int, + 'comment_count': int, + }, + }, { + # retweeted_status w/ legacy API + 'url': 'https://twitter.com/playstrumpcard/status/1695424220702888009', + 'info_dict': { + 'id': '1694928337846538240', + 'ext': 'mp4', + 'display_id': '1695424220702888009', + 'title': 'md5:e8daa9527bc2b947121395494f786d9d', + 'description': 'md5:004f2d37fd58737724ec75bc7e679938', + 'uploader': 'Benny Johnson', + 'uploader_id': 'bennyjohnson', + 'uploader_url': 'https://twitter.com/bennyjohnson', + 'age_limit': 0, + 'tags': [], + 'duration': 45.001, + 'timestamp': 1692962814.0, + 'upload_date': 
'20230825', + 'thumbnail': r're:https://pbs\.twimg\.com/amplify_video_thumb/.+', + 'like_count': int, + 'repost_count': int, + }, + 'params': {'extractor_args': {'twitter': {'api': ['legacy']}}}, + }, { + # Broadcast embedded in tweet + 'url': 'https://twitter.com/JessicaDobsonWX/status/1693057346933600402', + 'info_dict': { + 'id': '1yNGaNLjEblJj', + 'ext': 'mp4', + 'title': 'Jessica Dobson - WAVE Weather Now - Saturday 8/19/23 Update', + 'uploader': 'Jessica Dobson', + 'uploader_id': '1DZEoDwDovRQa', + 'thumbnail': r're:^https?://.*\.jpg', + 'view_count': int, + }, + 'add_ie': ['TwitterBroadcast'], + }, { + # Animated gif and quote tweet video, with syndication API + 'url': 'https://twitter.com/BAKKOOONN/status/1696256659889565950', + 'playlist_mincount': 2, + 'info_dict': { + 'id': '1696256659889565950', + 'title': 'BAKOON - https://t.co/zom968d0a0', + 'description': 'https://t.co/zom968d0a0', + 'tags': [], + 'uploader': 'BAKOON', + 'uploader_id': 'BAKKOOONN', + 'uploader_url': 'https://twitter.com/BAKKOOONN', + 'age_limit': 18, + 'timestamp': 1693254077.0, + 'upload_date': '20230828', + 'like_count': int, + }, + 'params': {'extractor_args': {'twitter': {'api': ['syndication']}}}, + 'expected_warnings': ['Not all metadata'], + }, { + # onion route + 'url': 'https://twitter3e4tixl4xyajtrzo62zg5vztmjuricljdp2c5kshju4avyoid.onion/TwitterBlue/status/1484226494708662273', + 'only_matching': True, + }, { + # Twitch Clip Embed + 'url': 'https://twitter.com/GunB1g/status/1163218564784017422', + 'only_matching': True, + }, { + # promo_video_website card + 'url': 'https://twitter.com/GunB1g/status/1163218564784017422', + 'only_matching': True, + }, { + # promo_video_convo card + 'url': 'https://twitter.com/poco_dandy/status/1047395834013384704', + 'only_matching': True, + }, { + # appplayer card + 'url': 'https://twitter.com/poco_dandy/status/1150646424461176832', + 'only_matching': True, + }, { + # video_direct_message card + 'url': 'https://twitter.com/qarev001/status/1348948114569269251', + 'only_matching': True, + }, { + # poll2choice_video card + 'url': 'https://twitter.com/CAF_Online/status/1349365911120195585', + 'only_matching': True, + }, { + # poll3choice_video card + 'url': 'https://twitter.com/SamsungMobileSA/status/1348609186725289984', + 'only_matching': True, + }, { + # poll4choice_video card + 'url': 'https://twitter.com/SouthamptonFC/status/1347577658079641604', + 'only_matching': True, + }] + + _MEDIA_ID_RE = re.compile(r'_video/(\d+)/') + + @property + def _GRAPHQL_ENDPOINT(self): + if self.is_logged_in: + return 'zZXycP0V6H7m-2r0mOnFcA/TweetDetail' + return '2ICDjqPd81tulZcYrtpTuQ/TweetResultByRestId' + + def _graphql_to_legacy(self, data, twid): + result = traverse_obj(data, ( + 'threaded_conversation_with_injections_v2', 'instructions', 0, 'entries', + lambda _, v: v['entryId'] == f'tweet-{twid}', 'content', 'itemContent', + 'tweet_results', 'result', ('tweet', None), {dict}, + ), default={}, get_all=False) if self.is_logged_in else traverse_obj( + data, ('tweetResult', 'result', {dict}), default={}) + + if result.get('__typename') not in ('Tweet', 'TweetTombstone', 'TweetUnavailable', None): + self.report_warning(f'Unknown typename: {result.get("__typename")}', twid, only_once=True) + + if 'tombstone' in result: + cause = remove_end(traverse_obj(result, ('tombstone', 'text', 'text', {str})), '. 
Learn more') + raise ExtractorError(f'Twitter API says: {cause or "Unknown error"}', expected=True) + elif result.get('__typename') == 'TweetUnavailable': + reason = result.get('reason') + if reason == 'NsfwLoggedOut': + self.raise_login_required('NSFW tweet requires authentication') + elif reason == 'Protected': + self.raise_login_required('You are not authorized to view this protected tweet') + raise ExtractorError(reason or 'Requested tweet is unavailable', expected=True) + + status = result.get('legacy', {}) + status.update(traverse_obj(result, { + 'user': ('core', 'user_results', 'result', 'legacy'), + 'card': ('card', 'legacy'), + 'quoted_status': ('quoted_status_result', 'result', 'legacy'), + 'retweeted_status': ('legacy', 'retweeted_status_result', 'result', 'legacy'), + }, expected_type=dict, default={})) + + # extra transformations needed since result does not match legacy format + if status.get('retweeted_status'): + status['retweeted_status']['user'] = traverse_obj(status, ( + 'retweeted_status_result', 'result', 'core', 'user_results', 'result', 'legacy', {dict})) or {} + + binding_values = { + binding_value.get('key'): binding_value.get('value') + for binding_value in traverse_obj(status, ('card', 'binding_values', ..., {dict})) + } + if binding_values: + status['card']['binding_values'] = binding_values + + return status + + def _build_graphql_query(self, media_id): + return { + 'variables': { + 'focalTweetId': media_id, + 'includePromotedContent': True, + 'with_rux_injections': False, + 'withBirdwatchNotes': True, + 'withCommunity': True, + 'withDownvotePerspective': False, + 'withQuickPromoteEligibilityTweetFields': True, + 'withReactionsMetadata': False, + 'withReactionsPerspective': False, + 'withSuperFollowsTweetFields': True, + 'withSuperFollowsUserFields': True, + 'withV2Timeline': True, + 'withVoice': True, + }, + 'features': { + 'graphql_is_translatable_rweb_tweet_is_translatable_enabled': False, + 'interactive_text_enabled': True, + 'responsive_web_edit_tweet_api_enabled': True, + 'responsive_web_enhance_cards_enabled': True, + 'responsive_web_graphql_timeline_navigation_enabled': False, + 'responsive_web_text_conversations_enabled': False, + 'responsive_web_uc_gql_enabled': True, + 'standardized_nudges_misinfo': True, + 'tweet_with_visibility_results_prefer_gql_limited_actions_policy_enabled': False, + 'tweetypie_unmention_optimization_enabled': True, + 'unified_cards_ad_metadata_container_dynamic_card_content_query_enabled': True, + 'verified_phone_label_enabled': False, + 'vibe_api_enabled': True, + }, + } if self.is_logged_in else { + 'variables': { + 'tweetId': media_id, + 'withCommunity': False, + 'includePromotedContent': False, + 'withVoice': False, + }, + 'features': { + 'creator_subscriptions_tweet_preview_api_enabled': True, + 'tweetypie_unmention_optimization_enabled': True, + 'responsive_web_edit_tweet_api_enabled': True, + 'graphql_is_translatable_rweb_tweet_is_translatable_enabled': True, + 'view_counts_everywhere_api_enabled': True, + 'longform_notetweets_consumption_enabled': True, + 'responsive_web_twitter_article_tweet_consumption_enabled': False, + 'tweet_awards_web_tipping_enabled': False, + 'freedom_of_speech_not_reach_fetch_enabled': True, + 'standardized_nudges_misinfo': True, + 'tweet_with_visibility_results_prefer_gql_limited_actions_policy_enabled': True, + 'longform_notetweets_rich_text_read_enabled': True, + 'longform_notetweets_inline_media_enabled': True, + 'responsive_web_graphql_exclude_directive_enabled': True, + 
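# NB: the GraphQL API rejects requests that omit feature flags it expects, so this set should mirror what the web client sends +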
'verified_phone_label_enabled': False, + 'responsive_web_media_download_video_enabled': False, + 'responsive_web_graphql_skip_user_profile_image_extensions_enabled': False, + 'responsive_web_graphql_timeline_navigation_enabled': True, + 'responsive_web_enhance_cards_enabled': False + }, + 'fieldToggles': { + 'withArticleRichContentState': False + } + } + + def _extract_status(self, twid): + if self.is_logged_in or self._selected_api == 'graphql': + status = self._graphql_to_legacy(self._call_graphql_api(self._GRAPHQL_ENDPOINT, twid), twid) + + elif self._selected_api == 'legacy': + status = self._call_api(f'statuses/show/{twid}.json', twid, { + 'cards_platform': 'Web-12', + 'include_cards': 1, + 'include_reply_count': 1, + 'include_user_entities': 0, + 'tweet_mode': 'extended', + }) + + elif self._selected_api == 'syndication': + self.report_warning( + 'Not all metadata or media is available via syndication endpoint', twid, only_once=True) + status = self._download_json( + 'https://cdn.syndication.twimg.com/tweet-result', twid, 'Downloading syndication JSON', + headers={'User-Agent': 'Googlebot'}, query={ + 'id': twid, + # TODO: token = ((Number(twid) / 1e15) * Math.PI).toString(36).replace(/(0+|\.)/g, '') + 'token': ''.join(random.choices('123456789abcdefghijklmnopqrstuvwxyz', k=10)), + }) + if not status: + raise ExtractorError('Syndication endpoint returned empty JSON response') + # Transform the result so its structure matches that of legacy/graphql + media = [] + for detail in traverse_obj(status, ((None, 'quoted_tweet'), 'mediaDetails', ..., {dict})): + detail['id_str'] = traverse_obj(detail, ( + 'video_info', 'variants', ..., 'url', {self._MEDIA_ID_RE.search}, 1), get_all=False) or twid + media.append(detail) + status['extended_entities'] = {'media': media} + + else: + raise ExtractorError(f'"{self._selected_api}" is not a valid API selection', expected=True) + + return traverse_obj(status, 'retweeted_status', None, expected_type=dict) or {} + + def _real_extract(self, url): + twid, selected_index = self._match_valid_url(url).group('id', 'index') + status = self._extract_status(twid) + + title = description = traverse_obj( + status, (('full_text', 'text'), {lambda x: x.replace('\n', ' ')}), get_all=False) or '' + # strip 'https -_t.co_BJYgOjSeGA' junk from filenames + title = re.sub(r'\s+(https?://[^ ]+)', '', title) + user = status.get('user') or {} + uploader = user.get('name') + if uploader: + title = f'{uploader} - {title}' + uploader_id = user.get('screen_name') + + info = { + 'id': twid, + 'title': title, + 'description': description, + 'uploader': uploader, + 'timestamp': unified_timestamp(status.get('created_at')), + 'uploader_id': uploader_id, + 'uploader_url': format_field(uploader_id, None, 'https://twitter.com/%s'), + 'like_count': int_or_none(status.get('favorite_count')), + 'repost_count': int_or_none(status.get('retweet_count')), + 'comment_count': int_or_none(status.get('reply_count')), + 'age_limit': 18 if status.get('possibly_sensitive') else 0, + 'tags': traverse_obj(status, ('entities', 'hashtags', ..., 'text')), + } + + def extract_from_video_info(media): + media_id = traverse_obj(media, 'id_str', 'id', expected_type=str_or_none) + self.write_debug(f'Extracting from video info: {media_id}') + + formats = [] + subtitles = {} + for variant in traverse_obj(media, ('video_info', 'variants', ...)): + fmts, subs = self._extract_variant_formats(variant, twid) + subtitles = self._merge_subtitles(subtitles, subs) + formats.extend(fmts) + + thumbnails = [] + media_url = 
media.get('media_url_https') or media.get('media_url') + if media_url: + def add_thumbnail(name, size): + thumbnails.append({ + 'id': name, + 'url': update_url_query(media_url, {'name': name}), + 'width': int_or_none(size.get('w') or size.get('width')), + 'height': int_or_none(size.get('h') or size.get('height')), + }) + for name, size in media.get('sizes', {}).items(): + add_thumbnail(name, size) + add_thumbnail('orig', media.get('original_info') or {}) + + return { + 'id': media_id, + 'formats': formats, + 'subtitles': subtitles, + 'thumbnails': thumbnails, + 'view_count': traverse_obj(media, ('mediaStats', 'viewCount', {int_or_none})), + 'duration': float_or_none(traverse_obj(media, ('video_info', 'duration_millis')), 1000), + # The codec of http formats are unknown + '_format_sort_fields': ('res', 'br', 'size', 'proto'), + } + + def extract_from_card_info(card): + if not card: + return + + self.write_debug(f'Extracting from card info: {card.get("url")}') + binding_values = card['binding_values'] + + def get_binding_value(k): + o = binding_values.get(k) or {} + return try_get(o, lambda x: x[x['type'].lower() + '_value']) + + card_name = card['name'].split(':')[-1] + if card_name == 'player': + yield { + '_type': 'url', + 'url': get_binding_value('player_url'), + } + elif card_name == 'periscope_broadcast': + yield { + '_type': 'url', + 'url': get_binding_value('url') or get_binding_value('player_url'), + 'ie_key': PeriscopeIE.ie_key(), + } + elif card_name == 'broadcast': + yield { + '_type': 'url', + 'url': get_binding_value('broadcast_url'), + 'ie_key': TwitterBroadcastIE.ie_key(), + } + elif card_name == 'audiospace': + yield { + '_type': 'url', + 'url': f'https://twitter.com/i/spaces/{get_binding_value("id")}', + 'ie_key': TwitterSpacesIE.ie_key(), + } + elif card_name == 'summary': + yield { + '_type': 'url', + 'url': get_binding_value('card_url'), + } + elif card_name == 'unified_card': + unified_card = self._parse_json(get_binding_value('unified_card'), twid) + yield from map(extract_from_video_info, traverse_obj( + unified_card, ('media_entities', ...), expected_type=dict)) + # amplify, promo_video_website, promo_video_convo, appplayer, + # video_direct_message, poll2choice_video, poll3choice_video, + # poll4choice_video, ... 
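+ # cards of these types all expose a vmap or stream URL in their binding values, so the generic branch below can extract formats from it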
+ else: + is_amplify = card_name == 'amplify' + vmap_url = get_binding_value('amplify_url_vmap') if is_amplify else get_binding_value('player_stream_url') + content_id = get_binding_value('%s_content_id' % (card_name if is_amplify else 'player')) + formats, subtitles = self._extract_formats_from_vmap_url(vmap_url, content_id or twid) + + thumbnails = [] + for suffix in ('_small', '', '_large', '_x_large', '_original'): + image = get_binding_value('player_image' + suffix) or {} + image_url = image.get('url') + if not image_url or '/player-placeholder' in image_url: + continue + thumbnails.append({ + 'id': suffix[1:] if suffix else 'medium', + 'url': image_url, + 'width': int_or_none(image.get('width')), + 'height': int_or_none(image.get('height')), + }) + + yield { + 'formats': formats, + 'subtitles': subtitles, + 'thumbnails': thumbnails, + 'duration': int_or_none(get_binding_value( + 'content_duration_seconds')), + } + + videos = traverse_obj(status, ( + (None, 'quoted_status'), 'extended_entities', 'media', lambda _, m: m['type'] != 'photo', {dict})) + + if self._yes_playlist(twid, selected_index, video_label='URL-specified video number'): + selected_entries = (*map(extract_from_video_info, videos), *extract_from_card_info(status.get('card'))) + else: + desired_obj = traverse_obj(status, ( + (None, 'quoted_status'), 'extended_entities', 'media', int(selected_index) - 1, {dict}), get_all=False) + if not desired_obj: + raise ExtractorError(f'Video #{selected_index} is unavailable', expected=True) + elif desired_obj.get('type') != 'video': + raise ExtractorError(f'Media #{selected_index} is not a video', expected=True) + + # Restore original archive id and video index in title + for index, entry in enumerate(videos, 1): + if entry.get('id') != desired_obj.get('id'): + continue + if index == 1: + info['_old_archive_ids'] = [make_archive_id(self, twid)] + if len(videos) != 1: + info['title'] += f' #{index}' + break + + return {**info, **extract_from_video_info(desired_obj), 'display_id': twid} + + entries = [{**info, **data, 'display_id': twid} for data in selected_entries] + if not entries: + expanded_url = traverse_obj(status, ('entities', 'urls', 0, 'expanded_url'), expected_type=url_or_none) + if not expanded_url or expanded_url == url: + self.raise_no_formats('No video could be found in this tweet', expected=True) + return info + + return self.url_result(expanded_url, display_id=twid, **info) + + entries[0]['_old_archive_ids'] = [make_archive_id(self, twid)] + + if len(entries) == 1: + return entries[0] + + for index, entry in enumerate(entries, 1): + entry['title'] += f' #{index}' + + return self.playlist_result(entries, **info) + + +class TwitterAmplifyIE(TwitterBaseIE): + IE_NAME = 'twitter:amplify' + _VALID_URL = r'https?://amp\.twimg\.com/v/(?P<id>[0-9a-f\-]{36})' + + _TEST = { + 'url': 'https://amp.twimg.com/v/0ba0c3c7-0af3-4c0a-bed5-7efd1ffa2951', + 'md5': 'fec25801d18a4557c5c9f33d2c379ffa', + 'info_dict': { + 'id': '0ba0c3c7-0af3-4c0a-bed5-7efd1ffa2951', + 'ext': 'mp4', + 'title': 'Twitter Video', + 'thumbnail': 're:^https?://.*', + }, + 'params': {'format': '[protocol=https]'}, + } + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + vmap_url = self._html_search_meta( + 'twitter:amplify:vmap', webpage, 'vmap url') + formats, _ = self._extract_formats_from_vmap_url(vmap_url, video_id) + + thumbnails = [] + thumbnail = self._html_search_meta( + 'twitter:image:src', webpage, 'thumbnail', fatal=False) + + def 
_find_dimension(target): + w = int_or_none(self._html_search_meta( + 'twitter:%s:width' % target, webpage, fatal=False)) + h = int_or_none(self._html_search_meta( + 'twitter:%s:height' % target, webpage, fatal=False)) + return w, h + + if thumbnail: + thumbnail_w, thumbnail_h = _find_dimension('image') + thumbnails.append({ + 'url': thumbnail, + 'width': thumbnail_w, + 'height': thumbnail_h, + }) + + video_w, video_h = _find_dimension('player') + formats[0].update({ + 'width': video_w, + 'height': video_h, + }) + + return { + 'id': video_id, + 'title': 'Twitter Video', + 'formats': formats, + 'thumbnails': thumbnails, + } + + +class TwitterBroadcastIE(TwitterBaseIE, PeriscopeBaseIE): + IE_NAME = 'twitter:broadcast' + _VALID_URL = TwitterBaseIE._BASE_REGEX + r'i/broadcasts/(?P<id>[0-9a-zA-Z]{13})' + + _TEST = { + # untitled Periscope video + 'url': 'https://twitter.com/i/broadcasts/1yNGaQLWpejGj', + 'info_dict': { + 'id': '1yNGaQLWpejGj', + 'ext': 'mp4', + 'title': 'Andrea May Sahouri - Periscope Broadcast', + 'uploader': 'Andrea May Sahouri', + 'uploader_id': '1PXEdBZWpGwKe', + 'thumbnail': r're:^https?://[^?#]+\.jpg\?token=', + 'view_count': int, + }, + } + + def _real_extract(self, url): + broadcast_id = self._match_id(url) + broadcast = self._call_api( + 'broadcasts/show.json', broadcast_id, + {'ids': broadcast_id})['broadcasts'][broadcast_id] + if not broadcast: + raise ExtractorError('Broadcast no longer exists', expected=True) + info = self._parse_broadcast_data(broadcast, broadcast_id) + media_key = broadcast['media_key'] + source = self._call_api( + f'live_video_stream/status/{media_key}', media_key)['source'] + m3u8_url = source.get('noRedirectPlaybackUrl') or source['location'] + if '/live_video_stream/geoblocked/' in m3u8_url: + self.raise_geo_restricted() + m3u8_id = compat_parse_qs(compat_urllib_parse_urlparse( + m3u8_url).query).get('type', [None])[0] + state, width, height = self._extract_common_format_info(broadcast) + info['formats'] = self._extract_pscp_m3u8_formats( + m3u8_url, broadcast_id, m3u8_id, state, width, height) + return info + + +class TwitterSpacesIE(TwitterBaseIE): + IE_NAME = 'twitter:spaces' + _VALID_URL = TwitterBaseIE._BASE_REGEX + r'i/spaces/(?P<id>[0-9a-zA-Z]{13})' + + _TESTS = [{ + 'url': 'https://twitter.com/i/spaces/1RDxlgyvNXzJL', + 'info_dict': { + 'id': '1RDxlgyvNXzJL', + 'ext': 'm4a', + 'title': 'King Carlo e la mossa Kansas City per fare il Grande Centro', + 'description': 'Twitter Space participated by annarita digiorgio, Signor Ernesto, Raffaello Colosimo, Simone M. 
Sepe', + 'uploader': r're:Lucio Di Gaetano.*?', + 'uploader_id': 'luciodigaetano', + 'live_status': 'was_live', + 'timestamp': 1659877956, + 'upload_date': '20220807', + 'release_timestamp': 1659904215, + 'release_date': '20220807', + }, + 'params': {'skip_download': 'm3u8'}, + }, { + # post_live/TimedOut but downloadable + 'url': 'https://twitter.com/i/spaces/1vAxRAVQWONJl', + 'info_dict': { + 'id': '1vAxRAVQWONJl', + 'ext': 'm4a', + 'title': 'Framing Up FinOps: Billing Tools', + 'description': 'Twitter Space participated by rupa, Alfonso Hernandez', + 'uploader': 'Google Cloud', + 'uploader_id': 'googlecloud', + 'live_status': 'post_live', + 'timestamp': 1681409554, + 'upload_date': '20230413', + 'release_timestamp': 1681839000, + 'release_date': '20230418', + }, + 'params': {'skip_download': 'm3u8'}, + }, { + # Needs ffmpeg as downloader, see: https://github.com/yt-dlp/yt-dlp/issues/7536 + 'url': 'https://twitter.com/i/spaces/1eaKbrQbjoRKX', + 'info_dict': { + 'id': '1eaKbrQbjoRKX', + 'ext': 'm4a', + 'title': 'あ', + 'description': 'Twitter Space participated by nobody yet', + 'uploader': '息根とめる🔪Twitchで復活', + 'uploader_id': 'tomeru_ikinone', + 'live_status': 'was_live', + 'timestamp': 1685617198, + 'upload_date': '20230601', + }, + 'params': {'skip_download': 'm3u8'}, + }] + + SPACE_STATUS = { + 'notstarted': 'is_upcoming', + 'ended': 'was_live', + 'running': 'is_live', + 'timedout': 'post_live', + } + + def _build_graphql_query(self, space_id): + return { + 'variables': { + 'id': space_id, + 'isMetatagsQuery': True, + 'withDownvotePerspective': False, + 'withReactionsMetadata': False, + 'withReactionsPerspective': False, + 'withReplays': True, + 'withSuperFollowsUserFields': True, + 'withSuperFollowsTweetFields': True, + }, + 'features': { + 'dont_mention_me_view_api_enabled': True, + 'interactive_text_enabled': True, + 'responsive_web_edit_tweet_api_enabled': True, + 'responsive_web_enhance_cards_enabled': True, + 'responsive_web_uc_gql_enabled': True, + 'spaces_2022_h2_clipping': True, + 'spaces_2022_h2_spaces_communities': False, + 'standardized_nudges_misinfo': True, + 'tweet_with_visibility_results_prefer_gql_limited_actions_policy_enabled': False, + 'vibe_api_enabled': True, + }, + } + + def _real_extract(self, url): + space_id = self._match_id(url) + if not self.is_logged_in: + self.raise_login_required('Twitter Spaces require authentication') + space_data = self._call_graphql_api('HPEisOmj1epUNLCWTYhUWw/AudioSpaceById', space_id)['audioSpace'] + if not space_data: + raise ExtractorError('Twitter Space not found', expected=True) + + metadata = space_data['metadata'] + live_status = try_call(lambda: self.SPACE_STATUS[metadata['state'].lower()]) + is_live = live_status == 'is_live' + + formats = [] + headers = {'Referer': 'https://twitter.com/'} + if live_status == 'is_upcoming': + self.raise_no_formats('Twitter Space not started yet', expected=True) + elif not is_live and not metadata.get('is_space_available_for_replay'): + self.raise_no_formats('Twitter Space ended and replay is disabled', expected=True) + elif metadata.get('media_key'): + source = traverse_obj( + self._call_api(f'live_video_stream/status/{metadata["media_key"]}', metadata['media_key']), + ('source', ('noRedirectPlaybackUrl', 'location'), {url_or_none}), get_all=False) + formats = self._extract_m3u8_formats( # XXX: Some Spaces need ffmpeg as downloader + source, metadata['media_key'], 'm4a', entry_protocol='m3u8', live=is_live, + headers=headers, fatal=False) if source else [] + for fmt in formats: +
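# Spaces streams are audio-only HLS; set the codecs explicitly so format selection handles them correctly +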
fmt.update({'vcodec': 'none', 'acodec': 'aac'}) + if not is_live: + fmt['container'] = 'm4a_dash' + + participants = ', '.join(traverse_obj( + space_data, ('participants', 'speakers', ..., 'display_name'))) or 'nobody yet' + + if not formats and live_status == 'post_live': + self.raise_no_formats('Twitter Space ended but not downloadable yet', expected=True) + + return { + 'id': space_id, + 'title': metadata.get('title'), + 'description': f'Twitter Space participated by {participants}', + 'uploader': traverse_obj( + metadata, ('creator_results', 'result', 'legacy', 'name')), + 'uploader_id': traverse_obj( + metadata, ('creator_results', 'result', 'legacy', 'screen_name')), + 'live_status': live_status, + 'release_timestamp': try_call( + lambda: int_or_none(metadata['scheduled_start'], scale=1000)), + 'timestamp': int_or_none(metadata.get('created_at'), scale=1000), + 'formats': formats, + 'http_headers': headers, + } + + +class TwitterShortenerIE(TwitterBaseIE): + IE_NAME = 'twitter:shortener' + _VALID_URL = r'https?://t\.co/(?P<eid>[^?#]+)|tco:(?P<id>[^?#]+)' + _BASE_URL = 'https://t.co/' + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + eid, id = mobj.group('eid', 'id') + if eid: + id = eid + url = self._BASE_URL + id + new_url = self._request_webpage(url, id, headers={'User-Agent': 'curl'}).url + __UNSAFE_LINK = "https://twitter.com/safety/unsafe_link_warning?unsafe_link=" + if new_url.startswith(__UNSAFE_LINK): + new_url = new_url.replace(__UNSAFE_LINK, "") + return self.url_result(new_url) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/txxx.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/txxx.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/txxx.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/txxx.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/udemy.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/udemy.py new file mode 100644 index 0000000..5c29605 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/udemy.py @@ -0,0 +1,474 @@ +import re + +from .common import InfoExtractor +from ..compat import compat_str, compat_urlparse +from ..networking import Request +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + determine_ext, + extract_attributes, + float_or_none, + int_or_none, + js_to_json, + smuggle_url, + try_get, + unescapeHTML, + unsmuggle_url, + url_or_none, + urlencode_postdata, +) + + +class UdemyIE(InfoExtractor): + IE_NAME = 'udemy' + _VALID_URL = r'''(?x) + https?:// + (?:[^/]+\.)?udemy\.com/ + (?: + [^#]+\#/lecture/| + lecture/view/?\?lectureId=| + [^/]+/learn/v4/t/lecture/ + ) + (?P<id>\d+) + ''' + _LOGIN_URL = 'https://www.udemy.com/join/login-popup/?displayType=ajax&showSkipButton=1' + _ORIGIN_URL = 'https://www.udemy.com' + _NETRC_MACHINE = 'udemy' + + _TESTS = [{ + 'url': 'https://www.udemy.com/java-tutorial/#/lecture/172757', + 'md5': '98eda5b657e752cf945d8445e261b5c5', + 'info_dict': { + 'id': '160614', + 'ext': 'mp4', + 'title': 'Introduction and Installation', + 'description': 'md5:c0d51f6f21ef4ec65f091055a5eef876', + 'duration': 579.29, + }, + 'skip': 'Requires udemy account credentials', + }, { + # new URL schema + 'url': 'https://www.udemy.com/electric-bass-right-from-the-start/learn/v4/t/lecture/4580906', + 'only_matching': True, + }, { + # no url in outputs format entry + 'url': 'https://www.udemy.com/learn-web-development-complete-step-by-step-guide-to-success/learn/v4/t/lecture/4125812', +
'only_matching': True, + }, { + # only outputs rendition + 'url': 'https://www.udemy.com/how-you-can-help-your-local-community-5-amazing-examples/learn/v4/t/lecture/3225750?start=0', + 'only_matching': True, + }, { + 'url': 'https://wipro.udemy.com/java-tutorial/#/lecture/172757', + 'only_matching': True, + }] + + def _extract_course_info(self, webpage, video_id): + course = self._parse_json( + unescapeHTML(self._search_regex( + r'ng-init=["\'].*\bcourse=({.+?})[;"\']', + webpage, 'course', default='{}')), + video_id, fatal=False) or {} + course_id = course.get('id') or self._search_regex( + [ + r'data-course-id=["\'](\d+)', + r'"courseId"\s*:\s*(\d+)' + ], webpage, 'course id') + return course_id, course.get('title') + + def _enroll_course(self, base_url, webpage, course_id): + def combine_url(base_url, url): + return compat_urlparse.urljoin(base_url, url) if not url.startswith('http') else url + + checkout_url = unescapeHTML(self._search_regex( + r'href=(["\'])(?P<url>(?:https?://(?:www\.)?udemy\.com)?/(?:payment|cart)/checkout/.+?)\1', + webpage, 'checkout url', group='url', default=None)) + if checkout_url: + raise ExtractorError( + 'Course %s is not free. You have to pay for it before you can download. ' + 'Use this URL to confirm purchase: %s' + % (course_id, combine_url(base_url, checkout_url)), + expected=True) + + enroll_url = unescapeHTML(self._search_regex( + r'href=(["\'])(?P<url>(?:https?://(?:www\.)?udemy\.com)?/course/subscribe/.+?)\1', + webpage, 'enroll url', group='url', default=None)) + if enroll_url: + webpage = self._download_webpage( + combine_url(base_url, enroll_url), + course_id, 'Enrolling in the course', + headers={'Referer': base_url}) + if '>You have enrolled in' in webpage: + self.to_screen('%s: Successfully enrolled in the course' % course_id) + + def _download_lecture(self, course_id, lecture_id): + return self._download_json( + 'https://www.udemy.com/api-2.0/users/me/subscribed-courses/%s/lectures/%s?' + % (course_id, lecture_id), + lecture_id, 'Downloading lecture JSON', query={ + 'fields[lecture]': 'title,description,view_html,asset', + 'fields[asset]': 'asset_type,stream_url,thumbnail_url,download_urls,stream_urls,captions,data,course_is_drmed', + }) + + def _handle_error(self, response): + if not isinstance(response, dict): + return + error = response.get('error') + if error: + error_str = 'Udemy returned error #%s: %s' % (error.get('code'), error.get('message')) + error_data = error.get('data') + if error_data: + error_str += ' - %s' % error_data.get('formErrors') + raise ExtractorError(error_str, expected=True) + + def _download_webpage_handle(self, *args, **kwargs): + headers = kwargs.get('headers', {}).copy() + headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36' + kwargs['headers'] = headers + ret = super(UdemyIE, self)._download_webpage_handle( + *args, **kwargs) + if not ret: + return ret + webpage, _ = ret + if any(p in webpage for p in ( + '>Please verify you are a human', + 'Access to this page has been denied because we believe you are using automation tools to browse the website', + '"_pxCaptcha"')): + raise ExtractorError( + 'Udemy asks you to solve a CAPTCHA. 
Login with browser, ' + 'solve CAPTCHA, then export cookies and pass cookie file to ' + 'yt-dlp with --cookies.', expected=True) + return ret + + def _download_json(self, url_or_request, *args, **kwargs): + headers = { + 'X-Udemy-Snail-Case': 'true', + 'X-Requested-With': 'XMLHttpRequest', + } + for cookie in self.cookiejar: + if cookie.name == 'client_id': + headers['X-Udemy-Client-Id'] = cookie.value + elif cookie.name == 'access_token': + headers['X-Udemy-Bearer-Token'] = cookie.value + headers['X-Udemy-Authorization'] = 'Bearer %s' % cookie.value + + if isinstance(url_or_request, Request): + url_or_request.headers.update(headers) + else: + url_or_request = Request(url_or_request, headers=headers) + + response = super(UdemyIE, self)._download_json(url_or_request, *args, **kwargs) + self._handle_error(response) + return response + + def _perform_login(self, username, password): + login_popup = self._download_webpage( + self._LOGIN_URL, None, 'Downloading login popup') + + def is_logged(webpage): + return any(re.search(p, webpage) for p in ( + r'href=["\'](?:https://www\.udemy\.com)?/user/logout/', + r'>Logout<')) + + # already logged in + if is_logged(login_popup): + return + + login_form = self._form_hidden_inputs('login-form', login_popup) + + login_form.update({ + 'email': username, + 'password': password, + }) + + response = self._download_webpage( + self._LOGIN_URL, None, 'Logging in', + data=urlencode_postdata(login_form), + headers={ + 'Referer': self._ORIGIN_URL, + 'Origin': self._ORIGIN_URL, + }) + + if not is_logged(response): + error = self._html_search_regex( + r'(?s)<div[^>]+class="form-errors[^"]*">(.+?)</div>', + response, 'error message', default=None) + if error: + raise ExtractorError('Unable to login: %s' % error, expected=True) + raise ExtractorError('Unable to log in') + + def _real_extract(self, url): + lecture_id = self._match_id(url) + course_id = unsmuggle_url(url, {})[1].get('course_id') + + webpage = None + if not course_id: + webpage = self._download_webpage(url, lecture_id) + course_id, _ = self._extract_course_info(webpage, lecture_id) + + try: + lecture = self._download_lecture(course_id, lecture_id) + except ExtractorError as e: + # Error could possibly mean we are not enrolled in the course + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + webpage = webpage or self._download_webpage(url, lecture_id) + self._enroll_course(url, webpage, course_id) + lecture = self._download_lecture(course_id, lecture_id) + else: + raise + + title = lecture['title'] + description = lecture.get('description') + + asset = lecture['asset'] + + asset_type = asset.get('asset_type') or asset.get('assetType') + if asset_type != 'Video': + raise ExtractorError( + 'Lecture %s is not a video' % lecture_id, expected=True) + + stream_url = asset.get('stream_url') or asset.get('streamUrl') + if stream_url: + youtube_url = self._search_regex( + r'(https?://www\.youtube\.com/watch\?v=.*)', stream_url, 'youtube URL', default=None) + if youtube_url: + return self.url_result(youtube_url, 'Youtube') + + video_id = compat_str(asset['id']) + thumbnail = asset.get('thumbnail_url') or asset.get('thumbnailUrl') + duration = float_or_none(asset.get('data', {}).get('duration')) + + subtitles = {} + automatic_captions = {} + + formats = [] + + def extract_output_format(src, f_id): + return { + 'url': src.get('url'), + 'format_id': '%sp' % (src.get('height') or f_id), + 'width': int_or_none(src.get('width')), + 'height': int_or_none(src.get('height')), + 'vbr': 
int_or_none(src.get('video_bitrate_in_kbps')), + 'vcodec': src.get('video_codec'), + 'fps': int_or_none(src.get('frame_rate')), + 'abr': int_or_none(src.get('audio_bitrate_in_kbps')), + 'acodec': src.get('audio_codec'), + 'asr': int_or_none(src.get('audio_sample_rate')), + 'tbr': int_or_none(src.get('total_bitrate_in_kbps')), + 'filesize': int_or_none(src.get('file_size_in_bytes')), + } + + outputs = asset.get('data', {}).get('outputs') + if not isinstance(outputs, dict): + outputs = {} + + def add_output_format_meta(f, key): + output = outputs.get(key) + if isinstance(output, dict): + output_format = extract_output_format(output, key) + output_format.update(f) + return output_format + return f + + def extract_formats(source_list): + if not isinstance(source_list, list): + return + for source in source_list: + video_url = url_or_none(source.get('file') or source.get('src')) + if not video_url: + continue + if source.get('type') == 'application/x-mpegURL' or determine_ext(video_url) == 'm3u8': + formats.extend(self._extract_m3u8_formats( + video_url, video_id, 'mp4', entry_protocol='m3u8_native', + m3u8_id='hls', fatal=False)) + continue + format_id = source.get('label') + f = { + 'url': video_url, + 'format_id': '%sp' % format_id, + 'height': int_or_none(format_id), + } + if format_id: + # Some videos contain additional metadata (e.g. + # https://www.udemy.com/ios9-swift/learn/#/lecture/3383208) + f = add_output_format_meta(f, format_id) + formats.append(f) + + def extract_subtitles(track_list): + if not isinstance(track_list, list): + return + for track in track_list: + if not isinstance(track, dict): + continue + if track.get('kind') != 'captions': + continue + src = url_or_none(track.get('src')) + if not src: + continue + lang = track.get('language') or track.get( + 'srclang') or track.get('label') + sub_dict = automatic_captions if track.get( + 'autogenerated') is True else subtitles + sub_dict.setdefault(lang, []).append({ + 'url': src, + }) + + for url_kind in ('download', 'stream'): + urls = asset.get('%s_urls' % url_kind) + if isinstance(urls, dict): + extract_formats(urls.get('Video')) + + captions = asset.get('captions') + if isinstance(captions, list): + for cc in captions: + if not isinstance(cc, dict): + continue + cc_url = url_or_none(cc.get('url')) + if not cc_url: + continue + lang = try_get(cc, lambda x: x['locale']['locale'], compat_str) + sub_dict = (automatic_captions if cc.get('source') == 'auto' + else subtitles) + sub_dict.setdefault(lang or 'en', []).append({ + 'url': cc_url, + }) + + view_html = lecture.get('view_html') + if view_html: + view_html_urls = set() + for source in re.findall(r'<source[^>]+>', view_html): + attributes = extract_attributes(source) + src = attributes.get('src') + if not src: + continue + res = attributes.get('data-res') + height = int_or_none(res) + if src in view_html_urls: + continue + view_html_urls.add(src) + if attributes.get('type') == 'application/x-mpegURL' or determine_ext(src) == 'm3u8': + m3u8_formats = self._extract_m3u8_formats( + src, video_id, 'mp4', entry_protocol='m3u8_native', + m3u8_id='hls', fatal=False) + for f in m3u8_formats: + m = re.search(r'/hls_(?P<height>\d{3,4})_(?P<tbr>\d{2,})/', f['url']) + if m: + if not f.get('height'): + f['height'] = int(m.group('height')) + if not f.get('tbr'): + f['tbr'] = int(m.group('tbr')) + formats.extend(m3u8_formats) + else: + formats.append(add_output_format_meta({ + 'url': src, + 'format_id': '%dp' % height if height else None, + 'height': height, + }, res)) + + # react rendition since 2017.04.15 (see 
+ # https://github.com/ytdl-org/youtube-dl/issues/12744) + data = self._parse_json( + self._search_regex( + r'videojs-setup-data=(["\'])(?P<data>{.+?})\1', view_html, + 'setup data', default='{}', group='data'), video_id, + transform_source=unescapeHTML, fatal=False) + if data and isinstance(data, dict): + extract_formats(data.get('sources')) + if not duration: + duration = int_or_none(data.get('duration')) + extract_subtitles(data.get('tracks')) + + if not subtitles and not automatic_captions: + text_tracks = self._parse_json( + self._search_regex( + r'text-tracks=(["\'])(?P<data>\[.+?\])\1', view_html, + 'text tracks', default='{}', group='data'), video_id, + transform_source=lambda s: js_to_json(unescapeHTML(s)), + fatal=False) + extract_subtitles(text_tracks) + + if not formats and outputs: + for format_id, output in outputs.items(): + f = extract_output_format(output, format_id) + if f.get('url'): + formats.append(f) + + if not formats and asset.get('course_is_drmed'): + self.report_drm(video_id) + + return { + 'id': video_id, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'duration': duration, + 'formats': formats, + 'subtitles': subtitles, + 'automatic_captions': automatic_captions, + } + + +class UdemyCourseIE(UdemyIE): # XXX: Do not subclass from concrete IE + IE_NAME = 'udemy:course' + _VALID_URL = r'https?://(?:[^/]+\.)?udemy\.com/(?P<id>[^/?#&]+)' + _TESTS = [{ + 'url': 'https://www.udemy.com/java-tutorial/', + 'only_matching': True, + }, { + 'url': 'https://wipro.udemy.com/java-tutorial/', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if UdemyIE.suitable(url) else super(UdemyCourseIE, cls).suitable(url) + + def _real_extract(self, url): + course_path = self._match_id(url) + + webpage = self._download_webpage(url, course_path) + + course_id, title = self._extract_course_info(webpage, course_path) + + self._enroll_course(url, webpage, course_id) + + response = self._download_json( + 'https://www.udemy.com/api-2.0/courses/%s/cached-subscriber-curriculum-items' % course_id, + course_id, 'Downloading course curriculum', query={ + 'fields[chapter]': 'title,object_index', + 'fields[lecture]': 'title,asset', + 'page_size': '1000', + }) + + entries = [] + chapter, chapter_number = [None] * 2 + for entry in response['results']: + clazz = entry.get('_class') + if clazz == 'lecture': + asset = entry.get('asset') + if isinstance(asset, dict): + asset_type = asset.get('asset_type') or asset.get('assetType') + if asset_type != 'Video': + continue + lecture_id = entry.get('id') + if lecture_id: + entry = { + '_type': 'url_transparent', + 'url': smuggle_url( + f'https://www.udemy.com/{course_path}/learn/v4/t/lecture/{entry["id"]}', + {'course_id': course_id}), + 'title': entry.get('title'), + 'ie_key': UdemyIE.ie_key(), + } + if chapter_number: + entry['chapter_number'] = chapter_number + if chapter: + entry['chapter'] = chapter + entries.append(entry) + elif clazz == 'chapter': + chapter_number = entry.get('object_index') + chapter = entry.get('title') + + return self.playlist_result(entries, course_id, title) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/udn.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/udn.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/udn.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/udn.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ufctv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ufctv.py similarity index 
100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ufctv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ufctv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ukcolumn.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ukcolumn.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ukcolumn.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ukcolumn.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/uktvplay.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/uktvplay.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/uktvplay.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/uktvplay.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/umg.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/umg.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/umg.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/umg.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/unistra.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/unistra.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/unistra.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/unistra.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/unity.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/unity.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/unity.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/unity.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/unscripted.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/unscripted.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/unscripted.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/unscripted.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/unsupported.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/unsupported.py new file mode 100644 index 0000000..bbcbf3a --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/unsupported.py @@ -0,0 +1,177 @@ +from .common import InfoExtractor +from ..utils import ExtractorError, classproperty, remove_start + + +class UnsupportedInfoExtractor(InfoExtractor): + IE_DESC = False + URLS = () # Redefine in subclasses + + @classproperty + def IE_NAME(cls): + return remove_start(super().IE_NAME, 'Known') + + @classproperty + def _VALID_URL(cls): + return rf'https?://(?:www\.)?(?:{"|".join(cls.URLS)})' + + +LF = '\n ' + + +class KnownDRMIE(UnsupportedInfoExtractor): + """Sites that are known to use DRM for all their videos + + Add to this list only if: + * You are reasonably certain that the site uses DRM for ALL their videos + * Multiple users have asked about this site on github/reddit/discord + """ + + URLS = ( + r'play\.hbomax\.com', + r'channel(?:4|5)\.com', + r'peacocktv\.com', + r'(?:[\w\.]+\.)?disneyplus\.com', + r'open\.spotify\.com/(?:track|playlist|album|artist)', + r'tvnz\.co\.nz', + r'oneplus\.ch', + r'artstation\.com/learning/courses', + r'philo\.com', + r'(?:[\w\.]+\.)?mech-plus\.com', + r'aha\.video', + r'mubi\.com', + r'vootkids\.com', + r'nowtv\.it/watch', + r'tv\.apple\.com', + r'primevideo\.com', + r'hulu\.com', + r'resource\.inkryptvideos\.com', + r'joyn\.de', + r'amazon\.(?:\w{2}\.)?\w+/gp/video', + r'music\.amazon\.(?:\w{2}\.)?\w+', + ) + + _TESTS = [{ + # 
https://github.com/yt-dlp/yt-dlp/issues/4309 + 'url': 'https://peacocktv.com/watch/playback/vod/GMO_00000000073159_01/f9d03003-eb04-3c7f-a7b6-a83ab7eb55bc', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/1719, + 'url': 'https://www.channel4.com/programmes/gurren-lagann/on-demand/69960-001', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/1548 + 'url': 'https://www.channel5.com/show/uk-s-strongest-man-2021/season-2021/episode-1', + 'only_matching': True, + }, { + 'url': r'https://hsesn.apps.disneyplus.com', + 'only_matching': True, + }, { + 'url': r'https://www.disneyplus.com', + 'only_matching': True, + }, { + 'url': 'https://open.spotify.com/artist/', + 'only_matching': True, + }, { + 'url': 'https://open.spotify.com/track/', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/4122 + 'url': 'https://www.tvnz.co.nz/shows/ice-airport-alaska/episodes/s1-e1', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/1922 + 'url': 'https://www.oneplus.ch/play/1008188', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/1140 + 'url': 'https://www.artstation.com/learning/courses/dqQ/character-design-masterclass-with-serge-birault/chapters/Rxn3/introduction', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/3544 + 'url': 'https://www.philo.com/player/player/vod/Vk9EOjYwODU0ODg5OTY0ODY0OTQ5NA', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/3533 + 'url': 'https://www.mech-plus.com/player/24892/stream?assetType=episodes&playlist_id=6', + 'only_matching': True, + }, { + 'url': 'https://watch.mech-plus.com/details/25240?playlist_id=6', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/2934 + 'url': 'https://www.aha.video/player/movie/lucky-man', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/2743 + 'url': 'https://mubi.com/films/the-night-doctor', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/3287 + 'url': 'https://www.vootkids.com/movies/chhota-bheem-the-rise-of-kirmada/764459', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/2744 + 'url': 'https://www.nowtv.it/watch/home/asset/and-just-like-that/skyserie_f8fe979772e8437d8a61ab83b6d293e9/seasons/1/episodes/8/R_126182_HD', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/5557 + 'url': 'https://tv.apple.com/it/show/loot---una-fortuna/umc.cmc.5erbujil1mpazuerhr1udnk45?ctx_brand=tvs.sbd.4000', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/3072 + 'url': 'https://www.joyn.de/play/serien/clannad/1-1-wo-die-kirschblueten-fallen', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/7323 + 'url': 'https://music.amazon.co.jp/albums/B088Y368TK', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/7323 + 'url': 'https://www.amazon.co.jp/gp/video/detail/B09X5HBYRS/', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/6125 + 'url': 'https://www.primevideo.com/region/eu/detail/0H3DDB4KBJFNDCKKLHNRLRLVKQ/ref=atv_br_def_r_br_c_unkc_1_10', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/5740 + 'url': 'https://resource.inkryptvideos.com/v2-a83ns52/iframe/index.html#video_id=7999ea0f6e03439eb40d056258c2d736&otp=xxx', + 'only_matching': True, + }, { + # https://github.com/yt-dlp/yt-dlp/issues/5767 + 'url': 
'https://www.hulu.com/movie/anthem-6b25fac9-da2b-45a3-8e09-e4156b0471cc', + 'only_matching': True, + }] + + def _real_extract(self, url): + raise ExtractorError( + f'The requested site is known to use DRM protection. ' + f'It will {self._downloader._format_err("NOT", self._downloader.Styles.EMPHASIS)} be supported.{LF}' + f'Please {self._downloader._format_err("DO NOT", self._downloader.Styles.ERROR)} open an issue, ' + 'unless you have evidence that the video is not DRM protected', expected=True) + + +class KnownPiracyIE(UnsupportedInfoExtractor): + """Sites that have been deemed to be piracy + + In order for this to not end up being a catalog of piracy sites, + only sites that were once supported should be added to this list + """ + + URLS = ( + r'dood\.(?:to|watch|so|pm|wf|re)', + # Sites youtube-dl supports, but we won't + r'viewsb\.com', + r'filemoon\.sx', + r'hentai\.animestigma\.com', + ) + + _TESTS = [{ + 'url': 'http://dood.to/e/5s1wmbdacezb', + 'only_matching': True, + }] + + def _real_extract(self, url): + raise ExtractorError( + f'This website is no longer supported since it has been determined to be primarily used for piracy.{LF}' + f'{self._downloader._format_err("DO NOT", self._downloader.Styles.ERROR)} open issues for it', expected=True) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/uol.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/uol.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/uol.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/uol.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/uplynk.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/uplynk.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/uplynk.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/uplynk.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/urort.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/urort.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/urort.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/urort.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/urplay.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/urplay.py new file mode 100644 index 0000000..7f97fc9 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/urplay.py @@ -0,0 +1,164 @@ +from .common import InfoExtractor +from ..utils import ( + dict_get, + ExtractorError, + int_or_none, + ISO639Utils, + parse_age_limit, + try_get, + unified_timestamp, +) + + +class URPlayIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?ur(?:play|skola)\.se/(?:program|Produkter)/(?P<id>[0-9]+)' + _TESTS = [{ + 'url': 'https://urplay.se/program/203704-ur-samtiden-livet-universum-och-rymdens-markliga-musik-om-vetenskap-kritiskt-tankande-och-motstand', + 'md5': '5ba36643c77cc3d34ffeadad89937d1e', + 'info_dict': { + 'id': '203704', + 'ext': 'mp4', + 'title': 'UR Samtiden - Livet, universum och rymdens märkliga musik : Om vetenskap, kritiskt tänkande och motstånd', + 'description': 'md5:5344508a52aa78c1ced6c1b8b9e44e9a', + 'thumbnail': r're:^https?://.+\.jpg', + 'timestamp': 1513292400, + 'upload_date': '20171214', + 'series': 'UR Samtiden - Livet, universum och rymdens märkliga musik', + 'duration': 2269, + 'categories': ['Vetenskap & teknik'], + 'tags': ['Kritiskt tänkande', 'Vetenskap', 'Vetenskaplig verksamhet'], + 'episode': 'Om vetenskap, kritiskt tänkande och motstånd', + 'age_limit': 15, + }, + 
}, { + 'url': 'https://urplay.se/program/222967-en-foralders-dagbok-mitt-barn-skadar-sig-sjalv', + 'info_dict': { + 'id': '222967', + 'ext': 'mp4', + 'title': 'En förälders dagbok : Mitt barn skadar sig själv', + 'description': 'md5:9f771eef03a732a213b367b52fe826ca', + 'thumbnail': r're:^https?://.+\.jpg', + 'timestamp': 1629676800, + 'upload_date': '20210823', + 'series': 'En förälders dagbok', + 'duration': 1740, + 'age_limit': 15, + 'episode_number': 3, + 'categories': 'count:2', + 'tags': 'count:7', + 'episode': 'Mitt barn skadar sig själv', + }, + }, { + 'url': 'https://urskola.se/Produkter/190031-Tripp-Trapp-Trad-Sovkudde', + 'info_dict': { + 'id': '190031', + 'ext': 'mp4', + 'title': 'Tripp, Trapp, Träd : Sovkudde', + 'description': 'md5:b86bffdae04a7e9379d1d7e5947df1d1', + 'thumbnail': r're:^https?://.+\.jpg', + 'timestamp': 1440086400, + 'upload_date': '20150820', + 'series': 'Tripp, Trapp, Träd', + 'duration': 865, + 'age_limit': 1, + 'episode_number': 1, + 'categories': [], + 'tags': ['Sova'], + 'episode': 'Sovkudde', + 'season': 'Säsong 1', + }, + }, { + 'url': 'http://urskola.se/Produkter/155794-Smasagor-meankieli-Grodan-i-vida-varlden', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + url = url.replace('skola.se/Produkter', 'play.se/program') + webpage = self._download_webpage(url, video_id) + urplayer_data = self._search_nextjs_data(webpage, video_id, fatal=False) or {} + if urplayer_data: + urplayer_data = try_get(urplayer_data, lambda x: x['props']['pageProps']['program'], dict) + if not urplayer_data: + raise ExtractorError('Unable to parse __NEXT_DATA__') + else: + accessible_episodes = self._parse_json(self._html_search_regex( + r'data-react-class="routes/Product/components/ProgramContainer/ProgramContainer"[^>]+data-react-props="({.+?})"', + webpage, 'urplayer data'), video_id)['accessibleEpisodes'] + urplayer_data = next(e for e in accessible_episodes if e.get('id') == int_or_none(video_id)) + episode = urplayer_data['title'] + + host = self._download_json('http://streaming-loadbalancer.ur.se/loadbalancer.json', video_id)['redirect'] + formats = [] + urplayer_streams = urplayer_data.get('streamingInfo', {}) + + for k, v in urplayer_streams.get('raw', {}).items(): + if not (k in ('sd', 'hd', 'mp3', 'm4a') and isinstance(v, dict)): + continue + file_http = v.get('location') + if file_http: + formats.extend(self._extract_wowza_formats( + 'http://%s/%splaylist.m3u8' % (host, file_http), + video_id, skip_protocols=['f4m', 'rtmp', 'rtsp'])) + + subtitles = {} + + def parse_lang_code(code): + "3-character language code or None (utils candidate)" + if code is None: + return + lang = code.lower() + if not ISO639Utils.long2short(lang): + lang = ISO639Utils.short2long(lang) + return lang or None + + for stream in urplayer_data['streamingInfo'].values(): + for k, v in stream.items(): + if (k in ('sd', 'hd') or not isinstance(v, dict)): + continue + lang, sttl_url = (v.get(kk) for kk in ('language', 'location', )) + if not sttl_url: + continue + lang = parse_lang_code(lang) + if not lang: + continue + sttl = subtitles.get(lang) or [] + sttl.append({'ext': k, 'url': sttl_url, }) + subtitles[lang] = sttl + + image = urplayer_data.get('image') or {} + thumbnails = [] + for k, v in image.items(): + t = { + 'id': k, + 'url': v, + } + wh = k.split('x') + if len(wh) == 2: + t.update({ + 'width': int_or_none(wh[0]), + 'height': int_or_none(wh[1]), + }) + thumbnails.append(t) + + series = urplayer_data.get('series') or {} + series_title = 
dict_get(series, ('seriesTitle', 'title')) or dict_get(urplayer_data, ('seriesTitle', 'mainTitle')) + + return { + 'id': video_id, + 'title': '%s : %s' % (series_title, episode) if series_title else episode, + 'description': urplayer_data.get('description'), + 'thumbnails': thumbnails, + 'timestamp': unified_timestamp(urplayer_data.get('publishedAt')), + 'series': series_title, + 'formats': formats, + 'duration': int_or_none(urplayer_data.get('duration')), + 'categories': urplayer_data.get('categories'), + 'tags': urplayer_data.get('keywords'), + 'season': series.get('label'), + 'episode': episode, + 'episode_number': int_or_none(urplayer_data.get('episodeNumber')), + 'age_limit': parse_age_limit(min(try_get(a, lambda x: x['from'], int) or 0 + for a in urplayer_data.get('ageRanges', []))), + 'subtitles': subtitles, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/usanetwork.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/usanetwork.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/usanetwork.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/usanetwork.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/usatoday.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/usatoday.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/usatoday.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/usatoday.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ustream.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ustream.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ustream.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ustream.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ustudio.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ustudio.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ustudio.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ustudio.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/utreon.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/utreon.py new file mode 100644 index 0000000..8a91691 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/utreon.py @@ -0,0 +1,81 @@ +from .common import InfoExtractor +from ..utils import ( + dict_get, + int_or_none, + str_or_none, + try_get, + unified_strdate, + url_or_none, +) + + +class UtreonIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?utreon\.com/v/(?P<id>[\w-]+)' + _TESTS = [{ + 'url': 'https://utreon.com/v/z_I7ikQbuDw', + 'info_dict': { + 'id': 'z_I7ikQbuDw', + 'ext': 'mp4', + 'title': 'Freedom Friday meditation - Rising in the wind', + 'description': 'md5:a9bf15a42434a062fe313b938343ad1b', + 'uploader': 'Heather Dawn Elemental Health', + 'thumbnail': 'https://data-1.utreon.com/v/MG/M2/NT/z_I7ikQbuDw/z_I7ikQbuDw_preview.jpg', + 'release_date': '20210723', + } + }, { + 'url': 'https://utreon.com/v/jerJw5EOOVU', + 'info_dict': { + 'id': 'jerJw5EOOVU', + 'ext': 'mp4', + 'title': 'When I\'m alone, I love to reflect in peace, to make my dreams come true... 
[Quotes and Poems]', + 'description': 'md5:61ee6c2da98be51b04b969ca80273aaa', + 'uploader': 'Frases e Poemas Quotes and Poems', + 'thumbnail': 'https://data-1.utreon.com/v/Mz/Zh/ND/jerJw5EOOVU/jerJw5EOOVU_89af85470a4b16eededde7f8674c96d9_cover.jpg', + 'release_date': '20210723', + } + }, { + 'url': 'https://utreon.com/v/C4ZxXhYBBmE', + 'info_dict': { + 'id': 'C4ZxXhYBBmE', + 'ext': 'mp4', + 'title': 'Biden’s Capital Gains Tax Rate to Test World’s Highest', + 'description': 'md5:fb5a6c2e506f013cc76f133f673bc5c8', + 'uploader': 'Nomad Capitalist', + 'thumbnail': 'https://data-1.utreon.com/v/ZD/k1/Mj/C4ZxXhYBBmE/C4ZxXhYBBmE_628342076198c9c06dd6b2c665978584_cover.jpg', + 'release_date': '20210723', + } + }, { + 'url': 'https://utreon.com/v/Y-stEH-FBm8', + 'info_dict': { + 'id': 'Y-stEH-FBm8', + 'ext': 'mp4', + 'title': 'Creeper-Chan Pranks Steve! 💚 [MINECRAFT ANIME]', + 'description': 'md5:7a48450b0d761b96dec194be0c5ecb5f', + 'uploader': 'Merryweather Comics', + 'thumbnail': 'https://data-1.utreon.com/v/MT/E4/Zj/Y-stEH-FBm8/Y-stEH-FBm8_5290676a41a4a1096db133b09f54f77b_cover.jpg', + 'release_date': '20210718', + }}, + ] + + def _real_extract(self, url): + video_id = self._match_id(url) + json_data = self._download_json( + 'https://api.utreon.com/v1/videos/' + video_id, + video_id) + videos_json = json_data['videos'] + formats = [{ + 'url': format_url, + 'format_id': format_key.split('_')[1], + 'height': int(format_key.split('_')[1][:-1]), + } for format_key, format_url in videos_json.items() if url_or_none(format_url)] + thumbnail = url_or_none(dict_get(json_data, ('cover_image_url', 'preview_image_url'))) + return { + 'id': video_id, + 'title': json_data['title'], + 'formats': formats, + 'description': str_or_none(json_data.get('description')), + 'duration': int_or_none(json_data.get('duration')), + 'uploader': str_or_none(try_get(json_data, lambda x: x['channel']['title'])), + 'thumbnail': thumbnail, + 'release_date': unified_strdate(json_data.get('published_datetime')), + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/varzesh3.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/varzesh3.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/varzesh3.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/varzesh3.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vbox7.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vbox7.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vbox7.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vbox7.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/veehd.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/veehd.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/veehd.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/veehd.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/veo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/veo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/veo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/veo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/veoh.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/veoh.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/veoh.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/veoh.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vesti.py 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/vesti.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vesti.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vesti.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/vevo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vevo.py new file mode 100644 index 0000000..aa40227 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/vevo.py @@ -0,0 +1,353 @@ +import re +import json + +from .common import InfoExtractor +from ..compat import compat_str +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + int_or_none, + parse_iso8601, + parse_qs, +) + + +class VevoBaseIE(InfoExtractor): + def _extract_json(self, webpage, video_id): + return self._parse_json( + self._search_regex( + r'window\.__INITIAL_STORE__\s*=\s*({.+?});\s*', + webpage, 'initial store'), + video_id) + + +class VevoIE(VevoBaseIE): + ''' + Accepts urls from vevo.com or in the format 'vevo:{id}' + (currently used by MTVIE and MySpaceIE) + ''' + _VALID_URL = r'''(?x) + (?:https?://(?:www\.)?vevo\.com/watch/(?!playlist|genre)(?:[^/]+/(?:[^/]+/)?)?| + https?://cache\.vevo\.com/m/html/embed\.html\?video=| + https?://videoplayer\.vevo\.com/embed/embedded\?videoId=| + https?://embed\.vevo\.com/.*?[?&]isrc=| + https?://tv\.vevo\.com/watch/artist/(?:[^/]+)/| + vevo:) + (?P<id>[^&?#]+)''' + _EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:cache\.)?vevo\.com/.+?)\1'] + + _TESTS = [{ + 'url': 'http://www.vevo.com/watch/hurts/somebody-to-die-for/GB1101300280', + 'md5': '95ee28ee45e70130e3ab02b0f579ae23', + 'info_dict': { + 'id': 'GB1101300280', + 'ext': 'mp4', + 'title': 'Hurts - Somebody to Die For', + 'timestamp': 1372057200, + 'upload_date': '20130624', + 'uploader': 'Hurts', + 'track': 'Somebody to Die For', + 'artist': 'Hurts', + 'genre': 'Pop', + }, + 'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'], + }, { + 'note': 'v3 SMIL format', + 'url': 'http://www.vevo.com/watch/cassadee-pope/i-wish-i-could-break-your-heart/USUV71302923', + 'md5': 'f6ab09b034f8c22969020b042e5ac7fc', + 'info_dict': { + 'id': 'USUV71302923', + 'ext': 'mp4', + 'title': 'Cassadee Pope - I Wish I Could Break Your Heart', + 'timestamp': 1392796919, + 'upload_date': '20140219', + 'uploader': 'Cassadee Pope', + 'track': 'I Wish I Could Break Your Heart', + 'artist': 'Cassadee Pope', + 'genre': 'Country', + }, + 'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'], + }, { + 'note': 'Age-limited video', + 'url': 'https://www.vevo.com/watch/justin-timberlake/tunnel-vision-explicit/USRV81300282', + 'info_dict': { + 'id': 'USRV81300282', + 'ext': 'mp4', + 'title': 'Justin Timberlake - Tunnel Vision (Explicit)', + 'age_limit': 18, + 'timestamp': 1372888800, + 'upload_date': '20130703', + 'uploader': 'Justin Timberlake', + 'track': 'Tunnel Vision (Explicit)', + 'artist': 'Justin Timberlake', + 'genre': 'Pop', + }, + 'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'], + }, { + 'note': 'No video_info', + 'url': 'http://www.vevo.com/watch/k-camp-1/Till-I-Die/USUV71503000', + 'md5': '8b83cc492d72fc9cf74a02acee7dc1b0', + 'info_dict': { + 'id': 'USUV71503000', + 'ext': 'mp4', + 'title': 'K Camp ft. T.I.
- Till I Die', + 'age_limit': 18, + 'timestamp': 1449468000, + 'upload_date': '20151207', + 'uploader': 'K Camp', + 'track': 'Till I Die', + 'artist': 'K Camp', + 'genre': 'Hip-Hop', + }, + 'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'], + }, { + 'note': 'Featured test', + 'url': 'https://www.vevo.com/watch/lemaitre/Wait/USUV71402190', + 'md5': 'd28675e5e8805035d949dc5cf161071d', + 'info_dict': { + 'id': 'USUV71402190', + 'ext': 'mp4', + 'title': 'Lemaitre ft. LoLo - Wait', + 'age_limit': 0, + 'timestamp': 1413432000, + 'upload_date': '20141016', + 'uploader': 'Lemaitre', + 'track': 'Wait', + 'artist': 'Lemaitre', + 'genre': 'Electronic', + }, + 'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'], + }, { + 'note': 'Only available via webpage', + 'url': 'http://www.vevo.com/watch/GBUV71600656', + 'md5': '67e79210613865b66a47c33baa5e37fe', + 'info_dict': { + 'id': 'GBUV71600656', + 'ext': 'mp4', + 'title': 'ABC - Viva Love', + 'age_limit': 0, + 'timestamp': 1461830400, + 'upload_date': '20160428', + 'uploader': 'ABC', + 'track': 'Viva Love', + 'artist': 'ABC', + 'genre': 'Pop', + }, + 'expected_warnings': ['Failed to download video versions info'], + }, { + # no genres available + 'url': 'http://www.vevo.com/watch/INS171400764', + 'only_matching': True, + }, { + # Another case available only via the webpage; using streams/streamsV3 formats + # Geo-restricted to Netherlands/Germany + 'url': 'http://www.vevo.com/watch/boostee/pop-corn-clip-officiel/FR1A91600909', + 'only_matching': True, + }, { + 'url': 'https://embed.vevo.com/?isrc=USH5V1923499&partnerId=4d61b777-8023-4191-9ede-497ed6c24647&partnerAdCode=', + 'only_matching': True, + }, { + 'url': 'https://tv.vevo.com/watch/artist/janet-jackson/US0450100550', + 'only_matching': True, + }] + _VERSIONS = { + 0: 'youtube', # only in AuthenticateVideo videoVersions + 1: 'level3', + 2: 'akamai', + 3: 'level3', + 4: 'amazon', + } + + def _initialize_api(self, video_id): + webpage = self._download_webpage( + 'https://accounts.vevo.com/token', None, + note='Retrieving oauth token', + errnote='Unable to retrieve oauth token', + data=json.dumps({ + 'client_id': 'SPupX1tvqFEopQ1YS6SS', + 'grant_type': 'urn:vevo:params:oauth:grant-type:anonymous', + }).encode('utf-8'), + headers={ + 'Content-Type': 'application/json', + }) + + if re.search(r'(?i)THIS PAGE IS CURRENTLY UNAVAILABLE IN YOUR REGION', webpage): + self.raise_geo_restricted( + '%s said: This page is currently unavailable in your region' % self.IE_NAME) + + auth_info = self._parse_json(webpage, video_id) + self._api_url_template = self.http_scheme() + '//apiv2.vevo.com/%s?token=' + auth_info['legacy_token'] + + def _call_api(self, path, *args, **kwargs): + try: + data = self._download_json(self._api_url_template % path, *args, **kwargs) + except ExtractorError as e: + if isinstance(e.cause, HTTPError): + errors = self._parse_json(e.cause.response.read().decode(), None)['errors'] + error_message = ', '.join([error['message'] for error in errors]) + raise ExtractorError('%s said: %s' % (self.IE_NAME, error_message), expected=True) + raise + return data + + def _real_extract(self, url): + video_id = self._match_id(url) + + self._initialize_api(video_id) + + video_info = self._call_api( + 'video/%s' % video_id, video_id, 'Downloading api video info', + 'Failed to download video info') + + video_versions = self._call_api( + 'video/%s/streams' % video_id, video_id, + 'Downloading video versions info', + 'Failed to download video versions 
info', + fatal=False) + + # Some videos are only available via webpage (e.g. + # https://github.com/ytdl-org/youtube-dl/issues/9366) + if not video_versions: + webpage = self._download_webpage(url, video_id) + json_data = self._extract_json(webpage, video_id) + if 'streams' in json_data.get('default', {}): + video_versions = json_data['default']['streams'][video_id][0] + else: + video_versions = [ + value + for key, value in json_data['apollo']['data'].items() + if key.startswith('%s.streams' % video_id)] + + uploader = None + artist = None + featured_artist = None + artists = video_info.get('artists') + for curr_artist in artists: + if curr_artist.get('role') == 'Featured': + featured_artist = curr_artist['name'] + else: + artist = uploader = curr_artist['name'] + + formats = [] + for video_version in video_versions: + version = self._VERSIONS.get(video_version.get('version'), 'generic') + version_url = video_version.get('url') + if not version_url: + continue + + if '.ism' in version_url: + continue + elif '.mpd' in version_url: + formats.extend(self._extract_mpd_formats( + version_url, video_id, mpd_id='dash-%s' % version, + note='Downloading %s MPD information' % version, + errnote='Failed to download %s MPD information' % version, + fatal=False)) + elif '.m3u8' in version_url: + formats.extend(self._extract_m3u8_formats( + version_url, video_id, 'mp4', 'm3u8_native', + m3u8_id='hls-%s' % version, + note='Downloading %s m3u8 information' % version, + errnote='Failed to download %s m3u8 information' % version, + fatal=False)) + else: + m = re.search(r'''(?xi) + _(?P<quality>[a-z0-9]+) + _(?P<width>[0-9]+)x(?P<height>[0-9]+) + _(?P<vcodec>[a-z0-9]+) + _(?P<vbr>[0-9]+) + _(?P<acodec>[a-z0-9]+) + _(?P<abr>[0-9]+) + \.(?P<ext>[a-z0-9]+)''', version_url) + if not m: + continue + + formats.append({ + 'url': version_url, + 'format_id': f'http-{version}-{video_version.get("quality") or m.group("quality")}', + 'vcodec': m.group('vcodec'), + 'acodec': m.group('acodec'), + 'vbr': int(m.group('vbr')), + 'abr': int(m.group('abr')), + 'ext': m.group('ext'), + 'width': int(m.group('width')), + 'height': int(m.group('height')), + }) + + track = video_info['title'] + if featured_artist: + artist = '%s ft.
%s' % (artist, featured_artist) + title = '%s - %s' % (artist, track) if artist else track + + genres = video_info.get('genres') + genre = ( + genres[0] if genres and isinstance(genres, list) + and isinstance(genres[0], compat_str) else None) + + is_explicit = video_info.get('isExplicit') + if is_explicit is True: + age_limit = 18 + elif is_explicit is False: + age_limit = 0 + else: + age_limit = None + + return { + 'id': video_id, + 'title': title, + 'formats': formats, + 'thumbnail': video_info.get('imageUrl') or video_info.get('thumbnailUrl'), + 'timestamp': parse_iso8601(video_info.get('releaseDate')), + 'uploader': uploader, + 'duration': int_or_none(video_info.get('duration')), + 'view_count': int_or_none(video_info.get('views', {}).get('total')), + 'age_limit': age_limit, + 'track': track, + 'artist': uploader, + 'genre': genre, + } + + +class VevoPlaylistIE(VevoBaseIE): + _VALID_URL = r'https?://(?:www\.)?vevo\.com/watch/(?P<kind>playlist|genre)/(?P<id>[^/?#&]+)' + + _TESTS = [{ + 'url': 'http://www.vevo.com/watch/genre/rock', + 'info_dict': { + 'id': 'rock', + 'title': 'Rock', + }, + 'playlist_count': 20, + }, { + 'url': 'http://www.vevo.com/watch/genre/rock?index=0', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + playlist_id = mobj.group('id') + playlist_kind = mobj.group('kind') + + webpage = self._download_webpage(url, playlist_id) + + qs = parse_qs(url) + index = qs.get('index', [None])[0] + + if index: + video_id = self._search_regex( + r'<meta[^>]+content=(["\'])vevo://video/(?P<id>.+?)\1[^>]*>', + webpage, 'video id', default=None, group='id') + if video_id: + return self.url_result('vevo:%s' % video_id, VevoIE.ie_key()) + + playlists = self._extract_json(webpage, playlist_id)['default']['%ss' % playlist_kind] + + playlist = (list(playlists.values())[0] + if playlist_kind == 'playlist' else playlists[playlist_id]) + + entries = [ + self.url_result('vevo:%s' % src, VevoIE.ie_key()) + for src in playlist['isrcs']] + + return self.playlist_result( + entries, playlist.get('playlistId') or playlist_id, + playlist.get('name'), playlist.get('description')) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vgtv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vgtv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vgtv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vgtv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vh1.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vh1.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vh1.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vh1.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/vice.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vice.py new file mode 100644 index 0000000..8a71268 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/vice.py @@ -0,0 +1,319 @@ +import functools +import hashlib +import json +import random +import time + +from .adobepass import AdobePassIE +from .common import InfoExtractor +from .youtube import YoutubeIE +from ..compat import compat_str +from ..networking.exceptions import HTTPError +from ..utils import ( + clean_html, + ExtractorError, + int_or_none, + OnDemandPagedList, + parse_age_limit, + str_or_none, + try_get, +) + + +class ViceBaseIE(InfoExtractor): + def _call_api(self, resource, resource_key, resource_id, locale, fields, args=''): + return self._download_json(
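+ # A sketch of the GraphQL document the %-interpolation below builds for
+ # a hypothetical _call_api('videos', 'id', 'abc123', 'en_us', 'title'):
+ # {videos(locale: "en_us", id: "abc123") {
+ # title
+ # }}
+ # It is sent as the 'query' URL parameter of a GET request, and the
+ # 'data' envelope is unwrapped via ['data'][resource] below.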
'https://video.vice.com/api/v1/graphql', resource_id, query={ + 'query': '''{ + %s(locale: "%s", %s: "%s"%s) { + %s + } +}''' % (resource, locale, resource_key, resource_id, args, fields), + })['data'][resource] + + +class ViceIE(ViceBaseIE, AdobePassIE): + IE_NAME = 'vice' + _VALID_URL = r'https?://(?:(?:video|vms)\.vice|(?:www\.)?vice(?:land|tv))\.com/(?P<locale>[^/]+)/(?:video/[^/]+|embed)/(?P<id>[\da-f]{24})' + _EMBED_REGEX = [r'<iframe\b[^>]+\bsrc=["\'](?P<url>(?:https?:)?//video\.vice\.com/[^/]+/embed/[\da-f]{24})'] + _TESTS = [{ + 'url': 'https://video.vice.com/en_us/video/pet-cremator/58c69e38a55424f1227dc3f7', + 'info_dict': { + 'id': '58c69e38a55424f1227dc3f7', + 'ext': 'mp4', + 'title': '10 Questions You Always Wanted To Ask: Pet Cremator', + 'description': 'md5:fe856caacf61fe0e74fab15ce2b07ca5', + 'uploader': 'vice', + 'uploader_id': '57a204088cb727dec794c67b', + 'timestamp': 1489664942, + 'upload_date': '20170316', + 'age_limit': 14, + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + }, { + # geo restricted to US + 'url': 'https://video.vice.com/en_us/video/the-signal-from-tolva/5816510690b70e6c5fd39a56', + 'info_dict': { + 'id': '5816510690b70e6c5fd39a56', + 'ext': 'mp4', + 'uploader': 'vice', + 'title': 'The Signal From Tölva', + 'description': 'md5:3927e3c79f9e8094606a2b3c5b5e55d5', + 'uploader_id': '57a204088cb727dec794c67b', + 'timestamp': 1477941983, + 'upload_date': '20161031', + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + }, { + 'url': 'https://video.vice.com/alps/video/ulfs-wien-beruchtigste-grafitti-crew-part-1/581b12b60a0e1f4c0fb6ea2f', + 'info_dict': { + 'id': '581b12b60a0e1f4c0fb6ea2f', + 'ext': 'mp4', + 'title': 'ULFs - Wien berüchtigste Grafitti Crew - Part 1', + 'description': 'Zwischen Hinterzimmer-Tattoos und U-Bahnschächten erzählen uns die Ulfs, wie es ist, "süchtig nach Sachbeschädigung" zu sein.', + 'uploader': 'vice', + 'uploader_id': '57a204088cb727dec794c67b', + 'timestamp': 1485368119, + 'upload_date': '20170125', + 'age_limit': 14, + }, + 'params': { + # AES-encrypted m3u8 + 'skip_download': True, + }, + }, { + 'url': 'https://video.vice.com/en_us/video/pizza-show-trailer/56d8c9a54d286ed92f7f30e4', + 'only_matching': True, + }, { + 'url': 'https://video.vice.com/en_us/embed/57f41d3556a0a80f54726060', + 'only_matching': True, + }, { + 'url': 'https://vms.vice.com/en_us/video/preplay/58c69e38a55424f1227dc3f7', + 'only_matching': True, + }, { + 'url': 'https://www.viceland.com/en_us/video/thursday-march-1-2018/5a8f2d7ff1cdb332dd446ec1', + 'only_matching': True, + }] + + def _real_extract(self, url): + locale, video_id = self._match_valid_url(url).groups() + + video = self._call_api('videos', 'id', video_id, locale, '''body + locked + rating + thumbnail_url + title''')[0] + title = video['title'].strip() + rating = video.get('rating') + + query = {} + if video.get('locked'): + resource = self._get_mvpd_resource( + 'VICELAND', title, video_id, rating) + query['tvetoken'] = self._extract_mvpd_auth( + url, video_id, 'VICELAND', resource) + + # signature generation algorithm is reverse engineered from signatureGenerator in + # webpack:///../shared/~/vice-player/dist/js/vice-player.js in + # https://www.viceland.com/assets/common/js/web.vendor.bundle.js + # new JS is located here https://vice-web-statics-cdn.vice.com/vice-player/player-embed.js + exp = int(time.time()) + 1440 + + query.update({ + 'exp': exp, + 'sign': hashlib.sha512(('%s:GET:%d' % (video_id, exp)).encode()).hexdigest(), + 'skipadstitching': 1, + 'platform': 'desktop', + 'rn':
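+ # 'sign' above is the SHA-512 hex digest of '<video_id>:GET:<exp>'; e.g.
+ # for exp=1700000000 it would be computed as
+ # hashlib.sha512(b'58c69e38a55424f1227dc3f7:GET:1700000000').hexdigest().
+ # 'rn' appears to be a random cache-busting number: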
random.randint(10000, 100000), + }) + + try: + preplay = self._download_json( + 'https://vms.vice.com/%s/video/preplay/%s' % (locale, video_id), + video_id, query=query) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status in (400, 401): + error = json.loads(e.cause.response.read().decode()) + error_message = error.get('error_description') or error['details'] + raise ExtractorError('%s said: %s' % ( + self.IE_NAME, error_message), expected=True) + raise + + video_data = preplay['video'] + formats = self._extract_m3u8_formats( + preplay['playURL'], video_id, 'mp4', 'm3u8_native') + episode = video_data.get('episode') or {} + channel = video_data.get('channel') or {} + season = video_data.get('season') or {} + + subtitles = {} + for subtitle in preplay.get('subtitleURLs', []): + cc_url = subtitle.get('url') + if not cc_url: + continue + language_code = try_get(subtitle, lambda x: x['languages'][0]['language_code'], compat_str) or 'en' + subtitles.setdefault(language_code, []).append({ + 'url': cc_url, + }) + + return { + 'formats': formats, + 'id': video_id, + 'title': title, + 'description': clean_html(video.get('body')), + 'thumbnail': video.get('thumbnail_url'), + 'duration': int_or_none(video_data.get('video_duration')), + 'timestamp': int_or_none(video_data.get('created_at'), 1000), + 'age_limit': parse_age_limit(video_data.get('video_rating') or rating), + 'series': try_get(video_data, lambda x: x['show']['base']['display_title'], compat_str), + 'episode_number': int_or_none(episode.get('episode_number')), + 'episode_id': str_or_none(episode.get('id') or video_data.get('episode_id')), + 'season_number': int_or_none(season.get('season_number')), + 'season_id': str_or_none(season.get('id') or video_data.get('season_id')), + 'uploader': channel.get('name'), + 'uploader_id': str_or_none(channel.get('id')), + 'subtitles': subtitles, + } + + +class ViceShowIE(ViceBaseIE): + IE_NAME = 'vice:show' + _VALID_URL = r'https?://(?:video\.vice|(?:www\.)?vice(?:land|tv))\.com/(?P<locale>[^/]+)/show/(?P<id>[^/?#&]+)' + _PAGE_SIZE = 25 + _TESTS = [{ + 'url': 'https://video.vice.com/en_us/show/fck-thats-delicious', + 'info_dict': { + 'id': '57a2040c8cb727dec794c901', + 'title': 'F*ck, That’s Delicious', + 'description': 'The life and eating habits of rap’s greatest bon vivant, Action Bronson.', + }, + 'playlist_mincount': 64, + }, { + 'url': 'https://www.vicetv.com/en_us/show/fck-thats-delicious', + 'only_matching': True, + }] + + def _fetch_page(self, locale, show_id, page): + videos = self._call_api('videos', 'show_id', show_id, locale, '''body + id + url''', ', page: %d, per_page: %d' % (page + 1, self._PAGE_SIZE)) + for video in videos: + yield self.url_result( + video['url'], ViceIE.ie_key(), video.get('id')) + + def _real_extract(self, url): + locale, display_id = self._match_valid_url(url).groups() + show = self._call_api('shows', 'slug', display_id, locale, '''dek + id + title''')[0] + show_id = show['id'] + + entries = OnDemandPagedList( + functools.partial(self._fetch_page, locale, show_id), + self._PAGE_SIZE) + + return self.playlist_result( + entries, show_id, show.get('title'), show.get('dek')) + + +class ViceArticleIE(ViceBaseIE): + IE_NAME = 'vice:article' + _VALID_URL = r'https://(?:www\.)?vice\.com/(?P<locale>[^/]+)/article/(?:[0-9a-z]{6}/)?(?P<id>[^?#]+)' + + _TESTS = [{ + 'url': 'https://www.vice.com/en_us/article/on-set-with-the-woman-making-mormon-porn-in-utah', + 'info_dict': { + 'id': '58dc0a3dee202d2a0ccfcbd8', + 'ext': 'mp4', + 'title': 'Mormon War on Porn',
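+ # values of the form 'md5:<hash>' in an info_dict are compared against
+ # the MD5 of the extracted field by yt-dlp's test runner, not literally: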
'description': 'md5:1c5d91fe25fa8aa304f9def118b92dbf', + 'uploader': 'vice', + 'uploader_id': '57a204088cb727dec794c67b', + 'timestamp': 1491883129, + 'upload_date': '20170411', + 'age_limit': 17, + }, + 'params': { + # AES-encrypted m3u8 + 'skip_download': True, + }, + 'add_ie': [ViceIE.ie_key()], + }, { + 'url': 'https://www.vice.com/en_us/article/how-to-hack-a-car', + 'md5': '13010ee0bc694ea87ec40724397c2349', + 'info_dict': { + 'id': '3jstaBeXgAs', + 'ext': 'mp4', + 'title': 'How to Hack a Car: Phreaked Out (Episode 2)', + 'description': 'md5:ee95453f7ff495db8efe14ae8bf56f30', + 'uploader': 'Motherboard', + 'uploader_id': 'MotherboardTV', + 'upload_date': '20140529', + }, + 'add_ie': [YoutubeIE.ie_key()], + }, { + 'url': 'https://www.vice.com/en_us/article/znm9dx/karley-sciortino-slutever-reloaded', + 'md5': 'a7ecf64ee4fa19b916c16f4b56184ae2', + 'info_dict': { + 'id': '57f41d3556a0a80f54726060', + 'ext': 'mp4', + 'title': "Making The World's First Male Sex Doll", + 'description': 'md5:19b00b215b99961cf869c40fbe9df755', + 'uploader': 'vice', + 'uploader_id': '57a204088cb727dec794c67b', + 'timestamp': 1476919911, + 'upload_date': '20161019', + 'age_limit': 17, + }, + 'params': { + 'skip_download': True, + }, + 'add_ie': [ViceIE.ie_key()], + }, { + 'url': 'https://www.vice.com/en_us/article/cowboy-capitalists-part-1', + 'only_matching': True, + }, { + 'url': 'https://www.vice.com/ru/article/big-night-out-ibiza-clive-martin-229', + 'only_matching': True, + }] + + def _real_extract(self, url): + locale, display_id = self._match_valid_url(url).groups() + + article = self._call_api('articles', 'slug', display_id, locale, '''body + embed_code''')[0] + body = article['body'] + + def _url_res(video_url, ie_key): + return { + '_type': 'url_transparent', + 'url': video_url, + 'display_id': display_id, + 'ie_key': ie_key, + } + + vice_url = ViceIE._extract_url(body) + if vice_url: + return _url_res(vice_url, ViceIE.ie_key()) + + embed_code = self._search_regex( + r'embedCode=([^&\'"]+)', body, + 'ooyala embed code', default=None) + if embed_code: + return _url_res('ooyala:%s' % embed_code, 'Ooyala') + + youtube_url = YoutubeIE._extract_url(body) + if youtube_url: + return _url_res(youtube_url, YoutubeIE.ie_key()) + + video_url = self._html_search_regex( + r'data-video-url="([^"]+)"', + article['embed_code'], 'video URL') + + return _url_res(video_url, ViceIE.ie_key()) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vidbit.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vidbit.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vidbit.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vidbit.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/viddler.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/viddler.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/viddler.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/viddler.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/videa.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/videa.py new file mode 100644 index 0000000..634d2ed --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/videa.py @@ -0,0 +1,188 @@ +import random +import string +import struct + +from .common import InfoExtractor +from ..compat import compat_b64decode, compat_ord +from ..utils import ( + ExtractorError, + int_or_none, + mimetype2ext, + parse_codecs, + parse_qs, + update_url_query, + urljoin, + 
xpath_element, + xpath_text, +) + + +class VideaIE(InfoExtractor): + _VALID_URL = r'''(?x) + https?:// + videa(?:kid)?\.hu/ + (?: + videok/(?:[^/]+/)*[^?#&]+-| + (?:videojs_)?player\?.*?\bv=| + player/v/ + ) + (?P<id>[^?#&]+) + ''' + _EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//videa\.hu/player\?.*?\bv=.+?)\1'] + _TESTS = [{ + 'url': 'http://videa.hu/videok/allatok/az-orult-kigyasz-285-kigyot-kigyo-8YfIAjxwWGwT8HVQ', + 'md5': '97a7af41faeaffd9f1fc864a7c7e7603', + 'info_dict': { + 'id': '8YfIAjxwWGwT8HVQ', + 'ext': 'mp4', + 'title': 'Az őrült kígyász 285 kígyót enged szabadon', + 'thumbnail': r're:^https?://.*', + 'duration': 21, + 'age_limit': 0, + }, + }, { + 'url': 'http://videa.hu/videok/origo/jarmuvek/supercars-elozes-jAHDWfWSJH5XuFhH', + 'md5': 'd57ccd8812c7fd491d33b1eab8c99975', + 'info_dict': { + 'id': 'jAHDWfWSJH5XuFhH', + 'ext': 'mp4', + 'title': 'Supercars előzés', + 'thumbnail': r're:^https?://.*', + 'duration': 64, + 'age_limit': 0, + }, + }, { + 'url': 'http://videa.hu/player?v=8YfIAjxwWGwT8HVQ', + 'md5': '97a7af41faeaffd9f1fc864a7c7e7603', + 'info_dict': { + 'id': '8YfIAjxwWGwT8HVQ', + 'ext': 'mp4', + 'title': 'Az őrült kígyász 285 kígyót enged szabadon', + 'thumbnail': r're:^https?://.*', + 'duration': 21, + 'age_limit': 0, + }, + }, { + 'url': 'http://videa.hu/player/v/8YfIAjxwWGwT8HVQ?autoplay=1', + 'only_matching': True, + }, { + 'url': 'https://videakid.hu/videok/origo/jarmuvek/supercars-elozes-jAHDWfWSJH5XuFhH', + 'only_matching': True, + }, { + 'url': 'https://videakid.hu/player?v=8YfIAjxwWGwT8HVQ', + 'only_matching': True, + }, { + 'url': 'https://videakid.hu/player/v/8YfIAjxwWGwT8HVQ?autoplay=1', + 'only_matching': True, + }] + _STATIC_SECRET = 'xHb0ZvME5q8CBcoQi6AngerDu3FGO9fkUlwPmLVY_RTzj2hJIS4NasXWKy1td7p' + + @staticmethod + def rc4(cipher_text, key): + res = b'' + + key_len = len(key) + S = list(range(256)) + + j = 0 + for i in range(256): + j = (j + S[i] + ord(key[i % key_len])) % 256 + S[i], S[j] = S[j], S[i] + + i = 0 + j = 0 + for m in range(len(cipher_text)): + i = (i + 1) % 256 + j = (j + S[i]) % 256 + S[i], S[j] = S[j], S[i] + k = S[(S[i] + S[j]) % 256] + res += struct.pack('B', k ^ compat_ord(cipher_text[m])) + + return res.decode() + + def _real_extract(self, url): + video_id = self._match_id(url) + video_page = self._download_webpage(url, video_id) + + if 'videa.hu/player' in url: + player_url = url + player_page = video_page + else: + player_url = self._search_regex( + r'(?P<host>%s)/(?: + m/(?P<tmp_id>[0-9a-f]+)| + (?:category/)?video/(?P<display_id>[\w-]+)/(?P<id>[0-9a-f]{32})| + media/embed.*(?:\?|&)key=(?P<embed_id>[0-9a-f]{32}&?)
+ )''' % ('|'.join(map(re.escape, _INSTANCES))) + + _TESTS = [ + { + 'url': 'https://videocampus.sachsen.de/m/e0d6c8ce6e394c188f1342f1ab7c50ed6fc4490b808699801def5cb2e46d76ca7367f622a9f516c542ffb805b24d6b643bd7c81f385acaac4c59081b87a2767b', + 'info_dict': { + 'id': 'e6b9349905c1628631f175712250f2a1', + 'title': 'Konstruktiver Entwicklungsprozess Vorlesung 7', + 'description': 'Konstruktiver Entwicklungsprozess Vorlesung 7', + 'thumbnail': 'https://videocampus.sachsen.de/cache/1a985379ad3aecba8097a6902c7daa4e.jpg', + 'ext': 'mp4', + }, + }, + { + 'url': 'https://videocampus.sachsen.de/video/Was-ist-selbstgesteuertes-Lernen/fc99c527e4205b121cb7c74433469262', + 'info_dict': { + 'id': 'fc99c527e4205b121cb7c74433469262', + 'title': 'Was ist selbstgesteuertes Lernen?', + 'description': 'md5:196aa3b0509a526db62f84679522a2f5', + 'thumbnail': 'https://videocampus.sachsen.de/cache/6f4a85096ba24cb398e6ce54446b57ae.jpg', + 'display_id': 'Was-ist-selbstgesteuertes-Lernen', + 'ext': 'mp4', + }, + }, + { + 'url': 'https://videocampus.sachsen.de/category/video/Tutorial-zur-Nutzung-von-Adobe-Connect-aus-Veranstalter-Sicht/09d4ed029002eb1bdda610f1103dd54c/100', + 'info_dict': { + 'id': '09d4ed029002eb1bdda610f1103dd54c', + 'title': 'Tutorial zur Nutzung von Adobe Connect aus Veranstalter-Sicht', + 'description': 'md5:3d379ca3cc17b9da6784d7f58cca4d58', + 'thumbnail': 'https://videocampus.sachsen.de/cache/2452498fe8c2d5a7dc79a05d30f407b6.jpg', + 'display_id': 'Tutorial-zur-Nutzung-von-Adobe-Connect-aus-Veranstalter-Sicht', + 'ext': 'mp4', + }, + }, + { + 'url': 'https://www2.univ-sba.dz/video/Presentation-de-la-Faculte-de-droit-et-des-sciences-politiques-Journee-portes-ouvertes-202122/0183356e41af7bfb83d7667b20d9b6a3', + 'info_dict': { + 'url': 'https://www2.univ-sba.dz/getMedium/0183356e41af7bfb83d7667b20d9b6a3.mp4', + 'id': '0183356e41af7bfb83d7667b20d9b6a3', + 'title': 'Présentation de la Faculté de droit et des sciences politiques - Journée portes ouvertes 2021/22', + 'description': 'md5:508958bd93e0ca002ac731d94182a54f', + 'thumbnail': 'https://www2.univ-sba.dz/cache/4d5d4a0b4189271a8cc6cb5328e14769.jpg', + 'display_id': 'Presentation-de-la-Faculte-de-droit-et-des-sciences-politiques-Journee-portes-ouvertes-202122', + 'ext': 'mp4', + } + }, + { + 'url': 'https://vimp.weka-fachmedien.de/video/Preisverleihung-Produkte-des-Jahres-2022/c8816f1cc942c12b6cce57c835cffd7c', + 'info_dict': { + 'id': 'c8816f1cc942c12b6cce57c835cffd7c', + 'title': 'Preisverleihung »Produkte des Jahres 2022«', + 'description': 'md5:60c347568ca89aa25b772c4ea564ebd3', + 'thumbnail': 'https://vimp.weka-fachmedien.de/cache/da9f3090e9227b25beacf67ccf94de14.png', + 'display_id': 'Preisverleihung-Produkte-des-Jahres-2022', + 'ext': 'mp4', + }, + }, + { + 'url': 'https://videocampus.sachsen.de/media/embed?key=fc99c527e4205b121cb7c74433469262', + 'info_dict': { + 'id': 'fc99c527e4205b121cb7c74433469262', + 'title': 'Was ist selbstgesteuertes Lernen?', + 'ext': 'mp4', + }, + }, + ] + + def _real_extract(self, url): + host, video_id, tmp_id, display_id, embed_id = self._match_valid_url(url).group( + 'host', 'id', 'tmp_id', 'display_id', 'embed_id') + webpage = self._download_webpage(url, video_id or tmp_id, fatal=False) or '' + + if not video_id: + video_id = embed_id or self._html_search_regex( + rf'src="https?://{host}/media/embed.*(?:\?|&)key=([0-9a-f]+)&?', + webpage, 'video_id') + + if not (display_id or tmp_id): + # Title, description from embedded page's meta wouldn't be correct + title = self._html_search_regex(r'<video-js[^>]*
data-piwik-title="([^"<]+)"', webpage, 'title', fatal=False) + description = None + thumbnail = None + else: + title = self._html_search_meta(('og:title', 'twitter:title', 'title'), webpage, fatal=False) + description = self._html_search_meta( + ('og:description', 'twitter:description', 'description'), webpage, fatal=False) + thumbnail = self._html_search_meta(('og:image', 'twitter:image'), webpage, fatal=False) + + formats, subtitles = [], {} + try: + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + f'https://{host}/media/hlsMedium/key/{video_id}/format/auto/ext/mp4/learning/0/path/m3u8', + video_id, 'mp4', m3u8_id='hls', fatal=True) + except ExtractorError as e: + if not isinstance(e.cause, HTTPError) or e.cause.status not in (404, 500): + raise + + formats.append({'url': f'https://{host}/getMedium/{video_id}.mp4'}) + + return { + 'id': video_id, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'display_id': display_id, + 'formats': formats, + 'subtitles': subtitles, + } + + +class ViMPPlaylistIE(InfoExtractor): + IE_NAME = 'ViMP:Playlist' + _VALID_URL = r'''(?x)(?P<host>https?://(?:%s))/(?: + album/view/aid/(?P<album_id>[0-9]+)| + (?P<mode>category|channel)/(?P<name>[\w-]+)/(?P<id>[0-9]+) + )''' % '|'.join(map(re.escape, VideocampusSachsenIE._INSTANCES)) + + _TESTS = [{ + 'url': 'https://vimp.oth-regensburg.de/channel/Designtheorie-1-SoSe-2020/3', + 'info_dict': { + 'id': 'channel-3', + 'title': 'Designtheorie 1 SoSe 2020 :: Channels :: ViMP OTH Regensburg', + }, + 'playlist_mincount': 9, + }, { + 'url': 'https://www.fh-bielefeld.de/medienportal/album/view/aid/208', + 'info_dict': { + 'id': 'album-208', + 'title': 'KG Praktikum ABT/MEC :: Playlists :: FH-Medienportal', + }, + 'playlist_mincount': 4, + }, { + 'url': 'https://videocampus.sachsen.de/category/online-tutorials-onyx/91', + 'info_dict': { + 'id': 'category-91', + 'title': 'Online-Seminare ONYX - BPS - Bildungseinrichtungen - VCS', + }, + 'playlist_mincount': 7, + }] + _PAGE_SIZE = 10 + + def _fetch_page(self, host, url_part, id, data, page): + webpage = self._download_webpage( + f'{host}/media/ajax/component/boxList/{url_part}', id, + query={'page': page, 'page_only': 1}, data=urlencode_postdata(data)) + urls = re.findall(r'"([^"]+/video/[^"]+)"', webpage) + + for url in urls: + yield self.url_result(host + url, VideocampusSachsenIE) + + def _real_extract(self, url): + host, album_id, mode, name, id = self._match_valid_url(url).group( + 'host', 'album_id', 'mode', 'name', 'id') + + webpage = self._download_webpage(url, album_id or id, fatal=False) or '' + title = (self._html_search_meta('title', webpage, fatal=False) + or self._html_extract_title(webpage)) + + url_part = (f'aid/{album_id}' if album_id + else f'category/{name}/category_id/{id}' if mode == 'category' + else f'title/{name}/channel/{id}') + + mode = mode or 'album' + data = { + 'vars[mode]': mode, + f'vars[{mode}]': album_id or id, + 'vars[context]': '4' if album_id else '1' if mode == 'category' else '3', + 'vars[context_id]': album_id or id, + 'vars[layout]': 'thumb', + 'vars[per_page][thumb]': str(self._PAGE_SIZE), + } + + return self.playlist_result( + OnDemandPagedList(functools.partial( + self._fetch_page, host, url_part, album_id or id, data), self._PAGE_SIZE), + playlist_title=title, id=f'{mode}-{album_id or id}') diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/videodetective.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/videodetective.py similarity index 100% rename from
lib/python3.11/site-packages/yt_dlp/extractor/videodetective.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/videodetective.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/videofyme.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/videofyme.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/videofyme.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/videofyme.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/videoken.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/videoken.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/videoken.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/videoken.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/videomore.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/videomore.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/videomore.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/videomore.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/videopress.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/videopress.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/videopress.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/videopress.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vidio.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vidio.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vidio.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vidio.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/vidlii.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vidlii.py new file mode 100644 index 0000000..44353b7 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/vidlii.py @@ -0,0 +1,154 @@ +import re + +from .common import InfoExtractor +from ..networking import HEADRequest +from ..utils import ( + format_field, + float_or_none, + get_element_by_id, + int_or_none, + str_to_int, + strip_or_none, + unified_strdate, + urljoin, +) + + +class VidLiiIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?vidlii\.com/(?:watch|embed)\?.*?\bv=(?P<id>[0-9A-Za-z_-]{11})' + _TESTS = [{ + 'url': 'https://www.vidlii.com/watch?v=tJluaH4BJ3v', + 'md5': '9bf7d1e005dfa909b6efb0a1ff5175e2', + 'info_dict': { + 'id': 'tJluaH4BJ3v', + 'ext': 'mp4', + 'title': 'Vidlii is against me', + 'description': 'md5:fa3f119287a2bfb922623b52b1856145', + 'thumbnail': 're:https://.*.jpg', + 'uploader': 'APPle5auc31995', + 'uploader_url': 'https://www.vidlii.com/user/APPle5auc31995', + 'upload_date': '20171107', + 'duration': 212, + 'view_count': int, + 'comment_count': int, + 'average_rating': float, + 'categories': ['News & Politics'], + 'tags': ['Vidlii', 'Jan', 'Videogames'], + } + }, { + 'url': 'https://www.vidlii.com/watch?v=zTAtaAgOLKt', + 'md5': '5778f7366aa4c569b77002f8bf6b614f', + 'info_dict': { + 'id': 'zTAtaAgOLKt', + 'ext': 'mp4', + 'title': 'FULPTUBE SUCKS.', + 'description': 'md5:087b2ca355d4c8f8f77e97c43e72d711', + 'thumbnail': 'https://www.vidlii.com/usfi/thmp/zTAtaAgOLKt.jpg', + 'uploader': 'Homicide', + 'uploader_url': 'https://www.vidlii.com/user/Homicide', + 'upload_date': '20210612', + 'duration': 89, + 'view_count': int, + 'comment_count': int, + 'average_rating': float, + 'categories': ['News & Politics'], + 'tags': ['fulp', 'tube', 'sucks', 'bad', 'fulptube'], + }, + },
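+ # _real_extract below scrapes the player's 'src:' entries out of the
+ # page and keeps only renditions whose URL answers a HEAD request; a
+ # rough sketch of that idea (the URL layout here is hypothetical):
+ # source = 'https://www.vidlii.com/usfv/tJluaH4BJ3v_720.mp4'
+ # height = int(self._search_regex(r'(\d+).mp4', source, 'height', default=360)) # -> 720
+ # if self._request_webpage(HEADRequest(source), video_id, 'Checking 720p url', errnote=False):
+ # formats.append({'url': source, 'format_id': '720p', 'height': 720})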
{ + 'url': 'https://www.vidlii.com/embed?v=tJluaH4BJ3v&a=0', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage( + 'https://www.vidlii.com/watch?v=%s' % video_id, video_id) + formats = [] + + sources = [source[1] for source in re.findall( + r'src\s*:\s*(["\'])(?P(?:https?://)?(?:(?!\1).)+)\1', + webpage) or []] + for source in sources: + source = urljoin(url, source) + height = int(self._search_regex(r'(\d+).mp4', source, 'height', default=360)) + if self._request_webpage(HEADRequest(source), video_id, f'Checking {height}p url', errnote=False): + formats.append({ + 'url': source, + 'format_id': f'{height}p', + 'height': height, + }) + + title = self._search_regex( + (r'

<h1>([^<]+)</h1>', r'<title>([^<]+) - VidLii<'), webpage, + 'title') + + description = self._html_search_meta( + ('description', 'twitter:description'), webpage, + default=None) or strip_or_none( + get_element_by_id('des_text', webpage)) + + thumbnail = self._html_search_meta( + 'twitter:image', webpage, default=None) + if not thumbnail: + thumbnail_path = self._search_regex( + r'img\s*:\s*(["\'])(?P<url>(?:(?!\1).)+)\1', webpage, + 'thumbnail', fatal=False, group='url') + if thumbnail_path: + thumbnail = urljoin(url, thumbnail_path) + + uploader = self._search_regex( + r'<div[^>]+class=["\']wt_person[^>]+>\s*<a[^>]+\bhref=["\']/user/[^>]+>([^<]+)', + webpage, 'uploader', fatal=False) + uploader_url = format_field(uploader, None, 'https://www.vidlii.com/user/%s') + + upload_date = unified_strdate(self._html_search_meta( + 'datePublished', webpage, default=None) or self._search_regex( + r'<date>([^<]+)', webpage, 'upload date', fatal=False)) + + duration = int_or_none(self._html_search_meta( + 'video:duration', webpage, 'duration', + default=None) or self._search_regex( + r'duration\s*:\s*(\d+)', webpage, 'duration', fatal=False)) + + view_count = str_to_int(self._search_regex( + (r'<strong>([,0-9]+)</strong> views', + r'Views\s*:\s*<strong>([,0-9]+)</strong>'), + webpage, 'view count', fatal=False)) + + comment_count = int_or_none(self._search_regex( + (r'<span[^>]+id=["\']cmt_num[^>]+>(\d+)', + r'Comments\s*:\s*<strong>(\d+)'), + webpage, 'comment count', fatal=False)) + + average_rating = float_or_none(self._search_regex( + r'rating\s*:\s*([\d.]+)', webpage, 'average rating', fatal=False)) + + category = self._html_search_regex( + r'<div>Category\s*:\s*</div>\s*<div>\s*<a[^>]+>([^<]+)', webpage, + 'category', fatal=False) + categories = [category] if category else None + + tags = [ + strip_or_none(tag) + for tag in re.findall( + r'<a[^>]+\bhref=["\']/results\?.*?q=[^>]*>([^<]+)', + webpage) if strip_or_none(tag) + ] or None + + return { + 'id': video_id, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'uploader': uploader, + 'formats': formats, + 'uploader_url': uploader_url, + 'upload_date': upload_date, + 'duration': duration, + 'view_count': view_count, + 'comment_count': comment_count, + 'average_rating': average_rating, + 'categories': categories, + 'tags': tags, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/viewlift.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/viewlift.py new file mode 100644 index 0000000..8f686f0 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/viewlift.py @@ -0,0 +1,336 @@ +import json + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + int_or_none, + parse_age_limit, + traverse_obj, +) + + +class ViewLiftBaseIE(InfoExtractor): + _API_BASE = 'https://prod-api.viewlift.com/' + _DOMAINS_REGEX = r'(?:(?:main\.)?snagfilms|snagxtreme|funnyforfree|kiddovid|winnersview|(?:monumental|lax)sportsnetwork|vayafilm|failarmy|ftfnext|lnppass\.legapallacanestro|moviespree|app\.myoutdoortv|neoufitness|pflmma|theidentitytb)\.com|(?:hoichoi|app\.horseandcountry|kronon|marquee|supercrosslive)\.tv' + _SITE_MAP = { + 'ftfnext': 'lax', + 'funnyforfree': 'snagfilms', + 'hoichoi': 'hoichoitv', + 'kiddovid': 'snagfilms', + 'laxsportsnetwork': 'lax', + 'legapallacanestro': 'lnp', + 'marquee': 'marquee-tv', + 'monumentalsportsnetwork': 'monumental-network', + 'moviespree': 'bingeflix', + 'pflmma': 'pfl', + 'snagxtreme': 'snagfilms', + 'theidentitytb':
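+ # _SITE_MAP rewrites the second-level domain of the page URL into the
+ # site key the ViewLift API expects; a sketch of the lookup performed in
+ # the extractors below:
+ # site = 'www.monumentalsportsnetwork.com'.split('.')[-2]
+ # site = _SITE_MAP.get(site, site) # -> 'monumental-network'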
'tampabay', + 'vayafilm': 'snagfilms', + } + _TOKENS = {} + + def _fetch_token(self, site, url): + if self._TOKENS.get(site): + return + + cookies = self._get_cookies(url) + if cookies and cookies.get('token'): + self._TOKENS[site] = self._search_regex(r'22authorizationToken\%22:\%22([^\%]+)\%22', cookies['token'].value, 'token') + if not self._TOKENS.get(site): + self.raise_login_required('Cookies (not necessarily logged in) are needed to download from this website', method='cookies') + + def _call_api(self, site, path, video_id, url, query): + self._fetch_token(site, url) + try: + return self._download_json( + self._API_BASE + path, video_id, headers={'Authorization': self._TOKENS.get(site)}, query=query) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + webpage = e.cause.response.read().decode() + try: + error_message = traverse_obj(json.loads(webpage), 'errorMessage', 'message') + except json.JSONDecodeError: + raise ExtractorError(f'{site} said: {webpage}', cause=e.cause) + if error_message: + if 'has not purchased' in error_message: + self.raise_login_required(method='cookies') + raise ExtractorError(error_message, expected=True) + raise + + +class ViewLiftEmbedIE(ViewLiftBaseIE): + IE_NAME = 'viewlift:embed' + _VALID_URL = r'https?://(?:(?:www|embed)\.)?(?P<domain>%s)/embed/player\?.*\bfilmId=(?P<id>[\da-f]{8}-(?:[\da-f]{4}-){3}[\da-f]{12})' % ViewLiftBaseIE._DOMAINS_REGEX + _EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:embed\.)?(?:%s)/embed/player.+?)\1' % ViewLiftBaseIE._DOMAINS_REGEX] + _TESTS = [{ + 'url': 'http://embed.snagfilms.com/embed/player?filmId=74849a00-85a9-11e1-9660-123139220831&w=500', + 'md5': '2924e9215c6eff7a55ed35b72276bd93', + 'info_dict': { + 'id': '74849a00-85a9-11e1-9660-123139220831', + 'ext': 'mp4', + 'title': '#whilewewatch', + 'description': 'md5:b542bef32a6f657dadd0df06e26fb0c8', + 'timestamp': 1334350096, + 'upload_date': '20120413', + } + }, { + # invalid labels, 360p is better that 480p + 'url': 'http://www.snagfilms.com/embed/player?filmId=17ca0950-a74a-11e0-a92a-0026bb61d036', + 'md5': '882fca19b9eb27ef865efeeaed376a48', + 'info_dict': { + 'id': '17ca0950-a74a-11e0-a92a-0026bb61d036', + 'ext': 'mp4', + 'title': 'Life in Limbo', + }, + 'skip': 'The video does not exist', + }, { + 'url': 'http://www.snagfilms.com/embed/player?filmId=0000014c-de2f-d5d6-abcf-ffef58af0017', + 'only_matching': True, + }] + + def _real_extract(self, url): + domain, film_id = self._match_valid_url(url).groups() + site = domain.split('.')[-2] + if site in self._SITE_MAP: + site = self._SITE_MAP[site] + + content_data = self._call_api( + site, 'entitlement/video/status', film_id, url, { + 'id': film_id + })['video'] + gist = content_data['gist'] + title = gist['title'] + video_assets = content_data['streamingInfo']['videoAssets'] + + hls_url = video_assets.get('hls') + formats, subtitles = [], {} + if hls_url: + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + hls_url, film_id, 'mp4', 'm3u8_native', m3u8_id='hls', fatal=False) + + for video_asset in video_assets.get('mpeg') or []: + video_asset_url = video_asset.get('url') + if not video_asset_url: + continue + bitrate = int_or_none(video_asset.get('bitrate')) + height = int_or_none(self._search_regex( + r'^_?(\d+)[pP]$', video_asset.get('renditionValue'), + 'height', default=None)) + formats.append({ + 'url': video_asset_url, + 'format_id': 'http%s' % ('-%d' % bitrate if bitrate else ''), + 'tbr': bitrate, + 'height': height, + 'vcodec': 
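+ # e.g. an 'mpeg' asset with renditionValue '_480p' and bitrate 1200
+ # yields height=480 (via the regex above), tbr=1200 and
+ # format_id='http-1200'; the codec is passed through unchanged: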
video_asset.get('codec'), + }) + + subs = {} + for sub in traverse_obj(content_data, ('contentDetails', 'closedCaptions')) or []: + sub_url = sub.get('url') + if not sub_url: + continue + subs.setdefault(sub.get('language', 'English'), []).append({ + 'url': sub_url, + }) + + return { + 'id': film_id, + 'title': title, + 'description': gist.get('description'), + 'thumbnail': gist.get('videoImageUrl'), + 'duration': int_or_none(gist.get('runtime')), + 'age_limit': parse_age_limit(content_data.get('parentalRating')), + 'timestamp': int_or_none(gist.get('publishDate'), 1000), + 'formats': formats, + 'subtitles': self._merge_subtitles(subs, subtitles), + 'categories': traverse_obj(content_data, ('categories', ..., 'title')), + 'tags': traverse_obj(content_data, ('tags', ..., 'title')), + } + + +class ViewLiftIE(ViewLiftBaseIE): + IE_NAME = 'viewlift' + _API_BASE = 'https://prod-api-cached-2.viewlift.com/' + _VALID_URL = r'https?://(?:www\.)?(?P<domain>%s)(?P<path>(?:/(?:films/title|show|(?:news/)?videos?|watch))?/(?P<id>[^?#]+))' % ViewLiftBaseIE._DOMAINS_REGEX + _TESTS = [{ + 'url': 'http://www.snagfilms.com/films/title/lost_for_life', + 'md5': '19844f897b35af219773fd63bdec2942', + 'info_dict': { + 'id': '0000014c-de2f-d5d6-abcf-ffef58af0017', + 'display_id': 'lost_for_life', + 'ext': 'mp4', + 'title': 'Lost for Life', + 'description': 'md5:ea10b5a50405ae1f7b5269a6ec594102', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 4489, + 'categories': 'mincount:3', + 'age_limit': 14, + 'upload_date': '20150421', + 'timestamp': 1429656820, + } + }, { + 'url': 'http://www.snagfilms.com/show/the_world_cut_project/india', + 'md5': 'e6292e5b837642bbda82d7f8bf3fbdfd', + 'info_dict': { + 'id': '00000145-d75c-d96e-a9c7-ff5c67b20000', + 'display_id': 'the_world_cut_project/india', + 'ext': 'mp4', + 'title': 'India', + 'description': 'md5:5c168c5a8f4719c146aad2e0dfac6f5f', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 979, + 'timestamp': 1399478279, + 'upload_date': '20140507', + } + }, { + 'url': 'http://main.snagfilms.com/augie_alone/s_2_ep_12_love', + 'info_dict': { + 'id': '00000148-7b53-de26-a9fb-fbf306f70020', + 'display_id': 'augie_alone/s_2_ep_12_love', + 'ext': 'mp4', + 'title': 'S. 2 Ep. 12 - Love', + 'description': 'Augie finds love.', + 'thumbnail': r're:^https?://.*\.jpg', + 'duration': 107, + 'upload_date': '20141012', + 'timestamp': 1413129540, + 'age_limit': 17, + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'http://main.snagfilms.com/films/title/the_freebie', + 'only_matching': True, + }, { + # Film is not playable in your area. + 'url': 'http://www.snagfilms.com/films/title/inside_mecca', + 'only_matching': True, + }, { + # Film is not available. 
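+ # 'only_matching' cases assert only that the URL is routed to this
+ # extractor's _VALID_URL; no extraction or download is attempted: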
+ 'url': 'http://www.snagfilms.com/show/augie_alone/flirting', + 'only_matching': True, + }, { + 'url': 'http://www.winnersview.com/videos/the-good-son', + 'only_matching': True, + }, { + # Was once Kaltura embed + 'url': 'https://www.monumentalsportsnetwork.com/videos/john-carlson-postgame-2-25-15', + 'only_matching': True, + }, { + 'url': 'https://www.marquee.tv/watch/sadlerswells-sacredmonsters', + 'only_matching': True, + }, { # Free film with language code + 'url': 'https://www.hoichoi.tv/bn/films/title/shuyopoka', + 'info_dict': { + 'id': '7a7a9d33-1f4c-4771-9173-ee4fb6dbf196', + 'ext': 'mp4', + 'title': 'Shuyopoka', + 'description': 'md5:e28f2fb8680096a69c944d37c1fa5ffc', + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20211006', + 'series': None + }, + 'params': {'skip_download': True}, + }, { # Free film + 'url': 'https://www.hoichoi.tv/films/title/dadu-no1', + 'info_dict': { + 'id': '0000015b-b009-d126-a1db-b81ff3780000', + 'ext': 'mp4', + 'title': 'Dadu No.1', + 'description': 'md5:605cba408e51a79dafcb824bdeded51e', + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20210827', + 'series': None + }, + 'params': {'skip_download': True}, + }, { # Free episode + 'url': 'https://www.hoichoi.tv/webseries/case-jaundice-s01-e01', + 'info_dict': { + 'id': 'f779e07c-30c8-459c-8612-5a834ab5e5ba', + 'ext': 'mp4', + 'title': 'Humans Vs. Corona', + 'description': 'md5:ca30a682b4528d02a3eb6d0427dd0f87', + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20210830', + 'series': 'Case Jaundice' + }, + 'params': {'skip_download': True}, + }, { # Free video + 'url': 'https://www.hoichoi.tv/videos/1549072415320-six-episode-02-hindi', + 'info_dict': { + 'id': 'b41fa1ce-aca6-47b6-b208-283ff0a2de30', + 'ext': 'mp4', + 'title': 'Woman in red - Hindi', + 'description': 'md5:9d21edc1827d32f8633eb67c2054fc31', + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20211006', + 'series': 'Six (Hindi)' + }, + 'params': {'skip_download': True}, + }, { # Free episode + 'url': 'https://www.hoichoi.tv/shows/watch-asian-paints-moner-thikana-online-season-1-episode-1', + 'info_dict': { + 'id': '1f45d185-8500-455c-b88d-13252307c3eb', + 'ext': 'mp4', + 'title': 'Jisshu Sengupta', + 'description': 'md5:ef6ffae01a3d83438597367400f824ed', + 'thumbnail': r're:^https?://.*\.jpg$', + 'upload_date': '20211004', + 'series': 'Asian Paints Moner Thikana' + }, + 'params': {'skip_download': True}, + }, { # Free series + 'url': 'https://www.hoichoi.tv/shows/watch-moner-thikana-bengali-web-series-online', + 'playlist_mincount': 5, + 'info_dict': { + 'id': 'watch-moner-thikana-bengali-web-series-online', + }, + }, { # Premium series + 'url': 'https://www.hoichoi.tv/shows/watch-byomkesh-bengali-web-series-online', + 'playlist_mincount': 14, + 'info_dict': { + 'id': 'watch-byomkesh-bengali-web-series-online', + }, + }, { # Premium movie + 'url': 'https://www.hoichoi.tv/movies/detective-2020', + 'only_matching': True + }] + + @classmethod + def suitable(cls, url): + return False if ViewLiftEmbedIE.suitable(url) else super(ViewLiftIE, cls).suitable(url) + + def _show_entries(self, domain, seasons): + for season in seasons: + for episode in season.get('episodes') or []: + path = traverse_obj(episode, ('gist', 'permalink')) + if path: + yield self.url_result(f'https://www.{domain}{path}', ie=self.ie_key()) + + def _real_extract(self, url): + domain, path, display_id = self._match_valid_url(url).groups() + site = domain.split('.')[-2] + if site in self._SITE_MAP: + site = self._SITE_MAP[site] + modules =
self._call_api( + site, 'content/pages', display_id, url, { + 'includeContent': 'true', + 'moduleOffset': 1, + 'path': path, + 'site': site, + })['modules'] + + seasons = next((m['contentData'][0]['seasons'] for m in modules if m.get('moduleType') == 'ShowDetailModule'), None) + if seasons: + return self.playlist_result(self._show_entries(domain, seasons), display_id) + + film_id = next(m['contentData'][0]['gist']['id'] for m in modules if m.get('moduleType') == 'VideoDetailModule') + return { + '_type': 'url_transparent', + 'url': 'http://%s/embed/player?filmId=%s' % (domain, film_id), + 'id': film_id, + 'display_id': display_id, + 'ie_key': 'ViewLiftEmbed', + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/viidea.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/viidea.py new file mode 100644 index 0000000..649ffe3 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/viidea.py @@ -0,0 +1,199 @@ +import re + +from .common import InfoExtractor +from ..compat import ( + compat_str, + compat_urlparse, +) +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + js_to_json, + parse_duration, + parse_iso8601, +) + + +class ViideaIE(InfoExtractor): + _VALID_URL = r'''(?x)https?://(?:www\.)?(?: + videolectures\.net| + flexilearn\.viidea\.net| + presentations\.ocwconsortium\.org| + video\.travel-zoom\.si| + video\.pomp-forum\.si| + tv\.nil\.si| + video\.hekovnik.com| + video\.szko\.si| + kpk\.viidea\.com| + inside\.viidea\.net| + video\.kiberpipa\.org| + bvvideo\.si| + kongres\.viidea\.net| + edemokracija\.viidea\.com + )(?:/lecture)?/(?P<id>[^/]+)(?:/video/(?P<part>\d+))?/*(?:[#?].*)?$''' + + _TESTS = [{ + 'url': 'http://videolectures.net/promogram_igor_mekjavic_eng/', + 'info_dict': { + 'id': '20171', + 'display_id': 'promogram_igor_mekjavic_eng', + 'ext': 'mp4', + 'title': 'Automatics, robotics and biocybernetics', + 'description': 'md5:815fc1deb6b3a2bff99de2d5325be482', + 'thumbnail': r're:http://.*\.jpg', + 'timestamp': 1372349289, + 'upload_date': '20130627', + 'duration': 565, + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + }, { + # video with invalid direct format links (HTTP 403) + 'url': 'http://videolectures.net/russir2010_filippova_nlp/', + 'info_dict': { + 'id': '14891', + 'display_id': 'russir2010_filippova_nlp', + 'ext': 'flv', + 'title': 'NLP at Google', + 'description': 'md5:fc7a6d9bf0302d7cc0e53f7ca23747b3', + 'thumbnail': r're:http://.*\.jpg', + 'timestamp': 1284375600, + 'upload_date': '20100913', + 'duration': 5352, + }, + 'params': { + # rtmp download + 'skip_download': True, + }, + }, { + # event playlist + 'url': 'http://videolectures.net/deeplearning2015_montreal/', + 'info_dict': { + 'id': '23181', + 'title': 'Deep Learning Summer School, Montreal 2015', + 'description': 'md5:0533a85e4bd918df52a01f0e1ebe87b7', + 'thumbnail': r're:http://.*\.jpg', + 'timestamp': 1438560000, + }, + 'playlist_count': 30, + }, { + # multi part lecture + 'url': 'http://videolectures.net/mlss09uk_bishop_ibi/', + 'info_dict': { + 'id': '9737', + 'display_id': 'mlss09uk_bishop_ibi', + 'title': 'Introduction To Bayesian Inference', + 'thumbnail': r're:http://.*\.jpg', + 'timestamp': 1251622800, + }, + 'playlist': [{ + 'info_dict': { + 'id': '9737_part1', + 'display_id': 'mlss09uk_bishop_ibi_part1', + 'ext': 'wmv', + 'title': 'Introduction To Bayesian Inference (Part 1)', + 'thumbnail': r're:http://.*\.jpg', + 'duration': 4622, + 'timestamp': 1251622800, + 'upload_date': '20090830', + }, + }, { + 
'info_dict': { + 'id': '9737_part2', + 'display_id': 'mlss09uk_bishop_ibi_part2', + 'ext': 'wmv', + 'title': 'Introduction To Bayesian Inference (Part 2)', + 'thumbnail': r're:http://.*\.jpg', + 'duration': 5641, + 'timestamp': 1251622800, + 'upload_date': '20090830', + }, + }], + 'playlist_count': 2, + }] + + def _real_extract(self, url): + lecture_slug, explicit_part_id = self._match_valid_url(url).groups() + + webpage = self._download_webpage(url, lecture_slug) + + cfg = self._parse_json(self._search_regex( + [r'cfg\s*:\s*({.+?})\s*,\s*[\da-zA-Z_]+\s*:\s*\(?\s*function', + r'cfg\s*:\s*({[^}]+})'], + webpage, 'cfg'), lecture_slug, js_to_json) + + lecture_id = compat_str(cfg['obj_id']) + + base_url = self._proto_relative_url(cfg['livepipe'], 'http:') + + try: + lecture_data = self._download_json( + '%s/site/api/lecture/%s?format=json' % (base_url, lecture_id), + lecture_id)['lecture'][0] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 403: + msg = self._parse_json( + e.cause.response.read().decode('utf-8'), lecture_id) + raise ExtractorError(msg['detail'], expected=True) + raise + + lecture_info = { + 'id': lecture_id, + 'display_id': lecture_slug, + 'title': lecture_data['title'], + 'timestamp': parse_iso8601(lecture_data.get('time')), + 'description': lecture_data.get('description_wiki'), + 'thumbnail': lecture_data.get('thumb'), + } + + playlist_entries = [] + lecture_type = lecture_data.get('type') + parts = [compat_str(video) for video in cfg.get('videos', [])] + if parts: + multipart = len(parts) > 1 + + def extract_part(part_id): + smil_url = '%s/%s/video/%s/smil.xml' % (base_url, lecture_slug, part_id) + smil = self._download_smil(smil_url, lecture_id) + info = self._parse_smil(smil, smil_url, lecture_id) + info['id'] = lecture_id if not multipart else '%s_part%s' % (lecture_id, part_id) + info['display_id'] = lecture_slug if not multipart else '%s_part%s' % (lecture_slug, part_id) + if multipart: + info['title'] += ' (Part %s)' % part_id + switch = smil.find('.//switch') + if switch is not None: + info['duration'] = parse_duration(switch.attrib.get('dur')) + item_info = lecture_info.copy() + item_info.update(info) + return item_info + + if explicit_part_id or not multipart: + result = extract_part(explicit_part_id or parts[0]) + else: + result = { + '_type': 'multi_video', + 'entries': [extract_part(part) for part in parts], + } + result.update(lecture_info) + + # Immediately return explicitly requested part or non event item + if explicit_part_id or lecture_type != 'evt': + return result + + playlist_entries.append(result) + + # It's probably a playlist + if not parts or lecture_type == 'evt': + playlist_webpage = self._download_webpage( + '%s/site/ajax/drilldown/?id=%s' % (base_url, lecture_id), lecture_id) + entries = [ + self.url_result(compat_urlparse.urljoin(url, video_url), 'Viidea') + for _, video_url in re.findall( + r'<a[^>]+href=(["\'])(.+?)\1[^>]+id=["\']lec=\d+', playlist_webpage)] + playlist_entries.extend(entries) + + playlist = self.playlist_result(playlist_entries, lecture_id) + playlist.update(lecture_info) + return playlist diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/viki.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/viki.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/viki.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/viki.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/vimeo.py 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/vimeo.py new file mode 100644 index 0000000..e72fa50 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/vimeo.py @@ -0,0 +1,1447 @@ +import base64 +import functools +import re +import itertools + +from .common import InfoExtractor +from ..compat import compat_str, compat_urlparse +from ..networking import HEADRequest, Request +from ..networking.exceptions import HTTPError +from ..utils import ( + clean_html, + determine_ext, + ExtractorError, + get_element_by_class, + js_to_json, + int_or_none, + merge_dicts, + OnDemandPagedList, + parse_filesize, + parse_iso8601, + parse_qs, + smuggle_url, + str_or_none, + try_get, + unified_timestamp, + unsmuggle_url, + urlencode_postdata, + urljoin, + urlhandle_detect_ext, +) + + +class VimeoBaseInfoExtractor(InfoExtractor): + _NETRC_MACHINE = 'vimeo' + _LOGIN_REQUIRED = False + _LOGIN_URL = 'https://vimeo.com/log_in' + + @staticmethod + def _smuggle_referrer(url, referrer_url): + return smuggle_url(url, {'http_headers': {'Referer': referrer_url}}) + + def _unsmuggle_headers(self, url): + """@returns (url, smuggled_data, headers)""" + url, data = unsmuggle_url(url, {}) + headers = self.get_param('http_headers').copy() + if 'http_headers' in data: + headers.update(data['http_headers']) + return url, data, headers + + def _perform_login(self, username, password): + webpage = self._download_webpage( + self._LOGIN_URL, None, 'Downloading login page') + token, vuid = self._extract_xsrft_and_vuid(webpage) + data = { + 'action': 'login', + 'email': username, + 'password': password, + 'service': 'vimeo', + 'token': token, + } + self._set_vimeo_cookie('vuid', vuid) + try: + self._download_webpage( + self._LOGIN_URL, None, 'Logging in', + data=urlencode_postdata(data), headers={ + 'Content-Type': 'application/x-www-form-urlencoded', + 'Referer': self._LOGIN_URL, + }) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 418: + raise ExtractorError( + 'Unable to log in: bad username or password', + expected=True) + raise ExtractorError('Unable to log in') + + def _real_initialize(self): + if self._LOGIN_REQUIRED and not self._get_cookies('https://vimeo.com').get('vuid'): + self._raise_login_required() + + def _get_video_password(self): + password = self.get_param('videopassword') + if password is None: + raise ExtractorError( + 'This video is protected by a password, use the --video-password option', + expected=True) + return password + + def _verify_video_password(self, url, video_id, password, token, vuid): + if url.startswith('http://'): + # vimeo only supports https now, but the user can give an http url + url = url.replace('http://', 'https://') + self._set_vimeo_cookie('vuid', vuid) + return self._download_webpage( + url + '/password', video_id, 'Verifying the password', + 'Wrong password', data=urlencode_postdata({ + 'password': password, + 'token': token, + }), headers={ + 'Content-Type': 'application/x-www-form-urlencoded', + 'Referer': url, + }) + + def _extract_xsrft_and_vuid(self, webpage): + xsrft = self._search_regex( + r'(?:(?P<q1>["\'])xsrft(?P=q1)\s*:|xsrft\s*[=:])\s*(?P<q>["\'])(?P<xsrft>.+?)(?P=q)', + webpage, 'login token', group='xsrft') + vuid = self._search_regex( + r'["\']vuid["\']\s*:\s*(["\'])(?P<vuid>.+?)\1', + webpage, 'vuid', group='vuid') + return xsrft, vuid + + def _extract_vimeo_config(self, webpage, video_id, *args, **kwargs): + vimeo_config = self._search_regex( + 
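+        # (editor's note, not part of upstream yt-dlp; illustrative only. The two
+        # alternations in the pattern below are meant to cover both shapes in which
+        # the page embeds the config assignment, e.g.
+        #     vimeo.config = {...};
+        #     vimeo.config = _extend(window.vimeo.config, {...});
+        # whichever group matched is then handed to _parse_json().)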
r'vimeo\.config\s*=\s*(?:({.+?})|_extend\([^,]+,\s+({.+?})\));', + webpage, 'vimeo config', *args, **kwargs) + if vimeo_config: + return self._parse_json(vimeo_config, video_id) + + def _set_vimeo_cookie(self, name, value): + self._set_cookie('vimeo.com', name, value) + + def _parse_config(self, config, video_id): + video_data = config['video'] + video_title = video_data.get('title') + live_event = video_data.get('live_event') or {} + is_live = live_event.get('status') == 'started' + request = config.get('request') or {} + + formats = [] + subtitles = {} + + config_files = video_data.get('files') or request.get('files') or {} + for f in (config_files.get('progressive') or []): + video_url = f.get('url') + if not video_url: + continue + formats.append({ + 'url': video_url, + 'format_id': 'http-%s' % f.get('quality'), + 'source_preference': 10, + 'width': int_or_none(f.get('width')), + 'height': int_or_none(f.get('height')), + 'fps': int_or_none(f.get('fps')), + 'tbr': int_or_none(f.get('bitrate')), + }) + + # TODO: fix handling of 308 status code returned for live archive manifest requests + sep_pattern = r'/sep/video/' + for files_type in ('hls', 'dash'): + for cdn_name, cdn_data in (try_get(config_files, lambda x: x[files_type]['cdns']) or {}).items(): + manifest_url = cdn_data.get('url') + if not manifest_url: + continue + format_id = '%s-%s' % (files_type, cdn_name) + sep_manifest_urls = [] + if re.search(sep_pattern, manifest_url): + for suffix, repl in (('', 'video'), ('_sep', 'sep/video')): + sep_manifest_urls.append((format_id + suffix, re.sub( + sep_pattern, '/%s/' % repl, manifest_url))) + else: + sep_manifest_urls = [(format_id, manifest_url)] + for f_id, m_url in sep_manifest_urls: + if files_type == 'hls': + fmts, subs = self._extract_m3u8_formats_and_subtitles( + m_url, video_id, 'mp4', live=is_live, m3u8_id=f_id, + note='Downloading %s m3u8 information' % cdn_name, + fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + elif files_type == 'dash': + if 'json=1' in m_url: + real_m_url = (self._download_json(m_url, video_id, fatal=False) or {}).get('url') + if real_m_url: + m_url = real_m_url + fmts, subs = self._extract_mpd_formats_and_subtitles( + m_url.replace('/master.json', '/master.mpd'), video_id, f_id, + 'Downloading %s MPD information' % cdn_name, + fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + + live_archive = live_event.get('archive') or {} + live_archive_source_url = live_archive.get('source_url') + if live_archive_source_url and live_archive.get('status') == 'done': + formats.append({ + 'format_id': 'live-archive-source', + 'url': live_archive_source_url, + 'quality': 10, + }) + + for tt in (request.get('text_tracks') or []): + subtitles.setdefault(tt['lang'], []).append({ + 'ext': 'vtt', + 'url': urljoin('https://vimeo.com', tt['url']), + }) + + thumbnails = [] + if not is_live: + for key, thumb in (video_data.get('thumbs') or {}).items(): + thumbnails.append({ + 'id': key, + 'width': int_or_none(key), + 'url': thumb, + }) + thumbnail = video_data.get('thumbnail') + if thumbnail: + thumbnails.append({ + 'url': thumbnail, + }) + + owner = video_data.get('owner') or {} + video_uploader_url = owner.get('url') + + duration = int_or_none(video_data.get('duration')) + chapter_data = try_get(config, lambda x: x['embed']['chapters']) or [] + chapters = [{ + 'title': current_chapter.get('title'), + 'start_time': current_chapter.get('timecode'), + 'end_time': next_chapter.get('timecode'), + } for 
current_chapter, next_chapter in zip(chapter_data, chapter_data[1:] + [{'timecode': duration}])] + if chapters and chapters[0]['start_time']: # Chapters may not start from 0 + chapters[:0] = [{'title': '<Untitled>', 'start_time': 0, 'end_time': chapters[0]['start_time']}] + + return { + 'id': str_or_none(video_data.get('id')) or video_id, + 'title': video_title, + 'uploader': owner.get('name'), + 'uploader_id': video_uploader_url.split('/')[-1] if video_uploader_url else None, + 'uploader_url': video_uploader_url, + 'thumbnails': thumbnails, + 'duration': duration, + 'chapters': chapters or None, + 'formats': formats, + 'subtitles': subtitles, + 'is_live': is_live, + # Note: Bitrates are completely broken. Single m3u8 may contain entries in kbps and bps + # at the same time without actual units specified. + '_format_sort_fields': ('quality', 'res', 'fps', 'hdr:12', 'source'), + } + + def _extract_original_format(self, url, video_id, unlisted_hash=None): + query = {'action': 'load_download_config'} + if unlisted_hash: + query['unlisted_hash'] = unlisted_hash + download_data = self._download_json( + url, video_id, fatal=False, query=query, + headers={'X-Requested-With': 'XMLHttpRequest'}, + expected_status=(403, 404)) or {} + source_file = download_data.get('source_file') + download_url = try_get(source_file, lambda x: x['download_url']) + if download_url and not source_file.get('is_cold') and not source_file.get('is_defrosting'): + source_name = source_file.get('public_name', 'Original') + if self._is_valid_url(download_url, video_id, '%s video' % source_name): + ext = (try_get( + source_file, lambda x: x['extension'], + compat_str) or determine_ext( + download_url, None) or 'mp4').lower() + return { + 'url': download_url, + 'ext': ext, + 'width': int_or_none(source_file.get('width')), + 'height': int_or_none(source_file.get('height')), + 'filesize': parse_filesize(source_file.get('size')), + 'format_id': source_name, + 'quality': 1, + } + + jwt_response = self._download_json( + 'https://vimeo.com/_rv/viewer', video_id, note='Downloading jwt token', fatal=False) or {} + if not jwt_response.get('jwt'): + return + headers = {'Authorization': 'jwt %s' % jwt_response['jwt']} + original_response = self._download_json( + f'https://api.vimeo.com/videos/{video_id}', video_id, + headers=headers, fatal=False, expected_status=(403, 404)) or {} + for download_data in original_response.get('download') or []: + download_url = download_data.get('link') + if not download_url or download_data.get('quality') != 'source': + continue + ext = determine_ext(parse_qs(download_url).get('filename', [''])[0].lower(), default_ext=None) + if not ext: + urlh = self._request_webpage( + HEADRequest(download_url), video_id, fatal=False, note='Determining source extension') + ext = urlh and urlhandle_detect_ext(urlh) + return { + 'url': download_url, + 'ext': ext or 'unknown_video', + 'format_id': download_data.get('public_name', 'Original'), + 'width': int_or_none(download_data.get('width')), + 'height': int_or_none(download_data.get('height')), + 'fps': int_or_none(download_data.get('fps')), + 'filesize': int_or_none(download_data.get('size')), + 'quality': 1, + } + + +class VimeoIE(VimeoBaseInfoExtractor): + """Information extractor for vimeo.com.""" + + # _VALID_URL matches Vimeo URLs + _VALID_URL = r'''(?x) + https?:// + (?: + (?: + www| + player + ) + \. + )? + vimeo\.com/ + (?: + (?P<u>user)| + (?!(?:channels|album|showcase)/[^/?#]+/?(?:$|[?#])|[^/]+/review/|ondemand/) + (?:.*?/)?? 
+ (?P<q> + (?: + play_redirect_hls| + moogaloop\.swf)\?clip_id= + )? + (?:videos?/)? + ) + (?P<id>[0-9]+) + (?(u) + /(?!videos|likes)[^/?#]+/?| + (?(q)|/(?P<unlisted_hash>[\da-f]{10}))? + ) + (?:(?(q)[&]|(?(u)|/?)[?]).*?)?(?:[#].*)?$ + ''' + IE_NAME = 'vimeo' + _EMBED_REGEX = [ + # iframe + r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//player\.vimeo\.com/video/\d+.*?)\1', + # Embedded (swf embed) Vimeo player + r'<embed[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?vimeo\.com/moogaloop\.swf.+?)\1', + # Non-standard embedded Vimeo player + r'<video[^>]+src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?vimeo\.com/[0-9]+)\1', + ] + _TESTS = [ + { + 'url': 'http://vimeo.com/56015672#at=0', + 'md5': '8879b6cc097e987f02484baf890129e5', + 'info_dict': { + 'id': '56015672', + 'ext': 'mp4', + 'title': "youtube-dl test video '' ä↭𝕐-BaW jenozKc", + 'description': 'md5:2d3305bad981a06ff79f027f19865021', + 'timestamp': 1355990239, + 'upload_date': '20121220', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/user7108434', + 'uploader_id': 'user7108434', + 'uploader': 'Filippo Valsorda', + 'duration': 10, + 'license': 'by-sa', + }, + 'params': { + 'format': 'best[protocol=https]', + }, + 'skip': 'No longer available' + }, + { + 'url': 'http://player.vimeo.com/video/54469442', + 'md5': '619b811a4417aa4abe78dc653becf511', + 'note': 'Videos that embed the url in the player page', + 'info_dict': { + 'id': '54469442', + 'ext': 'mp4', + 'title': 'Kathy Sierra: Building the minimum Badass User, Business of Software 2012', + 'uploader': 'Business of Software', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/businessofsoftware', + 'uploader_id': 'businessofsoftware', + 'duration': 3610, + 'description': None, + 'thumbnail': 'https://i.vimeocdn.com/video/376682406-f34043e7b766af6bef2af81366eacd6724f3fc3173179a11a97a1e26587c9529-d_1280', + }, + 'params': { + 'format': 'best[protocol=https]', + }, + }, + { + 'url': 'http://vimeo.com/68375962', + 'md5': 'aaf896bdb7ddd6476df50007a0ac0ae7', + 'note': 'Video protected with password', + 'info_dict': { + 'id': '68375962', + 'ext': 'mp4', + 'title': 'youtube-dl password protected test video', + 'timestamp': 1371200155, + 'upload_date': '20130614', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/user18948128', + 'uploader_id': 'user18948128', + 'uploader': 'Jaime Marquínez Ferrándiz', + 'duration': 10, + 'description': 'md5:6173f270cd0c0119f22817204b3eb86c', + 'thumbnail': 'https://i.vimeocdn.com/video/440665496-b2c5aee2b61089442c794f64113a8e8f7d5763c3e6b3ebfaf696ae6413f8b1f4-d_1280', + 'view_count': int, + 'comment_count': int, + 'like_count': int, + }, + 'params': { + 'format': 'best[protocol=https]', + 'videopassword': 'youtube-dl', + }, + }, + { + 'url': 'http://vimeo.com/channels/keypeele/75629013', + 'md5': '2f86a05afe9d7abc0b9126d229bbe15d', + 'info_dict': { + 'id': '75629013', + 'ext': 'mp4', + 'title': 'Key & Peele: Terrorist Interrogation', + 'description': 'md5:6173f270cd0c0119f22817204b3eb86c', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/atencio', + 'uploader_id': 'atencio', + 'uploader': 'Peter Atencio', + 'channel_id': 'keypeele', + 'channel_url': r're:https?://(?:www\.)?vimeo\.com/channels/keypeele', + 'timestamp': 1380339469, + 'upload_date': '20130928', + 'duration': 187, + 'thumbnail': 'https://i.vimeocdn.com/video/450239872-a05512d9b1e55d707a7c04365c10980f327b06d966351bc403a5d5d65c95e572-d_1280', + 'view_count': int, + 'comment_count': int, + 'like_count': int, + }, + 'params': {'format': 'http-1080p'}, + }, + { + 'url': 
'http://vimeo.com/76979871', + 'note': 'Video with subtitles', + 'info_dict': { + 'id': '76979871', + 'ext': 'mov', + 'title': 'The New Vimeo Player (You Know, For Videos)', + 'description': 'md5:2ec900bf97c3f389378a96aee11260ea', + 'timestamp': 1381846109, + 'upload_date': '20131015', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/staff', + 'uploader_id': 'staff', + 'uploader': 'Vimeo Staff', + 'duration': 62, + 'subtitles': { + 'de': [{'ext': 'vtt'}], + 'en': [{'ext': 'vtt'}], + 'es': [{'ext': 'vtt'}], + 'fr': [{'ext': 'vtt'}], + }, + }, + 'expected_warnings': ['Ignoring subtitle tracks found in the HLS manifest'], + }, + { + # from https://www.ouya.tv/game/Pier-Solar-and-the-Great-Architects/ + 'url': 'https://player.vimeo.com/video/98044508', + 'note': 'The js code contains assignments to the same variable as the config', + 'info_dict': { + 'id': '98044508', + 'ext': 'mp4', + 'title': 'Pier Solar OUYA Official Trailer', + 'uploader': 'Tulio Gonçalves', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/user28849593', + 'uploader_id': 'user28849593', + 'duration': 118, + 'thumbnail': 'https://i.vimeocdn.com/video/478636036-c18440305ef3df9decfb6bf207a61fe39d2d17fa462a96f6f2d93d30492b037d-d_1280', + }, + }, + { + # contains original format + 'url': 'https://vimeo.com/33951933', + 'md5': '53c688fa95a55bf4b7293d37a89c5c53', + 'info_dict': { + 'id': '33951933', + 'ext': 'mp4', + 'title': 'FOX CLASSICS - Forever Classic ID - A Full Minute', + 'uploader': 'The DMCI', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/dmci', + 'uploader_id': 'dmci', + 'timestamp': 1324343742, + 'upload_date': '20111220', + 'description': 'md5:ae23671e82d05415868f7ad1aec21147', + 'duration': 60, + 'comment_count': int, + 'view_count': int, + 'thumbnail': 'https://i.vimeocdn.com/video/231174622-dd07f015e9221ff529d451e1cc31c982b5d87bfafa48c4189b1da72824ee289a-d_1280', + 'like_count': int, + }, + }, + { + 'note': 'Contains original format not accessible in webpage', + 'url': 'https://vimeo.com/393756517', + 'md5': 'c464af248b592190a5ffbb5d33f382b0', + 'info_dict': { + 'id': '393756517', + 'ext': 'mov', + 'timestamp': 1582642091, + 'uploader_id': 'frameworkla', + 'title': 'Straight To Hell - Sabrina: Netflix', + 'uploader': 'Framework Studio', + 'description': 'md5:f2edc61af3ea7a5592681ddbb683db73', + 'upload_date': '20200225', + 'duration': 176, + 'thumbnail': 'https://i.vimeocdn.com/video/859377297-836494a4ef775e9d4edbace83937d9ad34dc846c688c0c419c0e87f7ab06c4b3-d_1280', + 'uploader_url': 'https://vimeo.com/frameworkla', + }, + }, + { + # only available via https://vimeo.com/channels/tributes/6213729 and + # not via https://vimeo.com/6213729 + 'url': 'https://vimeo.com/channels/tributes/6213729', + 'info_dict': { + 'id': '6213729', + 'ext': 'mp4', + 'title': 'Vimeo Tribute: The Shining', + 'uploader': 'Casey Donahue', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/caseydonahue', + 'uploader_id': 'caseydonahue', + 'channel_url': r're:https?://(?:www\.)?vimeo\.com/channels/tributes', + 'channel_id': 'tributes', + 'timestamp': 1250886430, + 'upload_date': '20090821', + 'description': 'md5:bdbf314014e58713e6e5b66eb252f4a6', + 'duration': 321, + 'comment_count': int, + 'view_count': int, + 'thumbnail': 'https://i.vimeocdn.com/video/22728298-bfc22146f930de7cf497821c7b0b9f168099201ecca39b00b6bd31fcedfca7a6-d_1280', + 'like_count': int, + }, + 'params': { + 'skip_download': True, + }, + }, + { + # redirects to ondemand extractor and should be passed through it + # for successful extraction + 'url': 
'https://vimeo.com/73445910', + 'info_dict': { + 'id': '73445910', + 'ext': 'mp4', + 'title': 'The Reluctant Revolutionary', + 'uploader': '10Ft Films', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/tenfootfilms', + 'uploader_id': 'tenfootfilms', + 'description': 'md5:0fa704e05b04f91f40b7f3ca2e801384', + 'upload_date': '20130830', + 'timestamp': 1377853339, + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'this page is no longer available.', + }, + { + 'url': 'http://player.vimeo.com/video/68375962', + 'md5': 'aaf896bdb7ddd6476df50007a0ac0ae7', + 'info_dict': { + 'id': '68375962', + 'ext': 'mp4', + 'title': 'youtube-dl password protected test video', + 'timestamp': 1371200155, + 'upload_date': '20130614', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/user18948128', + 'uploader_id': 'user18948128', + 'uploader': 'Jaime Marquínez Ferrándiz', + 'duration': 10, + 'description': 'md5:6173f270cd0c0119f22817204b3eb86c', + 'thumbnail': 'https://i.vimeocdn.com/video/440665496-b2c5aee2b61089442c794f64113a8e8f7d5763c3e6b3ebfaf696ae6413f8b1f4-d_1280', + 'view_count': int, + 'comment_count': int, + 'like_count': int, + }, + 'params': { + 'format': 'best[protocol=https]', + 'videopassword': 'youtube-dl', + }, + }, + { + 'url': 'http://vimeo.com/moogaloop.swf?clip_id=2539741', + 'only_matching': True, + }, + { + 'url': 'https://vimeo.com/109815029', + 'note': 'Video not completely processed, "failed" seed status', + 'only_matching': True, + }, + { + 'url': 'https://vimeo.com/groups/travelhd/videos/22439234', + 'only_matching': True, + }, + { + 'url': 'https://vimeo.com/album/2632481/video/79010983', + 'only_matching': True, + }, + { + 'url': 'https://vimeo.com/showcase/3253534/video/119195465', + 'note': 'A video in a password protected album (showcase)', + 'info_dict': { + 'id': '119195465', + 'ext': 'mp4', + 'title': "youtube-dl test video '' ä↭𝕐-BaW jenozKc", + 'uploader': 'Philipp Hagemeister', + 'uploader_id': 'user20132939', + 'description': 'md5:fa7b6c6d8db0bdc353893df2f111855b', + 'upload_date': '20150209', + 'timestamp': 1423518307, + 'thumbnail': 'https://i.vimeocdn.com/video/default_1280', + 'duration': 10, + 'like_count': int, + 'uploader_url': 'https://vimeo.com/user20132939', + 'view_count': int, + 'comment_count': int, + }, + 'params': { + 'format': 'best[protocol=https]', + 'videopassword': 'youtube-dl', + }, + }, + { + # source file returns 403: Forbidden + 'url': 'https://vimeo.com/7809605', + 'only_matching': True, + }, + { + 'note': 'Direct URL with hash', + 'url': 'https://vimeo.com/160743502/abd0e13fb4', + 'info_dict': { + 'id': '160743502', + 'ext': 'mp4', + 'uploader': 'Julian Tryba', + 'uploader_id': 'aliniamedia', + 'title': 'Harrisville New Hampshire', + 'timestamp': 1459259666, + 'upload_date': '20160329', + 'release_timestamp': 1459259666, + 'license': 'by-nc', + 'duration': 159, + 'comment_count': int, + 'thumbnail': 'https://i.vimeocdn.com/video/562802436-585eeb13b5020c6ac0f171a2234067938098f84737787df05ff0d767f6d54ee9-d_1280', + 'like_count': int, + 'uploader_url': 'https://vimeo.com/aliniamedia', + 'release_date': '20160329', + }, + 'params': {'skip_download': True}, + }, + { + 'url': 'https://vimeo.com/138909882', + 'info_dict': { + 'id': '138909882', + 'ext': 'mp4', + 'title': 'Eastnor Castle 2015 Firework Champions - The Promo!', + 'description': 'md5:5967e090768a831488f6e74b7821b3c1', + 'uploader_id': 'fireworkchampions', + 'uploader': 'Firework Champions', + 'upload_date': '20150910', + 'timestamp': 1441901895, + }, + 'params': { + 
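+                # (editor's note, not part of upstream yt-dlp: test 'params' mirror
+                # yt-dlp CLI options. The 'format': 'Original' entry below selects the
+                # synthetic source-file format that _extract_original_format() builds
+                # from the 'load_download_config' endpoint, whose format_id defaults
+                # to 'Original'.)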
'skip_download': True, + 'format': 'Original', + }, + }, + { + 'url': 'https://vimeo.com/channels/staffpicks/143603739', + 'info_dict': { + 'id': '143603739', + 'ext': 'mp4', + 'uploader': 'Karim Huu Do', + 'timestamp': 1445846953, + 'upload_date': '20151026', + 'title': 'The Shoes - Submarine Feat. Blaine Harrison', + 'uploader_id': 'karimhd', + 'description': 'md5:8e2eea76de4504c2e8020a9bcfa1e843', + 'channel_id': 'staffpicks', + 'duration': 336, + 'comment_count': int, + 'view_count': int, + 'thumbnail': 'https://i.vimeocdn.com/video/541243181-b593db36a16db2f0096f655da3f5a4dc46b8766d77b0f440df937ecb0c418347-d_1280', + 'like_count': int, + 'uploader_url': 'https://vimeo.com/karimhd', + 'channel_url': 'https://vimeo.com/channels/staffpicks', + }, + 'params': {'skip_download': 'm3u8'}, + }, + { + # requires passing unlisted_hash(a52724358e) to load_download_config request + 'url': 'https://vimeo.com/392479337/a52724358e', + 'only_matching': True, + }, + { + # similar, but all numeric: ID must be 581039021, not 9603038895 + # issue #29690 + 'url': 'https://vimeo.com/581039021/9603038895', + 'info_dict': { + 'id': '581039021', + 'ext': 'mp4', + 'timestamp': 1627621014, + 'release_timestamp': 1627621014, + 'duration': 976, + 'comment_count': int, + 'thumbnail': 'https://i.vimeocdn.com/video/1202249320-4ddb2c30398c0dc0ee059172d1bd5ea481ad12f0e0e3ad01d2266f56c744b015-d_1280', + 'like_count': int, + 'uploader_url': 'https://vimeo.com/txwestcapital', + 'release_date': '20210730', + 'uploader': 'Christopher Inks', + 'title': 'Thursday, July 29, 2021 BMA Evening Video Update', + 'uploader_id': 'txwestcapital', + 'upload_date': '20210730', + }, + 'params': { + 'skip_download': True, + }, + }, + { + # user playlist alias -> https://vimeo.com/258705797 + 'url': 'https://vimeo.com/user26785108/newspiritualguide', + 'only_matching': True, + }, + # https://gettingthingsdone.com/workflowmap/ + # vimeo embed with check-password page protected by Referer header + ] + + @classmethod + def _extract_embed_urls(cls, url, webpage): + for embed_url in super()._extract_embed_urls(url, webpage): + yield cls._smuggle_referrer(embed_url, url) + + @classmethod + def _extract_url(cls, url, webpage): + return next(cls._extract_embed_urls(url, webpage), None) + + def _verify_player_video_password(self, url, video_id, headers): + password = self._get_video_password() + data = urlencode_postdata({ + 'password': base64.b64encode(password.encode()), + }) + headers = merge_dicts(headers, { + 'Content-Type': 'application/x-www-form-urlencoded', + }) + checked = self._download_json( + f'{compat_urlparse.urlsplit(url)._replace(query=None).geturl()}/check-password', + video_id, 'Verifying the password', data=data, headers=headers) + if checked is False: + raise ExtractorError('Wrong video password', expected=True) + return checked + + def _extract_from_api(self, video_id, unlisted_hash=None): + token = self._download_json( + 'https://vimeo.com/_rv/jwt', video_id, headers={ + 'X-Requested-With': 'XMLHttpRequest' + })['token'] + api_url = 'https://api.vimeo.com/videos/' + video_id + if unlisted_hash: + api_url += ':' + unlisted_hash + video = self._download_json( + api_url, video_id, headers={ + 'Authorization': 'jwt ' + token, + }, query={ + 'fields': 'config_url,created_time,description,license,metadata.connections.comments.total,metadata.connections.likes.total,release_time,stats.plays', + }) + info = self._parse_config(self._download_json( + video['config_url'], video_id), video_id) + get_timestamp = lambda x: 
parse_iso8601(video.get(x + '_time')) + info.update({ + 'description': video.get('description'), + 'license': video.get('license'), + 'release_timestamp': get_timestamp('release'), + 'timestamp': get_timestamp('created'), + 'view_count': int_or_none(try_get(video, lambda x: x['stats']['plays'])), + }) + connections = try_get( + video, lambda x: x['metadata']['connections'], dict) or {} + for k in ('comment', 'like'): + info[k + '_count'] = int_or_none(try_get(connections, lambda x: x[k + 's']['total'])) + return info + + def _try_album_password(self, url): + album_id = self._search_regex( + r'vimeo\.com/(?:album|showcase)/([^/]+)', url, 'album id', default=None) + if not album_id: + return + viewer = self._download_json( + 'https://vimeo.com/_rv/viewer', album_id, fatal=False) + if not viewer: + webpage = self._download_webpage(url, album_id) + viewer = self._parse_json(self._search_regex( + r'bootstrap_data\s*=\s*({.+?})</script>', + webpage, 'bootstrap data'), album_id)['viewer'] + jwt = viewer['jwt'] + album = self._download_json( + 'https://api.vimeo.com/albums/' + album_id, + album_id, headers={'Authorization': 'jwt ' + jwt}, + query={'fields': 'description,name,privacy'}) + if try_get(album, lambda x: x['privacy']['view']) == 'password': + password = self.get_param('videopassword') + if not password: + raise ExtractorError( + 'This album is protected by a password, use the --video-password option', + expected=True) + self._set_vimeo_cookie('vuid', viewer['vuid']) + try: + self._download_json( + 'https://vimeo.com/showcase/%s/auth' % album_id, + album_id, 'Verifying the password', data=urlencode_postdata({ + 'password': password, + 'token': viewer['xsrft'], + }), headers={ + 'X-Requested-With': 'XMLHttpRequest', + }) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + raise ExtractorError('Wrong password', expected=True) + raise + + def _real_extract(self, url): + url, data, headers = self._unsmuggle_headers(url) + if 'Referer' not in headers: + headers['Referer'] = url + + # Extract ID from URL + mobj = self._match_valid_url(url).groupdict() + video_id, unlisted_hash = mobj['id'], mobj.get('unlisted_hash') + if unlisted_hash: + return self._extract_from_api(video_id, unlisted_hash) + + if any(p in url for p in ('play_redirect_hls', 'moogaloop.swf')): + url = 'https://vimeo.com/' + video_id + + self._try_album_password(url) + try: + # Retrieve video webpage to extract further information + webpage, urlh = self._download_webpage_handle( + url, video_id, headers=headers) + redirect_url = urlh.url + except ExtractorError as ee: + if isinstance(ee.cause, HTTPError) and ee.cause.status == 403: + errmsg = ee.cause.response.read() + if b'Because of its privacy settings, this video cannot be played here' in errmsg: + raise ExtractorError( + 'Cannot download embed-only video without embedding ' + 'URL. 
Please call yt-dlp with the URL of the page ' + 'that embeds this video.', + expected=True) + raise + + if '://player.vimeo.com/video/' in url: + config = self._search_json( + r'\b(?:playerC|c)onfig\s*=', webpage, 'info section', video_id) + if config.get('view') == 4: + config = self._verify_player_video_password( + redirect_url, video_id, headers) + return self._parse_config(config, video_id) + + if re.search(r'<form[^>]+?id="pw_form"', webpage): + video_password = self._get_video_password() + token, vuid = self._extract_xsrft_and_vuid(webpage) + webpage = self._verify_video_password( + redirect_url, video_id, video_password, token, vuid) + + vimeo_config = self._extract_vimeo_config(webpage, video_id, default=None) + if vimeo_config: + seed_status = vimeo_config.get('seed_status') or {} + if seed_status.get('state') == 'failed': + raise ExtractorError( + '%s said: %s' % (self.IE_NAME, seed_status['title']), + expected=True) + + cc_license = None + timestamp = None + video_description = None + info_dict = {} + config_url = None + + channel_id = self._search_regex( + r'vimeo\.com/channels/([^/]+)', url, 'channel id', default=None) + if channel_id: + config_url = self._html_search_regex( + r'\bdata-config-url="([^"]+)"', webpage, 'config URL', default=None) + video_description = clean_html(get_element_by_class('description', webpage)) + info_dict.update({ + 'channel_id': channel_id, + 'channel_url': 'https://vimeo.com/channels/' + channel_id, + }) + if not config_url: + page_config = self._parse_json(self._search_regex( + r'vimeo\.(?:clip|vod_title)_page_config\s*=\s*({.+?});', + webpage, 'page config', default='{}'), video_id, fatal=False) + if not page_config: + return self._extract_from_api(video_id) + config_url = page_config['player']['config_url'] + cc_license = page_config.get('cc_license') + clip = page_config.get('clip') or {} + timestamp = clip.get('uploaded_on') + video_description = clean_html( + clip.get('description') or page_config.get('description_html_escaped')) + config = self._download_json(config_url, video_id) + video = config.get('video') or {} + vod = video.get('vod') or {} + + def is_rented(): + if '>You rented this title.<' in webpage: + return True + if try_get(config, lambda x: x['user']['purchased']): + return True + for purchase_option in (vod.get('purchase_options') or []): + if purchase_option.get('purchased'): + return True + label = purchase_option.get('label_string') + if label and (label.startswith('You rented this') or label.endswith(' remaining')): + return True + return False + + if is_rented() and vod.get('is_trailer'): + feature_id = vod.get('feature_id') + if feature_id and not data.get('force_feature_id', False): + return self.url_result(smuggle_url( + 'https://player.vimeo.com/player/%s' % feature_id, + {'force_feature_id': True}), 'Vimeo') + + if not video_description: + video_description = self._html_search_regex( + r'(?s)<div\s+class="[^"]*description[^"]*"[^>]*>(.*?)</div>', + webpage, 'description', default=None) + if not video_description: + video_description = self._html_search_meta( + ['description', 'og:description', 'twitter:description'], + webpage, default=None) + if not video_description: + self.report_warning('Cannot find video description') + + if not timestamp: + timestamp = self._search_regex( + r'<time[^>]+datetime="([^"]+)"', webpage, + 'timestamp', default=None) + + view_count = int_or_none(self._search_regex(r'UserPlays:(\d+)', webpage, 'view count', default=None)) + like_count = 
int_or_none(self._search_regex(r'UserLikes:(\d+)', webpage, 'like count', default=None)) + comment_count = int_or_none(self._search_regex(r'UserComments:(\d+)', webpage, 'comment count', default=None)) + + formats = [] + + source_format = self._extract_original_format( + 'https://vimeo.com/' + video_id, video_id, video.get('unlisted_hash')) + if source_format: + formats.append(source_format) + + info_dict_config = self._parse_config(config, video_id) + formats.extend(info_dict_config['formats']) + info_dict['_format_sort_fields'] = info_dict_config['_format_sort_fields'] + + json_ld = self._search_json_ld(webpage, video_id, default={}) + + if not cc_license: + cc_license = self._search_regex( + r'<link[^>]+rel=["\']license["\'][^>]+href=(["\'])(?P<license>(?:(?!\1).)+)\1', + webpage, 'license', default=None, group='license') + + info_dict.update({ + 'formats': formats, + 'timestamp': unified_timestamp(timestamp), + 'description': video_description, + 'webpage_url': url, + 'view_count': view_count, + 'like_count': like_count, + 'comment_count': comment_count, + 'license': cc_license, + }) + + return merge_dicts(info_dict, info_dict_config, json_ld) + + +class VimeoOndemandIE(VimeoIE): # XXX: Do not subclass from concrete IE + IE_NAME = 'vimeo:ondemand' + _VALID_URL = r'https?://(?:www\.)?vimeo\.com/ondemand/(?:[^/]+/)?(?P<id>[^/?#&]+)' + _TESTS = [{ + # ondemand video not available via https://vimeo.com/id + 'url': 'https://vimeo.com/ondemand/20704', + 'md5': 'c424deda8c7f73c1dfb3edd7630e2f35', + 'info_dict': { + 'id': '105442900', + 'ext': 'mp4', + 'title': 'המעבדה - במאי יותם פלדמן', + 'uploader': 'גם סרטים', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/gumfilms', + 'uploader_id': 'gumfilms', + 'description': 'md5:aeeba3dbd4d04b0fa98a4fdc9c639998', + 'upload_date': '20140906', + 'timestamp': 1410032453, + 'thumbnail': 'https://i.vimeocdn.com/video/488238335-d7bf151c364cff8d467f1b73784668fe60aae28a54573a35d53a1210ae283bd8-d_1280', + 'comment_count': int, + 'license': 'https://creativecommons.org/licenses/by-nc-nd/3.0/', + 'duration': 53, + 'view_count': int, + 'like_count': int, + }, + 'params': { + 'format': 'best[protocol=https]', + }, + 'expected_warnings': ['Unable to download JSON metadata'], + }, { + # requires Referer to be passed along with og:video:url + 'url': 'https://vimeo.com/ondemand/36938/126682985', + 'info_dict': { + 'id': '126584684', + 'ext': 'mp4', + 'title': 'Rävlock, rätt läte på rätt plats', + 'uploader': 'Lindroth & Norin', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/lindrothnorin', + 'uploader_id': 'lindrothnorin', + 'description': 'md5:c3c46a90529612c8279fb6af803fc0df', + 'upload_date': '20150502', + 'timestamp': 1430586422, + 'duration': 121, + 'comment_count': int, + 'view_count': int, + 'thumbnail': 'https://i.vimeocdn.com/video/517077723-7066ae1d9a79d3eb361334fb5d58ec13c8f04b52f8dd5eadfbd6fb0bcf11f613-d_1280', + 'like_count': int, + }, + 'params': { + 'skip_download': True, + }, + 'expected_warnings': ['Unable to download JSON metadata'], + }, { + 'url': 'https://vimeo.com/ondemand/nazmaalik', + 'only_matching': True, + }, { + 'url': 'https://vimeo.com/ondemand/141692381', + 'only_matching': True, + }, { + 'url': 'https://vimeo.com/ondemand/thelastcolony/150274832', + 'only_matching': True, + }] + + +class VimeoChannelIE(VimeoBaseInfoExtractor): + IE_NAME = 'vimeo:channel' + _VALID_URL = r'https://vimeo\.com/channels/(?P<id>[^/?#]+)/?(?:$|[?#])' + _MORE_PAGES_INDICATOR = r'<a.+?rel="next"' + _TITLE = None + _TITLE_RE = r'<link 
rel="alternate"[^>]+?title="(.*?)"' + _TESTS = [{ + 'url': 'https://vimeo.com/channels/tributes', + 'info_dict': { + 'id': 'tributes', + 'title': 'Vimeo Tributes', + }, + 'playlist_mincount': 22, + }] + _BASE_URL_TEMPL = 'https://vimeo.com/channels/%s' + + def _page_url(self, base_url, pagenum): + return '%s/videos/page:%d/' % (base_url, pagenum) + + def _extract_list_title(self, webpage): + return self._TITLE or self._html_search_regex( + self._TITLE_RE, webpage, 'list title', fatal=False) + + def _title_and_entries(self, list_id, base_url): + for pagenum in itertools.count(1): + page_url = self._page_url(base_url, pagenum) + webpage = self._download_webpage( + page_url, list_id, + 'Downloading page %s' % pagenum) + + if pagenum == 1: + yield self._extract_list_title(webpage) + + # Try extracting href first since not all videos are available via + # short https://vimeo.com/id URL (e.g. https://vimeo.com/channels/tributes/6213729) + clips = re.findall( + r'id="clip_(\d+)"[^>]*>\s*<a[^>]+href="(/(?:[^/]+/)*\1)(?:[^>]+\btitle="([^"]+)")?', webpage) + if clips: + for video_id, video_url, video_title in clips: + yield self.url_result( + compat_urlparse.urljoin(base_url, video_url), + VimeoIE.ie_key(), video_id=video_id, video_title=video_title) + # More relaxed fallback + else: + for video_id in re.findall(r'id=["\']clip_(\d+)', webpage): + yield self.url_result( + 'https://vimeo.com/%s' % video_id, + VimeoIE.ie_key(), video_id=video_id) + + if re.search(self._MORE_PAGES_INDICATOR, webpage, re.DOTALL) is None: + break + + def _extract_videos(self, list_id, base_url): + title_and_entries = self._title_and_entries(list_id, base_url) + list_title = next(title_and_entries) + return self.playlist_result(title_and_entries, list_id, list_title) + + def _real_extract(self, url): + channel_id = self._match_id(url) + return self._extract_videos(channel_id, self._BASE_URL_TEMPL % channel_id) + + +class VimeoUserIE(VimeoChannelIE): # XXX: Do not subclass from concrete IE + IE_NAME = 'vimeo:user' + _VALID_URL = r'https://vimeo\.com/(?!(?:[0-9]+|watchlater)(?:$|[?#/]))(?P<id>[^/]+)(?:/videos)?/?(?:$|[?#])' + _TITLE_RE = r'<a[^>]+?class="user">([^<>]+?)</a>' + _TESTS = [{ + 'url': 'https://vimeo.com/nkistudio/videos', + 'info_dict': { + 'title': 'Nki', + 'id': 'nkistudio', + }, + 'playlist_mincount': 66, + }, { + 'url': 'https://vimeo.com/nkistudio/', + 'only_matching': True, + }] + _BASE_URL_TEMPL = 'https://vimeo.com/%s' + + +class VimeoAlbumIE(VimeoBaseInfoExtractor): + IE_NAME = 'vimeo:album' + _VALID_URL = r'https://vimeo\.com/(?:album|showcase)/(?P<id>\d+)(?:$|[?#]|/(?!video))' + _TITLE_RE = r'<header id="page_header">\n\s*<h1>(.*?)</h1>' + _TESTS = [{ + 'url': 'https://vimeo.com/album/2632481', + 'info_dict': { + 'id': '2632481', + 'title': 'Staff Favorites: November 2013', + }, + 'playlist_mincount': 13, + }, { + 'note': 'Password-protected album', + 'url': 'https://vimeo.com/album/3253534', + 'info_dict': { + 'title': 'test', + 'id': '3253534', + }, + 'playlist_count': 1, + 'params': { + 'videopassword': 'youtube-dl', + } + }] + _PAGE_SIZE = 100 + + def _fetch_page(self, album_id, authorization, hashed_pass, page): + api_page = page + 1 + query = { + 'fields': 'link,uri', + 'page': api_page, + 'per_page': self._PAGE_SIZE, + } + if hashed_pass: + query['_hashed_pass'] = hashed_pass + try: + videos = self._download_json( + 'https://api.vimeo.com/albums/%s/videos' % album_id, + album_id, 'Downloading page %d' % api_page, query=query, headers={ + 'Authorization': 'jwt ' + authorization, + })['data'] + 
except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 400: + return + for video in videos: + link = video.get('link') + if not link: + continue + uri = video.get('uri') + video_id = self._search_regex(r'/videos/(\d+)', uri, 'video_id', default=None) if uri else None + yield self.url_result(link, VimeoIE.ie_key(), video_id) + + def _real_extract(self, url): + album_id = self._match_id(url) + viewer = self._download_json( + 'https://vimeo.com/_rv/viewer', album_id, fatal=False) + if not viewer: + webpage = self._download_webpage(url, album_id) + viewer = self._parse_json(self._search_regex( + r'bootstrap_data\s*=\s*({.+?})</script>', + webpage, 'bootstrap data'), album_id)['viewer'] + jwt = viewer['jwt'] + album = self._download_json( + 'https://api.vimeo.com/albums/' + album_id, + album_id, headers={'Authorization': 'jwt ' + jwt}, + query={'fields': 'description,name,privacy'}) + hashed_pass = None + if try_get(album, lambda x: x['privacy']['view']) == 'password': + password = self.get_param('videopassword') + if not password: + raise ExtractorError( + 'This album is protected by a password, use the --video-password option', + expected=True) + self._set_vimeo_cookie('vuid', viewer['vuid']) + try: + hashed_pass = self._download_json( + 'https://vimeo.com/showcase/%s/auth' % album_id, + album_id, 'Verifying the password', data=urlencode_postdata({ + 'password': password, + 'token': viewer['xsrft'], + }), headers={ + 'X-Requested-With': 'XMLHttpRequest', + })['hashed_pass'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + raise ExtractorError('Wrong password', expected=True) + raise + entries = OnDemandPagedList(functools.partial( + self._fetch_page, album_id, jwt, hashed_pass), self._PAGE_SIZE) + return self.playlist_result( + entries, album_id, album.get('name'), album.get('description')) + + +class VimeoGroupsIE(VimeoChannelIE): # XXX: Do not subclass from concrete IE + IE_NAME = 'vimeo:group' + _VALID_URL = r'https://vimeo\.com/groups/(?P<id>[^/]+)(?:/(?!videos?/\d+)|$)' + _TESTS = [{ + 'url': 'https://vimeo.com/groups/meetup', + 'info_dict': { + 'id': 'meetup', + 'title': 'Vimeo Meetup!', + }, + 'playlist_mincount': 27, + }] + _BASE_URL_TEMPL = 'https://vimeo.com/groups/%s' + + +class VimeoReviewIE(VimeoBaseInfoExtractor): + IE_NAME = 'vimeo:review' + IE_DESC = 'Review pages on vimeo' + _VALID_URL = r'(?P<url>https://vimeo\.com/[^/]+/review/(?P<id>[^/]+)/[0-9a-f]{10})' + _TESTS = [{ + 'url': 'https://vimeo.com/user21297594/review/75524534/3c257a1b5d', + 'md5': 'c507a72f780cacc12b2248bb4006d253', + 'info_dict': { + 'id': '75524534', + 'ext': 'mp4', + 'title': "DICK HARDWICK 'Comedian'", + 'uploader': 'Richard Hardwick', + 'uploader_id': 'user21297594', + 'description': "Comedian Dick Hardwick's five minute demo filmed in front of a live theater audience.\nEdit by Doug Mattocks", + 'duration': 304, + 'thumbnail': 'https://i.vimeocdn.com/video/450115033-43303819d9ebe24c2630352e18b7056d25197d09b3ae901abdac4c4f1d68de71-d_1280', + 'uploader_url': 'https://vimeo.com/user21297594', + }, + }, { + 'note': 'video player needs Referer', + 'url': 'https://vimeo.com/user22258446/review/91613211/13f927e053', + 'md5': '6295fdab8f4bf6a002d058b2c6dce276', + 'info_dict': { + 'id': '91613211', + 'ext': 'mp4', + 'title': 're:(?i)^Death by dogma versus assembling agile . 
Sander Hoogendoorn', + 'uploader': 'DevWeek Events', + 'duration': 2773, + 'thumbnail': r're:^https?://.*\.jpg$', + 'uploader_id': 'user22258446', + }, + 'skip': 'video gone', + }, { + 'note': 'Password protected', + 'url': 'https://vimeo.com/user37284429/review/138823582/c4d865efde', + 'info_dict': { + 'id': '138823582', + 'ext': 'mp4', + 'title': 'EFFICIENT PICKUP MASTERCLASS MODULE 1', + 'uploader': 'TMB', + 'uploader_id': 'user37284429', + }, + 'params': { + 'videopassword': 'holygrail', + }, + 'skip': 'video gone', + }] + + def _real_extract(self, url): + page_url, video_id = self._match_valid_url(url).groups() + data = self._download_json( + page_url.replace('/review/', '/review/data/'), video_id) + if data.get('isLocked') is True: + video_password = self._get_video_password() + viewer = self._download_json( + 'https://vimeo.com/_rv/viewer', video_id) + webpage = self._verify_video_password( + 'https://vimeo.com/' + video_id, video_id, + video_password, viewer['xsrft'], viewer['vuid']) + clip_page_config = self._parse_json(self._search_regex( + r'window\.vimeo\.clip_page_config\s*=\s*({.+?});', + webpage, 'clip page config'), video_id) + config_url = clip_page_config['player']['config_url'] + clip_data = clip_page_config.get('clip') or {} + else: + clip_data = data['clipData'] + config_url = clip_data['configUrl'] + config = self._download_json(config_url, video_id) + info_dict = self._parse_config(config, video_id) + source_format = self._extract_original_format( + page_url + '/action', video_id) + if source_format: + info_dict['formats'].append(source_format) + info_dict['description'] = clean_html(clip_data.get('description')) + return info_dict + + +class VimeoWatchLaterIE(VimeoChannelIE): # XXX: Do not subclass from concrete IE + IE_NAME = 'vimeo:watchlater' + IE_DESC = 'Vimeo watch later list, ":vimeowatchlater" keyword (requires authentication)' + _VALID_URL = r'https://vimeo\.com/(?:home/)?watchlater|:vimeowatchlater' + _TITLE = 'Watch Later' + _LOGIN_REQUIRED = True + _TESTS = [{ + 'url': 'https://vimeo.com/watchlater', + 'only_matching': True, + }] + + def _page_url(self, base_url, pagenum): + url = '%s/page:%d/' % (base_url, pagenum) + request = Request(url) + # Set the header to get a partial html page with the ids, + # the normal page doesn't contain them. 
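+        # (editor's note, not part of upstream yt-dlp: equivalently the header could
+        # be passed at construction time, e.g.
+        #     return Request(url, headers={'X-Requested-With': 'XMLHttpRequest'})
+        # since _download_webpage() accepts a Request object in place of a URL string.)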
+ request.headers['X-Requested-With'] = 'XMLHttpRequest' + return request + + def _real_extract(self, url): + return self._extract_videos('watchlater', 'https://vimeo.com/watchlater') + + +class VimeoLikesIE(VimeoChannelIE): # XXX: Do not subclass from concrete IE + _VALID_URL = r'https://(?:www\.)?vimeo\.com/(?P<id>[^/]+)/likes/?(?:$|[?#]|sort:)' + IE_NAME = 'vimeo:likes' + IE_DESC = 'Vimeo user likes' + _TESTS = [{ + 'url': 'https://vimeo.com/user755559/likes/', + 'playlist_mincount': 293, + 'info_dict': { + 'id': 'user755559', + 'title': 'urza’s Likes', + }, + }, { + 'url': 'https://vimeo.com/stormlapse/likes', + 'only_matching': True, + }] + + def _page_url(self, base_url, pagenum): + return '%s/page:%d/' % (base_url, pagenum) + + def _real_extract(self, url): + user_id = self._match_id(url) + return self._extract_videos(user_id, 'https://vimeo.com/%s/likes' % user_id) + + +class VHXEmbedIE(VimeoBaseInfoExtractor): + IE_NAME = 'vhx:embed' + _VALID_URL = r'https?://embed\.vhx\.tv/videos/(?P<id>\d+)' + _EMBED_REGEX = [r'<iframe[^>]+src="(?P<url>https?://embed\.vhx\.tv/videos/\d+[^"]*)"'] + + @classmethod + def _extract_embed_urls(cls, url, webpage): + for embed_url in super()._extract_embed_urls(url, webpage): + yield cls._smuggle_referrer(embed_url, url) + + def _real_extract(self, url): + video_id = self._match_id(url) + url, _, headers = self._unsmuggle_headers(url) + webpage = self._download_webpage(url, video_id, headers=headers) + config_url = self._parse_json(self._search_regex( + r'window\.OTTData\s*=\s*({.+})', webpage, + 'ott data'), video_id, js_to_json)['config_url'] + config = self._download_json(config_url, video_id) + info = self._parse_config(config, video_id) + info['id'] = video_id + return info + + +class VimeoProIE(VimeoBaseInfoExtractor): + IE_NAME = 'vimeo:pro' + _VALID_URL = r'https?://(?:www\.)?vimeopro\.com/[^/?#]+/(?P<slug>[^/?#]+)(?:(?:/videos?/(?P<id>[0-9]+)))?' 
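+    # (editor's note, not part of upstream yt-dlp: illustrative matches for the
+    # pattern above, taken from the tests that follow --
+    #     https://vimeopro.com/openstreetmapus/state-of-the-map-us-2013                  -> slug only, id is None
+    #     https://vimeopro.com/openstreetmapus/state-of-the-map-us-2013/video/68093876   -> slug plus numeric id
+    # when the id group is present, _real_extract() uses it as the display_id.)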
+ _TESTS = [{ + # Vimeo URL derived from video_id + 'url': 'http://vimeopro.com/openstreetmapus/state-of-the-map-us-2013/video/68093876', + 'md5': '3b5ca6aa22b60dfeeadf50b72e44ed82', + 'note': 'Vimeo Pro video (#1197)', + 'info_dict': { + 'id': '68093876', + 'ext': 'mp4', + 'uploader_url': r're:https?://(?:www\.)?vimeo\.com/openstreetmapus', + 'uploader_id': 'openstreetmapus', + 'uploader': 'OpenStreetMap US', + 'title': 'Andy Allan - Putting the Carto into OpenStreetMap Cartography', + 'description': 'md5:2c362968038d4499f4d79f88458590c1', + 'duration': 1595, + 'upload_date': '20130610', + 'timestamp': 1370893156, + 'license': 'by', + 'thumbnail': 'https://i.vimeocdn.com/video/440260469-19b0d92fca3bd84066623b53f1eb8aaa3980c6c809e2d67b6b39ab7b4a77a344-d_960', + 'view_count': int, + 'comment_count': int, + 'like_count': int, + 'tags': 'count:1', + }, + 'params': { + 'format': 'best[protocol=https]', + }, + }, { + # password-protected VimeoPro page with Vimeo player embed + 'url': 'https://vimeopro.com/cadfem/simulation-conference-mechanische-systeme-in-perfektion', + 'info_dict': { + 'id': '764543723', + 'ext': 'mp4', + 'title': 'Mechanische Systeme in Perfektion: Realität erfassen, Innovation treiben', + 'thumbnail': 'https://i.vimeocdn.com/video/1543784598-a1a750494a485e601110136b9fe11e28c2131942452b3a5d30391cb3800ca8fd-d_1280', + 'description': 'md5:2a9d195cd1b0f6f79827107dc88c2420', + 'uploader': 'CADFEM', + 'uploader_id': 'cadfem', + 'uploader_url': 'https://vimeo.com/cadfem', + 'duration': 12505, + 'chapters': 'count:10', + }, + 'params': { + 'videopassword': 'Conference2022', + 'skip_download': True, + }, + }] + + def _real_extract(self, url): + display_id, video_id = self._match_valid_url(url).group('slug', 'id') + if video_id: + display_id = video_id + webpage = self._download_webpage(url, display_id) + + password_form = self._search_regex( + r'(?is)<form[^>]+?method=["\']post["\'][^>]*>(.+?password.+?)</form>', + webpage, 'password form', default=None) + if password_form: + try: + webpage = self._download_webpage(url, display_id, data=urlencode_postdata({ + 'password': self._get_video_password(), + **self._hidden_inputs(password_form), + }), note='Logging in with video password') + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 418: + raise ExtractorError('Wrong video password', expected=True) + raise + + description = None + # even if we have video_id, some videos require player URL with portfolio_id query param + # https://github.com/ytdl-org/youtube-dl/issues/20070 + vimeo_url = VimeoIE._extract_url(url, webpage) + if vimeo_url: + description = self._html_search_meta('description', webpage, default=None) + elif video_id: + vimeo_url = f'https://vimeo.com/{video_id}' + else: + raise ExtractorError( + 'No Vimeo embed or video ID could be found in VimeoPro page', expected=True) + + return self.url_result(vimeo_url, VimeoIE, video_id, url_transparent=True, + description=description) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vimm.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vimm.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vimm.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vimm.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vimple.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vimple.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vimple.py rename to 
python/lib/python3.10/site-packages/yt_dlp/extractor/vimple.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vine.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vine.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vine.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vine.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/viqeo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/viqeo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/viqeo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/viqeo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/viu.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/viu.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/viu.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/viu.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/vk.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vk.py new file mode 100644 index 0000000..c12e873 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/vk.py @@ -0,0 +1,842 @@ +import collections +import hashlib +import re + +from .common import InfoExtractor +from .dailymotion import DailymotionIE +from .odnoklassniki import OdnoklassnikiIE +from .pladform import PladformIE +from .sibnet import SibnetEmbedIE +from .vimeo import VimeoIE +from .youtube import YoutubeIE +from ..utils import ( + ExtractorError, + UserNotLive, + clean_html, + get_element_by_class, + get_element_html_by_id, + int_or_none, + join_nonempty, + parse_resolution, + str_or_none, + str_to_int, + try_call, + unescapeHTML, + unified_timestamp, + update_url_query, + url_or_none, + urlencode_postdata, + urljoin, + traverse_obj, +) + + +class VKBaseIE(InfoExtractor): + _NETRC_MACHINE = 'vk' + + def _download_webpage_handle(self, url_or_request, video_id, *args, fatal=True, **kwargs): + response = super()._download_webpage_handle(url_or_request, video_id, *args, fatal=fatal, **kwargs) + challenge_url, cookie = response[1].url if response else '', None + if challenge_url.startswith('https://vk.com/429.html?'): + cookie = self._get_cookies(challenge_url).get('hash429') + if not cookie: + return response + + hash429 = hashlib.md5(cookie.value.encode('ascii')).hexdigest() + self._request_webpage( + update_url_query(challenge_url, {'key': hash429}), video_id, fatal=fatal, + note='Resolving WAF challenge', errnote='Failed to bypass WAF challenge') + return super()._download_webpage_handle(url_or_request, video_id, *args, fatal=True, **kwargs) + + def _perform_login(self, username, password): + login_page, url_handle = self._download_webpage_handle( + 'https://vk.com', None, 'Downloading login page') + + login_form = self._hidden_inputs(login_page) + + login_form.update({ + 'email': username.encode('cp1251'), + 'pass': password.encode('cp1251'), + }) + + # vk serves two same remixlhk cookies in Set-Cookie header and expects + # first one to be actually set + self._apply_first_set_cookie_header(url_handle, 'remixlhk') + + login_page = self._download_webpage( + 'https://vk.com/login', None, + note='Logging in', + data=urlencode_postdata(login_form)) + + if re.search(r'onLoginFailed', login_page): + raise ExtractorError( + 'Unable to login, incorrect username and/or password', expected=True) + + def _download_payload(self, path, video_id, data, fatal=True): + endpoint = f'https://vk.com/{path}.php' + data['al'] = 1 + code, 
payload = self._download_json( + endpoint, video_id, data=urlencode_postdata(data), fatal=fatal, + headers={ + 'Referer': endpoint, + 'X-Requested-With': 'XMLHttpRequest', + })['payload'] + if code == '3': + self.raise_login_required() + elif code == '8': + raise ExtractorError(clean_html(payload[0][1:-1]), expected=True) + return payload + + +class VKIE(VKBaseIE): + IE_NAME = 'vk' + IE_DESC = 'VK' + _EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>https?://vk\.com/video_ext\.php.+?)\1'] + _VALID_URL = r'''(?x) + https?:// + (?: + (?: + (?:(?:m|new)\.)?vk\.com/video_| + (?:www\.)?daxab\.com/ + ) + ext\.php\?(?P<embed_query>.*?\boid=(?P<oid>-?\d+).*?\bid=(?P<id>\d+).*)| + (?: + (?:(?:m|new)\.)?vk\.com/(?:.+?\?.*?z=)?(?:video|clip)| + (?:www\.)?daxab\.com/embed/ + ) + (?P<videoid>-?\d+_\d+)(?:.*\blist=(?P<list_id>([\da-f]+)|(ln-[\da-zA-Z]+)))? + ) + ''' + + _TESTS = [ + { + 'url': 'http://vk.com/videos-77521?z=video-77521_162222515%2Fclub77521', + 'info_dict': { + 'id': '-77521_162222515', + 'ext': 'mp4', + 'title': 'ProtivoGunz - Хуёвая песня', + 'uploader': 're:(?:Noize MC|Alexander Ilyashenko).*', + 'uploader_id': '39545378', + 'duration': 195, + 'timestamp': 1329049880, + 'upload_date': '20120212', + 'comment_count': int, + 'like_count': int, + 'thumbnail': r're:https?://.+(?:\.jpg|getVideoPreview.*)$', + }, + 'params': {'skip_download': 'm3u8'}, + }, + { + 'url': 'http://vk.com/video205387401_165548505', + 'info_dict': { + 'id': '205387401_165548505', + 'ext': 'mp4', + 'title': 'No name', + 'uploader': 'Tom Cruise', + 'uploader_id': '205387401', + 'duration': 9, + 'timestamp': 1374364108, + 'upload_date': '20130720', + 'comment_count': int, + 'like_count': int, + 'thumbnail': r're:https?://.+(?:\.jpg|getVideoPreview.*)$', + } + }, + { + 'note': 'Embedded video', + 'url': 'https://vk.com/video_ext.php?oid=-77521&id=162222515&hash=87b046504ccd8bfa', + 'info_dict': { + 'id': '-77521_162222515', + 'ext': 'mp4', + 'uploader': 're:(?:Noize MC|Alexander Ilyashenko).*', + 'title': 'ProtivoGunz - Хуёвая песня', + 'duration': 195, + 'upload_date': '20120212', + 'timestamp': 1329049880, + 'uploader_id': '39545378', + 'thumbnail': r're:https?://.+(?:\.jpg|getVideoPreview.*)$', + }, + 'params': {'skip_download': 'm3u8'}, + }, + { + 'url': 'https://vk.com/video-93049196_456239755?list=ln-cBjJ7S4jYYx3ADnmDT', + 'info_dict': { + 'id': '-93049196_456239755', + 'ext': 'mp4', + 'title': '8 серия (озвучка)', + 'duration': 8383, + 'comment_count': int, + 'uploader': 'Dizi2021', + 'like_count': int, + 'timestamp': 1640162189, + 'upload_date': '20211222', + 'uploader_id': '-93049196', + 'thumbnail': r're:https?://.+(?:\.jpg|getVideoPreview.*)$', + }, + }, + { + 'note': 'youtube embed', + 'url': 'https://vk.com/video276849682_170681728', + 'info_dict': { + 'id': 'V3K4mi0SYkc', + 'ext': 'mp4', + 'title': "DSWD Awards 'Children's Joy Foundation, Inc.' 
Certificate of Registration and License to Operate", + 'description': 'md5:bf9c26cfa4acdfb146362682edd3827a', + 'duration': 178, + 'upload_date': '20130117', + 'uploader': "Children's Joy Foundation Inc.", + 'uploader_id': 'thecjf', + 'view_count': int, + 'channel_id': 'UCgzCNQ11TmR9V97ECnhi3gw', + 'availability': 'public', + 'like_count': int, + 'live_status': 'not_live', + 'playable_in_embed': True, + 'channel': 'Children\'s Joy Foundation Inc.', + 'uploader_url': 'http://www.youtube.com/user/thecjf', + 'thumbnail': r're:https?://.+\.jpg$', + 'tags': 'count:27', + 'start_time': 0.0, + 'categories': ['Nonprofits & Activism'], + 'channel_url': 'https://www.youtube.com/channel/UCgzCNQ11TmR9V97ECnhi3gw', + 'channel_follower_count': int, + 'age_limit': 0, + }, + }, + { + 'note': 'dailymotion embed', + 'url': 'https://vk.com/video-95168827_456239103?list=cca524a0f0d5557e16', + 'info_dict': { + 'id': 'x8gfli0', + 'ext': 'mp4', + 'title': 'md5:45410f60ccd4b2760da98cb5fc777d70', + 'description': 'md5:2e71c5c9413735cfa06cf1a166f16c84', + 'uploader': 'Movies and cinema.', + 'upload_date': '20221218', + 'uploader_id': 'x1jdavv', + 'timestamp': 1671387617, + 'age_limit': 0, + 'duration': 2918, + 'like_count': int, + 'view_count': int, + 'thumbnail': r're:https?://.+x1080$', + 'tags': list + }, + }, + { + 'url': 'https://vk.com/clips-74006511?z=clip-74006511_456247211', + 'info_dict': { + 'id': '-74006511_456247211', + 'ext': 'mp4', + 'comment_count': int, + 'duration': 9, + 'like_count': int, + 'thumbnail': r're:https?://.+(?:\.jpg|getVideoPreview.*)$', + 'timestamp': 1664995597, + 'title': 'Clip by @madempress', + 'upload_date': '20221005', + 'uploader': 'Шальная императрица', + 'uploader_id': '-74006511', + }, + }, + { + # video key is extra_data not url\d+ + 'url': 'http://vk.com/video-110305615_171782105', + 'md5': 'e13fcda136f99764872e739d13fac1d1', + 'info_dict': { + 'id': '-110305615_171782105', + 'ext': 'mp4', + 'title': 'S-Dance, репетиции к The way show', + 'uploader': 'THE WAY SHOW | 17 апреля', + 'uploader_id': '-110305615', + 'timestamp': 1454859345, + 'upload_date': '20160207', + }, + 'skip': 'Removed', + }, + { + 'note': 'finished live stream, postlive_mp4', + 'url': 'https://vk.com/videos-387766?z=video-387766_456242764%2Fpl_-387766_-2', + 'info_dict': { + 'id': '-387766_456242764', + 'ext': 'mp4', + 'title': 'ИгроМир 2016 День 1 — Игромания Утром', + 'uploader': 'Игромания', + 'duration': 5239, + 'upload_date': '20160929', + 'uploader_id': '-387766', + 'timestamp': 1475137527, + 'thumbnail': r're:https?://.+\.jpg$', + 'comment_count': int, + 'like_count': int, + }, + 'params': { + 'skip_download': True, + }, + }, + { + # live stream, hls and rtmp links, most likely already finished live + # stream by the time you are reading this comment + 'url': 'https://vk.com/video-140332_456239111', + 'only_matching': True, + }, + { + # removed video, just testing that we match the pattern + 'url': 'http://vk.com/feed?z=video-43215063_166094326%2Fbb50cacd3177146d7a', + 'only_matching': True, + }, + { + # age restricted video, requires vk account credentials + 'url': 'https://vk.com/video205387401_164765225', + 'only_matching': True, + }, + { + # pladform embed + 'url': 'https://vk.com/video-76116461_171554880', + 'only_matching': True, + }, + { + 'url': 'http://new.vk.com/video205387401_165548505', + 'only_matching': True, + }, + { + # This video is no longer available, because its author has been blocked. 
+ 'url': 'https://vk.com/video-10639516_456240611', + 'only_matching': True, + }, + { + # The video is not available in your region. + 'url': 'https://vk.com/video-51812607_171445436', + 'only_matching': True, + }, + { + 'url': 'https://vk.com/clip30014565_456240946', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + video_id = mobj.group('videoid') + + mv_data = {} + if video_id: + data = { + 'act': 'show', + 'video': video_id, + } + # Some videos (removed?) can only be downloaded with list id specified + list_id = mobj.group('list_id') + if list_id: + data['list'] = list_id + + payload = self._download_payload('al_video', video_id, data) + info_page = payload[1] + opts = payload[-1] + mv_data = opts.get('mvData') or {} + player = opts.get('player') or {} + else: + video_id = '%s_%s' % (mobj.group('oid'), mobj.group('id')) + + info_page = self._download_webpage( + 'http://vk.com/video_ext.php?' + mobj.group('embed_query'), video_id) + + error_message = self._html_search_regex( + [r'(?s)<!><div[^>]+class="video_layer_message"[^>]*>(.+?)</div>', + r'(?s)<div[^>]+id="video_ext_msg"[^>]*>(.+?)</div>'], + info_page, 'error message', default=None) + if error_message: + raise ExtractorError(error_message, expected=True) + + if re.search(r'<!>/login\.php\?.*\bact=security_check', info_page): + raise ExtractorError( + 'You are trying to log in from an unusual location. You should confirm ownership at vk.com to log in with this IP.', + expected=True) + + ERROR_COPYRIGHT = 'Video %s has been removed from public access due to rightholder complaint.' + + ERRORS = { + r'>ВидеозапиÑÑŒ .*? была изъÑта из публичного доÑтупа в ÑвÑзи Ñ Ð¾Ð±Ñ€Ð°Ñ‰ÐµÐ½Ð¸ÐµÐ¼ правообладателÑ.<': + ERROR_COPYRIGHT, + + r'>The video .*? was removed from public access by request of the copyright holder.<': + ERROR_COPYRIGHT, + + r'<!>Please log in or <': + 'Video %s is only available for registered users, ' + 'use --username and --password options to provide account credentials.', + + r'<!>Unknown error': + 'Video %s does not exist.', + + r'<!>Видео временно недоÑтупно': + 'Video %s is temporarily unavailable.', + + r'<!>Access denied': + 'Access denied to video %s.', + + r'<!>ВидеозапиÑÑŒ недоÑтупна, так как её автор был заблокирован.': + 'Video %s is no longer available, because its author has been blocked.', + + r'<!>This video is no longer available, because its author has been blocked.': + 'Video %s is no longer available, because its author has been blocked.', + + r'<!>This video is no longer available, because it has been deleted.': + 'Video %s is no longer available, because it has been deleted.', + + r'<!>The video .+? 
is not available in your region.': + 'Video %s is not available in your region.', + } + + for error_re, error_msg in ERRORS.items(): + if re.search(error_re, info_page): + raise ExtractorError(error_msg % video_id, expected=True) + + player = self._parse_json(self._search_regex( + r'var\s+playerParams\s*=\s*({.+?})\s*;\s*\n', + info_page, 'player params'), video_id) + + youtube_url = YoutubeIE._extract_url(info_page) + if youtube_url: + return self.url_result(youtube_url, YoutubeIE.ie_key()) + + vimeo_url = VimeoIE._extract_url(url, info_page) + if vimeo_url is not None: + return self.url_result(vimeo_url, VimeoIE.ie_key()) + + pladform_url = PladformIE._extract_url(info_page) + if pladform_url: + return self.url_result(pladform_url, PladformIE.ie_key()) + + m_rutube = re.search( + r'\ssrc="((?:https?:)?//rutube\.ru\\?/(?:video|play)\\?/embed(?:.*?))\\?"', info_page) + if m_rutube is not None: + rutube_url = self._proto_relative_url( + m_rutube.group(1).replace('\\', '')) + return self.url_result(rutube_url) + + dailymotion_url = next(DailymotionIE._extract_embed_urls(url, info_page), None) + if dailymotion_url: + return self.url_result(dailymotion_url, DailymotionIE.ie_key()) + + odnoklassniki_url = OdnoklassnikiIE._extract_url(info_page) + if odnoklassniki_url: + return self.url_result(odnoklassniki_url, OdnoklassnikiIE.ie_key()) + + sibnet_url = next(SibnetEmbedIE._extract_embed_urls(url, info_page), None) + if sibnet_url: + return self.url_result(sibnet_url) + + m_opts = re.search(r'(?s)var\s+opts\s*=\s*({.+?});', info_page) + if m_opts: + m_opts_url = re.search(r"url\s*:\s*'((?!/\b)[^']+)", m_opts.group(1)) + if m_opts_url: + opts_url = m_opts_url.group(1) + if opts_url.startswith('//'): + opts_url = 'http:' + opts_url + return self.url_result(opts_url) + + data = player['params'][0] + title = unescapeHTML(data['md_title']) + + # 2 = live + # 3 = post live (finished live) + is_live = data.get('live') == 2 + + timestamp = unified_timestamp(self._html_search_regex( + r'class=["\']mv_info_date[^>]+>([^<]+)(?:<|from)', info_page, + 'upload date', default=None)) or int_or_none(data.get('date')) + + view_count = str_to_int(self._search_regex( + r'class=["\']mv_views_count[^>]+>\s*([\d,.]+)', + info_page, 'view count', default=None)) + + formats = [] + for format_id, format_url in data.items(): + format_url = url_or_none(format_url) + if not format_url or not format_url.startswith(('http', '//', 'rtmp')): + continue + if (format_id.startswith(('url', 'cache')) + or format_id in ('extra_data', 'live_mp4', 'postlive_mp4')): + height = int_or_none(self._search_regex( + r'^(?:url|cache)(\d+)', format_id, 'height', default=None)) + formats.append({ + 'format_id': format_id, + 'url': format_url, + 'height': height, + }) + elif format_id == 'hls': + formats.extend(self._extract_m3u8_formats( + format_url, video_id, 'mp4', 'm3u8_native', + m3u8_id=format_id, fatal=False, live=is_live)) + elif format_id == 'rtmp': + formats.append({ + 'format_id': format_id, + 'url': format_url, + 'ext': 'flv', + }) + + subtitles = {} + for sub in data.get('subs') or {}: + subtitles.setdefault(sub.get('lang', 'en'), []).append({ + 'ext': sub.get('title', '.srt').split('.')[-1], + 'url': url_or_none(sub.get('url')), + }) + + return { + 'id': video_id, + 'formats': formats, + 'title': title, + 'thumbnail': data.get('jpg'), + 'uploader': data.get('md_author'), + 'uploader_id': str_or_none(data.get('author_id') or mv_data.get('authorId')), + 'duration': int_or_none(data.get('duration') or mv_data.get('duration')), + 
'timestamp': timestamp,
+ 'view_count': view_count,
+ 'like_count': int_or_none(mv_data.get('likes')),
+ 'comment_count': int_or_none(mv_data.get('commcount')),
+ 'is_live': is_live,
+ 'subtitles': subtitles,
+ }
+
+
+class VKUserVideosIE(VKBaseIE):
+ IE_NAME = 'vk:uservideos'
+ IE_DESC = "VK - User's Videos"
+ _VALID_URL = r'https?://(?:(?:m|new)\.)?vk\.com/video/(?:playlist/)?(?P<id>[^?$#/&]+)(?!\?.*\bz=video)(?:[/?#&](?:.*?\bsection=(?P<section>\w+))?|$)'
+ _TEMPLATE_URL = 'https://vk.com/videos'
+ _TESTS = [{
+ 'url': 'https://vk.com/video/@mobidevices',
+ 'info_dict': {
+ 'id': '-17892518_all',
+ },
+ 'playlist_mincount': 1355,
+ }, {
+ 'url': 'https://vk.com/video/@mobidevices?section=uploaded',
+ 'info_dict': {
+ 'id': '-17892518_uploaded',
+ },
+ 'playlist_mincount': 182,
+ }, {
+ 'url': 'https://vk.com/video/playlist/-174476437_2',
+ 'info_dict': {
+ 'id': '-174476437_playlist_2',
+ 'title': 'Анонсы'
+ },
+ 'playlist_mincount': 108,
+ }]
+ _VIDEO = collections.namedtuple('Video', ['owner_id', 'id'])
+
+ def _entries(self, page_id, section):
+ video_list_json = self._download_payload('al_video', page_id, {
+ 'act': 'load_videos_silent',
+ 'offset': 0,
+ 'oid': page_id,
+ 'section': section,
+ })[0][section]
+ count = video_list_json['count']
+ total = video_list_json['total']
+ video_list = video_list_json['list']
+
+ while True:
+ for video in video_list:
+ v = self._VIDEO._make(video[:2])
+ video_id = '%d_%d' % (v.owner_id, v.id)
+ yield self.url_result(
+ 'http://vk.com/video' + video_id, VKIE.ie_key(), video_id)
+ if count >= total:
+ break
+ video_list_json = self._download_payload('al_video', page_id, {
+ 'act': 'load_videos_silent',
+ 'offset': count,
+ 'oid': page_id,
+ 'section': section,
+ })[0][section]
+ count += video_list_json['count']
+ video_list = video_list_json['list']
+
+ def _real_extract(self, url):
+ u_id, section = self._match_valid_url(url).groups()
+ webpage = self._download_webpage(url, u_id)
+
+ if u_id.startswith('@'):
+ page_id = self._search_regex(r'data-owner-id\s?=\s?"([^"]+)"', webpage, 'page_id')
+ elif '_' in u_id:
+ page_id, section = u_id.split('_', 1)
+ section = f'playlist_{section}'
+ else:
+ raise ExtractorError('Invalid URL', expected=True)
+
+ if not section:
+ section = 'all'
+
+ playlist_title = clean_html(get_element_by_class('VideoInfoPanel__title', webpage))
+ return self.playlist_result(self._entries(page_id, section), '%s_%s' % (page_id, section), playlist_title)
+
+
+class VKWallPostIE(VKBaseIE):
+ IE_NAME = 'vk:wallpost'
+ _VALID_URL = r'https?://(?:(?:(?:(?:m|new)\.)?vk\.com/(?:[^?]+\?.*\bw=)?wall(?P<id>-?\d+_\d+)))'
+ _TESTS = [{
+ # public page URL, audio playlist
+ 'url': 'https://vk.com/bs.official?w=wall-23538238_35',
+ 'info_dict': {
+ 'id': '-23538238_35',
+ 'title': 'Black Shadow - Wall post -23538238_35',
+ 'description': 'md5:190c78f905a53e0de793d83933c6e67f',
+ },
+ 'playlist': [{
+ 'md5': '5ba93864ec5b85f7ce19a9af4af080f6',
+ 'info_dict': {
+ 'id': '135220665_111806521',
+ 'ext': 'm4a',
+ 'title': 'Black Shadow - Слепое Верование',
+ 'duration': 370,
+ 'uploader': 'Black Shadow',
+ 'artist': 'Black Shadow',
+ 'track': 'Слепое Верование',
+ },
+ }, {
+ 'md5': '4cc7e804579122b17ea95af7834c9233',
+ 'info_dict': {
+ 'id': '135220665_111802303',
+ 'ext': 'm4a',
+ 'title': 'Black Shadow - Война - Негасимое Бездны Пламя!',
+ 'duration': 423,
+ 'uploader': 'Black Shadow',
+ 'artist': 'Black Shadow',
+ 'track': 'Война - Негасимое Бездны Пламя!',
+ },
+ }],
+ 'params': {
+ 'skip_download': True,
+ },
+ }, {
+ # single YouTube embed with irrelevant reaction videos
+ 'url': 'https://vk.com/wall-32370614_7173954',
+ 'info_dict': {
+ 'id': '-32370614_7173954',
+ 'title': 'md5:9f93c405bbc00061d34007d78c75e3bc',
+ 'description': 'md5:953b811f26fa9f21ee5856e2ea8e68fc',
+ },
+ 'playlist_count': 1,
+ }, {
+ # wall page URL
+ 'url': 'https://vk.com/wall-23538238_35',
+ 'only_matching': True,
+ }, {
+ # mobile wall page URL
+ 'url': 'https://m.vk.com/wall-23538238_35',
+ 'only_matching': True,
+ }]
+ _BASE64_CHARS = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN0PQRSTUVWXYZO123456789+/='
+ _AUDIO = collections.namedtuple('Audio', ['id', 'owner_id', 'url', 'title', 'performer', 'duration', 'album_id', 'unk', 'author_link', 'lyrics', 'flags', 'context', 'extra', 'hashes', 'cover_url', 'ads'])
+
+ # The two helpers below are undocumented; the following is inferred from the
+ # code itself: _decode is base64 over the custom alphabet above (note it
+ # transposes 'O' and '0' relative to standard base64), and _unmask_url
+ # reverses a deterministic in-place shuffle of masked audio URLs, seeded by
+ # int(base) ^ vk_id recovered from the '?extra=' payload.
+ def _decode(self, enc):
+ dec = ''
+ e = n = 0
+ for c in enc:
+ r = self._BASE64_CHARS.index(c)
+ cond = n % 4
+ e = 64 * e + r if cond else r
+ n += 1
+ if cond:
+ dec += chr(255 & e >> (-2 * n & 6))
+ return dec
+
+ def _unmask_url(self, mask_url, vk_id):
+ if 'audio_api_unavailable' in mask_url:
+ extra = mask_url.split('?extra=')[1].split('#')
+ func, base = self._decode(extra[1]).split(chr(11))
+ mask_url = list(self._decode(extra[0]))
+ url_len = len(mask_url)
+ indexes = [None] * url_len
+ index = int(base) ^ vk_id
+ for n in range(url_len - 1, -1, -1):
+ index = (url_len * (n + 1) ^ index + n) % url_len
+ indexes[n] = index
+ for n in range(1, url_len):
+ c = mask_url[n]
+ index = indexes[url_len - 1 - n]
+ mask_url[n] = mask_url[index]
+ mask_url[index] = c
+ mask_url = ''.join(mask_url)
+ return mask_url
+
+ def _real_extract(self, url):
+ post_id = self._match_id(url)
+
+ webpage = self._download_payload('wkview', post_id, {
+ 'act': 'show',
+ 'w': 'wall' + post_id,
+ })[1]
+
+ uploader = clean_html(get_element_by_class('PostHeaderTitle__authorName', webpage))
+
+ entries = []
+
+ for audio in re.findall(r'data-audio="([^"]+)', webpage):
+ audio = self._parse_json(unescapeHTML(audio), post_id)
+ if not audio['url']:
+ continue
+ title = unescapeHTML(audio.get('title'))
+ artist = unescapeHTML(audio.get('artist'))
+ entries.append({
+ 'id': f'{audio["owner_id"]}_{audio["id"]}',
+ 'title': join_nonempty(artist, title, delim=' - '),
+ 'thumbnails': try_call(lambda: [{'url': u} for u in audio['coverUrl'].split(',')]),
+ 'duration': int_or_none(audio.get('duration')),
+ 'uploader': uploader,
+ 'artist': artist,
+ 'track': title,
+ 'formats': [{
+ 'url': audio['url'],
+ 'ext': 'm4a',
+ 'vcodec': 'none',
+ 'acodec': 'mp3',
+ 'container': 'm4a_dash',
+ }],
+ })
+
+ entries.extend(self.url_result(urljoin(url, entry), VKIE) for entry in set(re.findall(
+ r'<a[^>]+href=(?:["\'])(/video(?:-?[\d_]+)[^"\']*)',
+ get_element_html_by_id('wl_post_body', webpage))))
+
+ return self.playlist_result(
+ entries, post_id, join_nonempty(uploader, f'Wall post {post_id}', delim=' - '),
+ clean_html(get_element_by_class('wall_post_text', webpage)))
+
+
+class VKPlayBaseIE(InfoExtractor):
+ _RESOLUTIONS = {
+ 'tiny': '256x144',
+ 'lowest': '426x240',
+ 'low': '640x360',
+ 'medium': '852x480',
+ 'high': '1280x720',
+ 'full_hd': '1920x1080',
+ 'quad_hd': '2560x1440',
+ }
+
+ def _extract_from_initial_state(self, url, video_id, path):
+ webpage = self._download_webpage(url, video_id)
+ video_info = traverse_obj(self._search_json(
+ r'<script[^>]+\bid="initial-state"[^>]*>', webpage, 'initial state', video_id),
+ path, expected_type=dict)
+ if not video_info:
+ raise ExtractorError('Unable to extract video info from html inline initial state')
+ return
video_info + + def _extract_formats(self, stream_info, video_id): + formats = [] + for stream in traverse_obj(stream_info, ( + 'data', 0, 'playerUrls', lambda _, v: url_or_none(v['url']) and v['type'])): + url = stream['url'] + format_id = str_or_none(stream['type']) + if format_id in ('hls', 'live_hls', 'live_playback_hls') or '.m3u8' in url: + formats.extend(self._extract_m3u8_formats(url, video_id, m3u8_id=format_id, fatal=False)) + elif format_id == 'dash': + formats.extend(self._extract_mpd_formats(url, video_id, mpd_id=format_id, fatal=False)) + elif format_id in ('live_dash', 'live_playback_dash'): + self.write_debug(f'Not extracting unsupported format "{format_id}"') + else: + formats.append({ + 'url': url, + 'ext': 'mp4', + 'format_id': format_id, + **parse_resolution(self._RESOLUTIONS.get(format_id)), + }) + return formats + + def _extract_common_meta(self, stream_info): + return traverse_obj(stream_info, { + 'id': ('id', {str_or_none}), + 'title': ('title', {str}), + 'release_timestamp': ('startTime', {int_or_none}), + 'thumbnail': ('previewUrl', {url_or_none}), + 'view_count': ('count', 'views', {int_or_none}), + 'like_count': ('count', 'likes', {int_or_none}), + 'categories': ('category', 'title', {str}, {lambda x: [x] if x else None}), + 'uploader': (('user', ('blog', 'owner')), 'nick', {str}), + 'uploader_id': (('user', ('blog', 'owner')), 'id', {str_or_none}), + 'duration': ('duration', {int_or_none}), + 'is_live': ('isOnline', {bool}), + 'concurrent_view_count': ('count', 'viewers', {int_or_none}), + }, get_all=False) + + +class VKPlayIE(VKPlayBaseIE): + _VALID_URL = r'https?://vkplay\.live/(?P<username>[^/#?]+)/record/(?P<id>[a-f0-9-]+)' + _TESTS = [{ + 'url': 'https://vkplay.live/zitsmann/record/f5e6e3b5-dc52-4d14-965d-0680dd2882da', + 'info_dict': { + 'id': 'f5e6e3b5-dc52-4d14-965d-0680dd2882da', + 'ext': 'mp4', + 'title': 'Atomic Heart (пробуем!) 
спасибо подписчику EKZO!',
+ 'uploader': 'ZitsmanN',
+ 'uploader_id': '13159830',
+ 'release_timestamp': 1683461378,
+ 'release_date': '20230507',
+ 'thumbnail': r're:https://images.vkplay.live/public_video_stream/record/f5e6e3b5-dc52-4d14-965d-0680dd2882da/preview\?change_time=\d+',
+ 'duration': 10608,
+ 'view_count': int,
+ 'like_count': int,
+ 'categories': ['Atomic Heart'],
+ },
+ 'params': {'skip_download': 'm3u8'},
+ }]
+
+ def _real_extract(self, url):
+ username, video_id = self._match_valid_url(url).groups()
+
+ record_info = traverse_obj(self._download_json(
+ f'https://api.vkplay.live/v1/blog/{username}/public_video_stream/record/{video_id}', video_id, fatal=False),
+ ('data', 'record', {dict}))
+ if not record_info:
+ record_info = self._extract_from_initial_state(url, video_id, ('record', 'currentRecord', 'data'))
+
+ return {
+ **self._extract_common_meta(record_info),
+ 'id': video_id,
+ 'formats': self._extract_formats(record_info, video_id),
+ }
+
+
+class VKPlayLiveIE(VKPlayBaseIE):
+ _VALID_URL = r'https?://vkplay\.live/(?P<id>[^/#?]+)/?(?:[#?]|$)'
+ _TESTS = [{
+ 'url': 'https://vkplay.live/bayda',
+ 'info_dict': {
+ 'id': 'f02c321e-427b-408d-b12f-ae34e53e0ea2',
+ 'ext': 'mp4',
+ 'title': r're:эскапизм крута .*',
+ 'uploader': 'Bayda',
+ 'uploader_id': 12279401,
+ 'release_timestamp': 1687209962,
+ 'release_date': '20230619',
+ 'thumbnail': r're:https://images.vkplay.live/public_video_stream/12279401/preview\?change_time=\d+',
+ 'view_count': int,
+ 'concurrent_view_count': int,
+ 'like_count': int,
+ 'categories': ['EVE Online'],
+ 'live_status': 'is_live',
+ },
+ 'skip': 'livestream',
+ 'params': {'skip_download': True},
+ }]
+
+ def _real_extract(self, url):
+ username = self._match_id(url)
+
+ stream_info = self._download_json(
+ f'https://api.vkplay.live/v1/blog/{username}/public_video_stream', username, fatal=False)
+ if not stream_info:
+ stream_info = self._extract_from_initial_state(url, username, ('stream', 'stream', 'data', 'stream'))
+
+ formats = self._extract_formats(stream_info, username)
+ if not formats and not traverse_obj(stream_info, ('isOnline', {bool})):
+ raise UserNotLive(video_id=username)
+
+ return {
+ **self._extract_common_meta(stream_info),
+ 'formats': formats,
+ }
diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/vocaroo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vocaroo.py
new file mode 100644
index 0000000..d98fbfd
--- /dev/null
+++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/vocaroo.py
@@ -0,0 +1,63 @@
+from .common import InfoExtractor
+from ..networking import HEADRequest
+from ..utils import float_or_none
+
+
+class VocarooIE(InfoExtractor):
+ _VALID_URL = r'https?://(?:www\.)?(?:vocaroo\.com|voca\.ro)/(?:embed/)?(?P<id>\w+)'
+ _EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?://)?(?:www\.)?vocaroo\.com/embed/.+?)\1']
+ _TESTS = [
+ {
+ 'url': 'https://vocaroo.com/1de8yA3LNe77',
+ 'md5': 'c557841d5e50261777a6585648adf439',
+ 'info_dict': {
+ 'id': '1de8yA3LNe77',
+ 'ext': 'mp3',
+ 'title': 'Vocaroo video #1de8yA3LNe77',
+ 'timestamp': 1675059800.370,
+ 'upload_date': '20230130',
+ },
+ },
+ {
+ 'url': 'https://vocaroo.com/embed/12WqtjLnpj6g?autoplay=0',
+ 'only_matching': True,
+ },
+ {
+ 'url': 'https://voca.ro/12D52rgpzkB0',
+ 'only_matching': True,
+ },
+ ]
+
+ _WEBPAGE_TESTS = [
+ {
+ 'url': 'https://qbnu.github.io/cool.html',
+ 'md5': 'f322e529275dd8a47994919eeac404a5',
+ 'info_dict': {
+ 'id': '19cgWmKO6AmC',
+ 'ext': 'mp3',
+ 'title': 'Vocaroo video #19cgWmKO6AmC',
+
'timestamp': 1675093841.408, + 'upload_date': '20230130', + }, + }, + ] + + def _real_extract(self, url): + audio_id = self._match_id(url) + if len(audio_id) == 10 or (len(audio_id) == 12 and audio_id[0] == '1'): + media_subdomain = 'media1' + else: + media_subdomain = 'media' + + url = f'https://{media_subdomain}.vocaroo.com/mp3/{audio_id}' + http_headers = {'Referer': 'https://vocaroo.com/'} + resp = self._request_webpage(HEADRequest(url), audio_id, headers=http_headers) + return { + 'id': audio_id, + 'title': '', + 'url': url, + 'ext': 'mp3', + 'timestamp': float_or_none(resp.getheader('x-bz-upload-timestamp'), scale=1000), + 'vcodec': 'none', + 'http_headers': http_headers, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/vodlocker.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vodlocker.py new file mode 100644 index 0000000..b215d6c --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/vodlocker.py @@ -0,0 +1,73 @@ +from .common import InfoExtractor +from ..networking import Request +from ..utils import NO_DEFAULT, ExtractorError, urlencode_postdata + + +class VodlockerIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?vodlocker\.(?:com|city)/(?:embed-)?(?P<id>[0-9a-zA-Z]+)(?:\..*?)?' + + _TESTS = [{ + 'url': 'http://vodlocker.com/e8wvyzz4sl42', + 'md5': 'ce0c2d18fa0735f1bd91b69b0e54aacf', + 'info_dict': { + 'id': 'e8wvyzz4sl42', + 'ext': 'mp4', + 'title': 'Germany vs Brazil', + 'thumbnail': r're:http://.*\.jpg', + }, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + if any(p in webpage for p in ( + '>THIS FILE WAS DELETED<', + '>File Not Found<', + 'The file you were looking for could not be found, sorry for any inconvenience.<', + '>The file was removed')): + raise ExtractorError('Video %s does not exist' % video_id, expected=True) + + fields = self._hidden_inputs(webpage) + + if fields['op'] == 'download1': + self._sleep(3, video_id) # they do detect when requests happen too fast! 
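+ # Inferred from the flow below (the site's form handling is undocumented):
+ # the hidden inputs scraped above are re-POSTed unchanged to the same URL
+ # after the delay, and the response is the page that embeds the real file URL.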
+ post = urlencode_postdata(fields) + req = Request(url, post) + req.headers['Content-type'] = 'application/x-www-form-urlencoded' + webpage = self._download_webpage( + req, video_id, 'Downloading video page') + + def extract_file_url(html, default=NO_DEFAULT): + return self._search_regex( + r'file:\s*"(http[^\"]+)",', html, 'file url', default=default) + + video_url = extract_file_url(webpage, default=None) + + if not video_url: + embed_url = self._search_regex( + r'<iframe[^>]+src=(["\'])(?P<url>(?:https?://)?vodlocker\.(?:com|city)/embed-.+?)\1', + webpage, 'embed url', group='url') + embed_webpage = self._download_webpage( + embed_url, video_id, 'Downloading embed webpage') + video_url = extract_file_url(embed_webpage) + thumbnail_webpage = embed_webpage + else: + thumbnail_webpage = webpage + + title = self._search_regex( + r'id="file_title".*?>\s*(.*?)\s*<(?:br|span)', webpage, 'title') + thumbnail = self._search_regex( + r'image:\s*"(http[^\"]+)",', thumbnail_webpage, 'thumbnail', fatal=False) + + formats = [{ + 'format_id': 'sd', + 'url': video_url, + }] + + return { + 'id': video_id, + 'title': title, + 'thumbnail': thumbnail, + 'formats': formats, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vodpl.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vodpl.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vodpl.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vodpl.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vodplatform.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vodplatform.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vodplatform.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vodplatform.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/voicerepublic.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/voicerepublic.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/voicerepublic.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/voicerepublic.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/voicy.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/voicy.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/voicy.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/voicy.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/volejtv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/volejtv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/volejtv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/volejtv.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/voot.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/voot.py new file mode 100644 index 0000000..b19a279 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/voot.py @@ -0,0 +1,210 @@ +import json +import time +import uuid + +from .common import InfoExtractor +from ..compat import compat_str +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + float_or_none, + int_or_none, + jwt_decode_hs256, + parse_age_limit, + traverse_obj, + try_call, + try_get, + unified_strdate, +) + + +class VootBaseIE(InfoExtractor): + _NETRC_MACHINE = 'voot' + _GEO_BYPASS = False + _LOGIN_HINT = 'Log in with "-u <email_address> -p <password>", or use "-u token -p <auth_token>" to login with auth token.' 
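+ # Note, inferred from _perform_login below: _TOKEN/_EXPIRY are class-level,
+ # so a single login is shared by every Voot extractor in the run. With
+ # "-u token -p <auth_token>" the password is treated as a ready-made JWT
+ # whose expiry is read from its "exp" claim via jwt_decode_hs256().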
+ _TOKEN = None + _EXPIRY = 0 + _API_HEADERS = {'Origin': 'https://www.voot.com', 'Referer': 'https://www.voot.com/'} + + def _perform_login(self, username, password): + if self._TOKEN and self._EXPIRY: + return + + if username.lower() == 'token' and try_call(lambda: jwt_decode_hs256(password)): + VootBaseIE._TOKEN = password + VootBaseIE._EXPIRY = jwt_decode_hs256(password)['exp'] + self.report_login() + + # Mobile number as username is not supported + elif not username.isdigit(): + check_username = self._download_json( + 'https://userauth.voot.com/usersV3/v3/checkUser', None, data=json.dumps({ + 'type': 'email', + 'email': username + }, separators=(',', ':')).encode(), headers={ + **self._API_HEADERS, + 'Content-Type': 'application/json;charset=utf-8', + }, note='Checking username', expected_status=403) + if not traverse_obj(check_username, ('isExist', {bool})): + if traverse_obj(check_username, ('status', 'code', {int})) == 9999: + self.raise_geo_restricted(countries=['IN']) + raise ExtractorError('Incorrect username', expected=True) + auth_token = traverse_obj(self._download_json( + 'https://userauth.voot.com/usersV3/v3/login', None, data=json.dumps({ + 'type': 'traditional', + 'deviceId': str(uuid.uuid4()), + 'deviceBrand': 'PC/MAC', + 'data': { + 'email': username, + 'password': password + } + }, separators=(',', ':')).encode(), headers={ + **self._API_HEADERS, + 'Content-Type': 'application/json;charset=utf-8', + }, note='Logging in', expected_status=400), ('data', 'authToken', {dict})) + if not auth_token: + raise ExtractorError('Incorrect password', expected=True) + VootBaseIE._TOKEN = auth_token['accessToken'] + VootBaseIE._EXPIRY = auth_token['expirationTime'] + + else: + raise ExtractorError(self._LOGIN_HINT, expected=True) + + def _check_token_expiry(self): + if int(time.time()) >= self._EXPIRY: + raise ExtractorError('Access token has expired', expected=True) + + def _real_initialize(self): + if not self._TOKEN: + self.raise_login_required(self._LOGIN_HINT, method=None) + self._check_token_expiry() + + +class VootIE(VootBaseIE): + _VALID_URL = r'''(?x) + (?: + voot:| + https?://(?:www\.)?voot\.com/? 
+ (?: + movies?/[^/]+/| + (?:shows|kids)/(?:[^/]+/){4} + ) + ) + (?P<id>\d{3,}) + ''' + _TESTS = [{ + 'url': 'https://www.voot.com/shows/ishq-ka-rang-safed/1/360558/is-this-the-end-of-kamini-/441353', + 'info_dict': { + 'id': '441353', + 'ext': 'mp4', + 'title': 'Is this the end of Kamini?', + 'description': 'md5:06291fbbbc4dcbe21235c40c262507c1', + 'timestamp': 1472103000, + 'upload_date': '20160825', + 'series': 'Ishq Ka Rang Safed', + 'season_number': 1, + 'episode': 'Is this the end of Kamini?', + 'episode_number': 340, + 'release_date': '20160825', + 'season': 'Season 1', + 'age_limit': 13, + 'duration': 1146.0, + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'url': 'https://www.voot.com/kids/characters/mighty-cat-masked-niyander-e-/400478/school-bag-disappears/440925', + 'only_matching': True, + }, { + 'url': 'https://www.voot.com/movies/pandavas-5/424627', + 'only_matching': True, + }, { + 'url': 'https://www.voot.com/movie/fight-club/621842', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + media_info = self._download_json( + 'https://psapi.voot.com/jio/voot/v1/voot-web/content/query/asset-details', video_id, + query={'ids': f'include:{video_id}', 'responseType': 'common'}, headers={'accesstoken': self._TOKEN}) + + try: + m3u8_url = self._download_json( + 'https://vootapi.media.jio.com/playback/v1/playbackrights', video_id, + 'Downloading playback JSON', data=b'{}', headers={ + **self.geo_verification_headers(), + **self._API_HEADERS, + 'Content-Type': 'application/json;charset=utf-8', + 'platform': 'androidwebdesktop', + 'vootid': video_id, + 'voottoken': self._TOKEN, + })['m3u8'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 400: + self._check_token_expiry() + raise + + formats = self._extract_m3u8_formats(m3u8_url, video_id, 'mp4', m3u8_id='hls') + self._remove_duplicate_formats(formats) + + return { + 'id': video_id, + # '/_definst_/smil:vod/' m3u8 manifests claim to have 720p+ formats but max out at 480p + 'formats': traverse_obj(formats, ( + lambda _, v: '/_definst_/smil:vod/' not in v['url'] or v['height'] <= 480)), + 'http_headers': self._API_HEADERS, + **traverse_obj(media_info, ('result', 0, { + 'title': ('fullTitle', {str}), + 'description': ('fullSynopsis', {str}), + 'series': ('showName', {str}), + 'season_number': ('season', {int_or_none}), + 'episode': ('fullTitle', {str}), + 'episode_number': ('episode', {int_or_none}), + 'timestamp': ('uploadTime', {int_or_none}), + 'release_date': ('telecastDate', {unified_strdate}), + 'age_limit': ('ageNemonic', {parse_age_limit}), + 'duration': ('duration', {float_or_none}), + })), + } + + +class VootSeriesIE(VootBaseIE): + _VALID_URL = r'https?://(?:www\.)?voot\.com/shows/[^/]+/(?P<id>\d{3,})' + _TESTS = [{ + 'url': 'https://www.voot.com/shows/chakravartin-ashoka-samrat/100002', + 'playlist_mincount': 442, + 'info_dict': { + 'id': '100002', + }, + }, { + 'url': 'https://www.voot.com/shows/ishq-ka-rang-safed/100003', + 'playlist_mincount': 341, + 'info_dict': { + 'id': '100003', + }, + }] + _SHOW_API = 'https://psapi.voot.com/media/voot/v1/voot-web/content/generic/season-by-show?sort=season%3Aasc&id={}&responseType=common' + _SEASON_API = 'https://psapi.voot.com/media/voot/v1/voot-web/content/generic/series-wise-episode?sort=episode%3Aasc&id={}&responseType=common&page={:d}' + + def _entries(self, show_id): + show_json = self._download_json(self._SHOW_API.format(show_id), video_id=show_id) + for season in show_json.get('result', []): 
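+ # Pagination sketch, assumed from the two endpoint templates above: seasons
+ # arrive in one _SHOW_API call, then each season's episodes are fetched page
+ # by page via _SEASON_API until a page returns an empty 'result' list.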
+ page_num = 1 + season_id = try_get(season, lambda x: x['id'], compat_str) + season_json = self._download_json(self._SEASON_API.format(season_id, page_num), + video_id=season_id, + note='Downloading JSON metadata page %d' % page_num) + episodes_json = season_json.get('result', []) + while episodes_json: + page_num += 1 + for episode in episodes_json: + video_id = episode.get('id') + yield self.url_result( + 'voot:%s' % video_id, ie=VootIE.ie_key(), video_id=video_id) + episodes_json = self._download_json(self._SEASON_API.format(season_id, page_num), + video_id=season_id, + note='Downloading JSON metadata page %d' % page_num)['result'] + + def _real_extract(self, url): + show_id = self._match_id(url) + return self.playlist_result(self._entries(show_id), playlist_id=show_id) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/voxmedia.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/voxmedia.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/voxmedia.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/voxmedia.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vrak.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vrak.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vrak.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vrak.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/vrt.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vrt.py new file mode 100644 index 0000000..497233d --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/vrt.py @@ -0,0 +1,427 @@ +import functools +import json +import time +import urllib.parse + +from .gigya import GigyaBaseIE +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + clean_html, + extract_attributes, + float_or_none, + get_element_by_class, + get_element_html_by_class, + int_or_none, + join_nonempty, + jwt_encode_hs256, + make_archive_id, + parse_age_limit, + parse_iso8601, + str_or_none, + strip_or_none, + traverse_obj, + url_or_none, + urlencode_postdata, +) + + +class VRTBaseIE(GigyaBaseIE): + _GEO_BYPASS = False + _PLAYER_INFO = { + 'platform': 'desktop', + 'app': { + 'type': 'browser', + 'name': 'Chrome', + }, + 'device': 'undefined (undefined)', + 'os': { + 'name': 'Windows', + 'version': 'x86_64' + }, + 'player': { + 'name': 'VRT web player', + 'version': '2.7.4-prod-2023-04-19T06:05:45' + } + } + # From https://player.vrt.be/vrtnws/js/main.js & https://player.vrt.be/ketnet/js/main.8cdb11341bcb79e4cd44.js + _JWT_KEY_ID = '0-0Fp51UZykfaiCJrfTE3+oMI8zvDteYfPtR+2n1R+z8w=' + _JWT_SIGNING_KEY = 'b5f500d55cb44715107249ccd8a5c0136cfb2788dbb71b90a4f142423bacaf38' # -dev + # player-stag.vrt.be key: d23987504521ae6fbf2716caca6700a24bb1579477b43c84e146b279de5ca595 + # player.vrt.be key: 2a9251d782700769fb856da5725daf38661874ca6f80ae7dc2b05ec1a81a24ae + + def _extract_formats_and_subtitles(self, data, video_id): + if traverse_obj(data, 'drm'): + self.report_drm(video_id) + + formats, subtitles = [], {} + for target in traverse_obj(data, ('targetUrls', lambda _, v: url_or_none(v['url']) and v['type'])): + format_type = target['type'].upper() + format_url = target['url'] + if format_type in ('HLS', 'HLS_AES'): + fmts, subs = self._extract_m3u8_formats_and_subtitles( + format_url, video_id, 'mp4', m3u8_id=format_type, fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + elif format_type == 'HDS': + 
formats.extend(self._extract_f4m_formats( + format_url, video_id, f4m_id=format_type, fatal=False)) + elif format_type == 'MPEG_DASH': + fmts, subs = self._extract_mpd_formats_and_subtitles( + format_url, video_id, mpd_id=format_type, fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + elif format_type == 'HSS': + fmts, subs = self._extract_ism_formats_and_subtitles( + format_url, video_id, ism_id='mss', fatal=False) + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + else: + formats.append({ + 'format_id': format_type, + 'url': format_url, + }) + + for sub in traverse_obj(data, ('subtitleUrls', lambda _, v: v['url'] and v['type'] == 'CLOSED')): + subtitles.setdefault('nl', []).append({'url': sub['url']}) + + return formats, subtitles + + def _call_api(self, video_id, client='null', id_token=None, version='v2'): + player_info = {'exp': (round(time.time(), 3) + 900), **self._PLAYER_INFO} + player_token = self._download_json( + 'https://media-services-public.vrt.be/vualto-video-aggregator-web/rest/external/v2/tokens', + video_id, 'Downloading player token', headers={ + **self.geo_verification_headers(), + 'Content-Type': 'application/json', + }, data=json.dumps({ + 'identityToken': id_token or {}, + 'playerInfo': jwt_encode_hs256(player_info, self._JWT_SIGNING_KEY, headers={ + 'kid': self._JWT_KEY_ID + }).decode() + }, separators=(',', ':')).encode())['vrtPlayerToken'] + + return self._download_json( + f'https://media-services-public.vrt.be/media-aggregator/{version}/media-items/{video_id}', + video_id, 'Downloading API JSON', query={ + 'vrtPlayerToken': player_token, + 'client': client, + }, expected_status=400) + + +class VRTIE(VRTBaseIE): + IE_DESC = 'VRT NWS, Flanders News, Flandern Info and Sporza' + _VALID_URL = r'https?://(?:www\.)?(?P<site>vrt\.be/vrtnws|sporza\.be)/[a-z]{2}/\d{4}/\d{2}/\d{2}/(?P<id>[^/?&#]+)' + _TESTS = [{ + 'url': 'https://www.vrt.be/vrtnws/nl/2019/05/15/beelden-van-binnenkant-notre-dame-een-maand-na-de-brand/', + 'info_dict': { + 'id': 'pbs-pub-7855fc7b-1448-49bc-b073-316cb60caa71$vid-2ca50305-c38a-4762-9890-65cbd098b7bd', + 'ext': 'mp4', + 'title': 'Beelden van binnenkant Notre-Dame, één maand na de brand', + 'description': 'md5:6fd85f999b2d1841aa5568f4bf02c3ff', + 'duration': 31.2, + 'thumbnail': 'https://images.vrt.be/orig/2019/05/15/2d914d61-7710-11e9-abcc-02b7b76bf47f.jpg', + }, + 'params': {'skip_download': 'm3u8'}, + }, { + 'url': 'https://sporza.be/nl/2019/05/15/de-belgian-cats-zijn-klaar-voor-het-ek/', + 'info_dict': { + 'id': 'pbs-pub-f2c86a46-8138-413a-a4b9-a0015a16ce2c$vid-1f112b31-e58e-4379-908d-aca6d80f8818', + 'ext': 'mp4', + 'title': 'De Belgian Cats zijn klaar voor het EK', + 'description': 'Video: De Belgian Cats zijn klaar voor het EK mét Ann Wauters | basketbal, sport in het journaal', + 'duration': 115.17, + 'thumbnail': 'https://images.vrt.be/orig/2019/05/15/11c0dba3-770e-11e9-abcc-02b7b76bf47f.jpg', + }, + 'params': {'skip_download': 'm3u8'}, + }] + _CLIENT_MAP = { + 'vrt.be/vrtnws': 'vrtnieuws', + 'sporza.be': 'sporza', + } + + def _real_extract(self, url): + site, display_id = self._match_valid_url(url).groups() + webpage = self._download_webpage(url, display_id) + attrs = extract_attributes(get_element_html_by_class('vrtvideo', webpage) or '') + + asset_id = attrs.get('data-video-id') or attrs['data-videoid'] + publication_id = traverse_obj(attrs, 'data-publication-id', 'data-publicationid') + if publication_id: + asset_id = f'{publication_id}${asset_id}' + client = 
traverse_obj(attrs, 'data-client-code', 'data-client') or self._CLIENT_MAP[site] + + data = self._call_api(asset_id, client) + formats, subtitles = self._extract_formats_and_subtitles(data, asset_id) + + description = self._html_search_meta( + ['og:description', 'twitter:description', 'description'], webpage) + if description == '…': + description = None + + return { + 'id': asset_id, + 'formats': formats, + 'subtitles': subtitles, + 'description': description, + 'thumbnail': url_or_none(attrs.get('data-posterimage')), + 'duration': float_or_none(attrs.get('data-duration'), 1000), + '_old_archive_ids': [make_archive_id('Canvas', asset_id)], + **traverse_obj(data, { + 'title': ('title', {str}), + 'description': ('shortDescription', {str}), + 'duration': ('duration', {functools.partial(float_or_none, scale=1000)}), + 'thumbnail': ('posterImageUrl', {url_or_none}), + }), + } + + +class VrtNUIE(VRTBaseIE): + IE_DESC = 'VRT MAX' + _VALID_URL = r'https?://(?:www\.)?vrt\.be/vrtnu/a-z/(?:[^/]+/){2}(?P<id>[^/?#&]+)' + _TESTS = [{ + # CONTENT_IS_AGE_RESTRICTED + 'url': 'https://www.vrt.be/vrtnu/a-z/de-ideale-wereld/2023-vj/de-ideale-wereld-d20230116/', + 'info_dict': { + 'id': 'pbs-pub-855b00a8-6ce2-4032-ac4f-1fcf3ae78524$vid-d2243aa1-ec46-4e34-a55b-92568459906f', + 'ext': 'mp4', + 'title': 'Tom Waes', + 'description': 'Satirisch actualiteitenmagazine met Ella Leyers. Tom Waes is te gast.', + 'timestamp': 1673905125, + 'release_timestamp': 1673905125, + 'series': 'De ideale wereld', + 'season_id': '1672830988794', + 'episode': 'Aflevering 1', + 'episode_number': 1, + 'episode_id': '1672830988861', + 'display_id': 'de-ideale-wereld-d20230116', + 'channel': 'VRT', + 'duration': 1939.0, + 'thumbnail': 'https://images.vrt.be/orig/2023/01/10/1bb39cb3-9115-11ed-b07d-02b7b76bf47f.jpg', + 'release_date': '20230116', + 'upload_date': '20230116', + 'age_limit': 12, + }, + }, { + 'url': 'https://www.vrt.be/vrtnu/a-z/buurman--wat-doet-u-nu-/6/buurman--wat-doet-u-nu--s6-trailer/', + 'info_dict': { + 'id': 'pbs-pub-ad4050eb-d9e5-48c2-9ec8-b6c355032361$vid-0465537a-34a8-4617-8352-4d8d983b4eee', + 'ext': 'mp4', + 'title': 'Trailer seizoen 6 \'Buurman, wat doet u nu?\'', + 'description': 'md5:197424726c61384b4e5c519f16c0cf02', + 'timestamp': 1652940000, + 'release_timestamp': 1652940000, + 'series': 'Buurman, wat doet u nu?', + 'season': 'Seizoen 6', + 'season_number': 6, + 'season_id': '1652344200907', + 'episode': 'Aflevering 0', + 'episode_number': 0, + 'episode_id': '1652951873524', + 'display_id': 'buurman--wat-doet-u-nu--s6-trailer', + 'channel': 'VRT', + 'duration': 33.13, + 'thumbnail': 'https://images.vrt.be/orig/2022/05/23/3c234d21-da83-11ec-b07d-02b7b76bf47f.jpg', + 'release_date': '20220519', + 'upload_date': '20220519', + }, + 'params': {'skip_download': 'm3u8'}, + }] + _NETRC_MACHINE = 'vrtnu' + _authenticated = False + + def _perform_login(self, username, password): + auth_info = self._gigya_login({ + 'APIKey': '3_0Z2HujMtiWq_pkAjgnS2Md2E11a1AwZjYiBETtwNE-EoEHDINgtnvcAOpNgmrVGy', + 'targetEnv': 'jssdk', + 'loginID': username, + 'password': password, + 'authMode': 'cookie', + }) + + if auth_info.get('errorDetails'): + raise ExtractorError(f'Unable to login. 
VrtNU said: {auth_info["errorDetails"]}', expected=True) + + # Sometimes authentication fails for no good reason, retry + for retry in self.RetryManager(): + if retry.attempt > 1: + self._sleep(1, None) + try: + self._request_webpage( + 'https://token.vrt.be/vrtnuinitlogin', None, note='Requesting XSRF Token', + errnote='Could not get XSRF Token', query={ + 'provider': 'site', + 'destination': 'https://www.vrt.be/vrtnu/', + }) + self._request_webpage( + 'https://login.vrt.be/perform_login', None, + note='Performing login', errnote='Login failed', + query={'client_id': 'vrtnu-site'}, data=urlencode_postdata({ + 'UID': auth_info['UID'], + 'UIDSignature': auth_info['UIDSignature'], + 'signatureTimestamp': auth_info['signatureTimestamp'], + '_csrf': self._get_cookies('https://login.vrt.be').get('OIDCXSRF').value, + })) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + retry.error = e + continue + raise + + self._authenticated = True + + def _real_extract(self, url): + display_id = self._match_id(url) + parsed_url = urllib.parse.urlparse(url) + details = self._download_json( + f'{parsed_url.scheme}://{parsed_url.netloc}{parsed_url.path.rstrip("/")}.model.json', + display_id, 'Downloading asset JSON', 'Unable to download asset JSON')['details'] + + watch_info = traverse_obj(details, ( + 'actions', lambda _, v: v['type'] == 'watch-episode', {dict}), get_all=False) or {} + video_id = join_nonempty( + 'episodePublicationId', 'episodeVideoId', delim='$', from_dict=watch_info) + if '$' not in video_id: + raise ExtractorError('Unable to extract video ID') + + vrtnutoken = self._download_json( + 'https://token.vrt.be/refreshtoken', video_id, note='Retrieving vrtnutoken', + errnote='Token refresh failed')['vrtnutoken'] if self._authenticated else None + + video_info = self._call_api(video_id, 'vrtnu-web@PROD', vrtnutoken) + + if 'title' not in video_info: + code = video_info.get('code') + if code in ('AUTHENTICATION_REQUIRED', 'CONTENT_IS_AGE_RESTRICTED'): + self.raise_login_required(code, method='password') + elif code in ('INVALID_LOCATION', 'CONTENT_AVAILABLE_ONLY_IN_BE'): + self.raise_geo_restricted(countries=['BE']) + elif code == 'CONTENT_AVAILABLE_ONLY_FOR_BE_RESIDENTS_AND_EXPATS': + if not self._authenticated: + self.raise_login_required(code, method='password') + self.raise_geo_restricted(countries=['BE']) + raise ExtractorError(code, expected=True) + + formats, subtitles = self._extract_formats_and_subtitles(video_info, video_id) + + return { + **traverse_obj(details, { + 'title': 'title', + 'description': ('description', {clean_html}), + 'timestamp': ('data', 'episode', 'onTime', 'raw', {parse_iso8601}), + 'release_timestamp': ('data', 'episode', 'onTime', 'raw', {parse_iso8601}), + 'series': ('data', 'program', 'title'), + 'season': ('data', 'season', 'title', 'value'), + 'season_number': ('data', 'season', 'title', 'raw', {int_or_none}), + 'season_id': ('data', 'season', 'id', {str_or_none}), + 'episode': ('data', 'episode', 'number', 'value', {str_or_none}), + 'episode_number': ('data', 'episode', 'number', 'raw', {int_or_none}), + 'episode_id': ('data', 'episode', 'id', {str_or_none}), + 'age_limit': ('data', 'episode', 'age', 'raw', {parse_age_limit}), + }), + 'id': video_id, + 'display_id': display_id, + 'channel': 'VRT', + 'formats': formats, + 'duration': float_or_none(video_info.get('duration'), 1000), + 'thumbnail': url_or_none(video_info.get('posterImageUrl')), + 'subtitles': subtitles, + '_old_archive_ids': [make_archive_id('Canvas', 
video_id)], + } + + +class KetnetIE(VRTBaseIE): + _VALID_URL = r'https?://(?:www\.)?ketnet\.be/(?P<id>(?:[^/]+/)*[^/?#&]+)' + _TESTS = [{ + 'url': 'https://www.ketnet.be/kijken/m/meisjes/6/meisjes-s6a5', + 'info_dict': { + 'id': 'pbs-pub-39f8351c-a0a0-43e6-8394-205d597d6162$vid-5e306921-a9aa-4fa9-9f39-5b82c8f1028e', + 'ext': 'mp4', + 'title': 'Meisjes', + 'episode': 'Reeks 6: Week 5', + 'season': 'Reeks 6', + 'series': 'Meisjes', + 'timestamp': 1685251800, + 'upload_date': '20230528', + }, + 'params': {'skip_download': 'm3u8'}, + }] + + def _real_extract(self, url): + display_id = self._match_id(url) + + video = self._download_json( + 'https://senior-bff.ketnet.be/graphql', display_id, query={ + 'query': '''{ + video(id: "content/ketnet/nl/%s.model.json") { + description + episodeNr + imageUrl + mediaReference + programTitle + publicationDate + seasonTitle + subtitleVideodetail + titleVideodetail + } +}''' % display_id, + })['data']['video'] + + video_id = urllib.parse.unquote(video['mediaReference']) + data = self._call_api(video_id, 'ketnet@PROD', version='v1') + formats, subtitles = self._extract_formats_and_subtitles(data, video_id) + + return { + 'id': video_id, + 'formats': formats, + 'subtitles': subtitles, + '_old_archive_ids': [make_archive_id('Canvas', video_id)], + **traverse_obj(video, { + 'title': ('titleVideodetail', {str}), + 'description': ('description', {str}), + 'thumbnail': ('thumbnail', {url_or_none}), + 'timestamp': ('publicationDate', {parse_iso8601}), + 'series': ('programTitle', {str}), + 'season': ('seasonTitle', {str}), + 'episode': ('subtitleVideodetail', {str}), + 'episode_number': ('episodeNr', {int_or_none}), + }), + } + + +class DagelijkseKostIE(VRTBaseIE): + IE_DESC = 'dagelijksekost.een.be' + _VALID_URL = r'https?://dagelijksekost\.een\.be/gerechten/(?P<id>[^/?#&]+)' + _TESTS = [{ + 'url': 'https://dagelijksekost.een.be/gerechten/hachis-parmentier-met-witloof', + 'info_dict': { + 'id': 'md-ast-27a4d1ff-7d7b-425e-b84f-a4d227f592fa', + 'ext': 'mp4', + 'title': 'Hachis parmentier met witloof', + 'description': 'md5:9960478392d87f63567b5b117688cdc5', + 'display_id': 'hachis-parmentier-met-witloof', + }, + 'params': {'skip_download': 'm3u8'}, + }] + + def _real_extract(self, url): + display_id = self._match_id(url) + webpage = self._download_webpage(url, display_id) + video_id = self._html_search_regex( + r'data-url=(["\'])(?P<id>(?:(?!\1).)+)\1', webpage, 'video id', group='id') + + data = self._call_api(video_id, 'dako@prod', version='v1') + formats, subtitles = self._extract_formats_and_subtitles(data, video_id) + + return { + 'id': video_id, + 'formats': formats, + 'subtitles': subtitles, + 'display_id': display_id, + 'title': strip_or_none(get_element_by_class( + 'dish-metadata__title', webpage) or self._html_search_meta('twitter:title', webpage)), + 'description': clean_html(get_element_by_class( + 'dish-description', webpage)) or self._html_search_meta( + ['description', 'twitter:description', 'og:description'], webpage), + '_old_archive_ids': [make_archive_id('Canvas', video_id)], + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/vrv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vrv.py new file mode 100644 index 0000000..523c442 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/vrv.py @@ -0,0 +1,269 @@ +import base64 +import hashlib +import hmac +import json +import random +import string +import time +import urllib.parse + +from .common import InfoExtractor +from ..compat import 
compat_urllib_parse_urlencode +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + float_or_none, + int_or_none, + join_nonempty, + traverse_obj, +) + + +class VRVBaseIE(InfoExtractor): + _API_DOMAIN = None + _API_PARAMS = {} + _CMS_SIGNING = {} + _TOKEN = None + _TOKEN_SECRET = '' + + def _call_api(self, path, video_id, note, data=None): + # https://tools.ietf.org/html/rfc5849#section-3 + base_url = self._API_DOMAIN + '/core/' + path + query = [ + ('oauth_consumer_key', self._API_PARAMS['oAuthKey']), + ('oauth_nonce', ''.join(random.choices(string.ascii_letters, k=32))), + ('oauth_signature_method', 'HMAC-SHA1'), + ('oauth_timestamp', int(time.time())), + ] + if self._TOKEN: + query.append(('oauth_token', self._TOKEN)) + encoded_query = compat_urllib_parse_urlencode(query) + headers = self.geo_verification_headers() + if data: + data = json.dumps(data).encode() + headers['Content-Type'] = 'application/json' + base_string = '&'.join([ + 'POST' if data else 'GET', + urllib.parse.quote(base_url, ''), + urllib.parse.quote(encoded_query, '')]) + oauth_signature = base64.b64encode(hmac.new( + (self._API_PARAMS['oAuthSecret'] + '&' + self._TOKEN_SECRET).encode('ascii'), + base_string.encode(), hashlib.sha1).digest()).decode() + encoded_query += '&oauth_signature=' + urllib.parse.quote(oauth_signature, '') + try: + return self._download_json( + '?'.join([base_url, encoded_query]), video_id, + note='Downloading %s JSON metadata' % note, headers=headers, data=data) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + raise ExtractorError(json.loads(e.cause.response.read().decode())['message'], expected=True) + raise + + def _call_cms(self, path, video_id, note): + if not self._CMS_SIGNING: + index = self._call_api('index', video_id, 'CMS Signing') + self._CMS_SIGNING = index.get('cms_signing') or {} + if not self._CMS_SIGNING: + for signing_policy in index.get('signing_policies', []): + signing_path = signing_policy.get('path') + if signing_path and signing_path.startswith('/cms/'): + name, value = signing_policy.get('name'), signing_policy.get('value') + if name and value: + self._CMS_SIGNING[name] = value + return self._download_json( + self._API_DOMAIN + path, video_id, query=self._CMS_SIGNING, + note='Downloading %s JSON metadata' % note, headers=self.geo_verification_headers()) + + def _get_cms_resource(self, resource_key, video_id): + return self._call_api( + 'cms_resource', video_id, 'resource path', data={ + 'resource_key': resource_key, + })['__links__']['cms_resource']['href'] + + def _extract_vrv_formats(self, url, video_id, stream_format, audio_lang, hardsub_lang): + if not url or stream_format not in ('hls', 'dash', 'adaptive_hls'): + return [] + format_id = join_nonempty( + stream_format, + audio_lang and 'audio-%s' % audio_lang, + hardsub_lang and 'hardsub-%s' % hardsub_lang) + if 'hls' in stream_format: + adaptive_formats = self._extract_m3u8_formats( + url, video_id, 'mp4', m3u8_id=format_id, + note='Downloading %s information' % format_id, + fatal=False) + elif stream_format == 'dash': + adaptive_formats = self._extract_mpd_formats( + url, video_id, mpd_id=format_id, + note='Downloading %s information' % format_id, + fatal=False) + if audio_lang: + for f in adaptive_formats: + if f.get('acodec') != 'none': + f['language'] = audio_lang + return adaptive_formats + + def _set_api_params(self): + webpage = self._download_webpage( + 'https://vrv.co/', None, headers=self.geo_verification_headers()) + 
self._API_PARAMS = self._parse_json(self._search_regex( + [ + r'window\.__APP_CONFIG__\s*=\s*({.+?})(?:</script>|;)', + r'window\.__APP_CONFIG__\s*=\s*({.+})' + ], webpage, 'app config'), None)['cxApiParams'] + self._API_DOMAIN = self._API_PARAMS.get('apiDomain', 'https://api.vrv.co') + + +class VRVIE(VRVBaseIE): + IE_NAME = 'vrv' + _VALID_URL = r'https?://(?:www\.)?vrv\.co/watch/(?P<id>[A-Z0-9]+)' + _TESTS = [{ + 'url': 'https://vrv.co/watch/GR9PNZ396/Hidden-America-with-Jonah-Ray:BOSTON-WHERE-THE-PAST-IS-THE-PRESENT', + 'info_dict': { + 'id': 'GR9PNZ396', + 'ext': 'mp4', + 'title': 'BOSTON: WHERE THE PAST IS THE PRESENT', + 'description': 'md5:4ec8844ac262ca2df9e67c0983c6b83f', + 'uploader_id': 'seeso', + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + }, { + # movie listing + 'url': 'https://vrv.co/watch/G6NQXZ1J6/Lily-CAT', + 'info_dict': { + 'id': 'G6NQXZ1J6', + 'title': 'Lily C.A.T', + 'description': 'md5:988b031e7809a6aeb60968be4af7db07', + }, + 'playlist_count': 2, + }] + _NETRC_MACHINE = 'vrv' + + def _perform_login(self, username, password): + token_credentials = self._call_api( + 'authenticate/by:credentials', None, 'Token Credentials', data={ + 'email': username, + 'password': password, + }) + self._TOKEN = token_credentials['oauth_token'] + self._TOKEN_SECRET = token_credentials['oauth_token_secret'] + + def _initialize_pre_login(self): + return self._set_api_params() + + def _real_extract(self, url): + video_id = self._match_id(url) + + object_data = self._call_cms(self._get_cms_resource( + 'cms:/objects/' + video_id, video_id), video_id, 'object')['items'][0] + resource_path = object_data['__links__']['resource']['href'] + video_data = self._call_cms(resource_path, video_id, 'video') + title = video_data['title'] + description = video_data.get('description') + + if video_data.get('__class__') == 'movie_listing': + items = self._call_cms( + video_data['__links__']['movie_listing/movies']['href'], + video_id, 'movie listing').get('items') or [] + if len(items) != 1: + entries = [] + for item in items: + item_id = item.get('id') + if not item_id: + continue + entries.append(self.url_result( + 'https://vrv.co/watch/' + item_id, + self.ie_key(), item_id, item.get('title'))) + return self.playlist_result(entries, video_id, title, description) + video_data = items[0] + + streams_path = video_data['__links__'].get('streams', {}).get('href') + if not streams_path: + self.raise_login_required() + streams_json = self._call_cms(streams_path, video_id, 'streams') + + audio_locale = streams_json.get('audio_locale') + formats = [] + for stream_type, streams in streams_json.get('streams', {}).items(): + if stream_type in ('adaptive_hls', 'adaptive_dash'): + for stream in streams.values(): + formats.extend(self._extract_vrv_formats( + stream.get('url'), video_id, stream_type.split('_')[1], + audio_locale, stream.get('hardsub_locale'))) + + subtitles = {} + for k in ('captions', 'subtitles'): + for subtitle in streams_json.get(k, {}).values(): + subtitle_url = subtitle.get('url') + if not subtitle_url: + continue + subtitles.setdefault(subtitle.get('locale', 'en-US'), []).append({ + 'url': subtitle_url, + 'ext': subtitle.get('format', 'ass'), + }) + + thumbnails = [] + for thumbnail in traverse_obj(video_data, ('images', 'thumbnail', ..., ...)) or []: + thumbnail_url = thumbnail.get('source') + if not thumbnail_url: + continue + thumbnails.append({ + 'url': thumbnail_url, + 'width': int_or_none(thumbnail.get('width')), + 'height': int_or_none(thumbnail.get('height')), + }) 
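+ # Note: 'duration_ms' is in milliseconds, hence the scale=1000 conversion in
+ # the dict below, and 'episode' reuses the video title since the CMS object
+ # does not appear to expose a separate episode name.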
+ + return { + 'id': video_id, + 'title': title, + 'formats': formats, + 'subtitles': subtitles, + 'thumbnails': thumbnails, + 'description': description, + 'duration': float_or_none(video_data.get('duration_ms'), 1000), + 'uploader_id': video_data.get('channel_id'), + 'series': video_data.get('series_title'), + 'season': video_data.get('season_title'), + 'season_number': int_or_none(video_data.get('season_number')), + 'season_id': video_data.get('season_id'), + 'episode': title, + 'episode_number': int_or_none(video_data.get('episode_number')), + 'episode_id': video_data.get('production_episode_id'), + } + + +class VRVSeriesIE(VRVBaseIE): + IE_NAME = 'vrv:series' + _VALID_URL = r'https?://(?:www\.)?vrv\.co/series/(?P<id>[A-Z0-9]+)' + _TEST = { + 'url': 'https://vrv.co/series/G68VXG3G6/The-Perfect-Insider', + 'info_dict': { + 'id': 'G68VXG3G6', + }, + 'playlist_mincount': 11, + } + + def _initialize_pre_login(self): + return self._set_api_params() + + def _real_extract(self, url): + series_id = self._match_id(url) + + seasons_path = self._get_cms_resource( + 'cms:/seasons?series_id=' + series_id, series_id) + seasons_data = self._call_cms(seasons_path, series_id, 'seasons') + + entries = [] + for season in seasons_data.get('items', []): + episodes_path = season['__links__']['season/episodes']['href'] + episodes = self._call_cms(episodes_path, series_id, 'episodes') + for episode in episodes.get('items', []): + episode_id = episode['id'] + entries.append(self.url_result( + 'https://vrv.co/watch/' + episode_id, + 'VRV', episode_id, episode.get('title'))) + + return self.playlist_result(entries, series_id) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/vshare.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vshare.py new file mode 100644 index 0000000..443ed43 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/vshare.py @@ -0,0 +1,57 @@ +from .common import InfoExtractor +from ..utils import ExtractorError, decode_packed_codes + + +class VShareIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?vshare\.io/[dv]/(?P<id>[^/?#&]+)' + _EMBED_REGEX = [r'<iframe[^>]+?src=["\'](?P<url>(?:https?:)?//(?:www\.)?vshare\.io/v/[^/?#&]+)'] + _TESTS = [{ + 'url': 'https://vshare.io/d/0f64ce6', + 'md5': '17b39f55b5497ae8b59f5fbce8e35886', + 'info_dict': { + 'id': '0f64ce6', + 'title': 'vl14062007715967', + 'ext': 'mp4', + } + }, { + 'url': 'https://vshare.io/v/0f64ce6/width-650/height-430/1', + 'only_matching': True, + }] + + def _extract_packed(self, webpage): + packed = self._search_regex( + r'(eval\(function.+)', webpage, 'packed code') + unpacked = decode_packed_codes(packed) + digits = self._search_regex(r'\[([\d,]+)\]', unpacked, 'digits') + digits = [int(digit) for digit in digits.split(',')] + key_digit = self._search_regex( + r'fromCharCode\(.+?(\d+)\)}', unpacked, 'key digit') + chars = [chr(d - int(key_digit)) for d in digits] + return ''.join(chars) + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage( + 'https://vshare.io/v/%s/width-650/height-430/1' % video_id, + video_id, headers={'Referer': url}) + + title = self._html_extract_title(webpage) + title = title.split(' - ')[0] + + error = self._html_search_regex( + r'(?s)<div[^>]+\bclass=["\']xxx-error[^>]+>(.+?)</div', webpage, + 'error', default=None) + if error: + raise ExtractorError(error, expected=True) + + info = self._parse_html5_media_entries( + url, '<video>%s</video>' % self._extract_packed(webpage), + video_id)[0] + + info.update({ + 
'id': video_id, + 'title': title, + }) + + return info diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vtm.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vtm.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vtm.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vtm.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vuclip.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vuclip.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vuclip.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vuclip.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vupload.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vupload.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vupload.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vupload.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vvvvid.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vvvvid.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vvvvid.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vvvvid.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vyborymos.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vyborymos.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vyborymos.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vyborymos.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/vzaar.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/vzaar.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/vzaar.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/vzaar.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/wakanim.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wakanim.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/wakanim.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/wakanim.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/walla.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/walla.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/walla.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/walla.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/wasdtv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wasdtv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/wasdtv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/wasdtv.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/washingtonpost.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/washingtonpost.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/washingtonpost.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/washingtonpost.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/wat.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wat.py new file mode 100644 index 0000000..9ea3fdd --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/wat.py @@ -0,0 +1,119 @@ +from .common import InfoExtractor +from ..compat import compat_str +from ..utils import ( + ExtractorError, + int_or_none, + try_get, + unified_strdate, +) + + +class WatIE(InfoExtractor): + _VALID_URL = 
r'(?:wat:|https?://(?:www\.)?wat\.tv/video/.*-)(?P<id>[0-9a-z]+)' + IE_NAME = 'wat.tv' + _TESTS = [ + { + 'url': 'http://www.wat.tv/video/soupe-figues-l-orange-aux-epices-6z1uz_2hvf7_.html', + 'info_dict': { + 'id': '11713067', + 'ext': 'mp4', + 'title': 'Soupe de figues à l\'orange et aux épices', + 'description': 'Retrouvez l\'émission "Petits plats en équilibre", diffusée le 18 août 2014.', + 'upload_date': '20140819', + 'duration': 120, + }, + 'params': { + # m3u8 download + 'skip_download': True, + }, + 'expected_warnings': ['HTTP Error 404'], + 'skip': 'This content is no longer available', + }, + { + 'url': 'http://www.wat.tv/video/gregory-lemarchal-voix-ange-6z1v7_6ygkj_.html', + 'md5': 'b16574df2c3cd1a36ca0098f2a791925', + 'info_dict': { + 'id': '11713075', + 'ext': 'mp4', + 'title': 'Grégory Lemarchal, une voix d\'ange depuis 10 ans (1/3)', + 'upload_date': '20140816', + }, + 'expected_warnings': ["Ce contenu n'est pas disponible pour l'instant."], + 'skip': 'This content is no longer available', + }, + { + 'url': 'wat:14010600', + 'info_dict': { + 'id': '14010600', + 'title': 'Burger Quiz - S03 EP21 avec Eye Haidara, Anne Depétrini, Jonathan Zaccaï et Pio Marmaï', + 'thumbnail': 'https://photos.tf1.fr/1280/720/burger-quiz-11-9adb79-0@1x.jpg', + 'upload_date': '20230819', + 'duration': 2312, + 'ext': 'mp4', + }, + 'params': {'skip_download': 'm3u8'}, + } + ] + _GEO_BYPASS = False + + def _real_extract(self, url): + video_id = self._match_id(url) + video_id = video_id if video_id.isdigit() and len(video_id) > 6 else compat_str(int(video_id, 36)) + + # 'contentv4' is used in the website, but it also returns the related + # videos, we don't need them + # video_data = self._download_json( + # 'http://www.wat.tv/interface/contentv4s/' + video_id, video_id) + video_data = self._download_json( + 'https://mediainfo.tf1.fr/mediainfocombo/' + video_id, + video_id, query={'pver': '5010000'}) + video_info = video_data['media'] + + error_desc = video_info.get('error_desc') + if error_desc: + if video_info.get('error_code') == 'GEOBLOCKED': + self.raise_geo_restricted(error_desc, video_info.get('geoList')) + raise ExtractorError(error_desc, expected=True) + + title = video_info['title'] + + formats = [] + subtitles = {} + + def extract_formats(manifest_urls): + for f, f_url in manifest_urls.items(): + if not f_url: + continue + if f in ('dash', 'mpd'): + fmts, subs = self._extract_mpd_formats_and_subtitles( + f_url.replace('://das-q1.tf1.fr/', '://das-q1-ssl.tf1.fr/'), + video_id, mpd_id='dash', fatal=False) + elif f == 'hls': + fmts, subs = self._extract_m3u8_formats_and_subtitles( + f_url, video_id, 'mp4', + 'm3u8_native', m3u8_id='hls', fatal=False) + else: + continue + formats.extend(fmts) + self._merge_subtitles(subs, target=subtitles) + + delivery = video_data.get('delivery') or {} + extract_formats({delivery.get('format'): delivery.get('url')}) + if not formats: + if delivery.get('drm'): + self.report_drm(video_id) + manifest_urls = self._download_json( + 'http://www.wat.tv/get/webhtml/' + video_id, video_id, fatal=False) + if manifest_urls: + extract_formats(manifest_urls) + + return { + 'id': video_id, + 'title': title, + 'thumbnail': video_info.get('preview'), + 'upload_date': unified_strdate(try_get( + video_data, lambda x: x['mediametrie']['chapters'][0]['estatS4'])), + 'duration': int_or_none(video_info.get('duration')), + 'formats': formats, + 'subtitles': subtitles, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/watchbox.py 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/watchbox.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/watchbox.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/watchbox.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/watchindianporn.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/watchindianporn.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/watchindianporn.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/watchindianporn.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/wdr.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wdr.py new file mode 100644 index 0000000..6767f26 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/wdr.py @@ -0,0 +1,385 @@ +import re + +from .common import InfoExtractor +from ..compat import ( + compat_str, + compat_urlparse, +) +from ..utils import ( + determine_ext, + dict_get, + ExtractorError, + js_to_json, + strip_jsonp, + try_get, + unified_strdate, + update_url_query, + urlhandle_detect_ext, + url_or_none, +) + + +class WDRIE(InfoExtractor): + __API_URL_TPL = '//deviceids-medp.wdr.de/ondemand/%s/%s' + _VALID_URL = r'''(?x)https?:// + (?:deviceids-medp\.wdr\.de/ondemand/\d+/| + kinder\.wdr\.de/(?!mediathek/)[^#?]+-) + (?P<id>\d+)\.(?:js|assetjsonp) + ''' + _GEO_COUNTRIES = ['DE'] + _TESTS = [{ + 'url': 'http://deviceids-medp.wdr.de/ondemand/155/1557833.js', + 'info_dict': { + 'id': 'mdb-1557833', + 'ext': 'mp4', + 'title': 'Biathlon-Staffel verpasst Podest bei Olympia-Generalprobe', + 'upload_date': '20180112', + }, + }] + + def _asset_url(self, wdr_id): + id_len = max(len(wdr_id), 5) + return ''.join(('https:', self.__API_URL_TPL % (wdr_id[:id_len - 4], wdr_id, ), '.js')) + + def _real_extract(self, url): + video_id = self._match_id(url) + + if url.startswith('wdr:'): + video_id = url[4:] + url = self._asset_url(video_id) + + metadata = self._download_json( + url, video_id, transform_source=strip_jsonp) + + is_live = metadata.get('mediaType') == 'live' + + tracker_data = metadata['trackerData'] + title = tracker_data['trackerClipTitle'] + media_resource = metadata['mediaResource'] + + formats = [] + subtitles = {} + + # check if the metadata contains a direct URL to a file + for kind, media in media_resource.items(): + if kind == 'captionsHash': + for ext, url in media.items(): + subtitles.setdefault('de', []).append({ + 'url': url, + 'ext': ext, + }) + continue + + if kind not in ('dflt', 'alt'): + continue + if not isinstance(media, dict): + continue + + for tag_name, medium_url in media.items(): + if tag_name not in ('videoURL', 'audioURL'): + continue + + ext = determine_ext(medium_url) + if ext == 'm3u8': + formats.extend(self._extract_m3u8_formats( + medium_url, video_id, 'mp4', 'm3u8_native', + m3u8_id='hls')) + elif ext == 'f4m': + manifest_url = update_url_query( + medium_url, {'hdcore': '3.2.0', 'plugin': 'aasp-3.2.0.77.18'}) + formats.extend(self._extract_f4m_formats( + manifest_url, video_id, f4m_id='hds', fatal=False)) + elif ext == 'smil': + formats.extend(self._extract_smil_formats( + medium_url, 'stream', fatal=False)) + else: + a_format = { + 'url': medium_url + } + if ext == 'unknown_video': + urlh = self._request_webpage( + medium_url, video_id, note='Determining extension') + ext = urlhandle_detect_ext(urlh) + a_format['ext'] = ext + formats.append(a_format) + + caption_url = media_resource.get('captionURL') + if caption_url: + subtitles['de'] = [{ + 
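# a bare captionURL is always TTML; the captionsHash dict handled below may add other formats such as VTT +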
'url': caption_url, + 'ext': 'ttml', + }] + captions_hash = media_resource.get('captionsHash') + if isinstance(captions_hash, dict): + for ext, format_url in captions_hash.items(): + format_url = url_or_none(format_url) + if not format_url: + continue + subtitles.setdefault('de', []).append({ + 'url': format_url, + 'ext': determine_ext(format_url, None) or ext, + }) + + return { + 'id': tracker_data.get('trackerClipId', video_id), + 'title': title, + 'alt_title': tracker_data.get('trackerClipSubcategory'), + 'formats': formats, + 'subtitles': subtitles, + 'upload_date': unified_strdate(tracker_data.get('trackerClipAirTime')), + 'is_live': is_live, + } + + +class WDRPageIE(WDRIE): # XXX: Do not subclass from concrete IE + _MAUS_REGEX = r'https?://(?:www\.)wdrmaus.de/(?:[^/]+/)*?(?P<maus_id>[^/?#.]+)(?:/?|/index\.php5|\.php5)$' + _PAGE_REGEX = r'/(?:mediathek/)?(?:[^/]+/)*(?P<display_id>[^/]+)\.html' + _VALID_URL = r'https?://(?:www\d?\.)?(?:(?:kinder\.)?wdr\d?|sportschau)\.de' + _PAGE_REGEX + '|' + _MAUS_REGEX + + _TESTS = [ + { + 'url': 'http://www1.wdr.de/mediathek/video/sendungen/doku-am-freitag/video-geheimnis-aachener-dom-100.html', + # HDS download, MD5 is unstable + 'info_dict': { + 'id': 'mdb-1058683', + 'ext': 'flv', + 'display_id': 'doku-am-freitag/video-geheimnis-aachener-dom-100', + 'title': 'Geheimnis Aachener Dom', + 'alt_title': 'Doku am Freitag', + 'upload_date': '20160304', + 'description': 'md5:87be8ff14d8dfd7a7ee46f0299b52318', + 'is_live': False, + 'subtitles': {'de': [{ + 'url': 'http://ondemand-ww.wdr.de/medp/fsk0/105/1058683/1058683_12220974.xml', + 'ext': 'ttml', + }]}, + }, + 'skip': 'HTTP Error 404: Not Found', + }, + { + 'url': 'http://www1.wdr.de/mediathek/audio/wdr3/wdr3-gespraech-am-samstag/audio-schriftstellerin-juli-zeh-100.html', + 'md5': 'f4c1f96d01cf285240f53ea4309663d8', + 'info_dict': { + 'id': 'mdb-1072000', + 'ext': 'mp3', + 'display_id': 'wdr3-gespraech-am-samstag/audio-schriftstellerin-juli-zeh-100', + 'title': 'Schriftstellerin Juli Zeh', + 'alt_title': 'WDR 3 Gespräch am Samstag', + 'upload_date': '20160312', + 'description': 'md5:e127d320bc2b1f149be697ce044a3dd7', + 'is_live': False, + 'subtitles': {} + }, + 'skip': 'HTTP Error 404: Not Found', + }, + { + # FIXME: Asset JSON is directly embedded in webpage + 'url': 'http://www1.wdr.de/mediathek/video/live/index.html', + 'info_dict': { + 'id': 'mdb-2296252', + 'ext': 'mp4', + 'title': r're:^WDR Fernsehen im Livestream (?:\(nur in Deutschland erreichbar\) )?[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$', + 'alt_title': 'WDR Fernsehen Live', + 'upload_date': '20201112', + 'is_live': True, + }, + 'params': { + 'skip_download': True, # m3u8 download + }, + }, + { + 'url': 'http://www1.wdr.de/mediathek/video/sendungen/aktuelle-stunde/aktuelle-stunde-120.html', + 'playlist_mincount': 6, + 'info_dict': { + 'id': 'aktuelle-stunde-120', + }, + }, + { + 'url': 'http://www.wdrmaus.de/aktuelle-sendung/index.php5', + 'info_dict': { + 'id': 'mdb-2627637', + 'ext': 'mp4', + 'upload_date': 're:^[0-9]{8}$', + 'title': 're:^Die Sendung (?:mit der Maus )?vom [0-9.]{10}$', + }, + 'skip': 'The id changes from week to week because of the new episode' + }, + { + 'url': 'http://www.wdrmaus.de/filme/sachgeschichten/achterbahn.php5', + 'md5': '803138901f6368ee497b4d195bb164f2', + 'info_dict': { + 'id': 'mdb-186083', + 'ext': 'mp4', + 'upload_date': '20130919', + 'title': 'Sachgeschichte - Achterbahn ', + }, + 'skip': 'HTTP Error 404: Not Found', + }, + { + 'url': 
'http://www1.wdr.de/radio/player/radioplayer116~_layout-popupVersion.html', + # Live stream, MD5 unstable + 'info_dict': { + 'id': 'mdb-869971', + 'ext': 'mp4', + 'title': r're:^COSMO Livestream [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$', + 'alt_title': 'COSMO Livestream', + 'live_status': 'is_live', + 'upload_date': '20160101', + }, + 'params': { + 'skip_download': True, # m3u8 download + } + }, + { + 'url': 'http://www.sportschau.de/handballem2018/handball-nationalmannschaft-em-stolperstein-vorrunde-100.html', + 'info_dict': { + 'id': 'mdb-1556012', + 'ext': 'mp4', + 'title': 'DHB-Vizepräsident Bob Hanning - "Die Weltspitze ist extrem breit"', + 'upload_date': '20180111', + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'HTTP Error 404: Not Found', + }, + { + 'url': 'http://www.sportschau.de/handballem2018/audio-vorschau---die-handball-em-startet-mit-grossem-favoritenfeld-100.html', + 'only_matching': True, + }, + { + 'url': 'https://kinder.wdr.de/tv/die-sendung-mit-dem-elefanten/av/video-folge---astronaut-100.html', + 'only_matching': True, + }, + { + 'url': 'https://www1.wdr.de/mediathek/video/sendungen/rockpalast/video-baroness---freak-valley-festival--100.html', + 'info_dict': { + 'id': 'mdb-2741028', + 'ext': 'mp4', + 'title': 'Baroness - Freak Valley Festival 2022', + 'alt_title': 'Rockpalast', + 'upload_date': '20220725', + }, + } + ] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + display_id = dict_get(mobj.groupdict(), ('display_id', 'maus_id'), 'wdrmaus') + webpage = self._download_webpage(url, display_id) + + entries = [] + + # Article with several videos + + # for wdr.de the data-extension-ard is in a tag with the class "mediaLink" + # for wdr.de radio players, in a tag with the class "wdrrPlayerPlayBtn" + # for wdrmaus, in a tag with the class "videoButton" (previously a link + # to the page in a multiline "videoLink"-tag) + for mobj in re.finditer( + r'''(?sx)class= + (?: + (["\'])(?:mediaLink|wdrrPlayerPlayBtn|videoButton)\b.*?\1[^>]+| + (["\'])videoLink\b.*?\2[\s]*>\n[^\n]* + )data-extension(?:-ard)?=(["\'])(?P<data>(?:(?!\3).)+)\3 + ''', webpage): + media_link_obj = self._parse_json( + mobj.group('data'), display_id, transform_source=js_to_json, + fatal=False) + if not media_link_obj: + continue + jsonp_url = try_get( + media_link_obj, lambda x: x['mediaObj']['url'], compat_str) + if jsonp_url: + # metadata, or player JS with ['ref'] giving WDR id, or just media, perhaps + clip_id = media_link_obj['mediaObj'].get('ref') + if jsonp_url.endswith('.assetjsonp'): + asset = self._download_json( + jsonp_url, display_id, fatal=False, transform_source=strip_jsonp) + clip_id = try_get(asset, lambda x: x['trackerData']['trackerClipId'], compat_str) + if clip_id: + jsonp_url = self._asset_url(clip_id[4:]) + entries.append(self.url_result(jsonp_url, ie=WDRIE.ie_key())) + + # Playlist (e.g. 
https://www1.wdr.de/mediathek/video/sendungen/aktuelle-stunde/aktuelle-stunde-120.html) + if not entries: + entries = [ + self.url_result( + compat_urlparse.urljoin(url, mobj.group('href')), + ie=WDRPageIE.ie_key()) + for mobj in re.finditer( + r'<a[^>]+\bhref=(["\'])(?P<href>(?:(?!\1).)+)\1[^>]+\bdata-extension(?:-ard)?=', + webpage) if re.match(self._PAGE_REGEX, mobj.group('href')) + ] + + return self.playlist_result(entries, playlist_id=display_id) + + +class WDRElefantIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)wdrmaus\.de/elefantenseite/#(?P<id>.+)' + _TEST = { + 'url': 'http://www.wdrmaus.de/elefantenseite/#elefantenkino_wippe', + # adaptive stream: unstable file MD5 + 'info_dict': { + 'title': 'Wippe', + 'id': 'mdb-1198320', + 'ext': 'mp4', + 'age_limit': None, + 'upload_date': '20071003' + }, + } + + def _real_extract(self, url): + display_id = self._match_id(url) + + # Table of Contents seems to always be at this address, so fetch it directly. + # The website fetches configurationJS.php5, which links to tableOfContentsJS.php5. + table_of_contents = self._download_json( + 'https://www.wdrmaus.de/elefantenseite/data/tableOfContentsJS.php5', + display_id) + if display_id not in table_of_contents: + raise ExtractorError( + 'No entry in site\'s table of contents for this URL. ' + 'Is the fragment part of the URL (after the #) correct?', + expected=True) + xml_metadata_path = table_of_contents[display_id]['xmlPath'] + xml_metadata = self._download_xml( + 'https://www.wdrmaus.de/elefantenseite/' + xml_metadata_path, + display_id) + zmdb_url_element = xml_metadata.find('./movie/zmdb_url') + if zmdb_url_element is None: + raise ExtractorError( + '%s is not a video' % display_id, expected=True) + return self.url_result(zmdb_url_element.text, ie=WDRIE.ie_key()) + + +class WDRMobileIE(InfoExtractor): + _VALID_URL = r'''(?x) + https?://mobile-ondemand\.wdr\.de/ + .*?/fsk(?P<age_limit>[0-9]+) + /[0-9]+/[0-9]+/ + (?P<id>[0-9]+)_(?P<title>[0-9]+)''' + IE_NAME = 'wdr:mobile' + _WORKING = False # no such domain + _TEST = { + 'url': 'http://mobile-ondemand.wdr.de/CMS2010/mdb/ondemand/weltweit/fsk0/42/421735/421735_4283021.mp4', + 'info_dict': { + 'title': '4283021', + 'id': '421735', + 'ext': 'mp4', + 'age_limit': 0, + }, + 'skip': 'Problems with loading data.' 
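+ # every field in this test is derived from the URL alone by _real_extract below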
+ } + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + return { + 'id': mobj.group('id'), + 'title': mobj.group('title'), + 'age_limit': int(mobj.group('age_limit')), + 'url': url, + 'http_headers': { + 'User-Agent': 'mobile', + }, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/webcamerapl.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/webcamerapl.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/webcamerapl.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/webcamerapl.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/webcaster.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/webcaster.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/webcaster.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/webcaster.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/webofstories.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/webofstories.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/webofstories.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/webofstories.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/weibo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/weibo.py new file mode 100644 index 0000000..b0c3052 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/weibo.py @@ -0,0 +1,241 @@ +import random +import itertools +import urllib.parse + +from .common import InfoExtractor +from ..utils import ( + int_or_none, + make_archive_id, + mimetype2ext, + parse_resolution, + str_or_none, + strip_jsonp, + traverse_obj, + url_or_none, + urlencode_postdata, + urljoin, +) + + +class WeiboBaseIE(InfoExtractor): + def _update_visitor_cookies(self, video_id): + visitor_data = self._download_json( + 'https://passport.weibo.com/visitor/genvisitor', video_id, + note='Generating first-visit guest request', + transform_source=strip_jsonp, + data=urlencode_postdata({ + 'cb': 'gen_callback', + 'fp': '{"os":"2","browser":"Gecko57,0,0,0","fonts":"undefined","screenInfo":"1440*900*24","plugins":""}', + })) + + self._download_webpage( + 'https://passport.weibo.com/visitor/visitor', video_id, + note='Running first-visit callback to get guest cookies', + query={ + 'a': 'incarnate', + 't': visitor_data['data']['tid'], + 'w': 2, + 'c': '%03d' % visitor_data['data']['confidence'], + 'cb': 'cross_domain', + 'from': 'weibo', + '_rand': random.random(), + }) + + def _weibo_download_json(self, url, video_id, *args, fatal=True, note='Downloading JSON metadata', **kwargs): + webpage, urlh = self._download_webpage_handle(url, video_id, *args, fatal=fatal, note=note, **kwargs) + if urllib.parse.urlparse(urlh.url).netloc == 'passport.weibo.com': + self._update_visitor_cookies(video_id) + webpage = self._download_webpage(url, video_id, *args, fatal=fatal, note=note, **kwargs) + return self._parse_json(webpage, video_id, fatal=fatal) + + def _extract_formats(self, video_info): + media_info = traverse_obj(video_info, ('page_info', 'media_info')) + formats = traverse_obj(media_info, ( + 'playback_list', lambda _, v: url_or_none(v['play_info']['url']), 'play_info', { + 'url': 'url', + 'format': ('quality_desc', {str}), + 'format_id': ('label', {str}), + 'ext': ('mime', {mimetype2ext}), + 'tbr': ('bitrate', {int_or_none}, {lambda x: x or None}), + 'vcodec': ('video_codecs', {str}), + 'fps': ('fps', {int_or_none}), + 'width': ('width', {int_or_none}), + 
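# a {callable} step in these traverse_obj paths coerces the matched value before it is stored +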
'height': ('height', {int_or_none}), + 'filesize': ('size', {int_or_none}), + 'acodec': ('audio_codecs', {str}), + 'asr': ('audio_sample_rate', {int_or_none}), + 'audio_channels': ('audio_channels', {int_or_none}), + })) + if not formats: # fallback, should be barely used + for url in set(traverse_obj(media_info, (..., {url_or_none}))): + if 'label=' in url: # filter out non-video urls + format_id, resolution = self._search_regex( + r'label=(\w+)&template=(\d+x\d+)', url, 'format info', + group=(1, 2), default=(None, None)) + formats.append({ + 'url': url, + 'format_id': format_id, + **parse_resolution(resolution), + **traverse_obj(media_info, ( + 'video_details', lambda _, v: v['label'].startswith(format_id), { + 'size': ('size', {int_or_none}), + 'tbr': ('bitrate', {int_or_none}), + } + ), get_all=False), + }) + return formats + + def _parse_video_info(self, video_info, video_id=None): + return { + 'id': video_id, + 'extractor_key': WeiboIE.ie_key(), + 'extractor': WeiboIE.IE_NAME, + 'formats': self._extract_formats(video_info), + 'http_headers': {'Referer': 'https://weibo.com/'}, + '_old_archive_ids': [make_archive_id('WeiboMobile', video_id)], + **traverse_obj(video_info, { + 'id': (('id', 'id_str', 'mid'), {str_or_none}), + 'display_id': ('mblogid', {str_or_none}), + 'title': ('page_info', 'media_info', ('video_title', 'kol_title', 'name'), {str}, {lambda x: x or None}), + 'description': ('text_raw', {str}), + 'duration': ('page_info', 'media_info', 'duration', {int_or_none}), + 'timestamp': ('page_info', 'media_info', 'video_publish_time', {int_or_none}), + 'thumbnail': ('page_info', 'page_pic', {url_or_none}), + 'uploader': ('user', 'screen_name', {str}), + 'uploader_id': ('user', ('id', 'id_str'), {str_or_none}), + 'uploader_url': ('user', 'profile_url', {lambda x: urljoin('https://weibo.com/', x)}), + 'view_count': ('page_info', 'media_info', 'online_users_number', {int_or_none}), + 'like_count': ('attitudes_count', {int_or_none}), + 'repost_count': ('reposts_count', {int_or_none}), + }, get_all=False), + 'tags': traverse_obj(video_info, ('topic_struct', ..., 'topic_title', {str})) or None, + } + + +class WeiboIE(WeiboBaseIE): + _VALID_URL = r'https?://(?:m\.weibo\.cn/status|(?:www\.)?weibo\.com/\d+)/(?P<id>[a-zA-Z0-9]+)' + _TESTS = [{ + 'url': 'https://weibo.com/7827771738/N4xlMvjhI', + 'info_dict': { + 'id': '4910815147462302', + 'ext': 'mp4', + 'display_id': 'N4xlMvjhI', + 'title': '【睡前消息暑假版第一期:拉泰国一把 对中国有好处】', + 'description': 'md5:e2637a7673980d68694ea7c43cf12a5f', + 'duration': 918, + 'timestamp': 1686312819, + 'upload_date': '20230609', + 'thumbnail': r're:https://.*\.jpg', + 'uploader': '睡前视频基地', + 'uploader_id': '7827771738', + 'uploader_url': 'https://weibo.com/u/7827771738', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + 'tags': ['泰国大选远进党获胜', '睡前消息', '暑期版'], + }, + }, { + 'url': 'https://m.weibo.cn/status/4189191225395228', + 'info_dict': { + 'id': '4189191225395228', + 'ext': 'mp4', + 'display_id': 'FBqgOmDxO', + 'title': '柴犬柴犬的秒拍视频', + 'description': 'md5:80f461ab5cdae6bbdb70efbf5a1db24f', + 'duration': 53, + 'timestamp': 1514264429, + 'upload_date': '20171226', + 'thumbnail': r're:https://.*\.jpg', + 'uploader': '柴犬柴犬', + 'uploader_id': '5926682210', + 'uploader_url': 'https://weibo.com/u/5926682210', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + } + }, { + 'url': 'https://weibo.com/0/4224132150961381', + 'note': 'no playback_list example', + 'only_matching': True, + }] + + def
_real_extract(self, url): + video_id = self._match_id(url) + + return self._parse_video_info(self._weibo_download_json( + f'https://weibo.com/ajax/statuses/show?id={video_id}', video_id)) + + +class WeiboVideoIE(WeiboBaseIE): + _VALID_URL = r'https?://(?:www\.)?weibo\.com/tv/show/(?P<id>\d+:\d+)' + _TESTS = [{ + 'url': 'https://weibo.com/tv/show/1034:4797699866951785?from=old_pc_videoshow', + 'info_dict': { + 'id': '4797700463137878', + 'ext': 'mp4', + 'display_id': 'LEZDodaiW', + 'title': '呃,稍微了解了一下靡烟miya,感觉这东西也太二了', + 'description': '呃,稍微了解了一下靡烟miya,感觉这东西也太二了 http://t.cn/A6aerGsM ​​​', + 'duration': 76, + 'timestamp': 1659344278, + 'upload_date': '20220801', + 'thumbnail': r're:https://.*\.jpg', + 'uploader': '君子爱财陈平安', + 'uploader_id': '3905382233', + 'uploader_url': 'https://weibo.com/u/3905382233', + 'view_count': int, + 'like_count': int, + 'repost_count': int, + } + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + post_data = f'data={{"Component_Play_Playinfo":{{"oid":"{video_id}"}}}}'.encode() + video_info = self._weibo_download_json( + f'https://weibo.com/tv/api/component?page=%2Ftv%2Fshow%2F{video_id.replace(":", "%3A")}', + video_id, headers={'Referer': url}, data=post_data)['data']['Component_Play_Playinfo'] + return self.url_result(f'https://weibo.com/0/{video_info["mid"]}', WeiboIE) + + +class WeiboUserIE(WeiboBaseIE): + _VALID_URL = r'https?://(?:www\.)?weibo\.com/u/(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://weibo.com/u/2066652961?tabtype=video', + 'info_dict': { + 'id': '2066652961', + 'title': '萧影殿下的视频', + 'description': '萧影殿下的全部视频', + 'uploader': '萧影殿下', + }, + 'playlist_mincount': 195, + }] + + def _fetch_page(self, uid, cursor=0, page=1): + return self._weibo_download_json( + 'https://weibo.com/ajax/profile/getWaterFallContent', + uid, note=f'Downloading videos page {page}', + query={'uid': uid, 'cursor': cursor})['data'] + + def _entries(self, uid, first_page): + cursor = 0 + for page in itertools.count(1): + response = first_page if page == 1 else self._fetch_page(uid, cursor, page) + for video_info in traverse_obj(response, ('list', ..., {dict})): + yield self._parse_video_info(video_info) + cursor = response.get('next_cursor') + if (int_or_none(cursor) or -1) < 0: + break + + def _real_extract(self, url): + uid = self._match_id(url) + first_page = self._fetch_page(uid) + uploader = traverse_obj(first_page, ('list', ..., 'user', 'screen_name', {str}), get_all=False) + metainfo = { + 'title': f'{uploader}的视频', + 'description': f'{uploader}的全部视频', + 'uploader': uploader, + } if uploader else {} + + return self.playlist_result(self._entries(uid, first_page), uid, **metainfo) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/weiqitv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/weiqitv.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/weiqitv.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/weiqitv.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/weverse.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/weverse.py new file mode 100644 index 0000000..47f3680 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/weverse.py @@ -0,0 +1,608 @@ +import base64 +import hashlib +import hmac +import itertools +import json +import re +import time +import urllib.parse +import uuid + +from .common import InfoExtractor +from .naver import NaverBaseIE +from .youtube import
YoutubeIE +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + UserNotLive, + float_or_none, + int_or_none, + str_or_none, + traverse_obj, + try_call, + update_url_query, + url_or_none, +) + + +class WeverseBaseIE(InfoExtractor): + _NETRC_MACHINE = 'weverse' + _ACCOUNT_API_BASE = 'https://accountapi.weverse.io/web/api/v2' + _API_HEADERS = { + 'Referer': 'https://weverse.io/', + 'WEV-device-Id': str(uuid.uuid4()), + } + + def _perform_login(self, username, password): + if self._API_HEADERS.get('Authorization'): + return + + headers = { + 'x-acc-app-secret': '5419526f1c624b38b10787e5c10b2a7a', + 'x-acc-app-version': '2.2.6', + 'x-acc-language': 'en', + 'x-acc-service-id': 'weverse', + 'x-acc-trace-id': str(uuid.uuid4()), + 'x-clog-user-device-id': str(uuid.uuid4()), + } + check_username = self._download_json( + f'{self._ACCOUNT_API_BASE}/signup/email/status', None, + note='Checking username', query={'email': username}, headers=headers) + if not check_username.get('hasPassword'): + raise ExtractorError('Invalid username provided', expected=True) + + headers['content-type'] = 'application/json' + try: + auth = self._download_json( + f'{self._ACCOUNT_API_BASE}/auth/token/by-credentials', None, data=json.dumps({ + 'email': username, + 'password': password, + }, separators=(',', ':')).encode(), headers=headers, note='Logging in') + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + raise ExtractorError('Invalid password provided', expected=True) + raise + + WeverseBaseIE._API_HEADERS['Authorization'] = f'Bearer {auth["accessToken"]}' + + def _real_initialize(self): + if self._API_HEADERS.get('Authorization'): + return + + token = try_call(lambda: self._get_cookies('https://weverse.io/')['we2_access_token'].value) + if token: + WeverseBaseIE._API_HEADERS['Authorization'] = f'Bearer {token}' + + def _call_api(self, ep, video_id, data=None, note='Downloading API JSON'): + # Ref: https://ssl.pstatic.net/static/wevweb/2_3_2_11101725/public/static/js/2488.a09b41ff.chunk.js + # From https://ssl.pstatic.net/static/wevweb/2_3_2_11101725/public/static/js/main.e206f7c1.js: + key = b'1b9cb6378d959b45714bec49971ade22e6e24e42' + api_path = update_url_query(ep, { + 'appId': 'be4d79eb8fc7bd008ee82c8ec4ff6fd4', + 'language': 'en', + 'platform': 'WEB', + 'wpf': 'pc', + }) + wmsgpad = int(time.time() * 1000) + wmd = base64.b64encode(hmac.HMAC( + key, f'{api_path[:255]}{wmsgpad}'.encode(), digestmod=hashlib.sha1).digest()).decode() + headers = {'Content-Type': 'application/json'} if data else {} + try: + return self._download_json( + f'https://global.apis.naver.com/weverse/wevweb{api_path}', video_id, note=note, + data=data, headers={**self._API_HEADERS, **headers}, query={ + 'wmsgpad': wmsgpad, + 'wmd': wmd, + }) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status == 401: + self.raise_login_required( + 'Session token has expired. 
Log in again or refresh cookies in browser') + elif isinstance(e.cause, HTTPError) and e.cause.status == 403: + if 'Authorization' in self._API_HEADERS: + raise ExtractorError('Your account does not have access to this content', expected=True) + self.raise_login_required() + raise + + def _call_post_api(self, video_id): + path = '' if 'Authorization' in self._API_HEADERS else '/preview' + return self._call_api(f'/post/v1.0/post-{video_id}{path}?fieldSet=postV1', video_id) + + def _get_community_id(self, channel): + return str(self._call_api( + f'/community/v1.0/communityIdUrlPathByUrlPathArtistCode?keyword={channel}', + channel, note='Fetching community ID')['communityId']) + + def _get_formats(self, data, video_id): + formats = traverse_obj(data, ('videos', 'list', lambda _, v: url_or_none(v['source']), { + 'url': 'source', + 'width': ('encodingOption', 'width', {int_or_none}), + 'height': ('encodingOption', 'height', {int_or_none}), + 'vcodec': 'type', + 'vbr': ('bitrate', 'video', {int_or_none}), + 'abr': ('bitrate', 'audio', {int_or_none}), + 'filesize': ('size', {int_or_none}), + 'format_id': ('encodingOption', 'id', {str_or_none}), + })) + + for stream in traverse_obj(data, ('streams', lambda _, v: v['type'] == 'HLS' and url_or_none(v['source']))): + query = {} + for param in traverse_obj(stream, ('keys', lambda _, v: v['type'] == 'param' and v['name'])): + query[param['name']] = param.get('value', '') + fmts = self._extract_m3u8_formats( + stream['source'], video_id, 'mp4', m3u8_id='hls', fatal=False, query=query) + if query: + for fmt in fmts: + fmt['url'] = update_url_query(fmt['url'], query) + fmt['extra_param_to_segment_url'] = urllib.parse.urlencode(query) + formats.extend(fmts) + + return formats + + def _get_subs(self, caption_url): + subs_ext_re = r'\.(?:ttml|vtt)' + replace_ext = lambda x, y: re.sub(subs_ext_re, y, x) + if re.search(subs_ext_re, caption_url): + return [replace_ext(caption_url, '.ttml'), replace_ext(caption_url, '.vtt')] + return [caption_url] + + def _parse_post_meta(self, metadata): + return traverse_obj(metadata, { + 'title': ((('extension', 'mediaInfo', 'title'), 'title'), {str}), + 'description': ((('extension', 'mediaInfo', 'body'), 'body'), {str}), + 'uploader': ('author', 'profileName', {str}), + 'uploader_id': ('author', 'memberId', {str}), + 'creator': ('community', 'communityName', {str}), + 'channel_id': (('community', 'author'), 'communityId', {str_or_none}), + 'duration': ('extension', 'video', 'playTime', {float_or_none}), + 'timestamp': ('publishedAt', {lambda x: int_or_none(x, 1000)}), + 'release_timestamp': ('extension', 'video', 'onAirStartAt', {lambda x: int_or_none(x, 1000)}), + 'thumbnail': ('extension', (('mediaInfo', 'thumbnail', 'url'), ('video', 'thumb')), {url_or_none}), + 'view_count': ('extension', 'video', 'playCount', {int_or_none}), + 'like_count': ('extension', 'video', 'likeCount', {int_or_none}), + 'comment_count': ('commentCount', {int_or_none}), + }, get_all=False) + + def _extract_availability(self, data): + return self._availability(**traverse_obj(data, ((('extension', 'video'), None), { + 'needs_premium': 'paid', + 'needs_subscription': 'membershipOnly', + }), get_all=False, expected_type=bool), needs_auth=True) + + def _extract_live_status(self, data): + data = traverse_obj(data, ('extension', 'video', {dict})) or {} + if data.get('type') == 'LIVE': + return traverse_obj({ + 'ONAIR': 'is_live', + 'DONE': 'post_live', + 'STANDBY': 'is_upcoming', + 'DELAY': 'is_upcoming', + }, (data.get('status'), {str})) or 'is_live' + 
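# anything that is not a LIVE-type post is a VOD; liveToVod marks recordings of past livestreams +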
return 'was_live' if data.get('liveToVod') else 'not_live' + + +class WeverseIE(WeverseBaseIE): + _VALID_URL = r'https?://(?:www\.|m\.)?weverse\.io/(?P<artist>[^/?#]+)/live/(?P<id>[\d-]+)' + _TESTS = [{ + 'url': 'https://weverse.io/billlie/live/0-107323480', + 'md5': '1fa849f00181eef9100d3c8254c47979', + 'info_dict': { + 'id': '0-107323480', + 'ext': 'mp4', + 'title': '행복한 평이루💜', + 'description': '', + 'uploader': 'Billlie', + 'uploader_id': '5ae14aed7b7cdc65fa87c41fe06cc936', + 'channel': 'billlie', + 'channel_id': '72', + 'channel_url': 'https://weverse.io/billlie', + 'creator': 'Billlie', + 'timestamp': 1666262062, + 'upload_date': '20221020', + 'release_timestamp': 1666262058, + 'release_date': '20221020', + 'duration': 3102, + 'thumbnail': r're:^https?://.*\.jpe?g$', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'availability': 'needs_auth', + 'live_status': 'was_live', + }, + }, { + 'url': 'https://weverse.io/lesserafim/live/2-102331763', + 'md5': 'e46125c08b13a6c8c1f4565035cca987', + 'info_dict': { + 'id': '2-102331763', + 'ext': 'mp4', + 'title': '🎂김채원 생신🎂', + 'description': '🎂김채원 생신🎂', + 'uploader': 'LE SSERAFIM ', + 'uploader_id': 'd26ddc1e258488a0a2b795218d14d59d', + 'channel': 'lesserafim', + 'channel_id': '47', + 'channel_url': 'https://weverse.io/lesserafim', + 'creator': 'LE SSERAFIM', + 'timestamp': 1659353400, + 'upload_date': '20220801', + 'release_timestamp': 1659353400, + 'release_date': '20220801', + 'duration': 3006, + 'thumbnail': r're:^https?://.*\.jpe?g$', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'availability': 'needs_auth', + 'live_status': 'was_live', + 'subtitles': { + 'id_ID': 'count:2', + 'en_US': 'count:2', + 'es_ES': 'count:2', + 'vi_VN': 'count:2', + 'th_TH': 'count:2', + 'zh_CN': 'count:2', + 'zh_TW': 'count:2', + 'ja_JP': 'count:2', + 'ko_KR': 'count:2', + }, + }, + }, { + 'url': 'https://weverse.io/treasure/live/2-117230416', + 'info_dict': { + 'id': '2-117230416', + 'ext': 'mp4', + 'title': r're:스껄도려님 첫 스무살 생파🦋', + 'description': '', + 'uploader': 'TREASURE', + 'uploader_id': '77eabbc449ca37f7970054a136f60082', + 'channel': 'treasure', + 'channel_id': '20', + 'channel_url': 'https://weverse.io/treasure', + 'creator': 'TREASURE', + 'timestamp': 1680667651, + 'upload_date': '20230405', + 'release_timestamp': 1680667639, + 'release_date': '20230405', + 'thumbnail': r're:^https?://.*\.jpe?g$', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'availability': 'needs_auth', + 'live_status': 'is_live', + }, + 'skip': 'Livestream has ended', + }] + + def _real_extract(self, url): + channel, video_id = self._match_valid_url(url).group('artist', 'id') + post = self._call_post_api(video_id) + api_video_id = post['extension']['video']['videoId'] + availability = self._extract_availability(post) + live_status = self._extract_live_status(post) + video_info, formats = {}, [] + + if live_status == 'is_upcoming': + self.raise_no_formats('Livestream has not yet started', expected=True) + + elif live_status == 'is_live': + video_info = self._call_api( + f'/video/v1.0/lives/{api_video_id}/playInfo?preview.format=json&preview.version=v2', + video_id, note='Downloading live JSON') + playback = self._parse_json(video_info['lipPlayback'], video_id) + m3u8_url = traverse_obj(playback, ( + 'media', lambda _, v: v['protocol'] == 'HLS', 'path', {url_or_none}), get_all=False) + formats = self._extract_m3u8_formats(m3u8_url, video_id, 'mp4', m3u8_id='hls', live=True) + +
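# 'post_live': the broadcast has ended but its VOD has not been published yet +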
elif live_status == 'post_live': + if availability in ('premium_only', 'subscriber_only'): + self.report_drm(video_id) + self.raise_no_formats( + 'Livestream has ended and downloadable VOD is not available', expected=True) + + else: + infra_video_id = post['extension']['video']['infraVideoId'] + in_key = self._call_api( + f'/video/v1.0/vod/{api_video_id}/inKey?preview=false', video_id, + data=b'{}', note='Downloading VOD API key')['inKey'] + + video_info = self._download_json( + f'https://global.apis.naver.com/rmcnmv/rmcnmv/vod/play/v2.0/{infra_video_id}', + video_id, note='Downloading VOD JSON', query={ + 'key': in_key, + 'sid': traverse_obj(post, ('extension', 'video', 'serviceId')) or '2070', + 'pid': str(uuid.uuid4()), + 'nonce': int(time.time() * 1000), + 'devt': 'html5_pc', + 'prv': 'Y' if post.get('membershipOnly') else 'N', + 'aup': 'N', + 'stpb': 'N', + 'cpl': 'en', + 'env': 'prod', + 'lc': 'en', + 'adi': '[{"adSystem":"null"}]', + 'adu': '/', + }) + + formats = self._get_formats(video_info, video_id) + has_drm = traverse_obj(video_info, ('meta', 'provider', 'name', {str.lower})) == 'drm' + if has_drm and formats: + self.report_warning( + 'Requested content is DRM-protected, only a 30-second preview is available', video_id) + elif has_drm and not formats: + self.report_drm(video_id) + + return { + 'id': video_id, + 'channel': channel, + 'channel_url': f'https://weverse.io/{channel}', + 'formats': formats, + 'availability': availability, + 'live_status': live_status, + **self._parse_post_meta(post), + **NaverBaseIE.process_subtitles(video_info, self._get_subs), + } + + +class WeverseMediaIE(WeverseBaseIE): + _VALID_URL = r'https?://(?:www\.|m\.)?weverse\.io/(?P<artist>[^/?#]+)/media/(?P<id>[\d-]+)' + _TESTS = [{ + 'url': 'https://weverse.io/billlie/media/4-116372884', + 'md5': '8efc9cfd61b2f25209eb1a5326314d28', + 'info_dict': { + 'id': 'e-C9wLSQs6o', + 'ext': 'mp4', + 'title': 'Billlie | \'EUNOIA\' Performance Video (heartbeat ver.)', + 'description': 'md5:6181caaf2a2397bca913ffe368c104e5', + 'channel': 'Billlie', + 'channel_id': 'UCyc9sUCxELTDK9vELO5Fzeg', + 'channel_url': 'https://www.youtube.com/channel/UCyc9sUCxELTDK9vELO5Fzeg', + 'uploader': 'Billlie', + 'uploader_id': '@Billlie', + 'uploader_url': 'http://www.youtube.com/@Billlie', + 'upload_date': '20230403', + 'duration': 211, + 'age_limit': 0, + 'playable_in_embed': True, + 'live_status': 'not_live', + 'availability': 'public', + 'view_count': int, + 'comment_count': int, + 'like_count': int, + 'channel_follower_count': int, + 'thumbnail': 'https://i.ytimg.com/vi/e-C9wLSQs6o/maxresdefault.jpg', + 'categories': ['Entertainment'], + 'tags': 'count:7', + }, + }, { + 'url': 'https://weverse.io/billlie/media/3-102914520', + 'md5': '031551fcbd716bc4f080cb6174a43d8a', + 'info_dict': { + 'id': '3-102914520', + 'ext': 'mp4', + 'title': 'From. 
SUHYEON🌸', + 'description': 'Billlie 멤버별 독점 영상 공개💙💜', + 'uploader': 'Billlie_official', + 'uploader_id': 'f569c6e92f7eaffef0a395037dcaa54f', + 'channel': 'billlie', + 'channel_id': '72', + 'channel_url': 'https://weverse.io/billlie', + 'creator': 'Billlie', + 'timestamp': 1662174000, + 'upload_date': '20220903', + 'release_timestamp': 1662174000, + 'release_date': '20220903', + 'duration': 17.0, + 'thumbnail': r're:^https?://.*\.jpe?g$', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'availability': 'needs_auth', + 'live_status': 'not_live', + }, + }] + + def _real_extract(self, url): + channel, video_id = self._match_valid_url(url).group('artist', 'id') + post = self._call_post_api(video_id) + media_type = traverse_obj(post, ('extension', 'mediaInfo', 'mediaType', {str.lower})) + youtube_id = traverse_obj(post, ('extension', 'youtube', 'youtubeVideoId', {str})) + + if media_type == 'vod': + return self.url_result(f'https://weverse.io/{channel}/live/{video_id}', WeverseIE) + elif media_type == 'youtube' and youtube_id: + return self.url_result(youtube_id, YoutubeIE) + elif media_type == 'image': + self.raise_no_formats('No video content found in webpage', expected=True) + elif media_type: + raise ExtractorError(f'Unsupported media type "{media_type}"') + + self.raise_no_formats('No video content found in webpage') + + +class WeverseMomentIE(WeverseBaseIE): + _VALID_URL = r'https?://(?:www\.|m\.)?weverse\.io/(?P<artist>[^/?#]+)/moment/(?P<uid>[\da-f]+)/post/(?P<id>[\d-]+)' + _TESTS = [{ + 'url': 'https://weverse.io/secretnumber/moment/66a07e164b56a696ee71c99315ffe27b/post/1-117229444', + 'md5': '87733ac19a54081b7dfc2442036d282b', + 'info_dict': { + 'id': '1-117229444', + 'ext': 'mp4', + 'title': '今日もめっちゃいい天気☀️🌤️', + 'uploader': '레아', + 'uploader_id': '66a07e164b56a696ee71c99315ffe27b', + 'channel': 'secretnumber', + 'channel_id': '56', + 'creator': 'SECRET NUMBER', + 'duration': 10, + 'upload_date': '20230405', + 'timestamp': 1680653968, + 'thumbnail': r're:^https?://.*\.jpe?g$', + 'like_count': int, + 'comment_count': int, + 'availability': 'needs_auth', + }, + 'skip': 'Moment has expired', + }] + + def _real_extract(self, url): + channel, uploader_id, video_id = self._match_valid_url(url).group('artist', 'uid', 'id') + post = self._call_post_api(video_id) + api_video_id = post['extension']['moment']['video']['videoId'] + video_info = self._call_api( + f'/cvideo/v1.0/cvideo-{api_video_id}/playInfo?videoId={api_video_id}', video_id, + note='Downloading moment JSON')['playInfo'] + + return { + 'id': video_id, + 'channel': channel, + 'uploader_id': uploader_id, + 'formats': self._get_formats(video_info, video_id), + 'availability': self._extract_availability(post), + **traverse_obj(post, { + 'title': ((('extension', 'moment', 'body'), 'body'), {str}), + 'uploader': ('author', 'profileName', {str}), + 'creator': (('community', 'author'), 'communityName', {str}), + 'channel_id': (('community', 'author'), 'communityId', {str_or_none}), + 'duration': ('extension', 'moment', 'video', 'uploadInfo', 'playTime', {float_or_none}), + 'timestamp': ('publishedAt', {lambda x: int_or_none(x, 1000)}), + 'thumbnail': ('extension', 'moment', 'video', 'uploadInfo', 'imageUrl', {url_or_none}), + 'like_count': ('emotionCount', {int_or_none}), + 'comment_count': ('commentCount', {int_or_none}), + }, get_all=False), + **NaverBaseIE.process_subtitles(video_info, self._get_subs), + } + + +class WeverseTabBaseIE(WeverseBaseIE): + _ENDPOINT = None + _PATH = None + _QUERY = {} +
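# the concrete live/media tab extractors below override these four attributes +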
_RESULT_IE = None + + def _entries(self, channel_id, channel, first_page): + query = self._QUERY.copy() + + for page in itertools.count(1): + posts = first_page if page == 1 else self._call_api( + update_url_query(self._ENDPOINT % channel_id, query), channel, + note=f'Downloading {self._PATH} tab page {page}') + + for post in traverse_obj(posts, ('data', lambda _, v: v['postId'])): + yield self.url_result( + f'https://weverse.io/{channel}/{self._PATH}/{post["postId"]}', + self._RESULT_IE, post['postId'], **self._parse_post_meta(post), + channel=channel, channel_url=f'https://weverse.io/{channel}', + availability=self._extract_availability(post), + live_status=self._extract_live_status(post)) + + query['after'] = traverse_obj(posts, ('paging', 'nextParams', 'after', {str})) + if not query['after']: + break + + def _real_extract(self, url): + channel = self._match_id(url) + channel_id = self._get_community_id(channel) + + first_page = self._call_api( + update_url_query(self._ENDPOINT % channel_id, self._QUERY), channel, + note=f'Downloading {self._PATH} tab page 1') + + return self.playlist_result( + self._entries(channel_id, channel, first_page), f'{channel}-{self._PATH}', + **traverse_obj(first_page, ('data', ..., { + 'playlist_title': ('community', 'communityName', {str}), + 'thumbnail': ('author', 'profileImageUrl', {url_or_none}), + }), get_all=False)) + + +class WeverseLiveTabIE(WeverseTabBaseIE): + _VALID_URL = r'https?://(?:www\.|m\.)?weverse\.io/(?P<id>[^/?#]+)/live/?(?:[?#]|$)' + _TESTS = [{ + 'url': 'https://weverse.io/billlie/live/', + 'playlist_mincount': 55, + 'info_dict': { + 'id': 'billlie-live', + 'title': 'Billlie', + 'thumbnail': r're:^https?://.*\.jpe?g$', + }, + }] + + _ENDPOINT = '/post/v1.0/community-%s/liveTabPosts' + _PATH = 'live' + _QUERY = {'fieldSet': 'postsV1'} + _RESULT_IE = WeverseIE + + +class WeverseMediaTabIE(WeverseTabBaseIE): + _VALID_URL = r'https?://(?:www\.|m\.)?weverse\.io/(?P<id>[^/?#]+)/media(?:/|/all|/new)?(?:[?#]|$)' + _TESTS = [{ + 'url': 'https://weverse.io/billlie/media/', + 'playlist_mincount': 231, + 'info_dict': { + 'id': 'billlie-media', + 'title': 'Billlie', + 'thumbnail': r're:^https?://.*\.jpe?g$', + }, + }, { + 'url': 'https://weverse.io/lesserafim/media/all', + 'only_matching': True, + }, { + 'url': 'https://weverse.io/lesserafim/media/new', + 'only_matching': True, + }] + + _ENDPOINT = '/media/v1.0/community-%s/more' + _PATH = 'media' + _QUERY = {'fieldSet': 'postsV1', 'filterType': 'RECENT'} + _RESULT_IE = WeverseMediaIE + + +class WeverseLiveIE(WeverseBaseIE): + _VALID_URL = r'https?://(?:www\.|m\.)?weverse\.io/(?P<id>[^/?#]+)/?(?:[?#]|$)' + _TESTS = [{ + 'url': 'https://weverse.io/purplekiss', + 'info_dict': { + 'id': '3-116560493', + 'ext': 'mp4', + 'title': r're:모하냥🫶🏻', + 'description': '내일은 금요일~><', + 'uploader': '채인', + 'uploader_id': '1ffb1d9d904d6b3db2783f876eb9229d', + 'channel': 'purplekiss', + 'channel_id': '35', + 'channel_url': 'https://weverse.io/purplekiss', + 'creator': 'PURPLE KISS', + 'timestamp': 1680780892, + 'upload_date': '20230406', + 'release_timestamp': 1680780883, + 'release_date': '20230406', + 'thumbnail': 'https://weverse-live.pstatic.net/v1.0/live/62044/thumb', + 'view_count': int, + 'like_count': int, + 'comment_count': int, + 'availability': 'needs_auth', + 'live_status': 'is_live', + }, + 'skip': 'Livestream has ended', + }, { + 'url': 'https://weverse.io/billlie/', + 'only_matching': True, + }] + + def _real_extract(self, url): + channel = self._match_id(url) + channel_id =
self._get_community_id(channel) + + video_id = traverse_obj( + self._call_api(update_url_query(f'/post/v1.0/community-{channel_id}/liveTab', { + 'debugMessage': 'true', + 'fields': 'onAirLivePosts.fieldSet(postsV1).limit(10),reservedLivePosts.fieldSet(postsV1).limit(10)', + }), channel, note='Downloading live JSON'), ( + ('onAirLivePosts', 'reservedLivePosts'), 'data', + lambda _, v: self._extract_live_status(v) in ('is_live', 'is_upcoming'), 'postId', {str}), + get_all=False) + + if not video_id: + raise UserNotLive(video_id=channel) + + return self.url_result(f'https://weverse.io/{channel}/live/{video_id}', WeverseIE) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/wevidi.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wevidi.py new file mode 100644 index 0000000..3b6d032 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/wevidi.py @@ -0,0 +1,108 @@ +from .common import InfoExtractor +from ..utils import clean_html, float_or_none, get_element_by_class, js_to_json, traverse_obj + + +class WeVidiIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?wevidi\.net/watch/(?P<id>[\w-]{11})' + _TESTS = [{ + 'url': 'https://wevidi.net/watch/2th7UO5F4KV', + 'md5': 'b913d1ff5bbad499e2c7ef4aa6d829d7', + 'info_dict': { + 'id': '2th7UO5F4KV', + 'ext': 'mp4', + 'title': 'YouTube Alternative: WeVidi - customizable channels & more', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'md5:73a27d0a87d49fbcc5584566326ebeed', + 'uploader': 'eclecRC', + 'duration': 932.098, + } + }, { + 'url': 'https://wevidi.net/watch/ievRuuQHbPS', + 'md5': 'ce8a94989a959bff9003fa27ee572935', + 'info_dict': { + 'id': 'ievRuuQHbPS', + 'ext': 'mp4', + 'title': 'WeVidi Playlists', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'md5:32cdfca272687390d9bd9b0c9c6153ee', + 'uploader': 'WeVidi', + 'duration': 36.1999, + } + }, { + 'url': 'https://wevidi.net/watch/PcMzDWaQSWb', + 'md5': '55ee0d3434be5d9e5cc76b83f2bb57ec', + 'info_dict': { + 'id': 'PcMzDWaQSWb', + 'ext': 'mp4', + 'title': 'Cat blep', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'md5:e2c9e2b54b8bb424cc64937c8fdc068f', + 'uploader': 'WeVidi', + 'duration': 41.972, + } + }, { + 'url': 'https://wevidi.net/watch/wJnRqDHNe_u', + 'md5': 'c8f263dd47e66cc17546b3abf47b5a77', + 'info_dict': { + 'id': 'wJnRqDHNe_u', + 'ext': 'mp4', + 'title': 'Gissy Talks: YouTube Alternatives', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'md5:e65036f0d4af80e0af191bd11af5195e', + 'uploader': 'GissyEva', + 'duration': 630.451, + } + }, { + 'url': 'https://wevidi.net/watch/4m1c4yJR_yc', + 'md5': 'c63ce5ca6990dce86855fc02ca5bc1ed', + 'info_dict': { + 'id': '4m1c4yJR_yc', + 'ext': 'mp4', + 'title': 'Enough of that! 
- Awesome Exilez Podcast', + 'thumbnail': r're:^https?://.*\.jpg$', + 'description': 'md5:96af99dd63468b2dfab3020560e3e9b2', + 'uploader': 'eclecRC', + 'duration': 6.804, + } + }] + + def _extract_formats(self, wvplayer_props): + # Taken from WeVidi player JS: https://wevidi.net/layouts/default/static/player.min.js + resolution_map = { + 1: 144, + 2: 240, + 3: 360, + 4: 480, + 5: 720, + 6: 1080 + } + + src_path = f'{wvplayer_props["srcVID"]}/{wvplayer_props["srcUID"]}/{wvplayer_props["srcNAME"]}' + for res in traverse_obj(wvplayer_props, ('resolutions', ..., {int}, {lambda x: x or None})): + format_id = str(-(res // -2) - 1) + yield { + 'acodec': 'mp4a.40.2', + 'ext': 'mp4', + 'format_id': format_id, + 'height': resolution_map.get(res), + 'url': f'https://www.wevidi.net/videoplayback/{src_path}/{format_id}', + 'vcodec': 'avc1.42E01E', + } + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + + wvplayer_props = self._search_json( + r'WVPlayer\(', webpage, 'player', video_id, + transform_source=lambda x: js_to_json(x.replace('||', '}'))) + + return { + 'id': video_id, + 'title': clean_html(get_element_by_class('video_title', webpage)), + 'description': clean_html(get_element_by_class('descr_long', webpage)), + 'uploader': clean_html(get_element_by_class('username', webpage)), + 'formats': list(self._extract_formats(wvplayer_props)), + 'thumbnail': self._og_search_thumbnail(webpage), + 'duration': float_or_none(wvplayer_props.get('duration')), + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/weyyak.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/weyyak.py new file mode 100644 index 0000000..ef12be8 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/weyyak.py @@ -0,0 +1,86 @@ +from .common import InfoExtractor +from ..utils import ( + float_or_none, + int_or_none, + parse_age_limit, + traverse_obj, + unified_timestamp, + url_or_none, +) + + +class WeyyakIE(InfoExtractor): + _VALID_URL = r'https?://weyyak\.com/(?P<lang>\w+)/(?:player/)?(?P<type>episode|movie)/(?P<id>\d+)' + _TESTS = [ + { + 'url': 'https://weyyak.com/en/player/episode/1341952/Ribat-Al-Hob-Episode49', + 'md5': '0caf55c1a615531c8fe60f146ae46849', + 'info_dict': { + 'id': '1341952', + 'ext': 'mp4', + 'title': 'Ribat Al Hob', + 'duration': 2771, + 'alt_title': 'رباط الحب', + 'season': 'Season 1', + 'season_number': 1, + 'episode': 'Episode 49', + 'episode_number': 49, + 'timestamp': 1485907200, + 'upload_date': '20170201', + 'thumbnail': r're:^https://content\.weyyak\.com/.+/poster-image', + 'categories': ['Drama', 'Thrillers', 'Romance'], + 'tags': 'count:8', + }, + }, + { + 'url': 'https://weyyak.com/en/movie/233255/8-Seconds', + 'md5': 'fe740ae0f63e4d1c8a7fc147a410c564', + 'info_dict': { + 'id': '233255', + 'ext': 'mp4', + 'title': '8 Seconds', + 'duration': 6490, + 'alt_title': '8 ثواني', + 'description': 'md5:45b83a155c30b49950624c7e99600b9d', + 'age_limit': 15, + 'release_year': 2015, + 'timestamp': 1683106031, + 'upload_date': '20230503', + 'thumbnail': r're:^https://content\.weyyak\.com/.+/poster-image', + 'categories': ['Drama', 'Social'], + 'cast': ['Ceylin Adiyaman', 'Esra Inal'], + }, + }, + ] + + def _real_extract(self, url): + video_id, lang, type_ = self._match_valid_url(url).group('id', 'lang', 'type') + + path = 'episode/' if type_ == 'episode' else 'contents/moviedetails?contentkey=' + data = self._download_json( + f'https://msapifo-prod-me.weyyak.z5.com/v1/{lang}/{path}{video_id}', 
video_id)['data'] + m3u8_url = self._download_json( + f'https://api-weyyak.akamaized.net/get_info/{data["video_id"]}', + video_id, 'Extracting video details')['url_video'] + formats, subtitles = self._extract_m3u8_formats_and_subtitles(m3u8_url, video_id) + + return { + 'id': video_id, + 'formats': formats, + 'subtitles': subtitles, + **traverse_obj(data, { + 'title': ('title', {str}), + 'alt_title': ('translated_title', {str}), + 'description': ('synopsis', {str}), + 'duration': ('length', {float_or_none}), + 'age_limit': ('age_rating', {parse_age_limit}), + 'season_number': ('season_number', {int_or_none}), + 'episode_number': ('episode_number', {int_or_none}), + 'thumbnail': ('imagery', 'thumbnail', {url_or_none}), + 'categories': ('genres', ..., {str}), + 'tags': ('tags', ..., {str}), + 'cast': (('main_actor', 'main_actress'), {str}), + 'timestamp': ('insertedAt', {unified_timestamp}), + 'release_year': ('production_year', {int_or_none}), + }), + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/whowatch.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/whowatch.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/whowatch.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/whowatch.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/whyp.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/whyp.py new file mode 100644 index 0000000..fef89c3 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/whyp.py @@ -0,0 +1,50 @@ +from .common import InfoExtractor +from ..utils import ( + float_or_none, + str_or_none, + traverse_obj, + url_or_none, +) + + +class WhypIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?whyp\.it/tracks/(?P<id>\d+)' + _TESTS = [{ + 'url': 'https://www.whyp.it/tracks/18337/home-page-example-track-b4kq7', + 'md5': 'c1187b42ebf8605284e3dc92aeb33d16', + 'info_dict': { + 'url': 'https://cdn.whyp.it/50eb17cc-e9ff-4e18-b89b-dc9206a95cb1.mp3', + 'id': '18337', + 'title': 'Home Page Example Track', + 'description': 'md5:bd758000fb93f3159339c852b5b9133c', + 'ext': 'mp3', + 'duration': 52.82, + 'uploader': 'Brad', + 'uploader_id': '1', + 'thumbnail': 'https://cdn.whyp.it/a537bb36-3373-4c61-96c8-27fc1b2f427a.jpg', + }, + }, { + 'url': 'https://www.whyp.it/tracks/18337', + 'only_matching': True, + }] + + def _real_extract(self, url): + unique_id = self._match_id(url) + webpage = self._download_webpage(url, unique_id) + data = self._search_nuxt_data(webpage, unique_id)['rawTrack'] + + return { + 'url': data['audio_url'], + 'id': unique_id, + **traverse_obj(data, { + 'title': 'title', + 'description': 'description', + 'duration': ('duration', {float_or_none}), + 'uploader': ('user', 'username'), + 'uploader_id': ('user', 'id', {str_or_none}), + 'thumbnail': ('artwork_url', {url_or_none}), + }), + 'ext': 'mp3', + 'vcodec': 'none', + 'http_headers': {'Referer': 'https://whyp.it/'}, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/wikimedia.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wikimedia.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/wikimedia.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/wikimedia.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/willow.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/willow.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/willow.py rename to 
python/lib/python3.10/site-packages/yt_dlp/extractor/willow.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/wimbledon.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wimbledon.py new file mode 100644 index 0000000..0223e54 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/wimbledon.py @@ -0,0 +1,61 @@ +from .common import InfoExtractor +from ..utils import ( + parse_duration, + traverse_obj, +) + + +class WimbledonIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?wimbledon\.com/\w+/video/media/(?P<id>\d+)\.html' + _TESTS = [{ + 'url': 'https://www.wimbledon.com/en_GB/video/media/6330247525112.html', + 'info_dict': { + 'id': '6330247525112', + 'ext': 'mp4', + 'timestamp': 1687972186, + 'description': '', + 'thumbnail': r're:^https://[\w.-]+\.prod\.boltdns\.net/[^?#]+/image\.jpg', + 'upload_date': '20230628', + 'title': 'Coco Gauff | My Wimbledon Inspiration', + 'tags': ['features', 'trending', 'homepage'], + 'uploader_id': '3506358525001', + 'duration': 163072.0, + }, + }, { + 'url': 'https://www.wimbledon.com/en_GB/video/media/6308703111112.html', + 'info_dict': { + 'id': '6308703111112', + 'ext': 'mp4', + 'thumbnail': r're:^https://[\w.-]+\.prod\.boltdns\.net/[^?#]+/image\.jpg', + 'description': 'null', + 'upload_date': '20220629', + 'uploader_id': '3506358525001', + 'title': 'Roblox | WimbleWorld ', + 'duration': 101440.0, + 'tags': ['features', 'kids'], + 'timestamp': 1656500867, + }, + }, { + 'url': 'https://www.wimbledon.com/en_US/video/media/6309327106112.html', + 'only_matching': True, + }, { + 'url': 'https://www.wimbledon.com/es_Es/video/media/6308377909112.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + metadata = self._download_json( + f'https://www.wimbledon.com/relatedcontent/rest/v2/wim_v1/en/content/wim_v1_{video_id}_en', video_id) + + return { + '_type': 'url_transparent', + 'url': f'http://players.brightcove.net/3506358525001/default_default/index.html?videoId={video_id}', + 'ie_key': 'BrightcoveNew', + 'id': video_id, + **traverse_obj(metadata, { + 'title': 'title', + 'description': 'description', + 'duration': ('metadata', 'duration', {parse_duration}), + }), + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/wimtv.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wimtv.py new file mode 100644 index 0000000..f9bf092 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/wimtv.py @@ -0,0 +1,150 @@ +from .common import InfoExtractor +from ..utils import ( + determine_ext, + parse_duration, + urlencode_postdata, + ExtractorError, +) + + +class WimTVIE(InfoExtractor): + _player = None + _UUID_RE = r'[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}' + _VALID_URL = r'''(?x: + https?://platform\.wim\.tv/ + (?: + (?:embed/)?\? + |\#/webtv/.+?/ + ) + (?P<type>vod|live|cast)[=/] + (?P<id>%s).*?)''' % _UUID_RE + _EMBED_REGEX = [rf'<iframe[^>]+src=["\'](?P<url>{_VALID_URL})'] + _TESTS = [{ + # vod stream + 'url': 'https://platform.wim.tv/embed/?vod=db29fb32-bade-47b6-a3a6-cb69fe80267a', + 'md5': 'db29fb32-bade-47b6-a3a6-cb69fe80267a', + 'info_dict': { + 'id': 'db29fb32-bade-47b6-a3a6-cb69fe80267a', + 'ext': 'mp4', + 'title': 'AMA SUPERCROSS 2020 - R2 ST. 
LOUIS', + 'duration': 6481, + 'thumbnail': r're:https?://.+?/thumbnail/.+?/720$' + }, + 'params': { + 'skip_download': True, + }, + }, { + # live stream + 'url': 'https://platform.wim.tv/embed/?live=28e22c22-49db-40f3-8c37-8cbb0ff44556&autostart=true', + 'info_dict': { + 'id': '28e22c22-49db-40f3-8c37-8cbb0ff44556', + 'ext': 'mp4', + 'title': 'Streaming MSmotorTV', + 'is_live': True, + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'https://platform.wim.tv/#/webtv/automotornews/vod/422492b6-539e-474d-9c6b-68c9d5893365', + 'only_matching': True, + }, { + 'url': 'https://platform.wim.tv/#/webtv/renzoarborechannel/cast/f47e0d15-5b45-455e-bf0d-dba8ffa96365', + 'only_matching': True, + }] + + def _real_initialize(self): + if not self._player: + self._get_player_data() + + def _get_player_data(self): + msg_id = 'Player data' + self._player = {} + + datas = [{ + 'url': 'https://platform.wim.tv/common/libs/player/wimtv/wim-rest.js', + 'vars': [{ + 'regex': r'appAuth = "(.+?)"', + 'variable': 'app_auth', + }] + }, { + 'url': 'https://platform.wim.tv/common/config/endpointconfig.js', + 'vars': [{ + 'regex': r'PRODUCTION_HOSTNAME_THUMB = "(.+?)"', + 'variable': 'thumb_server', + }, { + 'regex': r'PRODUCTION_HOSTNAME_THUMB\s*\+\s*"(.+?)"', + 'variable': 'thumb_server_path', + }] + }] + + for data in datas: + temp = self._download_webpage(data['url'], msg_id) + for var in data['vars']: + val = self._search_regex(var['regex'], temp, msg_id) + if not val: + raise ExtractorError('%s not found' % var['variable']) + self._player[var['variable']] = val + + def _generate_token(self): + json = self._download_json( + 'https://platform.wim.tv/wimtv-server/oauth/token', 'Token generation', + headers={'Authorization': 'Basic %s' % self._player['app_auth']}, + data=urlencode_postdata({'grant_type': 'client_credentials'})) + token = json.get('access_token') + if not token: + raise ExtractorError('access token not generated') + return token + + def _generate_thumbnail(self, thumb_id, width='720'): + if not thumb_id or not self._player.get('thumb_server'): + return None + if not self._player.get('thumb_server_path'): + self._player['thumb_server_path'] = '' + return '%s%s/asset/thumbnail/%s/%s' % ( + self._player['thumb_server'], + self._player['thumb_server_path'], + thumb_id, width) + + def _real_extract(self, url): + urlc = self._match_valid_url(url).groupdict() + video_id = urlc['id'] + stream_type = is_live = None + if urlc['type'] in {'live', 'cast'}: + stream_type = urlc['type'] + '/channel' + is_live = True + else: + stream_type = 'vod' + is_live = False + token = self._generate_token() + json = self._download_json( + 'https://platform.wim.tv/wimtv-server/api/public/%s/%s/play' % ( + stream_type, video_id), video_id, + headers={'Authorization': 'Bearer %s' % token, + 'Content-Type': 'application/json'}, + data=bytes('{}', 'utf-8')) + + formats = [] + for src in json.get('srcs') or []: + if src.get('mimeType') == 'application/x-mpegurl': + formats.extend( + self._extract_m3u8_formats( + src.get('uniqueStreamer'), video_id, 'mp4')) + if src.get('mimeType') == 'video/flash': + formats.append({ + 'format_id': 'rtmp', + 'url': src.get('uniqueStreamer'), + 'ext': determine_ext(src.get('uniqueStreamer'), 'flv'), + 'rtmp_live': is_live, + }) + json = json.get('resource') + thumb = self._generate_thumbnail(json.get('thumbnailId')) + + return { + 'id': video_id, + 'title': json.get('title') or json.get('name'), + 'duration': parse_duration(json.get('duration')), + 'formats': formats, + 'thumbnail': 
thumb, + 'is_live': is_live, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/wistia.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wistia.py new file mode 100644 index 0000000..bce5e83 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/wistia.py @@ -0,0 +1,394 @@ +import re +import urllib.parse +from base64 import b64decode + +from .common import InfoExtractor +from ..networking import HEADRequest +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + determine_ext, + float_or_none, + int_or_none, + parse_qs, + traverse_obj, + try_get, + update_url_query, + urlhandle_detect_ext, +) + + +class WistiaBaseIE(InfoExtractor): + _VALID_ID_REGEX = r'(?P<id>[a-z0-9]{10})' + _VALID_URL_BASE = r'https?://(?:\w+\.)?wistia\.(?:net|com)/(?:embed/)?' + _EMBED_BASE_URL = 'http://fast.wistia.net/embed/' + + def _download_embed_config(self, config_type, config_id, referer): + base_url = self._EMBED_BASE_URL + '%s/%s' % (config_type, config_id) + embed_config = self._download_json( + base_url + '.json', config_id, headers={ + 'Referer': referer if referer.startswith('http') else base_url, # Some videos require this. + }) + + error = traverse_obj(embed_config, 'error') + if error: + raise ExtractorError( + f'Error while getting the playlist: {error}', expected=True) + + return embed_config + + def _get_real_ext(self, url): + ext = determine_ext(url, default_ext='bin') + if ext == 'bin': + urlh = self._request_webpage( + HEADRequest(url), None, note='Checking media extension', + errnote='HEAD request returned error', fatal=False) + if urlh: + ext = urlhandle_detect_ext(urlh, default='bin') + return 'mp4' if ext == 'mov' else ext + + def _extract_media(self, embed_config): + data = embed_config['media'] + video_id = data['hashedId'] + title = data['name'] + + formats = [] + thumbnails = [] + for a in data['assets']: + aurl = a.get('url') + if not aurl: + continue + astatus = a.get('status') + atype = a.get('type') + if (astatus is not None and astatus != 2) or atype in ('preview', 'storyboard'): + continue + elif atype in ('still', 'still_image'): + thumbnails.append({ + 'url': aurl.replace('.bin', f'.{self._get_real_ext(aurl)}'), + 'width': int_or_none(a.get('width')), + 'height': int_or_none(a.get('height')), + 'filesize': int_or_none(a.get('size')), + }) + else: + aext = a.get('ext') or self._get_real_ext(aurl) + display_name = a.get('display_name') + format_id = atype + if atype and atype.endswith('_video') and display_name: + format_id = '%s-%s' % (atype[:-6], display_name) + f = { + 'format_id': format_id, + 'url': aurl, + 'tbr': int_or_none(a.get('bitrate')) or None, + 'quality': 1 if atype == 'original' else None, + } + if display_name == 'Audio': + f.update({ + 'vcodec': 'none', + }) + else: + f.update({ + 'width': int_or_none(a.get('width')), + 'height': int_or_none(a.get('height')), + 'vcodec': a.get('codec'), + }) + if a.get('container') == 'm3u8' or aext == 'm3u8': + ts_f = f.copy() + ts_f.update({ + 'ext': 'ts', + 'format_id': f['format_id'].replace('hls-', 'ts-'), + 'url': f['url'].replace('.bin', '.ts'), + }) + formats.append(ts_f) + f.update({ + 'ext': 'mp4', + 'protocol': 'm3u8_native', + }) + else: + f.update({ + 'container': a.get('container'), + 'ext': aext, + 'filesize': int_or_none(a.get('size')), + }) + formats.append(f) + + subtitles = {} + for caption in data.get('captions', []): + language = caption.get('language') + if not language: + continue + subtitles[language] = [{ + 'url': 
self._EMBED_BASE_URL + 'captions/' + video_id + '.vtt?language=' + language, + }] + + return { + 'id': video_id, + 'title': title, + 'description': data.get('seoDescription'), + 'formats': formats, + 'thumbnails': thumbnails, + 'duration': float_or_none(data.get('duration')), + 'timestamp': int_or_none(data.get('createdAt')), + 'subtitles': subtitles, + } + + @classmethod + def _extract_from_webpage(cls, url, webpage): + from .teachable import TeachableIE + + if list(TeachableIE._extract_embed_urls(url, webpage)): + return + + yield from super()._extract_from_webpage(url, webpage) + + @classmethod + def _extract_wistia_async_embed(cls, webpage): + # https://wistia.com/support/embed-and-share/video-on-your-website + # https://wistia.com/support/embed-and-share/channel-embeds + yield from re.finditer( + r'''(?sx) + <(?:div|section)[^>]+class=([\"'])(?:(?!\1).)*?(?P<type>wistia[a-z_0-9]+)\s*\bwistia_async_(?P<id>[a-z0-9]{10})\b(?:(?!\1).)*?\1 + ''', webpage) + + @classmethod + def _extract_url_media_id(cls, url): + mobj = re.search(r'(?:wmediaid|wvideo(?:id)?)]?=(?P<id>[a-z0-9]{10})', urllib.parse.unquote_plus(url)) + if mobj: + return mobj.group('id') + + +class WistiaIE(WistiaBaseIE): + _VALID_URL = r'(?:wistia:|%s(?:iframe|medias)/)%s' % (WistiaBaseIE._VALID_URL_BASE, WistiaBaseIE._VALID_ID_REGEX) + _EMBED_REGEX = [ + r'''(?x) + <(?:meta[^>]+?content|(?:iframe|script)[^>]+?src)=["\'] + (?P<url>(?:https?:)?//(?:fast\.)?wistia\.(?:net|com)/embed/(?:iframe|medias)/[a-z0-9]{10}) + '''] + _TESTS = [{ + # with hls video + 'url': 'wistia:807fafadvk', + 'md5': 'daff0f3687a41d9a71b40e0e8c2610fe', + 'info_dict': { + 'id': '807fafadvk', + 'ext': 'mp4', + 'title': 'Drip Brennan Dunn Workshop', + 'description': 'a JV Webinars video', + 'upload_date': '20160518', + 'timestamp': 1463607249, + 'duration': 4987.11, + }, + 'skip': 'video unavailable', + }, { + 'url': 'wistia:a6ndpko1wg', + 'md5': '10c1ce9c4dde638202513ed17a3767bd', + 'info_dict': { + 'id': 'a6ndpko1wg', + 'ext': 'mp4', + 'title': 'Episode 2: Boxed Water\'s retention is thirsty', + 'upload_date': '20210324', + 'description': 'md5:da5994c2c2d254833b412469d9666b7a', + 'duration': 966.0, + 'timestamp': 1616614369, + 'thumbnail': 'https://embed-ssl.wistia.com/deliveries/53dc60239348dc9b9fba3755173ea4c2.png', + } + }, { + 'url': 'wistia:5vd7p4bct5', + 'md5': 'b9676d24bf30945d97060638fbfe77f0', + 'info_dict': { + 'id': '5vd7p4bct5', + 'ext': 'mp4', + 'title': 'md5:eaa9f64c4efd7b5f098b9b6118597679', + 'description': 'md5:a9bea0315f0616aa5df2dc413ddcdd0f', + 'upload_date': '20220915', + 'timestamp': 1663258727, + 'duration': 623.019, + 'thumbnail': r're:https?://embed(?:-ssl)?.wistia.com/.+\.jpg$', + }, + }, { + 'url': 'wistia:sh7fpupwlt', + 'only_matching': True, + }, { + 'url': 'http://fast.wistia.net/embed/iframe/sh7fpupwlt', + 'only_matching': True, + }, { + 'url': 'http://fast.wistia.com/embed/iframe/sh7fpupwlt', + 'only_matching': True, + }, { + 'url': 'http://fast.wistia.net/embed/medias/sh7fpupwlt.json', + 'only_matching': True, + }] + + _WEBPAGE_TESTS = [{ + 'url': 'https://www.weidert.com/blog/wistia-channels-video-marketing-tool', + 'info_dict': { + 'id': 'cqwukac3z1', + 'ext': 'mp4', + 'title': 'How Wistia Channels Can Help Capture Inbound Value From Your Video Content', + 'duration': 158.125, + 'timestamp': 1618974400, + 'description': 'md5:27abc99a758573560be72600ef95cece', + 'upload_date': '20210421', + 'thumbnail': 'https://embed-ssl.wistia.com/deliveries/6c551820ae950cdee2306d6cbe9ef742.jpg', + } + }, { + 'url': 
'https://study.com/academy/lesson/north-american-exploration-failed-colonies-of-spain-france-england.html#lesson', + 'md5': 'b9676d24bf30945d97060638fbfe77f0', + 'info_dict': { + 'id': '5vd7p4bct5', + 'ext': 'mp4', + 'title': 'paywall_north-american-exploration-failed-colonies-of-spain-france-england', + 'upload_date': '20220915', + 'timestamp': 1663258727, + 'duration': 623.019, + 'thumbnail': 'https://embed-ssl.wistia.com/deliveries/83e6ec693e2c05a0ce65809cbaead86a.jpg', + 'description': 'a Paywall Videos video', + }, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + embed_config = self._download_embed_config('medias', video_id, url) + return self._extract_media(embed_config) + + @classmethod + def _extract_embed_urls(cls, url, webpage): + urls = list(super()._extract_embed_urls(url, webpage)) + for match in cls._extract_wistia_async_embed(webpage): + if match.group('type') != 'wistia_channel': + urls.append('wistia:%s' % match.group('id')) + for match in re.finditer(r'(?:data-wistia-?id=["\']|Wistia\.embed\(["\']|id=["\']wistia_)(?P<id>[a-z0-9]{10})', + webpage): + urls.append('wistia:%s' % match.group('id')) + if not WistiaChannelIE._extract_embed_urls(url, webpage): # Fallback + media_id = cls._extract_url_media_id(url) + if media_id: + urls.append('wistia:%s' % media_id) + return urls + + +class WistiaPlaylistIE(WistiaBaseIE): + _VALID_URL = r'%splaylists/%s' % (WistiaBaseIE._VALID_URL_BASE, WistiaBaseIE._VALID_ID_REGEX) + + _TEST = { + 'url': 'https://fast.wistia.net/embed/playlists/aodt9etokc', + 'info_dict': { + 'id': 'aodt9etokc', + }, + 'playlist_count': 3, + } + + def _real_extract(self, url): + playlist_id = self._match_id(url) + playlist = self._download_embed_config('playlists', playlist_id, url) + + entries = [] + for media in (try_get(playlist, lambda x: x[0]['medias']) or []): + embed_config = media.get('embed_config') + if not embed_config: + continue + entries.append(self._extract_media(embed_config)) + + return self.playlist_result(entries, playlist_id) + + +class WistiaChannelIE(WistiaBaseIE): + _VALID_URL = r'(?:wistiachannel:|%schannel/)%s' % (WistiaBaseIE._VALID_URL_BASE, WistiaBaseIE._VALID_ID_REGEX) + + _TESTS = [{ + # JSON Embed API returns 403, should fall back to webpage + 'url': 'https://fast.wistia.net/embed/channel/yvyvu7wjbg?wchannelid=yvyvu7wjbg', + 'info_dict': { + 'id': 'yvyvu7wjbg', + 'title': 'Copysmith Tutorials and Education!', + 'description': 'Learn all things Copysmith via short and informative videos!' + }, + 'playlist_mincount': 7, + 'expected_warnings': ['falling back to webpage'], + }, { + 'url': 'https://fast.wistia.net/embed/channel/3802iirk0l', + 'info_dict': { + 'id': '3802iirk0l', + 'title': 'The Roof', + }, + 'playlist_mincount': 20, + }, { + # link to popup video, follow --no-playlist + 'url': 'https://fast.wistia.net/embed/channel/3802iirk0l?wchannelid=3802iirk0l&wmediaid=sp5dqjzw3n', + 'info_dict': { + 'id': 'sp5dqjzw3n', + 'ext': 'mp4', + 'title': 'The Roof S2: The Modern CRO', + 'thumbnail': 'https://embed-ssl.wistia.com/deliveries/dadfa9233eaa505d5e0c85c23ff70741.png', + 'duration': 86.487, + 'description': 'A sales leader on The Roof? 
Man, they really must be letting anyone up here this season.\n', + 'timestamp': 1619790290, + 'upload_date': '20210430', + }, + 'params': {'noplaylist': True, 'skip_download': True}, + }] + _WEBPAGE_TESTS = [{ + 'url': 'https://www.profitwell.com/recur/boxed-out', + 'info_dict': { + 'id': '6jyvmqz6zs', + 'title': 'Boxed Out', + 'description': 'md5:14a8a93a1dbe236718e6a59f8c8c7bae', + }, + 'playlist_mincount': 30, + }, { + # section instead of div + 'url': 'https://360learning.com/studio/onboarding-joei/', + 'info_dict': { + 'id': 'z874k93n2o', + 'title': 'Onboarding Joei.', + 'description': 'Coming to you weekly starting Feb 19th.', + }, + 'playlist_mincount': 20, + }, { + 'url': 'https://amplitude.com/amplify-sessions?amp%5Bwmediaid%5D=pz0m0l0if3&%5Bwvideo%5D=pz0m0l0if3&wchannelid=emyjmwjf79&wmediaid=i8um783bdt', + 'info_dict': { + 'id': 'pz0m0l0if3', + 'title': 'A Framework for Improving Product Team Performance', + 'ext': 'mp4', + 'timestamp': 1653935275, + 'upload_date': '20220530', + 'description': 'Learn how to help your company improve and achieve your product related goals.', + 'duration': 1854.39, + 'thumbnail': 'https://embed-ssl.wistia.com/deliveries/12fd19e56413d9d6f04e2185c16a6f8854e25226.png', + }, + 'params': {'noplaylist': True, 'skip_download': True}, + }] + + def _real_extract(self, url): + channel_id = self._match_id(url) + media_id = self._extract_url_media_id(url) + if not self._yes_playlist(channel_id, media_id, playlist_label='channel'): + return self.url_result(f'wistia:{media_id}', 'Wistia') + + try: + data = self._download_embed_config('channel', channel_id, url) + except (ExtractorError, HTTPError): + # Some channels give a 403 from the JSON API + self.report_warning('Failed to download channel data from API, falling back to webpage.') + webpage = self._download_webpage(f'https://fast.wistia.net/embed/channel/{channel_id}', channel_id) + data = self._parse_json( + self._search_regex(r'wchanneljsonp-%s\'\]\s*=[^\"]*\"([A-Za-z0-9=/]*)' % channel_id, webpage, 'jsonp', channel_id), + channel_id, transform_source=lambda x: urllib.parse.unquote_plus(b64decode(x).decode('utf-8'))) + + # XXX: can there be more than one series? 
+ series = traverse_obj(data, ('series', 0), default={}) + + entries = [ + self.url_result(f'wistia:{video["hashedId"]}', WistiaIE, title=video.get('name')) + for video in traverse_obj(series, ('sections', ..., 'videos', ...)) or [] + if video.get('hashedId') + ] + + return self.playlist_result( + entries, channel_id, playlist_title=series.get('title'), playlist_description=series.get('description')) + + @classmethod + def _extract_embed_urls(cls, url, webpage): + yield from super()._extract_embed_urls(url, webpage) + for match in cls._extract_wistia_async_embed(webpage): + if match.group('type') == 'wistia_channel': + # original url may contain wmediaid query param + yield update_url_query(f'wistiachannel:{match.group("id")}', parse_qs(url)) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/wordpress.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wordpress.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/wordpress.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/wordpress.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/worldstarhiphop.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/worldstarhiphop.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/worldstarhiphop.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/worldstarhiphop.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/wppilot.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wppilot.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/wppilot.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/wppilot.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/wrestleuniverse.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wrestleuniverse.py new file mode 100644 index 0000000..145246a --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/wrestleuniverse.py @@ -0,0 +1,304 @@ +import base64 +import binascii +import json +import time +import uuid + +from .common import InfoExtractor +from ..dependencies import Cryptodome +from ..utils import ( + ExtractorError, + int_or_none, + jwt_decode_hs256, + traverse_obj, + try_call, + url_or_none, + urlencode_postdata, + variadic, +) + + +class WrestleUniverseBaseIE(InfoExtractor): + _NETRC_MACHINE = 'wrestleuniverse' + _VALID_URL_TMPL = r'https?://(?:www\.)?wrestle-universe\.com/(?:(?P<lang>\w{2})/)?%s/(?P<id>\w+)' + _API_HOST = 'api.wrestle-universe.com' + _API_PATH = None + _REAL_TOKEN = None + _TOKEN_EXPIRY = None + _REFRESH_TOKEN = None + _DEVICE_ID = None + _LOGIN_QUERY = {'key': 'AIzaSyCaRPBsDQYVDUWWBXjsTrHESi2r_F3RAdA'} + _LOGIN_HEADERS = { + 'Accept': '*/*', + 'Content-Type': 'application/json', + 'X-Client-Version': 'Chrome/JsCore/9.9.4/FirebaseCore-web', + 'X-Firebase-gmpid': '1:307308870738:web:820f38fe5150c8976e338b', + 'Referer': 'https://www.wrestle-universe.com/', + 'Origin': 'https://www.wrestle-universe.com', + } + + @property + def _TOKEN(self): + if not self._REAL_TOKEN or not self._TOKEN_EXPIRY: + token = try_call(lambda: self._get_cookies('https://www.wrestle-universe.com/')['token'].value) + if not token and not self._REFRESH_TOKEN: + self.raise_login_required() + self._TOKEN = token + + if not self._REAL_TOKEN or self._TOKEN_EXPIRY <= int(time.time()): + if not self._REFRESH_TOKEN: + raise ExtractorError( + 'Expired token. 
Refresh your cookies in browser and try again', expected=True) + self._refresh_token() + + return self._REAL_TOKEN + + @_TOKEN.setter + def _TOKEN(self, value): + self._REAL_TOKEN = value + + expiry = traverse_obj(value, ({jwt_decode_hs256}, 'exp', {int_or_none})) + if not expiry: + raise ExtractorError('There was a problem with the auth token') + self._TOKEN_EXPIRY = expiry + + def _perform_login(self, username, password): + login = self._download_json( + 'https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword', None, + 'Logging in', query=self._LOGIN_QUERY, headers=self._LOGIN_HEADERS, data=json.dumps({ + 'returnSecureToken': True, + 'email': username, + 'password': password, + }, separators=(',', ':')).encode(), expected_status=400) + token = traverse_obj(login, ('idToken', {str})) + if not token: + raise ExtractorError( + f'Unable to log in: {traverse_obj(login, ("error", "message"))}', expected=True) + self._REFRESH_TOKEN = traverse_obj(login, ('refreshToken', {str})) + if not self._REFRESH_TOKEN: + self.report_warning('No refresh token was granted') + self._TOKEN = token + + def _real_initialize(self): + if self._DEVICE_ID: + return + + self._DEVICE_ID = self._configuration_arg('device_id', [None], ie_key=self._NETRC_MACHINE)[0] + if not self._DEVICE_ID: + self._DEVICE_ID = self.cache.load(self._NETRC_MACHINE, 'device_id') + if self._DEVICE_ID: + return + self._DEVICE_ID = str(uuid.uuid4()) + + self.cache.store(self._NETRC_MACHINE, 'device_id', self._DEVICE_ID) + + def _refresh_token(self): + refresh = self._download_json( + 'https://securetoken.googleapis.com/v1/token', None, 'Refreshing token', + query=self._LOGIN_QUERY, data=urlencode_postdata({ + 'grant_type': 'refresh_token', + 'refresh_token': self._REFRESH_TOKEN, + }), headers={ + **self._LOGIN_HEADERS, + 'Content-Type': 'application/x-www-form-urlencoded', + }) + if traverse_obj(refresh, ('refresh_token', {str})): + self._REFRESH_TOKEN = refresh['refresh_token'] + token = traverse_obj(refresh, 'access_token', 'id_token', expected_type=str) + if not token: + raise ExtractorError('No auth token returned from refresh request') + self._TOKEN = token + + def _call_api(self, video_id, param='', msg='API', auth=True, data=None, query={}, fatal=True): + headers = {'CA-CID': ''} + if data: + headers['Content-Type'] = 'application/json;charset=utf-8' + data = json.dumps(data, separators=(',', ':')).encode() + if auth and self._TOKEN: + headers['Authorization'] = f'Bearer {self._TOKEN}' + return self._download_json( + f'https://{self._API_HOST}/v1/{self._API_PATH}/{video_id}{param}', video_id, + note=f'Downloading {msg} JSON', errnote=f'Failed to download {msg} JSON', + data=data, headers=headers, query=query, fatal=fatal) + + def _call_encrypted_api(self, video_id, param='', msg='API', data={}, query={}, fatal=True): + if not Cryptodome.RSA: + raise ExtractorError('pycryptodomex not found. 
Please install', expected=True) + private_key = Cryptodome.RSA.generate(2048) + cipher = Cryptodome.PKCS1_OAEP.new(private_key, hashAlgo=Cryptodome.SHA1) + + def decrypt(data): + if not data: + return None + try: + return cipher.decrypt(base64.b64decode(data)).decode() + except (ValueError, binascii.Error) as e: + raise ExtractorError(f'Could not decrypt data: {e}') + + token = base64.b64encode(private_key.public_key().export_key('DER')).decode() + api_json = self._call_api(video_id, param, msg, data={ + 'deviceId': self._DEVICE_ID, + 'token': token, + **data, + }, query=query, fatal=fatal) + return api_json, decrypt + + def _download_metadata(self, url, video_id, lang, props_keys): + metadata = self._call_api(video_id, msg='metadata', query={'al': lang or 'ja'}, auth=False, fatal=False) + if not metadata: + webpage = self._download_webpage(url, video_id) + nextjs_data = self._search_nextjs_data(webpage, video_id) + metadata = traverse_obj(nextjs_data, ( + 'props', 'pageProps', *variadic(props_keys, (str, bytes, dict, set)), {dict})) or {} + return metadata + + def _get_formats(self, data, path, video_id=None): + hls_url = traverse_obj(data, path, get_all=False) + if not hls_url and not data.get('canWatch'): + self.raise_no_formats( + 'This account does not have access to the requested content', expected=True) + elif not hls_url: + self.raise_no_formats('No supported formats found') + return self._extract_m3u8_formats(hls_url, video_id, 'mp4', m3u8_id='hls', live=True) + + +class WrestleUniverseVODIE(WrestleUniverseBaseIE): + _VALID_URL = WrestleUniverseBaseIE._VALID_URL_TMPL % 'videos' + _TESTS = [{ + 'url': 'https://www.wrestle-universe.com/en/videos/dp8mpjmcKfxzUhEHM2uFws', + 'info_dict': { + 'id': 'dp8mpjmcKfxzUhEHM2uFws', + 'ext': 'mp4', + 'title': 'The 3rd “Futari wa Princess” Max Heart Tournament', + 'description': 'md5:318d5061e944797fbbb81d5c7dd00bf5', + 'location': '埼玉・春日部ふれあいキューブ', + 'channel': 'tjpw', + 'duration': 7119, + 'timestamp': 1674979200, + 'upload_date': '20230129', + 'thumbnail': 'https://image.asset.wrestle-universe.com/8FjD67P8rZc446RBQs5RBN/8FjD67P8rZc446RBQs5RBN', + 'chapters': 'count:7', + 'cast': 'count:21', + }, + 'params': { + 'skip_download': 'm3u8', + }, + }] + + _API_PATH = 'videoEpisodes' + + def _real_extract(self, url): + lang, video_id = self._match_valid_url(url).group('lang', 'id') + metadata = self._download_metadata(url, video_id, lang, 'videoEpisodeFallbackData') + video_data = self._call_api(video_id, ':watch', 'watch', data={'deviceId': self._DEVICE_ID}) + + return { + 'id': video_id, + 'formats': self._get_formats(video_data, ( + (('protocolHls', 'url'), ('chromecastUrls', ...)), {url_or_none}), video_id), + **traverse_obj(metadata, { + 'title': ('displayName', {str}), + 'description': ('description', {str}), + 'channel': ('labels', 'group', {str}), + 'location': ('labels', 'venue', {str}), + 'timestamp': ('watchStartTime', {int_or_none}), + 'thumbnail': ('keyVisualUrl', {url_or_none}), + 'cast': ('casts', ..., 'displayName', {str}), + 'duration': ('duration', {int}), + 'chapters': ('videoChapters', lambda _, v: isinstance(v.get('start'), int), { + 'title': ('displayName', {str}), + 'start_time': ('start', {int}), + 'end_time': ('end', {int}), + }), + }), + } + + +class WrestleUniversePPVIE(WrestleUniverseBaseIE): + _VALID_URL = WrestleUniverseBaseIE._VALID_URL_TMPL % 'lives' + _TESTS = [{ + 'note': 'HLS AES-128 key obtained via API', + 'url': 'https://www.wrestle-universe.com/en/lives/buH9ibbfhdJAY4GKZcEuJX', + 'info_dict': { + 'id': 
'buH9ibbfhdJAY4GKZcEuJX', + 'ext': 'mp4', + 'title': '【PPV】Beyond the origins, into the future', + 'description': 'md5:9a872db68cd09be4a1e35a3ee8b0bdfc', + 'channel': 'tjpw', + 'location': '東京・Twin Box AKIHABARA', + 'duration': 10098, + 'timestamp': 1675076400, + 'upload_date': '20230130', + 'thumbnail': 'https://image.asset.wrestle-universe.com/rJs2m7cBaLXrwCcxMdQGRM/rJs2m7cBaLXrwCcxMdQGRM', + 'thumbnails': 'count:3', + 'hls_aes': { + 'key': '5633184acd6e43f1f1ac71c6447a4186', + 'iv': '5bac71beb33197d5600337ce86de7862', + }, + }, + 'params': { + 'skip_download': 'm3u8', + }, + 'skip': 'No longer available', + }, { + 'note': 'unencrypted HLS', + 'url': 'https://www.wrestle-universe.com/en/lives/wUG8hP5iApC63jbtQzhVVx', + 'info_dict': { + 'id': 'wUG8hP5iApC63jbtQzhVVx', + 'ext': 'mp4', + 'title': 'GRAND PRINCESS \'22', + 'description': 'md5:e4f43d0d4262de3952ff34831bc99858', + 'channel': 'tjpw', + 'location': '東京・両国国技館', + 'duration': 18044, + 'timestamp': 1647665400, + 'upload_date': '20220319', + 'thumbnail': 'https://image.asset.wrestle-universe.com/i8jxSTCHPfdAKD4zN41Psx/i8jxSTCHPfdAKD4zN41Psx', + 'thumbnails': 'count:3', + }, + 'params': { + 'skip_download': 'm3u8', + }, + }] + + _API_PATH = 'events' + + def _real_extract(self, url): + lang, video_id = self._match_valid_url(url).group('lang', 'id') + metadata = self._download_metadata(url, video_id, lang, 'eventFallbackData') + + info = { + 'id': video_id, + **traverse_obj(metadata, { + 'title': ('displayName', {str}), + 'description': ('description', {str}), + 'channel': ('labels', 'group', {str}), + 'location': ('labels', 'venue', {str}), + 'timestamp': ('startTime', {int_or_none}), + 'thumbnails': (('keyVisualUrl', 'alterKeyVisualUrl', 'heroKeyVisualUrl'), {'url': {url_or_none}}), + }), + } + + ended_time = traverse_obj(metadata, ('endedTime', {int_or_none})) + if info.get('timestamp') and ended_time: + info['duration'] = ended_time - info['timestamp'] + + video_data, decrypt = self._call_encrypted_api( + video_id, ':watchArchive', 'watch archive', data={'method': 1}) + info['formats'] = self._get_formats(video_data, ( + ('hls', None), ('urls', 'chromecastUrls'), ..., {url_or_none}), video_id) + for f in info['formats']: + # bitrates are exaggerated in PPV playlists, so avoid wrong/huge filesize_approx values + if f.get('tbr'): + f['tbr'] = int(f['tbr'] / 2.5) + + hls_aes_key = traverse_obj(video_data, ('hls', 'key', {decrypt})) + if hls_aes_key: + info['hls_aes'] = { + 'key': hls_aes_key, + 'iv': traverse_obj(video_data, ('hls', 'iv', {decrypt})), + } + elif traverse_obj(video_data, ('hls', 'encryptType', {int})): + self.report_warning('HLS AES-128 key was not found in API response') + + return info diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/wsj.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wsj.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/wsj.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/wsj.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/wwe.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wwe.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/wwe.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/wwe.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/wykop.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/wykop.py new file mode 100644 index 0000000..1d29cc8 --- /dev/null +++ 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/wykop.py @@ -0,0 +1,268 @@ +import json + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + format_field, + parse_iso8601, + traverse_obj, + url_or_none, +) + + +class WykopBaseExtractor(InfoExtractor): + def _get_token(self, force_refresh=False): + if not force_refresh: + maybe_cached = self.cache.load('wykop', 'bearer') + if maybe_cached: + return maybe_cached + + new_token = traverse_obj( + self._do_call_api('auth', None, 'Downloading anonymous auth token', data={ + # hardcoded in frontend + 'key': 'w53947240748', + 'secret': 'd537d9e0a7adc1510842059ae5316419', + }), ('data', 'token')) + + self.cache.store('wykop', 'bearer', new_token) + return new_token + + def _do_call_api(self, path, video_id, note='Downloading JSON metadata', data=None, headers={}): + if data: + data = json.dumps({'data': data}).encode() + headers['Content-Type'] = 'application/json' + + return self._download_json( + f'https://wykop.pl/api/v3/{path}', video_id, + note=note, data=data, headers=headers) + + def _call_api(self, path, video_id, note='Downloading JSON metadata'): + token = self._get_token() + for retrying in range(2): + try: + return self._do_call_api(path, video_id, note, headers={'Authorization': f'Bearer {token}'}) + except ExtractorError as e: + if not retrying and isinstance(e.cause, HTTPError) and e.cause.status == 403: + token = self._get_token(True) + continue + raise + + def _common_data_extract(self, data): + author = traverse_obj(data, ('author', 'username'), expected_type=str) + + return { + '_type': 'url_transparent', + 'display_id': data.get('slug'), + 'url': traverse_obj(data, + ('media', 'embed', 'url'), # what gets an iframe embed + ('source', 'url'), # clickable url (dig only) + expected_type=url_or_none), + 'thumbnail': traverse_obj( + data, ('media', 'photo', 'url'), ('media', 'embed', 'thumbnail'), expected_type=url_or_none), + 'uploader': author, + 'uploader_id': author, + 'uploader_url': format_field(author, None, 'https://wykop.pl/ludzie/%s'), + 'timestamp': parse_iso8601(data.get('created_at'), delimiter=' '), # time it got submitted + 'like_count': traverse_obj(data, ('votes', 'up'), expected_type=int), + 'dislike_count': traverse_obj(data, ('votes', 'down'), expected_type=int), + 'comment_count': traverse_obj(data, ('comments', 'count'), expected_type=int), + 'age_limit': 18 if data.get('adult') else 0, + 'tags': data.get('tags'), + } + + +class WykopDigIE(WykopBaseExtractor): + IE_NAME = 'wykop:dig' + _VALID_URL = r'https?://(?:www\.)?wykop\.pl/link/(?P<id>\d+)' + + _TESTS = [{ + 'url': 'https://wykop.pl/link/6912923/najbardziej-zrzedliwy-kot-na-swiecie-i-frozen-planet-ii-i-bbc-earth', + 'info_dict': { + 'id': 'rlSTBvViflc', + 'ext': 'mp4', + 'title': 'Najbardziej zrzędliwy kot na świecie I Frozen Planet II I BBC Earth', + 'display_id': 'najbardziej-zrzedliwy-kot-na-swiecie-i-frozen-planet-ii-i-bbc-earth', + 'description': 'md5:ac0f87dea1cdcb6b0c53f3612a095c87', + 'tags': ['zwierzaczki', 'koty', 'smiesznykotek', 'humor', 'rozrywka', 'ciekawostki'], + 'age_limit': 0, + 'timestamp': 1669154480, + 'release_timestamp': 1669194241, + 'release_date': '20221123', + 'uploader': 'starnak', + 'uploader_id': 'starnak', + 'uploader_url': 'https://wykop.pl/ludzie/starnak', + 'like_count': int, + 'dislike_count': int, + 'comment_count': int, + 'thumbnail': r're:https?://wykop\.pl/cdn/.+', + 'view_count': int, + 'channel': 'BBC Earth', + 'channel_id': 
'UCwmZiChSryoWQCZMIQezgTg', + 'channel_url': 'https://www.youtube.com/channel/UCwmZiChSryoWQCZMIQezgTg', + 'categories': ['Pets & Animals'], + 'upload_date': '20220923', + 'duration': 191, + 'channel_follower_count': int, + 'availability': 'public', + 'live_status': 'not_live', + 'playable_in_embed': True, + }, + }] + + @classmethod + def suitable(cls, url): + return cls._match_valid_url(url) and not WykopDigCommentIE.suitable(url) + + def _real_extract(self, url): + video_id = self._match_id(url) + data = self._call_api(f'links/{video_id}', video_id)['data'] + + return { + **self._common_data_extract(data), + 'id': video_id, + 'title': data['title'], + 'description': data.get('description'), + # time it got "digged" to the homepage + 'release_timestamp': parse_iso8601(data.get('published_at'), delimiter=' '), + } + + +class WykopDigCommentIE(WykopBaseExtractor): + IE_NAME = 'wykop:dig:comment' + _VALID_URL = r'https?://(?:www\.)?wykop\.pl/link/(?P<dig_id>\d+)/[^/]+/komentarz/(?P<id>\d+)' + + _TESTS = [{ + 'url': 'https://wykop.pl/link/6992589/strollowal-oszusta-przez-ponad-24-minuty-udawal-naiwniaka-i-nagral-rozmowe/komentarz/114540527/podobna-sytuacja-ponizej-ciekawa-dyskusja-z-oszustem-na-sam-koniec-sam-bylem-w-biurze-swiadkiem-podobnej-rozmowy-niemal-zakonczonej-sukcesem-bandyty-g', + 'info_dict': { + 'id': 'u6tEi2FmKZY', + 'ext': 'mp4', + 'title': 'md5:e7c741c5baa7ed6478000caf72865577', + 'display_id': 'md5:45b2d12bd0e262d09cc7cf7abc8412db', + 'description': 'md5:bcec7983429f9c0630f9deb9d3d1ba5e', + 'timestamp': 1674476945, + 'uploader': 'Bartholomew', + 'uploader_id': 'Bartholomew', + 'uploader_url': 'https://wykop.pl/ludzie/Bartholomew', + 'thumbnail': r're:https?://wykop\.pl/cdn/.+', + 'tags': [], + 'availability': 'public', + 'duration': 1838, + 'upload_date': '20230117', + 'categories': ['Entertainment'], + 'view_count': int, + 'like_count': int, + 'dislike_count': int, + 'comment_count': int, + 'channel_follower_count': int, + 'playable_in_embed': True, + 'live_status': 'not_live', + 'age_limit': 0, + 'chapters': 'count:3', + 'channel': 'Poszukiwacze Okazji', + 'channel_id': 'UCzzvJDZThwv06dR4xmzrZBw', + 'channel_url': 'https://www.youtube.com/channel/UCzzvJDZThwv06dR4xmzrZBw', + }, + }] + + def _real_extract(self, url): + dig_id, comment_id = self._search_regex(self._VALID_URL, url, 'dig and comment ids', group=('dig_id', 'id')) + data = self._call_api(f'links/{dig_id}/comments/{comment_id}', comment_id)['data'] + + return { + **self._common_data_extract(data), + 'id': comment_id, + 'title': f"{traverse_obj(data, ('author', 'username'))} - {data.get('content') or ''}", + 'description': data.get('content'), + } + + +class WykopPostIE(WykopBaseExtractor): + IE_NAME = 'wykop:post' + _VALID_URL = r'https?://(?:www\.)?wykop\.pl/wpis/(?P<id>\d+)' + + _TESTS = [{ + 'url': 'https://wykop.pl/wpis/68893343/kot-koty-smiesznykotek', + 'info_dict': { + 'id': 'PL8JMjiUPHUhwc9ZlKa_5IFeBwBV8Xe7jI', + 'title': 'PawelW124 - #kot #koty #smiesznykotek', + 'description': '#kot #koty #smiesznykotek', + 'display_id': 'kot-koty-smiesznykotek', + 'tags': ['kot', 'koty', 'smiesznykotek'], + 'uploader': 'PawelW124', + 'uploader_id': 'PawelW124', + 'uploader_url': 'https://wykop.pl/ludzie/PawelW124', + 'timestamp': 1668938142, + 'age_limit': 0, + 'like_count': int, + 'dislike_count': int, + 'thumbnail': r're:https?://wykop\.pl/cdn/.+', + 'comment_count': int, + 'channel': 'Revan', + 'channel_id': 'UCW9T_-uZoiI7ROARQdTDyOw', + 'channel_url': 'https://www.youtube.com/channel/UCW9T_-uZoiI7ROARQdTDyOw', + 
'upload_date': '20221120', + 'modified_date': '20220814', + 'availability': 'public', + 'view_count': int, + }, + 'playlist_mincount': 15, + 'params': { + 'flat_playlist': True, + } + }] + + @classmethod + def suitable(cls, url): + return cls._match_valid_url(url) and not WykopPostCommentIE.suitable(url) + + def _real_extract(self, url): + video_id = self._match_id(url) + data = self._call_api(f'entries/{video_id}', video_id)['data'] + + return { + **self._common_data_extract(data), + 'id': video_id, + 'title': f"{traverse_obj(data, ('author', 'username'))} - {data.get('content') or ''}", + 'description': data.get('content'), + } + + +class WykopPostCommentIE(WykopBaseExtractor): + IE_NAME = 'wykop:post:comment' + _VALID_URL = r'https?://(?:www\.)?wykop\.pl/wpis/(?P<post_id>\d+)/[^/#]+#(?P<id>\d+)' + + _TESTS = [{ + 'url': 'https://wykop.pl/wpis/70084873/test-test-test#249303979', + 'info_dict': { + 'id': 'confusedquickarmyant', + 'ext': 'mp4', + 'title': 'tpap - treść komentarza', + 'display_id': 'tresc-komentarza', + 'description': 'treść komentarza', + 'uploader': 'tpap', + 'uploader_id': 'tpap', + 'uploader_url': 'https://wykop.pl/ludzie/tpap', + 'timestamp': 1675349470, + 'upload_date': '20230202', + 'tags': [], + 'duration': 2.12, + 'age_limit': 0, + 'categories': [], + 'view_count': int, + 'like_count': int, + 'dislike_count': int, + 'thumbnail': r're:https?://wykop\.pl/cdn/.+', + }, + }] + + def _real_extract(self, url): + post_id, comment_id = self._search_regex(self._VALID_URL, url, 'post and comment ids', group=('post_id', 'id')) + data = self._call_api(f'entries/{post_id}/comments/{comment_id}', comment_id)['data'] + + return { + **self._common_data_extract(data), + 'id': comment_id, + 'title': f"{traverse_obj(data, ('author', 'username'))} - {data.get('content') or ''}", + 'description': data.get('content'), + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xanimu.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xanimu.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xanimu.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xanimu.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xbef.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xbef.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xbef.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xbef.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xboxclips.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xboxclips.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xboxclips.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xboxclips.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xfileshare.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xfileshare.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xfileshare.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xfileshare.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/xhamster.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xhamster.py new file mode 100644 index 0000000..01ac5dd --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/xhamster.py @@ -0,0 +1,465 @@ +import itertools +import re + +from .common import InfoExtractor +from ..compat import compat_str +from ..utils import ( + clean_html, + determine_ext, + dict_get, + extract_attributes, + 
ExtractorError, + float_or_none, + int_or_none, + parse_duration, + str_or_none, + try_get, + unified_strdate, + url_or_none, + urljoin, +) + + +class XHamsterIE(InfoExtractor): + _DOMAINS = r'(?:xhamster\.(?:com|one|desi)|xhms\.pro|xhamster\d+\.com|xhday\.com|xhvid\.com)' + _VALID_URL = r'''(?x) + https?:// + (?:[^/?#]+\.)?%s/ + (?: + movies/(?P<id>[\dA-Za-z]+)/(?P<display_id>[^/]*)\.html| + videos/(?P<display_id_2>[^/]*)-(?P<id_2>[\dA-Za-z]+) + ) + ''' % _DOMAINS + _TESTS = [{ + 'url': 'https://xhamster.com/videos/femaleagent-shy-beauty-takes-the-bait-1509445', + 'md5': '34e1ab926db5dc2750fed9e1f34304bb', + 'info_dict': { + 'id': '1509445', + 'display_id': 'femaleagent-shy-beauty-takes-the-bait', + 'ext': 'mp4', + 'title': 'FemaleAgent Shy beauty takes the bait', + 'timestamp': 1350194821, + 'upload_date': '20121014', + 'uploader': 'Ruseful2011', + 'uploader_id': 'ruseful2011', + 'duration': 893, + 'age_limit': 18, + }, + }, { + 'url': 'https://xhamster.com/videos/britney-spears-sexy-booty-2221348?hd=', + 'info_dict': { + 'id': '2221348', + 'display_id': 'britney-spears-sexy-booty', + 'ext': 'mp4', + 'title': 'Britney Spears Sexy Booty', + 'timestamp': 1379123460, + 'upload_date': '20130914', + 'uploader': 'jojo747400', + 'duration': 200, + 'age_limit': 18, + }, + 'params': { + 'skip_download': True, + }, + }, { + # empty seo, unavailable via new URL schema + 'url': 'http://xhamster.com/movies/5667973/.html', + 'info_dict': { + 'id': '5667973', + 'ext': 'mp4', + 'title': '....', + 'timestamp': 1454948101, + 'upload_date': '20160208', + 'uploader': 'parejafree', + 'uploader_id': 'parejafree', + 'duration': 72, + 'age_limit': 18, + }, + 'params': { + 'skip_download': True, + }, + }, { + # mobile site + 'url': 'https://m.xhamster.com/videos/cute-teen-jacqueline-solo-masturbation-8559111', + 'only_matching': True, + }, { + 'url': 'https://xhamster.com/movies/2272726/amber_slayed_by_the_knight.html', + 'only_matching': True, + }, { + # This video is visible for marcoalfa123456's friends only + 'url': 'https://it.xhamster.com/movies/7263980/la_mia_vicina.html', + 'only_matching': True, + }, { + # new URL schema + 'url': 'https://pt.xhamster.com/videos/euro-pedal-pumping-7937821', + 'only_matching': True, + }, { + 'url': 'https://xhamster.one/videos/femaleagent-shy-beauty-takes-the-bait-1509445', + 'only_matching': True, + }, { + 'url': 'https://xhamster.desi/videos/femaleagent-shy-beauty-takes-the-bait-1509445', + 'only_matching': True, + }, { + 'url': 'https://xhamster2.com/videos/femaleagent-shy-beauty-takes-the-bait-1509445', + 'only_matching': True, + }, { + 'url': 'https://xhamster11.com/videos/femaleagent-shy-beauty-takes-the-bait-1509445', + 'only_matching': True, + }, { + 'url': 'https://xhamster26.com/videos/femaleagent-shy-beauty-takes-the-bait-1509445', + 'only_matching': True, + }, { + 'url': 'http://xhamster.com/movies/1509445/femaleagent_shy_beauty_takes_the_bait.html', + 'only_matching': True, + }, { + 'url': 'http://xhamster.com/movies/2221348/britney_spears_sexy_booty.html?hd', + 'only_matching': True, + }, { + 'url': 'http://de.xhamster.com/videos/skinny-girl-fucks-herself-hard-in-the-forest-xhnBJZx', + 'only_matching': True, + }, { + 'url': 'https://xhday.com/videos/strapless-threesome-xhh7yVf', + 'only_matching': True, + }, { + 'url': 'https://xhvid.com/videos/lk-mm-xhc6wn6', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + video_id = mobj.group('id') or mobj.group('id_2') + display_id = mobj.group('display_id') or 
mobj.group('display_id_2') + + desktop_url = re.sub(r'^(https?://(?:.+?\.)?)m\.', r'\1', url) + webpage, urlh = self._download_webpage_handle(desktop_url, video_id) + + error = self._html_search_regex( + r'<div[^>]+id=["\']videoClosed["\'][^>]*>(.+?)</div>', + webpage, 'error', default=None) + if error: + raise ExtractorError(error, expected=True) + + age_limit = self._rta_search(webpage) + + def get_height(s): + return int_or_none(self._search_regex( + r'^(\d+)[pP]', s, 'height', default=None)) + + initials = self._parse_json( + self._search_regex( + (r'window\.initials\s*=\s*({.+?})\s*;\s*</script>', + r'window\.initials\s*=\s*({.+?})\s*;'), webpage, 'initials', + default='{}'), + video_id, fatal=False) + if initials: + video = initials['videoModel'] + title = video['title'] + formats = [] + format_urls = set() + format_sizes = {} + sources = try_get(video, lambda x: x['sources'], dict) or {} + for format_id, formats_dict in sources.items(): + if not isinstance(formats_dict, dict): + continue + download_sources = try_get(sources, lambda x: x['download'], dict) or {} + for quality, format_dict in download_sources.items(): + if not isinstance(format_dict, dict): + continue + format_sizes[quality] = float_or_none(format_dict.get('size')) + for quality, format_item in formats_dict.items(): + if format_id == 'download': + # Download link takes some time to be generated, + # skipping for now + continue + format_url = format_item + format_url = url_or_none(format_url) + if not format_url or format_url in format_urls: + continue + format_urls.add(format_url) + formats.append({ + 'format_id': '%s-%s' % (format_id, quality), + 'url': format_url, + 'ext': determine_ext(format_url, 'mp4'), + 'height': get_height(quality), + 'filesize': format_sizes.get(quality), + 'http_headers': { + 'Referer': urlh.url, + }, + }) + xplayer_sources = try_get( + initials, lambda x: x['xplayerSettings']['sources'], dict) + if xplayer_sources: + hls_sources = xplayer_sources.get('hls') + if isinstance(hls_sources, dict): + for hls_format_key in ('url', 'fallback'): + hls_url = hls_sources.get(hls_format_key) + if not hls_url: + continue + hls_url = urljoin(url, hls_url) + if not hls_url or hls_url in format_urls: + continue + format_urls.add(hls_url) + formats.extend(self._extract_m3u8_formats( + hls_url, video_id, 'mp4', entry_protocol='m3u8_native', + m3u8_id='hls', fatal=False)) + standard_sources = xplayer_sources.get('standard') + if isinstance(standard_sources, dict): + for format_id, formats_list in standard_sources.items(): + if not isinstance(formats_list, list): + continue + for standard_format in formats_list: + if not isinstance(standard_format, dict): + continue + for standard_format_key in ('url', 'fallback'): + standard_url = standard_format.get(standard_format_key) + if not standard_url: + continue + standard_url = urljoin(url, standard_url) + if not standard_url or standard_url in format_urls: + continue + format_urls.add(standard_url) + ext = determine_ext(standard_url, 'mp4') + if ext == 'm3u8': + formats.extend(self._extract_m3u8_formats( + standard_url, video_id, 'mp4', entry_protocol='m3u8_native', + m3u8_id='hls', fatal=False)) + continue + quality = (str_or_none(standard_format.get('quality')) + or str_or_none(standard_format.get('label')) + or '') + formats.append({ + 'format_id': '%s-%s' % (format_id, quality), + 'url': standard_url, + 'ext': ext, + 'height': get_height(quality), + 'filesize': format_sizes.get(quality), + 'http_headers': { + 'Referer': standard_url, + }, + }) + + 
categories_list = video.get('categories') + if isinstance(categories_list, list): + categories = [] + for c in categories_list: + if not isinstance(c, dict): + continue + c_name = c.get('name') + if isinstance(c_name, compat_str): + categories.append(c_name) + else: + categories = None + + uploader_url = url_or_none(try_get(video, lambda x: x['author']['pageURL'])) + return { + 'id': video_id, + 'display_id': display_id, + 'title': title, + 'description': video.get('description'), + 'timestamp': int_or_none(video.get('created')), + 'uploader': try_get( + video, lambda x: x['author']['name'], compat_str), + 'uploader_url': uploader_url, + 'uploader_id': uploader_url.split('/')[-1] if uploader_url else None, + 'thumbnail': video.get('thumbURL'), + 'duration': int_or_none(video.get('duration')), + 'view_count': int_or_none(video.get('views')), + 'like_count': int_or_none(try_get( + video, lambda x: x['rating']['likes'], int)), + 'dislike_count': int_or_none(try_get( + video, lambda x: x['rating']['dislikes'], int)), + 'comment_count': int_or_none(video.get('views')), + 'age_limit': age_limit if age_limit is not None else 18, + 'categories': categories, + 'formats': formats, + } + + # Old layout fallback + + title = self._html_search_regex( + [r'<h1[^>]*>([^<]+)</h1>', + r'<meta[^>]+itemprop=".*?caption.*?"[^>]+content="(.+?)"', + r'<title[^>]*>(.+?)(?:,\s*[^,]*?\s*Porn\s*[^,]*?:\s*xHamster[^<]*| - xHamster\.com)</title>'], + webpage, 'title') + + formats = [] + format_urls = set() + + sources = self._parse_json( + self._search_regex( + r'sources\s*:\s*({.+?})\s*,?\s*\n', webpage, 'sources', + default='{}'), + video_id, fatal=False) + for format_id, format_url in sources.items(): + format_url = url_or_none(format_url) + if not format_url: + continue + if format_url in format_urls: + continue + format_urls.add(format_url) + formats.append({ + 'format_id': format_id, + 'url': format_url, + 'height': get_height(format_id), + }) + + video_url = self._search_regex( + [r'''file\s*:\s*(?P<q>["'])(?P<mp4>.+?)(?P=q)''', + r'''<a\s+href=(?P<q>["'])(?P<mp4>.+?)(?P=q)\s+class=["']mp4Thumb''', + r'''<video[^>]+file=(?P<q>["'])(?P<mp4>.+?)(?P=q)[^>]*>'''], + webpage, 'video url', group='mp4', default=None) + if video_url and video_url not in format_urls: + formats.append({ + 'url': video_url, + }) + + # Only a few videos have an description + mobj = re.search(r'<span>Description: </span>([^<]+)', webpage) + description = mobj.group(1) if mobj else None + + upload_date = unified_strdate(self._search_regex( + r'hint=["\'](\d{4}-\d{2}-\d{2}) \d{2}:\d{2}:\d{2} [A-Z]{3,4}', + webpage, 'upload date', fatal=False)) + + uploader = self._html_search_regex( + r'<span[^>]+itemprop=["\']author[^>]+><a[^>]+><span[^>]+>([^<]+)', + webpage, 'uploader', default='anonymous') + + thumbnail = self._search_regex( + [r'''["']thumbUrl["']\s*:\s*(?P<q>["'])(?P<thumbnail>.+?)(?P=q)''', + r'''<video[^>]+"poster"=(?P<q>["'])(?P<thumbnail>.+?)(?P=q)[^>]*>'''], + webpage, 'thumbnail', fatal=False, group='thumbnail') + + duration = parse_duration(self._search_regex( + [r'<[^<]+\bitemprop=["\']duration["\'][^<]+\bcontent=["\'](.+?)["\']', + r'Runtime:\s*</span>\s*([\d:]+)'], webpage, + 'duration', fatal=False)) + + view_count = int_or_none(self._search_regex( + r'content=["\']User(?:View|Play)s:(\d+)', + webpage, 'view count', fatal=False)) + + mobj = re.search(r'hint=[\'"](?P<likecount>\d+) Likes / (?P<dislikecount>\d+) Dislikes', webpage) + (like_count, dislike_count) = (mobj.group('likecount'), mobj.group('dislikecount')) if mobj else (None, None) + + mobj = re.search(r'</label>Comments \((?P<commentcount>\d+)\)</div>', webpage) + comment_count = mobj.group('commentcount') if mobj else 0 + + categories_html = self._search_regex( + r'(?s)<table.+?(<span>Categories:.+?)</table>', webpage, + 'categories', default=None) + categories = [clean_html(category) for category in re.findall( + r'<a[^>]+>(.+?)</a>', categories_html)] if categories_html else None + + return { + 'id': video_id, + 'display_id': display_id, + 'title': title, + 'description': description, + 'upload_date': upload_date, + 'uploader': uploader, + 'uploader_id': uploader.lower() if uploader else None, + 'thumbnail': thumbnail, + 'duration': duration, + 'view_count': view_count, + 'like_count': int_or_none(like_count), + 'dislike_count': int_or_none(dislike_count), + 'comment_count': int_or_none(comment_count), + 'age_limit': age_limit, + 'categories': categories, + 'formats': formats, + } + + +class XHamsterEmbedIE(InfoExtractor): + _VALID_URL = r'https?://(?:[^/?#]+\.)?%s/xembed\.php\?video=(?P<id>\d+)' % XHamsterIE._DOMAINS + _EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?xhamster\.com/xembed\.php\?video=\d+)\1'] + _TEST = { + 'url': 'http://xhamster.com/xembed.php?video=3328539', + 'info_dict': { + 'id': '3328539', + 'ext': 'mp4', + 'title': 'Pen Masturbation', + 'timestamp': 1406581861, + 'upload_date': '20140728', + 'uploader': 'ManyakisArt', + 'duration': 5, + 'age_limit': 18, + } + } + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage(url, video_id) + + video_url = self._search_regex( + r'href="(https?://xhamster\.com/(?:movies/{0}/[^"]*\.html|videos/[^/]*-{0})[^"]*)"'.format(video_id), + webpage, 'xhamster url', default=None) + + if not video_url: + vars = self._parse_json( + self._search_regex(r'vars\s*:\s*({.+?})\s*,\s*\n', webpage, 'vars'), + video_id) + video_url = dict_get(vars, ('downloadLink', 'homepageLink', 'commentsLink', 'shareUrl')) + + return self.url_result(video_url, 'XHamster') + + +class XHamsterUserIE(InfoExtractor): + _VALID_URL = rf'https?://(?:[^/?#]+\.)?{XHamsterIE._DOMAINS}/(?:(?P<user>users)|creators)/(?P<id>[^/?#&]+)' + _TESTS = [{ + # Paginated user profile + 'url': 'https://xhamster.com/users/netvideogirls/videos', + 'info_dict': { + 'id': 'netvideogirls', + }, + 'playlist_mincount': 267, + }, { + # Non-paginated user profile + 'url': 'https://xhamster.com/users/firatkaan/videos', + 'info_dict': { + 'id': 'firatkaan', + }, + 'playlist_mincount': 1, + }, { + 'url': 'https://xhamster.com/creators/squirt-orgasm-69', + 'info_dict': { + 'id': 'squirt-orgasm-69', + }, + 'playlist_mincount': 150, + }, { + 'url': 'https://xhday.com/users/mobhunter', + 'only_matching': True, + }, { + 'url': 'https://xhvid.com/users/pelushe21', + 'only_matching': True, + }] + + def _entries(self, user_id, is_user): + prefix, suffix = ('users', 'videos') if is_user else ('creators', 'exclusive') + next_page_url = f'https://xhamster.com/{prefix}/{user_id}/{suffix}/1' + for pagenum in itertools.count(1): + page = self._download_webpage( + next_page_url, user_id, 'Downloading page %s' % pagenum) + for video_tag in re.findall( + r'(<a[^>]+class=["\'].*?\bvideo-thumb__image-container[^>]+>)', + page): + video = extract_attributes(video_tag) + video_url = url_or_none(video.get('href')) + if not video_url or not XHamsterIE.suitable(video_url): + continue + video_id = XHamsterIE._match_id(video_url) + yield self.url_result( + video_url, ie=XHamsterIE.ie_key(), video_id=video_id) + mobj = re.search(r'<a[^>]+data-page=["\']next[^>]+>', page) + if not mobj: + break + next_page = extract_attributes(mobj.group(0)) + next_page_url = url_or_none(next_page.get('href')) + if not next_page_url: + break + + def _real_extract(self, url): + user, user_id = self._match_valid_url(url).group('user', 'id') + return self.playlist_result(self._entries(user_id, bool(user)), user_id) diff --git 
a/python/lib/python3.10/site-packages/yt_dlp/extractor/ximalaya.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ximalaya.py new file mode 100644 index 0000000..3d5e6cf --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/ximalaya.py @@ -0,0 +1,167 @@ +import math + +from .common import InfoExtractor +from ..utils import traverse_obj, try_call, InAdvancePagedList + + +class XimalayaBaseIE(InfoExtractor): + _GEO_COUNTRIES = ['CN'] + + +class XimalayaIE(XimalayaBaseIE): + IE_NAME = 'ximalaya' + IE_DESC = '喜马拉雅FM' + _VALID_URL = r'https?://(?:www\.|m\.)?ximalaya\.com/(:?(?P\d+)/)?sound/(?P[0-9]+)' + _TESTS = [ + { + 'url': 'http://www.ximalaya.com/sound/47740352/', + 'info_dict': { + 'id': '47740352', + 'ext': 'm4a', + 'uploader': 'å°å½¬å½¬çˆ±å¬ä¹¦', + 'uploader_id': 61425525, + 'uploader_url': 'http://www.ximalaya.com/zhubo/61425525/', + 'title': '261.å”诗三百首.å·å…«.é€å­Ÿæµ©ç„¶ä¹‹å¹¿é™µ.æŽç™½', + 'description': "contains:《é€å­Ÿæµ©ç„¶ä¹‹å¹¿é™µã€‹\n作者:æŽç™½\n故人西辞黄鹤楼,烟花三月下扬州。\n孤帆远影碧空尽,惟è§é•¿æ±Ÿå¤©é™…æµã€‚", + 'thumbnail': r're:^https?://.*\.jpg', + 'thumbnails': [ + { + 'name': 'cover_url', + 'url': r're:^https?://.*\.jpg', + }, + { + 'name': 'cover_url_142', + 'url': r're:^https?://.*\.jpg', + 'width': 180, + 'height': 180 + } + ], + 'categories': ['å…¶ä»–'], + 'duration': 93, + 'view_count': int, + 'like_count': int, + } + }, + { + 'url': 'http://m.ximalaya.com/61425525/sound/47740352/', + 'info_dict': { + 'id': '47740352', + 'ext': 'm4a', + 'uploader': 'å°å½¬å½¬çˆ±å¬ä¹¦', + 'uploader_id': 61425525, + 'uploader_url': 'http://www.ximalaya.com/zhubo/61425525/', + 'title': '261.å”诗三百首.å·å…«.é€å­Ÿæµ©ç„¶ä¹‹å¹¿é™µ.æŽç™½', + 'description': "contains:《é€å­Ÿæµ©ç„¶ä¹‹å¹¿é™µã€‹\n作者:æŽç™½\n故人西辞黄鹤楼,烟花三月下扬州。\n孤帆远影碧空尽,惟è§é•¿æ±Ÿå¤©é™…æµã€‚", + 'thumbnail': r're:^https?://.*\.jpg', + 'thumbnails': [ + { + 'name': 'cover_url', + 'url': r're:^https?://.*\.jpg', + }, + { + 'name': 'cover_url_142', + 'url': r're:^https?://.*\.jpg', + 'width': 180, + 'height': 180 + } + ], + 'categories': ['人文'], + 'duration': 93, + 'view_count': int, + 'like_count': int, + } + } + ] + + def _real_extract(self, url): + scheme = 'https' if url.startswith('https') else 'http' + + audio_id = self._match_id(url) + audio_info_file = '%s://m.ximalaya.com/tracks/%s.json' % (scheme, audio_id) + audio_info = self._download_json(audio_info_file, audio_id, + 'Downloading info json %s' % audio_info_file, + 'Unable to download info file') + + formats = [{ + 'format_id': f'{bps}k', + 'url': audio_info[k], + 'abr': bps, + 'vcodec': 'none' + } for bps, k in ((24, 'play_path_32'), (64, 'play_path_64')) if audio_info.get(k)] + + thumbnails = [] + for k in audio_info.keys(): + # cover pics kyes like: cover_url', 'cover_url_142' + if k.startswith('cover_url'): + thumbnail = {'name': k, 'url': audio_info[k]} + if k == 'cover_url_142': + thumbnail['width'] = 180 + thumbnail['height'] = 180 + thumbnails.append(thumbnail) + + audio_uploader_id = audio_info.get('uid') + + audio_description = try_call( + lambda: audio_info['intro'].replace('\r\n\r\n\r\n ', '\n').replace('\r\n', '\n')) + + return { + 'id': audio_id, + 'uploader': audio_info.get('nickname'), + 'uploader_id': audio_uploader_id, + 'uploader_url': f'{scheme}://www.ximalaya.com/zhubo/{audio_uploader_id}/' if audio_uploader_id else None, + 'title': audio_info['title'], + 'thumbnails': thumbnails, + 'description': audio_description, + 'categories': list(filter(None, [audio_info.get('category_name')])), + 'duration': audio_info.get('duration'), + 'view_count': 
audio_info.get('play_count'), + 'like_count': audio_info.get('favorites_count'), + 'formats': formats, + } + + +class XimalayaAlbumIE(XimalayaBaseIE): + IE_NAME = 'ximalaya:album' + IE_DESC = '喜马拉雅FM 专辑' + _VALID_URL = r'https?://(?:www\.|m\.)?ximalaya\.com/(?:\d+/)?album/(?P[0-9]+)' + _TESTS = [{ + 'url': 'http://www.ximalaya.com/61425525/album/5534601/', + 'info_dict': { + 'title': 'å”诗三百首(å«èµæžï¼‰', + 'id': '5534601', + }, + 'playlist_mincount': 323, + }, { + 'url': 'https://www.ximalaya.com/album/6912905', + 'info_dict': { + 'title': '埃克哈特《修炼当下的力é‡ã€‹', + 'id': '6912905', + }, + 'playlist_mincount': 41, + }] + + def _real_extract(self, url): + playlist_id = self._match_id(url) + + first_page = self._fetch_page(playlist_id, 1) + page_count = math.ceil(first_page['trackTotalCount'] / first_page['pageSize']) + + entries = InAdvancePagedList( + lambda idx: self._get_entries(self._fetch_page(playlist_id, idx + 1) if idx else first_page), + page_count, first_page['pageSize']) + + title = traverse_obj(first_page, ('tracks', 0, 'albumTitle'), expected_type=str) + + return self.playlist_result(entries, playlist_id, title) + + def _fetch_page(self, playlist_id, page_idx): + return self._download_json( + 'https://www.ximalaya.com/revision/album/v1/getTracksList', + playlist_id, note=f'Downloading tracks list page {page_idx}', + query={'albumId': playlist_id, 'pageNum': page_idx})['data'] + + def _get_entries(self, page_data): + for e in page_data['tracks']: + yield self.url_result( + self._proto_relative_url(f'//www.ximalaya.com{e["url"]}'), + XimalayaIE, e.get('trackId'), e.get('title')) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xinpianchang.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xinpianchang.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xinpianchang.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xinpianchang.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xminus.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xminus.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xminus.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xminus.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xnxx.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xnxx.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xnxx.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xnxx.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xstream.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xstream.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xstream.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xstream.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/xtube.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xtube.py new file mode 100644 index 0000000..db82925 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/xtube.py @@ -0,0 +1,214 @@ +import itertools +import re + +from .common import InfoExtractor +from ..networking import Request +from ..utils import ( + int_or_none, + js_to_json, + orderedSet, + parse_duration, + str_to_int, + url_or_none, +) + + +class XTubeIE(InfoExtractor): + _VALID_URL = r'''(?x) + (?: + xtube:| + https?://(?:www\.)?xtube\.com/(?:watch\.php\?.*\bv=|video-watch/(?:embedded/)?(?P[^/]+)-) + ) + (?P[^/?&#]+) + ''' + + _TESTS = [{ + # old URL schema + 'url': 
'http://www.xtube.com/watch.php?v=kVTUy_G222_', + 'md5': '092fbdd3cbe292c920ef6fc6a8a9cdab', + 'info_dict': { + 'id': 'kVTUy_G222_', + 'ext': 'mp4', + 'title': 'strange erotica', + 'description': 'contains:an ET kind of thing', + 'uploader': 'greenshowers', + 'duration': 450, + 'view_count': int, + 'comment_count': int, + 'age_limit': 18, + } + }, { + # new URL schema + 'url': 'http://www.xtube.com/video-watch/strange-erotica-625837', + 'only_matching': True, + }, { + 'url': 'xtube:625837', + 'only_matching': True, + }, { + 'url': 'xtube:kVTUy_G222_', + 'only_matching': True, + }, { + 'url': 'https://www.xtube.com/video-watch/embedded/milf-tara-and-teen-shared-and-cum-covered-extreme-bukkake-32203482?embedsize=big', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + video_id = mobj.group('id') + display_id = mobj.group('display_id') + + if not display_id: + display_id = video_id + + if video_id.isdigit() and len(video_id) < 11: + url_pattern = 'http://www.xtube.com/video-watch/-%s' + else: + url_pattern = 'http://www.xtube.com/watch.php?v=%s' + + webpage = self._download_webpage( + url_pattern % video_id, display_id, headers={ + 'Cookie': 'age_verified=1; cookiesAccepted=1', + }) + + title, thumbnail, duration, sources, media_definition = [None] * 5 + + config = self._parse_json(self._search_regex( + r'playerConf\s*=\s*({.+?})\s*,\s*(?:\n|loaderConf|playerWrapper)', webpage, 'config', + default='{}'), video_id, transform_source=js_to_json, fatal=False) + if config: + config = config.get('mainRoll') + if isinstance(config, dict): + title = config.get('title') + thumbnail = config.get('poster') + duration = int_or_none(config.get('duration')) + sources = config.get('sources') or config.get('format') + media_definition = config.get('mediaDefinition') + + if not isinstance(sources, dict) and not media_definition: + sources = self._parse_json(self._search_regex( + r'(["\'])?sources\1?\s*:\s*(?P{.+?}),', + webpage, 'sources', group='sources'), video_id, + transform_source=js_to_json) + + formats = [] + format_urls = set() + + if isinstance(sources, dict): + for format_id, format_url in sources.items(): + format_url = url_or_none(format_url) + if not format_url: + continue + if format_url in format_urls: + continue + format_urls.add(format_url) + formats.append({ + 'url': format_url, + 'format_id': format_id, + 'height': int_or_none(format_id), + }) + + if isinstance(media_definition, list): + for media in media_definition: + video_url = url_or_none(media.get('videoUrl')) + if not video_url: + continue + if video_url in format_urls: + continue + format_urls.add(video_url) + format_id = media.get('format') + if format_id == 'hls': + formats.extend(self._extract_m3u8_formats( + video_url, video_id, 'mp4', entry_protocol='m3u8_native', + m3u8_id='hls', fatal=False)) + elif format_id == 'mp4': + height = int_or_none(media.get('quality')) + formats.append({ + 'url': video_url, + 'format_id': '%s-%d' % (format_id, height) if height else format_id, + 'height': height, + }) + + self._remove_duplicate_formats(formats) + + if not title: + title = self._search_regex( + (r'
<h1>
\s*(?P[^<]+?)\s*</h1>', r'videoTitle\s*:\s*(["\'])(?P<title>.+?)\1'), + webpage, 'title', group='title') + description = self._og_search_description( + webpage, default=None) or self._html_search_meta( + 'twitter:description', webpage, default=None) or self._search_regex( + r'</h1>\s*<p>([^<]+)', webpage, 'description', fatal=False) + uploader = self._search_regex( + (r'<input[^>]+name="contentOwnerId"[^>]+value="([^"]+)"', + r'<span[^>]+class="nickname"[^>]*>([^<]+)'), + webpage, 'uploader', fatal=False) + if not duration: + duration = parse_duration(self._search_regex( + r'<dt>Runtime:?</dt>\s*<dd>([^<]+)</dd>', + webpage, 'duration', fatal=False)) + view_count = str_to_int(self._search_regex( + (r'["\']viewsCount["\'][^>]*>(\d+)\s+views', + r'<dt>Views:?</dt>\s*<dd>([\d,\.]+)</dd>'), + webpage, 'view count', fatal=False)) + comment_count = str_to_int(self._html_search_regex( + r'>Comments? \(([\d,\.]+)\)<', + webpage, 'comment count', fatal=False)) + + return { + 'id': video_id, + 'display_id': display_id, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'uploader': uploader, + 'duration': duration, + 'view_count': view_count, + 'comment_count': comment_count, + 'age_limit': 18, + 'formats': formats, + } + + +class XTubeUserIE(InfoExtractor): + IE_DESC = 'XTube user profile' + _VALID_URL = r'https?://(?:www\.)?xtube\.com/profile/(?P<id>[^/]+-\d+)' + _TEST = { + 'url': 'http://www.xtube.com/profile/greenshowers-4056496', + 'info_dict': { + 'id': 'greenshowers-4056496', + 'age_limit': 18, + }, + 'playlist_mincount': 154, + } + + def _real_extract(self, url): + user_id = self._match_id(url) + + entries = [] + for pagenum in itertools.count(1): + request = Request( + 'http://www.xtube.com/profile/%s/videos/%d' % (user_id, pagenum), + headers={ + 'Cookie': 'popunder=4', + 'X-Requested-With': 'XMLHttpRequest', + 'Referer': url, + }) + + page = self._download_json( + request, user_id, 'Downloading videos JSON page %d' % pagenum) + + html = page.get('html') + if not html: + break + + for video_id in orderedSet([video_id for _, video_id in re.findall( + r'data-plid=(["\'])(.+?)\1', html)]): + entries.append(self.url_result('xtube:%s' % video_id, XTubeIE.ie_key())) + + page_count = int_or_none(page.get('pageCount')) + if not page_count or pagenum == page_count: + break + + playlist = self.playlist_result(entries, user_id) + playlist['age_limit'] = 18 + return playlist diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xuite.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xuite.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xuite.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xuite.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xvideos.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xvideos.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xvideos.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xvideos.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/xxxymovies.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/xxxymovies.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/xxxymovies.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/xxxymovies.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/yahoo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yahoo.py new file mode 100644 index 0000000..24148a0 --- /dev/null +++ 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/yahoo.py @@ -0,0 +1,430 @@ +import hashlib +import itertools +import urllib.parse + +from .common import InfoExtractor, SearchInfoExtractor +from .youtube import YoutubeIE +from ..utils import ( + ExtractorError, + clean_html, + int_or_none, + mimetype2ext, + parse_iso8601, + traverse_obj, + try_get, + url_or_none, +) + + +class YahooIE(InfoExtractor): + IE_DESC = 'Yahoo screen and movies' + _VALID_URL = r'(?P<url>https?://(?:(?P<country>[a-zA-Z]{2}(?:-[a-zA-Z]{2})?|malaysia)\.)?(?:[\da-zA-Z_-]+\.)?yahoo\.com/(?:[^/]+/)*(?P<id>[^?&#]*-[0-9]+(?:-[a-z]+)?)\.html)' + _EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>https?://(?:screen|movies)\.yahoo\.com/.+?\.html\?format=embed)\1'] + + _TESTS = [{ + 'url': 'http://screen.yahoo.com/julian-smith-travis-legg-watch-214727115.html', + 'info_dict': { + 'id': '2d25e626-2378-391f-ada0-ddaf1417e588', + 'ext': 'mp4', + 'title': 'Julian Smith & Travis Legg Watch Julian Smith', + 'description': 'Julian and Travis watch Julian Smith', + 'duration': 6863, + 'timestamp': 1369812016, + 'upload_date': '20130529', + }, + 'skip': 'No longer exists', + }, { + 'url': 'https://screen.yahoo.com/community/community-sizzle-reel-203225340.html?format=embed', + 'md5': '7993e572fac98e044588d0b5260f4352', + 'info_dict': { + 'id': '4fe78544-8d48-39d8-97cd-13f205d9fcdb', + 'ext': 'mp4', + 'title': "Yahoo Saves 'Community'", + 'description': 'md5:4d4145af2fd3de00cbb6c1d664105053', + 'duration': 170, + 'timestamp': 1406838636, + 'upload_date': '20140731', + }, + 'skip': 'Unfortunately, this video is not available in your region', + }, { + 'url': 'https://uk.screen.yahoo.com/editor-picks/cute-raccoon-freed-drain-using-091756545.html', + 'md5': '71298482f7c64cbb7fa064e4553ff1c1', + 'info_dict': { + 'id': 'b3affa53-2e14-3590-852b-0e0db6cd1a58', + 'ext': 'webm', + 'title': 'Cute Raccoon Freed From Drain\u00a0Using Angle Grinder', + 'description': 'md5:f66c890e1490f4910a9953c941dee944', + 'duration': 97, + 'timestamp': 1414489862, + 'upload_date': '20141028', + }, + 'skip': 'No longer exists', + }, { + 'url': 'http://news.yahoo.com/video/china-moses-crazy-blues-104538833.html', + 'md5': '88e209b417f173d86186bef6e4d1f160', + 'info_dict': { + 'id': 'f885cf7f-43d4-3450-9fac-46ac30ece521', + 'ext': 'mp4', + 'title': 'China Moses Is Crazy About the Blues', + 'description': 'md5:9900ab8cd5808175c7b3fe55b979bed0', + 'duration': 128, + 'timestamp': 1385722202, + 'upload_date': '20131129', + } + }, { + 'url': 'https://www.yahoo.com/movies/v/true-story-trailer-173000497.html', + 'md5': '2a9752f74cb898af5d1083ea9f661b58', + 'info_dict': { + 'id': '071c4013-ce30-3a93-a5b2-e0413cd4a9d1', + 'ext': 'mp4', + 'title': '\'True Story\' Trailer', + 'description': 'True Story', + 'duration': 150, + 'timestamp': 1418919206, + 'upload_date': '20141218', + }, + }, { + 'url': 'https://gma.yahoo.com/pizza-delivery-man-surprised-huge-tip-college-kids-195200785.html', + 'only_matching': True, + }, { + 'note': 'NBC Sports embeds', + 'url': 'http://sports.yahoo.com/blogs/ncaab-the-dagger/tyler-kalinoski-s-buzzer-beater-caps-davidson-s-comeback-win-185609842.html?guid=nbc_cbk_davidsonbuzzerbeater_150313', + 'info_dict': { + 'id': '9CsDKds0kvHI', + 'ext': 'flv', + 'description': 'md5:df390f70a9ba7c95ff1daace988f0d8d', + 'title': 'Tyler Kalinoski hits buzzer-beater to lift Davidson', + 'upload_date': '20150313', + 'uploader': 'NBCU-SPORTS', + 'timestamp': 1426270238, + }, + }, { + 'url': 'https://tw.news.yahoo.com/-100120367.html', + 'only_matching': True, 
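The `only_matching` test entries here (such as the `tw.news.yahoo.com` one just above) never download anything: the test harness only asserts that `_VALID_URL` accepts the URL. A minimal standalone sketch of that check, with the pattern copied verbatim from `YahooIE._VALID_URL`:

```python
import re

# Pattern copied verbatim from YahooIE._VALID_URL above
VALID_URL = (
    r'(?P<url>https?://(?:(?P<country>[a-zA-Z]{2}(?:-[a-zA-Z]{2})?|malaysia)\.)?'
    r'(?:[\da-zA-Z_-]+\.)?yahoo\.com/(?:[^/]+/)*(?P<id>[^?&#]*-[0-9]+(?:-[a-z]+)?)\.html)')

m = re.match(VALID_URL, 'https://tw.news.yahoo.com/-100120367.html')
assert m and m.group('country') == 'tw' and m.group('id') == '-100120367'
```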
+ }, { + # Query result is embedded in webpage, but explicit request to video API fails with geo restriction + 'url': 'https://screen.yahoo.com/community/communitary-community-episode-1-ladders-154501237.html', + 'md5': '4fbafb9c9b6f07aa8f870629f6671b35', + 'info_dict': { + 'id': '1f32853c-a271-3eef-8cb6-f6d6872cb504', + 'ext': 'mp4', + 'title': 'Communitary - Community Episode 1: Ladders', + 'description': 'md5:8fc39608213295748e1e289807838c97', + 'duration': 1646, + 'timestamp': 1440436550, + 'upload_date': '20150824', + 'series': 'Communitary', + 'season_number': 6, + 'episode_number': 1, + }, + 'skip': 'No longer exists', + }, { + # ytwnews://cavideo/ + 'url': 'https://tw.video.yahoo.com/movie-tw/單車天使-中文版é -092316541.html', + 'info_dict': { + 'id': 'ba133ff2-0793-3510-b636-59dfe9ff6cff', + 'ext': 'mp4', + 'title': '單車天使 - 中文版é ', + 'description': '中文版é ', + 'timestamp': 1476696196, + 'upload_date': '20161017', + }, + 'params': { + 'skip_download': True, + }, + }, { + # Contains both a Yahoo hosted video and multiple Youtube embeds + 'url': 'https://www.yahoo.com/entertainment/gwen-stefani-reveals-the-pop-hit-she-passed-on-assigns-it-to-her-voice-contestant-instead-033045672.html', + 'info_dict': { + 'id': '46c5d95a-528f-3d03-b732-732fcadd51de', + 'title': 'Gwen Stefani reveals the pop hit she passed on, assigns it to her \'Voice\' contestant instead', + 'description': 'Gwen decided not to record this hit herself, but she decided it was the perfect fit for Kyndall Inskeep.', + }, + 'playlist': [{ + 'info_dict': { + 'id': '966d4262-4fd1-3aaa-b45b-049ca6e38ba6', + 'ext': 'mp4', + 'title': 'Gwen Stefani reveals she turned down one of Sia\'s best songs', + 'description': 'On "The Voice" Tuesday, Gwen Stefani told Taylor Swift which Sia hit was almost hers.', + 'timestamp': 1572406500, + 'upload_date': '20191030', + }, + }, { + 'info_dict': { + 'id': '352CFDOQrKg', + 'ext': 'mp4', + 'title': 'Kyndal Inskeep "Performs the Hell Out of" Sia\'s "Elastic Heart" - The Voice Knockouts 2019', + 'description': 'md5:7fe8e3d5806f96002e55f190d1d94479', + 'uploader': 'The Voice', + 'uploader_id': 'NBCTheVoice', + 'upload_date': '20191029', + }, + }], + 'params': { + 'playlistend': 2, + }, + 'expected_warnings': ['HTTP Error 404', 'Ignoring subtitle tracks'], + }, { + 'url': 'https://malaysia.news.yahoo.com/video/bystanders-help-ontario-policeman-bust-190932818.html', + 'only_matching': True, + }, { + 'url': 'https://es-us.noticias.yahoo.com/es-la-puerta-irrompible-que-110539379.html', + 'only_matching': True, + }, { + 'url': 'https://www.yahoo.com/entertainment/v/longtime-cbs-news-60-minutes-032036500-cbs.html', + 'only_matching': True, + }] + + def _extract_yahoo_video(self, video_id, country): + video = self._download_json( + 'https://%s.yahoo.com/_td/api/resource/VideoService.videos;view=full;video_ids=["%s"]' % (country, video_id), + video_id, 'Downloading video JSON metadata')[0] + title = video['title'] + + if country == 'malaysia': + country = 'my' + + is_live = video.get('live_state') == 'live' + fmts = ('m3u8',) if is_live else ('webm', 'mp4') + + urls = [] + formats = [] + subtitles = {} + for fmt in fmts: + media_obj = self._download_json( + 'https://video-api.yql.yahoo.com/v1/video/sapi/streams/' + video_id, + video_id, 'Downloading %s JSON metadata' % fmt, + headers=self.geo_verification_headers(), query={ + 'format': fmt, + 'region': country.upper(), + })['query']['results']['mediaObj'][0] + msg = media_obj.get('status', {}).get('msg') + + for s in media_obj.get('streams', []): + host = 
s.get('host') + path = s.get('path') + if not host or not path: + continue + s_url = host + path + if s.get('format') == 'm3u8': + formats.extend(self._extract_m3u8_formats( + s_url, video_id, 'mp4', m3u8_id='hls', fatal=False)) + continue + tbr = int_or_none(s.get('bitrate')) + formats.append({ + 'url': s_url, + 'format_id': fmt + ('-%d' % tbr if tbr else ''), + 'width': int_or_none(s.get('width')), + 'height': int_or_none(s.get('height')), + 'tbr': tbr, + 'fps': int_or_none(s.get('framerate')), + }) + + for cc in media_obj.get('closedcaptions', []): + cc_url = cc.get('url') + if not cc_url or cc_url in urls: + continue + urls.append(cc_url) + subtitles.setdefault(cc.get('lang') or 'en-US', []).append({ + 'url': cc_url, + 'ext': mimetype2ext(cc.get('content_type')), + }) + + streaming_url = video.get('streaming_url') + if streaming_url and not is_live: + formats.extend(self._extract_m3u8_formats( + streaming_url, video_id, 'mp4', + 'm3u8_native', m3u8_id='hls', fatal=False)) + + if not formats and msg == 'geo restricted': + self.raise_geo_restricted(metadata_available=True) + + thumbnails = [] + for thumb in video.get('thumbnails', []): + thumb_url = thumb.get('url') + if not thumb_url: + continue + thumbnails.append({ + 'id': thumb.get('tag'), + 'url': thumb.get('url'), + 'width': int_or_none(thumb.get('width')), + 'height': int_or_none(thumb.get('height')), + }) + + series_info = video.get('series_info') or {} + + return { + 'id': video_id, + 'title': title, + 'formats': formats, + 'thumbnails': thumbnails, + 'description': clean_html(video.get('description')), + 'timestamp': parse_iso8601(video.get('publish_time')), + 'subtitles': subtitles, + 'duration': int_or_none(video.get('duration')), + 'view_count': int_or_none(video.get('view_count')), + 'is_live': is_live, + 'series': video.get('show_name'), + 'season_number': int_or_none(series_info.get('season_number')), + 'episode_number': int_or_none(series_info.get('episode_number')), + } + + def _real_extract(self, url): + url, country, display_id = self._match_valid_url(url).groups() + if not country: + country = 'us' + else: + country = country.split('-')[0] + + items = self._download_json( + 'https://%s.yahoo.com/caas/content/article' % country, display_id, + 'Downloading content JSON metadata', query={ + 'url': url + })['items'][0] + + item = items['data']['partnerData'] + if item.get('type') != 'video': + entries = [] + + cover = item.get('cover') or {} + if cover.get('type') == 'yvideo': + cover_url = cover.get('url') + if cover_url: + entries.append(self.url_result( + cover_url, 'Yahoo', cover.get('uuid'))) + + for e in (item.get('body') or []): + if e.get('type') == 'videoIframe': + iframe_url = e.get('url') + if iframe_url: + entries.append(self.url_result(iframe_url)) + + if item.get('type') == 'storywithleadvideo': + iframe_url = try_get(item, lambda x: x['meta']['player']['url']) + if iframe_url: + entries.append(self.url_result(iframe_url)) + else: + self.report_warning("Yahoo didn't provide an iframe url for this storywithleadvideo") + + if items.get('markup'): + entries.extend( + self.url_result(yt_url) for yt_url in YoutubeIE._extract_embed_urls(url, items['markup'])) + + return self.playlist_result( + entries, item.get('uuid'), + item.get('title'), item.get('summary')) + + info = self._extract_yahoo_video(item['uuid'], country) + info['display_id'] = display_id + return info + + +class YahooSearchIE(SearchInfoExtractor): + IE_DESC = 'Yahoo screen search' + _MAX_RESULTS = 1000 + IE_NAME = 'screen.yahoo:search' + 
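`YahooSearchIE` below follows the `SearchInfoExtractor` convention: `_SEARCH_KEY` turns into pseudo-URLs such as `yvsearch10:<query>`, and the `_search_results` generator that follows pages through the JSON API 30 results at a time until the page metadata reports the last item. A minimal sketch of that paging loop, with `fetch` standing in for the `_download_json` call:

```python
import itertools

def search_results(fetch):
    # fetch(offset) stands in for the _download_json call in _search_results;
    # it must return the same shape:
    # {'results': [{'rurl': ...}, ...], 'm': {'last': ..., 'total': ...}}
    for pagenum in itertools.count(0):
        info = fetch(pagenum * 30)
        yield from (result['rurl'] for result in info['results'])
        if info['m']['last'] >= info['m']['total'] - 1:
            break
```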
_SEARCH_KEY = 'yvsearch' + + def _search_results(self, query): + for pagenum in itertools.count(0): + result_url = 'http://video.search.yahoo.com/search/?p=%s&fr=screen&o=js&gs=0&b=%d' % (urllib.parse.quote_plus(query), pagenum * 30) + info = self._download_json(result_url, query, + note='Downloading results page ' + str(pagenum + 1)) + yield from (self.url_result(result['rurl']) for result in info['results']) + if info['m']['last'] >= info['m']['total'] - 1: + break + + +class YahooJapanNewsIE(InfoExtractor): + IE_NAME = 'yahoo:japannews' + IE_DESC = 'Yahoo! Japan News' + _VALID_URL = r'https?://news\.yahoo\.co\.jp/(?:articles|feature)/(?P<id>[a-zA-Z0-9]+)' + _GEO_COUNTRIES = ['JP'] + _TESTS = [{ + 'url': 'https://news.yahoo.co.jp/articles/a70fe3a064f1cfec937e2252c7fc6c1ba3201c0e', + 'info_dict': { + 'id': 'a70fe3a064f1cfec937e2252c7fc6c1ba3201c0e', + 'ext': 'mp4', + 'title': 'ã€ç‹¬è‡ªã€‘安å€å…ƒç·ç†ã€Œå›½è‘¬ã€ä¸­æ­¢æ±‚ã‚“脅迫メールâ€â€¦ã€Œå­ã©ã‚‚誘æ‹ã€â€œé€ä¿¡è€…â€ã‚’追跡', + 'description': 'md5:1c06974575f930f692d8696fbcfdc546', + 'thumbnail': r're:https://.+', + }, + 'params': { + 'skip_download': True, + }, + }, { + 'url': 'https://news.yahoo.co.jp/feature/1356', + 'only_matching': True + }] + + def _extract_formats(self, json_data, content_id): + formats = [] + + for vid in traverse_obj(json_data, ('ResultSet', 'Result', ..., 'VideoUrlSet', 'VideoUrl', ...)) or []: + delivery = vid.get('delivery') + url = url_or_none(vid.get('Url')) + if not delivery or not url: + continue + elif delivery == 'hls': + formats.extend( + self._extract_m3u8_formats( + url, content_id, 'mp4', 'm3u8_native', + m3u8_id='hls', fatal=False)) + else: + formats.append({ + 'url': url, + 'format_id': f'http-{vid.get("bitrate")}', + 'height': int_or_none(vid.get('height')), + 'width': int_or_none(vid.get('width')), + 'tbr': int_or_none(vid.get('bitrate')), + }) + self._remove_duplicate_formats(formats) + + return formats + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + preloaded_state = self._search_json(r'__PRELOADED_STATE__\s*=', webpage, 'preloaded state', video_id) + + content_id = traverse_obj( + preloaded_state, ('articleDetail', 'paragraphs', ..., 'objectItems', ..., 'video', 'vid'), + get_all=False, expected_type=int) + if content_id is None: + raise ExtractorError('This article does not contain a video', expected=True) + + HOST = 'news.yahoo.co.jp' + space_id = traverse_obj(preloaded_state, ('pageData', 'spaceId'), expected_type=str) + json_data = self._download_json( + f'https://feapi-yvpub.yahooapis.jp/v1/content/{content_id}', + video_id, query={ + 'appid': 'dj0zaiZpPVZMTVFJR0FwZWpiMyZzPWNvbnN1bWVyc2VjcmV0Jng9YjU-', + 'output': 'json', + 'domain': HOST, + 'ak': hashlib.md5('_'.join((space_id, HOST)).encode()).hexdigest() if space_id else '', + 'device_type': '1100', + }) + + title = ( + traverse_obj(preloaded_state, + ('articleDetail', 'headline'), ('pageData', 'pageParam', 'title'), + expected_type=str) + or self._html_search_meta(('og:title', 'twitter:title'), webpage, 'title', default=None) + or self._html_extract_title(webpage)) + description = ( + traverse_obj(preloaded_state, ('pageData', 'description'), expected_type=str) + or self._html_search_meta( + ('og:description', 'description', 'twitter:description'), + webpage, 'description', default=None)) + thumbnail = ( + traverse_obj(preloaded_state, ('pageData', 'ogpImage'), expected_type=str) + or self._og_search_thumbnail(webpage, default=None) + or 
self._html_search_meta('twitter:image', webpage, 'thumbnail', default=None)) + + return { + 'id': video_id, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'formats': self._extract_formats(json_data, video_id), + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/yandexdisk.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yandexdisk.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/yandexdisk.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/yandexdisk.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/yandexmusic.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yandexmusic.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/yandexmusic.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/yandexmusic.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/yandexvideo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yandexvideo.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/yandexvideo.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/yandexvideo.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/yapfiles.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yapfiles.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/yapfiles.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/yapfiles.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/yappy.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yappy.py new file mode 100644 index 0000000..7b3d0cb --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/yappy.py @@ -0,0 +1,127 @@ +from .common import InfoExtractor +from ..utils import ( + OnDemandPagedList, + int_or_none, + traverse_obj, + unified_timestamp, + url_or_none, +) + + +class YappyIE(InfoExtractor): + _VALID_URL = r'https?://yappy\.media/video/(?P<id>\w+)' + _TESTS = [{ + 'url': 'https://yappy.media/video/47fea6d8586f48d1a0cf96a7342aabd2', + 'info_dict': { + 'id': '47fea6d8586f48d1a0cf96a7342aabd2', + 'ext': 'mp4', + 'title': 'Куда нажимать? Как снимать? Смотри видос и погнали!🤘🏻', + 'timestamp': 1661893200, + 'description': 'Куда нажимать? Как снимать? Смотри видос и погнали!🤘🏻', + 'thumbnail': 'https://cdn-st.ritm.media/static/pic/thumbnails/0c7c4d73388f47848acaf540d2e2bb8c-thumbnail.jpg', + 'upload_date': '20220830', + 'view_count': int, + 'like_count': int, + 'uploader_id': '59a0c8c485e5410b9c43474bf4c6a373', + 'categories': ['Образование и наука', 'Лайфхак', 'Технологии', 'Арт/искусство'], + 'repost_count': int, + 'uploader': 'YAPPY', + } + }, { + 'url': 'https://yappy.media/video/3862451954ad4bd58ae2ccefddb0bd33', + 'info_dict': { + 'id': '3862451954ad4bd58ae2ccefddb0bd33', + 'ext': 'mp4', + 'title': 'Опиши свой характер 3 словами🙃\n#психология #дружба #отношения', + 'timestamp': 1674726985, + 'like_count': int, + 'description': 'Опиши свой характер 3 словами🙃\n#психология #дружба #отношения', + 'uploader_id': '6793ee3581974a3586fc01e157de6c99', + 'view_count': int, + 'repost_count': int, + 'uploader': 'LENA SHTURMAN', + 'upload_date': '20230126', + 'thumbnail': 'https://cdn-st.ritm.media/static/pic/user_thumbnails/6e76bb4bbad640b6/9ec84c115b2b1967/1674716171.jpg', + } + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + webpage = self._download_webpage(url, video_id) + json_ld = self._search_json_ld(webpage, video_id) + nextjs_data = self._search_nextjs_data(webpage, video_id) + + media_data = ( + traverse_obj( + nextjs_data, ('props', 'pageProps', ('data', 'OpenGraphParameters')), get_all=False) + or self._download_json(f'https://yappy.media/api/video/{video_id}', video_id)) + + media_url = traverse_obj(media_data, ('link', {url_or_none})) or '' + has_watermark = media_url.endswith('-wm.mp4') + + formats = [{ + 'url': media_url, + 'ext': 'mp4', + 'format_note': 'Watermarked' if has_watermark else None, + 'preference': -10 if has_watermark else None + }] if media_url else [] + + if has_watermark: + formats.append({ + 'url': media_url.replace('-wm.mp4', '.mp4'), + 'ext': 'mp4' + }) + + audio_link = traverse_obj(media_data, ('audio', 'link')) + if audio_link: + formats.append({ + 'url': audio_link, + 'ext': 'mp3', + 'acodec': 'mp3', + 'vcodec': 'none' + }) + + return { + 'id': video_id, + 'title': (json_ld.get('description') or self._html_search_meta(['og:title'], webpage) + or self._html_extract_title(webpage)), + 'formats': formats, + 'thumbnail': (media_data.get('thumbnail') + or self._html_search_meta(['og:image', 'og:image:secure_url'], webpage)), + 'description': (media_data.get('description') or json_ld.get('description') + or self._html_search_meta(['description', 'og:description'], webpage)), + 'timestamp': unified_timestamp(media_data.get('publishedAt') or json_ld.get('timestamp')), + 'view_count': int_or_none(media_data.get('viewsCount') or json_ld.get('view_count')), + 'like_count': int_or_none(media_data.get('likesCount')), + 'uploader': traverse_obj(media_data, ('creator', 'firstName')), + 'uploader_id': traverse_obj(media_data, ('creator', ('uuid', 'nickname')), get_all=False), + 'categories': traverse_obj(media_data, ('categories', ..., 'name')) or None, + 'repost_count': int_or_none(media_data.get('sharingCount')) + } + + +class YappyProfileIE(InfoExtractor): + _VALID_URL = r'https?://yappy\.media/profile/(?P<id>\w+)' + _TESTS = [{ + 'url': 'https://yappy.media/profile/59a0c8c485e5410b9c43474bf4c6a373', + 'info_dict': { + 'id': '59a0c8c485e5410b9c43474bf4c6a373', + }, + 'playlist_mincount': 527, + }] + + def _real_extract(self, url): + profile_id = self._match_id(url) + + def fetch_page(page_num): + page_num += 1 + videos = self._download_json( +
f'https://yappy.media/api/video/list/{profile_id}?page={page_num}', + profile_id, f'Downloading profile page {page_num} JSON') + + for video in traverse_obj(videos, ('results', lambda _, v: v['uuid'])): + yield self.url_result( + f'https://yappy.media/video/{video["uuid"]}', YappyIE, + video['uuid'], video.get('description')) + + return self.playlist_result(OnDemandPagedList(fetch_page, 15), profile_id) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/yesjapan.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yesjapan.py new file mode 100644 index 0000000..94e4166 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/yesjapan.py @@ -0,0 +1,56 @@ +from .common import InfoExtractor +from ..networking import HEADRequest +from ..utils import get_element_by_attribute, parse_iso8601 + + +class YesJapanIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?yesjapan\.com/video/(?P<slug>[A-Za-z0-9\-]*)_(?P<id>[A-Za-z0-9]+)\.html' + _TEST = { + 'url': 'http://www.yesjapan.com/video/japanese-in-5-20-wa-and-ga-particle-usages_726497834.html', + 'md5': 'f0be416314e5be21a12b499b330c21cf', + 'info_dict': { + 'id': '726497834', + 'title': 'Japanese in 5! #20 - WA And GA Particle Usages', + 'description': 'This should clear up some issues most students of Japanese encounter with WA and GA....', + 'ext': 'mp4', + 'timestamp': 1416391590, + 'upload_date': '20141119', + 'thumbnail': r're:^https?://.*\.jpg$', + } + } + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage(url, video_id) + title = self._og_search_title(webpage) + video_url = self._og_search_video_url(webpage) + description = self._og_search_description(webpage) + thumbnail = self._og_search_thumbnail(webpage) + + timestamp = None + submit_info = get_element_by_attribute('class', 'pm-submit-data', webpage) + if submit_info: + timestamp = parse_iso8601(self._search_regex( + r'datetime="([^"]+)"', submit_info, 'upload date', fatal=False, default=None)) + + # attempt to resolve the final URL in order to get a proper extension + redirect_req = HEADRequest(video_url) + req = self._request_webpage( + redirect_req, video_id, note='Resolving final URL', errnote='Could not resolve final URL', fatal=False) + if req: + video_url = req.url + + formats = [{ + 'format_id': 'sd', + 'url': video_url, + }] + + return { + 'id': video_id, + 'title': title, + 'formats': formats, + 'description': description, + 'timestamp': timestamp, + 'thumbnail': thumbnail, + } diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/yinyuetai.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yinyuetai.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/yinyuetai.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/yinyuetai.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/yle_areena.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yle_areena.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/yle_areena.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/yle_areena.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/ynet.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/ynet.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/ynet.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/ynet.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/youjizz.py 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/youjizz.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/youjizz.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/youjizz.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/youku.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/youku.py new file mode 100644 index 0000000..e351765 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/youku.py @@ -0,0 +1,290 @@ +import random +import re +import string +import time + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + clean_html, + get_element_by_class, + js_to_json, + str_or_none, + strip_jsonp, +) + + +class YoukuIE(InfoExtractor): + IE_NAME = 'youku' + IE_DESC = '优酷' + _VALID_URL = r'''(?x) + (?: + https?://( + (?:v|play(?:er)?)\.(?:youku|tudou)\.com/(?:v_show/id_|player\.php/sid/)| + video\.tudou\.com/v/)| + youku:) + (?P<id>[A-Za-z0-9]+)(?:\.html|/v\.swf|) + ''' + + _TESTS = [{ + 'url': 'http://player.youku.com/player.php/sid/XNDgyMDQ2NTQw/v.swf', + 'only_matching': True, + }, { + 'url': 'http://v.youku.com/v_show/id_XNjA1NzA2Njgw.html', + 'note': 'Video protected with password', + 'info_dict': { + 'id': 'XNjA1NzA2Njgw', + 'ext': 'mp4', + 'title': 'é‚¢ç¾©ç”°å¤æ—¦è®²åº§ä¹‹æƒ³è±¡ä¸­çš„胡人—从“左衽孔å­â€è¯´èµ·', + 'duration': 7264.5, + 'thumbnail': r're:^https?://.*', + 'uploader': 'FoxJin1006', + 'uploader_id': '322014285', + 'uploader_url': 'http://i.youku.com/u/UMTI4ODA1NzE0MA==', + 'tags': list, + 'skip': '404', + }, + 'params': { + 'videopassword': '100600', + }, + }, { + # /play/get.json contains streams with "channel_type":"tail" + 'url': 'http://v.youku.com/v_show/id_XOTUxMzg4NDMy.html', + 'info_dict': { + 'id': 'XOTUxMzg4NDMy', + 'ext': 'mp4', + 'title': '我的世界☆明月庄主☆车震猎æ€â˜†æ€äººè‰ºæœ¯Minecraft', + 'duration': 702.08, + 'thumbnail': r're:^https?://.*', + 'uploader': '明月庄主moon', + 'uploader_id': '38465621', + 'uploader_url': 'https://www.youku.com/profile/index/?uid=UMTUzODYyNDg0', + 'tags': list, + }, + }, { + 'url': 'https://v.youku.com/v_show/id_XNTA2NTA0MjA1Mg==.html', + 'info_dict': { + 'id': 'XNTA2NTA0MjA1Mg', + 'ext': 'mp4', + 'title': 'Minecraft我的世界:建造超大巨型航空飞机,èœé¸Ÿvs高手vs黑客', + 'duration': 542.13, + 'thumbnail': r're:^https?://.*', + 'uploader': '波哥游æˆè§£è¯´', + 'uploader_id': '156688084', + 'uploader_url': 'https://www.youku.com/profile/index/?uid=UNjI2NzUyMzM2', + 'tags': list, + }, + }, { + 'url': 'https://v.youku.com/v_show/id_XNTE1MzczOTg4MA==.html', + 'info_dict': { + 'id': 'XNTE1MzczOTg4MA', + 'ext': 'mp4', + 'title': '国产超A特工片', + 'duration': 362.97, + 'thumbnail': r're:^https?://.*', + 'uploader': '陈晓娟说历å²', + 'uploader_id': '1640913339', + 'uploader_url': 'https://www.youku.com/profile/index/?uid=UNjU2MzY1MzM1Ng==', + 'tags': list, + }, + }, { + 'url': 'https://play.tudou.com/v_show/id_XNjAxNjI2OTU3Ng==.html?', + 'info_dict': { + 'id': 'XNjAxNjI2OTU3Ng', + 'ext': 'mp4', + 'title': '阿斯塔æ„识到哈里æ€äº†äººï¼Œè‡ªå·±è¢«éª—了', + 'thumbnail': 'https://m.ykimg.com/0541010164F732752794D4D7B70331D1', + 'uploader_id': '88758207', + 'tags': [], + 'uploader_url': 'https://www.youku.com/profile/index/?uid=UMzU1MDMyODI4', + 'uploader': '英美剧场', + 'duration': 72.91, + }, + }] + + @staticmethod + def get_ysuid(): + return '%d%s' % (int(time.time()), ''.join( + random.choices(string.ascii_letters, k=3))) + + def get_format_name(self, fm): + _dict = { + '3gp': 'h6', + '3gphd': 'h5', + 'flv': 'h4', + 'flvhd': 'h4', + 'mp4': 'h3', + 'mp4hd': 'h3', + 'mp4hd2': 'h4', + 
'mp4hd3': 'h4', + 'hd2': 'h2', + 'hd3': 'h1', + } + return _dict.get(fm) + + def _real_extract(self, url): + video_id = self._match_id(url) + + self._set_cookie('youku.com', '__ysuid', self.get_ysuid()) + self._set_cookie('youku.com', 'xreferrer', 'http://www.youku.com') + + _, urlh = self._download_webpage_handle( + 'https://log.mmstat.com/eg.js', video_id, 'Retrieving cna info') + # The etag header is '"foobar"'; let's remove the double quotes + cna = urlh.headers['etag'][1:-1] + + # request basic data + basic_data_params = { + 'vid': video_id, + 'ccode': '0524', + 'client_ip': '192.168.1.1', + 'utid': cna, + 'client_ts': time.time() / 1000, + } + + video_password = self.get_param('videopassword') + if video_password: + basic_data_params['password'] = video_password + + headers = { + 'Referer': url, + } + headers.update(self.geo_verification_headers()) + data = self._download_json( + 'https://ups.youku.com/ups/get.json', video_id, + 'Downloading JSON metadata', + query=basic_data_params, headers=headers)['data'] + + error = data.get('error') + if error: + error_note = error.get('note') + if error_note is not None and '因版æƒåŽŸå› æ— æ³•è§‚çœ‹æ­¤è§†é¢‘' in error_note: + raise ExtractorError( + 'Youku said: Sorry, this video is available in China only', expected=True) + elif error_note and '该视频被设为ç§å¯†' in error_note: + raise ExtractorError( + 'Youku said: Sorry, this video is private', expected=True) + else: + msg = 'Youku server reported error %i' % error.get('code') + if error_note is not None: + msg += ': ' + clean_html(error_note) + raise ExtractorError(msg) + + # get video title + video_data = data['video'] + title = video_data['title'] + + formats = [{ + 'url': stream['m3u8_url'], + 'format_id': self.get_format_name(stream.get('stream_type')), + 'ext': 'mp4', + 'protocol': 'm3u8_native', + 'filesize': int(stream.get('size')), + 'width': stream.get('width'), + 'height': stream.get('height'), + } for stream in data['stream'] if stream.get('channel_type') != 'tail'] + + return { + 'id': video_id, + 'title': title, + 'formats': formats, + 'duration': video_data.get('seconds'), + 'thumbnail': video_data.get('logo'), + 'uploader': video_data.get('username'), + 'uploader_id': str_or_none(video_data.get('userid')), + 'uploader_url': data.get('uploader', {}).get('homepage'), + 'tags': video_data.get('tags'), + } + + +class YoukuShowIE(InfoExtractor): + _VALID_URL = r'https?://list\.youku\.com/show/id_(?P<id>[0-9a-z]+)\.html' + IE_NAME = 'youku:show' + + _TESTS = [{ + 'url': 'http://list.youku.com/show/id_zc7c670be07ff11e48b3f.html', + 'info_dict': { + 'id': 'zc7c670be07ff11e48b3f', + 'title': '花åƒéª¨ DVD版', + 'description': 'md5:a1ae6f5618571bbeb5c9821f9c81b558', + }, + 'playlist_count': 50, + }, { + # Episode number not starting from 1 + 'url': 'http://list.youku.com/show/id_zefbfbd70efbfbd780bef.html', + 'info_dict': { + 'id': 'zefbfbd70efbfbd780bef', + 'title': '超级飞侠3', + 'description': 'md5:275715156abebe5ccc2a1992e9d56b98', + }, + 'playlist_count': 24, + }, { + # Ongoing playlist. The initial page is the last one + 'url': 'http://list.youku.com/show/id_za7c275ecd7b411e1a19e.html', + 'only_matching': True, + }, { + # No data-id value. + 'url': 'http://list.youku.com/show/id_zefbfbd61237fefbfbdef.html', + 'only_matching': True, + }, { + # Wrong number of reload_id. 
+ 'url': 'http://list.youku.com/show/id_z20eb4acaf5c211e3b2ad.html', + 'only_matching': True, + }] + + def _extract_entries(self, playlist_data_url, show_id, note, query): + query['callback'] = 'cb' + playlist_data = self._download_json( + playlist_data_url, show_id, query=query, note=note, + transform_source=lambda s: js_to_json(strip_jsonp(s))).get('html') + if playlist_data is None: + return [None, None] + drama_list = (get_element_by_class('p-drama-grid', playlist_data) + or get_element_by_class('p-drama-half-row', playlist_data)) + if drama_list is None: + raise ExtractorError('No episodes found') + video_urls = re.findall(r'<a[^>]+href="([^"]+)"', drama_list) + return playlist_data, [ + self.url_result(self._proto_relative_url(video_url, 'http:'), YoukuIE.ie_key()) + for video_url in video_urls] + + def _real_extract(self, url): + show_id = self._match_id(url) + webpage = self._download_webpage(url, show_id) + + entries = [] + page_config = self._parse_json(self._search_regex( + r'var\s+PageConfig\s*=\s*({.+});', webpage, 'page config'), + show_id, transform_source=js_to_json) + first_page, initial_entries = self._extract_entries( + 'http://list.youku.com/show/module', show_id, + note='Downloading initial playlist data page', + query={ + 'id': page_config['showid'], + 'tab': 'showInfo', + }) + first_page_reload_id = self._html_search_regex( + r'<div[^>]+id="(reload_\d+)', first_page, 'first page reload id') + # The first reload_id has the same items as first_page + reload_ids = re.findall('<li[^>]+data-id="([^"]+)">', first_page) + entries.extend(initial_entries) + for idx, reload_id in enumerate(reload_ids): + if reload_id == first_page_reload_id: + continue + _, new_entries = self._extract_entries( + 'http://list.youku.com/show/episode', show_id, + note='Downloading playlist data page %d' % (idx + 1), + query={ + 'id': page_config['showid'], + 'stage': reload_id, + }) + if new_entries is not None: + entries.extend(new_entries) + desc = self._html_search_meta('description', webpage, fatal=False) + playlist_title = desc.split(',')[0] if desc else None + detail_li = get_element_by_class('p-intro', webpage) + playlist_description = get_element_by_class( + 'intro-more', detail_li) if detail_li else None + + return self.playlist_result( + entries, show_id, playlist_title, playlist_description) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/younow.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/younow.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/younow.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/younow.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/youporn.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/youporn.py new file mode 100644 index 0000000..6ee0abc --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/youporn.py @@ -0,0 +1,198 @@ +import re + +from .common import InfoExtractor +from ..utils import ( + extract_attributes, + int_or_none, + merge_dicts, + str_to_int, + traverse_obj, + unified_strdate, + url_or_none, +) + + +class YouPornIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?youporn\.com/(?:watch|embed)/(?P<id>\d+)(?:/(?P<display_id>[^/?#&]+))?' 
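In `YouPornIE._real_extract` just below, formats come from the `media_definitions` JSON endpoint, and the `get_format_data` helper filters that list with a `traverse_obj` predicate: keep entries whose `format` matches and whose `videoUrl` is a valid URL. A small illustration with made-up stand-in data:

```python
from yt_dlp.utils import traverse_obj, url_or_none

def get_format_data(data, f):  # same helper as defined in _real_extract below
    return traverse_obj(data, lambda _, v: v['format'] == f and url_or_none(v['videoUrl']))

definitions = [  # made-up stand-ins for the media_definitions response
    {'format': 'hls', 'videoUrl': 'https://example.com/master.m3u8'},
    {'format': 'mp4', 'videoUrl': 'https://example.com/720p_1500k_1.mp4'},
    {'format': 'mp4', 'videoUrl': ''},  # dropped: url_or_none('') is falsy
]
assert len(get_format_data(definitions, 'mp4')) == 1
```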
+ _EMBED_REGEX = [r'<iframe[^>]+\bsrc=["\'](?P<url>(?:https?:)?//(?:www\.)?youporn\.com/embed/\d+)'] + _TESTS = [{ + 'url': 'http://www.youporn.com/watch/505835/sex-ed-is-it-safe-to-masturbate-daily/', + 'md5': '3744d24c50438cf5b6f6d59feb5055c2', + 'info_dict': { + 'id': '505835', + 'display_id': 'sex-ed-is-it-safe-to-masturbate-daily', + 'ext': 'mp4', + 'title': 'Sex Ed: Is It Safe To Masturbate Daily?', + 'description': 'Love & Sex Answers: http://bit.ly/DanAndJenn -- Is It Unhealthy To Masturbate Daily?', + 'thumbnail': r're:^https?://.*\.jpg$', + 'duration': 210, + 'uploader': 'Ask Dan And Jennifer', + 'upload_date': '20101217', + 'average_rating': int, + 'view_count': int, + 'categories': list, + 'tags': list, + 'age_limit': 18, + }, + 'skip': 'This video has been disabled', + }, { + # Unknown uploader + 'url': 'http://www.youporn.com/watch/561726/big-tits-awesome-brunette-on-amazing-webcam-show/?from=related3&al=2&from_id=561726&pos=4', + 'info_dict': { + 'id': '561726', + 'display_id': 'big-tits-awesome-brunette-on-amazing-webcam-show', + 'ext': 'mp4', + 'title': 'Big Tits Awesome Brunette On amazing webcam show', + 'description': 'http://sweetlivegirls.com Big Tits Awesome Brunette On amazing webcam show.mp4', + 'thumbnail': r're:^https?://.*\.jpg$', + 'uploader': 'Unknown', + 'upload_date': '20110418', + 'average_rating': int, + 'view_count': int, + 'categories': list, + 'tags': list, + 'age_limit': 18, + }, + 'params': { + 'skip_download': True, + }, + 'skip': '404', + }, { + 'url': 'https://www.youporn.com/embed/505835/sex-ed-is-it-safe-to-masturbate-daily/', + 'only_matching': True, + }, { + 'url': 'http://www.youporn.com/watch/505835', + 'only_matching': True, + }, { + 'url': 'https://www.youporn.com/watch/13922959/femdom-principal/', + 'only_matching': True, + }, { + 'url': 'https://www.youporn.com/watch/16290308/tinderspecial-trailer1/', + 'info_dict': { + 'id': '16290308', + 'age_limit': 18, + 'categories': [], + 'description': 'md5:00ea70f642f431c379763c17c2f396bc', + 'display_id': 'tinderspecial-trailer1', + 'duration': 298.0, + 'ext': 'mp4', + 'upload_date': '20201123', + 'uploader': 'Ersties', + 'tags': [], + 'thumbnail': 'https://fi1.ypncdn.com/202011/23/16290308/original/8/tinderspecial-trailer1-8(m=eaAaaEPbaaaa).jpg', + 'timestamp': 1606089600, + 'title': 'Tinder In Real Life', + 'view_count': int, + } + }] + + def _real_extract(self, url): + video_id, display_id = self._match_valid_url(url).group('id', 'display_id') + definitions = self._download_json( + f'https://www.youporn.com/api/video/media_definitions/{video_id}/', display_id or video_id) + + def get_format_data(data, f): + return traverse_obj(data, lambda _, v: v['format'] == f and url_or_none(v['videoUrl'])) + + formats = [] + # Try to extract only the actual master m3u8 first, avoiding the duplicate single resolution "master" m3u8s + for hls_url in traverse_obj(get_format_data(definitions, 'hls'), ( + lambda _, v: not isinstance(v['defaultQuality'], bool), 'videoUrl'), (..., 'videoUrl')): + formats.extend(self._extract_m3u8_formats(hls_url, video_id, 'mp4', fatal=False, m3u8_id='hls')) + + for definition in get_format_data(definitions, 'mp4'): + f = traverse_obj(definition, { + 'url': 'videoUrl', + 'filesize': ('videoSize', {int_or_none}) + }) + height = int_or_none(definition.get('quality')) + # Video URL's path looks like this: + # /201012/17/505835/720p_1500k_505835/YouPorn%20-%20Sex%20Ed%20Is%20It%20Safe%20To%20Masturbate%20Daily.mp4 + # 
/201012/17/505835/vl_240p_240k_505835/YouPorn%20-%20Sex%20Ed%20Is%20It%20Safe%20To%20Masturbate%20Daily.mp4 + # /videos/201703/11/109285532/1080P_4000K_109285532.mp4 + # We will benefit from it by extracting some metadata + mobj = re.search(r'(?P<height>\d{3,4})[pP]_(?P<bitrate>\d+)[kK]_\d+', definition['videoUrl']) + if mobj: + if not height: + height = int(mobj.group('height')) + bitrate = int(mobj.group('bitrate')) + f.update({ + 'format_id': '%dp-%dk' % (height, bitrate), + 'tbr': bitrate, + }) + f['height'] = height + formats.append(f) + + webpage = self._download_webpage( + 'http://www.youporn.com/watch/%s' % video_id, display_id, + headers={'Cookie': 'age_verified=1'}) + + title = self._html_search_regex( + r'(?s)<div[^>]+class=["\']watchVideoTitle[^>]+>(.+?)</div>', + webpage, 'title', default=None) or self._og_search_title( + webpage, default=None) or self._html_search_meta( + 'title', webpage, fatal=True) + + description = self._html_search_regex( + r'(?s)<div[^>]+\bid=["\']description["\'][^>]*>(.+?)</div>', + webpage, 'description', + default=None) or self._og_search_description( + webpage, default=None) + thumbnail = self._search_regex( + r'(?:imageurl\s*=|poster\s*:)\s*(["\'])(?P<thumbnail>.+?)\1', + webpage, 'thumbnail', fatal=False, group='thumbnail') + duration = int_or_none(self._html_search_meta( + 'video:duration', webpage, 'duration', fatal=False)) + + uploader = self._html_search_regex( + r'(?s)<div[^>]+class=["\']submitByLink["\'][^>]*>(.+?)</div>', + webpage, 'uploader', fatal=False) + upload_date = unified_strdate(self._html_search_regex( + (r'UPLOADED:\s*<span>([^<]+)', + r'Date\s+[Aa]dded:\s*<span>([^<]+)', + r'''(?s)<div[^>]+class=["']videoInfo(?:Date|Time)\b[^>]*>(.+?)</div>''', + r'(?s)<label\b[^>]*>Uploaded[^<]*</label>\s*<span\b[^>]*>(.+?)</span>'), + webpage, 'upload date', fatal=False)) + + age_limit = self._rta_search(webpage) + + view_count = None + views = self._search_regex( + r'(<div[^>]+\bclass=["\']js_videoInfoViews["\']>)', webpage, + 'views', default=None) + if views: + view_count = str_to_int(extract_attributes(views).get('data-value')) + comment_count = str_to_int(self._search_regex( + r'>All [Cc]omments? 
\(([\d,.]+)\)', + webpage, 'comment count', default=None)) + + def extract_tag_box(regex, title): + tag_box = self._search_regex(regex, webpage, title, default=None) + if not tag_box: + return [] + return re.findall(r'<a[^>]+href=[^>]+>([^<]+)', tag_box) + + categories = extract_tag_box( + r'(?s)Categories:.*?</[^>]+>(.+?)</div>', 'categories') + tags = extract_tag_box( + r'(?s)Tags:.*?</div>\s*<div[^>]+class=["\']tagBoxContent["\'][^>]*>(.+?)</div>', + 'tags') + + data = self._search_json_ld(webpage, video_id, expected_type='VideoObject', fatal=False) + data.pop('url', None) + return merge_dicts(data, { + 'id': video_id, + 'display_id': display_id, + 'title': title, + 'description': description, + 'thumbnail': thumbnail, + 'duration': duration, + 'uploader': uploader, + 'upload_date': upload_date, + 'view_count': view_count, + 'comment_count': comment_count, + 'categories': categories, + 'tags': tags, + 'age_limit': age_limit, + 'formats': formats, + }) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/yourporn.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yourporn.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/yourporn.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/yourporn.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/yourupload.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/yourupload.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/yourupload.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/yourupload.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/youtube.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/youtube.py new file mode 100644 index 0000000..ac28ed7 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/youtube.py @@ -0,0 +1,7346 @@ +import base64 +import calendar +import collections +import copy +import datetime +import enum +import hashlib +import itertools +import json +import math +import os.path +import random +import re +import sys +import threading +import time +import traceback +import urllib.parse + +from .common import InfoExtractor, SearchInfoExtractor +from .openload import PhantomJSwrapper +from ..compat import functools +from ..jsinterp import JSInterpreter +from ..networking.exceptions import HTTPError, network_exceptions +from ..utils import ( + NO_DEFAULT, + ExtractorError, + LazyList, + UserNotLive, + bug_reports_message, + classproperty, + clean_html, + datetime_from_str, + dict_get, + filter_dict, + float_or_none, + format_field, + get_first, + int_or_none, + is_html, + join_nonempty, + js_to_json, + mimetype2ext, + orderedSet, + parse_codecs, + parse_count, + parse_duration, + parse_iso8601, + parse_qs, + qualities, + remove_start, + smuggle_url, + str_or_none, + str_to_int, + strftime_or_none, + traverse_obj, + try_get, + unescapeHTML, + unified_strdate, + unified_timestamp, + unsmuggle_url, + update_url_query, + url_or_none, + urljoin, + variadic, +) + +STREAMING_DATA_CLIENT_NAME = '__yt_dlp_client' +# any clients starting with _ cannot be explicitly requested by the user +INNERTUBE_CLIENTS = { + 'web': { + 'INNERTUBE_API_KEY': 'AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'WEB', + 'clientVersion': '2.20220801.00.00', + } + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 1 + }, + 'web_embedded': { + 'INNERTUBE_API_KEY': 'AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8', + 'INNERTUBE_CONTEXT': { + 'client': { + 
'clientName': 'WEB_EMBEDDED_PLAYER', + 'clientVersion': '1.20220731.00.00', + }, + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 56 + }, + 'web_music': { + 'INNERTUBE_API_KEY': 'AIzaSyC9XL3ZjWddXya6X74dJoCTL-WEYFDNX30', + 'INNERTUBE_HOST': 'music.youtube.com', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'WEB_REMIX', + 'clientVersion': '1.20220727.01.00', + } + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 67, + }, + 'web_creator': { + 'INNERTUBE_API_KEY': 'AIzaSyBUPetSUmoZL-OhlxA7wSac5XinrygCqMo', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'WEB_CREATOR', + 'clientVersion': '1.20220726.00.00', + } + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 62, + }, + 'android': { + 'INNERTUBE_API_KEY': 'AIzaSyA8eiZmM1FaDVjRy-df2KTyQ_vz_yYM39w', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'ANDROID', + 'clientVersion': '17.31.35', + 'androidSdkVersion': 30, + 'userAgent': 'com.google.android.youtube/17.31.35 (Linux; U; Android 11) gzip' + } + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 3, + 'REQUIRE_JS_PLAYER': False + }, + 'android_embedded': { + 'INNERTUBE_API_KEY': 'AIzaSyCjc_pVEDi4qsv5MtC2dMXzpIaDoRFLsxw', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'ANDROID_EMBEDDED_PLAYER', + 'clientVersion': '17.31.35', + 'androidSdkVersion': 30, + 'userAgent': 'com.google.android.youtube/17.31.35 (Linux; U; Android 11) gzip' + }, + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 55, + 'REQUIRE_JS_PLAYER': False + }, + 'android_music': { + 'INNERTUBE_API_KEY': 'AIzaSyAOghZGza2MQSZkY_zfZ370N-PUdXEo8AI', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'ANDROID_MUSIC', + 'clientVersion': '5.16.51', + 'androidSdkVersion': 30, + 'userAgent': 'com.google.android.apps.youtube.music/5.16.51 (Linux; U; Android 11) gzip' + } + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 21, + 'REQUIRE_JS_PLAYER': False + }, + 'android_creator': { + 'INNERTUBE_API_KEY': 'AIzaSyD_qjV8zaaUMehtLkrKFgVeSX_Iqbtyws8', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'ANDROID_CREATOR', + 'clientVersion': '22.30.100', + 'androidSdkVersion': 30, + 'userAgent': 'com.google.android.apps.youtube.creator/22.30.100 (Linux; U; Android 11) gzip' + }, + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 14, + 'REQUIRE_JS_PLAYER': False + }, + # iOS clients have HLS live streams. Setting device model to get 60fps formats. 
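As the comment below notes, the iOS client entries matter because only they expose HLS live streams, and setting `deviceModel` unlocks 60fps formats. For illustration, a hand-rolled sketch of the kind of `youtubei/v1/player` request these client contexts end up in (the extractor itself goes through its own request helpers rather than raw urllib; the video ID here is an arbitrary example):

```python
import json
import urllib.request

context = {'client': {  # trimmed copy of the 'ios' client context defined below
    'clientName': 'IOS', 'clientVersion': '17.33.2',
    'deviceModel': 'iPhone14,3', 'hl': 'en',
}}
req = urllib.request.Request(
    'https://www.youtube.com/youtubei/v1/player?key=AIzaSyB-63vPrdThhKuerbB2N_l7Kwwcxj6yUAc',
    data=json.dumps({'context': context, 'videoId': 'dQw4w9WgXcQ'}).encode(),
    headers={'Content-Type': 'application/json'})
# urllib.request.urlopen(req) should return player JSON containing streamingData
```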
+ # See: https://github.com/TeamNewPipe/NewPipeExtractor/issues/680#issuecomment-1002724558 + 'ios': { + 'INNERTUBE_API_KEY': 'AIzaSyB-63vPrdThhKuerbB2N_l7Kwwcxj6yUAc', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'IOS', + 'clientVersion': '17.33.2', + 'deviceModel': 'iPhone14,3', + 'userAgent': 'com.google.ios.youtube/17.33.2 (iPhone14,3; U; CPU iOS 15_6 like Mac OS X)' + } + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 5, + 'REQUIRE_JS_PLAYER': False + }, + 'ios_embedded': { + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'IOS_MESSAGES_EXTENSION', + 'clientVersion': '17.33.2', + 'deviceModel': 'iPhone14,3', + 'userAgent': 'com.google.ios.youtube/17.33.2 (iPhone14,3; U; CPU iOS 15_6 like Mac OS X)' + }, + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 66, + 'REQUIRE_JS_PLAYER': False + }, + 'ios_music': { + 'INNERTUBE_API_KEY': 'AIzaSyBAETezhkwP0ZWA02RsqT1zu78Fpt0bC_s', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'IOS_MUSIC', + 'clientVersion': '5.21', + 'deviceModel': 'iPhone14,3', + 'userAgent': 'com.google.ios.youtubemusic/5.21 (iPhone14,3; U; CPU iOS 15_6 like Mac OS X)' + }, + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 26, + 'REQUIRE_JS_PLAYER': False + }, + 'ios_creator': { + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'IOS_CREATOR', + 'clientVersion': '22.33.101', + 'deviceModel': 'iPhone14,3', + 'userAgent': 'com.google.ios.ytcreator/22.33.101 (iPhone14,3; U; CPU iOS 15_6 like Mac OS X)' + }, + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 15, + 'REQUIRE_JS_PLAYER': False + }, + # mweb has 'ultralow' formats + # See: https://github.com/yt-dlp/yt-dlp/pull/557 + 'mweb': { + 'INNERTUBE_API_KEY': 'AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'MWEB', + 'clientVersion': '2.20220801.00.00', + } + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 2 + }, + # This client can access age restricted videos (unless the uploader has disabled the 'allow embedding' option) + # See: https://github.com/zerodytrash/YouTube-Internal-Clients + 'tv_embedded': { + 'INNERTUBE_API_KEY': 'AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8', + 'INNERTUBE_CONTEXT': { + 'client': { + 'clientName': 'TVHTML5_SIMPLY_EMBEDDED_PLAYER', + 'clientVersion': '2.0', + }, + }, + 'INNERTUBE_CONTEXT_CLIENT_NAME': 85 + }, +} + + +def _split_innertube_client(client_name): + variant, *base = client_name.rsplit('.', 1) + if base: + return variant, base[0], variant + base, *variant = client_name.split('_', 1) + return client_name, base, variant[0] if variant else None + + +def short_client_name(client_name): + main, *parts = _split_innertube_client(client_name)[0].replace('embedscreen', 'e_s').split('_') + return join_nonempty(main[:4], ''.join(x[0] for x in parts)).upper() + + +def build_innertube_clients(): + THIRD_PARTY = { + 'embedUrl': 'https://www.youtube.com/', # Can be any valid URL + } + BASE_CLIENTS = ('ios', 'android', 'web', 'tv', 'mweb') + priority = qualities(BASE_CLIENTS[::-1]) + + for client, ytcfg in tuple(INNERTUBE_CLIENTS.items()): + ytcfg.setdefault('INNERTUBE_API_KEY', 'AIzaSyDCU8hByM-4DrUqRUYnGn-3llEO78bcxq8') + ytcfg.setdefault('INNERTUBE_HOST', 'www.youtube.com') + ytcfg.setdefault('REQUIRE_JS_PLAYER', True) + ytcfg['INNERTUBE_CONTEXT']['client'].setdefault('hl', 'en') + + _, base_client, variant = _split_innertube_client(client) + ytcfg['priority'] = 10 * priority(base_client) + + if not variant: + INNERTUBE_CLIENTS[f'{client}_embedscreen'] = embedscreen = copy.deepcopy(ytcfg) + embedscreen['INNERTUBE_CONTEXT']['client']['clientScreen'] = 'EMBED' + 
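+            # For example, the base 'web' entry yields a derived 'web_embedscreen'
+            # variant: its client context advertises clientScreen=EMBED and (just
+            # below) the thirdParty embed URL, at a slightly lower priority.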
embedscreen['INNERTUBE_CONTEXT']['thirdParty'] = THIRD_PARTY + embedscreen['priority'] -= 3 + elif variant == 'embedded': + ytcfg['INNERTUBE_CONTEXT']['thirdParty'] = THIRD_PARTY + ytcfg['priority'] -= 2 + else: + ytcfg['priority'] -= 3 + + +build_innertube_clients() + + +class BadgeType(enum.Enum): + AVAILABILITY_UNLISTED = enum.auto() + AVAILABILITY_PRIVATE = enum.auto() + AVAILABILITY_PUBLIC = enum.auto() + AVAILABILITY_PREMIUM = enum.auto() + AVAILABILITY_SUBSCRIPTION = enum.auto() + LIVE_NOW = enum.auto() + VERIFIED = enum.auto() + + +class YoutubeBaseInfoExtractor(InfoExtractor): + """Provide base functions for Youtube extractors""" + + _RESERVED_NAMES = ( + r'channel|c|user|playlist|watch|w|v|embed|e|live|watch_popup|clip|' + r'shorts|movies|results|search|shared|hashtag|trending|explore|feed|feeds|' + r'browse|oembed|get_video_info|iframe_api|s/player|source|' + r'storefront|oops|index|account|t/terms|about|upload|signin|logout') + + _PLAYLIST_ID_RE = r'(?:(?:PL|LL|EC|UU|FL|RD|UL|TL|PU|OLAK5uy_)[0-9A-Za-z-_]{10,}|RDMM|WL|LL|LM)' + + # _NETRC_MACHINE = 'youtube' + + # If True it will raise an error if no login info is provided + _LOGIN_REQUIRED = False + + _INVIDIOUS_SITES = ( + # invidious-redirect websites + r'(?:www\.)?redirect\.invidious\.io', + r'(?:(?:www|dev)\.)?invidio\.us', + # Invidious instances taken from https://github.com/iv-org/documentation/blob/master/docs/instances.md + r'(?:www\.)?invidious\.pussthecat\.org', + r'(?:www\.)?invidious\.zee\.li', + r'(?:www\.)?invidious\.ethibox\.fr', + r'(?:www\.)?iv\.ggtyler\.dev', + r'(?:www\.)?inv\.vern\.i2p', + r'(?:www\.)?am74vkcrjp2d5v36lcdqgsj2m6x36tbrkhsruoegwfcizzabnfgf5zyd\.onion', + r'(?:www\.)?inv\.riverside\.rocks', + r'(?:www\.)?invidious\.silur\.me', + r'(?:www\.)?inv\.bp\.projectsegfau\.lt', + r'(?:www\.)?invidious\.g4c3eya4clenolymqbpgwz3q3tawoxw56yhzk4vugqrl6dtu3ejvhjid\.onion', + r'(?:www\.)?invidious\.slipfox\.xyz', + r'(?:www\.)?invidious\.esmail5pdn24shtvieloeedh7ehz3nrwcdivnfhfcedl7gf4kwddhkqd\.onion', + r'(?:www\.)?inv\.vernccvbvyi5qhfzyqengccj7lkove6bjot2xhh5kajhwvidqafczrad\.onion', + r'(?:www\.)?invidious\.tiekoetter\.com', + r'(?:www\.)?iv\.odysfvr23q5wgt7i456o5t3trw2cw5dgn56vbjfbq2m7xsc5vqbqpcyd\.onion', + r'(?:www\.)?invidious\.nerdvpn\.de', + r'(?:www\.)?invidious\.weblibre\.org', + r'(?:www\.)?inv\.odyssey346\.dev', + r'(?:www\.)?invidious\.dhusch\.de', + r'(?:www\.)?iv\.melmac\.space', + r'(?:www\.)?watch\.thekitty\.zone', + r'(?:www\.)?invidious\.privacydev\.net', + r'(?:www\.)?ng27owmagn5amdm7l5s3rsqxwscl5ynppnis5dqcasogkyxcfqn7psid\.onion', + r'(?:www\.)?invidious\.drivet\.xyz', + r'(?:www\.)?vid\.priv\.au', + r'(?:www\.)?euxxcnhsynwmfidvhjf6uzptsmh4dipkmgdmcmxxuo7tunp3ad2jrwyd\.onion', + r'(?:www\.)?inv\.vern\.cc', + r'(?:www\.)?invidious\.esmailelbob\.xyz', + r'(?:www\.)?invidious\.sethforprivacy\.com', + r'(?:www\.)?yt\.oelrichsgarcia\.de', + r'(?:www\.)?yt\.artemislena\.eu', + r'(?:www\.)?invidious\.flokinet\.to', + r'(?:www\.)?invidious\.baczek\.me', + r'(?:www\.)?y\.com\.sb', + r'(?:www\.)?invidious\.epicsite\.xyz', + r'(?:www\.)?invidious\.lidarshield\.cloud', + r'(?:www\.)?yt\.funami\.tech', + r'(?:www\.)?invidious\.3o7z6yfxhbw7n3za4rss6l434kmv55cgw2vuziwuigpwegswvwzqipyd\.onion', + r'(?:www\.)?osbivz6guyeahrwp2lnwyjk2xos342h4ocsxyqrlaopqjuhwn2djiiyd\.onion', + r'(?:www\.)?u2cvlit75owumwpy4dj2hsmvkq7nvrclkpht7xgyye2pyoxhpmclkrad\.onion', + # youtube-dl invidious instances list + r'(?:(?:www|no)\.)?invidiou\.sh', + r'(?:(?:www|fi)\.)?invidious\.snopyta\.org', + 
r'(?:www\.)?invidious\.kabi\.tk', + r'(?:www\.)?invidious\.mastodon\.host', + r'(?:www\.)?invidious\.zapashcanon\.fr', + r'(?:www\.)?(?:invidious(?:-us)?|piped)\.kavin\.rocks', + r'(?:www\.)?invidious\.tinfoil-hat\.net', + r'(?:www\.)?invidious\.himiko\.cloud', + r'(?:www\.)?invidious\.reallyancient\.tech', + r'(?:www\.)?invidious\.tube', + r'(?:www\.)?invidiou\.site', + r'(?:www\.)?invidious\.site', + r'(?:www\.)?invidious\.xyz', + r'(?:www\.)?invidious\.nixnet\.xyz', + r'(?:www\.)?invidious\.048596\.xyz', + r'(?:www\.)?invidious\.drycat\.fr', + r'(?:www\.)?inv\.skyn3t\.in', + r'(?:www\.)?tube\.poal\.co', + r'(?:www\.)?tube\.connect\.cafe', + r'(?:www\.)?vid\.wxzm\.sx', + r'(?:www\.)?vid\.mint\.lgbt', + r'(?:www\.)?vid\.puffyan\.us', + r'(?:www\.)?yewtu\.be', + r'(?:www\.)?yt\.elukerio\.org', + r'(?:www\.)?yt\.lelux\.fi', + r'(?:www\.)?invidious\.ggc-project\.de', + r'(?:www\.)?yt\.maisputain\.ovh', + r'(?:www\.)?ytprivate\.com', + r'(?:www\.)?invidious\.13ad\.de', + r'(?:www\.)?invidious\.toot\.koeln', + r'(?:www\.)?invidious\.fdn\.fr', + r'(?:www\.)?watch\.nettohikari\.com', + r'(?:www\.)?invidious\.namazso\.eu', + r'(?:www\.)?invidious\.silkky\.cloud', + r'(?:www\.)?invidious\.exonip\.de', + r'(?:www\.)?invidious\.riverside\.rocks', + r'(?:www\.)?invidious\.blamefran\.net', + r'(?:www\.)?invidious\.moomoo\.de', + r'(?:www\.)?ytb\.trom\.tf', + r'(?:www\.)?yt\.cyberhost\.uk', + r'(?:www\.)?kgg2m7yk5aybusll\.onion', + r'(?:www\.)?qklhadlycap4cnod\.onion', + r'(?:www\.)?axqzx4s6s54s32yentfqojs3x5i7faxza6xo3ehd4bzzsg2ii4fv2iid\.onion', + r'(?:www\.)?c7hqkpkpemu6e7emz5b4vyz7idjgdvgaaa3dyimmeojqbgpea3xqjoid\.onion', + r'(?:www\.)?fz253lmuao3strwbfbmx46yu7acac2jz27iwtorgmbqlkurlclmancad\.onion', + r'(?:www\.)?invidious\.l4qlywnpwqsluw65ts7md3khrivpirse744un3x7mlskqauz5pyuzgqd\.onion', + r'(?:www\.)?owxfohz4kjyv25fvlqilyxast7inivgiktls3th44jhk3ej3i7ya\.b32\.i2p', + r'(?:www\.)?4l2dgddgsrkf2ous66i6seeyi6etzfgrue332grh2n7madpwopotugyd\.onion', + r'(?:www\.)?w6ijuptxiku4xpnnaetxvnkc5vqcdu7mgns2u77qefoixi63vbvnpnqd\.onion', + r'(?:www\.)?kbjggqkzv65ivcqj6bumvp337z6264huv5kpkwuv6gu5yjiskvan7fad\.onion', + r'(?:www\.)?grwp24hodrefzvjjuccrkw3mjq4tzhaaq32amf33dzpmuxe7ilepcmad\.onion', + r'(?:www\.)?hpniueoejy4opn7bc4ftgazyqjoeqwlvh2uiku2xqku6zpoa4bf5ruid\.onion', + # piped instances from https://github.com/TeamPiped/Piped/wiki/Instances + r'(?:www\.)?piped\.kavin\.rocks', + r'(?:www\.)?piped\.tokhmi\.xyz', + r'(?:www\.)?piped\.syncpundit\.io', + r'(?:www\.)?piped\.mha\.fi', + r'(?:www\.)?watch\.whatever\.social', + r'(?:www\.)?piped\.garudalinux\.org', + r'(?:www\.)?piped\.rivo\.lol', + r'(?:www\.)?piped-libre\.kavin\.rocks', + r'(?:www\.)?yt\.jae\.fi', + r'(?:www\.)?piped\.mint\.lgbt', + r'(?:www\.)?il\.ax', + r'(?:www\.)?piped\.esmailelbob\.xyz', + r'(?:www\.)?piped\.projectsegfau\.lt', + r'(?:www\.)?piped\.privacydev\.net', + r'(?:www\.)?piped\.palveluntarjoaja\.eu', + r'(?:www\.)?piped\.smnz\.de', + r'(?:www\.)?piped\.adminforge\.de', + r'(?:www\.)?watch\.whatevertinfoil\.de', + r'(?:www\.)?piped\.qdi\.fi', + r'(?:www\.)?piped\.video', + r'(?:www\.)?piped\.aeong\.one', + r'(?:www\.)?piped\.moomoo\.me', + r'(?:www\.)?piped\.chauvet\.pro', + r'(?:www\.)?watch\.leptons\.xyz', + r'(?:www\.)?pd\.vern\.cc', + r'(?:www\.)?piped\.hostux\.net', + r'(?:www\.)?piped\.lunar\.icu', + # Hyperpipe instances from https://hyperpipe.codeberg.page/ + r'(?:www\.)?hyperpipe\.surge\.sh', + r'(?:www\.)?hyperpipe\.esmailelbob\.xyz', + r'(?:www\.)?listen\.whatever\.social', + r'(?:www\.)?music\.adminforge\.de', + ) + + # 
extracted from account/account_menu ep + # XXX: These are the supported YouTube UI and API languages, + # which is slightly different from languages supported for translation in YouTube studio + _SUPPORTED_LANG_CODES = [ + 'af', 'az', 'id', 'ms', 'bs', 'ca', 'cs', 'da', 'de', 'et', 'en-IN', 'en-GB', 'en', 'es', + 'es-419', 'es-US', 'eu', 'fil', 'fr', 'fr-CA', 'gl', 'hr', 'zu', 'is', 'it', 'sw', 'lv', + 'lt', 'hu', 'nl', 'no', 'uz', 'pl', 'pt-PT', 'pt', 'ro', 'sq', 'sk', 'sl', 'sr-Latn', 'fi', + 'sv', 'vi', 'tr', 'be', 'bg', 'ky', 'kk', 'mk', 'mn', 'ru', 'sr', 'uk', 'el', 'hy', 'iw', + 'ur', 'ar', 'fa', 'ne', 'mr', 'hi', 'as', 'bn', 'pa', 'gu', 'or', 'ta', 'te', 'kn', 'ml', + 'si', 'th', 'lo', 'my', 'ka', 'am', 'km', 'zh-CN', 'zh-TW', 'zh-HK', 'ja', 'ko' + ] + + _IGNORED_WARNINGS = {'Unavailable videos will be hidden during playback'} + + _YT_HANDLE_RE = r'@[\w.-]{3,30}' # https://support.google.com/youtube/answer/11585688?hl=en + _YT_CHANNEL_UCID_RE = r'UC[\w-]{22}' + + def ucid_or_none(self, ucid): + return self._search_regex(rf'^({self._YT_CHANNEL_UCID_RE})$', ucid, 'UC-id', default=None) + + def handle_or_none(self, handle): + return self._search_regex(rf'^({self._YT_HANDLE_RE})$', handle, '@-handle', default=None) + + def handle_from_url(self, url): + return self._search_regex(rf'^(?:https?://(?:www\.)?youtube\.com)?/({self._YT_HANDLE_RE})', + url, 'channel handle', default=None) + + def ucid_from_url(self, url): + return self._search_regex(rf'^(?:https?://(?:www\.)?youtube\.com)?/({self._YT_CHANNEL_UCID_RE})', + url, 'channel id', default=None) + + @functools.cached_property + def _preferred_lang(self): + """ + Returns a language code supported by YouTube for the user preferred language. + Returns None if no preferred language set. + """ + preferred_lang = self._configuration_arg('lang', ie_key='Youtube', casesense=True, default=[''])[0] + if not preferred_lang: + return + if preferred_lang not in self._SUPPORTED_LANG_CODES: + raise ExtractorError( + f'Unsupported language code: {preferred_lang}. Supported language codes (case-sensitive): {join_nonempty(*self._SUPPORTED_LANG_CODES, delim=", ")}.', + expected=True) + elif preferred_lang != 'en': + self.report_warning( + f'Preferring "{preferred_lang}" translated fields. 
Note that some metadata extraction may fail or be incorrect.') + return preferred_lang + + def _initialize_consent(self): + cookies = self._get_cookies('https://www.youtube.com/') + if cookies.get('__Secure-3PSID'): + return + socs = cookies.get('SOCS') + if socs and not socs.value.startswith('CAA'): # not consented + return + self._set_cookie('.youtube.com', 'SOCS', 'CAI', secure=True) # accept all (required for mixes) + + def _initialize_pref(self): + cookies = self._get_cookies('https://www.youtube.com/') + pref_cookie = cookies.get('PREF') + pref = {} + if pref_cookie: + try: + pref = dict(urllib.parse.parse_qsl(pref_cookie.value)) + except ValueError: + self.report_warning('Failed to parse user PREF cookie' + bug_reports_message()) + pref.update({'hl': self._preferred_lang or 'en', 'tz': 'UTC'}) + self._set_cookie('.youtube.com', name='PREF', value=urllib.parse.urlencode(pref)) + + def _real_initialize(self): + self._initialize_pref() + self._initialize_consent() + self._check_login_required() + + def _check_login_required(self): + if self._LOGIN_REQUIRED and not self._cookies_passed: + self.raise_login_required('Login details are needed to download this content', method='cookies') + + _YT_INITIAL_DATA_RE = r'(?:window\s*\[\s*["\']ytInitialData["\']\s*\]|ytInitialData)\s*=' + _YT_INITIAL_PLAYER_RESPONSE_RE = r'ytInitialPlayerResponse\s*=' + + def _get_default_ytcfg(self, client='web'): + return copy.deepcopy(INNERTUBE_CLIENTS[client]) + + def _get_innertube_host(self, client='web'): + return INNERTUBE_CLIENTS[client]['INNERTUBE_HOST'] + + def _ytcfg_get_safe(self, ytcfg, getter, expected_type=None, default_client='web'): + # try_get but with fallback to default ytcfg client values when present + _func = lambda y: try_get(y, getter, expected_type) + return _func(ytcfg) or _func(self._get_default_ytcfg(default_client)) + + def _extract_client_name(self, ytcfg, default_client='web'): + return self._ytcfg_get_safe( + ytcfg, (lambda x: x['INNERTUBE_CLIENT_NAME'], + lambda x: x['INNERTUBE_CONTEXT']['client']['clientName']), str, default_client) + + def _extract_client_version(self, ytcfg, default_client='web'): + return self._ytcfg_get_safe( + ytcfg, (lambda x: x['INNERTUBE_CLIENT_VERSION'], + lambda x: x['INNERTUBE_CONTEXT']['client']['clientVersion']), str, default_client) + + def _select_api_hostname(self, req_api_hostname, default_client=None): + return (self._configuration_arg('innertube_host', [''], ie_key=YoutubeIE.ie_key())[0] + or req_api_hostname or self._get_innertube_host(default_client or 'web')) + + def _extract_api_key(self, ytcfg=None, default_client='web'): + return self._ytcfg_get_safe(ytcfg, lambda x: x['INNERTUBE_API_KEY'], str, default_client) + + def _extract_context(self, ytcfg=None, default_client='web'): + context = get_first( + (ytcfg, self._get_default_ytcfg(default_client)), 'INNERTUBE_CONTEXT', expected_type=dict) + # Enforce language and tz for extraction + client_context = traverse_obj(context, 'client', expected_type=dict, default={}) + client_context.update({'hl': self._preferred_lang or 'en', 'timeZone': 'UTC', 'utcOffsetMinutes': 0}) + return context + + _SAPISID = None + + def _generate_sapisidhash_header(self, origin='https://www.youtube.com'): + time_now = round(time.time()) + if self._SAPISID is None: + yt_cookies = self._get_cookies('https://www.youtube.com') + # Sometimes SAPISID cookie isn't present but __Secure-3PAPISID is. 
+ # See: https://github.com/yt-dlp/yt-dlp/issues/393 + sapisid_cookie = dict_get( + yt_cookies, ('__Secure-3PAPISID', 'SAPISID')) + if sapisid_cookie and sapisid_cookie.value: + self._SAPISID = sapisid_cookie.value + self.write_debug('Extracted SAPISID cookie') + # SAPISID cookie is required if not already present + if not yt_cookies.get('SAPISID'): + self.write_debug('Copying __Secure-3PAPISID cookie to SAPISID cookie') + self._set_cookie( + '.youtube.com', 'SAPISID', self._SAPISID, secure=True, expire_time=time_now + 3600) + else: + self._SAPISID = False + if not self._SAPISID: + return None + # SAPISIDHASH algorithm from https://stackoverflow.com/a/32065323 + sapisidhash = hashlib.sha1( + f'{time_now} {self._SAPISID} {origin}'.encode()).hexdigest() + return f'SAPISIDHASH {time_now}_{sapisidhash}' + + def _call_api(self, ep, query, video_id, fatal=True, headers=None, + note='Downloading API JSON', errnote='Unable to download API page', + context=None, api_key=None, api_hostname=None, default_client='web'): + + data = {'context': context} if context else {'context': self._extract_context(default_client=default_client)} + data.update(query) + real_headers = self.generate_api_headers(default_client=default_client) + real_headers.update({'content-type': 'application/json'}) + if headers: + real_headers.update(headers) + api_key = (self._configuration_arg('innertube_key', [''], ie_key=YoutubeIE.ie_key(), casesense=True)[0] + or api_key or self._extract_api_key(default_client=default_client)) + return self._download_json( + f'https://{self._select_api_hostname(api_hostname, default_client)}/youtubei/v1/{ep}', + video_id=video_id, fatal=fatal, note=note, errnote=errnote, + data=json.dumps(data).encode('utf8'), headers=real_headers, + query={'key': api_key, 'prettyPrint': 'false'}) + + def extract_yt_initial_data(self, item_id, webpage, fatal=True): + return self._search_json(self._YT_INITIAL_DATA_RE, webpage, 'yt initial data', item_id, fatal=fatal) + + @staticmethod + def _extract_session_index(*data): + """ + Index of current account in account list. + See: https://github.com/yt-dlp/yt-dlp/pull/519 + """ + for ytcfg in data: + session_index = int_or_none(try_get(ytcfg, lambda x: x['SESSION_INDEX'])) + if session_index is not None: + return session_index + + # Deprecated? + def _extract_identity_token(self, ytcfg=None, webpage=None): + if ytcfg: + token = try_get(ytcfg, lambda x: x['ID_TOKEN'], str) + if token: + return token + if webpage: + return self._search_regex( + r'\bID_TOKEN["\']\s*:\s*["\'](.+?)["\']', webpage, + 'identity token', default=None, fatal=False) + + @staticmethod + def _extract_account_syncid(*args): + """ + Extract syncId required to download private playlists of secondary channels + @params response and/or ytcfg + """ + for data in args: + # ytcfg includes channel_syncid if on secondary channel + delegated_sid = try_get(data, lambda x: x['DELEGATED_SESSION_ID'], str) + if delegated_sid: + return delegated_sid + sync_ids = (try_get( + data, (lambda x: x['responseContext']['mainAppWebResponseContext']['datasyncId'], + lambda x: x['DATASYNC_ID']), str) or '').split('||') + if len(sync_ids) >= 2 and sync_ids[1]: + # datasyncid is of the form "channel_syncid||user_syncid" for secondary channel + # and just "user_syncid||" for primary channel. 
We only want the channel_syncid + return sync_ids[0] + + @staticmethod + def _extract_visitor_data(*args): + """ + Extracts visitorData from an API response or ytcfg + Appears to be used to track session state + """ + return get_first( + args, [('VISITOR_DATA', ('INNERTUBE_CONTEXT', 'client', 'visitorData'), ('responseContext', 'visitorData'))], + expected_type=str) + + @functools.cached_property + def is_authenticated(self): + return bool(self._generate_sapisidhash_header()) + + def extract_ytcfg(self, video_id, webpage): + if not webpage: + return {} + return self._parse_json( + self._search_regex( + r'ytcfg\.set\s*\(\s*({.+?})\s*\)\s*;', webpage, 'ytcfg', + default='{}'), video_id, fatal=False) or {} + + def generate_api_headers( + self, *, ytcfg=None, account_syncid=None, session_index=None, + visitor_data=None, identity_token=None, api_hostname=None, default_client='web'): + + origin = 'https://' + (self._select_api_hostname(api_hostname, default_client)) + headers = { + 'X-YouTube-Client-Name': str( + self._ytcfg_get_safe(ytcfg, lambda x: x['INNERTUBE_CONTEXT_CLIENT_NAME'], default_client=default_client)), + 'X-YouTube-Client-Version': self._extract_client_version(ytcfg, default_client), + 'Origin': origin, + 'X-Youtube-Identity-Token': identity_token or self._extract_identity_token(ytcfg), + 'X-Goog-PageId': account_syncid or self._extract_account_syncid(ytcfg), + 'X-Goog-Visitor-Id': visitor_data or self._extract_visitor_data(ytcfg), + 'User-Agent': self._ytcfg_get_safe(ytcfg, lambda x: x['INNERTUBE_CONTEXT']['client']['userAgent'], default_client=default_client) + } + if session_index is None: + session_index = self._extract_session_index(ytcfg) + if account_syncid or session_index is not None: + headers['X-Goog-AuthUser'] = session_index if session_index is not None else 0 + + auth = self._generate_sapisidhash_header(origin) + if auth is not None: + headers['Authorization'] = auth + headers['X-Origin'] = origin + return filter_dict(headers) + + def _download_ytcfg(self, client, video_id): + url = { + 'web': 'https://www.youtube.com', + 'web_music': 'https://music.youtube.com', + 'web_embedded': f'https://www.youtube.com/embed/{video_id}?html5=1' + }.get(client) + if not url: + return {} + webpage = self._download_webpage( + url, video_id, fatal=False, note=f'Downloading {client.replace("_", " ").strip()} client config') + return self.extract_ytcfg(video_id, webpage) or {} + + @staticmethod + def _build_api_continuation_query(continuation, ctp=None): + query = { + 'continuation': continuation + } + # TODO: Inconsistency with clickTrackingParams. + # Currently we have a fixed ctp contained within context (from ytcfg) + # and a ctp in root query for continuation. 
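+        # Sketch of the resulting shape (hypothetical token values):
+        #   _build_api_continuation_query('4qmFsgh', ctp='CBQQ7zsh') returns
+        #   {'continuation': '4qmFsgh', 'clickTracking': {'clickTrackingParams': 'CBQQ7zsh'}}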
+ if ctp: + query['clickTracking'] = {'clickTrackingParams': ctp} + return query + + @classmethod + def _extract_next_continuation_data(cls, renderer): + next_continuation = try_get( + renderer, (lambda x: x['continuations'][0]['nextContinuationData'], + lambda x: x['continuation']['reloadContinuationData']), dict) + if not next_continuation: + return + continuation = next_continuation.get('continuation') + if not continuation: + return + ctp = next_continuation.get('clickTrackingParams') + return cls._build_api_continuation_query(continuation, ctp) + + @classmethod + def _extract_continuation_ep_data(cls, continuation_ep: dict): + if isinstance(continuation_ep, dict): + continuation = try_get( + continuation_ep, lambda x: x['continuationCommand']['token'], str) + if not continuation: + return + ctp = continuation_ep.get('clickTrackingParams') + return cls._build_api_continuation_query(continuation, ctp) + + @classmethod + def _extract_continuation(cls, renderer): + next_continuation = cls._extract_next_continuation_data(renderer) + if next_continuation: + return next_continuation + + return traverse_obj(renderer, ( + ('contents', 'items', 'rows'), ..., 'continuationItemRenderer', + ('continuationEndpoint', ('button', 'buttonRenderer', 'command')) + ), get_all=False, expected_type=cls._extract_continuation_ep_data) + + @classmethod + def _extract_alerts(cls, data): + for alert_dict in try_get(data, lambda x: x['alerts'], list) or []: + if not isinstance(alert_dict, dict): + continue + for alert in alert_dict.values(): + alert_type = alert.get('type') + if not alert_type: + continue + message = cls._get_text(alert, 'text') + if message: + yield alert_type, message + + def _report_alerts(self, alerts, expected=True, fatal=True, only_once=False): + errors, warnings = [], [] + for alert_type, alert_message in alerts: + if alert_type.lower() == 'error' and fatal: + errors.append([alert_type, alert_message]) + elif alert_message not in self._IGNORED_WARNINGS: + warnings.append([alert_type, alert_message]) + + for alert_type, alert_message in (warnings + errors[:-1]): + self.report_warning(f'YouTube said: {alert_type} - {alert_message}', only_once=only_once) + if errors: + raise ExtractorError('YouTube said: %s' % errors[-1][1], expected=expected) + + def _extract_and_report_alerts(self, data, *args, **kwargs): + return self._report_alerts(self._extract_alerts(data), *args, **kwargs) + + def _extract_badges(self, badge_list: list): + """ + Extract known BadgeType's from a list of badge renderers. 
+ @returns [{'type': BadgeType}] + """ + icon_type_map = { + 'PRIVACY_UNLISTED': BadgeType.AVAILABILITY_UNLISTED, + 'PRIVACY_PRIVATE': BadgeType.AVAILABILITY_PRIVATE, + 'PRIVACY_PUBLIC': BadgeType.AVAILABILITY_PUBLIC, + 'CHECK_CIRCLE_THICK': BadgeType.VERIFIED, + 'OFFICIAL_ARTIST_BADGE': BadgeType.VERIFIED, + 'CHECK': BadgeType.VERIFIED, + } + + badge_style_map = { + 'BADGE_STYLE_TYPE_MEMBERS_ONLY': BadgeType.AVAILABILITY_SUBSCRIPTION, + 'BADGE_STYLE_TYPE_PREMIUM': BadgeType.AVAILABILITY_PREMIUM, + 'BADGE_STYLE_TYPE_LIVE_NOW': BadgeType.LIVE_NOW, + 'BADGE_STYLE_TYPE_VERIFIED': BadgeType.VERIFIED, + 'BADGE_STYLE_TYPE_VERIFIED_ARTIST': BadgeType.VERIFIED, + } + + label_map = { + 'unlisted': BadgeType.AVAILABILITY_UNLISTED, + 'private': BadgeType.AVAILABILITY_PRIVATE, + 'members only': BadgeType.AVAILABILITY_SUBSCRIPTION, + 'live': BadgeType.LIVE_NOW, + 'premium': BadgeType.AVAILABILITY_PREMIUM, + 'verified': BadgeType.VERIFIED, + 'official artist channel': BadgeType.VERIFIED, + } + + badges = [] + for badge in traverse_obj(badge_list, (..., lambda key, _: re.search(r'[bB]adgeRenderer$', key))): + badge_type = ( + icon_type_map.get(traverse_obj(badge, ('icon', 'iconType'), expected_type=str)) + or badge_style_map.get(traverse_obj(badge, 'style')) + ) + if badge_type: + badges.append({'type': badge_type}) + continue + + # fallback, won't work in some languages + label = traverse_obj( + badge, 'label', ('accessibilityData', 'label'), 'tooltip', 'iconTooltip', get_all=False, expected_type=str, default='') + for match, label_badge_type in label_map.items(): + if match in label.lower(): + badges.append({'type': label_badge_type}) + break + + return badges + + @staticmethod + def _has_badge(badges, badge_type): + return bool(traverse_obj(badges, lambda _, v: v['type'] == badge_type)) + + @staticmethod + def _get_text(data, *path_list, max_runs=None): + for path in path_list or [None]: + if path is None: + obj = [data] + else: + obj = traverse_obj(data, path, default=[]) + if not any(key is ... or isinstance(key, (list, tuple)) for key in variadic(path)): + obj = [obj] + for item in obj: + text = try_get(item, lambda x: x['simpleText'], str) + if text: + return text + runs = try_get(item, lambda x: x['runs'], list) or [] + if not runs and isinstance(item, list): + runs = item + + runs = runs[:min(len(runs), max_runs or len(runs))] + text = ''.join(traverse_obj(runs, (..., 'text'), expected_type=str)) + if text: + return text + + def _get_count(self, data, *path_list): + count_text = self._get_text(data, *path_list) or '' + count = parse_count(count_text) + if count is None: + count = str_to_int( + self._search_regex(r'^([\d,]+)', re.sub(r'\s', '', count_text), 'count', default=None)) + return count + + @staticmethod + def _extract_thumbnails(data, *path_list): + """ + Extract thumbnails from thumbnails dict + @param path_list: path list to level that contains 'thumbnails' key + """ + thumbnails = [] + for path in path_list or [()]: + for thumbnail in traverse_obj(data, (*variadic(path), 'thumbnails', ...)): + thumbnail_url = url_or_none(thumbnail.get('url')) + if not thumbnail_url: + continue + # Sometimes youtube gives a wrong thumbnail URL. 
See: + # https://github.com/yt-dlp/yt-dlp/issues/233 + # https://github.com/ytdl-org/youtube-dl/issues/28023 + if 'maxresdefault' in thumbnail_url: + thumbnail_url = thumbnail_url.split('?')[0] + thumbnails.append({ + 'url': thumbnail_url, + 'height': int_or_none(thumbnail.get('height')), + 'width': int_or_none(thumbnail.get('width')), + }) + return thumbnails + + @staticmethod + def extract_relative_time(relative_time_text): + """ + Extracts a relative time from string and converts to dt object + e.g. 'streamed 6 days ago', '5 seconds ago (edited)', 'updated today', '8 yr ago' + """ + + # XXX: this could be moved to a general function in utils/_utils.py + # The relative time text strings are roughly the same as what + # Javascript's Intl.RelativeTimeFormat function generates. + # See: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/RelativeTimeFormat + mobj = re.search( + r'(?P<start>today|yesterday|now)|(?P<time>\d+)\s*(?P<unit>sec(?:ond)?|s|min(?:ute)?|h(?:our|r)?|d(?:ay)?|w(?:eek|k)?|mo(?:nth)?|y(?:ear|r)?)s?\s*ago', + relative_time_text) + if mobj: + start = mobj.group('start') + if start: + return datetime_from_str(start) + try: + return datetime_from_str('now-%s%s' % (mobj.group('time'), mobj.group('unit'))) + except ValueError: + return None + + def _parse_time_text(self, text): + if not text: + return + dt = self.extract_relative_time(text) + timestamp = None + if isinstance(dt, datetime.datetime): + timestamp = calendar.timegm(dt.timetuple()) + + if timestamp is None: + timestamp = ( + unified_timestamp(text) or unified_timestamp( + self._search_regex( + (r'([a-z]+\s*\d{1,2},?\s*20\d{2})', r'(?:.+|^)(?:live|premieres|ed|ing)(?:\s*(?:on|for))?\s*(.+\d)'), + text.lower(), 'time text', default=None))) + + if text and timestamp is None and self._preferred_lang in (None, 'en'): + self.report_warning( + f'Cannot parse localized time text "{text}"', only_once=True) + return timestamp + + def _extract_response(self, item_id, query, note='Downloading API JSON', headers=None, + ytcfg=None, check_get_keys=None, ep='browse', fatal=True, api_hostname=None, + default_client='web'): + raise_for_incomplete = bool(self._configuration_arg('raise_incomplete_data', ie_key=YoutubeIE)) + # Incomplete Data should be a warning by default when retries are exhausted, while other errors should be fatal. 
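+        # Usage sketch (assuming yt-dlp's extractor-args CLI syntax): running with
+        #   --extractor-args "youtube:raise_incomplete_data"
+        # makes 'Incomplete data received' fatal once the retries below are
+        # exhausted; by default the final failure is only reported as a warning.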
+ icd_retries = iter(self.RetryManager(fatal=raise_for_incomplete)) + icd_rm = next(icd_retries) + main_retries = iter(self.RetryManager()) + main_rm = next(main_retries) + # Manual retry loop for multiple RetryManagers + # The proper RetryManager MUST be advanced after an error + # and its result MUST be checked if the manager is non fatal + while True: + try: + response = self._call_api( + ep=ep, fatal=True, headers=headers, + video_id=item_id, query=query, note=note, + context=self._extract_context(ytcfg, default_client), + api_key=self._extract_api_key(ytcfg, default_client), + api_hostname=api_hostname, default_client=default_client) + except ExtractorError as e: + if not isinstance(e.cause, network_exceptions): + return self._error_or_warning(e, fatal=fatal) + elif not isinstance(e.cause, HTTPError): + main_rm.error = e + next(main_retries) + continue + + first_bytes = e.cause.response.read(512) + if not is_html(first_bytes): + yt_error = try_get( + self._parse_json( + self._webpage_read_content(e.cause.response, None, item_id, prefix=first_bytes) or '{}', item_id, fatal=False), + lambda x: x['error']['message'], str) + if yt_error: + self._report_alerts([('ERROR', yt_error)], fatal=False) + # Downloading page may result in intermittent 5xx HTTP error + # Sometimes a 404 is also received. See: https://github.com/ytdl-org/youtube-dl/issues/28289 + # We also want to catch all other network exceptions since errors in later pages can be troublesome + # See https://github.com/yt-dlp/yt-dlp/issues/507#issuecomment-880188210 + if e.cause.status not in (403, 429): + main_rm.error = e + next(main_retries) + continue + return self._error_or_warning(e, fatal=fatal) + + try: + self._extract_and_report_alerts(response, only_once=True) + except ExtractorError as e: + # YouTube's servers may return errors we want to retry on in a 200 OK response + # See: https://github.com/yt-dlp/yt-dlp/issues/839 + if 'unknown error' in e.msg.lower(): + main_rm.error = e + next(main_retries) + continue + return self._error_or_warning(e, fatal=fatal) + # Youtube sometimes sends incomplete data + # See: https://github.com/ytdl-org/youtube-dl/issues/28194 + if not traverse_obj(response, *variadic(check_get_keys)): + icd_rm.error = ExtractorError('Incomplete data received', expected=True) + should_retry = next(icd_retries, None) + if not should_retry: + return None + continue + + return response + + @staticmethod + def is_music_url(url): + return re.match(r'(https?://)?music\.youtube\.com/', url) is not None + + def _extract_video(self, renderer): + video_id = renderer.get('videoId') + + reel_header_renderer = traverse_obj(renderer, ( + 'navigationEndpoint', 'reelWatchEndpoint', 'overlay', 'reelPlayerOverlayRenderer', + 'reelPlayerHeaderSupportedRenderers', 'reelPlayerHeaderRenderer')) + + title = self._get_text(renderer, 'title', 'headline') or self._get_text(reel_header_renderer, 'reelTitleText') + description = self._get_text(renderer, 'descriptionSnippet') + + duration = int_or_none(renderer.get('lengthSeconds')) + if duration is None: + duration = parse_duration(self._get_text( + renderer, 'lengthText', ('thumbnailOverlays', ..., 'thumbnailOverlayTimeStatusRenderer', 'text'))) + if duration is None: + # XXX: should write a parser to be more general to support more cases (e.g. 
shorts in shorts tab) + duration = parse_duration(self._search_regex( + r'(?i)(ago)(?!.*\1)\s+(?P<duration>[a-z0-9 ,]+?)(?:\s+[\d,]+\s+views)?(?:\s+-\s+play\s+short)?$', + traverse_obj(renderer, ('title', 'accessibility', 'accessibilityData', 'label'), default='', expected_type=str), + video_id, default=None, group='duration')) + + channel_id = traverse_obj( + renderer, ('shortBylineText', 'runs', ..., 'navigationEndpoint', 'browseEndpoint', 'browseId'), + expected_type=str, get_all=False) + if not channel_id: + channel_id = traverse_obj(reel_header_renderer, ('channelNavigationEndpoint', 'browseEndpoint', 'browseId')) + + channel_id = self.ucid_or_none(channel_id) + + overlay_style = traverse_obj( + renderer, ('thumbnailOverlays', ..., 'thumbnailOverlayTimeStatusRenderer', 'style'), + get_all=False, expected_type=str) + badges = self._extract_badges(traverse_obj(renderer, 'badges')) + owner_badges = self._extract_badges(traverse_obj(renderer, 'ownerBadges')) + navigation_url = urljoin('https://www.youtube.com/', traverse_obj( + renderer, ('navigationEndpoint', 'commandMetadata', 'webCommandMetadata', 'url'), + expected_type=str)) or '' + url = f'https://www.youtube.com/watch?v={video_id}' + if overlay_style == 'SHORTS' or '/shorts/' in navigation_url: + url = f'https://www.youtube.com/shorts/{video_id}' + + time_text = (self._get_text(renderer, 'publishedTimeText', 'videoInfo') + or self._get_text(reel_header_renderer, 'timestampText') or '') + scheduled_timestamp = str_to_int(traverse_obj(renderer, ('upcomingEventData', 'startTime'), get_all=False)) + + live_status = ( + 'is_upcoming' if scheduled_timestamp is not None + else 'was_live' if 'streamed' in time_text.lower() + else 'is_live' if overlay_style == 'LIVE' or self._has_badge(badges, BadgeType.LIVE_NOW) + else None) + + # videoInfo is a string like '50K views • 10 years ago'. 
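+        # For instance, a hypothetical videoInfo of '50K views • 10 years ago' yields
+        # view_count 50000 via _get_count()/parse_count() below, while any text
+        # containing 'no views' is short-circuited to 0.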
+ view_count_text = self._get_text(renderer, 'viewCountText', 'shortViewCountText', 'videoInfo') or '' + view_count = (0 if 'no views' in view_count_text.lower() + else self._get_count({'simpleText': view_count_text})) + view_count_field = 'concurrent_view_count' if live_status in ('is_live', 'is_upcoming') else 'view_count' + + channel = (self._get_text(renderer, 'ownerText', 'shortBylineText') + or self._get_text(reel_header_renderer, 'channelTitleText')) + + channel_handle = traverse_obj(renderer, ( + 'shortBylineText', 'runs', ..., 'navigationEndpoint', + (('commandMetadata', 'webCommandMetadata', 'url'), ('browseEndpoint', 'canonicalBaseUrl'))), + expected_type=self.handle_from_url, get_all=False) + return { + '_type': 'url', + 'ie_key': YoutubeIE.ie_key(), + 'id': video_id, + 'url': url, + 'title': title, + 'description': description, + 'duration': duration, + 'channel_id': channel_id, + 'channel': channel, + 'channel_url': f'https://www.youtube.com/channel/{channel_id}' if channel_id else None, + 'uploader': channel, + 'uploader_id': channel_handle, + 'uploader_url': format_field(channel_handle, None, 'https://www.youtube.com/%s', default=None), + 'thumbnails': self._extract_thumbnails(renderer, 'thumbnail'), + 'timestamp': (self._parse_time_text(time_text) + if self._configuration_arg('approximate_date', ie_key=YoutubeTabIE) + else None), + 'release_timestamp': scheduled_timestamp, + 'availability': + 'public' if self._has_badge(badges, BadgeType.AVAILABILITY_PUBLIC) + else self._availability( + is_private=self._has_badge(badges, BadgeType.AVAILABILITY_PRIVATE) or None, + needs_premium=self._has_badge(badges, BadgeType.AVAILABILITY_PREMIUM) or None, + needs_subscription=self._has_badge(badges, BadgeType.AVAILABILITY_SUBSCRIPTION) or None, + is_unlisted=self._has_badge(badges, BadgeType.AVAILABILITY_UNLISTED) or None), + view_count_field: view_count, + 'live_status': live_status, + 'channel_is_verified': True if self._has_badge(owner_badges, BadgeType.VERIFIED) else None + } + + +class YoutubeIE(YoutubeBaseInfoExtractor): + IE_DESC = 'YouTube' + _VALID_URL = r"""(?x)^ + ( + (?:https?://|//) # http(s):// or protocol-independent URL + (?:(?:(?:(?:\w+\.)?[yY][oO][uU][tT][uU][bB][eE](?:-nocookie|kids)?\.com| + (?:www\.)?deturl\.com/www\.youtube\.com| + (?:www\.)?pwnyoutube\.com| + (?:www\.)?hooktube\.com| + (?:www\.)?yourepeat\.com| + tube\.majestyc\.net| + %(invidious)s| + youtube\.googleapis\.com)/ # the various hostnames, with wildcard subdomains + (?:.*?\#/)? # handle anchor (#/) redirect urls + (?: # the various things that can precede the ID: + (?:(?:v|embed|e|shorts|live)/(?!videoseries|live_stream)) # v/ or embed/ or e/ or shorts/ + |(?: # or the v= param in all its forms + (?:(?:watch|movie)(?:_popup)?(?:\.php)?/?)? # preceding watch(_popup|.php) or nothing (like /?v=xxxx) + (?:\?|\#!?) # the params delimiter ? or # or #! + (?:.*?[&;])?? # any other preceding param (like /?s=tuff&v=xxxx or ?s=tuff&v=V36LpHqtcDY) + v= + ) + )) + |(?: + youtu\.be| # just youtu.be/xxxx + vid\.plus| # or vid.plus/xxxx + zwearz\.com/watch| # or zwearz.com/watch/xxxx + %(invidious)s + )/ + |(?:www\.)?cleanvideosearch\.com/media/action/yt/watch\?videoId= + ) + )? # all until now is optional -> you can pass the naked ID + (?P<id>[0-9A-Za-z_-]{11}) # here is it! the YouTube video ID + (?(1).+)? 
# if we found the ID, everything can follow + (?:\#|$)""" % { + 'invidious': '|'.join(YoutubeBaseInfoExtractor._INVIDIOUS_SITES), + } + _EMBED_REGEX = [ + r'''(?x) + (?: + <(?:[0-9A-Za-z-]+?)?iframe[^>]+?src=| + data-video-url=| + <embed[^>]+?src=| + embedSWF\(?:\s*| + <object[^>]+data=| + new\s+SWFObject\( + ) + (["\']) + (?P<url>(?:https?:)?//(?:www\.)?youtube(?:-nocookie)?\.com/ + (?:embed|v|p)/[0-9A-Za-z_-]{11}.*?) + \1''', + # https://wordpress.org/plugins/lazy-load-for-videos/ + r'''(?xs) + <a\s[^>]*\bhref="(?P<url>https://www\.youtube\.com/watch\?v=[0-9A-Za-z_-]{11})" + \s[^>]*\bclass="[^"]*\blazy-load-youtube''', + ] + _RETURN_TYPE = 'video' # XXX: How to handle multifeed? + + _PLAYER_INFO_RE = ( + r'/s/player/(?P<id>[a-zA-Z0-9_-]{8,})/player', + r'/(?P<id>[a-zA-Z0-9_-]{8,})/player(?:_ias\.vflset(?:/[a-zA-Z]{2,3}_[a-zA-Z]{2,3})?|-plasma-ias-(?:phone|tablet)-[a-z]{2}_[A-Z]{2}\.vflset)/base\.js$', + r'\b(?P<id>vfl[a-zA-Z0-9_-]+)\b.*?\.js$', + ) + _formats = { + '5': {'ext': 'flv', 'width': 400, 'height': 240, 'acodec': 'mp3', 'abr': 64, 'vcodec': 'h263'}, + '6': {'ext': 'flv', 'width': 450, 'height': 270, 'acodec': 'mp3', 'abr': 64, 'vcodec': 'h263'}, + '13': {'ext': '3gp', 'acodec': 'aac', 'vcodec': 'mp4v'}, + '17': {'ext': '3gp', 'width': 176, 'height': 144, 'acodec': 'aac', 'abr': 24, 'vcodec': 'mp4v'}, + '18': {'ext': 'mp4', 'width': 640, 'height': 360, 'acodec': 'aac', 'abr': 96, 'vcodec': 'h264'}, + '22': {'ext': 'mp4', 'width': 1280, 'height': 720, 'acodec': 'aac', 'abr': 192, 'vcodec': 'h264'}, + '34': {'ext': 'flv', 'width': 640, 'height': 360, 'acodec': 'aac', 'abr': 128, 'vcodec': 'h264'}, + '35': {'ext': 'flv', 'width': 854, 'height': 480, 'acodec': 'aac', 'abr': 128, 'vcodec': 'h264'}, + # itag 36 videos are either 320x180 (BaW_jenozKc) or 320x240 (__2ABJjxzNo), abr varies as well + '36': {'ext': '3gp', 'width': 320, 'acodec': 'aac', 'vcodec': 'mp4v'}, + '37': {'ext': 'mp4', 'width': 1920, 'height': 1080, 'acodec': 'aac', 'abr': 192, 'vcodec': 'h264'}, + '38': {'ext': 'mp4', 'width': 4096, 'height': 3072, 'acodec': 'aac', 'abr': 192, 'vcodec': 'h264'}, + '43': {'ext': 'webm', 'width': 640, 'height': 360, 'acodec': 'vorbis', 'abr': 128, 'vcodec': 'vp8'}, + '44': {'ext': 'webm', 'width': 854, 'height': 480, 'acodec': 'vorbis', 'abr': 128, 'vcodec': 'vp8'}, + '45': {'ext': 'webm', 'width': 1280, 'height': 720, 'acodec': 'vorbis', 'abr': 192, 'vcodec': 'vp8'}, + '46': {'ext': 'webm', 'width': 1920, 'height': 1080, 'acodec': 'vorbis', 'abr': 192, 'vcodec': 'vp8'}, + '59': {'ext': 'mp4', 'width': 854, 'height': 480, 'acodec': 'aac', 'abr': 128, 'vcodec': 'h264'}, + '78': {'ext': 'mp4', 'width': 854, 'height': 480, 'acodec': 'aac', 'abr': 128, 'vcodec': 'h264'}, + + + # 3D videos + '82': {'ext': 'mp4', 'height': 360, 'format_note': '3D', 'acodec': 'aac', 'abr': 128, 'vcodec': 'h264', 'preference': -20}, + '83': {'ext': 'mp4', 'height': 480, 'format_note': '3D', 'acodec': 'aac', 'abr': 128, 'vcodec': 'h264', 'preference': -20}, + '84': {'ext': 'mp4', 'height': 720, 'format_note': '3D', 'acodec': 'aac', 'abr': 192, 'vcodec': 'h264', 'preference': -20}, + '85': {'ext': 'mp4', 'height': 1080, 'format_note': '3D', 'acodec': 'aac', 'abr': 192, 'vcodec': 'h264', 'preference': -20}, + '100': {'ext': 'webm', 'height': 360, 'format_note': '3D', 'acodec': 'vorbis', 'abr': 128, 'vcodec': 'vp8', 'preference': -20}, + '101': {'ext': 'webm', 'height': 480, 'format_note': '3D', 'acodec': 'vorbis', 'abr': 192, 'vcodec': 'vp8', 'preference': -20}, + '102': {'ext': 'webm', 'height': 720, 
'format_note': '3D', 'acodec': 'vorbis', 'abr': 192, 'vcodec': 'vp8', 'preference': -20}, + + # Apple HTTP Live Streaming + '91': {'ext': 'mp4', 'height': 144, 'format_note': 'HLS', 'acodec': 'aac', 'abr': 48, 'vcodec': 'h264', 'preference': -10}, + '92': {'ext': 'mp4', 'height': 240, 'format_note': 'HLS', 'acodec': 'aac', 'abr': 48, 'vcodec': 'h264', 'preference': -10}, + '93': {'ext': 'mp4', 'height': 360, 'format_note': 'HLS', 'acodec': 'aac', 'abr': 128, 'vcodec': 'h264', 'preference': -10}, + '94': {'ext': 'mp4', 'height': 480, 'format_note': 'HLS', 'acodec': 'aac', 'abr': 128, 'vcodec': 'h264', 'preference': -10}, + '95': {'ext': 'mp4', 'height': 720, 'format_note': 'HLS', 'acodec': 'aac', 'abr': 256, 'vcodec': 'h264', 'preference': -10}, + '96': {'ext': 'mp4', 'height': 1080, 'format_note': 'HLS', 'acodec': 'aac', 'abr': 256, 'vcodec': 'h264', 'preference': -10}, + '132': {'ext': 'mp4', 'height': 240, 'format_note': 'HLS', 'acodec': 'aac', 'abr': 48, 'vcodec': 'h264', 'preference': -10}, + '151': {'ext': 'mp4', 'height': 72, 'format_note': 'HLS', 'acodec': 'aac', 'abr': 24, 'vcodec': 'h264', 'preference': -10}, + + # DASH mp4 video + '133': {'ext': 'mp4', 'height': 240, 'format_note': 'DASH video', 'vcodec': 'h264'}, + '134': {'ext': 'mp4', 'height': 360, 'format_note': 'DASH video', 'vcodec': 'h264'}, + '135': {'ext': 'mp4', 'height': 480, 'format_note': 'DASH video', 'vcodec': 'h264'}, + '136': {'ext': 'mp4', 'height': 720, 'format_note': 'DASH video', 'vcodec': 'h264'}, + '137': {'ext': 'mp4', 'height': 1080, 'format_note': 'DASH video', 'vcodec': 'h264'}, + '138': {'ext': 'mp4', 'format_note': 'DASH video', 'vcodec': 'h264'}, # Height can vary (https://github.com/ytdl-org/youtube-dl/issues/4559) + '160': {'ext': 'mp4', 'height': 144, 'format_note': 'DASH video', 'vcodec': 'h264'}, + '212': {'ext': 'mp4', 'height': 480, 'format_note': 'DASH video', 'vcodec': 'h264'}, + '264': {'ext': 'mp4', 'height': 1440, 'format_note': 'DASH video', 'vcodec': 'h264'}, + '298': {'ext': 'mp4', 'height': 720, 'format_note': 'DASH video', 'vcodec': 'h264', 'fps': 60}, + '299': {'ext': 'mp4', 'height': 1080, 'format_note': 'DASH video', 'vcodec': 'h264', 'fps': 60}, + '266': {'ext': 'mp4', 'height': 2160, 'format_note': 'DASH video', 'vcodec': 'h264'}, + + # Dash mp4 audio + '139': {'ext': 'm4a', 'format_note': 'DASH audio', 'acodec': 'aac', 'abr': 48, 'container': 'm4a_dash'}, + '140': {'ext': 'm4a', 'format_note': 'DASH audio', 'acodec': 'aac', 'abr': 128, 'container': 'm4a_dash'}, + '141': {'ext': 'm4a', 'format_note': 'DASH audio', 'acodec': 'aac', 'abr': 256, 'container': 'm4a_dash'}, + '256': {'ext': 'm4a', 'format_note': 'DASH audio', 'acodec': 'aac', 'container': 'm4a_dash'}, + '258': {'ext': 'm4a', 'format_note': 'DASH audio', 'acodec': 'aac', 'container': 'm4a_dash'}, + '325': {'ext': 'm4a', 'format_note': 'DASH audio', 'acodec': 'dtse', 'container': 'm4a_dash'}, + '328': {'ext': 'm4a', 'format_note': 'DASH audio', 'acodec': 'ec-3', 'container': 'm4a_dash'}, + + # Dash webm + '167': {'ext': 'webm', 'height': 360, 'width': 640, 'format_note': 'DASH video', 'container': 'webm', 'vcodec': 'vp8'}, + '168': {'ext': 'webm', 'height': 480, 'width': 854, 'format_note': 'DASH video', 'container': 'webm', 'vcodec': 'vp8'}, + '169': {'ext': 'webm', 'height': 720, 'width': 1280, 'format_note': 'DASH video', 'container': 'webm', 'vcodec': 'vp8'}, + '170': {'ext': 'webm', 'height': 1080, 'width': 1920, 'format_note': 'DASH video', 'container': 'webm', 'vcodec': 'vp8'}, + '218': {'ext': 'webm', 'height': 
480, 'width': 854, 'format_note': 'DASH video', 'container': 'webm', 'vcodec': 'vp8'}, + '219': {'ext': 'webm', 'height': 480, 'width': 854, 'format_note': 'DASH video', 'container': 'webm', 'vcodec': 'vp8'}, + '278': {'ext': 'webm', 'height': 144, 'format_note': 'DASH video', 'container': 'webm', 'vcodec': 'vp9'}, + '242': {'ext': 'webm', 'height': 240, 'format_note': 'DASH video', 'vcodec': 'vp9'}, + '243': {'ext': 'webm', 'height': 360, 'format_note': 'DASH video', 'vcodec': 'vp9'}, + '244': {'ext': 'webm', 'height': 480, 'format_note': 'DASH video', 'vcodec': 'vp9'}, + '245': {'ext': 'webm', 'height': 480, 'format_note': 'DASH video', 'vcodec': 'vp9'}, + '246': {'ext': 'webm', 'height': 480, 'format_note': 'DASH video', 'vcodec': 'vp9'}, + '247': {'ext': 'webm', 'height': 720, 'format_note': 'DASH video', 'vcodec': 'vp9'}, + '248': {'ext': 'webm', 'height': 1080, 'format_note': 'DASH video', 'vcodec': 'vp9'}, + '271': {'ext': 'webm', 'height': 1440, 'format_note': 'DASH video', 'vcodec': 'vp9'}, + # itag 272 videos are either 3840x2160 (e.g. RtoitU2A-3E) or 7680x4320 (sLprVF6d7Ug) + '272': {'ext': 'webm', 'height': 2160, 'format_note': 'DASH video', 'vcodec': 'vp9'}, + '302': {'ext': 'webm', 'height': 720, 'format_note': 'DASH video', 'vcodec': 'vp9', 'fps': 60}, + '303': {'ext': 'webm', 'height': 1080, 'format_note': 'DASH video', 'vcodec': 'vp9', 'fps': 60}, + '308': {'ext': 'webm', 'height': 1440, 'format_note': 'DASH video', 'vcodec': 'vp9', 'fps': 60}, + '313': {'ext': 'webm', 'height': 2160, 'format_note': 'DASH video', 'vcodec': 'vp9'}, + '315': {'ext': 'webm', 'height': 2160, 'format_note': 'DASH video', 'vcodec': 'vp9', 'fps': 60}, + + # Dash webm audio + '171': {'ext': 'webm', 'acodec': 'vorbis', 'format_note': 'DASH audio', 'abr': 128}, + '172': {'ext': 'webm', 'acodec': 'vorbis', 'format_note': 'DASH audio', 'abr': 256}, + + # Dash webm audio with opus inside + '249': {'ext': 'webm', 'format_note': 'DASH audio', 'acodec': 'opus', 'abr': 50}, + '250': {'ext': 'webm', 'format_note': 'DASH audio', 'acodec': 'opus', 'abr': 70}, + '251': {'ext': 'webm', 'format_note': 'DASH audio', 'acodec': 'opus', 'abr': 160}, + + # RTMP (unnamed) + '_rtmp': {'protocol': 'rtmp'}, + + # av01 video only formats sometimes served with "unknown" codecs + '394': {'ext': 'mp4', 'height': 144, 'format_note': 'DASH video', 'vcodec': 'av01.0.00M.08'}, + '395': {'ext': 'mp4', 'height': 240, 'format_note': 'DASH video', 'vcodec': 'av01.0.00M.08'}, + '396': {'ext': 'mp4', 'height': 360, 'format_note': 'DASH video', 'vcodec': 'av01.0.01M.08'}, + '397': {'ext': 'mp4', 'height': 480, 'format_note': 'DASH video', 'vcodec': 'av01.0.04M.08'}, + '398': {'ext': 'mp4', 'height': 720, 'format_note': 'DASH video', 'vcodec': 'av01.0.05M.08'}, + '399': {'ext': 'mp4', 'height': 1080, 'format_note': 'DASH video', 'vcodec': 'av01.0.08M.08'}, + '400': {'ext': 'mp4', 'height': 1440, 'format_note': 'DASH video', 'vcodec': 'av01.0.12M.08'}, + '401': {'ext': 'mp4', 'height': 2160, 'format_note': 'DASH video', 'vcodec': 'av01.0.12M.08'}, + } + _SUBTITLE_FORMATS = ('json3', 'srv1', 'srv2', 'srv3', 'ttml', 'vtt') + + _GEO_BYPASS = False + + IE_NAME = 'youtube' + _TESTS = [ + { + 'url': 'https://www.youtube.com/watch?v=BaW_jenozKc&t=1s&end=9', + 'info_dict': { + 'id': 'BaW_jenozKc', + 'ext': 'mp4', + 'title': 'youtube-dl test video "\'/\\ä↭𝕐', + 'channel': 'Philipp Hagemeister', + 'channel_id': 'UCLqxVugv74EIW3VWh2NOa3Q', + 'channel_url': r're:https?://(?:www\.)?youtube\.com/channel/UCLqxVugv74EIW3VWh2NOa3Q', + 'upload_date': 
'20121002', + 'description': 'md5:8fb536f4877b8a7455c2ec23794dbc22', + 'categories': ['Science & Technology'], + 'tags': ['youtube-dl'], + 'duration': 10, + 'view_count': int, + 'like_count': int, + 'availability': 'public', + 'playable_in_embed': True, + 'thumbnail': 'https://i.ytimg.com/vi/BaW_jenozKc/maxresdefault.jpg', + 'live_status': 'not_live', + 'age_limit': 0, + 'start_time': 1, + 'end_time': 9, + 'comment_count': int, + 'channel_follower_count': int, + 'uploader': 'Philipp Hagemeister', + 'uploader_url': 'https://www.youtube.com/@PhilippHagemeister', + 'uploader_id': '@PhilippHagemeister', + 'heatmap': 'count:100', + } + }, + { + 'url': '//www.YouTube.com/watch?v=yZIXLfi8CZQ', + 'note': 'Embed-only video (#1746)', + 'info_dict': { + 'id': 'yZIXLfi8CZQ', + 'ext': 'mp4', + 'upload_date': '20120608', + 'title': 'Principal Sexually Assaults A Teacher - Episode 117 - 8th June 2012', + 'description': 'md5:09b78bd971f1e3e289601dfba15ca4f7', + 'age_limit': 18, + }, + 'skip': 'Private video', + }, + { + 'url': 'https://www.youtube.com/watch?v=BaW_jenozKc&v=yZIXLfi8CZQ', + 'note': 'Use the first video ID in the URL', + 'info_dict': { + 'id': 'BaW_jenozKc', + 'ext': 'mp4', + 'title': 'youtube-dl test video "\'/\\ä↭𝕐', + 'channel': 'Philipp Hagemeister', + 'channel_id': 'UCLqxVugv74EIW3VWh2NOa3Q', + 'channel_url': r're:https?://(?:www\.)?youtube\.com/channel/UCLqxVugv74EIW3VWh2NOa3Q', + 'upload_date': '20121002', + 'description': 'md5:8fb536f4877b8a7455c2ec23794dbc22', + 'categories': ['Science & Technology'], + 'tags': ['youtube-dl'], + 'duration': 10, + 'view_count': int, + 'like_count': int, + 'availability': 'public', + 'playable_in_embed': True, + 'thumbnail': 'https://i.ytimg.com/vi/BaW_jenozKc/maxresdefault.jpg', + 'live_status': 'not_live', + 'age_limit': 0, + 'comment_count': int, + 'channel_follower_count': int, + 'uploader': 'Philipp Hagemeister', + 'uploader_url': 'https://www.youtube.com/@PhilippHagemeister', + 'uploader_id': '@PhilippHagemeister', + 'heatmap': 'count:100', + }, + 'params': { + 'skip_download': True, + }, + }, + { + 'url': 'https://www.youtube.com/watch?v=a9LDPn-MO4I', + 'note': '256k DASH audio (format 141) via DASH manifest', + 'info_dict': { + 'id': 'a9LDPn-MO4I', + 'ext': 'm4a', + 'upload_date': '20121002', + 'description': '', + 'title': 'UHDTV TEST 8K VIDEO.mp4' + }, + 'params': { + 'youtube_include_dash_manifest': True, + 'format': '141', + }, + 'skip': 'format 141 not served anymore', + }, + # DASH manifest with encrypted signature + { + 'url': 'https://www.youtube.com/watch?v=IB3lcPjvWLA', + 'info_dict': { + 'id': 'IB3lcPjvWLA', + 'ext': 'm4a', + 'title': 'Afrojack, Spree Wilson - The Spark (Official Music Video) ft. 
Spree Wilson', + 'description': 'md5:8f5e2b82460520b619ccac1f509d43bf', + 'duration': 244, + 'upload_date': '20131011', + 'abr': 129.495, + 'like_count': int, + 'channel_id': 'UChuZAo1RKL85gev3Eal9_zg', + 'playable_in_embed': True, + 'channel_url': 'https://www.youtube.com/channel/UChuZAo1RKL85gev3Eal9_zg', + 'view_count': int, + 'track': 'The Spark', + 'live_status': 'not_live', + 'thumbnail': 'https://i.ytimg.com/vi_webp/IB3lcPjvWLA/maxresdefault.webp', + 'channel': 'Afrojack', + 'tags': 'count:19', + 'availability': 'public', + 'categories': ['Music'], + 'age_limit': 0, + 'alt_title': 'The Spark', + 'channel_follower_count': int, + 'uploader': 'Afrojack', + 'uploader_url': 'https://www.youtube.com/@Afrojack', + 'uploader_id': '@Afrojack', + }, + 'params': { + 'youtube_include_dash_manifest': True, + 'format': '141/bestaudio[ext=m4a]', + }, + }, + # Age-gate videos. See https://github.com/yt-dlp/yt-dlp/pull/575#issuecomment-888837000 + { + 'note': 'Embed allowed age-gate video', + 'url': 'https://youtube.com/watch?v=HtVdAasjOgU', + 'info_dict': { + 'id': 'HtVdAasjOgU', + 'ext': 'mp4', + 'title': 'The Witcher 3: Wild Hunt - The Sword Of Destiny Trailer', + 'description': r're:(?s).{100,}About the Game\n.*?The Witcher 3: Wild Hunt.{100,}', + 'duration': 142, + 'upload_date': '20140605', + 'age_limit': 18, + 'categories': ['Gaming'], + 'thumbnail': 'https://i.ytimg.com/vi_webp/HtVdAasjOgU/maxresdefault.webp', + 'availability': 'needs_auth', + 'channel_url': 'https://www.youtube.com/channel/UCzybXLxv08IApdjdN0mJhEg', + 'like_count': int, + 'channel': 'The Witcher', + 'live_status': 'not_live', + 'tags': 'count:17', + 'channel_id': 'UCzybXLxv08IApdjdN0mJhEg', + 'playable_in_embed': True, + 'view_count': int, + 'channel_follower_count': int, + 'uploader': 'The Witcher', + 'uploader_url': 'https://www.youtube.com/@thewitcher', + 'uploader_id': '@thewitcher', + 'comment_count': int, + 'channel_is_verified': True, + 'heatmap': 'count:100', + }, + }, + { + 'note': 'Age-gate video with embed allowed in public site', + 'url': 'https://youtube.com/watch?v=HsUATh_Nc2U', + 'info_dict': { + 'id': 'HsUATh_Nc2U', + 'ext': 'mp4', + 'title': 'Godzilla 2 (Official Video)', + 'description': 'md5:bf77e03fcae5529475e500129b05668a', + 'upload_date': '20200408', + 'age_limit': 18, + 'availability': 'needs_auth', + 'channel_id': 'UCYQT13AtrJC0gsM1far_zJg', + 'channel': 'FlyingKitty', + 'channel_url': 'https://www.youtube.com/channel/UCYQT13AtrJC0gsM1far_zJg', + 'view_count': int, + 'categories': ['Entertainment'], + 'live_status': 'not_live', + 'tags': ['Flyingkitty', 'godzilla 2'], + 'thumbnail': 'https://i.ytimg.com/vi/HsUATh_Nc2U/maxresdefault.jpg', + 'like_count': int, + 'duration': 177, + 'playable_in_embed': True, + 'channel_follower_count': int, + 'uploader': 'FlyingKitty', + 'uploader_url': 'https://www.youtube.com/@FlyingKitty900', + 'uploader_id': '@FlyingKitty900', + 'comment_count': int, + 'channel_is_verified': True, + }, + }, + { + 'note': 'Age-gate video embedable only with clientScreen=EMBED', + 'url': 'https://youtube.com/watch?v=Tq92D6wQ1mg', + 'info_dict': { + 'id': 'Tq92D6wQ1mg', + 'title': '[MMD] Adios - EVERGLOW [+Motion DL]', + 'ext': 'mp4', + 'upload_date': '20191228', + 'description': 'md5:17eccca93a786d51bc67646756894066', + 'age_limit': 18, + 'like_count': int, + 'availability': 'needs_auth', + 'channel_id': 'UC1yoRdFoFJaCY-AGfD9W0wQ', + 'view_count': int, + 'thumbnail': 'https://i.ytimg.com/vi_webp/Tq92D6wQ1mg/sddefault.webp', + 'channel': 'Projekt Melody', + 'live_status': 'not_live', 
+ 'tags': ['mmd', 'dance', 'mikumikudance', 'kpop', 'vtuber'], + 'playable_in_embed': True, + 'categories': ['Entertainment'], + 'duration': 106, + 'channel_url': 'https://www.youtube.com/channel/UC1yoRdFoFJaCY-AGfD9W0wQ', + 'comment_count': int, + 'channel_follower_count': int, + 'uploader': 'Projekt Melody', + 'uploader_url': 'https://www.youtube.com/@ProjektMelody', + 'uploader_id': '@ProjektMelody', + }, + }, + { + 'note': 'Non-Agegated non-embeddable video', + 'url': 'https://youtube.com/watch?v=MeJVWBSsPAY', + 'info_dict': { + 'id': 'MeJVWBSsPAY', + 'ext': 'mp4', + 'title': 'OOMPH! - Such Mich Find Mich (Lyrics)', + 'description': 'Fan Video. Music & Lyrics by OOMPH!.', + 'upload_date': '20130730', + 'track': 'Such mich find mich', + 'age_limit': 0, + 'tags': ['oomph', 'such mich find mich', 'lyrics', 'german industrial', 'musica industrial'], + 'like_count': int, + 'playable_in_embed': False, + 'creator': 'OOMPH!', + 'thumbnail': 'https://i.ytimg.com/vi/MeJVWBSsPAY/sddefault.jpg', + 'view_count': int, + 'alt_title': 'Such mich find mich', + 'duration': 210, + 'channel': 'Herr Lurik', + 'channel_id': 'UCdR3RSDPqub28LjZx0v9-aA', + 'categories': ['Music'], + 'availability': 'public', + 'channel_url': 'https://www.youtube.com/channel/UCdR3RSDPqub28LjZx0v9-aA', + 'live_status': 'not_live', + 'artist': 'OOMPH!', + 'channel_follower_count': int, + 'uploader': 'Herr Lurik', + 'uploader_url': 'https://www.youtube.com/@HerrLurik', + 'uploader_id': '@HerrLurik', + }, + }, + { + 'note': 'Non-bypassable age-gated video', + 'url': 'https://youtube.com/watch?v=Cr381pDsSsA', + 'only_matching': True, + }, + # video_info is None (https://github.com/ytdl-org/youtube-dl/issues/4421) + # YouTube Red ad is not captured for creator + { + 'url': '__2ABJjxzNo', + 'info_dict': { + 'id': '__2ABJjxzNo', + 'ext': 'mp4', + 'duration': 266, + 'upload_date': '20100430', + 'creator': 'deadmau5', + 'description': 'md5:6cbcd3a92ce1bc676fc4d6ab4ace2336', + 'title': 'Deadmau5 - Some Chords (HD)', + 'alt_title': 'Some Chords', + 'availability': 'public', + 'tags': 'count:14', + 'channel_id': 'UCYEK6xds6eo-3tr4xRdflmQ', + 'view_count': int, + 'live_status': 'not_live', + 'channel': 'deadmau5', + 'thumbnail': 'https://i.ytimg.com/vi_webp/__2ABJjxzNo/maxresdefault.webp', + 'like_count': int, + 'track': 'Some Chords', + 'artist': 'deadmau5', + 'playable_in_embed': True, + 'age_limit': 0, + 'channel_url': 'https://www.youtube.com/channel/UCYEK6xds6eo-3tr4xRdflmQ', + 'categories': ['Music'], + 'album': 'Some Chords', + 'channel_follower_count': int, + 'uploader': 'deadmau5', + 'uploader_url': 'https://www.youtube.com/@deadmau5', + 'uploader_id': '@deadmau5', + }, + 'expected_warnings': [ + 'DASH manifest missing', + ] + }, + # Olympics (https://github.com/ytdl-org/youtube-dl/issues/4431) + { + 'url': 'lqQg6PlCWgI', + 'info_dict': { + 'id': 'lqQg6PlCWgI', + 'ext': 'mp4', + 'duration': 6085, + 'upload_date': '20150827', + 'description': 'md5:04bbbf3ccceb6795947572ca36f45904', + 'title': 'Hockey - Women - GER-AUS - London 2012 Olympic Games', + 'like_count': int, + 'release_timestamp': 1343767800, + 'playable_in_embed': True, + 'categories': ['Sports'], + 'release_date': '20120731', + 'channel': 'Olympics', + 'tags': ['Hockey', '2012-07-31', '31 July 2012', 'Riverbank Arena', 'Session', 'Olympics', 'Olympic Games', 'London 2012', '2012 Summer Olympics', 'Summer Games'], + 'channel_id': 'UCTl3QQTvqHFjurroKxexy2Q', + 'thumbnail': 'https://i.ytimg.com/vi/lqQg6PlCWgI/maxresdefault.jpg', + 'age_limit': 0, + 'availability': 'public', 
+ 'live_status': 'was_live', + 'view_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCTl3QQTvqHFjurroKxexy2Q', + 'channel_follower_count': int, + 'uploader': 'Olympics', + 'uploader_url': 'https://www.youtube.com/@Olympics', + 'uploader_id': '@Olympics', + 'channel_is_verified': True, + }, + 'params': { + 'skip_download': 'requires avconv', + } + }, + # Non-square pixels + { + 'url': 'https://www.youtube.com/watch?v=_b-2C3KPAM0', + 'info_dict': { + 'id': '_b-2C3KPAM0', + 'ext': 'mp4', + 'stretched_ratio': 16 / 9., + 'duration': 85, + 'upload_date': '20110310', + 'description': 'made by Wacom from Korea | 字幕&加油添醋 by TY\'s Allen | 感謝heylisa00cavey1001同學熱情提供梗及翻譯', + 'title': '[A-made] 變態妍字幕版 太妍 我就是這樣的人', + 'playable_in_embed': True, + 'channel': '孫ᄋᄅ', + 'age_limit': 0, + 'tags': 'count:11', + 'channel_url': 'https://www.youtube.com/channel/UCS-xxCmRaA6BFdmgDPA_BIw', + 'channel_id': 'UCS-xxCmRaA6BFdmgDPA_BIw', + 'thumbnail': 'https://i.ytimg.com/vi/_b-2C3KPAM0/maxresdefault.jpg', + 'view_count': int, + 'categories': ['People & Blogs'], + 'like_count': int, + 'live_status': 'not_live', + 'availability': 'unlisted', + 'comment_count': int, + 'channel_follower_count': int, + 'uploader': '孫ᄋᄅ', + 'uploader_url': 'https://www.youtube.com/@AllenMeow', + 'uploader_id': '@AllenMeow', + }, + }, + # url_encoded_fmt_stream_map is empty string + { + 'url': 'qEJwOuvDf7I', + 'info_dict': { + 'id': 'qEJwOuvDf7I', + 'ext': 'webm', + 'title': 'Обсуждение судебной практики по выборам 14 сентября 2014 года в Санкт-Петербурге', + 'description': '', + 'upload_date': '20150404', + }, + 'params': { + 'skip_download': 'requires avconv', + }, + 'skip': 'This live event has ended.', + }, + # Extraction from multiple DASH manifests (https://github.com/ytdl-org/youtube-dl/pull/6097) + { + 'url': 'https://www.youtube.com/watch?v=FIl7x6_3R5Y', + 'info_dict': { + 'id': 'FIl7x6_3R5Y', + 'ext': 'webm', + 'title': 'md5:7b81415841e02ecd4313668cde88737a', + 'description': 'md5:116377fd2963b81ec4ce64b542173306', + 'duration': 220, + 'upload_date': '20150625', + 'formats': 'mincount:31', + }, + 'skip': 'not actual anymore', + }, + # DASH manifest with segment_list + { + 'url': 'https://www.youtube.com/embed/CsmdDsKjzN8', + 'md5': '8ce563a1d667b599d21064e982ab9e31', + 'info_dict': { + 'id': 'CsmdDsKjzN8', + 'ext': 'mp4', + 'upload_date': '20150501', # According to '<meta itemprop="datePublished"', but in other places it's 20150510 + 'description': 'Retransmisión en directo de la XVIII media maratón de Zaragoza.', + 'title': 'Retransmisión XVIII Media maratón Zaragoza 2015', + }, + 'params': { + 'youtube_include_dash_manifest': True, + 'format': '135', # bestvideo + }, + 'skip': 'This live event has ended.', + }, + { + # Multifeed videos (multiple cameras), URL can be of any Camera + # TODO: fix multifeed titles + 'url': 'https://www.youtube.com/watch?v=zaPI8MvL8pg', + 'info_dict': { + 'id': 'zaPI8MvL8pg', + 'title': 'Terraria 1.2 Live Stream | Let\'s Play - Part 04', + 'description': 'md5:563ccbc698b39298481ca3c571169519', + }, + 'playlist': [{ + 'info_dict': { + 'id': 'j5yGuxZ8lLU', + 'ext': 'mp4', + 'title': 'Terraria 1.2 Live Stream | Let\'s Play - Part 04 (Chris)', + 'description': 'md5:563ccbc698b39298481ca3c571169519', + 'duration': 10120, + 'channel_follower_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCN2XePorRokPB9TEgRZpddg', + 'availability': 'public', + 'playable_in_embed': True, + 'upload_date': '20131105', + 'categories': ['Gaming'],
+ 'live_status': 'was_live', + 'tags': 'count:24', + 'release_timestamp': 1383701910, + 'thumbnail': 'https://i.ytimg.com/vi/j5yGuxZ8lLU/maxresdefault.jpg', + 'comment_count': int, + 'age_limit': 0, + 'like_count': int, + 'channel_id': 'UCN2XePorRokPB9TEgRZpddg', + 'channel': 'WiiLikeToPlay', + 'view_count': int, + 'release_date': '20131106', + 'uploader': 'WiiLikeToPlay', + 'uploader_id': '@WLTP', + 'uploader_url': 'https://www.youtube.com/@WLTP', + }, + }, { + 'info_dict': { + 'id': 'zaPI8MvL8pg', + 'ext': 'mp4', + 'title': 'Terraria 1.2 Live Stream | Let\'s Play - Part 04 (Tyson)', + 'availability': 'public', + 'channel_url': 'https://www.youtube.com/channel/UCN2XePorRokPB9TEgRZpddg', + 'channel': 'WiiLikeToPlay', + 'channel_follower_count': int, + 'description': 'md5:563ccbc698b39298481ca3c571169519', + 'duration': 10108, + 'age_limit': 0, + 'like_count': int, + 'tags': 'count:24', + 'channel_id': 'UCN2XePorRokPB9TEgRZpddg', + 'release_timestamp': 1383701915, + 'comment_count': int, + 'upload_date': '20131105', + 'thumbnail': 'https://i.ytimg.com/vi/zaPI8MvL8pg/maxresdefault.jpg', + 'release_date': '20131106', + 'playable_in_embed': True, + 'live_status': 'was_live', + 'categories': ['Gaming'], + 'view_count': int, + 'uploader': 'WiiLikeToPlay', + 'uploader_id': '@WLTP', + 'uploader_url': 'https://www.youtube.com/@WLTP', + }, + }, { + 'info_dict': { + 'id': 'R7r3vfO7Hao', + 'ext': 'mp4', + 'title': 'Terraria 1.2 Live Stream | Let\'s Play - Part 04 (Spencer)', + 'thumbnail': 'https://i.ytimg.com/vi/R7r3vfO7Hao/maxresdefault.jpg', + 'channel_id': 'UCN2XePorRokPB9TEgRZpddg', + 'like_count': int, + 'availability': 'public', + 'playable_in_embed': True, + 'upload_date': '20131105', + 'description': 'md5:563ccbc698b39298481ca3c571169519', + 'channel_follower_count': int, + 'tags': 'count:24', + 'release_date': '20131106', + 'comment_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCN2XePorRokPB9TEgRZpddg', + 'channel': 'WiiLikeToPlay', + 'categories': ['Gaming'], + 'release_timestamp': 1383701914, + 'live_status': 'was_live', + 'age_limit': 0, + 'duration': 10128, + 'view_count': int, + 'uploader': 'WiiLikeToPlay', + 'uploader_id': '@WLTP', + 'uploader_url': 'https://www.youtube.com/@WLTP', + }, + }], + 'params': {'skip_download': True}, + }, + { + # Multifeed video with comma in title (see https://github.com/ytdl-org/youtube-dl/issues/8536) + 'url': 'https://www.youtube.com/watch?v=gVfLd0zydlo', + 'info_dict': { + 'id': 'gVfLd0zydlo', + 'title': 'DevConf.cz 2016 Day 2 Workshops 1 14:00 - 15:30', + }, + 'playlist_count': 2, + 'skip': 'Not multifeed anymore', + }, + { + 'url': 'https://vid.plus/FlRa-iH7PGw', + 'only_matching': True, + }, + { + 'url': 'https://zwearz.com/watch/9lWxNJF-ufM/electra-woman-dyna-girl-official-trailer-grace-helbig.html', + 'only_matching': True, + }, + { + # Title with JS-like syntax "};" (see https://github.com/ytdl-org/youtube-dl/issues/7468) + # Also tests cut-off URL expansion in video description (see + # https://github.com/ytdl-org/youtube-dl/issues/1892, + # https://github.com/ytdl-org/youtube-dl/issues/8164) + 'url': 'https://www.youtube.com/watch?v=lsguqyKfVQg', + 'info_dict': { + 'id': 'lsguqyKfVQg', + 'ext': 'mp4', + 'title': '{dark walk}; Loki/AC/Dishonored; collab w/Elflover21', + 'alt_title': 'Dark Walk', + 'description': 'md5:8085699c11dc3f597ce0410b0dcbb34a', + 'duration': 133, + 'upload_date': '20151119', + 'creator': 'Todd Haberman;\nDaniel Law Heath and Aaron Kaplan', + 'track': 'Dark Walk', + 'artist': 'Todd Haberman;\nDaniel Law 
Heath and Aaron Kaplan', + 'album': 'Position Music - Production Music Vol. 143 - Dark Walk', + 'thumbnail': 'https://i.ytimg.com/vi_webp/lsguqyKfVQg/maxresdefault.webp', + 'categories': ['Film & Animation'], + 'view_count': int, + 'live_status': 'not_live', + 'channel_url': 'https://www.youtube.com/channel/UCTSRgz5jylBvFt_S7wnsqLQ', + 'channel_id': 'UCTSRgz5jylBvFt_S7wnsqLQ', + 'tags': 'count:13', + 'availability': 'public', + 'channel': 'IronSoulElf', + 'playable_in_embed': True, + 'like_count': int, + 'age_limit': 0, + 'channel_follower_count': int + }, + 'params': { + 'skip_download': True, + }, + }, + { + # Tags with '};' (see https://github.com/ytdl-org/youtube-dl/issues/7468) + 'url': 'https://www.youtube.com/watch?v=Ms7iBXnlUO8', + 'only_matching': True, + }, + { + # Video with yt:stretch=17:0 + 'url': 'https://www.youtube.com/watch?v=Q39EVAstoRM', + 'info_dict': { + 'id': 'Q39EVAstoRM', + 'ext': 'mp4', + 'title': 'Clash Of Clans#14 Dicas De Ataque Para CV 4', + 'description': 'md5:ee18a25c350637c8faff806845bddee9', + 'upload_date': '20151107', + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'This video does not exist.', + }, + { + # Video with incomplete 'yt:stretch=16:' + 'url': 'https://www.youtube.com/watch?v=FRhJzUSJbGI', + 'only_matching': True, + }, + { + # Video licensed under Creative Commons + 'url': 'https://www.youtube.com/watch?v=M4gD1WSo5mA', + 'info_dict': { + 'id': 'M4gD1WSo5mA', + 'ext': 'mp4', + 'title': 'md5:e41008789470fc2533a3252216f1c1d1', + 'description': 'md5:a677553cf0840649b731a3024aeff4cc', + 'duration': 721, + 'upload_date': '20150128', + 'license': 'Creative Commons Attribution license (reuse allowed)', + 'channel_id': 'UCuLGmD72gJDBwmLw06X58SA', + 'channel_url': 'https://www.youtube.com/channel/UCuLGmD72gJDBwmLw06X58SA', + 'like_count': int, + 'age_limit': 0, + 'tags': ['Copyright (Legal Subject)', 'Law (Industry)', 'William W. 
Fisher (Author)'], + 'channel': 'The Berkman Klein Center for Internet & Society', + 'availability': 'public', + 'view_count': int, + 'categories': ['Education'], + 'thumbnail': 'https://i.ytimg.com/vi_webp/M4gD1WSo5mA/maxresdefault.webp', + 'live_status': 'not_live', + 'playable_in_embed': True, + 'channel_follower_count': int, + 'chapters': list, + 'uploader': 'The Berkman Klein Center for Internet & Society', + 'uploader_id': '@BKCHarvard', + 'uploader_url': 'https://www.youtube.com/@BKCHarvard', + }, + 'params': { + 'skip_download': True, + }, + }, + { + 'url': 'https://www.youtube.com/watch?v=eQcmzGIKrzg', + 'info_dict': { + 'id': 'eQcmzGIKrzg', + 'ext': 'mp4', + 'title': 'Democratic Socialism and Foreign Policy | Bernie Sanders', + 'description': 'md5:13a2503d7b5904ef4b223aa101628f39', + 'duration': 4060, + 'upload_date': '20151120', + 'license': 'Creative Commons Attribution license (reuse allowed)', + 'playable_in_embed': True, + 'tags': 'count:12', + 'like_count': int, + 'channel_id': 'UCH1dpzjCEiGAt8CXkryhkZg', + 'age_limit': 0, + 'availability': 'public', + 'categories': ['News & Politics'], + 'channel': 'Bernie Sanders', + 'thumbnail': 'https://i.ytimg.com/vi_webp/eQcmzGIKrzg/maxresdefault.webp', + 'view_count': int, + 'live_status': 'not_live', + 'channel_url': 'https://www.youtube.com/channel/UCH1dpzjCEiGAt8CXkryhkZg', + 'comment_count': int, + 'channel_follower_count': int, + 'chapters': list, + 'uploader': 'Bernie Sanders', + 'uploader_url': 'https://www.youtube.com/@BernieSanders', + 'uploader_id': '@BernieSanders', + 'channel_is_verified': True, + 'heatmap': 'count:100', + }, + 'params': { + 'skip_download': True, + }, + }, + { + 'url': 'https://www.youtube.com/watch?feature=player_embedded&amp;v=V36LpHqtcDY', + 'only_matching': True, + }, + { + # YouTube Red paid video (https://github.com/ytdl-org/youtube-dl/issues/10059) + 'url': 'https://www.youtube.com/watch?v=i1Ko8UG-Tdo', + 'only_matching': True, + }, + { + # Rental video preview + 'url': 'https://www.youtube.com/watch?v=yYr8q0y5Jfg', + 'info_dict': { + 'id': 'uGpuVWrhIzE', + 'ext': 'mp4', + 'title': 'Piku - Trailer', + 'description': 'md5:c36bd60c3fd6f1954086c083c72092eb', + 'upload_date': '20150811', + 'license': 'Standard YouTube License', + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'This video is not available.', + }, + { + # YouTube Red video with episode data + 'url': 'https://www.youtube.com/watch?v=iqKdEhx-dD4', + 'info_dict': { + 'id': 'iqKdEhx-dD4', + 'ext': 'mp4', + 'title': 'Isolation - Mind Field (Ep 1)', + 'description': 'md5:f540112edec5d09fc8cc752d3d4ba3cd', + 'duration': 2085, + 'upload_date': '20170118', + 'series': 'Mind Field', + 'season_number': 1, + 'episode_number': 1, + 'thumbnail': 'https://i.ytimg.com/vi_webp/iqKdEhx-dD4/maxresdefault.webp', + 'tags': 'count:12', + 'view_count': int, + 'availability': 'public', + 'age_limit': 0, + 'channel': 'Vsauce', + 'episode': 'Episode 1', + 'categories': ['Entertainment'], + 'season': 'Season 1', + 'channel_id': 'UC6nSFpj9HTCZ5t-N3Rm3-HA', + 'channel_url': 'https://www.youtube.com/channel/UC6nSFpj9HTCZ5t-N3Rm3-HA', + 'like_count': int, + 'playable_in_embed': True, + 'live_status': 'not_live', + 'channel_follower_count': int, + 'uploader': 'Vsauce', + 'uploader_url': 'https://www.youtube.com/@Vsauce', + 'uploader_id': '@Vsauce', + 'comment_count': int, + 'channel_is_verified': True, + }, + 'params': { + 'skip_download': True, + }, + 'expected_warnings': [ + 'Skipping DASH manifest', + ], + }, + { + # The following content has been 
identified by the YouTube community + # as inappropriate or offensive to some audiences. + 'url': 'https://www.youtube.com/watch?v=6SJNVb0GnPI', + 'info_dict': { + 'id': '6SJNVb0GnPI', + 'ext': 'mp4', + 'title': 'Race Differences in Intelligence', + 'description': 'md5:5d161533167390427a1f8ee89a1fc6f1', + 'duration': 965, + 'upload_date': '20140124', + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'This video has been removed for violating YouTube\'s policy on hate speech.', + }, + { + # itag 212 + 'url': '1t24XAntNCY', + 'only_matching': True, + }, + { + # geo restricted to JP + 'url': 'sJL6WA-aGkQ', + 'only_matching': True, + }, + { + 'url': 'https://invidio.us/watch?v=BaW_jenozKc', + 'only_matching': True, + }, + { + 'url': 'https://redirect.invidious.io/watch?v=BaW_jenozKc', + 'only_matching': True, + }, + { + # from https://nitter.pussthecat.org/YouTube/status/1360363141947944964#m + 'url': 'https://redirect.invidious.io/Yh0AhrY9GjA', + 'only_matching': True, + }, + { + # DRM protected + 'url': 'https://www.youtube.com/watch?v=s7_qI6_mIXc', + 'only_matching': True, + }, + { + # Video with unsupported adaptive stream type formats + 'url': 'https://www.youtube.com/watch?v=Z4Vy8R84T1U', + 'info_dict': { + 'id': 'Z4Vy8R84T1U', + 'ext': 'mp4', + 'title': 'saman SMAN 53 Jakarta(Sancety) opening COFFEE4th at SMAN 53 Jakarta', + 'description': 'md5:d41d8cd98f00b204e9800998ecf8427e', + 'duration': 433, + 'upload_date': '20130923', + 'formats': 'maxcount:10', + }, + 'params': { + 'skip_download': True, + 'youtube_include_dash_manifest': False, + }, + 'skip': 'not actual anymore', + }, + { + # Youtube Music Auto-generated description + # TODO: fix metadata extraction + 'url': 'https://music.youtube.com/watch?v=MgNrAu2pzNs', + 'info_dict': { + 'id': 'MgNrAu2pzNs', + 'ext': 'mp4', + 'title': 'Voyeur Girl', + 'description': 'md5:7ae382a65843d6df2685993e90a8628f', + 'upload_date': '20190312', + 'artist': 'Stephen', + 'track': 'Voyeur Girl', + 'album': 'it\'s too much love to know my dear', + 'release_date': '20190313', + 'release_year': 2019, + 'alt_title': 'Voyeur Girl', + 'view_count': int, + 'playable_in_embed': True, + 'like_count': int, + 'categories': ['Music'], + 'channel_url': 'https://www.youtube.com/channel/UC-pWHpBjdGG69N9mM2auIAA', + 'channel': 'Stephen', # TODO: should be "Stephen - Topic" + 'uploader': 'Stephen', + 'availability': 'public', + 'creator': 'Stephen', + 'duration': 169, + 'thumbnail': 'https://i.ytimg.com/vi_webp/MgNrAu2pzNs/maxresdefault.webp', + 'age_limit': 0, + 'channel_id': 'UC-pWHpBjdGG69N9mM2auIAA', + 'tags': 'count:11', + 'live_status': 'not_live', + 'channel_follower_count': int + }, + 'params': { + 'skip_download': True, + }, + }, + { + 'url': 'https://www.youtubekids.com/watch?v=3b8nCWDgZ6Q', + 'only_matching': True, + }, + { + # invalid -> valid video id redirection + 'url': 'DJztXj2GPfl', + 'info_dict': { + 'id': 'DJztXj2GPfk', + 'ext': 'mp4', + 'title': 'Panjabi MC - Mundian To Bach Ke (The Dictator Soundtrack)', + 'description': 'md5:bf577a41da97918e94fa9798d9228825', + 'upload_date': '20090125', + 'artist': 'Panjabi MC', + 'track': 'Beware of the Boys (Mundian to Bach Ke) - Motivo Hi-Lectro Remix', + 'album': 'Beware of the Boys (Mundian To Bach Ke)', + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'Video unavailable', + }, + { + # empty description results in an empty string + 'url': 'https://www.youtube.com/watch?v=x41yOUIvK2k', + 'info_dict': { + 'id': 'x41yOUIvK2k', + 'ext': 'mp4', + 'title': 'IMG 3456', + 'description': '', + 
'upload_date': '20170613', + 'view_count': int, + 'thumbnail': 'https://i.ytimg.com/vi_webp/x41yOUIvK2k/maxresdefault.webp', + 'like_count': int, + 'channel_id': 'UCo03ZQPBW5U4UC3regpt1nw', + 'tags': [], + 'channel_url': 'https://www.youtube.com/channel/UCo03ZQPBW5U4UC3regpt1nw', + 'availability': 'public', + 'age_limit': 0, + 'categories': ['Pets & Animals'], + 'duration': 7, + 'playable_in_embed': True, + 'live_status': 'not_live', + 'channel': 'l\'Or Vert asbl', + 'channel_follower_count': int, + 'uploader': 'l\'Or Vert asbl', + 'uploader_url': 'https://www.youtube.com/@ElevageOrVert', + 'uploader_id': '@ElevageOrVert', + }, + 'params': { + 'skip_download': True, + }, + }, + { + # with '};' inside yt initial data (see [1]) + # see [2] for an example with '};' inside ytInitialPlayerResponse + # 1. https://github.com/ytdl-org/youtube-dl/issues/27093 + # 2. https://github.com/ytdl-org/youtube-dl/issues/27216 + 'url': 'https://www.youtube.com/watch?v=CHqg6qOn4no', + 'info_dict': { + 'id': 'CHqg6qOn4no', + 'ext': 'mp4', + 'title': 'Part 77 Sort a list of simple types in c#', + 'description': 'md5:b8746fa52e10cdbf47997903f13b20dc', + 'upload_date': '20130831', + 'channel_id': 'UCCTVrRB5KpIiK6V2GGVsR1Q', + 'like_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCCTVrRB5KpIiK6V2GGVsR1Q', + 'live_status': 'not_live', + 'categories': ['Education'], + 'availability': 'public', + 'thumbnail': 'https://i.ytimg.com/vi/CHqg6qOn4no/sddefault.jpg', + 'tags': 'count:12', + 'playable_in_embed': True, + 'age_limit': 0, + 'view_count': int, + 'duration': 522, + 'channel': 'kudvenkat', + 'comment_count': int, + 'channel_follower_count': int, + 'chapters': list, + 'uploader': 'kudvenkat', + 'uploader_url': 'https://www.youtube.com/@Csharp-video-tutorialsBlogspot', + 'uploader_id': '@Csharp-video-tutorialsBlogspot', + 'channel_is_verified': True, + 'heatmap': 'count:100', + }, + 'params': { + 'skip_download': True, + }, + }, + { + # another example of '};' in ytInitialData + 'url': 'https://www.youtube.com/watch?v=gVfgbahppCY', + 'only_matching': True, + }, + { + 'url': 'https://www.youtube.com/watch_popup?v=63RmMXCd_bQ', + 'only_matching': True, + }, + { + # https://github.com/ytdl-org/youtube-dl/pull/28094 + 'url': 'OtqTfy26tG0', + 'info_dict': { + 'id': 'OtqTfy26tG0', + 'ext': 'mp4', + 'title': 'Burn Out', + 'description': 'md5:8d07b84dcbcbfb34bc12a56d968b6131', + 'upload_date': '20141120', + 'artist': 'The Cinematic Orchestra', + 'track': 'Burn Out', + 'album': 'Every Day', + 'like_count': int, + 'live_status': 'not_live', + 'alt_title': 'Burn Out', + 'duration': 614, + 'age_limit': 0, + 'view_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCIzsJBIyo8hhpFm1NK0uLgw', + 'creator': 'The Cinematic Orchestra', + 'channel': 'The Cinematic Orchestra', + 'tags': ['The Cinematic Orchestra', 'Every Day', 'Burn Out'], + 'channel_id': 'UCIzsJBIyo8hhpFm1NK0uLgw', + 'availability': 'public', + 'thumbnail': 'https://i.ytimg.com/vi/OtqTfy26tG0/maxresdefault.jpg', + 'categories': ['Music'], + 'playable_in_embed': True, + 'channel_follower_count': int, + 'uploader': 'The Cinematic Orchestra', + 'comment_count': int, + }, + 'params': { + 'skip_download': True, + }, + }, + { + # controversial video, only works with bpctr when authenticated with cookies + 'url': 'https://www.youtube.com/watch?v=nGC3D_FkCmg', + 'only_matching': True, + }, + { + # controversial video, requires bpctr/contentCheckOk + 'url': 'https://www.youtube.com/watch?v=SZJvDhaSDnc', + 'info_dict': { + 'id': 'SZJvDhaSDnc', + 
'ext': 'mp4', + 'title': 'San Diego teen commits suicide after bullying over embarrassing video', + 'channel_id': 'UC-SJ6nODDmufqBzPBwCvYvQ', + 'upload_date': '20140716', + 'description': 'md5:acde3a73d3f133fc97e837a9f76b53b7', + 'duration': 170, + 'categories': ['News & Politics'], + 'view_count': int, + 'channel': 'CBS Mornings', + 'tags': ['suicide', 'bullying', 'video', 'cbs', 'news'], + 'thumbnail': 'https://i.ytimg.com/vi/SZJvDhaSDnc/hqdefault.jpg', + 'age_limit': 18, + 'availability': 'needs_auth', + 'channel_url': 'https://www.youtube.com/channel/UC-SJ6nODDmufqBzPBwCvYvQ', + 'like_count': int, + 'live_status': 'not_live', + 'playable_in_embed': True, + 'channel_follower_count': int, + 'uploader': 'CBS Mornings', + 'uploader_url': 'https://www.youtube.com/@CBSMornings', + 'uploader_id': '@CBSMornings', + 'comment_count': int, + 'channel_is_verified': True, + } + }, + { + # restricted location, https://github.com/ytdl-org/youtube-dl/issues/28685 + 'url': 'cBvYw8_A0vQ', + 'info_dict': { + 'id': 'cBvYw8_A0vQ', + 'ext': 'mp4', + 'title': '4K Ueno Okachimachi Street Scenes 上野御徒町歩き', + 'description': 'md5:ea770e474b7cd6722b4c95b833c03630', + 'upload_date': '20201120', + 'duration': 1456, + 'categories': ['Travel & Events'], + 'channel_id': 'UC3o_t8PzBmXf5S9b7GLx1Mw', + 'view_count': int, + 'channel': 'Walk around Japan', + 'tags': ['Ueno Tokyo', 'Okachimachi Tokyo', 'Ameyoko Street', 'Tokyo attraction', 'Travel in Tokyo'], + 'thumbnail': 'https://i.ytimg.com/vi_webp/cBvYw8_A0vQ/hqdefault.webp', + 'age_limit': 0, + 'availability': 'public', + 'channel_url': 'https://www.youtube.com/channel/UC3o_t8PzBmXf5S9b7GLx1Mw', + 'live_status': 'not_live', + 'playable_in_embed': True, + 'channel_follower_count': int, + 'uploader': 'Walk around Japan', + 'uploader_url': 'https://www.youtube.com/@walkaroundjapan7124', + 'uploader_id': '@walkaroundjapan7124', + }, + 'params': { + 'skip_download': True, + }, + }, { + # Has multiple audio streams + 'url': 'WaOKSUlf4TM', + 'only_matching': True + }, { + # Requires Premium: has format 141 when requested using YTM url + 'url': 'https://music.youtube.com/watch?v=XclachpHxis', + 'only_matching': True + }, { + # multiple subtitles with same lang_code + 'url': 'https://www.youtube.com/watch?v=wsQiKKfKxug', + 'only_matching': True, + }, { + # Force use android client fallback + 'url': 'https://www.youtube.com/watch?v=YOelRv7fMxY', + 'info_dict': { + 'id': 'YOelRv7fMxY', + 'title': 'DIGGING A SECRET TUNNEL Part 1', + 'ext': '3gp', + 'upload_date': '20210624', + 'channel_id': 'UCp68_FLety0O-n9QU6phsgw', + 'channel_url': r're:https?://(?:www\.)?youtube\.com/channel/UCp68_FLety0O-n9QU6phsgw', + 'description': 'md5:5d5991195d599b56cd0c4148907eec50', + 'duration': 596, + 'categories': ['Entertainment'], + 'view_count': int, + 'channel': 'colinfurze', + 'tags': ['Colin', 'furze', 'Terry', 'tunnel', 'underground', 'bunker'], + 'thumbnail': 'https://i.ytimg.com/vi/YOelRv7fMxY/maxresdefault.jpg', + 'age_limit': 0, + 'availability': 'public', + 'like_count': int, + 'live_status': 'not_live', + 'playable_in_embed': True, + 'channel_follower_count': int, + 'chapters': list, + 'uploader': 'colinfurze', + 'uploader_url': 'https://www.youtube.com/@colinfurze', + 'uploader_id': '@colinfurze', + 'comment_count': int, + 'channel_is_verified': True, + 'heatmap': 'count:100', + }, + 'params': { + 'format': '17', # 3gp format available on android + 'extractor_args': {'youtube': {'player_client': ['android']}}, + }, + }, + { + # Skip download of additional client configs (remix client
config in this case) + 'url': 'https://music.youtube.com/watch?v=MgNrAu2pzNs', + 'only_matching': True, + 'params': { + 'extractor_args': {'youtube': {'player_skip': ['configs']}}, + }, + }, { + # shorts + 'url': 'https://www.youtube.com/shorts/BGQWPY4IigY', + 'only_matching': True, + }, { + 'note': 'Storyboards', + 'url': 'https://www.youtube.com/watch?v=5KLPxDtMqe8', + 'info_dict': { + 'id': '5KLPxDtMqe8', + 'ext': 'mhtml', + 'format_id': 'sb0', + 'title': 'Your Brain is Plastic', + 'description': 'md5:89cd86034bdb5466cd87c6ba206cd2bc', + 'upload_date': '20140324', + 'like_count': int, + 'channel_id': 'UCZYTClx2T1of7BRZ86-8fow', + 'channel_url': 'https://www.youtube.com/channel/UCZYTClx2T1of7BRZ86-8fow', + 'view_count': int, + 'thumbnail': 'https://i.ytimg.com/vi/5KLPxDtMqe8/maxresdefault.jpg', + 'playable_in_embed': True, + 'tags': 'count:12', + 'availability': 'public', + 'channel': 'SciShow', + 'live_status': 'not_live', + 'duration': 248, + 'categories': ['Education'], + 'age_limit': 0, + 'channel_follower_count': int, + 'chapters': list, + 'uploader': 'SciShow', + 'uploader_url': 'https://www.youtube.com/@SciShow', + 'uploader_id': '@SciShow', + 'comment_count': int, + 'channel_is_verified': True, + 'heatmap': 'count:100', + }, 'params': {'format': 'mhtml', 'skip_download': True} + }, { + # Ensure video upload_date is in UTC timezone (video was uploaded 1641170939) + 'url': 'https://www.youtube.com/watch?v=2NUZ8W2llS4', + 'info_dict': { + 'id': '2NUZ8W2llS4', + 'ext': 'mp4', + 'title': 'The NP that test your phone performance 🙂', + 'description': 'md5:144494b24d4f9dfacb97c1bbef5de84d', + 'channel_id': 'UCRqNBSOHgilHfAczlUmlWHA', + 'channel_url': 'https://www.youtube.com/channel/UCRqNBSOHgilHfAczlUmlWHA', + 'duration': 21, + 'view_count': int, + 'age_limit': 0, + 'categories': ['Gaming'], + 'tags': 'count:23', + 'playable_in_embed': True, + 'live_status': 'not_live', + 'upload_date': '20220103', + 'like_count': int, + 'availability': 'public', + 'channel': 'Leon Nguyen', + 'thumbnail': 'https://i.ytimg.com/vi_webp/2NUZ8W2llS4/maxresdefault.webp', + 'comment_count': int, + 'channel_follower_count': int, + 'uploader': 'Leon Nguyen', + 'uploader_url': 'https://www.youtube.com/@LeonNguyen', + 'uploader_id': '@LeonNguyen', + 'heatmap': 'count:100', + } + }, { + # Same video as above, but with --compat-opt no-youtube-prefer-utc-upload-date + 'url': 'https://www.youtube.com/watch?v=2NUZ8W2llS4', + 'info_dict': { + 'id': '2NUZ8W2llS4', + 'ext': 'mp4', + 'title': 'The NP that test your phone performance 🙂', + 'description': 'md5:144494b24d4f9dfacb97c1bbef5de84d', + 'channel_id': 'UCRqNBSOHgilHfAczlUmlWHA', + 'channel_url': 'https://www.youtube.com/channel/UCRqNBSOHgilHfAczlUmlWHA', + 'duration': 21, + 'view_count': int, + 'age_limit': 0, + 'categories': ['Gaming'], + 'tags': 'count:23', + 'playable_in_embed': True, + 'live_status': 'not_live', + 'upload_date': '20220102', + 'like_count': int, + 'availability': 'public', + 'channel': 'Leon Nguyen', + 'thumbnail': 'https://i.ytimg.com/vi_webp/2NUZ8W2llS4/maxresdefault.webp', + 'comment_count': int, + 'channel_follower_count': int, + 'uploader': 'Leon Nguyen', + 'uploader_url': 'https://www.youtube.com/@LeonNguyen', + 'uploader_id': '@LeonNguyen', + 'heatmap': 'count:100', + }, + 'params': {'compat_opts': ['no-youtube-prefer-utc-upload-date']} + }, { + # date text is premiered video, ensure upload date in UTC (published 1641172509) + 'url': 'https://www.youtube.com/watch?v=mzZzzBU6lrM', + 'info_dict': { + 'id': 'mzZzzBU6lrM', + 'ext': 'mp4', + 
'title': 'I Met GeorgeNotFound In Real Life...', + 'description': 'md5:978296ec9783a031738b684d4ebf302d', + 'channel_id': 'UC_8NknAFiyhOUaZqHR3lq3Q', + 'channel_url': 'https://www.youtube.com/channel/UC_8NknAFiyhOUaZqHR3lq3Q', + 'duration': 955, + 'view_count': int, + 'age_limit': 0, + 'categories': ['Entertainment'], + 'tags': 'count:26', + 'playable_in_embed': True, + 'live_status': 'not_live', + 'release_timestamp': 1641172509, + 'release_date': '20220103', + 'upload_date': '20220103', + 'like_count': int, + 'availability': 'public', + 'channel': 'Quackity', + 'thumbnail': 'https://i.ytimg.com/vi/mzZzzBU6lrM/maxresdefault.jpg', + 'channel_follower_count': int, + 'uploader': 'Quackity', + 'uploader_id': '@Quackity', + 'uploader_url': 'https://www.youtube.com/@Quackity', + 'comment_count': int, + 'channel_is_verified': True, + 'heatmap': 'count:100', + } + }, + { # continuous livestream. Microformat upload date should be preferred. + # Upload date was 2021-06-19 (not UTC), while stream start is 2021-11-27 + 'url': 'https://www.youtube.com/watch?v=kgx4WGK0oNU', + 'info_dict': { + 'id': 'kgx4WGK0oNU', + 'title': r're:jazz\/lofi hip hop radio🌱chill beats to relax\/study to \[LIVE 24\/7\] \d{4}-\d{2}-\d{2} \d{2}:\d{2}', + 'ext': 'mp4', + 'channel_id': 'UC84whx2xxsiA1gXHXXqKGOA', + 'availability': 'public', + 'age_limit': 0, + 'release_timestamp': 1637975704, + 'upload_date': '20210619', + 'channel_url': 'https://www.youtube.com/channel/UC84whx2xxsiA1gXHXXqKGOA', + 'live_status': 'is_live', + 'thumbnail': 'https://i.ytimg.com/vi/kgx4WGK0oNU/maxresdefault.jpg', + 'channel': 'Abao in Tokyo', + 'channel_follower_count': int, + 'release_date': '20211127', + 'tags': 'count:39', + 'categories': ['People & Blogs'], + 'like_count': int, + 'view_count': int, + 'playable_in_embed': True, + 'description': 'md5:2ef1d002cad520f65825346e2084e49d', + 'concurrent_view_count': int, + 'uploader': 'Abao in Tokyo', + 'uploader_url': 'https://www.youtube.com/@abaointokyo', + 'uploader_id': '@abaointokyo', + }, + 'params': {'skip_download': True} + }, { + 'url': 'https://www.youtube.com/watch?v=tjjjtzRLHvA', + 'info_dict': { + 'id': 'tjjjtzRLHvA', + 'ext': 'mp4', + 'title': 'ハッシュタグ無し };if window.ytcsi', + 'upload_date': '20220323', + 'like_count': int, + 'availability': 'unlisted', + 'channel': 'Lesmiscore', + 'thumbnail': r're:^https?://.*\.jpg', + 'age_limit': 0, + 'categories': ['Music'], + 'view_count': int, + 'description': '', + 'channel_url': 'https://www.youtube.com/channel/UCdqltm_7iv1Vs6kp6Syke5A', + 'channel_id': 'UCdqltm_7iv1Vs6kp6Syke5A', + 'live_status': 'not_live', + 'playable_in_embed': True, + 'channel_follower_count': int, + 'duration': 6, + 'tags': [], + 'uploader_id': '@lesmiscore', + 'uploader': 'Lesmiscore', + 'uploader_url': 'https://www.youtube.com/@lesmiscore', + } + }, { + # Prefer primary title+description language metadata by default + # Do not prefer translated description if primary is empty + 'url': 'https://www.youtube.com/watch?v=el3E4MbxRqQ', + 'info_dict': { + 'id': 'el3E4MbxRqQ', + 'ext': 'mp4', + 'title': 'dlp test video 2 - primary sv no desc', + 'description': '', + 'channel': 'cole-dlp-test-acc', + 'tags': [], + 'view_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA', + 'like_count': int, + 'playable_in_embed': True, + 'availability': 'unlisted', + 'thumbnail': r're:^https?://.*\.jpg', + 'age_limit': 0, + 'duration': 5, + 'live_status': 'not_live', + 'upload_date': '20220908', + 'categories': ['People & Blogs'], +
'channel_id': 'UCiu-3thuViMebBjw_5nWYrA', + 'uploader_url': 'https://www.youtube.com/@coletdjnz', + 'uploader_id': '@coletdjnz', + 'uploader': 'cole-dlp-test-acc', + }, + 'params': {'skip_download': True} + }, { + # Extractor argument: prefer translated title+description + 'url': 'https://www.youtube.com/watch?v=gHKT4uU8Zng', + 'info_dict': { + 'id': 'gHKT4uU8Zng', + 'ext': 'mp4', + 'channel': 'cole-dlp-test-acc', + 'tags': [], + 'duration': 5, + 'live_status': 'not_live', + 'channel_id': 'UCiu-3thuViMebBjw_5nWYrA', + 'upload_date': '20220728', + 'view_count': int, + 'categories': ['People & Blogs'], + 'thumbnail': r're:^https?://.*\.jpg', + 'title': 'dlp test video title translated (fr)', + 'availability': 'public', + 'age_limit': 0, + 'description': 'dlp test video description translated (fr)', + 'playable_in_embed': True, + 'channel_url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA', + 'uploader_url': 'https://www.youtube.com/@coletdjnz', + 'uploader_id': '@coletdjnz', + 'uploader': 'cole-dlp-test-acc', + }, + 'params': {'skip_download': True, 'extractor_args': {'youtube': {'lang': ['fr']}}}, + 'expected_warnings': [r'Preferring "fr" translated fields'], + }, { + 'note': '6 channel audio', + 'url': 'https://www.youtube.com/watch?v=zgdo7-RRjgo', + 'only_matching': True, + }, { + 'note': 'Multiple HLS formats with same itag', + 'url': 'https://www.youtube.com/watch?v=kX3nB4PpJko', + 'info_dict': { + 'id': 'kX3nB4PpJko', + 'ext': 'mp4', + 'categories': ['Entertainment'], + 'description': 'md5:e8031ff6e426cdb6a77670c9b81f6fa6', + 'live_status': 'not_live', + 'duration': 937, + 'channel_follower_count': int, + 'thumbnail': 'https://i.ytimg.com/vi_webp/kX3nB4PpJko/maxresdefault.webp', + 'title': 'Last To Take Hand Off Jet, Keeps It!', + 'channel': 'MrBeast', + 'playable_in_embed': True, + 'view_count': int, + 'upload_date': '20221112', + 'channel_url': 'https://www.youtube.com/channel/UCX6OQ3DkcsbYNE6H8uQQuVA', + 'age_limit': 0, + 'availability': 'public', + 'channel_id': 'UCX6OQ3DkcsbYNE6H8uQQuVA', + 'like_count': int, + 'tags': [], + 'uploader': 'MrBeast', + 'uploader_url': 'https://www.youtube.com/@MrBeast', + 'uploader_id': '@MrBeast', + 'comment_count': int, + 'channel_is_verified': True, + 'heatmap': 'count:100', + }, + 'params': {'extractor_args': {'youtube': {'player_client': ['ios']}}, 'format': '233-1'}, + }, { + 'note': 'Audio formats with Dynamic Range Compression', + 'url': 'https://www.youtube.com/watch?v=Tq92D6wQ1mg', + 'info_dict': { + 'id': 'Tq92D6wQ1mg', + 'ext': 'webm', + 'title': '[MMD] Adios - EVERGLOW [+Motion DL]', + 'channel_url': 'https://www.youtube.com/channel/UC1yoRdFoFJaCY-AGfD9W0wQ', + 'channel_id': 'UC1yoRdFoFJaCY-AGfD9W0wQ', + 'channel_follower_count': int, + 'description': 'md5:17eccca93a786d51bc67646756894066', + 'upload_date': '20191228', + 'tags': ['mmd', 'dance', 'mikumikudance', 'kpop', 'vtuber'], + 'playable_in_embed': True, + 'like_count': int, + 'categories': ['Entertainment'], + 'thumbnail': 'https://i.ytimg.com/vi/Tq92D6wQ1mg/sddefault.jpg', + 'age_limit': 18, + 'channel': 'Projekt Melody', + 'view_count': int, + 'availability': 'needs_auth', + 'comment_count': int, + 'live_status': 'not_live', + 'duration': 106, + 'uploader': 'Projekt Melody', + 'uploader_id': '@ProjektMelody', + 'uploader_url': 'https://www.youtube.com/@ProjektMelody', + }, + 'params': {'extractor_args': {'youtube': {'player_client': ['tv_embedded']}}, 'format': '251-drc'}, + }, + { + 'url': 'https://www.youtube.com/live/qVv6vCqciTM', + 'info_dict': { + 'id': 
'qVv6vCqciTM', + 'ext': 'mp4', + 'age_limit': 0, + 'comment_count': int, + 'chapters': 'count:13', + 'upload_date': '20221223', + 'thumbnail': 'https://i.ytimg.com/vi/qVv6vCqciTM/maxresdefault.jpg', + 'channel_url': 'https://www.youtube.com/channel/UCIdEIHpS0TdkqRkHL5OkLtA', + 'like_count': int, + 'release_date': '20221223', + 'tags': ['Vtuber', '月ノ美兎', '名取さな', 'にじさんじ', 'クリスマス', '3D配信'], + 'title': '【 #インターネット女クリスマス 】3Dで歌ってはしゃぐインターネットの女たち【月ノ美兎/名取さな】', + 'view_count': int, + 'playable_in_embed': True, + 'duration': 4438, + 'availability': 'public', + 'channel_follower_count': int, + 'channel_id': 'UCIdEIHpS0TdkqRkHL5OkLtA', + 'categories': ['Entertainment'], + 'live_status': 'was_live', + 'release_timestamp': 1671793345, + 'channel': 'さなちゃんねる', + 'description': 'md5:6aebf95cc4a1d731aebc01ad6cc9806d', + 'uploader': 'さなちゃんねる', + 'uploader_url': 'https://www.youtube.com/@sana_natori', + 'uploader_id': '@sana_natori', + 'channel_is_verified': True, + 'heatmap': 'count:100', + }, + }, + { + # Fallbacks when webpage and web client is unavailable + 'url': 'https://www.youtube.com/watch?v=wSSmNUl9Snw', + 'info_dict': { + 'id': 'wSSmNUl9Snw', + 'ext': 'mp4', + # 'categories': ['Science & Technology'], + 'view_count': int, + 'chapters': 'count:2', + 'channel': 'Scott Manley', + 'like_count': int, + 'age_limit': 0, + # 'availability': 'public', + 'channel_follower_count': int, + 'live_status': 'not_live', + 'upload_date': '20170831', + 'duration': 682, + 'tags': 'count:8', + 'uploader_url': 'https://www.youtube.com/@scottmanley', + 'description': 'md5:f4bed7b200404b72a394c2f97b782c02', + 'uploader': 'Scott Manley', + 'uploader_id': '@scottmanley', + 'title': 'The Computer Hack That Saved Apollo 14', + 'channel_id': 'UCxzC4EngIsMrPmbm6Nxvb-A', + 'thumbnail': r're:^https?://.*\.webp', + 'channel_url': 'https://www.youtube.com/channel/UCxzC4EngIsMrPmbm6Nxvb-A', + 'playable_in_embed': True, + 'comment_count': int, + 'channel_is_verified': True, + 'heatmap': 'count:100', + }, + 'params': { + 'extractor_args': {'youtube': {'player_client': ['android'], 'player_skip': ['webpage']}}, + }, + }, + ] + + _WEBPAGE_TESTS = [ + # YouTube <object> embed + { + 'url': 'http://www.improbable.com/2017/04/03/untrained-modern-youths-and-ancient-masters-in-selfie-portraits/', + 'md5': '873c81d308b979f0e23ee7e620b312a3', + 'info_dict': { + 'id': 'msN87y-iEx0', + 'ext': 'mp4', + 'title': 'Feynman: Mirrors FUN TO IMAGINE 6', + 'upload_date': '20080526', + 'description': 'md5:873c81d308b979f0e23ee7e620b312a3', + 'age_limit': 0, + 'tags': ['feynman', 'mirror', 'science', 'physics', 'imagination', 'fun', 'cool', 'puzzle'], + 'channel_id': 'UCCeo--lls1vna5YJABWAcVA', + 'playable_in_embed': True, + 'thumbnail': 'https://i.ytimg.com/vi/msN87y-iEx0/hqdefault.jpg', + 'like_count': int, + 'comment_count': int, + 'channel': 'Christopher Sykes', + 'live_status': 'not_live', + 'channel_url': 'https://www.youtube.com/channel/UCCeo--lls1vna5YJABWAcVA', + 'availability': 'public', + 'duration': 195, + 'view_count': int, + 'categories': ['Science & Technology'], + 'channel_follower_count': int, + 'uploader': 'Christopher Sykes', + 'uploader_url': 'https://www.youtube.com/@ChristopherSykesDocumentaries', + 'uploader_id': '@ChristopherSykesDocumentaries', + 'heatmap': 'count:100', + }, + 'params': { + 'skip_download': True, + } + }, + ] + + @classmethod + def suitable(cls, url): + from ..utils import parse_qs + + qs = parse_qs(url) + if qs.get('list', [None])[0]: +
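+ # A 'list' query parameter (e.g. '&list=PL...') marks a playlist URL; returning False here is presumably what lets the playlist/tab extractor claim such URLs instead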
return False + return super().suitable(url) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self._code_cache = {} + self._player_cache = {} + + def _prepare_live_from_start_formats(self, formats, video_id, live_start_time, url, webpage_url, smuggled_data, is_live): + lock = threading.Lock() + start_time = time.time() + formats = [f for f in formats if f.get('is_from_start')] + + def refetch_manifest(format_id, delay): + nonlocal formats, start_time, is_live + if time.time() <= start_time + delay: + return + + _, _, prs, player_url = self._download_player_responses(url, smuggled_data, video_id, webpage_url) + video_details = traverse_obj(prs, (..., 'videoDetails'), expected_type=dict) + microformats = traverse_obj( + prs, (..., 'microformat', 'playerMicroformatRenderer'), + expected_type=dict) + _, live_status, _, formats, _ = self._list_formats(video_id, microformats, video_details, prs, player_url) + is_live = live_status == 'is_live' + start_time = time.time() + + def mpd_feed(format_id, delay): + """ + @returns (manifest_url, manifest_stream_number, is_live) or None + """ + for retry in self.RetryManager(fatal=False): + with lock: + refetch_manifest(format_id, delay) + + f = next((f for f in formats if f['format_id'] == format_id), None) + if not f: + if not is_live: + retry.error = f'{video_id}: Video is no longer live' + else: + retry.error = f'Cannot find refreshed manifest for format {format_id}{bug_reports_message()}' + continue + return f['manifest_url'], f['manifest_stream_number'], is_live + return None + + for f in formats: + f['is_live'] = is_live + gen = functools.partial(self._live_dash_fragments, video_id, f['format_id'], + live_start_time, mpd_feed, not is_live and f.copy()) + if is_live: + f['fragments'] = gen + f['protocol'] = 'http_dash_segments_generator' + else: + f['fragments'] = LazyList(gen({})) + del f['is_from_start'] + + def _live_dash_fragments(self, video_id, format_id, live_start_time, mpd_feed, manifestless_orig_fmt, ctx): + FETCH_SPAN, MAX_DURATION = 5, 432000 + + mpd_url, stream_number, is_live = None, None, True + + begin_index = 0 + download_start_time = ctx.get('start') or time.time() + + lack_early_segments = download_start_time - (live_start_time or download_start_time) > MAX_DURATION + if lack_early_segments: + self.report_warning(bug_reports_message( + 'Starting download from the last 120 hours of the live stream since ' + 'YouTube does not have data before that. 
If you think this is wrong,'), only_once=True) + lack_early_segments = True + + known_idx, no_fragment_score, last_segment_url = begin_index, 0, None + fragments, fragment_base_url = None, None + + def _extract_sequence_from_mpd(refresh_sequence, immediate): + nonlocal mpd_url, stream_number, is_live, no_fragment_score, fragments, fragment_base_url + # Obtain from MPD's maximum seq value + old_mpd_url = mpd_url + last_error = ctx.pop('last_error', None) + expire_fast = immediate or last_error and isinstance(last_error, HTTPError) and last_error.status == 403 + mpd_url, stream_number, is_live = (mpd_feed(format_id, 5 if expire_fast else 18000) + or (mpd_url, stream_number, False)) + if not refresh_sequence: + if expire_fast and not is_live: + return False, last_seq + elif old_mpd_url == mpd_url: + return True, last_seq + if manifestless_orig_fmt: + fmt_info = manifestless_orig_fmt + else: + try: + fmts, _ = self._extract_mpd_formats_and_subtitles( + mpd_url, None, note=False, errnote=False, fatal=False) + except ExtractorError: + fmts = None + if not fmts: + no_fragment_score += 2 + return False, last_seq + fmt_info = next(x for x in fmts if x['manifest_stream_number'] == stream_number) + fragments = fmt_info['fragments'] + fragment_base_url = fmt_info['fragment_base_url'] + assert fragment_base_url + + _last_seq = int(re.search(r'(?:/|^)sq/(\d+)', fragments[-1]['path']).group(1)) + return True, _last_seq + + self.write_debug(f'[{video_id}] Generating fragments for format {format_id}') + while is_live: + fetch_time = time.time() + if no_fragment_score > 30: + return + if last_segment_url: + # Obtain from "X-Head-Seqnum" header value from each segment + try: + urlh = self._request_webpage( + last_segment_url, None, note=False, errnote=False, fatal=False) + except ExtractorError: + urlh = None + last_seq = try_get(urlh, lambda x: int_or_none(x.headers['X-Head-Seqnum'])) + if last_seq is None: + no_fragment_score += 2 + last_segment_url = None + continue + else: + should_continue, last_seq = _extract_sequence_from_mpd(True, no_fragment_score > 15) + no_fragment_score += 2 + if not should_continue: + continue + + if known_idx > last_seq: + last_segment_url = None + continue + + last_seq += 1 + + if begin_index < 0 and known_idx < 0: + # skip from the start when it's negative value + known_idx = last_seq + begin_index + if lack_early_segments: + known_idx = max(known_idx, last_seq - int(MAX_DURATION // fragments[-1]['duration'])) + try: + for idx in range(known_idx, last_seq): + # do not update sequence here or you'll get skipped some part of it + should_continue, _ = _extract_sequence_from_mpd(False, False) + if not should_continue: + known_idx = idx - 1 + raise ExtractorError('breaking out of outer loop') + last_segment_url = urljoin(fragment_base_url, 'sq/%d' % idx) + yield { + 'url': last_segment_url, + 'fragment_count': last_seq, + } + if known_idx == last_seq: + no_fragment_score += 5 + else: + no_fragment_score = 0 + known_idx = last_seq + except ExtractorError: + continue + + if manifestless_orig_fmt: + # Stop at the first iteration if running for post-live manifestless; + # fragment count no longer increase since it starts + break + + time.sleep(max(0, FETCH_SPAN + fetch_time - time.time())) + + def _extract_player_url(self, *ytcfgs, webpage=None): + player_url = traverse_obj( + ytcfgs, (..., 'PLAYER_JS_URL'), (..., 'WEB_PLAYER_CONTEXT_CONFIGS', ..., 'jsUrl'), + get_all=False, expected_type=str) + if not player_url: + return + return urljoin('https://www.youtube.com', player_url) + + 
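+ # Fallback for when no webpage/ytcfg is available: the iframe API JS names the current player build by an 8-hex-digit id, so a match like 'player/ab12cd34/' (id purely illustrative) is enough to rebuild the canonical base.js URL below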
def _download_player_url(self, video_id, fatal=False): + res = self._download_webpage( + 'https://www.youtube.com/iframe_api', + note='Downloading iframe API JS', video_id=video_id, fatal=fatal) + if res: + player_version = self._search_regex( + r'player\\?/([0-9a-fA-F]{8})\\?/', res, 'player version', fatal=fatal) + if player_version: + return f'https://www.youtube.com/s/player/{player_version}/player_ias.vflset/en_US/base.js' + + def _signature_cache_id(self, example_sig): + """ Return a string representation of a signature """ + return '.'.join(str(len(part)) for part in example_sig.split('.')) + + @classmethod + def _extract_player_info(cls, player_url): + for player_re in cls._PLAYER_INFO_RE: + id_m = re.search(player_re, player_url) + if id_m: + break + else: + raise ExtractorError('Cannot identify player %r' % player_url) + return id_m.group('id') + + def _load_player(self, video_id, player_url, fatal=True): + player_id = self._extract_player_info(player_url) + if player_id not in self._code_cache: + code = self._download_webpage( + player_url, video_id, fatal=fatal, + note='Downloading player ' + player_id, + errnote='Download of %s failed' % player_url) + if code: + self._code_cache[player_id] = code + return self._code_cache.get(player_id) + + def _extract_signature_function(self, video_id, player_url, example_sig): + player_id = self._extract_player_info(player_url) + + # Read from filesystem cache + func_id = f'js_{player_id}_{self._signature_cache_id(example_sig)}' + assert os.path.basename(func_id) == func_id + + self.write_debug(f'Extracting signature function {func_id}') + cache_spec, code = self.cache.load('youtube-sigfuncs', func_id), None + + if not cache_spec: + code = self._load_player(video_id, player_url) + if code: + res = self._parse_sig_js(code) + test_string = ''.join(map(chr, range(len(example_sig)))) + cache_spec = [ord(c) for c in res(test_string)] + self.cache.store('youtube-sigfuncs', func_id, cache_spec) + + return lambda s: ''.join(s[i] for i in cache_spec) + + def _print_sig_code(self, func, example_sig): + if not self.get_param('youtube_print_sig_code'): + return + + def gen_sig_code(idxs): + def _genslice(start, end, step): + starts = '' if start == 0 else str(start) + ends = (':%d' % (end + step)) if end + step >= 0 else ':' + steps = '' if step == 1 else (':%d' % step) + return f's[{starts}{ends}{steps}]' + + step = None + # Quelch pyflakes warnings - start will be set when step is set + start = '(Never used)' + for i, prev in zip(idxs[1:], idxs[:-1]): + if step is not None: + if i - prev == step: + continue + yield _genslice(start, prev, step) + step = None + continue + if i - prev in [-1, 1]: + step = i - prev + start = prev + continue + else: + yield 's[%d]' % prev + if step is None: + yield 's[%d]' % i + else: + yield _genslice(start, i, step) + + test_string = ''.join(map(chr, range(len(example_sig)))) + cache_res = func(test_string) + cache_spec = [ord(c) for c in cache_res] + expr_code = ' + '.join(gen_sig_code(cache_spec)) + signature_id_tuple = '(%s)' % ( + ', '.join(str(len(p)) for p in example_sig.split('.'))) + code = ('if tuple(len(p) for p in s.split(\'.\')) == %s:\n' + ' return %s\n') % (signature_id_tuple, expr_code) + self.to_screen('Extracted signature function:\n' + code) + + def _parse_sig_js(self, jscode): + funcname = self._search_regex( + (r'\b[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*encodeURIComponent\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(', + 
r'\b[a-zA-Z0-9]+\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*encodeURIComponent\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\(', + r'\bm=(?P<sig>[a-zA-Z0-9$]{2,})\(decodeURIComponent\(h\.s\)\)', + r'\bc&&\(c=(?P<sig>[a-zA-Z0-9$]{2,})\(decodeURIComponent\(c\)\)', + r'(?:\b|[^a-zA-Z0-9$])(?P<sig>[a-zA-Z0-9$]{2,})\s*=\s*function\(\s*a\s*\)\s*{\s*a\s*=\s*a\.split\(\s*""\s*\)(?:;[a-zA-Z0-9$]{2}\.[a-zA-Z0-9$]{2}\(a,\d+\))?', + r'(?P<sig>[a-zA-Z0-9$]+)\s*=\s*function\(\s*a\s*\)\s*{\s*a\s*=\s*a\.split\(\s*""\s*\)', + # Obsolete patterns + r'("|\')signature\1\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(', + r'\.sig\|\|(?P<sig>[a-zA-Z0-9$]+)\(', + r'yt\.akamaized\.net/\)\s*\|\|\s*.*?\s*[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*(?:encodeURIComponent\s*\()?\s*(?P<sig>[a-zA-Z0-9$]+)\(', + r'\b[cs]\s*&&\s*[adf]\.set\([^,]+\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(', + r'\b[a-zA-Z0-9]+\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*(?P<sig>[a-zA-Z0-9$]+)\(', + r'\bc\s*&&\s*[a-zA-Z0-9]+\.set\([^,]+\s*,\s*\([^)]*\)\s*\(\s*(?P<sig>[a-zA-Z0-9$]+)\('), + jscode, 'Initial JS player signature function name', group='sig') + + jsi = JSInterpreter(jscode) + initial_function = jsi.extract_function(funcname) + return lambda s: initial_function([s]) + + def _cached(self, func, *cache_id): + def inner(*args, **kwargs): + if cache_id not in self._player_cache: + try: + self._player_cache[cache_id] = func(*args, **kwargs) + except ExtractorError as e: + self._player_cache[cache_id] = e + except Exception as e: + self._player_cache[cache_id] = ExtractorError(traceback.format_exc(), cause=e) + + ret = self._player_cache[cache_id] + if isinstance(ret, Exception): + raise ret + return ret + return inner + + def _decrypt_signature(self, s, video_id, player_url): + """Turn the encrypted s field into a working signature""" + extract_sig = self._cached( + self._extract_signature_function, 'sig', player_url, self._signature_cache_id(s)) + func = extract_sig(video_id, player_url, s) + self._print_sig_code(func, s) + return func(s) + + def _decrypt_nsig(self, s, video_id, player_url): + """Turn the encrypted n field into a working signature""" + if player_url is None: + raise ExtractorError('Cannot decrypt nsig without player_url') + player_url = urljoin('https://www.youtube.com', player_url) + + try: + jsi, player_id, func_code = self._extract_n_function_code(video_id, player_url) + except ExtractorError as e: + raise ExtractorError('Unable to extract nsig function code', cause=e) + if self.get_param('youtube_print_sig_code'): + self.to_screen(f'Extracted nsig function from {player_id}:\n{func_code[1]}\n') + + try: + extract_nsig = self._cached(self._extract_n_function_from_code, 'nsig func', player_url) + ret = extract_nsig(jsi, func_code)(s) + except JSInterpreter.Exception as e: + try: + jsi = PhantomJSwrapper(self, timeout=5000) + except ExtractorError: + raise e + self.report_warning( + f'Native nsig extraction failed: Trying with PhantomJS\n' + f' n = {s} ; player = {player_url}', video_id) + self.write_debug(e, only_once=True) + + args, func_body = func_code + ret = jsi.execute( + f'console.log(function({", ".join(args)}) {{ {func_body} }}({s!r}));', + video_id=video_id, note='Executing signature code').strip() + + self.write_debug(f'Decrypted nsig {s} => {ret}') + return ret + + def _extract_n_function_name(self, jscode): + funcname, idx = self._search_regex( + r'\.get\("n"\)\)&&\(b=(?P<nfunc>[a-zA-Z0-9$]+)(?:\[(?P<idx>\d+)\])?\([a-zA-Z0-9]\)', + jscode, 'Initial JS player n function name', group=('nfunc', 'idx')) + if not idx: + return funcname + + return 
json.loads(js_to_json(self._search_regex( + rf'var {re.escape(funcname)}\s*=\s*(\[.+?\])\s*[,;]', jscode, + f'Initial JS player n function list ({funcname}.{idx})')))[int(idx)] + + def _extract_n_function_code(self, video_id, player_url): + player_id = self._extract_player_info(player_url) + func_code = self.cache.load('youtube-nsig', player_id, min_ver='2022.09.1') + jscode = func_code or self._load_player(video_id, player_url) + jsi = JSInterpreter(jscode) + + if func_code: + return jsi, player_id, func_code + + func_name = self._extract_n_function_name(jscode) + + # For redundancy + func_code = self._search_regex( + r'''(?xs)%s\s*=\s*function\s*\((?P<var>[\w$]+)\)\s* + # NB: The end of the regex is intentionally kept strict + {(?P<code>.+?}\s*return\ [\w$]+.join\(""\))};''' % func_name, + jscode, 'nsig function', group=('var', 'code'), default=None) + if func_code: + func_code = ([func_code[0]], func_code[1]) + else: + self.write_debug('Extracting nsig function with jsinterp') + func_code = jsi.extract_function_code(func_name) + + self.cache.store('youtube-nsig', player_id, func_code) + return jsi, player_id, func_code + + def _extract_n_function_from_code(self, jsi, func_code): + func = jsi.extract_function_from_code(*func_code) + + def extract_nsig(s): + try: + ret = func([s]) + except JSInterpreter.Exception: + raise + except Exception as e: + raise JSInterpreter.Exception(traceback.format_exc(), cause=e) + + if ret.startswith('enhanced_except_'): + raise JSInterpreter.Exception('Signature function returned an exception') + return ret + + return extract_nsig + + def _extract_signature_timestamp(self, video_id, player_url, ytcfg=None, fatal=False): + """ + Extract signatureTimestamp (sts) + Required to tell API what sig/player version is in use. + """ + sts = None + if isinstance(ytcfg, dict): + sts = int_or_none(ytcfg.get('STS')) + + if not sts: + # Attempt to extract from player + if player_url is None: + error_msg = 'Cannot extract signature timestamp without player_url.' + if fatal: + raise ExtractorError(error_msg) + self.report_warning(error_msg) + return + code = self._load_player(video_id, player_url, fatal=fatal) + if code: + sts = int_or_none(self._search_regex( + r'(?:signatureTimestamp|sts)\s*:\s*(?P<sts>[0-9]{5})', code, + 'JS player signature timestamp', group='sts', fatal=fatal)) + return sts + + def _mark_watched(self, video_id, player_responses): + for is_full, key in enumerate(('videostatsPlaybackUrl', 'videostatsWatchtimeUrl')): + label = 'fully ' if is_full else '' + url = get_first(player_responses, ('playbackTracking', key, 'baseUrl'), + expected_type=url_or_none) + if not url: + self.report_warning(f'Unable to mark {label}watched') + return + parsed_url = urllib.parse.urlparse(url) + qs = urllib.parse.parse_qs(parsed_url.query) + + # cpn generation algorithm is reverse engineered from base.js. + # In fact it works even with dummy cpn. 
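+ # randint(0, 256) & 63 picks an index 0-63 into the 64-char alphabet below, giving a 16-character base64url-style nonce, e.g. 'AbC-dEfG_hIjKlMn' (value purely illustrative)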
+ CPN_ALPHABET = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_' + cpn = ''.join(CPN_ALPHABET[random.randint(0, 256) & 63] for _ in range(0, 16)) + + # # more consistent results setting it to right before the end + video_length = [str(float((qs.get('len') or ['1.5'])[0]) - 1)] + + qs.update({ + 'ver': ['2'], + 'cpn': [cpn], + 'cmt': video_length, + 'el': 'detailpage', # otherwise defaults to "shorts" + }) + + if is_full: + # these seem to mark watchtime "history" in the real world + # they're required, so send in a single value + qs.update({ + 'st': 0, + 'et': video_length, + }) + + url = urllib.parse.urlunparse( + parsed_url._replace(query=urllib.parse.urlencode(qs, True))) + + self._download_webpage( + url, video_id, f'Marking {label}watched', + 'Unable to mark watched', fatal=False) + + @classmethod + def _extract_from_webpage(cls, url, webpage): + # Invidious Instances + # https://github.com/yt-dlp/yt-dlp/issues/195 + # https://github.com/iv-org/invidious/pull/1730 + mobj = re.search( + r'<link rel="alternate" href="(?P<url>https://www\.youtube\.com/watch\?v=[0-9A-Za-z_-]{11})"', + webpage) + if mobj: + yield cls.url_result(mobj.group('url'), cls) + raise cls.StopExtraction() + + yield from super()._extract_from_webpage(url, webpage) + + # lazyYT YouTube embed + for id_ in re.findall(r'class="lazyYT" data-youtube-id="([^"]+)"', webpage): + yield cls.url_result(unescapeHTML(id_), cls, id_) + + # Wordpress "YouTube Video Importer" plugin + for m in re.findall(r'''(?x)<div[^>]+ + class=(?P<q1>[\'"])[^\'"]*\byvii_single_video_player\b[^\'"]*(?P=q1)[^>]+ + data-video_id=(?P<q2>[\'"])([^\'"]+)(?P=q2)''', webpage): + yield cls.url_result(m[-1], cls, m[-1]) + + @classmethod + def extract_id(cls, url): + video_id = cls.get_temp_id(url) + if not video_id: + raise ExtractorError(f'Invalid URL: {url}') + return video_id + + def _extract_chapters_from_json(self, data, duration): + chapter_list = traverse_obj( + data, ( + 'playerOverlays', 'playerOverlayRenderer', 'decoratedPlayerBarRenderer', + 'decoratedPlayerBarRenderer', 'playerBar', 'chapteredPlayerBarRenderer', 'chapters' + ), expected_type=list) + + return self._extract_chapters_helper( + chapter_list, + start_function=lambda chapter: float_or_none( + traverse_obj(chapter, ('chapterRenderer', 'timeRangeStartMillis')), scale=1000), + title_function=lambda chapter: traverse_obj( + chapter, ('chapterRenderer', 'title', 'simpleText'), expected_type=str), + duration=duration) + + def _extract_chapters_from_engagement_panel(self, data, duration): + content_list = traverse_obj( + data, + ('engagementPanels', ..., 'engagementPanelSectionListRenderer', 'content', 'macroMarkersListRenderer', 'contents'), + expected_type=list) + chapter_time = lambda chapter: parse_duration(self._get_text(chapter, 'timeDescription')) + chapter_title = lambda chapter: self._get_text(chapter, 'title') + + return next(filter(None, ( + self._extract_chapters_helper(traverse_obj(contents, (..., 'macroMarkersListItemRenderer')), + chapter_time, chapter_title, duration) + for contents in content_list)), []) + + def _extract_heatmap(self, data): + return traverse_obj(data, ( + 'frameworkUpdates', 'entityBatchUpdate', 'mutations', + lambda _, v: v['payload']['macroMarkersListEntity']['markersList']['markerType'] == 'MARKER_TYPE_HEATMAP', + 'payload', 'macroMarkersListEntity', 'markersList', 'markers', ..., { + 'start_time': ('startMillis', {functools.partial(float_or_none, scale=1000)}), + 'end_time': {lambda x: (int(x['startMillis']) + 
int(x['durationMillis'])) / 1000}, + 'value': ('intensityScoreNormalized', {float_or_none}), + })) or None + + def _extract_comment(self, comment_renderer, parent=None): + comment_id = comment_renderer.get('commentId') + if not comment_id: + return + + info = { + 'id': comment_id, + 'text': self._get_text(comment_renderer, 'contentText'), + 'like_count': self._get_count(comment_renderer, 'voteCount'), + 'author_id': traverse_obj(comment_renderer, ('authorEndpoint', 'browseEndpoint', 'browseId', {self.ucid_or_none})), + 'author': self._get_text(comment_renderer, 'authorText'), + 'author_thumbnail': traverse_obj(comment_renderer, ('authorThumbnail', 'thumbnails', -1, 'url', {url_or_none})), + 'parent': parent or 'root', + } + + # Timestamp is an estimate calculated from the current time and time_text + time_text = self._get_text(comment_renderer, 'publishedTimeText') or '' + timestamp = self._parse_time_text(time_text) + + info.update({ + # FIXME: non-standard, but we need a way of showing that it is an estimate. + '_time_text': time_text, + 'timestamp': timestamp, + }) + + info['author_url'] = urljoin( + 'https://www.youtube.com', traverse_obj(comment_renderer, ('authorEndpoint', ( + ('browseEndpoint', 'canonicalBaseUrl'), ('commandMetadata', 'webCommandMetadata', 'url'))), + expected_type=str, get_all=False)) + + author_is_uploader = traverse_obj(comment_renderer, 'authorIsChannelOwner') + if author_is_uploader is not None: + info['author_is_uploader'] = author_is_uploader + + comment_abr = traverse_obj( + comment_renderer, ('actionButtons', 'commentActionButtonsRenderer'), expected_type=dict) + if comment_abr is not None: + info['is_favorited'] = 'creatorHeart' in comment_abr + + badges = self._extract_badges([traverse_obj(comment_renderer, 'authorCommentBadge')]) + if self._has_badge(badges, BadgeType.VERIFIED): + info['author_is_verified'] = True + + is_pinned = traverse_obj(comment_renderer, 'pinnedCommentBadge') + if is_pinned: + info['is_pinned'] = True + + return info + + def _comment_entries(self, root_continuation_data, ytcfg, video_id, parent=None, tracker=None): + + get_single_config_arg = lambda c: self._configuration_arg(c, [''])[0] + + def extract_header(contents): + _continuation = None + for content in contents: + comments_header_renderer = traverse_obj(content, 'commentsHeaderRenderer') + expected_comment_count = self._get_count( + comments_header_renderer, 'countText', 'commentsCount') + + if expected_comment_count is not None: + tracker['est_total'] = expected_comment_count + self.to_screen(f'Downloading ~{expected_comment_count} comments') + comment_sort_index = int(get_single_config_arg('comment_sort') != 'top') # 1 = new, 0 = top + + sort_menu_item = try_get( + comments_header_renderer, + lambda x: x['sortMenu']['sortFilterSubMenuRenderer']['subMenuItems'][comment_sort_index], dict) or {} + sort_continuation_ep = sort_menu_item.get('serviceEndpoint') or {} + + _continuation = self._extract_continuation_ep_data(sort_continuation_ep) or self._extract_continuation(sort_menu_item) + if not _continuation: + continue + + sort_text = str_or_none(sort_menu_item.get('title')) + if not sort_text: + sort_text = 'top comments' if comment_sort_index == 0 else 'newest first' + self.to_screen('Sorting comments by %s' % sort_text.lower()) + break + return _continuation + + def extract_thread(contents): + if not parent: + tracker['current_page_thread'] = 0 + for content in contents: + if not parent and tracker['total_parent_comments'] >= max_parents: + yield + comment_thread_renderer 
= try_get(content, lambda x: x['commentThreadRenderer']) + comment_renderer = get_first( + (comment_thread_renderer, content), [['commentRenderer', ('comment', 'commentRenderer')]], + expected_type=dict, default={}) + + comment = self._extract_comment(comment_renderer, parent) + if not comment: + continue + comment_id = comment['id'] + if comment.get('is_pinned'): + tracker['pinned_comment_ids'].add(comment_id) + # Sometimes YouTube may break and give us infinite looping comments. + # See: https://github.com/yt-dlp/yt-dlp/issues/6290 + if comment_id in tracker['seen_comment_ids']: + if comment_id in tracker['pinned_comment_ids'] and not comment.get('is_pinned'): + # Pinned comments may appear a second time in newest first sort + # See: https://github.com/yt-dlp/yt-dlp/issues/6712 + continue + self.report_warning( + 'Detected YouTube comments looping. Stopping comment extraction ' + f'{"for this thread" if parent else ""} as we probably cannot get any more.') + yield + else: + tracker['seen_comment_ids'].add(comment['id']) + + tracker['running_total'] += 1 + tracker['total_reply_comments' if parent else 'total_parent_comments'] += 1 + yield comment + + # Attempt to get the replies + comment_replies_renderer = try_get( + comment_thread_renderer, lambda x: x['replies']['commentRepliesRenderer'], dict) + + if comment_replies_renderer: + tracker['current_page_thread'] += 1 + comment_entries_iter = self._comment_entries( + comment_replies_renderer, ytcfg, video_id, + parent=comment.get('id'), tracker=tracker) + yield from itertools.islice(comment_entries_iter, min( + max_replies_per_thread, max(0, max_replies - tracker['total_reply_comments']))) + + # Keeps track of counts across recursive calls + if not tracker: + tracker = dict( + running_total=0, + est_total=None, + current_page_thread=0, + total_parent_comments=0, + total_reply_comments=0, + seen_comment_ids=set(), + pinned_comment_ids=set() + ) + + # TODO: Deprecated + # YouTube comments have a max depth of 2 + max_depth = int_or_none(get_single_config_arg('max_comment_depth')) + if max_depth: + self._downloader.deprecated_feature('[youtube] max_comment_depth extractor argument is deprecated. ' + 'Set max replies in the max-comments extractor argument instead') + if max_depth == 1 and parent: + return + + max_comments, max_parents, max_replies, max_replies_per_thread, *_ = map( + lambda p: int_or_none(p, default=sys.maxsize), self._configuration_arg('max_comments', ) + [''] * 4) + + continuation = self._extract_continuation(root_continuation_data) + + response = None + is_forced_continuation = False + is_first_continuation = parent is None + if is_first_continuation and not continuation: + # Sometimes you can get comments by generating the continuation yourself, + # even if YouTube initially reports them being disabled - e.g. stories comments. + # Note: if the comment section is actually disabled, YouTube may return a response with + # required check_get_keys missing. So we will disable that check initially in this case. 
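+ # _generate_comment_continuation (defined below) builds this token from the video id alone, so no extra request is needed to obtain it.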
+ continuation = self._build_api_continuation_query(self._generate_comment_continuation(video_id)) + is_forced_continuation = True + + continuation_items_path = ( + 'onResponseReceivedEndpoints', ..., ('reloadContinuationItemsCommand', 'appendContinuationItemsAction'), 'continuationItems') + for page_num in itertools.count(0): + if not continuation: + break + headers = self.generate_api_headers(ytcfg=ytcfg, visitor_data=self._extract_visitor_data(response)) + comment_prog_str = f"({tracker['running_total']}/~{tracker['est_total']})" + if page_num == 0: + if is_first_continuation: + note_prefix = 'Downloading comment section API JSON' + else: + note_prefix = ' Downloading comment API JSON reply thread %d %s' % ( + tracker['current_page_thread'], comment_prog_str) + else: + note_prefix = '%sDownloading comment%s API JSON page %d %s' % ( + ' ' if parent else '', ' replies' if parent else '', + page_num, comment_prog_str) + + # Do a deep check for incomplete data as sometimes YouTube may return no comments for a continuation + # Ignore check if YouTube says the comment count is 0. + check_get_keys = None + if not is_forced_continuation and not (tracker['est_total'] == 0 and tracker['running_total'] == 0): + check_get_keys = [[*continuation_items_path, ..., ( + 'commentsHeaderRenderer' if is_first_continuation else ('commentThreadRenderer', 'commentRenderer'))]] + try: + response = self._extract_response( + item_id=None, query=continuation, + ep='next', ytcfg=ytcfg, headers=headers, note=note_prefix, + check_get_keys=check_get_keys) + except ExtractorError as e: + # Ignore incomplete data error for replies if retries didn't work. + # This is to allow any other parent comments and comment threads to be downloaded. + # See: https://github.com/yt-dlp/yt-dlp/issues/4669 + if 'incomplete data' in str(e).lower() and parent: + if self.get_param('ignoreerrors') in (True, 'only_download'): + self.report_warning( + 'Received incomplete data for a comment reply thread and retrying did not help. ' + 'Ignoring to let other comments be downloaded. Pass --no-ignore-errors to not ignore.') + return + else: + raise ExtractorError( + 'Incomplete data received for comment reply thread. 
' + 'Pass --ignore-errors to ignore and allow rest of comments to download.', + expected=True) + raise + is_forced_continuation = False + continuation = None + for continuation_items in traverse_obj(response, continuation_items_path, expected_type=list, default=[]): + if is_first_continuation: + continuation = extract_header(continuation_items) + is_first_continuation = False + if continuation: + break + continue + + for entry in extract_thread(continuation_items): + if not entry: + return + yield entry + continuation = self._extract_continuation({'contents': continuation_items}) + if continuation: + break + + message = self._get_text(root_continuation_data, ('contents', ..., 'messageRenderer', 'text'), max_runs=1) + if message and not parent and tracker['running_total'] == 0: + self.report_warning(f'Youtube said: {message}', video_id=video_id, only_once=True) + raise self.CommentsDisabled + + @staticmethod + def _generate_comment_continuation(video_id): + """ + Generates initial comment section continuation token from given video id + """ + token = f'\x12\r\x12\x0b{video_id}\x18\x062\'"\x11"\x0b{video_id}0\x00x\x020\x00B\x10comments-section' + return base64.b64encode(token.encode()).decode() + + def _get_comments(self, ytcfg, video_id, contents, webpage): + """Entry for comment extraction""" + def _real_comment_extract(contents): + renderer = next(( + item for item in traverse_obj(contents, (..., 'itemSectionRenderer'), default={}) + if item.get('sectionIdentifier') == 'comment-item-section'), None) + yield from self._comment_entries(renderer, ytcfg, video_id) + + max_comments = int_or_none(self._configuration_arg('max_comments', [''])[0]) + return itertools.islice(_real_comment_extract(contents), 0, max_comments) + + @staticmethod + def _get_checkok_params(): + return {'contentCheckOk': True, 'racyCheckOk': True} + + @classmethod + def _generate_player_context(cls, sts=None): + context = { + 'html5Preference': 'HTML5_PREF_WANTS', + } + if sts is not None: + context['signatureTimestamp'] = sts + return { + 'playbackContext': { + 'contentPlaybackContext': context + }, + **cls._get_checkok_params() + } + + @staticmethod + def _is_agegated(player_response): + if traverse_obj(player_response, ('playabilityStatus', 'desktopLegacyAgeGateReason')): + return True + + reasons = traverse_obj(player_response, ('playabilityStatus', ('status', 'reason'))) + AGE_GATE_REASONS = ( + 'confirm your age', 'age-restricted', 'inappropriate', # reason + 'age_verification_required', 'age_check_required', # status + ) + return any(expected in reason for expected in AGE_GATE_REASONS for reason in reasons) + + @staticmethod + def _is_unplayable(player_response): + return traverse_obj(player_response, ('playabilityStatus', 'status')) == 'UNPLAYABLE' + + def _extract_player_response(self, client, video_id, master_ytcfg, player_ytcfg, player_url, initial_pr, smuggled_data): + + session_index = self._extract_session_index(player_ytcfg, master_ytcfg) + syncid = self._extract_account_syncid(player_ytcfg, master_ytcfg, initial_pr) + sts = self._extract_signature_timestamp(video_id, player_url, master_ytcfg, fatal=False) if player_url else None + headers = self.generate_api_headers( + ytcfg=player_ytcfg, account_syncid=syncid, session_index=session_index, default_client=client) + + yt_query = { + 'videoId': video_id, + } + if _split_innertube_client(client)[0] == 'android': + yt_query['params'] = 'CgIQBg==' + + pp_arg = self._configuration_arg('player_params', [None], casesense=True)[0] + if pp_arg: + yt_query['params'] = 
pp_arg + + yt_query.update(self._generate_player_context(sts)) + return self._extract_response( + item_id=video_id, ep='player', query=yt_query, + ytcfg=player_ytcfg, headers=headers, fatal=True, + default_client=client, + note='Downloading %s player API JSON' % client.replace('_', ' ').strip() + ) or None + + def _get_requested_clients(self, url, smuggled_data): + requested_clients = [] + default = ['ios', 'android', 'web'] + allowed_clients = sorted( + (client for client in INNERTUBE_CLIENTS.keys() if client[:1] != '_'), + key=lambda client: INNERTUBE_CLIENTS[client]['priority'], reverse=True) + for client in self._configuration_arg('player_client'): + if client in allowed_clients: + requested_clients.append(client) + elif client == 'default': + requested_clients.extend(default) + elif client == 'all': + requested_clients.extend(allowed_clients) + else: + self.report_warning(f'Skipping unsupported client {client}') + if not requested_clients: + requested_clients = default + + if smuggled_data.get('is_music_url') or self.is_music_url(url): + requested_clients.extend( + f'{client}_music' for client in requested_clients if f'{client}_music' in INNERTUBE_CLIENTS) + + return orderedSet(requested_clients) + + def _extract_player_responses(self, clients, video_id, webpage, master_ytcfg, smuggled_data): + initial_pr = None + if webpage: + initial_pr = self._search_json( + self._YT_INITIAL_PLAYER_RESPONSE_RE, webpage, 'initial player response', video_id, fatal=False) + + all_clients = set(clients) + clients = clients[::-1] + prs = [] + + def append_client(*client_names): + """ Append the first client name that exists but not already used """ + for client_name in client_names: + actual_client = _split_innertube_client(client_name)[0] + if actual_client in INNERTUBE_CLIENTS: + if actual_client not in all_clients: + clients.append(client_name) + all_clients.add(actual_client) + return + + # Android player_response does not have microFormats which are needed for + # extraction of some data. So we return the initial_pr with formats + # stripped out even if not requested by the user + # See: https://github.com/yt-dlp/yt-dlp/issues/501 + if initial_pr: + pr = dict(initial_pr) + pr['streamingData'] = None + prs.append(pr) + + last_error = None + tried_iframe_fallback = False + player_url = None + while clients: + client, base_client, variant = _split_innertube_client(clients.pop()) + player_ytcfg = master_ytcfg if client == 'web' else {} + if 'configs' not in self._configuration_arg('player_skip') and client != 'web': + player_ytcfg = self._download_ytcfg(client, video_id) or player_ytcfg + + player_url = player_url or self._extract_player_url(master_ytcfg, player_ytcfg, webpage=webpage) + require_js_player = self._get_default_ytcfg(client).get('REQUIRE_JS_PLAYER') + if 'js' in self._configuration_arg('player_skip'): + require_js_player = False + player_url = None + + if not player_url and not tried_iframe_fallback and require_js_player: + player_url = self._download_player_url(video_id) + tried_iframe_fallback = True + + try: + pr = initial_pr if client == 'web' and initial_pr else self._extract_player_response( + client, video_id, player_ytcfg or master_ytcfg, player_ytcfg, player_url if require_js_player else None, initial_pr, smuggled_data) + except ExtractorError as e: + if last_error: + self.report_warning(last_error) + last_error = e + continue + + if pr: + # YouTube may return a different video player response than expected. 
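+ # (i.e. videoDetails.videoId may not match the id that was requested; such responses are skipped below)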
+ # See: https://github.com/TeamNewPipe/NewPipe/issues/8713 + pr_video_id = traverse_obj(pr, ('videoDetails', 'videoId')) + if pr_video_id and pr_video_id != video_id: + self.report_warning( + f'Skipping player response from {client} client (got player response for video "{pr_video_id}" instead of "{video_id}")' + bug_reports_message()) + else: + # Save client name for introspection later + name = short_client_name(client) + sd = traverse_obj(pr, ('streamingData', {dict})) or {} + sd[STREAMING_DATA_CLIENT_NAME] = name + for f in traverse_obj(sd, (('formats', 'adaptiveFormats'), ..., {dict})): + f[STREAMING_DATA_CLIENT_NAME] = name + prs.append(pr) + + # creator clients can bypass AGE_VERIFICATION_REQUIRED if logged in + if variant == 'embedded' and self._is_unplayable(pr) and self.is_authenticated: + append_client(f'{base_client}_creator') + elif self._is_agegated(pr): + if variant == 'tv_embedded': + append_client(f'{base_client}_embedded') + elif not variant: + append_client(f'tv_embedded.{base_client}', f'{base_client}_embedded') + + if last_error: + if not len(prs): + raise last_error + self.report_warning(last_error) + return prs, player_url + + def _needs_live_processing(self, live_status, duration): + if (live_status == 'is_live' and self.get_param('live_from_start') + or live_status == 'post_live' and (duration or 0) > 2 * 3600): + return live_status + + def _extract_formats_and_subtitles(self, streaming_data, video_id, player_url, live_status, duration): + CHUNK_SIZE = 10 << 20 + itags, stream_ids = collections.defaultdict(set), [] + itag_qualities, res_qualities = {}, {0: None} + q = qualities([ + # Normally tiny is the smallest video-only formats. But + # audio-only formats with unknown quality may get tagged as tiny + 'tiny', + 'audio_quality_ultralow', 'audio_quality_low', 'audio_quality_medium', 'audio_quality_high', # Audio only formats + 'small', 'medium', 'large', 'hd720', 'hd1080', 'hd1440', 'hd2160', 'hd2880', 'highres' + ]) + streaming_formats = traverse_obj(streaming_data, (..., ('formats', 'adaptiveFormats'), ...)) + format_types = self._configuration_arg('formats') + all_formats = 'duplicate' in format_types + if self._configuration_arg('include_duplicate_formats'): + all_formats = True + self._downloader.deprecated_feature('[youtube] include_duplicate_formats extractor argument is deprecated. 
' + 'Use formats=duplicate extractor argument instead') + + def build_fragments(f): + return LazyList({ + 'url': update_url_query(f['url'], { + 'range': f'{range_start}-{min(range_start + CHUNK_SIZE - 1, f["filesize"])}' + }) + } for range_start in range(0, f['filesize'], CHUNK_SIZE)) + + for fmt in streaming_formats: + if fmt.get('targetDurationSec'): + continue + + itag = str_or_none(fmt.get('itag')) + audio_track = fmt.get('audioTrack') or {} + stream_id = (itag, audio_track.get('id'), fmt.get('isDrc')) + if not all_formats: + if stream_id in stream_ids: + continue + + quality = fmt.get('quality') + height = int_or_none(fmt.get('height')) + if quality == 'tiny' or not quality: + quality = fmt.get('audioQuality', '').lower() or quality + # The 3gp format (17) in android client has a quality of "small", + # but is actually worse than other formats + if itag == '17': + quality = 'tiny' + if quality: + if itag: + itag_qualities[itag] = quality + if height: + res_qualities[height] = quality + # FORMAT_STREAM_TYPE_OTF(otf=1) requires downloading the init fragment + # (adding `&sq=0` to the URL) and parsing emsg box to determine the + # number of fragment that would subsequently requested with (`&sq=N`) + if fmt.get('type') == 'FORMAT_STREAM_TYPE_OTF': + continue + + fmt_url = fmt.get('url') + if not fmt_url: + sc = urllib.parse.parse_qs(fmt.get('signatureCipher')) + fmt_url = url_or_none(try_get(sc, lambda x: x['url'][0])) + encrypted_sig = try_get(sc, lambda x: x['s'][0]) + if not all((sc, fmt_url, player_url, encrypted_sig)): + continue + try: + fmt_url += '&%s=%s' % ( + traverse_obj(sc, ('sp', -1)) or 'signature', + self._decrypt_signature(encrypted_sig, video_id, player_url) + ) + except ExtractorError as e: + self.report_warning('Signature extraction failed: Some formats may be missing', + video_id=video_id, only_once=True) + self.write_debug(e, only_once=True) + continue + + query = parse_qs(fmt_url) + throttled = False + if query.get('n'): + try: + decrypt_nsig = self._cached(self._decrypt_nsig, 'nsig', query['n'][0]) + fmt_url = update_url_query(fmt_url, { + 'n': decrypt_nsig(query['n'][0], video_id, player_url) + }) + except ExtractorError as e: + phantomjs_hint = '' + if isinstance(e, JSInterpreter.Exception): + phantomjs_hint = (f' Install {self._downloader._format_err("PhantomJS", self._downloader.Styles.EMPHASIS)} ' + f'to workaround the issue. {PhantomJSwrapper.INSTALL_HINT}\n') + if player_url: + self.report_warning( + f'nsig extraction failed: You may experience throttling for some formats\n{phantomjs_hint}' + f' n = {query["n"][0]} ; player = {player_url}', video_id=video_id, only_once=True) + self.write_debug(e, only_once=True) + else: + self.report_warning( + 'Cannot decrypt nsig without player_url: You may experience throttling for some formats', + video_id=video_id, only_once=True) + throttled = True + + tbr = float_or_none(fmt.get('averageBitrate') or fmt.get('bitrate'), 1000) + language_preference = ( + 10 if audio_track.get('audioIsDefault') and 10 + else -10 if 'descriptive' in (audio_track.get('displayName') or '').lower() and -10 + else -1) + # Some formats may have much smaller duration than others (possibly damaged during encoding) + # E.g. 2-nOtRESiUc Ref: https://github.com/yt-dlp/yt-dlp/issues/2823 + # Make sure to avoid false positives with small duration differences. + # E.g. 
__2ABJjxzNo, ySuUZEjARPY + is_damaged = try_get(fmt, lambda x: float(x['approxDurationMs']) / duration < 500) + if is_damaged: + self.report_warning( + f'{video_id}: Some formats are possibly damaged. They will be deprioritized', only_once=True) + + client_name = fmt.get(STREAMING_DATA_CLIENT_NAME) + name = fmt.get('qualityLabel') or quality.replace('audio_quality_', '') or '' + fps = int_or_none(fmt.get('fps')) or 0 + dct = { + 'asr': int_or_none(fmt.get('audioSampleRate')), + 'filesize': int_or_none(fmt.get('contentLength')), + 'format_id': f'{itag}{"-drc" if fmt.get("isDrc") else ""}', + 'format_note': join_nonempty( + join_nonempty(audio_track.get('displayName'), + language_preference > 0 and ' (default)', delim=''), + name, fmt.get('isDrc') and 'DRC', + try_get(fmt, lambda x: x['projectionType'].replace('RECTANGULAR', '').lower()), + try_get(fmt, lambda x: x['spatialAudioType'].replace('SPATIAL_AUDIO_TYPE_', '').lower()), + throttled and 'THROTTLED', is_damaged and 'DAMAGED', + (self.get_param('verbose') or all_formats) and client_name, + delim=', '), + # Format 22 is likely to be damaged. See https://github.com/yt-dlp/yt-dlp/issues/3372 + 'source_preference': ((-10 if throttled else -5 if itag == '22' else -1) + + (100 if 'Premium' in name else 0)), + 'fps': fps if fps > 1 else None, # For some formats, fps is wrongly returned as 1 + 'audio_channels': fmt.get('audioChannels'), + 'height': height, + 'quality': q(quality) - bool(fmt.get('isDrc')) / 2, + 'has_drm': bool(fmt.get('drmFamilies')), + 'tbr': tbr, + 'url': fmt_url, + 'width': int_or_none(fmt.get('width')), + 'language': join_nonempty(audio_track.get('id', '').split('.')[0], + 'desc' if language_preference < -1 else '') or None, + 'language_preference': language_preference, + # Strictly de-prioritize damaged and 3gp formats + 'preference': -10 if is_damaged else -2 if itag == '17' else None, + } + mime_mobj = re.match( + r'((?:[^/]+)/(?:[^;]+))(?:;\s*codecs="([^"]+)")?', fmt.get('mimeType') or '') + if mime_mobj: + dct['ext'] = mimetype2ext(mime_mobj.group(1)) + dct.update(parse_codecs(mime_mobj.group(2))) + if itag: + itags[itag].add(('https', dct.get('language'))) + stream_ids.append(stream_id) + single_stream = 'none' in (dct.get('acodec'), dct.get('vcodec')) + if single_stream and dct.get('ext'): + dct['container'] = dct['ext'] + '_dash' + + if (all_formats or 'dashy' in format_types) and dct['filesize']: + yield { + **dct, + 'format_id': f'{dct["format_id"]}-dashy' if all_formats else dct['format_id'], + 'protocol': 'http_dash_segments', + 'fragments': build_fragments(dct), + } + if all_formats or 'dashy' not in format_types: + dct['downloader_options'] = {'http_chunk_size': CHUNK_SIZE} + yield dct + + needs_live_processing = self._needs_live_processing(live_status, duration) + skip_bad_formats = 'incomplete' not in format_types + if self._configuration_arg('include_incomplete_formats'): + skip_bad_formats = False + self._downloader.deprecated_feature('[youtube] include_incomplete_formats extractor argument is deprecated. 
' + 'Use formats=incomplete extractor argument instead') + + skip_manifests = set(self._configuration_arg('skip')) + if (not self.get_param('youtube_include_hls_manifest', True) + or needs_live_processing == 'is_live' # These will be filtered out by YoutubeDL anyway + or needs_live_processing and skip_bad_formats): + skip_manifests.add('hls') + + if not self.get_param('youtube_include_dash_manifest', True): + skip_manifests.add('dash') + if self._configuration_arg('include_live_dash'): + self._downloader.deprecated_feature('[youtube] include_live_dash extractor argument is deprecated. ' + 'Use formats=incomplete extractor argument instead') + elif skip_bad_formats and live_status == 'is_live' and needs_live_processing != 'is_live': + skip_manifests.add('dash') + + def process_manifest_format(f, proto, client_name, itag): + key = (proto, f.get('language')) + if not all_formats and key in itags[itag]: + return False + itags[itag].add(key) + + if itag and all_formats: + f['format_id'] = f'{itag}-{proto}' + elif any(p != proto for p, _ in itags[itag]): + f['format_id'] = f'{itag}-{proto}' + elif itag: + f['format_id'] = itag + + if f.get('source_preference') is None: + f['source_preference'] = -1 + + if itag in ('616', '235'): + f['format_note'] = join_nonempty(f.get('format_note'), 'Premium', delim=' ') + f['source_preference'] += 100 + + f['quality'] = q(itag_qualities.get(try_get(f, lambda f: f['format_id'].split('-')[0]), -1)) + if f['quality'] == -1 and f.get('height'): + f['quality'] = q(res_qualities[min(res_qualities, key=lambda x: abs(x - f['height']))]) + if self.get_param('verbose') or all_formats: + f['format_note'] = join_nonempty(f.get('format_note'), client_name, delim=', ') + if f.get('fps') and f['fps'] <= 1: + del f['fps'] + + if proto == 'hls' and f.get('has_drm'): + f['has_drm'] = 'maybe' + f['source_preference'] -= 5 + return True + + subtitles = {} + for sd in streaming_data: + client_name = sd.get(STREAMING_DATA_CLIENT_NAME) + + hls_manifest_url = 'hls' not in skip_manifests and sd.get('hlsManifestUrl') + if hls_manifest_url: + fmts, subs = self._extract_m3u8_formats_and_subtitles( + hls_manifest_url, video_id, 'mp4', fatal=False, live=live_status == 'is_live') + subtitles = self._merge_subtitles(subs, subtitles) + for f in fmts: + if process_manifest_format(f, 'hls', client_name, self._search_regex( + r'/itag/(\d+)', f['url'], 'itag', default=None)): + yield f + + dash_manifest_url = 'dash' not in skip_manifests and sd.get('dashManifestUrl') + if dash_manifest_url: + formats, subs = self._extract_mpd_formats_and_subtitles(dash_manifest_url, video_id, fatal=False) + subtitles = self._merge_subtitles(subs, subtitles) # Prioritize HLS subs over DASH + for f in formats: + if process_manifest_format(f, 'dash', client_name, f['format_id']): + f['filesize'] = int_or_none(self._search_regex( + r'/clen/(\d+)', f.get('fragment_base_url') or f['url'], 'file size', default=None)) + if needs_live_processing: + f['is_from_start'] = True + + yield f + yield subtitles + + def _extract_storyboard(self, player_responses, duration): + spec = get_first( + player_responses, ('storyboards', 'playerStoryboardSpecRenderer', 'spec'), default='').split('|')[::-1] + base_url = url_or_none(urljoin('https://i.ytimg.com/', spec.pop() or None)) + if not base_url: + return + L = len(spec) - 1 + for i, args in enumerate(spec): + args = args.split('#') + counts = list(map(int_or_none, args[:5])) + if len(args) != 8 or not all(counts): + self.report_warning(f'Malformed storyboard {i}: 
{"#".join(args)}{bug_reports_message()}') + continue + width, height, frame_count, cols, rows = counts + N, sigh = args[6:] + + url = base_url.replace('$L', str(L - i)).replace('$N', N) + f'&sigh={sigh}' + fragment_count = frame_count / (cols * rows) + fragment_duration = duration / fragment_count + yield { + 'format_id': f'sb{i}', + 'format_note': 'storyboard', + 'ext': 'mhtml', + 'protocol': 'mhtml', + 'acodec': 'none', + 'vcodec': 'none', + 'url': url, + 'width': width, + 'height': height, + 'fps': frame_count / duration, + 'rows': rows, + 'columns': cols, + 'fragments': [{ + 'url': url.replace('$M', str(j)), + 'duration': min(fragment_duration, duration - (j * fragment_duration)), + } for j in range(math.ceil(fragment_count))], + } + + def _download_player_responses(self, url, smuggled_data, video_id, webpage_url): + webpage = None + if 'webpage' not in self._configuration_arg('player_skip'): + query = {'bpctr': '9999999999', 'has_verified': '1'} + pp = self._configuration_arg('player_params', [None], casesense=True)[0] + if pp: + query['pp'] = pp + webpage = self._download_webpage( + webpage_url, video_id, fatal=False, query=query) + + master_ytcfg = self.extract_ytcfg(video_id, webpage) or self._get_default_ytcfg() + + player_responses, player_url = self._extract_player_responses( + self._get_requested_clients(url, smuggled_data), + video_id, webpage, master_ytcfg, smuggled_data) + + return webpage, master_ytcfg, player_responses, player_url + + def _list_formats(self, video_id, microformats, video_details, player_responses, player_url, duration=None): + live_broadcast_details = traverse_obj(microformats, (..., 'liveBroadcastDetails')) + is_live = get_first(video_details, 'isLive') + if is_live is None: + is_live = get_first(live_broadcast_details, 'isLiveNow') + live_content = get_first(video_details, 'isLiveContent') + is_upcoming = get_first(video_details, 'isUpcoming') + post_live = get_first(video_details, 'isPostLiveDvr') + live_status = ('post_live' if post_live + else 'is_live' if is_live + else 'is_upcoming' if is_upcoming + else 'was_live' if live_content + else 'not_live' if False in (is_live, live_content) + else None) + streaming_data = traverse_obj(player_responses, (..., 'streamingData')) + *formats, subtitles = self._extract_formats_and_subtitles(streaming_data, video_id, player_url, live_status, duration) + if all(f.get('has_drm') for f in formats): + # If there are no formats that definitely don't have DRM, all have DRM + for f in formats: + f['has_drm'] = True + + return live_broadcast_details, live_status, streaming_data, formats, subtitles + + def _real_extract(self, url): + url, smuggled_data = unsmuggle_url(url, {}) + video_id = self._match_id(url) + + base_url = self.http_scheme() + '//www.youtube.com/' + webpage_url = base_url + 'watch?v=' + video_id + + webpage, master_ytcfg, player_responses, player_url = self._download_player_responses(url, smuggled_data, video_id, webpage_url) + + playability_statuses = traverse_obj( + player_responses, (..., 'playabilityStatus'), expected_type=dict) + + trailer_video_id = get_first( + playability_statuses, + ('errorScreen', 'playerLegacyDesktopYpcTrailerRenderer', 'trailerVideoId'), + expected_type=str) + if trailer_video_id: + return self.url_result( + trailer_video_id, self.ie_key(), trailer_video_id) + + search_meta = ((lambda x: self._html_search_meta(x, webpage, default=None)) + if webpage else (lambda x: None)) + + video_details = traverse_obj(player_responses, (..., 'videoDetails'), expected_type=dict) + 
microformats = traverse_obj( + player_responses, (..., 'microformat', 'playerMicroformatRenderer'), + expected_type=dict) + + translated_title = self._get_text(microformats, (..., 'title')) + video_title = (self._preferred_lang and translated_title + or get_first(video_details, 'title') # primary + or translated_title + or search_meta(['og:title', 'twitter:title', 'title'])) + translated_description = self._get_text(microformats, (..., 'description')) + original_description = get_first(video_details, 'shortDescription') + video_description = ( + self._preferred_lang and translated_description + # If original description is blank, it will be an empty string. + # Do not prefer translated description in this case. + or original_description if original_description is not None else translated_description) + + multifeed_metadata_list = get_first( + player_responses, + ('multicamera', 'playerLegacyMulticameraRenderer', 'metadataList'), + expected_type=str) + if multifeed_metadata_list and not smuggled_data.get('force_singlefeed'): + if self.get_param('noplaylist'): + self.to_screen('Downloading just video %s because of --no-playlist' % video_id) + else: + entries = [] + feed_ids = [] + for feed in multifeed_metadata_list.split(','): + # Unquote should take place before split on comma (,) since textual + # fields may contain comma as well (see + # https://github.com/ytdl-org/youtube-dl/issues/8536) + feed_data = urllib.parse.parse_qs( + urllib.parse.unquote_plus(feed)) + + def feed_entry(name): + return try_get( + feed_data, lambda x: x[name][0], str) + + feed_id = feed_entry('id') + if not feed_id: + continue + feed_title = feed_entry('title') + title = video_title + if feed_title: + title += ' (%s)' % feed_title + entries.append({ + '_type': 'url_transparent', + 'ie_key': 'Youtube', + 'url': smuggle_url( + '%swatch?v=%s' % (base_url, feed_data['id'][0]), + {'force_singlefeed': True}), + 'title': title, + }) + feed_ids.append(feed_id) + self.to_screen( + 'Downloading multifeed video (%s) - add --no-playlist to just download video %s' + % (', '.join(feed_ids), video_id)) + return self.playlist_result( + entries, video_id, video_title, video_description) + + duration = (int_or_none(get_first(video_details, 'lengthSeconds')) + or int_or_none(get_first(microformats, 'lengthSeconds')) + or parse_duration(search_meta('duration')) or None) + + live_broadcast_details, live_status, streaming_data, formats, automatic_captions = \ + self._list_formats(video_id, microformats, video_details, player_responses, player_url, duration) + if live_status == 'post_live': + self.write_debug(f'{video_id}: Video is in Post-Live Manifestless mode') + + if not formats: + if not self.get_param('allow_unplayable_formats') and traverse_obj(streaming_data, (..., 'licenseInfos')): + self.report_drm(video_id) + pemr = get_first( + playability_statuses, + ('errorScreen', 'playerErrorMessageRenderer'), expected_type=dict) or {} + reason = self._get_text(pemr, 'reason') or get_first(playability_statuses, 'reason') + subreason = clean_html(self._get_text(pemr, 'subreason') or '') + if subreason: + if subreason == 'The uploader has not made this video available in your country.': + countries = get_first(microformats, 'availableCountries') + if not countries: + regions_allowed = search_meta('regionsAllowed') + countries = regions_allowed.split(',') if regions_allowed else None + self.raise_geo_restricted(subreason, countries, metadata_available=True) + reason += f'. 
{subreason}' + if reason: + self.raise_no_formats(reason, expected=True) + + keywords = get_first(video_details, 'keywords', expected_type=list) or [] + if not keywords and webpage: + keywords = [ + unescapeHTML(m.group('content')) + for m in re.finditer(self._meta_regex('og:video:tag'), webpage)] + for keyword in keywords: + if keyword.startswith('yt:stretch='): + mobj = re.search(r'(\d+)\s*:\s*(\d+)', keyword) + if mobj: + # NB: float is intentional for forcing float division + w, h = (float(v) for v in mobj.groups()) + if w > 0 and h > 0: + ratio = w / h + for f in formats: + if f.get('vcodec') != 'none': + f['stretched_ratio'] = ratio + break + thumbnails = self._extract_thumbnails((video_details, microformats), (..., ..., 'thumbnail')) + thumbnail_url = search_meta(['og:image', 'twitter:image']) + if thumbnail_url: + thumbnails.append({ + 'url': thumbnail_url, + }) + original_thumbnails = thumbnails.copy() + + # The best resolution thumbnails sometimes does not appear in the webpage + # See: https://github.com/yt-dlp/yt-dlp/issues/340 + # List of possible thumbnails - Ref: <https://stackoverflow.com/a/20542029> + thumbnail_names = [ + # While the *1,*2,*3 thumbnails are just below their corresponding "*default" variants + # in resolution, these are not the custom thumbnail. So de-prioritize them + 'maxresdefault', 'hq720', 'sddefault', 'hqdefault', '0', 'mqdefault', 'default', + 'sd1', 'sd2', 'sd3', 'hq1', 'hq2', 'hq3', 'mq1', 'mq2', 'mq3', '1', '2', '3' + ] + n_thumbnail_names = len(thumbnail_names) + thumbnails.extend({ + 'url': 'https://i.ytimg.com/vi{webp}/{video_id}/{name}{live}.{ext}'.format( + video_id=video_id, name=name, ext=ext, + webp='_webp' if ext == 'webp' else '', live='_live' if live_status == 'is_live' else ''), + } for name in thumbnail_names for ext in ('webp', 'jpg')) + for thumb in thumbnails: + i = next((i for i, t in enumerate(thumbnail_names) if f'/{video_id}/{t}' in thumb['url']), n_thumbnail_names) + thumb['preference'] = (0 if '.webp' in thumb['url'] else -1) - (2 * i) + self._remove_duplicate_formats(thumbnails) + self._downloader._sort_thumbnails(original_thumbnails) + + category = get_first(microformats, 'category') or search_meta('genre') + channel_id = self.ucid_or_none(str_or_none( + get_first(video_details, 'channelId') + or get_first(microformats, 'externalChannelId') + or search_meta('channelId'))) + owner_profile_url = get_first(microformats, 'ownerProfileUrl') + + live_start_time = parse_iso8601(get_first(live_broadcast_details, 'startTimestamp')) + live_end_time = parse_iso8601(get_first(live_broadcast_details, 'endTimestamp')) + if not duration and live_end_time and live_start_time: + duration = live_end_time - live_start_time + + needs_live_processing = self._needs_live_processing(live_status, duration) + + def is_bad_format(fmt): + if needs_live_processing and not fmt.get('is_from_start'): + return True + elif (live_status == 'is_live' and needs_live_processing != 'is_live' + and fmt.get('protocol') == 'http_dash_segments'): + return True + + for fmt in filter(is_bad_format, formats): + fmt['preference'] = (fmt.get('preference') or -1) - 10 + fmt['format_note'] = join_nonempty(fmt.get('format_note'), '(Last 2 hours)', delim=' ') + + if needs_live_processing: + self._prepare_live_from_start_formats( + formats, video_id, live_start_time, url, webpage_url, smuggled_data, live_status == 'is_live') + + formats.extend(self._extract_storyboard(player_responses, duration)) + + channel_handle = self.handle_from_url(owner_profile_url) + + info = { + 
'id': video_id, + 'title': video_title, + 'formats': formats, + 'thumbnails': thumbnails, + # The best thumbnail that we are sure exists. Prevents unnecessary + # URL checking if user don't care about getting the best possible thumbnail + 'thumbnail': traverse_obj(original_thumbnails, (-1, 'url')), + 'description': video_description, + 'channel_id': channel_id, + 'channel_url': format_field(channel_id, None, 'https://www.youtube.com/channel/%s', default=None), + 'duration': duration, + 'view_count': int_or_none( + get_first((video_details, microformats), (..., 'viewCount')) + or search_meta('interactionCount')), + 'average_rating': float_or_none(get_first(video_details, 'averageRating')), + 'age_limit': 18 if ( + get_first(microformats, 'isFamilySafe') is False + or search_meta('isFamilyFriendly') == 'false' + or search_meta('og:restrictions:age') == '18+') else 0, + 'webpage_url': webpage_url, + 'categories': [category] if category else None, + 'tags': keywords, + 'playable_in_embed': get_first(playability_statuses, 'playableInEmbed'), + 'live_status': live_status, + 'release_timestamp': live_start_time, + '_format_sort_fields': ( # source_preference is lower for throttled/potentially damaged formats + 'quality', 'res', 'fps', 'hdr:12', 'source', 'vcodec:vp9.2', 'channels', 'acodec', 'lang', 'proto') + } + + subtitles = {} + pctr = traverse_obj(player_responses, (..., 'captions', 'playerCaptionsTracklistRenderer'), expected_type=dict) + if pctr: + def get_lang_code(track): + return (remove_start(track.get('vssId') or '', '.').replace('.', '-') + or track.get('languageCode')) + + # Converted into dicts to remove duplicates + captions = { + get_lang_code(sub): sub + for sub in traverse_obj(pctr, (..., 'captionTracks', ...))} + translation_languages = { + lang.get('languageCode'): self._get_text(lang.get('languageName'), max_runs=1) + for lang in traverse_obj(pctr, (..., 'translationLanguages', ...))} + + def process_language(container, base_url, lang_code, sub_name, query): + lang_subs = container.setdefault(lang_code, []) + for fmt in self._SUBTITLE_FORMATS: + query.update({ + 'fmt': fmt, + }) + lang_subs.append({ + 'ext': fmt, + 'url': urljoin('https://www.youtube.com', update_url_query(base_url, query)), + 'name': sub_name, + }) + + # NB: Constructing the full subtitle dictionary is slow + get_translated_subs = 'translated_subs' not in self._configuration_arg('skip') and ( + self.get_param('writeautomaticsub', False) or self.get_param('listsubtitles')) + for lang_code, caption_track in captions.items(): + base_url = caption_track.get('baseUrl') + orig_lang = parse_qs(base_url).get('lang', [None])[-1] + if not base_url: + continue + lang_name = self._get_text(caption_track, 'name', max_runs=1) + if caption_track.get('kind') != 'asr': + if not lang_code: + continue + process_language( + subtitles, base_url, lang_code, lang_name, {}) + if not caption_track.get('isTranslatable'): + continue + for trans_code, trans_name in translation_languages.items(): + if not trans_code: + continue + orig_trans_code = trans_code + if caption_track.get('kind') != 'asr' and trans_code != 'und': + if not get_translated_subs: + continue + trans_code += f'-{lang_code}' + trans_name += format_field(lang_name, None, ' from %s') + if lang_code == f'a-{orig_trans_code}': + # Set audio language based on original subtitles + for f in formats: + if f.get('acodec') != 'none' and not f.get('language'): + f['language'] = orig_trans_code + # Add an "-orig" label to the original language so that it can be distinguished. 
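+ # (i.e. distinguished from the machine-translated variants added in this same loop)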
+ # The subs are returned without "-orig" as well for compatibility + process_language( + automatic_captions, base_url, f'{trans_code}-orig', f'{trans_name} (Original)', {}) + # Setting tlang=lang returns damaged subtitles. + process_language(automatic_captions, base_url, trans_code, trans_name, + {} if orig_lang == orig_trans_code else {'tlang': trans_code}) + + info['automatic_captions'] = automatic_captions + info['subtitles'] = subtitles + + parsed_url = urllib.parse.urlparse(url) + for component in [parsed_url.fragment, parsed_url.query]: + query = urllib.parse.parse_qs(component) + for k, v in query.items(): + for d_k, s_ks in [('start', ('start', 't')), ('end', ('end',))]: + d_k += '_time' + if d_k not in info and k in s_ks: + info[d_k] = parse_duration(query[k][0]) + + # YouTube Music Auto-generated description + if (video_description or '').strip().endswith('\nAuto-generated by YouTube.'): + # XXX: Causes catastrophic backtracking if description has "·" + # E.g. https://www.youtube.com/watch?v=DoPaAxMQoiI + # Simulating atomic groups: (?P<a>[^xy]+)x => (?=(?P<a>[^xy]+))(?P=a)x + # reduces it, but does not fully fix it. https://regex101.com/r/8Ssf2h/2 + mobj = re.search( + r'''(?xs) + (?=(?P<track>[^\n·]+))(?P=track)· + (?=(?P<artist>[^\n]+))(?P=artist)\n+ + (?=(?P<album>[^\n]+))(?P=album)\n + (?:.+?℗\s*(?P<release_year>\d{4})(?!\d))? + (?:.+?Released on\s*:\s*(?P<release_date>\d{4}-\d{2}-\d{2}))? + (.+?\nArtist\s*:\s* + (?=(?P<clean_artist>[^\n]+))(?P=clean_artist)\n + )?.+\nAuto-generated\ by\ YouTube\.\s*$ + ''', video_description) + if mobj: + release_year = mobj.group('release_year') + release_date = mobj.group('release_date') + if release_date: + release_date = release_date.replace('-', '') + if not release_year: + release_year = release_date[:4] + info.update({ + 'album': mobj.group('album').strip(), + 'artist': mobj.group('clean_artist') or ', '.join(a.strip() for a in mobj.group('artist').split('·')), + 'track': mobj.group('track').strip(), + 'release_date': release_date, + 'release_year': int_or_none(release_year), + }) + + initial_data = None + if webpage: + initial_data = self.extract_yt_initial_data(video_id, webpage, fatal=False) + if not traverse_obj(initial_data, 'contents'): + self.report_warning('Incomplete data received in embedded initial data; re-fetching using API.') + initial_data = None + if not initial_data: + query = {'videoId': video_id} + query.update(self._get_checkok_params()) + initial_data = self._extract_response( + item_id=video_id, ep='next', fatal=False, + ytcfg=master_ytcfg, query=query, check_get_keys='contents', + headers=self.generate_api_headers(ytcfg=master_ytcfg), + note='Downloading initial data API JSON') + + info['comment_count'] = traverse_obj(initial_data, ( + 'contents', 'twoColumnWatchNextResults', 'results', 'results', 'contents', ..., 'itemSectionRenderer', + 'contents', ..., 'commentsEntryPointHeaderRenderer', 'commentCount' + ), ( + 'engagementPanels', lambda _, v: v['engagementPanelSectionListRenderer']['panelIdentifier'] == 'comment-item-section', + 'engagementPanelSectionListRenderer', 'header', 'engagementPanelTitleHeaderRenderer', 'contextualInfo' + ), expected_type=self._get_count, get_all=False) + + try: # This will error if there is no livechat + initial_data['contents']['twoColumnWatchNextResults']['conversationBar']['liveChatRenderer']['continuations'][0]['reloadContinuationData']['continuation'] + except (KeyError, IndexError, TypeError): + pass + else: + info.setdefault('subtitles', {})['live_chat'] = [{ + # url is 
needed to set cookies + 'url': f'https://www.youtube.com/watch?v={video_id}&bpctr=9999999999&has_verified=1', + 'video_id': video_id, + 'ext': 'json', + 'protocol': ('youtube_live_chat' if live_status in ('is_live', 'is_upcoming') + else 'youtube_live_chat_replay'), + }] + + if initial_data: + info['chapters'] = ( + self._extract_chapters_from_json(initial_data, duration) + or self._extract_chapters_from_engagement_panel(initial_data, duration) + or self._extract_chapters_from_description(video_description, duration) + or None) + + info['heatmap'] = self._extract_heatmap(initial_data) + + contents = traverse_obj( + initial_data, ('contents', 'twoColumnWatchNextResults', 'results', 'results', 'contents'), + expected_type=list, default=[]) + + vpir = get_first(contents, 'videoPrimaryInfoRenderer') + if vpir: + stl = vpir.get('superTitleLink') + if stl: + stl = self._get_text(stl) + if try_get( + vpir, + lambda x: x['superTitleIcon']['iconType']) == 'LOCATION_PIN': + info['location'] = stl + else: + mobj = re.search(r'(.+?)\s*S(\d+)\s*•?\s*E(\d+)', stl) + if mobj: + info.update({ + 'series': mobj.group(1), + 'season_number': int(mobj.group(2)), + 'episode_number': int(mobj.group(3)), + }) + for tlb in (try_get( + vpir, + lambda x: x['videoActions']['menuRenderer']['topLevelButtons'], + list) or []): + tbrs = variadic( + traverse_obj( + tlb, ('toggleButtonRenderer', ...), + ('segmentedLikeDislikeButtonRenderer', ..., 'toggleButtonRenderer'))) + for tbr in tbrs: + for getter, regex in [( + lambda x: x['defaultText']['accessibility']['accessibilityData'], + r'(?P<count>[\d,]+)\s*(?P<type>(?:dis)?like)'), ([ + lambda x: x['accessibility'], + lambda x: x['accessibilityData']['accessibilityData'], + ], r'(?P<type>(?:dis)?like) this video along with (?P<count>[\d,]+) other people')]: + label = (try_get(tbr, getter, dict) or {}).get('label') + if label: + mobj = re.match(regex, label) + if mobj: + info[mobj.group('type') + '_count'] = str_to_int(mobj.group('count')) + break + sbr_tooltip = try_get( + vpir, lambda x: x['sentimentBar']['sentimentBarRenderer']['tooltip']) + if sbr_tooltip: + like_count, dislike_count = sbr_tooltip.split(' / ') + info.update({ + 'like_count': str_to_int(like_count), + 'dislike_count': str_to_int(dislike_count), + }) + vcr = traverse_obj(vpir, ('viewCount', 'videoViewCountRenderer')) + if vcr: + vc = self._get_count(vcr, 'viewCount') + # Upcoming premieres with waiting count are treated as live here + if vcr.get('isLive'): + info['concurrent_view_count'] = vc + elif info.get('view_count') is None: + info['view_count'] = vc + + vsir = get_first(contents, 'videoSecondaryInfoRenderer') + if vsir: + vor = traverse_obj(vsir, ('owner', 'videoOwnerRenderer')) + info.update({ + 'channel': self._get_text(vor, 'title'), + 'channel_follower_count': self._get_count(vor, 'subscriberCountText')}) + + if not channel_handle: + channel_handle = self.handle_from_url( + traverse_obj(vor, ( + ('navigationEndpoint', ('title', 'runs', ..., 'navigationEndpoint')), + (('commandMetadata', 'webCommandMetadata', 'url'), ('browseEndpoint', 'canonicalBaseUrl')), + {str}), get_all=False)) + + rows = try_get( + vsir, + lambda x: x['metadataRowContainer']['metadataRowContainerRenderer']['rows'], + list) or [] + multiple_songs = False + for row in rows: + if try_get(row, lambda x: x['metadataRowRenderer']['hasDividerLine']) is True: + multiple_songs = True + break + for row in rows: + mrr = row.get('metadataRowRenderer') or {} + mrr_title = mrr.get('title') + if not mrr_title: + continue + mrr_title = 
self._get_text(mrr, 'title') + mrr_contents_text = self._get_text(mrr, ('contents', 0)) + if mrr_title == 'License': + info['license'] = mrr_contents_text + elif not multiple_songs: + if mrr_title == 'Album': + info['album'] = mrr_contents_text + elif mrr_title == 'Artist': + info['artist'] = mrr_contents_text + elif mrr_title == 'Song': + info['track'] = mrr_contents_text + owner_badges = self._extract_badges(traverse_obj(vsir, ('owner', 'videoOwnerRenderer', 'badges'))) + if self._has_badge(owner_badges, BadgeType.VERIFIED): + info['channel_is_verified'] = True + + info.update({ + 'uploader': info.get('channel'), + 'uploader_id': channel_handle, + 'uploader_url': format_field(channel_handle, None, 'https://www.youtube.com/%s', default=None), + }) + # The upload date for scheduled, live and past live streams / premieres in microformats + # may be different from the stream date. Although not in UTC, we will prefer it in this case. + # See: https://github.com/yt-dlp/yt-dlp/pull/2223#issuecomment-1008485139 + upload_date = ( + unified_strdate(get_first(microformats, 'uploadDate')) + or unified_strdate(search_meta('uploadDate'))) + if not upload_date or ( + live_status in ('not_live', None) + and 'no-youtube-prefer-utc-upload-date' not in self.get_param('compat_opts', []) + ): + upload_date = strftime_or_none( + self._parse_time_text(self._get_text(vpir, 'dateText'))) or upload_date + info['upload_date'] = upload_date + + for s_k, d_k in [('artist', 'creator'), ('track', 'alt_title')]: + v = info.get(s_k) + if v: + info[d_k] = v + + badges = self._extract_badges(traverse_obj(vpir, 'badges')) + + is_private = (self._has_badge(badges, BadgeType.AVAILABILITY_PRIVATE) + or get_first(video_details, 'isPrivate', expected_type=bool)) + + info['availability'] = ( + 'public' if self._has_badge(badges, BadgeType.AVAILABILITY_PUBLIC) + else self._availability( + is_private=is_private, + needs_premium=( + self._has_badge(badges, BadgeType.AVAILABILITY_PREMIUM) + or False if initial_data and is_private is not None else None), + needs_subscription=( + self._has_badge(badges, BadgeType.AVAILABILITY_SUBSCRIPTION) + or False if initial_data and is_private is not None else None), + needs_auth=info['age_limit'] >= 18, + is_unlisted=None if is_private is None else ( + self._has_badge(badges, BadgeType.AVAILABILITY_UNLISTED) + or get_first(microformats, 'isUnlisted', expected_type=bool)))) + + info['__post_extractor'] = self.extract_comments(master_ytcfg, video_id, contents, webpage) + + self.mark_watched(video_id, player_responses) + + return info + + +class YoutubeTabBaseInfoExtractor(YoutubeBaseInfoExtractor): + @staticmethod + def passthrough_smuggled_data(func): + def _smuggle(info, smuggled_data): + if info.get('_type') not in ('url', 'url_transparent'): + return info + if smuggled_data.get('is_music_url'): + parsed_url = urllib.parse.urlparse(info['url']) + if parsed_url.netloc in ('www.youtube.com', 'music.youtube.com'): + smuggled_data.pop('is_music_url') + info['url'] = urllib.parse.urlunparse(parsed_url._replace(netloc='music.youtube.com')) + if smuggled_data: + info['url'] = smuggle_url(info['url'], smuggled_data) + return info + + @functools.wraps(func) + def wrapper(self, url): + url, smuggled_data = unsmuggle_url(url, {}) + if self.is_music_url(url): + smuggled_data['is_music_url'] = True + info_dict = func(self, url, smuggled_data) + if smuggled_data: + _smuggle(info_dict, smuggled_data) + if info_dict.get('entries'): + info_dict['entries'] = (_smuggle(i, smuggled_data.copy()) for i in 
info_dict['entries']) + return info_dict + return wrapper + + @staticmethod + def _extract_basic_item_renderer(item): + # Modified from _extract_grid_item_renderer + known_basic_renderers = ( + 'playlistRenderer', 'videoRenderer', 'channelRenderer', 'showRenderer', 'reelItemRenderer' + ) + for key, renderer in item.items(): + if not isinstance(renderer, dict): + continue + elif key in known_basic_renderers: + return renderer + elif key.startswith('grid') and key.endswith('Renderer'): + return renderer + + def _extract_channel_renderer(self, renderer): + channel_id = self.ucid_or_none(renderer['channelId']) + title = self._get_text(renderer, 'title') + channel_url = format_field(channel_id, None, 'https://www.youtube.com/channel/%s', default=None) + channel_handle = self.handle_from_url( + traverse_obj(renderer, ( + 'navigationEndpoint', (('commandMetadata', 'webCommandMetadata', 'url'), + ('browseEndpoint', 'canonicalBaseUrl')), + {str}), get_all=False)) + if not channel_handle: + # As of 2023-06-01, YouTube sets subscriberCountText to the handle in search + channel_handle = self.handle_or_none(self._get_text(renderer, 'subscriberCountText')) + return { + '_type': 'url', + 'url': channel_url, + 'id': channel_id, + 'ie_key': YoutubeTabIE.ie_key(), + 'channel': title, + 'uploader': title, + 'channel_id': channel_id, + 'channel_url': channel_url, + 'title': title, + 'uploader_id': channel_handle, + 'uploader_url': format_field(channel_handle, None, 'https://www.youtube.com/%s', default=None), + # See above. YouTube sets videoCountText to the subscriber text in search channel renderers. + # However, in feed/channels this is set correctly to the subscriber count + 'channel_follower_count': traverse_obj( + renderer, 'subscriberCountText', 'videoCountText', expected_type=self._get_count), + 'thumbnails': self._extract_thumbnails(renderer, 'thumbnail'), + 'playlist_count': ( + # videoCountText may be the subscriber count + self._get_count(renderer, 'videoCountText') + if self._get_count(renderer, 'subscriberCountText') is not None else None), + 'description': self._get_text(renderer, 'descriptionSnippet'), + 'channel_is_verified': True if self._has_badge( + self._extract_badges(traverse_obj(renderer, 'ownerBadges')), BadgeType.VERIFIED) else None, + } + + def _grid_entries(self, grid_renderer): + for item in grid_renderer['items']: + if not isinstance(item, dict): + continue + renderer = self._extract_basic_item_renderer(item) + if not isinstance(renderer, dict): + continue + title = self._get_text(renderer, 'title') + + # playlist + playlist_id = renderer.get('playlistId') + if playlist_id: + yield self.url_result( + 'https://www.youtube.com/playlist?list=%s' % playlist_id, + ie=YoutubeTabIE.ie_key(), video_id=playlist_id, + video_title=title) + continue + # video + video_id = renderer.get('videoId') + if video_id: + yield self._extract_video(renderer) + continue + # channel + channel_id = renderer.get('channelId') + if channel_id: + yield self._extract_channel_renderer(renderer) + continue + # generic endpoint URL support + ep_url = urljoin('https://www.youtube.com/', try_get( + renderer, lambda x: x['navigationEndpoint']['commandMetadata']['webCommandMetadata']['url'], + str)) + if ep_url: + for ie in (YoutubeTabIE, YoutubePlaylistIE, YoutubeIE): + if ie.suitable(ep_url): + yield self.url_result( + ep_url, ie=ie.ie_key(), video_id=ie._match_id(ep_url), video_title=title) + break + + def _music_reponsive_list_entry(self, renderer): + video_id = traverse_obj(renderer, ('playlistItemData', 
'videoId')) + if video_id: + title = traverse_obj(renderer, ( + 'flexColumns', 0, 'musicResponsiveListItemFlexColumnRenderer', + 'text', 'runs', 0, 'text')) + return self.url_result(f'https://music.youtube.com/watch?v={video_id}', + ie=YoutubeIE.ie_key(), video_id=video_id, title=title) + playlist_id = traverse_obj(renderer, ('navigationEndpoint', 'watchEndpoint', 'playlistId')) + if playlist_id: + video_id = traverse_obj(renderer, ('navigationEndpoint', 'watchEndpoint', 'videoId')) + if video_id: + return self.url_result(f'https://music.youtube.com/watch?v={video_id}&list={playlist_id}', + ie=YoutubeTabIE.ie_key(), video_id=playlist_id) + return self.url_result(f'https://music.youtube.com/playlist?list={playlist_id}', + ie=YoutubeTabIE.ie_key(), video_id=playlist_id) + browse_id = traverse_obj(renderer, ('navigationEndpoint', 'browseEndpoint', 'browseId')) + if browse_id: + return self.url_result(f'https://music.youtube.com/browse/{browse_id}', + ie=YoutubeTabIE.ie_key(), video_id=browse_id) + + def _shelf_entries_from_content(self, shelf_renderer): + content = shelf_renderer.get('content') + if not isinstance(content, dict): + return + renderer = content.get('gridRenderer') or content.get('expandedShelfContentsRenderer') + if renderer: + # TODO: add support for nested playlists so each shelf is processed + # as a separate playlist + # TODO: this includes only first N items + yield from self._grid_entries(renderer) + renderer = content.get('horizontalListRenderer') + if renderer: + # TODO + pass + + def _shelf_entries(self, shelf_renderer, skip_channels=False): + ep = try_get( + shelf_renderer, lambda x: x['endpoint']['commandMetadata']['webCommandMetadata']['url'], + str) + shelf_url = urljoin('https://www.youtube.com', ep) + if shelf_url: + # Skipping links to other channels; note that checking for + # endpoint.commandMetadata.webCommandMetadata.webPageType == WEB_PAGE_TYPE_CHANNEL + # will not work + if skip_channels and '/channels?' 
in shelf_url: + return + title = self._get_text(shelf_renderer, 'title') + yield self.url_result(shelf_url, video_title=title) + # Shelf may not contain shelf URL, fallback to extraction from content + yield from self._shelf_entries_from_content(shelf_renderer) + + def _playlist_entries(self, video_list_renderer): + for content in video_list_renderer['contents']: + if not isinstance(content, dict): + continue + renderer = content.get('playlistVideoRenderer') or content.get('playlistPanelVideoRenderer') + if not isinstance(renderer, dict): + continue + video_id = renderer.get('videoId') + if not video_id: + continue + yield self._extract_video(renderer) + + def _rich_entries(self, rich_grid_renderer): + renderer = traverse_obj( + rich_grid_renderer, + ('content', ('videoRenderer', 'reelItemRenderer', 'playlistRenderer')), get_all=False) or {} + video_id = renderer.get('videoId') + if video_id: + yield self._extract_video(renderer) + return + playlist_id = renderer.get('playlistId') + if playlist_id: + yield self.url_result( + f'https://www.youtube.com/playlist?list={playlist_id}', + ie=YoutubeTabIE.ie_key(), video_id=playlist_id, + video_title=self._get_text(renderer, 'title')) + return + + def _video_entry(self, video_renderer): + video_id = video_renderer.get('videoId') + if video_id: + return self._extract_video(video_renderer) + + def _hashtag_tile_entry(self, hashtag_tile_renderer): + url = urljoin('https://youtube.com', traverse_obj( + hashtag_tile_renderer, ('onTapCommand', 'commandMetadata', 'webCommandMetadata', 'url'))) + if url: + return self.url_result( + url, ie=YoutubeTabIE.ie_key(), title=self._get_text(hashtag_tile_renderer, 'hashtag')) + + def _post_thread_entries(self, post_thread_renderer): + post_renderer = try_get( + post_thread_renderer, lambda x: x['post']['backstagePostRenderer'], dict) + if not post_renderer: + return + # video attachment + video_renderer = try_get( + post_renderer, lambda x: x['backstageAttachment']['videoRenderer'], dict) or {} + video_id = video_renderer.get('videoId') + if video_id: + entry = self._extract_video(video_renderer) + if entry: + yield entry + # playlist attachment + playlist_id = try_get( + post_renderer, lambda x: x['backstageAttachment']['playlistRenderer']['playlistId'], str) + if playlist_id: + yield self.url_result( + 'https://www.youtube.com/playlist?list=%s' % playlist_id, + ie=YoutubeTabIE.ie_key(), video_id=playlist_id) + # inline video links + runs = try_get(post_renderer, lambda x: x['contentText']['runs'], list) or [] + for run in runs: + if not isinstance(run, dict): + continue + ep_url = try_get( + run, lambda x: x['navigationEndpoint']['urlEndpoint']['url'], str) + if not ep_url: + continue + if not YoutubeIE.suitable(ep_url): + continue + ep_video_id = YoutubeIE._match_id(ep_url) + if video_id == ep_video_id: + continue + yield self.url_result(ep_url, ie=YoutubeIE.ie_key(), video_id=ep_video_id) + + def _post_thread_continuation_entries(self, post_thread_continuation): + contents = post_thread_continuation.get('contents') + if not isinstance(contents, list): + return + for content in contents: + renderer = content.get('backstagePostThreadRenderer') + if isinstance(renderer, dict): + yield from self._post_thread_entries(renderer) + continue + renderer = content.get('videoRenderer') + if isinstance(renderer, dict): + yield self._video_entry(renderer) + + r''' # unused + def _rich_grid_entries(self, contents): + for content in contents: + video_renderer = try_get(content, lambda x: 
x['richItemRenderer']['content']['videoRenderer'], dict) + if video_renderer: + entry = self._video_entry(video_renderer) + if entry: + yield entry + ''' + + def _report_history_entries(self, renderer): + for url in traverse_obj(renderer, ( + 'rows', ..., 'reportHistoryTableRowRenderer', 'cells', ..., + 'reportHistoryTableCellRenderer', 'cell', 'reportHistoryTableTextCellRenderer', 'text', 'runs', ..., + 'navigationEndpoint', 'commandMetadata', 'webCommandMetadata', 'url')): + yield self.url_result(urljoin('https://www.youtube.com', url), YoutubeIE) + + def _extract_entries(self, parent_renderer, continuation_list): + # continuation_list is modified in-place with continuation_list = [continuation_token] + continuation_list[:] = [None] + contents = try_get(parent_renderer, lambda x: x['contents'], list) or [] + for content in contents: + if not isinstance(content, dict): + continue + is_renderer = traverse_obj( + content, 'itemSectionRenderer', 'musicShelfRenderer', 'musicShelfContinuation', + expected_type=dict) + if not is_renderer: + if content.get('richItemRenderer'): + for entry in self._rich_entries(content['richItemRenderer']): + yield entry + continuation_list[0] = self._extract_continuation(parent_renderer) + elif content.get('reportHistorySectionRenderer'): # https://www.youtube.com/reporthistory + table = traverse_obj(content, ('reportHistorySectionRenderer', 'table', 'tableRenderer')) + yield from self._report_history_entries(table) + continuation_list[0] = self._extract_continuation(table) + continue + + isr_contents = try_get(is_renderer, lambda x: x['contents'], list) or [] + for isr_content in isr_contents: + if not isinstance(isr_content, dict): + continue + + known_renderers = { + 'playlistVideoListRenderer': self._playlist_entries, + 'gridRenderer': self._grid_entries, + 'reelShelfRenderer': self._grid_entries, + 'shelfRenderer': self._shelf_entries, + 'musicResponsiveListItemRenderer': lambda x: [self._music_reponsive_list_entry(x)], + 'backstagePostThreadRenderer': self._post_thread_entries, + 'videoRenderer': lambda x: [self._video_entry(x)], + 'playlistRenderer': lambda x: self._grid_entries({'items': [{'playlistRenderer': x}]}), + 'channelRenderer': lambda x: self._grid_entries({'items': [{'channelRenderer': x}]}), + 'hashtagTileRenderer': lambda x: [self._hashtag_tile_entry(x)], + 'richGridRenderer': lambda x: self._extract_entries(x, continuation_list), + } + for key, renderer in isr_content.items(): + if key not in known_renderers: + continue + for entry in known_renderers[key](renderer): + if entry: + yield entry + continuation_list[0] = self._extract_continuation(renderer) + break + + if not continuation_list[0]: + continuation_list[0] = self._extract_continuation(is_renderer) + + if not continuation_list[0]: + continuation_list[0] = self._extract_continuation(parent_renderer) + + def _entries(self, tab, item_id, ytcfg, account_syncid, visitor_data): + continuation_list = [None] + extract_entries = lambda x: self._extract_entries(x, continuation_list) + tab_content = try_get(tab, lambda x: x['content'], dict) + if not tab_content: + return + parent_renderer = ( + try_get(tab_content, lambda x: x['sectionListRenderer'], dict) + or try_get(tab_content, lambda x: x['richGridRenderer'], dict) or {}) + yield from extract_entries(parent_renderer) + continuation = continuation_list[0] + seen_continuations = set() + for page_num in itertools.count(1): + if not continuation: + break + continuation_token = continuation.get('continuation') + if continuation_token is not 
None and continuation_token in seen_continuations: + self.write_debug('Detected YouTube feed looping - assuming end of feed.') + break + seen_continuations.add(continuation_token) + headers = self.generate_api_headers( + ytcfg=ytcfg, account_syncid=account_syncid, visitor_data=visitor_data) + response = self._extract_response( + item_id=f'{item_id} page {page_num}', + query=continuation, headers=headers, ytcfg=ytcfg, + check_get_keys=('continuationContents', 'onResponseReceivedActions', 'onResponseReceivedEndpoints')) + + if not response: + break + # Extracting updated visitor data is required to prevent an infinite extraction loop in some cases + # See: https://github.com/ytdl-org/youtube-dl/issues/28702 + visitor_data = self._extract_visitor_data(response) or visitor_data + + known_renderers = { + 'videoRenderer': (self._grid_entries, 'items'), # for membership tab + 'gridPlaylistRenderer': (self._grid_entries, 'items'), + 'gridVideoRenderer': (self._grid_entries, 'items'), + 'gridChannelRenderer': (self._grid_entries, 'items'), + 'playlistVideoRenderer': (self._playlist_entries, 'contents'), + 'itemSectionRenderer': (extract_entries, 'contents'), # for feeds + 'richItemRenderer': (extract_entries, 'contents'), # for hashtag + 'backstagePostThreadRenderer': (self._post_thread_continuation_entries, 'contents'), + 'reportHistoryTableRowRenderer': (self._report_history_entries, 'rows'), + 'playlistVideoListContinuation': (self._playlist_entries, None), + 'gridContinuation': (self._grid_entries, None), + 'itemSectionContinuation': (self._post_thread_continuation_entries, None), + 'sectionListContinuation': (extract_entries, None), # for feeds + } + + continuation_items = traverse_obj(response, ( + ('onResponseReceivedActions', 'onResponseReceivedEndpoints'), ..., + 'appendContinuationItemsAction', 'continuationItems' + ), 'continuationContents', get_all=False) + continuation_item = traverse_obj(continuation_items, 0, None, expected_type=dict, default={}) + + video_items_renderer = None + for key in continuation_item.keys(): + if key not in known_renderers: + continue + func, parent_key = known_renderers[key] + video_items_renderer = {parent_key: continuation_items} if parent_key else continuation_items + continuation_list = [None] + yield from func(video_items_renderer) + continuation = continuation_list[0] or self._extract_continuation(video_items_renderer) + + if not video_items_renderer: + break + + @staticmethod + def _extract_selected_tab(tabs, fatal=True): + for tab_renderer in tabs: + if tab_renderer.get('selected'): + return tab_renderer + if fatal: + raise ExtractorError('Unable to find selected tab') + + @staticmethod + def _extract_tab_renderers(response): + return traverse_obj( + response, ('contents', 'twoColumnBrowseResultsRenderer', 'tabs', ..., ('tabRenderer', 'expandableTabRenderer')), expected_type=dict) + + def _extract_from_tabs(self, item_id, ytcfg, data, tabs): + metadata = self._extract_metadata_from_tabs(item_id, data) + + selected_tab = self._extract_selected_tab(tabs) + metadata['title'] += format_field(selected_tab, 'title', ' - %s') + metadata['title'] += format_field(selected_tab, 'expandedText', ' - %s') + + return self.playlist_result( + self._entries( + selected_tab, metadata['id'], ytcfg, + self._extract_account_syncid(ytcfg, data), + self._extract_visitor_data(data, ytcfg)), + **metadata) + + def _extract_metadata_from_tabs(self, item_id, data): + info = {'id': item_id} + + metadata_renderer = traverse_obj(data, ('metadata', 'channelMetadataRenderer'), 
expected_type=dict) + if metadata_renderer: + channel_id = traverse_obj(metadata_renderer, ('externalId', {self.ucid_or_none}), + ('channelUrl', {self.ucid_from_url})) + info.update({ + 'channel': metadata_renderer.get('title'), + 'channel_id': channel_id, + }) + if info['channel_id']: + info['id'] = info['channel_id'] + else: + metadata_renderer = traverse_obj(data, ('metadata', 'playlistMetadataRenderer'), expected_type=dict) + + # We can get the uncropped banner/avatar by replacing the crop params with '=s0' + # See: https://github.com/yt-dlp/yt-dlp/issues/2237#issuecomment-1013694714 + def _get_uncropped(url): + return url_or_none((url or '').split('=')[0] + '=s0') + + avatar_thumbnails = self._extract_thumbnails(metadata_renderer, 'avatar') + if avatar_thumbnails: + uncropped_avatar = _get_uncropped(avatar_thumbnails[0]['url']) + if uncropped_avatar: + avatar_thumbnails.append({ + 'url': uncropped_avatar, + 'id': 'avatar_uncropped', + 'preference': 1 + }) + + channel_banners = self._extract_thumbnails( + data, ('header', ..., ('banner', 'mobileBanner', 'tvBanner'))) + for banner in channel_banners: + banner['preference'] = -10 + + if channel_banners: + uncropped_banner = _get_uncropped(channel_banners[0]['url']) + if uncropped_banner: + channel_banners.append({ + 'url': uncropped_banner, + 'id': 'banner_uncropped', + 'preference': -5 + }) + + # Deprecated - remove primary_sidebar_renderer when layout discontinued + primary_sidebar_renderer = self._extract_sidebar_info_renderer(data, 'playlistSidebarPrimaryInfoRenderer') + playlist_header_renderer = traverse_obj(data, ('header', 'playlistHeaderRenderer'), expected_type=dict) + + primary_thumbnails = self._extract_thumbnails( + primary_sidebar_renderer, ('thumbnailRenderer', ('playlistVideoThumbnailRenderer', 'playlistCustomThumbnailRenderer'), 'thumbnail')) + playlist_thumbnails = self._extract_thumbnails( + playlist_header_renderer, ('playlistHeaderBanner', 'heroPlaylistThumbnailRenderer', 'thumbnail')) + + info.update({ + 'title': (traverse_obj(metadata_renderer, 'title') + or self._get_text(data, ('header', 'hashtagHeaderRenderer', 'hashtag')) + or info['id']), + 'availability': self._extract_availability(data), + 'channel_follower_count': self._get_count(data, ('header', ..., 'subscriberCountText')), + 'description': try_get(metadata_renderer, lambda x: x.get('description', '')), + 'tags': try_get(metadata_renderer or {}, lambda x: x.get('keywords', '').split()), + 'thumbnails': (primary_thumbnails or playlist_thumbnails) + avatar_thumbnails + channel_banners, + }) + + channel_handle = ( + traverse_obj(metadata_renderer, (('vanityChannelUrl', ('ownerUrls', ...)), {self.handle_from_url}), get_all=False) + or traverse_obj(data, ('header', ..., 'channelHandleText', {self.handle_or_none}), get_all=False)) + + if channel_handle: + info.update({ + 'uploader_id': channel_handle, + 'uploader_url': format_field(channel_handle, None, 'https://www.youtube.com/%s', default=None), + }) + + channel_badges = self._extract_badges(traverse_obj(data, ('header', ..., 'badges'), get_all=False)) + if self._has_badge(channel_badges, BadgeType.VERIFIED): + info['channel_is_verified'] = True + # Playlist stats is a text runs array containing [video count, view count, last updated]. + # last updated or (view count and last updated) may be missing. 
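+ # A rough worked example (hypothetical values, added for illustration only): for a + # typical public playlist the stats runs flatten to something like + # stats = [{'runs': [{'text': '5'}]}, {'runs': [{'text': '1,234 views'}]}, + # {'simpleText': 'Last updated on Apr 1, 2023'}] + # so that _get_count(stats, 0) -> 5, _get_count(stats, 1) -> 1234 and + # _parse_time_text(_get_text(stats, 2)) -> a unix timestamp, matching the + # index-based lookups below.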
+ playlist_stats = get_first( + (primary_sidebar_renderer, playlist_header_renderer), (('stats', 'briefStats', 'numVideosText'), )) + + last_updated_unix = self._parse_time_text( + self._get_text(playlist_stats, 2) # deprecated, remove when old layout discontinued + or self._get_text(playlist_header_renderer, ('byline', 1, 'playlistBylineRenderer', 'text'))) + info['modified_date'] = strftime_or_none(last_updated_unix) + + info['view_count'] = self._get_count(playlist_stats, 1) + if info['view_count'] is None: # 0 is allowed + info['view_count'] = self._get_count(playlist_header_renderer, 'viewCountText') + if info['view_count'] is None: + info['view_count'] = self._get_count(data, ( + 'contents', 'twoColumnBrowseResultsRenderer', 'tabs', ..., 'tabRenderer', 'content', 'sectionListRenderer', + 'contents', ..., 'itemSectionRenderer', 'contents', ..., 'channelAboutFullMetadataRenderer', 'viewCountText')) + + info['playlist_count'] = self._get_count(playlist_stats, 0) + if info['playlist_count'] is None: # 0 is allowed + info['playlist_count'] = self._get_count(playlist_header_renderer, ('byline', 0, 'playlistBylineRenderer', 'text')) + + if not info.get('channel_id'): + owner = traverse_obj(playlist_header_renderer, 'ownerText') + if not owner: # Deprecated + owner = traverse_obj( + self._extract_sidebar_info_renderer(data, 'playlistSidebarSecondaryInfoRenderer'), + ('videoOwner', 'videoOwnerRenderer', 'title')) + owner_text = self._get_text(owner) + browse_ep = traverse_obj(owner, ('runs', 0, 'navigationEndpoint', 'browseEndpoint')) or {} + info.update({ + 'channel': self._search_regex(r'^by (.+) and \d+ others?$', owner_text, 'uploader', default=owner_text), + 'channel_id': self.ucid_or_none(browse_ep.get('browseId')), + 'uploader_id': self.handle_from_url(urljoin('https://www.youtube.com', browse_ep.get('canonicalBaseUrl'))) + }) + + info.update({ + 'uploader': info['channel'], + 'channel_url': format_field(info.get('channel_id'), None, 'https://www.youtube.com/channel/%s', default=None), + 'uploader_url': format_field(info.get('uploader_id'), None, 'https://www.youtube.com/%s', default=None), + }) + + return info + + def _extract_inline_playlist(self, playlist, playlist_id, data, ytcfg): + first_id = last_id = response = None + for page_num in itertools.count(1): + videos = list(self._playlist_entries(playlist)) + if not videos: + return + start = next((i for i, v in enumerate(videos) if v['id'] == last_id), -1) + 1 + if start >= len(videos): + return + yield from videos[start:] + first_id = first_id or videos[0]['id'] + last_id = videos[-1]['id'] + watch_endpoint = try_get( + playlist, lambda x: x['contents'][-1]['playlistPanelVideoRenderer']['navigationEndpoint']['watchEndpoint']) + headers = self.generate_api_headers( + ytcfg=ytcfg, account_syncid=self._extract_account_syncid(ytcfg, data), + visitor_data=self._extract_visitor_data(response, data, ytcfg)) + query = { + 'playlistId': playlist_id, + 'videoId': watch_endpoint.get('videoId') or last_id, + 'index': watch_endpoint.get('index') or len(videos), + 'params': watch_endpoint.get('params') or 'OAE%3D' + } + response = self._extract_response( + item_id='%s page %d' % (playlist_id, page_num), + query=query, ep='next', headers=headers, ytcfg=ytcfg, + check_get_keys='contents' + ) + playlist = try_get( + response, lambda x: x['contents']['twoColumnWatchNextResults']['playlist']['playlist'], dict) + + def _extract_from_playlist(self, item_id, url, data, playlist, ytcfg): + title = playlist.get('title') or try_get( + data, lambda x: 
x['titleText']['simpleText'], str) + playlist_id = playlist.get('playlistId') or item_id + + # Delegating everything except mix playlists to regular tab-based playlist URL + playlist_url = urljoin(url, try_get( + playlist, lambda x: x['endpoint']['commandMetadata']['webCommandMetadata']['url'], + str)) + + # Some playlists are unviewable but YouTube still provides a link to the (broken) playlist page [1] + # [1] MLCT, RLTDwFCb4jeqaKWnciAYM-ZVHg + is_known_unviewable = re.fullmatch(r'MLCT|RLTD[\w-]{22}', playlist_id) + + if playlist_url and playlist_url != url and not is_known_unviewable: + return self.url_result( + playlist_url, ie=YoutubeTabIE.ie_key(), video_id=playlist_id, + video_title=title) + + return self.playlist_result( + self._extract_inline_playlist(playlist, playlist_id, data, ytcfg), + playlist_id=playlist_id, playlist_title=title) + + def _extract_availability(self, data): + """ + Gets the availability of a given playlist/tab. + Note: Unless YouTube tells us explicitly, we do not assume it is public + @param data: response + """ + sidebar_renderer = self._extract_sidebar_info_renderer(data, 'playlistSidebarPrimaryInfoRenderer') or {} + playlist_header_renderer = traverse_obj(data, ('header', 'playlistHeaderRenderer')) or {} + player_header_privacy = playlist_header_renderer.get('privacy') + + badges = self._extract_badges(traverse_obj(sidebar_renderer, 'badges')) + + # Personal playlists, when authenticated, have a dropdown visibility selector instead of a badge + privacy_setting_icon = get_first( + (playlist_header_renderer, sidebar_renderer), + ('privacyForm', 'dropdownFormFieldRenderer', 'dropdown', 'dropdownRenderer', 'entries', + lambda _, v: v['privacyDropdownItemRenderer']['isSelected'], 'privacyDropdownItemRenderer', 'icon', 'iconType'), + expected_type=str) + + microformats_is_unlisted = traverse_obj( + data, ('microformat', 'microformatDataRenderer', 'unlisted'), expected_type=bool) + + return ( + 'public' if ( + self._has_badge(badges, BadgeType.AVAILABILITY_PUBLIC) + or player_header_privacy == 'PUBLIC' + or privacy_setting_icon == 'PRIVACY_PUBLIC') + else self._availability( + is_private=( + self._has_badge(badges, BadgeType.AVAILABILITY_PRIVATE) + or player_header_privacy == 'PRIVATE' if player_header_privacy is not None + else privacy_setting_icon == 'PRIVACY_PRIVATE' if privacy_setting_icon is not None else None), + is_unlisted=( + self._has_badge(badges, BadgeType.AVAILABILITY_UNLISTED) + or player_header_privacy == 'UNLISTED' if player_header_privacy is not None + else privacy_setting_icon == 'PRIVACY_UNLISTED' if privacy_setting_icon is not None + else microformats_is_unlisted if microformats_is_unlisted is not None else None), + needs_subscription=self._has_badge(badges, BadgeType.AVAILABILITY_SUBSCRIPTION) or None, + needs_premium=self._has_badge(badges, BadgeType.AVAILABILITY_PREMIUM) or None, + needs_auth=False)) + + @staticmethod + def _extract_sidebar_info_renderer(data, info_renderer, expected_type=dict): + sidebar_renderer = try_get( + data, lambda x: x['sidebar']['playlistSidebarRenderer']['items'], list) or [] + for item in sidebar_renderer: + renderer = try_get(item, lambda x: x[info_renderer], expected_type) + if renderer: + return renderer + + def _reload_with_unavailable_videos(self, item_id, data, ytcfg): + """ + Reload playlists with unavailable videos (e.g. private videos, region blocked, etc.) 
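+ Sketch of the mechanism (an assumption inferred from the request below, not a + documented API contract): the playlist's browse endpoint is re-requested with + a params flag that asks YouTube to keep the otherwise-hidden entries in the + response.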
+ """ + is_playlist = bool(traverse_obj( + data, ('metadata', 'playlistMetadataRenderer'), ('header', 'playlistHeaderRenderer'))) + if not is_playlist: + return + headers = self.generate_api_headers( + ytcfg=ytcfg, account_syncid=self._extract_account_syncid(ytcfg, data), + visitor_data=self._extract_visitor_data(data, ytcfg)) + query = { + 'params': 'wgYCCAA=', + 'browseId': f'VL{item_id}' + } + return self._extract_response( + item_id=item_id, headers=headers, query=query, + check_get_keys='contents', fatal=False, ytcfg=ytcfg, + note='Redownloading playlist API JSON with unavailable videos') + + @functools.cached_property + def skip_webpage(self): + return 'webpage' in self._configuration_arg('skip', ie_key=YoutubeTabIE.ie_key()) + + def _extract_webpage(self, url, item_id, fatal=True): + webpage, data = None, None + for retry in self.RetryManager(fatal=fatal): + try: + webpage = self._download_webpage(url, item_id, note='Downloading webpage') + data = self.extract_yt_initial_data(item_id, webpage or '', fatal=fatal) or {} + except ExtractorError as e: + if isinstance(e.cause, network_exceptions): + if not isinstance(e.cause, HTTPError) or e.cause.status not in (403, 429): + retry.error = e + continue + self._error_or_warning(e, fatal=fatal) + break + + try: + self._extract_and_report_alerts(data) + except ExtractorError as e: + self._error_or_warning(e, fatal=fatal) + break + + # Sometimes youtube returns a webpage with incomplete ytInitialData + # See: https://github.com/yt-dlp/yt-dlp/issues/116 + if not traverse_obj(data, 'contents', 'currentVideoEndpoint', 'onResponseReceivedActions'): + retry.error = ExtractorError('Incomplete yt initial data received') + continue + + return webpage, data + + def _report_playlist_authcheck(self, ytcfg, fatal=True): + """Use if failed to extract ytcfg (and data) from initial webpage""" + if not ytcfg and self.is_authenticated: + msg = 'Playlists that require authentication may not extract correctly without a successful webpage download' + if 'authcheck' not in self._configuration_arg('skip', ie_key=YoutubeTabIE.ie_key()) and fatal: + raise ExtractorError( + f'{msg}. 
If you are not downloading private content, or ' + 'your cookies are only for the first account and channel,' + ' pass "--extractor-args youtubetab:skip=authcheck" to skip this check', + expected=True) + self.report_warning(msg, only_once=True) + + def _extract_data(self, url, item_id, ytcfg=None, fatal=True, webpage_fatal=False, default_client='web'): + data = None + if not self.skip_webpage: + webpage, data = self._extract_webpage(url, item_id, fatal=webpage_fatal) + ytcfg = ytcfg or self.extract_ytcfg(item_id, webpage) + # Reject webpage data if redirected to home page without explicitly requesting + selected_tab = self._extract_selected_tab(self._extract_tab_renderers(data), fatal=False) or {} + if (url != 'https://www.youtube.com/feed/recommended' + and selected_tab.get('tabIdentifier') == 'FEwhat_to_watch' # Home page + and 'no-youtube-channel-redirect' not in self.get_param('compat_opts', [])): + msg = 'The channel/playlist does not exist and the URL redirected to youtube.com home page' + if fatal: + raise ExtractorError(msg, expected=True) + self.report_warning(msg, only_once=True) + if not data: + self._report_playlist_authcheck(ytcfg, fatal=fatal) + data = self._extract_tab_endpoint(url, item_id, ytcfg, fatal=fatal, default_client=default_client) + return data, ytcfg + + def _extract_tab_endpoint(self, url, item_id, ytcfg=None, fatal=True, default_client='web'): + headers = self.generate_api_headers(ytcfg=ytcfg, default_client=default_client) + resolve_response = self._extract_response( + item_id=item_id, query={'url': url}, check_get_keys='endpoint', headers=headers, ytcfg=ytcfg, fatal=fatal, + ep='navigation/resolve_url', note='Downloading API parameters API JSON', default_client=default_client) + endpoints = {'browseEndpoint': 'browse', 'watchEndpoint': 'next'} + for ep_key, ep in endpoints.items(): + params = try_get(resolve_response, lambda x: x['endpoint'][ep_key], dict) + if params: + return self._extract_response( + item_id=item_id, query=params, ep=ep, headers=headers, + ytcfg=ytcfg, fatal=fatal, default_client=default_client, + check_get_keys=('contents', 'currentVideoEndpoint', 'onResponseReceivedActions')) + err_note = 'Failed to resolve url (does the playlist exist?)' + if fatal: + raise ExtractorError(err_note, expected=True) + self.report_warning(err_note, item_id) + + _SEARCH_PARAMS = None + + def _search_results(self, query, params=NO_DEFAULT, default_client='web'): + data = {'query': query} + if params is NO_DEFAULT: + params = self._SEARCH_PARAMS + if params: + data['params'] = params + + content_keys = ( + ('contents', 'twoColumnSearchResultsRenderer', 'primaryContents', 'sectionListRenderer', 'contents'), + ('onResponseReceivedCommands', 0, 'appendContinuationItemsAction', 'continuationItems'), + # ytmusic search + ('contents', 'tabbedSearchResultsRenderer', 'tabs', 0, 'tabRenderer', 'content', 'sectionListRenderer', 'contents'), + ('continuationContents', ), + ) + display_id = f'query "{query}"' + check_get_keys = tuple({keys[0] for keys in content_keys}) + ytcfg = self._download_ytcfg(default_client, display_id) if not self.skip_webpage else {} + self._report_playlist_authcheck(ytcfg, fatal=False) + + continuation_list = [None] + search = None + for page_num in itertools.count(1): + data.update(continuation_list[0] or {}) + headers = self.generate_api_headers( + ytcfg=ytcfg, visitor_data=self._extract_visitor_data(search), default_client=default_client) + search = self._extract_response( + item_id=f'{display_id} page {page_num}', ep='search', query=data, + 
default_client=default_client, check_get_keys=check_get_keys, ytcfg=ytcfg, headers=headers) + slr_contents = traverse_obj(search, *content_keys) + yield from self._extract_entries({'contents': list(variadic(slr_contents))}, continuation_list) + if not continuation_list[0]: + break + + +class YoutubeTabIE(YoutubeTabBaseInfoExtractor): + IE_DESC = 'YouTube Tabs' + _VALID_URL = r'''(?x: + https?:// + (?!consent\.)(?:\w+\.)? + (?: + youtube(?:kids)?\.com| + %(invidious)s + )/ + (?: + (?P<channel_type>channel|c|user|browse)/| + (?P<not_channel> + feed/|hashtag/| + (?:playlist|watch)\?.*?\blist= + )| + (?!(?:%(reserved_names)s)\b) # Direct URLs + ) + (?P<id>[^/?\#&]+) + )''' % { + 'reserved_names': YoutubeBaseInfoExtractor._RESERVED_NAMES, + 'invidious': '|'.join(YoutubeBaseInfoExtractor._INVIDIOUS_SITES), + } + IE_NAME = 'youtube:tab' + + _TESTS = [{ + 'note': 'playlists, multipage', + 'url': 'https://www.youtube.com/c/ИгорьКлейнер/playlists?view=1&flow=grid', + 'playlist_mincount': 94, + 'info_dict': { + 'id': 'UCqj7Cz7revf5maW9g5pgNcg', + 'title': 'Igor Kleiner - Playlists', + 'description': 'md5:be97ee0f14ee314f1f002cf187166ee2', + 'uploader': 'Igor Kleiner', + 'uploader_id': '@IgorDataScience', + 'uploader_url': 'https://www.youtube.com/@IgorDataScience', + 'channel': 'Igor Kleiner', + 'channel_id': 'UCqj7Cz7revf5maW9g5pgNcg', + 'tags': ['"критическое', 'мышление"', '"наука', 'просто"', 'математика', '"анализ', 'данных"'], + 'channel_url': 'https://www.youtube.com/channel/UCqj7Cz7revf5maW9g5pgNcg', + 'channel_follower_count': int + }, + }, { + 'note': 'playlists, multipage, different order', + 'url': 'https://www.youtube.com/user/igorkle1/playlists?view=1&sort=dd', + 'playlist_mincount': 94, + 'info_dict': { + 'id': 'UCqj7Cz7revf5maW9g5pgNcg', + 'title': 'Igor Kleiner - Playlists', + 'description': 'md5:be97ee0f14ee314f1f002cf187166ee2', + 'uploader': 'Igor Kleiner', + 'uploader_id': '@IgorDataScience', + 'uploader_url': 'https://www.youtube.com/@IgorDataScience', + 'tags': ['"критическое', 'мышление"', '"наука', 'просто"', 'математика', '"анализ', 'данных"'], + 'channel_id': 'UCqj7Cz7revf5maW9g5pgNcg', + 'channel': 'Igor Kleiner', + 'channel_url': 'https://www.youtube.com/channel/UCqj7Cz7revf5maW9g5pgNcg', + 'channel_follower_count': int + }, + }, { + 'note': 'playlists, series', + 'url': 'https://www.youtube.com/c/3blue1brown/playlists?view=50&sort=dd&shelf_id=3', + 'playlist_mincount': 5, + 'info_dict': { + 'id': 'UCYO_jab_esuFRV4b17AJtAw', + 'title': '3Blue1Brown - Playlists', + 'description': 'md5:e1384e8a133307dd10edee76e875d62f', + 'channel_url': 'https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw', + 'channel': '3Blue1Brown', + 'channel_id': 'UCYO_jab_esuFRV4b17AJtAw', + 'uploader_id': '@3blue1brown', + 'uploader_url': 'https://www.youtube.com/@3blue1brown', + 'uploader': '3Blue1Brown', + 'tags': ['Mathematics'], + 'channel_follower_count': int, + 'channel_is_verified': True, + }, + }, { + 'note': 'playlists, singlepage', + 'url': 'https://www.youtube.com/user/ThirstForScience/playlists', + 'playlist_mincount': 4, + 'info_dict': { + 'id': 'UCAEtajcuhQ6an9WEzY9LEMQ', + 'title': 'ThirstForScience - Playlists', + 'description': 'md5:609399d937ea957b0f53cbffb747a14c', + 'uploader': 'ThirstForScience', + 'uploader_url': 'https://www.youtube.com/@ThirstForScience', + 'uploader_id': '@ThirstForScience', + 'channel_id': 'UCAEtajcuhQ6an9WEzY9LEMQ', + 'channel_url': 'https://www.youtube.com/channel/UCAEtajcuhQ6an9WEzY9LEMQ', + 'tags': 'count:13', + 'channel': 'ThirstForScience', +
'channel_follower_count': int + } + }, { + 'url': 'https://www.youtube.com/c/ChristophLaimer/playlists', + 'only_matching': True, + }, { + 'note': 'basic, single video playlist', + 'url': 'https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc', + 'info_dict': { + 'id': 'PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc', + 'title': 'youtube-dl public playlist', + 'description': '', + 'tags': [], + 'view_count': int, + 'modified_date': '20201130', + 'channel': 'Sergey M.', + 'channel_id': 'UCmlqkdCBesrv2Lak1mF_MxA', + 'channel_url': 'https://www.youtube.com/channel/UCmlqkdCBesrv2Lak1mF_MxA', + 'availability': 'public', + 'uploader': 'Sergey M.', + 'uploader_url': 'https://www.youtube.com/@sergeym.6173', + 'uploader_id': '@sergeym.6173', + }, + 'playlist_count': 1, + }, { + 'note': 'empty playlist', + 'url': 'https://www.youtube.com/playlist?list=PL4lCao7KL_QFodcLWhDpGCYnngnHtQ-Xf', + 'info_dict': { + 'id': 'PL4lCao7KL_QFodcLWhDpGCYnngnHtQ-Xf', + 'title': 'youtube-dl empty playlist', + 'tags': [], + 'channel': 'Sergey M.', + 'description': '', + 'modified_date': '20160902', + 'channel_id': 'UCmlqkdCBesrv2Lak1mF_MxA', + 'channel_url': 'https://www.youtube.com/channel/UCmlqkdCBesrv2Lak1mF_MxA', + 'availability': 'public', + 'uploader_url': 'https://www.youtube.com/@sergeym.6173', + 'uploader_id': '@sergeym.6173', + 'uploader': 'Sergey M.', + }, + 'playlist_count': 0, + }, { + 'note': 'Home tab', + 'url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w/featured', + 'info_dict': { + 'id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'title': 'lex will - Home', + 'description': 'md5:2163c5d0ff54ed5f598d6a7e6211e488', + 'uploader': 'lex will', + 'uploader_id': '@lexwill718', + 'channel': 'lex will', + 'tags': ['bible', 'history', 'prophesy'], + 'uploader_url': 'https://www.youtube.com/@lexwill718', + 'channel_url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w', + 'channel_id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'channel_follower_count': int + }, + 'playlist_mincount': 2, + }, { + 'note': 'Videos tab', + 'url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w/videos', + 'info_dict': { + 'id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'title': 'lex will - Videos', + 'description': 'md5:2163c5d0ff54ed5f598d6a7e6211e488', + 'uploader': 'lex will', + 'uploader_id': '@lexwill718', + 'tags': ['bible', 'history', 'prophesy'], + 'channel_url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w', + 'channel_id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'uploader_url': 'https://www.youtube.com/@lexwill718', + 'channel': 'lex will', + 'channel_follower_count': int + }, + 'playlist_mincount': 975, + }, { + 'note': 'Videos tab, sorted by popular', + 'url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w/videos?view=0&sort=p&flow=grid', + 'info_dict': { + 'id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'title': 'lex will - Videos', + 'description': 'md5:2163c5d0ff54ed5f598d6a7e6211e488', + 'uploader': 'lex will', + 'uploader_id': '@lexwill718', + 'channel_id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'uploader_url': 'https://www.youtube.com/@lexwill718', + 'channel': 'lex will', + 'tags': ['bible', 'history', 'prophesy'], + 'channel_url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w', + 'channel_follower_count': int + }, + 'playlist_mincount': 199, + }, { + 'note': 'Playlists tab', + 'url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w/playlists', + 'info_dict': { + 'id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'title': 'lex will - Playlists', + 'description': 'md5:2163c5d0ff54ed5f598d6a7e6211e488', + 
'uploader': 'lex will', + 'uploader_id': '@lexwill718', + 'uploader_url': 'https://www.youtube.com/@lexwill718', + 'channel': 'lex will', + 'channel_url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w', + 'channel_id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'tags': ['bible', 'history', 'prophesy'], + 'channel_follower_count': int + }, + 'playlist_mincount': 17, + }, { + 'note': 'Community tab', + 'url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w/community', + 'info_dict': { + 'id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'title': 'lex will - Community', + 'description': 'md5:2163c5d0ff54ed5f598d6a7e6211e488', + 'channel': 'lex will', + 'channel_url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w', + 'channel_id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'tags': ['bible', 'history', 'prophesy'], + 'channel_follower_count': int, + 'uploader_url': 'https://www.youtube.com/@lexwill718', + 'uploader_id': '@lexwill718', + 'uploader': 'lex will', + }, + 'playlist_mincount': 18, + }, { + 'note': 'Channels tab', + 'url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w/channels', + 'info_dict': { + 'id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'title': 'lex will - Channels', + 'description': 'md5:2163c5d0ff54ed5f598d6a7e6211e488', + 'channel': 'lex will', + 'channel_url': 'https://www.youtube.com/channel/UCKfVa3S1e4PHvxWcwyMMg8w', + 'channel_id': 'UCKfVa3S1e4PHvxWcwyMMg8w', + 'tags': ['bible', 'history', 'prophesy'], + 'channel_follower_count': int, + 'uploader_url': 'https://www.youtube.com/@lexwill718', + 'uploader_id': '@lexwill718', + 'uploader': 'lex will', + }, + 'playlist_mincount': 12, + }, { + 'note': 'Search tab', + 'url': 'https://www.youtube.com/c/3blue1brown/search?query=linear%20algebra', + 'playlist_mincount': 40, + 'info_dict': { + 'id': 'UCYO_jab_esuFRV4b17AJtAw', + 'title': '3Blue1Brown - Search - linear algebra', + 'description': 'md5:e1384e8a133307dd10edee76e875d62f', + 'channel_url': 'https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw', + 'tags': ['Mathematics'], + 'channel': '3Blue1Brown', + 'channel_id': 'UCYO_jab_esuFRV4b17AJtAw', + 'channel_follower_count': int, + 'uploader_url': 'https://www.youtube.com/@3blue1brown', + 'uploader_id': '@3blue1brown', + 'uploader': '3Blue1Brown', + 'channel_is_verified': True, + }, + }, { + 'url': 'https://invidio.us/channel/UCmlqkdCBesrv2Lak1mF_MxA', + 'only_matching': True, + }, { + 'url': 'https://www.youtubekids.com/channel/UCmlqkdCBesrv2Lak1mF_MxA', + 'only_matching': True, + }, { + 'url': 'https://music.youtube.com/channel/UCmlqkdCBesrv2Lak1mF_MxA', + 'only_matching': True, + }, { + 'note': 'Playlist with deleted videos (#651). 
As a bonus, the video #51 is also twice in this list.', + 'url': 'https://www.youtube.com/playlist?list=PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC', + 'info_dict': { + 'title': '29C3: Not my department', + 'id': 'PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC', + 'description': 'md5:a14dc1a8ef8307a9807fe136a0660268', + 'tags': [], + 'view_count': int, + 'modified_date': '20150605', + 'channel_id': 'UCEPzS1rYsrkqzSLNp76nrcg', + 'channel_url': 'https://www.youtube.com/channel/UCEPzS1rYsrkqzSLNp76nrcg', + 'channel': 'Christiaan008', + 'availability': 'public', + 'uploader_id': '@ChRiStIaAn008', + 'uploader': 'Christiaan008', + 'uploader_url': 'https://www.youtube.com/@ChRiStIaAn008', + }, + 'playlist_count': 96, + }, { + 'note': 'Large playlist', + 'url': 'https://www.youtube.com/playlist?list=UUBABnxM4Ar9ten8Mdjj1j0Q', + 'info_dict': { + 'title': 'Uploads from Cauchemar', + 'id': 'UUBABnxM4Ar9ten8Mdjj1j0Q', + 'channel_url': 'https://www.youtube.com/channel/UCBABnxM4Ar9ten8Mdjj1j0Q', + 'tags': [], + 'modified_date': r're:\d{8}', + 'channel': 'Cauchemar', + 'view_count': int, + 'description': '', + 'channel_id': 'UCBABnxM4Ar9ten8Mdjj1j0Q', + 'availability': 'public', + 'uploader_id': '@Cauchemar89', + 'uploader': 'Cauchemar', + 'uploader_url': 'https://www.youtube.com/@Cauchemar89', + }, + 'playlist_mincount': 1123, + 'expected_warnings': [r'[Uu]navailable videos (are|will be) hidden'], + }, { + 'note': 'even larger playlist, 8832 videos', + 'url': 'http://www.youtube.com/user/NASAgovVideo/videos', + 'only_matching': True, + }, { + 'note': 'Buggy playlist: the webpage has a "Load more" button but it doesn\'t have more videos', + 'url': 'https://www.youtube.com/playlist?list=UUXw-G3eDE9trcvY2sBMM_aA', + 'info_dict': { + 'title': 'Uploads from Interstellar Movie', + 'id': 'UUXw-G3eDE9trcvY2sBMM_aA', + 'tags': [], + 'view_count': int, + 'channel_id': 'UCXw-G3eDE9trcvY2sBMM_aA', + 'channel_url': 'https://www.youtube.com/channel/UCXw-G3eDE9trcvY2sBMM_aA', + 'channel': 'Interstellar Movie', + 'description': '', + 'modified_date': r're:\d{8}', + 'availability': 'public', + 'uploader_id': '@InterstellarMovie', + 'uploader': 'Interstellar Movie', + 'uploader_url': 'https://www.youtube.com/@InterstellarMovie', + }, + 'playlist_mincount': 21, + }, { + 'note': 'Playlist with "show unavailable videos" button', + 'url': 'https://www.youtube.com/playlist?list=UUTYLiWFZy8xtPwxFwX9rV7Q', + 'info_dict': { + 'title': 'Uploads from Phim Siêu Nhân Nhật Bản', + 'id': 'UUTYLiWFZy8xtPwxFwX9rV7Q', + 'view_count': int, + 'channel': 'Phim Siêu Nhân Nhật Bản', + 'tags': [], + 'description': '', + 'channel_url': 'https://www.youtube.com/channel/UCTYLiWFZy8xtPwxFwX9rV7Q', + 'channel_id': 'UCTYLiWFZy8xtPwxFwX9rV7Q', + 'modified_date': r're:\d{8}', + 'availability': 'public', + 'uploader_url': 'https://www.youtube.com/@phimsieunhannhatban', + 'uploader_id': '@phimsieunhannhatban', + 'uploader': 'Phim Siêu Nhân Nhật Bản', + }, + 'playlist_mincount': 200, + 'expected_warnings': [r'[Uu]navailable videos (are|will be) hidden'], + }, { + 'note': 'Playlist with unavailable videos in page 7', + 'url': 'https://www.youtube.com/playlist?list=UU8l9frL61Yl5KFOl87nIm2w', + 'info_dict': { + 'title': 'Uploads from BlankTV', + 'id': 'UU8l9frL61Yl5KFOl87nIm2w', + 'channel': 'BlankTV', + 'channel_url': 'https://www.youtube.com/channel/UC8l9frL61Yl5KFOl87nIm2w', + 'channel_id': 'UC8l9frL61Yl5KFOl87nIm2w', + 'view_count': int, + 'tags': [], + 'modified_date': r're:\d{8}', + 'description': '', + 'availability': 'public', + 'uploader_id': '@blanktv', + 'uploader': 
'BlankTV', + 'uploader_url': 'https://www.youtube.com/@blanktv', + }, + 'playlist_mincount': 1000, + 'expected_warnings': [r'[Uu]navailable videos (are|will be) hidden'], + }, { + 'note': 'https://github.com/ytdl-org/youtube-dl/issues/21844', + 'url': 'https://www.youtube.com/playlist?list=PLzH6n4zXuckpfMu_4Ff8E7Z1behQks5ba', + 'info_dict': { + 'title': 'Data Analysis with Dr Mike Pound', + 'id': 'PLzH6n4zXuckpfMu_4Ff8E7Z1behQks5ba', + 'description': 'md5:7f567c574d13d3f8c0954d9ffee4e487', + 'tags': [], + 'view_count': int, + 'channel_id': 'UC9-y-6csu5WGm29I7JiwpnA', + 'channel_url': 'https://www.youtube.com/channel/UC9-y-6csu5WGm29I7JiwpnA', + 'channel': 'Computerphile', + 'availability': 'public', + 'modified_date': '20190712', + 'uploader_id': '@Computerphile', + 'uploader': 'Computerphile', + 'uploader_url': 'https://www.youtube.com/@Computerphile', + }, + 'playlist_mincount': 11, + }, { + 'url': 'https://invidio.us/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc', + 'only_matching': True, + }, { + 'note': 'Playlist URL that does not actually serve a playlist', + 'url': 'https://www.youtube.com/watch?v=FqZTN594JQw&list=PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4', + 'info_dict': { + 'id': 'FqZTN594JQw', + 'ext': 'webm', + 'title': "Smiley's People 01 detective, Adventure Series, Action", + 'upload_date': '20150526', + 'license': 'Standard YouTube License', + 'description': 'md5:507cdcb5a49ac0da37a920ece610be80', + 'categories': ['People & Blogs'], + 'tags': list, + 'view_count': int, + 'like_count': int, + }, + 'params': { + 'skip_download': True, + }, + 'skip': 'This video is not available.', + 'add_ie': [YoutubeIE.ie_key()], + }, { + 'url': 'https://www.youtubekids.com/watch?v=Agk7R8I8o5U&list=PUZ6jURNr1WQZCNHF0ao-c0g', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/watch?v=MuAGGZNfUkU&list=RDMM', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/channel/UCoMdktPbSTixAyNGwb-UYkQ/live', + 'info_dict': { + 'id': 'hGkQjiJLjWQ', # This will keep changing + 'ext': 'mp4', + 'title': str, + 'upload_date': r're:\d{8}', + 'description': str, + 'categories': ['News & Politics'], + 'tags': list, + 'like_count': int, + 'release_timestamp': int, + 'channel': 'Sky News', + 'channel_id': 'UCoMdktPbSTixAyNGwb-UYkQ', + 'age_limit': 0, + 'view_count': int, + 'thumbnail': r're:https?://i\.ytimg\.com/vi/[^/]+/maxresdefault(?:_live)?\.jpg', + 'playable_in_embed': True, + 'release_date': r're:\d+', + 'availability': 'public', + 'live_status': 'is_live', + 'channel_url': 'https://www.youtube.com/channel/UCoMdktPbSTixAyNGwb-UYkQ', + 'channel_follower_count': int, + 'concurrent_view_count': int, + 'uploader_url': 'https://www.youtube.com/@SkyNews', + 'uploader_id': '@SkyNews', + 'uploader': 'Sky News', + 'channel_is_verified': True, + }, + 'params': { + 'skip_download': True, + }, + 'expected_warnings': ['Ignoring subtitle tracks found in '], + }, { + 'url': 'https://www.youtube.com/user/TheYoungTurks/live', + 'info_dict': { + 'id': 'a48o2S1cPoo', + 'ext': 'mp4', + 'title': 'The Young Turks - Live Main Show', + 'upload_date': '20150715', + 'license': 'Standard YouTube License', + 'description': 'md5:438179573adcdff3c97ebb1ee632b891', + 'categories': ['News & Politics'], + 'tags': ['Cenk Uygur (TV Program Creator)', 'The Young Turks (Award-Winning Work)', 'Talk Show (TV Genre)'], + 'like_count': int, + }, + 'params': { + 'skip_download': True, + }, + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/channel/UC1yBKRuGpC1tSM73A0ZjYjQ/live', + 'only_matching': True, + }, 
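+ # 'only_matching' entries such as the two adjacent ones are never downloaded + # by the test runner; they only assert that _VALID_URL accepts the URL, which + # is why they carry no 'info_dict'.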
{ + 'url': 'https://www.youtube.com/c/CommanderVideoHq/live', + 'only_matching': True, + }, { + 'note': 'A channel that is not live. Should raise error', + 'url': 'https://www.youtube.com/user/numberphile/live', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/feed/trending', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/feed/library', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/feed/history', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/feed/subscriptions', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/feed/watch_later', + 'only_matching': True, + }, { + 'note': 'Recommended - redirects to home page.', + 'url': 'https://www.youtube.com/feed/recommended', + 'only_matching': True, + }, { + 'note': 'inline playlist with not always working continuations', + 'url': 'https://www.youtube.com/watch?v=UC6u0Tct-Fo&list=PL36D642111D65BE7C', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/course', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/zsecurity', + 'only_matching': True, + }, { + 'url': 'http://www.youtube.com/NASAgovVideo/videos', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/TheYoungTurks/live', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/hashtag/cctv9', + 'info_dict': { + 'id': 'cctv9', + 'title': '#cctv9', + 'tags': [], + }, + 'playlist_mincount': 300, # not consistent but should be over 300 + }, { + 'url': 'https://www.youtube.com/watch?list=PLW4dVinRY435CBE_JD3t-0SRXKfnZHS1P&feature=youtu.be&v=M9cJMXmQ_ZU', + 'only_matching': True, + }, { + 'note': 'Requires Premium: should request additional YTM-info webpage (and have format 141) for videos in playlist', + 'url': 'https://music.youtube.com/playlist?list=PLRBp0Fe2GpgmgoscNFLxNyBVSFVdYmFkq', + 'only_matching': True + }, { + 'note': '/browse/ should redirect to /channel/', + 'url': 'https://music.youtube.com/browse/UC1a8OFewdjuLq6KlF8M_8Ng', + 'only_matching': True + }, { + 'note': 'VLPL, should redirect to playlist?list=PL...', + 'url': 'https://music.youtube.com/browse/VLPLRBp0Fe2GpgmgoscNFLxNyBVSFVdYmFkq', + 'info_dict': { + 'id': 'PLRBp0Fe2GpgmgoscNFLxNyBVSFVdYmFkq', + 'description': 'Providing you with copyright free / safe music for gaming, live streaming, studying and more!', + 'title': 'NCS : All Releases 💿', + 'channel_url': 'https://www.youtube.com/channel/UC_aEa8K-EOJ3D6gOs7HcyNg', + 'modified_date': r're:\d{8}', + 'view_count': int, + 'channel_id': 'UC_aEa8K-EOJ3D6gOs7HcyNg', + 'tags': [], + 'channel': 'NoCopyrightSounds', + 'availability': 'public', + 'uploader_url': 'https://www.youtube.com/@NoCopyrightSounds', + 'uploader': 'NoCopyrightSounds', + 'uploader_id': '@NoCopyrightSounds', + }, + 'playlist_mincount': 166, + 'expected_warnings': [r'[Uu]navailable videos (are|will be) hidden', 'YouTube Music is not directly supported'], + }, { + # TODO: fix 'unviewable' issue with this playlist when reloading with unavailable videos + 'note': 'Topic, should redirect to playlist?list=UU...', + 'url': 'https://music.youtube.com/browse/UC9ALqqC4aIeG5iDs7i90Bfw', + 'info_dict': { + 'id': 'UU9ALqqC4aIeG5iDs7i90Bfw', + 'title': 'Uploads from Royalty Free Music - Topic', + 'tags': [], + 'channel_id': 'UC9ALqqC4aIeG5iDs7i90Bfw', + 'channel': 'Royalty Free Music - Topic', + 'view_count': int, + 'channel_url': 'https://www.youtube.com/channel/UC9ALqqC4aIeG5iDs7i90Bfw', + 'modified_date': r're:\d{8}', + 'description': '', + 'availability': 'public', + 'uploader': 
'Royalty Free Music - Topic', + }, + 'playlist_mincount': 101, + 'expected_warnings': ['YouTube Music is not directly supported', r'[Uu]navailable videos (are|will be) hidden'], + }, { + # Destination channel with only a hidden self tab (tab id is UCtFRv9O2AHqOZjjynzrv-xg) + # Treat as a general feed + 'url': 'https://www.youtube.com/channel/UCtFRv9O2AHqOZjjynzrv-xg', + 'info_dict': { + 'id': 'UCtFRv9O2AHqOZjjynzrv-xg', + 'title': 'UCtFRv9O2AHqOZjjynzrv-xg', + 'tags': [], + }, + 'playlist_mincount': 9, + }, { + 'note': 'Youtube music Album', + 'url': 'https://music.youtube.com/browse/MPREb_gTAcphH99wE', + 'info_dict': { + 'id': 'OLAK5uy_l1m0thk3g31NmIIz_vMIbWtyv7eZixlH0', + 'title': 'Album - Royalty Free Music Library V2 (50 Songs)', + 'tags': [], + 'view_count': int, + 'description': '', + 'availability': 'unlisted', + 'modified_date': r're:\d{8}', + }, + 'playlist_count': 50, + 'expected_warnings': ['YouTube Music is not directly supported'], + }, { + 'note': 'unlisted single video playlist', + 'url': 'https://www.youtube.com/playlist?list=PLwL24UFy54GrB3s2KMMfjZscDi1x5Dajf', + 'info_dict': { + 'id': 'PLwL24UFy54GrB3s2KMMfjZscDi1x5Dajf', + 'title': 'yt-dlp unlisted playlist test', + 'availability': 'unlisted', + 'tags': [], + 'modified_date': '20220418', + 'channel': 'colethedj', + 'view_count': int, + 'description': '', + 'channel_id': 'UC9zHu_mHU96r19o-wV5Qs1Q', + 'channel_url': 'https://www.youtube.com/channel/UC9zHu_mHU96r19o-wV5Qs1Q', + 'uploader_url': 'https://www.youtube.com/@colethedj1894', + 'uploader_id': '@colethedj1894', + 'uploader': 'colethedj', + }, + 'playlist': [{ + 'info_dict': { + 'title': 'youtube-dl test video "\'/\\ä↭𝕐', + 'id': 'BaW_jenozKc', + '_type': 'url', + 'ie_key': 'Youtube', + 'duration': 10, + 'channel_id': 'UCLqxVugv74EIW3VWh2NOa3Q', + 'channel_url': 'https://www.youtube.com/channel/UCLqxVugv74EIW3VWh2NOa3Q', + 'view_count': int, + 'url': 'https://www.youtube.com/watch?v=BaW_jenozKc', + 'channel': 'Philipp Hagemeister', + 'uploader_id': '@PhilippHagemeister', + 'uploader_url': 'https://www.youtube.com/@PhilippHagemeister', + 'uploader': 'Philipp Hagemeister', + } + }], + 'playlist_count': 1, + 'params': {'extract_flat': True}, + }, { + 'note': 'API Fallback: Recommended - redirects to home page.
Requires visitorData', + 'url': 'https://www.youtube.com/feed/recommended', + 'info_dict': { + 'id': 'recommended', + 'title': 'recommended', + 'tags': [], + }, + 'playlist_mincount': 50, + 'params': { + 'skip_download': True, + 'extractor_args': {'youtubetab': {'skip': ['webpage']}} + }, + }, { + 'note': 'API Fallback: /videos tab, sorted by oldest first', + 'url': 'https://www.youtube.com/user/theCodyReeder/videos?view=0&sort=da&flow=grid', + 'info_dict': { + 'id': 'UCu6mSoMNzHQiBIOCkHUa2Aw', + 'title': 'Cody\'sLab - Videos', + 'description': 'md5:d083b7c2f0c67ee7a6c74c3e9b4243fa', + 'channel': 'Cody\'sLab', + 'channel_id': 'UCu6mSoMNzHQiBIOCkHUa2Aw', + 'tags': [], + 'channel_url': 'https://www.youtube.com/channel/UCu6mSoMNzHQiBIOCkHUa2Aw', + 'channel_follower_count': int + }, + 'playlist_mincount': 650, + 'params': { + 'skip_download': True, + 'extractor_args': {'youtubetab': {'skip': ['webpage']}} + }, + 'skip': 'Query for sorting no longer works', + }, { + 'note': 'API Fallback: Topic, should redirect to playlist?list=UU...', + 'url': 'https://music.youtube.com/browse/UC9ALqqC4aIeG5iDs7i90Bfw', + 'info_dict': { + 'id': 'UU9ALqqC4aIeG5iDs7i90Bfw', + 'title': 'Uploads from Royalty Free Music - Topic', + 'modified_date': r're:\d{8}', + 'channel_id': 'UC9ALqqC4aIeG5iDs7i90Bfw', + 'description': '', + 'channel_url': 'https://www.youtube.com/channel/UC9ALqqC4aIeG5iDs7i90Bfw', + 'tags': [], + 'channel': 'Royalty Free Music - Topic', + 'view_count': int, + 'availability': 'public', + 'uploader': 'Royalty Free Music - Topic', + }, + 'playlist_mincount': 101, + 'params': { + 'skip_download': True, + 'extractor_args': {'youtubetab': {'skip': ['webpage']}} + }, + 'expected_warnings': ['YouTube Music is not directly supported', r'[Uu]navailable videos (are|will be) hidden'], + }, { + 'note': 'non-standard redirect to regional channel', + 'url': 'https://www.youtube.com/channel/UCwVVpHQ2Cs9iGJfpdFngePQ', + 'only_matching': True + }, { + 'note': 'collaborative playlist (uploader name in the form "by <uploader> and x other(s)")', + 'url': 'https://www.youtube.com/playlist?list=PLx-_-Kk4c89oOHEDQAojOXzEzemXxoqx6', + 'info_dict': { + 'id': 'PLx-_-Kk4c89oOHEDQAojOXzEzemXxoqx6', + 'modified_date': '20220407', + 'channel_url': 'https://www.youtube.com/channel/UCKcqXmCcyqnhgpA5P0oHH_Q', + 'tags': [], + 'availability': 'unlisted', + 'channel_id': 'UCKcqXmCcyqnhgpA5P0oHH_Q', + 'channel': 'pukkandan', + 'description': 'Test for collaborative playlist', + 'title': 'yt-dlp test - collaborative playlist', + 'view_count': int, + 'uploader_url': 'https://www.youtube.com/@pukkandan', + 'uploader_id': '@pukkandan', + 'uploader': 'pukkandan', + }, + 'playlist_mincount': 2 + }, { + 'note': 'translated tab name', + 'url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA/playlists', + 'info_dict': { + 'id': 'UCiu-3thuViMebBjw_5nWYrA', + 'tags': [], + 'channel_url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA', + 'description': 'test description', + 'title': 'cole-dlp-test-acc - 再生リスト', + 'channel_id': 'UCiu-3thuViMebBjw_5nWYrA', + 'channel': 'cole-dlp-test-acc', + 'uploader_url': 'https://www.youtube.com/@coletdjnz', + 'uploader_id': '@coletdjnz', + 'uploader': 'cole-dlp-test-acc', + }, + 'playlist_mincount': 1, + 'params': {'extractor_args': {'youtube': {'lang': ['ja']}}}, + 'expected_warnings': ['Preferring "ja"'], + }, { + # XXX: this should really check flat playlist entries, but the test suite doesn't support that + 'note': 'preferred lang set with playlist with translated video
titles', + 'url': 'https://www.youtube.com/playlist?list=PLt5yu3-wZAlQAaPZ5Z-rJoTdbT-45Q7c0', + 'info_dict': { + 'id': 'PLt5yu3-wZAlQAaPZ5Z-rJoTdbT-45Q7c0', + 'tags': [], + 'view_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA', + 'channel': 'cole-dlp-test-acc', + 'channel_id': 'UCiu-3thuViMebBjw_5nWYrA', + 'description': 'test', + 'title': 'dlp test playlist', + 'availability': 'public', + 'uploader_url': 'https://www.youtube.com/@coletdjnz', + 'uploader_id': '@coletdjnz', + 'uploader': 'cole-dlp-test-acc', + }, + 'playlist_mincount': 1, + 'params': {'extractor_args': {'youtube': {'lang': ['ja']}}}, + 'expected_warnings': ['Preferring "ja"'], + }, { + # shorts audio pivot for 2GtVksBMYFM. + 'url': 'https://www.youtube.com/feed/sfv_audio_pivot?bp=8gUrCikSJwoLMkd0VmtzQk1ZRk0SCzJHdFZrc0JNWUZNGgsyR3RWa3NCTVlGTQ==', + 'info_dict': { + 'id': 'sfv_audio_pivot', + 'title': 'sfv_audio_pivot', + 'tags': [], + }, + 'playlist_mincount': 50, + + }, { + # Channel with a real live tab (not to be mistaken with the streams tab) + # Do not treat it as if it should redirect to a live stream + 'url': 'https://www.youtube.com/channel/UCEH7P7kyJIkS_gJf93VYbmg/live', + 'info_dict': { + 'id': 'UCEH7P7kyJIkS_gJf93VYbmg', + 'title': 'UCEH7P7kyJIkS_gJf93VYbmg - Live', + 'tags': [], + }, + 'playlist_mincount': 20, + }, { + # Tab name is not the same as tab id + 'url': 'https://www.youtube.com/channel/UCQvWX73GQygcwXOTSf_VDVg/letsplay', + 'info_dict': { + 'id': 'UCQvWX73GQygcwXOTSf_VDVg', + 'title': 'UCQvWX73GQygcwXOTSf_VDVg - Let\'s play', + 'tags': [], + }, + 'playlist_mincount': 8, + }, { + # Home tab id is literally home. Not to be mistaken with featured + 'url': 'https://www.youtube.com/channel/UCQvWX73GQygcwXOTSf_VDVg/home', + 'info_dict': { + 'id': 'UCQvWX73GQygcwXOTSf_VDVg', + 'title': 'UCQvWX73GQygcwXOTSf_VDVg - Home', + 'tags': [], + }, + 'playlist_mincount': 8, + }, { + # Should get three playlists for videos, shorts and streams tabs + 'url': 'https://www.youtube.com/channel/UCK9V2B22uJYu3N7eR_BT9QA', + 'info_dict': { + 'id': 'UCK9V2B22uJYu3N7eR_BT9QA', + 'title': 'Polka Ch. 尾丸ポルカ', + 'channel_follower_count': int, + 'channel_id': 'UCK9V2B22uJYu3N7eR_BT9QA', + 'channel_url': 'https://www.youtube.com/channel/UCK9V2B22uJYu3N7eR_BT9QA', + 'description': 'md5:e56b74b5bb7e9c701522162e9abfb822', + 'channel': 'Polka Ch. 尾丸ポルカ', + 'tags': 'count:35', + 'uploader_url': 'https://www.youtube.com/@OmaruPolka', + 'uploader': 'Polka Ch.
尾丸ポルカ', + 'uploader_id': '@OmaruPolka', + }, + 'playlist_count': 3, + }, { + # Shorts tab with channel with handle + # TODO: fix channel description + 'url': 'https://www.youtube.com/@NotJustBikes/shorts', + 'info_dict': { + 'id': 'UC0intLFzLaudFG-xAvUEO-A', + 'title': 'Not Just Bikes - Shorts', + 'tags': 'count:12', + 'channel_url': 'https://www.youtube.com/channel/UC0intLFzLaudFG-xAvUEO-A', + 'description': 'md5:26bc55af26855a608a5cf89dfa595c8d', + 'channel_follower_count': int, + 'channel_id': 'UC0intLFzLaudFG-xAvUEO-A', + 'channel': 'Not Just Bikes', + 'uploader_url': 'https://www.youtube.com/@NotJustBikes', + 'uploader': 'Not Just Bikes', + 'uploader_id': '@NotJustBikes', + }, + 'playlist_mincount': 10, + }, { + # Streams tab + 'url': 'https://www.youtube.com/channel/UC3eYAvjCVwNHgkaGbXX3sig/streams', + 'info_dict': { + 'id': 'UC3eYAvjCVwNHgkaGbXX3sig', + 'title': '中村悠一 - Live', + 'tags': 'count:7', + 'channel_id': 'UC3eYAvjCVwNHgkaGbXX3sig', + 'channel_url': 'https://www.youtube.com/channel/UC3eYAvjCVwNHgkaGbXX3sig', + 'channel': '中村悠一', + 'channel_follower_count': int, + 'description': 'md5:e744f6c93dafa7a03c0c6deecb157300', + 'uploader_url': 'https://www.youtube.com/@Yuichi-Nakamura', + 'uploader_id': '@Yuichi-Nakamura', + 'uploader': '中村悠一', + }, + 'playlist_mincount': 60, + }, { + # Channel with no uploads and hence no videos, streams, shorts tabs or uploads playlist. This should fail. + # See test_youtube_lists + 'url': 'https://www.youtube.com/channel/UC2yXPzFejc422buOIzn_0CA', + 'only_matching': True, + }, { + # No uploads and no UCID given. Should fail with no uploads error + # See test_youtube_lists + 'url': 'https://www.youtube.com/news', + 'only_matching': True + }, { + # No videos tab but has a shorts tab + 'url': 'https://www.youtube.com/c/TKFShorts', + 'info_dict': { + 'id': 'UCgJ5_1F6yJhYLnyMszUdmUg', + 'title': 'Shorts Break - Shorts', + 'tags': 'count:48', + 'channel_id': 'UCgJ5_1F6yJhYLnyMszUdmUg', + 'channel': 'Shorts Break', + 'description': 'md5:6de33c5e7ba686e5f3efd4e19c7ef499', + 'channel_follower_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCgJ5_1F6yJhYLnyMszUdmUg', + 'uploader_url': 'https://www.youtube.com/@ShortsBreak_Official', + 'uploader': 'Shorts Break', + 'uploader_id': '@ShortsBreak_Official', + }, + 'playlist_mincount': 30, + }, { + # Trending Now Tab. tab id is empty + 'url': 'https://www.youtube.com/feed/trending', + 'info_dict': { + 'id': 'trending', + 'title': 'trending - Now', + 'tags': [], + }, + 'playlist_mincount': 30, + }, { + # Trending Gaming Tab.
tab id is empty + 'url': 'https://www.youtube.com/feed/trending?bp=4gIcGhpnYW1pbmdfY29ycHVzX21vc3RfcG9wdWxhcg%3D%3D', + 'info_dict': { + 'id': 'trending', + 'title': 'trending - Gaming', + 'tags': [], + }, + 'playlist_mincount': 30, + }, { + # Shorts url result in shorts tab + # TODO: Fix channel id extraction + 'url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA/shorts', + 'info_dict': { + 'id': 'UCiu-3thuViMebBjw_5nWYrA', + 'title': 'cole-dlp-test-acc - Shorts', + 'channel': 'cole-dlp-test-acc', + 'description': 'test description', + 'channel_id': 'UCiu-3thuViMebBjw_5nWYrA', + 'channel_url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA', + 'tags': [], + 'uploader_url': 'https://www.youtube.com/@coletdjnz', + 'uploader_id': '@coletdjnz', + 'uploader': 'cole-dlp-test-acc', + }, + 'playlist': [{ + 'info_dict': { + # Channel data is not currently available for short renderers (as of 2023-03-01) + '_type': 'url', + 'ie_key': 'Youtube', + 'url': 'https://www.youtube.com/shorts/sSM9J5YH_60', + 'id': 'sSM9J5YH_60', + 'title': 'SHORT short', + 'view_count': int, + 'thumbnails': list, + } + }], + 'params': {'extract_flat': True}, + }, { + # Live video status should be extracted + 'url': 'https://www.youtube.com/channel/UCQvWX73GQygcwXOTSf_VDVg/live', + 'info_dict': { + 'id': 'UCQvWX73GQygcwXOTSf_VDVg', + 'title': 'UCQvWX73GQygcwXOTSf_VDVg - Live', # TODO, should be Minecraft - Live or Minecraft - Topic - Live + 'tags': [] + }, + 'playlist': [{ + 'info_dict': { + '_type': 'url', + 'ie_key': 'Youtube', + 'url': 'startswith:https://www.youtube.com/watch?v=', + 'id': str, + 'title': str, + 'live_status': 'is_live', + 'channel_id': str, + 'channel_url': str, + 'concurrent_view_count': int, + 'channel': str, + 'uploader': str, + 'uploader_url': str, + 'uploader_id': str, + 'channel_is_verified': bool, # this will keep changing + } + }], + 'params': {'extract_flat': True, 'playlist_items': '1'}, + 'playlist_mincount': 1 + }, { + # Channel renderer metadata. 
Contains number of videos on the channel + 'url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA/channels', + 'info_dict': { + 'id': 'UCiu-3thuViMebBjw_5nWYrA', + 'title': 'cole-dlp-test-acc - Channels', + 'channel': 'cole-dlp-test-acc', + 'description': 'test description', + 'channel_id': 'UCiu-3thuViMebBjw_5nWYrA', + 'channel_url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA', + 'tags': [], + 'uploader_url': 'https://www.youtube.com/@coletdjnz', + 'uploader_id': '@coletdjnz', + 'uploader': 'cole-dlp-test-acc', + }, + 'playlist': [{ + 'info_dict': { + '_type': 'url', + 'ie_key': 'YoutubeTab', + 'url': 'https://www.youtube.com/channel/UC-lHJZR3Gqxm24_Vd_AJ5Yw', + 'id': 'UC-lHJZR3Gqxm24_Vd_AJ5Yw', + 'channel_id': 'UC-lHJZR3Gqxm24_Vd_AJ5Yw', + 'title': 'PewDiePie', + 'channel': 'PewDiePie', + 'channel_url': 'https://www.youtube.com/channel/UC-lHJZR3Gqxm24_Vd_AJ5Yw', + 'thumbnails': list, + 'channel_follower_count': int, + 'playlist_count': int, + 'uploader': 'PewDiePie', + 'uploader_url': 'https://www.youtube.com/@PewDiePie', + 'uploader_id': '@PewDiePie', + 'channel_is_verified': True, + } + }], + 'params': {'extract_flat': True}, + }, { + 'url': 'https://www.youtube.com/@3blue1brown/about', + 'info_dict': { + 'id': 'UCYO_jab_esuFRV4b17AJtAw', + 'tags': ['Mathematics'], + 'title': '3Blue1Brown - About', + 'channel_follower_count': int, + 'channel_id': 'UCYO_jab_esuFRV4b17AJtAw', + 'channel': '3Blue1Brown', + 'view_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw', + 'description': 'md5:e1384e8a133307dd10edee76e875d62f', + 'uploader_url': 'https://www.youtube.com/@3blue1brown', + 'uploader_id': '@3blue1brown', + 'uploader': '3Blue1Brown', + 'channel_is_verified': True, + }, + 'playlist_count': 0, + }, { + # Podcasts tab, with rich entry playlistRenderers + 'url': 'https://www.youtube.com/@99percentinvisiblepodcast/podcasts', + 'info_dict': { + 'id': 'UCVMF2HD4ZgC0QHpU9Yq5Xrw', + 'channel_id': 'UCVMF2HD4ZgC0QHpU9Yq5Xrw', + 'uploader_url': 'https://www.youtube.com/@99percentinvisiblepodcast', + 'description': 'md5:3a0ed38f1ad42a68ef0428c04a15695c', + 'title': '99 Percent Invisible - Podcasts', + 'uploader': '99 Percent Invisible', + 'channel_follower_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCVMF2HD4ZgC0QHpU9Yq5Xrw', + 'tags': [], + 'channel': '99 Percent Invisible', + 'uploader_id': '@99percentinvisiblepodcast', + }, + 'playlist_count': 1, + }, { + # Releases tab, with rich entry playlistRenderers (same as Podcasts tab) + 'url': 'https://www.youtube.com/@AHimitsu/releases', + 'info_dict': { + 'id': 'UCgFwu-j5-xNJml2FtTrrB3A', + 'channel': 'A Himitsu', + 'uploader_url': 'https://www.youtube.com/@AHimitsu', + 'title': 'A Himitsu - Releases', + 'uploader_id': '@AHimitsu', + 'uploader': 'A Himitsu', + 'channel_id': 'UCgFwu-j5-xNJml2FtTrrB3A', + 'tags': 'count:16', + 'description': 'I make music', + 'channel_url': 'https://www.youtube.com/channel/UCgFwu-j5-xNJml2FtTrrB3A', + 'channel_follower_count': int, + 'channel_is_verified': True, + }, + 'playlist_mincount': 10, + }, { + # Playlist with only shorts, shown as reel renderers + # FIXME: future: YouTube currently doesn't give continuation for this, + # may do in future. 
+ 'url': 'https://www.youtube.com/playlist?list=UUxqPAgubo4coVn9Lx1FuKcg', + 'info_dict': { + 'id': 'UUxqPAgubo4coVn9Lx1FuKcg', + 'channel_url': 'https://www.youtube.com/channel/UCxqPAgubo4coVn9Lx1FuKcg', + 'view_count': int, + 'uploader_id': '@BangyShorts', + 'description': '', + 'uploader_url': 'https://www.youtube.com/@BangyShorts', + 'channel_id': 'UCxqPAgubo4coVn9Lx1FuKcg', + 'channel': 'Bangy Shorts', + 'uploader': 'Bangy Shorts', + 'tags': [], + 'availability': 'public', + 'modified_date': '20230626', + 'title': 'Uploads from Bangy Shorts', + }, + 'playlist_mincount': 100, + 'expected_warnings': [r'[Uu]navailable videos (are|will be) hidden'], + }] + + @classmethod + def suitable(cls, url): + return False if YoutubeIE.suitable(url) else super().suitable(url) + + _URL_RE = re.compile(rf'(?P<pre>{_VALID_URL})(?(not_channel)|(?P<tab>/[^?#/]+))?(?P<post>.*)$') + + def _get_url_mobj(self, url): + mobj = self._URL_RE.match(url).groupdict() + mobj.update((k, '') for k, v in mobj.items() if v is None) + return mobj + + def _extract_tab_id_and_name(self, tab, base_url='https://www.youtube.com'): + tab_name = (tab.get('title') or '').lower() + tab_url = urljoin(base_url, traverse_obj( + tab, ('endpoint', 'commandMetadata', 'webCommandMetadata', 'url'))) + + tab_id = (tab_url and self._get_url_mobj(tab_url)['tab'][1:] + or traverse_obj(tab, 'tabIdentifier', expected_type=str)) + if tab_id: + return { + 'TAB_ID_SPONSORSHIPS': 'membership', + }.get(tab_id, tab_id), tab_name + + # Fallback to tab name if we cannot get the tab id. + # XXX: should we strip non-ascii letters? e.g. in case of 'let's play' tab example on special gaming channel + # Note that in the case of translated tab name this may result in an empty string, which we don't want. + if tab_name: + self.write_debug(f'Falling back to selected tab name: {tab_name}') + return { + 'home': 'featured', + 'live': 'streams', + }.get(tab_name, tab_name), tab_name + + def _has_tab(self, tabs, tab_id): + return any(self._extract_tab_id_and_name(tab)[0] == tab_id for tab in tabs) + + @YoutubeTabBaseInfoExtractor.passthrough_smuggled_data + def _real_extract(self, url, smuggled_data): + item_id = self._match_id(url) + url = urllib.parse.urlunparse( + urllib.parse.urlparse(url)._replace(netloc='www.youtube.com')) + compat_opts = self.get_param('compat_opts', []) + + mobj = self._get_url_mobj(url) + pre, tab, post, is_channel = mobj['pre'], mobj['tab'], mobj['post'], not mobj['not_channel'] + if is_channel and smuggled_data.get('is_music_url'): + if item_id[:2] == 'VL': # Youtube music VL channels have an equivalent playlist + return self.url_result( + f'https://music.youtube.com/playlist?list={item_id[2:]}', YoutubeTabIE, item_id[2:]) + elif item_id[:2] == 'MP': # Resolve albums (/[channel/browse]/MP...) 
to their equivalent playlist + mdata = self._extract_tab_endpoint( + f'https://music.youtube.com/channel/{item_id}', item_id, default_client='web_music') + murl = traverse_obj(mdata, ('microformat', 'microformatDataRenderer', 'urlCanonical'), + get_all=False, expected_type=str) + if not murl: + raise ExtractorError('Failed to resolve album to playlist') + return self.url_result(murl, YoutubeTabIE) + elif mobj['channel_type'] == 'browse': # Youtube music /browse/ should be changed to /channel/ + return self.url_result( + f'https://music.youtube.com/channel/{item_id}{tab}{post}', YoutubeTabIE, item_id) + + original_tab_id, display_id = tab[1:], f'{item_id}{tab}' + if is_channel and not tab and 'no-youtube-channel-redirect' not in compat_opts: + url = f'{pre}/videos{post}' + if smuggled_data.get('is_music_url'): + self.report_warning(f'YouTube Music is not directly supported. Redirecting to {url}') + + # Handle both video/playlist URLs + qs = parse_qs(url) + video_id, playlist_id = [traverse_obj(qs, (key, 0)) for key in ('v', 'list')] + if not video_id and mobj['not_channel'].startswith('watch'): + if not playlist_id: + # If there is neither video or playlist ids, youtube redirects to home page, which is undesirable + raise ExtractorError('A video URL was given without video ID', expected=True) + # Common mistake: https://www.youtube.com/watch?list=playlist_id + self.report_warning(f'A video URL was given without video ID. Trying to download playlist {playlist_id}') + return self.url_result( + f'https://www.youtube.com/playlist?list={playlist_id}', YoutubeTabIE, playlist_id) + + if not self._yes_playlist(playlist_id, video_id): + return self.url_result( + f'https://www.youtube.com/watch?v={video_id}', YoutubeIE, video_id) + + data, ytcfg = self._extract_data(url, display_id) + + # YouTube may provide a non-standard redirect to the regional channel + # See: https://github.com/yt-dlp/yt-dlp/issues/2694 + # https://support.google.com/youtube/answer/2976814#zippy=,conditional-redirects + redirect_url = traverse_obj( + data, ('onResponseReceivedActions', ..., 'navigateAction', 'endpoint', 'commandMetadata', 'webCommandMetadata', 'url'), get_all=False) + if redirect_url and 'no-youtube-channel-redirect' not in compat_opts: + redirect_url = ''.join((urljoin('https://www.youtube.com', redirect_url), tab, post)) + self.to_screen(f'This playlist is likely not available in your region. Following conditional redirect to {redirect_url}') + return self.url_result(redirect_url, YoutubeTabIE) + + tabs, extra_tabs = self._extract_tab_renderers(data), [] + if is_channel and tabs and 'no-youtube-channel-redirect' not in compat_opts: + selected_tab = self._extract_selected_tab(tabs) + selected_tab_id, selected_tab_name = self._extract_tab_id_and_name(selected_tab, url) # NB: Name may be translated + self.write_debug(f'Selected tab: {selected_tab_id!r} ({selected_tab_name}), Requested tab: {original_tab_id!r}') + + if not original_tab_id and selected_tab_name: + self.to_screen('Downloading all uploads of the channel. 
' + 'To download only the videos in a specific tab, pass the tab\'s URL') + if self._has_tab(tabs, 'streams'): + extra_tabs.append(''.join((pre, '/streams', post))) + if self._has_tab(tabs, 'shorts'): + extra_tabs.append(''.join((pre, '/shorts', post))) + # XXX: Members-only tab should also be extracted + + if not extra_tabs and selected_tab_id != 'videos': + # Channel does not have streams, shorts or videos tabs + if item_id[:2] != 'UC': + raise ExtractorError('This channel has no uploads', expected=True) + + # Topic channels don't have /videos. Use the equivalent playlist instead + pl_id = f'UU{item_id[2:]}' + pl_url = f'https://www.youtube.com/playlist?list={pl_id}' + try: + data, ytcfg = self._extract_data(pl_url, pl_id, ytcfg=ytcfg, fatal=True, webpage_fatal=True) + except ExtractorError: + raise ExtractorError('This channel has no uploads', expected=True) + else: + item_id, url = pl_id, pl_url + self.to_screen( + f'The channel does not have a videos, shorts, or live tab. Redirecting to playlist {pl_id} instead') + + elif extra_tabs and selected_tab_id != 'videos': + # When there are shorts/live tabs but not videos tab + url, data = f'{pre}{post}', None + + elif (original_tab_id or 'videos') != selected_tab_id: + if original_tab_id == 'live': + # Live tab should have redirected to the video + # Except in the case the channel has an actual live tab + # Example: https://www.youtube.com/channel/UCEH7P7kyJIkS_gJf93VYbmg/live + raise UserNotLive(video_id=item_id) + elif selected_tab_name: + raise ExtractorError(f'This channel does not have a {original_tab_id} tab', expected=True) + + # For channels such as https://www.youtube.com/channel/UCtFRv9O2AHqOZjjynzrv-xg + url = f'{pre}{post}' + + # YouTube sometimes provides a button to reload playlist with unavailable videos. + if 'no-youtube-unavailable-videos' not in compat_opts: + data = self._reload_with_unavailable_videos(display_id, data, ytcfg) or data + self._extract_and_report_alerts(data, only_once=True) + + tabs, entries = self._extract_tab_renderers(data), [] + if tabs: + entries = [self._extract_from_tabs(item_id, ytcfg, data, tabs)] + entries[0].update({ + 'extractor_key': YoutubeTabIE.ie_key(), + 'extractor': YoutubeTabIE.IE_NAME, + 'webpage_url': url, + }) + if self.get_param('playlist_items') == '0': + entries.extend(self.url_result(u, YoutubeTabIE) for u in extra_tabs) + else: # Users expect to get all `video_id`s even with `--flat-playlist`. So don't return `url_result` + entries.extend(map(self._real_extract, extra_tabs)) + + if len(entries) == 1: + return entries[0] + elif entries: + metadata = self._extract_metadata_from_tabs(item_id, data) + uploads_url = 'the Uploads (UU) playlist URL' + if try_get(metadata, lambda x: x['channel_id'].startswith('UC')): + uploads_url = f'https://www.youtube.com/playlist?list=UU{metadata["channel_id"][2:]}' + self.to_screen( + 'Downloading as multiple playlists, separated by tabs. ' + f'To download as a single playlist instead, pass {uploads_url}') + return self.playlist_result(entries, item_id, **metadata) + + # Inline playlist + playlist = traverse_obj( + data, ('contents', 'twoColumnWatchNextResults', 'playlist', 'playlist'), expected_type=dict) + if playlist: + return self._extract_from_playlist(item_id, url, data, playlist, ytcfg) + + video_id = traverse_obj( + data, ('currentVideoEndpoint', 'watchEndpoint', 'videoId'), expected_type=str) or video_id + if video_id: + if tab != '/live': # live tab is expected to redirect to video + self.report_warning(f'Unable to recognize playlist. 
Downloading just video {video_id}') + return self.url_result(f'https://www.youtube.com/watch?v={video_id}', YoutubeIE, video_id) + + raise ExtractorError('Unable to recognize tab page') + + +class YoutubePlaylistIE(InfoExtractor): + IE_DESC = 'YouTube playlists' + _VALID_URL = r'''(?x)(?: + (?:https?://)? + (?:\w+\.)? + (?: + (?: + youtube(?:kids)?\.com| + %(invidious)s + ) + /.*?\?.*?\blist= + )? + (?P<id>%(playlist_id)s) + )''' % { + 'playlist_id': YoutubeBaseInfoExtractor._PLAYLIST_ID_RE, + 'invidious': '|'.join(YoutubeBaseInfoExtractor._INVIDIOUS_SITES), + } + IE_NAME = 'youtube:playlist' + _TESTS = [{ + 'note': 'issue #673', + 'url': 'PLBB231211A4F62143', + 'info_dict': { + 'title': '[OLD]Team Fortress 2 (Class-based LP)', + 'id': 'PLBB231211A4F62143', + 'uploader': 'Wickman', + 'uploader_id': '@WickmanVT', + 'description': 'md5:8fa6f52abb47a9552002fa3ddfc57fc2', + 'view_count': int, + 'uploader_url': 'https://www.youtube.com/@WickmanVT', + 'modified_date': r're:\d{8}', + 'channel_id': 'UCKSpbfbl5kRQpTdL7kMc-1Q', + 'channel': 'Wickman', + 'tags': [], + 'channel_url': 'https://www.youtube.com/channel/UCKSpbfbl5kRQpTdL7kMc-1Q', + 'availability': 'public', + }, + 'playlist_mincount': 29, + }, { + 'url': 'PLtPgu7CB4gbY9oDN3drwC3cMbJggS7dKl', + 'info_dict': { + 'title': 'YDL_safe_search', + 'id': 'PLtPgu7CB4gbY9oDN3drwC3cMbJggS7dKl', + }, + 'playlist_count': 2, + 'skip': 'This playlist is private', + }, { + 'note': 'embedded', + 'url': 'https://www.youtube.com/embed/videoseries?list=PL6IaIsEjSbf96XFRuNccS_RuEXwNdsoEu', + 'playlist_count': 4, + 'info_dict': { + 'title': 'JODA15', + 'id': 'PL6IaIsEjSbf96XFRuNccS_RuEXwNdsoEu', + 'uploader': 'milan', + 'uploader_id': '@milan5503', + 'description': '', + 'channel_url': 'https://www.youtube.com/channel/UCEI1-PVPcYXjB73Hfelbmaw', + 'tags': [], + 'modified_date': '20140919', + 'view_count': int, + 'channel': 'milan', + 'channel_id': 'UCEI1-PVPcYXjB73Hfelbmaw', + 'uploader_url': 'https://www.youtube.com/@milan5503', + 'availability': 'public', + }, + 'expected_warnings': [r'[Uu]navailable videos? 
(is|are|will be) hidden'], + }, { + 'url': 'http://www.youtube.com/embed/_xDOZElKyNU?list=PLsyOSbh5bs16vubvKePAQ1x3PhKavfBIl', + 'playlist_mincount': 455, + 'info_dict': { + 'title': '2018 Chinese New Singles (11/6 updated)', + 'id': 'PLsyOSbh5bs16vubvKePAQ1x3PhKavfBIl', + 'uploader': 'LBK', + 'uploader_id': '@music_king', + 'description': 'md5:da521864744d60a198e3a88af4db0d9d', + 'channel': 'LBK', + 'view_count': int, + 'channel_url': 'https://www.youtube.com/channel/UC21nz3_MesPLqtDqwdvnoxA', + 'tags': [], + 'uploader_url': 'https://www.youtube.com/@music_king', + 'channel_id': 'UC21nz3_MesPLqtDqwdvnoxA', + 'modified_date': r're:\d{8}', + 'availability': 'public', + }, + 'expected_warnings': [r'[Uu]navailable videos (are|will be) hidden'], + }, { + 'url': 'TLGGrESM50VT6acwMjAyMjAxNw', + 'only_matching': True, + }, { + # music album playlist + 'url': 'OLAK5uy_m4xAFdmMC5rX3Ji3g93pQe3hqLZw_9LhM', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + if YoutubeTabIE.suitable(url): + return False + from ..utils import parse_qs + qs = parse_qs(url) + if qs.get('v', [None])[0]: + return False + return super().suitable(url) + + def _real_extract(self, url): + playlist_id = self._match_id(url) + is_music_url = YoutubeBaseInfoExtractor.is_music_url(url) + url = update_url_query( + 'https://www.youtube.com/playlist', + parse_qs(url) or {'list': playlist_id}) + if is_music_url: + url = smuggle_url(url, {'is_music_url': True}) + return self.url_result(url, ie=YoutubeTabIE.ie_key(), video_id=playlist_id) + + +class YoutubeYtBeIE(InfoExtractor): + IE_DESC = 'youtu.be' + _VALID_URL = r'https?://youtu\.be/(?P<id>[0-9A-Za-z_-]{11})/*?.*?\blist=(?P<playlist_id>%(playlist_id)s)' % {'playlist_id': YoutubeBaseInfoExtractor._PLAYLIST_ID_RE} + _TESTS = [{ + 'url': 'https://youtu.be/yeWKywCrFtk?list=PL2qgrgXsNUG5ig9cat4ohreBjYLAPC0J5', + 'info_dict': { + 'id': 'yeWKywCrFtk', + 'ext': 'mp4', + 'title': 'Small Scale Baler and Braiding Rugs', + 'uploader': 'Backus-Page House Museum', + 'uploader_id': '@backuspagemuseum', + 'uploader_url': r're:https?://(?:www\.)?youtube\.com/@backuspagemuseum', + 'upload_date': '20161008', + 'description': 'md5:800c0c78d5eb128500bffd4f0b4f2e8a', + 'categories': ['Nonprofits & Activism'], + 'tags': list, + 'like_count': int, + 'age_limit': 0, + 'playable_in_embed': True, + 'thumbnail': r're:^https?://.*\.webp', + 'channel': 'Backus-Page House Museum', + 'channel_id': 'UCEfMCQ9bs3tjvjy1s451zaw', + 'live_status': 'not_live', + 'view_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCEfMCQ9bs3tjvjy1s451zaw', + 'availability': 'public', + 'duration': 59, + 'comment_count': int, + 'channel_follower_count': int + }, + 'params': { + 'noplaylist': True, + 'skip_download': True, + }, + }, { + 'url': 'https://youtu.be/uWyaPkt-VOI?list=PL9D9FC436B881BA21', + 'only_matching': True, + }] + + def _real_extract(self, url): + mobj = self._match_valid_url(url) + video_id = mobj.group('id') + playlist_id = mobj.group('playlist_id') + return self.url_result( + update_url_query('https://www.youtube.com/watch', { + 'v': video_id, + 'list': playlist_id, + 'feature': 'youtu.be', + }), ie=YoutubeTabIE.ie_key(), video_id=playlist_id) + + +class YoutubeLivestreamEmbedIE(InfoExtractor): + IE_DESC = 'YouTube livestream embeds' + _VALID_URL = r'https?://(?:\w+\.)?youtube\.com/embed/live_stream/?\?(?:[^#]+&)?channel=(?P<id>[^&#]+)' + _TESTS = [{ + 'url': 'https://www.youtube.com/embed/live_stream?channel=UC2_KI6RB__jGdlnK6dvFEZA', + 'only_matching': True, + }] + + def 
_real_extract(self, url): + channel_id = self._match_id(url) + return self.url_result( + f'https://www.youtube.com/channel/{channel_id}/live', + ie=YoutubeTabIE.ie_key(), video_id=channel_id) + + +class YoutubeYtUserIE(InfoExtractor): + IE_DESC = 'YouTube user videos; "ytuser:" prefix' + IE_NAME = 'youtube:user' + _VALID_URL = r'ytuser:(?P<id>.+)' + _TESTS = [{ + 'url': 'ytuser:phihag', + 'only_matching': True, + }] + + def _real_extract(self, url): + user_id = self._match_id(url) + return self.url_result(f'https://www.youtube.com/user/{user_id}', YoutubeTabIE, user_id) + + +class YoutubeFavouritesIE(YoutubeBaseInfoExtractor): + IE_NAME = 'youtube:favorites' + IE_DESC = 'YouTube liked videos; ":ytfav" keyword (requires cookies)' + _VALID_URL = r':ytfav(?:ou?rite)?s?' + _LOGIN_REQUIRED = True + _TESTS = [{ + 'url': ':ytfav', + 'only_matching': True, + }, { + 'url': ':ytfavorites', + 'only_matching': True, + }] + + def _real_extract(self, url): + return self.url_result( + 'https://www.youtube.com/playlist?list=LL', + ie=YoutubeTabIE.ie_key()) + + +class YoutubeNotificationsIE(YoutubeTabBaseInfoExtractor): + IE_NAME = 'youtube:notif' + IE_DESC = 'YouTube notifications; ":ytnotif" keyword (requires cookies)' + _VALID_URL = r':ytnotif(?:ication)?s?' + _LOGIN_REQUIRED = True + _TESTS = [{ + 'url': ':ytnotif', + 'only_matching': True, + }, { + 'url': ':ytnotifications', + 'only_matching': True, + }] + + def _extract_notification_menu(self, response, continuation_list): + notification_list = traverse_obj( + response, + ('actions', 0, 'openPopupAction', 'popup', 'multiPageMenuRenderer', 'sections', 0, 'multiPageMenuNotificationSectionRenderer', 'items'), + ('actions', 0, 'appendContinuationItemsAction', 'continuationItems'), + expected_type=list) or [] + continuation_list[0] = None + for item in notification_list: + entry = self._extract_notification_renderer(item.get('notificationRenderer')) + if entry: + yield entry + continuation = item.get('continuationItemRenderer') + if continuation: + continuation_list[0] = continuation + + def _extract_notification_renderer(self, notification): + video_id = traverse_obj( + notification, ('navigationEndpoint', 'watchEndpoint', 'videoId'), expected_type=str) + url = f'https://www.youtube.com/watch?v={video_id}' + channel_id = None + if not video_id: + browse_ep = traverse_obj( + notification, ('navigationEndpoint', 'browseEndpoint'), expected_type=dict) + channel_id = self.ucid_or_none(traverse_obj(browse_ep, 'browseId', expected_type=str)) + post_id = self._search_regex( + r'/post/(.+)', traverse_obj(browse_ep, 'canonicalBaseUrl', expected_type=str), + 'post id', default=None) + if not channel_id or not post_id: + return + # The direct /post url redirects to this in the browser + url = f'https://www.youtube.com/channel/{channel_id}/community?lb={post_id}' + + channel = traverse_obj( + notification, ('contextualMenu', 'menuRenderer', 'items', 1, 'menuServiceItemRenderer', 'text', 'runs', 1, 'text'), + expected_type=str) + notification_title = self._get_text(notification, 'shortMessage') + if notification_title: + notification_title = notification_title.replace('\xad', '') # remove soft hyphens + # TODO: handle recommended videos + title = self._search_regex( + rf'{re.escape(channel or "")}[^:]+: (.+)', notification_title, + 'video title', default=None) + timestamp = (self._parse_time_text(self._get_text(notification, 'sentTimeText')) + if self._configuration_arg('approximate_date', ie_key=YoutubeTabIE) + else None) + return { + '_type': 'url', + 'url': url, + 
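+ # Video notifications dispatch straight to YoutubeIE; community-post notifications use the channel community URL built above and dispatch to YoutubeTabIE.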
'ie_key': (YoutubeIE if video_id else YoutubeTabIE).ie_key(), + 'video_id': video_id, + 'title': title, + 'channel_id': channel_id, + 'channel': channel, + 'uploader': channel, + 'thumbnails': self._extract_thumbnails(notification, 'videoThumbnail'), + 'timestamp': timestamp, + } + + def _notification_menu_entries(self, ytcfg): + continuation_list = [None] + response = None + for page in itertools.count(1): + ctoken = traverse_obj( + continuation_list, (0, 'continuationEndpoint', 'getNotificationMenuEndpoint', 'ctoken'), expected_type=str) + response = self._extract_response( + item_id=f'page {page}', query={'ctoken': ctoken} if ctoken else {}, ytcfg=ytcfg, + ep='notification/get_notification_menu', check_get_keys='actions', + headers=self.generate_api_headers(ytcfg=ytcfg, visitor_data=self._extract_visitor_data(response))) + yield from self._extract_notification_menu(response, continuation_list) + if not continuation_list[0]: + break + + def _real_extract(self, url): + display_id = 'notifications' + ytcfg = self._download_ytcfg('web', display_id) if not self.skip_webpage else {} + self._report_playlist_authcheck(ytcfg) + return self.playlist_result(self._notification_menu_entries(ytcfg), display_id, display_id) + + +class YoutubeSearchIE(YoutubeTabBaseInfoExtractor, SearchInfoExtractor): + IE_DESC = 'YouTube search' + IE_NAME = 'youtube:search' + _SEARCH_KEY = 'ytsearch' + _SEARCH_PARAMS = 'EgIQAQ%3D%3D' # Videos only + _TESTS = [{ + 'url': 'ytsearch5:youtube-dl test video', + 'playlist_count': 5, + 'info_dict': { + 'id': 'youtube-dl test video', + 'title': 'youtube-dl test video', + } + }] + + +class YoutubeSearchDateIE(YoutubeTabBaseInfoExtractor, SearchInfoExtractor): + IE_NAME = YoutubeSearchIE.IE_NAME + ':date' + _SEARCH_KEY = 'ytsearchdate' + IE_DESC = 'YouTube search, newest videos first' + _SEARCH_PARAMS = 'CAISAhAB' # Videos only, sorted by date + _TESTS = [{ + 'url': 'ytsearchdate5:youtube-dl test video', + 'playlist_count': 5, + 'info_dict': { + 'id': 'youtube-dl test video', + 'title': 'youtube-dl test video', + } + }] + + +class YoutubeSearchURLIE(YoutubeTabBaseInfoExtractor): + IE_DESC = 'YouTube search URLs with sorting and filter support' + IE_NAME = YoutubeSearchIE.IE_NAME + '_url' + _VALID_URL = r'https?://(?:www\.)?youtube\.com/(?:results|search)\?([^#]+&)?(?:search_query|q)=(?:[^&]+)(?:[&#]|$)' + _TESTS = [{ + 'url': 'https://www.youtube.com/results?baz=bar&search_query=youtube-dl+test+video&filters=video&lclk=video', + 'playlist_mincount': 5, + 'info_dict': { + 'id': 'youtube-dl test video', + 'title': 'youtube-dl test video', + } + }, { + 'url': 'https://www.youtube.com/results?search_query=python&sp=EgIQAg%253D%253D', + 'playlist_mincount': 5, + 'info_dict': { + 'id': 'python', + 'title': 'python', + } + }, { + 'url': 'https://www.youtube.com/results?search_query=%23cats', + 'playlist_mincount': 1, + 'info_dict': { + 'id': '#cats', + 'title': '#cats', + # The test suite does not have support for nested playlists + # 'entries': [{ + # 'url': r're:https://(www\.)?youtube\.com/hashtag/cats', + # 'title': '#cats', + # }], + }, + }, { + # Channel results + 'url': 'https://www.youtube.com/results?search_query=kurzgesagt&sp=EgIQAg%253D%253D', + 'info_dict': { + 'id': 'kurzgesagt', + 'title': 'kurzgesagt', + }, + 'playlist': [{ + 'info_dict': { + '_type': 'url', + 'id': 'UCsXVk37bltHxD1rDPwtNM8Q', + 'url': 'https://www.youtube.com/channel/UCsXVk37bltHxD1rDPwtNM8Q', + 'ie_key': 'YoutubeTab', + 'channel': 'Kurzgesagt – In a Nutshell', + 'description': 
'md5:4ae48dfa9505ffc307dad26342d06bfc', + 'title': 'Kurzgesagt – In a Nutshell', + 'channel_id': 'UCsXVk37bltHxD1rDPwtNM8Q', + # No longer available for search as it is set to the handle. + # 'playlist_count': int, + 'channel_url': 'https://www.youtube.com/channel/UCsXVk37bltHxD1rDPwtNM8Q', + 'thumbnails': list, + 'uploader_id': '@kurzgesagt', + 'uploader_url': 'https://www.youtube.com/@kurzgesagt', + 'uploader': 'Kurzgesagt – In a Nutshell', + 'channel_is_verified': True, + 'channel_follower_count': int, + } + }], + 'params': {'extract_flat': True, 'playlist_items': '1'}, + 'playlist_mincount': 1, + }, { + 'url': 'https://www.youtube.com/results?q=test&sp=EgQIBBgB', + 'only_matching': True, + }] + + def _real_extract(self, url): + qs = parse_qs(url) + query = (qs.get('search_query') or qs.get('q'))[0] + return self.playlist_result(self._search_results(query, qs.get('sp', (None,))[0]), query, query) + + +class YoutubeMusicSearchURLIE(YoutubeTabBaseInfoExtractor): + IE_DESC = 'YouTube music search URLs with selectable sections, e.g. #songs' + IE_NAME = 'youtube:music:search_url' + _VALID_URL = r'https?://music\.youtube\.com/search\?([^#]+&)?(?:search_query|q)=(?:[^&]+)(?:[&#]|$)' + _TESTS = [{ + 'url': 'https://music.youtube.com/search?q=royalty+free+music', + 'playlist_count': 16, + 'info_dict': { + 'id': 'royalty free music', + 'title': 'royalty free music', + } + }, { + 'url': 'https://music.youtube.com/search?q=royalty+free+music&sp=EgWKAQIIAWoKEAoQAxAEEAkQBQ%3D%3D', + 'playlist_mincount': 30, + 'info_dict': { + 'id': 'royalty free music - songs', + 'title': 'royalty free music - songs', + }, + 'params': {'extract_flat': 'in_playlist'} + }, { + 'url': 'https://music.youtube.com/search?q=royalty+free+music#community+playlists', + 'playlist_mincount': 30, + 'info_dict': { + 'id': 'royalty free music - community playlists', + 'title': 'royalty free music - community playlists', + }, + 'params': {'extract_flat': 'in_playlist'} + }] + + _SECTIONS = { + 'albums': 'EgWKAQIYAWoKEAoQAxAEEAkQBQ==', + 'artists': 'EgWKAQIgAWoKEAoQAxAEEAkQBQ==', + 'community playlists': 'EgeKAQQoAEABagoQChADEAQQCRAF', + 'featured playlists': 'EgeKAQQoADgBagwQAxAJEAQQDhAKEAU==', + 'songs': 'EgWKAQIIAWoKEAoQAxAEEAkQBQ==', + 'videos': 'EgWKAQIQAWoKEAoQAxAEEAkQBQ==', + } + + def _real_extract(self, url): + qs = parse_qs(url) + query = (qs.get('search_query') or qs.get('q'))[0] + params = qs.get('sp', (None,))[0] + if params: + section = next((k for k, v in self._SECTIONS.items() if v == params), params) + else: + section = urllib.parse.unquote_plus((url.split('#') + [''])[1]).lower() + params = self._SECTIONS.get(section) + if not params: + section = None + title = join_nonempty(query, section, delim=' - ') + return self.playlist_result(self._search_results(query, params, default_client='web_music'), title, title) + + +class YoutubeFeedsInfoExtractor(InfoExtractor): + """ + Base class for feed extractors + Subclasses must re-define the _FEED_NAME property. 
+ """ + _LOGIN_REQUIRED = True + _FEED_NAME = 'feeds' + + def _real_initialize(self): + YoutubeBaseInfoExtractor._check_login_required(self) + + @classproperty + def IE_NAME(self): + return f'youtube:{self._FEED_NAME}' + + def _real_extract(self, url): + return self.url_result( + f'https://www.youtube.com/feed/{self._FEED_NAME}', ie=YoutubeTabIE.ie_key()) + + +class YoutubeWatchLaterIE(InfoExtractor): + IE_NAME = 'youtube:watchlater' + IE_DESC = 'Youtube watch later list; ":ytwatchlater" keyword (requires cookies)' + _VALID_URL = r':ytwatchlater' + _TESTS = [{ + 'url': ':ytwatchlater', + 'only_matching': True, + }] + + def _real_extract(self, url): + return self.url_result( + 'https://www.youtube.com/playlist?list=WL', ie=YoutubeTabIE.ie_key()) + + +class YoutubeRecommendedIE(YoutubeFeedsInfoExtractor): + IE_DESC = 'YouTube recommended videos; ":ytrec" keyword' + _VALID_URL = r'https?://(?:www\.)?youtube\.com/?(?:[?#]|$)|:ytrec(?:ommended)?' + _FEED_NAME = 'recommended' + _LOGIN_REQUIRED = False + _TESTS = [{ + 'url': ':ytrec', + 'only_matching': True, + }, { + 'url': ':ytrecommended', + 'only_matching': True, + }, { + 'url': 'https://youtube.com', + 'only_matching': True, + }] + + +class YoutubeSubscriptionsIE(YoutubeFeedsInfoExtractor): + IE_DESC = 'YouTube subscriptions feed; ":ytsubs" keyword (requires cookies)' + _VALID_URL = r':ytsub(?:scription)?s?' + _FEED_NAME = 'subscriptions' + _TESTS = [{ + 'url': ':ytsubs', + 'only_matching': True, + }, { + 'url': ':ytsubscriptions', + 'only_matching': True, + }] + + +class YoutubeHistoryIE(YoutubeFeedsInfoExtractor): + IE_DESC = 'Youtube watch history; ":ythis" keyword (requires cookies)' + _VALID_URL = r':ythis(?:tory)?' + _FEED_NAME = 'history' + _TESTS = [{ + 'url': ':ythistory', + 'only_matching': True, + }] + + +class YoutubeShortsAudioPivotIE(InfoExtractor): + IE_DESC = 'YouTube Shorts audio pivot (Shorts using audio of a given video)' + IE_NAME = 'youtube:shorts:pivot:audio' + _VALID_URL = r'https?://(?:www\.)?youtube\.com/source/(?P<id>[\w-]{11})/shorts' + _TESTS = [{ + 'url': 'https://www.youtube.com/source/Lyj-MZSAA9o/shorts', + 'only_matching': True, + }] + + @staticmethod + def _generate_audio_pivot_params(video_id): + """ + Generates sfv_audio_pivot browse params for this video id + """ + pb_params = b'\xf2\x05+\n)\x12\'\n\x0b%b\x12\x0b%b\x1a\x0b%b' % ((video_id.encode(),) * 3) + return urllib.parse.quote(base64.b64encode(pb_params).decode()) + + def _real_extract(self, url): + video_id = self._match_id(url) + return self.url_result( + f'https://www.youtube.com/feed/sfv_audio_pivot?bp={self._generate_audio_pivot_params(video_id)}', + ie=YoutubeTabIE) + + +class YoutubeTruncatedURLIE(InfoExtractor): + IE_NAME = 'youtube:truncated_url' + IE_DESC = False # Do not list + _VALID_URL = r'''(?x) + (?:https?://)? + (?:\w+\.)?[yY][oO][uU][tT][uU][bB][eE](?:-nocookie)?\.com/ + (?:watch\?(?: + feature=[a-z_]+| + annotation_id=annotation_[^&]+| + x-yt-cl=[0-9]+| + hl=[^&]*| + t=[0-9]+ + )? 
+ | + attribution_link\?a=[^&]+ + ) + $ + ''' + + _TESTS = [{ + 'url': 'https://www.youtube.com/watch?annotation_id=annotation_3951667041', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/watch?', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/watch?x-yt-cl=84503534', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/watch?feature=foo', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/watch?hl=en-GB', + 'only_matching': True, + }, { + 'url': 'https://www.youtube.com/watch?t=2372', + 'only_matching': True, + }] + + def _real_extract(self, url): + raise ExtractorError( + 'Did you forget to quote the URL? Remember that & is a meta ' + 'character in most shells, so you want to put the URL in quotes, ' + 'like youtube-dl ' + '"https://www.youtube.com/watch?feature=foo&v=BaW_jenozKc" ' + ' or simply youtube-dl BaW_jenozKc .', + expected=True) + + +class YoutubeClipIE(YoutubeTabBaseInfoExtractor): + IE_NAME = 'youtube:clip' + _VALID_URL = r'https?://(?:www\.)?youtube\.com/clip/(?P<id>[^/?#]+)' + _TESTS = [{ + # FIXME: Other metadata should be extracted from the clip, not from the base video + 'url': 'https://www.youtube.com/clip/UgytZKpehg-hEMBSn3F4AaABCQ', + 'info_dict': { + 'id': 'UgytZKpehg-hEMBSn3F4AaABCQ', + 'ext': 'mp4', + 'section_start': 29.0, + 'section_end': 39.7, + 'duration': 10.7, + 'age_limit': 0, + 'availability': 'public', + 'categories': ['Gaming'], + 'channel': 'Scott The Woz', + 'channel_id': 'UC4rqhyiTs7XyuODcECvuiiQ', + 'channel_url': 'https://www.youtube.com/channel/UC4rqhyiTs7XyuODcECvuiiQ', + 'description': 'md5:7a4517a17ea9b4bd98996399d8bb36e7', + 'like_count': int, + 'playable_in_embed': True, + 'tags': 'count:17', + 'thumbnail': 'https://i.ytimg.com/vi_webp/ScPX26pdQik/maxresdefault.webp', + 'title': 'Mobile Games on Console - Scott The Woz', + 'upload_date': '20210920', + 'uploader': 'Scott The Woz', + 'uploader_id': '@ScottTheWoz', + 'uploader_url': 'https://www.youtube.com/@ScottTheWoz', + 'view_count': int, + 'live_status': 'not_live', + 'channel_follower_count': int, + 'chapters': 'count:20', + 'comment_count': int, + 'heatmap': 'count:100', + } + }] + + def _real_extract(self, url): + clip_id = self._match_id(url) + _, data = self._extract_webpage(url, clip_id) + + video_id = traverse_obj(data, ('currentVideoEndpoint', 'watchEndpoint', 'videoId')) + if not video_id: + raise ExtractorError('Unable to find video ID') + + clip_data = traverse_obj(data, ( + 'engagementPanels', ..., 'engagementPanelSectionListRenderer', 'content', 'clipSectionRenderer', + 'contents', ..., 'clipAttributionRenderer', 'onScrubExit', 'commandExecutorCommand', 'commands', ..., + 'openPopupAction', 'popup', 'notificationActionRenderer', 'actionButton', 'buttonRenderer', 'command', + 'commandExecutorCommand', 'commands', ..., 'loopCommand'), get_all=False) + + return { + '_type': 'url_transparent', + 'url': f'https://www.youtube.com/watch?v={video_id}', + 'ie_key': YoutubeIE.ie_key(), + 'id': clip_id, + 'section_start': int(clip_data['startTimeMs']) / 1000, + 'section_end': int(clip_data['endTimeMs']) / 1000, + } + + +class YoutubeConsentRedirectIE(YoutubeBaseInfoExtractor): + IE_NAME = 'youtube:consent' + IE_DESC = False # Do not list + _VALID_URL = r'https?://consent\.youtube\.com/m\?' 
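+ # The consent interstitial keeps the original destination in its 'continue' query parameter; _real_extract below simply unwraps it and re-dispatches the URL.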
+ _TESTS = [{ + 'url': 'https://consent.youtube.com/m?continue=https%3A%2F%2Fwww.youtube.com%2Flive%2FqVv6vCqciTM%3Fcbrd%3D1&gl=NL&m=0&pc=yt&hl=en&src=1', + 'info_dict': { + 'id': 'qVv6vCqciTM', + 'ext': 'mp4', + 'age_limit': 0, + 'uploader_id': '@sana_natori', + 'comment_count': int, + 'chapters': 'count:13', + 'upload_date': '20221223', + 'thumbnail': 'https://i.ytimg.com/vi/qVv6vCqciTM/maxresdefault.jpg', + 'channel_url': 'https://www.youtube.com/channel/UCIdEIHpS0TdkqRkHL5OkLtA', + 'uploader_url': 'https://www.youtube.com/@sana_natori', + 'like_count': int, + 'release_date': '20221223', + 'tags': ['Vtuber', '月ノ美兎', '名取さな', 'にじさんじ', 'クリスマス', '3D配信'], + 'title': '【 #インターネット女クリスマス 】3Dで歌ってはしゃぐインターネットの女たち【月ノ美兎/名取さな】', + 'view_count': int, + 'playable_in_embed': True, + 'duration': 4438, + 'availability': 'public', + 'channel_follower_count': int, + 'channel_id': 'UCIdEIHpS0TdkqRkHL5OkLtA', + 'categories': ['Entertainment'], + 'live_status': 'was_live', + 'release_timestamp': 1671793345, + 'channel': 'さなちゃんねる', + 'description': 'md5:6aebf95cc4a1d731aebc01ad6cc9806d', + 'uploader': 'さなちゃんねる', + 'channel_is_verified': True, + 'heatmap': 'count:100', + }, + 'add_ie': ['Youtube'], + 'params': {'skip_download': 'Youtube'}, + }] + + def _real_extract(self, url): + redirect_url = url_or_none(parse_qs(url).get('continue', [None])[-1]) + if not redirect_url: + raise ExtractorError('Invalid cookie consent redirect URL', expected=True) + return self.url_result(redirect_url) + + +class YoutubeTruncatedIDIE(InfoExtractor): + IE_NAME = 'youtube:truncated_id' + IE_DESC = False # Do not list + _VALID_URL = r'https?://(?:www\.)?youtube\.com/watch\?v=(?P<id>[0-9A-Za-z_-]{1,10})$' + + _TESTS = [{ + 'url': 'https://www.youtube.com/watch?v=N_708QY7Ob', + 'only_matching': True, + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + raise ExtractorError( + f'Incomplete YouTube ID {video_id}. 
URL {url} looks truncated.', + expected=True) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/zaiko.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/zaiko.py new file mode 100644 index 0000000..2b6221d --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/zaiko.py @@ -0,0 +1,139 @@ +import base64 + +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + extract_attributes, + int_or_none, + str_or_none, + traverse_obj, + try_call, + unescapeHTML, + url_basename, + url_or_none, +) + + +class ZaikoBaseIE(InfoExtractor): + def _download_real_webpage(self, url, video_id): + webpage, urlh = self._download_webpage_handle(url, video_id) + final_url = urlh.url + if 'zaiko.io/login' in final_url: + self.raise_login_required() + elif '/_buy/' in final_url: + raise ExtractorError('Your account does not have tickets to this event', expected=True) + return webpage + + def _parse_vue_element_attr(self, name, string, video_id): + page_elem = self._search_regex(rf'(<{name}[^>]+>)', string, name) + attrs = {} + for key, value in extract_attributes(page_elem).items(): + if key.startswith(':'): + attrs[key[1:]] = self._parse_json( + value, video_id, transform_source=unescapeHTML, fatal=False) + return attrs + + +class ZaikoIE(ZaikoBaseIE): + _VALID_URL = r'https?://(?:[\w-]+\.)?zaiko\.io/event/(?P<id>\d+)/stream(?:/\d+)+' + _TESTS = [{ + 'url': 'https://zaiko.io/event/324868/stream/20571/20571', + 'info_dict': { + 'id': '324868', + 'ext': 'mp4', + 'title': 'ZAIKO STREAMING TEST', + 'alt_title': '[VOD] ZAIKO STREAMING TEST_20210603(Do Not Delete)', + 'uploader_id': '454', + 'uploader': 'ZAIKO ZERO', + 'release_timestamp': 1583809200, + 'thumbnail': r're:^https://[\w.-]+/\w+/\w+', + 'thumbnails': 'maxcount:2', + 'release_date': '20200310', + 'categories': ['Tech House'], + 'live_status': 'was_live', + }, + 'params': {'skip_download': 'm3u8'}, + 'skip': 'Your account does not have tickets to this event', + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_real_webpage(url, video_id) + stream_meta = self._parse_vue_element_attr('stream-page', webpage, video_id) + + player_page = self._download_webpage( + stream_meta['stream-access']['video_source'], video_id, + 'Downloading player page', headers={'referer': 'https://zaiko.io/'}) + player_meta = self._parse_vue_element_attr('player', player_page, video_id) + status = traverse_obj(player_meta, ('initial_event_info', 'status', {str})) + live_status, msg, expected = { + 'vod': ('was_live', 'No VOD stream URL was found', False), + 'archiving': ('post_live', 'Event VOD is still being processed', True), + 'deleting': ('post_live', 'This event has ended', True), + 'deleted': ('post_live', 'This event has ended', True), + 'error': ('post_live', 'This event has ended', True), + 'disconnected': ('post_live', 'Stream has been disconnected', True), + 'live_to_disconnected': ('post_live', 'Stream has been disconnected', True), + 'live': ('is_live', 'No livestream URL was found', False), + 'waiting': ('is_upcoming', 'Live event has not yet started', True), + 'cancelled': ('not_live', 'Event has been cancelled', True), + }.get(status) or ('not_live', f'Unknown event status "{status}"', False) + + stream_url = traverse_obj(player_meta, ('initial_event_info', 'endpoint', {url_or_none})) + formats = self._extract_m3u8_formats( + stream_url, video_id, live=True, fatal=False) if stream_url else [] + if not formats: + self.raise_no_formats(msg, expected=expected) 
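+ # Thumbnail candidates: the player's poster image first, then the Open Graph thumbnail scraped from the public event page as a fallback.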
+ + thumbnail_urls = [ + traverse_obj(player_meta, ('initial_event_info', 'poster_url')), + self._og_search_thumbnail(self._download_webpage( + f'https://zaiko.io/event/{video_id}', video_id, 'Downloading event page', fatal=False) or ''), + ] + + return { + 'id': video_id, + 'formats': formats, + 'live_status': live_status, + **traverse_obj(stream_meta, { + 'title': ('event', 'name', {str}), + 'uploader': ('profile', 'name', {str}), + 'uploader_id': ('profile', 'id', {str_or_none}), + 'release_timestamp': ('stream', 'start', 'timestamp', {int_or_none}), + 'categories': ('event', 'genres', ..., {lambda x: x or None}), + }), + **traverse_obj(player_meta, ('initial_event_info', { + 'alt_title': ('title', {str}), + })), + 'thumbnails': [{'url': url, 'id': url_basename(url)} for url in thumbnail_urls if url_or_none(url)] + } + + +class ZaikoETicketIE(ZaikoBaseIE): + _VALID_URL = r'https?://(?:www.)?zaiko\.io/account/eticket/(?P<id>[\w=-]{49})' + _TESTS = [{ + 'url': 'https://zaiko.io/account/eticket/TZjMwMzQ2Y2EzMXwyMDIzMDYwNzEyMTMyNXw1MDViOWU2Mw==', + 'playlist_count': 1, + 'info_dict': { + 'id': 'f30346ca31-20230607121325-505b9e63', + 'title': 'ZAIKO STREAMING TEST', + 'thumbnail': 'https://media.zkocdn.net/pf_1/1_3wdyjcjyupseatkwid34u', + }, + 'skip': 'Only available with the ticketholding account', + }] + + def _real_extract(self, url): + ticket_id = self._match_id(url) + ticket_id = try_call( + lambda: base64.urlsafe_b64decode(ticket_id[1:]).decode().replace('|', '-')) or ticket_id + + webpage = self._download_real_webpage(url, ticket_id) + eticket = self._parse_vue_element_attr('eticket', webpage, ticket_id) + + return self.playlist_result( + [self.url_result(stream, ZaikoIE) for stream in traverse_obj(eticket, ('streams', ..., 'url'))], + ticket_id, **traverse_obj(eticket, ('ticket-details', { + 'title': 'event_name', + 'thumbnail': 'event_img_url', + }))) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/zapiks.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/zapiks.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/zapiks.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/zapiks.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/zattoo.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/zattoo.py new file mode 100644 index 0000000..6bd9ea0 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/zattoo.py @@ -0,0 +1,865 @@ +import re +from uuid import uuid4 + +from .common import InfoExtractor +from ..compat import compat_str +from ..networking.exceptions import HTTPError +from ..utils import ( + ExtractorError, + int_or_none, + join_nonempty, + try_get, + url_or_none, + urlencode_postdata, +) + + +class ZattooPlatformBaseIE(InfoExtractor): + _power_guide_hash = None + + def _host_url(self): + return 'https://%s' % (self._API_HOST if hasattr(self, '_API_HOST') else self._HOST) + + def _real_initialize(self): + if not self._power_guide_hash: + self.raise_login_required('An account is needed to access this media', method='password') + + def _perform_login(self, username, password): + try: + data = self._download_json( + '%s/zapi/v2/account/login' % self._host_url(), None, 'Logging in', + data=urlencode_postdata({ + 'login': username, + 'password': password, + 'remember': 'true', + }), headers={ + 'Referer': '%s/login' % self._host_url(), + 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8', + }) + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and 
e.cause.status == 400: + raise ExtractorError( + 'Unable to login: incorrect username and/or password', + expected=True) + raise + + self._power_guide_hash = data['session']['power_guide_hash'] + + def _initialize_pre_login(self): + session_token = self._download_json( + f'{self._host_url()}/token.json', None, 'Downloading session token')['session_token'] + + # Will setup appropriate cookies + self._request_webpage( + '%s/zapi/v3/session/hello' % self._host_url(), None, + 'Opening session', data=urlencode_postdata({ + 'uuid': compat_str(uuid4()), + 'lang': 'en', + 'app_version': '1.8.2', + 'format': 'json', + 'client_app_token': session_token, + })) + + def _extract_video_id_from_recording(self, recid): + playlist = self._download_json( + f'{self._host_url()}/zapi/v2/playlist', recid, 'Downloading playlist') + try: + return next( + str(item['program_id']) for item in playlist['recordings'] + if item.get('program_id') and str(item.get('id')) == recid) + except (StopIteration, KeyError): + raise ExtractorError('Could not extract video id from recording') + + def _extract_cid(self, video_id, channel_name): + channel_groups = self._download_json( + '%s/zapi/v2/cached/channels/%s' % (self._host_url(), + self._power_guide_hash), + video_id, 'Downloading channel list', + query={'details': False})['channel_groups'] + channel_list = [] + for chgrp in channel_groups: + channel_list.extend(chgrp['channels']) + try: + return next( + chan['cid'] for chan in channel_list + if chan.get('cid') and ( + chan.get('display_alias') == channel_name + or chan.get('cid') == channel_name)) + except StopIteration: + raise ExtractorError('Could not extract channel id') + + def _extract_cid_and_video_info(self, video_id): + data = self._download_json( + '%s/zapi/v2/cached/program/power_details/%s' % ( + self._host_url(), self._power_guide_hash), + video_id, + 'Downloading video information', + query={ + 'program_ids': video_id, + 'complete': True, + }) + + p = data['programs'][0] + cid = p['cid'] + + info_dict = { + 'id': video_id, + 'title': p.get('t') or p['et'], + 'description': p.get('d'), + 'thumbnail': p.get('i_url'), + 'creator': p.get('channel_name'), + 'episode': p.get('et'), + 'episode_number': int_or_none(p.get('e_no')), + 'season_number': int_or_none(p.get('s_no')), + 'release_year': int_or_none(p.get('year')), + 'categories': try_get(p, lambda x: x['c'], list), + 'tags': try_get(p, lambda x: x['g'], list) + } + + return cid, info_dict + + def _extract_ondemand_info(self, ondemand_id): + """ + @returns (ondemand_token, ondemand_type, info_dict) + """ + data = self._download_json( + '%s/zapi/vod/movies/%s' % (self._host_url(), ondemand_id), + ondemand_id, 'Downloading ondemand information') + info_dict = { + 'id': ondemand_id, + 'title': data.get('title'), + 'description': data.get('description'), + 'duration': int_or_none(data.get('duration')), + 'release_year': int_or_none(data.get('year')), + 'episode_number': int_or_none(data.get('episode_number')), + 'season_number': int_or_none(data.get('season_number')), + 'categories': try_get(data, lambda x: x['categories'], list), + } + return data['terms_catalog'][0]['terms'][0]['token'], data['type'], info_dict + + def _extract_formats(self, cid, video_id, record_id=None, ondemand_id=None, ondemand_termtoken=None, ondemand_type=None, is_live=False): + postdata_common = { + 'https_watch_urls': True, + } + + if is_live: + postdata_common.update({'timeshift': 10800}) + url = '%s/zapi/watch/live/%s' % (self._host_url(), cid) + elif record_id: + url = 
'%s/zapi/watch/recording/%s' % (self._host_url(), record_id) + elif ondemand_id: + postdata_common.update({ + 'teasable_id': ondemand_id, + 'term_token': ondemand_termtoken, + 'teasable_type': ondemand_type + }) + url = '%s/zapi/watch/vod/video' % self._host_url() + else: + url = '%s/zapi/v3/watch/replay/%s/%s' % (self._host_url(), cid, video_id) + formats = [] + subtitles = {} + for stream_type in ('dash', 'hls7'): + postdata = postdata_common.copy() + postdata['stream_type'] = stream_type + + data = self._download_json( + url, video_id, 'Downloading %s formats' % stream_type.upper(), + data=urlencode_postdata(postdata), fatal=False) + if not data: + continue + + watch_urls = try_get( + data, lambda x: x['stream']['watch_urls'], list) + if not watch_urls: + continue + + for watch in watch_urls: + if not isinstance(watch, dict): + continue + watch_url = url_or_none(watch.get('url')) + if not watch_url: + continue + audio_channel = watch.get('audio_channel') + preference = 1 if audio_channel == 'A' else None + format_id = join_nonempty(stream_type, watch.get('maxrate'), audio_channel) + if stream_type.startswith('dash'): + this_formats, subs = self._extract_mpd_formats_and_subtitles( + watch_url, video_id, mpd_id=format_id, fatal=False) + self._merge_subtitles(subs, target=subtitles) + elif stream_type.startswith('hls'): + this_formats, subs = self._extract_m3u8_formats_and_subtitles( + watch_url, video_id, 'mp4', + entry_protocol='m3u8_native', m3u8_id=format_id, + fatal=False) + self._merge_subtitles(subs, target=subtitles) + elif stream_type == 'hds': + this_formats = self._extract_f4m_formats( + watch_url, video_id, f4m_id=format_id, fatal=False) + elif stream_type == 'smooth_playready': + this_formats = self._extract_ism_formats( + watch_url, video_id, ism_id=format_id, fatal=False) + else: + assert False + for this_format in this_formats: + this_format['quality'] = preference + formats.extend(this_formats) + return formats, subtitles + + def _extract_video(self, video_id, record_id=None): + cid, info_dict = self._extract_cid_and_video_info(video_id) + info_dict['formats'], info_dict['subtitles'] = self._extract_formats(cid, video_id, record_id=record_id) + return info_dict + + def _extract_live(self, channel_name): + cid = self._extract_cid(channel_name, channel_name) + formats, subtitles = self._extract_formats(cid, cid, is_live=True) + return { + 'id': channel_name, + 'title': channel_name, + 'is_live': True, + 'formats': formats, + 'subtitles': subtitles + } + + def _extract_record(self, record_id): + video_id = self._extract_video_id_from_recording(record_id) + cid, info_dict = self._extract_cid_and_video_info(video_id) + info_dict['formats'], info_dict['subtitles'] = self._extract_formats(cid, video_id, record_id=record_id) + return info_dict + + def _extract_ondemand(self, ondemand_id): + ondemand_termtoken, ondemand_type, info_dict = self._extract_ondemand_info(ondemand_id) + info_dict['formats'], info_dict['subtitles'] = self._extract_formats( + None, ondemand_id, ondemand_id=ondemand_id, + ondemand_termtoken=ondemand_termtoken, ondemand_type=ondemand_type) + return info_dict + + def _real_extract(self, url): + video_id, record_id = self._match_valid_url(url).groups() + return getattr(self, f'_extract_{self._TYPE}')(video_id or record_id) + + +def _create_valid_url(host, match, qs, base_re=None): + match_base = fr'|{base_re}/(?P<vid1>{match})' if base_re else '(?P<vid1>)' + return rf'''(?x)https?://(?:www\.)?{re.escape(host)}/(?: + [^?#]+\?(?:[^#]+&)?{qs}=(?P<vid2>{match}) + 
{match_base} + )''' + + +class ZattooBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'zattoo' + _HOST = 'zattoo.com' + + +class ZattooIE(ZattooBaseIE): + _VALID_URL = _create_valid_url(ZattooBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://zattoo.com/program/zdf/250170418', + 'info_dict': { + 'id': '250170418', + 'ext': 'mp4', + 'title': 'Markus Lanz', + 'description': 'md5:e41cb1257de008ca62a73bb876ffa7fc', + 'thumbnail': 're:http://images.zattic.com/cms/.+/format_480x360.jpg', + 'creator': 'ZDF HD', + 'release_year': 2022, + 'episode': 'Folge 1655', + 'categories': 'count:1', + 'tags': 'count:2' + }, + 'params': {'skip_download': 'm3u8'} + }, { + 'url': 'https://zattoo.com/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://zattoo.com/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class ZattooLiveIE(ZattooBaseIE): + _VALID_URL = _create_valid_url(ZattooBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://zattoo.com/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://zattoo.com/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if ZattooIE.suitable(url) else super().suitable(url) + + +class ZattooMoviesIE(ZattooBaseIE): + _VALID_URL = _create_valid_url(ZattooBaseIE._HOST, r'\w+', 'movie_id', 'vod/movies') + _TYPE = 'ondemand' + _TESTS = [{ + 'url': 'https://zattoo.com/vod/movies/7521', + 'only_matching': True, + }, { + 'url': 'https://zattoo.com/ondemand?movie_id=7521&term_token=9f00f43183269484edde', + 'only_matching': True, + }] + + +class ZattooRecordingsIE(ZattooBaseIE): + _VALID_URL = _create_valid_url('zattoo.com', r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://zattoo.com/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://zattoo.com/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class NetPlusTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'netplus' + _HOST = 'netplus.tv' + _API_HOST = 'www.%s' % _HOST + + +class NetPlusTVIE(NetPlusTVBaseIE): + _VALID_URL = _create_valid_url(NetPlusTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://netplus.tv/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://netplus.tv/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class NetPlusTVLiveIE(NetPlusTVBaseIE): + _VALID_URL = _create_valid_url(NetPlusTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://netplus.tv/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://netplus.tv/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if NetPlusTVIE.suitable(url) else super().suitable(url) + + +class NetPlusTVRecordingsIE(NetPlusTVBaseIE): + _VALID_URL = _create_valid_url(NetPlusTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://netplus.tv/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://netplus.tv/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class MNetTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'mnettv' + _HOST = 'tvplus.m-net.de' + + +class MNetTVIE(MNetTVBaseIE): + _VALID_URL = 
_create_valid_url(MNetTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://tvplus.m-net.de/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://tvplus.m-net.de/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class MNetTVLiveIE(MNetTVBaseIE): + _VALID_URL = _create_valid_url(MNetTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://tvplus.m-net.de/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://tvplus.m-net.de/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if MNetTVIE.suitable(url) else super().suitable(url) + + +class MNetTVRecordingsIE(MNetTVBaseIE): + _VALID_URL = _create_valid_url(MNetTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://tvplus.m-net.de/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://tvplus.m-net.de/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class WalyTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'walytv' + _HOST = 'player.waly.tv' + + +class WalyTVIE(WalyTVBaseIE): + _VALID_URL = _create_valid_url(WalyTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://player.waly.tv/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://player.waly.tv/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class WalyTVLiveIE(WalyTVBaseIE): + _VALID_URL = _create_valid_url(WalyTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://player.waly.tv/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://player.waly.tv/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if WalyTVIE.suitable(url) else super().suitable(url) + + +class WalyTVRecordingsIE(WalyTVBaseIE): + _VALID_URL = _create_valid_url(WalyTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://player.waly.tv/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://player.waly.tv/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class BBVTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'bbvtv' + _HOST = 'bbv-tv.net' + _API_HOST = 'www.%s' % _HOST + + +class BBVTVIE(BBVTVBaseIE): + _VALID_URL = _create_valid_url(BBVTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://bbv-tv.net/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://bbv-tv.net/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class BBVTVLiveIE(BBVTVBaseIE): + _VALID_URL = _create_valid_url(BBVTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://bbv-tv.net/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://bbv-tv.net/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if BBVTVIE.suitable(url) else super().suitable(url) + + +class BBVTVRecordingsIE(BBVTVBaseIE): + _VALID_URL = _create_valid_url(BBVTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://bbv-tv.net/recordings?recording=193615508', + 
'only_matching': True, + }, { + 'url': 'https://bbv-tv.net/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class VTXTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'vtxtv' + _HOST = 'vtxtv.ch' + _API_HOST = 'www.%s' % _HOST + + +class VTXTVIE(VTXTVBaseIE): + _VALID_URL = _create_valid_url(VTXTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://vtxtv.ch/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://vtxtv.ch/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class VTXTVLiveIE(VTXTVBaseIE): + _VALID_URL = _create_valid_url(VTXTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://vtxtv.ch/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://vtxtv.ch/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if VTXTVIE.suitable(url) else super().suitable(url) + + +class VTXTVRecordingsIE(VTXTVBaseIE): + _VALID_URL = _create_valid_url(VTXTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://vtxtv.ch/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://vtxtv.ch/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class GlattvisionTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'glattvisiontv' + _HOST = 'iptv.glattvision.ch' + + +class GlattvisionTVIE(GlattvisionTVBaseIE): + _VALID_URL = _create_valid_url(GlattvisionTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://iptv.glattvision.ch/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://iptv.glattvision.ch/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class GlattvisionTVLiveIE(GlattvisionTVBaseIE): + _VALID_URL = _create_valid_url(GlattvisionTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://iptv.glattvision.ch/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://iptv.glattvision.ch/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if GlattvisionTVIE.suitable(url) else super().suitable(url) + + +class GlattvisionTVRecordingsIE(GlattvisionTVBaseIE): + _VALID_URL = _create_valid_url(GlattvisionTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://iptv.glattvision.ch/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://iptv.glattvision.ch/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class SAKTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'saktv' + _HOST = 'saktv.ch' + _API_HOST = 'www.%s' % _HOST + + +class SAKTVIE(SAKTVBaseIE): + _VALID_URL = _create_valid_url(SAKTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://saktv.ch/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://saktv.ch/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class SAKTVLiveIE(SAKTVBaseIE): + _VALID_URL = _create_valid_url(SAKTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://saktv.ch/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 
'https://saktv.ch/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if SAKTVIE.suitable(url) else super().suitable(url) + + +class SAKTVRecordingsIE(SAKTVBaseIE): + _VALID_URL = _create_valid_url(SAKTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://saktv.ch/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://saktv.ch/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class EWETVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'ewetv' + _HOST = 'tvonline.ewe.de' + + +class EWETVIE(EWETVBaseIE): + _VALID_URL = _create_valid_url(EWETVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://tvonline.ewe.de/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://tvonline.ewe.de/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class EWETVLiveIE(EWETVBaseIE): + _VALID_URL = _create_valid_url(EWETVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://tvonline.ewe.de/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://tvonline.ewe.de/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if EWETVIE.suitable(url) else super().suitable(url) + + +class EWETVRecordingsIE(EWETVBaseIE): + _VALID_URL = _create_valid_url(EWETVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://tvonline.ewe.de/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://tvonline.ewe.de/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class QuantumTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'quantumtv' + _HOST = 'quantum-tv.com' + _API_HOST = 'www.%s' % _HOST + + +class QuantumTVIE(QuantumTVBaseIE): + _VALID_URL = _create_valid_url(QuantumTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://quantum-tv.com/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://quantum-tv.com/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class QuantumTVLiveIE(QuantumTVBaseIE): + _VALID_URL = _create_valid_url(QuantumTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://quantum-tv.com/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://quantum-tv.com/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if QuantumTVIE.suitable(url) else super().suitable(url) + + +class QuantumTVRecordingsIE(QuantumTVBaseIE): + _VALID_URL = _create_valid_url(QuantumTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://quantum-tv.com/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://quantum-tv.com/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class OsnatelTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'osnateltv' + _HOST = 'tvonline.osnatel.de' + + +class OsnatelTVIE(OsnatelTVBaseIE): + _VALID_URL = _create_valid_url(OsnatelTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://tvonline.osnatel.de/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 
'https://tvonline.osnatel.de/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class OsnatelTVLiveIE(OsnatelTVBaseIE): + _VALID_URL = _create_valid_url(OsnatelTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://tvonline.osnatel.de/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://tvonline.osnatel.de/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if OsnatelTVIE.suitable(url) else super().suitable(url) + + +class OsnatelTVRecordingsIE(OsnatelTVBaseIE): + _VALID_URL = _create_valid_url(OsnatelTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://tvonline.osnatel.de/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://tvonline.osnatel.de/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class EinsUndEinsTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = '1und1tv' + _HOST = '1und1.tv' + _API_HOST = 'www.%s' % _HOST + + +class EinsUndEinsTVIE(EinsUndEinsTVBaseIE): + _VALID_URL = _create_valid_url(EinsUndEinsTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://1und1.tv/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://1und1.tv/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class EinsUndEinsTVLiveIE(EinsUndEinsTVBaseIE): + _VALID_URL = _create_valid_url(EinsUndEinsTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://1und1.tv/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://1und1.tv/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if EinsUndEinsTVIE.suitable(url) else super().suitable(url) + + +class EinsUndEinsTVRecordingsIE(EinsUndEinsTVBaseIE): + _VALID_URL = _create_valid_url(EinsUndEinsTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://1und1.tv/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://1und1.tv/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] + + +class SaltTVBaseIE(ZattooPlatformBaseIE): + _NETRC_MACHINE = 'salttv' + _HOST = 'tv.salt.ch' + + +class SaltTVIE(SaltTVBaseIE): + _VALID_URL = _create_valid_url(SaltTVBaseIE._HOST, r'\d+', 'program', '(?:program|watch)/[^/]+') + _TYPE = 'video' + _TESTS = [{ + 'url': 'https://tv.salt.ch/program/daserste/210177916', + 'only_matching': True, + }, { + 'url': 'https://tv.salt.ch/guide/german?channel=srf1&program=169860555', + 'only_matching': True, + }] + + +class SaltTVLiveIE(SaltTVBaseIE): + _VALID_URL = _create_valid_url(SaltTVBaseIE._HOST, r'[^/?&#]+', 'channel', 'live') + _TYPE = 'live' + _TESTS = [{ + 'url': 'https://tv.salt.ch/channels/german?channel=srf_zwei', + 'only_matching': True, + }, { + 'url': 'https://tv.salt.ch/live/srf1', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if SaltTVIE.suitable(url) else super().suitable(url) + + +class SaltTVRecordingsIE(SaltTVBaseIE): + _VALID_URL = _create_valid_url(SaltTVBaseIE._HOST, r'\d+', 'recording') + _TYPE = 'record' + _TESTS = [{ + 'url': 'https://tv.salt.ch/recordings?recording=193615508', + 'only_matching': True, + }, { + 'url': 'https://tv.salt.ch/tc/ptc_recordings_all_recordings?recording=193615420', + 'only_matching': True, + }] 
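The zattoo.py hunk above is one shared implementation stamped out for a dozen white-label brands: each brand contributes only `_HOST`, `_NETRC_MACHINE` and three thin subclasses (program, live, recordings), `_VALID_URL` is generated per host by `_create_valid_url`, and each live extractor's `suitable()` defers to its program extractor so guide-style `?program=` URLs are not swallowed by the broader live pattern. A minimal sketch of that pattern follows; `create_valid_url` here is a simplified stand-in reconstructed to be consistent with the `{match_base})'''` fragment at the top of the hunk (not the library's exact helper), and `PROGRAM_RE`/`LIVE_RE` and the demo loop are illustrative names only:

```python
import re


def create_valid_url(host, match, qs, base_re=None):
    # Simplified stand-in: match either ?<qs>=<id> query URLs or, when
    # base_re is given, path URLs of the form <base_re>/<id>.
    match_base = fr'|{base_re}/(?P<vid1>{match})' if base_re else '(?P<vid1>)'
    return rf'''(?x)https?://(?:www\.)?{re.escape(host)}/(?:
        [^?#]+\?(?:[^#]+&)?{qs}=(?P<vid2>{match})
        {match_base}
    )'''


# One host, two patterns -- how each brand's program/live classes are built.
PROGRAM_RE = create_valid_url('zattoo.com', r'\d+', 'program', '(?:program|watch)/[^/]+')
LIVE_RE = create_valid_url('zattoo.com', r'[^/?&#]+', 'channel', 'live')

for url in ('https://zattoo.com/program/zdf/250170418',
            'https://zattoo.com/live/srf1',
            'https://zattoo.com/guide/german?channel=srf1&program=169860555'):
    # Mirrors the suitable() deferral: the program pattern wins ties, so the
    # guide URL (which both patterns accept) is routed to the program class.
    if re.match(PROGRAM_RE, url):
        print(url, '-> program')
    elif re.match(LIVE_RE, url):
        print(url, '-> live')
```

The guide URL in the demo is the motivating case for the `suitable()` override: it carries both a `channel=` and a `program=` query parameter, so without the deferral both extractors would claim it.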
diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/zdf.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/zdf.py new file mode 100644 index 0000000..c04d51b --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/zdf.py @@ -0,0 +1,442 @@ +import re + +from .common import InfoExtractor +from ..compat import compat_str +from ..utils import ( + NO_DEFAULT, + ExtractorError, + determine_ext, + extract_attributes, + float_or_none, + int_or_none, + join_nonempty, + merge_dicts, + parse_codecs, + qualities, + traverse_obj, + try_get, + unified_timestamp, + update_url_query, + url_or_none, + urljoin, +) + + +class ZDFBaseIE(InfoExtractor): + _GEO_COUNTRIES = ['DE'] + _QUALITIES = ('auto', 'low', 'med', 'high', 'veryhigh', 'hd', 'fhd', 'uhd') + + def _call_api(self, url, video_id, item, api_token=None, referrer=None): + headers = {} + if api_token: + headers['Api-Auth'] = 'Bearer %s' % api_token + if referrer: + headers['Referer'] = referrer + return self._download_json( + url, video_id, 'Downloading JSON %s' % item, headers=headers) + + @staticmethod + def _extract_subtitles(src): + subtitles = {} + for caption in try_get(src, lambda x: x['captions'], list) or []: + subtitle_url = url_or_none(caption.get('uri')) + if subtitle_url: + lang = caption.get('language', 'deu') + subtitles.setdefault(lang, []).append({ + 'url': subtitle_url, + }) + return subtitles + + def _extract_format(self, video_id, formats, format_urls, meta): + format_url = url_or_none(meta.get('url')) + if not format_url or format_url in format_urls: + return + format_urls.add(format_url) + + mime_type, ext = meta.get('mimeType'), determine_ext(format_url) + if mime_type == 'application/x-mpegURL' or ext == 'm3u8': + new_formats = self._extract_m3u8_formats( + format_url, video_id, 'mp4', m3u8_id='hls', + entry_protocol='m3u8_native', fatal=False) + elif mime_type == 'application/f4m+xml' or ext == 'f4m': + new_formats = self._extract_f4m_formats( + update_url_query(format_url, {'hdcore': '3.7.0'}), video_id, f4m_id='hds', fatal=False) + elif ext == 'mpd': + new_formats = self._extract_mpd_formats( + format_url, video_id, mpd_id='dash', fatal=False) + else: + f = parse_codecs(meta.get('mimeCodec')) + if not f and meta.get('type'): + data = meta['type'].split('_') + if try_get(data, lambda x: x[2]) == ext: + f = {'vcodec': data[0], 'acodec': data[1]} + f.update({ + 'url': format_url, + 'format_id': join_nonempty('http', meta.get('type'), meta.get('quality')), + 'tbr': int_or_none(self._search_regex(r'_(\d+)k_', format_url, 'tbr', default=None)) + }) + new_formats = [f] + formats.extend(merge_dicts(f, { + 'format_note': join_nonempty('quality', 'class', from_dict=meta, delim=', '), + 'language': meta.get('language'), + 'language_preference': 10 if meta.get('class') == 'main' else -10 if meta.get('class') == 'ad' else -1, + 'quality': qualities(self._QUALITIES)(meta.get('quality')), + }) for f in new_formats) + + def _extract_ptmd(self, ptmd_url, video_id, api_token, referrer): + ptmd = self._call_api( + ptmd_url, video_id, 'metadata', api_token, referrer) + + content_id = ptmd.get('basename') or ptmd_url.split('/')[-1] + + formats = [] + track_uris = set() + for p in ptmd['priorityList']: + formitaeten = p.get('formitaeten') + if not isinstance(formitaeten, list): + continue + for f in formitaeten: + f_qualities = f.get('qualities') + if not isinstance(f_qualities, list): + continue + for quality in f_qualities: + tracks = try_get(quality, lambda x: x['audio']['tracks'], list) + if not tracks: 
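+ # No format can be built without the nested audio tracks (the PTMD layout walked here is priorityList[].formitaeten[].qualities[].audio.tracks), so this quality is skipped.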
+ continue + for track in tracks: + self._extract_format( + content_id, formats, track_uris, { + 'url': track.get('uri'), + 'type': f.get('type'), + 'mimeType': f.get('mimeType'), + 'quality': quality.get('quality'), + 'class': track.get('class'), + 'language': track.get('language'), + }) + + duration = float_or_none(try_get( + ptmd, lambda x: x['attributes']['duration']['value']), scale=1000) + + return { + 'extractor_key': ZDFIE.ie_key(), + 'id': content_id, + 'duration': duration, + 'formats': formats, + 'subtitles': self._extract_subtitles(ptmd), + '_format_sort_fields': ('tbr', 'res', 'quality', 'language_preference'), + } + + def _extract_player(self, webpage, video_id, fatal=True): + return self._parse_json( + self._search_regex( + r'(?s)data-zdfplayer-jsb=(["\'])(?P<json>{.+?})\1', webpage, + 'player JSON', default='{}' if not fatal else NO_DEFAULT, + group='json'), + video_id) + + +class ZDFIE(ZDFBaseIE): + _VALID_URL = r'https?://www\.zdf\.de/(?:[^/]+/)*(?P<id>[^/?#&]+)\.html' + _TESTS = [{ + # Same as https://www.phoenix.de/sendungen/ereignisse/corona-nachgehakt/wohin-fuehrt-der-protest-in-der-pandemie-a-2050630.html + 'url': 'https://www.zdf.de/politik/phoenix-sendungen/wohin-fuehrt-der-protest-in-der-pandemie-100.html', + 'md5': '34ec321e7eb34231fd88616c65c92db0', + 'info_dict': { + 'id': '210222_phx_nachgehakt_corona_protest', + 'ext': 'mp4', + 'title': 'Wohin führt der Protest in der Pandemie?', + 'description': 'md5:7d643fe7f565e53a24aac036b2122fbd', + 'duration': 1691, + 'timestamp': 1613948400, + 'upload_date': '20210221', + }, + 'skip': 'No longer available: "Diese Seite wurde leider nicht gefunden"', + }, { + # Same as https://www.3sat.de/film/ab-18/10-wochen-sommer-108.html + 'url': 'https://www.zdf.de/dokumentation/ab-18/10-wochen-sommer-102.html', + 'md5': '0aff3e7bc72c8813f5e0fae333316a1d', + 'info_dict': { + 'id': '141007_ab18_10wochensommer_film', + 'ext': 'mp4', + 'title': 'Ab 18! 
- 10 Wochen Sommer', + 'description': 'md5:8253f41dc99ce2c3ff892dac2d65fe26', + 'duration': 2660, + 'timestamp': 1608604200, + 'upload_date': '20201222', + }, + 'skip': 'No longer available: "Diese Seite wurde leider nicht gefunden"', + }, { + 'url': 'https://www.zdf.de/nachrichten/heute-journal/heute-journal-vom-30-12-2021-100.html', + 'info_dict': { + 'id': '211230_sendung_hjo', + 'ext': 'mp4', + 'description': 'md5:47dff85977bde9fb8cba9e9c9b929839', + 'duration': 1890.0, + 'upload_date': '20211230', + 'chapters': list, + 'thumbnail': 'md5:e65f459f741be5455c952cd820eb188e', + 'title': 'heute journal vom 30.12.2021', + 'timestamp': 1640897100, + }, + 'skip': 'No longer available: "Diese Seite wurde leider nicht gefunden"', + }, { + 'url': 'https://www.zdf.de/dokumentation/terra-x/die-magie-der-farben-von-koenigspurpur-und-jeansblau-100.html', + 'info_dict': { + 'id': '151025_magie_farben2_tex', + 'ext': 'mp4', + 'title': 'Die Magie der Farben (2/2)', + 'description': 'md5:a89da10c928c6235401066b60a6d5c1a', + 'duration': 2615, + 'timestamp': 1465021200, + 'upload_date': '20160604', + 'thumbnail': 'https://www.zdf.de/assets/mauve-im-labor-100~768x432?cb=1464909117806', + }, + }, { + 'url': 'https://www.zdf.de/funk/druck-11790/funk-alles-ist-verzaubert-102.html', + 'md5': '57af4423db0455a3975d2dc4578536bc', + 'info_dict': { + 'ext': 'mp4', + 'id': 'video_funk_1770473', + 'duration': 1278, + 'description': 'Die Neue an der Schule verdreht Ismail den Kopf.', + 'title': 'Alles ist verzaubert', + 'timestamp': 1635520560, + 'upload_date': '20211029', + 'thumbnail': 'https://www.zdf.de/assets/teaser-funk-alles-ist-verzaubert-102~1920x1080?cb=1663848412907', + }, + }, { + # Same as https://www.phoenix.de/sendungen/dokumentationen/gesten-der-maechtigen-i-a-89468.html?ref=suche + 'url': 'https://www.zdf.de/politik/phoenix-sendungen/die-gesten-der-maechtigen-100.html', + 'only_matching': True, + }, { + # Same as https://www.3sat.de/film/spielfilm/der-hauptmann-100.html + 'url': 'https://www.zdf.de/filme/filme-sonstige/der-hauptmann-112.html', + 'only_matching': True, + }, { + # Same as https://www.3sat.de/wissen/nano/nano-21-mai-2019-102.html, equal media ids + 'url': 'https://www.zdf.de/wissen/nano/nano-21-mai-2019-102.html', + 'only_matching': True, + }, { + 'url': 'https://www.zdf.de/service-und-hilfe/die-neue-zdf-mediathek/zdfmediathek-trailer-100.html', + 'only_matching': True, + }, { + 'url': 'https://www.zdf.de/filme/taunuskrimi/die-lebenden-und-die-toten-1---ein-taunuskrimi-100.html', + 'only_matching': True, + }, { + 'url': 'https://www.zdf.de/dokumentation/planet-e/planet-e-uebersichtsseite-weitere-dokumentationen-von-planet-e-100.html', + 'only_matching': True, + }, { + 'url': 'https://www.zdf.de/arte/todliche-flucht/page-video-artede-toedliche-flucht-16-100.html', + 'info_dict': { + 'id': 'video_artede_083871-001-A', + 'ext': 'mp4', + 'title': 'Tödliche Flucht (1/6)', + 'description': 'md5:e34f96a9a5f8abd839ccfcebad3d5315', + 'duration': 3193.0, + 'timestamp': 1641355200, + 'upload_date': '20220105', + }, + 'skip': 'No longer available "Diese Seite wurde leider nicht gefunden"' + }, { + 'url': 'https://www.zdf.de/serien/soko-stuttgart/das-geld-anderer-leute-100.html', + 'info_dict': { + 'id': '191205_1800_sendung_sok8', + 'ext': 'mp4', + 'title': 'Das Geld anderer Leute', + 'description': 'md5:cb6f660850dc5eb7d1ab776ea094959d', + 'duration': 2581.0, + 'timestamp': 1675160100, + 'upload_date': '20230131', + 'thumbnail': 
'https://epg-image.zdf.de/fotobase-webdelivery/images/e2d7e55a-09f0-424e-ac73-6cac4dd65f35?layout=2400x1350', + }, + }, { + 'url': 'https://www.zdf.de/dokumentation/terra-x/unser-gruener-planet-wuesten-doku-100.html', + 'info_dict': { + 'id': '220605_dk_gruener_planet_wuesten_tex', + 'ext': 'mp4', + 'title': 'Unser grüner Planet - Wüsten', + 'description': 'md5:4fc647b6f9c3796eea66f4a0baea2862', + 'duration': 2613.0, + 'timestamp': 1654450200, + 'upload_date': '20220605', + 'format_note': 'uhd, main', + 'thumbnail': 'https://www.zdf.de/assets/saguaro-kakteen-102~3840x2160?cb=1655910690796', + }, + }] + + def _extract_entry(self, url, player, content, video_id): + title = content.get('title') or content['teaserHeadline'] + + t = content['mainVideoContent']['http://zdf.de/rels/target'] + ptmd_path = traverse_obj(t, ( + (('streams', 'default'), None), + ('http://zdf.de/rels/streams/ptmd', 'http://zdf.de/rels/streams/ptmd-template') + ), get_all=False) + if not ptmd_path: + raise ExtractorError('Could not extract ptmd_path') + + info = self._extract_ptmd( + urljoin(url, ptmd_path.replace('{playerId}', 'android_native_5')), video_id, player['apiToken'], url) + + thumbnails = [] + layouts = try_get( + content, lambda x: x['teaserImageRef']['layouts'], dict) + if layouts: + for layout_key, layout_url in layouts.items(): + layout_url = url_or_none(layout_url) + if not layout_url: + continue + thumbnail = { + 'url': layout_url, + 'format_id': layout_key, + } + mobj = re.search(r'(?P<width>\d+)x(?P<height>\d+)', layout_key) + if mobj: + thumbnail.update({ + 'width': int(mobj.group('width')), + 'height': int(mobj.group('height')), + }) + thumbnails.append(thumbnail) + + chapter_marks = t.get('streamAnchorTag') or [] + chapter_marks.append({'anchorOffset': int_or_none(t.get('duration'))}) + chapters = [{ + 'start_time': chap.get('anchorOffset'), + 'end_time': next_chap.get('anchorOffset'), + 'title': chap.get('anchorLabel') + } for chap, next_chap in zip(chapter_marks, chapter_marks[1:])] + + return merge_dicts(info, { + 'title': title, + 'description': content.get('leadParagraph') or content.get('teasertext'), + 'duration': int_or_none(t.get('duration')), + 'timestamp': unified_timestamp(content.get('editorialDate')), + 'thumbnails': thumbnails, + 'chapters': chapters or None + }) + + def _extract_regular(self, url, player, video_id): + content = self._call_api( + player['content'], video_id, 'content', player['apiToken'], url) + return self._extract_entry(player['content'], player, content, video_id) + + def _extract_mobile(self, video_id): + video = self._download_json( + 'https://zdf-cdn.live.cellular.de/mediathekV2/document/%s' % video_id, + video_id) + + formats = [] + formitaeten = try_get(video, lambda x: x['document']['formitaeten'], list) + document = formitaeten and video['document'] + if formitaeten: + title = document['titel'] + content_id = document['basename'] + + format_urls = set() + for f in formitaeten or []: + self._extract_format(content_id, formats, format_urls, f) + + thumbnails = [] + teaser_bild = document.get('teaserBild') + if isinstance(teaser_bild, dict): + for thumbnail_key, thumbnail in teaser_bild.items(): + thumbnail_url = try_get( + thumbnail, lambda x: x['url'], compat_str) + if thumbnail_url: + thumbnails.append({ + 'url': thumbnail_url, + 'id': thumbnail_key, + 'width': int_or_none(thumbnail.get('width')), + 'height': int_or_none(thumbnail.get('height')), + }) + + return { + 'id': content_id, + 'title': title, + 'description': document.get('beschreibung'), + 
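# Mobile-API fields: 'length' is the duration in seconds, and 'date' (with meta.editorialDate as fallback) supplies the timestamp. +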
'duration': int_or_none(document.get('length')), + 'timestamp': unified_timestamp(document.get('date')) or unified_timestamp( + try_get(video, lambda x: x['meta']['editorialDate'], compat_str)), + 'thumbnails': thumbnails, + 'subtitles': self._extract_subtitles(document), + 'formats': formats, + } + + def _real_extract(self, url): + video_id = self._match_id(url) + + webpage = self._download_webpage(url, video_id, fatal=False) + if webpage: + player = self._extract_player(webpage, url, fatal=False) + if player: + return self._extract_regular(url, player, video_id) + + return self._extract_mobile(video_id) + + +class ZDFChannelIE(ZDFBaseIE): + _VALID_URL = r'https?://www\.zdf\.de/(?:[^/]+/)*(?P<id>[^/?#&]+)' + _TESTS = [{ + 'url': 'https://www.zdf.de/sport/das-aktuelle-sportstudio', + 'info_dict': { + 'id': 'das-aktuelle-sportstudio', + 'title': 'das aktuelle sportstudio', + }, + 'playlist_mincount': 18, + }, { + 'url': 'https://www.zdf.de/dokumentation/planet-e', + 'info_dict': { + 'id': 'planet-e', + 'title': 'planet e.', + }, + 'playlist_mincount': 50, + }, { + 'url': 'https://www.zdf.de/gesellschaft/aktenzeichen-xy-ungeloest', + 'info_dict': { + 'id': 'aktenzeichen-xy-ungeloest', + 'title': 'Aktenzeichen XY... ungelöst', + 'entries': "lambda x: not any('xy580-fall1-kindermoerder-gesucht-100' in e['url'] for e in x)", + }, + 'playlist_mincount': 2, + }, { + 'url': 'https://www.zdf.de/filme/taunuskrimi/', + 'only_matching': True, + }] + + @classmethod + def suitable(cls, url): + return False if ZDFIE.suitable(url) else super(ZDFChannelIE, cls).suitable(url) + + def _og_search_title(self, webpage, fatal=False): + title = super(ZDFChannelIE, self)._og_search_title(webpage, fatal=fatal) + return re.split(r'\s+[-|]\s+ZDF(?:mediathek)?$', title or '')[0] or None + + def _real_extract(self, url): + channel_id = self._match_id(url) + + webpage = self._download_webpage(url, channel_id) + + matches = re.finditer( + r'''<div\b[^>]*?\sdata-plusbar-id\s*=\s*(["'])(?P<p_id>[\w-]+)\1[^>]*?\sdata-plusbar-url=\1(?P<url>%s)\1''' % ZDFIE._VALID_URL, + webpage) + + if self._downloader.params.get('noplaylist', False): + entry = next( + (self.url_result(m.group('url'), ie=ZDFIE.ie_key()) for m in matches), + None) + self.to_screen('Downloading just the main video because of --no-playlist') + if entry: + return entry + else: + self.to_screen('Downloading playlist %s - add --no-playlist to download just the main video' % (channel_id, )) + + def check_video(m): + v_ref = self._search_regex( + r'''(<a\b[^>]*?\shref\s*=[^>]+?\sdata-target-id\s*=\s*(["'])%s\2[^>]*>)''' % (m.group('p_id'), ), + webpage, 'check id', default='') + v_ref = extract_attributes(v_ref) + return v_ref.get('data-target-video-type') != 'novideo' + + return self.playlist_from_matches( + (m.group('url') for m in matches if check_video(m)), + channel_id, self._og_search_title(webpage, fatal=False)) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/zee5.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/zee5.py new file mode 100644 index 0000000..ca79cf0 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/zee5.py @@ -0,0 +1,270 @@ +import json +import time +import uuid + +from .common import InfoExtractor +from ..compat import compat_str +from ..utils import ( + ExtractorError, + int_or_none, + jwt_decode_hs256, + parse_age_limit, + str_or_none, + try_call, + try_get, + unified_strdate, + unified_timestamp, + url_or_none, +) + + +class Zee5IE(InfoExtractor): + _VALID_URL = r'''(?x) + (?: + zee5:| + 
https?://(?:www\.)?zee5\.com/(?:[^#?]+/)? + (?: + (?:tv-shows|kids|web-series|zee5originals)(?:/[^#/?]+){3} + |(?:movies|kids|videos|news|music-videos)/(?!kids-shows)[^#/?]+ + )/(?P<display_id>[^#/?]+)/ + ) + (?P<id>[^#/?]+)/?(?:$|[?#]) + ''' + _TESTS = [{ + 'url': 'https://www.zee5.com/movies/details/adavari-matalaku-ardhale-verule/0-0-movie_1143162669', + 'info_dict': { + 'id': '0-0-movie_1143162669', + 'ext': 'mp4', + 'display_id': 'adavari-matalaku-ardhale-verule', + 'title': 'Adavari Matalaku Ardhale Verule', + 'duration': 9360, + 'description': compat_str, + 'alt_title': 'Adavari Matalaku Ardhale Verule', + 'uploader': 'Zee Entertainment Enterprises Ltd', + 'release_date': '20070427', + 'upload_date': '20070427', + 'timestamp': 1177632000, + 'thumbnail': r're:^https?://.*\.jpg$', + 'episode_number': 0, + 'episode': 'Episode 0', + 'tags': list + }, + 'params': { + 'format': 'bv', + }, + }, { + 'url': 'https://www.zee5.com/kids/kids-shows/bandbudh-aur-budbak/0-6-1899/yoga-se-hoga-bandbudh-aur-budbak/0-1-239839', + 'info_dict': { + 'id': '0-1-239839', + 'ext': 'mp4', + 'display_id': 'yoga-se-hoga-bandbudh-aur-budbak', + 'title': 'Yoga Se Hoga-Bandbudh aur Budbak', + 'duration': 659, + 'description': compat_str, + 'alt_title': 'Yoga Se Hoga-Bandbudh aur Budbak', + 'uploader': 'Zee Entertainment Enterprises Ltd', + 'release_date': '20150101', + 'upload_date': '20150101', + 'timestamp': 1420070400, + 'thumbnail': r're:^https?://.*\.jpg$', + 'series': 'Bandbudh Aur Budbak', + 'season_number': 1, + 'episode_number': 1, + 'episode': 'Episode 1', + 'season': 'Season 1', + 'tags': list, + }, + 'params': { + 'format': 'bv', + }, + }, { + 'url': 'https://www.zee5.com/hi/tv-shows/details/kundali-bhagya/0-6-366/kundali-bhagya-march-08-2021/0-1-manual_7g9jv1os7730?country=IN', + 'only_matching': True + }, { + 'url': 'https://www.zee5.com/global/hi/tv-shows/details/kundali-bhagya/0-6-366/kundali-bhagya-march-08-2021/0-1-manual_7g9jv1os7730', + 'only_matching': True + }, { + 'url': 'https://www.zee5.com/web-series/details/mithya/0-6-4z587408/maine-dekhi-hai-uski-mrityu/0-1-6z587412', + 'only_matching': True + }, { + 'url': 'https://www.zee5.com/kids/kids-movies/maya-bommalu/0-0-movie_1040370005', + 'only_matching': True + }, { + 'url': 'https://www.zee5.com/news/details/jana-sena-chief-pawan-kalyan-shows-slippers-to-ysrcp-leaders/0-0-newsauto_6ettj4242oo0', + 'only_matching': True + }, { + 'url': 'https://www.zee5.com/music-videos/details/adhento-gaani-vunnapaatuga-jersey-nani-shraddha-srinath/0-0-56973', + 'only_matching': True + }] + _DEVICE_ID = str(uuid.uuid4()) + _USER_TOKEN = None + _LOGIN_HINT = 'Use "--username <mobile_number>" to login using otp or "--username token" and "--password <user_token>" to login using user token.' 
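+ # _perform_login below accepts two credential styles: a 10-digit mobile number (OTP flow via b2bapi.zee5.com) or the literal username 'token' with a JWT as the password; jwt_decode_hs256 then checks the token's 'exp' claim and reads 'current_country'.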
+ _NETRC_MACHINE = 'zee5' + _GEO_COUNTRIES = ['IN'] + _USER_COUNTRY = None + + def _perform_login(self, username, password): + if len(username) == 10 and username.isdigit() and self._USER_TOKEN is None: + self.report_login() + otp_request_json = self._download_json(f'https://b2bapi.zee5.com/device/sendotp_v1.php?phoneno=91{username}', + None, note='Sending OTP') + if otp_request_json['code'] == 0: + self.to_screen(otp_request_json['message']) + else: + raise ExtractorError(otp_request_json['message'], expected=True) + otp_code = self._get_tfa_info('OTP') + otp_verify_json = self._download_json(f'https://b2bapi.zee5.com/device/verifyotp_v1.php?phoneno=91{username}&otp={otp_code}&guest_token={self._DEVICE_ID}&platform=web', + None, note='Verifying OTP', fatal=False) + if not otp_verify_json: + raise ExtractorError('Unable to verify OTP.', expected=True) + self._USER_TOKEN = otp_verify_json.get('token') + if not self._USER_TOKEN: + raise ExtractorError(otp_request_json['message'], expected=True) + elif username.lower() == 'token' and try_call(lambda: jwt_decode_hs256(password)): + self._USER_TOKEN = password + else: + raise ExtractorError(self._LOGIN_HINT, expected=True) + + token = jwt_decode_hs256(self._USER_TOKEN) + if token.get('exp', 0) <= int(time.time()): + raise ExtractorError('User token has expired', expected=True) + self._USER_COUNTRY = token.get('current_country') + + def _real_extract(self, url): + video_id, display_id = self._match_valid_url(url).group('id', 'display_id') + access_token_request = self._download_json( + 'https://launchapi.zee5.com/launch?platform_name=web_app', + video_id, note='Downloading access token')['platform_token'] + data = { + 'x-access-token': access_token_request['token'] + } + if self._USER_TOKEN: + data['Authorization'] = 'bearer %s' % self._USER_TOKEN + else: + data['X-Z5-Guest-Token'] = self._DEVICE_ID + + json_data = self._download_json( + 'https://spapi.zee5.com/singlePlayback/getDetails/secure', video_id, query={ + 'content_id': video_id, + 'device_id': self._DEVICE_ID, + 'platform_name': 'desktop_web', + 'country': self._USER_COUNTRY or self.get_param('geo_bypass_country') or 'IN', + 'check_parental_control': False, + }, headers={'content-type': 'application/json'}, data=json.dumps(data).encode('utf-8')) + asset_data = json_data['assetDetails'] + show_data = json_data.get('showDetails', {}) + if 'premium' in asset_data['business_type']: + raise ExtractorError('Premium content is DRM protected.', expected=True) + if not asset_data.get('hls_url'): + self.raise_login_required(self._LOGIN_HINT, metadata_available=True, method=None) + formats, m3u8_subs = self._extract_m3u8_formats_and_subtitles(asset_data['hls_url'], video_id, 'mp4', fatal=False) + + subtitles = {} + for sub in asset_data.get('subtitle_url', []): + sub_url = sub.get('url') + if not sub_url: + continue + subtitles.setdefault(sub.get('language', 'en'), []).append({ + 'url': self._proto_relative_url(sub_url), + }) + subtitles = self._merge_subtitles(subtitles, m3u8_subs) + return { + 'id': video_id, + 'display_id': display_id, + 'title': asset_data['title'], + 'formats': formats, + 'subtitles': subtitles, + 'duration': int_or_none(asset_data.get('duration')), + 'description': str_or_none(asset_data.get('description')), + 'alt_title': str_or_none(asset_data.get('original_title')), + 'uploader': str_or_none(asset_data.get('content_owner')), + 'age_limit': parse_age_limit(asset_data.get('age_rating')), + 'release_date': unified_strdate(asset_data.get('release_date')), + 'timestamp': 
unified_timestamp(asset_data.get('release_date')), + 'thumbnail': url_or_none(asset_data.get('image_url')), + 'series': str_or_none(asset_data.get('tvshow_name')), + 'season': try_get(show_data, lambda x: x['seasons']['title'], str), + 'season_number': int_or_none(try_get(show_data, lambda x: x['seasons'][0]['orderid'])), + 'episode_number': int_or_none(try_get(asset_data, lambda x: x['orderid'])), + 'tags': try_get(asset_data, lambda x: x['tags'], list) + } + + +class Zee5SeriesIE(InfoExtractor): + IE_NAME = 'zee5:series' + _VALID_URL = r'''(?x) + (?: + zee5:series:| + https?://(?:www\.)?zee5\.com/(?:[^#?]+/)? + (?:tv-shows|web-series|kids|zee5originals)/(?!kids-movies)(?:[^#/?]+/){2} + ) + (?P<id>[^#/?]+)(?:/episodes)?/?(?:$|[?#]) + ''' + _TESTS = [{ + 'url': 'https://www.zee5.com/kids/kids-shows/bandbudh-aur-budbak/0-6-1899', + 'playlist_mincount': 156, + 'info_dict': { + 'id': '0-6-1899', + }, + }, { + 'url': 'https://www.zee5.com/tv-shows/details/bhabi-ji-ghar-par-hai/0-6-199', + 'playlist_mincount': 1500, + 'info_dict': { + 'id': '0-6-199', + }, + }, { + 'url': 'https://www.zee5.com/tv-shows/details/agent-raghav-crime-branch/0-6-965', + 'playlist_mincount': 24, + 'info_dict': { + 'id': '0-6-965', + }, + }, { + 'url': 'https://www.zee5.com/ta/tv-shows/details/nagabhairavi/0-6-3201', + 'playlist_mincount': 3, + 'info_dict': { + 'id': '0-6-3201', + }, + }, { + 'url': 'https://www.zee5.com/global/hi/tv-shows/details/khwaabon-ki-zamin-par/0-6-270', + 'playlist_mincount': 150, + 'info_dict': { + 'id': '0-6-270', + }, + }, { + 'url': 'https://www.zee5.com/tv-shows/details/chala-hawa-yeu-dya-ladies-zindabaad/0-6-2943/episodes', + 'only_matching': True, + }, { + 'url': 'https://www.zee5.com/web-series/details/mithya/0-6-4z587408', + 'only_matching': True, + }] + + def _entries(self, show_id): + access_token_request = self._download_json( + 'https://launchapi.zee5.com/launch?platform_name=web_app', + show_id, note='Downloading access token')['platform_token'] + headers = { + 'X-Access-Token': access_token_request['token'], + 'Referer': 'https://www.zee5.com/', + } + show_url = f'https://gwapi.zee5.com/content/tvshow/{show_id}?translation=en&country=IN' + + page_num = 0 + show_json = self._download_json(show_url, video_id=show_id, headers=headers) + for season in show_json.get('seasons') or []: + season_id = try_get(season, lambda x: x['id'], compat_str) + next_url = f'https://gwapi.zee5.com/content/tvshow/?season_id={season_id}&type=episode&translation=en&country=IN&on_air=false&asset_subtype=tvshow&page=1&limit=100' + while next_url: + page_num += 1 + episodes_json = self._download_json( + next_url, video_id=show_id, headers=headers, + note='Downloading JSON metadata page %d' % page_num) + for episode in try_get(episodes_json, lambda x: x['episode'], list) or []: + video_id = episode.get('id') + yield self.url_result( + 'zee5:%s' % video_id, + ie=Zee5IE.ie_key(), video_id=video_id) + next_url = url_or_none(episodes_json.get('next_episode_api')) + + def _real_extract(self, url): + show_id = self._match_id(url) + return self.playlist_result(self._entries(show_id), playlist_id=show_id) diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/zeenews.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/zeenews.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/zeenews.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/zeenews.py diff --git a/lib/python3.11/site-packages/yt_dlp/extractor/zhihu.py 
b/python/lib/python3.10/site-packages/yt_dlp/extractor/zhihu.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/extractor/zhihu.py rename to python/lib/python3.10/site-packages/yt_dlp/extractor/zhihu.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/zingmp3.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/zingmp3.py new file mode 100644 index 0000000..007658c --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/zingmp3.py @@ -0,0 +1,426 @@ +import hashlib +import hmac +import itertools +import json +import urllib.parse + +from .common import InfoExtractor +from ..utils import int_or_none, traverse_obj, try_call, urljoin + + +class ZingMp3BaseIE(InfoExtractor): + _VALID_URL_TMPL = r'https?://(?:mp3\.zing|zingmp3)\.vn/(?P<type>(?:%s))/[^/?#]+/(?P<id>\w+)(?:\.html|\?)' + _GEO_COUNTRIES = ['VN'] + _DOMAIN = 'https://zingmp3.vn' + _PER_PAGE = 50 + _API_SLUGS = { + # Audio/video + 'bai-hat': '/api/v2/page/get/song', + 'embed': '/api/v2/page/get/song', + 'video-clip': '/api/v2/page/get/video', + 'lyric': '/api/v2/lyric/get/lyric', + 'song-streaming': '/api/v2/song/get/streaming', + # Playlist + 'playlist': '/api/v2/page/get/playlist', + 'album': '/api/v2/page/get/playlist', + # Chart + 'zing-chart': '/api/v2/page/get/chart-home', + 'zing-chart-tuan': '/api/v2/page/get/week-chart', + 'moi-phat-hanh': '/api/v2/page/get/newrelease-chart', + 'the-loai-video': '/api/v2/video/get/list', + # User + 'info-artist': '/api/v2/page/get/artist', + 'user-list-song': '/api/v2/song/get/list', + 'user-list-video': '/api/v2/video/get/list', + 'hub': '/api/v2/page/get/hub-detail', + } + + def _api_url(self, url_type, params): + api_slug = self._API_SLUGS[url_type] + params.update({'ctime': '1'}) + sha256 = hashlib.sha256( + ''.join(f'{k}={v}' for k, v in sorted(params.items())).encode()).hexdigest() + data = { + **params, + 'apiKey': 'X5BM3w8N7MKozC0B85o4KMlzLZKhV00y', + 'sig': hmac.new(b'acOrvUS15XRW2o9JksiK1KgQ6Vbds8ZW', + f'{api_slug}{sha256}'.encode(), hashlib.sha512).hexdigest(), + } + return f'{self._DOMAIN}{api_slug}?{urllib.parse.urlencode(data)}' + + def _call_api(self, url_type, params, display_id=None, **kwargs): + resp = self._download_json( + self._api_url(url_type, params), display_id or params.get('id'), + note=f'Downloading {url_type} JSON metadata', **kwargs) + return (resp or {}).get('data') or {} + + def _real_initialize(self): + if not self._cookies_passed: + self._request_webpage( + self._api_url('bai-hat', {'id': ''}), None, note='Updating cookies') + + def _parse_items(self, items): + for url in traverse_obj(items, (..., 'link')) or []: + yield self.url_result(urljoin(self._DOMAIN, url)) + + def _fetch_page(self, id_, url_type, page): + raise NotImplementedError('This method must be implemented by subclasses') + + def _paged_list(self, _id, url_type): + count = 0 + for page in itertools.count(1): + data = self._fetch_page(_id, url_type, page) + entries = list(self._parse_items(data.get('items'))) + count += len(entries) + yield from entries + if not data.get('hasMore') or try_call(lambda: count > data['total']): + break + + +class ZingMp3IE(ZingMp3BaseIE): + _VALID_URL = ZingMp3BaseIE._VALID_URL_TMPL % 'bai-hat|video-clip|embed' + IE_NAME = 'zingmp3' + IE_DESC = 'zingmp3.vn' + _TESTS = [{ + 'url': 'https://mp3.zing.vn/bai-hat/Xa-Mai-Xa-Bao-Thy/ZWZB9WAB.html', + 'md5': 'ead7ae13693b3205cbc89536a077daed', + 'info_dict': { + 'id': 'ZWZB9WAB', + 'title': 'Xa Mãi Xa', + 'ext': 'mp3', + 'thumbnail': r're:^https?://.+\.jpg',
+ 'subtitles': { + 'origin': [{ + 'ext': 'lrc', + }] + }, + 'duration': 255, + 'track': 'Xa Mãi Xa', + 'artist': 'Bảo Thy', + 'album': 'Special Album', + 'album_artist': 'Bảo Thy', + }, + }, { + 'url': 'https://zingmp3.vn/video-clip/Suong-Hoa-Dua-Loi-K-ICM-RYO/ZO8ZF7C7.html', + 'md5': '3c2081e79471a2f4a3edd90b70b185ea', + 'info_dict': { + 'id': 'ZO8ZF7C7', + 'title': 'Sương Hoa Đưa Lối', + 'ext': 'mp4', + 'thumbnail': r're:^https?://.+\.jpg', + 'duration': 207, + 'track': 'Sương Hoa Đưa Lối', + 'artist': 'K-ICM, RYO', + 'album': 'Sương Hoa Đưa Lối (Single)', + 'album_artist': 'K-ICM, RYO', + }, + }, { + 'url': 'https://zingmp3.vn/bai-hat/Nguoi-Yeu-Toi-Lanh-Lung-Sat-Da-Mr-Siro/ZZ6IW7OU.html', + 'md5': '3e9f7a9bd0d965573dbff8d7c68b629d', + 'info_dict': { + 'id': 'ZZ6IW7OU', + 'title': 'Người Yêu Tôi Lạnh Lùng Sắt Đá', + 'ext': 'mp3', + 'thumbnail': r're:^https?://.+\.jpg', + 'duration': 303, + 'track': 'Người Yêu Tôi Lạnh Lùng Sắt Đá', + 'artist': 'Mr. Siro', + 'album': 'Người Yêu Tôi Lạnh Lùng Sắt Đá (Single)', + 'album_artist': 'Mr. Siro', + }, + }, { + 'url': 'https://zingmp3.vn/embed/song/ZWZEI76B?start=false', + 'only_matching': True, + }, { + 'url': 'https://zingmp3.vn/bai-hat/Xa-Mai-Xa-Bao-Thy/ZWZB9WAB.html', + 'only_matching': True, + }] + + def _real_extract(self, url): + song_id, url_type = self._match_valid_url(url).group('id', 'type') + item = self._call_api(url_type, {'id': song_id}) + + item_id = item.get('encodeId') or song_id + if url_type == 'video-clip': + source = item.get('streaming') + source['mp4'] = self._download_json( + 'http://api.mp3.zing.vn/api/mobile/video/getvideoinfo', item_id, + query={'requestdata': json.dumps({'id': item_id})}, + note='Downloading mp4 JSON metadata').get('source') + else: + source = self._call_api('song-streaming', {'id': item_id}) + + formats = [] + for k, v in (source or {}).items(): + if not v or v == 'VIP': + continue + if k not in ('mp4', 'hls'): + formats.append({ + 'ext': 'mp3', + 'format_id': k, + 'tbr': int_or_none(k), + 'url': self._proto_relative_url(v), + 'vcodec': 'none', + }) + continue + for res, video_url in v.items(): + if not video_url: + continue + if k == 'hls': + formats.extend(self._extract_m3u8_formats(video_url, item_id, 'mp4', m3u8_id=k, fatal=False)) + continue + formats.append({ + 'format_id': f'mp4-{res}', + 'url': video_url, + 'height': int_or_none(res), + }) + + if not formats: + if item.get('msg') == 'Sorry, this content is not available in your country.': + self.raise_geo_restricted(countries=self._GEO_COUNTRIES, metadata_available=True) + else: + self.raise_no_formats('The song is only for VIP accounts.') + + lyric = item.get('lyric') or self._call_api('lyric', {'id': item_id}, fatal=False).get('file') + + return { + 'id': item_id, + 'title': traverse_obj(item, 'title', 'alias'), + 'thumbnail': traverse_obj(item, 'thumbnail', 'thumbnailM'), + 'duration': int_or_none(item.get('duration')), + 'track': traverse_obj(item, 'title', 'alias'), + 'artist': traverse_obj(item, 'artistsNames', 'artists_names'), + 'album': traverse_obj(item, ('album', ('name', 'title')), get_all=False), + 'album_artist': traverse_obj(item, ('album', ('artistsNames', 'artists_names')), get_all=False), + 'formats': formats, + 'subtitles': {'origin': [{'url': lyric}]} if lyric else None, + } + + +class ZingMp3AlbumIE(ZingMp3BaseIE): + _VALID_URL = ZingMp3BaseIE._VALID_URL_TMPL % 'album|playlist' + _TESTS = [{ + 'url': 'http://mp3.zing.vn/album/Lau-Dai-Tinh-Ai-Bang-Kieu-Minh-Tuyet/ZWZBWDAF.html', + 'info_dict': { + 'id': 'ZWZBWDAF',
'title': 'Lâu Đài Tình Ái', + }, + 'playlist_mincount': 9, + }, { + 'url': 'https://zingmp3.vn/album/Nhung-Bai-Hat-Hay-Nhat-Cua-Mr-Siro-Mr-Siro/ZWZAEZZD.html', + 'info_dict': { + 'id': 'ZWZAEZZD', + 'title': 'Những Bài Hát Hay Nhất Của Mr. Siro', + }, + 'playlist_mincount': 20, + }, { + 'url': 'http://mp3.zing.vn/playlist/Duong-Hong-Loan-apollobee/IWCAACCB.html', + 'only_matching': True, + }, { + 'url': 'https://zingmp3.vn/album/Lau-Dai-Tinh-Ai-Bang-Kieu-Minh-Tuyet/ZWZBWDAF.html', + 'only_matching': True, + }] + IE_NAME = 'zingmp3:album' + + def _real_extract(self, url): + song_id, url_type = self._match_valid_url(url).group('id', 'type') + data = self._call_api(url_type, {'id': song_id}) + return self.playlist_result( + self._parse_items(traverse_obj(data, ('song', 'items'))), + traverse_obj(data, 'id', 'encodeId'), traverse_obj(data, 'name', 'title')) + + +class ZingMp3ChartHomeIE(ZingMp3BaseIE): + _VALID_URL = r'https?://(?:mp3\.zing|zingmp3)\.vn/(?P<id>(?:zing-chart|moi-phat-hanh))/?(?:[#?]|$)' + _TESTS = [{ + 'url': 'https://zingmp3.vn/zing-chart', + 'info_dict': { + 'id': 'zing-chart', + }, + 'playlist_mincount': 100, + }, { + 'url': 'https://zingmp3.vn/moi-phat-hanh', + 'info_dict': { + 'id': 'moi-phat-hanh', + }, + 'playlist_mincount': 100, + }] + IE_NAME = 'zingmp3:chart-home' + + def _real_extract(self, url): + url_type = self._match_id(url) + data = self._call_api(url_type, {'id': url_type}) + items = traverse_obj(data, ('RTChart', 'items') if url_type == 'zing-chart' else 'items') + return self.playlist_result(self._parse_items(items), url_type) + + +class ZingMp3WeekChartIE(ZingMp3BaseIE): + _VALID_URL = ZingMp3BaseIE._VALID_URL_TMPL % 'zing-chart-tuan' + IE_NAME = 'zingmp3:week-chart' + _TESTS = [{ + 'url': 'https://zingmp3.vn/zing-chart-tuan/Bai-hat-Viet-Nam/IWZ9Z08I.html', + 'info_dict': { + 'id': 'IWZ9Z08I', + 'title': 'zing-chart-vn', + }, + 'playlist_mincount': 10, + }, { + 'url': 'https://zingmp3.vn/zing-chart-tuan/Bai-hat-US-UK/IWZ9Z0BW.html', + 'info_dict': { + 'id': 'IWZ9Z0BW', + 'title': 'zing-chart-us', + }, + 'playlist_mincount': 10, + }, { + 'url': 'https://zingmp3.vn/zing-chart-tuan/Bai-hat-KPop/IWZ9Z0BO.html', + 'info_dict': { + 'id': 'IWZ9Z0BO', + 'title': 'zing-chart-korea', + }, + 'playlist_mincount': 10, + }] + + def _real_extract(self, url): + song_id, url_type = self._match_valid_url(url).group('id', 'type') + data = self._call_api(url_type, {'id': song_id}) + return self.playlist_result( + self._parse_items(data['items']), song_id, f'zing-chart-{data.get("country", "")}') + + +class ZingMp3ChartMusicVideoIE(ZingMp3BaseIE): + _VALID_URL = r'https?://(?:mp3\.zing|zingmp3)\.vn/(?P<type>the-loai-video)/(?P<regions>[^/]+)/(?P<id>[^\.]+)' + IE_NAME = 'zingmp3:chart-music-video' + _TESTS = [{ + 'url': 'https://zingmp3.vn/the-loai-video/Viet-Nam/IWZ9Z08I.html', + 'info_dict': { + 'id': 'IWZ9Z08I', + 'title': 'the-loai-video_Viet-Nam', + }, + 'playlist_mincount': 400, + }, { + 'url': 'https://zingmp3.vn/the-loai-video/Au-My/IWZ9Z08O.html', + 'info_dict': { + 'id': 'IWZ9Z08O', + 'title': 'the-loai-video_Au-My', + }, + 'playlist_mincount': 40, + }, { + 'url': 'https://zingmp3.vn/the-loai-video/Han-Quoc/IWZ9Z08W.html', + 'info_dict': { + 'id': 'IWZ9Z08W', + 'title': 'the-loai-video_Han-Quoc', + }, + 'playlist_mincount': 30, + }, { + 'url': 'https://zingmp3.vn/the-loai-video/Khong-Loi/IWZ9Z086.html', + 'info_dict': { + 'id': 'IWZ9Z086', + 'title': 'the-loai-video_Khong-Loi', + }, + 'playlist_mincount': 1, + }] + + def _fetch_page(self, song_id, url_type, page):
return self._call_api(url_type, { + 'id': song_id, + 'type': 'genre', + 'page': page, + 'count': self._PER_PAGE + }) + + def _real_extract(self, url): + song_id, regions, url_type = self._match_valid_url(url).group('id', 'regions', 'type') + return self.playlist_result(self._paged_list(song_id, url_type), song_id, f'{url_type}_{regions}') + + +class ZingMp3UserIE(ZingMp3BaseIE): + _VALID_URL = r'https?://(?:mp3\.zing|zingmp3)\.vn/(?P<user>[^/]+)/(?P<type>bai-hat|single|album|video)/?(?:[?#]|$)' + IE_NAME = 'zingmp3:user' + _TESTS = [{ + 'url': 'https://zingmp3.vn/Mr-Siro/bai-hat', + 'info_dict': { + 'id': 'IWZ98609', + 'title': 'Mr. Siro - bai-hat', + 'description': 'md5:5bdcf45e955dc1b8d7f518f322ffef36', + }, + 'playlist_mincount': 91, + }, { + 'url': 'https://zingmp3.vn/Mr-Siro/album', + 'info_dict': { + 'id': 'IWZ98609', + 'title': 'Mr. Siro - album', + 'description': 'md5:5bdcf45e955dc1b8d7f518f322ffef36', + }, + 'playlist_mincount': 3, + }, { + 'url': 'https://zingmp3.vn/Mr-Siro/single', + 'info_dict': { + 'id': 'IWZ98609', + 'title': 'Mr. Siro - single', + 'description': 'md5:5bdcf45e955dc1b8d7f518f322ffef36', + }, + 'playlist_mincount': 20, + }, { + 'url': 'https://zingmp3.vn/Mr-Siro/video', + 'info_dict': { + 'id': 'IWZ98609', + 'title': 'Mr. Siro - video', + 'description': 'md5:5bdcf45e955dc1b8d7f518f322ffef36', + }, + 'playlist_mincount': 15, + }] + + def _fetch_page(self, user_id, url_type, page): + url_type = 'user-list-song' if url_type == 'bai-hat' else 'user-list-video' + return self._call_api(url_type, { + 'id': user_id, + 'type': 'artist', + 'page': page, + 'count': self._PER_PAGE + }) + + def _real_extract(self, url): + user_alias, url_type = self._match_valid_url(url).group('user', 'type') + if not url_type: + url_type = 'bai-hat' + + user_info = self._call_api('info-artist', {}, user_alias, query={'alias': user_alias}) + if url_type in ('bai-hat', 'video'): + entries = self._paged_list(user_info['id'], url_type) + else: + entries = self._parse_items(traverse_obj(user_info, ( + 'sections', + lambda _, v: v['sectionId'] == 'aAlbum' if url_type == 'album' else v['sectionId'] == 'aSingle', + 'items', ...))) + return self.playlist_result( + entries, user_info['id'], f'{user_info.get("name")} - {url_type}', user_info.get('biography')) + + +class ZingMp3HubIE(ZingMp3BaseIE): + IE_NAME = 'zingmp3:hub' + _VALID_URL = r'https?://(?:mp3\.zing|zingmp3)\.vn/(?P<type>hub)/(?P<regions>[^/]+)/(?P<id>[^\.]+)' + _TESTS = [{ + 'url': 'https://zingmp3.vn/hub/Nhac-Moi/IWZ9Z0CA.html', + 'info_dict': { + 'id': 'IWZ9Z0CA', + 'title': 'Nhạc Mới', + 'description': 'md5:1cc31b68a6f746427b07b2756c22a558', + }, + 'playlist_mincount': 20, + }, { + 'url': 'https://zingmp3.vn/hub/Nhac-Viet/IWZ9Z087.html', + 'info_dict': { + 'id': 'IWZ9Z087', + 'title': 'Nhạc Việt', + 'description': 'md5:acc976c8bdde64d5c6ee4a92c39f7a77', + }, + 'playlist_mincount': 30, + }] + + def _real_extract(self, url): + song_id, regions, url_type = self._match_valid_url(url).group('id', 'regions', 'type') + hub_detail = self._call_api(url_type, {'id': song_id}) + entries = self._parse_items(traverse_obj(hub_detail, ( + 'sections', lambda _, v: v['sectionId'] == 'hub', 'items', ...))) + return self.playlist_result( + entries, song_id, hub_detail.get('title'), hub_detail.get('description')) diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/zoom.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/zoom.py new file mode 100644 index 0000000..329ba14 --- /dev/null +++
b/python/lib/python3.10/site-packages/yt_dlp/extractor/zoom.py @@ -0,0 +1,136 @@ +from .common import InfoExtractor +from ..utils import ( + ExtractorError, + int_or_none, + str_or_none, + js_to_json, + parse_filesize, + traverse_obj, + urlencode_postdata, + urljoin, +) + + +class ZoomIE(InfoExtractor): + IE_NAME = 'zoom' + _VALID_URL = r'(?P<base_url>https?://(?:[^.]+\.)?zoom\.us/)rec(?:ording)?/(?P<type>play|share)/(?P<id>[\w.-]+)' + _TESTS = [{ + 'url': 'https://economist.zoom.us/rec/play/dUk_CNBETmZ5VA2BwEl-jjakPpJ3M1pcfVYAPRsoIbEByGsLjUZtaa4yCATQuOL3der8BlTwxQePl_j0.EImBkXzTIaPvdZO5', + 'md5': 'ab445e8c911fddc4f9adc842c2c5d434', + 'info_dict': { + 'id': 'dUk_CNBETmZ5VA2BwEl-jjakPpJ3M1pcfVYAPRsoIbEByGsLjUZtaa4yCATQuOL3der8BlTwxQePl_j0.EImBkXzTIaPvdZO5', + 'ext': 'mp4', + 'title': 'China\'s "two sessions" and the new five-year plan', + }, + 'skip': 'Recording requires email authentication to access', + }, { + # play URL + 'url': 'https://ffgolf.zoom.us/rec/play/qhEhXbrxq1Zoucx8CMtHzq1Z_2YZRPVCqWK_K-2FkEGRsSLDeOX8Tu4P6jtjZcRry8QhIbvKZdtr4UNo.QcPn2debFskI9whJ', + 'md5': '2c4b1c4e5213ebf9db293e88d9385bee', + 'info_dict': { + 'id': 'qhEhXbrxq1Zoucx8CMtHzq1Z_2YZRPVCqWK_K-2FkEGRsSLDeOX8Tu4P6jtjZcRry8QhIbvKZdtr4UNo.QcPn2debFskI9whJ', + 'ext': 'mp4', + 'title': 'Prépa AF2023 - Séance 5 du 11 avril - R20/VM/GO', + }, + }, { + # share URL + 'url': 'https://us02web.zoom.us/rec/share/hkUk5Zxcga0nkyNGhVCRfzkA2gX_mzgS3LpTxEEWJz9Y_QpIQ4mZFOUx7KZRZDQA.9LGQBdqmDAYgiZ_8', + 'md5': '90fdc7cfcaee5d52d1c817fc03c43c9b', + 'info_dict': { + 'id': 'hkUk5Zxcga0nkyNGhVCRfzkA2gX_mzgS3LpTxEEWJz9Y_QpIQ4mZFOUx7KZRZDQA.9LGQBdqmDAYgiZ_8', + 'ext': 'mp4', + 'title': 'Timea Andrea Lelik\'s Personal Meeting Room', + }, + }] + + def _get_page_data(self, webpage, video_id): + return self._search_json( + r'window\.__data__\s*=', webpage, 'data', video_id, transform_source=js_to_json) + + def _get_real_webpage(self, url, base_url, video_id, url_type): + webpage = self._download_webpage(url, video_id, note=f'Downloading {url_type} webpage') + try: + form = self._form_hidden_inputs('password_form', webpage) + except ExtractorError: + return webpage + + password = self.get_param('videopassword') + if not password: + raise ExtractorError( + 'This video is protected by a passcode, use the --video-password option', expected=True) + is_meeting = form.get('useWhichPasswd') == 'meeting' + validation = self._download_json( + base_url + 'rec/validate%s_passwd' % ('_meet' if is_meeting else ''), + video_id, 'Validating passcode', 'Wrong passcode', data=urlencode_postdata({ + 'id': form[('meet' if is_meeting else 'file') + 'Id'], + 'passwd': password, + 'action': form.get('action'), + })) + if not validation.get('status'): + raise ExtractorError(validation['errorMessage'], expected=True) + return self._download_webpage(url, video_id, note=f'Re-downloading {url_type} webpage') + + def _real_extract(self, url): + base_url, url_type, video_id = self._match_valid_url(url).group('base_url', 'type', 'id') + + if url_type == 'share': + webpage = self._get_real_webpage(url, base_url, video_id, 'share') + meeting_id = self._get_page_data(webpage, video_id)['meetingId'] + redirect_path = self._download_json( + f'{base_url}nws/recording/1.0/play/share-info/{meeting_id}', + video_id, note='Downloading share info JSON')['result']['redirectUrl'] + url = urljoin(base_url, redirect_path) + + webpage = self._get_real_webpage(url, base_url, video_id, 'play') + file_id = self._get_page_data(webpage, video_id)['fileId'] + if not file_id: + # When things go 
wrong, file_id can be empty string + raise ExtractorError('Unable to extract file ID') + + data = self._download_json( + f'{base_url}nws/recording/1.0/play/info/{file_id}', video_id, + note='Downloading play info JSON')['result'] + + subtitles = {} + for _type in ('transcript', 'cc', 'chapter'): + if data.get('%sUrl' % _type): + subtitles[_type] = [{ + 'url': urljoin(base_url, data['%sUrl' % _type]), + 'ext': 'vtt', + }] + + formats = [] + + if data.get('viewMp4Url'): + formats.append({ + 'format_note': 'Camera stream', + 'url': str_or_none(data.get('viewMp4Url')), + 'width': int_or_none(traverse_obj(data, ('viewResolvtions', 0))), + 'height': int_or_none(traverse_obj(data, ('viewResolvtions', 1))), + 'format_id': str_or_none(traverse_obj(data, ('recording', 'id'))), + 'ext': 'mp4', + 'filesize_approx': parse_filesize(str_or_none(traverse_obj(data, ('recording', 'fileSizeInMB')))), + 'preference': 0 + }) + + if data.get('shareMp4Url'): + formats.append({ + 'format_note': 'Screen share stream', + 'url': str_or_none(data.get('shareMp4Url')), + 'width': int_or_none(traverse_obj(data, ('shareResolvtions', 0))), + 'height': int_or_none(traverse_obj(data, ('shareResolvtions', 1))), + 'format_id': str_or_none(traverse_obj(data, ('shareVideo', 'id'))), + 'ext': 'mp4', + 'preference': -1 + }) + + return { + 'id': video_id, + 'title': str_or_none(traverse_obj(data, ('meet', 'topic'))), + 'duration': int_or_none(data.get('duration')), + 'subtitles': subtitles, + 'formats': formats, + 'http_headers': { + 'Referer': base_url, + }, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/extractor/zype.py b/python/lib/python3.10/site-packages/yt_dlp/extractor/zype.py new file mode 100644 index 0000000..2f3b4c4 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/extractor/zype.py @@ -0,0 +1,135 @@ +import re + +from .common import InfoExtractor +from ..networking.exceptions import HTTPError +from ..utils import ( + dict_get, + ExtractorError, + int_or_none, + js_to_json, + parse_iso8601, +) + + +class ZypeIE(InfoExtractor): + _ID_RE = r'[\da-fA-F]+' + _COMMON_RE = r'//player\.zype\.com/embed/%s\.(?:js|json|html)\?.*?(?:access_token|(?:ap[ip]|player)_key)=' + _VALID_URL = r'https?:%s[^&]+' % (_COMMON_RE % ('(?P<id>%s)' % _ID_RE)) + _EMBED_REGEX = [fr'<script[^>]+\bsrc=(["\'])(?P<url>(?:https?:)?{_COMMON_RE % _ID_RE}.+?)\1'] + _TEST = { + 'url': 'https://player.zype.com/embed/5b400b834b32992a310622b9.js?api_key=jZ9GUhRmxcPvX7M3SlfejB6Hle9jyHTdk2jVxG7wOHPLODgncEKVdPYBhuz9iWXQ&autoplay=false&controls=true&da=false', + 'md5': 'eaee31d474c76a955bdaba02a505c595', + 'info_dict': { + 'id': '5b400b834b32992a310622b9', + 'ext': 'mp4', + 'title': 'Smoky Barbecue Favorites', + 'thumbnail': r're:^https?://.*\.jpe?g', + 'description': 'md5:5ff01e76316bd8d46508af26dc86023b', + 'timestamp': 1504915200, + 'upload_date': '20170909', + }, + } + + def _real_extract(self, url): + video_id = self._match_id(url) + + try: + response = self._download_json(re.sub( + r'\.(?:js|html)\?', '.json?', url), video_id)['response'] + except ExtractorError as e: + if isinstance(e.cause, HTTPError) and e.cause.status in (400, 401, 403): + raise ExtractorError(self._parse_json( + e.cause.response.read().decode(), video_id)['message'], expected=True) + raise + + body = response['body'] + video = response['video'] + title = video['title'] + + subtitles = {} + + if isinstance(body, dict): + formats = [] + for output in body.get('outputs', []): + output_url = output.get('url') + if not output_url: + continue + name = 
output.get('name') + if name == 'm3u8': + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + output_url, video_id, 'mp4', + 'm3u8_native', m3u8_id='hls', fatal=False) + else: + f = { + 'format_id': name, + 'tbr': int_or_none(output.get('bitrate')), + 'url': output_url, + } + if name in ('m4a', 'mp3'): + f['vcodec'] = 'none' + else: + f.update({ + 'height': int_or_none(output.get('height')), + 'width': int_or_none(output.get('width')), + }) + formats.append(f) + text_tracks = body.get('subtitles') or [] + else: + m3u8_url = self._search_regex( + r'(["\'])(?P<url>(?:(?!\1).)+\.m3u8(?:(?!\1).)*)\1', + body, 'm3u8 url', group='url', default=None) + if not m3u8_url: + source = self._search_regex( + r'(?s)sources\s*:\s*\[\s*({.+?})\s*\]', body, 'source') + + def get_attr(key): + return self._search_regex( + r'\b%s\s*:\s*([\'"])(?P<val>(?:(?!\1).)+)\1' % key, + source, key, group='val') + + if get_attr('integration') == 'verizon-media': + m3u8_url = 'https://content.uplynk.com/%s.m3u8' % get_attr('id') + formats, subtitles = self._extract_m3u8_formats_and_subtitles( + m3u8_url, video_id, 'mp4', 'm3u8_native', m3u8_id='hls') + text_tracks = self._search_regex( + r'textTracks\s*:\s*(\[[^]]+\])', + body, 'text tracks', default=None) + if text_tracks: + text_tracks = self._parse_json( + text_tracks, video_id, js_to_json, False) + + if text_tracks: + for text_track in text_tracks: + tt_url = dict_get(text_track, ('file', 'src')) + if not tt_url: + continue + subtitles.setdefault(text_track.get('label') or 'English', []).append({ + 'url': tt_url, + }) + + thumbnails = [] + for thumbnail in video.get('thumbnails', []): + thumbnail_url = thumbnail.get('url') + if not thumbnail_url: + continue + thumbnails.append({ + 'url': thumbnail_url, + 'width': int_or_none(thumbnail.get('width')), + 'height': int_or_none(thumbnail.get('height')), + }) + + return { + 'id': video_id, + 'display_id': video.get('friendly_title'), + 'title': title, + 'thumbnails': thumbnails, + 'description': dict_get(video, ('description', 'ott_description', 'short_description')), + 'timestamp': parse_iso8601(video.get('published_at')), + 'duration': int_or_none(video.get('duration')), + 'view_count': int_or_none(video.get('request_count')), + 'average_rating': int_or_none(video.get('rating')), + 'season_number': int_or_none(video.get('season')), + 'episode_number': int_or_none(video.get('episode')), + 'formats': formats, + 'subtitles': subtitles, + } diff --git a/python/lib/python3.10/site-packages/yt_dlp/jsinterp.py b/python/lib/python3.10/site-packages/yt_dlp/jsinterp.py new file mode 100644 index 0000000..bda3fb4 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/jsinterp.py @@ -0,0 +1,853 @@ +import collections +import contextlib +import itertools +import json +import math +import operator +import re + +from .utils import ( + NO_DEFAULT, + ExtractorError, + function_with_repr, + js_to_json, + remove_quotes, + truncate_string, + unified_timestamp, + write_string, +) + + +def _js_bit_op(op): + def zeroise(x): + if x in (None, JS_Undefined): + return 0 + with contextlib.suppress(TypeError): + if math.isnan(x): # NB: NaN cannot be checked by membership + return 0 + return x + + def wrapped(a, b): + return op(zeroise(a), zeroise(b)) & 0xffffffff + + return wrapped + + +def _js_arith_op(op): + + def wrapped(a, b): + if JS_Undefined in (a, b): + return float('nan') + return op(a or 0, b or 0) + + return wrapped + + +def _js_div(a, b): + if JS_Undefined in (a, b) or not (a or b): + return float('nan') + return (a 
or 0) / b if b else float('inf') + + +def _js_mod(a, b): + if JS_Undefined in (a, b) or not b: + return float('nan') + return (a or 0) % b + + +def _js_exp(a, b): + if not b: + return 1 # even 0 ** 0 !! + elif JS_Undefined in (a, b): + return float('nan') + return (a or 0) ** b + + +def _js_eq_op(op): + + def wrapped(a, b): + if {a, b} <= {None, JS_Undefined}: + return op(a, a) + return op(a, b) + + return wrapped + + +def _js_comp_op(op): + + def wrapped(a, b): + if JS_Undefined in (a, b): + return False + if isinstance(a, str) or isinstance(b, str): + return op(str(a or 0), str(b or 0)) + return op(a or 0, b or 0) + + return wrapped + + +def _js_ternary(cndn, if_true=True, if_false=False): + """Simulate JS's ternary operator (cndn?if_true:if_false)""" + if cndn in (False, None, 0, '', JS_Undefined): + return if_false + with contextlib.suppress(TypeError): + if math.isnan(cndn): # NB: NaN cannot be checked by membership + return if_false + return if_true + + +# Ref: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Operator_Precedence +_OPERATORS = { # None => Defined in JSInterpreter._operator + '?': None, + '??': None, + '||': None, + '&&': None, + + '|': _js_bit_op(operator.or_), + '^': _js_bit_op(operator.xor), + '&': _js_bit_op(operator.and_), + + '===': operator.is_, + '!==': operator.is_not, + '==': _js_eq_op(operator.eq), + '!=': _js_eq_op(operator.ne), + + '<=': _js_comp_op(operator.le), + '>=': _js_comp_op(operator.ge), + '<': _js_comp_op(operator.lt), + '>': _js_comp_op(operator.gt), + + '>>': _js_bit_op(operator.rshift), + '<<': _js_bit_op(operator.lshift), + + '+': _js_arith_op(operator.add), + '-': _js_arith_op(operator.sub), + + '*': _js_arith_op(operator.mul), + '%': _js_mod, + '/': _js_div, + '**': _js_exp, +} + +_COMP_OPERATORS = {'===', '!==', '==', '!=', '<=', '>=', '<', '>'} + +_NAME_RE = r'[a-zA-Z_$][\w$]*' +_MATCHING_PARENS = dict(zip(*zip('()', '{}', '[]'))) +_QUOTES = '\'"/' + + +class JS_Undefined: + pass + + +class JS_Break(ExtractorError): + def __init__(self): + ExtractorError.__init__(self, 'Invalid break') + + +class JS_Continue(ExtractorError): + def __init__(self): + ExtractorError.__init__(self, 'Invalid continue') + + +class JS_Throw(ExtractorError): + def __init__(self, e): + self.error = e + ExtractorError.__init__(self, f'Uncaught exception {e}') + + +class LocalNameSpace(collections.ChainMap): + def __setitem__(self, key, value): + for scope in self.maps: + if key in scope: + scope[key] = value + return + self.maps[0][key] = value + + def __delitem__(self, key): + raise NotImplementedError('Deleting is not supported') + + +class Debugger: + import sys + ENABLED = False and 'pytest' in sys.modules + + @staticmethod + def write(*args, level=100): + write_string(f'[debug] JS: {" " * (100 - level)}' + f'{" ".join(truncate_string(str(x), 50, 50) for x in args)}\n') + + @classmethod + def wrap_interpreter(cls, f): + def interpret_statement(self, stmt, local_vars, allow_recursion, *args, **kwargs): + if cls.ENABLED and stmt.strip(): + cls.write(stmt, level=allow_recursion) + try: + ret, should_ret = f(self, stmt, local_vars, allow_recursion, *args, **kwargs) + except Exception as e: + if cls.ENABLED: + if isinstance(e, ExtractorError): + e = e.orig_msg + cls.write('=> Raises:', e, '<-|', stmt, level=allow_recursion) + raise + if cls.ENABLED and stmt.strip(): + if should_ret or not repr(ret) == stmt: + cls.write(['->', '=>'][should_ret], repr(ret), '<-|', stmt, level=allow_recursion) + return ret, should_ret + return 
interpret_statement + + +class JSInterpreter: + __named_object_counter = 0 + + _RE_FLAGS = { + # special knowledge: Python's re flags are bitmask values, current max 128 + # invent new bitmask values well above that for literal parsing + # TODO: new pattern class to execute matches with these flags + 'd': 1024, # Generate indices for substring matches + 'g': 2048, # Global search + 'i': re.I, # Case-insensitive search + 'm': re.M, # Multi-line search + 's': re.S, # Allows . to match newline characters + 'u': re.U, # Treat a pattern as a sequence of unicode code points + 'y': 4096, # Perform a "sticky" search that matches starting at the current position in the target string + } + + def __init__(self, code, objects=None): + self.code, self._functions = code, {} + self._objects = {} if objects is None else objects + + class Exception(ExtractorError): + def __init__(self, msg, expr=None, *args, **kwargs): + if expr is not None: + msg = f'{msg.rstrip()} in: {truncate_string(expr, 50, 50)}' + super().__init__(msg, *args, **kwargs) + + def _named_object(self, namespace, obj): + self.__named_object_counter += 1 + name = f'__yt_dlp_jsinterp_obj{self.__named_object_counter}' + if callable(obj) and not isinstance(obj, function_with_repr): + obj = function_with_repr(obj, f'F<{self.__named_object_counter}>') + namespace[name] = obj + return name + + @classmethod + def _regex_flags(cls, expr): + flags = 0 + if not expr: + return flags, expr + for idx, ch in enumerate(expr): + if ch not in cls._RE_FLAGS: + break + flags |= cls._RE_FLAGS[ch] + return flags, expr[idx + 1:] + + @staticmethod + def _separate(expr, delim=',', max_split=None): + OP_CHARS = '+-*/%&|^=<>!,;{}:[' + if not expr: + return + counters = {k: 0 for k in _MATCHING_PARENS.values()} + start, splits, pos, delim_len = 0, 0, 0, len(delim) - 1 + in_quote, escaping, after_op, in_regex_char_group = None, False, True, False + for idx, char in enumerate(expr): + if not in_quote and char in _MATCHING_PARENS: + counters[_MATCHING_PARENS[char]] += 1 + elif not in_quote and char in counters: + # Something's wrong if we get negative, but ignore it anyway + if counters[char]: + counters[char] -= 1 + elif not escaping: + if char in _QUOTES and in_quote in (char, None): + if in_quote or after_op or char != '/': + in_quote = None if in_quote and not in_regex_char_group else char + elif in_quote == '/' and char in '[]': + in_regex_char_group = char == '[' + escaping = not escaping and in_quote and char == '\\' + in_unary_op = (not in_quote and not in_regex_char_group + and after_op not in (True, False) and char in '-+') + after_op = char if (not in_quote and char in OP_CHARS) else (char.isspace() and after_op) + + if char != delim[pos] or any(counters.values()) or in_quote or in_unary_op: + pos = 0 + continue + elif pos != delim_len: + pos += 1 + continue + yield expr[start: idx - delim_len] + start, pos = idx + 1, 0 + splits += 1 + if max_split and splits >= max_split: + break + yield expr[start:] + + @classmethod + def _separate_at_paren(cls, expr, delim=None): + if delim is None: + delim = expr and _MATCHING_PARENS[expr[0]] + separated = list(cls._separate(expr, delim, 1)) + if len(separated) < 2: + raise cls.Exception(f'No terminating paren {delim}', expr) + return separated[0][1:].strip(), separated[1].strip() + + def _operator(self, op, left_val, right_expr, expr, local_vars, allow_recursion): + if op in ('||', '&&'): + if (op == '&&') ^ _js_ternary(left_val): + return left_val # short circuiting + elif op == '??': + if left_val not in (None, 
JS_Undefined): + return left_val + elif op == '?': + right_expr = _js_ternary(left_val, *self._separate(right_expr, ':', 1)) + + right_val = self.interpret_expression(right_expr, local_vars, allow_recursion) + if not _OPERATORS.get(op): + return right_val + + try: + return _OPERATORS[op](left_val, right_val) + except Exception as e: + raise self.Exception(f'Failed to evaluate {left_val!r} {op} {right_val!r}', expr, cause=e) + + def _index(self, obj, idx, allow_undefined=False): + if idx == 'length': + return len(obj) + try: + return obj[int(idx)] if isinstance(obj, list) else obj[idx] + except Exception as e: + if allow_undefined: + return JS_Undefined + raise self.Exception(f'Cannot get index {idx}', repr(obj), cause=e) + + def _dump(self, obj, namespace): + try: + return json.dumps(obj) + except TypeError: + return self._named_object(namespace, obj) + + @Debugger.wrap_interpreter + def interpret_statement(self, stmt, local_vars, allow_recursion=100): + if allow_recursion < 0: + raise self.Exception('Recursion limit reached') + allow_recursion -= 1 + + should_return = False + sub_statements = list(self._separate(stmt, ';')) or [''] + expr = stmt = sub_statements.pop().strip() + + for sub_stmt in sub_statements: + ret, should_return = self.interpret_statement(sub_stmt, local_vars, allow_recursion) + if should_return: + return ret, should_return + + m = re.match(r'(?P<var>(?:var|const|let)\s)|return(?:\s+|(?=["\'])|$)|(?P<throw>throw\s+)', stmt) + if m: + expr = stmt[len(m.group(0)):].strip() + if m.group('throw'): + raise JS_Throw(self.interpret_expression(expr, local_vars, allow_recursion)) + should_return = not m.group('var') + if not expr: + return None, should_return + + if expr[0] in _QUOTES: + inner, outer = self._separate(expr, expr[0], 1) + if expr[0] == '/': + flags, outer = self._regex_flags(outer) + # We don't support regex methods yet, so no point compiling it + inner = f'{inner}/{flags}' + # Avoid https://github.com/python/cpython/issues/74534 + # inner = re.compile(inner[1:].replace('[[', r'[\['), flags=flags) + else: + inner = json.loads(js_to_json(f'{inner}{expr[0]}', strict=True)) + if not outer: + return inner, should_return + expr = self._named_object(local_vars, inner) + outer + + if expr.startswith('new '): + obj = expr[4:] + if obj.startswith('Date('): + left, right = self._separate_at_paren(obj[4:]) + date = unified_timestamp( + self.interpret_expression(left, local_vars, allow_recursion), False) + if date is None: + raise self.Exception(f'Failed to parse date {left!r}', expr) + expr = self._dump(int(date * 1000), local_vars) + right + else: + raise self.Exception(f'Unsupported object {obj}', expr) + + if expr.startswith('void '): + left = self.interpret_expression(expr[5:], local_vars, allow_recursion) + return None, should_return + + if expr.startswith('{'): + inner, outer = self._separate_at_paren(expr) + # try for object expression (Map) + sub_expressions = [list(self._separate(sub_expr.strip(), ':', 1)) for sub_expr in self._separate(inner)] + if all(len(sub_expr) == 2 for sub_expr in sub_expressions): + def dict_item(key, val): + val = self.interpret_expression(val, local_vars, allow_recursion) + if re.match(_NAME_RE, key): + return key, val + return self.interpret_expression(key, local_vars, allow_recursion), val + + return dict(dict_item(k, v) for k, v in sub_expressions), should_return + + inner, should_abort = self.interpret_statement(inner, local_vars, allow_recursion) + if not outer or should_abort: + return inner, should_abort or should_return + else: + 
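# The evaluated inner value is re-serialized by _dump() (JSON where possible,
# otherwise registered as a named object in local_vars) and spliced back into
# the surrounding expression before the remainder is interpreted.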
expr = self._dump(inner, local_vars) + outer + + if expr.startswith('('): + inner, outer = self._separate_at_paren(expr) + inner, should_abort = self.interpret_statement(inner, local_vars, allow_recursion) + if not outer or should_abort: + return inner, should_abort or should_return + else: + expr = self._dump(inner, local_vars) + outer + + if expr.startswith('['): + inner, outer = self._separate_at_paren(expr) + name = self._named_object(local_vars, [ + self.interpret_expression(item, local_vars, allow_recursion) + for item in self._separate(inner)]) + expr = name + outer + + m = re.match(r'''(?x) + (?P<try>try)\s*\{| + (?P<if>if)\s*\(| + (?P<switch>switch)\s*\(| + (?P<for>for)\s*\( + ''', expr) + md = m.groupdict() if m else {} + if md.get('if'): + cndn, expr = self._separate_at_paren(expr[m.end() - 1:]) + if_expr, expr = self._separate_at_paren(expr.lstrip()) + # TODO: "else if" is not handled + else_expr = None + m = re.match(r'else\s*{', expr) + if m: + else_expr, expr = self._separate_at_paren(expr[m.end() - 1:]) + cndn = _js_ternary(self.interpret_expression(cndn, local_vars, allow_recursion)) + ret, should_abort = self.interpret_statement( + if_expr if cndn else else_expr, local_vars, allow_recursion) + if should_abort: + return ret, True + + if md.get('try'): + try_expr, expr = self._separate_at_paren(expr[m.end() - 1:]) + err = None + try: + ret, should_abort = self.interpret_statement(try_expr, local_vars, allow_recursion) + if should_abort: + return ret, True + except Exception as e: + # XXX: This works for now, but makes debugging future issues very hard + err = e + + pending = (None, False) + m = re.match(fr'catch\s*(?P<err>\(\s*{_NAME_RE}\s*\))?\{{', expr) + if m: + sub_expr, expr = self._separate_at_paren(expr[m.end() - 1:]) + if err: + catch_vars = {} + if m.group('err'): + catch_vars[m.group('err')] = err.error if isinstance(err, JS_Throw) else err + catch_vars = local_vars.new_child(catch_vars) + err, pending = None, self.interpret_statement(sub_expr, catch_vars, allow_recursion) + + m = re.match(r'finally\s*\{', expr) + if m: + sub_expr, expr = self._separate_at_paren(expr[m.end() - 1:]) + ret, should_abort = self.interpret_statement(sub_expr, local_vars, allow_recursion) + if should_abort: + return ret, True + + ret, should_abort = pending + if should_abort: + return ret, True + + if err: + raise err + + elif md.get('for'): + constructor, remaining = self._separate_at_paren(expr[m.end() - 1:]) + if remaining.startswith('{'): + body, expr = self._separate_at_paren(remaining) + else: + switch_m = re.match(r'switch\s*\(', remaining) # FIXME + if switch_m: + switch_val, remaining = self._separate_at_paren(remaining[switch_m.end() - 1:]) + body, expr = self._separate_at_paren(remaining, '}') + body = 'switch(%s){%s}' % (switch_val, body) + else: + body, expr = remaining, '' + start, cndn, increment = self._separate(constructor, ';') + self.interpret_expression(start, local_vars, allow_recursion) + while True: + if not _js_ternary(self.interpret_expression(cndn, local_vars, allow_recursion)): + break + try: + ret, should_abort = self.interpret_statement(body, local_vars, allow_recursion) + if should_abort: + return ret, True + except JS_Break: + break + except JS_Continue: + pass + self.interpret_expression(increment, local_vars, allow_recursion) + + elif md.get('switch'): + switch_val, remaining = self._separate_at_paren(expr[m.end() - 1:]) + switch_val = self.interpret_expression(switch_val, local_vars, allow_recursion) + body, expr = self._separate_at_paren(remaining, 
'}') + items = body.replace('default:', 'case default:').split('case ')[1:] + for default in (False, True): + matched = False + for item in items: + case, stmt = (i.strip() for i in self._separate(item, ':', 1)) + if default: + matched = matched or case == 'default' + elif not matched: + matched = (case != 'default' + and switch_val == self.interpret_expression(case, local_vars, allow_recursion)) + if not matched: + continue + try: + ret, should_abort = self.interpret_statement(stmt, local_vars, allow_recursion) + if should_abort: + return ret, True + except JS_Break: + break + if matched: + break + + if md: + ret, should_abort = self.interpret_statement(expr, local_vars, allow_recursion) + return ret, should_abort or should_return + + # Comma separated statements + sub_expressions = list(self._separate(expr)) + if len(sub_expressions) > 1: + for sub_expr in sub_expressions: + ret, should_abort = self.interpret_statement(sub_expr, local_vars, allow_recursion) + if should_abort: + return ret, True + return ret, False + + for m in re.finditer(rf'''(?x) + (?P<pre_sign>\+\+|--)(?P<var1>{_NAME_RE})| + (?P<var2>{_NAME_RE})(?P<post_sign>\+\+|--)''', expr): + var = m.group('var1') or m.group('var2') + start, end = m.span() + sign = m.group('pre_sign') or m.group('post_sign') + ret = local_vars[var] + local_vars[var] += 1 if sign[0] == '+' else -1 + if m.group('pre_sign'): + ret = local_vars[var] + expr = expr[:start] + self._dump(ret, local_vars) + expr[end:] + + if not expr: + return None, should_return + + m = re.match(fr'''(?x) + (?P<assign> + (?P<out>{_NAME_RE})(?:\[(?P<index>[^\]]+?)\])?\s* + (?P<op>{"|".join(map(re.escape, set(_OPERATORS) - _COMP_OPERATORS))})? + =(?!=)(?P<expr>.*)$ + )|(?P<return> + (?!if|return|true|false|null|undefined|NaN)(?P<name>{_NAME_RE})$ + )|(?P<indexing> + (?P<in>{_NAME_RE})\[(?P<idx>.+)\]$ + )|(?P<attribute> + (?P<var>{_NAME_RE})(?:(?P<nullish>\?)?\.(?P<member>[^(]+)|\[(?P<member2>[^\]]+)\])\s* + )|(?P<function> + (?P<fname>{_NAME_RE})\((?P<args>.*)\)$ + )''', expr) + if m and m.group('assign'): + left_val = local_vars.get(m.group('out')) + + if not m.group('index'): + local_vars[m.group('out')] = self._operator( + m.group('op'), left_val, m.group('expr'), expr, local_vars, allow_recursion) + return local_vars[m.group('out')], should_return + elif left_val in (None, JS_Undefined): + raise self.Exception(f'Cannot index undefined variable {m.group("out")}', expr) + + idx = self.interpret_expression(m.group('index'), local_vars, allow_recursion) + if not isinstance(idx, (int, float)): + raise self.Exception(f'List index {idx} must be integer', expr) + idx = int(idx) + left_val[idx] = self._operator( + m.group('op'), self._index(left_val, idx), m.group('expr'), expr, local_vars, allow_recursion) + return left_val[idx], should_return + + elif expr.isdigit(): + return int(expr), should_return + + elif expr == 'break': + raise JS_Break() + elif expr == 'continue': + raise JS_Continue() + elif expr == 'undefined': + return JS_Undefined, should_return + elif expr == 'NaN': + return float('NaN'), should_return + + elif m and m.group('return'): + return local_vars.get(m.group('name'), JS_Undefined), should_return + + with contextlib.suppress(ValueError): + return json.loads(js_to_json(expr, strict=True)), should_return + + if m and m.group('indexing'): + val = local_vars[m.group('in')] + idx = self.interpret_expression(m.group('idx'), local_vars, allow_recursion) + return self._index(val, idx), should_return + + for op in _OPERATORS: + separated = list(self._separate(expr, op)) +
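# Binary operators are handled by splitting the expression at each candidate
# operator, walking _OPERATORS from lowest to highest precedence, then
# recursing into the two halves. A rough illustration (hypothetical REPL
# session, not part of this file):
#
#   >>> list(JSInterpreter._separate('a+b*c', '+'))
#   ['a', 'b*c']
#
# so 'a+b*c' ends up evaluated as a + (b * c).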
right_expr = separated.pop() + while True: + if op in '?<>*-' and len(separated) > 1 and not separated[-1].strip(): + separated.pop() + elif not (separated and op == '?' and right_expr.startswith('.')): + break + right_expr = f'{op}{right_expr}' + if op != '-': + right_expr = f'{separated.pop()}{op}{right_expr}' + if not separated: + continue + left_val = self.interpret_expression(op.join(separated), local_vars, allow_recursion) + return self._operator(op, left_val, right_expr, expr, local_vars, allow_recursion), should_return + + if m and m.group('attribute'): + variable, member, nullish = m.group('var', 'member', 'nullish') + if not member: + member = self.interpret_expression(m.group('member2'), local_vars, allow_recursion) + arg_str = expr[m.end():] + if arg_str.startswith('('): + arg_str, remaining = self._separate_at_paren(arg_str) + else: + arg_str, remaining = None, arg_str + + def assertion(cndn, msg): + """ assert, but without risk of getting optimized out """ + if not cndn: + raise self.Exception(f'{member} {msg}', expr) + + def eval_method(): + if (variable, member) == ('console', 'debug'): + if Debugger.ENABLED: + Debugger.write(self.interpret_expression(f'[{arg_str}]', local_vars, allow_recursion)) + return + + types = { + 'String': str, + 'Math': float, + } + obj = local_vars.get(variable, types.get(variable, NO_DEFAULT)) + if obj is NO_DEFAULT: + if variable not in self._objects: + try: + self._objects[variable] = self.extract_object(variable) + except self.Exception: + if not nullish: + raise + obj = self._objects.get(variable, JS_Undefined) + + if nullish and obj is JS_Undefined: + return JS_Undefined + + # Member access + if arg_str is None: + return self._index(obj, member, nullish) + + # Function call + argvals = [ + self.interpret_expression(v, local_vars, allow_recursion) + for v in self._separate(arg_str)] + + if obj == str: + if member == 'fromCharCode': + assertion(argvals, 'takes one or more arguments') + return ''.join(map(chr, argvals)) + raise self.Exception(f'Unsupported String method {member}', expr) + elif obj == float: + if member == 'pow': + assertion(len(argvals) == 2, 'takes two arguments') + return argvals[0] ** argvals[1] + raise self.Exception(f'Unsupported Math method {member}', expr) + + if member == 'split': + assertion(argvals, 'takes one or more arguments') + assertion(len(argvals) == 1, 'with limit argument is not implemented') + return obj.split(argvals[0]) if argvals[0] else list(obj) + elif member == 'join': + assertion(isinstance(obj, list), 'must be applied on a list') + assertion(len(argvals) == 1, 'takes exactly one argument') + return argvals[0].join(obj) + elif member == 'reverse': + assertion(not argvals, 'does not take any arguments') + obj.reverse() + return obj + elif member == 'slice': + assertion(isinstance(obj, list), 'must be applied on a list') + assertion(len(argvals) == 1, 'takes exactly one argument') + return obj[argvals[0]:] + elif member == 'splice': + assertion(isinstance(obj, list), 'must be applied on a list') + assertion(argvals, 'takes one or more arguments') + index, howMany = map(int, (argvals + [len(obj)])[:2]) + if index < 0: + index += len(obj) + add_items = argvals[2:] + res = [] + for i in range(index, min(index + howMany, len(obj))): + res.append(obj.pop(index)) + for i, item in enumerate(add_items): + obj.insert(index + i, item) + return res + elif member == 'unshift': + assertion(isinstance(obj, list), 'must be applied on a list') + assertion(argvals, 'takes one or more arguments') + for item in 
reversed(argvals): + obj.insert(0, item) + return obj + elif member == 'pop': + assertion(isinstance(obj, list), 'must be applied on a list') + assertion(not argvals, 'does not take any arguments') + if not obj: + return + return obj.pop() + elif member == 'push': + assertion(argvals, 'takes one or more arguments') + obj.extend(argvals) + return obj + elif member == 'forEach': + assertion(argvals, 'takes one or more arguments') + assertion(len(argvals) <= 2, 'takes at-most 2 arguments') + f, this = (argvals + [''])[:2] + return [f((item, idx, obj), {'this': this}, allow_recursion) for idx, item in enumerate(obj)] + elif member == 'indexOf': + assertion(argvals, 'takes one or more arguments') + assertion(len(argvals) <= 2, 'takes at-most 2 arguments') + idx, start = (argvals + [0])[:2] + try: + return obj.index(idx, start) + except ValueError: + return -1 + elif member == 'charCodeAt': + assertion(isinstance(obj, str), 'must be applied on a string') + assertion(len(argvals) == 1, 'takes exactly one argument') + idx = argvals[0] if isinstance(argvals[0], int) else 0 + if idx >= len(obj): + return None + return ord(obj[idx]) + + idx = int(member) if isinstance(obj, list) else member + return obj[idx](argvals, allow_recursion=allow_recursion) + + if remaining: + ret, should_abort = self.interpret_statement( + self._named_object(local_vars, eval_method()) + remaining, + local_vars, allow_recursion) + return ret, should_return or should_abort + else: + return eval_method(), should_return + + elif m and m.group('function'): + fname = m.group('fname') + argvals = [self.interpret_expression(v, local_vars, allow_recursion) + for v in self._separate(m.group('args'))] + if fname in local_vars: + return local_vars[fname](argvals, allow_recursion=allow_recursion), should_return + elif fname not in self._functions: + self._functions[fname] = self.extract_function(fname) + return self._functions[fname](argvals, allow_recursion=allow_recursion), should_return + + raise self.Exception( + f'Unsupported JS expression {truncate_string(expr, 20, 20) if expr != stmt else ""}', stmt) + + def interpret_expression(self, expr, local_vars, allow_recursion): + ret, should_return = self.interpret_statement(expr, local_vars, allow_recursion) + if should_return: + raise self.Exception('Cannot return from an expression', expr) + return ret + + def extract_object(self, objname): + _FUNC_NAME_RE = r'''(?:[a-zA-Z$0-9]+|"[a-zA-Z$0-9]+"|'[a-zA-Z$0-9]+')''' + obj = {} + obj_m = re.search( + r'''(?x) + (?<!\.)%s\s*=\s*{\s* + (?P<fields>(%s\s*:\s*function\s*\(.*?\)\s*{.*?}(?:,\s*)?)*) + }\s*; + ''' % (re.escape(objname), _FUNC_NAME_RE), + self.code) + if not obj_m: + raise self.Exception(f'Could not find object {objname}') + fields = obj_m.group('fields') + # Currently, it only supports function definitions + fields_m = re.finditer( + r'''(?x) + (?P<key>%s)\s*:\s*function\s*\((?P<args>(?:%s|,)*)\){(?P<code>[^}]+)} + ''' % (_FUNC_NAME_RE, _NAME_RE), + fields) + for f in fields_m: + argnames = f.group('args').split(',') + name = remove_quotes(f.group('key')) + obj[name] = function_with_repr(self.build_function(argnames, f.group('code')), f'F<{name}>') + + return obj + + def extract_function_code(self, funcname): + """ @returns argnames, code """ + func_m = re.search( + r'''(?xs) + (?: + function\s+%(name)s| + [{;,]\s*%(name)s\s*=\s*function| + (?:var|const|let)\s+%(name)s\s*=\s*function + )\s* + \((?P<args>[^)]*)\)\s* + (?P<code>{.+})''' % {'name': re.escape(funcname)}, + self.code) + if func_m is None: + raise 
self.Exception(f'Could not find JS function "{funcname}"') + code, _ = self._separate_at_paren(func_m.group('code')) + return [x.strip() for x in func_m.group('args').split(',')], code + + def extract_function(self, funcname): + return function_with_repr( + self.extract_function_from_code(*self.extract_function_code(funcname)), + f'F<{funcname}>') + + def extract_function_from_code(self, argnames, code, *global_stack): + local_vars = {} + while True: + mobj = re.search(r'function\((?P<args>[^)]*)\)\s*{', code) + if mobj is None: + break + start, body_start = mobj.span() + body, remaining = self._separate_at_paren(code[body_start - 1:]) + name = self._named_object(local_vars, self.extract_function_from_code( + [x.strip() for x in mobj.group('args').split(',')], + body, local_vars, *global_stack)) + code = code[:start] + name + remaining + return self.build_function(argnames, code, local_vars, *global_stack) + + def call_function(self, funcname, *args): + return self.extract_function(funcname)(args) + + def build_function(self, argnames, code, *global_stack): + global_stack = list(global_stack) or [{}] + argnames = tuple(argnames) + + def resf(args, kwargs={}, allow_recursion=100): + global_stack[0].update(itertools.zip_longest(argnames, args, fillvalue=None)) + global_stack[0].update(kwargs) + var_stack = LocalNameSpace(*global_stack) + ret, should_abort = self.interpret_statement(code.replace('\n', ' '), var_stack, allow_recursion - 1) + if should_abort: + return ret + return resf diff --git a/lib/python3.11/site-packages/yt_dlp/minicurses.py b/python/lib/python3.10/site-packages/yt_dlp/minicurses.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/minicurses.py rename to python/lib/python3.10/site-packages/yt_dlp/minicurses.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/networking/__init__.py b/python/lib/python3.10/site-packages/yt_dlp/networking/__init__.py new file mode 100644 index 0000000..5b1599a --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/networking/__init__.py @@ -0,0 +1,13 @@ +# flake8: noqa: F401 +from .common import ( + HEADRequest, + PUTRequest, + Request, + RequestDirector, + RequestHandler, + Response, +) + +# isort: split +# TODO: all request handlers should be safely imported +from . import _urllib diff --git a/python/lib/python3.10/site-packages/yt_dlp/networking/_helper.py b/python/lib/python3.10/site-packages/yt_dlp/networking/_helper.py new file mode 100644 index 0000000..4c9dbf2 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/networking/_helper.py @@ -0,0 +1,265 @@ +from __future__ import annotations + +import contextlib +import functools +import socket +import ssl +import sys +import typing +import urllib.parse +import urllib.request + +from .exceptions import RequestError, UnsupportedRequest +from ..dependencies import certifi +from ..socks import ProxyType +from ..utils import format_field, traverse_obj + +if typing.TYPE_CHECKING: + from collections.abc import Iterable + + from ..utils.networking import HTTPHeaderDict + + +def ssl_load_certs(context: ssl.SSLContext, use_certifi=True): + if certifi and use_certifi: + context.load_verify_locations(cafile=certifi.where()) + else: + try: + context.load_default_certs() + # Work around the issue in load_default_certs when there are bad certificates. See: + # https://github.com/yt-dlp/yt-dlp/issues/1060, + # https://bugs.python.org/issue35665, https://bugs.python.org/issue45312 + except ssl.SSLError: + # enum_certificates is not present in mingw python. 
See https://github.com/yt-dlp/yt-dlp/issues/1151 + if sys.platform == 'win32' and hasattr(ssl, 'enum_certificates'): + for storename in ('CA', 'ROOT'): + ssl_load_windows_store_certs(context, storename) + context.set_default_verify_paths() + + +def ssl_load_windows_store_certs(ssl_context, storename): + # Code adapted from _load_windows_store_certs in https://github.com/python/cpython/blob/main/Lib/ssl.py + try: + certs = [cert for cert, encoding, trust in ssl.enum_certificates(storename) + if encoding == 'x509_asn' and ( + trust is True or ssl.Purpose.SERVER_AUTH.oid in trust)] + except PermissionError: + return + for cert in certs: + with contextlib.suppress(ssl.SSLError): + ssl_context.load_verify_locations(cadata=cert) + + +def make_socks_proxy_opts(socks_proxy): + url_components = urllib.parse.urlparse(socks_proxy) + if url_components.scheme.lower() == 'socks5': + socks_type = ProxyType.SOCKS5 + rdns = False + elif url_components.scheme.lower() == 'socks5h': + socks_type = ProxyType.SOCKS5 + rdns = True + elif url_components.scheme.lower() == 'socks4': + socks_type = ProxyType.SOCKS4 + rdns = False + elif url_components.scheme.lower() == 'socks4a': + socks_type = ProxyType.SOCKS4A + rdns = True + else: + raise ValueError(f'Unknown SOCKS proxy version: {url_components.scheme.lower()}') + + def unquote_if_non_empty(s): + if not s: + return s + return urllib.parse.unquote_plus(s) + return { + 'proxytype': socks_type, + 'addr': url_components.hostname, + 'port': url_components.port or 1080, + 'rdns': rdns, + 'username': unquote_if_non_empty(url_components.username), + 'password': unquote_if_non_empty(url_components.password), + } + + +def select_proxy(url, proxies): + """Unified proxy selector for all backends""" + url_components = urllib.parse.urlparse(url) + if 'no' in proxies: + hostport = url_components.hostname + format_field(url_components.port, None, ':%s') + if urllib.request.proxy_bypass_environment(hostport, {'no': proxies['no']}): + return + elif urllib.request.proxy_bypass(hostport): # check system settings + return + + return traverse_obj(proxies, url_components.scheme or 'http', 'all') + + +def get_redirect_method(method, status): + """Unified redirect method handling""" + + # A 303 must either use GET or HEAD for subsequent request + # https://datatracker.ietf.org/doc/html/rfc7231#section-6.4.4 + if status == 303 and method != 'HEAD': + method = 'GET' + # 301 and 302 redirects are commonly turned into a GET from a POST + # for subsequent requests by browsers, so we'll do the same. + # https://datatracker.ietf.org/doc/html/rfc7231#section-6.4.2 + # https://datatracker.ietf.org/doc/html/rfc7231#section-6.4.3 + if status in (301, 302) and method == 'POST': + method = 'GET' + return method + + +def make_ssl_context( + verify=True, + client_certificate=None, + client_certificate_key=None, + client_certificate_password=None, + legacy_support=False, + use_certifi=True, +): + context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) + context.check_hostname = verify + context.verify_mode = ssl.CERT_REQUIRED if verify else ssl.CERT_NONE + + # Some servers may reject requests if ALPN extension is not sent. 
See: + # https://github.com/python/cpython/issues/85140 + # https://github.com/yt-dlp/yt-dlp/issues/3878 + with contextlib.suppress(NotImplementedError): + context.set_alpn_protocols(['http/1.1']) + if verify: + ssl_load_certs(context, use_certifi) + + if legacy_support: + context.options |= 4 # SSL_OP_LEGACY_SERVER_CONNECT + context.set_ciphers('DEFAULT') # compat + + elif ssl.OPENSSL_VERSION_INFO >= (1, 1, 1) and not ssl.OPENSSL_VERSION.startswith('LibreSSL'): + # Use the default SSL ciphers and minimum TLS version settings from Python 3.10 [1]. + # This is to ensure consistent behavior across Python versions and libraries, and help avoid fingerprinting + # in some situations [2][3]. + # Python 3.10 only supports OpenSSL 1.1.1+ [4]. Because this change is likely + # untested on older versions, we only apply this to OpenSSL 1.1.1+ to be safe. + # LibreSSL is excluded until further investigation due to cipher support issues [5][6]. + # 1. https://github.com/python/cpython/commit/e983252b516edb15d4338b0a47631b59ef1e2536 + # 2. https://github.com/yt-dlp/yt-dlp/issues/4627 + # 3. https://github.com/yt-dlp/yt-dlp/pull/5294 + # 4. https://peps.python.org/pep-0644/ + # 5. https://peps.python.org/pep-0644/#libressl-support + # 6. https://github.com/yt-dlp/yt-dlp/commit/5b9f253fa0aee996cf1ed30185d4b502e00609c4#commitcomment-89054368 + context.set_ciphers( + '@SECLEVEL=2:ECDH+AESGCM:ECDH+CHACHA20:ECDH+AES:DHE+AES:!aNULL:!eNULL:!aDSS:!SHA1:!AESCCM') + context.minimum_version = ssl.TLSVersion.TLSv1_2 + + if client_certificate: + try: + context.load_cert_chain( + client_certificate, keyfile=client_certificate_key, + password=client_certificate_password) + except ssl.SSLError: + raise RequestError('Unable to load client certificate') + + if getattr(context, 'post_handshake_auth', None) is not None: + context.post_handshake_auth = True + return context + + +class InstanceStoreMixin: + def __init__(self, **kwargs): + self.__instances = [] + super().__init__(**kwargs) # So that both MRO works + + @staticmethod + def _create_instance(**kwargs): + raise NotImplementedError + + def _get_instance(self, **kwargs): + for key, instance in self.__instances: + if key == kwargs: + return instance + + instance = self._create_instance(**kwargs) + self.__instances.append((kwargs, instance)) + return instance + + def _close_instance(self, instance): + if callable(getattr(instance, 'close', None)): + instance.close() + + def _clear_instances(self): + for _, instance in self.__instances: + self._close_instance(instance) + self.__instances.clear() + + +def add_accept_encoding_header(headers: HTTPHeaderDict, supported_encodings: Iterable[str]): + if 'Accept-Encoding' not in headers: + headers['Accept-Encoding'] = ', '.join(supported_encodings) or 'identity' + + +def wrap_request_errors(func): + @functools.wraps(func) + def wrapper(self, *args, **kwargs): + try: + return func(self, *args, **kwargs) + except UnsupportedRequest as e: + if e.handler is None: + e.handler = self + raise + return wrapper + + +def _socket_connect(ip_addr, timeout, source_address): + af, socktype, proto, canonname, sa = ip_addr + sock = socket.socket(af, socktype, proto) + try: + if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: + sock.settimeout(timeout) + if source_address: + sock.bind(source_address) + sock.connect(sa) + return sock + except socket.error: + sock.close() + raise + + +def create_connection( + address, + timeout=socket._GLOBAL_DEFAULT_TIMEOUT, + source_address=None, + *, + _create_socket_func=_socket_connect +): + # Work around 
socket.create_connection() which tries all addresses from getaddrinfo() including IPv6. + # This filters the addresses based on the given source_address. + # Based on: https://github.com/python/cpython/blob/main/Lib/socket.py#L810 + host, port = address + ip_addrs = socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM) + if not ip_addrs: + raise socket.error('getaddrinfo returns an empty list') + if source_address is not None: + af = socket.AF_INET if ':' not in source_address[0] else socket.AF_INET6 + ip_addrs = [addr for addr in ip_addrs if addr[0] == af] + if not ip_addrs: + raise OSError( + f'No remote IPv{4 if af == socket.AF_INET else 6} addresses available for connect. ' + f'Can\'t use "{source_address[0]}" as source address') + + err = None + for ip_addr in ip_addrs: + try: + sock = _create_socket_func(ip_addr, timeout, source_address) + # Explicitly break __traceback__ reference cycle + # https://bugs.python.org/issue36820 + err = None + return sock + except socket.error as e: + err = e + + try: + raise err + finally: + # Explicitly break __traceback__ reference cycle + # https://bugs.python.org/issue36820 + err = None diff --git a/python/lib/python3.10/site-packages/yt_dlp/networking/_urllib.py b/python/lib/python3.10/site-packages/yt_dlp/networking/_urllib.py new file mode 100644 index 0000000..9e2bf33 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/networking/_urllib.py @@ -0,0 +1,436 @@ +from __future__ import annotations + +import functools +import http.client +import io +import socket +import ssl +import urllib.error +import urllib.parse +import urllib.request +import urllib.response +import zlib +from urllib.request import ( + DataHandler, + FileHandler, + FTPHandler, + HTTPCookieProcessor, + HTTPDefaultErrorHandler, + HTTPErrorProcessor, + UnknownHandler, +) + +from ._helper import ( + InstanceStoreMixin, + add_accept_encoding_header, + create_connection, + get_redirect_method, + make_socks_proxy_opts, + select_proxy, +) +from .common import Features, RequestHandler, Response, register_rh +from .exceptions import ( + CertificateVerifyError, + HTTPError, + IncompleteRead, + ProxyError, + RequestError, + SSLError, + TransportError, +) +from ..dependencies import brotli +from ..socks import ProxyError as SocksProxyError +from ..socks import sockssocket +from ..utils import update_url_query +from ..utils.networking import normalize_url + +SUPPORTED_ENCODINGS = ['gzip', 'deflate'] +CONTENT_DECODE_ERRORS = [zlib.error, OSError] + +if brotli: + SUPPORTED_ENCODINGS.append('br') + CONTENT_DECODE_ERRORS.append(brotli.error) + + +def _create_http_connection(http_class, source_address, *args, **kwargs): + hc = http_class(*args, **kwargs) + + if hasattr(hc, '_create_connection'): + hc._create_connection = create_connection + + if source_address is not None: + hc.source_address = (source_address, 0) + + return hc + + +class HTTPHandler(urllib.request.AbstractHTTPHandler): + """Handler for HTTP requests and responses. + + This class, when installed with an OpenerDirector, automatically adds + the standard headers to every HTTP request and handles gzipped, deflated and + brotli responses from web servers. + + Part of this code was copied from: + + http://techknack.net/python-urllib2-handlers/ + + Andrew Rowls, the author of that code, agreed to release it to the + public domain. 
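    A rough standalone sketch (illustrative only; yt-dlp itself wires this
    handler into an OpenerDirector in UrllibRH._create_instance below):

        import urllib.request
        opener = urllib.request.OpenerDirector()
        opener.add_handler(HTTPHandler())
        res = opener.open('http://example.com')
        # gzip/deflate (and brotli, when available) bodies are
        # decoded transparently by http_response()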
+ """ + + def __init__(self, context=None, source_address=None, *args, **kwargs): + super().__init__(*args, **kwargs) + self._source_address = source_address + self._context = context + + @staticmethod + def _make_conn_class(base, req): + conn_class = base + socks_proxy = req.headers.pop('Ytdl-socks-proxy', None) + if socks_proxy: + conn_class = make_socks_conn_class(conn_class, socks_proxy) + return conn_class + + def http_open(self, req): + conn_class = self._make_conn_class(http.client.HTTPConnection, req) + return self.do_open(functools.partial( + _create_http_connection, conn_class, self._source_address), req) + + def https_open(self, req): + conn_class = self._make_conn_class(http.client.HTTPSConnection, req) + return self.do_open( + functools.partial( + _create_http_connection, conn_class, self._source_address), + req, context=self._context) + + @staticmethod + def deflate(data): + if not data: + return data + try: + return zlib.decompress(data, -zlib.MAX_WBITS) + except zlib.error: + return zlib.decompress(data) + + @staticmethod + def brotli(data): + if not data: + return data + return brotli.decompress(data) + + @staticmethod + def gz(data): + # There may be junk added the end of the file + # We ignore it by only ever decoding a single gzip payload + if not data: + return data + return zlib.decompress(data, wbits=zlib.MAX_WBITS | 16) + + def http_request(self, req): + # According to RFC 3986, URLs can not contain non-ASCII characters, however this is not + # always respected by websites, some tend to give out URLs with non percent-encoded + # non-ASCII characters (see telemb.py, ard.py [#3412]) + # urllib chokes on URLs with non-ASCII characters (see http://bugs.python.org/issue3991) + # To work around aforementioned issue we will replace request's original URL with + # percent-encoded one + # Since redirects are also affected (e.g. http://www.southpark.de/alle-episoden/s18e09) + # the code of this workaround has been moved here from YoutubeDL.urlopen() + url = req.get_full_url() + url_escaped = normalize_url(url) + + # Substitute URL if any change after escaping + if url != url_escaped: + req = update_Request(req, url=url_escaped) + + return super().do_request_(req) + + def http_response(self, req, resp): + old_resp = resp + + # Content-Encoding header lists the encodings in order that they were applied [1]. + # To decompress, we simply do the reverse. + # [1]: https://datatracker.ietf.org/doc/html/rfc9110#name-content-encoding + decoded_response = None + for encoding in (e.strip() for e in reversed(resp.headers.get('Content-encoding', '').split(','))): + if encoding == 'gzip': + decoded_response = self.gz(decoded_response or resp.read()) + elif encoding == 'deflate': + decoded_response = self.deflate(decoded_response or resp.read()) + elif encoding == 'br' and brotli: + decoded_response = self.brotli(decoded_response or resp.read()) + + if decoded_response is not None: + resp = urllib.request.addinfourl(io.BytesIO(decoded_response), old_resp.headers, old_resp.url, old_resp.code) + resp.msg = old_resp.msg + # Percent-encode redirect URL of Location HTTP header to satisfy RFC 3986 (see + # https://github.com/ytdl-org/youtube-dl/issues/6457). 
+ if 300 <= resp.code < 400: + location = resp.headers.get('Location') + if location: + # Per RFC 2616, the default charset is iso-8859-1, which is respected by Python 3 + location = location.encode('iso-8859-1').decode() + location_escaped = normalize_url(location) + if location != location_escaped: + del resp.headers['Location'] + resp.headers['Location'] = location_escaped + return resp + + https_request = http_request + https_response = http_response + + +def make_socks_conn_class(base_class, socks_proxy): + assert issubclass(base_class, ( + http.client.HTTPConnection, http.client.HTTPSConnection)) + + proxy_args = make_socks_proxy_opts(socks_proxy) + + class SocksConnection(base_class): + _create_connection = create_connection + + def connect(self): + def sock_socket_connect(ip_addr, timeout, source_address): + af, socktype, proto, canonname, sa = ip_addr + sock = sockssocket(af, socktype, proto) + try: + connect_proxy_args = proxy_args.copy() + connect_proxy_args.update({'addr': sa[0], 'port': sa[1]}) + sock.setproxy(**connect_proxy_args) + if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: # noqa: E721 + sock.settimeout(timeout) + if source_address: + sock.bind(source_address) + sock.connect((self.host, self.port)) + return sock + except socket.error: + sock.close() + raise + self.sock = create_connection( + (proxy_args['addr'], proxy_args['port']), timeout=self.timeout, + source_address=self.source_address, _create_socket_func=sock_socket_connect) + if isinstance(self, http.client.HTTPSConnection): + self.sock = self._context.wrap_socket(self.sock, server_hostname=self.host) + + return SocksConnection + + +class RedirectHandler(urllib.request.HTTPRedirectHandler): + """YoutubeDL redirect handler + + The code is based on HTTPRedirectHandler implementation from CPython [1]. + + This redirect handler fixes and improves the logic to better align with RFC 7231 + and what browsers tend to do [2][3] + + 1. https://github.com/python/cpython/blob/master/Lib/urllib/request.py + 2. https://datatracker.ietf.org/doc/html/rfc7231 + 3. https://github.com/python/cpython/issues/91306 + """ + + http_error_301 = http_error_303 = http_error_307 = http_error_308 = urllib.request.HTTPRedirectHandler.http_error_302 + + def redirect_request(self, req, fp, code, msg, headers, newurl): + if code not in (301, 302, 303, 307, 308): + raise urllib.error.HTTPError(req.full_url, code, msg, headers, fp) + + new_data = req.data + + # Technically the Cookie header should be in unredirected_hdrs, + # however in practice some may set it in normal headers anyway. + # We will remove it here to prevent any leaks. + remove_headers = ['Cookie'] + + new_method = get_redirect_method(req.get_method(), code) + # only remove payload if method changed (e.g. POST to GET)
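        # e.g. get_redirect_method('POST', 302) -> 'GET', since browsers commonly
        # turn a redirected POST into a GET, while get_redirect_method('POST', 307)
        # -> 'POST', because 307/308 preserve the original method.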
+ if new_method != req.get_method(): + new_data = None + remove_headers.extend(['Content-Length', 'Content-Type']) + + new_headers = {k: v for k, v in req.headers.items() if k.title() not in remove_headers} + + return urllib.request.Request( + newurl, headers=new_headers, origin_req_host=req.origin_req_host, + unverifiable=True, method=new_method, data=new_data) + + +class ProxyHandler(urllib.request.BaseHandler): + handler_order = 100 + + def __init__(self, proxies=None): + self.proxies = proxies + # Set default handlers + for type in ('http', 'https', 'ftp'): + setattr(self, '%s_open' % type, lambda r, meth=self.proxy_open: meth(r)) + + def proxy_open(self, req): + proxy = select_proxy(req.get_full_url(), self.proxies) + if proxy is None: + return + if urllib.parse.urlparse(proxy).scheme.lower() in ('socks4', 'socks4a', 'socks5', 'socks5h'): + req.add_header('Ytdl-socks-proxy', proxy) + # yt-dlp's http/https handlers handle wrapping the socket with SOCKS + return None + return urllib.request.ProxyHandler.proxy_open( + self, req, proxy, None) + + +class PUTRequest(urllib.request.Request): + def get_method(self): + return 'PUT' + + +class HEADRequest(urllib.request.Request): + def get_method(self): + return 'HEAD' + + +def update_Request(req, url=None, data=None, headers=None, query=None): + req_headers = req.headers.copy() + req_headers.update(headers or {}) + req_data = data if data is not None else req.data + req_url = update_url_query(url or req.get_full_url(), query) + req_get_method = req.get_method() + if req_get_method == 'HEAD': + req_type = HEADRequest + elif req_get_method == 'PUT': + req_type = PUTRequest + else: + req_type = urllib.request.Request + new_req = req_type( + req_url, data=req_data, headers=req_headers, + origin_req_host=req.origin_req_host, unverifiable=req.unverifiable) + if hasattr(req, 'timeout'): + new_req.timeout = req.timeout + return new_req + + +class UrllibResponseAdapter(Response): + """ + HTTP Response adapter class for urllib addinfourl and http.client.HTTPResponse + """ + + def __init__(self, res: http.client.HTTPResponse | urllib.response.addinfourl): + # addinfourl: In Python 3.9+, .status was introduced and .getcode() was deprecated [1] + # HTTPResponse: .getcode() was deprecated, .status always existed [2] + # 1. https://docs.python.org/3/library/urllib.request.html#urllib.response.addinfourl.getcode + # 2.
https://docs.python.org/3.10/library/http.client.html#http.client.HTTPResponse.status + super().__init__( + fp=res, headers=res.headers, url=res.url, + status=getattr(res, 'status', None) or res.getcode(), reason=getattr(res, 'reason', None)) + + def read(self, amt=None): + try: + return self.fp.read(amt) + except Exception as e: + handle_response_read_exceptions(e) + raise e + + +def handle_sslerror(e: ssl.SSLError): + if not isinstance(e, ssl.SSLError): + return + if isinstance(e, ssl.SSLCertVerificationError): + raise CertificateVerifyError(cause=e) from e + raise SSLError(cause=e) from e + + +def handle_response_read_exceptions(e): + if isinstance(e, http.client.IncompleteRead): + raise IncompleteRead(partial=len(e.partial), cause=e, expected=e.expected) from e + elif isinstance(e, ssl.SSLError): + handle_sslerror(e) + elif isinstance(e, (OSError, EOFError, http.client.HTTPException, *CONTENT_DECODE_ERRORS)): + # OSErrors raised here should mostly be network related + raise TransportError(cause=e) from e + + +@register_rh +class UrllibRH(RequestHandler, InstanceStoreMixin): + _SUPPORTED_URL_SCHEMES = ('http', 'https', 'data', 'ftp') + _SUPPORTED_PROXY_SCHEMES = ('http', 'socks4', 'socks4a', 'socks5', 'socks5h') + _SUPPORTED_FEATURES = (Features.NO_PROXY, Features.ALL_PROXY) + RH_NAME = 'urllib' + + def __init__(self, *, enable_file_urls: bool = False, **kwargs): + super().__init__(**kwargs) + self.enable_file_urls = enable_file_urls + if self.enable_file_urls: + self._SUPPORTED_URL_SCHEMES = (*self._SUPPORTED_URL_SCHEMES, 'file') + + def _check_extensions(self, extensions): + super()._check_extensions(extensions) + extensions.pop('cookiejar', None) + extensions.pop('timeout', None) + + def _create_instance(self, proxies, cookiejar): + opener = urllib.request.OpenerDirector() + handlers = [ + ProxyHandler(proxies), + HTTPHandler( + debuglevel=int(bool(self.verbose)), + context=self._make_sslcontext(), + source_address=self.source_address), + HTTPCookieProcessor(cookiejar), + DataHandler(), + UnknownHandler(), + HTTPDefaultErrorHandler(), + FTPHandler(), + HTTPErrorProcessor(), + RedirectHandler(), + ] + + if self.enable_file_urls: + handlers.append(FileHandler()) + + for handler in handlers: + opener.add_handler(handler) + + # Delete the default user-agent header, which would otherwise apply in + # cases where our custom HTTP handler doesn't come into play + # (See https://github.com/ytdl-org/youtube-dl/issues/1309 for details) + opener.addheaders = [] + return opener + + def _send(self, request): + headers = self._merge_headers(request.headers) + add_accept_encoding_header(headers, SUPPORTED_ENCODINGS) + urllib_req = urllib.request.Request( + url=request.url, + data=request.data, + headers=dict(headers), + method=request.method + ) + + opener = self._get_instance( + proxies=request.proxies or self.proxies, + cookiejar=request.extensions.get('cookiejar') or self.cookiejar + ) + try: + res = opener.open(urllib_req, timeout=float(request.extensions.get('timeout') or self.timeout)) + except urllib.error.HTTPError as e: + if isinstance(e.fp, (http.client.HTTPResponse, urllib.response.addinfourl)): + # Prevent file object from being closed when urllib.error.HTTPError is destroyed. 
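            # urllib's HTTPError is built on urllib.response.addinfourl, whose
            # underlying tempfile._TemporaryFileWrapper machinery closes e.fp when
            # the error object is garbage-collected; flagging close_called keeps
            # the error body readable through the UrllibResponseAdapter below.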
+ e._closer.close_called = True + raise HTTPError(UrllibResponseAdapter(e.fp), redirect_loop='redirect error' in str(e)) from e + raise # unexpected + except urllib.error.URLError as e: + cause = e.reason # NOTE: cause may be a string + + # proxy errors + if 'tunnel connection failed' in str(cause).lower() or isinstance(cause, SocksProxyError): + raise ProxyError(cause=e) from e + + handle_response_read_exceptions(cause) + raise TransportError(cause=e) from e + except (http.client.InvalidURL, ValueError) as e: + # Validation errors + # http.client.HTTPConnection raises ValueError in some validation cases + # such as if request method contains illegal control characters [1] + # 1. https://github.com/python/cpython/blob/987b712b4aeeece336eed24fcc87a950a756c3e2/Lib/http/client.py#L1256 + raise RequestError(cause=e) from e + except Exception as e: + handle_response_read_exceptions(e) + raise # unexpected + + return UrllibResponseAdapter(res) diff --git a/python/lib/python3.10/site-packages/yt_dlp/networking/common.py b/python/lib/python3.10/site-packages/yt_dlp/networking/common.py new file mode 100644 index 0000000..584c7bb --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/networking/common.py @@ -0,0 +1,564 @@ +from __future__ import annotations + +import abc +import copy +import enum +import functools +import io +import typing +import urllib.parse +import urllib.request +import urllib.response +from collections.abc import Iterable, Mapping +from email.message import Message +from http import HTTPStatus + +from ._helper import make_ssl_context, wrap_request_errors +from .exceptions import ( + NoSupportingHandlers, + RequestError, + TransportError, + UnsupportedRequest, +) +from ..compat.types import NoneType +from ..cookies import YoutubeDLCookieJar +from ..utils import ( + bug_reports_message, + classproperty, + deprecation_warning, + error_to_str, + update_url_query, +) +from ..utils.networking import HTTPHeaderDict, normalize_url + + +def register_preference(*handlers: type[RequestHandler]): + assert all(issubclass(handler, RequestHandler) for handler in handlers) + + def outer(preference: Preference): + @functools.wraps(preference) + def inner(handler, *args, **kwargs): + if not handlers or isinstance(handler, handlers): + return preference(handler, *args, **kwargs) + return 0 + _RH_PREFERENCES.add(inner) + return inner + return outer + + +class RequestDirector: + """RequestDirector class + + Helper class that, when given a request, forwards it to a RequestHandler that supports it. + + Preference functions in the form of func(handler, request) -> int + can be registered into the `preferences` set. These are used to sort handlers + in order of preference. + + @param logger: Logger instance. + @param verbose: Print debug request information to stdout. + """ + + def __init__(self, logger, verbose=False): + self.handlers: dict[str, RequestHandler] = {} + self.preferences: set[Preference] = set() + self.logger = logger # TODO(Grub4k): default logger + self.verbose = verbose + + def close(self): + for handler in self.handlers.values(): + handler.close() + + def add_handler(self, handler: RequestHandler): + """Add a handler. If a handler of the same RH_KEY exists, it will overwrite it""" + assert isinstance(handler, RequestHandler), 'handler must be a RequestHandler' + self.handlers[handler.RH_KEY] = handler +
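    # A rough wiring sketch (hypothetical names; in yt-dlp proper, YoutubeDL
    # builds and owns the director):
    #
    #   director = RequestDirector(logger=my_logger)
    #   director.add_handler(UrllibRH(logger=my_logger))
    #   response = director.send(Request('https://example.com'))
    #
    # where my_logger provides the stdout()/error() methods used above and
    # UrllibRH comes from networking._urllib.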
+ def _get_handlers(self, request: Request) -> list[RequestHandler]: + """Sorts handlers by preference, given a request""" + preferences = { + rh: sum(pref(rh, request) for pref in self.preferences) + for rh in self.handlers.values() + } + self._print_verbose('Handler preferences for this request: %s' % ', '.join( + f'{rh.RH_NAME}={pref}' for rh, pref in preferences.items())) + return sorted(self.handlers.values(), key=preferences.get, reverse=True) + + def _print_verbose(self, msg): + if self.verbose: + self.logger.stdout(f'director: {msg}') + + def send(self, request: Request) -> Response: + """ + Passes a request onto a suitable RequestHandler + """ + if not self.handlers: + raise RequestError('No request handlers configured') + + assert isinstance(request, Request) + + unexpected_errors = [] + unsupported_errors = [] + for handler in self._get_handlers(request): + self._print_verbose(f'Checking if "{handler.RH_NAME}" supports this request.') + try: + handler.validate(request) + except UnsupportedRequest as e: + self._print_verbose( + f'"{handler.RH_NAME}" cannot handle this request (reason: {error_to_str(e)})') + unsupported_errors.append(e) + continue + + self._print_verbose(f'Sending request via "{handler.RH_NAME}"') + try: + response = handler.send(request) + except RequestError: + raise + except Exception as e: + self.logger.error( + f'[{handler.RH_NAME}] Unexpected error: {error_to_str(e)}{bug_reports_message()}', + is_error=False) + unexpected_errors.append(e) + continue + + assert isinstance(response, Response) + return response + + raise NoSupportingHandlers(unsupported_errors, unexpected_errors) + + +_REQUEST_HANDLERS = {} + + +def register_rh(handler): + """Register a RequestHandler class""" + assert issubclass(handler, RequestHandler), f'{handler} must be a subclass of RequestHandler' + assert handler.RH_KEY not in _REQUEST_HANDLERS, f'RequestHandler {handler.RH_KEY} already registered' + _REQUEST_HANDLERS[handler.RH_KEY] = handler + return handler + + +class Features(enum.Enum): + ALL_PROXY = enum.auto() + NO_PROXY = enum.auto() + + +class RequestHandler(abc.ABC): + + """Request Handler class + + Request handlers are classes that, given a Request, + process the request from start to finish and return a Response. + + Concrete subclasses need to redefine the _send(request) method, + which handles the underlying request logic and returns a Response. + + RH_NAME class variable may contain a display name for the RequestHandler. + By default, this is generated from the class name. + + The concrete request handler MUST have "RH" as the suffix in the class name. + + All exceptions raised by a RequestHandler should be an instance of RequestError. + Any other exception raised will be treated as a handler issue. + + If a Request is not supported by the handler, an UnsupportedRequest + should be raised with a reason. + + By default, some checks are done on the request in _validate() based on the following class variables: + - `_SUPPORTED_URL_SCHEMES`: a tuple of supported url schemes. + Any Request with a url scheme not in this list will raise an UnsupportedRequest. + + - `_SUPPORTED_PROXY_SCHEMES`: a tuple of supported proxy url schemes. Any Request that contains + a proxy url with a url scheme not in this list will raise an UnsupportedRequest.
+ + - `_SUPPORTED_FEATURES`: a tuple of supported features, as defined in Features enum. + + The above may be set to None to disable the checks. + + Parameters: + @param logger: logger instance + @param headers: HTTP Headers to include when sending requests. + @param cookiejar: Cookiejar to use for requests. + @param timeout: Socket timeout to use when sending requests. + @param proxies: Proxies to use for sending requests. + @param source_address: Client-side IP address to bind to for requests. + @param verbose: Print debug request and traffic information to stdout. + @param prefer_system_certs: Whether to prefer system certificates over other means (e.g. certifi). + @param client_cert: SSL client certificate configuration. + dict with {client_certificate, client_certificate_key, client_certificate_password} + @param verify: Verify SSL certificates + @param legacy_ssl_support: Enable legacy SSL options such as legacy server connect and older cipher support. + + Some configuration options may be available for individual Requests too. In this case, + either the Request configuration option takes precedence or they are merged. + + Requests may have additional optional parameters defined as extensions. + RequestHandler subclasses may choose to support custom extensions. + + If an extension is supported, subclasses should extend _check_extensions(extensions) + to pop and validate the extension. + - Extensions left in `extensions` are treated as unsupported and UnsupportedRequest will be raised. + + The following extensions are defined for RequestHandler: + - `cookiejar`: Cookiejar to use for this request. + - `timeout`: socket timeout to use for this request. + To enable these, add extensions.pop('<extension>', None) to _check_extensions + + Apart from the url protocol, proxies dict may contain the following keys: + - `all`: proxy to use for all protocols. Used as a fallback if no proxy is set for a specific protocol. + - `no`: comma separated list of hostnames (optionally with port) to not use a proxy for. + Note: a RequestHandler may not support these, as defined in `_SUPPORTED_FEATURES`. 
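+ + For illustration, a minimal concrete handler might look like the following (`ExampleRH` is an illustrative name, not a handler shipped with yt-dlp): + + class ExampleRH(RequestHandler): + _SUPPORTED_URL_SCHEMES = ('http', 'https') + + def _send(self, request): + ... # perform the transfer and return a Response, + # raising a RequestError subclass on failure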
+ + """ + + _SUPPORTED_URL_SCHEMES = () + _SUPPORTED_PROXY_SCHEMES = () + _SUPPORTED_FEATURES = () + + def __init__( + self, *, + logger, # TODO(Grub4k): default logger + headers: HTTPHeaderDict = None, + cookiejar: YoutubeDLCookieJar = None, + timeout: float | int | None = None, + proxies: dict = None, + source_address: str = None, + verbose: bool = False, + prefer_system_certs: bool = False, + client_cert: dict[str, str | None] = None, + verify: bool = True, + legacy_ssl_support: bool = False, + **_, + ): + + self._logger = logger + self.headers = headers or {} + self.cookiejar = cookiejar if cookiejar is not None else YoutubeDLCookieJar() + self.timeout = float(timeout or 20) + self.proxies = proxies or {} + self.source_address = source_address + self.verbose = verbose + self.prefer_system_certs = prefer_system_certs + self._client_cert = client_cert or {} + self.verify = verify + self.legacy_ssl_support = legacy_ssl_support + super().__init__() + + def _make_sslcontext(self): + return make_ssl_context( + verify=self.verify, + legacy_support=self.legacy_ssl_support, + use_certifi=not self.prefer_system_certs, + **self._client_cert, + ) + + def _merge_headers(self, request_headers): + return HTTPHeaderDict(self.headers, request_headers) + + def _check_url_scheme(self, request: Request): + scheme = urllib.parse.urlparse(request.url).scheme.lower() + if self._SUPPORTED_URL_SCHEMES is not None and scheme not in self._SUPPORTED_URL_SCHEMES: + raise UnsupportedRequest(f'Unsupported url scheme: "{scheme}"') + return scheme # for further processing + + def _check_proxies(self, proxies): + for proxy_key, proxy_url in proxies.items(): + if proxy_url is None: + continue + if proxy_key == 'no': + if self._SUPPORTED_FEATURES is not None and Features.NO_PROXY not in self._SUPPORTED_FEATURES: + raise UnsupportedRequest('"no" proxy is not supported') + continue + if ( + proxy_key == 'all' + and self._SUPPORTED_FEATURES is not None + and Features.ALL_PROXY not in self._SUPPORTED_FEATURES + ): + raise UnsupportedRequest('"all" proxy is not supported') + + # Unlikely this handler will use this proxy, so ignore. + # This is to allow a case where a proxy may be set for a protocol + # for one handler in which such protocol (and proxy) is not supported by another handler. + if self._SUPPORTED_URL_SCHEMES is not None and proxy_key not in (*self._SUPPORTED_URL_SCHEMES, 'all'): + continue + + if self._SUPPORTED_PROXY_SCHEMES is None: + # Skip proxy scheme checks + continue + + try: + if urllib.request._parse_proxy(proxy_url)[0] is None: + # Scheme-less proxies are not supported + raise UnsupportedRequest(f'Proxy "{proxy_url}" missing scheme') + except ValueError as e: + # parse_proxy may raise on some invalid proxy urls such as "/a/b/c" + raise UnsupportedRequest(f'Invalid proxy url "{proxy_url}": {e}') + + scheme = urllib.parse.urlparse(proxy_url).scheme.lower() + if scheme not in self._SUPPORTED_PROXY_SCHEMES: + raise UnsupportedRequest(f'Unsupported proxy type: "{scheme}"') + + def _check_extensions(self, extensions): + """Check extensions for unsupported extensions. 
Subclasses should extend this.""" + assert isinstance(extensions.get('cookiejar'), (YoutubeDLCookieJar, NoneType)) + assert isinstance(extensions.get('timeout'), (float, int, NoneType)) + + def _validate(self, request): + self._check_url_scheme(request) + self._check_proxies(request.proxies or self.proxies) + extensions = request.extensions.copy() + self._check_extensions(extensions) + if extensions: + # TODO: add support for optional extensions + raise UnsupportedRequest(f'Unsupported extensions: {", ".join(extensions.keys())}') + + @wrap_request_errors + def validate(self, request: Request): + if not isinstance(request, Request): + raise TypeError('Expected an instance of Request') + self._validate(request) + + @wrap_request_errors + def send(self, request: Request) -> Response: + if not isinstance(request, Request): + raise TypeError('Expected an instance of Request') + return self._send(request) + + @abc.abstractmethod + def _send(self, request: Request): + """Handle a request from start to finish. Redefine in subclasses.""" + pass + + def close(self): + pass + + @classproperty + def RH_NAME(cls): + return cls.__name__[:-2] + + @classproperty + def RH_KEY(cls): + assert cls.__name__.endswith('RH'), 'RequestHandler class names must end with "RH"' + return cls.__name__[:-2] + + def __enter__(self): + return self + + def __exit__(self, *args): + self.close() + + +class Request: + """ + Represents a request to be made. + Partially backwards-compatible with urllib.request.Request. + + @param url: url to send. Will be sanitized. + @param data: payload data to send. Must be bytes, iterable of bytes, a file-like object or None + @param headers: headers to send. + @param proxies: proxy dict mapping of proto:proxy to use for the request and any redirects. + @param query: URL query parameters to update the url with. + @param method: HTTP method to use. If no method specified, will use POST if payload data is present else GET + @param extensions: Dictionary of Request extensions to add, as supported by handlers. 
+ """ + + def __init__( + self, + url: str, + data: RequestData = None, + headers: typing.Mapping = None, + proxies: dict = None, + query: dict = None, + method: str = None, + extensions: dict = None + ): + + self._headers = HTTPHeaderDict() + self._data = None + + if query: + url = update_url_query(url, query) + + self.url = url + self.method = method + if headers: + self.headers = headers + self.data = data # note: must be done after setting headers + self.proxies = proxies or {} + self.extensions = extensions or {} + + @property + def url(self): + return self._url + + @url.setter + def url(self, url): + if not isinstance(url, str): + raise TypeError('url must be a string') + elif url.startswith('//'): + url = 'http:' + url + self._url = normalize_url(url) + + @property + def method(self): + return self._method or ('POST' if self.data is not None else 'GET') + + @method.setter + def method(self, method): + if method is None: + self._method = None + elif isinstance(method, str): + self._method = method.upper() + else: + raise TypeError('method must be a string') + + @property + def data(self): + return self._data + + @data.setter + def data(self, data: RequestData): + # Try catch some common mistakes + if data is not None and ( + not isinstance(data, (bytes, io.IOBase, Iterable)) or isinstance(data, (str, Mapping)) + ): + raise TypeError('data must be bytes, iterable of bytes, or a file-like object') + + if data == self._data and self._data is None: + self.headers.pop('Content-Length', None) + + # https://docs.python.org/3/library/urllib.request.html#urllib.request.Request.data + if data != self._data: + if self._data is not None: + self.headers.pop('Content-Length', None) + self._data = data + + if self._data is None: + self.headers.pop('Content-Type', None) + + if 'Content-Type' not in self.headers and self._data is not None: + self.headers['Content-Type'] = 'application/x-www-form-urlencoded' + + @property + def headers(self) -> HTTPHeaderDict: + return self._headers + + @headers.setter + def headers(self, new_headers: Mapping): + """Replaces headers of the request. If not a CaseInsensitiveDict, it will be converted to one.""" + if isinstance(new_headers, HTTPHeaderDict): + self._headers = new_headers + elif isinstance(new_headers, Mapping): + self._headers = HTTPHeaderDict(new_headers) + else: + raise TypeError('headers must be a mapping') + + def update(self, url=None, data=None, headers=None, query=None): + self.data = data if data is not None else self.data + self.headers.update(headers or {}) + self.url = update_url_query(url or self.url, query or {}) + + def copy(self): + return self.__class__( + url=self.url, + headers=copy.deepcopy(self.headers), + proxies=copy.deepcopy(self.proxies), + data=self._data, + extensions=copy.copy(self.extensions), + method=self._method, + ) + + +HEADRequest = functools.partial(Request, method='HEAD') +PUTRequest = functools.partial(Request, method='PUT') + + +class Response(io.IOBase): + """ + Base class for HTTP response adapters. + + By default, it provides a basic wrapper for a file-like response object. + + Interface partially backwards-compatible with addinfourl and http.client.HTTPResponse. + + @param fp: Original, file-like, response. + @param url: URL that this is a response of. + @param headers: response headers. + @param status: Response HTTP status code. Default is 200 OK. + @param reason: HTTP status reason. Will use built-in reasons based on status code if not provided. 
+ """ + + def __init__( + self, + fp: typing.IO, + url: str, + headers: Mapping[str, str], + status: int = 200, + reason: str = None): + + self.fp = fp + self.headers = Message() + for name, value in headers.items(): + self.headers.add_header(name, value) + self.status = status + self.url = url + try: + self.reason = reason or HTTPStatus(status).phrase + except ValueError: + self.reason = None + + def readable(self): + return self.fp.readable() + + def read(self, amt: int = None) -> bytes: + # Expected errors raised here should be of type RequestError or subclasses. + # Subclasses should redefine this method with more precise error handling. + try: + return self.fp.read(amt) + except Exception as e: + raise TransportError(cause=e) from e + + def close(self): + self.fp.close() + return super().close() + + def get_header(self, name, default=None): + """Get header for name. + If there are multiple matching headers, return all seperated by comma.""" + headers = self.headers.get_all(name) + if not headers: + return default + if name.title() == 'Set-Cookie': + # Special case, only get the first one + # https://www.rfc-editor.org/rfc/rfc9110.html#section-5.3-4.1 + return headers[0] + return ', '.join(headers) + + # The following methods are for compatability reasons and are deprecated + @property + def code(self): + deprecation_warning('Response.code is deprecated, use Response.status', stacklevel=2) + return self.status + + def getcode(self): + deprecation_warning('Response.getcode() is deprecated, use Response.status', stacklevel=2) + return self.status + + def geturl(self): + deprecation_warning('Response.geturl() is deprecated, use Response.url', stacklevel=2) + return self.url + + def info(self): + deprecation_warning('Response.info() is deprecated, use Response.headers', stacklevel=2) + return self.headers + + def getheader(self, name, default=None): + deprecation_warning('Response.getheader() is deprecated, use Response.get_header', stacklevel=2) + return self.get_header(name, default) + + +if typing.TYPE_CHECKING: + RequestData = bytes | Iterable[bytes] | typing.IO | None + Preference = typing.Callable[[RequestHandler, Request], int] + +_RH_PREFERENCES: set[Preference] = set() diff --git a/python/lib/python3.10/site-packages/yt_dlp/networking/exceptions.py b/python/lib/python3.10/site-packages/yt_dlp/networking/exceptions.py new file mode 100644 index 0000000..f58dc24 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/networking/exceptions.py @@ -0,0 +1,217 @@ +from __future__ import annotations + +import typing +import urllib.error + +from ..utils import YoutubeDLError, deprecation_warning + +if typing.TYPE_CHECKING: + from .common import RequestHandler, Response + + +class RequestError(YoutubeDLError): + def __init__( + self, + msg: str | None = None, + cause: Exception | str | None = None, + handler: RequestHandler = None + ): + self.handler = handler + self.cause = cause + if not msg and cause: + msg = str(cause) + super().__init__(msg) + + +class UnsupportedRequest(RequestError): + """raised when a handler cannot handle a request""" + pass + + +class NoSupportingHandlers(RequestError): + """raised when no handlers can support a request for various reasons""" + + def __init__(self, unsupported_errors: list[UnsupportedRequest], unexpected_errors: list[Exception]): + self.unsupported_errors = unsupported_errors or [] + self.unexpected_errors = unexpected_errors or [] + + # Print a quick summary of the errors + err_handler_map = {} + for err in unsupported_errors: + 
err_handler_map.setdefault(err.msg, []).append(err.handler.RH_NAME) + + reason_str = ', '.join([f'{msg} ({", ".join(handlers)})' for msg, handlers in err_handler_map.items()]) + if unexpected_errors: + reason_str = ' + '.join(filter(None, [reason_str, f'{len(unexpected_errors)} unexpected error(s)'])) + + err_str = 'Unable to handle request' + if reason_str: + err_str += f': {reason_str}' + + super().__init__(msg=err_str) + + +class TransportError(RequestError): + """Network related errors""" + + +class HTTPError(RequestError): + def __init__(self, response: Response, redirect_loop=False): + self.response = response + self.status = response.status + self.reason = response.reason + self.redirect_loop = redirect_loop + msg = f'HTTP Error {response.status}: {response.reason}' + if redirect_loop: + msg += ' (redirect loop detected)' + + super().__init__(msg=msg) + + def close(self): + self.response.close() + + def __repr__(self): + return f'<HTTPError {self.status}: {self.reason}>' + + +class IncompleteRead(TransportError): + def __init__(self, partial: int, expected: int = None, **kwargs): + self.partial = partial + self.expected = expected + msg = f'{partial} bytes read' + if expected is not None: + msg += f', {expected} more expected' + + super().__init__(msg=msg, **kwargs) + + def __repr__(self): + return f'<IncompleteRead: {self.msg}>' + + +class SSLError(TransportError): + pass + + +class CertificateVerifyError(SSLError): + """Raised when certificate validation has failed""" + pass + + +class ProxyError(TransportError): + pass + + +class _CompatHTTPError(urllib.error.HTTPError, HTTPError): + """ + Provides backwards compatibility with urllib.error.HTTPError. + Do not use this class directly, use HTTPError instead. + """ + + def __init__(self, http_error: HTTPError): + super().__init__( + url=http_error.response.url, + code=http_error.status, + msg=http_error.msg, + hdrs=http_error.response.headers, + fp=http_error.response + ) + self._closer.close_called = True # Disable auto close + self._http_error = http_error + HTTPError.__init__(self, http_error.response, redirect_loop=http_error.redirect_loop) + + @property + def status(self): + return self._http_error.status + + @status.setter + def status(self, value): + return + + @property + def reason(self): + return self._http_error.reason + + @reason.setter + def reason(self, value): + return + + @property + def headers(self): + deprecation_warning('HTTPError.headers is deprecated, use HTTPError.response.headers instead') + return self._http_error.response.headers + + @headers.setter + def headers(self, value): + return + + def info(self): + deprecation_warning('HTTPError.info() is deprecated, use HTTPError.response.headers instead') + return self.response.headers + + def getcode(self): + deprecation_warning('HTTPError.getcode is deprecated, use HTTPError.status instead') + return self.status + + def geturl(self): + deprecation_warning('HTTPError.geturl is deprecated, use HTTPError.response.url instead') + return self.response.url + + @property + def code(self): + deprecation_warning('HTTPError.code is deprecated, use HTTPError.status instead') + return self.status + + @code.setter + def code(self, value): + return + + @property + def url(self): + deprecation_warning('HTTPError.url is deprecated, use HTTPError.response.url instead') + return self.response.url + + @url.setter + def url(self, value): + return + + @property + def hdrs(self): + deprecation_warning('HTTPError.hdrs is deprecated, use HTTPError.response.headers instead') + return
self.response.headers + + @hdrs.setter + def hdrs(self, value): + return + + @property + def filename(self): + deprecation_warning('HTTPError.filename is deprecated, use HTTPError.response.url instead') + return self.response.url + + @filename.setter + def filename(self, value): + return + + def __getattr__(self, name): + # File operations are passed through the response. + # Warn for some commonly used ones + passthrough_warnings = { + 'read': 'response.read()', + # technically possible due to passthrough, but we should discourage this + 'get_header': 'response.get_header()', + 'readable': 'response.readable()', + 'closed': 'response.closed', + 'tell': 'response.tell()', + } + if name in passthrough_warnings: + deprecation_warning(f'HTTPError.{name} is deprecated, use HTTPError.{passthrough_warnings[name]} instead') + return super().__getattr__(name) + + def __str__(self): + return str(self._http_error) + + def __repr__(self): + return repr(self._http_error) + + +network_exceptions = (HTTPError, TransportError) diff --git a/python/lib/python3.10/site-packages/yt_dlp/options.py b/python/lib/python3.10/site-packages/yt_dlp/options.py new file mode 100644 index 0000000..85a6402 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/options.py @@ -0,0 +1,1913 @@ +import collections +import contextlib +import optparse +import os.path +import re +import shlex +import shutil +import string +import sys + +from .compat import compat_expanduser +from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS +from .downloader.external import list_external_downloaders +from .postprocessor import ( + FFmpegExtractAudioPP, + FFmpegMergerPP, + FFmpegSubtitlesConvertorPP, + FFmpegThumbnailsConvertorPP, + FFmpegVideoRemuxerPP, + SponsorBlockPP, +) +from .postprocessor.modify_chapters import DEFAULT_SPONSORBLOCK_CHAPTER_TITLE +from .update import UPDATE_SOURCES, detect_variant, is_non_updateable +from .utils import ( + OUTTMPL_TYPES, + POSTPROCESS_WHEN, + Config, + deprecation_warning, + expand_path, + format_field, + get_executable_path, + get_system_config_dirs, + get_user_config_dirs, + join_nonempty, + orderedSet_from_options, + remove_end, + variadic, + write_string, +) +from .version import CHANNEL, __version__ + + +def parseOpts(overrideArguments=None, ignore_config_files='if_override'): + PACKAGE_NAME = 'yt-dlp' + + root = Config(create_parser()) + if ignore_config_files == 'if_override': + ignore_config_files = overrideArguments is not None + + def read_config(*paths): + path = os.path.join(*paths) + conf = Config.read_file(path, default=None) + if conf is not None: + return conf, path + + def _load_from_config_dirs(config_dirs): + for config_dir in config_dirs: + head, tail = os.path.split(config_dir) + assert tail == PACKAGE_NAME or config_dir == os.path.join(compat_expanduser('~'), f'.{PACKAGE_NAME}') + + yield read_config(head, f'{PACKAGE_NAME}.conf') + if tail.startswith('.'): # ~/.PACKAGE_NAME + yield read_config(head, f'{PACKAGE_NAME}.conf.txt') + yield read_config(config_dir, 'config') + yield read_config(config_dir, 'config.txt') + + def add_config(label, path=None, func=None): + """ Adds config and returns whether to continue """ + if root.parse_known_args()[0].ignoreconfig: + return False + elif func: + assert path is None + args, current_path = next( + filter(None, _load_from_config_dirs(func(PACKAGE_NAME))), (None, None)) + else: + current_path = os.path.join(path, 'yt-dlp.conf') + args = Config.read_file(current_path, default=None) + if args is not None: +
root.append_config(args, current_path, label=label) + return True + + def load_configs(): + yield not ignore_config_files + yield add_config('Portable', get_executable_path()) + yield add_config('Home', expand_path(root.parse_known_args()[0].paths.get('home', '')).strip()) + yield add_config('User', func=get_user_config_dirs) + yield add_config('System', func=get_system_config_dirs) + + opts = optparse.Values({'verbose': True, 'print_help': False}) + try: + try: + if overrideArguments is not None: + root.append_config(overrideArguments, label='Override') + else: + root.append_config(sys.argv[1:], label='Command-line') + loaded_all_configs = all(load_configs()) + except ValueError as err: + raise root.parser.error(err) + + if loaded_all_configs: + # If ignoreconfig is found inside the system configuration file, + # the user configuration is removed + if root.parse_known_args()[0].ignoreconfig: + user_conf = next((i for i, conf in enumerate(root.configs) if conf.label == 'User'), None) + if user_conf is not None: + root.configs.pop(user_conf) + + try: + root.configs[0].load_configs() # Resolve any aliases using --config-location + except ValueError as err: + raise root.parser.error(err) + + opts, args = root.parse_args() + except optparse.OptParseError: + with contextlib.suppress(optparse.OptParseError): + opts, _ = root.parse_known_args(strict=False) + raise + except (SystemExit, KeyboardInterrupt): + opts.verbose = False + raise + finally: + verbose = opts.verbose and f'\n{root}'.replace('\n| ', '\n[debug] ')[1:] + if verbose: + write_string(f'{verbose}\n') + if opts.print_help: + if verbose: + write_string('\n') + root.parser.print_help() + if opts.print_help: + sys.exit() + return root.parser, opts, args + + +class _YoutubeDLHelpFormatter(optparse.IndentedHelpFormatter): + def __init__(self): + # No need to wrap help messages if we're on a wide console + max_width = shutil.get_terminal_size().columns or 80 + # The % is chosen to get a pretty output in README.md + super().__init__(width=max_width, max_help_position=int(0.45 * max_width)) + + @staticmethod + def format_option_strings(option): + """ ('-o', '--option') -> -o, --format METAVAR """ + opts = join_nonempty( + option._short_opts and option._short_opts[0], + option._long_opts and option._long_opts[0], + delim=', ') + if option.takes_value(): + opts += f' {option.metavar}' + return opts + + +class _YoutubeDLOptionParser(optparse.OptionParser): + # optparse is deprecated since python 3.2. So assume a stable interface even for private methods + ALIAS_DEST = '_triggered_aliases' + ALIAS_TRIGGER_LIMIT = 100 + + def __init__(self): + super().__init__( + prog='yt-dlp' if detect_variant() == 'source' else None, + version=__version__, + usage='%prog [OPTIONS] URL [URL...]', + epilog='See full documentation at https://github.com/yt-dlp/yt-dlp#readme', + formatter=_YoutubeDLHelpFormatter(), + conflict_handler='resolve', + ) + self.set_default(self.ALIAS_DEST, collections.defaultdict(int)) + + _UNKNOWN_OPTION = (optparse.BadOptionError, optparse.AmbiguousOptionError) + _BAD_OPTION = optparse.OptionValueError + + def parse_known_args(self, args=None, values=None, strict=True): + """Same as parse_args, but ignore unknown switches. 
Similar to argparse.parse_known_args""" + self.rargs, self.largs = self._get_args(args), [] + self.values = values or self.get_default_values() + while self.rargs: + arg = self.rargs[0] + try: + if arg == '--': + del self.rargs[0] + break + elif arg.startswith('--'): + self._process_long_opt(self.rargs, self.values) + elif arg.startswith('-') and arg != '-': + self._process_short_opts(self.rargs, self.values) + elif self.allow_interspersed_args: + self.largs.append(self.rargs.pop(0)) + else: + break + except optparse.OptParseError as err: + if isinstance(err, self._UNKNOWN_OPTION): + self.largs.append(err.opt_str) + elif strict: + if isinstance(err, self._BAD_OPTION): + self.error(str(err)) + raise + return self.check_values(self.values, self.largs) + + def error(self, msg): + msg = f'{self.get_prog_name()}: error: {str(msg).strip()}\n' + raise optparse.OptParseError(f'{self.get_usage()}\n{msg}' if self.usage else msg) + + def _get_args(self, args): + return sys.argv[1:] if args is None else list(args) + + def _match_long_opt(self, opt): + """Improve ambiguous argument resolution by comparing option objects instead of argument strings""" + try: + return super()._match_long_opt(opt) + except optparse.AmbiguousOptionError as e: + if len({self._long_opt[p] for p in e.possibilities}) == 1: + return e.possibilities[0] + raise + + +def create_parser(): + def _list_from_options_callback(option, opt_str, value, parser, append=True, delim=',', process=str.strip): + # append can be True, False or -1 (prepend) + current = list(getattr(parser.values, option.dest)) if append else [] + value = list(filter(None, [process(value)] if delim is None else map(process, value.split(delim)))) + setattr( + parser.values, option.dest, + current + value if append is True else value + current) + + def _set_from_options_callback( + option, opt_str, value, parser, allowed_values, delim=',', aliases={}, + process=lambda x: x.lower().strip()): + values = [process(value)] if delim is None else map(process, value.split(delim)) + try: + requested = orderedSet_from_options(values, collections.ChainMap(aliases, {'all': allowed_values}), + start=getattr(parser.values, option.dest)) + except ValueError as e: + raise optparse.OptionValueError(f'wrong {option.metavar} for {opt_str}: {e.args[0]}') + + setattr(parser.values, option.dest, set(requested)) + + def _dict_from_options_callback( + option, opt_str, value, parser, + allowed_keys=r'[\w-]+', delimiter=':', default_key=None, process=None, multiple_keys=True, + process_key=str.lower, append=False): + + out_dict = dict(getattr(parser.values, option.dest)) + multiple_args = not isinstance(value, str) + if multiple_keys: + allowed_keys = fr'({allowed_keys})(,({allowed_keys}))*' + mobj = re.match( + fr'(?is)(?P<keys>{allowed_keys}){delimiter}(?P<val>.*)$', + value[0] if multiple_args else value) + if mobj is not None: + keys, val = mobj.group('keys').split(','), mobj.group('val') + if multiple_args: + val = [val, *value[1:]] + elif default_key is not None: + keys, val = variadic(default_key), value + else: + raise optparse.OptionValueError( + f'wrong {opt_str} formatting; it should be {option.metavar}, not "{value}"') + try: + keys = map(process_key, keys) if process_key else keys + val = process(val) if process else val + except Exception as err: + raise optparse.OptionValueError(f'wrong {opt_str} formatting; {err}') + for key in keys: + out_dict[key] = out_dict.get(key, []) + [val] if append else val + setattr(parser.values, option.dest, out_dict) + + def when_prefix(default): 
+ return { + 'default': {}, + 'type': 'str', + 'action': 'callback', + 'callback': _dict_from_options_callback, + 'callback_kwargs': { + 'allowed_keys': '|'.join(map(re.escape, POSTPROCESS_WHEN)), + 'default_key': default, + 'multiple_keys': False, + 'append': True, + }, + } + + parser = _YoutubeDLOptionParser() + alias_group = optparse.OptionGroup(parser, 'Aliases') + Formatter = string.Formatter() + + def _create_alias(option, opt_str, value, parser): + aliases, opts = value + try: + nargs = len({i if f == '' else f + for i, (_, f, _, _) in enumerate(Formatter.parse(opts)) if f is not None}) + opts.format(*map(str, range(nargs))) # validate + except Exception as err: + raise optparse.OptionValueError(f'wrong {opt_str} OPTIONS formatting; {err}') + if alias_group not in parser.option_groups: + parser.add_option_group(alias_group) + + aliases = (x if x.startswith('-') else f'--{x}' for x in map(str.strip, aliases.split(','))) + try: + args = [f'ARG{i}' for i in range(nargs)] + alias_group.add_option( + *aliases, nargs=nargs, dest=parser.ALIAS_DEST, type='str' if nargs else None, + metavar=' '.join(args), help=opts.format(*args), action='callback', + callback=_alias_callback, callback_kwargs={'opts': opts, 'nargs': nargs}) + except Exception as err: + raise optparse.OptionValueError(f'wrong {opt_str} formatting; {err}') + + def _alias_callback(option, opt_str, value, parser, opts, nargs): + counter = getattr(parser.values, option.dest) + counter[opt_str] += 1 + if counter[opt_str] > parser.ALIAS_TRIGGER_LIMIT: + raise optparse.OptionValueError(f'Alias {opt_str} exceeded invocation limit') + if nargs == 1: + value = [value] + assert (nargs == 0 and value is None) or len(value) == nargs + parser.rargs[:0] = shlex.split( + opts if value is None else opts.format(*map(shlex.quote, value))) + + general = optparse.OptionGroup(parser, 'General Options') + general.add_option( + '-h', '--help', dest='print_help', action='store_true', + help='Print this help text and exit') + general.add_option( + '--version', + action='version', + help='Print program version and exit') + general.add_option( + '-U', '--update', + action='store_const', dest='update_self', const=CHANNEL, + help=format_field( + is_non_updateable(), None, 'Check if updates are available. %s', + default=f'Update this program to the latest {CHANNEL} version')) + general.add_option( + '--no-update', + action='store_false', dest='update_self', + help='Do not check for updates (default)') + general.add_option( + '--update-to', + action='store', dest='update_self', metavar='[CHANNEL]@[TAG]', + help=( + 'Upgrade/downgrade to a specific version. CHANNEL can be a repository as well. ' + f'CHANNEL and TAG default to "{CHANNEL.partition("@")[0]}" and "latest" respectively if omitted; ' + f'See "UPDATE" for details. Supported channels: {", ".join(UPDATE_SOURCES)}')) + general.add_option( + '-i', '--ignore-errors', + action='store_true', dest='ignoreerrors', + help='Ignore download and postprocessing errors. The download will be considered successful even if the postprocessing fails') + general.add_option( + '--no-abort-on-error', + action='store_const', dest='ignoreerrors', const='only_download', + help='Continue with next video on download errors; e.g. 
to skip unavailable videos in a playlist (default)') + general.add_option( + '--abort-on-error', '--no-ignore-errors', + action='store_false', dest='ignoreerrors', + help='Abort downloading of further videos if an error occurs (Alias: --no-ignore-errors)') + general.add_option( + '--dump-user-agent', + action='store_true', dest='dump_user_agent', default=False, + help='Display the current user-agent and exit') + general.add_option( + '--list-extractors', + action='store_true', dest='list_extractors', default=False, + help='List all supported extractors and exit') + general.add_option( + '--extractor-descriptions', + action='store_true', dest='list_extractor_descriptions', default=False, + help='Output descriptions of all supported extractors and exit') + general.add_option( + '--use-extractors', '--ies', + action='callback', dest='allowed_extractors', metavar='NAMES', type='str', + default=[], callback=_list_from_options_callback, + help=( + 'Extractor names to use separated by commas. ' + 'You can also use regexes, "all", "default" and "end" (end URL matching); ' + 'e.g. --ies "holodex.*,end,youtube". ' + 'Prefix the name with a "-" to exclude it, e.g. --ies default,-generic. ' + 'Use --list-extractors for a list of extractor names. (Alias: --ies)')) + general.add_option( + '--force-generic-extractor', + action='store_true', dest='force_generic_extractor', default=False, + help=optparse.SUPPRESS_HELP) + general.add_option( + '--default-search', + dest='default_search', metavar='PREFIX', + help=( + 'Use this prefix for unqualified URLs. ' + 'E.g. "gvsearch2:python" downloads two videos from google videos for the search term "python". ' + 'Use the value "auto" to let yt-dlp guess ("auto_warning" to emit a warning when guessing). ' + '"error" just throws an error. The default value "fixup_error" repairs broken URLs, ' + 'but emits an error if this is not possible instead of searching')) + general.add_option( + '--ignore-config', '--no-config', + action='store_true', dest='ignoreconfig', + help=( + 'Don\'t load any more configuration files except those given by --config-locations. ' + 'For backward compatibility, if this option is found inside the system configuration file, the user configuration is not loaded. ' + '(Alias: --no-config)')) + general.add_option( + '--no-config-locations', + action='store_const', dest='config_locations', const=[], + help=( + 'Do not load any custom configuration files (default). When given inside a ' + 'configuration file, ignore all previous --config-locations defined in the current file')) + general.add_option( + '--config-locations', + dest='config_locations', metavar='PATH', action='append', + help=( + 'Location of the main configuration file; either the path to the config or its containing directory ' + '("-" for stdin). Can be used multiple times and inside other configuration files')) + general.add_option( + '--flat-playlist', + action='store_const', dest='extract_flat', const='in_playlist', default=False, + help='Do not extract the videos of a playlist, only list them') + general.add_option( + '--no-flat-playlist', + action='store_false', dest='extract_flat', + help='Fully extract the videos of a playlist (default)') + general.add_option( + '--live-from-start', + action='store_true', dest='live_from_start', + help='Download livestreams from the start. 
Currently only supported for YouTube (Experimental)') + general.add_option( + '--no-live-from-start', + action='store_false', dest='live_from_start', + help='Download livestreams from the current time (default)') + general.add_option( + '--wait-for-video', + dest='wait_for_video', metavar='MIN[-MAX]', default=None, + help=( + 'Wait for scheduled streams to become available. ' + 'Pass the minimum number of seconds (or range) to wait between retries')) + general.add_option( + '--no-wait-for-video', + dest='wait_for_video', action='store_const', const=None, + help='Do not wait for scheduled streams (default)') + general.add_option( + '--mark-watched', + action='store_true', dest='mark_watched', default=False, + help='Mark videos watched (even with --simulate)') + general.add_option( + '--no-mark-watched', + action='store_false', dest='mark_watched', + help='Do not mark videos watched (default)') + general.add_option( + '--no-colors', '--no-colours', + action='store_const', dest='color', const={ + 'stdout': 'no_color', + 'stderr': 'no_color', + }, + help=optparse.SUPPRESS_HELP) + general.add_option( + '--color', + dest='color', metavar='[STREAM:]POLICY', default={}, type='str', + action='callback', callback=_dict_from_options_callback, + callback_kwargs={ + 'allowed_keys': 'stdout|stderr', + 'default_key': ['stdout', 'stderr'], + 'process': str.strip, + }, help=( + 'Whether to emit color codes in output, optionally prefixed by ' + 'the STREAM (stdout or stderr) to apply the setting to. ' + 'Can be one of "always", "auto" (default), "never", or ' + '"no_color" (use non color terminal sequences). ' + 'Can be used multiple times')) + general.add_option( + '--compat-options', + metavar='OPTS', dest='compat_opts', default=set(), type='str', + action='callback', callback=_set_from_options_callback, + callback_kwargs={ + 'allowed_values': { + 'filename', 'filename-sanitization', 'format-sort', 'abort-on-error', 'format-spec', 'no-playlist-metafiles', + 'multistreams', 'no-live-chat', 'playlist-index', 'list-formats', 'no-direct-merge', 'playlist-match-filter', + 'no-attach-info-json', 'embed-thumbnail-atomicparsley', 'no-external-downloader-progress', + 'embed-metadata', 'seperate-video-versions', 'no-clean-infojson', 'no-keep-subs', 'no-certifi', + 'no-youtube-channel-redirect', 'no-youtube-unavailable-videos', 'no-youtube-prefer-utc-upload-date', + }, 'aliases': { + 'youtube-dl': ['all', '-multistreams', '-playlist-match-filter'], + 'youtube-dlc': ['all', '-no-youtube-channel-redirect', '-no-live-chat', '-playlist-match-filter'], + '2021': ['2022', 'no-certifi', 'filename-sanitization', 'no-youtube-prefer-utc-upload-date'], + '2022': ['no-external-downloader-progress', 'playlist-match-filter'], + } + }, help=( + 'Options that can help keep compatibility with youtube-dl or youtube-dlc ' + 'configurations by reverting some of the changes made in yt-dlp. ' + 'See "Differences in default behavior" for details')) + general.add_option( + '--alias', metavar='ALIASES OPTIONS', dest='_', type='str', nargs=2, + action='callback', callback=_create_alias, + help=( + 'Create aliases for an option string. Unless an alias starts with a dash "-", it is prefixed with "--". ' + 'Arguments are parsed according to the Python string formatting mini-language. ' + 'E.g. --alias get-audio,-X "-S=aext:{0},abr -x --audio-format {0}" creates options ' + '"--get-audio" and "-X" that takes an argument (ARG0) and expands to ' + '"-S=aext:ARG0,abr -x --audio-format ARG0". All defined aliases are listed in the --help output. 
' + 'Alias options can trigger more aliases; so be careful to avoid defining recursive options. ' + f'As a safety measure, each alias may be triggered a maximum of {_YoutubeDLOptionParser.ALIAS_TRIGGER_LIMIT} times. ' + 'This option can be used multiple times')) + + network = optparse.OptionGroup(parser, 'Network Options') + network.add_option( + '--proxy', dest='proxy', + default=None, metavar='URL', + help=( + 'Use the specified HTTP/HTTPS/SOCKS proxy. To enable SOCKS proxy, specify a proper scheme, ' + 'e.g. socks5://user:pass@127.0.0.1:1080/. Pass in an empty string (--proxy "") for direct connection')) + network.add_option( + '--socket-timeout', + dest='socket_timeout', type=float, default=None, metavar='SECONDS', + help='Time to wait before giving up, in seconds') + network.add_option( + '--source-address', + metavar='IP', dest='source_address', default=None, + help='Client-side IP address to bind to', + ) + network.add_option( + '-4', '--force-ipv4', + action='store_const', const='0.0.0.0', dest='source_address', + help='Make all connections via IPv4', + ) + network.add_option( + '-6', '--force-ipv6', + action='store_const', const='::', dest='source_address', + help='Make all connections via IPv6', + ) + network.add_option( + '--enable-file-urls', action='store_true', + dest='enable_file_urls', default=False, + help='Enable file:// URLs. This is disabled by default for security reasons.' + ) + + geo = optparse.OptionGroup(parser, 'Geo-restriction') + geo.add_option( + '--geo-verification-proxy', + dest='geo_verification_proxy', default=None, metavar='URL', + help=( + 'Use this proxy to verify the IP address for some geo-restricted sites. ' + 'The default proxy specified by --proxy (or none, if the option is not present) is used for the actual downloading')) + geo.add_option( + '--cn-verification-proxy', + dest='cn_verification_proxy', default=None, metavar='URL', + help=optparse.SUPPRESS_HELP) + geo.add_option( + '--xff', metavar='VALUE', + dest='geo_bypass', default='default', + help=( + 'How to fake X-Forwarded-For HTTP header to try bypassing geographic restriction. ' + 'One of "default" (only when known to be useful), "never", ' + 'an IP block in CIDR notation, or a two-letter ISO 3166-2 country code')) + geo.add_option( + '--geo-bypass', + action='store_const', dest='geo_bypass', const='default', + help=optparse.SUPPRESS_HELP) + geo.add_option( + '--no-geo-bypass', + action='store_const', dest='geo_bypass', const='never', + help=optparse.SUPPRESS_HELP) + geo.add_option( + '--geo-bypass-country', metavar='CODE', dest='geo_bypass', + help=optparse.SUPPRESS_HELP) + geo.add_option( + '--geo-bypass-ip-block', metavar='IP_BLOCK', dest='geo_bypass', + help=optparse.SUPPRESS_HELP) + + selection = optparse.OptionGroup(parser, 'Video Selection') + selection.add_option( + '--playlist-start', + dest='playliststart', metavar='NUMBER', default=1, type=int, + help=optparse.SUPPRESS_HELP) + selection.add_option( + '--playlist-end', + dest='playlistend', metavar='NUMBER', default=None, type=int, + help=optparse.SUPPRESS_HELP) + selection.add_option( + '-I', '--playlist-items', + dest='playlist_items', metavar='ITEM_SPEC', default=None, + help=( + 'Comma separated playlist_index of the items to download. ' + 'You can specify a range using "[START]:[STOP][:STEP]". For backward compatibility, START-STOP is also supported. ' + 'Use negative indices to count from the right and negative STEP to download in reverse order. ' + 'E.g. 
"-I 1:3,7,-5::2" used on a playlist of size 15 will download the items at index 1,2,3,7,11,13,15')) + selection.add_option( + '--match-title', + dest='matchtitle', metavar='REGEX', + help=optparse.SUPPRESS_HELP) + selection.add_option( + '--reject-title', + dest='rejecttitle', metavar='REGEX', + help=optparse.SUPPRESS_HELP) + selection.add_option( + '--min-filesize', + metavar='SIZE', dest='min_filesize', default=None, + help='Abort download if filesize is smaller than SIZE, e.g. 50k or 44.6M') + selection.add_option( + '--max-filesize', + metavar='SIZE', dest='max_filesize', default=None, + help='Abort download if filesize is larger than SIZE, e.g. 50k or 44.6M') + selection.add_option( + '--date', + metavar='DATE', dest='date', default=None, + help=( + 'Download only videos uploaded on this date. ' + 'The date can be "YYYYMMDD" or in the format [now|today|yesterday][-N[day|week|month|year]]. ' + 'E.g. "--date today-2weeks" downloads only videos uploaded on the same day two weeks ago')) + selection.add_option( + '--datebefore', + metavar='DATE', dest='datebefore', default=None, + help=( + 'Download only videos uploaded on or before this date. ' + 'The date formats accepted is the same as --date')) + selection.add_option( + '--dateafter', + metavar='DATE', dest='dateafter', default=None, + help=( + 'Download only videos uploaded on or after this date. ' + 'The date formats accepted is the same as --date')) + selection.add_option( + '--min-views', + metavar='COUNT', dest='min_views', default=None, type=int, + help=optparse.SUPPRESS_HELP) + selection.add_option( + '--max-views', + metavar='COUNT', dest='max_views', default=None, type=int, + help=optparse.SUPPRESS_HELP) + selection.add_option( + '--match-filters', + metavar='FILTER', dest='match_filter', action='append', + help=( + 'Generic video filter. Any "OUTPUT TEMPLATE" field can be compared with a ' + 'number or a string using the operators defined in "Filtering Formats". ' + 'You can also simply specify a field to match if the field is present, ' + 'use "!field" to check if the field is not present, and "&" to check multiple conditions. ' + 'Use a "\\" to escape "&" or quotes if needed. If used multiple times, ' + 'the filter matches if atleast one of the conditions are met. E.g. --match-filter ' + '!is_live --match-filter "like_count>?100 & description~=\'(?i)\\bcats \\& dogs\\b\'" ' + 'matches only videos that are not live OR those that have a like count more than 100 ' + '(or the like field is not available) and also has a description ' + 'that contains the phrase "cats & dogs" (caseless). 
' + 'Use "--match-filter -" to interactively ask whether to download each video')) + selection.add_option( + '--no-match-filters', + dest='match_filter', action='store_const', const=None, + help='Do not use any --match-filter (default)') + selection.add_option( + '--break-match-filters', + metavar='FILTER', dest='breaking_match_filter', action='append', + help='Same as "--match-filters" but stops the download process when a video is rejected') + selection.add_option( + '--no-break-match-filters', + dest='breaking_match_filter', action='store_const', const=None, + help='Do not use any --break-match-filters (default)') + selection.add_option( + '--no-playlist', + action='store_true', dest='noplaylist', default=False, + help='Download only the video, if the URL refers to a video and a playlist') + selection.add_option( + '--yes-playlist', + action='store_false', dest='noplaylist', + help='Download the playlist, if the URL refers to a video and a playlist') + selection.add_option( + '--age-limit', + metavar='YEARS', dest='age_limit', default=None, type=int, + help='Download only videos suitable for the given age') + selection.add_option( + '--download-archive', metavar='FILE', + dest='download_archive', + help='Download only videos not listed in the archive file. Record the IDs of all downloaded videos in it') + selection.add_option( + '--no-download-archive', + dest='download_archive', action="store_const", const=None, + help='Do not use archive file (default)') + selection.add_option( + '--max-downloads', + dest='max_downloads', metavar='NUMBER', type=int, default=None, + help='Abort after downloading NUMBER files') + selection.add_option( + '--break-on-existing', + action='store_true', dest='break_on_existing', default=False, + help='Stop the download process when encountering a file that is in the archive') + selection.add_option( + '--break-on-reject', + action='store_true', dest='break_on_reject', default=False, + help=optparse.SUPPRESS_HELP) + selection.add_option( + '--break-per-input', + action='store_true', dest='break_per_url', default=False, + help='Alters --max-downloads, --break-on-existing, --break-match-filter, and autonumber to reset per input URL') + selection.add_option( + '--no-break-per-input', + action='store_false', dest='break_per_url', + help='--break-on-existing and similar options terminate the entire download queue') + selection.add_option( + '--skip-playlist-after-errors', metavar='N', + dest='skip_playlist_after_errors', default=None, type=int, + help='Number of allowed failures until the rest of the playlist is skipped') + selection.add_option( + '--include-ads', + dest='include_ads', action='store_true', + help=optparse.SUPPRESS_HELP) + selection.add_option( + '--no-include-ads', + dest='include_ads', action='store_false', + help=optparse.SUPPRESS_HELP) + + authentication = optparse.OptionGroup(parser, 'Authentication Options') + authentication.add_option( + '-u', '--username', + dest='username', metavar='USERNAME', + help='Login with this account ID') + authentication.add_option( + '-p', '--password', + dest='password', metavar='PASSWORD', + help='Account password. 
If this option is left out, yt-dlp will ask interactively') + authentication.add_option( + '-2', '--twofactor', + dest='twofactor', metavar='TWOFACTOR', + help='Two-factor authentication code') + authentication.add_option( + '-n', '--netrc', + action='store_true', dest='usenetrc', default=False, + help='Use .netrc authentication data') + authentication.add_option( + '--netrc-location', + dest='netrc_location', metavar='PATH', + help='Location of .netrc authentication data; either the path or its containing directory. Defaults to ~/.netrc') + authentication.add_option( + '--netrc-cmd', + dest='netrc_cmd', metavar='NETRC_CMD', + help='Command to execute to get the credentials for an extractor.') + authentication.add_option( + '--video-password', + dest='videopassword', metavar='PASSWORD', + help='Video-specific password') + authentication.add_option( + '--ap-mso', + dest='ap_mso', metavar='MSO', + help='Adobe Pass multiple-system operator (TV provider) identifier, use --ap-list-mso for a list of available MSOs') + authentication.add_option( + '--ap-username', + dest='ap_username', metavar='USERNAME', + help='Multiple-system operator account login') + authentication.add_option( + '--ap-password', + dest='ap_password', metavar='PASSWORD', + help='Multiple-system operator account password. If this option is left out, yt-dlp will ask interactively') + authentication.add_option( + '--ap-list-mso', + action='store_true', dest='ap_list_mso', default=False, + help='List all supported multiple-system operators') + authentication.add_option( + '--client-certificate', + dest='client_certificate', metavar='CERTFILE', + help='Path to client certificate file in PEM format. May include the private key') + authentication.add_option( + '--client-certificate-key', + dest='client_certificate_key', metavar='KEYFILE', + help='Path to private key file for client certificate') + authentication.add_option( + '--client-certificate-password', + dest='client_certificate_password', metavar='PASSWORD', + help='Password for client certificate private key, if encrypted. 
' + 'If not provided, and the key is encrypted, yt-dlp will ask interactively') + + video_format = optparse.OptionGroup(parser, 'Video Format Options') + video_format.add_option( + '-f', '--format', + action='store', dest='format', metavar='FORMAT', default=None, + help='Video format code, see "FORMAT SELECTION" for more details') + video_format.add_option( + '-S', '--format-sort', metavar='SORTORDER', + dest='format_sort', default=[], type='str', action='callback', + callback=_list_from_options_callback, callback_kwargs={'append': -1}, + help='Sort the formats by the fields given, see "Sorting Formats" for more details') + video_format.add_option( + '--format-sort-force', '--S-force', + action='store_true', dest='format_sort_force', metavar='FORMAT', default=False, + help=( + 'Force user specified sort order to have precedence over all fields, ' + 'see "Sorting Formats" for more details (Alias: --S-force)')) + video_format.add_option( + '--no-format-sort-force', + action='store_false', dest='format_sort_force', metavar='FORMAT', default=False, + help='Some fields have precedence over the user specified sort order (default)') + video_format.add_option( + '--video-multistreams', + action='store_true', dest='allow_multiple_video_streams', default=None, + help='Allow multiple video streams to be merged into a single file') + video_format.add_option( + '--no-video-multistreams', + action='store_false', dest='allow_multiple_video_streams', + help='Only one video stream is downloaded for each output file (default)') + video_format.add_option( + '--audio-multistreams', + action='store_true', dest='allow_multiple_audio_streams', default=None, + help='Allow multiple audio streams to be merged into a single file') + video_format.add_option( + '--no-audio-multistreams', + action='store_false', dest='allow_multiple_audio_streams', + help='Only one audio stream is downloaded for each output file (default)') + video_format.add_option( + '--all-formats', + action='store_const', dest='format', const='all', + help=optparse.SUPPRESS_HELP) + video_format.add_option( + '--prefer-free-formats', + action='store_true', dest='prefer_free_formats', default=False, + help=( + 'Prefer video formats with free containers over non-free ones of same quality. ' + 'Use with "-S ext" to strictly prefer free containers irrespective of quality')) + video_format.add_option( + '--no-prefer-free-formats', + action='store_false', dest='prefer_free_formats', default=False, + help="Don't give any special preference to free containers (default)") + video_format.add_option( + '--check-formats', + action='store_const', const='selected', dest='check_formats', default=None, + help='Make sure formats are selected only from those that are actually downloadable') + video_format.add_option( + '--check-all-formats', + action='store_true', dest='check_formats', + help='Check all formats for whether they are actually downloadable') + video_format.add_option( + '--no-check-formats', + action='store_false', dest='check_formats', + help='Do not check that the formats are actually downloadable') + video_format.add_option( + '-F', '--list-formats', + action='store_true', dest='listformats', + help='List available formats of each video. 
Simulate unless --no-simulate is used') + video_format.add_option( + '--list-formats-as-table', + action='store_true', dest='listformats_table', default=True, + help=optparse.SUPPRESS_HELP) + video_format.add_option( + '--list-formats-old', '--no-list-formats-as-table', + action='store_false', dest='listformats_table', + help=optparse.SUPPRESS_HELP) + video_format.add_option( + '--merge-output-format', + action='store', dest='merge_output_format', metavar='FORMAT', default=None, + help=( + 'Containers that may be used when merging formats, separated by "/", e.g. "mp4/mkv". ' + 'Ignored if no merge is required. ' + f'(currently supported: {", ".join(sorted(FFmpegMergerPP.SUPPORTED_EXTS))})')) + video_format.add_option( + '--allow-unplayable-formats', + action='store_true', dest='allow_unplayable_formats', default=False, + help=optparse.SUPPRESS_HELP) + video_format.add_option( + '--no-allow-unplayable-formats', + action='store_false', dest='allow_unplayable_formats', + help=optparse.SUPPRESS_HELP) + + subtitles = optparse.OptionGroup(parser, 'Subtitle Options') + subtitles.add_option( + '--write-subs', '--write-srt', + action='store_true', dest='writesubtitles', default=False, + help='Write subtitle file') + subtitles.add_option( + '--no-write-subs', '--no-write-srt', + action='store_false', dest='writesubtitles', + help='Do not write subtitle file (default)') + subtitles.add_option( + '--write-auto-subs', '--write-automatic-subs', + action='store_true', dest='writeautomaticsub', default=False, + help='Write automatically generated subtitle file (Alias: --write-automatic-subs)') + subtitles.add_option( + '--no-write-auto-subs', '--no-write-automatic-subs', + action='store_false', dest='writeautomaticsub', default=False, + help='Do not write auto-generated subtitles (default) (Alias: --no-write-automatic-subs)') + subtitles.add_option( + '--all-subs', + action='store_true', dest='allsubtitles', default=False, + help=optparse.SUPPRESS_HELP) + subtitles.add_option( + '--list-subs', + action='store_true', dest='listsubtitles', default=False, + help='List available subtitles of each video. Simulate unless --no-simulate is used') + subtitles.add_option( + '--sub-format', + action='store', dest='subtitlesformat', metavar='FORMAT', default='best', + help='Subtitle format; accepts formats preference, e.g. "srt" or "ass/srt/best"') + subtitles.add_option( + '--sub-langs', '--srt-langs', + action='callback', dest='subtitleslangs', metavar='LANGS', type='str', + default=[], callback=_list_from_options_callback, + help=( + 'Languages of the subtitles to download (can be regex) or "all" separated by commas, e.g. --sub-langs "en.*,ja". ' + 'You can prefix the language code with a "-" to exclude it from the requested languages, e.g. --sub-langs all,-live_chat. ' + 'Use --list-subs for a list of available language tags')) + + downloader = optparse.OptionGroup(parser, 'Download Options') + downloader.add_option( + '-N', '--concurrent-fragments', + dest='concurrent_fragment_downloads', metavar='N', default=1, type=int, + help='Number of fragments of a dash/hlsnative video that should be downloaded concurrently (default is %default)') + downloader.add_option( + '-r', '--limit-rate', '--rate-limit', + dest='ratelimit', metavar='RATE', + help='Maximum download rate in bytes per second, e.g. 
50K or 4.2M') + downloader.add_option( + '--throttled-rate', + dest='throttledratelimit', metavar='RATE', + help='Minimum download rate in bytes per second below which throttling is assumed and the video data is re-extracted, e.g. 100K') + downloader.add_option( + '-R', '--retries', + dest='retries', metavar='RETRIES', default=10, + help='Number of retries (default is %default), or "infinite"') + downloader.add_option( + '--file-access-retries', + dest='file_access_retries', metavar='RETRIES', default=3, + help='Number of times to retry on file access error (default is %default), or "infinite"') + downloader.add_option( + '--fragment-retries', + dest='fragment_retries', metavar='RETRIES', default=10, + help='Number of retries for a fragment (default is %default), or "infinite" (DASH, hlsnative and ISM)') + downloader.add_option( + '--retry-sleep', + dest='retry_sleep', metavar='[TYPE:]EXPR', default={}, type='str', + action='callback', callback=_dict_from_options_callback, + callback_kwargs={ + 'allowed_keys': 'http|fragment|file_access|extractor', + 'default_key': 'http', + }, help=( + 'Time to sleep between retries in seconds (optionally) prefixed by the type of retry ' + '(http (default), fragment, file_access, extractor) to apply the sleep to. ' + 'EXPR can be a number, linear=START[:END[:STEP=1]] or exp=START[:END[:BASE=2]]. ' + 'This option can be used multiple times to set the sleep for the different retry types, ' + 'e.g. --retry-sleep linear=1::2 --retry-sleep fragment:exp=1:20')) + downloader.add_option( + '--skip-unavailable-fragments', '--no-abort-on-unavailable-fragments', + action='store_true', dest='skip_unavailable_fragments', default=True, + help='Skip unavailable fragments for DASH, hlsnative and ISM downloads (default) (Alias: --no-abort-on-unavailable-fragments)') + downloader.add_option( + '--abort-on-unavailable-fragments', '--no-skip-unavailable-fragments', + action='store_false', dest='skip_unavailable_fragments', + help='Abort download if a fragment is unavailable (Alias: --no-skip-unavailable-fragments)') + downloader.add_option( + '--keep-fragments', + action='store_true', dest='keep_fragments', default=False, + help='Keep downloaded fragments on disk after downloading is finished') + downloader.add_option( + '--no-keep-fragments', + action='store_false', dest='keep_fragments', + help='Delete downloaded fragments after downloading is finished (default)') + downloader.add_option( + '--buffer-size', + dest='buffersize', metavar='SIZE', default='1024', + help='Size of download buffer, e.g. 1024 or 16K (default is %default)') + downloader.add_option( + '--resize-buffer', + action='store_false', dest='noresizebuffer', + help='The buffer size is automatically resized from an initial value of --buffer-size (default)') + downloader.add_option( + '--no-resize-buffer', + action='store_true', dest='noresizebuffer', default=False, + help='Do not automatically adjust the buffer size') + downloader.add_option( + '--http-chunk-size', + dest='http_chunk_size', metavar='SIZE', default=None, + help=( + 'Size of a chunk for chunk-based HTTP downloading, e.g. 10485760 or 10M (default is disabled). 
' + 'May be useful for bypassing bandwidth throttling imposed by a webserver (experimental)')) + downloader.add_option( + '--test', + action='store_true', dest='test', default=False, + help=optparse.SUPPRESS_HELP) + downloader.add_option( + '--playlist-reverse', + action='store_true', dest='playlist_reverse', + help=optparse.SUPPRESS_HELP) + downloader.add_option( + '--no-playlist-reverse', + action='store_false', dest='playlist_reverse', + help=optparse.SUPPRESS_HELP) + downloader.add_option( + '--playlist-random', + action='store_true', dest='playlist_random', + help='Download playlist videos in random order') + downloader.add_option( + '--lazy-playlist', + action='store_true', dest='lazy_playlist', + help='Process entries in the playlist as they are received. This disables n_entries, --playlist-random and --playlist-reverse') + downloader.add_option( + '--no-lazy-playlist', + action='store_false', dest='lazy_playlist', + help='Process videos in the playlist only after the entire playlist is parsed (default)') + downloader.add_option( + '--xattr-set-filesize', + dest='xattr_set_filesize', action='store_true', + help='Set file xattribute ytdl.filesize with expected file size') + downloader.add_option( + '--hls-prefer-native', + dest='hls_prefer_native', action='store_true', default=None, + help=optparse.SUPPRESS_HELP) + downloader.add_option( + '--hls-prefer-ffmpeg', + dest='hls_prefer_native', action='store_false', default=None, + help=optparse.SUPPRESS_HELP) + downloader.add_option( + '--hls-use-mpegts', + dest='hls_use_mpegts', action='store_true', default=None, + help=( + 'Use the mpegts container for HLS videos; ' + 'allowing some players to play the video while downloading, ' + 'and reducing the chance of file corruption if download is interrupted. ' + 'This is enabled by default for live streams')) + downloader.add_option( + '--no-hls-use-mpegts', + dest='hls_use_mpegts', action='store_false', + help=( + 'Do not use the mpegts container for HLS videos. ' + 'This is default when not downloading live streams')) + downloader.add_option( + '--download-sections', + metavar='REGEX', dest='download_ranges', action='append', + help=( + 'Download only chapters that match the regular expression. ' + 'A "*" prefix denotes time-range instead of chapter. Negative timestamps are calculated from the end. ' + '"*from-url" can be used to download between the "start_time" and "end_time" extracted from the URL. ' + 'Needs ffmpeg. This option can be used multiple times to download multiple sections, ' + 'e.g. --download-sections "*10:15-inf" --download-sections "intro"')) + downloader.add_option( + '--downloader', '--external-downloader', + dest='external_downloader', metavar='[PROTO:]NAME', default={}, type='str', + action='callback', callback=_dict_from_options_callback, + callback_kwargs={ + 'allowed_keys': 'http|ftp|m3u8|dash|rtsp|rtmp|mms', + 'default_key': 'default', + 'process': str.strip + }, help=( + 'Name or path of the external downloader to use (optionally) prefixed by ' + 'the protocols (http, ftp, m3u8, dash, rtsp, rtmp, mms) to use it for. ' + f'Currently supports native, {", ".join(sorted(list_external_downloaders()))}. ' + 'You can use this option multiple times to set different downloaders for different protocols. ' + 'E.g. 
--downloader aria2c --downloader "dash,m3u8:native" will use ' + 'aria2c for http/ftp downloads, and the native downloader for dash/m3u8 downloads ' + '(Alias: --external-downloader)')) + downloader.add_option( + '--downloader-args', '--external-downloader-args', + metavar='NAME:ARGS', dest='external_downloader_args', default={}, type='str', + action='callback', callback=_dict_from_options_callback, + callback_kwargs={ + 'allowed_keys': r'ffmpeg_[io]\d*|%s' % '|'.join(map(re.escape, list_external_downloaders())), + 'default_key': 'default', + 'process': shlex.split + }, help=( + 'Give these arguments to the external downloader. ' + 'Specify the downloader name and the arguments separated by a colon ":". ' + 'For ffmpeg, arguments can be passed to different positions using the same syntax as --postprocessor-args. ' + 'You can use this option multiple times to give different arguments to different downloaders ' + '(Alias: --external-downloader-args)')) + + workarounds = optparse.OptionGroup(parser, 'Workarounds') + workarounds.add_option( + '--encoding', + dest='encoding', metavar='ENCODING', + help='Force the specified encoding (experimental)') + workarounds.add_option( + '--legacy-server-connect', + action='store_true', dest='legacy_server_connect', default=False, + help='Explicitly allow HTTPS connection to servers that do not support RFC 5746 secure renegotiation') + workarounds.add_option( + '--no-check-certificates', + action='store_true', dest='no_check_certificate', default=False, + help='Suppress HTTPS certificate validation') + workarounds.add_option( + '--prefer-insecure', '--prefer-unsecure', + action='store_true', dest='prefer_insecure', + help='Use an unencrypted connection to retrieve information about the video (Currently supported only for YouTube)') + workarounds.add_option( + '--user-agent', + metavar='UA', dest='user_agent', + help=optparse.SUPPRESS_HELP) + workarounds.add_option( + '--referer', + metavar='URL', dest='referer', default=None, + help=optparse.SUPPRESS_HELP) + workarounds.add_option( + '--add-headers', + metavar='FIELD:VALUE', dest='headers', default={}, type='str', + action='callback', callback=_dict_from_options_callback, + callback_kwargs={'multiple_keys': False}, + help='Specify a custom HTTP header and its value, separated by a colon ":". You can use this option multiple times', + ) + workarounds.add_option( + '--bidi-workaround', + dest='bidi_workaround', action='store_true', + help='Work around terminals that lack bidirectional text support. Requires bidiv or fribidi executable in PATH') + workarounds.add_option( + '--sleep-requests', metavar='SECONDS', + dest='sleep_interval_requests', type=float, + help='Number of seconds to sleep between requests during data extraction') + workarounds.add_option( + '--sleep-interval', '--min-sleep-interval', metavar='SECONDS', + dest='sleep_interval', type=float, + help=( + 'Number of seconds to sleep before each download. ' + 'This is the minimum time to sleep when used along with --max-sleep-interval ' + '(Alias: --min-sleep-interval)')) + workarounds.add_option( + '--max-sleep-interval', metavar='SECONDS', + dest='max_sleep_interval', type=float, + help='Maximum number of seconds to sleep. 
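Editor's note: the sleep and rate options above map onto `yt_dlp.YoutubeDL` parameters via their `dest` names. A hedged embedding sketch follows; the URL is a placeholder and the values are illustrative.

```python
from yt_dlp import YoutubeDL

ydl_opts = {
    'ratelimit': 50 * 1024,        # --limit-rate 50K (bytes per second)
    'sleep_interval_requests': 1,  # --sleep-requests 1
    'sleep_interval': 5,           # --sleep-interval 5 (minimum sleep per download)
    'max_sleep_interval': 30,      # --max-sleep-interval 30
}
with YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://example.com/watch?v=placeholder'])
```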
Can only be used along with --min-sleep-interval') + workarounds.add_option( + '--sleep-subtitles', metavar='SECONDS', + dest='sleep_interval_subtitles', default=0, type=int, + help='Number of seconds to sleep before each subtitle download') + + verbosity = optparse.OptionGroup(parser, 'Verbosity and Simulation Options') + verbosity.add_option( + '-q', '--quiet', + action='store_true', dest='quiet', default=None, + help='Activate quiet mode. If used with --verbose, print the log to stderr') + verbosity.add_option( + '--no-quiet', + action='store_false', dest='quiet', + help='Deactivate quiet mode. (Default)') + verbosity.add_option( + '--no-warnings', + dest='no_warnings', action='store_true', default=False, + help='Ignore warnings') + verbosity.add_option( + '-s', '--simulate', + action='store_true', dest='simulate', default=None, + help='Do not download the video and do not write anything to disk') + verbosity.add_option( + '--no-simulate', + action='store_false', dest='simulate', + help='Download the video even if printing/listing options are used') + verbosity.add_option( + '--ignore-no-formats-error', + action='store_true', dest='ignore_no_formats_error', default=False, + help=( + 'Ignore "No video formats" error. Useful for extracting metadata ' + 'even if the videos are not actually available for download (experimental)')) + verbosity.add_option( + '--no-ignore-no-formats-error', + action='store_false', dest='ignore_no_formats_error', + help='Throw error when no downloadable video formats are found (default)') + verbosity.add_option( + '--skip-download', '--no-download', + action='store_true', dest='skip_download', default=False, + help='Do not download the video but write all related files (Alias: --no-download)') + verbosity.add_option( + '-O', '--print', + metavar='[WHEN:]TEMPLATE', dest='forceprint', **when_prefix('video'), + help=( + 'Field name or output template to print to screen, optionally prefixed with when to print it, separated by a ":". ' + 'Supported values of "WHEN" are the same as that of --use-postprocessor (default: video). ' + 'Implies --quiet. Implies --simulate unless --no-simulate or later stages of WHEN are used. ' + 'This option can be used multiple times')) + verbosity.add_option( + '--print-to-file', + metavar='[WHEN:]TEMPLATE FILE', dest='print_to_file', nargs=2, **when_prefix('video'), + help=( + 'Append given template to the file. The values of WHEN and TEMPLATE are same as that of --print. ' + 'FILE uses the same syntax as the output template. 
This option can be used multiple times')) + verbosity.add_option( + '-g', '--get-url', + action='store_true', dest='geturl', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '-e', '--get-title', + action='store_true', dest='gettitle', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '--get-id', + action='store_true', dest='getid', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '--get-thumbnail', + action='store_true', dest='getthumbnail', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '--get-description', + action='store_true', dest='getdescription', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '--get-duration', + action='store_true', dest='getduration', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '--get-filename', + action='store_true', dest='getfilename', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '--get-format', + action='store_true', dest='getformat', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '-j', '--dump-json', + action='store_true', dest='dumpjson', default=False, + help='Quiet, but print JSON information for each video. Simulate unless --no-simulate is used. See "OUTPUT TEMPLATE" for a description of available keys') + verbosity.add_option( + '-J', '--dump-single-json', + action='store_true', dest='dump_single_json', default=False, + help=( + 'Quiet, but print JSON information for each url or infojson passed. Simulate unless --no-simulate is used. ' + 'If the URL refers to a playlist, the whole playlist information is dumped in a single line')) + verbosity.add_option( + '--print-json', + action='store_true', dest='print_json', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '--force-write-archive', '--force-write-download-archive', '--force-download-archive', + action='store_true', dest='force_write_download_archive', default=False, + help=( + 'Force download archive entries to be written as far as no errors occur, ' + 'even if -s or another simulation option is used (Alias: --force-download-archive)')) + verbosity.add_option( + '--newline', + action='store_true', dest='progress_with_newline', default=False, + help='Output progress bar as new lines') + verbosity.add_option( + '--no-progress', + action='store_true', dest='noprogress', default=None, + help='Do not print progress bar') + verbosity.add_option( + '--progress', + action='store_false', dest='noprogress', + help='Show progress bar, even if in quiet mode') + verbosity.add_option( + '--console-title', + action='store_true', dest='consoletitle', default=False, + help='Display progress in console titlebar') + verbosity.add_option( + '--progress-template', + metavar='[TYPES:]TEMPLATE', dest='progress_template', default={}, type='str', + action='callback', callback=_dict_from_options_callback, + callback_kwargs={ + 'allowed_keys': '(download|postprocess)(-title)?', + 'default_key': 'download' + }, help=( + 'Template for progress outputs, optionally prefixed with one of "download:" (default), ' + '"download-title:" (the console title), "postprocess:", or "postprocess-title:". ' + 'The video\'s fields are accessible under the "info" key and ' + 'the progress attributes are accessible under "progress" key. E.g. 
' + # TODO: Document the fields inside "progress" + '--console-title --progress-template "download-title:%(info.id)s-%(progress.eta)s"')) + verbosity.add_option( + '-v', '--verbose', + action='store_true', dest='verbose', default=False, + help='Print various debugging information') + verbosity.add_option( + '--dump-pages', '--dump-intermediate-pages', + action='store_true', dest='dump_intermediate_pages', default=False, + help='Print downloaded pages encoded using base64 to debug problems (very verbose)') + verbosity.add_option( + '--write-pages', + action='store_true', dest='write_pages', default=False, + help='Write downloaded intermediary pages to files in the current directory to debug problems') + verbosity.add_option( + '--load-pages', + action='store_true', dest='load_pages', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '--youtube-print-sig-code', + action='store_true', dest='youtube_print_sig_code', default=False, + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '--print-traffic', '--dump-headers', + dest='debug_printtraffic', action='store_true', default=False, + help='Display sent and read HTTP traffic') + verbosity.add_option( + '-C', '--call-home', + dest='call_home', action='store_true', default=False, + # help='Contact the yt-dlp server for debugging') + help=optparse.SUPPRESS_HELP) + verbosity.add_option( + '--no-call-home', + dest='call_home', action='store_false', + # help='Do not contact the yt-dlp server for debugging (default)') + help=optparse.SUPPRESS_HELP) + + filesystem = optparse.OptionGroup(parser, 'Filesystem Options') + filesystem.add_option( + '-a', '--batch-file', + dest='batchfile', metavar='FILE', + help=( + 'File containing URLs to download ("-" for stdin), one URL per line. ' + 'Lines starting with "#", ";" or "]" are considered as comments and ignored')) + filesystem.add_option( + '--no-batch-file', + dest='batchfile', action='store_const', const=None, + help='Do not read URLs from batch file (default)') + filesystem.add_option( + '--id', default=False, + action='store_true', dest='useid', help=optparse.SUPPRESS_HELP) + filesystem.add_option( + '-P', '--paths', + metavar='[TYPES:]PATH', dest='paths', default={}, type='str', + action='callback', callback=_dict_from_options_callback, + callback_kwargs={ + 'allowed_keys': 'home|temp|%s' % '|'.join(map(re.escape, OUTTMPL_TYPES.keys())), + 'default_key': 'home' + }, help=( + 'The paths where the files should be downloaded. ' + 'Specify the type of file and the path separated by a colon ":". ' + 'All the same TYPES as --output are supported. ' + 'Additionally, you can also provide "home" (default) and "temp" paths. ' + 'All intermediary files are first downloaded to the temp path and ' + 'then the final files are moved over to the home path after download is finished. 
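Editor's note: the `-P`/`-o` options just defined build per-type dictionaries through the same callback pattern. The API equivalent of `-P ~/Videos -P temp:/tmp -o "%(title)s.%(ext)s"` would be roughly the following sketch; the paths and URL are illustrative.

```python
from yt_dlp import YoutubeDL

ydl_opts = {
    'paths': {
        'home': '~/Videos',  # final destination for downloaded files
        'temp': '/tmp',      # intermediary files are written here first
    },
    'outtmpl': {'default': '%(title)s.%(ext)s'},
}
with YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://example.com/watch?v=placeholder'])
```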
' + 'This option is ignored if --output is an absolute path')) + filesystem.add_option( + '-o', '--output', + metavar='[TYPES:]TEMPLATE', dest='outtmpl', default={}, type='str', + action='callback', callback=_dict_from_options_callback, + callback_kwargs={ + 'allowed_keys': '|'.join(map(re.escape, OUTTMPL_TYPES.keys())), + 'default_key': 'default' + }, help='Output filename template; see "OUTPUT TEMPLATE" for details') + filesystem.add_option( + '--output-na-placeholder', + dest='outtmpl_na_placeholder', metavar='TEXT', default='NA', + help=('Placeholder for unavailable fields in "OUTPUT TEMPLATE" (default: "%default")')) + filesystem.add_option( + '--autonumber-size', + dest='autonumber_size', metavar='NUMBER', type=int, + help=optparse.SUPPRESS_HELP) + filesystem.add_option( + '--autonumber-start', + dest='autonumber_start', metavar='NUMBER', default=1, type=int, + help=optparse.SUPPRESS_HELP) + filesystem.add_option( + '--restrict-filenames', + action='store_true', dest='restrictfilenames', default=False, + help='Restrict filenames to only ASCII characters, and avoid "&" and spaces in filenames') + filesystem.add_option( + '--no-restrict-filenames', + action='store_false', dest='restrictfilenames', + help='Allow Unicode characters, "&" and spaces in filenames (default)') + filesystem.add_option( + '--windows-filenames', + action='store_true', dest='windowsfilenames', default=False, + help='Force filenames to be Windows-compatible') + filesystem.add_option( + '--no-windows-filenames', + action='store_false', dest='windowsfilenames', + help='Make filenames Windows-compatible only if using Windows (default)') + filesystem.add_option( + '--trim-filenames', '--trim-file-names', metavar='LENGTH', + dest='trim_file_name', default=0, type=int, + help='Limit the filename length (excluding extension) to the specified number of characters') + filesystem.add_option( + '-w', '--no-overwrites', + action='store_false', dest='overwrites', default=None, + help='Do not overwrite any files') + filesystem.add_option( + '--force-overwrites', '--yes-overwrites', + action='store_true', dest='overwrites', + help='Overwrite all video and metadata files. This option includes --no-continue') + filesystem.add_option( + '--no-force-overwrites', + action='store_const', dest='overwrites', const=None, + help='Do not overwrite the video, but overwrite related files (default)') + filesystem.add_option( + '-c', '--continue', + action='store_true', dest='continue_dl', default=True, + help='Resume partially downloaded files/fragments (default)') + filesystem.add_option( + '--no-continue', + action='store_false', dest='continue_dl', + help=( + 'Do not resume partially downloaded fragments. 
' + 'If the file is not fragmented, restart download of the entire file')) + filesystem.add_option( + '--part', + action='store_false', dest='nopart', default=False, + help='Use .part files instead of writing directly into output file (default)') + filesystem.add_option( + '--no-part', + action='store_true', dest='nopart', + help='Do not use .part files - write directly into output file') + filesystem.add_option( + '--mtime', + action='store_true', dest='updatetime', default=True, + help='Use the Last-modified header to set the file modification time (default)') + filesystem.add_option( + '--no-mtime', + action='store_false', dest='updatetime', + help='Do not use the Last-modified header to set the file modification time') + filesystem.add_option( + '--write-description', + action='store_true', dest='writedescription', default=False, + help='Write video description to a .description file') + filesystem.add_option( + '--no-write-description', + action='store_false', dest='writedescription', + help='Do not write video description (default)') + filesystem.add_option( + '--write-info-json', + action='store_true', dest='writeinfojson', default=None, + help='Write video metadata to a .info.json file (this may contain personal information)') + filesystem.add_option( + '--no-write-info-json', + action='store_false', dest='writeinfojson', + help='Do not write video metadata (default)') + filesystem.add_option( + '--write-annotations', + action='store_true', dest='writeannotations', default=False, + help=optparse.SUPPRESS_HELP) + filesystem.add_option( + '--no-write-annotations', + action='store_false', dest='writeannotations', + help=optparse.SUPPRESS_HELP) + filesystem.add_option( + '--write-playlist-metafiles', + action='store_true', dest='allow_playlist_files', default=None, + help=( + 'Write playlist metadata in addition to the video metadata ' + 'when using --write-info-json, --write-description etc. (default)')) + filesystem.add_option( + '--no-write-playlist-metafiles', + action='store_false', dest='allow_playlist_files', + help='Do not write playlist metadata when using --write-info-json, --write-description etc.') + filesystem.add_option( + '--clean-info-json', '--clean-infojson', + action='store_true', dest='clean_infojson', default=None, + help=( + 'Remove some internal metadata such as filenames from the infojson (default)')) + filesystem.add_option( + '--no-clean-info-json', '--no-clean-infojson', + action='store_false', dest='clean_infojson', + help='Write all fields to the infojson') + filesystem.add_option( + '--write-comments', '--get-comments', + action='store_true', dest='getcomments', default=False, + help=( + 'Retrieve video comments to be placed in the infojson. 
' + 'The comments are fetched even without this option if the extraction is known to be quick (Alias: --get-comments)')) + filesystem.add_option( + '--no-write-comments', '--no-get-comments', + action='store_false', dest='getcomments', + help='Do not retrieve video comments unless the extraction is known to be quick (Alias: --no-get-comments)') + filesystem.add_option( + '--load-info-json', '--load-info', + dest='load_info_filename', metavar='FILE', + help='JSON file containing the video information (created with the "--write-info-json" option)') + filesystem.add_option( + '--cookies', + dest='cookiefile', metavar='FILE', + help='Netscape formatted file to read cookies from and dump cookie jar in') + filesystem.add_option( + '--no-cookies', + action='store_const', const=None, dest='cookiefile', metavar='FILE', + help='Do not read/dump cookies from/to file (default)') + filesystem.add_option( + '--cookies-from-browser', + dest='cookiesfrombrowser', metavar='BROWSER[+KEYRING][:PROFILE][::CONTAINER]', + help=( + 'The name of the browser to load cookies from. ' + f'Currently supported browsers are: {", ".join(sorted(SUPPORTED_BROWSERS))}. ' + 'Optionally, the KEYRING used for decrypting Chromium cookies on Linux, ' + 'the name/path of the PROFILE to load cookies from, ' + 'and the CONTAINER name (if Firefox) ("none" for no container) ' + 'can be given with their respective separators. ' + 'By default, all containers of the most recently accessed profile are used. ' + f'Currently supported keyrings are: {", ".join(map(str.lower, sorted(SUPPORTED_KEYRINGS)))}')) + filesystem.add_option( + '--no-cookies-from-browser', + action='store_const', const=None, dest='cookiesfrombrowser', + help='Do not load cookies from browser (default)') + filesystem.add_option( + '--cache-dir', dest='cachedir', default=None, metavar='DIR', + help=( + 'Location in the filesystem where yt-dlp can store some downloaded information ' + '(such as client ids and signatures) permanently. By default ${XDG_CACHE_HOME}/yt-dlp')) + filesystem.add_option( + '--no-cache-dir', action='store_false', dest='cachedir', + help='Disable filesystem caching') + filesystem.add_option( + '--rm-cache-dir', + action='store_true', dest='rm_cachedir', + help='Delete all filesystem cache files') + + thumbnail = optparse.OptionGroup(parser, 'Thumbnail Options') + thumbnail.add_option( + '--write-thumbnail', + action='callback', dest='writethumbnail', default=False, + # Should override --no-write-thumbnail, but not --write-all-thumbnail + callback=lambda option, _, __, parser: setattr( + parser.values, option.dest, getattr(parser.values, option.dest) or True), + help='Write thumbnail image to disk') + thumbnail.add_option( + '--no-write-thumbnail', + action='store_false', dest='writethumbnail', + help='Do not write thumbnail image to disk (default)') + thumbnail.add_option( + '--write-all-thumbnails', + action='store_const', dest='writethumbnail', const='all', + help='Write all thumbnail image formats to disk') + thumbnail.add_option( + '--list-thumbnails', + action='store_true', dest='list_thumbnails', default=False, + help='List available thumbnails of each video. Simulate unless --no-simulate is used') + + link = optparse.OptionGroup(parser, 'Internet Shortcut Options') + link.add_option( + '--write-link', + action='store_true', dest='writelink', default=False, + help='Write an internet shortcut file, depending on the current platform (.url, .webloc or .desktop). 
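Editor's note on the `--write-thumbnail` callback defined above: it exists so that a later `--write-thumbnail` does not downgrade an earlier `--write-all-thumbnails`, which stores the string 'all'. Because any non-empty string is truthy, `current or True` preserves 'all' while still upgrading False/None, as this self-contained demonstration shows:

```python
# Demonstrates the `getattr(...) or True` trick from the --write-thumbnail callback
for current in (False, None, True, 'all'):
    print(repr(current), '->', repr(current or True))
# False -> True, None -> True, True -> True, 'all' -> 'all'
```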
The URL may be cached by the OS') + link.add_option( + '--write-url-link', + action='store_true', dest='writeurllink', default=False, + help='Write a .url Windows internet shortcut. The OS caches the URL based on the file path') + link.add_option( + '--write-webloc-link', + action='store_true', dest='writewebloclink', default=False, + help='Write a .webloc macOS internet shortcut') + link.add_option( + '--write-desktop-link', + action='store_true', dest='writedesktoplink', default=False, + help='Write a .desktop Linux internet shortcut') + + postproc = optparse.OptionGroup(parser, 'Post-Processing Options') + postproc.add_option( + '-x', '--extract-audio', + action='store_true', dest='extractaudio', default=False, + help='Convert video files to audio-only files (requires ffmpeg and ffprobe)') + postproc.add_option( + '--audio-format', metavar='FORMAT', dest='audioformat', default='best', + help=( + 'Format to convert the audio to when -x is used. ' + f'(currently supported: best (default), {", ".join(sorted(FFmpegExtractAudioPP.SUPPORTED_EXTS))}). ' + 'You can specify multiple rules using similar syntax as --remux-video')) + postproc.add_option( + '--audio-quality', metavar='QUALITY', + dest='audioquality', default='5', + help=( + 'Specify ffmpeg audio quality to use when converting the audio with -x. ' + 'Insert a value between 0 (best) and 10 (worst) for VBR or a specific bitrate like 128K (default %default)')) + postproc.add_option( + '--remux-video', + metavar='FORMAT', dest='remuxvideo', default=None, + help=( + 'Remux the video into another container if necessary ' + f'(currently supported: {", ".join(FFmpegVideoRemuxerPP.SUPPORTED_EXTS)}). ' + 'If target container does not support the video/audio codec, remuxing will fail. You can specify multiple rules; ' + 'e.g. "aac>m4a/mov>mp4/mkv" will remux aac to m4a, mov to mp4 and anything else to mkv')) + postproc.add_option( + '--recode-video', + metavar='FORMAT', dest='recodevideo', default=None, + help='Re-encode the video into another format if necessary. The syntax and supported formats are the same as --remux-video') + postproc.add_option( + '--postprocessor-args', '--ppa', + metavar='NAME:ARGS', dest='postprocessor_args', default={}, type='str', + action='callback', callback=_dict_from_options_callback, + callback_kwargs={ + 'allowed_keys': r'\w+(?:\+\w+)?', + 'default_key': 'default-compat', + 'process': shlex.split, + 'multiple_keys': False + }, help=( + 'Give these arguments to the postprocessors. ' + 'Specify the postprocessor/executable name and the arguments separated by a colon ":" ' + 'to give the argument to the specified postprocessor/executable. Supported PP are: ' + 'Merger, ModifyChapters, SplitChapters, ExtractAudio, VideoRemuxer, VideoConvertor, ' + 'Metadata, EmbedSubtitle, EmbedThumbnail, SubtitlesConvertor, ThumbnailsConvertor, ' + 'FixupStretched, FixupM4a, FixupM3u8, FixupTimestamp and FixupDuration. ' + 'The supported executables are: AtomicParsley, FFmpeg and FFprobe. ' + 'You can also specify "PP+EXE:ARGS" to give the arguments to the specified executable ' + 'only when being used by the specified postprocessor. Additionally, for ffmpeg/ffprobe, ' + '"_i"/"_o" can be appended to the prefix optionally followed by a number to pass the argument ' + 'before the specified input/output file, e.g. --ppa "Merger+ffmpeg_i1:-v quiet". ' + 'You can use this option multiple times to give different arguments to different ' + 'postprocessors. 
(Alias: --ppa)')) + postproc.add_option( + '-k', '--keep-video', + action='store_true', dest='keepvideo', default=False, + help='Keep the intermediate video file on disk after post-processing') + postproc.add_option( + '--no-keep-video', + action='store_false', dest='keepvideo', + help='Delete the intermediate video file after post-processing (default)') + postproc.add_option( + '--post-overwrites', + action='store_false', dest='nopostoverwrites', + help='Overwrite post-processed files (default)') + postproc.add_option( + '--no-post-overwrites', + action='store_true', dest='nopostoverwrites', default=False, + help='Do not overwrite post-processed files') + postproc.add_option( + '--embed-subs', + action='store_true', dest='embedsubtitles', default=False, + help='Embed subtitles in the video (only for mp4, webm and mkv videos)') + postproc.add_option( + '--no-embed-subs', + action='store_false', dest='embedsubtitles', + help='Do not embed subtitles (default)') + postproc.add_option( + '--embed-thumbnail', + action='store_true', dest='embedthumbnail', default=False, + help='Embed thumbnail in the video as cover art') + postproc.add_option( + '--no-embed-thumbnail', + action='store_false', dest='embedthumbnail', + help='Do not embed thumbnail (default)') + postproc.add_option( + '--embed-metadata', '--add-metadata', + action='store_true', dest='addmetadata', default=False, + help=( + 'Embed metadata to the video file. Also embeds chapters/infojson if present ' + 'unless --no-embed-chapters/--no-embed-info-json are used (Alias: --add-metadata)')) + postproc.add_option( + '--no-embed-metadata', '--no-add-metadata', + action='store_false', dest='addmetadata', + help='Do not add metadata to file (default) (Alias: --no-add-metadata)') + postproc.add_option( + '--embed-chapters', '--add-chapters', + action='store_true', dest='addchapters', default=None, + help='Add chapter markers to the video file (Alias: --add-chapters)') + postproc.add_option( + '--no-embed-chapters', '--no-add-chapters', + action='store_false', dest='addchapters', + help='Do not add chapter markers (default) (Alias: --no-add-chapters)') + postproc.add_option( + '--embed-info-json', + action='store_true', dest='embed_infojson', default=None, + help='Embed the infojson as an attachment to mkv/mka video files') + postproc.add_option( + '--no-embed-info-json', + action='store_false', dest='embed_infojson', + help='Do not embed the infojson as an attachment to the video file') + postproc.add_option( + '--metadata-from-title', + metavar='FORMAT', dest='metafromtitle', + help=optparse.SUPPRESS_HELP) + postproc.add_option( + '--parse-metadata', + metavar='[WHEN:]FROM:TO', dest='parse_metadata', **when_prefix('pre_process'), + help=( + 'Parse additional metadata like title/artist from other fields; see "MODIFYING METADATA" for details. ' + 'Supported values of "WHEN" are the same as that of --use-postprocessor (default: pre_process)')) + postproc.add_option( + '--replace-in-metadata', + dest='parse_metadata', metavar='[WHEN:]FIELDS REGEX REPLACE', nargs=3, **when_prefix('pre_process'), + help=( + 'Replace text in a metadata field using the given regex. This option can be used multiple times. 
' + 'Supported values of "WHEN" are the same as that of --use-postprocessor (default: pre_process)')) + postproc.add_option( + '--xattrs', '--xattr', + action='store_true', dest='xattrs', default=False, + help='Write metadata to the video file\'s xattrs (using dublin core and xdg standards)') + postproc.add_option( + '--concat-playlist', + metavar='POLICY', dest='concat_playlist', default='multi_video', + choices=('never', 'always', 'multi_video'), + help=( + 'Concatenate videos in a playlist. One of "never", "always", or ' + '"multi_video" (default; only when the videos form a single show). ' + 'All the video files must have same codecs and number of streams to be concatable. ' + 'The "pl_video:" prefix can be used with "--paths" and "--output" to ' + 'set the output filename for the concatenated files. See "OUTPUT TEMPLATE" for details')) + postproc.add_option( + '--fixup', + metavar='POLICY', dest='fixup', default=None, + choices=('never', 'ignore', 'warn', 'detect_or_warn', 'force'), + help=( + 'Automatically correct known faults of the file. ' + 'One of never (do nothing), warn (only emit a warning), ' + 'detect_or_warn (the default; fix file if we can, warn otherwise), ' + 'force (try fixing even if file already exists)')) + postproc.add_option( + '--prefer-avconv', '--no-prefer-ffmpeg', + action='store_false', dest='prefer_ffmpeg', + help=optparse.SUPPRESS_HELP) + postproc.add_option( + '--prefer-ffmpeg', '--no-prefer-avconv', + action='store_true', dest='prefer_ffmpeg', default=True, + help=optparse.SUPPRESS_HELP) + postproc.add_option( + '--ffmpeg-location', '--avconv-location', metavar='PATH', + dest='ffmpeg_location', + help='Location of the ffmpeg binary; either the path to the binary or its containing directory') + postproc.add_option( + '--exec', + metavar='[WHEN:]CMD', dest='exec_cmd', **when_prefix('after_move'), + help=( + 'Execute a command, optionally prefixed with when to execute it, separated by a ":". ' + 'Supported values of "WHEN" are the same as that of --use-postprocessor (default: after_move). ' + 'Same syntax as the output template can be used to pass any field as arguments to the command. ' + 'If no fields are passed, %(filepath,_filename|)q is appended to the end of the command. ' + 'This option can be used multiple times')) + postproc.add_option( + '--no-exec', + action='store_const', dest='exec_cmd', const={}, + help='Remove any previously defined --exec') + postproc.add_option( + '--exec-before-download', metavar='CMD', + action='append', dest='exec_before_dl_cmd', + help=optparse.SUPPRESS_HELP) + postproc.add_option( + '--no-exec-before-download', + action='store_const', dest='exec_before_dl_cmd', const=None, + help=optparse.SUPPRESS_HELP) + postproc.add_option( + '--convert-subs', '--convert-sub', '--convert-subtitles', + metavar='FORMAT', dest='convertsubtitles', default=None, + help=( + 'Convert the subtitles to another format (currently supported: %s) ' + '(Alias: --convert-subtitles)' % ', '.join(sorted(FFmpegSubtitlesConvertorPP.SUPPORTED_EXTS)))) + postproc.add_option( + '--convert-thumbnails', + metavar='FORMAT', dest='convertthumbnails', default=None, + help=( + 'Convert the thumbnails to another format ' + f'(currently supported: {", ".join(sorted(FFmpegThumbnailsConvertorPP.SUPPORTED_EXTS))}). 
' + 'You can specify multiple rules using similar syntax as --remux-video')) + postproc.add_option( + '--split-chapters', '--split-tracks', + dest='split_chapters', action='store_true', default=False, + help=( + 'Split video into multiple files based on internal chapters. ' + 'The "chapter:" prefix can be used with "--paths" and "--output" to ' + 'set the output filename for the split files. See "OUTPUT TEMPLATE" for details')) + postproc.add_option( + '--no-split-chapters', '--no-split-tracks', + dest='split_chapters', action='store_false', + help='Do not split video based on chapters (default)') + postproc.add_option( + '--remove-chapters', + metavar='REGEX', dest='remove_chapters', action='append', + help=( + 'Remove chapters whose title matches the given regular expression. ' + 'The syntax is the same as --download-sections. This option can be used multiple times')) + postproc.add_option( + '--no-remove-chapters', dest='remove_chapters', action='store_const', const=None, + help='Do not remove any chapters from the file (default)') + postproc.add_option( + '--force-keyframes-at-cuts', + action='store_true', dest='force_keyframes_at_cuts', default=False, + help=( + 'Force keyframes at cuts when downloading/splitting/removing sections. ' + 'This is slow due to needing a re-encode, but the resulting video may have fewer artifacts around the cuts')) + postproc.add_option( + '--no-force-keyframes-at-cuts', + action='store_false', dest='force_keyframes_at_cuts', + help='Do not force keyframes around the chapters when cutting/splitting (default)') + _postprocessor_opts_parser = lambda key, val='': ( + *(item.split('=', 1) for item in (val.split(';') if val else [])), + ('key', remove_end(key, 'PP'))) + postproc.add_option( + '--use-postprocessor', + metavar='NAME[:ARGS]', dest='add_postprocessors', default=[], type='str', + action='callback', callback=_list_from_options_callback, + callback_kwargs={ + 'delim': None, + 'process': lambda val: dict(_postprocessor_opts_parser(*val.split(':', 1))) + }, help=( + 'The (case sensitive) name of plugin postprocessors to be enabled, ' + 'and (optionally) arguments to be passed to it, separated by a colon ":". ' + 'ARGS are a semicolon ";" delimited list of NAME=VALUE. ' + 'The "when" argument determines when the postprocessor is invoked. ' + 'It can be one of "pre_process" (after video extraction), "after_filter" (after video passes filter), ' + '"video" (after --format; before --print/--output), "before_dl" (before each video download), ' + '"post_process" (after each video download; default), ' + '"after_move" (after moving video file to its final location), ' + '"after_video" (after downloading and processing all formats of a video), ' + 'or "playlist" (at end of playlist). ' + 'This option can be used multiple times to add different postprocessors')) + + sponsorblock = optparse.OptionGroup(parser, 'SponsorBlock Options', description=( + 'Make chapter entries for, or remove various segments (sponsor, introductions, etc.) ' + 'from downloaded YouTube videos using the SponsorBlock API (https://sponsor.ajay.app)')) + sponsorblock.add_option( + '--sponsorblock-mark', metavar='CATS', + dest='sponsorblock_mark', default=set(), action='callback', type='str', + callback=_set_from_options_callback, callback_kwargs={ + 'allowed_values': SponsorBlockPP.CATEGORIES.keys(), + 'aliases': {'default': ['all']} + }, help=( + 'SponsorBlock categories to create chapters for, separated by commas. 
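Editor's note: to make `--use-postprocessor NAME[:ARGS]` concrete, the `_postprocessor_opts_parser` lambda above turns the colon/semicolon syntax into the dict handed to the postprocessor loader. A standalone re-creation for illustration only; the plugin name is hypothetical:

```python
def postprocessor_opts(value):
    # Re-creation of _postprocessor_opts_parser for illustration only
    key, _, val = value.partition(':')
    pairs = [item.split('=', 1) for item in val.split(';')] if val else []
    return dict([*pairs, ('key', key[:-2] if key.endswith('PP') else key)])


print(postprocessor_opts('MyConvertorPP:when=before_dl;preferredcodec=mp3'))
# {'when': 'before_dl', 'preferredcodec': 'mp3', 'key': 'MyConvertor'}
```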
' + f'Available categories are {", ".join(SponsorBlockPP.CATEGORIES.keys())}, all and default (=all). ' + 'You can prefix the category with a "-" to exclude it. See [1] for description of the categories. ' + 'E.g. --sponsorblock-mark all,-preview [1] https://wiki.sponsor.ajay.app/w/Segment_Categories')) + sponsorblock.add_option( + '--sponsorblock-remove', metavar='CATS', + dest='sponsorblock_remove', default=set(), action='callback', type='str', + callback=_set_from_options_callback, callback_kwargs={ + 'allowed_values': set(SponsorBlockPP.CATEGORIES.keys()) - set(SponsorBlockPP.NON_SKIPPABLE_CATEGORIES.keys()), + # Note: From https://wiki.sponsor.ajay.app/w/Types: + # The filler category is very aggressive. + # It is strongly recommended to not use this in a client by default. + 'aliases': {'default': ['all', '-filler']} + }, help=( + 'SponsorBlock categories to be removed from the video file, separated by commas. ' + 'If a category is present in both mark and remove, remove takes precedence. ' + 'The syntax and available categories are the same as for --sponsorblock-mark ' + 'except that "default" refers to "all,-filler" ' + f'and {", ".join(SponsorBlockPP.NON_SKIPPABLE_CATEGORIES.keys())} are not available')) + sponsorblock.add_option( + '--sponsorblock-chapter-title', metavar='TEMPLATE', + default=DEFAULT_SPONSORBLOCK_CHAPTER_TITLE, dest='sponsorblock_chapter_title', + help=( + 'An output template for the title of the SponsorBlock chapters created by --sponsorblock-mark. ' + 'The only available fields are start_time, end_time, category, categories, name, category_names. ' + 'Defaults to "%default"')) + sponsorblock.add_option( + '--no-sponsorblock', default=False, + action='store_true', dest='no_sponsorblock', + help='Disable both --sponsorblock-mark and --sponsorblock-remove') + sponsorblock.add_option( + '--sponsorblock-api', metavar='URL', + default='https://sponsor.ajay.app', dest='sponsorblock_api', + help='SponsorBlock API location, defaults to %default') + + sponsorblock.add_option( + '--sponskrub', + action='store_true', dest='sponskrub', default=False, + help=optparse.SUPPRESS_HELP) + sponsorblock.add_option( + '--no-sponskrub', + action='store_false', dest='sponskrub', + help=optparse.SUPPRESS_HELP) + sponsorblock.add_option( + '--sponskrub-cut', default=False, + action='store_true', dest='sponskrub_cut', + help=optparse.SUPPRESS_HELP) + sponsorblock.add_option( + '--no-sponskrub-cut', + action='store_false', dest='sponskrub_cut', + help=optparse.SUPPRESS_HELP) + sponsorblock.add_option( + '--sponskrub-force', default=False, + action='store_true', dest='sponskrub_force', + help=optparse.SUPPRESS_HELP) + sponsorblock.add_option( + '--no-sponskrub-force', + action='store_true', dest='sponskrub_force', + help=optparse.SUPPRESS_HELP) + sponsorblock.add_option( + '--sponskrub-location', metavar='PATH', + dest='sponskrub_path', default='', + help=optparse.SUPPRESS_HELP) + sponsorblock.add_option( + '--sponskrub-args', dest='sponskrub_args', metavar='ARGS', + help=optparse.SUPPRESS_HELP) + + extractor = optparse.OptionGroup(parser, 'Extractor Options') + extractor.add_option( + '--extractor-retries', + dest='extractor_retries', metavar='RETRIES', default=3, + help='Number of retries for known extractor errors (default is %default), or "infinite"') + extractor.add_option( + '--allow-dynamic-mpd', '--no-ignore-dynamic-mpd', + action='store_true', dest='dynamic_mpd', default=True, + help='Process dynamic DASH manifests (default) (Alias: --no-ignore-dynamic-mpd)') + 
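Editor's note, stepping back to the SponsorBlock group above: those options ultimately configure the SponsorBlock and ModifyChapters postprocessors. A hedged API sketch of roughly `--sponsorblock-mark sponsor,intro --sponsorblock-remove sponsor` follows; the argument names mirror those postprocessors' keyword arguments, and the URL is a placeholder.

```python
from yt_dlp import YoutubeDL

ydl_opts = {
    'postprocessors': [
        {'key': 'SponsorBlock', 'categories': ['sponsor', 'intro']},
        {'key': 'ModifyChapters', 'remove_sponsor_segments': ['sponsor']},
    ],
}
with YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=placeholder'])
```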
extractor.add_option( + '--ignore-dynamic-mpd', '--no-allow-dynamic-mpd', + action='store_false', dest='dynamic_mpd', + help='Do not process dynamic DASH manifests (Alias: --no-allow-dynamic-mpd)') + extractor.add_option( + '--hls-split-discontinuity', + dest='hls_split_discontinuity', action='store_true', default=False, + help='Split HLS playlists to different formats at discontinuities such as ad breaks' + ) + extractor.add_option( + '--no-hls-split-discontinuity', + dest='hls_split_discontinuity', action='store_false', + help='Do not split HLS playlists to different formats at discontinuities such as ad breaks (default)') + _extractor_arg_parser = lambda key, vals='': (key.strip().lower().replace('-', '_'), [ + val.replace(r'\,', ',').strip() for val in re.split(r'(?<!\\),', vals)]) + extractor.add_option( + '--extractor-args', + metavar='IE_KEY:ARGS', dest='extractor_args', default={}, type='str', + action='callback', callback=_dict_from_options_callback, + callback_kwargs={ + 'multiple_keys': False, + 'process': lambda val: dict( + _extractor_arg_parser(*arg.split('=', 1)) for arg in val.split(';')) + }, help=( + 'Pass ARGS arguments to the IE_KEY extractor. See "EXTRACTOR ARGUMENTS" for details. ' + 'You can use this option multiple times to give arguments for different extractors')) + extractor.add_option( + '--youtube-include-dash-manifest', '--no-youtube-skip-dash-manifest', + action='store_true', dest='youtube_include_dash_manifest', default=True, + help=optparse.SUPPRESS_HELP) + extractor.add_option( + '--youtube-skip-dash-manifest', '--no-youtube-include-dash-manifest', + action='store_false', dest='youtube_include_dash_manifest', + help=optparse.SUPPRESS_HELP) + extractor.add_option( + '--youtube-include-hls-manifest', '--no-youtube-skip-hls-manifest', + action='store_true', dest='youtube_include_hls_manifest', default=True, + help=optparse.SUPPRESS_HELP) + extractor.add_option( + '--youtube-skip-hls-manifest', '--no-youtube-include-hls-manifest', + action='store_false', dest='youtube_include_hls_manifest', + help=optparse.SUPPRESS_HELP) + + parser.add_option_group(general) + parser.add_option_group(network) + parser.add_option_group(geo) + parser.add_option_group(selection) + parser.add_option_group(downloader) + parser.add_option_group(filesystem) + parser.add_option_group(thumbnail) + parser.add_option_group(link) + parser.add_option_group(verbosity) + parser.add_option_group(workarounds) + parser.add_option_group(video_format) + parser.add_option_group(subtitles) + parser.add_option_group(authentication) + parser.add_option_group(postproc) + parser.add_option_group(sponsorblock) + parser.add_option_group(extractor) + + return parser + + +def _hide_login_info(opts): + deprecation_warning(f'"{__name__}._hide_login_info" is deprecated and may be removed ' + 'in a future version. 
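Editor's note on `--extractor-args`: the `_extractor_arg_parser` lambda above lowercases keys, maps dashes to underscores, and splits values on unescaped commas; keying the result by extractor name is then handled by `_dict_from_options_callback`. A standalone re-creation for illustration:

```python
import re


def extractor_arg(key, vals=''):
    # Re-creation of _extractor_arg_parser for illustration only
    return key.strip().lower().replace('-', '_'), [
        val.replace(r'\,', ',').strip() for val in re.split(r'(?<!\\),', vals)]


ie_key, _, args = 'youtube:player-client=android,web;lang=en-US'.partition(':')
print(dict(extractor_arg(*arg.split('=', 1)) for arg in args.split(';')))
# {'player_client': ['android', 'web'], 'lang': ['en-US']}
```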
Use "yt_dlp.utils.Config.hide_login_info" instead') + return Config.hide_login_info(opts) diff --git a/lib/python3.11/site-packages/yt_dlp/plugins.py b/python/lib/python3.10/site-packages/yt_dlp/plugins.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/plugins.py rename to python/lib/python3.10/site-packages/yt_dlp/plugins.py diff --git a/lib/python3.11/site-packages/yt_dlp/postprocessor/__init__.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/__init__.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/postprocessor/__init__.py rename to python/lib/python3.10/site-packages/yt_dlp/postprocessor/__init__.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/postprocessor/common.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/common.py new file mode 100644 index 0000000..8cef86c --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/common.py @@ -0,0 +1,215 @@ +import functools +import json +import os + +from ..networking import Request +from ..networking.exceptions import HTTPError, network_exceptions +from ..utils import ( + PostProcessingError, + RetryManager, + _configuration_args, + deprecation_warning, + encodeFilename, +) + + +class PostProcessorMetaClass(type): + @staticmethod + def run_wrapper(func): + @functools.wraps(func) + def run(self, info, *args, **kwargs): + info_copy = self._copy_infodict(info) + self._hook_progress({'status': 'started'}, info_copy) + ret = func(self, info, *args, **kwargs) + if ret is not None: + _, info = ret + self._hook_progress({'status': 'finished'}, info_copy) + return ret + return run + + def __new__(cls, name, bases, attrs): + if 'run' in attrs: + attrs['run'] = cls.run_wrapper(attrs['run']) + return type.__new__(cls, name, bases, attrs) + + +class PostProcessor(metaclass=PostProcessorMetaClass): + """Post Processor class. + + PostProcessor objects can be added to downloaders with their + add_post_processor() method. When the downloader has finished a + successful download, it will take its internal chain of PostProcessors + and start calling the run() method on each one of them, first with + an initial argument and then with the returned value of the previous + PostProcessor. + + PostProcessor objects follow a "mutual registration" process similar + to InfoExtractor objects. + + Optionally PostProcessor can use a list of additional command-line arguments + with self._configuration_args. 
+ """ + + _downloader = None + + def __init__(self, downloader=None): + self._progress_hooks = [] + self.add_progress_hook(self.report_progress) + self.set_downloader(downloader) + self.PP_NAME = self.pp_key() + + @classmethod + def pp_key(cls): + name = cls.__name__[:-2] + return name[6:] if name[:6].lower() == 'ffmpeg' else name + + def to_screen(self, text, prefix=True, *args, **kwargs): + if self._downloader: + tag = '[%s] ' % self.PP_NAME if prefix else '' + return self._downloader.to_screen(f'{tag}{text}', *args, **kwargs) + + def report_warning(self, text, *args, **kwargs): + if self._downloader: + return self._downloader.report_warning(text, *args, **kwargs) + + def deprecation_warning(self, msg): + warn = getattr(self._downloader, 'deprecation_warning', deprecation_warning) + return warn(msg, stacklevel=1) + + def deprecated_feature(self, msg): + if self._downloader: + return self._downloader.deprecated_feature(msg) + return deprecation_warning(msg, stacklevel=1) + + def report_error(self, text, *args, **kwargs): + self.deprecation_warning('"yt_dlp.postprocessor.PostProcessor.report_error" is deprecated. ' + 'raise "yt_dlp.utils.PostProcessingError" instead') + if self._downloader: + return self._downloader.report_error(text, *args, **kwargs) + + def write_debug(self, text, *args, **kwargs): + if self._downloader: + return self._downloader.write_debug(text, *args, **kwargs) + + def _delete_downloaded_files(self, *files_to_delete, **kwargs): + if self._downloader: + return self._downloader._delete_downloaded_files(*files_to_delete, **kwargs) + for filename in set(filter(None, files_to_delete)): + os.remove(filename) + + def get_param(self, name, default=None, *args, **kwargs): + if self._downloader: + return self._downloader.params.get(name, default, *args, **kwargs) + return default + + def set_downloader(self, downloader): + """Sets the downloader for this PP.""" + self._downloader = downloader + for ph in getattr(downloader, '_postprocessor_hooks', []): + self.add_progress_hook(ph) + + def _copy_infodict(self, info_dict): + return getattr(self._downloader, '_copy_infodict', dict)(info_dict) + + @staticmethod + def _restrict_to(*, video=True, audio=True, images=True, simulated=True): + allowed = {'video': video, 'audio': audio, 'images': images} + + def decorator(func): + @functools.wraps(func) + def wrapper(self, info): + if not simulated and (self.get_param('simulate') or self.get_param('skip_download')): + return [], info + format_type = ( + 'video' if info.get('vcodec') != 'none' + else 'audio' if info.get('acodec') != 'none' + else 'images') + if allowed[format_type]: + return func(self, info) + else: + self.to_screen('Skipping %s' % format_type) + return [], info + return wrapper + return decorator + + def run(self, information): + """Run the PostProcessor. + + The "information" argument is a dictionary like the ones + composed by InfoExtractors. The only difference is that this + one has an extra field called "filepath" that points to the + downloaded file. + + This method returns a tuple, the first element is a list of the files + that can be deleted, and the second of which is the updated + information. + + In addition, this method may raise a PostProcessingError + exception if post processing fails. 
+ """ + return [], information # by default, keep file and do nothing + + def try_utime(self, path, atime, mtime, errnote='Cannot update utime of file'): + try: + os.utime(encodeFilename(path), (atime, mtime)) + except Exception: + self.report_warning(errnote) + + def _configuration_args(self, exe, *args, **kwargs): + return _configuration_args( + self.pp_key(), self.get_param('postprocessor_args'), exe, *args, **kwargs) + + def _hook_progress(self, status, info_dict): + if not self._progress_hooks: + return + status.update({ + 'info_dict': info_dict, + 'postprocessor': self.pp_key(), + }) + for ph in self._progress_hooks: + ph(status) + + def add_progress_hook(self, ph): + # See YoutubeDl.py (search for postprocessor_hooks) for a description of this interface + self._progress_hooks.append(ph) + + def report_progress(self, s): + s['_default_template'] = '%(postprocessor)s %(status)s' % s + if not self._downloader: + return + + progress_dict = s.copy() + progress_dict.pop('info_dict') + progress_dict = {'info': s['info_dict'], 'progress': progress_dict} + + progress_template = self.get_param('progress_template', {}) + tmpl = progress_template.get('postprocess') + if tmpl: + self._downloader.to_screen( + self._downloader.evaluate_outtmpl(tmpl, progress_dict), quiet=False) + + self._downloader.to_console_title(self._downloader.evaluate_outtmpl( + progress_template.get('postprocess-title') or 'yt-dlp %(progress._default_template)s', + progress_dict)) + + def _retry_download(self, err, count, retries): + # While this is not an extractor, it behaves similar to one and + # so obey extractor_retries and "--retry-sleep extractor" + RetryManager.report_retry(err, count, retries, info=self.to_screen, warn=self.report_warning, + sleep_func=self.get_param('retry_sleep_functions', {}).get('extractor')) + + def _download_json(self, url, *, expected_http_errors=(404,)): + self.write_debug(f'{self.PP_NAME} query: {url}') + for retry in RetryManager(self.get_param('extractor_retries', 3), self._retry_download): + try: + rsp = self._downloader.urlopen(Request(url)) + except network_exceptions as e: + if isinstance(e, HTTPError) and e.status in expected_http_errors: + return None + retry.error = PostProcessingError(f'Unable to communicate with {self.PP_NAME} API: {e}') + continue + return json.loads(rsp.read().decode(rsp.headers.get_param('charset') or 'utf-8')) + + +class AudioConversionError(PostProcessingError): # Deprecated + pass diff --git a/python/lib/python3.10/site-packages/yt_dlp/postprocessor/embedthumbnail.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/embedthumbnail.py new file mode 100644 index 0000000..d7be0b3 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/embedthumbnail.py @@ -0,0 +1,227 @@ +import base64 +import os +import re +import subprocess + +from .common import PostProcessor +from .ffmpeg import FFmpegPostProcessor, FFmpegThumbnailsConvertorPP +from ..compat import imghdr +from ..dependencies import mutagen +from ..utils import ( + Popen, + PostProcessingError, + check_executable, + encodeArgument, + encodeFilename, + error_to_compat_str, + prepend_extension, + shell_quote, +) + +if mutagen: + from mutagen.flac import FLAC, Picture + from mutagen.mp4 import MP4, MP4Cover + from mutagen.oggopus import OggOpus + from mutagen.oggvorbis import OggVorbis + + +class EmbedThumbnailPPError(PostProcessingError): + pass + + +class EmbedThumbnailPP(FFmpegPostProcessor): + + def __init__(self, downloader=None, already_have_thumbnail=False): + 
FFmpegPostProcessor.__init__(self, downloader) + self._already_have_thumbnail = already_have_thumbnail + + def _get_thumbnail_resolution(self, filename, thumbnail_dict): + def guess(): + width, height = thumbnail_dict.get('width'), thumbnail_dict.get('height') + if width and height: + return width, height + + try: + size_regex = r',\s*(?P<w>\d+)x(?P<h>\d+)\s*[,\[]' + size_result = self.run_ffmpeg(filename, None, ['-hide_banner'], expected_retcodes=(1,)) + mobj = re.search(size_regex, size_result) + if mobj is None: + return guess() + except PostProcessingError as err: + self.report_warning('unable to find the thumbnail resolution; %s' % error_to_compat_str(err)) + return guess() + return int(mobj.group('w')), int(mobj.group('h')) + + def _report_run(self, exe, filename): + self.to_screen(f'{exe}: Adding thumbnail to "{filename}"') + + @PostProcessor._restrict_to(images=False) + def run(self, info): + filename = info['filepath'] + temp_filename = prepend_extension(filename, 'temp') + + if not info.get('thumbnails'): + self.to_screen('There aren\'t any thumbnails to embed') + return [], info + + idx = next((-i for i, t in enumerate(info['thumbnails'][::-1], 1) if t.get('filepath')), None) + if idx is None: + self.to_screen('There are no thumbnails on disk') + return [], info + thumbnail_filename = info['thumbnails'][idx]['filepath'] + if not os.path.exists(encodeFilename(thumbnail_filename)): + self.report_warning('Skipping embedding the thumbnail because the file is missing.') + return [], info + + # Correct extension for WebP file with wrong extension (see #25687, #25717) + convertor = FFmpegThumbnailsConvertorPP(self._downloader) + convertor.fixup_webp(info, idx) + + original_thumbnail = thumbnail_filename = info['thumbnails'][idx]['filepath'] + + # Convert unsupported thumbnail formats (see #25687, #25717) + # PNG is preferred since JPEG is lossy + thumbnail_ext = os.path.splitext(thumbnail_filename)[1][1:] + if info['ext'] not in ('mkv', 'mka') and thumbnail_ext not in ('jpg', 'jpeg', 'png'): + thumbnail_filename = convertor.convert_thumbnail(thumbnail_filename, 'png') + thumbnail_ext = 'png' + + mtime = os.stat(encodeFilename(filename)).st_mtime + + success = True + if info['ext'] == 'mp3': + options = [ + '-c', 'copy', '-map', '0:0', '-map', '1:0', '-write_id3v1', '1', '-id3v2_version', '3', + '-metadata:s:v', 'title="Album cover"', '-metadata:s:v', 'comment=Cover (front)'] + + self._report_run('ffmpeg', filename) + self.run_ffmpeg_multiple_files([filename, thumbnail_filename], temp_filename, options) + + elif info['ext'] in ['mkv', 'mka']: + options = list(self.stream_copy_opts()) + + mimetype = f'image/{thumbnail_ext.replace("jpg", "jpeg")}' + old_stream, new_stream = self.get_stream_number( + filename, ('tags', 'mimetype'), mimetype) + if old_stream is not None: + options.extend(['-map', '-0:%d' % old_stream]) + new_stream -= 1 + options.extend([ + '-attach', self._ffmpeg_filename_argument(thumbnail_filename), + '-metadata:s:%d' % new_stream, 'mimetype=%s' % mimetype, + '-metadata:s:%d' % new_stream, 'filename=cover.%s' % thumbnail_ext]) + + self._report_run('ffmpeg', filename) + self.run_ffmpeg(filename, temp_filename, options) + + elif info['ext'] in ['m4a', 'mp4', 'm4v', 'mov']: + prefer_atomicparsley = 'embed-thumbnail-atomicparsley' in self.get_param('compat_opts', []) + # Method 1: Use mutagen + if not mutagen or prefer_atomicparsley: + success = False + else: + try: + self._report_run('mutagen', filename) + meta = MP4(filename) + # NOTE: the 'covr' atom is a non-standard 
MPEG-4 atom, + # Apple iTunes 'M4A' files include the 'moov.udta.meta.ilst' atom. + f = {'jpeg': MP4Cover.FORMAT_JPEG, 'png': MP4Cover.FORMAT_PNG}[imghdr.what(thumbnail_filename)] + with open(thumbnail_filename, 'rb') as thumbfile: + thumb_data = thumbfile.read() + meta.tags['covr'] = [MP4Cover(data=thumb_data, imageformat=f)] + meta.save() + temp_filename = filename + except Exception as err: + self.report_warning('unable to embed using mutagen; %s' % error_to_compat_str(err)) + success = False + + # Method 2: Use AtomicParsley + if not success: + success = True + atomicparsley = next(( + # libatomicparsley.so : See https://github.com/xibr/ytdlp-lazy/issues/1 + x for x in ['AtomicParsley', 'atomicparsley', 'libatomicparsley.so'] + if check_executable(x, ['-v'])), None) + if atomicparsley is None: + self.to_screen('Neither mutagen nor AtomicParsley was found. Falling back to ffmpeg') + success = False + else: + if not prefer_atomicparsley: + self.to_screen('mutagen was not found. Falling back to AtomicParsley') + cmd = [encodeFilename(atomicparsley, True), + encodeFilename(filename, True), + encodeArgument('--artwork'), + encodeFilename(thumbnail_filename, True), + encodeArgument('-o'), + encodeFilename(temp_filename, True)] + cmd += [encodeArgument(o) for o in self._configuration_args('AtomicParsley')] + + self._report_run('atomicparsley', filename) + self.write_debug('AtomicParsley command line: %s' % shell_quote(cmd)) + stdout, stderr, returncode = Popen.run(cmd, text=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + if returncode: + self.report_warning(f'Unable to embed thumbnails using AtomicParsley; {stderr.strip()}') + # for formats that don't support thumbnails (like 3gp) AtomicParsley + # won't create the temporary file + if 'No changes' in stdout: + self.report_warning('The file format doesn\'t support embedding a thumbnail') + success = False + + # Method 3: Use ffmpeg+ffprobe + # Thumbnails attached using this method don't show up as cover in some cases + # See https://github.com/yt-dlp/yt-dlp/issues/2125, https://github.com/yt-dlp/yt-dlp/issues/411 + if not success: + success = True + try: + options = [*self.stream_copy_opts(), '-map', '1'] + + old_stream, new_stream = self.get_stream_number( + filename, ('disposition', 'attached_pic'), 1) + if old_stream is not None: + options.extend(['-map', '-0:%d' % old_stream]) + new_stream -= 1 + options.extend(['-disposition:%s' % new_stream, 'attached_pic']) + + self._report_run('ffmpeg', filename) + self.run_ffmpeg_multiple_files([filename, thumbnail_filename], temp_filename, options) + except PostProcessingError as err: + success = False + raise EmbedThumbnailPPError(f'Unable to embed using ffprobe & ffmpeg; {err}') + + elif info['ext'] in ['ogg', 'opus', 'flac']: + if not mutagen: + raise EmbedThumbnailPPError('module mutagen was not found. 
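Editor's note: Method 1 above in isolation. mutagen writes the cover into the MP4 'covr' atom directly, with no re-mux. A hedged sketch against the same mutagen API the postprocessor uses; the file names are illustrative.

```python
from mutagen.mp4 import MP4, MP4Cover

meta = MP4('video.m4a')
with open('cover.jpg', 'rb') as thumbfile:
    meta.tags['covr'] = [MP4Cover(data=thumbfile.read(),
                                  imageformat=MP4Cover.FORMAT_JPEG)]
meta.save()  # tags are rewritten in place; no transcode involved
```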
Please install using `python -m pip install mutagen`') + + self._report_run('mutagen', filename) + f = {'opus': OggOpus, 'flac': FLAC, 'ogg': OggVorbis}[info['ext']](filename) + + pic = Picture() + pic.mime = 'image/%s' % imghdr.what(thumbnail_filename) + with open(thumbnail_filename, 'rb') as thumbfile: + pic.data = thumbfile.read() + pic.type = 3 # front cover + res = self._get_thumbnail_resolution(thumbnail_filename, info['thumbnails'][idx]) + if res is not None: + pic.width, pic.height = res + + if info['ext'] == 'flac': + f.add_picture(pic) + else: + # https://wiki.xiph.org/VorbisComment#METADATA_BLOCK_PICTURE + f['METADATA_BLOCK_PICTURE'] = base64.b64encode(pic.write()).decode('ascii') + f.save() + temp_filename = filename + + else: + raise EmbedThumbnailPPError('Supported filetypes for thumbnail embedding are: mp3, mkv/mka, ogg/opus/flac, m4a/mp4/m4v/mov') + + if success and temp_filename != filename: + os.replace(temp_filename, filename) + + self.try_utime(filename, mtime, mtime) + converted = original_thumbnail != thumbnail_filename + self._delete_downloaded_files( + thumbnail_filename if converted or not self._already_have_thumbnail else None, + original_thumbnail if converted and not self._already_have_thumbnail else None, + info=info) + return [], info diff --git a/python/lib/python3.10/site-packages/yt_dlp/postprocessor/exec.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/exec.py new file mode 100644 index 0000000..c2e73fb --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/exec.py @@ -0,0 +1,41 @@ +from .common import PostProcessor +from ..compat import compat_shlex_quote +from ..utils import Popen, PostProcessingError, variadic + + +class ExecPP(PostProcessor): + + def __init__(self, downloader, exec_cmd): + PostProcessor.__init__(self, downloader) + self.exec_cmd = variadic(exec_cmd) + + def parse_cmd(self, cmd, info): + tmpl, tmpl_dict = self._downloader.prepare_outtmpl(cmd, info) + if tmpl_dict: # if there are no replacements, tmpl_dict = {} + return self._downloader.escape_outtmpl(tmpl) % tmpl_dict + + filepath = info.get('filepath', info.get('_filename')) + # If video, and no replacements are found, replace {} for backward compatibility + if filepath: + if '{}' not in cmd: + cmd += ' {}' + cmd = cmd.replace('{}', compat_shlex_quote(filepath)) + return cmd + + def run(self, info): + for tmpl in self.exec_cmd: + cmd = self.parse_cmd(tmpl, info) + self.to_screen(f'Executing command: {cmd}') + _, _, return_code = Popen.run(cmd, shell=True) + if return_code != 0: + raise PostProcessingError(f'Command returned error code {return_code}') + return [], info + + +# Deprecated +class ExecAfterDownloadPP(ExecPP): + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.deprecation_warning( + 'yt_dlp.postprocessor.ExecAfterDownloadPP is deprecated ' + 'and may be removed in a future version. 
Use yt_dlp.postprocessor.ExecPP instead') diff --git a/python/lib/python3.10/site-packages/yt_dlp/postprocessor/ffmpeg.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/ffmpeg.py new file mode 100644 index 0000000..323f430 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/ffmpeg.py @@ -0,0 +1,1190 @@ +import collections +import contextvars +import itertools +import json +import os +import re +import subprocess +import time + +from .common import PostProcessor +from ..compat import functools, imghdr +from ..utils import ( + MEDIA_EXTENSIONS, + ISO639Utils, + Popen, + PostProcessingError, + _get_exe_version_output, + deprecation_warning, + detect_exe_version, + determine_ext, + dfxp2srt, + encodeArgument, + encodeFilename, + filter_dict, + float_or_none, + is_outdated_version, + orderedSet, + prepend_extension, + replace_extension, + shell_quote, + traverse_obj, + variadic, + write_json_file, +) + +EXT_TO_OUT_FORMATS = { + 'aac': 'adts', + 'flac': 'flac', + 'm4a': 'ipod', + 'mka': 'matroska', + 'mkv': 'matroska', + 'mpg': 'mpeg', + 'ogv': 'ogg', + 'ts': 'mpegts', + 'wma': 'asf', + 'wmv': 'asf', + 'weba': 'webm', + 'vtt': 'webvtt', +} +ACODECS = { + # name: (ext, encoder, opts) + 'mp3': ('mp3', 'libmp3lame', ()), + 'aac': ('m4a', 'aac', ('-f', 'adts')), + 'm4a': ('m4a', 'aac', ('-bsf:a', 'aac_adtstoasc')), + 'opus': ('opus', 'libopus', ()), + 'vorbis': ('ogg', 'libvorbis', ()), + 'flac': ('flac', 'flac', ()), + 'alac': ('m4a', None, ('-acodec', 'alac')), + 'wav': ('wav', None, ('-f', 'wav')), +} + + +def create_mapping_re(supported): + return re.compile(r'{0}(?:/{0})*$'.format(r'(?:\s*\w+\s*>)?\s*(?:%s)\s*' % '|'.join(supported))) + + +def resolve_mapping(source, mapping): + """ + Get corresponding item from a mapping string like 'A>B/C>D/E' + @returns (target, error_message) + """ + for pair in mapping.lower().split('/'): + kv = pair.split('>', 1) + if len(kv) == 1 or kv[0].strip() == source: + target = kv[-1].strip() + if target == source: + return target, f'already is in target format {source}' + return target, None + return None, f'could not find a mapping for {source}' + + +class FFmpegPostProcessorError(PostProcessingError): + pass + + +class FFmpegPostProcessor(PostProcessor): + _ffmpeg_location = contextvars.ContextVar('ffmpeg_location', default=None) + + def __init__(self, downloader=None): + PostProcessor.__init__(self, downloader) + self._prefer_ffmpeg = self.get_param('prefer_ffmpeg', True) + self._paths = self._determine_executables() + + @staticmethod + def get_versions_and_features(downloader=None): + pp = FFmpegPostProcessor(downloader) + return pp._versions, pp._features + + @staticmethod + def get_versions(downloader=None): + return FFmpegPostProcessor.get_versions_and_features(downloader)[0] + + _ffmpeg_to_avconv = {'ffmpeg': 'avconv', 'ffprobe': 'avprobe'} + + def _determine_executables(self): + programs = [*self._ffmpeg_to_avconv.keys(), *self._ffmpeg_to_avconv.values()] + + location = self.get_param('ffmpeg_location', self._ffmpeg_location.get()) + if location is None: + return {p: p for p in programs} + + if not os.path.exists(location): + self.report_warning( + f'ffmpeg-location {location} does not exist! 
Continuing without ffmpeg', only_once=True) + return {} + elif os.path.isdir(location): + dirname, basename, filename = location, None, None + else: + filename = os.path.basename(location) + basename = next((p for p in programs if p in filename), 'ffmpeg') + dirname = os.path.dirname(os.path.abspath(location)) + if basename in self._ffmpeg_to_avconv.keys(): + self._prefer_ffmpeg = True + + paths = {p: os.path.join(dirname, p) for p in programs} + if basename and basename in filename: + for p in programs: + path = os.path.join(dirname, filename.replace(basename, p)) + if os.path.exists(path): + paths[p] = path + if basename: + paths[basename] = location + return paths + + _version_cache, _features_cache = {None: None}, {} + + def _get_ffmpeg_version(self, prog): + path = self._paths.get(prog) + if path in self._version_cache: + return self._version_cache[path], self._features_cache.get(path, {}) + out = _get_exe_version_output(path, ['-bsfs']) + ver = detect_exe_version(out) if out else False + if ver: + regexs = [ + r'(?:\d+:)?([0-9.]+)-[0-9]+ubuntu[0-9.]+$', # Ubuntu, see [1] + r'n([0-9.]+)$', # Arch Linux + # 1. http://www.ducea.com/2006/06/17/ubuntu-package-version-naming-explanation/ + ] + for regex in regexs: + mobj = re.match(regex, ver) + if mobj: + ver = mobj.group(1) + self._version_cache[path] = ver + if prog != 'ffmpeg' or not out: + return ver, {} + + mobj = re.search(r'(?m)^\s+libavformat\s+(?:[0-9. ]+)\s+/\s+(?P<runtime>[0-9. ]+)', out) + lavf_runtime_version = mobj.group('runtime').replace(' ', '') if mobj else None + self._features_cache[path] = features = { + 'fdk': '--enable-libfdk-aac' in out, + 'setts': 'setts' in out.splitlines(), + 'needs_adtstoasc': is_outdated_version(lavf_runtime_version, '57.56.100', False), + } + return ver, features + + @property + def _versions(self): + return filter_dict({self.basename: self._version, self.probe_basename: self._probe_version}) + + @functools.cached_property + def basename(self): + self._version # run property + return self.basename + + @functools.cached_property + def probe_basename(self): + self._probe_version # run property + return self.probe_basename + + def _get_version(self, kind): + executables = (kind, ) + if not self._prefer_ffmpeg: + executables = (kind, self._ffmpeg_to_avconv[kind]) + basename, version, features = next(filter( + lambda x: x[1], ((p, *self._get_ffmpeg_version(p)) for p in executables)), (None, None, {})) + if kind == 'ffmpeg': + self.basename, self._features = basename, features + else: + self.probe_basename = basename + if basename == self._ffmpeg_to_avconv[kind]: + self.deprecated_feature(f'Support for {self._ffmpeg_to_avconv[kind]} is deprecated and ' + f'may be removed in a future version. 
Use {kind} instead') + return version + + @functools.cached_property + def _version(self): + return self._get_version('ffmpeg') + + @functools.cached_property + def _probe_version(self): + return self._get_version('ffprobe') + + @property + def available(self): + return self.basename is not None + + @property + def executable(self): + return self._paths.get(self.basename) + + @property + def probe_available(self): + return self.probe_basename is not None + + @property + def probe_executable(self): + return self._paths.get(self.probe_basename) + + @staticmethod + def stream_copy_opts(copy=True, *, ext=None): + yield from ('-map', '0') + # Don't copy Apple TV chapters track, bin_data + # See https://github.com/yt-dlp/yt-dlp/issues/2, #19042, #19024, https://trac.ffmpeg.org/ticket/6016 + yield from ('-dn', '-ignore_unknown') + if copy: + yield from ('-c', 'copy') + if ext in ('mp4', 'mov', 'm4a'): + yield from ('-c:s', 'mov_text') + + def check_version(self): + if not self.available: + raise FFmpegPostProcessorError('ffmpeg not found. Please install or provide the path using --ffmpeg-location') + + required_version = '10-0' if self.basename == 'avconv' else '1.0' + if is_outdated_version(self._version, required_version): + self.report_warning(f'Your copy of {self.basename} is outdated, update {self.basename} ' + f'to version {required_version} or newer if you encounter any errors') + + def get_audio_codec(self, path): + if not self.probe_available and not self.available: + raise PostProcessingError('ffprobe and ffmpeg not found. Please install or provide the path using --ffmpeg-location') + try: + if self.probe_available: + cmd = [ + encodeFilename(self.probe_executable, True), + encodeArgument('-show_streams')] + else: + cmd = [ + encodeFilename(self.executable, True), + encodeArgument('-i')] + cmd.append(encodeFilename(self._ffmpeg_filename_argument(path), True)) + self.write_debug(f'{self.basename} command line: {shell_quote(cmd)}') + stdout, stderr, returncode = Popen.run( + cmd, text=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + if returncode != (0 if self.probe_available else 1): + return None + except OSError: + return None + output = stdout if self.probe_available else stderr + if self.probe_available: + audio_codec = None + for line in output.split('\n'): + if line.startswith('codec_name='): + audio_codec = line.split('=')[1].strip() + elif line.strip() == 'codec_type=audio' and audio_codec is not None: + return audio_codec + else: + # Stream #FILE_INDEX:STREAM_INDEX[STREAM_ID](LANGUAGE): CODEC_TYPE: CODEC_NAME + mobj = re.search( + r'Stream\s*#\d+:\d+(?:\[0x[0-9a-f]+\])?(?:\([a-z]{3}\))?:\s*Audio:\s*([0-9a-z]+)', + output) + if mobj: + return mobj.group(1) + return None + + def get_metadata_object(self, path, opts=[]): + if self.probe_basename != 'ffprobe': + if self.probe_available: + self.report_warning('Only ffprobe is supported for metadata extraction') + raise PostProcessingError('ffprobe not found. 
Please install or provide the path using --ffmpeg-location') + self.check_version() + + cmd = [ + encodeFilename(self.probe_executable, True), + encodeArgument('-hide_banner'), + encodeArgument('-show_format'), + encodeArgument('-show_streams'), + encodeArgument('-print_format'), + encodeArgument('json'), + ] + + cmd += opts + cmd.append(self._ffmpeg_filename_argument(path)) + self.write_debug(f'ffprobe command line: {shell_quote(cmd)}') + stdout, _, _ = Popen.run(cmd, text=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) + return json.loads(stdout) + + def get_stream_number(self, path, keys, value): + streams = self.get_metadata_object(path)['streams'] + num = next( + (i for i, stream in enumerate(streams) if traverse_obj(stream, keys, casesense=False) == value), + None) + return num, len(streams) + + def _fixup_chapters(self, info): + last_chapter = traverse_obj(info, ('chapters', -1)) + if last_chapter and not last_chapter.get('end_time'): + last_chapter['end_time'] = self._get_real_video_duration(info['filepath']) + + def _get_real_video_duration(self, filepath, fatal=True): + try: + duration = float_or_none( + traverse_obj(self.get_metadata_object(filepath), ('format', 'duration'))) + if not duration: + raise PostProcessingError('ffprobe returned empty duration') + return duration + except PostProcessingError as e: + if fatal: + raise PostProcessingError(f'Unable to determine video duration: {e.msg}') + + def _duration_mismatch(self, d1, d2, tolerance=2): + if not d1 or not d2: + return None + # The duration is often only known to the nearest second. So there can be <1sec disparity naturally. + # Further excuse an additional <1sec difference. + return abs(d1 - d2) > tolerance + + def run_ffmpeg_multiple_files(self, input_paths, out_path, opts, **kwargs): + return self.real_run_ffmpeg( + [(path, []) for path in input_paths], + [(out_path, opts)], **kwargs) + + def real_run_ffmpeg(self, input_path_opts, output_path_opts, *, expected_retcodes=(0,)): + self.check_version() + + oldest_mtime = min( + os.stat(encodeFilename(path)).st_mtime for path, _ in input_path_opts if path) + + cmd = [encodeFilename(self.executable, True), encodeArgument('-y')] + # avconv does not have repeat option + if self.basename == 'ffmpeg': + cmd += [encodeArgument('-loglevel'), encodeArgument('repeat+info')] + + def make_args(file, args, name, number): + keys = ['_%s%d' % (name, number), '_%s' % name] + if name == 'o': + args += ['-movflags', '+faststart'] + if number == 1: + keys.append('') + args += self._configuration_args(self.basename, keys) + if name == 'i': + args.append('-i') + return ( + [encodeArgument(arg) for arg in args] + + [encodeFilename(self._ffmpeg_filename_argument(file), True)]) + + for arg_type, path_opts in (('i', input_path_opts), ('o', output_path_opts)): + cmd += itertools.chain.from_iterable( + make_args(path, list(opts), arg_type, i + 1) + for i, (path, opts) in enumerate(path_opts) if path) + + self.write_debug('ffmpeg command line: %s' % shell_quote(cmd)) + _, stderr, returncode = Popen.run( + cmd, text=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) + if returncode not in variadic(expected_retcodes): + self.write_debug(stderr) + raise FFmpegPostProcessorError(stderr.strip().splitlines()[-1]) + for out_path, _ in output_path_opts: + if out_path: + self.try_utime(out_path, oldest_mtime, oldest_mtime) + return stderr + + def run_ffmpeg(self, path, out_path, opts, **kwargs): + return self.run_ffmpeg_multiple_files([path], out_path, 
opts, **kwargs) + + @staticmethod + def _ffmpeg_filename_argument(fn): + # Always use 'file:' because the filename may contain ':' (ffmpeg + # interprets that as a protocol) or can start with '-' (-- is broken in + # ffmpeg, see https://ffmpeg.org/trac/ffmpeg/ticket/2127 for details) + # Also leave '-' intact in order not to break streaming to stdout. + if fn.startswith(('http://', 'https://')): + return fn + return 'file:' + fn if fn != '-' else fn + + @staticmethod + def _quote_for_ffmpeg(string): + # See https://ffmpeg.org/ffmpeg-utils.html#toc-Quoting-and-escaping + # A sequence of '' produces '\'''\''; + # final replace removes the empty '' between \' \'. + string = string.replace("'", r"'\''").replace("'''", "'") + # Handle potential ' at string boundaries. + string = string[1:] if string[0] == "'" else "'" + string + return string[:-1] if string[-1] == "'" else string + "'" + + def force_keyframes(self, filename, timestamps): + timestamps = orderedSet(timestamps) + if timestamps[0] == 0: + timestamps = timestamps[1:] + keyframe_file = prepend_extension(filename, 'keyframes.temp') + self.to_screen(f'Re-encoding "{filename}" with appropriate keyframes') + self.run_ffmpeg(filename, keyframe_file, [ + *self.stream_copy_opts(False, ext=determine_ext(filename)), + '-force_key_frames', ','.join(f'{t:.6f}' for t in timestamps)]) + return keyframe_file + + def concat_files(self, in_files, out_file, concat_opts=None): + """ + Use concat demuxer to concatenate multiple files having identical streams. + + Only inpoint, outpoint, and duration concat options are supported. + See https://ffmpeg.org/ffmpeg-formats.html#concat-1 for details + """ + concat_file = f'{out_file}.concat' + self.write_debug(f'Writing concat spec to {concat_file}') + with open(concat_file, 'w', encoding='utf-8') as f: + f.writelines(self._concat_spec(in_files, concat_opts)) + + out_flags = list(self.stream_copy_opts(ext=determine_ext(out_file))) + + self.real_run_ffmpeg( + [(concat_file, ['-hide_banner', '-nostdin', '-f', 'concat', '-safe', '0'])], + [(out_file, out_flags)]) + self._delete_downloaded_files(concat_file) + + @classmethod + def _concat_spec(cls, in_files, concat_opts=None): + if concat_opts is None: + concat_opts = [{}] * len(in_files) + yield 'ffconcat version 1.0\n' + for file, opts in zip(in_files, concat_opts): + yield f'file {cls._quote_for_ffmpeg(cls._ffmpeg_filename_argument(file))}\n' + # Iterate explicitly to yield the following directives in order, ignoring the rest. + for directive in 'inpoint', 'outpoint', 'duration': + if directive in opts: + yield f'{directive} {opts[directive]}\n' + + +class FFmpegExtractAudioPP(FFmpegPostProcessor): + COMMON_AUDIO_EXTS = MEDIA_EXTENSIONS.common_audio + ('wma', ) + SUPPORTED_EXTS = tuple(ACODECS.keys()) + FORMAT_RE = create_mapping_re(('best', *SUPPORTED_EXTS)) + + def __init__(self, downloader=None, preferredcodec=None, preferredquality=None, nopostoverwrites=False): + FFmpegPostProcessor.__init__(self, downloader) + self.mapping = preferredcodec or 'best' + self._preferredquality = float_or_none(preferredquality) + self._nopostoverwrites = nopostoverwrites + + def _quality_args(self, codec): + if self._preferredquality is None: + return [] + elif self._preferredquality > 10: + return ['-b:a', f'{self._preferredquality}k'] + + limits = { + 'libmp3lame': (10, 0), + 'libvorbis': (0, 10), + # FFmpeg's AAC encoder does not have an upper limit for the value of -q:a. 
+ # Experimentally, with values over 4, bitrate changes were minimal or non-existent + 'aac': (0.1, 4), + 'libfdk_aac': (1, 5), + }.get(codec) + if not limits: + return [] + + q = limits[1] + (limits[0] - limits[1]) * (self._preferredquality / 10) + if codec == 'libfdk_aac': + return ['-vbr', f'{int(q)}'] + return ['-q:a', f'{q}'] + + def run_ffmpeg(self, path, out_path, codec, more_opts): + if codec is None: + acodec_opts = [] + else: + acodec_opts = ['-acodec', codec] + opts = ['-vn'] + acodec_opts + more_opts + try: + FFmpegPostProcessor.run_ffmpeg(self, path, out_path, opts) + except FFmpegPostProcessorError as err: + raise PostProcessingError(f'audio conversion failed: {err.msg}') + + @PostProcessor._restrict_to(images=False) + def run(self, information): + orig_path = path = information['filepath'] + target_format, _skip_msg = resolve_mapping(information['ext'], self.mapping) + if target_format == 'best' and information['ext'] in self.COMMON_AUDIO_EXTS: + target_format, _skip_msg = None, 'the file is already in a common audio format' + if not target_format: + self.to_screen(f'Not converting audio {orig_path}; {_skip_msg}') + return [], information + + filecodec = self.get_audio_codec(path) + if filecodec is None: + raise PostProcessingError('WARNING: unable to obtain file audio codec with ffprobe') + + if filecodec == 'aac' and target_format in ('m4a', 'best'): + # Lossless, but in another container + extension, _, more_opts, acodec = *ACODECS['m4a'], 'copy' + elif target_format == 'best' or target_format == filecodec: + # Lossless if possible + try: + extension, _, more_opts, acodec = *ACODECS[filecodec], 'copy' + except KeyError: + extension, acodec, more_opts = ACODECS['mp3'] + else: + # We convert the audio (lossy if codec is lossy) + extension, acodec, more_opts = ACODECS[target_format] + if acodec == 'aac' and self._features.get('fdk'): + acodec, more_opts = 'libfdk_aac', [] + + more_opts = list(more_opts) + if acodec != 'copy': + more_opts = self._quality_args(acodec) + + temp_path = new_path = replace_extension(path, extension, information['ext']) + + if new_path == path: + if acodec == 'copy': + self.to_screen(f'Not converting audio {orig_path}; file is already in target format {target_format}') + return [], information + orig_path = prepend_extension(path, 'orig') + temp_path = prepend_extension(path, 'temp') + if (self._nopostoverwrites and os.path.exists(encodeFilename(new_path)) + and os.path.exists(encodeFilename(orig_path))): + self.to_screen('Post-process file %s exists, skipping' % new_path) + return [], information + + self.to_screen(f'Destination: {new_path}') + self.run_ffmpeg(path, temp_path, acodec, more_opts) + + os.replace(path, orig_path) + os.replace(temp_path, new_path) + information['filepath'] = new_path + information['ext'] = extension + + # Try to update the date time for extracted audio file. 
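+ # ('filetime' is typically set by the downloader, e.g. from the HTTP
+ # Last-Modified header, so it may legitimately be absent here.)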
+ if information.get('filetime') is not None: + self.try_utime( + new_path, time.time(), information['filetime'], errnote='Cannot update utime of audio file') + + return [orig_path], information + + +class FFmpegVideoConvertorPP(FFmpegPostProcessor): + SUPPORTED_EXTS = ( + *sorted((*MEDIA_EXTENSIONS.common_video, 'gif')), + *sorted((*MEDIA_EXTENSIONS.common_audio, 'aac', 'vorbis')), + ) + FORMAT_RE = create_mapping_re(SUPPORTED_EXTS) + _ACTION = 'converting' + + def __init__(self, downloader=None, preferedformat=None): + super().__init__(downloader) + self.mapping = preferedformat + + @staticmethod + def _options(target_ext): + yield from FFmpegPostProcessor.stream_copy_opts(False) + if target_ext == 'avi': + yield from ('-c:v', 'libxvid', '-vtag', 'XVID') + + @PostProcessor._restrict_to(images=False) + def run(self, info): + filename, source_ext = info['filepath'], info['ext'].lower() + target_ext, _skip_msg = resolve_mapping(source_ext, self.mapping) + if _skip_msg: + self.to_screen(f'Not {self._ACTION} media file "{filename}"; {_skip_msg}') + return [], info + + outpath = replace_extension(filename, target_ext, source_ext) + self.to_screen(f'{self._ACTION.title()} video from {source_ext} to {target_ext}; Destination: {outpath}') + self.run_ffmpeg(filename, outpath, self._options(target_ext)) + + info['filepath'] = outpath + info['format'] = info['ext'] = target_ext + return [filename], info + + +class FFmpegVideoRemuxerPP(FFmpegVideoConvertorPP): + _ACTION = 'remuxing' + + @staticmethod + def _options(target_ext): + return FFmpegPostProcessor.stream_copy_opts() + + +class FFmpegEmbedSubtitlePP(FFmpegPostProcessor): + SUPPORTED_EXTS = ('mp4', 'mov', 'm4a', 'webm', 'mkv', 'mka') + + def __init__(self, downloader=None, already_have_subtitle=False): + super().__init__(downloader) + self._already_have_subtitle = already_have_subtitle + + @PostProcessor._restrict_to(images=False) + def run(self, info): + if info['ext'] not in self.SUPPORTED_EXTS: + self.to_screen(f'Subtitles can only be embedded in {", ".join(self.SUPPORTED_EXTS)} files') + return [], info + subtitles = info.get('requested_subtitles') + if not subtitles: + self.to_screen('There aren\'t any subtitles to embed') + return [], info + + filename = info['filepath'] + + # Disabled temporarily. 
There needs to be a way to override this + # in case of duration actually mismatching in extractor + # See: https://github.com/yt-dlp/yt-dlp/issues/1870, https://github.com/yt-dlp/yt-dlp/issues/1385 + ''' + if info.get('duration') and not info.get('__real_download') and self._duration_mismatch( + self._get_real_video_duration(filename, False), info['duration']): + self.to_screen(f'Skipping {self.pp_key()} since the real and expected durations mismatch') + return [], info + ''' + + ext = info['ext'] + sub_langs, sub_names, sub_filenames = [], [], [] + webm_vtt_warn = False + mp4_ass_warn = False + + for lang, sub_info in subtitles.items(): + if not os.path.exists(sub_info.get('filepath', '')): + self.report_warning(f'Skipping embedding {lang} subtitle because the file is missing') + continue + sub_ext = sub_info['ext'] + if sub_ext == 'json': + self.report_warning('JSON subtitles cannot be embedded') + elif ext != 'webm' or ext == 'webm' and sub_ext == 'vtt': + sub_langs.append(lang) + sub_names.append(sub_info.get('name')) + sub_filenames.append(sub_info['filepath']) + else: + if not webm_vtt_warn and ext == 'webm' and sub_ext != 'vtt': + webm_vtt_warn = True + self.report_warning('Only WebVTT subtitles can be embedded in webm files') + if not mp4_ass_warn and ext == 'mp4' and sub_ext == 'ass': + mp4_ass_warn = True + self.report_warning('ASS subtitles cannot be properly embedded in mp4 files; expect issues') + + if not sub_langs: + return [], info + + input_files = [filename] + sub_filenames + + opts = [ + *self.stream_copy_opts(ext=info['ext']), + # Don't copy the existing subtitles, we may be running the + # postprocessor a second time + '-map', '-0:s', + ] + for i, (lang, name) in enumerate(zip(sub_langs, sub_names)): + opts.extend(['-map', '%d:0' % (i + 1)]) + lang_code = ISO639Utils.short2long(lang) or lang + opts.extend(['-metadata:s:s:%d' % i, 'language=%s' % lang_code]) + if name: + opts.extend(['-metadata:s:s:%d' % i, 'handler_name=%s' % name, + '-metadata:s:s:%d' % i, 'title=%s' % name]) + + temp_filename = prepend_extension(filename, 'temp') + self.to_screen('Embedding subtitles in "%s"' % filename) + self.run_ffmpeg_multiple_files(input_files, temp_filename, opts) + os.replace(temp_filename, filename) + + files_to_delete = [] if self._already_have_subtitle else sub_filenames + return files_to_delete, info + + +class FFmpegMetadataPP(FFmpegPostProcessor): + + def __init__(self, downloader, add_metadata=True, add_chapters=True, add_infojson='if_exists'): + FFmpegPostProcessor.__init__(self, downloader) + self._add_metadata = add_metadata + self._add_chapters = add_chapters + self._add_infojson = add_infojson + + @staticmethod + def _options(target_ext): + audio_only = target_ext == 'm4a' + yield from FFmpegPostProcessor.stream_copy_opts(not audio_only) + if audio_only: + yield from ('-vn', '-acodec', 'copy') + + @PostProcessor._restrict_to(images=False) + def run(self, info): + self._fixup_chapters(info) + filename, metadata_filename = info['filepath'], None + files_to_delete, options = [], [] + if self._add_chapters and info.get('chapters'): + metadata_filename = replace_extension(filename, 'meta') + options.extend(self._get_chapter_opts(info['chapters'], metadata_filename)) + files_to_delete.append(metadata_filename) + if self._add_metadata: + options.extend(self._get_metadata_opts(info)) + + if self._add_infojson: + if info['ext'] in ('mkv', 'mka'): + infojson_filename = info.get('infojson_filename') + options.extend(self._get_infojson_opts(info, infojson_filename)) + if not 
infojson_filename: + files_to_delete.append(info.get('infojson_filename')) + elif self._add_infojson is True: + self.to_screen('The info-json can only be attached to mkv/mka files') + + if not options: + self.to_screen('There isn\'t any metadata to add') + return [], info + + temp_filename = prepend_extension(filename, 'temp') + self.to_screen('Adding metadata to "%s"' % filename) + self.run_ffmpeg_multiple_files( + (filename, metadata_filename), temp_filename, + itertools.chain(self._options(info['ext']), *options)) + self._delete_downloaded_files(*files_to_delete) + os.replace(temp_filename, filename) + return [], info + + @staticmethod + def _get_chapter_opts(chapters, metadata_filename): + with open(metadata_filename, 'w', encoding='utf-8') as f: + def ffmpeg_escape(text): + return re.sub(r'([\\=;#\n])', r'\\\1', text) + + metadata_file_content = ';FFMETADATA1\n' + for chapter in chapters: + metadata_file_content += '[CHAPTER]\nTIMEBASE=1/1000\n' + metadata_file_content += 'START=%d\n' % (chapter['start_time'] * 1000) + metadata_file_content += 'END=%d\n' % (chapter['end_time'] * 1000) + chapter_title = chapter.get('title') + if chapter_title: + metadata_file_content += 'title=%s\n' % ffmpeg_escape(chapter_title) + f.write(metadata_file_content) + yield ('-map_metadata', '1') + + def _get_metadata_opts(self, info): + meta_prefix = 'meta' + metadata = collections.defaultdict(dict) + + def add(meta_list, info_list=None): + value = next(( + str(info[key]) for key in [f'{meta_prefix}_'] + list(variadic(info_list or meta_list)) + if info.get(key) is not None), None) + if value not in ('', None): + value = value.replace('\0', '') # nul character cannot be passed in command line + metadata['common'].update({meta_f: value for meta_f in variadic(meta_list)}) + + # Info on media metadata/metadata supported by ffmpeg: + # https://wiki.multimedia.cx/index.php/FFmpeg_Metadata + # https://kdenlive.org/en/project/adding-meta-data-to-mp4-video/ + # https://kodi.wiki/view/Video_file_tagging + + add('title', ('track', 'title')) + add('date', 'upload_date') + add(('description', 'synopsis'), 'description') + add(('purl', 'comment'), 'webpage_url') + add('track', 'track_number') + add('artist', ('artist', 'creator', 'uploader', 'uploader_id')) + add('genre') + add('album') + add('album_artist') + add('disc', 'disc_number') + add('show', 'series') + add('season_number') + add('episode_id', ('episode', 'episode_id')) + add('episode_sort', 'episode_number') + if 'embed-metadata' in self.get_param('compat_opts', []): + add('comment', 'description') + metadata['common'].pop('synopsis', None) + + meta_regex = rf'{re.escape(meta_prefix)}(?P<i>\d+)?_(?P<key>.+)' + for key, value in info.items(): + mobj = re.fullmatch(meta_regex, key) + if value is not None and mobj: + metadata[mobj.group('i') or 'common'][mobj.group('key')] = value.replace('\0', '') + + # Write id3v1 metadata also since Windows Explorer can't handle id3v2 tags + yield ('-write_id3v1', '1') + + for name, value in metadata['common'].items(): + yield ('-metadata', f'{name}={value}') + + stream_idx = 0 + for fmt in info.get('requested_formats') or []: + stream_count = 2 if 'none' not in (fmt.get('vcodec'), fmt.get('acodec')) else 1 + lang = ISO639Utils.short2long(fmt.get('language') or '') or fmt.get('language') + for i in range(stream_idx, stream_idx + stream_count): + if lang: + metadata[str(i)].setdefault('language', lang) + for name, value in metadata[str(i)].items(): + yield (f'-metadata:s:{i}', f'{name}={value}') + stream_idx += stream_count + 
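+ # For example (illustrative): keys matching 'meta_FIELD' set a container-level
+ # tag and 'meta<n>_FIELD' targets stream <n>, so an info dict holding
+ # {'meta_comment': 'archived', 'meta1_language': 'eng'} yields
+ # ('-metadata', 'comment=archived') and, provided stream 1 exists among the
+ # requested formats, ('-metadata:s:1', 'language=eng').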
+ def _get_infojson_opts(self, info, infofn): + if not infofn or not os.path.exists(infofn): + if self._add_infojson is not True: + return + infofn = infofn or '%s.temp' % ( + self._downloader.prepare_filename(info, 'infojson') + or replace_extension(self._downloader.prepare_filename(info), 'info.json', info['ext'])) + if not self._downloader._ensure_dir_exists(infofn): + return + self.write_debug(f'Writing info-json to: {infofn}') + write_json_file(self._downloader.sanitize_info(info, self.get_param('clean_infojson', True)), infofn) + info['infojson_filename'] = infofn + + old_stream, new_stream = self.get_stream_number(info['filepath'], ('tags', 'mimetype'), 'application/json') + if old_stream is not None: + yield ('-map', '-0:%d' % old_stream) + new_stream -= 1 + + yield ( + '-attach', self._ffmpeg_filename_argument(infofn), + f'-metadata:s:{new_stream}', 'mimetype=application/json', + f'-metadata:s:{new_stream}', 'filename=info.json', + ) + + +class FFmpegMergerPP(FFmpegPostProcessor): + SUPPORTED_EXTS = MEDIA_EXTENSIONS.common_video + + @PostProcessor._restrict_to(images=False) + def run(self, info): + filename = info['filepath'] + temp_filename = prepend_extension(filename, 'temp') + args = ['-c', 'copy'] + audio_streams = 0 + for (i, fmt) in enumerate(info['requested_formats']): + if fmt.get('acodec') != 'none': + args.extend(['-map', f'{i}:a:0']) + aac_fixup = fmt['protocol'].startswith('m3u8') and self.get_audio_codec(fmt['filepath']) == 'aac' + if aac_fixup: + args.extend([f'-bsf:a:{audio_streams}', 'aac_adtstoasc']) + audio_streams += 1 + if fmt.get('vcodec') != 'none': + args.extend(['-map', '%u:v:0' % (i)]) + self.to_screen('Merging formats into "%s"' % filename) + self.run_ffmpeg_multiple_files(info['__files_to_merge'], temp_filename, args) + os.rename(encodeFilename(temp_filename), encodeFilename(filename)) + return info['__files_to_merge'], info + + def can_merge(self): + # TODO: figure out merge-capable ffmpeg version + if self.basename != 'avconv': + return True + + required_version = '10-0' + if is_outdated_version( + self._versions[self.basename], required_version): + warning = ('Your copy of %s is outdated and unable to properly mux separate video and audio files, ' + 'yt-dlp will download single file media. 
' + 'Update %s to version %s or newer to fix this.') % ( + self.basename, self.basename, required_version) + self.report_warning(warning) + return False + return True + + +class FFmpegFixupPostProcessor(FFmpegPostProcessor): + def _fixup(self, msg, filename, options): + temp_filename = prepend_extension(filename, 'temp') + + self.to_screen(f'{msg} of "{filename}"') + self.run_ffmpeg(filename, temp_filename, options) + + os.replace(temp_filename, filename) + + +class FFmpegFixupStretchedPP(FFmpegFixupPostProcessor): + @PostProcessor._restrict_to(images=False, audio=False) + def run(self, info): + stretched_ratio = info.get('stretched_ratio') + if stretched_ratio not in (None, 1): + self._fixup('Fixing aspect ratio', info['filepath'], [ + *self.stream_copy_opts(), '-aspect', '%f' % stretched_ratio]) + return [], info + + +class FFmpegFixupM4aPP(FFmpegFixupPostProcessor): + @PostProcessor._restrict_to(images=False, video=False) + def run(self, info): + if info.get('container') == 'm4a_dash': + self._fixup('Correcting container', info['filepath'], [*self.stream_copy_opts(), '-f', 'mp4']) + return [], info + + +class FFmpegFixupM3u8PP(FFmpegFixupPostProcessor): + def _needs_fixup(self, info): + yield info['ext'] in ('mp4', 'm4a') + yield info['protocol'].startswith('m3u8') + try: + metadata = self.get_metadata_object(info['filepath']) + except PostProcessingError as e: + self.report_warning(f'Unable to extract metadata: {e.msg}') + yield True + else: + yield traverse_obj(metadata, ('format', 'format_name'), casesense=False) == 'mpegts' + + @PostProcessor._restrict_to(images=False) + def run(self, info): + if all(self._needs_fixup(info)): + args = ['-f', 'mp4'] + if self.get_audio_codec(info['filepath']) == 'aac': + args.extend(['-bsf:a', 'aac_adtstoasc']) + self._fixup('Fixing MPEG-TS in MP4 container', info['filepath'], [ + *self.stream_copy_opts(), *args]) + return [], info + + +class FFmpegFixupTimestampPP(FFmpegFixupPostProcessor): + + def __init__(self, downloader=None, trim=0.001): + # "trim" should be used when the video contains unintended packets + super().__init__(downloader) + assert isinstance(trim, (int, float)) + self.trim = str(trim) + + @PostProcessor._restrict_to(images=False) + def run(self, info): + if not self._features.get('setts'): + self.report_warning( + 'A re-encode is needed to fix timestamps in older versions of ffmpeg. 
' + 'Please install ffmpeg 4.4 or later to fixup without re-encoding') + opts = ['-vf', 'setpts=PTS-STARTPTS'] + else: + opts = ['-c', 'copy', '-bsf', 'setts=ts=TS-STARTPTS'] + self._fixup('Fixing frame timestamp', info['filepath'], opts + [*self.stream_copy_opts(False), '-ss', self.trim]) + return [], info + + +class FFmpegCopyStreamPP(FFmpegFixupPostProcessor): + MESSAGE = 'Copying stream' + + @PostProcessor._restrict_to(images=False) + def run(self, info): + self._fixup(self.MESSAGE, info['filepath'], self.stream_copy_opts()) + return [], info + + +class FFmpegFixupDurationPP(FFmpegCopyStreamPP): + MESSAGE = 'Fixing video duration' + + +class FFmpegFixupDuplicateMoovPP(FFmpegCopyStreamPP): + MESSAGE = 'Fixing duplicate MOOV atoms' + + +class FFmpegSubtitlesConvertorPP(FFmpegPostProcessor): + SUPPORTED_EXTS = MEDIA_EXTENSIONS.subtitles + + def __init__(self, downloader=None, format=None): + super().__init__(downloader) + self.format = format + + def run(self, info): + subs = info.get('requested_subtitles') + new_ext = self.format + new_format = new_ext + if new_format == 'vtt': + new_format = 'webvtt' + if subs is None: + self.to_screen('There aren\'t any subtitles to convert') + return [], info + self.to_screen('Converting subtitles') + sub_filenames = [] + for lang, sub in subs.items(): + if not os.path.exists(sub.get('filepath', '')): + self.report_warning(f'Skipping embedding {lang} subtitle because the file is missing') + continue + ext = sub['ext'] + if ext == new_ext: + self.to_screen('Subtitle file for %s is already in the requested format' % new_ext) + continue + elif ext == 'json': + self.to_screen( + 'You have requested to convert json subtitles into another format, ' + 'which is currently not possible') + continue + old_file = sub['filepath'] + sub_filenames.append(old_file) + new_file = replace_extension(old_file, new_ext) + + if ext in ('dfxp', 'ttml', 'tt'): + self.report_warning( + 'You have requested to convert dfxp (TTML) subtitles into another format, ' + 'which results in style information loss') + + dfxp_file = old_file + srt_file = replace_extension(old_file, 'srt') + + with open(dfxp_file, 'rb') as f: + srt_data = dfxp2srt(f.read()) + + with open(srt_file, 'w', encoding='utf-8') as f: + f.write(srt_data) + old_file = srt_file + + subs[lang] = { + 'ext': 'srt', + 'data': srt_data, + 'filepath': srt_file, + } + + if new_ext == 'srt': + continue + else: + sub_filenames.append(srt_file) + + self.run_ffmpeg(old_file, new_file, ['-f', new_format]) + + with open(new_file, encoding='utf-8') as f: + subs[lang] = { + 'ext': new_ext, + 'data': f.read(), + 'filepath': new_file, + } + + info['__files_to_move'][new_file] = replace_extension( + info['__files_to_move'][sub['filepath']], new_ext) + + return sub_filenames, info + + +class FFmpegSplitChaptersPP(FFmpegPostProcessor): + def __init__(self, downloader, force_keyframes=False): + FFmpegPostProcessor.__init__(self, downloader) + self._force_keyframes = force_keyframes + + def _prepare_filename(self, number, chapter, info): + info = info.copy() + info.update({ + 'section_number': number, + 'section_title': chapter.get('title'), + 'section_start': chapter.get('start_time'), + 'section_end': chapter.get('end_time'), + }) + return self._downloader.prepare_filename(info, 'chapter') + + def _ffmpeg_args_for_chapter(self, number, chapter, info): + destination = self._prepare_filename(number, chapter, info) + if not self._downloader._ensure_dir_exists(encodeFilename(destination)): + return + + chapter['filepath'] = destination + 
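+ # (The '-ss'/'-t' options returned below are applied as input options in
+ # run(), paired with stream_copy_opts() on the output, so each chapter is
+ # extracted without re-encoding unless keyframes were forced.)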
self.to_screen('Chapter %03d; Destination: %s' % (number, destination)) + return ( + destination, + ['-ss', str(chapter['start_time']), + '-t', str(chapter['end_time'] - chapter['start_time'])]) + + @PostProcessor._restrict_to(images=False) + def run(self, info): + self._fixup_chapters(info) + chapters = info.get('chapters') or [] + if not chapters: + self.to_screen('Chapter information is unavailable') + return [], info + + in_file = info['filepath'] + if self._force_keyframes and len(chapters) > 1: + in_file = self.force_keyframes(in_file, (c['start_time'] for c in chapters)) + self.to_screen('Splitting video by chapters; %d chapters found' % len(chapters)) + for idx, chapter in enumerate(chapters): + destination, opts = self._ffmpeg_args_for_chapter(idx + 1, chapter, info) + self.real_run_ffmpeg([(in_file, opts)], [(destination, self.stream_copy_opts())]) + if in_file != info['filepath']: + self._delete_downloaded_files(in_file, msg=None) + return [], info + + +class FFmpegThumbnailsConvertorPP(FFmpegPostProcessor): + SUPPORTED_EXTS = MEDIA_EXTENSIONS.thumbnails + FORMAT_RE = create_mapping_re(SUPPORTED_EXTS) + + def __init__(self, downloader=None, format=None): + super().__init__(downloader) + self.mapping = format + + @classmethod + def is_webp(cls, path): + deprecation_warning(f'{cls.__module__}.{cls.__name__}.is_webp is deprecated') + return imghdr.what(path) == 'webp' + + def fixup_webp(self, info, idx=-1): + thumbnail_filename = info['thumbnails'][idx]['filepath'] + _, thumbnail_ext = os.path.splitext(thumbnail_filename) + if thumbnail_ext: + if thumbnail_ext.lower() != '.webp' and imghdr.what(thumbnail_filename) == 'webp': + self.to_screen('Correcting thumbnail "%s" extension to webp' % thumbnail_filename) + webp_filename = replace_extension(thumbnail_filename, 'webp') + os.replace(thumbnail_filename, webp_filename) + info['thumbnails'][idx]['filepath'] = webp_filename + info['__files_to_move'][webp_filename] = replace_extension( + info['__files_to_move'].pop(thumbnail_filename), 'webp') + + @staticmethod + def _options(target_ext): + yield from ('-update', '1') + if target_ext == 'jpg': + yield from ('-bsf:v', 'mjpeg2jpeg') + + def convert_thumbnail(self, thumbnail_filename, target_ext): + thumbnail_conv_filename = replace_extension(thumbnail_filename, target_ext) + + self.to_screen(f'Converting thumbnail "{thumbnail_filename}" to {target_ext}') + _, source_ext = os.path.splitext(thumbnail_filename) + self.real_run_ffmpeg( + [(thumbnail_filename, [] if source_ext == '.gif' else ['-f', 'image2', '-pattern_type', 'none'])], + [(thumbnail_conv_filename, self._options(target_ext))]) + return thumbnail_conv_filename + + def run(self, info): + files_to_delete = [] + has_thumbnail = False + + for idx, thumbnail_dict in enumerate(info.get('thumbnails') or []): + original_thumbnail = thumbnail_dict.get('filepath') + if not original_thumbnail: + continue + has_thumbnail = True + self.fixup_webp(info, idx) + original_thumbnail = thumbnail_dict['filepath'] # Path can change during fixup + thumbnail_ext = os.path.splitext(original_thumbnail)[1][1:].lower() + if thumbnail_ext == 'jpeg': + thumbnail_ext = 'jpg' + target_ext, _skip_msg = resolve_mapping(thumbnail_ext, self.mapping) + if _skip_msg: + self.to_screen(f'Not converting thumbnail "{original_thumbnail}"; {_skip_msg}') + continue + thumbnail_dict['filepath'] = self.convert_thumbnail(original_thumbnail, target_ext) + files_to_delete.append(original_thumbnail) + info['__files_to_move'][thumbnail_dict['filepath']] = replace_extension( + 
info['__files_to_move'][original_thumbnail], target_ext) + + if not has_thumbnail: + self.to_screen('There aren\'t any thumbnails to convert') + return files_to_delete, info + + +class FFmpegConcatPP(FFmpegPostProcessor): + def __init__(self, downloader, only_multi_video=False): + self._only_multi_video = only_multi_video + super().__init__(downloader) + + def _get_codecs(self, file): + codecs = traverse_obj(self.get_metadata_object(file), ('streams', ..., 'codec_name')) + self.write_debug(f'Codecs = {", ".join(codecs)}') + return tuple(codecs) + + def concat_files(self, in_files, out_file): + if not self._downloader._ensure_dir_exists(out_file): + return + if len(in_files) == 1: + if os.path.realpath(in_files[0]) != os.path.realpath(out_file): + self.to_screen(f'Moving "{in_files[0]}" to "{out_file}"') + os.replace(in_files[0], out_file) + return [] + + if len(set(map(self._get_codecs, in_files))) > 1: + raise PostProcessingError( + 'The files have different streams/codecs and cannot be concatenated. ' + 'Either select different formats or --recode-video them to a common format') + + self.to_screen(f'Concatenating {len(in_files)} files; Destination: {out_file}') + super().concat_files(in_files, out_file) + return in_files + + @PostProcessor._restrict_to(images=False, simulated=False) + def run(self, info): + entries = info.get('entries') or [] + if not any(entries) or (self._only_multi_video and info['_type'] != 'multi_video'): + return [], info + elif traverse_obj(entries, (..., lambda k, v: k == 'requested_downloads' and len(v) > 1)): + raise PostProcessingError('Concatenation is not supported when downloading multiple separate formats') + + in_files = traverse_obj(entries, (..., 'requested_downloads', 0, 'filepath')) or [] + if len(in_files) < len(entries): + raise PostProcessingError('Aborting concatenation because some downloads failed') + + exts = traverse_obj(entries, (..., 'requested_downloads', 0, 'ext'), (..., 'ext')) + ie_copy = collections.ChainMap({'ext': exts[0] if len(set(exts)) == 1 else 'mkv'}, + info, self._downloader._playlist_infodict(info)) + out_file = self._downloader.prepare_filename(ie_copy, 'pl_video') + + files_to_delete = self.concat_files(in_files, out_file) + + info['requested_downloads'] = [{ + 'filepath': out_file, + 'ext': ie_copy['ext'], + }] + return files_to_delete, info diff --git a/lib/python3.11/site-packages/yt_dlp/postprocessor/metadataparser.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/metadataparser.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/postprocessor/metadataparser.py rename to python/lib/python3.10/site-packages/yt_dlp/postprocessor/metadataparser.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/postprocessor/modify_chapters.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/modify_chapters.py new file mode 100644 index 0000000..f521986 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/modify_chapters.py @@ -0,0 +1,336 @@ +import copy +import heapq +import os + +from .common import PostProcessor +from .ffmpeg import FFmpegPostProcessor, FFmpegSubtitlesConvertorPP +from .sponsorblock import SponsorBlockPP +from ..utils import PostProcessingError, orderedSet, prepend_extension + +_TINY_CHAPTER_DURATION = 1 +DEFAULT_SPONSORBLOCK_CHAPTER_TITLE = '[SponsorBlock]: %(category_names)l' + + +class ModifyChaptersPP(FFmpegPostProcessor): + def __init__(self, downloader, remove_chapters_patterns=None, remove_sponsor_segments=None, remove_ranges=None, + *, 
sponsorblock_chapter_title=DEFAULT_SPONSORBLOCK_CHAPTER_TITLE, force_keyframes=False): + FFmpegPostProcessor.__init__(self, downloader) + self._remove_chapters_patterns = set(remove_chapters_patterns or []) + self._remove_sponsor_segments = set(remove_sponsor_segments or []) - set(SponsorBlockPP.NON_SKIPPABLE_CATEGORIES.keys()) + self._ranges_to_remove = set(remove_ranges or []) + self._sponsorblock_chapter_title = sponsorblock_chapter_title + self._force_keyframes = force_keyframes + + @PostProcessor._restrict_to(images=False) + def run(self, info): + self._fixup_chapters(info) + # Chapters must be preserved intact when downloading multiple formats of the same video. + chapters, sponsor_chapters = self._mark_chapters_to_remove( + copy.deepcopy(info.get('chapters')) or [], + copy.deepcopy(info.get('sponsorblock_chapters')) or []) + if not chapters and not sponsor_chapters: + return [], info + + real_duration = self._get_real_video_duration(info['filepath']) + if not chapters: + chapters = [{'start_time': 0, 'end_time': info.get('duration') or real_duration, 'title': info['title']}] + + info['chapters'], cuts = self._remove_marked_arrange_sponsors(chapters + sponsor_chapters) + if not cuts: + return [], info + elif not info['chapters']: + self.report_warning('You have requested to remove the entire video, which is not possible') + return [], info + + original_duration, info['duration'] = info.get('duration'), info['chapters'][-1]['end_time'] + if self._duration_mismatch(real_duration, original_duration, 1): + if not self._duration_mismatch(real_duration, info['duration']): + self.to_screen(f'Skipping {self.pp_key()} since the video appears to be already cut') + return [], info + if not info.get('__real_download'): + raise PostProcessingError('Cannot cut video since the real and expected durations mismatch. 
' + 'Different chapters may have already been removed') + else: + self.write_debug('Expected and actual durations mismatch') + + concat_opts = self._make_concat_opts(cuts, real_duration) + self.write_debug('Concat spec = %s' % ', '.join(f'{c.get("inpoint", 0.0)}-{c.get("outpoint", "inf")}' for c in concat_opts)) + + def remove_chapters(file, is_sub): + return file, self.remove_chapters(file, cuts, concat_opts, self._force_keyframes and not is_sub) + + in_out_files = [remove_chapters(info['filepath'], False)] + in_out_files.extend(remove_chapters(in_file, True) for in_file in self._get_supported_subs(info)) + + # Renaming should only happen after all files are processed + files_to_remove = [] + for in_file, out_file in in_out_files: + mtime = os.stat(in_file).st_mtime + uncut_file = prepend_extension(in_file, 'uncut') + os.replace(in_file, uncut_file) + os.replace(out_file, in_file) + self.try_utime(in_file, mtime, mtime) + files_to_remove.append(uncut_file) + + return files_to_remove, info + + def _mark_chapters_to_remove(self, chapters, sponsor_chapters): + if self._remove_chapters_patterns: + warn_no_chapter_to_remove = True + if not chapters: + self.to_screen('Chapter information is unavailable') + warn_no_chapter_to_remove = False + for c in chapters: + if any(regex.search(c['title']) for regex in self._remove_chapters_patterns): + c['remove'] = True + warn_no_chapter_to_remove = False + if warn_no_chapter_to_remove: + self.to_screen('There are no chapters matching the regex') + + if self._remove_sponsor_segments: + warn_no_chapter_to_remove = True + if not sponsor_chapters: + self.to_screen('SponsorBlock information is unavailable') + warn_no_chapter_to_remove = False + for c in sponsor_chapters: + if c['category'] in self._remove_sponsor_segments: + c['remove'] = True + warn_no_chapter_to_remove = False + if warn_no_chapter_to_remove: + self.to_screen('There are no matching SponsorBlock chapters') + + sponsor_chapters.extend({ + 'start_time': start, + 'end_time': end, + 'category': 'manually_removed', + '_categories': [('manually_removed', start, end, 'Manually removed')], + 'remove': True, + } for start, end in self._ranges_to_remove) + + return chapters, sponsor_chapters + + def _get_supported_subs(self, info): + for sub in (info.get('requested_subtitles') or {}).values(): + sub_file = sub.get('filepath') + # The file might have been removed by --embed-subs + if not sub_file or not os.path.exists(sub_file): + continue + ext = sub['ext'] + if ext not in FFmpegSubtitlesConvertorPP.SUPPORTED_EXTS: + self.report_warning(f'Cannot remove chapters from external {ext} subtitles; "{sub_file}" is now out of sync') + continue + # TODO: create __real_download for subs? + yield sub_file + + def _remove_marked_arrange_sponsors(self, chapters): + # Store cuts separately, since adjacent and overlapping cuts must be merged. + cuts = [] + + def append_cut(c): + assert 'remove' in c, 'Not a cut is appended to cuts' + last_to_cut = cuts[-1] if cuts else None + if last_to_cut and last_to_cut['end_time'] >= c['start_time']: + last_to_cut['end_time'] = max(last_to_cut['end_time'], c['end_time']) + else: + cuts.append(c) + return len(cuts) - 1 + + def excess_duration(c): + # Cuts that are completely within the chapter reduce chapters' duration. + # Since cuts can overlap, excess duration may be less than the sum of cuts' durations. + # To avoid that, chapter stores the index of the first cut within the chapter, + # instead of storing excess duration. 
append_cut ensures that subsequent cuts (if any) + # will be merged with previous ones (if necessary). + cut_idx, excess = c.pop('cut_idx', len(cuts)), 0 + while cut_idx < len(cuts): + cut = cuts[cut_idx] + if cut['start_time'] >= c['end_time']: + break + if cut['end_time'] > c['start_time']: + excess += min(cut['end_time'], c['end_time']) + excess -= max(cut['start_time'], c['start_time']) + cut_idx += 1 + return excess + + new_chapters = [] + + def append_chapter(c): + assert 'remove' not in c, 'Cut is appended to chapters' + length = c['end_time'] - c['start_time'] - excess_duration(c) + # Chapter is completely covered by cuts or sponsors. + if length <= 0: + return + start = new_chapters[-1]['end_time'] if new_chapters else 0 + c.update(start_time=start, end_time=start + length) + new_chapters.append(c) + + # Turn into a priority queue, index is a tie breaker. + # Plain stack sorted by start_time is not enough: after splitting the chapter, + # the part returned to the stack is not guaranteed to have start_time + # less than or equal to that of the stack's head. + chapters = [(c['start_time'], i, c) for i, c in enumerate(chapters)] + heapq.heapify(chapters) + + _, cur_i, cur_chapter = heapq.heappop(chapters) + while chapters: + _, i, c = heapq.heappop(chapters) + # Non-overlapping chapters or cuts can be appended directly. However, + # adjacent non-overlapping cuts must be merged, which is handled by append_cut. + if cur_chapter['end_time'] <= c['start_time']: + (append_chapter if 'remove' not in cur_chapter else append_cut)(cur_chapter) + cur_i, cur_chapter = i, c + continue + + # Eight possibilities for overlapping chapters: (cut, cut), (cut, sponsor), + # (cut, normal), (sponsor, cut), (normal, cut), (sponsor, sponsor), + # (sponsor, normal), and (normal, sponsor). There is no (normal, normal): + # normal chapters are assumed not to overlap. + if 'remove' in cur_chapter: + # (cut, cut): adjust end_time. + if 'remove' in c: + cur_chapter['end_time'] = max(cur_chapter['end_time'], c['end_time']) + # (cut, sponsor/normal): chop the beginning of the later chapter + # (if it's not completely hidden by the cut). Push to the priority queue + # to restore sorting by start_time: with beginning chopped, c may actually + # start later than the remaining chapters from the queue. + elif cur_chapter['end_time'] < c['end_time']: + c['start_time'] = cur_chapter['end_time'] + c['_was_cut'] = True + heapq.heappush(chapters, (c['start_time'], i, c)) + # (sponsor/normal, cut). + elif 'remove' in c: + cur_chapter['_was_cut'] = True + # Chop the end of the current chapter if the cut is not contained within it. + # Chopping the end doesn't break start_time sorting, no PQ push is necessary. + if cur_chapter['end_time'] <= c['end_time']: + cur_chapter['end_time'] = c['start_time'] + append_chapter(cur_chapter) + cur_i, cur_chapter = i, c + continue + # Current chapter contains the cut within it. If the current chapter is + # a sponsor chapter, check whether the categories before and after the cut differ. + if '_categories' in cur_chapter: + after_c = dict(cur_chapter, start_time=c['end_time'], _categories=[]) + cur_cats = [] + for cat_start_end in cur_chapter['_categories']: + if cat_start_end[1] < c['start_time']: + cur_cats.append(cat_start_end) + if cat_start_end[2] > c['end_time']: + after_c['_categories'].append(cat_start_end) + cur_chapter['_categories'] = cur_cats + if cur_chapter['_categories'] != after_c['_categories']: + # Categories before and after the cut differ: push the after part to PQ. 
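+ # (Re-pushing restores the ordering by start_time; cur_i is reused as the
+ # tie-breaker so the split-off part keeps its original queue priority.)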
+ heapq.heappush(chapters, (after_c['start_time'], cur_i, after_c)) + cur_chapter['end_time'] = c['start_time'] + append_chapter(cur_chapter) + cur_i, cur_chapter = i, c + continue + # Either sponsor categories before and after the cut are the same or + # we're dealing with a normal chapter. Just register an outstanding cut: + # subsequent append_chapter will reduce the duration. + cur_chapter.setdefault('cut_idx', append_cut(c)) + # (sponsor, normal): if a normal chapter is not completely overlapped, + # chop the beginning of it and push it to PQ. + elif '_categories' in cur_chapter and '_categories' not in c: + if cur_chapter['end_time'] < c['end_time']: + c['start_time'] = cur_chapter['end_time'] + c['_was_cut'] = True + heapq.heappush(chapters, (c['start_time'], i, c)) + # (normal, sponsor) and (sponsor, sponsor) + else: + assert '_categories' in c, 'Normal chapters overlap' + cur_chapter['_was_cut'] = True + c['_was_cut'] = True + # Push the part after the sponsor to PQ. + if cur_chapter['end_time'] > c['end_time']: + # deepcopy to make categories in after_c and cur_chapter/c refer to different lists. + after_c = dict(copy.deepcopy(cur_chapter), start_time=c['end_time']) + heapq.heappush(chapters, (after_c['start_time'], cur_i, after_c)) + # Push the part after the overlap to PQ. + elif c['end_time'] > cur_chapter['end_time']: + after_cur = dict(copy.deepcopy(c), start_time=cur_chapter['end_time']) + heapq.heappush(chapters, (after_cur['start_time'], cur_i, after_cur)) + c['end_time'] = cur_chapter['end_time'] + # (sponsor, sponsor): merge categories in the overlap. + if '_categories' in cur_chapter: + c['_categories'] = cur_chapter['_categories'] + c['_categories'] + # Inherit the cuts that the current chapter has accumulated within it. + if 'cut_idx' in cur_chapter: + c['cut_idx'] = cur_chapter['cut_idx'] + cur_chapter['end_time'] = c['start_time'] + append_chapter(cur_chapter) + cur_i, cur_chapter = i, c + (append_chapter if 'remove' not in cur_chapter else append_cut)(cur_chapter) + return self._remove_tiny_rename_sponsors(new_chapters), cuts + + def _remove_tiny_rename_sponsors(self, chapters): + new_chapters = [] + for i, c in enumerate(chapters): + # Merge with the previous/next if the chapter is tiny. + # Only tiny chapters resulting from a cut can be skipped. + # Chapters that were already tiny in the original list will be preserved. + if (('_was_cut' in c or '_categories' in c) + and c['end_time'] - c['start_time'] < _TINY_CHAPTER_DURATION): + if not new_chapters: + # Prepend tiny chapter to the next one if possible. + if i < len(chapters) - 1: + chapters[i + 1]['start_time'] = c['start_time'] + continue + else: + old_c = new_chapters[-1] + if i < len(chapters) - 1: + next_c = chapters[i + 1] + # Not a typo: key names in old_c and next_c are really different. + prev_is_sponsor = 'categories' in old_c + next_is_sponsor = '_categories' in next_c + # Preferentially prepend tiny normals to normals and sponsors to sponsors. 
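+ # ('categories' without the underscore marks a sponsor chapter already
+ # finalized into new_chapters, while '_categories' marks one still being
+ # processed; hence the deliberately different key names.)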
+ if (('_categories' not in c and prev_is_sponsor and not next_is_sponsor) + or ('_categories' in c and not prev_is_sponsor and next_is_sponsor)): + next_c['start_time'] = c['start_time'] + continue + old_c['end_time'] = c['end_time'] + continue + + c.pop('_was_cut', None) + cats = c.pop('_categories', None) + if cats: + category, _, _, category_name = min(cats, key=lambda c: c[2] - c[1]) + c.update({ + 'category': category, + 'categories': orderedSet(x[0] for x in cats), + 'name': category_name, + 'category_names': orderedSet(x[3] for x in cats), + }) + c['title'] = self._downloader.evaluate_outtmpl(self._sponsorblock_chapter_title, c.copy()) + # Merge identically named sponsors. + if (new_chapters and 'categories' in new_chapters[-1] + and new_chapters[-1]['title'] == c['title']): + new_chapters[-1]['end_time'] = c['end_time'] + continue + new_chapters.append(c) + return new_chapters + + def remove_chapters(self, filename, ranges_to_cut, concat_opts, force_keyframes=False): + in_file = filename + out_file = prepend_extension(in_file, 'temp') + if force_keyframes: + in_file = self.force_keyframes(in_file, (t for c in ranges_to_cut for t in (c['start_time'], c['end_time']))) + self.to_screen(f'Removing chapters from {filename}') + self.concat_files([in_file] * len(concat_opts), out_file, concat_opts) + if in_file != filename: + self._delete_downloaded_files(in_file, msg=None) + return out_file + + @staticmethod + def _make_concat_opts(chapters_to_remove, duration): + opts = [{}] + for s in chapters_to_remove: + # Do not create 0 duration chunk at the beginning. + if s['start_time'] == 0: + opts[-1]['inpoint'] = f'{s["end_time"]:.6f}' + continue + opts[-1]['outpoint'] = f'{s["start_time"]:.6f}' + # Do not create 0 duration chunk at the end. + if s['end_time'] < duration: + opts.append({'inpoint': f'{s["end_time"]:.6f}'}) + return opts diff --git a/lib/python3.11/site-packages/yt_dlp/postprocessor/movefilesafterdownload.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/movefilesafterdownload.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/postprocessor/movefilesafterdownload.py rename to python/lib/python3.10/site-packages/yt_dlp/postprocessor/movefilesafterdownload.py diff --git a/lib/python3.11/site-packages/yt_dlp/postprocessor/sponskrub.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/sponskrub.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/postprocessor/sponskrub.py rename to python/lib/python3.10/site-packages/yt_dlp/postprocessor/sponskrub.py diff --git a/lib/python3.11/site-packages/yt_dlp/postprocessor/sponsorblock.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/sponsorblock.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/postprocessor/sponsorblock.py rename to python/lib/python3.10/site-packages/yt_dlp/postprocessor/sponsorblock.py diff --git a/lib/python3.11/site-packages/yt_dlp/postprocessor/xattrpp.py b/python/lib/python3.10/site-packages/yt_dlp/postprocessor/xattrpp.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/postprocessor/xattrpp.py rename to python/lib/python3.10/site-packages/yt_dlp/postprocessor/xattrpp.py diff --git a/python/lib/python3.10/site-packages/yt_dlp/socks.py b/python/lib/python3.10/site-packages/yt_dlp/socks.py new file mode 100644 index 0000000..e7f41d7 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/socks.py @@ -0,0 +1,274 @@ +# Public Domain SOCKS proxy protocol implementation +# Adapted from 
https://gist.github.com/bluec0re/cafd3764412967417fd3 +# References: +# SOCKS4 protocol http://www.openssh.com/txt/socks4.protocol +# SOCKS4A protocol http://www.openssh.com/txt/socks4a.protocol +# SOCKS5 protocol https://tools.ietf.org/html/rfc1928 +# SOCKS5 username/password authentication https://tools.ietf.org/html/rfc1929 + +import collections +import socket +import struct + +from .compat import compat_ord + +__author__ = 'Timo Schmid <coding@timoschmid.de>' + +SOCKS4_VERSION = 4 +SOCKS4_REPLY_VERSION = 0x00 +# Excerpt from SOCKS4A protocol: +# if the client cannot resolve the destination host's domain name to find its +# IP address, it should set the first three bytes of DSTIP to NULL and the last +# byte to a non-zero value. +SOCKS4_DEFAULT_DSTIP = struct.pack('!BBBB', 0, 0, 0, 0xFF) + +SOCKS5_VERSION = 5 +SOCKS5_USER_AUTH_VERSION = 0x01 +SOCKS5_USER_AUTH_SUCCESS = 0x00 + + +class Socks4Command: + CMD_CONNECT = 0x01 + CMD_BIND = 0x02 + + +class Socks5Command(Socks4Command): + CMD_UDP_ASSOCIATE = 0x03 + + +class Socks5Auth: + AUTH_NONE = 0x00 + AUTH_GSSAPI = 0x01 + AUTH_USER_PASS = 0x02 + AUTH_NO_ACCEPTABLE = 0xFF # For server response + + +class Socks5AddressType: + ATYP_IPV4 = 0x01 + ATYP_DOMAINNAME = 0x03 + ATYP_IPV6 = 0x04 + + +class ProxyError(socket.error): + ERR_SUCCESS = 0x00 + + def __init__(self, code=None, msg=None): + if code is not None and msg is None: + msg = self.CODES.get(code) or 'unknown error' + super().__init__(code, msg) + + +class InvalidVersionError(ProxyError): + def __init__(self, expected_version, got_version): + msg = ('Invalid response version from server. Expected {:02x} got ' + '{:02x}'.format(expected_version, got_version)) + super().__init__(0, msg) + + +class Socks4Error(ProxyError): + ERR_SUCCESS = 90 + + CODES = { + 91: 'request rejected or failed', + 92: 'request rejected because SOCKS server cannot connect to identd on the client', + 93: 'request rejected because the client program and identd report different user-ids' + } + + +class Socks5Error(ProxyError): + ERR_GENERAL_FAILURE = 0x01 + + CODES = { + 0x01: 'general SOCKS server failure', + 0x02: 'connection not allowed by ruleset', + 0x03: 'Network unreachable', + 0x04: 'Host unreachable', + 0x05: 'Connection refused', + 0x06: 'TTL expired', + 0x07: 'Command not supported', + 0x08: 'Address type not supported', + 0xFE: 'unknown username or invalid password', + 0xFF: 'all offered authentication methods were rejected' + } + + +class ProxyType: + SOCKS4 = 0 + SOCKS4A = 1 + SOCKS5 = 2 + + +Proxy = collections.namedtuple('Proxy', ( + 'type', 'host', 'port', 'username', 'password', 'remote_dns')) + + +class sockssocket(socket.socket): + def __init__(self, *args, **kwargs): + self._proxy = None + super().__init__(*args, **kwargs) + + def setproxy(self, proxytype, addr, port, rdns=True, username=None, password=None): + assert proxytype in (ProxyType.SOCKS4, ProxyType.SOCKS4A, ProxyType.SOCKS5) + + self._proxy = Proxy(proxytype, addr, port, username, password, rdns) + + def recvall(self, cnt): + data = b'' + while len(data) < cnt: + cur = self.recv(cnt - len(data)) + if not cur: + raise EOFError(f'{cnt - len(data)} bytes missing') + data += cur + return data + + def _recv_bytes(self, cnt): + data = self.recvall(cnt) + return struct.unpack(f'!{cnt}B', data) + + @staticmethod + def _len_and_data(data): + return struct.pack('!B', len(data)) + data + + def _check_response_version(self, expected_version, got_version): + if got_version != expected_version: + self.close() + raise 
InvalidVersionError(expected_version, got_version) + + def _resolve_address(self, destaddr, default, use_remote_dns, family=None): + for f in (family,) if family else (socket.AF_INET, socket.AF_INET6): + try: + return f, socket.inet_pton(f, destaddr) + except OSError: + continue + + if use_remote_dns and self._proxy.remote_dns: + return 0, default + else: + res = socket.getaddrinfo(destaddr, None, family=family or 0) + f, _, _, _, ipaddr = res[0] + return f, socket.inet_pton(f, ipaddr[0]) + + def _setup_socks4(self, address, is_4a=False): + destaddr, port = address + + _, ipaddr = self._resolve_address(destaddr, SOCKS4_DEFAULT_DSTIP, use_remote_dns=is_4a, family=socket.AF_INET) + + packet = struct.pack('!BBH', SOCKS4_VERSION, Socks4Command.CMD_CONNECT, port) + ipaddr + + username = (self._proxy.username or '').encode() + packet += username + b'\x00' + + if is_4a and self._proxy.remote_dns and ipaddr == SOCKS4_DEFAULT_DSTIP: + packet += destaddr.encode() + b'\x00' + + self.sendall(packet) + + version, resp_code, dstport, dsthost = struct.unpack('!BBHI', self.recvall(8)) + + self._check_response_version(SOCKS4_REPLY_VERSION, version) + + if resp_code != Socks4Error.ERR_SUCCESS: + self.close() + raise Socks4Error(resp_code) + + return (dsthost, dstport) + + def _setup_socks4a(self, address): + self._setup_socks4(address, is_4a=True) + + def _socks5_auth(self): + packet = struct.pack('!B', SOCKS5_VERSION) + + auth_methods = [Socks5Auth.AUTH_NONE] + if self._proxy.username and self._proxy.password: + auth_methods.append(Socks5Auth.AUTH_USER_PASS) + + packet += struct.pack('!B', len(auth_methods)) + packet += struct.pack(f'!{len(auth_methods)}B', *auth_methods) + + self.sendall(packet) + + version, method = self._recv_bytes(2) + + self._check_response_version(SOCKS5_VERSION, version) + + if method == Socks5Auth.AUTH_NO_ACCEPTABLE or ( + method == Socks5Auth.AUTH_USER_PASS and (not self._proxy.username or not self._proxy.password)): + self.close() + raise Socks5Error(Socks5Auth.AUTH_NO_ACCEPTABLE) + + if method == Socks5Auth.AUTH_USER_PASS: + username = self._proxy.username.encode() + password = self._proxy.password.encode() + packet = struct.pack('!B', SOCKS5_USER_AUTH_VERSION) + packet += self._len_and_data(username) + self._len_and_data(password) + self.sendall(packet) + + version, status = self._recv_bytes(2) + + self._check_response_version(SOCKS5_USER_AUTH_VERSION, version) + + if status != SOCKS5_USER_AUTH_SUCCESS: + self.close() + raise Socks5Error(Socks5Error.ERR_GENERAL_FAILURE) + + def _setup_socks5(self, address): + destaddr, port = address + + family, ipaddr = self._resolve_address(destaddr, None, use_remote_dns=True) + + self._socks5_auth() + + reserved = 0 + packet = struct.pack('!BBB', SOCKS5_VERSION, Socks5Command.CMD_CONNECT, reserved) + if ipaddr is None: + destaddr = destaddr.encode() + packet += struct.pack('!B', Socks5AddressType.ATYP_DOMAINNAME) + packet += self._len_and_data(destaddr) + elif family == socket.AF_INET: + packet += struct.pack('!B', Socks5AddressType.ATYP_IPV4) + ipaddr + elif family == socket.AF_INET6: + packet += struct.pack('!B', Socks5AddressType.ATYP_IPV6) + ipaddr + packet += struct.pack('!H', port) + + self.sendall(packet) + + version, status, reserved, atype = self._recv_bytes(4) + + self._check_response_version(SOCKS5_VERSION, version) + + if status != Socks5Error.ERR_SUCCESS: + self.close() + raise Socks5Error(status) + + if atype == Socks5AddressType.ATYP_IPV4: + destaddr = self.recvall(4) + elif atype == Socks5AddressType.ATYP_DOMAINNAME: + alen = 
compat_ord(self.recv(1)) + destaddr = self.recvall(alen) + elif atype == Socks5AddressType.ATYP_IPV6: + destaddr = self.recvall(16) + destport = struct.unpack('!H', self.recvall(2))[0] + + return (destaddr, destport) + + def _make_proxy(self, connect_func, address): + if not self._proxy: + return connect_func(self, address) + + result = connect_func(self, (self._proxy.host, self._proxy.port)) + if result != 0 and result is not None: + return result + setup_funcs = { + ProxyType.SOCKS4: self._setup_socks4, + ProxyType.SOCKS4A: self._setup_socks4a, + ProxyType.SOCKS5: self._setup_socks5, + } + setup_funcs[self._proxy.type](address) + return result + + def connect(self, address): + self._make_proxy(socket.socket.connect, address) + + def connect_ex(self, address): + return self._make_proxy(socket.socket.connect_ex, address) diff --git a/python/lib/python3.10/site-packages/yt_dlp/update.py b/python/lib/python3.10/site-packages/yt_dlp/update.py new file mode 100644 index 0000000..db79df1 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/update.py @@ -0,0 +1,464 @@ +import atexit +import contextlib +import hashlib +import json +import os +import platform +import re +import subprocess +import sys +from zipimport import zipimporter + +from .compat import functools # isort: split +from .compat import compat_realpath, compat_shlex_quote +from .networking import Request +from .networking.exceptions import HTTPError, network_exceptions +from .utils import ( + Popen, + cached_method, + deprecation_warning, + remove_end, + remove_start, + shell_quote, + system_identifier, + version_tuple, +) +from .version import CHANNEL, UPDATE_HINT, VARIANT, __version__ + +UPDATE_SOURCES = { + 'stable': 'yt-dlp/yt-dlp', + 'nightly': 'yt-dlp/yt-dlp-nightly-builds', +} +REPOSITORY = UPDATE_SOURCES['stable'] + +_VERSION_RE = re.compile(r'(\d+\.)*\d+') + +API_BASE_URL = 'https://api.github.com/repos' + +# Backwards compatibility variables for the current channel +API_URL = f'{API_BASE_URL}/{REPOSITORY}/releases' + + +@functools.cache +def _get_variant_and_executable_path(): + """@returns (variant, executable_path)""" + if getattr(sys, 'frozen', False): + path = sys.executable + if not hasattr(sys, '_MEIPASS'): + return 'py2exe', path + elif sys._MEIPASS == os.path.dirname(path): + return f'{sys.platform}_dir', path + elif sys.platform == 'darwin': + machine = '_legacy' if version_tuple(platform.mac_ver()[0]) < (10, 15) else '' + else: + machine = f'_{platform.machine().lower()}' + # Ref: https://en.wikipedia.org/wiki/Uname#Examples + if machine[1:] in ('x86', 'x86_64', 'amd64', 'i386', 'i686'): + machine = '_x86' if platform.architecture()[0][:2] == '32' else '' + return f'{remove_end(sys.platform, "32")}{machine}_exe', path + + path = os.path.dirname(__file__) + if isinstance(__loader__, zipimporter): + return 'zip', os.path.join(path, '..') + elif (os.path.basename(sys.argv[0]) in ('__main__.py', '-m') + and os.path.exists(os.path.join(path, '../.git/HEAD'))): + return 'source', path + return 'unknown', path + + +def detect_variant(): + return VARIANT or _get_variant_and_executable_path()[0] + + +@functools.cache +def current_git_head(): + if detect_variant() != 'source': + return + with contextlib.suppress(Exception): + stdout, _, _ = Popen.run( + ['git', 'rev-parse', '--short', 'HEAD'], + text=True, cwd=os.path.dirname(os.path.abspath(__file__)), + stdout=subprocess.PIPE, stderr=subprocess.PIPE) + if re.fullmatch('[0-9a-f]+', stdout.strip()): + return stdout.strip() + + +_FILE_SUFFIXES = { + 'zip': '', + 
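+ # Suffix of the published release asset for each self-updatable variant;
+ # the zip build is distributed without a suffix.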
'py2exe': '_min.exe', + 'win_exe': '.exe', + 'win_x86_exe': '_x86.exe', + 'darwin_exe': '_macos', + 'darwin_legacy_exe': '_macos_legacy', + 'linux_exe': '_linux', + 'linux_aarch64_exe': '_linux_aarch64', + 'linux_armv7l_exe': '_linux_armv7l', +} + +_NON_UPDATEABLE_REASONS = { + **{variant: None for variant in _FILE_SUFFIXES}, # Updatable + **{variant: f'Auto-update is not supported for unpackaged {name} executable; Re-download the latest release' + for variant, name in {'win32_dir': 'Windows', 'darwin_dir': 'MacOS', 'linux_dir': 'Linux'}.items()}, + 'source': 'You cannot update when running from source code; Use git to pull the latest changes', + 'unknown': 'You installed yt-dlp with a package manager or setup.py; Use that to update', + 'other': 'You are using an unofficial build of yt-dlp; Build the executable again', +} + + +def is_non_updateable(): + if UPDATE_HINT: + return UPDATE_HINT + return _NON_UPDATEABLE_REASONS.get( + detect_variant(), _NON_UPDATEABLE_REASONS['unknown' if VARIANT else 'other']) + + +def _get_system_deprecation(): + MIN_SUPPORTED, MIN_RECOMMENDED = (3, 7), (3, 8) + + if sys.version_info > MIN_RECOMMENDED: + return None + + major, minor = sys.version_info[:2] + if sys.version_info < MIN_SUPPORTED: + msg = f'Python version {major}.{minor} is no longer supported' + else: + msg = f'Support for Python version {major}.{minor} has been deprecated. ' + # Temporary until `win_x86_exe` uses 3.8, which will deprecate Vista and Server 2008 + if detect_variant() == 'win_x86_exe': + platform_name = platform.platform() + if any(platform_name.startswith(f'Windows-{name}') for name in ('Vista', '2008Server')): + msg = 'Support for Windows Vista/Server 2008 has been deprecated. ' + else: + return None + msg += ('See https://github.com/yt-dlp/yt-dlp/issues/7803 for details.' + '\nYou may stop receiving updates on this version at any time') + + major, minor = MIN_RECOMMENDED + return f'{msg}! Please update to Python {major}.{minor} or above' + + +def _sha256_file(path): + h = hashlib.sha256() + mv = memoryview(bytearray(128 * 1024)) + with open(os.path.realpath(path), 'rb', buffering=0) as f: + for n in iter(lambda: f.readinto(mv), 0): + h.update(mv[:n]) + return h.hexdigest() + + +class Updater: + _exact = True + + def __init__(self, ydl, target=None): + self.ydl = ydl + + self.target_channel, sep, self.target_tag = (target or CHANNEL).rpartition('@') + # stable => stable@latest + if not sep and ('/' in self.target_tag or self.target_tag in UPDATE_SOURCES): + self.target_channel = self.target_tag + self.target_tag = None + elif not self.target_channel: + self.target_channel = CHANNEL.partition('@')[0] + + if not self.target_tag: + self.target_tag = 'latest' + self._exact = False + elif self.target_tag != 'latest': + self.target_tag = f'tags/{self.target_tag}' + + if '/' in self.target_channel: + self._target_repo = self.target_channel + if self.target_channel not in (CHANNEL, *UPDATE_SOURCES.values()): + self.ydl.report_warning( + f'You are switching to an {self.ydl._format_err("unofficial", "red")} executable ' + f'from {self.ydl._format_err(self._target_repo, self.ydl.Styles.EMPHASIS)}. ' + f'Run {self.ydl._format_err("at your own risk", "light red")}') + self._block_restart('Automatically restarting into custom builds is disabled for security reasons') + else: + self._target_repo = UPDATE_SOURCES.get(self.target_channel) + if not self._target_repo: + self._report_error( + f'Invalid update channel {self.target_channel!r} requested. 
' + f'Valid channels are {", ".join(UPDATE_SOURCES)}', True) + + def _version_compare(self, a, b, channel=CHANNEL): + if self._exact and channel != self.target_channel: + return False + + if _VERSION_RE.fullmatch(f'{a}.{b}'): + a, b = version_tuple(a), version_tuple(b) + return a == b if self._exact else a >= b + return a == b + + @functools.cached_property + def _tag(self): + if self._version_compare(self.current_version, self.latest_version): + return self.target_tag + + identifier = f'{detect_variant()} {self.target_channel} {system_identifier()}' + for line in self._download('_update_spec', 'latest').decode().splitlines(): + if not line.startswith('lock '): + continue + _, tag, pattern = line.split(' ', 2) + if re.match(pattern, identifier): + if not self._exact: + return f'tags/{tag}' + elif self.target_tag == 'latest' or not self._version_compare( + tag, self.target_tag[5:], channel=self.target_channel): + self._report_error( + f'yt-dlp cannot be updated above {tag} since you are on an older Python version', True) + return f'tags/{self.current_version}' + return self.target_tag + + @cached_method + def _get_version_info(self, tag): + url = f'{API_BASE_URL}/{self._target_repo}/releases/{tag}' + self.ydl.write_debug(f'Fetching release info: {url}') + return json.loads(self.ydl.urlopen(Request(url, headers={ + 'Accept': 'application/vnd.github+json', + 'User-Agent': 'yt-dlp', + 'X-GitHub-Api-Version': '2022-11-28', + })).read().decode()) + + @property + def current_version(self): + """Current version""" + return __version__ + + @staticmethod + def _label(channel, tag): + """Label for a given channel and tag""" + return f'{channel}@{remove_start(tag, "tags/")}' + + def _get_actual_tag(self, tag): + if tag.startswith('tags/'): + return tag[5:] + return self._get_version_info(tag)['tag_name'] + + @property + def new_version(self): + """Version of the latest release we can update to""" + return self._get_actual_tag(self._tag) + + @property + def latest_version(self): + """Version of the target release""" + return self._get_actual_tag(self.target_tag) + + @property + def has_update(self): + """Whether there is an update available""" + return not self._version_compare(self.current_version, self.new_version) + + @functools.cached_property + def filename(self): + """Filename of the executable""" + return compat_realpath(_get_variant_and_executable_path()[1]) + + def _download(self, name, tag): + slug = 'latest/download' if tag == 'latest' else f'download/{tag[5:]}' + url = f'https://github.com/{self._target_repo}/releases/{slug}/{name}' + self.ydl.write_debug(f'Downloading {name} from {url}') + return self.ydl.urlopen(url).read() + + @functools.cached_property + def release_name(self): + """The release filename""" + return f'yt-dlp{_FILE_SUFFIXES[detect_variant()]}' + + @functools.cached_property + def release_hash(self): + """Hash of the latest release""" + hash_data = dict(ln.split()[::-1] for ln in self._download('SHA2-256SUMS', self._tag).decode().splitlines()) + return hash_data[self.release_name] + + def _report_error(self, msg, expected=False): + self.ydl.report_error(msg, tb=False if expected else None) + self.ydl._download_retcode = 100 + + def _report_permission_error(self, file): + self._report_error(f'Unable to write to {file}; Try running as administrator', True) + + def _report_network_error(self, action, delim=';'): + self._report_error( + f'Unable to {action}{delim} visit ' + f'https://github.com/{self._target_repo}/releases/{self.target_tag.replace("tags/", "tag/")}', True) + + 
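The channel/tag handling in `__init__` above compresses several fallbacks into a few lines. As a rough illustration of the resolution rules only (a hypothetical `parse_target` helper, not part of this patch), the behaviour reduces to something like:

    UPDATE_SOURCES = {'stable': 'yt-dlp/yt-dlp', 'nightly': 'yt-dlp/yt-dlp-nightly-builds'}

    def parse_target(target, default_channel='stable'):
        # 'nightly' -> ('nightly', 'latest'); 'stable@2023.07.06' -> ('stable', 'tags/2023.07.06')
        channel, sep, tag = target.rpartition('@')
        if not sep and ('/' in tag or tag in UPDATE_SOURCES):
            channel, tag = tag, ''  # a bare channel name or owner/repo, not a tag
        channel = channel or default_channel
        if not tag or tag == 'latest':
            return channel, 'latest'  # inexact: any release at or above the current one qualifies
        return channel, f'tags/{tag}'

    assert parse_target('nightly') == ('nightly', 'latest')
    assert parse_target('stable@2023.07.06') == ('stable', 'tags/2023.07.06')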
def check_update(self): + """Report whether there is an update available""" + if not self._target_repo: + return False + try: + self.ydl.to_screen(( + f'Available version: {self._label(self.target_channel, self.latest_version)}, ' if self.target_tag == 'latest' else '' + ) + f'Current version: {self._label(CHANNEL, self.current_version)}') + except network_exceptions as e: + return self._report_network_error(f'obtain version info ({e})', delim='; Please try again later or') + + if not is_non_updateable(): + self.ydl.to_screen(f'Current Build Hash: {_sha256_file(self.filename)}') + + if self.has_update: + return True + + if self.target_tag == self._tag: + self.ydl.to_screen(f'yt-dlp is up to date ({self._label(CHANNEL, self.current_version)})') + elif not self._exact: + self.ydl.report_warning('yt-dlp cannot be updated any further since you are on an older Python version') + return False + + def update(self): + """Update yt-dlp executable to the latest version""" + if not self.check_update(): + return + err = is_non_updateable() + if err: + return self._report_error(err, True) + self.ydl.to_screen(f'Updating to {self._label(self.target_channel, self.new_version)} ...') + if (_VERSION_RE.fullmatch(self.target_tag[5:]) + and version_tuple(self.target_tag[5:]) < (2023, 3, 2)): + self.ydl.report_warning('You are downgrading to a version without --update-to') + self._block_restart('Cannot automatically restart to a version without --update-to') + + directory = os.path.dirname(self.filename) + if not os.access(self.filename, os.W_OK): + return self._report_permission_error(self.filename) + elif not os.access(directory, os.W_OK): + return self._report_permission_error(directory) + + new_filename, old_filename = f'{self.filename}.new', f'{self.filename}.old' + if detect_variant() == 'zip': # Can be replaced in-place + new_filename, old_filename = self.filename, None + + try: + if os.path.exists(old_filename or ''): + os.remove(old_filename) + except OSError: + return self._report_error('Unable to remove the old version') + + try: + newcontent = self._download(self.release_name, self._tag) + except network_exceptions as e: + if isinstance(e, HTTPError) and e.status == 404: + return self._report_error( + f'The requested tag {self._label(self.target_channel, self.target_tag)} does not exist', True) + return self._report_network_error(f'fetch updates: {e}') + + try: + expected_hash = self.release_hash + except Exception: + self.ydl.report_warning('no hash information found for the release') + else: + if hashlib.sha256(newcontent).hexdigest() != expected_hash: + return self._report_network_error('verify the new executable') + + try: + with open(new_filename, 'wb') as outf: + outf.write(newcontent) + except OSError: + return self._report_permission_error(new_filename) + + if old_filename: + mask = os.stat(self.filename).st_mode + try: + os.rename(self.filename, old_filename) + except OSError: + return self._report_error('Unable to move current version') + + try: + os.rename(new_filename, self.filename) + except OSError: + self._report_error('Unable to overwrite current version') + return os.rename(old_filename, self.filename) + + variant = detect_variant() + if variant.startswith('win') or variant == 'py2exe': + atexit.register(Popen, f'ping 127.0.0.1 -n 5 -w 1000 & del /F "{old_filename}"', + shell=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) + elif old_filename: + try: + os.remove(old_filename) + except OSError: + self._report_error('Unable to remove the old version') + + try: + 
os.chmod(self.filename, mask) + except OSError: + return self._report_error( + f'Unable to set permissions. Run: sudo chmod a+rx {compat_shlex_quote(self.filename)}') + + self.ydl.to_screen(f'Updated yt-dlp to {self._label(self.target_channel, self.new_version)}') + return True + + @functools.cached_property + def cmd(self): + """The command-line to run the executable, if known""" + # There is no sys.orig_argv in py < 3.10. Also, it can be [] when frozen + if getattr(sys, 'orig_argv', None): + return sys.orig_argv + elif getattr(sys, 'frozen', False): + return sys.argv + + def restart(self): + """Restart the executable""" + assert self.cmd, 'Must be frozen or Py >= 3.10' + self.ydl.write_debug(f'Restarting: {shell_quote(self.cmd)}') + _, _, returncode = Popen.run(self.cmd) + return returncode + + def _block_restart(self, msg): + def wrapper(): + self._report_error(f'{msg}. Restart yt-dlp to use the updated version', expected=True) + return self.ydl._download_retcode + self.restart = wrapper + + +def run_update(ydl): + """Update the program file with the latest version from the repository + @returns Whether there was a successful update (No update = False) + """ + return Updater(ydl).update() + + +# Deprecated +def update_self(to_screen, verbose, opener): + import traceback + + deprecation_warning(f'"{__name__}.update_self" is deprecated and may be removed ' + f'in a future version. Use "{__name__}.run_update(ydl)" instead') + + printfn = to_screen + + class FakeYDL(): + to_screen = printfn + + def report_warning(self, msg, *args, **kwargs): + return printfn(f'WARNING: {msg}', *args, **kwargs) + + def report_error(self, msg, tb=None): + printfn(f'ERROR: {msg}') + if not verbose: + return + if tb is None: + # Copied from YoutubeDL.trouble + if sys.exc_info()[0]: + tb = '' + if hasattr(sys.exc_info()[1], 'exc_info') and sys.exc_info()[1].exc_info[0]: + tb += ''.join(traceback.format_exception(*sys.exc_info()[1].exc_info)) + tb += traceback.format_exc() + else: + tb_data = traceback.format_list(traceback.extract_stack()) + tb = ''.join(tb_data) + if tb: + printfn(tb) + + def write_debug(self, msg, *args, **kwargs): + printfn(f'[debug] {msg}', *args, **kwargs) + + def urlopen(self, url): + return opener.open(url) + + return run_update(FakeYDL()) + + +__all__ = ['Updater'] diff --git a/python/lib/python3.10/site-packages/yt_dlp/utils/__init__.py b/python/lib/python3.10/site-packages/yt_dlp/utils/__init__.py new file mode 100644 index 0000000..c267e32 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/utils/__init__.py @@ -0,0 +1,10 @@ +# flake8: noqa: F403 +from ..compat.compat_utils import passthrough_module + +passthrough_module(__name__, '._deprecated') +del passthrough_module + +# isort: off +from .traversal import * +from ._utils import * +from ._utils import _configuration_args, _get_exe_version_output # noqa: F401 diff --git a/python/lib/python3.10/site-packages/yt_dlp/utils/_deprecated.py b/python/lib/python3.10/site-packages/yt_dlp/utils/_deprecated.py new file mode 100644 index 0000000..a8ae8ec --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/utils/_deprecated.py @@ -0,0 +1,39 @@ +"""Deprecated - New code should avoid these""" +import warnings + +from ..compat.compat_utils import passthrough_module + +# XXX: Implement this the same way as other DeprecationWarnings without circular import +passthrough_module(__name__, '.._legacy', callback=lambda attr: warnings.warn( + DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=6)) +del 
passthrough_module + + +from ._utils import preferredencoding + + +def encodeFilename(s, for_subprocess=False): + assert isinstance(s, str) + return s + + +def decodeFilename(b, for_subprocess=False): + return b + + +def decodeArgument(b): + return b + + +def decodeOption(optval): + if optval is None: + return optval + if isinstance(optval, bytes): + optval = optval.decode(preferredencoding()) + + assert isinstance(optval, str) + return optval + + +def error_to_compat_str(err): + return str(err) diff --git a/python/lib/python3.10/site-packages/yt_dlp/utils/_legacy.py b/python/lib/python3.10/site-packages/yt_dlp/utils/_legacy.py new file mode 100644 index 0000000..dde0209 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/utils/_legacy.py @@ -0,0 +1,242 @@ +"""No longer used and new code should not use. Exists only for API compat.""" +import platform +import struct +import sys +import urllib.error +import urllib.parse +import urllib.request +import zlib + +from ._utils import Popen, decode_base_n, preferredencoding +from .networking import escape_rfc3986 # noqa: F401 +from .networking import normalize_url as escape_url # noqa: F401 +from .traversal import traverse_obj +from ..dependencies import certifi, websockets +from ..networking._helper import make_ssl_context +from ..networking._urllib import HTTPHandler + +# isort: split +from .networking import random_user_agent, std_headers # noqa: F401 +from ..cookies import YoutubeDLCookieJar # noqa: F401 +from ..networking._urllib import PUTRequest # noqa: F401 +from ..networking._urllib import SUPPORTED_ENCODINGS, HEADRequest # noqa: F401 +from ..networking._urllib import ProxyHandler as PerRequestProxyHandler # noqa: F401 +from ..networking._urllib import RedirectHandler as YoutubeDLRedirectHandler # noqa: F401 +from ..networking._urllib import ( # noqa: F401 + make_socks_conn_class, + update_Request, +) +from ..networking.exceptions import HTTPError, network_exceptions # noqa: F401 + +has_certifi = bool(certifi) +has_websockets = bool(websockets) + + +def load_plugins(name, suffix, namespace): + from ..plugins import load_plugins + ret = load_plugins(name, suffix) + namespace.update(ret) + return ret + + +def traverse_dict(dictn, keys, casesense=True): + return traverse_obj(dictn, keys, casesense=casesense, is_user_input=True, traverse_string=True) + + +def decode_base(value, digits): + return decode_base_n(value, table=digits) + + +def platform_name(): + """ Returns the platform name as a str """ + return platform.platform() + + +def get_subprocess_encoding(): + if sys.platform == 'win32' and sys.getwindowsversion()[0] >= 5: + # For subprocess calls, encode with locale encoding + # Refer to http://stackoverflow.com/a/9951851/35070 + encoding = preferredencoding() + else: + encoding = sys.getfilesystemencoding() + if encoding is None: + encoding = 'utf-8' + return encoding + + +# UNUSED +# Based on png2str() written by @gdkchan and improved by @yokrysty +# Originally posted at https://github.com/ytdl-org/youtube-dl/issues/9706 +def decode_png(png_data): + # Reference: https://www.w3.org/TR/PNG/ + header = png_data[8:] + + if png_data[:8] != b'\x89PNG\x0d\x0a\x1a\x0a' or header[4:8] != b'IHDR': + raise OSError('Not a valid PNG file.') + + int_map = {1: '>B', 2: '>H', 4: '>I'} + unpack_integer = lambda x: struct.unpack(int_map[len(x)], x)[0] + + chunks = [] + + while header: + length = unpack_integer(header[:4]) + header = header[4:] + + chunk_type = header[:4] + header = header[4:] + + chunk_data = header[:length] + header = 
header[length:] + + header = header[4:] # Skip CRC + + chunks.append({ + 'type': chunk_type, + 'length': length, + 'data': chunk_data + }) + + ihdr = chunks[0]['data'] + + width = unpack_integer(ihdr[:4]) + height = unpack_integer(ihdr[4:8]) + + idat = b'' + + for chunk in chunks: + if chunk['type'] == b'IDAT': + idat += chunk['data'] + + if not idat: + raise OSError('Unable to read PNG data.') + + decompressed_data = bytearray(zlib.decompress(idat)) + + stride = width * 3 + pixels = [] + + def _get_pixel(idx): + x = idx % stride + y = idx // stride + return pixels[y][x] + + for y in range(height): + basePos = y * (1 + stride) + filter_type = decompressed_data[basePos] + + current_row = [] + + pixels.append(current_row) + + for x in range(stride): + color = decompressed_data[1 + basePos + x] + basex = y * stride + x + left = 0 + up = 0 + + if x > 2: + left = _get_pixel(basex - 3) + if y > 0: + up = _get_pixel(basex - stride) + + if filter_type == 1: # Sub + color = (color + left) & 0xff + elif filter_type == 2: # Up + color = (color + up) & 0xff + elif filter_type == 3: # Average + color = (color + ((left + up) >> 1)) & 0xff + elif filter_type == 4: # Paeth + a = left + b = up + c = 0 + + if x > 2 and y > 0: + c = _get_pixel(basex - stride - 3) + + p = a + b - c + + pa = abs(p - a) + pb = abs(p - b) + pc = abs(p - c) + + if pa <= pb and pa <= pc: + color = (color + a) & 0xff + elif pb <= pc: + color = (color + b) & 0xff + else: + color = (color + c) & 0xff + + current_row.append(color) + + return width, height, pixels + + +def register_socks_protocols(): + # "Register" SOCKS protocols + # In Python < 2.6.5, urlsplit() suffers from bug https://bugs.python.org/issue7904 + # URLs with protocols not in urlparse.uses_netloc are not handled correctly + for scheme in ('socks', 'socks4', 'socks4a', 'socks5'): + if scheme not in urllib.parse.uses_netloc: + urllib.parse.uses_netloc.append(scheme) + + +def handle_youtubedl_headers(headers): + filtered_headers = headers + + if 'Youtubedl-no-compression' in filtered_headers: + filtered_headers = {k: v for k, v in filtered_headers.items() if k.lower() != 'accept-encoding'} + del filtered_headers['Youtubedl-no-compression'] + + return filtered_headers + + +def request_to_url(req): + if isinstance(req, urllib.request.Request): + return req.get_full_url() + else: + return req + + +def sanitized_Request(url, *args, **kwargs): + from ..utils import extract_basic_auth, sanitize_url + url, auth_header = extract_basic_auth(escape_url(sanitize_url(url))) + if auth_header is not None: + headers = args[1] if len(args) >= 2 else kwargs.setdefault('headers', {}) + headers['Authorization'] = auth_header + return urllib.request.Request(url, *args, **kwargs) + + +class YoutubeDLHandler(HTTPHandler): + def __init__(self, params, *args, **kwargs): + self._params = params + super().__init__(*args, **kwargs) + + +YoutubeDLHTTPSHandler = YoutubeDLHandler + + +class YoutubeDLCookieProcessor(urllib.request.HTTPCookieProcessor): + def __init__(self, cookiejar=None): + urllib.request.HTTPCookieProcessor.__init__(self, cookiejar) + + def http_response(self, request, response): + return urllib.request.HTTPCookieProcessor.http_response(self, request, response) + + https_request = urllib.request.HTTPCookieProcessor.http_request + https_response = http_response + + +def make_HTTPS_handler(params, **kwargs): + return YoutubeDLHTTPSHandler(params, context=make_ssl_context( + verify=not params.get('nocheckcertificate'), + client_certificate=params.get('client_certificate'), + 
client_certificate_key=params.get('client_certificate_key'), + client_certificate_password=params.get('client_certificate_password'), + legacy_support=params.get('legacyserverconnect'), + use_certifi='no-certifi' not in params.get('compat_opts', []), + ), **kwargs) + + +def process_communicate_or_kill(p, *args, **kwargs): + return Popen.communicate_or_kill(p, *args, **kwargs) diff --git a/python/lib/python3.10/site-packages/yt_dlp/utils/_utils.py b/python/lib/python3.10/site-packages/yt_dlp/utils/_utils.py new file mode 100644 index 0000000..10c7c43 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/utils/_utils.py @@ -0,0 +1,5502 @@ +import asyncio +import atexit +import base64 +import binascii +import calendar +import codecs +import collections +import collections.abc +import contextlib +import datetime +import email.header +import email.utils +import errno +import hashlib +import hmac +import html.entities +import html.parser +import inspect +import io +import itertools +import json +import locale +import math +import mimetypes +import netrc +import operator +import os +import platform +import random +import re +import shlex +import socket +import ssl +import struct +import subprocess +import sys +import tempfile +import time +import traceback +import types +import unicodedata +import urllib.error +import urllib.parse +import urllib.request +import xml.etree.ElementTree + +from . import traversal + +from ..compat import functools # isort: split +from ..compat import ( + compat_etree_fromstring, + compat_expanduser, + compat_HTMLParseError, + compat_os_name, + compat_shlex_quote, +) +from ..dependencies import websockets, xattr + +__name__ = __name__.rsplit('.', 1)[0] # Pretend to be the parent module + +# This is not clearly defined otherwise +compiled_regex_type = type(re.compile('')) + + +class NO_DEFAULT: + pass + + +def IDENTITY(x): + return x + + +ENGLISH_MONTH_NAMES = [ + 'January', 'February', 'March', 'April', 'May', 'June', + 'July', 'August', 'September', 'October', 'November', 'December'] + +MONTH_NAMES = { + 'en': ENGLISH_MONTH_NAMES, + 'fr': [ + 'janvier', 'février', 'mars', 'avril', 'mai', 'juin', + 'juillet', 'août', 'septembre', 'octobre', 'novembre', 'décembre'], + # these follow the genitive grammatical case (dopełniacz) + # some websites might be using nominative, which will require another month list + # https://en.wikibooks.org/wiki/Polish/Noun_cases + 'pl': ['stycznia', 'lutego', 'marca', 'kwietnia', 'maja', 'czerwca', + 'lipca', 'sierpnia', 'września', 'października', 'listopada', 'grudnia'], +} + +# From https://github.com/python/cpython/blob/3.11/Lib/email/_parseaddr.py#L36-L42 +TIMEZONE_NAMES = { + 'UT': 0, 'UTC': 0, 'GMT': 0, 'Z': 0, + 'AST': -4, 'ADT': -3, # Atlantic (used in Canada) + 'EST': -5, 'EDT': -4, # Eastern + 'CST': -6, 'CDT': -5, # Central + 'MST': -7, 'MDT': -6, # Mountain + 'PST': -8, 'PDT': -7 # Pacific +} + +# needed for sanitizing filenames in restricted mode +ACCENT_CHARS = dict(zip('ÂÃÄÀÁÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖŐØŒÙÚÛÜŰÝÞßàáâãäåæçèéêëìíîïðñòóôõöőøœùúûüűýþÿ', + itertools.chain('AAAAAA', ['AE'], 'CEEEEIIIIDNOOOOOOO', ['OE'], 'UUUUUY', ['TH', 'ss'], + 'aaaaaa', ['ae'], 'ceeeeiiiionooooooo', ['oe'], 'uuuuuy', ['th'], 'y'))) + +DATE_FORMATS = ( + '%d %B %Y', + '%d %b %Y', + '%B %d %Y', + '%B %dst %Y', + '%B %dnd %Y', + '%B %drd %Y', + '%B %dth %Y', + '%b %d %Y', + '%b %dst %Y', + '%b %dnd %Y', + '%b %drd %Y', + '%b %dth %Y', + '%b %dst %Y %I:%M', + '%b %dnd %Y %I:%M', + '%b %drd %Y %I:%M', + '%b %dth %Y %I:%M', + '%Y %m %d',
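+ # Numeric ISO-style formats follow; DATE_FORMATS_DAY_FIRST and
+ # DATE_FORMATS_MONTH_FIRST below extend this tuple with the locale-ambiguous orderings.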
'%Y-%m-%d', + '%Y.%m.%d.', + '%Y/%m/%d', + '%Y/%m/%d %H:%M', + '%Y/%m/%d %H:%M:%S', + '%Y%m%d%H%M', + '%Y%m%d%H%M%S', + '%Y%m%d', + '%Y-%m-%d %H:%M', + '%Y-%m-%d %H:%M:%S', + '%Y-%m-%d %H:%M:%S.%f', + '%Y-%m-%d %H:%M:%S:%f', + '%d.%m.%Y %H:%M', + '%d.%m.%Y %H.%M', + '%Y-%m-%dT%H:%M:%SZ', + '%Y-%m-%dT%H:%M:%S.%fZ', + '%Y-%m-%dT%H:%M:%S.%f0Z', + '%Y-%m-%dT%H:%M:%S', + '%Y-%m-%dT%H:%M:%S.%f', + '%Y-%m-%dT%H:%M', + '%b %d %Y at %H:%M', + '%b %d %Y at %H:%M:%S', + '%B %d %Y at %H:%M', + '%B %d %Y at %H:%M:%S', + '%H:%M %d-%b-%Y', +) + +DATE_FORMATS_DAY_FIRST = list(DATE_FORMATS) +DATE_FORMATS_DAY_FIRST.extend([ + '%d-%m-%Y', + '%d.%m.%Y', + '%d.%m.%y', + '%d/%m/%Y', + '%d/%m/%y', + '%d/%m/%Y %H:%M:%S', + '%d-%m-%Y %H:%M', + '%H:%M %d/%m/%Y', +]) + +DATE_FORMATS_MONTH_FIRST = list(DATE_FORMATS) +DATE_FORMATS_MONTH_FIRST.extend([ + '%m-%d-%Y', + '%m.%d.%Y', + '%m/%d/%Y', + '%m/%d/%y', + '%m/%d/%Y %H:%M:%S', +]) + +PACKED_CODES_RE = r"}\('(.+)',(\d+),(\d+),'([^']+)'\.split\('\|'\)" +JSON_LD_RE = r'(?is)<script[^>]+type=(["\']?)application/ld\+json\1[^>]*>\s*(?P<json_ld>{.+?}|\[.+?\])\s*</script>' + +NUMBER_RE = r'\d+(?:\.\d+)?' + + +@functools.cache +def preferredencoding(): + """Get preferred encoding. + + Returns the best encoding scheme for the system, based on + locale.getpreferredencoding() and some further tweaks. + """ + try: + pref = locale.getpreferredencoding() + 'TEST'.encode(pref) + except Exception: + pref = 'UTF-8' + + return pref + + +def write_json_file(obj, fn): + """ Encode obj as JSON and write it to fn, atomically if possible """ + + tf = tempfile.NamedTemporaryFile( + prefix=f'{os.path.basename(fn)}.', dir=os.path.dirname(fn), + suffix='.tmp', delete=False, mode='w', encoding='utf-8') + + try: + with tf: + json.dump(obj, tf, ensure_ascii=False) + if sys.platform == 'win32': + # Need to remove existing file on Windows, else os.rename raises + # WindowsError or FileExistsError. 
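+ # The unlink gives up atomicity on Windows only; elsewhere the os.rename
+ # below replaces fn in a single step.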
+ with contextlib.suppress(OSError): + os.unlink(fn) + with contextlib.suppress(OSError): + mask = os.umask(0) + os.umask(mask) + os.chmod(tf.name, 0o666 & ~mask) + os.rename(tf.name, fn) + except Exception: + with contextlib.suppress(OSError): + os.remove(tf.name) + raise + + +def find_xpath_attr(node, xpath, key, val=None): + """ Find the xpath xpath[@key=val] """ + assert re.match(r'^[a-zA-Z_-]+$', key) + expr = xpath + ('[@%s]' % key if val is None else f"[@{key}='{val}']") + return node.find(expr) + +# On python2.6 the xml.etree.ElementTree.Element methods don't support +# the namespace parameter + + +def xpath_with_ns(path, ns_map): + components = [c.split(':') for c in path.split('/')] + replaced = [] + for c in components: + if len(c) == 1: + replaced.append(c[0]) + else: + ns, tag = c + replaced.append('{%s}%s' % (ns_map[ns], tag)) + return '/'.join(replaced) + + +def xpath_element(node, xpath, name=None, fatal=False, default=NO_DEFAULT): + def _find_xpath(xpath): + return node.find(xpath) + + if isinstance(xpath, str): + n = _find_xpath(xpath) + else: + for xp in xpath: + n = _find_xpath(xp) + if n is not None: + break + + if n is None: + if default is not NO_DEFAULT: + return default + elif fatal: + name = xpath if name is None else name + raise ExtractorError('Could not find XML element %s' % name) + else: + return None + return n + + +def xpath_text(node, xpath, name=None, fatal=False, default=NO_DEFAULT): + n = xpath_element(node, xpath, name, fatal=fatal, default=default) + if n is None or n == default: + return n + if n.text is None: + if default is not NO_DEFAULT: + return default + elif fatal: + name = xpath if name is None else name + raise ExtractorError('Could not find XML element\'s text %s' % name) + else: + return None + return n.text + + +def xpath_attr(node, xpath, key, name=None, fatal=False, default=NO_DEFAULT): + n = find_xpath_attr(node, xpath, key) + if n is None: + if default is not NO_DEFAULT: + return default + elif fatal: + name = f'{xpath}[@{key}]' if name is None else name + raise ExtractorError('Could not find XML attribute %s' % name) + else: + return None + return n.attrib[key] + + +def get_element_by_id(id, html, **kwargs): + """Return the content of the tag with the specified ID in the passed HTML document""" + return get_element_by_attribute('id', id, html, **kwargs) + + +def get_element_html_by_id(id, html, **kwargs): + """Return the html of the tag with the specified ID in the passed HTML document""" + return get_element_html_by_attribute('id', id, html, **kwargs) + + +def get_element_by_class(class_name, html): + """Return the content of the first tag with the specified class in the passed HTML document""" + retval = get_elements_by_class(class_name, html) + return retval[0] if retval else None + + +def get_element_html_by_class(class_name, html): + """Return the html of the first tag with the specified class in the passed HTML document""" + retval = get_elements_html_by_class(class_name, html) + return retval[0] if retval else None + + +def get_element_by_attribute(attribute, value, html, **kwargs): + retval = get_elements_by_attribute(attribute, value, html, **kwargs) + return retval[0] if retval else None + + +def get_element_html_by_attribute(attribute, value, html, **kargs): + retval = get_elements_html_by_attribute(attribute, value, html, **kargs) + return retval[0] if retval else None + + +def get_elements_by_class(class_name, html, **kargs): + """Return the content of all tags with the specified class in the passed HTML document as a 
list""" + return get_elements_by_attribute( + 'class', r'[^\'"]*(?<=[\'"\s])%s(?=[\'"\s])[^\'"]*' % re.escape(class_name), + html, escape_value=False) + + +def get_elements_html_by_class(class_name, html): + """Return the html of all tags with the specified class in the passed HTML document as a list""" + return get_elements_html_by_attribute( + 'class', r'[^\'"]*(?<=[\'"\s])%s(?=[\'"\s])[^\'"]*' % re.escape(class_name), + html, escape_value=False) + + +def get_elements_by_attribute(*args, **kwargs): + """Return the content of the tag with the specified attribute in the passed HTML document""" + return [content for content, _ in get_elements_text_and_html_by_attribute(*args, **kwargs)] + + +def get_elements_html_by_attribute(*args, **kwargs): + """Return the html of the tag with the specified attribute in the passed HTML document""" + return [whole for _, whole in get_elements_text_and_html_by_attribute(*args, **kwargs)] + + +def get_elements_text_and_html_by_attribute(attribute, value, html, *, tag=r'[\w:.-]+', escape_value=True): + """ + Return the text (content) and the html (whole) of the tag with the specified + attribute in the passed HTML document + """ + if not value: + return + + quote = '' if re.match(r'''[\s"'`=<>]''', value) else '?' + + value = re.escape(value) if escape_value else value + + partial_element_re = rf'''(?x) + <(?P<tag>{tag}) + (?:\s(?:[^>"']|"[^"]*"|'[^']*')*)? + \s{re.escape(attribute)}\s*=\s*(?P<_q>['"]{quote})(?-x:{value})(?P=_q) + ''' + + for m in re.finditer(partial_element_re, html): + content, whole = get_element_text_and_html_by_tag(m.group('tag'), html[m.start():]) + + yield ( + unescapeHTML(re.sub(r'^(?P<q>["\'])(?P<content>.*)(?P=q)$', r'\g<content>', content, flags=re.DOTALL)), + whole + ) + + +class HTMLBreakOnClosingTagParser(html.parser.HTMLParser): + """ + HTML parser which raises HTMLBreakOnClosingTagException upon reaching the + closing tag for the first opening tag it has encountered, and can be used + as a context manager + """ + + class HTMLBreakOnClosingTagException(Exception): + pass + + def __init__(self): + self.tagstack = collections.deque() + html.parser.HTMLParser.__init__(self) + + def __enter__(self): + return self + + def __exit__(self, *_): + self.close() + + def close(self): + # handle_endtag does not return upon raising HTMLBreakOnClosingTagException, + # so data remains buffered; we no longer have any interest in it, thus + # override this method to discard it + pass + + def handle_starttag(self, tag, _): + self.tagstack.append(tag) + + def handle_endtag(self, tag): + if not self.tagstack: + raise compat_HTMLParseError('no tags in the stack') + while self.tagstack: + inner_tag = self.tagstack.pop() + if inner_tag == tag: + break + else: + raise compat_HTMLParseError(f'matching opening tag for closing {tag} tag not found') + if not self.tagstack: + raise self.HTMLBreakOnClosingTagException() + + +# XXX: This should be far less strict +def get_element_text_and_html_by_tag(tag, html): + """ + For the first element with the specified tag in the passed HTML document + return its content (text) and the whole element (html) + """ + def find_or_raise(haystack, needle, exc): + try: + return haystack.index(needle) + except ValueError: + raise exc + closing_tag = f'</{tag}>' + whole_start = find_or_raise( + html, f'<{tag}', compat_HTMLParseError(f'opening {tag} tag not found')) + content_start = find_or_raise( + html[whole_start:], '>', compat_HTMLParseError(f'malformed opening {tag} tag')) + content_start += whole_start + 1 + with
HTMLBreakOnClosingTagParser() as parser: + parser.feed(html[whole_start:content_start]) + if not parser.tagstack or parser.tagstack[0] != tag: + raise compat_HTMLParseError(f'parser did not match opening {tag} tag') + offset = content_start + while offset < len(html): + next_closing_tag_start = find_or_raise( + html[offset:], closing_tag, + compat_HTMLParseError(f'closing {tag} tag not found')) + next_closing_tag_end = next_closing_tag_start + len(closing_tag) + try: + parser.feed(html[offset:offset + next_closing_tag_end]) + offset += next_closing_tag_end + except HTMLBreakOnClosingTagParser.HTMLBreakOnClosingTagException: + return html[content_start:offset + next_closing_tag_start], \ + html[whole_start:offset + next_closing_tag_end] + raise compat_HTMLParseError('unexpected end of html') + + +class HTMLAttributeParser(html.parser.HTMLParser): + """Trivial HTML parser to gather the attributes for a single element""" + + def __init__(self): + self.attrs = {} + html.parser.HTMLParser.__init__(self) + + def handle_starttag(self, tag, attrs): + self.attrs = dict(attrs) + raise compat_HTMLParseError('done') + + +class HTMLListAttrsParser(html.parser.HTMLParser): + """HTML parser to gather the attributes for the elements of a list""" + + def __init__(self): + html.parser.HTMLParser.__init__(self) + self.items = [] + self._level = 0 + + def handle_starttag(self, tag, attrs): + if tag == 'li' and self._level == 0: + self.items.append(dict(attrs)) + self._level += 1 + + def handle_endtag(self, tag): + self._level -= 1 + + +def extract_attributes(html_element): + """Given a string for an HTML element such as + <el + a="foo" B="bar" c="&#98;az" d=boz + empty= noval entity="&amp;" + sq='"' dq="'" + > + Decode and return a dictionary of attributes. + { + 'a': 'foo', 'b': 'bar', c: 'baz', d: 'boz', + 'empty': '', 'noval': None, 'entity': '&', + 'sq': '"', 'dq': '\'' + }. + """ + parser = HTMLAttributeParser() + with contextlib.suppress(compat_HTMLParseError): + parser.feed(html_element) + parser.close() + return parser.attrs + + +def parse_list(webpage): + """Given a string for a series of HTML <li> elements, + return a list of dictionaries of their attributes""" + parser = HTMLListAttrsParser() + parser.feed(webpage) + parser.close() + return parser.items + + +def clean_html(html): + """Clean an HTML snippet into a readable string""" + + if html is None: # Convenience for sanitizing descriptions etc.
+ return html + + html = re.sub(r'\s+', ' ', html) + html = re.sub(r'(?u)\s?<\s?br\s?/?\s?>\s?', '\n', html) + html = re.sub(r'(?u)<\s?/\s?p\s?>\s?<\s?p[^>]*>', '\n', html) + # Strip html tags + html = re.sub('<.*?>', '', html) + # Replace html entities + html = unescapeHTML(html) + return html.strip() + + +class LenientJSONDecoder(json.JSONDecoder): + # TODO: Write tests + def __init__(self, *args, transform_source=None, ignore_extra=False, close_objects=0, **kwargs): + self.transform_source, self.ignore_extra = transform_source, ignore_extra + self._close_attempts = 2 * close_objects + super().__init__(*args, **kwargs) + + @staticmethod + def _close_object(err): + doc = err.doc[:err.pos] + # We need to add comma first to get the correct error message + if err.msg.startswith('Expecting \',\''): + return doc + ',' + elif not doc.endswith(','): + return + + if err.msg.startswith('Expecting property name'): + return doc[:-1] + '}' + elif err.msg.startswith('Expecting value'): + return doc[:-1] + ']' + + def decode(self, s): + if self.transform_source: + s = self.transform_source(s) + for attempt in range(self._close_attempts + 1): + try: + if self.ignore_extra: + return self.raw_decode(s.lstrip())[0] + return super().decode(s) + except json.JSONDecodeError as e: + if e.pos is None: + raise + elif attempt < self._close_attempts: + s = self._close_object(e) + if s is not None: + continue + raise type(e)(f'{e.msg} in {s[e.pos-10:e.pos+10]!r}', s, e.pos) + assert False, 'Too many attempts to decode JSON' + + +def sanitize_open(filename, open_mode): + """Try to open the given filename, and slightly tweak it if this fails. + + Attempts to open the given filename. If this fails, it tries to change + the filename slightly, step by step, until it's either able to open it + or it fails and raises a final exception, like the standard open() + function. + + It returns the tuple (stream, definitive_file_name). + """ + if filename == '-': + if sys.platform == 'win32': + import msvcrt + + # stdout may be any IO stream, e.g. when using contextlib.redirect_stdout + with contextlib.suppress(io.UnsupportedOperation): + msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY) + return (sys.stdout.buffer if hasattr(sys.stdout, 'buffer') else sys.stdout, filename) + + for attempt in range(2): + try: + try: + if sys.platform == 'win32': + # FIXME: An exclusive lock also locks the file from being read. + # Since windows locks are mandatory, don't lock the file on windows (for now). + # Ref: https://github.com/yt-dlp/yt-dlp/issues/3124 + raise LockingUnsupportedError() + stream = locked_file(filename, open_mode, block=False).__enter__() + except OSError: + stream = open(filename, open_mode) + return stream, filename + except OSError as err: + if attempt or err.errno in (errno.EACCES,): + raise + old_filename, filename = filename, sanitize_path(filename) + if old_filename == filename: + raise + + +def timeconvert(timestr): + """Convert RFC 2822 defined time string into system timestamp""" + timestamp = None + timetuple = email.utils.parsedate_tz(timestr) + if timetuple is not None: + timestamp = email.utils.mktime_tz(timetuple) + return timestamp + + +def sanitize_filename(s, restricted=False, is_id=NO_DEFAULT): + """Sanitizes a string so it could be used as part of a filename. + @param restricted Use a stricter subset of allowed characters + @param is_id Whether this is an ID that should be kept unchanged if possible. 
+ If unset, yt-dlp's new sanitization rules are in effect + """ + if s == '': + return '' + + def replace_insane(char): + if restricted and char in ACCENT_CHARS: + return ACCENT_CHARS[char] + elif not restricted and char == '\n': + return '\0 ' + elif is_id is NO_DEFAULT and not restricted and char in '"*:<>?|/\\': + # Replace with their full-width unicode counterparts + return {'/': '\u29F8', '\\': '\u29f9'}.get(char, chr(ord(char) + 0xfee0)) + elif char == '?' or ord(char) < 32 or ord(char) == 127: + return '' + elif char == '"': + return '' if restricted else '\'' + elif char == ':': + return '\0_\0-' if restricted else '\0 \0-' + elif char in '\\/|*<>': + return '\0_' + if restricted and (char in '!&\'()[]{}$;`^,#' or char.isspace() or ord(char) > 127): + return '\0_' + return char + + # Replace look-alike Unicode glyphs + if restricted and (is_id is NO_DEFAULT or not is_id): + s = unicodedata.normalize('NFKC', s) + s = re.sub(r'[0-9]+(?::[0-9]+)+', lambda m: m.group(0).replace(':', '_'), s) # Handle timestamps + result = ''.join(map(replace_insane, s)) + if is_id is NO_DEFAULT: + result = re.sub(r'(\0.)(?:(?=\1)..)+', r'\1', result) # Remove repeated substitute chars + STRIP_RE = r'(?:\0.|[ _-])*' + result = re.sub(f'^\0.{STRIP_RE}|{STRIP_RE}\0.$', '', result) # Remove substitute chars from start/end + result = result.replace('\0', '') or '_' + + if not is_id: + while '__' in result: + result = result.replace('__', '_') + result = result.strip('_') + # Common case of "Foreign band name - English song title" + if restricted and result.startswith('-_'): + result = result[2:] + if result.startswith('-'): + result = '_' + result[len('-'):] + result = result.lstrip('.') + if not result: + result = '_' + return result + + +def sanitize_path(s, force=False): + """Sanitizes and normalizes path on Windows""" + # XXX: this handles drive relative paths (c:sth) incorrectly + if sys.platform == 'win32': + force = False + drive_or_unc, _ = os.path.splitdrive(s) + elif force: + drive_or_unc = '' + else: + return s + + norm_path = os.path.normpath(remove_start(s, drive_or_unc)).split(os.path.sep) + if drive_or_unc: + norm_path.pop(0) + sanitized_path = [ + path_part if path_part in ['.', '..'] else re.sub(r'(?:[/<>:"\|\\?\*]|[\s.]$)', '#', path_part) + for path_part in norm_path] + if drive_or_unc: + sanitized_path.insert(0, drive_or_unc + os.path.sep) + elif force and s and s[0] == os.path.sep: + sanitized_path.insert(0, os.path.sep) + # TODO: Fix behavioral differences <3.12 + # The workaround using `normpath` only superficially passes tests + # Ref: https://github.com/python/cpython/pull/100351 + return os.path.normpath(os.path.join(*sanitized_path)) + + +def sanitize_url(url, *, scheme='http'): + # Prepend protocol-less URLs with `http:` scheme in order to mitigate + # the number of unwanted failures due to missing protocol + if url is None: + return + elif url.startswith('//'): + return f'{scheme}:{url}' + # Fix some common typos seen so far + COMMON_TYPOS = ( + # https://github.com/ytdl-org/youtube-dl/issues/15649 + (r'^httpss://', r'https://'), + # https://bx1.be/lives/direct-tv/ + (r'^rmtp([es]?)://', r'rtmp\1://'), + ) + for mistake, fixup in COMMON_TYPOS: + if re.match(mistake, url): + return re.sub(mistake, fixup, url) + return url + + +def extract_basic_auth(url): + parts = urllib.parse.urlsplit(url) + if parts.username is None: + return url, None + url = urllib.parse.urlunsplit(parts._replace(netloc=( + parts.hostname if parts.port is None + else '%s:%d' % (parts.hostname, parts.port)))) 
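+ # The rebuilt netloc keeps only the host (plus port, when given); the stripped
+ # credentials are re-emitted below as a Basic authorization header value.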
+    auth_payload = base64.b64encode(
+        ('%s:%s' % (parts.username, parts.password or '')).encode())
+    return url, f'Basic {auth_payload.decode()}'
+
+
+def expand_path(s):
+    """Expand shell variables and ~"""
+    return os.path.expandvars(compat_expanduser(s))
+
+
+def orderedSet(iterable, *, lazy=False):
+    """Remove all duplicates from the input iterable"""
+    def _iter():
+        seen = []  # Do not use set since the items can be unhashable
+        for x in iterable:
+            if x not in seen:
+                seen.append(x)
+                yield x
+
+    return _iter() if lazy else list(_iter())
+
+
+def _htmlentity_transform(entity_with_semicolon):
+    """Transforms an HTML entity to a character."""
+    entity = entity_with_semicolon[:-1]
+
+    # Known non-numeric HTML entity
+    if entity in html.entities.name2codepoint:
+        return chr(html.entities.name2codepoint[entity])
+
+    # TODO: HTML5 allows entities without a semicolon.
+    # E.g. '&Eacuteric' should be decoded as 'Éric'.
+    if entity_with_semicolon in html.entities.html5:
+        return html.entities.html5[entity_with_semicolon]
+
+    mobj = re.match(r'#(x[0-9a-fA-F]+|[0-9]+)', entity)
+    if mobj is not None:
+        numstr = mobj.group(1)
+        if numstr.startswith('x'):
+            base = 16
+            numstr = '0%s' % numstr
+        else:
+            base = 10
+        # See https://github.com/ytdl-org/youtube-dl/issues/7518
+        with contextlib.suppress(ValueError):
+            return chr(int(numstr, base))
+
+    # Unknown entity in name, return its literal representation
+    return '&%s;' % entity
+
+
+def unescapeHTML(s):
+    if s is None:
+        return None
+    assert isinstance(s, str)
+
+    return re.sub(
+        r'&([^&;]+;)', lambda m: _htmlentity_transform(m.group(1)), s)
+
+
+def escapeHTML(text):
+    return (
+        text
+        .replace('&', '&amp;')
+        .replace('<', '&lt;')
+        .replace('>', '&gt;')
+        .replace('"', '&quot;')
+        .replace("'", '&#39;')
+    )
+
+
+class netrc_from_content(netrc.netrc):
+    def __init__(self, content):
+        self.hosts, self.macros = {}, {}
+        with io.StringIO(content) as stream:
+            self._parse('-', stream, False)
+
+
+class Popen(subprocess.Popen):
+    if sys.platform == 'win32':
+        _startupinfo = subprocess.STARTUPINFO()
+        _startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
+    else:
+        _startupinfo = None
+
+    @staticmethod
+    def _fix_pyinstaller_ld_path(env):
+        """Restore LD_LIBRARY_PATH when using PyInstaller
+            Ref: https://github.com/pyinstaller/pyinstaller/blob/develop/doc/runtime-information.rst#ld_library_path--libpath-considerations
+                 https://github.com/yt-dlp/yt-dlp/issues/4573
+        """
+        if not hasattr(sys, '_MEIPASS'):
+            return
+
+        def _fix(key):
+            orig = env.get(f'{key}_ORIG')
+            if orig is None:
+                env.pop(key, None)
+            else:
+                env[key] = orig
+
+        _fix('LD_LIBRARY_PATH')  # Linux
+        _fix('DYLD_LIBRARY_PATH')  # macOS
+
+    def __init__(self, args, *remaining, env=None, text=False, shell=False, **kwargs):
+        if env is None:
+            env = os.environ.copy()
+        self._fix_pyinstaller_ld_path(env)
+
+        self.__text_mode = kwargs.get('encoding') or kwargs.get('errors') or text or kwargs.get('universal_newlines')
+        if text is True:
+            kwargs['universal_newlines'] = True  # For 3.6 compatibility
+            kwargs.setdefault('encoding', 'utf-8')
+            kwargs.setdefault('errors', 'replace')
+
+        if shell and compat_os_name == 'nt' and kwargs.get('executable') is None:
+            if not isinstance(args, str):
+                args = ' '.join(compat_shlex_quote(a) for a in args)
+            shell = False
+            args = f'{self.__comspec()} /Q /S /D /V:OFF /C "{args}"'
+
+        super().__init__(args, *remaining, env=env, shell=shell, **kwargs, startupinfo=self._startupinfo)
+
+    def __comspec(self):
+        comspec = os.environ.get('ComSpec') or os.path.join(
os.environ.get('SystemRoot', ''), 'System32', 'cmd.exe') + if os.path.isabs(comspec): + return comspec + raise FileNotFoundError('shell not found: neither %ComSpec% nor %SystemRoot% is set') + + def communicate_or_kill(self, *args, **kwargs): + try: + return self.communicate(*args, **kwargs) + except BaseException: # Including KeyboardInterrupt + self.kill(timeout=None) + raise + + def kill(self, *, timeout=0): + super().kill() + if timeout != 0: + self.wait(timeout=timeout) + + @classmethod + def run(cls, *args, timeout=None, **kwargs): + with cls(*args, **kwargs) as proc: + default = '' if proc.__text_mode else b'' + stdout, stderr = proc.communicate_or_kill(timeout=timeout) + return stdout or default, stderr or default, proc.returncode + + +def encodeArgument(s): + # Legacy code that uses byte strings + # Uncomment the following line after fixing all post processors + # assert isinstance(s, str), 'Internal error: %r should be of type %r, is %r' % (s, str, type(s)) + return s if isinstance(s, str) else s.decode('ascii') + + +_timetuple = collections.namedtuple('Time', ('hours', 'minutes', 'seconds', 'milliseconds')) + + +def timetuple_from_msec(msec): + secs, msec = divmod(msec, 1000) + mins, secs = divmod(secs, 60) + hrs, mins = divmod(mins, 60) + return _timetuple(hrs, mins, secs, msec) + + +def formatSeconds(secs, delim=':', msec=False): + time = timetuple_from_msec(secs * 1000) + if time.hours: + ret = '%d%s%02d%s%02d' % (time.hours, delim, time.minutes, delim, time.seconds) + elif time.minutes: + ret = '%d%s%02d' % (time.minutes, delim, time.seconds) + else: + ret = '%d' % time.seconds + return '%s.%03d' % (ret, time.milliseconds) if msec else ret + + +def bug_reports_message(before=';'): + from ..update import REPOSITORY + + msg = (f'please report this issue on https://github.com/{REPOSITORY}/issues?q= , ' + 'filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U') + + before = before.rstrip() + if not before or before.endswith(('.', '!', '?')): + msg = msg[0].title() + msg[1:] + + return (before + ' ' if before else '') + msg + + +class YoutubeDLError(Exception): + """Base exception for YoutubeDL errors.""" + msg = None + + def __init__(self, msg=None): + if msg is not None: + self.msg = msg + elif self.msg is None: + self.msg = type(self).__name__ + super().__init__(self.msg) + + +class ExtractorError(YoutubeDLError): + """Error during info extraction.""" + + def __init__(self, msg, tb=None, expected=False, cause=None, video_id=None, ie=None): + """ tb, if given, is the original traceback (so that it can be printed out). + If expected is set, this is a normal error message and most likely not a bug in yt-dlp. 
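+
+        Illustrative usage (hypothetical values, not from the original patch):
+            raise ExtractorError('This video is private', expected=True, video_id='abc123')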
+ """ + from ..networking.exceptions import network_exceptions + if sys.exc_info()[0] in network_exceptions: + expected = True + + self.orig_msg = str(msg) + self.traceback = tb + self.expected = expected + self.cause = cause + self.video_id = video_id + self.ie = ie + self.exc_info = sys.exc_info() # preserve original exception + if isinstance(self.exc_info[1], ExtractorError): + self.exc_info = self.exc_info[1].exc_info + super().__init__(self.__msg) + + @property + def __msg(self): + return ''.join(( + format_field(self.ie, None, '[%s] '), + format_field(self.video_id, None, '%s: '), + self.orig_msg, + format_field(self.cause, None, ' (caused by %r)'), + '' if self.expected else bug_reports_message())) + + def format_traceback(self): + return join_nonempty( + self.traceback and ''.join(traceback.format_tb(self.traceback)), + self.cause and ''.join(traceback.format_exception(None, self.cause, self.cause.__traceback__)[1:]), + delim='\n') or None + + def __setattr__(self, name, value): + super().__setattr__(name, value) + if getattr(self, 'msg', None) and name not in ('msg', 'args'): + self.msg = self.__msg or type(self).__name__ + self.args = (self.msg, ) # Cannot be property + + +class UnsupportedError(ExtractorError): + def __init__(self, url): + super().__init__( + 'Unsupported URL: %s' % url, expected=True) + self.url = url + + +class RegexNotFoundError(ExtractorError): + """Error when a regex didn't match""" + pass + + +class GeoRestrictedError(ExtractorError): + """Geographic restriction Error exception. + + This exception may be thrown when a video is not available from your + geographic location due to geographic restrictions imposed by a website. + """ + + def __init__(self, msg, countries=None, **kwargs): + kwargs['expected'] = True + super().__init__(msg, **kwargs) + self.countries = countries + + +class UserNotLive(ExtractorError): + """Error when a channel/user is not live""" + + def __init__(self, msg=None, **kwargs): + kwargs['expected'] = True + super().__init__(msg or 'The channel is not currently live', **kwargs) + + +class DownloadError(YoutubeDLError): + """Download Error exception. + + This exception may be thrown by FileDownloader objects if they are not + configured to continue on errors. They will contain the appropriate + error message. + """ + + def __init__(self, msg, exc_info=None): + """ exc_info, if given, is the original exception that caused the trouble (as returned by sys.exc_info()). """ + super().__init__(msg) + self.exc_info = exc_info + + +class EntryNotInPlaylist(YoutubeDLError): + """Entry not in playlist exception. + + This exception will be thrown by YoutubeDL when a requested entry + is not found in the playlist info_dict + """ + msg = 'Entry not found in info' + + +class SameFileError(YoutubeDLError): + """Same File exception. + + This exception will be thrown by FileDownloader objects if they detect + multiple files would have to be downloaded to the same file on disk. + """ + msg = 'Fixed output name but more than one file to download' + + def __init__(self, filename=None): + if filename is not None: + self.msg += f': {filename}' + super().__init__(self.msg) + + +class PostProcessingError(YoutubeDLError): + """Post Processing exception. + + This exception may be raised by PostProcessor's .run() method to + indicate an error in the postprocessing task. 
+ """ + + +class DownloadCancelled(YoutubeDLError): + """ Exception raised when the download queue should be interrupted """ + msg = 'The download was cancelled' + + +class ExistingVideoReached(DownloadCancelled): + """ --break-on-existing triggered """ + msg = 'Encountered a video that is already in the archive, stopping due to --break-on-existing' + + +class RejectedVideoReached(DownloadCancelled): + """ --break-match-filter triggered """ + msg = 'Encountered a video that did not match filter, stopping due to --break-match-filter' + + +class MaxDownloadsReached(DownloadCancelled): + """ --max-downloads limit has been reached. """ + msg = 'Maximum number of downloads reached, stopping due to --max-downloads' + + +class ReExtractInfo(YoutubeDLError): + """ Video info needs to be re-extracted. """ + + def __init__(self, msg, expected=False): + super().__init__(msg) + self.expected = expected + + +class ThrottledDownload(ReExtractInfo): + """ Download speed below --throttled-rate. """ + msg = 'The download speed is below throttle limit' + + def __init__(self): + super().__init__(self.msg, expected=False) + + +class UnavailableVideoError(YoutubeDLError): + """Unavailable Format exception. + + This exception will be thrown when a video is requested + in a format that is not available for that video. + """ + msg = 'Unable to download video' + + def __init__(self, err=None): + if err is not None: + self.msg += f': {err}' + super().__init__(self.msg) + + +class ContentTooShortError(YoutubeDLError): + """Content Too Short exception. + + This exception may be raised by FileDownloader objects when a file they + download is too small for what the server announced first, indicating + the connection was probably interrupted. + """ + + def __init__(self, downloaded, expected): + super().__init__(f'Downloaded {downloaded} bytes, expected {expected} bytes') + # Both in bytes + self.downloaded = downloaded + self.expected = expected + + +class XAttrMetadataError(YoutubeDLError): + def __init__(self, code=None, msg='Unknown error'): + super().__init__(msg) + self.code = code + self.msg = msg + + # Parsing code and msg + if (self.code in (errno.ENOSPC, errno.EDQUOT) + or 'No space left' in self.msg or 'Disk quota exceeded' in self.msg): + self.reason = 'NO_SPACE' + elif self.code == errno.E2BIG or 'Argument list too long' in self.msg: + self.reason = 'VALUE_TOO_LONG' + else: + self.reason = 'NOT_SUPPORTED' + + +class XAttrUnavailableError(YoutubeDLError): + pass + + +def is_path_like(f): + return isinstance(f, (str, bytes, os.PathLike)) + + +def extract_timezone(date_str): + m = re.search( + r'''(?x) + ^.{8,}? # >=8 char non-TZ prefix, if present + (?P<tz>Z| # just the UTC Z, or + (?:(?<=.\b\d{4}|\b\d{2}:\d\d)| # preceded by 4 digits or hh:mm or + (?<!.\b[a-zA-Z]{3}|[a-zA-Z]{4}|..\b\d\d)) # not preceded by 3 alpha word or >= 4 alpha or 2 digits + [ ]? 
# optional space + (?P<sign>\+|-) # +/- + (?P<hours>[0-9]{2}):?(?P<minutes>[0-9]{2}) # hh[:]mm + $) + ''', date_str) + if not m: + m = re.search(r'\d{1,2}:\d{1,2}(?:\.\d+)?(?P<tz>\s*[A-Z]+)$', date_str) + timezone = TIMEZONE_NAMES.get(m and m.group('tz').strip()) + if timezone is not None: + date_str = date_str[:-len(m.group('tz'))] + timezone = datetime.timedelta(hours=timezone or 0) + else: + date_str = date_str[:-len(m.group('tz'))] + if not m.group('sign'): + timezone = datetime.timedelta() + else: + sign = 1 if m.group('sign') == '+' else -1 + timezone = datetime.timedelta( + hours=sign * int(m.group('hours')), + minutes=sign * int(m.group('minutes'))) + return timezone, date_str + + +def parse_iso8601(date_str, delimiter='T', timezone=None): + """ Return a UNIX timestamp from the given date """ + + if date_str is None: + return None + + date_str = re.sub(r'\.[0-9]+', '', date_str) + + if timezone is None: + timezone, date_str = extract_timezone(date_str) + + with contextlib.suppress(ValueError): + date_format = f'%Y-%m-%d{delimiter}%H:%M:%S' + dt = datetime.datetime.strptime(date_str, date_format) - timezone + return calendar.timegm(dt.timetuple()) + + +def date_formats(day_first=True): + return DATE_FORMATS_DAY_FIRST if day_first else DATE_FORMATS_MONTH_FIRST + + +def unified_strdate(date_str, day_first=True): + """Return a string with the date in the format YYYYMMDD""" + + if date_str is None: + return None + upload_date = None + # Replace commas + date_str = date_str.replace(',', ' ') + # Remove AM/PM + timezone + date_str = re.sub(r'(?i)\s*(?:AM|PM)(?:\s+[A-Z]+)?', '', date_str) + _, date_str = extract_timezone(date_str) + + for expression in date_formats(day_first): + with contextlib.suppress(ValueError): + upload_date = datetime.datetime.strptime(date_str, expression).strftime('%Y%m%d') + if upload_date is None: + timetuple = email.utils.parsedate_tz(date_str) + if timetuple: + with contextlib.suppress(ValueError): + upload_date = datetime.datetime(*timetuple[:6]).strftime('%Y%m%d') + if upload_date is not None: + return str(upload_date) + + +def unified_timestamp(date_str, day_first=True): + if not isinstance(date_str, str): + return None + + date_str = re.sub(r'\s+', ' ', re.sub( + r'(?i)[,|]|(mon|tues?|wed(nes)?|thu(rs)?|fri|sat(ur)?)(day)?', '', date_str)) + + pm_delta = 12 if re.search(r'(?i)PM', date_str) else 0 + timezone, date_str = extract_timezone(date_str) + + # Remove AM/PM + timezone + date_str = re.sub(r'(?i)\s*(?:AM|PM)(?:\s+[A-Z]+)?', '', date_str) + + # Remove unrecognized timezones from ISO 8601 alike timestamps + m = re.search(r'\d{1,2}:\d{1,2}(?:\.\d+)?(?P<tz>\s*[A-Z]+)$', date_str) + if m: + date_str = date_str[:-len(m.group('tz'))] + + # Python only supports microseconds, so remove nanoseconds + m = re.search(r'^([0-9]{4,}-[0-9]{1,2}-[0-9]{1,2}T[0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2}\.[0-9]{6})[0-9]+$', date_str) + if m: + date_str = m.group(1) + + for expression in date_formats(day_first): + with contextlib.suppress(ValueError): + dt = datetime.datetime.strptime(date_str, expression) - timezone + datetime.timedelta(hours=pm_delta) + return calendar.timegm(dt.timetuple()) + + timetuple = email.utils.parsedate_tz(date_str) + if timetuple: + return calendar.timegm(timetuple) + pm_delta * 3600 - timezone.total_seconds() + + +def determine_ext(url, default_ext='unknown_video'): + if url is None or '.' 
not in url: + return default_ext + guess = url.partition('?')[0].rpartition('.')[2] + if re.match(r'^[A-Za-z0-9]+$', guess): + return guess + # Try extract ext from URLs like http://example.com/foo/bar.mp4/?download + elif guess.rstrip('/') in KNOWN_EXTENSIONS: + return guess.rstrip('/') + else: + return default_ext + + +def subtitles_filename(filename, sub_lang, sub_format, expected_real_ext=None): + return replace_extension(filename, sub_lang + '.' + sub_format, expected_real_ext) + + +def datetime_from_str(date_str, precision='auto', format='%Y%m%d'): + R""" + Return a datetime object from a string. + Supported format: + (now|today|yesterday|DATE)([+-]\d+(microsecond|second|minute|hour|day|week|month|year)s?)? + + @param format strftime format of DATE + @param precision Round the datetime object: auto|microsecond|second|minute|hour|day + auto: round to the unit provided in date_str (if applicable). + """ + auto_precision = False + if precision == 'auto': + auto_precision = True + precision = 'microsecond' + today = datetime_round(datetime.datetime.now(datetime.timezone.utc), precision) + if date_str in ('now', 'today'): + return today + if date_str == 'yesterday': + return today - datetime.timedelta(days=1) + match = re.match( + r'(?P<start>.+)(?P<sign>[+-])(?P<time>\d+)(?P<unit>microsecond|second|minute|hour|day|week|month|year)s?', + date_str) + if match is not None: + start_time = datetime_from_str(match.group('start'), precision, format) + time = int(match.group('time')) * (-1 if match.group('sign') == '-' else 1) + unit = match.group('unit') + if unit == 'month' or unit == 'year': + new_date = datetime_add_months(start_time, time * 12 if unit == 'year' else time) + unit = 'day' + else: + if unit == 'week': + unit = 'day' + time *= 7 + delta = datetime.timedelta(**{unit + 's': time}) + new_date = start_time + delta + if auto_precision: + return datetime_round(new_date, unit) + return new_date + + return datetime_round(datetime.datetime.strptime(date_str, format), precision) + + +def date_from_str(date_str, format='%Y%m%d', strict=False): + R""" + Return a date object from a string using datetime_from_str + + @param strict Restrict allowed patterns to "YYYYMMDD" and + (now|today|yesterday)(-\d+(day|week|month|year)s?)? 
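+
+    Illustrative examples (assuming today is 2023-06-15):
+        date_from_str('now')         -> datetime.date(2023, 6, 15)
+        date_from_str('today-1week') -> datetime.date(2023, 6, 8)
+        date_from_str('20230601')    -> datetime.date(2023, 6, 1)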
+ """ + if strict and not re.fullmatch(r'\d{8}|(now|today|yesterday)(-\d+(day|week|month|year)s?)?', date_str): + raise ValueError(f'Invalid date format "{date_str}"') + return datetime_from_str(date_str, precision='microsecond', format=format).date() + + +def datetime_add_months(dt, months): + """Increment/Decrement a datetime object by months.""" + month = dt.month + months - 1 + year = dt.year + month // 12 + month = month % 12 + 1 + day = min(dt.day, calendar.monthrange(year, month)[1]) + return dt.replace(year, month, day) + + +def datetime_round(dt, precision='day'): + """ + Round a datetime object's time to a specific precision + """ + if precision == 'microsecond': + return dt + + unit_seconds = { + 'day': 86400, + 'hour': 3600, + 'minute': 60, + 'second': 1, + } + roundto = lambda x, n: ((x + n / 2) // n) * n + timestamp = roundto(calendar.timegm(dt.timetuple()), unit_seconds[precision]) + return datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc) + + +def hyphenate_date(date_str): + """ + Convert a date in 'YYYYMMDD' format to 'YYYY-MM-DD' format""" + match = re.match(r'^(\d\d\d\d)(\d\d)(\d\d)$', date_str) + if match is not None: + return '-'.join(match.groups()) + else: + return date_str + + +class DateRange: + """Represents a time interval between two dates""" + + def __init__(self, start=None, end=None): + """start and end must be strings in the format accepted by date""" + if start is not None: + self.start = date_from_str(start, strict=True) + else: + self.start = datetime.datetime.min.date() + if end is not None: + self.end = date_from_str(end, strict=True) + else: + self.end = datetime.datetime.max.date() + if self.start > self.end: + raise ValueError('Date range: "%s" , the start date must be before the end date' % self) + + @classmethod + def day(cls, day): + """Returns a range that only contains the given day""" + return cls(day, day) + + def __contains__(self, date): + """Check if the date is in the range""" + if not isinstance(date, datetime.date): + date = date_from_str(date) + return self.start <= date <= self.end + + def __repr__(self): + return f'{__name__}.{type(self).__name__}({self.start.isoformat()!r}, {self.end.isoformat()!r})' + + def __eq__(self, other): + return (isinstance(other, DateRange) + and self.start == other.start and self.end == other.end) + + +@functools.cache +def system_identifier(): + python_implementation = platform.python_implementation() + if python_implementation == 'PyPy' and hasattr(sys, 'pypy_version_info'): + python_implementation += ' version %d.%d.%d' % sys.pypy_version_info[:3] + libc_ver = [] + with contextlib.suppress(OSError): # We may not have access to the executable + libc_ver = platform.libc_ver() + + return 'Python %s (%s %s %s) - %s (%s%s)' % ( + platform.python_version(), + python_implementation, + platform.machine(), + platform.architecture()[0], + platform.platform(), + ssl.OPENSSL_VERSION, + format_field(join_nonempty(*libc_ver, delim=' '), None, ', %s'), + ) + + +@functools.cache +def get_windows_version(): + ''' Get Windows version. 
returns () if it's not running on Windows ''' + if compat_os_name == 'nt': + return version_tuple(platform.win32_ver()[1]) + else: + return () + + +def write_string(s, out=None, encoding=None): + assert isinstance(s, str) + out = out or sys.stderr + # `sys.stderr` might be `None` (Ref: https://github.com/pyinstaller/pyinstaller/pull/7217) + if not out: + return + + if compat_os_name == 'nt' and supports_terminal_sequences(out): + s = re.sub(r'([\r\n]+)', r' \1', s) + + enc, buffer = None, out + if 'b' in getattr(out, 'mode', ''): + enc = encoding or preferredencoding() + elif hasattr(out, 'buffer'): + buffer = out.buffer + enc = encoding or getattr(out, 'encoding', None) or preferredencoding() + + buffer.write(s.encode(enc, 'ignore') if enc else s) + out.flush() + + +# TODO: Use global logger +def deprecation_warning(msg, *, printer=None, stacklevel=0, **kwargs): + from .. import _IN_CLI + if _IN_CLI: + if msg in deprecation_warning._cache: + return + deprecation_warning._cache.add(msg) + if printer: + return printer(f'{msg}{bug_reports_message()}', **kwargs) + return write_string(f'ERROR: {msg}{bug_reports_message()}\n', **kwargs) + else: + import warnings + warnings.warn(DeprecationWarning(msg), stacklevel=stacklevel + 3) + + +deprecation_warning._cache = set() + + +def bytes_to_intlist(bs): + if not bs: + return [] + if isinstance(bs[0], int): # Python 3 + return list(bs) + else: + return [ord(c) for c in bs] + + +def intlist_to_bytes(xs): + if not xs: + return b'' + return struct.pack('%dB' % len(xs), *xs) + + +class LockingUnsupportedError(OSError): + msg = 'File locking is not supported' + + def __init__(self): + super().__init__(self.msg) + + +# Cross-platform file locking +if sys.platform == 'win32': + import ctypes + import ctypes.wintypes + import msvcrt + + class OVERLAPPED(ctypes.Structure): + _fields_ = [ + ('Internal', ctypes.wintypes.LPVOID), + ('InternalHigh', ctypes.wintypes.LPVOID), + ('Offset', ctypes.wintypes.DWORD), + ('OffsetHigh', ctypes.wintypes.DWORD), + ('hEvent', ctypes.wintypes.HANDLE), + ] + + kernel32 = ctypes.WinDLL('kernel32') + LockFileEx = kernel32.LockFileEx + LockFileEx.argtypes = [ + ctypes.wintypes.HANDLE, # hFile + ctypes.wintypes.DWORD, # dwFlags + ctypes.wintypes.DWORD, # dwReserved + ctypes.wintypes.DWORD, # nNumberOfBytesToLockLow + ctypes.wintypes.DWORD, # nNumberOfBytesToLockHigh + ctypes.POINTER(OVERLAPPED) # Overlapped + ] + LockFileEx.restype = ctypes.wintypes.BOOL + UnlockFileEx = kernel32.UnlockFileEx + UnlockFileEx.argtypes = [ + ctypes.wintypes.HANDLE, # hFile + ctypes.wintypes.DWORD, # dwReserved + ctypes.wintypes.DWORD, # nNumberOfBytesToLockLow + ctypes.wintypes.DWORD, # nNumberOfBytesToLockHigh + ctypes.POINTER(OVERLAPPED) # Overlapped + ] + UnlockFileEx.restype = ctypes.wintypes.BOOL + whole_low = 0xffffffff + whole_high = 0x7fffffff + + def _lock_file(f, exclusive, block): + overlapped = OVERLAPPED() + overlapped.Offset = 0 + overlapped.OffsetHigh = 0 + overlapped.hEvent = 0 + f._lock_file_overlapped_p = ctypes.pointer(overlapped) + + if not LockFileEx(msvcrt.get_osfhandle(f.fileno()), + (0x2 if exclusive else 0x0) | (0x0 if block else 0x1), + 0, whole_low, whole_high, f._lock_file_overlapped_p): + # NB: No argument form of "ctypes.FormatError" does not work on PyPy + raise BlockingIOError(f'Locking file failed: {ctypes.FormatError(ctypes.GetLastError())!r}') + + def _unlock_file(f): + assert f._lock_file_overlapped_p + handle = msvcrt.get_osfhandle(f.fileno()) + if not UnlockFileEx(handle, 0, whole_low, whole_high, 
f._lock_file_overlapped_p): + raise OSError('Unlocking file failed: %r' % ctypes.FormatError()) + +else: + try: + import fcntl + + def _lock_file(f, exclusive, block): + flags = fcntl.LOCK_EX if exclusive else fcntl.LOCK_SH + if not block: + flags |= fcntl.LOCK_NB + try: + fcntl.flock(f, flags) + except BlockingIOError: + raise + except OSError: # AOSP does not have flock() + fcntl.lockf(f, flags) + + def _unlock_file(f): + with contextlib.suppress(OSError): + return fcntl.flock(f, fcntl.LOCK_UN) + with contextlib.suppress(OSError): + return fcntl.lockf(f, fcntl.LOCK_UN) # AOSP does not have flock() + return fcntl.flock(f, fcntl.LOCK_UN | fcntl.LOCK_NB) # virtiofs needs LOCK_NB on unlocking + + except ImportError: + + def _lock_file(f, exclusive, block): + raise LockingUnsupportedError() + + def _unlock_file(f): + raise LockingUnsupportedError() + + +class locked_file: + locked = False + + def __init__(self, filename, mode, block=True, encoding=None): + if mode not in {'r', 'rb', 'a', 'ab', 'w', 'wb'}: + raise NotImplementedError(mode) + self.mode, self.block = mode, block + + writable = any(f in mode for f in 'wax+') + readable = any(f in mode for f in 'r+') + flags = functools.reduce(operator.ior, ( + getattr(os, 'O_CLOEXEC', 0), # UNIX only + getattr(os, 'O_BINARY', 0), # Windows only + getattr(os, 'O_NOINHERIT', 0), # Windows only + os.O_CREAT if writable else 0, # O_TRUNC only after locking + os.O_APPEND if 'a' in mode else 0, + os.O_EXCL if 'x' in mode else 0, + os.O_RDONLY if not writable else os.O_RDWR if readable else os.O_WRONLY, + )) + + self.f = os.fdopen(os.open(filename, flags, 0o666), mode, encoding=encoding) + + def __enter__(self): + exclusive = 'r' not in self.mode + try: + _lock_file(self.f, exclusive, self.block) + self.locked = True + except OSError: + self.f.close() + raise + if 'w' in self.mode: + try: + self.f.truncate() + except OSError as e: + if e.errno not in ( + errno.ESPIPE, # Illegal seek - expected for FIFO + errno.EINVAL, # Invalid argument - expected for /dev/null + ): + raise + return self + + def unlock(self): + if not self.locked: + return + try: + _unlock_file(self.f) + finally: + self.locked = False + + def __exit__(self, *_): + try: + self.unlock() + finally: + self.f.close() + + open = __enter__ + close = __exit__ + + def __getattr__(self, attr): + return getattr(self.f, attr) + + def __iter__(self): + return iter(self.f) + + +@functools.cache +def get_filesystem_encoding(): + encoding = sys.getfilesystemencoding() + return encoding if encoding is not None else 'utf-8' + + +def shell_quote(args): + quoted_args = [] + encoding = get_filesystem_encoding() + for a in args: + if isinstance(a, bytes): + # We may get a filename encoded with 'encodeFilename' + a = a.decode(encoding) + quoted_args.append(compat_shlex_quote(a)) + return ' '.join(quoted_args) + + +def smuggle_url(url, data): + """ Pass additional data in a URL for internal use. 
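+
+    Illustrative round-trip (hypothetical values):
+        smuggled = smuggle_url('https://example.com/v/1', {'referer': 'https://example.com'})
+        # -> 'https://example.com/v/1#__youtubedl_smuggle=...'
+        unsmuggle_url(smuggled) == ('https://example.com/v/1', {'referer': 'https://example.com'})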
""" + + url, idata = unsmuggle_url(url, {}) + data.update(idata) + sdata = urllib.parse.urlencode( + {'__youtubedl_smuggle': json.dumps(data)}) + return url + '#' + sdata + + +def unsmuggle_url(smug_url, default=None): + if '#__youtubedl_smuggle' not in smug_url: + return smug_url, default + url, _, sdata = smug_url.rpartition('#') + jsond = urllib.parse.parse_qs(sdata)['__youtubedl_smuggle'][0] + data = json.loads(jsond) + return url, data + + +def format_decimal_suffix(num, fmt='%d%s', *, factor=1000): + """ Formats numbers with decimal sufixes like K, M, etc """ + num, factor = float_or_none(num), float(factor) + if num is None or num < 0: + return None + POSSIBLE_SUFFIXES = 'kMGTPEZY' + exponent = 0 if num == 0 else min(int(math.log(num, factor)), len(POSSIBLE_SUFFIXES)) + suffix = ['', *POSSIBLE_SUFFIXES][exponent] + if factor == 1024: + suffix = {'k': 'Ki', '': ''}.get(suffix, f'{suffix}i') + converted = num / (factor ** exponent) + return fmt % (converted, suffix) + + +def format_bytes(bytes): + return format_decimal_suffix(bytes, '%.2f%sB', factor=1024) or 'N/A' + + +def lookup_unit_table(unit_table, s, strict=False): + num_re = NUMBER_RE if strict else NUMBER_RE.replace(R'\.', '[,.]') + units_re = '|'.join(re.escape(u) for u in unit_table) + m = (re.fullmatch if strict else re.match)( + rf'(?P<num>{num_re})\s*(?P<unit>{units_re})\b', s) + if not m: + return None + + num = float(m.group('num').replace(',', '.')) + mult = unit_table[m.group('unit')] + return round(num * mult) + + +def parse_bytes(s): + """Parse a string indicating a byte quantity into an integer""" + return lookup_unit_table( + {u: 1024**i for i, u in enumerate(['', *'KMGTPEZY'])}, + s.upper(), strict=True) + + +def parse_filesize(s): + if s is None: + return None + + # The lower-case forms are of course incorrect and unofficial, + # but we support those too + _UNIT_TABLE = { + 'B': 1, + 'b': 1, + 'bytes': 1, + 'KiB': 1024, + 'KB': 1000, + 'kB': 1024, + 'Kb': 1000, + 'kb': 1000, + 'kilobytes': 1000, + 'kibibytes': 1024, + 'MiB': 1024 ** 2, + 'MB': 1000 ** 2, + 'mB': 1024 ** 2, + 'Mb': 1000 ** 2, + 'mb': 1000 ** 2, + 'megabytes': 1000 ** 2, + 'mebibytes': 1024 ** 2, + 'GiB': 1024 ** 3, + 'GB': 1000 ** 3, + 'gB': 1024 ** 3, + 'Gb': 1000 ** 3, + 'gb': 1000 ** 3, + 'gigabytes': 1000 ** 3, + 'gibibytes': 1024 ** 3, + 'TiB': 1024 ** 4, + 'TB': 1000 ** 4, + 'tB': 1024 ** 4, + 'Tb': 1000 ** 4, + 'tb': 1000 ** 4, + 'terabytes': 1000 ** 4, + 'tebibytes': 1024 ** 4, + 'PiB': 1024 ** 5, + 'PB': 1000 ** 5, + 'pB': 1024 ** 5, + 'Pb': 1000 ** 5, + 'pb': 1000 ** 5, + 'petabytes': 1000 ** 5, + 'pebibytes': 1024 ** 5, + 'EiB': 1024 ** 6, + 'EB': 1000 ** 6, + 'eB': 1024 ** 6, + 'Eb': 1000 ** 6, + 'eb': 1000 ** 6, + 'exabytes': 1000 ** 6, + 'exbibytes': 1024 ** 6, + 'ZiB': 1024 ** 7, + 'ZB': 1000 ** 7, + 'zB': 1024 ** 7, + 'Zb': 1000 ** 7, + 'zb': 1000 ** 7, + 'zettabytes': 1000 ** 7, + 'zebibytes': 1024 ** 7, + 'YiB': 1024 ** 8, + 'YB': 1000 ** 8, + 'yB': 1024 ** 8, + 'Yb': 1000 ** 8, + 'yb': 1000 ** 8, + 'yottabytes': 1000 ** 8, + 'yobibytes': 1024 ** 8, + } + + return lookup_unit_table(_UNIT_TABLE, s) + + +def parse_count(s): + if s is None: + return None + + s = re.sub(r'^[^\d]+\s', '', s).strip() + + if re.match(r'^[\d,.]+$', s): + return str_to_int(s) + + _UNIT_TABLE = { + 'k': 1000, + 'K': 1000, + 'm': 1000 ** 2, + 'M': 1000 ** 2, + 'kk': 1000 ** 2, + 'KK': 1000 ** 2, + 'b': 1000 ** 3, + 'B': 1000 ** 3, + } + + ret = lookup_unit_table(_UNIT_TABLE, s) + if ret is not None: + return ret + + mobj = re.match(r'([\d,.]+)(?:$|\s)', s) 
+    if mobj:
+        return str_to_int(mobj.group(1))
+
+
+def parse_resolution(s, *, lenient=False):
+    if s is None:
+        return {}
+
+    if lenient:
+        mobj = re.search(r'(?P<w>\d+)\s*[xX×,]\s*(?P<h>\d+)', s)
+    else:
+        mobj = re.search(r'(?<![a-zA-Z0-9])(?P<w>\d+)\s*[xX×,]\s*(?P<h>\d+)(?![a-zA-Z0-9])', s)
+    if mobj:
+        return {
+            'width': int(mobj.group('w')),
+            'height': int(mobj.group('h')),
+        }
+
+    mobj = re.search(r'(?<![a-zA-Z0-9])(\d+)[pPiI](?![a-zA-Z0-9])', s)
+    if mobj:
+        return {'height': int(mobj.group(1))}
+
+    mobj = re.search(r'\b([48])[kK]\b', s)
+    if mobj:
+        return {'height': int(mobj.group(1)) * 540}
+
+    return {}
+
+
+def parse_bitrate(s):
+    if not isinstance(s, str):
+        return
+    mobj = re.search(r'\b(\d+)\s*kbps', s)
+    if mobj:
+        return int(mobj.group(1))
+
+
+def month_by_name(name, lang='en'):
+    """ Return the number of a month by (locale-independently) English name """
+
+    month_names = MONTH_NAMES.get(lang, MONTH_NAMES['en'])
+
+    try:
+        return month_names.index(name) + 1
+    except ValueError:
+        return None
+
+
+def month_by_abbreviation(abbrev):
+    """ Return the number of a month by (locale-independently) English
+    abbreviations """
+
+    try:
+        return [s[:3] for s in ENGLISH_MONTH_NAMES].index(abbrev) + 1
+    except ValueError:
+        return None
+
+
+def fix_xml_ampersands(xml_str):
+    """Replace all the '&' by '&amp;' in XML"""
+    return re.sub(
+        r'&(?!amp;|lt;|gt;|apos;|quot;|#x[0-9a-fA-F]{,4};|#[0-9]{,4};)',
+        '&amp;',
+        xml_str)
+
+
+def setproctitle(title):
+    assert isinstance(title, str)
+
+    # Workaround for https://github.com/yt-dlp/yt-dlp/issues/4541
+    try:
+        import ctypes
+    except ImportError:
+        return
+
+    try:
+        libc = ctypes.cdll.LoadLibrary('libc.so.6')
+    except OSError:
+        return
+    except TypeError:
+        # LoadLibrary in Windows Python 2.7.13 only expects
+        # a bytestring, but since unicode_literals turns
+        # every string into a unicode string, it fails.
+        return
+    title_bytes = title.encode()
+    buf = ctypes.create_string_buffer(len(title_bytes))
+    buf.value = title_bytes
+    try:
+        libc.prctl(15, buf, 0, 0, 0)
+    except AttributeError:
+        return  # Strange libc, just skip this
+
+
+def remove_start(s, start):
+    return s[len(start):] if s is not None and s.startswith(start) else s
+
+
+def remove_end(s, end):
+    return s[:-len(end)] if s is not None and s.endswith(end) else s
+
+
+def remove_quotes(s):
+    if s is None or len(s) < 2:
+        return s
+    for quote in ('"', "'", ):
+        if s[0] == quote and s[-1] == quote:
+            return s[1:-1]
+    return s
+
+
+def get_domain(url):
+    """
+    This implementation is inconsistent, but is kept for compatibility.
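+    For example (illustrative), get_domain('https://www.youtube.com/watch?v=x')
+    returns 'youtube.com'.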
+ Use this only for "webpage_url_domain" + """ + return remove_start(urllib.parse.urlparse(url).netloc, 'www.') or None + + +def url_basename(url): + path = urllib.parse.urlparse(url).path + return path.strip('/').split('/')[-1] + + +def base_url(url): + return re.match(r'https?://[^?#]+/', url).group() + + +def urljoin(base, path): + if isinstance(path, bytes): + path = path.decode() + if not isinstance(path, str) or not path: + return None + if re.match(r'^(?:[a-zA-Z][a-zA-Z0-9+-.]*:)?//', path): + return path + if isinstance(base, bytes): + base = base.decode() + if not isinstance(base, str) or not re.match( + r'^(?:https?:)?//', base): + return None + return urllib.parse.urljoin(base, path) + + +def int_or_none(v, scale=1, default=None, get_attr=None, invscale=1): + if get_attr and v is not None: + v = getattr(v, get_attr, None) + try: + return int(v) * invscale // scale + except (ValueError, TypeError, OverflowError): + return default + + +def str_or_none(v, default=None): + return default if v is None else str(v) + + +def str_to_int(int_str): + """ A more relaxed version of int_or_none """ + if isinstance(int_str, int): + return int_str + elif isinstance(int_str, str): + int_str = re.sub(r'[,\.\+]', '', int_str) + return int_or_none(int_str) + + +def float_or_none(v, scale=1, invscale=1, default=None): + if v is None: + return default + try: + return float(v) * invscale / scale + except (ValueError, TypeError): + return default + + +def bool_or_none(v, default=None): + return v if isinstance(v, bool) else default + + +def strip_or_none(v, default=None): + return v.strip() if isinstance(v, str) else default + + +def url_or_none(url): + if not url or not isinstance(url, str): + return None + url = url.strip() + return url if re.match(r'^(?:(?:https?|rt(?:m(?:pt?[es]?|fp)|sp[su]?)|mms|ftps?):)?//', url) else None + + +def strftime_or_none(timestamp, date_format='%Y%m%d', default=None): + datetime_object = None + try: + if isinstance(timestamp, (int, float)): # unix timestamp + # Using naive datetime here can break timestamp() in Windows + # Ref: https://github.com/yt-dlp/yt-dlp/issues/5185, https://github.com/python/cpython/issues/94414 + # Also, datetime.datetime.fromtimestamp breaks for negative timestamps + # Ref: https://github.com/yt-dlp/yt-dlp/issues/6706#issuecomment-1496842642 + datetime_object = (datetime.datetime.fromtimestamp(0, datetime.timezone.utc) + + datetime.timedelta(seconds=timestamp)) + elif isinstance(timestamp, str): # assume YYYYMMDD + datetime_object = datetime.datetime.strptime(timestamp, '%Y%m%d') + date_format = re.sub( # Support %s on windows + r'(?<!%)(%%)*%s', rf'\g<1>{int(datetime_object.timestamp())}', date_format) + return datetime_object.strftime(date_format) + except (ValueError, TypeError, AttributeError): + return default + + +def parse_duration(s): + if not isinstance(s, str): + return None + s = s.strip() + if not s: + return None + + days, hours, mins, secs, ms = [None] * 5 + m = re.match(r'''(?x) + (?P<before_secs> + (?:(?:(?P<days>[0-9]+):)?(?P<hours>[0-9]+):)?(?P<mins>[0-9]+):)? + (?P<secs>(?(before_secs)[0-9]{1,2}|[0-9]+)) + (?P<ms>[.:][0-9]+)?Z?$ + ''', s) + if m: + days, hours, mins, secs, ms = m.group('days', 'hours', 'mins', 'secs', 'ms') + else: + m = re.match( + r'''(?ix)(?:P? + (?: + [0-9]+\s*y(?:ears?)?,?\s* + )? + (?: + [0-9]+\s*m(?:onths?)?,?\s* + )? + (?: + [0-9]+\s*w(?:eeks?)?,?\s* + )? + (?: + (?P<days>[0-9]+)\s*d(?:ays?)?,?\s* + )? + T)? + (?: + (?P<hours>[0-9]+)\s*h(?:(?:ou)?rs?)?,?\s* + )? 
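+                    # e.g. '1h 30m 15.5s' -> 5415.5 seconds (illustrative)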
+ (?: + (?P<mins>[0-9]+)\s*m(?:in(?:ute)?s?)?,?\s* + )? + (?: + (?P<secs>[0-9]+)(?P<ms>\.[0-9]+)?\s*s(?:ec(?:ond)?s?)?\s* + )?Z?$''', s) + if m: + days, hours, mins, secs, ms = m.groups() + else: + m = re.match(r'(?i)(?:(?P<hours>[0-9.]+)\s*(?:hours?)|(?P<mins>[0-9.]+)\s*(?:mins?\.?|minutes?)\s*)Z?$', s) + if m: + hours, mins = m.groups() + else: + return None + + if ms: + ms = ms.replace(':', '.') + return sum(float(part or 0) * mult for part, mult in ( + (days, 86400), (hours, 3600), (mins, 60), (secs, 1), (ms, 1))) + + +def prepend_extension(filename, ext, expected_real_ext=None): + name, real_ext = os.path.splitext(filename) + return ( + f'{name}.{ext}{real_ext}' + if not expected_real_ext or real_ext[1:] == expected_real_ext + else f'{filename}.{ext}') + + +def replace_extension(filename, ext, expected_real_ext=None): + name, real_ext = os.path.splitext(filename) + return '{}.{}'.format( + name if not expected_real_ext or real_ext[1:] == expected_real_ext else filename, + ext) + + +def check_executable(exe, args=[]): + """ Checks if the given binary is installed somewhere in PATH, and returns its name. + args can be a list of arguments for a short output (like -version) """ + try: + Popen.run([exe] + args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + except OSError: + return False + return exe + + +def _get_exe_version_output(exe, args): + try: + # STDIN should be redirected too. On UNIX-like systems, ffmpeg triggers + # SIGTTOU if yt-dlp is run in the background. + # See https://github.com/ytdl-org/youtube-dl/issues/955#issuecomment-209789656 + stdout, _, ret = Popen.run([encodeArgument(exe)] + args, text=True, + stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) + if ret: + return None + except OSError: + return False + return stdout + + +def detect_exe_version(output, version_re=None, unrecognized='present'): + assert isinstance(output, str) + if version_re is None: + version_re = r'version\s+([-0-9._a-zA-Z]+)' + m = re.search(version_re, output) + if m: + return m.group(1) + else: + return unrecognized + + +def get_exe_version(exe, args=['--version'], + version_re=None, unrecognized=('present', 'broken')): + """ Returns the version of the specified executable, + or False if the executable is not present """ + unrecognized = variadic(unrecognized) + assert len(unrecognized) in (1, 2) + out = _get_exe_version_output(exe, args) + if out is None: + return unrecognized[-1] + return out and detect_exe_version(out, version_re, unrecognized[0]) + + +def frange(start=0, stop=None, step=1): + """Float range""" + if stop is None: + start, stop = 0, start + sign = [-1, 1][step > 0] if step else 0 + while sign * start < sign * stop: + yield start + start += step + + +class LazyList(collections.abc.Sequence): + """Lazy immutable list from an iterable + Note that slices of a LazyList are lists and not LazyList""" + + class IndexError(IndexError): + pass + + def __init__(self, iterable, *, reverse=False, _cache=None): + self._iterable = iter(iterable) + self._cache = [] if _cache is None else _cache + self._reversed = reverse + + def __iter__(self): + if self._reversed: + # We need to consume the entire iterable to iterate in reverse + yield from self.exhaust() + return + yield from self._cache + for item in self._iterable: + self._cache.append(item) + yield item + + def _exhaust(self): + self._cache.extend(self._iterable) + self._iterable = [] # Discard the emptied iterable to make it pickle-able + return self._cache + + def exhaust(self): + """Evaluate the entire 
iterable""" + return self._exhaust()[::-1 if self._reversed else 1] + + @staticmethod + def _reverse_index(x): + return None if x is None else ~x + + def __getitem__(self, idx): + if isinstance(idx, slice): + if self._reversed: + idx = slice(self._reverse_index(idx.start), self._reverse_index(idx.stop), -(idx.step or 1)) + start, stop, step = idx.start, idx.stop, idx.step or 1 + elif isinstance(idx, int): + if self._reversed: + idx = self._reverse_index(idx) + start, stop, step = idx, idx, 0 + else: + raise TypeError('indices must be integers or slices') + if ((start or 0) < 0 or (stop or 0) < 0 + or (start is None and step < 0) + or (stop is None and step > 0)): + # We need to consume the entire iterable to be able to slice from the end + # Obviously, never use this with infinite iterables + self._exhaust() + try: + return self._cache[idx] + except IndexError as e: + raise self.IndexError(e) from e + n = max(start or 0, stop or 0) - len(self._cache) + 1 + if n > 0: + self._cache.extend(itertools.islice(self._iterable, n)) + try: + return self._cache[idx] + except IndexError as e: + raise self.IndexError(e) from e + + def __bool__(self): + try: + self[-1] if self._reversed else self[0] + except self.IndexError: + return False + return True + + def __len__(self): + self._exhaust() + return len(self._cache) + + def __reversed__(self): + return type(self)(self._iterable, reverse=not self._reversed, _cache=self._cache) + + def __copy__(self): + return type(self)(self._iterable, reverse=self._reversed, _cache=self._cache) + + def __repr__(self): + # repr and str should mimic a list. So we exhaust the iterable + return repr(self.exhaust()) + + def __str__(self): + return repr(self.exhaust()) + + +class PagedList: + + class IndexError(IndexError): + pass + + def __len__(self): + # This is only useful for tests + return len(self.getslice()) + + def __init__(self, pagefunc, pagesize, use_cache=True): + self._pagefunc = pagefunc + self._pagesize = pagesize + self._pagecount = float('inf') + self._use_cache = use_cache + self._cache = {} + + def getpage(self, pagenum): + page_results = self._cache.get(pagenum) + if page_results is None: + page_results = [] if pagenum > self._pagecount else list(self._pagefunc(pagenum)) + if self._use_cache: + self._cache[pagenum] = page_results + return page_results + + def getslice(self, start=0, end=None): + return list(self._getslice(start, end)) + + def _getslice(self, start, end): + raise NotImplementedError('This method must be implemented by subclasses') + + def __getitem__(self, idx): + assert self._use_cache, 'Indexing PagedList requires cache' + if not isinstance(idx, int) or idx < 0: + raise TypeError('indices must be non-negative integers') + entries = self.getslice(idx, idx + 1) + if not entries: + raise self.IndexError() + return entries[0] + + +class OnDemandPagedList(PagedList): + """Download pages until a page with less than maximum results""" + + def _getslice(self, start, end): + for pagenum in itertools.count(start // self._pagesize): + firstid = pagenum * self._pagesize + nextfirstid = pagenum * self._pagesize + self._pagesize + if start >= nextfirstid: + continue + + startv = ( + start % self._pagesize + if firstid <= start < nextfirstid + else 0) + endv = ( + ((end - 1) % self._pagesize) + 1 + if (end is not None and firstid <= end <= nextfirstid) + else None) + + try: + page_results = self.getpage(pagenum) + except Exception: + self._pagecount = pagenum - 1 + raise + if startv != 0 or endv is not None: + page_results = 
page_results[startv:endv] + yield from page_results + + # A little optimization - if current page is not "full", ie. does + # not contain page_size videos then we can assume that this page + # is the last one - there are no more ids on further pages - + # i.e. no need to query again. + if len(page_results) + startv < self._pagesize: + break + + # If we got the whole page, but the next page is not interesting, + # break out early as well + if end == nextfirstid: + break + + +class InAdvancePagedList(PagedList): + """PagedList with total number of pages known in advance""" + + def __init__(self, pagefunc, pagecount, pagesize): + PagedList.__init__(self, pagefunc, pagesize, True) + self._pagecount = pagecount + + def _getslice(self, start, end): + start_page = start // self._pagesize + end_page = self._pagecount if end is None else min(self._pagecount, end // self._pagesize + 1) + skip_elems = start - start_page * self._pagesize + only_more = None if end is None else end - start + for pagenum in range(start_page, end_page): + page_results = self.getpage(pagenum) + if skip_elems: + page_results = page_results[skip_elems:] + skip_elems = None + if only_more is not None: + if len(page_results) < only_more: + only_more -= len(page_results) + else: + yield from page_results[:only_more] + break + yield from page_results + + +class PlaylistEntries: + MissingEntry = object() + is_exhausted = False + + def __init__(self, ydl, info_dict): + self.ydl = ydl + + # _entries must be assigned now since infodict can change during iteration + entries = info_dict.get('entries') + if entries is None: + raise EntryNotInPlaylist('There are no entries') + elif isinstance(entries, list): + self.is_exhausted = True + + requested_entries = info_dict.get('requested_entries') + self.is_incomplete = requested_entries is not None + if self.is_incomplete: + assert self.is_exhausted + self._entries = [self.MissingEntry] * max(requested_entries or [0]) + for i, entry in zip(requested_entries, entries): + self._entries[i - 1] = entry + elif isinstance(entries, (list, PagedList, LazyList)): + self._entries = entries + else: + self._entries = LazyList(entries) + + PLAYLIST_ITEMS_RE = re.compile(r'''(?x) + (?P<start>[+-]?\d+)? + (?P<range>[:-] + (?P<end>[+-]?\d+|inf(?:inite)?)? + (?::(?P<step>[+-]?\d+))? 
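+        # Illustrative matches: '5' -> single index, '2:10' -> range,
+        # '1:10:2' -> stepped range, '::2' -> every other item, 'inf' -> open end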
+ )?''') + + @classmethod + def parse_playlist_items(cls, string): + for segment in string.split(','): + if not segment: + raise ValueError('There is two or more consecutive commas') + mobj = cls.PLAYLIST_ITEMS_RE.fullmatch(segment) + if not mobj: + raise ValueError(f'{segment!r} is not a valid specification') + start, end, step, has_range = mobj.group('start', 'end', 'step', 'range') + if int_or_none(step) == 0: + raise ValueError(f'Step in {segment!r} cannot be zero') + yield slice(int_or_none(start), float_or_none(end), int_or_none(step)) if has_range else int(start) + + def get_requested_items(self): + playlist_items = self.ydl.params.get('playlist_items') + playlist_start = self.ydl.params.get('playliststart', 1) + playlist_end = self.ydl.params.get('playlistend') + # For backwards compatibility, interpret -1 as whole list + if playlist_end in (-1, None): + playlist_end = '' + if not playlist_items: + playlist_items = f'{playlist_start}:{playlist_end}' + elif playlist_start != 1 or playlist_end: + self.ydl.report_warning('Ignoring playliststart and playlistend because playlistitems was given', only_once=True) + + for index in self.parse_playlist_items(playlist_items): + for i, entry in self[index]: + yield i, entry + if not entry: + continue + try: + # The item may have just been added to archive. Don't break due to it + if not self.ydl.params.get('lazy_playlist'): + # TODO: Add auto-generated fields + self.ydl._match_entry(entry, incomplete=True, silent=True) + except (ExistingVideoReached, RejectedVideoReached): + return + + def get_full_count(self): + if self.is_exhausted and not self.is_incomplete: + return len(self) + elif isinstance(self._entries, InAdvancePagedList): + if self._entries._pagesize == 1: + return self._entries._pagecount + + @functools.cached_property + def _getter(self): + if isinstance(self._entries, list): + def get_entry(i): + try: + entry = self._entries[i] + except IndexError: + entry = self.MissingEntry + if not self.is_incomplete: + raise self.IndexError() + if entry is self.MissingEntry: + raise EntryNotInPlaylist(f'Entry {i + 1} cannot be found') + return entry + else: + def get_entry(i): + try: + return type(self.ydl)._handle_extraction_exceptions(lambda _, i: self._entries[i])(self.ydl, i) + except (LazyList.IndexError, PagedList.IndexError): + raise self.IndexError() + return get_entry + + def __getitem__(self, idx): + if isinstance(idx, int): + idx = slice(idx, idx) + + # NB: PlaylistEntries[1:10] => (0, 1, ... 
9) + step = 1 if idx.step is None else idx.step + if idx.start is None: + start = 0 if step > 0 else len(self) - 1 + else: + start = idx.start - 1 if idx.start >= 0 else len(self) + idx.start + + # NB: Do not call len(self) when idx == [:] + if idx.stop is None: + stop = 0 if step < 0 else float('inf') + else: + stop = idx.stop - 1 if idx.stop >= 0 else len(self) + idx.stop + stop += [-1, 1][step > 0] + + for i in frange(start, stop, step): + if i < 0: + continue + try: + entry = self._getter(i) + except self.IndexError: + self.is_exhausted = True + if step > 0: + break + continue + yield i + 1, entry + + def __len__(self): + return len(tuple(self[:])) + + class IndexError(IndexError): + pass + + +def uppercase_escape(s): + unicode_escape = codecs.getdecoder('unicode_escape') + return re.sub( + r'\\U[0-9a-fA-F]{8}', + lambda m: unicode_escape(m.group(0))[0], + s) + + +def lowercase_escape(s): + unicode_escape = codecs.getdecoder('unicode_escape') + return re.sub( + r'\\u[0-9a-fA-F]{4}', + lambda m: unicode_escape(m.group(0))[0], + s) + + +def parse_qs(url, **kwargs): + return urllib.parse.parse_qs(urllib.parse.urlparse(url).query, **kwargs) + + +def read_batch_urls(batch_fd): + def fixup(url): + if not isinstance(url, str): + url = url.decode('utf-8', 'replace') + BOM_UTF8 = ('\xef\xbb\xbf', '\ufeff') + for bom in BOM_UTF8: + if url.startswith(bom): + url = url[len(bom):] + url = url.lstrip() + if not url or url.startswith(('#', ';', ']')): + return False + # "#" cannot be stripped out since it is part of the URI + # However, it can be safely stripped out if following a whitespace + return re.split(r'\s#', url, 1)[0].rstrip() + + with contextlib.closing(batch_fd) as fd: + return [url for url in map(fixup, fd) if url] + + +def urlencode_postdata(*args, **kargs): + return urllib.parse.urlencode(*args, **kargs).encode('ascii') + + +def update_url(url, *, query_update=None, **kwargs): + """Replace URL components specified by kwargs + @param url str or parse url tuple + @param query_update update query + @returns str + """ + if isinstance(url, str): + if not kwargs and not query_update: + return url + else: + url = urllib.parse.urlparse(url) + if query_update: + assert 'query' not in kwargs, 'query_update and query cannot be specified at the same time' + kwargs['query'] = urllib.parse.urlencode({ + **urllib.parse.parse_qs(url.query), + **query_update + }, True) + return urllib.parse.urlunparse(url._replace(**kwargs)) + + +def update_url_query(url, query): + return update_url(url, query_update=query) + + +def _multipart_encode_impl(data, boundary): + content_type = 'multipart/form-data; boundary=%s' % boundary + + out = b'' + for k, v in data.items(): + out += b'--' + boundary.encode('ascii') + b'\r\n' + if isinstance(k, str): + k = k.encode() + if isinstance(v, str): + v = v.encode() + # RFC 2047 requires non-ASCII field names to be encoded, while RFC 7578 + # suggests sending UTF-8 directly. Firefox sends UTF-8, too + content = b'Content-Disposition: form-data; name="' + k + b'"\r\n\r\n' + v + b'\r\n' + if boundary.encode('ascii') in content: + raise ValueError('Boundary overlaps with data') + out += content + + out += b'--' + boundary.encode('ascii') + b'--\r\n' + + return out, content_type + + +def multipart_encode(data, boundary=None): + ''' + Encode a dict to RFC 7578-compliant form-data + + data: + A dict where keys and values can be either Unicode or bytes-like + objects. + boundary: + If specified a Unicode object, it's used as the boundary. Otherwise + a random boundary is generated. 
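+
+    Illustrative usage (hypothetical boundary value):
+        out, content_type = multipart_encode({b'field': b'value'})
+        # content_type -> 'multipart/form-data; boundary=---------------1234567890'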
+ + Reference: https://tools.ietf.org/html/rfc7578 + ''' + has_specified_boundary = boundary is not None + + while True: + if boundary is None: + boundary = '---------------' + str(random.randrange(0x0fffffff, 0xffffffff)) + + try: + out, content_type = _multipart_encode_impl(data, boundary) + break + except ValueError: + if has_specified_boundary: + raise + boundary = None + + return out, content_type + + +def is_iterable_like(x, allowed_types=collections.abc.Iterable, blocked_types=NO_DEFAULT): + if blocked_types is NO_DEFAULT: + blocked_types = (str, bytes, collections.abc.Mapping) + return isinstance(x, allowed_types) and not isinstance(x, blocked_types) + + +def variadic(x, allowed_types=NO_DEFAULT): + if not isinstance(allowed_types, (tuple, type)): + deprecation_warning('allowed_types should be a tuple or a type') + allowed_types = tuple(allowed_types) + return x if is_iterable_like(x, blocked_types=allowed_types) else (x, ) + + +def try_call(*funcs, expected_type=None, args=[], kwargs={}): + for f in funcs: + try: + val = f(*args, **kwargs) + except (AttributeError, KeyError, TypeError, IndexError, ValueError, ZeroDivisionError): + pass + else: + if expected_type is None or isinstance(val, expected_type): + return val + + +def try_get(src, getter, expected_type=None): + return try_call(*variadic(getter), args=(src,), expected_type=expected_type) + + +def filter_dict(dct, cndn=lambda _, v: v is not None): + return {k: v for k, v in dct.items() if cndn(k, v)} + + +def merge_dicts(*dicts): + merged = {} + for a_dict in dicts: + for k, v in a_dict.items(): + if (v is not None and k not in merged + or isinstance(v, str) and merged[k] == ''): + merged[k] = v + return merged + + +def encode_compat_str(string, encoding=preferredencoding(), errors='strict'): + return string if isinstance(string, str) else str(string, encoding, errors) + + +US_RATINGS = { + 'G': 0, + 'PG': 10, + 'PG-13': 13, + 'R': 16, + 'NC': 18, +} + + +TV_PARENTAL_GUIDELINES = { + 'TV-Y': 0, + 'TV-Y7': 7, + 'TV-G': 0, + 'TV-PG': 0, + 'TV-14': 14, + 'TV-MA': 17, +} + + +def parse_age_limit(s): + # isinstance(False, int) is True. So type() must be used instead + if type(s) is int: # noqa: E721 + return s if 0 <= s <= 21 else None + elif not isinstance(s, str): + return None + m = re.match(r'^(?P<age>\d{1,2})\+?$', s) + if m: + return int(m.group('age')) + s = s.upper() + if s in US_RATINGS: + return US_RATINGS[s] + m = re.match(r'^TV[_-]?(%s)$' % '|'.join(k[3:] for k in TV_PARENTAL_GUIDELINES), s) + if m: + return TV_PARENTAL_GUIDELINES['TV-' + m.group(1)] + return None + + +def strip_jsonp(code): + return re.sub( + r'''(?sx)^ + (?:window\.)?(?P<func_name>[a-zA-Z0-9_.$]*) + (?:\s*&&\s*(?P=func_name))? + \s*\(\s*(?P<callback_data>.*)\);? 
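+        # e.g. 'cb({"id": 1});' -> '{"id": 1}' (illustrative)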
+ \s*?(?://[^\n]*)*$''', + r'\g<callback_data>', code) + + +def js_to_json(code, vars={}, *, strict=False): + # vars is a dict of var, val pairs to substitute + STRING_QUOTES = '\'"`' + STRING_RE = '|'.join(rf'{q}(?:\\.|[^\\{q}])*{q}' for q in STRING_QUOTES) + COMMENT_RE = r'/\*(?:(?!\*/).)*?\*/|//[^\n]*\n' + SKIP_RE = fr'\s*(?:{COMMENT_RE})?\s*' + INTEGER_TABLE = ( + (fr'(?s)^(0[xX][0-9a-fA-F]+){SKIP_RE}:?$', 16), + (fr'(?s)^(0+[0-7]+){SKIP_RE}:?$', 8), + ) + + def process_escape(match): + JSON_PASSTHROUGH_ESCAPES = R'"\bfnrtu' + escape = match.group(1) or match.group(2) + + return (Rf'\{escape}' if escape in JSON_PASSTHROUGH_ESCAPES + else R'\u00' if escape == 'x' + else '' if escape == '\n' + else escape) + + def template_substitute(match): + evaluated = js_to_json(match.group(1), vars, strict=strict) + if evaluated[0] == '"': + return json.loads(evaluated) + return evaluated + + def fix_kv(m): + v = m.group(0) + if v in ('true', 'false', 'null'): + return v + elif v in ('undefined', 'void 0'): + return 'null' + elif v.startswith('/*') or v.startswith('//') or v.startswith('!') or v == ',': + return '' + + if v[0] in STRING_QUOTES: + v = re.sub(r'(?s)\${([^}]+)}', template_substitute, v[1:-1]) if v[0] == '`' else v[1:-1] + escaped = re.sub(r'(?s)(")|\\(.)', process_escape, v) + return f'"{escaped}"' + + for regex, base in INTEGER_TABLE: + im = re.match(regex, v) + if im: + i = int(im.group(1), base) + return f'"{i}":' if v.endswith(':') else str(i) + + if v in vars: + try: + if not strict: + json.loads(vars[v]) + except json.JSONDecodeError: + return json.dumps(vars[v]) + else: + return vars[v] + + if not strict: + return f'"{v}"' + + raise ValueError(f'Unknown value: {v}') + + def create_map(mobj): + return json.dumps(dict(json.loads(js_to_json(mobj.group(1) or '[]', vars=vars)))) + + code = re.sub(r'(?:new\s+)?Array\((.*?)\)', r'[\g<1>]', code) + code = re.sub(r'new Map\((\[.*?\])?\)', create_map, code) + if not strict: + code = re.sub(rf'new Date\(({STRING_RE})\)', r'\g<1>', code) + code = re.sub(r'new \w+\((.*?)\)', lambda m: json.dumps(m.group(0)), code) + code = re.sub(r'parseInt\([^\d]+(\d+)[^\d]+\)', r'\1', code) + code = re.sub(r'\(function\([^)]*\)\s*\{[^}]*\}\s*\)\s*\(\s*(["\'][^)]*["\'])\s*\)', r'\1', code) + + return re.sub(rf'''(?sx) + {STRING_RE}| + {COMMENT_RE}|,(?={SKIP_RE}[\]}}])| + void\s0|(?:(?<![0-9])[eE]|[a-df-zA-DF-Z_$])[.a-zA-Z_$0-9]*| + \b(?:0[xX][0-9a-fA-F]+|0+[0-7]+)(?:{SKIP_RE}:)?| + [0-9]+(?={SKIP_RE}:)| + !+ + ''', fix_kv, code) + + +def qualities(quality_ids): + """ Get a numeric quality value out of a list of possible values """ + def q(qid): + try: + return quality_ids.index(qid) + except ValueError: + return -1 + return q + + +POSTPROCESS_WHEN = ('pre_process', 'after_filter', 'video', 'before_dl', 'post_process', 'after_move', 'after_video', 'playlist') + + +DEFAULT_OUTTMPL = { + 'default': '%(title)s [%(id)s].%(ext)s', + 'chapter': '%(title)s - %(section_number)03d %(section_title)s [%(id)s].%(ext)s', +} +OUTTMPL_TYPES = { + 'chapter': None, + 'subtitle': None, + 'thumbnail': None, + 'description': 'description', + 'annotation': 'annotations.xml', + 'infojson': 'info.json', + 'link': None, + 'pl_video': None, + 'pl_thumbnail': None, + 'pl_description': 'description', + 'pl_infojson': 'info.json', +} + +# As of [1] format syntax is: +# %[mapping_key][conversion_flags][minimum_width][.precision][length_modifier]type +# 1. 
https://docs.python.org/2/library/stdtypes.html#string-formatting +STR_FORMAT_RE_TMPL = r'''(?x) + (?<!%)(?P<prefix>(?:%%)*) + % + (?P<has_key>\((?P<key>{0})\))? + (?P<format> + (?P<conversion>[#0\-+ ]+)? + (?P<min_width>\d+)? + (?P<precision>\.\d+)? + (?P<len_mod>[hlL])? # unused in python + {1} # conversion type + ) +''' + + +STR_FORMAT_TYPES = 'diouxXeEfFgGcrsa' + + +def limit_length(s, length): + """ Add ellipses to overly long strings """ + if s is None: + return None + ELLIPSES = '...' + if len(s) > length: + return s[:length - len(ELLIPSES)] + ELLIPSES + return s + + +def version_tuple(v): + return tuple(int(e) for e in re.split(r'[-.]', v)) + + +def is_outdated_version(version, limit, assume_new=True): + if not version: + return not assume_new + try: + return version_tuple(version) < version_tuple(limit) + except ValueError: + return not assume_new + + +def ytdl_is_updateable(): + """ Returns if yt-dlp can be updated with -U """ + + from ..update import is_non_updateable + + return not is_non_updateable() + + +def args_to_str(args): + # Get a short string representation for a subprocess command + return ' '.join(compat_shlex_quote(a) for a in args) + + +def error_to_str(err): + return f'{type(err).__name__}: {err}' + + +def mimetype2ext(mt, default=NO_DEFAULT): + if not isinstance(mt, str): + if default is not NO_DEFAULT: + return default + return None + + MAP = { + # video + '3gpp': '3gp', + 'mp2t': 'ts', + 'mp4': 'mp4', + 'mpeg': 'mpeg', + 'mpegurl': 'm3u8', + 'quicktime': 'mov', + 'webm': 'webm', + 'vp9': 'vp9', + 'video/ogg': 'ogv', + 'x-flv': 'flv', + 'x-m4v': 'm4v', + 'x-matroska': 'mkv', + 'x-mng': 'mng', + 'x-mp4-fragmented': 'mp4', + 'x-ms-asf': 'asf', + 'x-ms-wmv': 'wmv', + 'x-msvideo': 'avi', + + # application (streaming playlists) + 'dash+xml': 'mpd', + 'f4m+xml': 'f4m', + 'hds+xml': 'f4m', + 'vnd.apple.mpegurl': 'm3u8', + 'vnd.ms-sstr+xml': 'ism', + 'x-mpegurl': 'm3u8', + + # audio + 'audio/mp4': 'm4a', + # Per RFC 3003, audio/mpeg can be .mp1, .mp2 or .mp3. + # Using .mp3 as it's the most popular one + 'audio/mpeg': 'mp3', + 'audio/webm': 'webm', + 'audio/x-matroska': 'mka', + 'audio/x-mpegurl': 'm3u', + 'midi': 'mid', + 'ogg': 'ogg', + 'wav': 'wav', + 'wave': 'wav', + 'x-aac': 'aac', + 'x-flac': 'flac', + 'x-m4a': 'm4a', + 'x-realaudio': 'ra', + 'x-wav': 'wav', + + # image + 'avif': 'avif', + 'bmp': 'bmp', + 'gif': 'gif', + 'jpeg': 'jpg', + 'png': 'png', + 'svg+xml': 'svg', + 'tiff': 'tif', + 'vnd.wap.wbmp': 'wbmp', + 'webp': 'webp', + 'x-icon': 'ico', + 'x-jng': 'jng', + 'x-ms-bmp': 'bmp', + + # caption + 'filmstrip+json': 'fs', + 'smptett+xml': 'tt', + 'ttaf+xml': 'dfxp', + 'ttml+xml': 'ttml', + 'x-ms-sami': 'sami', + + # misc + 'gzip': 'gz', + 'json': 'json', + 'xml': 'xml', + 'zip': 'zip', + } + + mimetype = mt.partition(';')[0].strip().lower() + _, _, subtype = mimetype.rpartition('/') + + ext = traversal.traverse_obj(MAP, mimetype, subtype, subtype.rsplit('+')[-1]) + if ext: + return ext + elif default is not NO_DEFAULT: + return default + return subtype.replace('+', '.') + + +def ext2mimetype(ext_or_url): + if not ext_or_url: + return None + if '.' 
not in ext_or_url: + ext_or_url = f'file.{ext_or_url}' + return mimetypes.guess_type(ext_or_url)[0] + + +def parse_codecs(codecs_str): + # http://tools.ietf.org/html/rfc6381 + if not codecs_str: + return {} + split_codecs = list(filter(None, map( + str.strip, codecs_str.strip().strip(',').split(',')))) + vcodec, acodec, scodec, hdr = None, None, None, None + for full_codec in split_codecs: + parts = re.sub(r'0+(?=\d)', '', full_codec).split('.') + if parts[0] in ('avc1', 'avc2', 'avc3', 'avc4', 'vp9', 'vp8', 'hev1', 'hev2', + 'h263', 'h264', 'mp4v', 'hvc1', 'av1', 'theora', 'dvh1', 'dvhe'): + if vcodec: + continue + vcodec = full_codec + if parts[0] in ('dvh1', 'dvhe'): + hdr = 'DV' + elif parts[0] == 'av1' and traversal.traverse_obj(parts, 3) == '10': + hdr = 'HDR10' + elif parts[:2] == ['vp9', '2']: + hdr = 'HDR10' + elif parts[0] in ('flac', 'mp4a', 'opus', 'vorbis', 'mp3', 'aac', 'ac-4', + 'ac-3', 'ec-3', 'eac3', 'dtsc', 'dtse', 'dtsh', 'dtsl'): + acodec = acodec or full_codec + elif parts[0] in ('stpp', 'wvtt'): + scodec = scodec or full_codec + else: + write_string(f'WARNING: Unknown codec {full_codec}\n') + if vcodec or acodec or scodec: + return { + 'vcodec': vcodec or 'none', + 'acodec': acodec or 'none', + 'dynamic_range': hdr, + **({'scodec': scodec} if scodec is not None else {}), + } + elif len(split_codecs) == 2: + return { + 'vcodec': split_codecs[0], + 'acodec': split_codecs[1], + } + return {} + + +def get_compatible_ext(*, vcodecs, acodecs, vexts, aexts, preferences=None): + assert len(vcodecs) == len(vexts) and len(acodecs) == len(aexts) + + allow_mkv = not preferences or 'mkv' in preferences + + if allow_mkv and max(len(acodecs), len(vcodecs)) > 1: + return 'mkv' # TODO: any other format allows this? + + # TODO: All codecs supported by parse_codecs isn't handled here + COMPATIBLE_CODECS = { + 'mp4': { + 'av1', 'hevc', 'avc1', 'mp4a', 'ac-4', # fourcc (m3u8, mpd) + 'h264', 'aacl', 'ec-3', # Set in ISM + }, + 'webm': { + 'av1', 'vp9', 'vp8', 'opus', 'vrbs', + 'vp9x', 'vp8x', # in the webm spec + }, + } + + sanitize_codec = functools.partial( + try_get, getter=lambda x: x[0].split('.')[0].replace('0', '').lower()) + vcodec, acodec = sanitize_codec(vcodecs), sanitize_codec(acodecs) + + for ext in preferences or COMPATIBLE_CODECS.keys(): + codec_set = COMPATIBLE_CODECS.get(ext, set()) + if ext == 'mkv' or codec_set.issuperset((vcodec, acodec)): + return ext + + COMPATIBLE_EXTS = ( + {'mp3', 'mp4', 'm4a', 'm4p', 'm4b', 'm4r', 'm4v', 'ismv', 'isma', 'mov'}, + {'webm', 'weba'}, + ) + for ext in preferences or vexts: + current_exts = {ext, *vexts, *aexts} + if ext == 'mkv' or current_exts == {ext} or any( + ext_sets.issuperset(current_exts) for ext_sets in COMPATIBLE_EXTS): + return ext + return 'mkv' if allow_mkv else preferences[-1] + + +def urlhandle_detect_ext(url_handle, default=NO_DEFAULT): + getheader = url_handle.headers.get + + cd = getheader('Content-Disposition') + if cd: + m = re.match(r'attachment;\s*filename="(?P<filename>[^"]+)"', cd) + if m: + e = determine_ext(m.group('filename'), default_ext=None) + if e: + return e + + meta_ext = getheader('x-amz-meta-name') + if meta_ext: + e = meta_ext.rpartition('.')[2] + if e: + return e + + return mimetype2ext(getheader('Content-Type'), default=default) + + +def encode_data_uri(data, mime_type): + return 'data:%s;base64,%s' % (mime_type, base64.b64encode(data).decode('ascii')) + + +def age_restricted(content_limit, age_limit): + """ Returns True iff the content should be blocked """ + + if age_limit is None: # No limit 
set + return False + if content_limit is None: + return False # Content available for everyone + return age_limit < content_limit + + +# List of known byte-order-marks (BOM) +BOMS = [ + (b'\xef\xbb\xbf', 'utf-8'), + (b'\x00\x00\xfe\xff', 'utf-32-be'), + (b'\xff\xfe\x00\x00', 'utf-32-le'), + (b'\xff\xfe', 'utf-16-le'), + (b'\xfe\xff', 'utf-16-be'), +] + + +def is_html(first_bytes): + """ Detect whether a file contains HTML by examining its first bytes. """ + + encoding = 'utf-8' + for bom, enc in BOMS: + while first_bytes.startswith(bom): + encoding, first_bytes = enc, first_bytes[len(bom):] + + return re.match(r'^\s*<', first_bytes.decode(encoding, 'replace')) + + +def determine_protocol(info_dict): + protocol = info_dict.get('protocol') + if protocol is not None: + return protocol + + url = sanitize_url(info_dict['url']) + if url.startswith('rtmp'): + return 'rtmp' + elif url.startswith('mms'): + return 'mms' + elif url.startswith('rtsp'): + return 'rtsp' + + ext = determine_ext(url) + if ext == 'm3u8': + return 'm3u8' if info_dict.get('is_live') else 'm3u8_native' + elif ext == 'f4m': + return 'f4m' + + return urllib.parse.urlparse(url).scheme + + +def render_table(header_row, data, delim=False, extra_gap=0, hide_empty=False): + """ Render a list of rows, each as a list of values. + Text after a \t will be right aligned """ + def width(string): + return len(remove_terminal_sequences(string).replace('\t', '')) + + def get_max_lens(table): + return [max(width(str(v)) for v in col) for col in zip(*table)] + + def filter_using_list(row, filterArray): + return [col for take, col in itertools.zip_longest(filterArray, row, fillvalue=True) if take] + + max_lens = get_max_lens(data) if hide_empty else [] + header_row = filter_using_list(header_row, max_lens) + data = [filter_using_list(row, max_lens) for row in data] + + table = [header_row] + data + max_lens = get_max_lens(table) + extra_gap += 1 + if delim: + table = [header_row, [delim * (ml + extra_gap) for ml in max_lens]] + data + table[1][-1] = table[1][-1][:-extra_gap * len(delim)] # Remove extra_gap from end of delimiter + for row in table: + for pos, text in enumerate(map(str, row)): + if '\t' in text: + row[pos] = text.replace('\t', ' ' * (max_lens[pos] - width(text))) + ' ' * extra_gap + else: + row[pos] = text + ' ' * (max_lens[pos] - width(text) + extra_gap) + ret = '\n'.join(''.join(row).rstrip() for row in table) + return ret + + +def _match_one(filter_part, dct, incomplete): + # TODO: Generalize code with YoutubeDL._build_format_filter + STRING_OPERATORS = { + '*=': operator.contains, + '^=': lambda attr, value: attr.startswith(value), + '$=': lambda attr, value: attr.endswith(value), + '~=': lambda attr, value: re.search(value, attr), + } + COMPARISON_OPERATORS = { + **STRING_OPERATORS, + '<=': operator.le, # "<=" must be defined above "<" + '<': operator.lt, + '>=': operator.ge, + '>': operator.gt, + '=': operator.eq, + } + + if isinstance(incomplete, bool): + is_incomplete = lambda _: incomplete + else: + is_incomplete = lambda k: k in incomplete + + operator_rex = re.compile(r'''(?x) + (?P<key>[a-z_]+) + \s*(?P<negation>!\s*)?(?P<op>%s)(?P<none_inclusive>\s*\?)?\s* + (?: + (?P<quote>["\'])(?P<quotedstrval>.+?)(?P=quote)| + (?P<strval>.+?) 
+ ) + ''' % '|'.join(map(re.escape, COMPARISON_OPERATORS.keys()))) + m = operator_rex.fullmatch(filter_part.strip()) + if m: + m = m.groupdict() + unnegated_op = COMPARISON_OPERATORS[m['op']] + if m['negation']: + op = lambda attr, value: not unnegated_op(attr, value) + else: + op = unnegated_op + comparison_value = m['quotedstrval'] or m['strval'] or m['intval'] + if m['quote']: + comparison_value = comparison_value.replace(r'\%s' % m['quote'], m['quote']) + actual_value = dct.get(m['key']) + numeric_comparison = None + if isinstance(actual_value, (int, float)): + # If the original field is a string and matching comparisonvalue is + # a number we should respect the origin of the original field + # and process comparison value as a string (see + # https://github.com/ytdl-org/youtube-dl/issues/11082) + try: + numeric_comparison = int(comparison_value) + except ValueError: + numeric_comparison = parse_filesize(comparison_value) + if numeric_comparison is None: + numeric_comparison = parse_filesize(f'{comparison_value}B') + if numeric_comparison is None: + numeric_comparison = parse_duration(comparison_value) + if numeric_comparison is not None and m['op'] in STRING_OPERATORS: + raise ValueError('Operator %s only supports string values!' % m['op']) + if actual_value is None: + return is_incomplete(m['key']) or m['none_inclusive'] + return op(actual_value, comparison_value if numeric_comparison is None else numeric_comparison) + + UNARY_OPERATORS = { + '': lambda v: (v is True) if isinstance(v, bool) else (v is not None), + '!': lambda v: (v is False) if isinstance(v, bool) else (v is None), + } + operator_rex = re.compile(r'''(?x) + (?P<op>%s)\s*(?P<key>[a-z_]+) + ''' % '|'.join(map(re.escape, UNARY_OPERATORS.keys()))) + m = operator_rex.fullmatch(filter_part.strip()) + if m: + op = UNARY_OPERATORS[m.group('op')] + actual_value = dct.get(m.group('key')) + if is_incomplete(m.group('key')) and actual_value is None: + return True + return op(actual_value) + + raise ValueError('Invalid filter part %r' % filter_part) + + +def match_str(filter_str, dct, incomplete=False): + """ Filter a dictionary with a simple string syntax. + @returns Whether the filter passes + @param incomplete Set of keys that is expected to be missing from dct. + Can be True/False to indicate all/none of the keys may be missing. + All conditions on incomplete keys pass if the key is missing + """ + return all( + _match_one(filter_part.replace(r'\&', '&'), dct, incomplete) + for filter_part in re.split(r'(?<!\\)&', filter_str)) + + +def match_filter_func(filters, breaking_filters=None): + if not filters and not breaking_filters: + return None + breaking_filters = match_filter_func(breaking_filters) or (lambda _, __: None) + filters = set(variadic(filters or [])) + + interactive = '-' in filters + if interactive: + filters.remove('-') + + def _match_func(info_dict, incomplete=False): + ret = breaking_filters(info_dict, incomplete) + if ret is not None: + raise RejectedVideoReached(ret) + + if not filters or any(match_str(f, info_dict, incomplete) for f in filters): + return NO_DEFAULT if interactive and not incomplete else None + else: + video_title = info_dict.get('title') or info_dict.get('id') or 'entry' + filter_str = ') | ('.join(map(str.strip, filters)) + return f'{video_title} does not pass filter ({filter_str}), skipping ..' 
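+    # Illustrative sketch (not part of the original patch): the matcher built
+    # here evaluates match_str() expressions against an info dict, e.g.
+    #   match_str('duration > 60 & like_count >? 100', {'duration': 90})  # True
+    # (like_count is missing, but the trailing '?' lets that condition pass).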
+ return _match_func + + +class download_range_func: + def __init__(self, chapters, ranges, from_info=False): + self.chapters, self.ranges, self.from_info = chapters, ranges, from_info + + def __call__(self, info_dict, ydl): + + warning = ('There are no chapters matching the regex' if info_dict.get('chapters') + else 'Cannot match chapters since chapter information is unavailable') + for regex in self.chapters or []: + for i, chapter in enumerate(info_dict.get('chapters') or []): + if re.search(regex, chapter['title']): + warning = None + yield {**chapter, 'index': i} + if self.chapters and warning: + ydl.to_screen(f'[info] {info_dict["id"]}: {warning}') + + for start, end in self.ranges or []: + yield { + 'start_time': self._handle_negative_timestamp(start, info_dict), + 'end_time': self._handle_negative_timestamp(end, info_dict), + } + + if self.from_info and (info_dict.get('start_time') or info_dict.get('end_time')): + yield { + 'start_time': info_dict.get('start_time') or 0, + 'end_time': info_dict.get('end_time') or float('inf'), + } + elif not self.ranges and not self.chapters: + yield {} + + @staticmethod + def _handle_negative_timestamp(time, info): + return max(info['duration'] + time, 0) if info.get('duration') and time < 0 else time + + def __eq__(self, other): + return (isinstance(other, download_range_func) + and self.chapters == other.chapters and self.ranges == other.ranges) + + def __repr__(self): + return f'{__name__}.{type(self).__name__}({self.chapters}, {self.ranges})' + + +def parse_dfxp_time_expr(time_expr): + if not time_expr: + return + + mobj = re.match(rf'^(?P<time_offset>{NUMBER_RE})s?$', time_expr) + if mobj: + return float(mobj.group('time_offset')) + + mobj = re.match(r'^(\d+):(\d\d):(\d\d(?:(?:\.|:)\d+)?)$', time_expr) + if mobj: + return 3600 * int(mobj.group(1)) + 60 * int(mobj.group(2)) + float(mobj.group(3).replace(':', '.')) + + +def srt_subtitles_timecode(seconds): + return '%02d:%02d:%02d,%03d' % timetuple_from_msec(seconds * 1000) + + +def ass_subtitles_timecode(seconds): + time = timetuple_from_msec(seconds * 1000) + return '%01d:%02d:%02d.%02d' % (*time[:-1], time.milliseconds / 10) + + +def dfxp2srt(dfxp_data): + ''' + @param dfxp_data A bytes-like object containing DFXP data + @returns A unicode object containing converted SRT data + ''' + LEGACY_NAMESPACES = ( + (b'http://www.w3.org/ns/ttml', [ + b'http://www.w3.org/2004/11/ttaf1', + b'http://www.w3.org/2006/04/ttaf1', + b'http://www.w3.org/2006/10/ttaf1', + ]), + (b'http://www.w3.org/ns/ttml#styling', [ + b'http://www.w3.org/ns/ttml#style', + ]), + ) + + SUPPORTED_STYLING = [ + 'color', + 'fontFamily', + 'fontSize', + 'fontStyle', + 'fontWeight', + 'textDecoration' + ] + + _x = functools.partial(xpath_with_ns, ns_map={ + 'xml': 'http://www.w3.org/XML/1998/namespace', + 'ttml': 'http://www.w3.org/ns/ttml', + 'tts': 'http://www.w3.org/ns/ttml#styling', + }) + + styles = {} + default_style = {} + + class TTMLPElementParser: + _out = '' + _unclosed_elements = [] + _applied_styles = [] + + def start(self, tag, attrib): + if tag in (_x('ttml:br'), 'br'): + self._out += '\n' + else: + unclosed_elements = [] + style = {} + element_style_id = attrib.get('style') + if default_style: + style.update(default_style) + if element_style_id: + style.update(styles.get(element_style_id, {})) + for prop in SUPPORTED_STYLING: + prop_val = attrib.get(_x('tts:' + prop)) + if prop_val: + style[prop] = prop_val + if style: + font = '' + for k, v in sorted(style.items()): + if self._applied_styles and 
self._applied_styles[-1].get(k) == v: + continue + if k == 'color': + font += ' color="%s"' % v + elif k == 'fontSize': + font += ' size="%s"' % v + elif k == 'fontFamily': + font += ' face="%s"' % v + elif k == 'fontWeight' and v == 'bold': + self._out += '<b>' + unclosed_elements.append('b') + elif k == 'fontStyle' and v == 'italic': + self._out += '<i>' + unclosed_elements.append('i') + elif k == 'textDecoration' and v == 'underline': + self._out += '<u>' + unclosed_elements.append('u') + if font: + self._out += '<font' + font + '>' + unclosed_elements.append('font') + applied_style = {} + if self._applied_styles: + applied_style.update(self._applied_styles[-1]) + applied_style.update(style) + self._applied_styles.append(applied_style) + self._unclosed_elements.append(unclosed_elements) + + def end(self, tag): + if tag not in (_x('ttml:br'), 'br'): + unclosed_elements = self._unclosed_elements.pop() + for element in reversed(unclosed_elements): + self._out += '</%s>' % element + if unclosed_elements and self._applied_styles: + self._applied_styles.pop() + + def data(self, data): + self._out += data + + def close(self): + return self._out.strip() + + # Fix UTF-8 encoded file wrongly marked as UTF-16. See https://github.com/yt-dlp/yt-dlp/issues/6543#issuecomment-1477169870 + # This will not trigger false positives since only UTF-8 text is being replaced + dfxp_data = dfxp_data.replace(b'encoding=\'UTF-16\'', b'encoding=\'UTF-8\'') + + def parse_node(node): + target = TTMLPElementParser() + parser = xml.etree.ElementTree.XMLParser(target=target) + parser.feed(xml.etree.ElementTree.tostring(node)) + return parser.close() + + for k, v in LEGACY_NAMESPACES: + for ns in v: + dfxp_data = dfxp_data.replace(ns, k) + + dfxp = compat_etree_fromstring(dfxp_data) + out = [] + paras = dfxp.findall(_x('.//ttml:p')) or dfxp.findall('.//p') + + if not paras: + raise ValueError('Invalid dfxp/TTML subtitle') + + repeat = False + while True: + for style in dfxp.findall(_x('.//ttml:style')): + style_id = style.get('id') or style.get(_x('xml:id')) + if not style_id: + continue + parent_style_id = style.get('style') + if parent_style_id: + if parent_style_id not in styles: + repeat = True + continue + styles[style_id] = styles[parent_style_id].copy() + for prop in SUPPORTED_STYLING: + prop_val = style.get(_x('tts:' + prop)) + if prop_val: + styles.setdefault(style_id, {})[prop] = prop_val + if repeat: + repeat = False + else: + break + + for p in ('body', 'div'): + ele = xpath_element(dfxp, [_x('.//ttml:' + p), './/' + p]) + if ele is None: + continue + style = styles.get(ele.get('style')) + if not style: + continue + default_style.update(style) + + for para, index in zip(paras, itertools.count(1)): + begin_time = parse_dfxp_time_expr(para.attrib.get('begin')) + end_time = parse_dfxp_time_expr(para.attrib.get('end')) + dur = parse_dfxp_time_expr(para.attrib.get('dur')) + if begin_time is None: + continue + if not end_time: + if not dur: + continue + end_time = begin_time + dur + out.append('%d\n%s --> %s\n%s\n\n' % ( + index, + srt_subtitles_timecode(begin_time), + srt_subtitles_timecode(end_time), + parse_node(para))) + + return ''.join(out) + + +def cli_option(params, command_option, param, separator=None): + param = params.get(param) + return ([] if param is None + else [command_option, str(param)] if separator is None + else [f'{command_option}{separator}{param}']) + + +def cli_bool_option(params, command_option, param, true_value='true', false_value='false', separator=None): + param = params.get(param) + 
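+    # Usage sketch (illustrative values, not from the original patch):
+    #   cli_bool_option({'nocheckcertificate': True}, '--check-certificate',
+    #                   'nocheckcertificate', 'false', 'true', '=')
+    #   == ['--check-certificate=false']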
assert param in (True, False, None) + return cli_option({True: true_value, False: false_value}, command_option, param, separator) + + +def cli_valueless_option(params, command_option, param, expected_value=True): + return [command_option] if params.get(param) == expected_value else [] + + +def cli_configuration_args(argdict, keys, default=[], use_compat=True): + if isinstance(argdict, (list, tuple)): # for backward compatibility + if use_compat: + return argdict + else: + argdict = None + if argdict is None: + return default + assert isinstance(argdict, dict) + + assert isinstance(keys, (list, tuple)) + for key_list in keys: + arg_list = list(filter( + lambda x: x is not None, + [argdict.get(key.lower()) for key in variadic(key_list)])) + if arg_list: + return [arg for args in arg_list for arg in args] + return default + + +def _configuration_args(main_key, argdict, exe, keys=None, default=[], use_compat=True): + main_key, exe = main_key.lower(), exe.lower() + root_key = exe if main_key == exe else f'{main_key}+{exe}' + keys = [f'{root_key}{k}' for k in (keys or [''])] + if root_key in keys: + if main_key != exe: + keys.append((main_key, exe)) + keys.append('default') + else: + use_compat = False + return cli_configuration_args(argdict, keys, default, use_compat) + + +class ISO639Utils: + # See http://www.loc.gov/standards/iso639-2/ISO-639-2_utf-8.txt + _lang_map = { + 'aa': 'aar', + 'ab': 'abk', + 'ae': 'ave', + 'af': 'afr', + 'ak': 'aka', + 'am': 'amh', + 'an': 'arg', + 'ar': 'ara', + 'as': 'asm', + 'av': 'ava', + 'ay': 'aym', + 'az': 'aze', + 'ba': 'bak', + 'be': 'bel', + 'bg': 'bul', + 'bh': 'bih', + 'bi': 'bis', + 'bm': 'bam', + 'bn': 'ben', + 'bo': 'bod', + 'br': 'bre', + 'bs': 'bos', + 'ca': 'cat', + 'ce': 'che', + 'ch': 'cha', + 'co': 'cos', + 'cr': 'cre', + 'cs': 'ces', + 'cu': 'chu', + 'cv': 'chv', + 'cy': 'cym', + 'da': 'dan', + 'de': 'deu', + 'dv': 'div', + 'dz': 'dzo', + 'ee': 'ewe', + 'el': 'ell', + 'en': 'eng', + 'eo': 'epo', + 'es': 'spa', + 'et': 'est', + 'eu': 'eus', + 'fa': 'fas', + 'ff': 'ful', + 'fi': 'fin', + 'fj': 'fij', + 'fo': 'fao', + 'fr': 'fra', + 'fy': 'fry', + 'ga': 'gle', + 'gd': 'gla', + 'gl': 'glg', + 'gn': 'grn', + 'gu': 'guj', + 'gv': 'glv', + 'ha': 'hau', + 'he': 'heb', + 'iw': 'heb', # Replaced by he in 1989 revision + 'hi': 'hin', + 'ho': 'hmo', + 'hr': 'hrv', + 'ht': 'hat', + 'hu': 'hun', + 'hy': 'hye', + 'hz': 'her', + 'ia': 'ina', + 'id': 'ind', + 'in': 'ind', # Replaced by id in 1989 revision + 'ie': 'ile', + 'ig': 'ibo', + 'ii': 'iii', + 'ik': 'ipk', + 'io': 'ido', + 'is': 'isl', + 'it': 'ita', + 'iu': 'iku', + 'ja': 'jpn', + 'jv': 'jav', + 'ka': 'kat', + 'kg': 'kon', + 'ki': 'kik', + 'kj': 'kua', + 'kk': 'kaz', + 'kl': 'kal', + 'km': 'khm', + 'kn': 'kan', + 'ko': 'kor', + 'kr': 'kau', + 'ks': 'kas', + 'ku': 'kur', + 'kv': 'kom', + 'kw': 'cor', + 'ky': 'kir', + 'la': 'lat', + 'lb': 'ltz', + 'lg': 'lug', + 'li': 'lim', + 'ln': 'lin', + 'lo': 'lao', + 'lt': 'lit', + 'lu': 'lub', + 'lv': 'lav', + 'mg': 'mlg', + 'mh': 'mah', + 'mi': 'mri', + 'mk': 'mkd', + 'ml': 'mal', + 'mn': 'mon', + 'mr': 'mar', + 'ms': 'msa', + 'mt': 'mlt', + 'my': 'mya', + 'na': 'nau', + 'nb': 'nob', + 'nd': 'nde', + 'ne': 'nep', + 'ng': 'ndo', + 'nl': 'nld', + 'nn': 'nno', + 'no': 'nor', + 'nr': 'nbl', + 'nv': 'nav', + 'ny': 'nya', + 'oc': 'oci', + 'oj': 'oji', + 'om': 'orm', + 'or': 'ori', + 'os': 'oss', + 'pa': 'pan', + 'pe': 'per', + 'pi': 'pli', + 'pl': 'pol', + 'ps': 'pus', + 'pt': 'por', + 'qu': 'que', + 'rm': 'roh', + 'rn': 'run', + 'ro': 'ron', + 'ru': 'rus', + 'rw': 
'kin', + 'sa': 'san', + 'sc': 'srd', + 'sd': 'snd', + 'se': 'sme', + 'sg': 'sag', + 'si': 'sin', + 'sk': 'slk', + 'sl': 'slv', + 'sm': 'smo', + 'sn': 'sna', + 'so': 'som', + 'sq': 'sqi', + 'sr': 'srp', + 'ss': 'ssw', + 'st': 'sot', + 'su': 'sun', + 'sv': 'swe', + 'sw': 'swa', + 'ta': 'tam', + 'te': 'tel', + 'tg': 'tgk', + 'th': 'tha', + 'ti': 'tir', + 'tk': 'tuk', + 'tl': 'tgl', + 'tn': 'tsn', + 'to': 'ton', + 'tr': 'tur', + 'ts': 'tso', + 'tt': 'tat', + 'tw': 'twi', + 'ty': 'tah', + 'ug': 'uig', + 'uk': 'ukr', + 'ur': 'urd', + 'uz': 'uzb', + 've': 'ven', + 'vi': 'vie', + 'vo': 'vol', + 'wa': 'wln', + 'wo': 'wol', + 'xh': 'xho', + 'yi': 'yid', + 'ji': 'yid', # Replaced by yi in 1989 revision + 'yo': 'yor', + 'za': 'zha', + 'zh': 'zho', + 'zu': 'zul', + } + + @classmethod + def short2long(cls, code): + """Convert language code from ISO 639-1 to ISO 639-2/T""" + return cls._lang_map.get(code[:2]) + + @classmethod + def long2short(cls, code): + """Convert language code from ISO 639-2/T to ISO 639-1""" + for short_name, long_name in cls._lang_map.items(): + if long_name == code: + return short_name + + +class ISO3166Utils: + # From http://data.okfn.org/data/core/country-list + _country_map = { + 'AF': 'Afghanistan', + 'AX': 'Åland Islands', + 'AL': 'Albania', + 'DZ': 'Algeria', + 'AS': 'American Samoa', + 'AD': 'Andorra', + 'AO': 'Angola', + 'AI': 'Anguilla', + 'AQ': 'Antarctica', + 'AG': 'Antigua and Barbuda', + 'AR': 'Argentina', + 'AM': 'Armenia', + 'AW': 'Aruba', + 'AU': 'Australia', + 'AT': 'Austria', + 'AZ': 'Azerbaijan', + 'BS': 'Bahamas', + 'BH': 'Bahrain', + 'BD': 'Bangladesh', + 'BB': 'Barbados', + 'BY': 'Belarus', + 'BE': 'Belgium', + 'BZ': 'Belize', + 'BJ': 'Benin', + 'BM': 'Bermuda', + 'BT': 'Bhutan', + 'BO': 'Bolivia, Plurinational State of', + 'BQ': 'Bonaire, Sint Eustatius and Saba', + 'BA': 'Bosnia and Herzegovina', + 'BW': 'Botswana', + 'BV': 'Bouvet Island', + 'BR': 'Brazil', + 'IO': 'British Indian Ocean Territory', + 'BN': 'Brunei Darussalam', + 'BG': 'Bulgaria', + 'BF': 'Burkina Faso', + 'BI': 'Burundi', + 'KH': 'Cambodia', + 'CM': 'Cameroon', + 'CA': 'Canada', + 'CV': 'Cape Verde', + 'KY': 'Cayman Islands', + 'CF': 'Central African Republic', + 'TD': 'Chad', + 'CL': 'Chile', + 'CN': 'China', + 'CX': 'Christmas Island', + 'CC': 'Cocos (Keeling) Islands', + 'CO': 'Colombia', + 'KM': 'Comoros', + 'CG': 'Congo', + 'CD': 'Congo, the Democratic Republic of the', + 'CK': 'Cook Islands', + 'CR': 'Costa Rica', + 'CI': 'Côte d\'Ivoire', + 'HR': 'Croatia', + 'CU': 'Cuba', + 'CW': 'Curaçao', + 'CY': 'Cyprus', + 'CZ': 'Czech Republic', + 'DK': 'Denmark', + 'DJ': 'Djibouti', + 'DM': 'Dominica', + 'DO': 'Dominican Republic', + 'EC': 'Ecuador', + 'EG': 'Egypt', + 'SV': 'El Salvador', + 'GQ': 'Equatorial Guinea', + 'ER': 'Eritrea', + 'EE': 'Estonia', + 'ET': 'Ethiopia', + 'FK': 'Falkland Islands (Malvinas)', + 'FO': 'Faroe Islands', + 'FJ': 'Fiji', + 'FI': 'Finland', + 'FR': 'France', + 'GF': 'French Guiana', + 'PF': 'French Polynesia', + 'TF': 'French Southern Territories', + 'GA': 'Gabon', + 'GM': 'Gambia', + 'GE': 'Georgia', + 'DE': 'Germany', + 'GH': 'Ghana', + 'GI': 'Gibraltar', + 'GR': 'Greece', + 'GL': 'Greenland', + 'GD': 'Grenada', + 'GP': 'Guadeloupe', + 'GU': 'Guam', + 'GT': 'Guatemala', + 'GG': 'Guernsey', + 'GN': 'Guinea', + 'GW': 'Guinea-Bissau', + 'GY': 'Guyana', + 'HT': 'Haiti', + 'HM': 'Heard Island and McDonald Islands', + 'VA': 'Holy See (Vatican City State)', + 'HN': 'Honduras', + 'HK': 'Hong Kong', + 'HU': 'Hungary', + 'IS': 'Iceland', + 'IN': 'India', + 'ID':
'Indonesia', + 'IR': 'Iran, Islamic Republic of', + 'IQ': 'Iraq', + 'IE': 'Ireland', + 'IM': 'Isle of Man', + 'IL': 'Israel', + 'IT': 'Italy', + 'JM': 'Jamaica', + 'JP': 'Japan', + 'JE': 'Jersey', + 'JO': 'Jordan', + 'KZ': 'Kazakhstan', + 'KE': 'Kenya', + 'KI': 'Kiribati', + 'KP': 'Korea, Democratic People\'s Republic of', + 'KR': 'Korea, Republic of', + 'KW': 'Kuwait', + 'KG': 'Kyrgyzstan', + 'LA': 'Lao People\'s Democratic Republic', + 'LV': 'Latvia', + 'LB': 'Lebanon', + 'LS': 'Lesotho', + 'LR': 'Liberia', + 'LY': 'Libya', + 'LI': 'Liechtenstein', + 'LT': 'Lithuania', + 'LU': 'Luxembourg', + 'MO': 'Macao', + 'MK': 'Macedonia, the Former Yugoslav Republic of', + 'MG': 'Madagascar', + 'MW': 'Malawi', + 'MY': 'Malaysia', + 'MV': 'Maldives', + 'ML': 'Mali', + 'MT': 'Malta', + 'MH': 'Marshall Islands', + 'MQ': 'Martinique', + 'MR': 'Mauritania', + 'MU': 'Mauritius', + 'YT': 'Mayotte', + 'MX': 'Mexico', + 'FM': 'Micronesia, Federated States of', + 'MD': 'Moldova, Republic of', + 'MC': 'Monaco', + 'MN': 'Mongolia', + 'ME': 'Montenegro', + 'MS': 'Montserrat', + 'MA': 'Morocco', + 'MZ': 'Mozambique', + 'MM': 'Myanmar', + 'NA': 'Namibia', + 'NR': 'Nauru', + 'NP': 'Nepal', + 'NL': 'Netherlands', + 'NC': 'New Caledonia', + 'NZ': 'New Zealand', + 'NI': 'Nicaragua', + 'NE': 'Niger', + 'NG': 'Nigeria', + 'NU': 'Niue', + 'NF': 'Norfolk Island', + 'MP': 'Northern Mariana Islands', + 'NO': 'Norway', + 'OM': 'Oman', + 'PK': 'Pakistan', + 'PW': 'Palau', + 'PS': 'Palestine, State of', + 'PA': 'Panama', + 'PG': 'Papua New Guinea', + 'PY': 'Paraguay', + 'PE': 'Peru', + 'PH': 'Philippines', + 'PN': 'Pitcairn', + 'PL': 'Poland', + 'PT': 'Portugal', + 'PR': 'Puerto Rico', + 'QA': 'Qatar', + 'RE': 'Réunion', + 'RO': 'Romania', + 'RU': 'Russian Federation', + 'RW': 'Rwanda', + 'BL': 'Saint Barthélemy', + 'SH': 'Saint Helena, Ascension and Tristan da Cunha', + 'KN': 'Saint Kitts and Nevis', + 'LC': 'Saint Lucia', + 'MF': 'Saint Martin (French part)', + 'PM': 'Saint Pierre and Miquelon', + 'VC': 'Saint Vincent and the Grenadines', + 'WS': 'Samoa', + 'SM': 'San Marino', + 'ST': 'Sao Tome and Principe', + 'SA': 'Saudi Arabia', + 'SN': 'Senegal', + 'RS': 'Serbia', + 'SC': 'Seychelles', + 'SL': 'Sierra Leone', + 'SG': 'Singapore', + 'SX': 'Sint Maarten (Dutch part)', + 'SK': 'Slovakia', + 'SI': 'Slovenia', + 'SB': 'Solomon Islands', + 'SO': 'Somalia', + 'ZA': 'South Africa', + 'GS': 'South Georgia and the South Sandwich Islands', + 'SS': 'South Sudan', + 'ES': 'Spain', + 'LK': 'Sri Lanka', + 'SD': 'Sudan', + 'SR': 'Suriname', + 'SJ': 'Svalbard and Jan Mayen', + 'SZ': 'Swaziland', + 'SE': 'Sweden', + 'CH': 'Switzerland', + 'SY': 'Syrian Arab Republic', + 'TW': 'Taiwan, Province of China', + 'TJ': 'Tajikistan', + 'TZ': 'Tanzania, United Republic of', + 'TH': 'Thailand', + 'TL': 'Timor-Leste', + 'TG': 'Togo', + 'TK': 'Tokelau', + 'TO': 'Tonga', + 'TT': 'Trinidad and Tobago', + 'TN': 'Tunisia', + 'TR': 'Turkey', + 'TM': 'Turkmenistan', + 'TC': 'Turks and Caicos Islands', + 'TV': 'Tuvalu', + 'UG': 'Uganda', + 'UA': 'Ukraine', + 'AE': 'United Arab Emirates', + 'GB': 'United Kingdom', + 'US': 'United States', + 'UM': 'United States Minor Outlying Islands', + 'UY': 'Uruguay', + 'UZ': 'Uzbekistan', + 'VU': 'Vanuatu', + 'VE': 'Venezuela, Bolivarian Republic of', + 'VN': 'Viet Nam', + 'VG': 'Virgin Islands, British', + 'VI': 'Virgin Islands, U.S.', + 'WF': 'Wallis and Futuna', + 'EH': 'Western Sahara', + 'YE': 'Yemen', + 'ZM': 'Zambia', + 'ZW': 'Zimbabwe', + # Not ISO 3166 codes, but used for IP blocks + 'AP': 'Asia/Pacific 
Region', + 'EU': 'Europe', + } + + @classmethod + def short2full(cls, code): + """Convert an ISO 3166-2 country code to the corresponding full name""" + return cls._country_map.get(code.upper()) + + +class GeoUtils: + # Major IPv4 address blocks per country + _country_ip_map = { + 'AD': '46.172.224.0/19', + 'AE': '94.200.0.0/13', + 'AF': '149.54.0.0/17', + 'AG': '209.59.64.0/18', + 'AI': '204.14.248.0/21', + 'AL': '46.99.0.0/16', + 'AM': '46.70.0.0/15', + 'AO': '105.168.0.0/13', + 'AP': '182.50.184.0/21', + 'AQ': '23.154.160.0/24', + 'AR': '181.0.0.0/12', + 'AS': '202.70.112.0/20', + 'AT': '77.116.0.0/14', + 'AU': '1.128.0.0/11', + 'AW': '181.41.0.0/18', + 'AX': '185.217.4.0/22', + 'AZ': '5.197.0.0/16', + 'BA': '31.176.128.0/17', + 'BB': '65.48.128.0/17', + 'BD': '114.130.0.0/16', + 'BE': '57.0.0.0/8', + 'BF': '102.178.0.0/15', + 'BG': '95.42.0.0/15', + 'BH': '37.131.0.0/17', + 'BI': '154.117.192.0/18', + 'BJ': '137.255.0.0/16', + 'BL': '185.212.72.0/23', + 'BM': '196.12.64.0/18', + 'BN': '156.31.0.0/16', + 'BO': '161.56.0.0/16', + 'BQ': '161.0.80.0/20', + 'BR': '191.128.0.0/12', + 'BS': '24.51.64.0/18', + 'BT': '119.2.96.0/19', + 'BW': '168.167.0.0/16', + 'BY': '178.120.0.0/13', + 'BZ': '179.42.192.0/18', + 'CA': '99.224.0.0/11', + 'CD': '41.243.0.0/16', + 'CF': '197.242.176.0/21', + 'CG': '160.113.0.0/16', + 'CH': '85.0.0.0/13', + 'CI': '102.136.0.0/14', + 'CK': '202.65.32.0/19', + 'CL': '152.172.0.0/14', + 'CM': '102.244.0.0/14', + 'CN': '36.128.0.0/10', + 'CO': '181.240.0.0/12', + 'CR': '201.192.0.0/12', + 'CU': '152.206.0.0/15', + 'CV': '165.90.96.0/19', + 'CW': '190.88.128.0/17', + 'CY': '31.153.0.0/16', + 'CZ': '88.100.0.0/14', + 'DE': '53.0.0.0/8', + 'DJ': '197.241.0.0/17', + 'DK': '87.48.0.0/12', + 'DM': '192.243.48.0/20', + 'DO': '152.166.0.0/15', + 'DZ': '41.96.0.0/12', + 'EC': '186.68.0.0/15', + 'EE': '90.190.0.0/15', + 'EG': '156.160.0.0/11', + 'ER': '196.200.96.0/20', + 'ES': '88.0.0.0/11', + 'ET': '196.188.0.0/14', + 'EU': '2.16.0.0/13', + 'FI': '91.152.0.0/13', + 'FJ': '144.120.0.0/16', + 'FK': '80.73.208.0/21', + 'FM': '119.252.112.0/20', + 'FO': '88.85.32.0/19', + 'FR': '90.0.0.0/9', + 'GA': '41.158.0.0/15', + 'GB': '25.0.0.0/8', + 'GD': '74.122.88.0/21', + 'GE': '31.146.0.0/16', + 'GF': '161.22.64.0/18', + 'GG': '62.68.160.0/19', + 'GH': '154.160.0.0/12', + 'GI': '95.164.0.0/16', + 'GL': '88.83.0.0/19', + 'GM': '160.182.0.0/15', + 'GN': '197.149.192.0/18', + 'GP': '104.250.0.0/19', + 'GQ': '105.235.224.0/20', + 'GR': '94.64.0.0/13', + 'GT': '168.234.0.0/16', + 'GU': '168.123.0.0/16', + 'GW': '197.214.80.0/20', + 'GY': '181.41.64.0/18', + 'HK': '113.252.0.0/14', + 'HN': '181.210.0.0/16', + 'HR': '93.136.0.0/13', + 'HT': '148.102.128.0/17', + 'HU': '84.0.0.0/14', + 'ID': '39.192.0.0/10', + 'IE': '87.32.0.0/12', + 'IL': '79.176.0.0/13', + 'IM': '5.62.80.0/20', + 'IN': '117.192.0.0/10', + 'IO': '203.83.48.0/21', + 'IQ': '37.236.0.0/14', + 'IR': '2.176.0.0/12', + 'IS': '82.221.0.0/16', + 'IT': '79.0.0.0/10', + 'JE': '87.244.64.0/18', + 'JM': '72.27.0.0/17', + 'JO': '176.29.0.0/16', + 'JP': '133.0.0.0/8', + 'KE': '105.48.0.0/12', + 'KG': '158.181.128.0/17', + 'KH': '36.37.128.0/17', + 'KI': '103.25.140.0/22', + 'KM': '197.255.224.0/20', + 'KN': '198.167.192.0/19', + 'KP': '175.45.176.0/22', + 'KR': '175.192.0.0/10', + 'KW': '37.36.0.0/14', + 'KY': '64.96.0.0/15', + 'KZ': '2.72.0.0/13', + 'LA': '115.84.64.0/18', + 'LB': '178.135.0.0/16', + 'LC': '24.92.144.0/20', + 'LI': '82.117.0.0/19', + 'LK': '112.134.0.0/15', + 'LR': '102.183.0.0/16', + 'LS': '129.232.0.0/17', + 'LT': 
'78.56.0.0/13', + 'LU': '188.42.0.0/16', + 'LV': '46.109.0.0/16', + 'LY': '41.252.0.0/14', + 'MA': '105.128.0.0/11', + 'MC': '88.209.64.0/18', + 'MD': '37.246.0.0/16', + 'ME': '178.175.0.0/17', + 'MF': '74.112.232.0/21', + 'MG': '154.126.0.0/17', + 'MH': '117.103.88.0/21', + 'MK': '77.28.0.0/15', + 'ML': '154.118.128.0/18', + 'MM': '37.111.0.0/17', + 'MN': '49.0.128.0/17', + 'MO': '60.246.0.0/16', + 'MP': '202.88.64.0/20', + 'MQ': '109.203.224.0/19', + 'MR': '41.188.64.0/18', + 'MS': '208.90.112.0/22', + 'MT': '46.11.0.0/16', + 'MU': '105.16.0.0/12', + 'MV': '27.114.128.0/18', + 'MW': '102.70.0.0/15', + 'MX': '187.192.0.0/11', + 'MY': '175.136.0.0/13', + 'MZ': '197.218.0.0/15', + 'NA': '41.182.0.0/16', + 'NC': '101.101.0.0/18', + 'NE': '197.214.0.0/18', + 'NF': '203.17.240.0/22', + 'NG': '105.112.0.0/12', + 'NI': '186.76.0.0/15', + 'NL': '145.96.0.0/11', + 'NO': '84.208.0.0/13', + 'NP': '36.252.0.0/15', + 'NR': '203.98.224.0/19', + 'NU': '49.156.48.0/22', + 'NZ': '49.224.0.0/14', + 'OM': '5.36.0.0/15', + 'PA': '186.72.0.0/15', + 'PE': '186.160.0.0/14', + 'PF': '123.50.64.0/18', + 'PG': '124.240.192.0/19', + 'PH': '49.144.0.0/13', + 'PK': '39.32.0.0/11', + 'PL': '83.0.0.0/11', + 'PM': '70.36.0.0/20', + 'PR': '66.50.0.0/16', + 'PS': '188.161.0.0/16', + 'PT': '85.240.0.0/13', + 'PW': '202.124.224.0/20', + 'PY': '181.120.0.0/14', + 'QA': '37.210.0.0/15', + 'RE': '102.35.0.0/16', + 'RO': '79.112.0.0/13', + 'RS': '93.86.0.0/15', + 'RU': '5.136.0.0/13', + 'RW': '41.186.0.0/16', + 'SA': '188.48.0.0/13', + 'SB': '202.1.160.0/19', + 'SC': '154.192.0.0/11', + 'SD': '102.120.0.0/13', + 'SE': '78.64.0.0/12', + 'SG': '8.128.0.0/10', + 'SI': '188.196.0.0/14', + 'SK': '78.98.0.0/15', + 'SL': '102.143.0.0/17', + 'SM': '89.186.32.0/19', + 'SN': '41.82.0.0/15', + 'SO': '154.115.192.0/18', + 'SR': '186.179.128.0/17', + 'SS': '105.235.208.0/21', + 'ST': '197.159.160.0/19', + 'SV': '168.243.0.0/16', + 'SX': '190.102.0.0/20', + 'SY': '5.0.0.0/16', + 'SZ': '41.84.224.0/19', + 'TC': '65.255.48.0/20', + 'TD': '154.68.128.0/19', + 'TG': '196.168.0.0/14', + 'TH': '171.96.0.0/13', + 'TJ': '85.9.128.0/18', + 'TK': '27.96.24.0/21', + 'TL': '180.189.160.0/20', + 'TM': '95.85.96.0/19', + 'TN': '197.0.0.0/11', + 'TO': '175.176.144.0/21', + 'TR': '78.160.0.0/11', + 'TT': '186.44.0.0/15', + 'TV': '202.2.96.0/19', + 'TW': '120.96.0.0/11', + 'TZ': '156.156.0.0/14', + 'UA': '37.52.0.0/14', + 'UG': '102.80.0.0/13', + 'US': '6.0.0.0/8', + 'UY': '167.56.0.0/13', + 'UZ': '84.54.64.0/18', + 'VA': '212.77.0.0/19', + 'VC': '207.191.240.0/21', + 'VE': '186.88.0.0/13', + 'VG': '66.81.192.0/20', + 'VI': '146.226.0.0/16', + 'VN': '14.160.0.0/11', + 'VU': '202.80.32.0/20', + 'WF': '117.20.32.0/21', + 'WS': '202.4.32.0/19', + 'YE': '134.35.0.0/16', + 'YT': '41.242.116.0/22', + 'ZA': '41.0.0.0/11', + 'ZM': '102.144.0.0/13', + 'ZW': '102.177.192.0/18', + } + + @classmethod + def random_ipv4(cls, code_or_block): + if len(code_or_block) == 2: + block = cls._country_ip_map.get(code_or_block.upper()) + if not block: + return None + else: + block = code_or_block + addr, preflen = block.split('/') + addr_min = struct.unpack('!L', socket.inet_aton(addr))[0] + addr_max = addr_min | (0xffffffff >> int(preflen)) + return str(socket.inet_ntoa( + struct.pack('!L', random.randint(addr_min, addr_max)))) + + +# Both long_to_bytes and bytes_to_long are adapted from PyCrypto, which is +# released into Public Domain +# https://github.com/dlitz/pycrypto/blob/master/lib/Crypto/Util/number.py#L387 + +def long_to_bytes(n, blocksize=0): + """long_to_bytes(n:long, 
blocksize:int) : string + Convert a long integer to a byte string. + + If optional blocksize is given and greater than zero, pad the front of the + byte string with binary zeros so that the length is a multiple of + blocksize. + """ + # after much testing, this algorithm was deemed to be the fastest + s = b'' + n = int(n) + while n > 0: + s = struct.pack('>I', n & 0xffffffff) + s + n = n >> 32 + # strip off leading zeros + for i in range(len(s)): + if s[i] != b'\000'[0]: + break + else: + # only happens when n == 0 + s = b'\000' + i = 0 + s = s[i:] + # add back some pad bytes. this could be done more efficiently w.r.t. the + # de-padding being done above, but sigh... + if blocksize > 0 and len(s) % blocksize: + s = (blocksize - len(s) % blocksize) * b'\000' + s + return s + + +def bytes_to_long(s): + """bytes_to_long(string) : long + Convert a byte string to a long integer. + + This is (essentially) the inverse of long_to_bytes(). + """ + acc = 0 + length = len(s) + if length % 4: + extra = (4 - length % 4) + s = b'\000' * extra + s + length = length + extra + for i in range(0, length, 4): + acc = (acc << 32) + struct.unpack('>I', s[i:i + 4])[0] + return acc + + +def ohdave_rsa_encrypt(data, exponent, modulus): + ''' + Implement OHDave's RSA algorithm. See http://www.ohdave.com/rsa/ + + Input: + data: data to encrypt, bytes-like object + exponent, modulus: parameter e and N of RSA algorithm, both integer + Output: hex string of encrypted data + + Limitation: supports one block encryption only + ''' + + payload = int(binascii.hexlify(data[::-1]), 16) + encrypted = pow(payload, exponent, modulus) + return '%x' % encrypted + + +def pkcs1pad(data, length): + """ + Padding input data with PKCS#1 scheme + + @param {int[]} data input data + @param {int} length target length + @returns {int[]} padded data + """ + if len(data) > length - 11: + raise ValueError('Input data too long for PKCS#1 padding') + + pseudo_random = [random.randint(0, 254) for _ in range(length - len(data) - 3)] + return [0, 2] + pseudo_random + [0] + data + + +def _base_n_table(n, table): + if not table and not n: + raise ValueError('Either table or n must be specified') + table = (table or '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ')[:n] + + if n and n != len(table): + raise ValueError(f'base {n} exceeds table length {len(table)}') + return table + + +def encode_base_n(num, n=None, table=None): + """Convert given int to a base-n string""" + table = _base_n_table(n, table) + if not num: + return table[0] + + result, base = '', len(table) + while num: + result = table[num % base] + result + num = num // base + return result + + +def decode_base_n(string, n=None, table=None): + """Convert given base-n string to int""" + table = {char: index for index, char in enumerate(_base_n_table(n, table))} + result, base = 0, len(table) + for char in string: + result = result * base + table[char] + return result + + +def decode_packed_codes(code): + mobj = re.search(PACKED_CODES_RE, code) + obfuscated_code, base, count, symbols = mobj.groups() + base = int(base) + count = int(count) + symbols = symbols.split('|') + symbol_table = {} + + while count: + count -= 1 + base_n_count = encode_base_n(count, base) + symbol_table[base_n_count] = symbols[count] or base_n_count + + return re.sub( + r'\b(\w+)\b', lambda mobj: symbol_table[mobj.group(0)], + obfuscated_code) + + +def caesar(s, alphabet, shift): + if shift == 0: + return s + l = len(alphabet) + return ''.join( + alphabet[(alphabet.index(c) + shift) % l] if c in 
alphabet else c + for c in s) + + +def rot47(s): + return caesar(s, r'''!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~''', 47) + + +def parse_m3u8_attributes(attrib): + info = {} + for (key, val) in re.findall(r'(?P<key>[A-Z0-9-]+)=(?P<val>"[^"]+"|[^",]+)(?:,|$)', attrib): + if val.startswith('"'): + val = val[1:-1] + info[key] = val + return info + + +def urshift(val, n): + return val >> n if val >= 0 else (val + 0x100000000) >> n + + +def write_xattr(path, key, value): + # Windows: Write xattrs to NTFS Alternate Data Streams: + # http://en.wikipedia.org/wiki/NTFS#Alternate_data_streams_.28ADS.29 + if compat_os_name == 'nt': + assert ':' not in key + assert os.path.exists(path) + + try: + with open(f'{path}:{key}', 'wb') as f: + f.write(value) + except OSError as e: + raise XAttrMetadataError(e.errno, e.strerror) + return + + # UNIX Method 1. Use os.setxattr/xattrs/pyxattrs modules + + setxattr = None + if callable(getattr(os, 'setxattr', None)): + setxattr = os.setxattr + elif getattr(xattr, '_yt_dlp__identifier', None) == 'pyxattr': + # Unicode arguments are not supported in pyxattr until version 0.5.0 + # See https://github.com/ytdl-org/youtube-dl/issues/5498 + if version_tuple(xattr.__version__) >= (0, 5, 0): + setxattr = xattr.set + elif xattr: + setxattr = xattr.setxattr + + if setxattr: + try: + setxattr(path, key, value) + except OSError as e: + raise XAttrMetadataError(e.errno, e.strerror) + return + + # UNIX Method 2. Use setfattr/xattr executables + exe = ('setfattr' if check_executable('setfattr', ['--version']) + else 'xattr' if check_executable('xattr', ['-h']) else None) + if not exe: + raise XAttrUnavailableError( + 'Couldn\'t find a tool to set the xattrs. Install either the python "xattr" or "pyxattr" modules or the ' + + ('"xattr" binary' if sys.platform != 'linux' else 'GNU "attr" package (which contains the "setfattr" tool)')) + + value = value.decode() + try: + _, stderr, returncode = Popen.run( + [exe, '-w', key, value, path] if exe == 'xattr' else [exe, '-n', key, '-v', value, path], + text=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) + except OSError as e: + raise XAttrMetadataError(e.errno, e.strerror) + if returncode: + raise XAttrMetadataError(returncode, stderr) + + +def random_birthday(year_field, month_field, day_field): + start_date = datetime.date(1950, 1, 1) + end_date = datetime.date(1995, 12, 31) + offset = random.randint(0, (end_date - start_date).days) + random_date = start_date + datetime.timedelta(offset) + return { + year_field: str(random_date.year), + month_field: str(random_date.month), + day_field: str(random_date.day), + } + + +def find_available_port(interface=''): + try: + with socket.socket() as sock: + sock.bind((interface, 0)) + return sock.getsockname()[1] + except OSError: + return None + + +# Templates for internet shortcut files, which are plain text files. 
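+# Usage sketch (illustrative, not part of the original patch): rendering one of
+# the templates defined below into an actual shortcut file, e.g.
+#   with open('example.url', 'w') as f:
+#       f.write(DOT_URL_LINK_TEMPLATE % {'url': 'https://example.com'})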
+DOT_URL_LINK_TEMPLATE = '''\ +[InternetShortcut] +URL=%(url)s +''' + +DOT_WEBLOC_LINK_TEMPLATE = '''\ +<?xml version="1.0" encoding="UTF-8"?> +<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> +<plist version="1.0"> +<dict> +\t<key>URL</key> +\t<string>%(url)s</string> +</dict> +</plist> +''' + +DOT_DESKTOP_LINK_TEMPLATE = '''\ +[Desktop Entry] +Encoding=UTF-8 +Name=%(filename)s +Type=Link +URL=%(url)s +Icon=text-html +''' + +LINK_TEMPLATES = { + 'url': DOT_URL_LINK_TEMPLATE, + 'desktop': DOT_DESKTOP_LINK_TEMPLATE, + 'webloc': DOT_WEBLOC_LINK_TEMPLATE, +} + + +def iri_to_uri(iri): + """ + Converts an IRI (Internationalized Resource Identifier, allowing Unicode characters) to a URI (Uniform Resource Identifier, ASCII-only). + + The function doesn't add an additional layer of escaping; e.g., it doesn't escape `%3C` as `%253C`. Instead, it percent-escapes characters with an underlying UTF-8 encoding *besides* those already escaped, leaving the URI intact. + """ + + iri_parts = urllib.parse.urlparse(iri) + + if '[' in iri_parts.netloc: + raise ValueError('IPv6 URIs are not, yet, supported.') + # Querying `.netloc`, when there's only one bracket, also raises a ValueError. + + # The `safe` argument values, that the following code uses, contain the characters that should not be percent-encoded. Everything else but letters, digits and '_.-' will be percent-encoded with an underlying UTF-8 encoding. Everything already percent-encoded will be left as is. + + net_location = '' + if iri_parts.username: + net_location += urllib.parse.quote(iri_parts.username, safe=r"!$%&'()*+,~") + if iri_parts.password is not None: + net_location += ':' + urllib.parse.quote(iri_parts.password, safe=r"!$%&'()*+,~") + net_location += '@' + + net_location += iri_parts.hostname.encode('idna').decode() # Punycode for Unicode hostnames. + # The 'idna' encoding produces ASCII text. + if iri_parts.port is not None and iri_parts.port != 80: + net_location += ':' + str(iri_parts.port) + + return urllib.parse.urlunparse( + (iri_parts.scheme, + net_location, + + urllib.parse.quote_plus(iri_parts.path, safe=r"!$%&'()*+,/:;=@|~"), + + # Unsure about the `safe` argument, since this is a legacy way of handling parameters. + urllib.parse.quote_plus(iri_parts.params, safe=r"!$%&'()*+,/:;=@|~"), + + # Not totally sure about the `safe` argument, since the source does not explicitly mention the query URI component. + urllib.parse.quote_plus(iri_parts.query, safe=r"!$%&'()*+,/:;=?@{|}~"), + + urllib.parse.quote_plus(iri_parts.fragment, safe=r"!#$%&'()*+,/:;=?@{|}~"))) + + # Source for `safe` arguments: https://url.spec.whatwg.org/#percent-encoded-bytes. + + +def to_high_limit_path(path): + if sys.platform in ['win32', 'cygwin']: + # Work around MAX_PATH limitation on Windows. The maximum allowed length for the individual path segments may still be quite limited. 
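+        # Sketch of the effect (illustrative path, not from the original patch):
+        #   to_high_limit_path(r'C:\very\deep\path') == '\\\\?\\C:\\very\\deep\\path'
+        # i.e. the extended-length form \\?\C:\very\deep\path.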
+ return '\\\\?\\' + os.path.abspath(path) + + return path + + +def format_field(obj, field=None, template='%s', ignore=NO_DEFAULT, default='', func=IDENTITY): + val = traversal.traverse_obj(obj, *variadic(field)) + if not val if ignore is NO_DEFAULT else val in variadic(ignore): + return default + return template % func(val) + + +def clean_podcast_url(url): + url = re.sub(r'''(?x) + (?: + (?: + chtbl\.com/track| + media\.blubrry\.com| # https://create.blubrry.com/resources/podcast-media-download-statistics/getting-started/ + play\.podtrac\.com| + chrt\.fm/track| + mgln\.ai/e + )(?:/[^/.]+)?| + (?:dts|www)\.podtrac\.com/(?:pts/)?redirect\.[0-9a-z]{3,4}| # http://analytics.podtrac.com/how-to-measure + flex\.acast\.com| + pd(?: + cn\.co| # https://podcorn.com/analytics-prefix/ + st\.fm # https://podsights.com/docs/ + )/e| + [0-9]\.gum\.fm| + pscrb\.fm/rss/p + )/''', '', url) + return re.sub(r'^\w+://(\w+://)', r'\1', url) + + +_HEX_TABLE = '0123456789abcdef' + + +def random_uuidv4(): + return re.sub(r'[xy]', lambda x: _HEX_TABLE[random.randint(0, 15)], 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx') + + +def make_dir(path, to_screen=None): + try: + dn = os.path.dirname(path) + if dn: + os.makedirs(dn, exist_ok=True) + return True + except OSError as err: + if callable(to_screen) is not None: + to_screen(f'unable to create directory {err}') + return False + + +def get_executable_path(): + from ..update import _get_variant_and_executable_path + + return os.path.dirname(os.path.abspath(_get_variant_and_executable_path()[1])) + + +def get_user_config_dirs(package_name): + # .config (e.g. ~/.config/package_name) + xdg_config_home = os.getenv('XDG_CONFIG_HOME') or compat_expanduser('~/.config') + yield os.path.join(xdg_config_home, package_name) + + # appdata (%APPDATA%/package_name) + appdata_dir = os.getenv('appdata') + if appdata_dir: + yield os.path.join(appdata_dir, package_name) + + # home (~/.package_name) + yield os.path.join(compat_expanduser('~'), f'.{package_name}') + + +def get_system_config_dirs(package_name): + # /etc/package_name + yield os.path.join('/etc', package_name) + + +def time_seconds(**kwargs): + """ + Returns TZ-aware time in seconds since the epoch (1970-01-01T00:00:00Z) + """ + return time.time() + datetime.timedelta(**kwargs).total_seconds() + + +# create a JSON Web Signature (jws) with HS256 algorithm +# the resulting format is in JWS Compact Serialization +# implemented following JWT https://www.rfc-editor.org/rfc/rfc7519.html +# implemented following JWS https://www.rfc-editor.org/rfc/rfc7515.html +def jwt_encode_hs256(payload_data, key, headers={}): + header_data = { + 'alg': 'HS256', + 'typ': 'JWT', + } + if headers: + header_data.update(headers) + header_b64 = base64.b64encode(json.dumps(header_data).encode()) + payload_b64 = base64.b64encode(json.dumps(payload_data).encode()) + h = hmac.new(key.encode(), header_b64 + b'.' + payload_b64, hashlib.sha256) + signature_b64 = base64.b64encode(h.digest()) + token = header_b64 + b'.' + payload_b64 + b'.' 
+ signature_b64 + return token + + +# can be extended in future to verify the signature and parse header and return the algorithm used if it's not HS256 +def jwt_decode_hs256(jwt): + header_b64, payload_b64, signature_b64 = jwt.split('.') + # add trailing ='s that may have been stripped, superfluous ='s are ignored + payload_data = json.loads(base64.urlsafe_b64decode(f'{payload_b64}===')) + return payload_data + + +WINDOWS_VT_MODE = False if compat_os_name == 'nt' else None + + +@functools.cache +def supports_terminal_sequences(stream): + if compat_os_name == 'nt': + if not WINDOWS_VT_MODE: + return False + elif not os.getenv('TERM'): + return False + try: + return stream.isatty() + except BaseException: + return False + + +def windows_enable_vt_mode(): + """Ref: https://bugs.python.org/issue30075 """ + if get_windows_version() < (10, 0, 10586): + return + + import ctypes + import ctypes.wintypes + import msvcrt + + ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004 + + dll = ctypes.WinDLL('kernel32', use_last_error=False) + handle = os.open('CONOUT$', os.O_RDWR) + try: + h_out = ctypes.wintypes.HANDLE(msvcrt.get_osfhandle(handle)) + dw_original_mode = ctypes.wintypes.DWORD() + success = dll.GetConsoleMode(h_out, ctypes.byref(dw_original_mode)) + if not success: + raise Exception('GetConsoleMode failed') + + success = dll.SetConsoleMode(h_out, ctypes.wintypes.DWORD( + dw_original_mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING)) + if not success: + raise Exception('SetConsoleMode failed') + finally: + os.close(handle) + + global WINDOWS_VT_MODE + WINDOWS_VT_MODE = True + supports_terminal_sequences.cache_clear() + + +_terminal_sequences_re = re.compile('\033\\[[^m]+m') + + +def remove_terminal_sequences(string): + return _terminal_sequences_re.sub('', string) + + +def number_of_digits(number): + return len('%d' % number) + + +def join_nonempty(*values, delim='-', from_dict=None): + if from_dict is not None: + values = (traversal.traverse_obj(from_dict, variadic(v)) for v in values) + return delim.join(map(str, filter(None, values))) + + +def scale_thumbnails_to_max_format_width(formats, thumbnails, url_width_re): + """ + Find the largest format dimensions in terms of video width and, for each thumbnail: + * Modify the URL: Match the width with the provided regex and replace with the former width + * Update dimensions + + This function is useful with video services that scale the provided thumbnails on demand + """ + _keys = ('width', 'height') + max_dimensions = max( + (tuple(format.get(k) or 0 for k in _keys) for format in formats), + default=(0, 0)) + if not max_dimensions[0]: + return thumbnails + return [ + merge_dicts( + {'url': re.sub(url_width_re, str(max_dimensions[0]), thumbnail['url'])}, + dict(zip(_keys, max_dimensions)), thumbnail) + for thumbnail in thumbnails + ] + + +def parse_http_range(range): + """ Parse value of "Range" or "Content-Range" HTTP header into tuple. 
""" + if not range: + return None, None, None + crg = re.search(r'bytes[ =](\d+)-(\d+)?(?:/(\d+))?', range) + if not crg: + return None, None, None + return int(crg.group(1)), int_or_none(crg.group(2)), int_or_none(crg.group(3)) + + +def read_stdin(what): + eof = 'Ctrl+Z' if compat_os_name == 'nt' else 'Ctrl+D' + write_string(f'Reading {what} from STDIN - EOF ({eof}) to end:\n') + return sys.stdin + + +def determine_file_encoding(data): + """ + Detect the text encoding used + @returns (encoding, bytes to skip) + """ + + # BOM marks are given priority over declarations + for bom, enc in BOMS: + if data.startswith(bom): + return enc, len(bom) + + # Strip off all null bytes to match even when UTF-16 or UTF-32 is used. + # We ignore the endianness to get a good enough match + data = data.replace(b'\0', b'') + mobj = re.match(rb'(?m)^#\s*coding\s*:\s*(\S+)\s*$', data) + return mobj.group(1).decode() if mobj else None, 0 + + +class Config: + own_args = None + parsed_args = None + filename = None + __initialized = False + + def __init__(self, parser, label=None): + self.parser, self.label = parser, label + self._loaded_paths, self.configs = set(), [] + + def init(self, args=None, filename=None): + assert not self.__initialized + self.own_args, self.filename = args, filename + return self.load_configs() + + def load_configs(self): + directory = '' + if self.filename: + location = os.path.realpath(self.filename) + directory = os.path.dirname(location) + if location in self._loaded_paths: + return False + self._loaded_paths.add(location) + + self.__initialized = True + opts, _ = self.parser.parse_known_args(self.own_args) + self.parsed_args = self.own_args + for location in opts.config_locations or []: + if location == '-': + if location in self._loaded_paths: + continue + self._loaded_paths.add(location) + self.append_config(shlex.split(read_stdin('options'), comments=True), label='stdin') + continue + location = os.path.join(directory, expand_path(location)) + if os.path.isdir(location): + location = os.path.join(location, 'yt-dlp.conf') + if not os.path.exists(location): + self.parser.error(f'config location {location} does not exist') + self.append_config(self.read_file(location), location) + return True + + def __str__(self): + label = join_nonempty( + self.label, 'config', f'"{self.filename}"' if self.filename else '', + delim=' ') + return join_nonempty( + self.own_args is not None and f'{label[0].upper()}{label[1:]}: {self.hide_login_info(self.own_args)}', + *(f'\n{c}'.replace('\n', '\n| ')[1:] for c in self.configs), + delim='\n') + + @staticmethod + def read_file(filename, default=[]): + try: + optionf = open(filename, 'rb') + except OSError: + return default # silently skip if file is not present + try: + enc, skip = determine_file_encoding(optionf.read(512)) + optionf.seek(skip, io.SEEK_SET) + except OSError: + enc = None # silently skip read errors + try: + # FIXME: https://github.com/ytdl-org/youtube-dl/commit/dfe5fa49aed02cf36ba9f743b11b0903554b5e56 + contents = optionf.read().decode(enc or preferredencoding()) + res = shlex.split(contents, comments=True) + except Exception as err: + raise ValueError(f'Unable to parse "{filename}": {err}') + finally: + optionf.close() + return res + + @staticmethod + def hide_login_info(opts): + PRIVATE_OPTS = {'-p', '--password', '-u', '--username', '--video-password', '--ap-password', '--ap-username'} + eqre = re.compile('^(?P<key>' + ('|'.join(re.escape(po) for po in PRIVATE_OPTS)) + ')=.+$') + + def _scrub_eq(o): + m = eqre.match(o) + if m: + 
+                return m.group('key') + '=PRIVATE'
+            else:
+                return o
+
+        opts = list(map(_scrub_eq, opts))
+        for idx, opt in enumerate(opts):
+            if opt in PRIVATE_OPTS and idx + 1 < len(opts):
+                opts[idx + 1] = 'PRIVATE'
+        return opts
+
+    def append_config(self, *args, label=None):
+        config = type(self)(self.parser, label)
+        config._loaded_paths = self._loaded_paths
+        if config.init(*args):
+            self.configs.append(config)
+
+    @property
+    def all_args(self):
+        for config in reversed(self.configs):
+            yield from config.all_args
+        yield from self.parsed_args or []
+
+    def parse_known_args(self, **kwargs):
+        return self.parser.parse_known_args(self.all_args, **kwargs)
+
+    def parse_args(self):
+        return self.parser.parse_args(self.all_args)
+
+
+class WebSocketsWrapper:
+    """Wraps websockets module to use in non-async scopes"""
+    pool = None
+
+    def __init__(self, url, headers=None, connect=True):
+        self.loop = asyncio.new_event_loop()
+        # XXX: "loop" is deprecated
+        self.conn = websockets.connect(
+            url, extra_headers=headers, ping_interval=None,
+            close_timeout=float('inf'), loop=self.loop, ping_timeout=float('inf'))
+        if connect:
+            self.__enter__()
+        atexit.register(self.__exit__, None, None, None)
+
+    def __enter__(self):
+        if not self.pool:
+            self.pool = self.run_with_loop(self.conn.__aenter__(), self.loop)
+        return self
+
+    def send(self, *args):
+        self.run_with_loop(self.pool.send(*args), self.loop)
+
+    def recv(self, *args):
+        return self.run_with_loop(self.pool.recv(*args), self.loop)
+
+    def __exit__(self, type, value, traceback):
+        try:
+            return self.run_with_loop(self.conn.__aexit__(type, value, traceback), self.loop)
+        finally:
+            self.loop.close()
+            self._cancel_all_tasks(self.loop)
+
+    # taken from https://github.com/python/cpython/blob/3.9/Lib/asyncio/runners.py with modifications
+    # for contributors: if any new library that uses asyncio needs to run in non-async code, move these functions out of this class
+    @staticmethod
+    def run_with_loop(main, loop):
+        if not asyncio.iscoroutine(main):
+            raise ValueError(f'a coroutine was expected, got {main!r}')
+
+        try:
+            return loop.run_until_complete(main)
+        finally:
+            loop.run_until_complete(loop.shutdown_asyncgens())
+            if hasattr(loop, 'shutdown_default_executor'):
+                loop.run_until_complete(loop.shutdown_default_executor())
+
+    @staticmethod
+    def _cancel_all_tasks(loop):
+        to_cancel = asyncio.all_tasks(loop)
+
+        if not to_cancel:
+            return
+
+        for task in to_cancel:
+            task.cancel()
+
+        # XXX: "loop" is removed in python 3.10+
+        loop.run_until_complete(
+            asyncio.gather(*to_cancel, loop=loop, return_exceptions=True))
+
+        for task in to_cancel:
+            if task.cancelled():
+                continue
+            if task.exception() is not None:
+                loop.call_exception_handler({
+                    'message': 'unhandled exception during asyncio.run() shutdown',
+                    'exception': task.exception(),
+                    'task': task,
+                })
+
+
+def merge_headers(*dicts):
+    """Merge dicts of http headers case insensitively, prioritizing the latter ones"""
+    return {k.title(): v for k, v in itertools.chain.from_iterable(map(dict.items, dicts))}
+
+
+def cached_method(f):
+    """Cache a method"""
+    signature = inspect.signature(f)
+
+    @functools.wraps(f)
+    def wrapper(self, *args, **kwargs):
+        bound_args = signature.bind(self, *args, **kwargs)
+        bound_args.apply_defaults()
+        key = tuple(bound_args.arguments.values())[1:]
+
+        cache = vars(self).setdefault('_cached_method__cache', {}).setdefault(f.__name__, {})
+        if key not in cache:
+            cache[key] = f(self, *args, **kwargs)
+        return cache[key]
+    return wrapper
+
+
+class classproperty:
"""property access for class methods with optional caching""" + def __new__(cls, func=None, *args, **kwargs): + if not func: + return functools.partial(cls, *args, **kwargs) + return super().__new__(cls) + + def __init__(self, func, *, cache=False): + functools.update_wrapper(self, func) + self.func = func + self._cache = {} if cache else None + + def __get__(self, _, cls): + if self._cache is None: + return self.func(cls) + elif cls not in self._cache: + self._cache[cls] = self.func(cls) + return self._cache[cls] + + +class function_with_repr: + def __init__(self, func, repr_=None): + functools.update_wrapper(self, func) + self.func, self.__repr = func, repr_ + + def __call__(self, *args, **kwargs): + return self.func(*args, **kwargs) + + def __repr__(self): + if self.__repr: + return self.__repr + return f'{self.func.__module__}.{self.func.__qualname__}' + + +class Namespace(types.SimpleNamespace): + """Immutable namespace""" + + def __iter__(self): + return iter(self.__dict__.values()) + + @property + def items_(self): + return self.__dict__.items() + + +MEDIA_EXTENSIONS = Namespace( + common_video=('avi', 'flv', 'mkv', 'mov', 'mp4', 'webm'), + video=('3g2', '3gp', 'f4v', 'mk3d', 'divx', 'mpg', 'ogv', 'm4v', 'wmv'), + common_audio=('aiff', 'alac', 'flac', 'm4a', 'mka', 'mp3', 'ogg', 'opus', 'wav'), + audio=('aac', 'ape', 'asf', 'f4a', 'f4b', 'm4b', 'm4p', 'm4r', 'oga', 'ogx', 'spx', 'vorbis', 'wma', 'weba'), + thumbnails=('jpg', 'png', 'webp'), + storyboards=('mhtml', ), + subtitles=('srt', 'vtt', 'ass', 'lrc'), + manifests=('f4f', 'f4m', 'm3u8', 'smil', 'mpd'), +) +MEDIA_EXTENSIONS.video += MEDIA_EXTENSIONS.common_video +MEDIA_EXTENSIONS.audio += MEDIA_EXTENSIONS.common_audio + +KNOWN_EXTENSIONS = (*MEDIA_EXTENSIONS.video, *MEDIA_EXTENSIONS.audio, *MEDIA_EXTENSIONS.manifests) + + +class RetryManager: + """Usage: + for retry in RetryManager(...): + try: + ... + except SomeException as err: + retry.error = err + continue + """ + attempt, _error = 0, None + + def __init__(self, _retries, _error_callback, **kwargs): + self.retries = _retries or 0 + self.error_callback = functools.partial(_error_callback, **kwargs) + + def _should_retry(self): + return self._error is not NO_DEFAULT and self.attempt <= self.retries + + @property + def error(self): + if self._error is NO_DEFAULT: + return None + return self._error + + @error.setter + def error(self, value): + self._error = value + + def __iter__(self): + while self._should_retry(): + self.error = NO_DEFAULT + self.attempt += 1 + yield self + if self.error: + self.error_callback(self.error, self.attempt, self.retries) + + @staticmethod + def report_retry(e, count, retries, *, sleep_func, info, warn, error=None, suffix=None): + """Utility function for reporting retries""" + if count > retries: + if error: + return error(f'{e}. Giving up after {count - 1} retries') if count > 1 else error(str(e)) + raise e + + if not count: + return warn(e) + elif isinstance(e, ExtractorError): + e = remove_end(str_or_none(e.cause) or e.orig_msg, '.') + warn(f'{e}. 
Retrying{format_field(suffix, None, " %s")} ({count}/{retries})...') + + delay = float_or_none(sleep_func(n=count - 1)) if callable(sleep_func) else sleep_func + if delay: + info(f'Sleeping {delay:.2f} seconds ...') + time.sleep(delay) + + +def make_archive_id(ie, video_id): + ie_key = ie if isinstance(ie, str) else ie.ie_key() + return f'{ie_key.lower()} {video_id}' + + +def truncate_string(s, left, right=0): + assert left > 3 and right >= 0 + if s is None or len(s) <= left + right: + return s + return f'{s[:left-3]}...{s[-right:] if right else ""}' + + +def orderedSet_from_options(options, alias_dict, *, use_regex=False, start=None): + assert 'all' in alias_dict, '"all" alias is required' + requested = list(start or []) + for val in options: + discard = val.startswith('-') + if discard: + val = val[1:] + + if val in alias_dict: + val = alias_dict[val] if not discard else [ + i[1:] if i.startswith('-') else f'-{i}' for i in alias_dict[val]] + # NB: Do not allow regex in aliases for performance + requested = orderedSet_from_options(val, alias_dict, start=requested) + continue + + current = (filter(re.compile(val, re.I).fullmatch, alias_dict['all']) if use_regex + else [val] if val in alias_dict['all'] else None) + if current is None: + raise ValueError(val) + + if discard: + for item in current: + while item in requested: + requested.remove(item) + else: + requested.extend(current) + + return orderedSet(requested) + + +# TODO: Rewrite +class FormatSorter: + regex = r' *((?P<reverse>\+)?(?P<field>[a-zA-Z0-9_]+)((?P<separator>[~:])(?P<limit>.*?))?)? *$' + + default = ('hidden', 'aud_or_vid', 'hasvid', 'ie_pref', 'lang', 'quality', + 'res', 'fps', 'hdr:12', 'vcodec:vp9.2', 'channels', 'acodec', + 'size', 'br', 'asr', 'proto', 'ext', 'hasaud', 'source', 'id') # These must not be aliases + ytdl_default = ('hasaud', 'lang', 'quality', 'tbr', 'filesize', 'vbr', + 'height', 'width', 'proto', 'vext', 'abr', 'aext', + 'fps', 'fs_approx', 'source', 'id') + + settings = { + 'vcodec': {'type': 'ordered', 'regex': True, + 'order': ['av0?1', 'vp0?9.2', 'vp0?9', '[hx]265|he?vc?', '[hx]264|avc', 'vp0?8', 'mp4v|h263', 'theora', '', None, 'none']}, + 'acodec': {'type': 'ordered', 'regex': True, + 'order': ['[af]lac', 'wav|aiff', 'opus', 'vorbis|ogg', 'aac', 'mp?4a?', 'mp3', 'ac-?4', 'e-?a?c-?3', 'ac-?3', 'dts', '', None, 'none']}, + 'hdr': {'type': 'ordered', 'regex': True, 'field': 'dynamic_range', + 'order': ['dv', '(hdr)?12', r'(hdr)?10\+', '(hdr)?10', 'hlg', '', 'sdr', None]}, + 'proto': {'type': 'ordered', 'regex': True, 'field': 'protocol', + 'order': ['(ht|f)tps', '(ht|f)tp$', 'm3u8.*', '.*dash', 'websocket_frag', 'rtmpe?', '', 'mms|rtsp', 'ws|websocket', 'f4']}, + 'vext': {'type': 'ordered', 'field': 'video_ext', + 'order': ('mp4', 'mov', 'webm', 'flv', '', 'none'), + 'order_free': ('webm', 'mp4', 'mov', 'flv', '', 'none')}, + 'aext': {'type': 'ordered', 'regex': True, 'field': 'audio_ext', + 'order': ('m4a', 'aac', 'mp3', 'ogg', 'opus', 'web[am]', '', 'none'), + 'order_free': ('ogg', 'opus', 'web[am]', 'mp3', 'm4a', 'aac', '', 'none')}, + 'hidden': {'visible': False, 'forced': True, 'type': 'extractor', 'max': -1000}, + 'aud_or_vid': {'visible': False, 'forced': True, 'type': 'multiple', + 'field': ('vcodec', 'acodec'), + 'function': lambda it: int(any(v != 'none' for v in it))}, + 'ie_pref': {'priority': True, 'type': 'extractor'}, + 'hasvid': {'priority': True, 'field': 'vcodec', 'type': 'boolean', 'not_in_list': ('none',)}, + 'hasaud': {'field': 'acodec', 'type': 'boolean', 'not_in_list': 
('none',)}, + 'lang': {'convert': 'float', 'field': 'language_preference', 'default': -1}, + 'quality': {'convert': 'float', 'default': -1}, + 'filesize': {'convert': 'bytes'}, + 'fs_approx': {'convert': 'bytes', 'field': 'filesize_approx'}, + 'id': {'convert': 'string', 'field': 'format_id'}, + 'height': {'convert': 'float_none'}, + 'width': {'convert': 'float_none'}, + 'fps': {'convert': 'float_none'}, + 'channels': {'convert': 'float_none', 'field': 'audio_channels'}, + 'tbr': {'convert': 'float_none'}, + 'vbr': {'convert': 'float_none'}, + 'abr': {'convert': 'float_none'}, + 'asr': {'convert': 'float_none'}, + 'source': {'convert': 'float', 'field': 'source_preference', 'default': -1}, + + 'codec': {'type': 'combined', 'field': ('vcodec', 'acodec')}, + 'br': {'type': 'multiple', 'field': ('tbr', 'vbr', 'abr'), 'convert': 'float_none', + 'function': lambda it: next(filter(None, it), None)}, + 'size': {'type': 'multiple', 'field': ('filesize', 'fs_approx'), 'convert': 'bytes', + 'function': lambda it: next(filter(None, it), None)}, + 'ext': {'type': 'combined', 'field': ('vext', 'aext')}, + 'res': {'type': 'multiple', 'field': ('height', 'width'), + 'function': lambda it: (lambda l: min(l) if l else 0)(tuple(filter(None, it)))}, + + # Actual field names + 'format_id': {'type': 'alias', 'field': 'id'}, + 'preference': {'type': 'alias', 'field': 'ie_pref'}, + 'language_preference': {'type': 'alias', 'field': 'lang'}, + 'source_preference': {'type': 'alias', 'field': 'source'}, + 'protocol': {'type': 'alias', 'field': 'proto'}, + 'filesize_approx': {'type': 'alias', 'field': 'fs_approx'}, + 'audio_channels': {'type': 'alias', 'field': 'channels'}, + + # Deprecated + 'dimension': {'type': 'alias', 'field': 'res', 'deprecated': True}, + 'resolution': {'type': 'alias', 'field': 'res', 'deprecated': True}, + 'extension': {'type': 'alias', 'field': 'ext', 'deprecated': True}, + 'bitrate': {'type': 'alias', 'field': 'br', 'deprecated': True}, + 'total_bitrate': {'type': 'alias', 'field': 'tbr', 'deprecated': True}, + 'video_bitrate': {'type': 'alias', 'field': 'vbr', 'deprecated': True}, + 'audio_bitrate': {'type': 'alias', 'field': 'abr', 'deprecated': True}, + 'framerate': {'type': 'alias', 'field': 'fps', 'deprecated': True}, + 'filesize_estimate': {'type': 'alias', 'field': 'size', 'deprecated': True}, + 'samplerate': {'type': 'alias', 'field': 'asr', 'deprecated': True}, + 'video_ext': {'type': 'alias', 'field': 'vext', 'deprecated': True}, + 'audio_ext': {'type': 'alias', 'field': 'aext', 'deprecated': True}, + 'video_codec': {'type': 'alias', 'field': 'vcodec', 'deprecated': True}, + 'audio_codec': {'type': 'alias', 'field': 'acodec', 'deprecated': True}, + 'video': {'type': 'alias', 'field': 'hasvid', 'deprecated': True}, + 'has_video': {'type': 'alias', 'field': 'hasvid', 'deprecated': True}, + 'audio': {'type': 'alias', 'field': 'hasaud', 'deprecated': True}, + 'has_audio': {'type': 'alias', 'field': 'hasaud', 'deprecated': True}, + 'extractor': {'type': 'alias', 'field': 'ie_pref', 'deprecated': True}, + 'extractor_preference': {'type': 'alias', 'field': 'ie_pref', 'deprecated': True}, + } + + def __init__(self, ydl, field_preference): + self.ydl = ydl + self._order = [] + self.evaluate_params(self.ydl.params, field_preference) + if ydl.params.get('verbose'): + self.print_verbose_info(self.ydl.write_debug) + + def _get_field_setting(self, field, key): + if field not in self.settings: + if key in ('forced', 'priority'): + return False + self.ydl.deprecated_feature(f'Using arbitrary 
fields ({field}) for format sorting is ' + 'deprecated and may be removed in a future version') + self.settings[field] = {} + propObj = self.settings[field] + if key not in propObj: + type = propObj.get('type') + if key == 'field': + default = 'preference' if type == 'extractor' else (field,) if type in ('combined', 'multiple') else field + elif key == 'convert': + default = 'order' if type == 'ordered' else 'float_string' if field else 'ignore' + else: + default = {'type': 'field', 'visible': True, 'order': [], 'not_in_list': (None,)}.get(key, None) + propObj[key] = default + return propObj[key] + + def _resolve_field_value(self, field, value, convertNone=False): + if value is None: + if not convertNone: + return None + else: + value = value.lower() + conversion = self._get_field_setting(field, 'convert') + if conversion == 'ignore': + return None + if conversion == 'string': + return value + elif conversion == 'float_none': + return float_or_none(value) + elif conversion == 'bytes': + return parse_bytes(value) + elif conversion == 'order': + order_list = (self._use_free_order and self._get_field_setting(field, 'order_free')) or self._get_field_setting(field, 'order') + use_regex = self._get_field_setting(field, 'regex') + list_length = len(order_list) + empty_pos = order_list.index('') if '' in order_list else list_length + 1 + if use_regex and value is not None: + for i, regex in enumerate(order_list): + if regex and re.match(regex, value): + return list_length - i + return list_length - empty_pos # not in list + else: # not regex or value = None + return list_length - (order_list.index(value) if value in order_list else empty_pos) + else: + if value.isnumeric(): + return float(value) + else: + self.settings[field]['convert'] = 'string' + return value + + def evaluate_params(self, params, sort_extractor): + self._use_free_order = params.get('prefer_free_formats', False) + self._sort_user = params.get('format_sort', []) + self._sort_extractor = sort_extractor + + def add_item(field, reverse, closest, limit_text): + field = field.lower() + if field in self._order: + return + self._order.append(field) + limit = self._resolve_field_value(field, limit_text) + data = { + 'reverse': reverse, + 'closest': False if limit is None else closest, + 'limit_text': limit_text, + 'limit': limit} + if field in self.settings: + self.settings[field].update(data) + else: + self.settings[field] = data + + sort_list = ( + tuple(field for field in self.default if self._get_field_setting(field, 'forced')) + + (tuple() if params.get('format_sort_force', False) + else tuple(field for field in self.default if self._get_field_setting(field, 'priority'))) + + tuple(self._sort_user) + tuple(sort_extractor) + self.default) + + for item in sort_list: + match = re.match(self.regex, item) + if match is None: + raise ExtractorError('Invalid format sort string "%s" given by extractor' % item) + field = match.group('field') + if field is None: + continue + if self._get_field_setting(field, 'type') == 'alias': + alias, field = field, self._get_field_setting(field, 'field') + if self._get_field_setting(alias, 'deprecated'): + self.ydl.deprecated_feature(f'Format sorting alias {alias} is deprecated and may ' + f'be removed in a future version. 
Please use {field} instead') + reverse = match.group('reverse') is not None + closest = match.group('separator') == '~' + limit_text = match.group('limit') + + has_limit = limit_text is not None + has_multiple_fields = self._get_field_setting(field, 'type') == 'combined' + has_multiple_limits = has_limit and has_multiple_fields and not self._get_field_setting(field, 'same_limit') + + fields = self._get_field_setting(field, 'field') if has_multiple_fields else (field,) + limits = limit_text.split(':') if has_multiple_limits else (limit_text,) if has_limit else tuple() + limit_count = len(limits) + for (i, f) in enumerate(fields): + add_item(f, reverse, closest, + limits[i] if i < limit_count + else limits[0] if has_limit and not has_multiple_limits + else None) + + def print_verbose_info(self, write_debug): + if self._sort_user: + write_debug('Sort order given by user: %s' % ', '.join(self._sort_user)) + if self._sort_extractor: + write_debug('Sort order given by extractor: %s' % ', '.join(self._sort_extractor)) + write_debug('Formats sorted by: %s' % ', '.join(['%s%s%s' % ( + '+' if self._get_field_setting(field, 'reverse') else '', field, + '%s%s(%s)' % ('~' if self._get_field_setting(field, 'closest') else ':', + self._get_field_setting(field, 'limit_text'), + self._get_field_setting(field, 'limit')) + if self._get_field_setting(field, 'limit_text') is not None else '') + for field in self._order if self._get_field_setting(field, 'visible')])) + + def _calculate_field_preference_from_value(self, format, field, type, value): + reverse = self._get_field_setting(field, 'reverse') + closest = self._get_field_setting(field, 'closest') + limit = self._get_field_setting(field, 'limit') + + if type == 'extractor': + maximum = self._get_field_setting(field, 'max') + if value is None or (maximum is not None and value >= maximum): + value = -1 + elif type == 'boolean': + in_list = self._get_field_setting(field, 'in_list') + not_in_list = self._get_field_setting(field, 'not_in_list') + value = 0 if ((in_list is None or value in in_list) and (not_in_list is None or value not in not_in_list)) else -1 + elif type == 'ordered': + value = self._resolve_field_value(field, value, True) + + # try to convert to number + val_num = float_or_none(value, default=self._get_field_setting(field, 'default')) + is_num = self._get_field_setting(field, 'convert') != 'string' and val_num is not None + if is_num: + value = val_num + + return ((-10, 0) if value is None + else (1, value, 0) if not is_num # if a field has mixed strings and numbers, strings are sorted higher + else (0, -abs(value - limit), value - limit if reverse else limit - value) if closest + else (0, value, 0) if not reverse and (limit is None or value <= limit) + else (0, -value, 0) if limit is None or (reverse and value == limit) or value > limit + else (-1, value, 0)) + + def _calculate_field_preference(self, format, field): + type = self._get_field_setting(field, 'type') # extractor, boolean, ordered, field, multiple + get_value = lambda f: format.get(self._get_field_setting(f, 'field')) + if type == 'multiple': + type = 'field' # Only 'field' is allowed in multiple for now + actual_fields = self._get_field_setting(field, 'field') + + value = self._get_field_setting(field, 'function')(get_value(f) for f in actual_fields) + else: + value = get_value(field) + return self._calculate_field_preference_from_value(format, field, type, value) + + def calculate_preference(self, format): + # Determine missing protocol + if not format.get('protocol'): + 
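+            # determine_protocol infers the protocol from the format's URL when the extractor did not set it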
format['protocol'] = determine_protocol(format) + + # Determine missing ext + if not format.get('ext') and 'url' in format: + format['ext'] = determine_ext(format['url']) + if format.get('vcodec') == 'none': + format['audio_ext'] = format['ext'] if format.get('acodec') != 'none' else 'none' + format['video_ext'] = 'none' + else: + format['video_ext'] = format['ext'] + format['audio_ext'] = 'none' + # if format.get('preference') is None and format.get('ext') in ('f4f', 'f4m'): # Not supported? + # format['preference'] = -1000 + + if format.get('preference') is None and format.get('ext') == 'flv' and re.match('[hx]265|he?vc?', format.get('vcodec') or ''): + # HEVC-over-FLV is out-of-spec by FLV's original spec + # ref. https://trac.ffmpeg.org/ticket/6389 + # ref. https://github.com/yt-dlp/yt-dlp/pull/5821 + format['preference'] = -100 + + # Determine missing bitrates + if format.get('vcodec') == 'none': + format['vbr'] = 0 + if format.get('acodec') == 'none': + format['abr'] = 0 + if not format.get('vbr') and format.get('vcodec') != 'none': + format['vbr'] = try_call(lambda: format['tbr'] - format['abr']) or None + if not format.get('abr') and format.get('acodec') != 'none': + format['abr'] = try_call(lambda: format['tbr'] - format['vbr']) or None + if not format.get('tbr'): + format['tbr'] = try_call(lambda: format['vbr'] + format['abr']) or None + + return tuple(self._calculate_field_preference(format, field) for field in self._order) + + +# XXX: Temporary +class _YDLLogger: + def __init__(self, ydl=None): + self._ydl = ydl + + def debug(self, message): + if self._ydl: + self._ydl.write_debug(message) + + def info(self, message): + if self._ydl: + self._ydl.to_screen(message) + + def warning(self, message, *, once=False): + if self._ydl: + self._ydl.report_warning(message, once) + + def error(self, message, *, is_error=True): + if self._ydl: + self._ydl.report_error(message, is_error=is_error) + + def stdout(self, message): + if self._ydl: + self._ydl.to_stdout(message) + + def stderr(self, message): + if self._ydl: + self._ydl.to_stderr(message) diff --git a/python/lib/python3.10/site-packages/yt_dlp/utils/networking.py b/python/lib/python3.10/site-packages/yt_dlp/utils/networking.py new file mode 100644 index 0000000..ba0493c --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/utils/networking.py @@ -0,0 +1,163 @@ +import collections +import random +import urllib.parse +import urllib.request + +from ._utils import remove_start + + +def random_user_agent(): + _USER_AGENT_TPL = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/%s Safari/537.36' + _CHROME_VERSIONS = ( + '90.0.4430.212', + '90.0.4430.24', + '90.0.4430.70', + '90.0.4430.72', + '90.0.4430.85', + '90.0.4430.93', + '91.0.4472.101', + '91.0.4472.106', + '91.0.4472.114', + '91.0.4472.124', + '91.0.4472.164', + '91.0.4472.19', + '91.0.4472.77', + '92.0.4515.107', + '92.0.4515.115', + '92.0.4515.131', + '92.0.4515.159', + '92.0.4515.43', + '93.0.4556.0', + '93.0.4577.15', + '93.0.4577.63', + '93.0.4577.82', + '94.0.4606.41', + '94.0.4606.54', + '94.0.4606.61', + '94.0.4606.71', + '94.0.4606.81', + '94.0.4606.85', + '95.0.4638.17', + '95.0.4638.50', + '95.0.4638.54', + '95.0.4638.69', + '95.0.4638.74', + '96.0.4664.18', + '96.0.4664.45', + '96.0.4664.55', + '96.0.4664.93', + '97.0.4692.20', + ) + return _USER_AGENT_TPL % random.choice(_CHROME_VERSIONS) + + +class HTTPHeaderDict(collections.UserDict, dict): + """ + Store and access keys case-insensitively. 
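+    For example, headers['content-type'] and headers['Content-Type'] address the same entry.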
+    The constructor can take multiple dicts, in which keys in the latter are prioritised.
+    """
+
+    def __init__(self, *args, **kwargs):
+        super().__init__()
+        for dct in args:
+            if dct is not None:
+                self.update(dct)
+        self.update(kwargs)
+
+    def __setitem__(self, key, value):
+        if isinstance(value, bytes):
+            value = value.decode('latin-1')
+        super().__setitem__(key.title(), str(value))
+
+    def __getitem__(self, key):
+        return super().__getitem__(key.title())
+
+    def __delitem__(self, key):
+        super().__delitem__(key.title())
+
+    def __contains__(self, key):
+        return super().__contains__(key.title() if isinstance(key, str) else key)
+
+
+std_headers = HTTPHeaderDict({
+    'User-Agent': random_user_agent(),
+    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
+    'Accept-Language': 'en-us,en;q=0.5',
+    'Sec-Fetch-Mode': 'navigate',
+})
+
+
+def clean_proxies(proxies: dict, headers: HTTPHeaderDict):
+    req_proxy = headers.pop('Ytdl-Request-Proxy', None)
+    if req_proxy:
+        proxies.clear()  # XXX: compat: Ytdl-Request-Proxy takes preference over everything, including NO_PROXY
+        proxies['all'] = req_proxy
+    for proxy_key, proxy_url in proxies.items():
+        if proxy_url == '__noproxy__':
+            proxies[proxy_key] = None
+            continue
+        if proxy_key == 'no':  # special case
+            continue
+        if proxy_url is not None:
+            # Ensure proxies without a scheme are http.
+            try:
+                proxy_scheme = urllib.request._parse_proxy(proxy_url)[0]
+            except ValueError:
+                # Ignore invalid proxy URLs. Sometimes these may be introduced through environment
+                # variables unrelated to proxy settings - e.g. Colab `COLAB_LANGUAGE_SERVER_PROXY`.
+                # If the proxy is going to be used, the Request Handler proxy validation will handle it.
+                continue
+            if proxy_scheme is None:
+                proxies[proxy_key] = 'http://' + remove_start(proxy_url, '//')
+
+            replace_scheme = {
+                'socks5': 'socks5h',  # compat: socks5 was treated as socks5h
+                'socks': 'socks4'  # compat: non-standard
+            }
+            if proxy_scheme in replace_scheme:
+                proxies[proxy_key] = urllib.parse.urlunparse(
+                    urllib.parse.urlparse(proxy_url)._replace(scheme=replace_scheme[proxy_scheme]))
+
+
+def clean_headers(headers: HTTPHeaderDict):
+    if 'Youtubedl-No-Compression' in headers:  # compat
+        del headers['Youtubedl-No-Compression']
+        headers['Accept-Encoding'] = 'identity'
+
+
+def remove_dot_segments(path):
+    # Implements RFC3986 5.2.4 remove_dot_segments
+    # Pseudo-code: https://tools.ietf.org/html/rfc3986#section-5.2.4
+    # https://github.com/urllib3/urllib3/blob/ba49f5c4e19e6bca6827282feb77a3c9f937e64b/src/urllib3/util/url.py#L263
+    output = []
+    segments = path.split('/')
+    for s in segments:
+        if s == '.':
+            continue
+        elif s == '..':
+            if output:
+                output.pop()
+        else:
+            output.append(s)
+    if not segments[0] and (not output or output[0]):
+        output.insert(0, '')
+    if segments[-1] in ('.', '..'):
+        output.append('')
+    return '/'.join(output)
+
+
+def escape_rfc3986(s):
+    """Escape non-ASCII characters as suggested by RFC 3986"""
+    return urllib.parse.quote(s, b"%/;:@&=+$,!~*'()?#[]")
+
+
+def normalize_url(url):
+    """Normalize URL as suggested by RFC 3986"""
+    url_parsed = urllib.parse.urlparse(url)
+    return url_parsed._replace(
+        netloc=url_parsed.netloc.encode('idna').decode('ascii'),
+        path=escape_rfc3986(remove_dot_segments(url_parsed.path)),
+        params=escape_rfc3986(url_parsed.params),
+        query=escape_rfc3986(url_parsed.query),
+        fragment=escape_rfc3986(url_parsed.fragment)
+    ).geturl()
diff --git a/python/lib/python3.10/site-packages/yt_dlp/utils/progress.py b/python/lib/python3.10/site-packages/yt_dlp/utils/progress.py
new file mode 100644
index 0000000..f254a38
--- /dev/null
+++ b/python/lib/python3.10/site-packages/yt_dlp/utils/progress.py
@@ -0,0 +1,109 @@
+from __future__ import annotations
+
+import bisect
+import threading
+import time
+
+
+class ProgressCalculator:
+    # Time to calculate the speed over (seconds)
+    SAMPLING_WINDOW = 3
+    # Minimum time between samples of downloaded bytes (seconds)
+    SAMPLING_RATE = 0.05
+    # Time before showing eta (seconds)
+    GRACE_PERIOD = 1
+
+    def __init__(self, initial: int):
+        self._initial = initial or 0
+        self.downloaded = self._initial
+
+        self.elapsed: float = 0
+        self.speed = SmoothValue(0, smoothing=0.7)
+        self.eta = SmoothValue(None, smoothing=0.9)
+
+        self._total = 0
+        self._start_time = time.monotonic()
+        self._last_update = self._start_time
+
+        self._lock = threading.Lock()
+        self._thread_sizes: dict[int, int] = {}
+
+        self._times = [self._start_time]
+        self._downloaded = [self.downloaded]
+
+    @property
+    def total(self):
+        return self._total
+
+    @total.setter
+    def total(self, value: int | None):
+        with self._lock:
+            if value is not None and value < self.downloaded:
+                value = self.downloaded
+
+            self._total = value
+
+    def thread_reset(self):
+        current_thread = threading.get_ident()
+        with self._lock:
+            self._thread_sizes[current_thread] = 0
+
+    def update(self, size: int | None):
+        if not size:
+            return
+
+        current_thread = threading.get_ident()
+
+        with self._lock:
+            last_size = self._thread_sizes.get(current_thread, 0)
+            self._thread_sizes[current_thread] = size
+            self._update(size - last_size)
+
+    def _update(self, size: int):
+        current_time = time.monotonic()
+
+        self.downloaded += size
+        self.elapsed = current_time - self._start_time
+        if self.total is not None and self.downloaded > self.total:
+            self._total = self.downloaded
+
+        if self._last_update + self.SAMPLING_RATE > current_time:
+            return
+        self._last_update = current_time
+
+        self._times.append(current_time)
+        self._downloaded.append(self.downloaded)
+
+        offset = bisect.bisect_left(self._times, current_time - self.SAMPLING_WINDOW)
+        del self._times[:offset]
+        del self._downloaded[:offset]
+        if len(self._times) < 2:
+            self.speed.reset()
+            self.eta.reset()
+            return
+
+        download_time = current_time - self._times[0]
+        if not download_time:
+            return
+
+        self.speed.set((self.downloaded - self._downloaded[0]) / download_time)
+        if self.total and self.speed.value and self.elapsed > self.GRACE_PERIOD:
+            self.eta.set((self.total - self.downloaded) / self.speed.value)
+        else:
+            self.eta.reset()
+
+
+class SmoothValue:
+    def __init__(self, initial: float | None, smoothing: float):
+        self.value = self.smooth = self._initial = initial
+        self._smoothing = smoothing
+
+    def set(self, value: float):
+        self.value = value
+        if self.smooth is None:
+            self.smooth = self.value
+        else:
+            self.smooth = (1 - self._smoothing) * value + self._smoothing * self.smooth
+
+    def reset(self):
+        self.value = self.smooth = self._initial
diff --git a/python/lib/python3.10/site-packages/yt_dlp/utils/traversal.py b/python/lib/python3.10/site-packages/yt_dlp/utils/traversal.py
new file mode 100644
index 0000000..462c3ba
--- /dev/null
+++ b/python/lib/python3.10/site-packages/yt_dlp/utils/traversal.py
@@ -0,0 +1,254 @@
+import collections.abc
+import contextlib
+import inspect
+import itertools
+import re
+
+from ._utils import (
+    IDENTITY,
+    NO_DEFAULT,
+    LazyList,
+    int_or_none,
+    is_iterable_like,
+    try_call,
+    variadic,
+)
+
+
+def traverse_obj(
+        obj,
*paths, default=NO_DEFAULT, expected_type=None, get_all=True, + casesense=True, is_user_input=False, traverse_string=False): + """ + Safely traverse nested `dict`s and `Iterable`s + + >>> obj = [{}, {"key": "value"}] + >>> traverse_obj(obj, (1, "key")) + "value" + + Each of the provided `paths` is tested and the first producing a valid result will be returned. + The next path will also be tested if the path branched but no results could be found. + Supported values for traversal are `Mapping`, `Iterable` and `re.Match`. + Unhelpful values (`{}`, `None`) are treated as the absence of a value and discarded. + + The paths will be wrapped in `variadic`, so that `'key'` is conveniently the same as `('key', )`. + + The keys in the path can be one of: + - `None`: Return the current object. + - `set`: Requires the only item in the set to be a type or function, + like `{type}`/`{func}`. If a `type`, returns only values + of this type. If a function, returns `func(obj)`. + - `str`/`int`: Return `obj[key]`. For `re.Match`, return `obj.group(key)`. + - `slice`: Branch out and return all values in `obj[key]`. + - `Ellipsis`: Branch out and return a list of all values. + - `tuple`/`list`: Branch out and return a list of all matching values. + Read as: `[traverse_obj(obj, branch) for branch in branches]`. + - `function`: Branch out and return values filtered by the function. + Read as: `[value for key, value in obj if function(key, value)]`. + For `Iterable`s, `key` is the index of the value. + For `re.Match`es, `key` is the group number (0 = full match) + as well as additionally any group names, if given. + - `dict` Transform the current object and return a matching dict. + Read as: `{key: traverse_obj(obj, path) for key, path in dct.items()}`. + + `tuple`, `list`, and `dict` all support nested paths and branches. + + @params paths Paths which to traverse by. + @param default Value to return if the paths do not match. + If the last key in the path is a `dict`, it will apply to each value inside + the dict instead, depth first. Try to avoid if using nested `dict` keys. + @param expected_type If a `type`, only accept final values of this type. + If any other callable, try to call the function on each result. + If the last key in the path is a `dict`, it will apply to each value inside + the dict instead, recursively. This does respect branching paths. + @param get_all If `False`, return the first matching result, otherwise all matching ones. + @param casesense If `False`, consider string dictionary keys as case insensitive. + + The following are only meant to be used by YoutubeDL.prepare_outtmpl and are not part of the API + + @param is_user_input Whether the keys are generated from user input. + If `True` strings get converted to `int`/`slice` if needed. + @param traverse_string Whether to traverse into objects as strings. + If `True`, any non-compatible object will first be + converted into a string and then traversed into. + The return value of that path will be a string instead, + not respecting any further branching. + + + @returns The result of the object traversal. + If successful, `get_all=True`, and the path branches at least once, + then a list of results is returned instead. + If no `default` is given and the last path branches, a `list` of results + is always returned. If a path ends on a `dict` that result will always be a `dict`. 
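+
+    Additional illustrative paths, using the same `obj` as above:
+
+    >>> traverse_obj(obj, (..., "key"))
+    ["value"]
+    >>> traverse_obj(obj, (1, "missing"), default="fallback")
+    "fallback"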
+ """ + casefold = lambda k: k.casefold() if isinstance(k, str) else k + + if isinstance(expected_type, type): + type_test = lambda val: val if isinstance(val, expected_type) else None + else: + type_test = lambda val: try_call(expected_type or IDENTITY, args=(val,)) + + def apply_key(key, obj, is_last): + branching = False + result = None + + if obj is None and traverse_string: + if key is ... or callable(key) or isinstance(key, slice): + branching = True + result = () + + elif key is None: + result = obj + + elif isinstance(key, set): + assert len(key) == 1, 'Set should only be used to wrap a single item' + item = next(iter(key)) + if isinstance(item, type): + if isinstance(obj, item): + result = obj + else: + result = try_call(item, args=(obj,)) + + elif isinstance(key, (list, tuple)): + branching = True + result = itertools.chain.from_iterable( + apply_path(obj, branch, is_last)[0] for branch in key) + + elif key is ...: + branching = True + if isinstance(obj, collections.abc.Mapping): + result = obj.values() + elif is_iterable_like(obj): + result = obj + elif isinstance(obj, re.Match): + result = obj.groups() + elif traverse_string: + branching = False + result = str(obj) + else: + result = () + + elif callable(key): + branching = True + if isinstance(obj, collections.abc.Mapping): + iter_obj = obj.items() + elif is_iterable_like(obj): + iter_obj = enumerate(obj) + elif isinstance(obj, re.Match): + iter_obj = itertools.chain( + enumerate((obj.group(), *obj.groups())), + obj.groupdict().items()) + elif traverse_string: + branching = False + iter_obj = enumerate(str(obj)) + else: + iter_obj = () + + result = (v for k, v in iter_obj if try_call(key, args=(k, v))) + if not branching: # string traversal + result = ''.join(result) + + elif isinstance(key, dict): + iter_obj = ((k, _traverse_obj(obj, v, False, is_last)) for k, v in key.items()) + result = { + k: v if v is not None else default for k, v in iter_obj + if v is not None or default is not NO_DEFAULT + } or None + + elif isinstance(obj, collections.abc.Mapping): + result = (try_call(obj.get, args=(key,)) if casesense or try_call(obj.__contains__, args=(key,)) else + next((v for k, v in obj.items() if casefold(k) == key), None)) + + elif isinstance(obj, re.Match): + if isinstance(key, int) or casesense: + with contextlib.suppress(IndexError): + result = obj.group(key) + + elif isinstance(key, str): + result = next((v for k, v in obj.groupdict().items() if casefold(k) == key), None) + + elif isinstance(key, (int, slice)): + if is_iterable_like(obj, collections.abc.Sequence): + branching = isinstance(key, slice) + with contextlib.suppress(IndexError): + result = obj[key] + elif traverse_string: + with contextlib.suppress(IndexError): + result = str(obj)[key] + + return branching, result if branching else (result,) + + def lazy_last(iterable): + iterator = iter(iterable) + prev = next(iterator, NO_DEFAULT) + if prev is NO_DEFAULT: + return + + for item in iterator: + yield False, prev + prev = item + + yield True, prev + + def apply_path(start_obj, path, test_type): + objs = (start_obj,) + has_branched = False + + key = None + for last, key in lazy_last(variadic(path, (str, bytes, dict, set))): + if is_user_input and isinstance(key, str): + if key == ':': + key = ... 
+ elif ':' in key: + key = slice(*map(int_or_none, key.split(':'))) + elif int_or_none(key) is not None: + key = int(key) + + if not casesense and isinstance(key, str): + key = key.casefold() + + if __debug__ and callable(key): + # Verify function signature + inspect.signature(key).bind(None, None) + + new_objs = [] + for obj in objs: + branching, results = apply_key(key, obj, last) + has_branched |= branching + new_objs.append(results) + + objs = itertools.chain.from_iterable(new_objs) + + if test_type and not isinstance(key, (dict, list, tuple)): + objs = map(type_test, objs) + + return objs, has_branched, isinstance(key, dict) + + def _traverse_obj(obj, path, allow_empty, test_type): + results, has_branched, is_dict = apply_path(obj, path, test_type) + results = LazyList(item for item in results if item not in (None, {})) + if get_all and has_branched: + if results: + return results.exhaust() + if allow_empty: + return [] if default is NO_DEFAULT else default + return None + + return results[0] if results else {} if allow_empty and is_dict else None + + for index, path in enumerate(paths, 1): + result = _traverse_obj(obj, path, index == len(paths), True) + if result is not None: + return result + + return None if default is NO_DEFAULT else default + + +def get_first(obj, *paths, **kwargs): + return traverse_obj(obj, *((..., *variadic(keys)) for keys in paths), **kwargs, get_all=False) + + +def dict_get(d, key_or_keys, default=None, skip_false_values=True): + for val in map(d.get, variadic(key_or_keys)): + if val is not None and (val or not skip_false_values): + return val + return default diff --git a/python/lib/python3.10/site-packages/yt_dlp/version.py b/python/lib/python3.10/site-packages/yt_dlp/version.py new file mode 100644 index 0000000..6b4a207 --- /dev/null +++ b/python/lib/python3.10/site-packages/yt_dlp/version.py @@ -0,0 +1,11 @@ +# Autogenerated by devscripts/update-version.py + +__version__ = '2023.10.13' + +RELEASE_GIT_HEAD = 'b634ba742d8f38ce9ecfa0546485728b0c6c59d1' + +VARIANT = 'pip' + +UPDATE_HINT = 'You installed yt-dlp with pip or using the wheel from PyPi; Use that to update' + +CHANNEL = 'stable' diff --git a/lib/python3.11/site-packages/yt_dlp/webvtt.py b/python/lib/python3.10/site-packages/yt_dlp/webvtt.py similarity index 100% rename from lib/python3.11/site-packages/yt_dlp/webvtt.py rename to python/lib/python3.10/site-packages/yt_dlp/webvtt.py diff --git a/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/INSTALLER b/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/INSTALLER new file mode 100644 index 0000000..a1b589e --- /dev/null +++ b/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/lib/python3.11/site-packages/zipp-3.15.0.dist-info/LICENSE b/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/LICENSE similarity index 100% rename from lib/python3.11/site-packages/zipp-3.15.0.dist-info/LICENSE rename to python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/LICENSE diff --git a/lib/python3.11/site-packages/zipp-3.15.0.dist-info/METADATA b/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/zipp-3.15.0.dist-info/METADATA rename to python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/METADATA diff --git a/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/RECORD b/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/RECORD new file mode 100644 index 0000000..7026b8e --- /dev/null +++ 
b/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/RECORD @@ -0,0 +1,11 @@ +zipp-3.15.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +zipp-3.15.0.dist-info/LICENSE,sha256=2z8CRrH5J48VhFuZ_sR4uLUG63ZIeZNyL4xuJUKF-vg,1050 +zipp-3.15.0.dist-info/METADATA,sha256=el77dlVTqXoMRoTxqwIdoMNmNAkERCn9v9-XlKjJfAU,3735 +zipp-3.15.0.dist-info/RECORD,, +zipp-3.15.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +zipp-3.15.0.dist-info/WHEEL,sha256=2wepM1nk4DS4eFpYrW1TTqPcoGNfHhhO_i5m4cOimbo,92 +zipp-3.15.0.dist-info/top_level.txt,sha256=iAbdoSHfaGqBfVb2XuR9JqSQHCoOsOtG6y9C_LSpqFw,5 +zipp/__init__.py,sha256=ZNjHuOKaFmgrSPvtIlHYFNCvENHqSYuqEVktBgj9cu8,10676 +zipp/__pycache__/__init__.cpython-310.pyc,, +zipp/__pycache__/py310compat.cpython-310.pyc,, +zipp/py310compat.py,sha256=HQG4If-eI5v4RT82Uagk7PuLsbnJ7HKgj0aQKGRAp8M,309 diff --git a/lib/site-packages/pip/_internal/operations/__init__.py b/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/REQUESTED similarity index 100% rename from lib/site-packages/pip/_internal/operations/__init__.py rename to python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/pip-22.3.1.dist-info/WHEEL b/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/pip-22.3.1.dist-info/WHEEL rename to python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/WHEEL diff --git a/lib/python3.11/site-packages/zipp-3.15.0.dist-info/top_level.txt b/python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/zipp-3.15.0.dist-info/top_level.txt rename to python/lib/python3.10/site-packages/zipp-3.15.0.dist-info/top_level.txt diff --git a/lib/python3.11/site-packages/zipp/__init__.py b/python/lib/python3.10/site-packages/zipp/__init__.py similarity index 100% rename from lib/python3.11/site-packages/zipp/__init__.py rename to python/lib/python3.10/site-packages/zipp/__init__.py diff --git a/lib/python3.11/site-packages/zipp/py310compat.py b/python/lib/python3.10/site-packages/zipp/py310compat.py similarity index 100% rename from lib/python3.11/site-packages/zipp/py310compat.py rename to python/lib/python3.10/site-packages/zipp/py310compat.py diff --git a/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/INSTALLER b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/INSTALLER new file mode 100644 index 0000000..a1b589e --- /dev/null +++ b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/LICENSE.txt b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/LICENSE.txt new file mode 100644 index 0000000..07806f8 --- /dev/null +++ b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/LICENSE.txt @@ -0,0 +1,19 @@ +This is the MIT license: http://www.opensource.org/licenses/mit-license.php + +Copyright (c) Alex Grönholm + +Permission is hereby granted, free of charge, to any person obtaining a copy of this +software and associated documentation files (the "Software"), to deal in the Software +without restriction, including without limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons +to whom the Software is furnished to do so, subject to the following conditions: + +The above 
copyright notice and this permission notice shall be included in all copies or +substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, +INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR +PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE +FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/METADATA b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/METADATA new file mode 100644 index 0000000..64f91d3 --- /dev/null +++ b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/METADATA @@ -0,0 +1,138 @@ +Metadata-Version: 2.1 +Name: APScheduler +Version: 3.10.1 +Summary: In-process task scheduler with Cron-like capabilities +Home-page: https://github.com/agronholm/apscheduler +Author: Alex Grönholm +Author-email: apscheduler@nextday.fi +License: MIT +Keywords: scheduling cron +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: MIT License +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Requires-Python: >=3.6 +License-File: LICENSE.txt +Requires-Dist: setuptools (>=0.7) +Requires-Dist: six (>=1.4.0) +Requires-Dist: pytz +Requires-Dist: tzlocal (!=3.*,>=2.0) +Provides-Extra: doc +Requires-Dist: sphinx ; extra == 'doc' +Requires-Dist: sphinx-rtd-theme ; extra == 'doc' +Provides-Extra: gevent +Requires-Dist: gevent ; extra == 'gevent' +Provides-Extra: mongodb +Requires-Dist: pymongo (>=3.0) ; extra == 'mongodb' +Provides-Extra: redis +Requires-Dist: redis (>=3.0) ; extra == 'redis' +Provides-Extra: rethinkdb +Requires-Dist: rethinkdb (>=2.4.0) ; extra == 'rethinkdb' +Provides-Extra: sqlalchemy +Requires-Dist: sqlalchemy (>=1.4) ; extra == 'sqlalchemy' +Provides-Extra: testing +Requires-Dist: pytest ; extra == 'testing' +Requires-Dist: pytest-asyncio ; extra == 'testing' +Requires-Dist: pytest-cov ; extra == 'testing' +Requires-Dist: pytest-tornado5 ; extra == 'testing' +Provides-Extra: tornado +Requires-Dist: tornado (>=4.3) ; extra == 'tornado' +Provides-Extra: twisted +Requires-Dist: twisted ; extra == 'twisted' +Provides-Extra: zookeeper +Requires-Dist: kazoo ; extra == 'zookeeper' + +.. image:: https://github.com/agronholm/apscheduler/workflows/Python%20codeqa/test/badge.svg?branch=3.x + :target: https://github.com/agronholm/apscheduler/actions?query=workflow%3A%22Python+codeqa%2Ftest%22+branch%3A3.x + :alt: Build Status +.. image:: https://coveralls.io/repos/github/agronholm/apscheduler/badge.svg?branch=3.x + :target: https://coveralls.io/github/agronholm/apscheduler?branch=3.x + :alt: Code Coverage +.. 
image:: https://readthedocs.org/projects/apscheduler/badge/?version=3.x + :target: https://apscheduler.readthedocs.io/en/master/?badge=3.x + :alt: Documentation + +Advanced Python Scheduler (APScheduler) is a Python library that lets you schedule your Python code +to be executed later, either just once or periodically. You can add new jobs or remove old ones on +the fly as you please. If you store your jobs in a database, they will also survive scheduler +restarts and maintain their state. When the scheduler is restarted, it will then run all the jobs +it should have run while it was offline [#f1]_. + +Among other things, APScheduler can be used as a cross-platform, application specific replacement +to platform specific schedulers, such as the cron daemon or the Windows task scheduler. Please +note, however, that APScheduler is **not** a daemon or service itself, nor does it come with any +command line tools. It is primarily meant to be run inside existing applications. That said, +APScheduler does provide some building blocks for you to build a scheduler service or to run a +dedicated scheduler process. + +APScheduler has three built-in scheduling systems you can use: + +* Cron-style scheduling (with optional start/end times) +* Interval-based execution (runs jobs on even intervals, with optional start/end times) +* One-off delayed execution (runs jobs once, on a set date/time) + +You can mix and match scheduling systems and the backends where the jobs are stored any way you +like. Supported backends for storing jobs include: + +* Memory +* `SQLAlchemy <http://www.sqlalchemy.org/>`_ (any RDBMS supported by SQLAlchemy works) +* `MongoDB <http://www.mongodb.org/>`_ +* `Redis <http://redis.io/>`_ +* `RethinkDB <https://www.rethinkdb.com/>`_ +* `ZooKeeper <https://zookeeper.apache.org/>`_ + +APScheduler also integrates with several common Python frameworks, like: + +* `asyncio <http://docs.python.org/3.4/library/asyncio.html>`_ (:pep:`3156`) +* `gevent <http://www.gevent.org/>`_ +* `Tornado <http://www.tornadoweb.org/>`_ +* `Twisted <http://twistedmatrix.com/>`_ +* `Qt <http://qt-project.org/>`_ (using either + `PyQt <http://www.riverbankcomputing.com/software/pyqt/intro>`_ , + `PySide6 <https://wiki.qt.io/Qt_for_Python>`_ , + `PySide2 <https://wiki.qt.io/Qt_for_Python>`_ or + `PySide <http://qt-project.org/wiki/PySide>`_) + +There are third party solutions for integrating APScheduler with other frameworks: + +* `Django <https://github.com/jarekwg/django-apscheduler>`_ +* `Flask <https://github.com/viniciuschiele/flask-apscheduler>`_ + + +.. [#f1] The cutoff period for this is also configurable. + + +Documentation +------------- + +Documentation can be found `here <https://apscheduler.readthedocs.io/>`_. + + +Source +------ + +The source can be browsed at `Github <https://github.com/agronholm/apscheduler/tree/3.x>`_. + + +Reporting bugs +-------------- + +A `bug tracker <https://github.com/agronholm/apscheduler/issues>`_ is provided by Github. 
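+
+
+Example
+-------
+
+A minimal illustrative sketch of interval scheduling with the documented
+``BackgroundScheduler`` API::
+
+    from apscheduler.schedulers.background import BackgroundScheduler
+
+    def tick():
+        print('tick')
+
+    scheduler = BackgroundScheduler()
+    scheduler.add_job(tick, 'interval', seconds=5)  # run tick() every 5 seconds
+    scheduler.start()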
+ + +Getting help +------------ + +If you have problems or other questions, you can either: + +* Ask in the `apscheduler <https://gitter.im/apscheduler/Lobby>`_ room on Gitter +* Ask on the `APScheduler GitHub discussion forum <https://github.com/agronholm/apscheduler/discussions>`_, or +* Ask on `StackOverflow <http://stackoverflow.com/questions/tagged/apscheduler>`_ and tag your + question with the ``apscheduler`` tag diff --git a/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/RECORD b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/RECORD similarity index 100% rename from lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/RECORD rename to python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/RECORD diff --git a/lib/site-packages/pip/_internal/operations/build/__init__.py b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/REQUESTED similarity index 100% rename from lib/site-packages/pip/_internal/operations/build/__init__.py rename to python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/setuptools-65.5.1.dist-info/WHEEL b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/setuptools-65.5.1.dist-info/WHEEL rename to python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/WHEEL diff --git a/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/entry_points.txt b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/entry_points.txt new file mode 100644 index 0000000..0adfe3e --- /dev/null +++ b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/entry_points.txt @@ -0,0 +1,23 @@ +[apscheduler.executors] +asyncio = apscheduler.executors.asyncio:AsyncIOExecutor [asyncio] +debug = apscheduler.executors.debug:DebugExecutor +gevent = apscheduler.executors.gevent:GeventExecutor [gevent] +processpool = apscheduler.executors.pool:ProcessPoolExecutor +threadpool = apscheduler.executors.pool:ThreadPoolExecutor +tornado = apscheduler.executors.tornado:TornadoExecutor [tornado] +twisted = apscheduler.executors.twisted:TwistedExecutor [twisted] + +[apscheduler.jobstores] +memory = apscheduler.jobstores.memory:MemoryJobStore +mongodb = apscheduler.jobstores.mongodb:MongoDBJobStore [mongodb] +redis = apscheduler.jobstores.redis:RedisJobStore [redis] +rethinkdb = apscheduler.jobstores.rethinkdb:RethinkDBJobStore [rethinkdb] +sqlalchemy = apscheduler.jobstores.sqlalchemy:SQLAlchemyJobStore [sqlalchemy] +zookeeper = apscheduler.jobstores.zookeeper:ZooKeeperJobStore [zookeeper] + +[apscheduler.triggers] +and = apscheduler.triggers.combining:AndTrigger +cron = apscheduler.triggers.cron:CronTrigger +date = apscheduler.triggers.date:DateTrigger +interval = apscheduler.triggers.interval:IntervalTrigger +or = apscheduler.triggers.combining:OrTrigger diff --git a/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/top_level.txt b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/top_level.txt new file mode 100644 index 0000000..d31d10d --- /dev/null +++ b/python/lib/python3.11/site-packages/APScheduler-3.10.1.dist-info/top_level.txt @@ -0,0 +1 @@ +apscheduler diff --git a/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/INSTALLER b/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/INSTALLER new file mode 100644 index 0000000..a1b589e --- /dev/null +++ 
b/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/LICENSE b/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/LICENSE new file mode 100644 index 0000000..33b7cdd --- /dev/null +++ b/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/LICENSE @@ -0,0 +1,19 @@ +Copyright (c) 2009, 2010, 2013-2016 by the Brotli Authors. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. diff --git a/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/METADATA b/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/METADATA similarity index 100% rename from lib/python3.11/site-packages/Brotli-1.0.9.dist-info/METADATA rename to python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/METADATA diff --git a/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/RECORD b/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/RECORD similarity index 100% rename from lib/python3.11/site-packages/Brotli-1.0.9.dist-info/RECORD rename to python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/RECORD diff --git a/lib/site-packages/pip/_internal/resolution/__init__.py b/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/REQUESTED similarity index 100% rename from lib/site-packages/pip/_internal/resolution/__init__.py rename to python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/WHEEL b/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/Brotli-1.0.9.dist-info/WHEEL rename to python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/WHEEL diff --git a/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/top_level.txt b/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/top_level.txt new file mode 100644 index 0000000..a111e9c --- /dev/null +++ b/python/lib/python3.11/site-packages/Brotli-1.0.9.dist-info/top_level.txt @@ -0,0 +1,2 @@ +_brotli +brotli diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/AES.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/AES.py new file mode 100644 index 0000000..402a3d7 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/AES.py @@ -0,0 +1,234 @@ +# -*- coding: utf-8 -*- +# +# Cipher/AES.py : AES +# +# =================================================================== +# The contents of this file are dedicated to the public domain. 
To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +import sys + +from Cryptodome.Cipher import _create_cipher +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + c_size_t, c_uint8_ptr) + +from Cryptodome.Util import _cpu_features +from Cryptodome.Random import get_random_bytes + +MODE_ECB = 1 #: Electronic Code Book (:ref:`ecb_mode`) +MODE_CBC = 2 #: Cipher-Block Chaining (:ref:`cbc_mode`) +MODE_CFB = 3 #: Cipher Feedback (:ref:`cfb_mode`) +MODE_OFB = 5 #: Output Feedback (:ref:`ofb_mode`) +MODE_CTR = 6 #: Counter mode (:ref:`ctr_mode`) +MODE_OPENPGP = 7 #: OpenPGP mode (:ref:`openpgp_mode`) +MODE_CCM = 8 #: Counter with CBC-MAC (:ref:`ccm_mode`) +MODE_EAX = 9 #: :ref:`eax_mode` +MODE_SIV = 10 #: Synthetic Initialization Vector (:ref:`siv_mode`) +MODE_GCM = 11 #: Galois Counter Mode (:ref:`gcm_mode`) +MODE_OCB = 12 #: Offset Code Book (:ref:`ocb_mode`) + + +_cproto = """ + int AES_start_operation(const uint8_t key[], + size_t key_len, + void **pResult); + int AES_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int AES_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int AES_stop_operation(void *state); + """ + + +# Load portable AES +_raw_aes_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._raw_aes", + _cproto) + +# Try to load AES with AES NI instructions +try: + _raw_aesni_lib = None + if _cpu_features.have_aes_ni(): + _raw_aesni_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._raw_aesni", + _cproto.replace("AES", + "AESNI")) +# _raw_aesni may not have been compiled in +except OSError: + pass + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a handle to a low-level + base cipher. It will absorb named parameters in the process.""" + + use_aesni = dict_parameters.pop("use_aesni", True) + + try: + key = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + if len(key) not in key_size: + raise ValueError("Incorrect AES key length (%d bytes)" % len(key)) + + if use_aesni and _raw_aesni_lib: + start_operation = _raw_aesni_lib.AESNI_start_operation + stop_operation = _raw_aesni_lib.AESNI_stop_operation + else: + start_operation = _raw_aes_lib.AES_start_operation + stop_operation = _raw_aes_lib.AES_stop_operation + + cipher = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + cipher.address_of()) + if result: + raise ValueError("Error %X while instantiating the AES cipher" + % result) + return SmartPointer(cipher.get(), stop_operation) + + +def _derive_Poly1305_key_pair(key, nonce): + """Derive a tuple (r, s, nonce) for a Poly1305 MAC. 
+ + If nonce is ``None``, a new 16-byte nonce is generated. + """ + + if len(key) != 32: + raise ValueError("Poly1305 with AES requires a 32-byte key") + + if nonce is None: + nonce = get_random_bytes(16) + elif len(nonce) != 16: + raise ValueError("Poly1305 with AES requires a 16-byte nonce") + + s = new(key[:16], MODE_ECB).encrypt(nonce) + return key[16:], s, nonce + + +def new(key, mode, *args, **kwargs): + """Create a new AES cipher. + + Args: + key(bytes/bytearray/memoryview): + The secret key to use in the symmetric cipher. + + It must be 16 (*AES-128)*, 24 (*AES-192*) or 32 (*AES-256*) bytes long. + + For ``MODE_SIV`` only, it doubles to 32, 48, or 64 bytes. + mode (a ``MODE_*`` constant): + The chaining mode to use for encryption or decryption. + If in doubt, use ``MODE_EAX``. + + Keyword Args: + iv (bytes/bytearray/memoryview): + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 16 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 16 bytes long for encryption + and 18 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + nonce (bytes/bytearray/memoryview): + (Only applicable for ``MODE_CCM``, ``MODE_EAX``, ``MODE_GCM``, + ``MODE_SIV``, ``MODE_OCB``, and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key (except possibly for ``MODE_SIV``, see below). + + For ``MODE_EAX``, ``MODE_GCM`` and ``MODE_SIV`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CCM``, its length must be in the range **[7..13]**. + Bear in mind that with CCM there is a trade-off between nonce + length and maximum message size. Recommendation: **11** bytes. + + For ``MODE_OCB``, its length must be in the range **[1..15]** + (recommended: **15**). + + For ``MODE_CTR``, its length must be in the range **[0..15]** + (recommended: **8**). + + For ``MODE_SIV``, the nonce is optional, if it is not specified, + then no nonce is being used, which renders the encryption + deterministic. + + If not provided, for modes other than ``MODE_SIV``, a random + byte string of the recommended length is used (you must then + read its value with the :attr:`nonce` attribute). + + segment_size (integer): + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + mac_len (integer): + (Only ``MODE_EAX``, ``MODE_GCM``, ``MODE_OCB``, ``MODE_CCM``) + Length of the authentication tag, in bytes. + + It must be even and in the range **[4..16]**. + The recommended value (and the default, if not specified) is **16**. + + msg_len (integer): + (Only ``MODE_CCM``). Length of the message to (de)cipher. + If not specified, ``encrypt`` must be called with the entire message. + Similarly, ``decrypt`` can only be called once. + + assoc_len (integer): + (Only ``MODE_CCM``). Length of the associated data. + If not specified, all associated data is buffered internally, + which may represent a problem for very large messages. + + initial_value (integer or bytes/bytearray/memoryview): + (Only ``MODE_CTR``). + The initial value for the counter. 
If not present, the cipher will + start counting from 0. The value is incremented by one for each block. + The counter number is encoded in big endian mode. + + counter (object): + (Only ``MODE_CTR``). + Instance of ``Cryptodome.Util.Counter``, which allows full customization + of the counter block. This parameter is incompatible to both ``nonce`` + and ``initial_value``. + + use_aesni: (boolean): + Use Intel AES-NI hardware extensions (default: use if available). + + Returns: + an AES object, of the applicable mode. + """ + + kwargs["add_aes_modes"] = True + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + + +# Size of a data block (in bytes) +block_size = 16 +# Size of a key (in bytes) +key_size = (16, 24, 32) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/AES.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/AES.pyi new file mode 100644 index 0000000..a694b0f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/AES.pyi @@ -0,0 +1,154 @@ +from typing import ByteString, Dict, Optional, Tuple, Union, overload +from typing_extensions import Literal + +from Cryptodome.Cipher._mode_ecb import EcbMode +from Cryptodome.Cipher._mode_cbc import CbcMode +from Cryptodome.Cipher._mode_cfb import CfbMode +from Cryptodome.Cipher._mode_ofb import OfbMode +from Cryptodome.Cipher._mode_ctr import CtrMode +from Cryptodome.Cipher._mode_openpgp import OpenPgpMode +from Cryptodome.Cipher._mode_ccm import CcmMode +from Cryptodome.Cipher._mode_eax import EaxMode +from Cryptodome.Cipher._mode_gcm import GcmMode +from Cryptodome.Cipher._mode_siv import SivMode +from Cryptodome.Cipher._mode_ocb import OcbMode + +MODE_ECB: Literal[1] +MODE_CBC: Literal[2] +MODE_CFB: Literal[3] +MODE_OFB: Literal[5] +MODE_CTR: Literal[6] +MODE_OPENPGP: Literal[7] +MODE_CCM: Literal[8] +MODE_EAX: Literal[9] +MODE_SIV: Literal[10] +MODE_GCM: Literal[11] +MODE_OCB: Literal[12] + +# MODE_ECB +@overload +def new(key: ByteString, + mode: Literal[1], + use_aesni : bool = ...) -> \ + EcbMode: ... + +# MODE_CBC +@overload +def new(key: ByteString, + mode: Literal[2], + iv : Optional[ByteString] = ..., + use_aesni : bool = ...) -> \ + CbcMode: ... + +@overload +def new(key: ByteString, + mode: Literal[2], + IV : Optional[ByteString] = ..., + use_aesni : bool = ...) -> \ + CbcMode: ... + +# MODE_CFB +@overload +def new(key: ByteString, + mode: Literal[3], + iv : Optional[ByteString] = ..., + segment_size : int = ..., + use_aesni : bool = ...) -> \ + CfbMode: ... + +@overload +def new(key: ByteString, + mode: Literal[3], + IV : Optional[ByteString] = ..., + segment_size : int = ..., + use_aesni : bool = ...) -> \ + CfbMode: ... + +# MODE_OFB +@overload +def new(key: ByteString, + mode: Literal[5], + iv : Optional[ByteString] = ..., + use_aesni : bool = ...) -> \ + OfbMode: ... + +@overload +def new(key: ByteString, + mode: Literal[5], + IV : Optional[ByteString] = ..., + use_aesni : bool = ...) -> \ + OfbMode: ... + +# MODE_CTR +@overload +def new(key: ByteString, + mode: Literal[6], + nonce : Optional[ByteString] = ..., + initial_value : Union[int, ByteString] = ..., + counter : Dict = ..., + use_aesni : bool = ...) -> \ + CtrMode: ... + +# MODE_OPENPGP +@overload +def new(key: ByteString, + mode: Literal[7], + iv : Optional[ByteString] = ..., + use_aesni : bool = ...) -> \ + OpenPgpMode: ... + +@overload +def new(key: ByteString, + mode: Literal[7], + IV : Optional[ByteString] = ..., + use_aesni : bool = ...) -> \ + OpenPgpMode: ... 
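# A minimal usage sketch of the AES API stubbed above, using MODE_EAX as the
# module docstring recommends (illustrative example; names like `receiver`
# are arbitrary):
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)                     # AES-128
cipher = AES.new(key, AES.MODE_EAX)
ciphertext, tag = cipher.encrypt_and_digest(b"attack at dawn")

# Decryption needs the same key plus the nonce the sender generated
receiver = AES.new(key, AES.MODE_EAX, nonce=cipher.nonce)
assert receiver.decrypt_and_verify(ciphertext, tag) == b"attack at dawn"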
+
+# MODE_CCM
+@overload
+def new(key: ByteString,
+        mode: Literal[8],
+        nonce : Optional[ByteString] = ...,
+        mac_len : int = ...,
+        assoc_len : int = ...,
+        use_aesni : bool = ...) -> \
+        CcmMode: ...
+
+# MODE_EAX
+@overload
+def new(key: ByteString,
+        mode: Literal[9],
+        nonce : Optional[ByteString] = ...,
+        mac_len : int = ...,
+        use_aesni : bool = ...) -> \
+        EaxMode: ...
+
+# MODE_SIV
+@overload
+def new(key: ByteString,
+        mode: Literal[10],
+        nonce : Optional[ByteString] = ...,
+        use_aesni : bool = ...) -> \
+        SivMode: ...
+
+# MODE_GCM
+@overload
+def new(key: ByteString,
+        mode: Literal[11],
+        nonce : Optional[ByteString] = ...,
+        mac_len : int = ...,
+        use_aesni : bool = ...) -> \
+        GcmMode: ...
+
+# MODE_OCB
+@overload
+def new(key: ByteString,
+        mode: Literal[12],
+        nonce : Optional[ByteString] = ...,
+        mac_len : int = ...,
+        use_aesni : bool = ...) -> \
+        OcbMode: ...
+
+
+block_size: int
+key_size: Tuple[int, int, int]
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC2.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC2.py
new file mode 100644
index 0000000..4dc1bb8
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC2.py
@@ -0,0 +1,175 @@
+# -*- coding: utf-8 -*-
+#
+# Cipher/ARC2.py : ARC2.py
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+"""
+Module's constants for the modes of operation supported with ARC2:
+
+:var MODE_ECB: :ref:`Electronic Code Book (ECB) <ecb_mode>`
+:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) <cbc_mode>`
+:var MODE_CFB: :ref:`Cipher FeedBack (CFB) <cfb_mode>`
+:var MODE_OFB: :ref:`Output FeedBack (OFB) <ofb_mode>`
+:var MODE_CTR: :ref:`CounTer Mode (CTR) <ctr_mode>`
+:var MODE_OPENPGP: :ref:`OpenPGP Mode <openpgp_mode>`
+:var MODE_EAX: :ref:`EAX Mode <eax_mode>`
+"""
+
+import sys
+
+from Cryptodome.Cipher import _create_cipher
+from Cryptodome.Util.py3compat import byte_string
+from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib,
+                                      VoidPointer, SmartPointer,
+                                      c_size_t, c_uint8_ptr)
+
+_raw_arc2_lib = load_pycryptodome_raw_lib(
+                        "Cryptodome.Cipher._raw_arc2",
+                        """
+                        int ARC2_start_operation(const uint8_t key[],
+                                                 size_t key_len,
+                                                 size_t effective_key_len,
+                                                 void **pResult);
+                        int ARC2_encrypt(const void *state,
+                                         const uint8_t *in,
+                                         uint8_t *out,
+                                         size_t data_len);
+                        int ARC2_decrypt(const void *state,
+                                         const uint8_t *in,
+                                         uint8_t *out,
+                                         size_t data_len);
+                        int ARC2_stop_operation(void *state);
+                        """
+                        )
+
+
+def _create_base_cipher(dict_parameters):
+    """This method instantiates and returns a handle to a low-level
+    base cipher.
It will absorb named parameters in the process.""" + + try: + key = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + effective_keylen = dict_parameters.pop("effective_keylen", 1024) + + if len(key) not in key_size: + raise ValueError("Incorrect ARC2 key length (%d bytes)" % len(key)) + + if not (40 <= effective_keylen <= 1024): + raise ValueError("'effective_key_len' must be at least 40 and no larger than 1024 " + "(not %d)" % effective_keylen) + + start_operation = _raw_arc2_lib.ARC2_start_operation + stop_operation = _raw_arc2_lib.ARC2_stop_operation + + cipher = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + c_size_t(effective_keylen), + cipher.address_of()) + if result: + raise ValueError("Error %X while instantiating the ARC2 cipher" + % result) + + return SmartPointer(cipher.get(), stop_operation) + + +def new(key, mode, *args, **kwargs): + """Create a new RC2 cipher. + + :param key: + The secret key to use in the symmetric cipher. + Its length can vary from 5 to 128 bytes; the actual search space + (and the cipher strength) can be reduced with the ``effective_keylen`` parameter. + :type key: bytes, bytearray, memoryview + + :param mode: + The chaining mode to use for encryption or decryption. + :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 8 bytes long for encryption + and 10 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key. + + For ``MODE_EAX`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CTR``, its length must be in the range **[0..7]**. + + If not provided for ``MODE_EAX``, a random byte string is generated (you + can read it back via the ``nonce`` attribute). + + * **effective_keylen** (*integer*) -- + Optional. Maximum strength in bits of the actual key used by the ARC2 algorithm. + If the supplied ``key`` parameter is longer (in bits) of the value specified + here, it will be weakened to match it. + If not specified, no limitation is applied. + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``) + Length of the authentication tag, in bytes. + It must be no longer than 8 (default). + + * **initial_value** : (*integer*) -- + (Only ``MODE_CTR``). The initial value for the counter within + the counter block. By default it is **0**. + + :Return: an ARC2 object, of the applicable mode. 
+ """ + + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_EAX = 9 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = range(5, 128 + 1) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC2.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC2.pyi new file mode 100644 index 0000000..178c3c0 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC2.pyi @@ -0,0 +1,33 @@ +from typing import Union, Dict, Iterable, Optional, ByteString + +from Cryptodome.Cipher._mode_ecb import EcbMode +from Cryptodome.Cipher._mode_cbc import CbcMode +from Cryptodome.Cipher._mode_cfb import CfbMode +from Cryptodome.Cipher._mode_ofb import OfbMode +from Cryptodome.Cipher._mode_ctr import CtrMode +from Cryptodome.Cipher._mode_openpgp import OpenPgpMode +from Cryptodome.Cipher._mode_eax import EaxMode + +ARC2Mode = int + +MODE_ECB: ARC2Mode +MODE_CBC: ARC2Mode +MODE_CFB: ARC2Mode +MODE_OFB: ARC2Mode +MODE_CTR: ARC2Mode +MODE_OPENPGP: ARC2Mode +MODE_EAX: ARC2Mode + +def new(key: ByteString, + mode: ARC2Mode, + iv : Optional[ByteString] = ..., + IV : Optional[ByteString] = ..., + nonce : Optional[ByteString] = ..., + segment_size : int = ..., + mac_len : int = ..., + initial_value : Union[int, ByteString] = ..., + counter : Dict = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, OpenPgpMode]: ... + +block_size: int +key_size: Iterable[int] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC4.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC4.py new file mode 100644 index 0000000..543a323 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC4.py @@ -0,0 +1,136 @@ +# -*- coding: utf-8 -*- +# +# Cipher/ARC4.py : ARC4 +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, + create_string_buffer, get_raw_buffer, + SmartPointer, c_size_t, c_uint8_ptr) + + +_raw_arc4_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._ARC4", """ + int ARC4_stream_encrypt(void *rc4State, const uint8_t in[], + uint8_t out[], size_t len); + int ARC4_stream_init(uint8_t *key, size_t keylen, + void **pRc4State); + int ARC4_stream_destroy(void *rc4State); + """) + + +class ARC4Cipher: + """ARC4 cipher object. Do not create it directly. Use + :func:`Cryptodome.Cipher.ARC4.new` instead. 
+ """ + + def __init__(self, key, *args, **kwargs): + """Initialize an ARC4 cipher object + + See also `new()` at the module level.""" + + if len(args) > 0: + ndrop = args[0] + args = args[1:] + else: + ndrop = kwargs.pop('drop', 0) + + if len(key) not in key_size: + raise ValueError("Incorrect ARC4 key length (%d bytes)" % + len(key)) + + self._state = VoidPointer() + result = _raw_arc4_lib.ARC4_stream_init(c_uint8_ptr(key), + c_size_t(len(key)), + self._state.address_of()) + if result != 0: + raise ValueError("Error %d while creating the ARC4 cipher" + % result) + self._state = SmartPointer(self._state.get(), + _raw_arc4_lib.ARC4_stream_destroy) + + if ndrop > 0: + # This is OK even if the cipher is used for decryption, + # since encrypt and decrypt are actually the same thing + # with ARC4. + self.encrypt(b'\x00' * ndrop) + + self.block_size = 1 + self.key_size = len(key) + + def encrypt(self, plaintext): + """Encrypt a piece of data. + + :param plaintext: The data to encrypt, of any size. + :type plaintext: bytes, bytearray, memoryview + :returns: the encrypted byte string, of equal length as the + plaintext. + """ + + ciphertext = create_string_buffer(len(plaintext)) + result = _raw_arc4_lib.ARC4_stream_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + ciphertext, + c_size_t(len(plaintext))) + if result: + raise ValueError("Error %d while encrypting with RC4" % result) + return get_raw_buffer(ciphertext) + + def decrypt(self, ciphertext): + """Decrypt a piece of data. + + :param ciphertext: The data to decrypt, of any size. + :type ciphertext: bytes, bytearray, memoryview + :returns: the decrypted byte string, of equal length as the + ciphertext. + """ + + try: + return self.encrypt(ciphertext) + except ValueError as e: + raise ValueError(str(e).replace("enc", "dec")) + + +def new(key, *args, **kwargs): + """Create a new ARC4 cipher. + + :param key: + The secret key to use in the symmetric cipher. + Its length must be in the range ``[1..256]``. + The recommended length is 16 bytes. + :type key: bytes, bytearray, memoryview + + :Keyword Arguments: + * *drop* (``integer``) -- + The amount of bytes to discard from the initial part of the keystream. + In fact, such part has been found to be distinguishable from random + data (while it shouldn't) and also correlated to key. + + The recommended value is 3072_ bytes. The default value is 0. + + :Return: an `ARC4Cipher` object + + .. _3072: http://eprint.iacr.org/2002/067.pdf + """ + return ARC4Cipher(key, *args, **kwargs) + + +# Size of a data block (in bytes) +block_size = 1 +# Size of a key (in bytes) +key_size = range(1, 256+1) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC4.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC4.pyi new file mode 100644 index 0000000..96bf6e2 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ARC4.pyi @@ -0,0 +1,14 @@ +from typing import Any, Union, Iterable, ByteString + +class ARC4Cipher: + block_size: int + key_size: int + + def __init__(self, key: ByteString, *args: Any, **kwargs: Any) -> None: ... + def encrypt(self, plaintext: ByteString) -> bytes: ... + def decrypt(self, ciphertext: ByteString) -> bytes: ... + +def new(key: ByteString, drop : int = ...) -> ARC4Cipher: ... 
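# A usage sketch for the ARC4 module above: drop=3072 discards the biased
# start of the keystream, as the module docstring recommends (illustrative
# example only; RC4 should be reserved for legacy protocols):
from Cryptodome.Cipher import ARC4
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)
ciphertext = ARC4.new(key, drop=3072).encrypt(b"legacy payload")

# RC4 encryption is a keystream XOR, so decrypt() is the same operation
assert ARC4.new(key, drop=3072).decrypt(ciphertext) == b"legacy payload"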
+ +block_size: int +key_size: Iterable[int] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/Blowfish.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/Blowfish.py new file mode 100644 index 0000000..536cbc8 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/Blowfish.py @@ -0,0 +1,159 @@ +# -*- coding: utf-8 -*- +# +# Cipher/Blowfish.py : Blowfish +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== +""" +Module's constants for the modes of operation supported with Blowfish: + +:var MODE_ECB: :ref:`Electronic Code Book (ECB) <ecb_mode>` +:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) <cbc_mode>` +:var MODE_CFB: :ref:`Cipher FeedBack (CFB) <cfb_mode>` +:var MODE_OFB: :ref:`Output FeedBack (OFB) <ofb_mode>` +:var MODE_CTR: :ref:`CounTer Mode (CTR) <ctr_mode>` +:var MODE_OPENPGP: :ref:`OpenPGP Mode <openpgp_mode>` +:var MODE_EAX: :ref:`EAX Mode <eax_mode>` +""" + +import sys + +from Cryptodome.Cipher import _create_cipher +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, c_size_t, + c_uint8_ptr) + +_raw_blowfish_lib = load_pycryptodome_raw_lib( + "Cryptodome.Cipher._raw_blowfish", + """ + int Blowfish_start_operation(const uint8_t key[], + size_t key_len, + void **pResult); + int Blowfish_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int Blowfish_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int Blowfish_stop_operation(void *state); + """ + ) + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a smart pointer to + a low-level base cipher. It will absorb named parameters in + the process.""" + + try: + key = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + if len(key) not in key_size: + raise ValueError("Incorrect Blowfish key length (%d bytes)" % len(key)) + + start_operation = _raw_blowfish_lib.Blowfish_start_operation + stop_operation = _raw_blowfish_lib.Blowfish_stop_operation + + void_p = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + void_p.address_of()) + if result: + raise ValueError("Error %X while instantiating the Blowfish cipher" + % result) + return SmartPointer(void_p.get(), stop_operation) + + +def new(key, mode, *args, **kwargs): + """Create a new Blowfish cipher + + :param key: + The secret key to use in the symmetric cipher. + Its length can vary from 5 to 56 bytes. 
+ :type key: bytes, bytearray, memoryview + + :param mode: + The chaining mode to use for encryption or decryption. + :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 8 bytes long for encryption + and 10 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key. + + For ``MODE_EAX`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CTR``, its length must be in the range **[0..7]**. + + If not provided for ``MODE_EAX``, a random byte string is generated (you + can read it back via the ``nonce`` attribute). + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``) + Length of the authentication tag, in bytes. + It must be no longer than 8 (default). + + * **initial_value** : (*integer*) -- + (Only ``MODE_CTR``). The initial value for the counter within + the counter block. By default it is **0**. + + :Return: a Blowfish object, of the applicable mode. + """ + + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_EAX = 9 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = range(4, 56 + 1) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/Blowfish.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/Blowfish.pyi new file mode 100644 index 0000000..961f07e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/Blowfish.pyi @@ -0,0 +1,33 @@ +from typing import Union, Dict, Iterable, ByteString, Optional + +from Cryptodome.Cipher._mode_ecb import EcbMode +from Cryptodome.Cipher._mode_cbc import CbcMode +from Cryptodome.Cipher._mode_cfb import CfbMode +from Cryptodome.Cipher._mode_ofb import OfbMode +from Cryptodome.Cipher._mode_ctr import CtrMode +from Cryptodome.Cipher._mode_openpgp import OpenPgpMode +from Cryptodome.Cipher._mode_eax import EaxMode + +BlowfishMode = int + +MODE_ECB: BlowfishMode +MODE_CBC: BlowfishMode +MODE_CFB: BlowfishMode +MODE_OFB: BlowfishMode +MODE_CTR: BlowfishMode +MODE_OPENPGP: BlowfishMode +MODE_EAX: BlowfishMode + +def new(key: ByteString, + mode: BlowfishMode, + iv : Optional[ByteString] = ..., + IV : Optional[ByteString] = ..., + nonce : Optional[ByteString] = ..., + segment_size : int = ..., + mac_len : int = ..., + initial_value : Union[int, ByteString] = ..., + counter : Dict = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, OpenPgpMode]: ... 
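# A usage sketch for the Blowfish module above in CBC mode. CBC operates on
# full 8-byte blocks, so the plaintext is padded with the helpers from
# Cryptodome.Util.Padding (illustrative example):
from Cryptodome.Cipher import Blowfish
from Cryptodome.Random import get_random_bytes
from Cryptodome.Util.Padding import pad, unpad

key = get_random_bytes(16)
cipher = Blowfish.new(key, Blowfish.MODE_CBC)   # a random 8-byte IV is generated
ciphertext = cipher.encrypt(pad(b"some message", Blowfish.block_size))

decrypter = Blowfish.new(key, Blowfish.MODE_CBC, iv=cipher.iv)
assert unpad(decrypter.decrypt(ciphertext), Blowfish.block_size) == b"some message"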
+ +block_size: int +key_size: Iterable[int] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/CAST.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/CAST.py new file mode 100644 index 0000000..84eb88e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/CAST.py @@ -0,0 +1,159 @@ +# -*- coding: utf-8 -*- +# +# Cipher/CAST.py : CAST +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== +""" +Module's constants for the modes of operation supported with CAST: + +:var MODE_ECB: :ref:`Electronic Code Book (ECB) <ecb_mode>` +:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) <cbc_mode>` +:var MODE_CFB: :ref:`Cipher FeedBack (CFB) <cfb_mode>` +:var MODE_OFB: :ref:`Output FeedBack (OFB) <ofb_mode>` +:var MODE_CTR: :ref:`CounTer Mode (CTR) <ctr_mode>` +:var MODE_OPENPGP: :ref:`OpenPGP Mode <openpgp_mode>` +:var MODE_EAX: :ref:`EAX Mode <eax_mode>` +""" + +import sys + +from Cryptodome.Cipher import _create_cipher +from Cryptodome.Util.py3compat import byte_string +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + c_size_t, c_uint8_ptr) + +_raw_cast_lib = load_pycryptodome_raw_lib( + "Cryptodome.Cipher._raw_cast", + """ + int CAST_start_operation(const uint8_t key[], + size_t key_len, + void **pResult); + int CAST_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CAST_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CAST_stop_operation(void *state); + """) + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a handle to a low-level + base cipher. It will absorb named parameters in the process.""" + + try: + key = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + if len(key) not in key_size: + raise ValueError("Incorrect CAST key length (%d bytes)" % len(key)) + + start_operation = _raw_cast_lib.CAST_start_operation + stop_operation = _raw_cast_lib.CAST_stop_operation + + cipher = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + cipher.address_of()) + if result: + raise ValueError("Error %X while instantiating the CAST cipher" + % result) + + return SmartPointer(cipher.get(), stop_operation) + + +def new(key, mode, *args, **kwargs): + """Create a new CAST cipher + + :param key: + The secret key to use in the symmetric cipher. + Its length can vary from 5 to 16 bytes. + :type key: bytes, bytearray, memoryview + + :param mode: + The chaining mode to use for encryption or decryption. 
+ :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 8 bytes long for encryption + and 10 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key. + + For ``MODE_EAX`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CTR``, its length must be in the range **[0..7]**. + + If not provided for ``MODE_EAX``, a random byte string is generated (you + can read it back via the ``nonce`` attribute). + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``) + Length of the authentication tag, in bytes. + It must be no longer than 8 (default). + + * **initial_value** : (*integer*) -- + (Only ``MODE_CTR``). The initial value for the counter within + the counter block. By default it is **0**. + + :Return: a CAST object, of the applicable mode. + """ + + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_EAX = 9 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = range(5, 16 + 1) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/CAST.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/CAST.pyi new file mode 100644 index 0000000..5dcd20c --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/CAST.pyi @@ -0,0 +1,33 @@ +from typing import Union, Dict, Iterable, Optional, ByteString + +from Cryptodome.Cipher._mode_ecb import EcbMode +from Cryptodome.Cipher._mode_cbc import CbcMode +from Cryptodome.Cipher._mode_cfb import CfbMode +from Cryptodome.Cipher._mode_ofb import OfbMode +from Cryptodome.Cipher._mode_ctr import CtrMode +from Cryptodome.Cipher._mode_openpgp import OpenPgpMode +from Cryptodome.Cipher._mode_eax import EaxMode + +CASTMode = int + +MODE_ECB: CASTMode +MODE_CBC: CASTMode +MODE_CFB: CASTMode +MODE_OFB: CASTMode +MODE_CTR: CASTMode +MODE_OPENPGP: CASTMode +MODE_EAX: CASTMode + +def new(key: ByteString, + mode: CASTMode, + iv : Optional[ByteString] = ..., + IV : Optional[ByteString] = ..., + nonce : Optional[ByteString] = ..., + segment_size : int = ..., + mac_len : int = ..., + initial_value : Union[int, ByteString] = ..., + counter : Dict = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, OpenPgpMode]: ... 
+ +block_size: int +key_size : Iterable[int] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20.py new file mode 100644 index 0000000..648d692 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20.py @@ -0,0 +1,287 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Random import get_random_bytes + +from Cryptodome.Util.py3compat import _copy_bytes +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + create_string_buffer, + get_raw_buffer, VoidPointer, + SmartPointer, c_size_t, + c_uint8_ptr, c_ulong, + is_writeable_buffer) + +_raw_chacha20_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._chacha20", + """ + int chacha20_init(void **pState, + const uint8_t *key, + size_t keySize, + const uint8_t *nonce, + size_t nonceSize); + + int chacha20_destroy(void *state); + + int chacha20_encrypt(void *state, + const uint8_t in[], + uint8_t out[], + size_t len); + + int chacha20_seek(void *state, + unsigned long block_high, + unsigned long block_low, + unsigned offset); + int hchacha20( const uint8_t key[32], + const uint8_t nonce16[16], + uint8_t subkey[32]); + """) + + +def _HChaCha20(key, nonce): + + assert(len(key) == 32) + assert(len(nonce) == 16) + + subkey = bytearray(32) + result = _raw_chacha20_lib.hchacha20( + c_uint8_ptr(key), + c_uint8_ptr(nonce), + c_uint8_ptr(subkey)) + if result: + raise ValueError("Error %d when deriving subkey with HChaCha20" % result) + + return subkey + + +class ChaCha20Cipher(object): + """ChaCha20 (or XChaCha20) cipher object. + Do not create it directly. Use :py:func:`new` instead. 
+ + :var nonce: The nonce with length 8, 12 or 24 bytes + :vartype nonce: bytes + """ + + block_size = 1 + + def __init__(self, key, nonce): + """Initialize a ChaCha20/XChaCha20 cipher object + + See also `new()` at the module level.""" + + self.nonce = _copy_bytes(None, None, nonce) + + # XChaCha20 requires a key derivation with HChaCha20 + # See 2.3 in https://tools.ietf.org/html/draft-arciszewski-xchacha-03 + if len(nonce) == 24: + key = _HChaCha20(key, nonce[:16]) + nonce = b'\x00' * 4 + nonce[16:] + self._name = "XChaCha20" + else: + self._name = "ChaCha20" + nonce = self.nonce + + self._next = ("encrypt", "decrypt") + + self._state = VoidPointer() + result = _raw_chacha20_lib.chacha20_init( + self._state.address_of(), + c_uint8_ptr(key), + c_size_t(len(key)), + nonce, + c_size_t(len(nonce))) + if result: + raise ValueError("Error %d instantiating a %s cipher" % (result, + self._name)) + self._state = SmartPointer(self._state.get(), + _raw_chacha20_lib.chacha20_destroy) + + def encrypt(self, plaintext, output=None): + """Encrypt a piece of data. + + Args: + plaintext(bytes/bytearray/memoryview): The data to encrypt, of any size. + Keyword Args: + output(bytes/bytearray/memoryview): The location where the ciphertext + is written to. If ``None``, the ciphertext is returned. + Returns: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "encrypt" not in self._next: + raise TypeError("Cipher object can only be used for decryption") + self._next = ("encrypt",) + return self._encrypt(plaintext, output) + + def _encrypt(self, plaintext, output): + """Encrypt without FSM checks""" + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = _raw_chacha20_lib.chacha20_encrypt( + self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + raise ValueError("Error %d while encrypting with %s" % (result, self._name)) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt a piece of data. + + Args: + ciphertext(bytes/bytearray/memoryview): The data to decrypt, of any size. + Keyword Args: + output(bytes/bytearray/memoryview): The location where the plaintext + is written to. If ``None``, the plaintext is returned. + Returns: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "decrypt" not in self._next: + raise TypeError("Cipher object can only be used for encryption") + self._next = ("decrypt",) + + try: + return self._encrypt(ciphertext, output) + except ValueError as e: + raise ValueError(str(e).replace("enc", "dec")) + + def seek(self, position): + """Seek to a certain position in the key stream. + + Args: + position (integer): + The absolute position within the key stream, in bytes. 
+        """
+
+        position, offset = divmod(position, 64)
+        block_low = position & 0xFFFFFFFF
+        block_high = position >> 32
+
+        result = _raw_chacha20_lib.chacha20_seek(
+            self._state.get(),
+            c_ulong(block_high),
+            c_ulong(block_low),
+            offset
+        )
+        if result:
+            raise ValueError("Error %d while seeking with %s" % (result, self._name))
+
+
+def _derive_Poly1305_key_pair(key, nonce):
+    """Derive a tuple (r, s, nonce) for a Poly1305 MAC.
+
+    If nonce is ``None``, a new 12-byte nonce is generated.
+    """
+
+    if len(key) != 32:
+        raise ValueError("Poly1305 with ChaCha20 requires a 32-byte key")
+
+    if nonce is None:
+        padded_nonce = nonce = get_random_bytes(12)
+    elif len(nonce) == 8:
+        # See RFC7539, 2.6: [...] ChaCha20 as specified here requires a 96-bit
+        # nonce. So if the provided nonce is only 64-bit, then the first 32
+        # bits of the nonce will be set to a constant number.
+        # This will usually be zero, but for protocols with multiple senders it may be
+        # different for each sender, but should be the same for all
+        # invocations of the function with the same key by a particular
+        # sender.
+        padded_nonce = b'\x00\x00\x00\x00' + nonce
+    elif len(nonce) == 12:
+        padded_nonce = nonce
+    else:
+        raise ValueError("Poly1305 with ChaCha20 requires an 8- or 12-byte nonce")
+
+    rs = new(key=key, nonce=padded_nonce).encrypt(b'\x00' * 32)
+    return rs[:16], rs[16:], nonce
+
+
+def new(**kwargs):
+    """Create a new ChaCha20 or XChaCha20 cipher
+
+    Keyword Args:
+        key (bytes/bytearray/memoryview): The secret key to use.
+            It must be 32 bytes long.
+        nonce (bytes/bytearray/memoryview): A mandatory value that
+            must never be reused for any other encryption
+            done with this key.
+
+            For ChaCha20, it must be 8 or 12 bytes long.
+
+            For XChaCha20, it must be 24 bytes long.
+
+            If not provided, 8 bytes will be randomly generated
+            (you can find them back in the ``nonce`` attribute).
+
+    :Return: a :class:`Cryptodome.Cipher.ChaCha20.ChaCha20Cipher` object
+    """
+
+    try:
+        key = kwargs.pop("key")
+    except KeyError as e:
+        raise TypeError("Missing parameter %s" % e)
+
+    nonce = kwargs.pop("nonce", None)
+    if nonce is None:
+        nonce = get_random_bytes(8)
+
+    if len(key) != 32:
+        raise ValueError("ChaCha20/XChaCha20 key must be 32 bytes long")
+
+    if len(nonce) not in (8, 12, 24):
+        raise ValueError("Nonce must be 8/12 bytes(ChaCha20) or 24 bytes (XChaCha20)")
+
+    if kwargs:
+        raise TypeError("Unknown parameters: " + str(kwargs))
+
+    return ChaCha20Cipher(key, nonce)
+
+# Size of a data block (in bytes)
+block_size = 1
+
+# Size of a key (in bytes)
+key_size = 32
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20.pyi
new file mode 100644
index 0000000..ba2f699
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20.pyi
@@ -0,0 +1,23 @@
+from typing import Union, overload, ByteString, Optional
+
+def _HChaCha20(key: ByteString, nonce: ByteString) -> bytearray: ...
+
+class ChaCha20Cipher:
+    block_size: int
+    nonce: bytes
+
+    def __init__(self, key: ByteString, nonce: ByteString) -> None: ...
+    @overload
+    def encrypt(self, plaintext: ByteString) -> bytes: ...
+    @overload
+    def encrypt(self, plaintext: ByteString, output: Union[bytearray, memoryview]) -> None: ...
+    @overload
+    def decrypt(self, plaintext: ByteString) -> bytes: ...
+    @overload
+    def decrypt(self, plaintext: ByteString, output: Union[bytearray, memoryview]) -> None: ...
+    def seek(self, position: int) -> None: ...
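# A usage sketch for the ChaCha20 module above, showing the random keystream
# access that seek() provides (illustrative example):
from Cryptodome.Cipher import ChaCha20
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(32)
cipher = ChaCha20.new(key=key)               # an 8-byte nonce is generated
ciphertext = cipher.encrypt(b"0123456789abcdef")

# Decrypt only the tail: position the keystream at byte 10 first
decrypter = ChaCha20.new(key=key, nonce=cipher.nonce)
decrypter.seek(10)
assert decrypter.decrypt(ciphertext[10:]) == b"abcdef"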
+ +def new(key: ByteString, nonce: Optional[ByteString] = ...) -> ChaCha20Cipher: ... + +block_size: int +key_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.py new file mode 100644 index 0000000..b2923ed --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.py @@ -0,0 +1,336 @@ +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from binascii import unhexlify + +from Cryptodome.Cipher import ChaCha20 +from Cryptodome.Cipher.ChaCha20 import _HChaCha20 +from Cryptodome.Hash import Poly1305, BLAKE2s + +from Cryptodome.Random import get_random_bytes + +from Cryptodome.Util.number import long_to_bytes +from Cryptodome.Util.py3compat import _copy_bytes, bord +from Cryptodome.Util._raw_api import is_buffer + + +def _enum(**enums): + return type('Enum', (), enums) + + +_CipherStatus = _enum(PROCESSING_AUTH_DATA=1, + PROCESSING_CIPHERTEXT=2, + PROCESSING_DONE=3) + + +class ChaCha20Poly1305Cipher(object): + """ChaCha20-Poly1305 and XChaCha20-Poly1305 cipher object. + Do not create it directly. Use :py:func:`new` instead. + + :var nonce: The nonce with length 8, 12 or 24 bytes + :vartype nonce: byte string + """ + + def __init__(self, key, nonce): + """Initialize a ChaCha20-Poly1305 AEAD cipher object + + See also `new()` at the module level.""" + + self._next = ("update", "encrypt", "decrypt", "digest", + "verify") + + self._authenticator = Poly1305.new(key=key, nonce=nonce, cipher=ChaCha20) + + self._cipher = ChaCha20.new(key=key, nonce=nonce) + self._cipher.seek(64) # Block counter starts at 1 + + self._len_aad = 0 + self._len_ct = 0 + self._mac_tag = None + self._status = _CipherStatus.PROCESSING_AUTH_DATA + + def update(self, data): + """Protect the associated data. + + Associated data (also known as *additional authenticated data* - AAD) + is the piece of the message that must stay in the clear, while + still allowing the receiver to verify its integrity. 
+ An example is packet headers. + + The associated data (possibly split into multiple segments) is + fed into :meth:`update` before any call to :meth:`decrypt` or :meth:`encrypt`. + If there is no associated data, :meth:`update` is not called. + + :param bytes/bytearray/memoryview assoc_data: + A piece of associated data. There are no restrictions on its size. + """ + + if "update" not in self._next: + raise TypeError("update() method cannot be called") + + self._len_aad += len(data) + self._authenticator.update(data) + + def _pad_aad(self): + + assert(self._status == _CipherStatus.PROCESSING_AUTH_DATA) + if self._len_aad & 0x0F: + self._authenticator.update(b'\x00' * (16 - (self._len_aad & 0x0F))) + self._status = _CipherStatus.PROCESSING_CIPHERTEXT + + def encrypt(self, plaintext, output=None): + """Encrypt a piece of data. + + Args: + plaintext(bytes/bytearray/memoryview): The data to encrypt, of any size. + Keyword Args: + output(bytes/bytearray/memoryview): The location where the ciphertext + is written to. If ``None``, the ciphertext is returned. + Returns: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "encrypt" not in self._next: + raise TypeError("encrypt() method cannot be called") + + if self._status == _CipherStatus.PROCESSING_AUTH_DATA: + self._pad_aad() + + self._next = ("encrypt", "digest") + + result = self._cipher.encrypt(plaintext, output=output) + self._len_ct += len(plaintext) + if output is None: + self._authenticator.update(result) + else: + self._authenticator.update(output) + return result + + def decrypt(self, ciphertext, output=None): + """Decrypt a piece of data. + + Args: + ciphertext(bytes/bytearray/memoryview): The data to decrypt, of any size. + Keyword Args: + output(bytes/bytearray/memoryview): The location where the plaintext + is written to. If ``None``, the plaintext is returned. + Returns: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "decrypt" not in self._next: + raise TypeError("decrypt() method cannot be called") + + if self._status == _CipherStatus.PROCESSING_AUTH_DATA: + self._pad_aad() + + self._next = ("decrypt", "verify") + + self._len_ct += len(ciphertext) + self._authenticator.update(ciphertext) + return self._cipher.decrypt(ciphertext, output=output) + + def _compute_mac(self): + """Finalize the cipher (if not done already) and return the MAC.""" + + if self._mac_tag: + assert(self._status == _CipherStatus.PROCESSING_DONE) + return self._mac_tag + + assert(self._status != _CipherStatus.PROCESSING_DONE) + + if self._status == _CipherStatus.PROCESSING_AUTH_DATA: + self._pad_aad() + + if self._len_ct & 0x0F: + self._authenticator.update(b'\x00' * (16 - (self._len_ct & 0x0F))) + + self._status = _CipherStatus.PROCESSING_DONE + + self._authenticator.update(long_to_bytes(self._len_aad, 8)[::-1]) + self._authenticator.update(long_to_bytes(self._len_ct, 8)[::-1]) + self._mac_tag = self._authenticator.digest() + return self._mac_tag + + def digest(self): + """Compute the *binary* authentication tag (MAC). + + :Return: the MAC tag, as 16 ``bytes``. + """ + + if "digest" not in self._next: + raise TypeError("digest() method cannot be called") + self._next = ("digest",) + + return self._compute_mac() + + def hexdigest(self): + """Compute the *printable* authentication tag (MAC). + + This method is like :meth:`digest`. + + :Return: the MAC tag, as a hexadecimal string. 
+ """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* authentication tag (MAC). + + The receiver invokes this method at the very end, to + check if the associated data (if any) and the decrypted + messages are valid. + + :param bytes/bytearray/memoryview received_mac_tag: + This is the 16-byte *binary* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if "verify" not in self._next: + raise TypeError("verify() cannot be called" + " when encrypting a message") + self._next = ("verify",) + + secret = get_random_bytes(16) + + self._compute_mac() + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, + data=self._mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, + data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* authentication tag (MAC). + + This method is like :meth:`verify`. + + :param string hex_mac_tag: + This is the *printable* MAC. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext): + """Perform :meth:`encrypt` and :meth:`digest` in one step. + + :param plaintext: The data to encrypt, of any size. + :type plaintext: bytes/bytearray/memoryview + :return: a tuple with two ``bytes`` objects: + + - the ciphertext, of equal length as the plaintext + - the 16-byte MAC tag + """ + + return self.encrypt(plaintext), self.digest() + + def decrypt_and_verify(self, ciphertext, received_mac_tag): + """Perform :meth:`decrypt` and :meth:`verify` in one step. + + :param ciphertext: The piece of data to decrypt. + :type ciphertext: bytes/bytearray/memoryview + :param bytes received_mac_tag: + This is the 16-byte *binary* MAC, as received from the sender. + :return: the decrypted data (as ``bytes``) + :raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + plaintext = self.decrypt(ciphertext) + self.verify(received_mac_tag) + return plaintext + + +def new(**kwargs): + """Create a new ChaCha20-Poly1305 or XChaCha20-Poly1305 AEAD cipher. + + :keyword key: The secret key to use. It must be 32 bytes long. + :type key: byte string + + :keyword nonce: + A value that must never be reused for any other encryption + done with this key. + + For ChaCha20-Poly1305, it must be 8 or 12 bytes long. + + For XChaCha20-Poly1305, it must be 24 bytes long. + + If not provided, 12 ``bytes`` will be generated randomly + (you can find them back in the ``nonce`` attribute). 
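# A usage sketch for the AEAD construction defined above: associated data goes
# through update() on both sides, the payload through encrypt_and_digest(),
# and the receiver authenticates with the 16-byte tag (illustrative example):
from Cryptodome.Cipher import ChaCha20_Poly1305
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(32)
cipher = ChaCha20_Poly1305.new(key=key)      # a 12-byte nonce is generated
cipher.update(b"header")                     # authenticated but not encrypted
ciphertext, tag = cipher.encrypt_and_digest(b"payload")

receiver = ChaCha20_Poly1305.new(key=key, nonce=cipher.nonce)
receiver.update(b"header")
assert receiver.decrypt_and_verify(ciphertext, tag) == b"payload"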
+ :type nonce: bytes, bytearray, memoryview + + :Return: a :class:`Cryptodome.Cipher.ChaCha20.ChaCha20Poly1305Cipher` object + """ + + try: + key = kwargs.pop("key") + except KeyError as e: + raise TypeError("Missing parameter %s" % e) + + if len(key) != 32: + raise ValueError("Key must be 32 bytes long") + + nonce = kwargs.pop("nonce", None) + if nonce is None: + nonce = get_random_bytes(12) + + if len(nonce) in (8, 12): + chacha20_poly1305_nonce = nonce + elif len(nonce) == 24: + key = _HChaCha20(key, nonce[:16]) + chacha20_poly1305_nonce = b'\x00\x00\x00\x00' + nonce[16:] + else: + raise ValueError("Nonce must be 8, 12 or 24 bytes long") + + if not is_buffer(nonce): + raise TypeError("nonce must be bytes, bytearray or memoryview") + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + cipher = ChaCha20Poly1305Cipher(key, chacha20_poly1305_nonce) + cipher.nonce = _copy_bytes(None, None, nonce) + return cipher + + +# Size of a key (in bytes) +key_size = 32 diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.pyi new file mode 100644 index 0000000..f4b2a42 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/ChaCha20_Poly1305.pyi @@ -0,0 +1,26 @@ +from typing import Union, Tuple, overload, ByteString, Optional + +class ChaCha20Poly1305Cipher: + nonce: bytes + + def __init__(self, key: ByteString, nonce: ByteString) -> None: ... + def update(self, data: ByteString) -> None: ... + @overload + def encrypt(self, plaintext: ByteString) -> bytes: ... + @overload + def encrypt(self, plaintext: ByteString, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: ByteString) -> bytes: ... + @overload + def decrypt(self, plaintext: ByteString, output: Union[bytearray, memoryview]) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, received_mac_tag: ByteString) -> None: ... + def hexverify(self, received_mac_tag: str) -> None: ... + def encrypt_and_digest(self, plaintext: ByteString) -> Tuple[bytes, bytes]: ... + def decrypt_and_verify(self, ciphertext: ByteString, received_mac_tag: ByteString) -> bytes: ... + +def new(key: ByteString, nonce: Optional[ByteString] = ...) -> ChaCha20Poly1305Cipher: ... + +block_size: int +key_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES.py new file mode 100644 index 0000000..026b491 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES.py @@ -0,0 +1,158 @@ +# -*- coding: utf-8 -*- +# +# Cipher/DES.py : DES +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== +""" +Module's constants for the modes of operation supported with Single DES: + +:var MODE_ECB: :ref:`Electronic Code Book (ECB) <ecb_mode>` +:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) <cbc_mode>` +:var MODE_CFB: :ref:`Cipher FeedBack (CFB) <cfb_mode>` +:var MODE_OFB: :ref:`Output FeedBack (OFB) <ofb_mode>` +:var MODE_CTR: :ref:`CounTer Mode (CTR) <ctr_mode>` +:var MODE_OPENPGP: :ref:`OpenPGP Mode <openpgp_mode>` +:var MODE_EAX: :ref:`EAX Mode <eax_mode>` +""" + +import sys + +from Cryptodome.Cipher import _create_cipher +from Cryptodome.Util.py3compat import byte_string +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + c_size_t, c_uint8_ptr) + +_raw_des_lib = load_pycryptodome_raw_lib( + "Cryptodome.Cipher._raw_des", + """ + int DES_start_operation(const uint8_t key[], + size_t key_len, + void **pResult); + int DES_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int DES_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int DES_stop_operation(void *state); + """) + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a handle to a low-level + base cipher. It will absorb named parameters in the process.""" + + try: + key = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + if len(key) != key_size: + raise ValueError("Incorrect DES key length (%d bytes)" % len(key)) + + start_operation = _raw_des_lib.DES_start_operation + stop_operation = _raw_des_lib.DES_stop_operation + + cipher = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + cipher.address_of()) + if result: + raise ValueError("Error %X while instantiating the DES cipher" + % result) + return SmartPointer(cipher.get(), stop_operation) + + +def new(key, mode, *args, **kwargs): + """Create a new DES cipher. + + :param key: + The secret key to use in the symmetric cipher. + It must be 8 byte long. The parity bits will be ignored. + :type key: bytes/bytearray/memoryview + + :param mode: + The chaining mode to use for encryption or decryption. + :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*byte string*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. + + For ``MODE_OPENPGP`` mode only, + it must be 8 bytes long for encryption + and 10 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*byte string*) -- + (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key. + + For ``MODE_EAX`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CTR``, its length must be in the range **[0..7]**. 
+ + If not provided for ``MODE_EAX``, a random byte string is generated (you + can read it back via the ``nonce`` attribute). + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``) + Length of the authentication tag, in bytes. + It must be no longer than 8 (default). + + * **initial_value** : (*integer*) -- + (Only ``MODE_CTR``). The initial value for the counter within + the counter block. By default it is **0**. + + :Return: a DES object, of the applicable mode. + """ + + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_EAX = 9 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = 8 diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES.pyi new file mode 100644 index 0000000..dc18713 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES.pyi @@ -0,0 +1,33 @@ +from typing import Union, Dict, Iterable, ByteString, Optional + +from Cryptodome.Cipher._mode_ecb import EcbMode +from Cryptodome.Cipher._mode_cbc import CbcMode +from Cryptodome.Cipher._mode_cfb import CfbMode +from Cryptodome.Cipher._mode_ofb import OfbMode +from Cryptodome.Cipher._mode_ctr import CtrMode +from Cryptodome.Cipher._mode_openpgp import OpenPgpMode +from Cryptodome.Cipher._mode_eax import EaxMode + +DESMode = int + +MODE_ECB: DESMode +MODE_CBC: DESMode +MODE_CFB: DESMode +MODE_OFB: DESMode +MODE_CTR: DESMode +MODE_OPENPGP: DESMode +MODE_EAX: DESMode + +def new(key: ByteString, + mode: DESMode, + iv : Optional[ByteString] = ..., + IV : Optional[ByteString] = ..., + nonce : Optional[ByteString] = ..., + segment_size : int = ..., + mac_len : int = ..., + initial_value : Union[int, ByteString] = ..., + counter : Dict = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, OpenPgpMode]: ... + +block_size: int +key_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES3.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES3.py new file mode 100644 index 0000000..3b2828e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES3.py @@ -0,0 +1,187 @@ +# -*- coding: utf-8 -*- +# +# Cipher/DES3.py : DES3 +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== +""" +Module's constants for the modes of operation supported with Triple DES: + +:var MODE_ECB: :ref:`Electronic Code Book (ECB) <ecb_mode>` +:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) <cbc_mode>` +:var MODE_CFB: :ref:`Cipher FeedBack (CFB) <cfb_mode>` +:var MODE_OFB: :ref:`Output FeedBack (OFB) <ofb_mode>` +:var MODE_CTR: :ref:`CounTer Mode (CTR) <ctr_mode>` +:var MODE_OPENPGP: :ref:`OpenPGP Mode <openpgp_mode>` +:var MODE_EAX: :ref:`EAX Mode <eax_mode>` +""" + +import sys + +from Cryptodome.Cipher import _create_cipher +from Cryptodome.Util.py3compat import byte_string, bchr, bord, bstr +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + c_size_t) + +_raw_des3_lib = load_pycryptodome_raw_lib( + "Cryptodome.Cipher._raw_des3", + """ + int DES3_start_operation(const uint8_t key[], + size_t key_len, + void **pResult); + int DES3_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int DES3_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int DES3_stop_operation(void *state); + """) + + +def adjust_key_parity(key_in): + """Set the parity bits in a TDES key. + + :param key_in: the TDES key whose bits need to be adjusted + :type key_in: byte string + + :returns: a copy of ``key_in``, with the parity bits correctly set + :rtype: byte string + + :raises ValueError: if the TDES key is not 16 or 24 bytes long + :raises ValueError: if the TDES key degenerates into Single DES + """ + + def parity_byte(key_byte): + parity = 1 + for i in range(1, 8): + parity ^= (key_byte >> i) & 1 + return (key_byte & 0xFE) | parity + + if len(key_in) not in key_size: + raise ValueError("Not a valid TDES key") + + key_out = b"".join([ bchr(parity_byte(bord(x))) for x in key_in ]) + + if key_out[:8] == key_out[8:16] or key_out[-16:-8] == key_out[-8:]: + raise ValueError("Triple DES key degenerates to single DES") + + return key_out + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a handle to a low-level base cipher. + It will absorb named parameters in the process.""" + + try: + key_in = dict_parameters.pop("key") + except KeyError: + raise TypeError("Missing 'key' parameter") + + key = adjust_key_parity(bstr(key_in)) + + start_operation = _raw_des3_lib.DES3_start_operation + stop_operation = _raw_des3_lib.DES3_stop_operation + + cipher = VoidPointer() + result = start_operation(key, + c_size_t(len(key)), + cipher.address_of()) + if result: + raise ValueError("Error %X while instantiating the TDES cipher" + % result) + return SmartPointer(cipher.get(), stop_operation) + + +def new(key, mode, *args, **kwargs): + """Create a new Triple DES cipher. + + :param key: + The secret key to use in the symmetric cipher. + It must be 16 or 24 byte long. The parity bits will be ignored. + :type key: bytes/bytearray/memoryview + + :param mode: + The chaining mode to use for encryption or decryption. + :type mode: One of the supported ``MODE_*`` constants + + :Keyword Arguments: + * **iv** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, + and ``MODE_OPENPGP`` modes). + + The initialization vector to use for encryption or decryption. + + For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. 
+ + For ``MODE_OPENPGP`` mode only, + it must be 8 bytes long for encryption + and 10 bytes for decryption (in the latter case, it is + actually the *encrypted* IV which was prefixed to the ciphertext). + + If not provided, a random byte string is generated (you must then + read its value with the :attr:`iv` attribute). + + * **nonce** (*bytes*, *bytearray*, *memoryview*) -- + (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). + + A value that must never be reused for any other encryption done + with this key. + + For ``MODE_EAX`` there are no + restrictions on its length (recommended: **16** bytes). + + For ``MODE_CTR``, its length must be in the range **[0..7]**. + + If not provided for ``MODE_EAX``, a random byte string is generated (you + can read it back via the ``nonce`` attribute). + + * **segment_size** (*integer*) -- + (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext + are segmented in. It must be a multiple of 8. + If not specified, it will be assumed to be 8. + + * **mac_len** : (*integer*) -- + (Only ``MODE_EAX``) + Length of the authentication tag, in bytes. + It must be no longer than 8 (default). + + * **initial_value** : (*integer*) -- + (Only ``MODE_CTR``). The initial value for the counter within + the counter block. By default it is **0**. + + :Return: a Triple DES object, of the applicable mode. + """ + + return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) + +MODE_ECB = 1 +MODE_CBC = 2 +MODE_CFB = 3 +MODE_OFB = 5 +MODE_CTR = 6 +MODE_OPENPGP = 7 +MODE_EAX = 9 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = (16, 24) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES3.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES3.pyi new file mode 100644 index 0000000..d5eac4d --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/DES3.pyi @@ -0,0 +1,35 @@ +from typing import Union, Dict, Tuple, ByteString, Optional + +from Cryptodome.Cipher._mode_ecb import EcbMode +from Cryptodome.Cipher._mode_cbc import CbcMode +from Cryptodome.Cipher._mode_cfb import CfbMode +from Cryptodome.Cipher._mode_ofb import OfbMode +from Cryptodome.Cipher._mode_ctr import CtrMode +from Cryptodome.Cipher._mode_openpgp import OpenPgpMode +from Cryptodome.Cipher._mode_eax import EaxMode + +def adjust_key_parity(key_in: bytes) -> bytes: ... + +DES3Mode = int + +MODE_ECB: DES3Mode +MODE_CBC: DES3Mode +MODE_CFB: DES3Mode +MODE_OFB: DES3Mode +MODE_CTR: DES3Mode +MODE_OPENPGP: DES3Mode +MODE_EAX: DES3Mode + +def new(key: ByteString, + mode: DES3Mode, + iv : Optional[ByteString] = ..., + IV : Optional[ByteString] = ..., + nonce : Optional[ByteString] = ..., + segment_size : int = ..., + mac_len : int = ..., + initial_value : Union[int, ByteString] = ..., + counter : Dict = ...) -> \ + Union[EcbMode, CbcMode, CfbMode, OfbMode, CtrMode, OpenPgpMode]: ... + +block_size: int +key_size: Tuple[int, int] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_OAEP.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_OAEP.py new file mode 100644 index 0000000..7525c5d --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_OAEP.py @@ -0,0 +1,239 @@ +# -*- coding: utf-8 -*- +# +# Cipher/PKCS1_OAEP.py : PKCS#1 OAEP +# +# =================================================================== +# The contents of this file are dedicated to the public domain. 
To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Signature.pss import MGF1 +import Cryptodome.Hash.SHA1 + +from Cryptodome.Util.py3compat import bord, _copy_bytes +import Cryptodome.Util.number +from Cryptodome.Util.number import ceil_div, bytes_to_long, long_to_bytes +from Cryptodome.Util.strxor import strxor +from Cryptodome import Random + +class PKCS1OAEP_Cipher: + """Cipher object for PKCS#1 v1.5 OAEP. + Do not create directly: use :func:`new` instead.""" + + def __init__(self, key, hashAlgo, mgfunc, label, randfunc): + """Initialize this PKCS#1 OAEP cipher object. + + :Parameters: + key : an RSA key object + If a private half is given, both encryption and decryption are possible. + If a public half is given, only encryption is possible. + hashAlgo : hash object + The hash function to use. This can be a module under `Cryptodome.Hash` + or an existing hash object created from any of such modules. If not specified, + `Cryptodome.Hash.SHA1` is used. + mgfunc : callable + A mask generation function that accepts two parameters: a string to + use as seed, and the lenth of the mask to generate, in bytes. + If not specified, the standard MGF1 consistent with ``hashAlgo`` is used (a safe choice). + label : bytes/bytearray/memoryview + A label to apply to this particular encryption. If not specified, + an empty string is used. Specifying a label does not improve + security. + randfunc : callable + A function that returns random bytes. + + :attention: Modify the mask generation function only if you know what you are doing. + Sender and receiver must use the same one. + """ + self._key = key + + if hashAlgo: + self._hashObj = hashAlgo + else: + self._hashObj = Cryptodome.Hash.SHA1 + + if mgfunc: + self._mgf = mgfunc + else: + self._mgf = lambda x,y: MGF1(x,y,self._hashObj) + + self._label = _copy_bytes(None, None, label) + self._randfunc = randfunc + + def can_encrypt(self): + """Legacy function to check if you can call :meth:`encrypt`. + + .. deprecated:: 3.0""" + return self._key.can_encrypt() + + def can_decrypt(self): + """Legacy function to check if you can call :meth:`decrypt`. + + .. deprecated:: 3.0""" + return self._key.can_decrypt() + + def encrypt(self, message): + """Encrypt a message with PKCS#1 OAEP. + + :param message: + The message to encrypt, also known as plaintext. It can be of + variable length, but not longer than the RSA modulus (in bytes) + minus 2, minus twice the hash output size. + For instance, if you use RSA 2048 and SHA-256, the longest message + you can encrypt is 190 byte long. + :type message: bytes/bytearray/memoryview + + :returns: The ciphertext, as large as the RSA modulus. + :rtype: bytes + + :raises ValueError: + if the message is too long. 
+ """ + + # See 7.1.1 in RFC3447 + modBits = Cryptodome.Util.number.size(self._key.n) + k = ceil_div(modBits, 8) # Convert from bits to bytes + hLen = self._hashObj.digest_size + mLen = len(message) + + # Step 1b + ps_len = k - mLen - 2 * hLen - 2 + if ps_len < 0: + raise ValueError("Plaintext is too long.") + # Step 2a + lHash = self._hashObj.new(self._label).digest() + # Step 2b + ps = b'\x00' * ps_len + # Step 2c + db = lHash + ps + b'\x01' + _copy_bytes(None, None, message) + # Step 2d + ros = self._randfunc(hLen) + # Step 2e + dbMask = self._mgf(ros, k-hLen-1) + # Step 2f + maskedDB = strxor(db, dbMask) + # Step 2g + seedMask = self._mgf(maskedDB, hLen) + # Step 2h + maskedSeed = strxor(ros, seedMask) + # Step 2i + em = b'\x00' + maskedSeed + maskedDB + # Step 3a (OS2IP) + em_int = bytes_to_long(em) + # Step 3b (RSAEP) + m_int = self._key._encrypt(em_int) + # Step 3c (I2OSP) + c = long_to_bytes(m_int, k) + return c + + def decrypt(self, ciphertext): + """Decrypt a message with PKCS#1 OAEP. + + :param ciphertext: The encrypted message. + :type ciphertext: bytes/bytearray/memoryview + + :returns: The original message (plaintext). + :rtype: bytes + + :raises ValueError: + if the ciphertext has the wrong length, or if decryption + fails the integrity check (in which case, the decryption + key is probably wrong). + :raises TypeError: + if the RSA key has no private half (i.e. you are trying + to decrypt using a public key). + """ + + # See 7.1.2 in RFC3447 + modBits = Cryptodome.Util.number.size(self._key.n) + k = ceil_div(modBits,8) # Convert from bits to bytes + hLen = self._hashObj.digest_size + + # Step 1b and 1c + if len(ciphertext) != k or k<hLen+2: + raise ValueError("Ciphertext with incorrect length.") + # Step 2a (O2SIP) + ct_int = bytes_to_long(ciphertext) + # Step 2b (RSADP) + m_int = self._key._decrypt(ct_int) + # Complete step 2c (I2OSP) + em = long_to_bytes(m_int, k) + # Step 3a + lHash = self._hashObj.new(self._label).digest() + # Step 3b + y = em[0] + # y must be 0, but we MUST NOT check it here in order not to + # allow attacks like Manger's (http://dl.acm.org/citation.cfm?id=704143) + maskedSeed = em[1:hLen+1] + maskedDB = em[hLen+1:] + # Step 3c + seedMask = self._mgf(maskedDB, hLen) + # Step 3d + seed = strxor(maskedSeed, seedMask) + # Step 3e + dbMask = self._mgf(seed, k-hLen-1) + # Step 3f + db = strxor(maskedDB, dbMask) + # Step 3g + one_pos = hLen + db[hLen:].find(b'\x01') + lHash1 = db[:hLen] + invalid = bord(y) | int(one_pos < hLen) + hash_compare = strxor(lHash1, lHash) + for x in hash_compare: + invalid |= bord(x) + for x in db[hLen:one_pos]: + invalid |= bord(x) + if invalid != 0: + raise ValueError("Incorrect decryption.") + # Step 4 + return db[one_pos + 1:] + +def new(key, hashAlgo=None, mgfunc=None, label=b'', randfunc=None): + """Return a cipher object :class:`PKCS1OAEP_Cipher` that can be used to perform PKCS#1 OAEP encryption or decryption. + + :param key: + The key object to use to encrypt or decrypt the message. + Decryption is only possible with a private RSA key. + :type key: RSA key object + + :param hashAlgo: + The hash function to use. This can be a module under `Cryptodome.Hash` + or an existing hash object created from any of such modules. + If not specified, `Cryptodome.Hash.SHA1` is used. + :type hashAlgo: hash object + + :param mgfunc: + A mask generation function that accepts two parameters: a string to + use as seed, and the lenth of the mask to generate, in bytes. 
+ If not specified, the standard MGF1 consistent with ``hashAlgo`` is used (a safe choice). + :type mgfunc: callable + + :param label: + A label to apply to this particular encryption. If not specified, + an empty string is used. Specifying a label does not improve + security. + :type label: bytes/bytearray/memoryview + + :param randfunc: + A function that returns random bytes. + The default is `Random.get_random_bytes`. + :type randfunc: callable + """ + + if randfunc is None: + randfunc = Random.get_random_bytes + return PKCS1OAEP_Cipher(key, hashAlgo, mgfunc, label, randfunc) + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_OAEP.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_OAEP.pyi new file mode 100644 index 0000000..b54cd3f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_OAEP.pyi @@ -0,0 +1,35 @@ +from typing import Optional, Union, Callable, Any, overload +from typing_extensions import Protocol + +from Cryptodome.PublicKey.RSA import RsaKey + +class HashLikeClass(Protocol): + digest_size : int + def new(self, data: Optional[bytes] = ...) -> Any: ... + +class HashLikeModule(Protocol): + digest_size : int + @staticmethod + def new(data: Optional[bytes] = ...) -> Any: ... + +HashLike = Union[HashLikeClass, HashLikeModule] + +Buffer = Union[bytes, bytearray, memoryview] + +class PKCS1OAEP_Cipher: + def __init__(self, + key: RsaKey, + hashAlgo: HashLike, + mgfunc: Callable[[bytes, int], bytes], + label: Buffer, + randfunc: Callable[[int], bytes]) -> None: ... + def can_encrypt(self) -> bool: ... + def can_decrypt(self) -> bool: ... + def encrypt(self, message: Buffer) -> bytes: ... + def decrypt(self, ciphertext: Buffer) -> bytes: ... + +def new(key: RsaKey, + hashAlgo: Optional[HashLike] = ..., + mgfunc: Optional[Callable[[bytes, int], bytes]] = ..., + label: Optional[Buffer] = ..., + randfunc: Optional[Callable[[int], bytes]] = ...) -> PKCS1OAEP_Cipher: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_v1_5.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_v1_5.py new file mode 100644 index 0000000..17ef9eb --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_v1_5.py @@ -0,0 +1,217 @@ +# -*- coding: utf-8 -*- +# +# Cipher/PKCS1-v1_5.py : PKCS#1 v1.5 +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +__all__ = ['new', 'PKCS115_Cipher'] + +from Cryptodome import Random +from Cryptodome.Util.number import bytes_to_long, long_to_bytes +from Cryptodome.Util.py3compat import bord, is_bytes, _copy_bytes + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, c_size_t, + c_uint8_ptr) + + +_raw_pkcs1_decode = load_pycryptodome_raw_lib("Cryptodome.Cipher._pkcs1_decode", + """ + int pkcs1_decode(const uint8_t *em, size_t len_em, + const uint8_t *sentinel, size_t len_sentinel, + size_t expected_pt_len, + uint8_t *output); + """) + + +def _pkcs1_decode(em, sentinel, expected_pt_len, output): + if len(em) != len(output): + raise ValueError("Incorrect output length") + + ret = _raw_pkcs1_decode.pkcs1_decode(c_uint8_ptr(em), + c_size_t(len(em)), + c_uint8_ptr(sentinel), + c_size_t(len(sentinel)), + c_size_t(expected_pt_len), + c_uint8_ptr(output)) + return ret + + +class PKCS115_Cipher: + """This cipher can perform PKCS#1 v1.5 RSA encryption or decryption. + Do not instantiate directly. Use :func:`Cryptodome.Cipher.PKCS1_v1_5.new` instead.""" + + def __init__(self, key, randfunc): + """Initialize this PKCS#1 v1.5 cipher object. + + :Parameters: + key : an RSA key object + If a private half is given, both encryption and decryption are possible. + If a public half is given, only encryption is possible. + randfunc : callable + Function that returns random bytes. + """ + + self._key = key + self._randfunc = randfunc + + def can_encrypt(self): + """Return True if this cipher object can be used for encryption.""" + return self._key.can_encrypt() + + def can_decrypt(self): + """Return True if this cipher object can be used for decryption.""" + return self._key.can_decrypt() + + def encrypt(self, message): + """Produce the PKCS#1 v1.5 encryption of a message. + + This function is named ``RSAES-PKCS1-V1_5-ENCRYPT``, and it is specified in + `section 7.2.1 of RFC8017 + <https://tools.ietf.org/html/rfc8017#page-28>`_. + + :param message: + The message to encrypt, also known as plaintext. It can be of + variable length, but not longer than the RSA modulus (in bytes) minus 11. + :type message: bytes/bytearray/memoryview + + :Returns: A byte string, the ciphertext in which the message is encrypted. + It is as long as the RSA modulus (in bytes). + + :Raises ValueError: + If the RSA key length is not sufficiently long to deal with the given + message. + """ + + # See 7.2.1 in RFC8017 + k = self._key.size_in_bytes() + mLen = len(message) + + # Step 1 + if mLen > k - 11: + raise ValueError("Plaintext is too long.") + # Step 2a + ps = [] + while len(ps) != k - mLen - 3: + new_byte = self._randfunc(1) + if bord(new_byte[0]) == 0x00: + continue + ps.append(new_byte) + ps = b"".join(ps) + assert(len(ps) == k - mLen - 3) + # Step 2b + em = b'\x00\x02' + ps + b'\x00' + _copy_bytes(None, None, message) + # Step 3a (OS2IP) + em_int = bytes_to_long(em) + # Step 3b (RSAEP) + m_int = self._key._encrypt(em_int) + # Step 3c (I2OSP) + c = long_to_bytes(m_int, k) + return c + + def decrypt(self, ciphertext, sentinel, expected_pt_len=0): + r"""Decrypt a PKCS#1 v1.5 ciphertext. + + This is the function ``RSAES-PKCS1-V1_5-DECRYPT`` specified in + `section 7.2.2 of RFC8017 + <https://tools.ietf.org/html/rfc8017#page-29>`_. + + Args: + ciphertext (bytes/bytearray/memoryview): + The ciphertext that contains the message to recover. + sentinel (any type): + The object to return whenever an error is detected. 
+ expected_pt_len (integer): + The length the plaintext is known to have, or 0 if unknown. + + Returns (byte string): + It is either the original message or the ``sentinel`` (in case of an error). + + .. warning:: + PKCS#1 v1.5 decryption is intrinsically vulnerable to timing + attacks (see `Bleichenbacher's`__ attack). + **Use PKCS#1 OAEP instead**. + + This implementation attempts to mitigate the risk + with some constant-time constructs. + However, they are not sufficient by themselves: the type of protocol you + implement and the way you handle errors make a big difference. + + Specifically, you should make it very hard for the (malicious) + party that submitted the ciphertext to quickly understand if decryption + succeeded or not. + + To this end, it is recommended that your protocol only encrypts + plaintexts of fixed length (``expected_pt_len``), + that ``sentinel`` is a random byte string of the same length, + and that processing continues for as long + as possible even if ``sentinel`` is returned (i.e. in case of + incorrect decryption). + + .. __: https://dx.doi.org/10.1007/BFb0055716 + """ + + # See 7.2.2 in RFC8017 + k = self._key.size_in_bytes() + + # Step 1 + if len(ciphertext) != k: + raise ValueError("Ciphertext with incorrect length (not %d bytes)" % k) + + # Step 2a (O2SIP) + ct_int = bytes_to_long(ciphertext) + + # Step 2b (RSADP) + m_int = self._key._decrypt(ct_int) + + # Complete step 2c (I2OSP) + em = long_to_bytes(m_int, k) + + # Step 3 (not constant time when the sentinel is not a byte string) + output = bytes(bytearray(k)) + if not is_bytes(sentinel) or len(sentinel) > k: + size = _pkcs1_decode(em, b'', expected_pt_len, output) + if size < 0: + return sentinel + else: + return output[size:] + + # Step 3 (somewhat constant time) + size = _pkcs1_decode(em, sentinel, expected_pt_len, output) + return output[size:] + + +def new(key, randfunc=None): + """Create a cipher for performing PKCS#1 v1.5 encryption or decryption. + + :param key: + The key to use to encrypt or decrypt the message. This is a `Cryptodome.PublicKey.RSA` object. + Decryption is only possible if *key* is a private RSA key. + :type key: RSA key object + + :param randfunc: + Function that return random bytes. + The default is :func:`Cryptodome.Random.get_random_bytes`. + :type randfunc: callable + + :returns: A cipher object `PKCS115_Cipher`. + """ + + if randfunc is None: + randfunc = Random.get_random_bytes + return PKCS115_Cipher(key, randfunc) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_v1_5.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_v1_5.pyi new file mode 100644 index 0000000..b69f509 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/PKCS1_v1_5.pyi @@ -0,0 +1,20 @@ +from typing import Callable, Union, Any, Optional, TypeVar + +from Cryptodome.PublicKey.RSA import RsaKey + +Buffer = Union[bytes, bytearray, memoryview] +T = TypeVar('T') + +class PKCS115_Cipher: + def __init__(self, + key: RsaKey, + randfunc: Callable[[int], bytes]) -> None: ... + def can_encrypt(self) -> bool: ... + def can_decrypt(self) -> bool: ... + def encrypt(self, message: Buffer) -> bytes: ... + def decrypt(self, ciphertext: Buffer, + sentinel: T, + expected_pt_len: Optional[int] = ...) -> Union[bytes, T]: ... + +def new(key: RsaKey, + randfunc: Optional[Callable[[int], bytes]] = ...) -> PKCS115_Cipher: ... 
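The `decrypt()` docstring above recommends a protocol with fixed-length plaintexts, a random sentinel of that same length, and failure handling that never branches early. A minimal sketch of that pattern, assuming a freshly generated 2048-bit demo key and an illustrative protocol that fixes the plaintext at 32 bytes (`PT_LEN` is a name invented for this example):

```python
from Cryptodome.PublicKey import RSA
from Cryptodome.Cipher import PKCS1_v1_5
from Cryptodome.Random import get_random_bytes

PT_LEN = 32                       # hypothetical protocol-fixed plaintext length

key = RSA.generate(2048)          # demo key; real code would load an existing one
ciphertext = PKCS1_v1_5.new(key.publickey()).encrypt(get_random_bytes(PT_LEN))

# A random sentinel of the expected length: a failed decryption then yields
# bytes indistinguishable from a successful one, so processing can continue.
sentinel = get_random_bytes(PT_LEN)
message = PKCS1_v1_5.new(key).decrypt(ciphertext, sentinel,
                                      expected_pt_len=PT_LEN)
# ... keep handling `message` identically whether or not it is the sentinel
```

The point of the sentinel shape is that the caller's code path, and hence its timing, does not reveal whether the padding check failed.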
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/Salsa20.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/Salsa20.py new file mode 100644 index 0000000..79e6701 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/Salsa20.py @@ -0,0 +1,167 @@ +# -*- coding: utf-8 -*- +# +# Cipher/Salsa20.py : Salsa20 stream cipher (http://cr.yp.to/snuffle.html) +# +# Contributed by Fabrizio Tarizzo <fabrizio@fabriziotarizzo.org>. +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import _copy_bytes +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + create_string_buffer, + get_raw_buffer, VoidPointer, + SmartPointer, c_size_t, + c_uint8_ptr, is_writeable_buffer) + +from Cryptodome.Random import get_random_bytes + +_raw_salsa20_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._Salsa20", + """ + int Salsa20_stream_init(uint8_t *key, size_t keylen, + uint8_t *nonce, size_t nonce_len, + void **pSalsaState); + int Salsa20_stream_destroy(void *salsaState); + int Salsa20_stream_encrypt(void *salsaState, + const uint8_t in[], + uint8_t out[], size_t len); + """) + + +class Salsa20Cipher: + """Salsa20 cipher object. Do not create it directly. Use :py:func:`new` + instead. + + :var nonce: The nonce with length 8 + :vartype nonce: byte string + """ + + def __init__(self, key, nonce): + """Initialize a Salsa20 cipher object + + See also `new()` at the module level.""" + + if len(key) not in key_size: + raise ValueError("Incorrect key length for Salsa20 (%d bytes)" % len(key)) + + if len(nonce) != 8: + raise ValueError("Incorrect nonce length for Salsa20 (%d bytes)" % + len(nonce)) + + self.nonce = _copy_bytes(None, None, nonce) + + self._state = VoidPointer() + result = _raw_salsa20_lib.Salsa20_stream_init( + c_uint8_ptr(key), + c_size_t(len(key)), + c_uint8_ptr(nonce), + c_size_t(len(nonce)), + self._state.address_of()) + if result: + raise ValueError("Error %d instantiating a Salsa20 cipher") + self._state = SmartPointer(self._state.get(), + _raw_salsa20_lib.Salsa20_stream_destroy) + + self.block_size = 1 + self.key_size = len(key) + + def encrypt(self, plaintext, output=None): + """Encrypt a piece of data. + + Args: + plaintext(bytes/bytearray/memoryview): The data to encrypt, of any size. + Keyword Args: + output(bytes/bytearray/memoryview): The location where the ciphertext + is written to. If ``None``, the ciphertext is returned. + Returns: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. 
+ """ + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = _raw_salsa20_lib.Salsa20_stream_encrypt( + self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + raise ValueError("Error %d while encrypting with Salsa20" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt a piece of data. + + Args: + ciphertext(bytes/bytearray/memoryview): The data to decrypt, of any size. + Keyword Args: + output(bytes/bytearray/memoryview): The location where the plaintext + is written to. If ``None``, the plaintext is returned. + Returns: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. + """ + + try: + return self.encrypt(ciphertext, output=output) + except ValueError as e: + raise ValueError(str(e).replace("enc", "dec")) + + +def new(key, nonce=None): + """Create a new Salsa20 cipher + + :keyword key: The secret key to use. It must be 16 or 32 bytes long. + :type key: bytes/bytearray/memoryview + + :keyword nonce: + A value that must never be reused for any other encryption + done with this key. It must be 8 bytes long. + + If not provided, a random byte string will be generated (you can read + it back via the ``nonce`` attribute of the returned object). + :type nonce: bytes/bytearray/memoryview + + :Return: a :class:`Cryptodome.Cipher.Salsa20.Salsa20Cipher` object + """ + + if nonce is None: + nonce = get_random_bytes(8) + + return Salsa20Cipher(key, nonce) + +# Size of a data block (in bytes) +block_size = 1 + +# Size of a key (in bytes) +key_size = (16, 32) + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/Salsa20.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/Salsa20.pyi new file mode 100644 index 0000000..cc56808 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/Salsa20.pyi @@ -0,0 +1,24 @@ +from typing import Union, Tuple, Optional, overload, ByteString, Optional + +class Salsa20Cipher: + nonce: bytes + block_size: int + key_size: int + + def __init__(self, + key: ByteString, + nonce: ByteString) -> None: ... + @overload + def encrypt(self, plaintext: ByteString) -> bytes: ... + @overload + def encrypt(self, plaintext: ByteString, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: ByteString) -> bytes: ... + @overload + def decrypt(self, plaintext: ByteString, output: Union[bytearray, memoryview]) -> None: ... + +def new(key: ByteString, nonce: Optional[ByteString] = ...) -> Salsa20Cipher: ... 
+ +block_size: int +key_size: Tuple[int, int] + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_ARC4.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_ARC4.abi3.so new file mode 100755 index 0000000..451d359 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_ARC4.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_EKSBlowfish.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_EKSBlowfish.py new file mode 100644 index 0000000..c1c3249 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_EKSBlowfish.py @@ -0,0 +1,131 @@ +# =================================================================== +# +# Copyright (c) 2019, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import sys + +from Cryptodome.Cipher import _create_cipher +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, c_size_t, + c_uint8_ptr, c_uint) + +_raw_blowfish_lib = load_pycryptodome_raw_lib( + "Cryptodome.Cipher._raw_eksblowfish", + """ + int EKSBlowfish_start_operation(const uint8_t key[], + size_t key_len, + const uint8_t salt[16], + size_t salt_len, + unsigned cost, + unsigned invert, + void **pResult); + int EKSBlowfish_encrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int EKSBlowfish_decrypt(const void *state, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int EKSBlowfish_stop_operation(void *state); + """ + ) + + +def _create_base_cipher(dict_parameters): + """This method instantiates and returns a smart pointer to + a low-level base cipher. 
It will absorb named parameters in + the process.""" + + try: + key = dict_parameters.pop("key") + salt = dict_parameters.pop("salt") + cost = dict_parameters.pop("cost") + except KeyError as e: + raise TypeError("Missing EKSBlowfish parameter: " + str(e)) + invert = dict_parameters.pop("invert", True) + + if len(key) not in key_size: + raise ValueError("Incorrect EKSBlowfish key length (%d bytes)" % len(key)) + + start_operation = _raw_blowfish_lib.EKSBlowfish_start_operation + stop_operation = _raw_blowfish_lib.EKSBlowfish_stop_operation + + void_p = VoidPointer() + result = start_operation(c_uint8_ptr(key), + c_size_t(len(key)), + c_uint8_ptr(salt), + c_size_t(len(salt)), + c_uint(cost), + c_uint(int(invert)), + void_p.address_of()) + if result: + raise ValueError("Error %X while instantiating the EKSBlowfish cipher" + % result) + return SmartPointer(void_p.get(), stop_operation) + + +def new(key, mode, salt, cost, invert): + """Create a new EKSBlowfish cipher + + Args: + + key (bytes, bytearray, memoryview): + The secret key to use in the symmetric cipher. + Its length can vary from 0 to 72 bytes. + + mode (one of the supported ``MODE_*`` constants): + The chaining mode to use for encryption or decryption. + + salt (bytes, bytearray, memoryview): + The salt that bcrypt uses to thwart rainbow table attacks + + cost (integer): + The complexity factor in bcrypt + + invert (bool): + If ``False``, in the inner loop use ``ExpandKey`` first over the salt + and then over the key, as defined in + the `original bcrypt specification <https://www.usenix.org/legacy/events/usenix99/provos/provos_html/node4.html>`_. + If ``True``, reverse the order, as in the first implementation of + `bcrypt` in OpenBSD. + + :Return: an EKSBlowfish object + """ + + kwargs = { 'salt':salt, 'cost':cost, 'invert':invert } + return _create_cipher(sys.modules[__name__], key, mode, **kwargs) + + +MODE_ECB = 1 + +# Size of a data block (in bytes) +block_size = 8 +# Size of a key (in bytes) +key_size = range(0, 72 + 1) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_EKSBlowfish.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_EKSBlowfish.pyi new file mode 100644 index 0000000..49c8448 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_EKSBlowfish.pyi @@ -0,0 +1,15 @@ +from typing import Union, Iterable + +from Cryptodome.Cipher._mode_ecb import EcbMode + +MODE_ECB: int + +Buffer = Union[bytes, bytearray, memoryview] + +def new(key: Buffer, + mode: int, + salt: Buffer, + cost: int) -> EcbMode: ... + +block_size: int +key_size: Iterable[int] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_Salsa20.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_Salsa20.abi3.so new file mode 100755 index 0000000..a303d91 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_Salsa20.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/__init__.py new file mode 100644 index 0000000..9bf067f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/__init__.py @@ -0,0 +1,79 @@ +# +# A block cipher is instantiated as a combination of: +# 1. A base cipher (such as AES) +# 2. A mode of operation (such as CBC) +# +# Both items are implemented as C modules. 
+# +# The API of #1 is (replace "AES" with the name of the actual cipher): +# - AES_start_operaion(key) --> base_cipher_state +# - AES_encrypt(base_cipher_state, in, out, length) +# - AES_decrypt(base_cipher_state, in, out, length) +# - AES_stop_operation(base_cipher_state) +# +# Where base_cipher_state is AES_State, a struct with BlockBase (set of +# pointers to encrypt/decrypt/stop) followed by cipher-specific data. +# +# The API of #2 is (replace "CBC" with the name of the actual mode): +# - CBC_start_operation(base_cipher_state) --> mode_state +# - CBC_encrypt(mode_state, in, out, length) +# - CBC_decrypt(mode_state, in, out, length) +# - CBC_stop_operation(mode_state) +# +# where mode_state is a a pointer to base_cipher_state plus mode-specific data. + +import os + +from Cryptodome.Cipher._mode_ecb import _create_ecb_cipher +from Cryptodome.Cipher._mode_cbc import _create_cbc_cipher +from Cryptodome.Cipher._mode_cfb import _create_cfb_cipher +from Cryptodome.Cipher._mode_ofb import _create_ofb_cipher +from Cryptodome.Cipher._mode_ctr import _create_ctr_cipher +from Cryptodome.Cipher._mode_openpgp import _create_openpgp_cipher +from Cryptodome.Cipher._mode_ccm import _create_ccm_cipher +from Cryptodome.Cipher._mode_eax import _create_eax_cipher +from Cryptodome.Cipher._mode_siv import _create_siv_cipher +from Cryptodome.Cipher._mode_gcm import _create_gcm_cipher +from Cryptodome.Cipher._mode_ocb import _create_ocb_cipher + +_modes = { 1:_create_ecb_cipher, + 2:_create_cbc_cipher, + 3:_create_cfb_cipher, + 5:_create_ofb_cipher, + 6:_create_ctr_cipher, + 7:_create_openpgp_cipher, + 9:_create_eax_cipher + } + +_extra_modes = { 8:_create_ccm_cipher, + 10:_create_siv_cipher, + 11:_create_gcm_cipher, + 12:_create_ocb_cipher + } + +def _create_cipher(factory, key, mode, *args, **kwargs): + + kwargs["key"] = key + + modes = dict(_modes) + if kwargs.pop("add_aes_modes", False): + modes.update(_extra_modes) + if not mode in modes: + raise ValueError("Mode not supported") + + if args: + if mode in (8, 9, 10, 11, 12): + if len(args) > 1: + raise TypeError("Too many arguments for this mode") + kwargs["nonce"] = args[0] + elif mode in (2, 3, 5, 7): + if len(args) > 1: + raise TypeError("Too many arguments for this mode") + kwargs["IV"] = args[0] + elif mode == 6: + if len(args) > 0: + raise TypeError("Too many arguments for this mode") + elif mode == 1: + raise TypeError("IV is not meaningful for the ECB mode") + + return modes[mode](factory, **kwargs) diff --git a/lib/site-packages/pip/_internal/resolution/legacy/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/__init__.pyi similarity index 100% rename from lib/site-packages/pip/_internal/resolution/legacy/__init__.py rename to python/lib/python3.11/site-packages/Cryptodome/Cipher/__init__.pyi diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_chacha20.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_chacha20.abi3.so new file mode 100755 index 0000000..f1f1fa1 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_chacha20.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cbc.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cbc.py new file mode 100644 index 0000000..94d02e7 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cbc.py @@ -0,0 +1,293 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All 
rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Ciphertext Block Chaining (CBC) mode. +""" + +__all__ = ['CbcMode'] + +from Cryptodome.Util.py3compat import _copy_bytes +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, + create_string_buffer, get_raw_buffer, + SmartPointer, c_size_t, c_uint8_ptr, + is_writeable_buffer) + +from Cryptodome.Random import get_random_bytes + +raw_cbc_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._raw_cbc", """ + int CBC_start_operation(void *cipher, + const uint8_t iv[], + size_t iv_len, + void **pResult); + int CBC_encrypt(void *cbcState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CBC_decrypt(void *cbcState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CBC_stop_operation(void *state); + """ + ) + + +class CbcMode(object): + """*Cipher-Block Chaining (CBC)*. + + Each of the ciphertext blocks depends on the current + and all previous plaintext blocks. + + An Initialization Vector (*IV*) is required. + + See `NIST SP800-38A`_ , Section 6.2 . + + .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf + + :undocumented: __init__ + """ + + def __init__(self, block_cipher, iv): + """Create a new block cipher, configured in CBC mode. + + :Parameters: + block_cipher : C pointer + A smart pointer to the low-level block cipher instance. + + iv : bytes/bytearray/memoryview + The initialization vector to use for encryption or decryption. + It is as long as the cipher block. + + **The IV must be unpredictable**. Ideally it is picked randomly. + + Reusing the *IV* for encryptions performed with the same key + compromises confidentiality. 
+ """ + + self._state = VoidPointer() + result = raw_cbc_lib.CBC_start_operation(block_cipher.get(), + c_uint8_ptr(iv), + c_size_t(len(iv)), + self._state.address_of()) + if result: + raise ValueError("Error %d while instantiating the CBC mode" + % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the cipher mode + self._state = SmartPointer(self._state.get(), + raw_cbc_lib.CBC_stop_operation) + + # Memory allocated for the underlying block cipher is now owed + # by the cipher mode + block_cipher.release() + + self.block_size = len(iv) + """The block size of the underlying cipher, in bytes.""" + + self.iv = _copy_bytes(None, None, iv) + """The Initialization Vector originally used to create the object. + The value does not change.""" + + self.IV = self.iv + """Alias for `iv`""" + + self._next = ["encrypt", "decrypt"] + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + That also means that you cannot reuse an object for encrypting + or decrypting other data with the same key. + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + Its lenght must be multiple of the cipher block size. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "encrypt" not in self._next: + raise TypeError("encrypt() cannot be called after decrypt()") + self._next = ["encrypt"] + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_cbc_lib.CBC_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + if result == 3: + raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size) + raise ValueError("Error %d while encrypting in CBC mode" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. 
+ Its length must be multiple of the cipher block size. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "decrypt" not in self._next: + raise TypeError("decrypt() cannot be called after encrypt()") + self._next = ["decrypt"] + + if output is None: + plaintext = create_string_buffer(len(ciphertext)) + else: + plaintext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(ciphertext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_cbc_lib.CBC_decrypt(self._state.get(), + c_uint8_ptr(ciphertext), + c_uint8_ptr(plaintext), + c_size_t(len(ciphertext))) + if result: + if result == 3: + raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size) + raise ValueError("Error %d while decrypting in CBC mode" % result) + + if output is None: + return get_raw_buffer(plaintext) + else: + return None + + +def _create_cbc_cipher(factory, **kwargs): + """Instantiate a cipher object that performs CBC encryption/decryption. + + :Parameters: + factory : module + The underlying block cipher, a module from ``Cryptodome.Cipher``. + + :Keywords: + iv : bytes/bytearray/memoryview + The IV to use for CBC. + + IV : bytes/bytearray/memoryview + Alias for ``iv``. + + Any other keyword will be passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present). + """ + + cipher_state = factory._create_base_cipher(kwargs) + iv = kwargs.pop("IV", None) + IV = kwargs.pop("iv", None) + + if (None, None) == (iv, IV): + iv = get_random_bytes(factory.block_size) + if iv is not None: + if IV is not None: + raise TypeError("You must either use 'iv' or 'IV', not both") + else: + iv = IV + + if len(iv) != factory.block_size: + raise ValueError("Incorrect IV length (it must be %d bytes long)" % + factory.block_size) + + if kwargs: + raise TypeError("Unknown parameters for CBC: %s" % str(kwargs)) + + return CbcMode(cipher_state, iv) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cbc.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cbc.pyi new file mode 100644 index 0000000..526632e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cbc.pyi @@ -0,0 +1,25 @@ +from typing import Union, overload + +from Cryptodome.Util._raw_api import SmartPointer + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['CbcMode'] + +class CbcMode(object): + block_size: int + iv: Buffer + IV: Buffer + + def __init__(self, + block_cipher: SmartPointer, + iv: Buffer) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... 
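[Reviewer sketch, not part of the diff: how this vendored CBC mode is typically driven through the public pycryptodomex API. The key, IV handling and message below are illustrative assumptions, not project code.]

from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes
from Cryptodome.Util.Padding import pad, unpad

key = get_random_bytes(16)                    # AES-128 key (illustrative)
cipher = AES.new(key, AES.MODE_CBC)           # a random, unpredictable IV is generated internally
ct = cipher.encrypt(pad(b"attack at dawn", AES.block_size))  # CBC input must be block-aligned

# The IV is not secret, but the receiver needs it to decrypt
decipher = AES.new(key, AES.MODE_CBC, iv=cipher.iv)
assert unpad(decipher.decrypt(ct), AES.block_size) == b"attack at dawn"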
+ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ccm.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ccm.py new file mode 100644 index 0000000..ec2e4f4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ccm.py @@ -0,0 +1,650 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Counter with CBC-MAC (CCM) mode. +""" + +__all__ = ['CcmMode'] + +import struct +from binascii import unhexlify + +from Cryptodome.Util.py3compat import (byte_string, bord, + _copy_bytes) +from Cryptodome.Util._raw_api import is_writeable_buffer + +from Cryptodome.Util.strxor import strxor +from Cryptodome.Util.number import long_to_bytes + +from Cryptodome.Hash import BLAKE2s +from Cryptodome.Random import get_random_bytes + + +def enum(**enums): + return type('Enum', (), enums) + +MacStatus = enum(NOT_STARTED=0, PROCESSING_AUTH_DATA=1, PROCESSING_PLAINTEXT=2) + + +class CcmMode(object): + """Counter with CBC-MAC (CCM). + + This is an Authenticated Encryption with Associated Data (`AEAD`_) mode. + It provides both confidentiality and authenticity. + + The header of the message may be left in the clear, if needed, and it will + still be subject to authentication. The decryption step tells the receiver + if the message comes from a source that really knows the secret key. + Additionally, decryption detects if any part of the message - including the + header - has been modified or corrupted. + + This mode requires a nonce. The nonce shall never repeat for two + different messages encrypted with the same key, but it does not need + to be random. + Note that there is a trade-off between the size of the nonce and the + maximum size of a single message you can encrypt. + + It is important to use a large nonce if the key is reused across several + messages and the nonce is chosen randomly. + + It is acceptable to use a short nonce if the key is only used a few times or + if the nonce is taken from a counter. + + The following table shows the trade-off when the nonce is chosen at + random.
The column on the left shows how many messages it takes + for the keystream to repeat **on average**. In practice, you will want to + stop using the key way before that. + + +--------------------+---------------+-------------------+ + | Avg. # of messages | nonce | Max. message | + | before keystream | size | size | + | repeats | (bytes) | (bytes) | + +====================+===============+===================+ + | 2^52 | 13 | 64K | + +--------------------+---------------+-------------------+ + | 2^48 | 12 | 16M | + +--------------------+---------------+-------------------+ + | 2^44 | 11 | 4G | + +--------------------+---------------+-------------------+ + | 2^40 | 10 | 1T | + +--------------------+---------------+-------------------+ + | 2^36 | 9 | 64P | + +--------------------+---------------+-------------------+ + | 2^32 | 8 | 16E | + +--------------------+---------------+-------------------+ + + This mode is only available for ciphers that operate on 128 bits blocks + (e.g. AES but not TDES). + + See `NIST SP800-38C`_ or RFC3610_. + + .. _`NIST SP800-38C`: http://csrc.nist.gov/publications/nistpubs/800-38C/SP800-38C.pdf + .. _RFC3610: https://tools.ietf.org/html/rfc3610 + .. _AEAD: http://blog.cryptographyengineering.com/2012/05/how-to-choose-authenticated-encryption.html + + :undocumented: __init__ + """ + + def __init__(self, factory, key, nonce, mac_len, msg_len, assoc_len, + cipher_params): + + self.block_size = factory.block_size + """The block size of the underlying cipher, in bytes.""" + + self.nonce = _copy_bytes(None, None, nonce) + """The nonce used for this cipher instance""" + + self._factory = factory + self._key = _copy_bytes(None, None, key) + self._mac_len = mac_len + self._msg_len = msg_len + self._assoc_len = assoc_len + self._cipher_params = cipher_params + + self._mac_tag = None # Cache for MAC tag + + if self.block_size != 16: + raise ValueError("CCM mode is only available for ciphers" + " that operate on 128 bits blocks") + + # MAC tag length (Tlen) + if mac_len not in (4, 6, 8, 10, 12, 14, 16): + raise ValueError("Parameter 'mac_len' must be even" + " and in the range 4..16 (not %d)" % mac_len) + + # Nonce value + if not (nonce and 7 <= len(nonce) <= 13): + raise ValueError("Length of parameter 'nonce' must be" + " in the range 7..13 bytes") + + # Create MAC object (the tag will be the last block + # bytes worth of ciphertext) + self._mac = self._factory.new(key, + factory.MODE_CBC, + iv=b'\x00' * 16, + **cipher_params) + self._mac_status = MacStatus.NOT_STARTED + self._t = None + + # Allowed transitions after initialization + self._next = ["update", "encrypt", "decrypt", + "digest", "verify"] + + # Cumulative lengths + self._cumul_assoc_len = 0 + self._cumul_msg_len = 0 + + # Cache for unaligned associated data/plaintext. + # This is a list with byte strings, but when the MAC starts, + # it will become a binary string no longer than the block size. 
+ self._cache = [] + + # Start CTR cipher, by formatting the counter (A.3) + q = 15 - len(nonce) # length of Q, the encoded message length + self._cipher = self._factory.new(key, + self._factory.MODE_CTR, + nonce=struct.pack("B", q - 1) + self.nonce, + **cipher_params) + + # S_0, step 6 in 6.1 for j=0 + self._s_0 = self._cipher.encrypt(b'\x00' * 16) + + # Try to start the MAC + if None not in (assoc_len, msg_len): + self._start_mac() + + def _start_mac(self): + + assert(self._mac_status == MacStatus.NOT_STARTED) + assert(None not in (self._assoc_len, self._msg_len)) + assert(isinstance(self._cache, list)) + + # Formatting control information and nonce (A.2.1) + q = 15 - len(self.nonce) # length of Q, the encoded message length + flags = (64 * (self._assoc_len > 0) + 8 * ((self._mac_len - 2) // 2) + + (q - 1)) + b_0 = struct.pack("B", flags) + self.nonce + long_to_bytes(self._msg_len, q) + + # Formatting associated data (A.2.2) + # Encoded 'a' is concatenated with the associated data 'A' + assoc_len_encoded = b'' + if self._assoc_len > 0: + if self._assoc_len < (2 ** 16 - 2 ** 8): + enc_size = 2 + elif self._assoc_len < (2 ** 32): + assoc_len_encoded = b'\xFF\xFE' + enc_size = 4 + else: + assoc_len_encoded = b'\xFF\xFF' + enc_size = 8 + assoc_len_encoded += long_to_bytes(self._assoc_len, enc_size) + + # b_0 and assoc_len_encoded must be processed first + self._cache.insert(0, b_0) + self._cache.insert(1, assoc_len_encoded) + + # Process all the data cached so far + first_data_to_mac = b"".join(self._cache) + self._cache = b"" + self._mac_status = MacStatus.PROCESSING_AUTH_DATA + self._update(first_data_to_mac) + + def _pad_cache_and_update(self): + + assert(self._mac_status != MacStatus.NOT_STARTED) + assert(len(self._cache) < self.block_size) + + # Associated data is concatenated with the least number + # of zero bytes (possibly none) to reach alignment to + # the 16 byte boundary (A.2.3) + len_cache = len(self._cache) + if len_cache > 0: + self._update(b'\x00' * (self.block_size - len_cache)) + + def update(self, assoc_data): + """Protect associated data + + If there is any associated data, the caller has to invoke + this function one or more times, before using + ``decrypt`` or ``encrypt``. + + By *associated data* it is meant any data (e.g. packet headers) that + will not be encrypted and will be transmitted in the clear. + However, the receiver is still able to detect any modification to it. + In CCM, the *associated data* is also called + *additional authenticated data* (AAD). + + If there is no associated data, this method must not be called. + + The caller may split associated data in segments of any size, and + invoke this method multiple times, each time with the next segment. + + :Parameters: + assoc_data : bytes/bytearray/memoryview + A piece of associated data. There are no restrictions on its size. + """ + + if "update" not in self._next: + raise TypeError("update() can only be called" + " immediately after initialization") + + self._next = ["update", "encrypt", "decrypt", + "digest", "verify"] + + self._cumul_assoc_len += len(assoc_data) + if self._assoc_len is not None and \ + self._cumul_assoc_len > self._assoc_len: + raise ValueError("Associated data is too long") + + self._update(assoc_data) + return self + + def _update(self, assoc_data_pt=b""): + """Update the MAC with associated data or plaintext + (without FSM checks)""" + + # If MAC has not started yet, we just park the data into a list. + # If the data is mutable, we create a copy and store that instead. 
+ if self._mac_status == MacStatus.NOT_STARTED: + if is_writeable_buffer(assoc_data_pt): + assoc_data_pt = _copy_bytes(None, None, assoc_data_pt) + self._cache.append(assoc_data_pt) + return + + assert(len(self._cache) < self.block_size) + + if len(self._cache) > 0: + filler = min(self.block_size - len(self._cache), + len(assoc_data_pt)) + self._cache += _copy_bytes(None, filler, assoc_data_pt) + assoc_data_pt = _copy_bytes(filler, None, assoc_data_pt) + + if len(self._cache) < self.block_size: + return + + # The cache is exactly one block + self._t = self._mac.encrypt(self._cache) + self._cache = b"" + + update_len = len(assoc_data_pt) // self.block_size * self.block_size + self._cache = _copy_bytes(update_len, None, assoc_data_pt) + if update_len > 0: + self._t = self._mac.encrypt(assoc_data_pt[:update_len])[-16:] + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + This method can be called only **once** if ``msg_len`` was + not passed at initialization. + + If ``msg_len`` was given, the data to encrypt can be broken + up in two or more pieces and `encrypt` can be called + multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext as ``bytes``. + Otherwise, ``None``. + """ + + if "encrypt" not in self._next: + raise TypeError("encrypt() can only be called after" + " initialization or an update()") + self._next = ["encrypt", "digest"] + + # No more associated data allowed from now + if self._assoc_len is None: + assert(isinstance(self._cache, list)) + self._assoc_len = sum([len(x) for x in self._cache]) + if self._msg_len is not None: + self._start_mac() + else: + if self._cumul_assoc_len < self._assoc_len: + raise ValueError("Associated data is too short") + + # Only one piece of plaintext is accepted if the message length was + # not declared in advance + if self._msg_len is None: + self._msg_len = len(plaintext) + self._start_mac() + self._next = ["digest"] + + self._cumul_msg_len += len(plaintext) + if self._cumul_msg_len > self._msg_len: + raise ValueError("Message is too long") + + if self._mac_status == MacStatus.PROCESSING_AUTH_DATA: + # Associated data is concatenated with the least number + # of zero bytes (possibly none) to reach alignment to + # the 16 byte boundary (A.2.3) + self._pad_cache_and_update() + self._mac_status = MacStatus.PROCESSING_PLAINTEXT + + self._update(plaintext) + return self._cipher.encrypt(plaintext, output=output) + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + This method can be called only **once** if ``msg_len`` was + not passed at initialization. + + If ``msg_len`` was given, the data to decrypt can be + broken up in two or more pieces and `decrypt` can be + called multiple times.
+ + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext as ``bytes``. + Otherwise, ``None``. + """ + + if "decrypt" not in self._next: + raise TypeError("decrypt() can only be called" + " after initialization or an update()") + self._next = ["decrypt", "verify"] + + # No more associated data allowed from now + if self._assoc_len is None: + assert(isinstance(self._cache, list)) + self._assoc_len = sum([len(x) for x in self._cache]) + if self._msg_len is not None: + self._start_mac() + else: + if self._cumul_assoc_len < self._assoc_len: + raise ValueError("Associated data is too short") + + # Only one piece of ciphertext is accepted if the message length was + # not declared in advance + if self._msg_len is None: + self._msg_len = len(ciphertext) + self._start_mac() + self._next = ["verify"] + + self._cumul_msg_len += len(ciphertext) + if self._cumul_msg_len > self._msg_len: + raise ValueError("Message is too long") + + if self._mac_status == MacStatus.PROCESSING_AUTH_DATA: + # Associated data is concatenated with the least number + # of zero bytes (possibly none) to reach alignment to + # the 16 byte boundary (A.2.3) + self._pad_cache_and_update() + self._mac_status = MacStatus.PROCESSING_PLAINTEXT + + # Encrypt is equivalent to decrypt with the CTR mode + plaintext = self._cipher.encrypt(ciphertext, output=output) + if output is None: + self._update(plaintext) + else: + self._update(output) + return plaintext + + def digest(self): + """Compute the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method returns the MAC that shall be sent to the receiver, + together with the ciphertext. + + :Return: the MAC, as a byte string. + """ + + if "digest" not in self._next: + raise TypeError("digest() cannot be called when decrypting" + " or validating a message") + self._next = ["digest"] + return self._digest() + + def _digest(self): + if self._mac_tag: + return self._mac_tag + + if self._assoc_len is None: + assert(isinstance(self._cache, list)) + self._assoc_len = sum([len(x) for x in self._cache]) + if self._msg_len is not None: + self._start_mac() + else: + if self._cumul_assoc_len < self._assoc_len: + raise ValueError("Associated data is too short") + + if self._msg_len is None: + self._msg_len = 0 + self._start_mac() + + if self._cumul_msg_len != self._msg_len: + raise ValueError("Message is too short") + + # Both associated data and payload are concatenated with the least + # number of zero bytes (possibly none) that align it to the + # 16 byte boundary (A.2.2 and A.2.3) + self._pad_cache_and_update() + + # Step 8 in 6.1 (T xor MSB_Tlen(S_0)) + self._mac_tag = strxor(self._t, self._s_0)[:self._mac_len] + + return self._mac_tag + + def hexdigest(self): + """Compute the *printable* MAC tag. + + This method is like `digest`. + + :Return: the MAC, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* MAC tag. + + The caller invokes this function at the very end.
+ + This method checks if the decrypted message is indeed valid + (that is, if the key is correct) and it has not been + tampered with while in transit. + + :Parameters: + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if "verify" not in self._next: + raise TypeError("verify() cannot be called" + " when encrypting a message") + self._next = ["verify"] + + self._digest() + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=self._mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* MAC tag. + + This method is like `verify`. + + :Parameters: + hex_mac_tag : string + This is the *printable* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext, output=None): + """Perform encrypt() and digest() in one step. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + a tuple with two items: + + - the ciphertext, as ``bytes`` + - the MAC tag, as ``bytes`` + + The first item becomes ``None`` when the ``output`` parameter + specified a location for the result. + """ + + return self.encrypt(plaintext, output=output), self.digest() + + def decrypt_and_verify(self, ciphertext, received_mac_tag, output=None): + """Perform decrypt() and verify() in one step. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: the plaintext as ``bytes`` or ``None`` when the ``output`` + parameter specified a location for the result. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + plaintext = self.decrypt(ciphertext, output=output) + self.verify(received_mac_tag) + return plaintext + + +def _create_ccm_cipher(factory, **kwargs): + """Create a new block cipher, configured in CCM mode. + + :Parameters: + factory : module + A symmetric cipher module from `Cryptodome.Cipher` (like + `Cryptodome.Cipher.AES`). + + :Keywords: + key : bytes/bytearray/memoryview + The secret key to use in the symmetric cipher. + + nonce : bytes/bytearray/memoryview + A value that must never be reused for any other encryption. + + Its length must be in the range ``[7..13]``. + 11 or 12 bytes are reasonable values in general. Bear in + mind that with CCM there is a trade-off between nonce length and + maximum message size. + + If not specified, an 11 byte long random string is used. + + mac_len : integer + Length of the MAC, in bytes. It must be even and in + the range ``[4..16]``. The default is 16. + + msg_len : integer + Length of the message to (de)cipher. + If not specified, ``encrypt`` or ``decrypt`` may only be called once.
+ + assoc_len : integer + Length of the associated data. + If not specified, all data is internally buffered. + """ + + try: + key = kwargs.pop("key") + except KeyError as e: + raise TypeError("Missing parameter: " + str(e)) + + nonce = kwargs.pop("nonce", None) # N + if nonce is None: + nonce = get_random_bytes(11) + mac_len = kwargs.pop("mac_len", factory.block_size) + msg_len = kwargs.pop("msg_len", None) # p + assoc_len = kwargs.pop("assoc_len", None) # a + cipher_params = dict(kwargs) + + return CcmMode(factory, key, nonce, mac_len, msg_len, + assoc_len, cipher_params) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ccm.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ccm.pyi new file mode 100644 index 0000000..4b9f620 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ccm.pyi @@ -0,0 +1,47 @@ +from types import ModuleType +from typing import Union, overload, Dict, Tuple, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['CcmMode'] + +class CcmMode(object): + block_size: int + nonce: bytes + + def __init__(self, + factory: ModuleType, + key: Buffer, + nonce: Buffer, + mac_len: int, + msg_len: int, + assoc_len: int, + cipher_params: Dict) -> None: ... + + def update(self, assoc_data: Buffer) -> CcmMode: ... + + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, received_mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + + @overload + def encrypt_and_digest(self, + plaintext: Buffer) -> Tuple[bytes, bytes]: ... + @overload + def encrypt_and_digest(self, + plaintext: Buffer, + output: Buffer) -> Tuple[None, bytes]: ... + def decrypt_and_verify(self, + ciphertext: Buffer, + received_mac_tag: Buffer, + output: Optional[Union[bytearray, memoryview]] = ...) -> bytes: ...
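[Reviewer sketch, not part of the diff: driving the CCM AEAD mode above through the public pycryptodomex API. Nonce length, header and payload below are illustrative assumptions.]

from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)
nonce = get_random_bytes(11)                  # 7..13 bytes; 11 matches the default length
cipher = AES.new(key, AES.MODE_CCM, nonce=nonce)
cipher.update(b"header")                      # associated data: authenticated but not encrypted
ciphertext, tag = cipher.encrypt_and_digest(b"secret payload")

decipher = AES.new(key, AES.MODE_CCM, nonce=nonce)
decipher.update(b"header")
plaintext = decipher.decrypt_and_verify(ciphertext, tag)  # raises ValueError on tampering
assert plaintext == b"secret payload"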
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cfb.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cfb.py new file mode 100644 index 0000000..1b1b6c3 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cfb.py @@ -0,0 +1,293 @@ +# -*- coding: utf-8 -*- +# +# Cipher/mode_cfb.py : CFB mode +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +""" +Cipher Feedback (CFB) mode. +""" + +__all__ = ['CfbMode'] + +from Cryptodome.Util.py3compat import _copy_bytes +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, + create_string_buffer, get_raw_buffer, + SmartPointer, c_size_t, c_uint8_ptr, + is_writeable_buffer) + +from Cryptodome.Random import get_random_bytes + +raw_cfb_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._raw_cfb",""" + int CFB_start_operation(void *cipher, + const uint8_t iv[], + size_t iv_len, + size_t segment_len, /* In bytes */ + void **pResult); + int CFB_encrypt(void *cfbState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CFB_decrypt(void *cfbState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CFB_stop_operation(void *state);""" + ) + + +class CfbMode(object): + """*Cipher FeedBack (CFB)*. + + This mode is similar to CBC, but it transforms + the underlying block cipher into a stream cipher. + + Plaintext and ciphertext are processed in *segments* + of **s** bits. The mode is therefore sometimes + labelled **s**-bit CFB. + + An Initialization Vector (*IV*) is required. + + See `NIST SP800-38A`_ , Section 6.3. + + .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf + + :undocumented: __init__ + """ + + def __init__(self, block_cipher, iv, segment_size): + """Create a new block cipher, configured in CFB mode. + + :Parameters: + block_cipher : C pointer + A smart pointer to the low-level block cipher instance. + + iv : bytes/bytearray/memoryview + The initialization vector to use for encryption or decryption. + It is as long as the cipher block. + + **The IV must be unpredictable**. Ideally it is picked randomly. + + Reusing the *IV* for encryptions performed with the same key + compromises confidentiality. + + segment_size : integer + The number of bytes the plaintext and ciphertext are segmented in. + """ + + self._state = VoidPointer() + result = raw_cfb_lib.CFB_start_operation(block_cipher.get(), + c_uint8_ptr(iv), + c_size_t(len(iv)), + c_size_t(segment_size), + self._state.address_of()) + if result: + raise ValueError("Error %d while instantiating the CFB mode" % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the cipher mode + self._state = SmartPointer(self._state.get(), + raw_cfb_lib.CFB_stop_operation) + + # Memory allocated for the underlying block cipher is now owned + # by the cipher mode + block_cipher.release() + + self.block_size = len(iv) + """The block size of the underlying cipher, in bytes.""" + + self.iv = _copy_bytes(None, None, iv) + """The Initialization Vector originally used to create the object. + The value does not change.""" + + self.IV = self.iv + """Alias for `iv`""" + + self._next = ["encrypt", "decrypt"] + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt.
+ It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "encrypt" not in self._next: + raise TypeError("encrypt() cannot be called after decrypt()") + self._next = ["encrypt"] + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_cfb_lib.CFB_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + raise ValueError("Error %d while encrypting in CFB mode" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "decrypt" not in self._next: + raise TypeError("decrypt() cannot be called after encrypt()") + self._next = ["decrypt"] + + if output is None: + plaintext = create_string_buffer(len(ciphertext)) + else: + plaintext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(ciphertext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(ciphertext)) + + result = raw_cfb_lib.CFB_decrypt(self._state.get(), + c_uint8_ptr(ciphertext), + c_uint8_ptr(plaintext), + c_size_t(len(ciphertext))) + if result: + raise ValueError("Error %d while decrypting in CFB mode" % result) + + if output is None: + return get_raw_buffer(plaintext) + else: + return None + + +def _create_cfb_cipher(factory, **kwargs): + """Instantiate a cipher object that performs CFB encryption/decryption. + + :Parameters: + factory : module + The underlying block cipher, a module from ``Cryptodome.Cipher``. + + :Keywords: + iv : bytes/bytearray/memoryview + The IV to use for CFB. + + IV : bytes/bytearray/memoryview + Alias for ``iv``. + + segment_size : integer + The number of bits the plaintext and ciphertext are segmented in. + If not present, the default is 8. + + Any other keyword will be passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present).
+ """ + + cipher_state = factory._create_base_cipher(kwargs) + + iv = kwargs.pop("IV", None) + IV = kwargs.pop("iv", None) + + if (None, None) == (iv, IV): + iv = get_random_bytes(factory.block_size) + if iv is not None: + if IV is not None: + raise TypeError("You must either use 'iv' or 'IV', not both") + else: + iv = IV + + if len(iv) != factory.block_size: + raise ValueError("Incorrect IV length (it must be %d bytes long)" % + factory.block_size) + + segment_size_bytes, rem = divmod(kwargs.pop("segment_size", 8), 8) + if segment_size_bytes == 0 or rem != 0: + raise ValueError("'segment_size' must be a positive multiple of 8 bits") + + if kwargs: + raise TypeError("Unknown parameters for CFB: %s" % str(kwargs)) + return CfbMode(cipher_state, iv, segment_size_bytes) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cfb.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cfb.pyi new file mode 100644 index 0000000..228e464 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_cfb.pyi @@ -0,0 +1,26 @@ +from typing import Union, overload + +from Cryptodome.Util._raw_api import SmartPointer + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['CfbMode'] + + +class CfbMode(object): + block_size: int + iv: Buffer + IV: Buffer + + def __init__(self, + block_cipher: SmartPointer, + iv: Buffer, + segment_size: int) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ...
+""" + +__all__ = ['CtrMode'] + +import struct + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, + create_string_buffer, get_raw_buffer, + SmartPointer, c_size_t, c_uint8_ptr, + is_writeable_buffer) + +from Cryptodome.Random import get_random_bytes +from Cryptodome.Util.py3compat import _copy_bytes, is_native_int +from Cryptodome.Util.number import long_to_bytes + +raw_ctr_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._raw_ctr", """ + int CTR_start_operation(void *cipher, + uint8_t initialCounterBlock[], + size_t initialCounterBlock_len, + size_t prefix_len, + unsigned counter_len, + unsigned littleEndian, + void **pResult); + int CTR_encrypt(void *ctrState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CTR_decrypt(void *ctrState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int CTR_stop_operation(void *ctrState);""" + ) + + +class CtrMode(object): + """*CounTeR (CTR)* mode. + + This mode is very similar to ECB, in that + encryption of one block is done independently of all other blocks. + + Unlike ECB, the block *position* contributes to the encryption + and no information leaks about symbol frequency. + + Each message block is associated to a *counter* which + must be unique across all messages that get encrypted + with the same key (not just within the same message). + The counter is as big as the block size. + + Counters can be generated in several ways. The most + straightword one is to choose an *initial counter block* + (which can be made public, similarly to the *IV* for the + other modes) and increment its lowest **m** bits by one + (modulo *2^m*) for each block. In most cases, **m** is + chosen to be half the block size. + + See `NIST SP800-38A`_, Section 6.5 (for the mode) and + Appendix B (for how to manage the *initial counter block*). + + .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf + + :undocumented: __init__ + """ + + def __init__(self, block_cipher, initial_counter_block, + prefix_len, counter_len, little_endian): + """Create a new block cipher, configured in CTR mode. + + :Parameters: + block_cipher : C pointer + A smart pointer to the low-level block cipher instance. + + initial_counter_block : bytes/bytearray/memoryview + The initial plaintext to use to generate the key stream. + + It is as large as the cipher block, and it embeds + the initial value of the counter. + + This value must not be reused. + It shall contain a nonce or a random component. + Reusing the *initial counter block* for encryptions + performed with the same key compromises confidentiality. + + prefix_len : integer + The amount of bytes at the beginning of the counter block + that never change. + + counter_len : integer + The length in bytes of the counter embedded in the counter + block. + + little_endian : boolean + True if the counter in the counter block is an integer encoded + in little endian mode. If False, it is big endian. 
+ """ + + if len(initial_counter_block) == prefix_len + counter_len: + self.nonce = _copy_bytes(None, prefix_len, initial_counter_block) + """Nonce; not available if there is a fixed suffix""" + + self._state = VoidPointer() + result = raw_ctr_lib.CTR_start_operation(block_cipher.get(), + c_uint8_ptr(initial_counter_block), + c_size_t(len(initial_counter_block)), + c_size_t(prefix_len), + counter_len, + little_endian, + self._state.address_of()) + if result: + raise ValueError("Error %X while instantiating the CTR mode" + % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the cipher mode + self._state = SmartPointer(self._state.get(), + raw_ctr_lib.CTR_stop_operation) + + # Memory allocated for the underlying block cipher is now owed + # by the cipher mode + block_cipher.release() + + self.block_size = len(initial_counter_block) + """The block size of the underlying cipher, in bytes.""" + + self._next = ["encrypt", "decrypt"] + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "encrypt" not in self._next: + raise TypeError("encrypt() cannot be called after decrypt()") + self._next = ["encrypt"] + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ctr_lib.CTR_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + if result == 0x60002: + raise OverflowError("The counter has wrapped around in" + " CTR mode") + raise ValueError("Error %X while encrypting in CTR mode" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. 
+ :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "decrypt" not in self._next: + raise TypeError("decrypt() cannot be called after encrypt()") + self._next = ["decrypt"] + + if output is None: + plaintext = create_string_buffer(len(ciphertext)) + else: + plaintext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(ciphertext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ctr_lib.CTR_decrypt(self._state.get(), + c_uint8_ptr(ciphertext), + c_uint8_ptr(plaintext), + c_size_t(len(ciphertext))) + if result: + if result == 0x60002: + raise OverflowError("The counter has wrapped around in" + " CTR mode") + raise ValueError("Error %X while decrypting in CTR mode" % result) + + if output is None: + return get_raw_buffer(plaintext) + else: + return None + + +def _create_ctr_cipher(factory, **kwargs): + """Instantiate a cipher object that performs CTR encryption/decryption. + + :Parameters: + factory : module + The underlying block cipher, a module from ``Cryptodome.Cipher``. + + :Keywords: + nonce : bytes/bytearray/memoryview + The fixed part at the beginning of the counter block - the rest is + the counter number that gets increased when processing the next block. + The nonce must be such that no two messages are encrypted under the + same key and the same nonce. + + The nonce must be shorter than the block size (it can have + zero length; the counter is then as long as the block). + + If this parameter is not present, a random nonce will be created with + length equal to half the block size. No random nonce shorter than + 64 bits will be created though - you must really think through all + security consequences of using such a short block size. + + initial_value : posive integer or bytes/bytearray/memoryview + The initial value for the counter. If not present, the cipher will + start counting from 0. The value is incremented by one for each block. + The counter number is encoded in big endian mode. + + counter : object + Instance of ``Cryptodome.Util.Counter``, which allows full customization + of the counter block. This parameter is incompatible to both ``nonce`` + and ``initial_value``. + + Any other keyword will be passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present). 
+ """ + + cipher_state = factory._create_base_cipher(kwargs) + + counter = kwargs.pop("counter", None) + nonce = kwargs.pop("nonce", None) + initial_value = kwargs.pop("initial_value", None) + if kwargs: + raise TypeError("Invalid parameters for CTR mode: %s" % str(kwargs)) + + if counter is not None and (nonce, initial_value) != (None, None): + raise TypeError("'counter' and 'nonce'/'initial_value'" + " are mutually exclusive") + + if counter is None: + # Cryptodome.Util.Counter is not used + if nonce is None: + if factory.block_size < 16: + raise TypeError("Impossible to create a safe nonce for short" + " block sizes") + nonce = get_random_bytes(factory.block_size // 2) + else: + if len(nonce) >= factory.block_size: + raise ValueError("Nonce is too long") + + # What is not nonce is counter + counter_len = factory.block_size - len(nonce) + + if initial_value is None: + initial_value = 0 + + if is_native_int(initial_value): + if (1 << (counter_len * 8)) - 1 < initial_value: + raise ValueError("Initial counter value is too large") + initial_counter_block = nonce + long_to_bytes(initial_value, counter_len) + else: + if len(initial_value) != counter_len: + raise ValueError("Incorrect length for counter byte string (%d bytes, expected %d)" % + (len(initial_value), counter_len)) + initial_counter_block = nonce + initial_value + + return CtrMode(cipher_state, + initial_counter_block, + len(nonce), # prefix + counter_len, + False) # little_endian + + # Cryptodome.Util.Counter is used + + # 'counter' used to be a callable object, but now it is + # just a dictionary for backward compatibility. + _counter = dict(counter) + try: + counter_len = _counter.pop("counter_len") + prefix = _counter.pop("prefix") + suffix = _counter.pop("suffix") + initial_value = _counter.pop("initial_value") + little_endian = _counter.pop("little_endian") + except KeyError: + raise TypeError("Incorrect counter object" + " (use Cryptodome.Util.Counter.new)") + + # Compute initial counter block + words = [] + while initial_value > 0: + words.append(struct.pack('B', initial_value & 255)) + initial_value >>= 8 + words += [b'\x00'] * max(0, counter_len - len(words)) + if not little_endian: + words.reverse() + initial_counter_block = prefix + b"".join(words) + suffix + + if len(initial_counter_block) != factory.block_size: + raise ValueError("Size of the counter block (%d bytes) must match" + " block size (%d)" % (len(initial_counter_block), + factory.block_size)) + + return CtrMode(cipher_state, initial_counter_block, + len(prefix), counter_len, little_endian) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ctr.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ctr.pyi new file mode 100644 index 0000000..a68a890 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ctr.pyi @@ -0,0 +1,27 @@ +from typing import Union, overload + +from Cryptodome.Util._raw_api import SmartPointer + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['CtrMode'] + +class CtrMode(object): + block_size: int + nonce: bytes + + def __init__(self, + block_cipher: SmartPointer, + initial_counter_block: Buffer, + prefix_len: int, + counter_len: int, + little_endian: bool) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... 
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_eax.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_eax.py new file mode 100644 index 0000000..44ef21f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_eax.py @@ -0,0 +1,408 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +EAX mode. +""" + +__all__ = ['EaxMode'] + +import struct +from binascii import unhexlify + +from Cryptodome.Util.py3compat import byte_string, bord, _copy_bytes + +from Cryptodome.Util._raw_api import is_buffer + +from Cryptodome.Util.strxor import strxor +from Cryptodome.Util.number import long_to_bytes, bytes_to_long + +from Cryptodome.Hash import CMAC, BLAKE2s +from Cryptodome.Random import get_random_bytes + + +class EaxMode(object): + """*EAX* mode. + + This is an Authenticated Encryption with Associated Data + (`AEAD`_) mode. It provides both confidentiality and authenticity. + + The header of the message may be left in the clear, if needed, + and it will still be subject to authentication. + + The decryption step tells the receiver if the message comes + from a source that really knows the secret key. + Additionally, decryption detects if any part of the message - + including the header - has been modified or corrupted. + + This mode requires a *nonce*. + + This mode is only available for ciphers that operate on 64 or + 128 bits blocks. + + There are no official standards defining EAX. + The implementation is based on `a proposal`__ that + was presented to NIST. + + .. _AEAD: http://blog.cryptographyengineering.com/2012/05/how-to-choose-authenticated-encryption.html + .. 
__: http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/eax/eax-spec.pdf + + :undocumented: __init__ + """ + + def __init__(self, factory, key, nonce, mac_len, cipher_params): + """EAX cipher mode""" + + self.block_size = factory.block_size + """The block size of the underlying cipher, in bytes.""" + + self.nonce = _copy_bytes(None, None, nonce) + """The nonce originally used to create the object.""" + + self._mac_len = mac_len + self._mac_tag = None # Cache for MAC tag + + # Allowed transitions after initialization + self._next = ["update", "encrypt", "decrypt", + "digest", "verify"] + + # MAC tag length + if not (2 <= self._mac_len <= self.block_size): + raise ValueError("'mac_len' must be at least 2 and not larger than %d" + % self.block_size) + + # Nonce cannot be empty and must be a byte string + if len(self.nonce) == 0: + raise ValueError("Nonce cannot be empty in EAX mode") + if not is_buffer(nonce): + raise TypeError("nonce must be bytes, bytearray or memoryview") + + self._omac = [ + CMAC.new(key, + b'\x00' * (self.block_size - 1) + struct.pack('B', i), + ciphermod=factory, + cipher_params=cipher_params) + for i in range(0, 3) + ] + + # Compute MAC of nonce + self._omac[0].update(self.nonce) + self._signer = self._omac[1] + + # MAC of the nonce is also the initial counter for CTR encryption + counter_int = bytes_to_long(self._omac[0].digest()) + self._cipher = factory.new(key, + factory.MODE_CTR, + initial_value=counter_int, + nonce=b"", + **cipher_params) + + def update(self, assoc_data): + """Protect associated data + + If there is any associated data, the caller has to invoke + this function one or more times, before using + ``decrypt`` or ``encrypt``. + + By *associated data* it is meant any data (e.g. packet headers) that + will not be encrypted and will be transmitted in the clear. + However, the receiver is still able to detect any modification to it. + + If there is no associated data, this method must not be called. + + The caller may split associated data in segments of any size, and + invoke this method multiple times, each time with the next segment. + + :Parameters: + assoc_data : bytes/bytearray/memoryview + A piece of associated data. There are no restrictions on its size. + """ + + if "update" not in self._next: + raise TypeError("update() can only be called" + " immediately after initialization") + + self._next = ["update", "encrypt", "decrypt", + "digest", "verify"] + + self._signer.update(assoc_data) + return self + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext as ``bytes``. + Otherwise, ``None``. 
+ """ + + if "encrypt" not in self._next: + raise TypeError("encrypt() can only be called after" + " initialization or an update()") + self._next = ["encrypt", "digest"] + ct = self._cipher.encrypt(plaintext, output=output) + if output is None: + self._omac[2].update(ct) + else: + self._omac[2].update(output) + return ct + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext as ``bytes``. + Otherwise, ``None``. + """ + + if "decrypt" not in self._next: + raise TypeError("decrypt() can only be called" + " after initialization or an update()") + self._next = ["decrypt", "verify"] + self._omac[2].update(ciphertext) + return self._cipher.decrypt(ciphertext, output=output) + + def digest(self): + """Compute the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method returns the MAC that shall be sent to the receiver, + together with the ciphertext. + + :Return: the MAC, as a byte string. + """ + + if "digest" not in self._next: + raise TypeError("digest() cannot be called when decrypting" + " or validating a message") + self._next = ["digest"] + + if not self._mac_tag: + tag = b'\x00' * self.block_size + for i in range(3): + tag = strxor(tag, self._omac[i].digest()) + self._mac_tag = tag[:self._mac_len] + + return self._mac_tag + + def hexdigest(self): + """Compute the *printable* MAC tag. + + This method is like `digest`. + + :Return: the MAC, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method checks if the decrypted message is indeed valid + (that is, if the key is correct) and it has not been + tampered with while in transit. + + :Parameters: + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Raises MacMismatchError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if "verify" not in self._next: + raise TypeError("verify() cannot be called" + " when encrypting a message") + self._next = ["verify"] + + if not self._mac_tag: + tag = b'\x00' * self.block_size + for i in range(3): + tag = strxor(tag, self._omac[i].digest()) + self._mac_tag = tag[:self._mac_len] + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=self._mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* MAC tag. + + This method is like `verify`. 
+ + :Parameters: + hex_mac_tag : string + This is the *printable* MAC, as received from the sender. + :Raises MacMismatchError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext, output=None): + """Perform encrypt() and digest() in one step. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + a tuple with two items: + + - the ciphertext, as ``bytes`` + - the MAC tag, as ``bytes`` + + The first item becomes ``None`` when the ``output`` parameter + specified a location for the result. + """ + + return self.encrypt(plaintext, output=output), self.digest() + + def decrypt_and_verify(self, ciphertext, received_mac_tag, output=None): + """Perform decrypt() and verify() in one step. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: the plaintext as ``bytes`` or ``None`` when the ``output`` + parameter specified a location for the result. + :Raises MacMismatchError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + pt = self.decrypt(ciphertext, output=output) + self.verify(received_mac_tag) + return pt + + +def _create_eax_cipher(factory, **kwargs): + """Create a new block cipher, configured in EAX mode. + + :Parameters: + factory : module + A symmetric cipher module from `Cryptodome.Cipher` (like + `Cryptodome.Cipher.AES`). + + :Keywords: + key : bytes/bytearray/memoryview + The secret key to use in the symmetric cipher. + + nonce : bytes/bytearray/memoryview + A value that must never be reused for any other encryption. + There are no restrictions on its length, but it is recommended to use + at least 16 bytes. + + The nonce shall never repeat for two different messages encrypted with + the same key, but it does not need to be random. + + If not specified, a 16 byte long random string is used. + + mac_len : integer + Length of the MAC, in bytes. It must be no larger than the cipher + block bytes (which is the default). + """ + + try: + key = kwargs.pop("key") + nonce = kwargs.pop("nonce", None) + if nonce is None: + nonce = get_random_bytes(16) + mac_len = kwargs.pop("mac_len", factory.block_size) + except KeyError as e: + raise TypeError("Missing parameter: " + str(e)) + + return EaxMode(factory, key, nonce, mac_len, kwargs) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_eax.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_eax.pyi new file mode 100644 index 0000000..cbfa467 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_eax.pyi @@ -0,0 +1,45 @@ +from types import ModuleType +from typing import Any, Union, Tuple, Dict, overload, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['EaxMode'] + +class EaxMode(object): + block_size: int + nonce: bytes + + def __init__(self, + factory: ModuleType, + key: Buffer, + nonce: Buffer, + mac_len: int, + cipher_params: Dict) -> None: ... + + def update(self, assoc_data: Buffer) -> EaxMode: ... 
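As the docstring above says, omitting nonce makes _create_eax_cipher pick 16 random bytes; the caller must then read it back from cipher.nonce and transmit it. A short sketch (the packet layout is purely illustrative):

```python
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_EAX)           # nonce auto-generated
ct, tag = cipher.encrypt_and_digest(b"hello")
packet = cipher.nonce + tag + ct              # hypothetical wire format
```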
+ + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, received_mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + + @overload + def encrypt_and_digest(self, + plaintext: Buffer) -> Tuple[bytes, bytes]: ... + @overload + def encrypt_and_digest(self, + plaintext: Buffer, + output: Buffer) -> Tuple[None, bytes]: ... + def decrypt_and_verify(self, + ciphertext: Buffer, + received_mac_tag: Buffer, + output: Optional[Union[bytearray, memoryview]] = ...) -> bytes: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ecb.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ecb.py new file mode 100644 index 0000000..a01a16f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ecb.py @@ -0,0 +1,220 @@ +# -*- coding: utf-8 -*- +# +# Cipher/mode_ecb.py : ECB mode +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +""" +Electronic Code Book (ECB) mode. +""" + +__all__ = [ 'EcbMode' ] + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, create_string_buffer, + get_raw_buffer, SmartPointer, + c_size_t, c_uint8_ptr, + is_writeable_buffer) + +raw_ecb_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._raw_ecb", """ + int ECB_start_operation(void *cipher, + void **pResult); + int ECB_encrypt(void *ecbState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int ECB_decrypt(void *ecbState, + const uint8_t *in, + uint8_t *out, + size_t data_len); + int ECB_stop_operation(void *state); + """ + ) + + +class EcbMode(object): + """*Electronic Code Book (ECB)*. + + This is the simplest encryption mode. Each of the plaintext blocks + is directly encrypted into a ciphertext block, independently of + any other block. + + This mode is dangerous because it exposes frequency of symbols + in your plaintext. Other modes (e.g. *CBC*) should be used instead. + + See `NIST SP800-38A`_ , Section 6.1. + + .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf + + :undocumented: __init__ + """ + + def __init__(self, block_cipher): + """Create a new block cipher, configured in ECB mode. + + :Parameters: + block_cipher : C pointer + A smart pointer to the low-level block cipher instance. 
+ """ + self.block_size = block_cipher.block_size + + self._state = VoidPointer() + result = raw_ecb_lib.ECB_start_operation(block_cipher.get(), + self._state.address_of()) + if result: + raise ValueError("Error %d while instantiating the ECB mode" + % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the cipher + # mode + self._state = SmartPointer(self._state.get(), + raw_ecb_lib.ECB_stop_operation) + + # Memory allocated for the underlying block cipher is now owned + # by the cipher mode + block_cipher.release() + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key set at initialization. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + The length must be multiple of the cipher block length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ecb_lib.ECB_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + if result == 3: + raise ValueError("Data must be aligned to block boundary in ECB mode") + raise ValueError("Error %d while encrypting in ECB mode" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key set at initialization. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + The length must be multiple of the cipher block length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. 
+ """ + + if output is None: + plaintext = create_string_buffer(len(ciphertext)) + else: + plaintext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(ciphertext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ecb_lib.ECB_decrypt(self._state.get(), + c_uint8_ptr(ciphertext), + c_uint8_ptr(plaintext), + c_size_t(len(ciphertext))) + if result: + if result == 3: + raise ValueError("Data must be aligned to block boundary in ECB mode") + raise ValueError("Error %d while decrypting in ECB mode" % result) + + if output is None: + return get_raw_buffer(plaintext) + else: + return None + + +def _create_ecb_cipher(factory, **kwargs): + """Instantiate a cipher object that performs ECB encryption/decryption. + + :Parameters: + factory : module + The underlying block cipher, a module from ``Cryptodome.Cipher``. + + All keywords are passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present""" + + cipher_state = factory._create_base_cipher(kwargs) + cipher_state.block_size = factory.block_size + if kwargs: + raise TypeError("Unknown parameters for ECB: %s" % str(kwargs)) + return EcbMode(cipher_state) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ecb.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ecb.pyi new file mode 100644 index 0000000..936195f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ecb.pyi @@ -0,0 +1,19 @@ +from typing import Union, overload + +from Cryptodome.Util._raw_api import SmartPointer + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = [ 'EcbMode' ] + +class EcbMode(object): + def __init__(self, block_cipher: SmartPointer) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_gcm.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_gcm.py new file mode 100644 index 0000000..9914400 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_gcm.py @@ -0,0 +1,620 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+"""
+Galois/Counter Mode (GCM).
+"""
+
+__all__ = ['GcmMode']
+
+from binascii import unhexlify
+
+from Cryptodome.Util.py3compat import bord, _copy_bytes
+
+from Cryptodome.Util._raw_api import is_buffer
+
+from Cryptodome.Util.number import long_to_bytes, bytes_to_long
+from Cryptodome.Hash import BLAKE2s
+from Cryptodome.Random import get_random_bytes
+
+from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer,
+                                      create_string_buffer, get_raw_buffer,
+                                      SmartPointer, c_size_t, c_uint8_ptr)
+
+from Cryptodome.Util import _cpu_features
+
+
+# C API by module implementing GHASH
+_ghash_api_template = """
+    int ghash_%imp%(uint8_t y_out[16],
+                    const uint8_t block_data[],
+                    size_t len,
+                    const uint8_t y_in[16],
+                    const void *exp_key);
+    int ghash_expand_%imp%(const uint8_t h[16],
+                           void **ghash_tables);
+    int ghash_destroy_%imp%(void *ghash_tables);
+"""
+
+def _build_impl(lib, postfix):
+    from collections import namedtuple
+
+    funcs = ( "ghash", "ghash_expand", "ghash_destroy" )
+    GHASH_Imp = namedtuple('_GHash_Imp', funcs)
+    try:
+        imp_funcs = [ getattr(lib, x + "_" + postfix) for x in funcs ]
+    except AttributeError:      # Make sphinx stop complaining with its mocklib
+        imp_funcs = [ None ] * 3
+    params = dict(zip(funcs, imp_funcs))
+    return GHASH_Imp(**params)
+
+
+def _get_ghash_portable():
+    api = _ghash_api_template.replace("%imp%", "portable")
+    lib = load_pycryptodome_raw_lib("Cryptodome.Hash._ghash_portable", api)
+    result = _build_impl(lib, "portable")
+    return result
+_ghash_portable = _get_ghash_portable()
+
+
+def _get_ghash_clmul():
+    """Return None if CLMUL implementation is not available"""
+
+    if not _cpu_features.have_clmul():
+        return None
+    try:
+        api = _ghash_api_template.replace("%imp%", "clmul")
+        lib = load_pycryptodome_raw_lib("Cryptodome.Hash._ghash_clmul", api)
+        result = _build_impl(lib, "clmul")
+    except OSError:
+        result = None
+    return result
+_ghash_clmul = _get_ghash_clmul()
+
+
+class _GHASH(object):
+    """GHASH function defined in NIST SP 800-38D, Algorithm 2.
+
+    If X_1, X_2, .. X_m are the blocks of input data, the function
+    computes:
+
+       X_1*H^{m} + X_2*H^{m-1} + ... + X_m*H
+
+    in the Galois field GF(2^128) using the reducing polynomial
+    (x^128 + x^7 + x^2 + x + 1).
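For reference, the multiplication that the table-driven and CLMUL backends accelerate can be written in a few lines of pure Python. A slow sketch, assuming the bit ordering of SP 800-38D where the reduction constant is R = 0xE1 << 120 and the block 0x80...00 is the multiplicative identity; this is not how the C code above works:

```python
def gf128_mul(x: int, y: int) -> int:
    """Multiply two 128-bit GHASH field elements (slow reference)."""
    R = 0xE1000000000000000000000000000000   # x^128 + x^7 + x^2 + x + 1
    z = 0
    for i in range(127, -1, -1):
        z ^= x * ((y >> i) & 1)              # add x when bit i of y is set
        x = (x >> 1) ^ ((x & 1) * R)         # multiply x by the generator
    return z

ONE = 1 << 127                               # identity in this encoding
assert gf128_mul(0x1234 << 100, ONE) == 0x1234 << 100
```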
+ """ + + def __init__(self, subkey, ghash_c): + assert len(subkey) == 16 + + self.ghash_c = ghash_c + + self._exp_key = VoidPointer() + result = ghash_c.ghash_expand(c_uint8_ptr(subkey), + self._exp_key.address_of()) + if result: + raise ValueError("Error %d while expanding the GHASH key" % result) + + self._exp_key = SmartPointer(self._exp_key.get(), + ghash_c.ghash_destroy) + + # create_string_buffer always returns a string of zeroes + self._last_y = create_string_buffer(16) + + def update(self, block_data): + assert len(block_data) % 16 == 0 + + result = self.ghash_c.ghash(self._last_y, + c_uint8_ptr(block_data), + c_size_t(len(block_data)), + self._last_y, + self._exp_key.get()) + if result: + raise ValueError("Error %d while updating GHASH" % result) + + return self + + def digest(self): + return get_raw_buffer(self._last_y) + + +def enum(**enums): + return type('Enum', (), enums) + + +MacStatus = enum(PROCESSING_AUTH_DATA=1, PROCESSING_CIPHERTEXT=2) + + +class GcmMode(object): + """Galois Counter Mode (GCM). + + This is an Authenticated Encryption with Associated Data (`AEAD`_) mode. + It provides both confidentiality and authenticity. + + The header of the message may be left in the clear, if needed, and it will + still be subject to authentication. The decryption step tells the receiver + if the message comes from a source that really knowns the secret key. + Additionally, decryption detects if any part of the message - including the + header - has been modified or corrupted. + + This mode requires a *nonce*. + + This mode is only available for ciphers that operate on 128 bits blocks + (e.g. AES but not TDES). + + See `NIST SP800-38D`_. + + .. _`NIST SP800-38D`: http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf + .. _AEAD: http://blog.cryptographyengineering.com/2012/05/how-to-choose-authenticated-encryption.html + + :undocumented: __init__ + """ + + def __init__(self, factory, key, nonce, mac_len, cipher_params, ghash_c): + + self.block_size = factory.block_size + if self.block_size != 16: + raise ValueError("GCM mode is only available for ciphers" + " that operate on 128 bits blocks") + + if len(nonce) == 0: + raise ValueError("Nonce cannot be empty") + + if not is_buffer(nonce): + raise TypeError("Nonce must be bytes, bytearray or memoryview") + + # See NIST SP 800 38D, 5.2.1.1 + if len(nonce) > 2**64 - 1: + raise ValueError("Nonce exceeds maximum length") + + + self.nonce = _copy_bytes(None, None, nonce) + """Nonce""" + + self._factory = factory + self._key = _copy_bytes(None, None, key) + self._tag = None # Cache for MAC tag + + self._mac_len = mac_len + if not (4 <= mac_len <= 16): + raise ValueError("Parameter 'mac_len' must be in the range 4..16") + + # Allowed transitions after initialization + self._next = ["update", "encrypt", "decrypt", + "digest", "verify"] + + self._no_more_assoc_data = False + + # Length of associated data + self._auth_len = 0 + + # Length of the ciphertext or plaintext + self._msg_len = 0 + + # Step 1 in SP800-38D, Algorithm 4 (encryption) - Compute H + # See also Algorithm 5 (decryption) + hash_subkey = factory.new(key, + self._factory.MODE_ECB, + **cipher_params + ).encrypt(b'\x00' * 16) + + # Step 2 - Compute J0 + if len(self.nonce) == 12: + j0 = self.nonce + b"\x00\x00\x00\x01" + else: + fill = (16 - (len(self.nonce) % 16)) % 16 + 8 + ghash_in = (self.nonce + + b'\x00' * fill + + long_to_bytes(8 * len(self.nonce), 8)) + j0 = _GHASH(hash_subkey, ghash_c).update(ghash_in).digest() + + # Step 3 - Prepare GCTR cipher for 
encryption/decryption
+        nonce_ctr = j0[:12]
+        iv_ctr = (bytes_to_long(j0) + 1) & 0xFFFFFFFF
+        self._cipher = factory.new(key,
+                                   self._factory.MODE_CTR,
+                                   initial_value=iv_ctr,
+                                   nonce=nonce_ctr,
+                                   **cipher_params)
+
+        # Step 5 - Bootstrap GHASH
+        self._signer = _GHASH(hash_subkey, ghash_c)
+
+        # Step 6 - Prepare GCTR cipher for GMAC
+        self._tag_cipher = factory.new(key,
+                                       self._factory.MODE_CTR,
+                                       initial_value=j0,
+                                       nonce=b"",
+                                       **cipher_params)
+
+        # Cache for data to authenticate
+        self._cache = b""
+
+        self._status = MacStatus.PROCESSING_AUTH_DATA
+
+    def update(self, assoc_data):
+        """Protect associated data
+
+        If there is any associated data, the caller has to invoke
+        this function one or more times, before using
+        ``decrypt`` or ``encrypt``.
+
+        By *associated data* it is meant any data (e.g. packet headers) that
+        will not be encrypted and will be transmitted in the clear.
+        However, the receiver is still able to detect any modification to it.
+        In GCM, the *associated data* is also called
+        *additional authenticated data* (AAD).
+
+        If there is no associated data, this method must not be called.
+
+        The caller may split associated data in segments of any size, and
+        invoke this method multiple times, each time with the next segment.
+
+        :Parameters:
+          assoc_data : bytes/bytearray/memoryview
+            A piece of associated data. There are no restrictions on its size.
+        """
+
+        if "update" not in self._next:
+            raise TypeError("update() can only be called"
+                            " immediately after initialization")
+
+        self._next = ["update", "encrypt", "decrypt",
+                      "digest", "verify"]
+
+        self._update(assoc_data)
+        self._auth_len += len(assoc_data)
+
+        # See NIST SP 800 38D, 5.2.1.1
+        if self._auth_len > 2**64 - 1:
+            raise ValueError("Additional Authenticated Data exceeds maximum length")
+
+        return self
+
+    def _update(self, data):
+        assert(len(self._cache) < 16)
+
+        if len(self._cache) > 0:
+            filler = min(16 - len(self._cache), len(data))
+            self._cache += _copy_bytes(None, filler, data)
+            data = data[filler:]
+
+            if len(self._cache) < 16:
+                return
+
+            # The cache is exactly one block
+            self._signer.update(self._cache)
+            self._cache = b""
+
+        update_len = len(data) // 16 * 16
+        self._cache = _copy_bytes(update_len, None, data)
+        if update_len > 0:
+            self._signer.update(data[:update_len])
+
+    def _pad_cache_and_update(self):
+        assert(len(self._cache) < 16)
+
+        # Associated data A and ciphertext C are each padded with the
+        # minimum number of zero bytes (possibly none) such that:
+        # - associated data A is aligned to the 16 byte boundary.
+        #   See step 5 in section 7.1
+        # - ciphertext C is aligned to the 16 byte boundary.
+        #   See step 6 in section 7.2
+        len_cache = len(self._cache)
+        if len_cache > 0:
+            self._update(b'\x00' * (16 - len_cache))
+
+    def encrypt(self, plaintext, output=None):
+        """Encrypt data with the key and the parameters set at initialization.
+
+        A cipher object is stateful: once you have encrypted a message
+        you cannot encrypt (or decrypt) another message using the same
+        object.
+
+        The data to encrypt can be broken up in two or
+        more pieces and `encrypt` can be called multiple times.
+
+        That is, the statement:
+
+            >>> c.encrypt(a) + c.encrypt(b)
+
+        is equivalent to:
+
+            >>> c.encrypt(a+b)
+
+        This function does not add any padding to the plaintext.
+
+        :Parameters:
+          plaintext : bytes/bytearray/memoryview
+            The piece of data to encrypt.
+            It can be of any length.
+        :Keywords:
+          output : bytearray/memoryview
+            The location where the ciphertext must be written to.
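One detail of step 3 above that is easy to miss: the CTR cipher is built with a 12-byte fixed prefix and a 32-bit counter, and the counter starts at J0 + 1 because the unincremented J0 is kept back for encrypting the tag (step 6). A tiny sketch for the common 12-byte-nonce case:

```python
from Cryptodome.Util.number import bytes_to_long

nonce = bytes(12)                        # illustrative 12-byte nonce
j0 = nonce + b"\x00\x00\x00\x01"         # step 2, 12-byte case
nonce_ctr = j0[:12]                      # fixed field of each counter block
iv_ctr = (bytes_to_long(j0) + 1) & 0xFFFFFFFF
assert iv_ctr == 2                       # low word was 1; J0 itself is for the tag
```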
+ If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext as ``bytes``. + Otherwise, ``None``. + """ + + if "encrypt" not in self._next: + raise TypeError("encrypt() can only be called after" + " initialization or an update()") + self._next = ["encrypt", "digest"] + + ciphertext = self._cipher.encrypt(plaintext, output=output) + + if self._status == MacStatus.PROCESSING_AUTH_DATA: + self._pad_cache_and_update() + self._status = MacStatus.PROCESSING_CIPHERTEXT + + self._update(ciphertext if output is None else output) + self._msg_len += len(plaintext) + + # See NIST SP 800 38D, 5.2.1.1 + if self._msg_len > 2**39 - 256: + raise ValueError("Plaintext exceeds maximum length") + + return ciphertext + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext as ``bytes``. + Otherwise, ``None``. + """ + + if "decrypt" not in self._next: + raise TypeError("decrypt() can only be called" + " after initialization or an update()") + self._next = ["decrypt", "verify"] + + if self._status == MacStatus.PROCESSING_AUTH_DATA: + self._pad_cache_and_update() + self._status = MacStatus.PROCESSING_CIPHERTEXT + + self._update(ciphertext) + self._msg_len += len(ciphertext) + + return self._cipher.decrypt(ciphertext, output=output) + + def digest(self): + """Compute the *binary* MAC tag in an AEAD mode. + + The caller invokes this function at the very end. + + This method returns the MAC that shall be sent to the receiver, + together with the ciphertext. + + :Return: the MAC, as a byte string. + """ + + if "digest" not in self._next: + raise TypeError("digest() cannot be called when decrypting" + " or validating a message") + self._next = ["digest"] + + return self._compute_mac() + + def _compute_mac(self): + """Compute MAC without any FSM checks.""" + + if self._tag: + return self._tag + + # Step 5 in NIST SP 800-38D, Algorithm 4 - Compute S + self._pad_cache_and_update() + self._update(long_to_bytes(8 * self._auth_len, 8)) + self._update(long_to_bytes(8 * self._msg_len, 8)) + s_tag = self._signer.digest() + + # Step 6 - Compute T + self._tag = self._tag_cipher.encrypt(s_tag)[:self._mac_len] + + return self._tag + + def hexdigest(self): + """Compute the *printable* MAC tag. + + This method is like `digest`. + + :Return: the MAC, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method checks if the decrypted message is indeed valid + (that is, if the key is correct) and it has not been + tampered with while in transit. 
+ + :Parameters: + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if "verify" not in self._next: + raise TypeError("verify() cannot be called" + " when encrypting a message") + self._next = ["verify"] + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, + data=self._compute_mac()) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, + data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* MAC tag. + + This method is like `verify`. + + :Parameters: + hex_mac_tag : string + This is the *printable* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext, output=None): + """Perform encrypt() and digest() in one step. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + a tuple with two items: + + - the ciphertext, as ``bytes`` + - the MAC tag, as ``bytes`` + + The first item becomes ``None`` when the ``output`` parameter + specified a location for the result. + """ + + return self.encrypt(plaintext, output=output), self.digest() + + def decrypt_and_verify(self, ciphertext, received_mac_tag, output=None): + """Perform decrypt() and verify() in one step. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + received_mac_tag : byte string + This is the *binary* MAC, as received from the sender. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: the plaintext as ``bytes`` or ``None`` when the ``output`` + parameter specified a location for the result. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + plaintext = self.decrypt(ciphertext, output=output) + self.verify(received_mac_tag) + return plaintext + + +def _create_gcm_cipher(factory, **kwargs): + """Create a new block cipher, configured in Galois Counter Mode (GCM). + + :Parameters: + factory : module + A block cipher module, taken from `Cryptodome.Cipher`. + The cipher must have block length of 16 bytes. + GCM has been only defined for `Cryptodome.Cipher.AES`. + + :Keywords: + key : bytes/bytearray/memoryview + The secret key to use in the symmetric cipher. + It must be 16 (e.g. *AES-128*), 24 (e.g. *AES-192*) + or 32 (e.g. *AES-256*) bytes long. + + nonce : bytes/bytearray/memoryview + A value that must never be reused for any other encryption. + + There are no restrictions on its length, + but it is recommended to use at least 16 bytes. + + The nonce shall never repeat for two + different messages encrypted with the same key, + but it does not need to be random. + + If not provided, a 16 byte nonce will be randomly created. + + mac_len : integer + Length of the MAC, in bytes. + It must be no larger than 16 bytes (which is the default). 
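An end-to-end sketch of the GCM API documented above, with AAD on both sides; all values are illustrative:

```python
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)
aad = b"header"

enc = AES.new(key, AES.MODE_GCM)          # random 16-byte nonce
enc.update(aad)
ct, tag = enc.encrypt_and_digest(b"secret")

dec = AES.new(key, AES.MODE_GCM, nonce=enc.nonce)
dec.update(aad)
pt = dec.decrypt_and_verify(ct, tag)      # ValueError if tampered
assert pt == b"secret"
```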
+ """ + + try: + key = kwargs.pop("key") + except KeyError as e: + raise TypeError("Missing parameter:" + str(e)) + + nonce = kwargs.pop("nonce", None) + if nonce is None: + nonce = get_random_bytes(16) + mac_len = kwargs.pop("mac_len", 16) + + # Not documented - only used for testing + use_clmul = kwargs.pop("use_clmul", True) + if use_clmul and _ghash_clmul: + ghash_c = _ghash_clmul + else: + ghash_c = _ghash_portable + + return GcmMode(factory, key, nonce, mac_len, kwargs, ghash_c) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_gcm.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_gcm.pyi new file mode 100644 index 0000000..8912955 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_gcm.pyi @@ -0,0 +1,45 @@ +from types import ModuleType +from typing import Union, Tuple, Dict, overload, Optional + +__all__ = ['GcmMode'] + +Buffer = Union[bytes, bytearray, memoryview] + +class GcmMode(object): + block_size: int + nonce: Buffer + + def __init__(self, + factory: ModuleType, + key: Buffer, + nonce: Buffer, + mac_len: int, + cipher_params: Dict) -> None: ... + + def update(self, assoc_data: Buffer) -> GcmMode: ... + + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, received_mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + + @overload + def encrypt_and_digest(self, + plaintext: Buffer) -> Tuple[bytes, bytes]: ... + @overload + def encrypt_and_digest(self, + plaintext: Buffer, + output: Buffer) -> Tuple[None, bytes]: ... + def decrypt_and_verify(self, + ciphertext: Buffer, + received_mac_tag: Buffer, + output: Optional[Union[bytearray, memoryview]] = ...) -> bytes: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ocb.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ocb.py new file mode 100644 index 0000000..1295e61 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ocb.py @@ -0,0 +1,532 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+"""
+Offset Codebook (OCB) mode.
+
+OCB is an Authenticated Encryption with Associated Data (AEAD) cipher mode
+designed by Prof. Phillip Rogaway and specified in `RFC7253`_.
+
+The algorithm provides both authenticity and privacy; it is very efficient,
+it uses only one key, and it can be used in online mode (so that encryption
+or decryption can start before the end of the message is available).
+
+This module implements the third and last variant of OCB (OCB3) and it only
+works in combination with a 128-bit block symmetric cipher, like AES.
+
+OCB is patented in the US but `free licenses`_ exist for software
+implementations meant for non-military purposes.
+
+Example:
+    >>> from Cryptodome.Cipher import AES
+    >>> from Cryptodome.Random import get_random_bytes
+    >>>
+    >>> key = get_random_bytes(32)
+    >>> cipher = AES.new(key, AES.MODE_OCB)
+    >>> plaintext = b"Attack at dawn"
+    >>> ciphertext, mac = cipher.encrypt_and_digest(plaintext)
+    >>> # Deliver cipher.nonce, ciphertext and mac
+    ...
+    >>> cipher = AES.new(key, AES.MODE_OCB, nonce=nonce)
+    >>> try:
+    ...     plaintext = cipher.decrypt_and_verify(ciphertext, mac)
+    ... except ValueError:
+    ...     print("Invalid message")
+    ... else:
+    ...     print(plaintext)
+
+:undocumented: __package__
+
+.. _RFC7253: http://www.rfc-editor.org/info/rfc7253
+.. _free licenses: http://web.cs.ucdavis.edu/~rogaway/ocb/license.htm
+"""
+
+import struct
+from binascii import unhexlify
+
+from Cryptodome.Util.py3compat import bord, _copy_bytes, bchr
+from Cryptodome.Util.number import long_to_bytes, bytes_to_long
+from Cryptodome.Util.strxor import strxor
+
+from Cryptodome.Hash import BLAKE2s
+from Cryptodome.Random import get_random_bytes
+
+from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer,
+                                      create_string_buffer, get_raw_buffer,
+                                      SmartPointer, c_size_t, c_uint8_ptr,
+                                      is_buffer)
+
+_raw_ocb_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._raw_ocb", """
+    int OCB_start_operation(void *cipher,
+                            const uint8_t *offset_0,
+                            size_t offset_0_len,
+                            void **pState);
+    int OCB_encrypt(void *state,
+                    const uint8_t *in,
+                    uint8_t *out,
+                    size_t data_len);
+    int OCB_decrypt(void *state,
+                    const uint8_t *in,
+                    uint8_t *out,
+                    size_t data_len);
+    int OCB_update(void *state,
+                   const uint8_t *in,
+                   size_t data_len);
+    int OCB_digest(void *state,
+                   uint8_t *tag,
+                   size_t tag_len);
+    int OCB_stop_operation(void *state);
+    """)
+
+
+class OcbMode(object):
+    """Offset Codebook (OCB) mode.
+
+    :undocumented: __init__
+    """
+
+    def __init__(self, factory, nonce, mac_len, cipher_params):
+
+        if factory.block_size != 16:
+            raise ValueError("OCB mode is only available for ciphers"
+                             " that operate on 128 bits blocks")
+
+        self.block_size = 16
+        """The block size of the underlying cipher, in bytes."""
+
+        self.nonce = _copy_bytes(None, None, nonce)
+        """Nonce used for this session."""
+        if len(nonce) not in range(1, 16):
+            raise ValueError("Nonce must be between 1 and 15 bytes long")
+        if not is_buffer(nonce):
+            raise TypeError("Nonce must be bytes, bytearray or memoryview")
+
+        self._mac_len = mac_len
+        if not 8 <= mac_len <= 16:
+            raise ValueError("MAC tag must be between 8 and 16 bytes long")
+
+        # Cache for MAC tag
+        self._mac_tag = None
+
+        # Cache for unaligned associated data
+        self._cache_A = b""
+
+        # Cache for unaligned ciphertext/plaintext
+        self._cache_P = b""
+
+        # Allowed transitions after initialization
+        self._next = ["update", "encrypt", "decrypt",
+                      "digest", "verify"]
+
+        # Compute Offset_0
+        params_without_key = dict(cipher_params)
+        key = params_without_key.pop("key")
+
+        taglen_mod128 = (self._mac_len * 8) % 128
+        if len(self.nonce) < 15:
+            nonce = bchr(taglen_mod128 << 1) +\
+                    b'\x00' * (14 - len(nonce)) +\
+                    b'\x01' +\
+                    self.nonce
+        else:
+            nonce = bchr((taglen_mod128 << 1) | 0x01) +\
+                    self.nonce
+
+        bottom_bits = bord(nonce[15]) & 0x3F    # 6 bits, 0..63
+        top_bits = bord(nonce[15]) & 0xC0       # 2 bits
+
+        ktop_cipher = factory.new(key,
+                                  factory.MODE_ECB,
+                                  **params_without_key)
+        ktop = ktop_cipher.encrypt(struct.pack('15sB',
+                                               nonce[:15],
+                                               top_bits))
+
+        stretch = ktop + strxor(ktop[:8], ktop[1:9])    # 192 bits
+        offset_0 = long_to_bytes(bytes_to_long(stretch) >>
+                                 (64 - bottom_bits), 24)[8:]
+
+        # Create low-level cipher instance
+        raw_cipher = factory._create_base_cipher(cipher_params)
+        if cipher_params:
+            raise TypeError("Unknown keywords: " + str(cipher_params))
+
+        self._state = VoidPointer()
+        result = _raw_ocb_lib.OCB_start_operation(raw_cipher.get(),
+                                                  offset_0,
+                                                  c_size_t(len(offset_0)),
+                                                  self._state.address_of())
+        if result:
+            raise ValueError("Error %d while instantiating the OCB mode"
+                             % result)
+
+        # Ensure that object disposal of this Python object will (eventually)
+        # free the memory allocated by the raw library for the cipher mode
+        self._state = SmartPointer(self._state.get(),
+                                   _raw_ocb_lib.OCB_stop_operation)
+
+        # Memory allocated for the underlying block cipher is now owned
+        # by the cipher mode
+        raw_cipher.release()
+
+    def _update(self, assoc_data, assoc_data_len):
+        result = _raw_ocb_lib.OCB_update(self._state.get(),
+                                         c_uint8_ptr(assoc_data),
+                                         c_size_t(assoc_data_len))
+        if result:
+            raise ValueError("Error %d while computing MAC in OCB mode" % result)
+
+    def update(self, assoc_data):
+        """Process the associated data.
+
+        If there is any associated data, the caller has to invoke
+        this method one or more times, before using
+        ``decrypt`` or ``encrypt``.
+
+        By *associated data* it is meant any data (e.g. packet headers) that
+        will not be encrypted and will be transmitted in the clear.
+        However, the receiver shall still be able to detect modifications.
+
+        If there is no associated data, this method must not be called.
+
+        The caller may split associated data in segments of any size, and
+        invoke this method multiple times, each time with the next segment.
+
+        :Parameters:
+          assoc_data : bytes/bytearray/memoryview
+            A piece of associated data.
+ """ + + if "update" not in self._next: + raise TypeError("update() can only be called" + " immediately after initialization") + + self._next = ["encrypt", "decrypt", "digest", + "verify", "update"] + + if len(self._cache_A) > 0: + filler = min(16 - len(self._cache_A), len(assoc_data)) + self._cache_A += _copy_bytes(None, filler, assoc_data) + assoc_data = assoc_data[filler:] + + if len(self._cache_A) < 16: + return self + + # Clear the cache, and proceeding with any other aligned data + self._cache_A, seg = b"", self._cache_A + self.update(seg) + + update_len = len(assoc_data) // 16 * 16 + self._cache_A = _copy_bytes(update_len, None, assoc_data) + self._update(assoc_data, update_len) + return self + + def _transcrypt_aligned(self, in_data, in_data_len, + trans_func, trans_desc): + + out_data = create_string_buffer(in_data_len) + result = trans_func(self._state.get(), + in_data, + out_data, + c_size_t(in_data_len)) + if result: + raise ValueError("Error %d while %sing in OCB mode" + % (result, trans_desc)) + return get_raw_buffer(out_data) + + def _transcrypt(self, in_data, trans_func, trans_desc): + # Last piece to encrypt/decrypt + if in_data is None: + out_data = self._transcrypt_aligned(self._cache_P, + len(self._cache_P), + trans_func, + trans_desc) + self._cache_P = b"" + return out_data + + # Try to fill up the cache, if it already contains something + prefix = b"" + if len(self._cache_P) > 0: + filler = min(16 - len(self._cache_P), len(in_data)) + self._cache_P += _copy_bytes(None, filler, in_data) + in_data = in_data[filler:] + + if len(self._cache_P) < 16: + # We could not manage to fill the cache, so there is certainly + # no output yet. + return b"" + + # Clear the cache, and proceeding with any other aligned data + prefix = self._transcrypt_aligned(self._cache_P, + len(self._cache_P), + trans_func, + trans_desc) + self._cache_P = b"" + + # Process data in multiples of the block size + trans_len = len(in_data) // 16 * 16 + result = self._transcrypt_aligned(c_uint8_ptr(in_data), + trans_len, + trans_func, + trans_desc) + if prefix: + result = prefix + result + + # Left-over + self._cache_P = _copy_bytes(trans_len, None, in_data) + + return result + + def encrypt(self, plaintext=None): + """Encrypt the next piece of plaintext. + + After the entire plaintext has been passed (but before `digest`), + you **must** call this method one last time with no arguments to collect + the final piece of ciphertext. + + If possible, use the method `encrypt_and_digest` instead. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The next piece of data to encrypt or ``None`` to signify + that encryption has finished and that any remaining ciphertext + has to be produced. + :Return: + the ciphertext, as a byte string. + Its length may not match the length of the *plaintext*. + """ + + if "encrypt" not in self._next: + raise TypeError("encrypt() can only be called after" + " initialization or an update()") + + if plaintext is None: + self._next = ["digest"] + else: + self._next = ["encrypt"] + return self._transcrypt(plaintext, _raw_ocb_lib.OCB_encrypt, "encrypt") + + def decrypt(self, ciphertext=None): + """Decrypt the next piece of ciphertext. + + After the entire ciphertext has been passed (but before `verify`), + you **must** call this method one last time with no arguments to collect + the remaining piece of plaintext. + + If possible, use the method `decrypt_and_verify` instead. 
+ + :Parameters: + ciphertext : bytes/bytearray/memoryview + The next piece of data to decrypt or ``None`` to signify + that decryption has finished and that any remaining plaintext + has to be produced. + :Return: + the plaintext, as a byte string. + Its length may not match the length of the *ciphertext*. + """ + + if "decrypt" not in self._next: + raise TypeError("decrypt() can only be called after" + " initialization or an update()") + + if ciphertext is None: + self._next = ["verify"] + else: + self._next = ["decrypt"] + return self._transcrypt(ciphertext, + _raw_ocb_lib.OCB_decrypt, + "decrypt") + + def _compute_mac_tag(self): + + if self._mac_tag is not None: + return + + if self._cache_A: + self._update(self._cache_A, len(self._cache_A)) + self._cache_A = b"" + + mac_tag = create_string_buffer(16) + result = _raw_ocb_lib.OCB_digest(self._state.get(), + mac_tag, + c_size_t(len(mac_tag)) + ) + if result: + raise ValueError("Error %d while computing digest in OCB mode" + % result) + self._mac_tag = get_raw_buffer(mac_tag)[:self._mac_len] + + def digest(self): + """Compute the *binary* MAC tag. + + Call this method after the final `encrypt` (the one with no arguments) + to obtain the MAC tag. + + The MAC tag is needed by the receiver to determine authenticity + of the message. + + :Return: the MAC, as a byte string. + """ + + if "digest" not in self._next: + raise TypeError("digest() cannot be called now for this cipher") + + assert(len(self._cache_P) == 0) + + self._next = ["digest"] + + if self._mac_tag is None: + self._compute_mac_tag() + + return self._mac_tag + + def hexdigest(self): + """Compute the *printable* MAC tag. + + This method is like `digest`. + + :Return: the MAC, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* MAC tag. + + Call this method after the final `decrypt` (the one with no arguments) + to check if the message is authentic and valid. + + :Parameters: + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if "verify" not in self._next: + raise TypeError("verify() cannot be called now for this cipher") + + assert(len(self._cache_P) == 0) + + self._next = ["verify"] + + if self._mac_tag is None: + self._compute_mac_tag() + + secret = get_random_bytes(16) + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=self._mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* MAC tag. + + This method is like `verify`. + + :Parameters: + hex_mac_tag : string + This is the *printable* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext): + """Encrypt the message and create the MAC tag in one step. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The entire message to encrypt. 
+ :Return: + a tuple with two byte strings: + + - the encrypted data + - the MAC + """ + + return self.encrypt(plaintext) + self.encrypt(), self.digest() + + def decrypt_and_verify(self, ciphertext, received_mac_tag): + """Decrypted the message and verify its authenticity in one step. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The entire message to decrypt. + received_mac_tag : byte string + This is the *binary* MAC, as received from the sender. + + :Return: the decrypted data (byte string). + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + plaintext = self.decrypt(ciphertext) + self.decrypt() + self.verify(received_mac_tag) + return plaintext + + +def _create_ocb_cipher(factory, **kwargs): + """Create a new block cipher, configured in OCB mode. + + :Parameters: + factory : module + A symmetric cipher module from `Cryptodome.Cipher` + (like `Cryptodome.Cipher.AES`). + + :Keywords: + nonce : bytes/bytearray/memoryview + A value that must never be reused for any other encryption. + Its length can vary from 1 to 15 bytes. + If not specified, a random 15 bytes long nonce is generated. + + mac_len : integer + Length of the MAC, in bytes. + It must be in the range ``[8..16]``. + The default is 16 (128 bits). + + Any other keyword will be passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present). + """ + + try: + nonce = kwargs.pop("nonce", None) + if nonce is None: + nonce = get_random_bytes(15) + mac_len = kwargs.pop("mac_len", 16) + except KeyError as e: + raise TypeError("Keyword missing: " + str(e)) + + return OcbMode(factory, nonce, mac_len, kwargs) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ocb.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ocb.pyi new file mode 100644 index 0000000..a1909fc --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ocb.pyi @@ -0,0 +1,36 @@ +from types import ModuleType +from typing import Union, Any, Optional, Tuple, Dict, overload + +Buffer = Union[bytes, bytearray, memoryview] + +class OcbMode(object): + block_size: int + nonce: Buffer + + def __init__(self, + factory: ModuleType, + nonce: Buffer, + mac_len: int, + cipher_params: Dict) -> None: ... + + def update(self, assoc_data: Buffer) -> OcbMode: ... + + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, received_mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + + def encrypt_and_digest(self, + plaintext: Buffer) -> Tuple[bytes, bytes]: ... + def decrypt_and_verify(self, + ciphertext: Buffer, + received_mac_tag: Buffer) -> bytes: ... 
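The @overload pairs used throughout these .pyi stubs encode one contract: the call returns bytes unless a writeable output buffer is passed, in which case it returns None. A sketch of what that means for callers, shown with GCM since its encrypt() accepts output=:

```python
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)

c1 = AES.new(key, AES.MODE_GCM)
ct = c1.encrypt(b"data")                  # inferred type: bytes

c2 = AES.new(key, AES.MODE_GCM)
buf = bytearray(4)                        # must match the input length
ret = c2.encrypt(b"data", output=buf)     # inferred type: None
assert ret is None
```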
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ofb.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ofb.py
new file mode 100644
index 0000000..8c0ccf6
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ofb.py
@@ -0,0 +1,282 @@
+# -*- coding: utf-8 -*-
+#
+# Cipher/mode_ofb.py : OFB mode
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""
+Output Feedback (OFB) mode.
+"""
+
+__all__ = ['OfbMode']
+
+from Cryptodome.Util.py3compat import _copy_bytes
+from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer,
+                                      create_string_buffer, get_raw_buffer,
+                                      SmartPointer, c_size_t, c_uint8_ptr,
+                                      is_writeable_buffer)
+
+from Cryptodome.Random import get_random_bytes
+
+raw_ofb_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._raw_ofb", """
+        int OFB_start_operation(void *cipher,
+                                const uint8_t iv[],
+                                size_t iv_len,
+                                void **pResult);
+        int OFB_encrypt(void *ofbState,
+                        const uint8_t *in,
+                        uint8_t *out,
+                        size_t data_len);
+        int OFB_decrypt(void *ofbState,
+                        const uint8_t *in,
+                        uint8_t *out,
+                        size_t data_len);
+        int OFB_stop_operation(void *state);
+        """
+        )
+
+
+class OfbMode(object):
+    """*Output FeedBack (OFB)*.
+
+    This mode is very similar to CBC, but it
+    transforms the underlying block cipher into a stream cipher.
+
+    The keystream is the iterated block encryption of the
+    previous keystream block (the IV, for the first block).
+
+    An Initialization Vector (*IV*) is required.
+
+    See `NIST SP800-38A`_ , Section 6.4.
+
+    .. _`NIST SP800-38A` : http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
+
+    :undocumented: __init__
+    """
+
+    def __init__(self, block_cipher, iv):
+        """Create a new block cipher, configured in OFB mode.
+
+        :Parameters:
+          block_cipher : C pointer
+            A smart pointer to the low-level block cipher instance.
+
+          iv : bytes/bytearray/memoryview
+            The initialization vector to use for encryption or decryption.
+            It is as long as the cipher block.
+
+            **The IV must never be reused for any other message**. It shall
+            be a nonce or a random value.
+
+            Reusing the *IV* for encryptions performed with the same key
+            compromises confidentiality.
+ """ + + self._state = VoidPointer() + result = raw_ofb_lib.OFB_start_operation(block_cipher.get(), + c_uint8_ptr(iv), + c_size_t(len(iv)), + self._state.address_of()) + if result: + raise ValueError("Error %d while instantiating the OFB mode" + % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the cipher mode + self._state = SmartPointer(self._state.get(), + raw_ofb_lib.OFB_stop_operation) + + # Memory allocated for the underlying block cipher is now owed + # by the cipher mode + block_cipher.release() + + self.block_size = len(iv) + """The block size of the underlying cipher, in bytes.""" + + self.iv = _copy_bytes(None, None, iv) + """The Initialization Vector originally used to create the object. + The value does not change.""" + + self.IV = self.iv + """Alias for `iv`""" + + self._next = ["encrypt", "decrypt"] + + def encrypt(self, plaintext, output=None): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. + + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + If ``output`` is ``None``, the ciphertext is returned as ``bytes``. + Otherwise, ``None``. + """ + + if "encrypt" not in self._next: + raise TypeError("encrypt() cannot be called after decrypt()") + self._next = ["encrypt"] + + if output is None: + ciphertext = create_string_buffer(len(plaintext)) + else: + ciphertext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(plaintext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ofb_lib.OFB_encrypt(self._state.get(), + c_uint8_ptr(plaintext), + c_uint8_ptr(ciphertext), + c_size_t(len(plaintext))) + if result: + raise ValueError("Error %d while encrypting in OFB mode" % result) + + if output is None: + return get_raw_buffer(ciphertext) + else: + return None + + def decrypt(self, ciphertext, output=None): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + :Keywords: + output : bytearray/memoryview + The location where the plaintext is written to. + If ``None``, the plaintext is returned. + :Return: + If ``output`` is ``None``, the plaintext is returned as ``bytes``. + Otherwise, ``None``. 
+ """ + + if "decrypt" not in self._next: + raise TypeError("decrypt() cannot be called after encrypt()") + self._next = ["decrypt"] + + if output is None: + plaintext = create_string_buffer(len(ciphertext)) + else: + plaintext = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(ciphertext) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(plaintext)) + + result = raw_ofb_lib.OFB_decrypt(self._state.get(), + c_uint8_ptr(ciphertext), + c_uint8_ptr(plaintext), + c_size_t(len(ciphertext))) + if result: + raise ValueError("Error %d while decrypting in OFB mode" % result) + + if output is None: + return get_raw_buffer(plaintext) + else: + return None + + +def _create_ofb_cipher(factory, **kwargs): + """Instantiate a cipher object that performs OFB encryption/decryption. + + :Parameters: + factory : module + The underlying block cipher, a module from ``Cryptodome.Cipher``. + + :Keywords: + iv : bytes/bytearray/memoryview + The IV to use for OFB. + + IV : bytes/bytearray/memoryview + Alias for ``iv``. + + Any other keyword will be passed to the underlying block cipher. + See the relevant documentation for details (at least ``key`` will need + to be present). + """ + + cipher_state = factory._create_base_cipher(kwargs) + iv = kwargs.pop("IV", None) + IV = kwargs.pop("iv", None) + + if (None, None) == (iv, IV): + iv = get_random_bytes(factory.block_size) + if iv is not None: + if IV is not None: + raise TypeError("You must either use 'iv' or 'IV', not both") + else: + iv = IV + + if len(iv) != factory.block_size: + raise ValueError("Incorrect IV length (it must be %d bytes long)" % + factory.block_size) + + if kwargs: + raise TypeError("Unknown parameters for OFB: %s" % str(kwargs)) + + return OfbMode(cipher_state, iv) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ofb.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ofb.pyi new file mode 100644 index 0000000..d28608e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_ofb.pyi @@ -0,0 +1,25 @@ +from typing import Union, overload + +from Cryptodome.Util._raw_api import SmartPointer + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['OfbMode'] + +class OfbMode(object): + block_size: int + iv: Buffer + IV: Buffer + + def __init__(self, + block_cipher: SmartPointer, + iv: Buffer) -> None: ... + @overload + def encrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def encrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + @overload + def decrypt(self, plaintext: Buffer) -> bytes: ... + @overload + def decrypt(self, plaintext: Buffer, output: Union[bytearray, memoryview]) -> None: ... + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_openpgp.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_openpgp.py new file mode 100644 index 0000000..d86ed19 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_openpgp.py @@ -0,0 +1,206 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. 
Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +OpenPGP mode. +""" + +__all__ = ['OpenPgpMode'] + +from Cryptodome.Util.py3compat import _copy_bytes +from Cryptodome.Random import get_random_bytes + +class OpenPgpMode(object): + """OpenPGP mode. + + This mode is a variant of CFB, and it is only used in PGP and + OpenPGP_ applications. If in doubt, use another mode. + + An Initialization Vector (*IV*) is required. + + Unlike CFB, the *encrypted* IV (not the IV itself) is + transmitted to the receiver. + + The IV is a random data block. For legacy reasons, two of its bytes are + duplicated to act as a checksum for the correctness of the key, which is now + known to be insecure and is ignored. The encrypted IV is therefore 2 bytes + longer than the clean IV. + + .. _OpenPGP: http://tools.ietf.org/html/rfc4880 + + :undocumented: __init__ + """ + + def __init__(self, factory, key, iv, cipher_params): + + #: The block size of the underlying cipher, in bytes. + self.block_size = factory.block_size + + self._done_first_block = False # True after the first encryption + + # Instantiate a temporary cipher to process the IV + IV_cipher = factory.new( + key, + factory.MODE_CFB, + IV=b'\x00' * self.block_size, + segment_size=self.block_size * 8, + **cipher_params) + + iv = _copy_bytes(None, None, iv) + + # The cipher will be used for... + if len(iv) == self.block_size: + # ... encryption + self._encrypted_IV = IV_cipher.encrypt(iv + iv[-2:]) + elif len(iv) == self.block_size + 2: + # ... decryption + self._encrypted_IV = iv + # Last two bytes are for a deprecated "quick check" feature that + # should not be used. (https://eprint.iacr.org/2005/033) + iv = IV_cipher.decrypt(iv)[:-2] + else: + raise ValueError("Length of IV must be %d or %d bytes" + " for MODE_OPENPGP" + % (self.block_size, self.block_size + 2)) + + self.iv = self.IV = iv + + # Instantiate the cipher for the real PGP data + self._cipher = factory.new( + key, + factory.MODE_CFB, + IV=self._encrypted_IV[-self.block_size:], + segment_size=self.block_size * 8, + **cipher_params) + + def encrypt(self, plaintext): + """Encrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have encrypted a message + you cannot encrypt (or decrypt) another message using the same + object. 
+ + The data to encrypt can be broken up in two or + more pieces and `encrypt` can be called multiple times. + + That is, the statement: + + >>> c.encrypt(a) + c.encrypt(b) + + is equivalent to: + + >>> c.encrypt(a+b) + + This function does not add any padding to the plaintext. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + + :Return: + the encrypted data, as a byte string. + It is as long as *plaintext* with one exception: + when encrypting the first message chunk, + the encrypted IV is prepended to the returned ciphertext. + """ + + res = self._cipher.encrypt(plaintext) + if not self._done_first_block: + res = self._encrypted_IV + res + self._done_first_block = True + return res + + def decrypt(self, ciphertext): + """Decrypt data with the key and the parameters set at initialization. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. + + The data to decrypt can be broken up in two or + more pieces and `decrypt` can be called multiple times. + + That is, the statement: + + >>> c.decrypt(a) + c.decrypt(b) + + is equivalent to: + + >>> c.decrypt(a+b) + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + + :Return: the decrypted data (byte string). + """ + + return self._cipher.decrypt(ciphertext) + + +def _create_openpgp_cipher(factory, **kwargs): + """Create a new block cipher, configured in OpenPGP mode. + + :Parameters: + factory : module + The underlying block cipher, a module from ``Cryptodome.Cipher``. + + :Keywords: + key : bytes/bytearray/memoryview + The secret key to use in the symmetric cipher. + + IV : bytes/bytearray/memoryview + The initialization vector to use for encryption or decryption. + + For encryption, the IV must be as long as the cipher block size. + + For decryption, it must be 2 bytes longer (it is actually the + *encrypted* IV which was prefixed to the ciphertext). + """ + + iv = kwargs.pop("IV", None) + IV = kwargs.pop("iv", None) + + if (None, None) == (iv, IV): + iv = get_random_bytes(factory.block_size) + if iv is not None: + if IV is not None: + raise TypeError("You must either use 'iv' or 'IV', not both") + else: + iv = IV + + try: + key = kwargs.pop("key") + except KeyError as e: + raise TypeError("Missing component: " + str(e)) + + return OpenPgpMode(factory, key, iv, kwargs) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_openpgp.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_openpgp.pyi new file mode 100644 index 0000000..14b8105 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_openpgp.pyi @@ -0,0 +1,20 @@ +from types import ModuleType +from typing import Union, Dict + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['OpenPgpMode'] + +class OpenPgpMode(object): + block_size: int + iv: Union[bytes, bytearray, memoryview] + IV: Union[bytes, bytearray, memoryview] + + def __init__(self, + factory: ModuleType, + key: Buffer, + iv: Buffer, + cipher_params: Dict) -> None: ... + def encrypt(self, plaintext: Buffer) -> bytes: ... + def decrypt(self, plaintext: Buffer) -> bytes: ...
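A minimal usage sketch for the OFB mode implemented earlier in this hunk, assuming the vendored Cryptodome package is on the import path and AES is the underlying cipher; the key and message values are illustrative only. As the `_create_ofb_cipher` factory above shows, omitting the IV makes the library generate a random one, exposed afterwards as `cipher.iv`:

from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)                    # illustrative AES-128 key
cipher = AES.new(key, AES.MODE_OFB)           # no iv given: a random one is generated
ciphertext = cipher.encrypt(b"attack at dawn")

# Decryption needs the same key and the IV the encryptor used
decrypter = AES.new(key, AES.MODE_OFB, iv=cipher.iv)
assert decrypter.decrypt(ciphertext) == b"attack at dawn"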
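And a sketch of MODE_OPENPGP under the same assumptions. Per the docstrings above, the first encrypt() call prepends the encrypted IV (block size + 2 bytes, so 18 bytes for AES), and passing an 18-byte IV back into the constructor selects the decryption path:

from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)
enc = AES.new(key, AES.MODE_OPENPGP, iv=get_random_bytes(16))
ct = enc.encrypt(b"pgp payload")              # encrypted IV (18 bytes) + ciphertext

eiv, body = ct[:18], ct[18:]                  # split the encrypted IV off
dec = AES.new(key, AES.MODE_OPENPGP, iv=eiv)  # 18-byte IV => decryption side
assert dec.decrypt(body) == b"pgp payload"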
+ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_siv.py b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_siv.py new file mode 100644 index 0000000..4a76ad6 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_siv.py @@ -0,0 +1,392 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Synthetic Initialization Vector (SIV) mode. +""" + +__all__ = ['SivMode'] + +from binascii import hexlify, unhexlify + +from Cryptodome.Util.py3compat import bord, _copy_bytes + +from Cryptodome.Util._raw_api import is_buffer + +from Cryptodome.Util.number import long_to_bytes, bytes_to_long +from Cryptodome.Protocol.KDF import _S2V +from Cryptodome.Hash import BLAKE2s +from Cryptodome.Random import get_random_bytes + + +class SivMode(object): + """Synthetic Initialization Vector (SIV). + + This is an Authenticated Encryption with Associated Data (`AEAD`_) mode. + It provides both confidentiality and authenticity. + + The header of the message may be left in the clear, if needed, and it will + still be subject to authentication. The decryption step tells the receiver + if the message comes from a source that really knows the secret key. + Additionally, decryption detects if any part of the message - including the + header - has been modified or corrupted. + + Unlike other AEAD modes such as CCM, EAX or GCM, accidental reuse of a + nonce is not catastrophic for the confidentiality of the message. The only + effect is that an attacker can tell when the same plaintext (and same + associated data) is protected with the same key. + + The length of the MAC is fixed to the block size of the underlying cipher. + The key size is twice the length of the key of the underlying cipher. + + This mode is only available for AES ciphers.
+ + +--------------------+---------------+-------------------+ + | Cipher | SIV MAC size | SIV key length | + | | (bytes) | (bytes) | + +====================+===============+===================+ + | AES-128 | 16 | 32 | + +--------------------+---------------+-------------------+ + | AES-192 | 16 | 48 | + +--------------------+---------------+-------------------+ + | AES-256 | 16 | 64 | + +--------------------+---------------+-------------------+ + + See `RFC5297`_ and the `original paper`__. + + .. _RFC5297: https://tools.ietf.org/html/rfc5297 + .. _AEAD: http://blog.cryptographyengineering.com/2012/05/how-to-choose-authenticated-encryption.html + .. __: http://www.cs.ucdavis.edu/~rogaway/papers/keywrap.pdf + + :undocumented: __init__ + """ + + def __init__(self, factory, key, nonce, kwargs): + + self.block_size = factory.block_size + """The block size of the underlying cipher, in bytes.""" + + self._factory = factory + + self._cipher_params = kwargs + + if len(key) not in (32, 48, 64): + raise ValueError("Incorrect key length (%d bytes)" % len(key)) + + if nonce is not None: + if not is_buffer(nonce): + raise TypeError("When provided, the nonce must be bytes, bytearray or memoryview") + + if len(nonce) == 0: + raise ValueError("When provided, the nonce must be non-empty") + + self.nonce = _copy_bytes(None, None, nonce) + """Public attribute is only available in case of non-deterministic + encryption.""" + + subkey_size = len(key) // 2 + + self._mac_tag = None # Cache for MAC tag + self._kdf = _S2V(key[:subkey_size], + ciphermod=factory, + cipher_params=self._cipher_params) + self._subkey_cipher = key[subkey_size:] + + # Purely for the purpose of verifying that cipher_params are OK + factory.new(key[:subkey_size], factory.MODE_ECB, **kwargs) + + # Allowed transitions after initialization + self._next = ["update", "encrypt", "decrypt", + "digest", "verify"] + + def _create_ctr_cipher(self, v): + """Create a new CTR cipher from V in SIV mode""" + + v_int = bytes_to_long(v) + q = v_int & 0xFFFFFFFFFFFFFFFF7FFFFFFF7FFFFFFF + return self._factory.new( + self._subkey_cipher, + self._factory.MODE_CTR, + initial_value=q, + nonce=b"", + **self._cipher_params) + + def update(self, component): + """Protect one associated data component + + For SIV, the associated data is a sequence (*vector*) of non-empty + byte strings (*components*). + + This method consumes the next component. It must be called + once for each of the components that constitute the associated data. + + Note that the components have clear boundaries, so that: + + >>> cipher.update(b"builtin") + >>> cipher.update(b"securely") + + is not equivalent to: + + >>> cipher.update(b"built") + >>> cipher.update(b"insecurely") + + If there is no associated data, this method must not be called. + + :Parameters: + component : bytes/bytearray/memoryview + The next associated data component. + """ + + if "update" not in self._next: + raise TypeError("update() can only be called" + " immediately after initialization") + + self._next = ["update", "encrypt", "decrypt", + "digest", "verify"] + + return self._kdf.update(component) + + def encrypt(self, plaintext): + """ + For SIV, encryption and MAC authentication must take place at the same + point. This method shall not be used. + + Use `encrypt_and_digest` instead. + """ + + raise TypeError("encrypt() not allowed for SIV mode." + " Use encrypt_and_digest() instead.") + + def decrypt(self, ciphertext): + """ + For SIV, decryption and verification must take place at the same + point.
This method shall not be used. + + Use `decrypt_and_verify` instead. + """ + + raise TypeError("decrypt() not allowed for SIV mode." + " Use decrypt_and_verify() instead.") + + def digest(self): + """Compute the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method returns the MAC that shall be sent to the receiver, + together with the ciphertext. + + :Return: the MAC, as a byte string. + """ + + if "digest" not in self._next: + raise TypeError("digest() cannot be called when decrypting" + " or validating a message") + self._next = ["digest"] + if self._mac_tag is None: + self._mac_tag = self._kdf.derive() + return self._mac_tag + + def hexdigest(self): + """Compute the *printable* MAC tag. + + This method is like `digest`. + + :Return: the MAC, as a hexadecimal string. + """ + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def verify(self, received_mac_tag): + """Validate the *binary* MAC tag. + + The caller invokes this function at the very end. + + This method checks if the decrypted message is indeed valid + (that is, if the key is correct) and it has not been + tampered with while in transit. + + :Parameters: + received_mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if "verify" not in self._next: + raise TypeError("verify() cannot be called" + " when encrypting a message") + self._next = ["verify"] + + if self._mac_tag is None: + self._mac_tag = self._kdf.derive() + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=self._mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=received_mac_tag) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Validate the *printable* MAC tag. + + This method is like `verify`. + + :Parameters: + hex_mac_tag : string + This is the *printable* MAC, as received from the sender. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + self.verify(unhexlify(hex_mac_tag)) + + def encrypt_and_digest(self, plaintext, output=None): + """Perform encrypt() and digest() in one step. + + :Parameters: + plaintext : bytes/bytearray/memoryview + The piece of data to encrypt. + :Keywords: + output : bytearray/memoryview + The location where the ciphertext must be written to. + If ``None``, the ciphertext is returned. + :Return: + a tuple with two items: + + - the ciphertext, as ``bytes`` + - the MAC tag, as ``bytes`` + + The first item becomes ``None`` when the ``output`` parameter + specified a location for the result. + """ + + if "encrypt" not in self._next: + raise TypeError("encrypt() can only be called after" + " initialization or an update()") + + self._next = ["digest"] + + # Compute V (MAC) + if hasattr(self, 'nonce'): + self._kdf.update(self.nonce) + self._kdf.update(plaintext) + self._mac_tag = self._kdf.derive() + + cipher = self._create_ctr_cipher(self._mac_tag) + + return cipher.encrypt(plaintext, output=output), self._mac_tag + + def decrypt_and_verify(self, ciphertext, mac_tag, output=None): + """Perform decryption and verification in one step. + + A cipher object is stateful: once you have decrypted a message + you cannot decrypt (or encrypt) another message with the same + object. 
+ + You cannot reuse an object for encrypting + or decrypting other data with the same key. + + This function does not remove any padding from the plaintext. + + :Parameters: + ciphertext : bytes/bytearray/memoryview + The piece of data to decrypt. + It can be of any length. + mac_tag : bytes/bytearray/memoryview + This is the *binary* MAC, as received from the sender. + :Keywords: + output : bytearray/memoryview + The location where the plaintext must be written to. + If ``None``, the plaintext is returned. + :Return: the plaintext as ``bytes`` or ``None`` when the ``output`` + parameter specified a location for the result. + :Raises ValueError: + if the MAC does not match. The message has been tampered with + or the key is incorrect. + """ + + if "decrypt" not in self._next: + raise TypeError("decrypt() can only be called" + " after initialization or an update()") + self._next = ["verify"] + + # Take the MAC and start the cipher for decryption + self._cipher = self._create_ctr_cipher(mac_tag) + + plaintext = self._cipher.decrypt(ciphertext, output=output) + + if hasattr(self, 'nonce'): + self._kdf.update(self.nonce) + self._kdf.update(plaintext if output is None else output) + self.verify(mac_tag) + + return plaintext + + +def _create_siv_cipher(factory, **kwargs): + """Create a new block cipher, configured in + Synthetic Initialization Vector (SIV) mode. + + :Parameters: + + factory : object + A symmetric cipher module from `Cryptodome.Cipher` + (like `Cryptodome.Cipher.AES`). + + :Keywords: + + key : bytes/bytearray/memoryview + The secret key to use in the symmetric cipher. + It must be 32, 48 or 64 bytes long. + If AES is the chosen cipher, the variants *AES-128*, + *AES-192* or *AES-256* will be used internally. + + nonce : bytes/bytearray/memoryview + For deterministic encryption, it is not present. + + Otherwise, it is a value that must never be reused + for encrypting messages under this key. + + There are no restrictions on its length, + but it is recommended to use at least 16 bytes. + """ + + try: + key = kwargs.pop("key") + except KeyError as e: + raise TypeError("Missing parameter: " + str(e)) + + nonce = kwargs.pop("nonce", None) + + return SivMode(factory, key, nonce, kwargs) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_siv.pyi b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_siv.pyi new file mode 100644 index 0000000..2934f23 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_mode_siv.pyi @@ -0,0 +1,38 @@ +from types import ModuleType +from typing import Union, Tuple, Dict, Optional, overload + +Buffer = Union[bytes, bytearray, memoryview] + +__all__ = ['SivMode'] + +class SivMode(object): + block_size: int + nonce: bytes + + def __init__(self, + factory: ModuleType, + key: Buffer, + nonce: Buffer, + kwargs: Dict) -> None: ... + + def update(self, component: Buffer) -> SivMode: ... + + def encrypt(self, plaintext: Buffer) -> bytes: ... + def decrypt(self, plaintext: Buffer) -> bytes: ... + + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, received_mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + + @overload + def encrypt_and_digest(self, + plaintext: Buffer) -> Tuple[bytes, bytes]: ... + @overload + def encrypt_and_digest(self, + plaintext: Buffer, + output: Buffer) -> Tuple[None, bytes]: ... + def decrypt_and_verify(self, + ciphertext: Buffer, + received_mac_tag: Buffer, + output: Optional[Union[bytearray, memoryview]] = ...)
-> bytes: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_pkcs1_decode.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_pkcs1_decode.abi3.so new file mode 100755 index 0000000..2f8b59f Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_pkcs1_decode.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_aes.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_aes.abi3.so new file mode 100755 index 0000000..b37dd95 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_aes.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_aesni.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_aesni.abi3.so new file mode 100755 index 0000000..5f08fe7 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_aesni.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_arc2.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_arc2.abi3.so new file mode 100755 index 0000000..2287d2e Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_arc2.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_blowfish.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_blowfish.abi3.so new file mode 100755 index 0000000..ad77ccb Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_blowfish.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cast.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cast.abi3.so new file mode 100755 index 0000000..730e178 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cast.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cbc.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cbc.abi3.so new file mode 100755 index 0000000..847d824 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cbc.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cfb.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cfb.abi3.so new file mode 100755 index 0000000..2c9b852 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_cfb.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ctr.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ctr.abi3.so new file mode 100755 index 0000000..761cd36 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ctr.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_des.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_des.abi3.so new file mode 100755 index 0000000..7f1f824 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_des.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_des3.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_des3.abi3.so new file mode 100755 index 0000000..b475c52 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_des3.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ecb.abi3.so 
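To close out the SIV material above: a short sketch of the intended call pattern, assuming AES with a 32-byte key (AES-128 internally, per the table above). Since plain encrypt()/decrypt() deliberately raise TypeError in this mode, the combined methods are used; names and data are illustrative only:

from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(32)                    # double-length SIV key
nonce = get_random_bytes(16)                  # omit the nonce for deterministic SIV

enc = AES.new(key, AES.MODE_SIV, nonce=nonce)
enc.update(b"header")                         # authenticated but not encrypted
ciphertext, tag = enc.encrypt_and_digest(b"secret payload")

dec = AES.new(key, AES.MODE_SIV, nonce=nonce)
dec.update(b"header")
plaintext = dec.decrypt_and_verify(ciphertext, tag)   # ValueError on tampering
assert plaintext == b"secret payload"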
b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ecb.abi3.so new file mode 100755 index 0000000..91e8126 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ecb.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_eksblowfish.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_eksblowfish.abi3.so new file mode 100755 index 0000000..c3c45d5 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_eksblowfish.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ocb.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ocb.abi3.so new file mode 100755 index 0000000..9685971 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ocb.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ofb.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ofb.abi3.so new file mode 100755 index 0000000..a4a629a Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Cipher/_raw_ofb.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2b.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2b.py new file mode 100644 index 0000000..85da887 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2b.py @@ -0,0 +1,247 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +from binascii import unhexlify + +from Cryptodome.Util.py3compat import bord, tobytes + +from Cryptodome.Random import get_random_bytes +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_blake2b_lib = load_pycryptodome_raw_lib("Cryptodome.Hash._BLAKE2b", + """ + int blake2b_init(void **state, + const uint8_t *key, + size_t key_size, + size_t digest_size); + int blake2b_destroy(void *state); + int blake2b_update(void *state, + const uint8_t *buf, + size_t len); + int blake2b_digest(const void *state, + uint8_t digest[64]); + int blake2b_copy(const void *src, void *dst); + """) + + +class BLAKE2b_Hash(object): + """A BLAKE2b hash object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The internal block size of the hash algorithm in bytes. + block_size = 64 + + def __init__(self, data, key, digest_bytes, update_after_digest): + + # The size of the resulting hash in bytes. + self.digest_size = digest_bytes + + self._update_after_digest = update_after_digest + self._digest_done = False + + # See https://tools.ietf.org/html/rfc7693 + if digest_bytes in (20, 32, 48, 64) and not key: + self.oid = "1.3.6.1.4.1.1722.12.2.1." + str(digest_bytes) + + state = VoidPointer() + result = _raw_blake2b_lib.blake2b_init(state.address_of(), + c_uint8_ptr(key), + c_size_t(len(key)), + c_size_t(digest_bytes) + ) + if result: + raise ValueError("Error %d while instantiating BLAKE2b" % result) + self._state = SmartPointer(state.get(), + _raw_blake2b_lib.blake2b_destroy) + if data: + self.update(data) + + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (bytes/bytearray/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_blake2b_lib.blake2b_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing BLAKE2b data" % result) + return self + + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(64) + result = _raw_blake2b_lib.blake2b_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while creating BLAKE2b digest" % result) + + self._digest_done = True + + return get_raw_buffer(bfr)[:self.digest_size] + + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in tuple(self.digest())]) + + + def verify(self, mac_tag): + """Verify that a given **binary** MAC (computed by another party) + is valid. + + Args: + mac_tag (bytes/bytearray/memoryview): the expected MAC of the message. 
+ + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + secret = get_random_bytes(16) + + mac1 = new(digest_bits=160, key=secret, data=mac_tag) + mac2 = new(digest_bits=160, key=secret, data=self.digest()) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + + def hexverify(self, hex_mac_tag): + """Verify that a given **printable** MAC (computed by another party) + is valid. + + Args: + hex_mac_tag (string): the expected MAC of the message, as a hexadecimal string. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + self.verify(unhexlify(tobytes(hex_mac_tag))) + + + def new(self, **kwargs): + """Return a new instance of a BLAKE2b hash object. + See :func:`new`. + """ + + if "digest_bytes" not in kwargs and "digest_bits" not in kwargs: + kwargs["digest_bytes"] = self.digest_size + + return new(**kwargs) + + +def new(**kwargs): + """Create a new hash object. + + Args: + data (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`BLAKE2b_Hash.update`. + digest_bytes (integer): + Optional. The size of the digest, in bytes (1 to 64). Default is 64. + digest_bits (integer): + Optional and alternative to ``digest_bytes``. + The size of the digest, in bits (8 to 512, in steps of 8). + Default is 512. + key (bytes/bytearray/memoryview): + Optional. The key to use to compute the MAC (1 to 64 bytes). + If not specified, no key will be used. + update_after_digest (boolean): + Optional. By default, a hash object cannot be updated anymore after + the digest is computed. When this flag is ``True``, such check + is no longer enforced. + + Returns: + A :class:`BLAKE2b_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + + digest_bytes = kwargs.pop("digest_bytes", None) + digest_bits = kwargs.pop("digest_bits", None) + if None not in (digest_bytes, digest_bits): + raise TypeError("Only one digest parameter must be provided") + if (None, None) == (digest_bytes, digest_bits): + digest_bytes = 64 + if digest_bytes is not None: + if not (1 <= digest_bytes <= 64): + raise ValueError("'digest_bytes' not in range 1..64") + else: + if not (8 <= digest_bits <= 512) or (digest_bits % 8): + raise ValueError("'digest_bits' not in range 8..512, " + "with steps of 8") + digest_bytes = digest_bits // 8 + + key = kwargs.pop("key", b"") + if len(key) > 64: + raise ValueError("BLAKE2b key cannot exceed 64 bytes") + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return BLAKE2b_Hash(data, key, digest_bytes, update_after_digest) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2b.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2b.pyi new file mode 100644 index 0000000..d37c374 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2b.pyi @@ -0,0 +1,32 @@ +from typing import Any, Union +from types import ModuleType + +Buffer = Union[bytes, bytearray, memoryview] + +class BLAKE2b_Hash(object): + block_size: int + digest_size: int + oid: str + + def __init__(self, + data: Buffer, + key: Buffer, + digest_bytes: int, + update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> BLAKE2b_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ...
+ def verify(self, mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + def new(self, + data: Buffer = ..., + digest_bytes: int = ..., + digest_bits: int = ..., + key: Buffer = ..., + update_after_digest: bool = ...) -> BLAKE2b_Hash: ... + +def new(data: Buffer = ..., + digest_bytes: int = ..., + digest_bits: int = ..., + key: Buffer = ..., + update_after_digest: bool = ...) -> BLAKE2b_Hash: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2s.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2s.py new file mode 100644 index 0000000..43be5c4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2s.py @@ -0,0 +1,247 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from binascii import unhexlify + +from Cryptodome.Util.py3compat import bord, tobytes + +from Cryptodome.Random import get_random_bytes +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_blake2s_lib = load_pycryptodome_raw_lib("Cryptodome.Hash._BLAKE2s", + """ + int blake2s_init(void **state, + const uint8_t *key, + size_t key_size, + size_t digest_size); + int blake2s_destroy(void *state); + int blake2s_update(void *state, + const uint8_t *buf, + size_t len); + int blake2s_digest(const void *state, + uint8_t digest[32]); + int blake2s_copy(const void *src, void *dst); + """) + + +class BLAKE2s_Hash(object): + """A BLAKE2s hash object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The internal block size of the hash algorithm in bytes. 
+ block_size = 32 + + def __init__(self, data, key, digest_bytes, update_after_digest): + + # The size of the resulting hash in bytes. + self.digest_size = digest_bytes + + self._update_after_digest = update_after_digest + self._digest_done = False + + # See https://tools.ietf.org/html/rfc7693 + if digest_bytes in (16, 20, 28, 32) and not key: + self.oid = "1.3.6.1.4.1.1722.12.2.2." + str(digest_bytes) + + state = VoidPointer() + result = _raw_blake2s_lib.blake2s_init(state.address_of(), + c_uint8_ptr(key), + c_size_t(len(key)), + c_size_t(digest_bytes) + ) + if result: + raise ValueError("Error %d while instantiating BLAKE2s" % result) + self._state = SmartPointer(state.get(), + _raw_blake2s_lib.blake2s_destroy) + if data: + self.update(data) + + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_blake2s_lib.blake2s_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing BLAKE2s data" % result) + return self + + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(32) + result = _raw_blake2s_lib.blake2s_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while creating BLAKE2s digest" % result) + + self._digest_done = True + + return get_raw_buffer(bfr)[:self.digest_size] + + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in tuple(self.digest())]) + + + def verify(self, mac_tag): + """Verify that a given **binary** MAC (computed by another party) + is valid. + + Args: + mac_tag (byte string/byte array/memoryview): the expected MAC of the message. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + secret = get_random_bytes(16) + + mac1 = new(digest_bits=160, key=secret, data=mac_tag) + mac2 = new(digest_bits=160, key=secret, data=self.digest()) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + + def hexverify(self, hex_mac_tag): + """Verify that a given **printable** MAC (computed by another party) + is valid. + + Args: + hex_mac_tag (string): the expected MAC of the message, as a hexadecimal string. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + self.verify(unhexlify(tobytes(hex_mac_tag))) + + + def new(self, **kwargs): + """Return a new instance of a BLAKE2s hash object. + See :func:`new`. + """ + + if "digest_bytes" not in kwargs and "digest_bits" not in kwargs: + kwargs["digest_bytes"] = self.digest_size + + return new(**kwargs) + + +def new(**kwargs): + """Create a new hash object. + + Args: + data (byte string/byte array/memoryview): + Optional. The very first chunk of the message to hash. 
+ It is equivalent to an early call to :meth:`BLAKE2s_Hash.update`. + digest_bytes (integer): + Optional. The size of the digest, in bytes (1 to 32). Default is 32. + digest_bits (integer): + Optional and alternative to ``digest_bytes``. + The size of the digest, in bits (8 to 256, in steps of 8). + Default is 256. + key (byte string): + Optional. The key to use to compute the MAC (1 to 32 bytes). + If not specified, no key will be used. + update_after_digest (boolean): + Optional. By default, a hash object cannot be updated anymore after + the digest is computed. When this flag is ``True``, such check + is no longer enforced. + + Returns: + A :class:`BLAKE2s_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + + digest_bytes = kwargs.pop("digest_bytes", None) + digest_bits = kwargs.pop("digest_bits", None) + if None not in (digest_bytes, digest_bits): + raise TypeError("Only one digest parameter must be provided") + if (None, None) == (digest_bytes, digest_bits): + digest_bytes = 32 + if digest_bytes is not None: + if not (1 <= digest_bytes <= 32): + raise ValueError("'digest_bytes' not in range 1..32") + else: + if not (8 <= digest_bits <= 256) or (digest_bits % 8): + raise ValueError("'digest_bits' not in range 8..256, " + "with steps of 8") + digest_bytes = digest_bits // 8 + + key = kwargs.pop("key", b"") + if len(key) > 32: + raise ValueError("BLAKE2s key cannot exceed 32 bytes") + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return BLAKE2s_Hash(data, key, digest_bytes, update_after_digest) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2s.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2s.pyi new file mode 100644 index 0000000..374b3a4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/BLAKE2s.pyi @@ -0,0 +1,26 @@ +from typing import Any, Union + +Buffer = Union[bytes, bytearray, memoryview] + +class BLAKE2s_Hash(object): + block_size: int + digest_size: int + oid: str + + def __init__(self, + data: Buffer, + key: Buffer, + digest_bytes: int, + update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> BLAKE2s_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + def new(self, **kwargs: Any) -> BLAKE2s_Hash: ... + +def new(data: Buffer = ..., + digest_bytes: int = ..., + digest_bits: int = ..., + key: Buffer = ..., + update_after_digest: bool = ...) -> BLAKE2s_Hash: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/CMAC.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/CMAC.py new file mode 100644 index 0000000..e831700 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/CMAC.py @@ -0,0 +1,302 @@ +# -*- coding: utf-8 -*- +# +# Hash/CMAC.py - Implements the CMAC algorithm +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved.
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from binascii import unhexlify + +from Cryptodome.Hash import BLAKE2s +from Cryptodome.Util.strxor import strxor +from Cryptodome.Util.number import long_to_bytes, bytes_to_long +from Cryptodome.Util.py3compat import bord, tobytes, _copy_bytes +from Cryptodome.Random import get_random_bytes + + +# The size of the authentication tag produced by the MAC. +digest_size = None + + +def _shift_bytes(bs, xor_lsb=0): + num = (bytes_to_long(bs) << 1) ^ xor_lsb + return long_to_bytes(num, len(bs))[-len(bs):] + + +class CMAC(object): + """A CMAC hash object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar digest_size: the size in bytes of the resulting MAC tag + :vartype digest_size: integer + """ + + digest_size = None + + def __init__(self, key, msg, ciphermod, cipher_params, mac_len, + update_after_digest): + + self.digest_size = mac_len + + self._key = _copy_bytes(None, None, key) + self._factory = ciphermod + self._cipher_params = cipher_params + self._block_size = bs = ciphermod.block_size + self._mac_tag = None + self._update_after_digest = update_after_digest + + # Section 5.3 of NIST SP 800 38B and Appendix B + if bs == 8: + const_Rb = 0x1B + self._max_size = 8 * (2 ** 21) + elif bs == 16: + const_Rb = 0x87 + self._max_size = 16 * (2 ** 48) + else: + raise TypeError("CMAC requires a cipher with a block size" + " of 8 or 16 bytes, not %d" % bs) + + # Compute sub-keys + zero_block = b'\x00' * bs + self._ecb = ciphermod.new(key, + ciphermod.MODE_ECB, + **self._cipher_params) + L = self._ecb.encrypt(zero_block) + if bord(L[0]) & 0x80: + self._k1 = _shift_bytes(L, const_Rb) + else: + self._k1 = _shift_bytes(L) + if bord(self._k1[0]) & 0x80: + self._k2 = _shift_bytes(self._k1, const_Rb) + else: + self._k2 = _shift_bytes(self._k1) + + # Initialize CBC cipher with zero IV + self._cbc = ciphermod.new(key, + ciphermod.MODE_CBC, + zero_block, + **self._cipher_params) + + # Cache for outstanding data to authenticate + self._cache = bytearray(bs) + self._cache_n = 0 + + # Last piece of ciphertext produced + self._last_ct = zero_block + + # Last block that was encrypted with AES + self._last_pt = None + + # Counter for total message size + self._data_size = 0 + + if msg: + self.update(msg) + + def update(self, msg): + """Authenticate the next chunk of message. 
+ + Args: + msg (byte string/byte array/memoryview): The next chunk of data + """ + + if self._mac_tag is not None and not self._update_after_digest: + raise TypeError("update() cannot be called after digest() or verify()") + + self._data_size += len(msg) + bs = self._block_size + + if self._cache_n > 0: + filler = min(bs - self._cache_n, len(msg)) + self._cache[self._cache_n:self._cache_n+filler] = msg[:filler] + self._cache_n += filler + + if self._cache_n < bs: + return self + + msg = memoryview(msg)[filler:] + self._update(self._cache) + self._cache_n = 0 + + remain = len(msg) % bs + if remain > 0: + self._update(msg[:-remain]) + self._cache[:remain] = msg[-remain:] + else: + self._update(msg) + self._cache_n = remain + return self + + def _update(self, data_block): + """Update a block aligned to the block boundary""" + + bs = self._block_size + assert len(data_block) % bs == 0 + + if len(data_block) == 0: + return + + ct = self._cbc.encrypt(data_block) + if len(data_block) == bs: + second_last = self._last_ct + else: + second_last = ct[-bs*2:-bs] + self._last_ct = ct[-bs:] + self._last_pt = strxor(second_last, data_block[-bs:]) + + def copy(self): + """Return a copy ("clone") of the CMAC object. + + The copy will have the same internal state as the original CMAC + object. + This can be used to efficiently compute the MAC tag of byte + strings that share a common initial substring. + + :return: An :class:`CMAC` + """ + + obj = self.__new__(CMAC) + obj.__dict__ = self.__dict__.copy() + obj._cbc = self._factory.new(self._key, + self._factory.MODE_CBC, + self._last_ct, + **self._cipher_params) + obj._cache = self._cache[:] + obj._last_ct = self._last_ct[:] + return obj + + def digest(self): + """Return the **binary** (non-printable) MAC tag of the message + that has been authenticated so far. + + :return: The MAC tag, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bs = self._block_size + + if self._mac_tag is not None and not self._update_after_digest: + return self._mac_tag + + if self._data_size > self._max_size: + raise ValueError("MAC is unsafe for this message") + + if self._cache_n == 0 and self._data_size > 0: + # Last block was full + pt = strxor(self._last_pt, self._k1) + else: + # Last block is partial (or message length is zero) + partial = self._cache[:] + partial[self._cache_n:] = b'\x80' + b'\x00' * (bs - self._cache_n - 1) + pt = strxor(strxor(self._last_ct, partial), self._k2) + + self._mac_tag = self._ecb.encrypt(pt)[:self.digest_size] + + return self._mac_tag + + def hexdigest(self): + """Return the **printable** MAC tag of the message authenticated so far. + + :return: The MAC tag, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) + for x in tuple(self.digest())]) + + def verify(self, mac_tag): + """Verify that a given **binary** MAC (computed by another party) + is valid. + + Args: + mac_tag (byte string/byte array/memoryview): the expected MAC of the message. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=self.digest()) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Verify that a given **printable** MAC (computed by another party) + is valid.
+ + Args: + hex_mac_tag (string): the expected MAC of the message, as a hexadecimal string. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + self.verify(unhexlify(tobytes(hex_mac_tag))) + + +def new(key, msg=None, ciphermod=None, cipher_params=None, mac_len=None, + update_after_digest=False): + """Create a new MAC object. + + Args: + key (byte string/byte array/memoryview): + key for the CMAC object. + The key must be valid for the underlying cipher algorithm. + For instance, it must be 16 bytes long for AES-128. + ciphermod (module): + A cipher module from :mod:`Cryptodome.Cipher`. + The cipher's block size has to be 128 bits, + like :mod:`Cryptodome.Cipher.AES`, to reduce the probability + of collisions. + msg (byte string/byte array/memoryview): + Optional. The very first chunk of the message to authenticate. + It is equivalent to an early call to `CMAC.update`. + cipher_params (dict): + Optional. A set of parameters to use when instantiating a cipher + object. + mac_len (integer): + Length of the MAC, in bytes. + It must be at least 4 bytes long. + The default (and recommended) length matches the size of a cipher block. + update_after_digest (boolean): + Optional. By default, a hash object cannot be updated anymore after + the digest is computed. When this flag is ``True``, such check + is no longer enforced. + Returns: + A :class:`CMAC` object + """ + + if ciphermod is None: + raise TypeError("ciphermod must be specified (try AES)") + + cipher_params = {} if cipher_params is None else dict(cipher_params) + + if mac_len is None: + mac_len = ciphermod.block_size + + if mac_len < 4: + raise ValueError("MAC tag length must be at least 4 bytes long") + + if mac_len > ciphermod.block_size: + raise ValueError("MAC tag length cannot be larger than a cipher block (%d) bytes" % ciphermod.block_size) + + return CMAC(key, msg, ciphermod, cipher_params, mac_len, + update_after_digest) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/CMAC.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/CMAC.pyi new file mode 100644 index 0000000..acdf055 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/CMAC.pyi @@ -0,0 +1,30 @@ +from types import ModuleType +from typing import Union, Dict, Any + +Buffer = Union[bytes, bytearray, memoryview] + +digest_size: int + +class CMAC(object): + digest_size: int + + def __init__(self, + key: Buffer, + msg: Buffer, + ciphermod: ModuleType, + cipher_params: Dict[str, Any], + mac_len: int, update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> CMAC: ... + def copy(self) -> CMAC: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + + +def new(key: Buffer, + msg: Buffer = ..., + ciphermod: ModuleType = ..., + cipher_params: Dict[str, Any] = ..., + mac_len: int = ..., + update_after_digest: bool = ...) -> CMAC: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/HMAC.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/HMAC.py new file mode 100644 index 0000000..165dd83 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/HMAC.py @@ -0,0 +1,213 @@ +# +# HMAC.py - Implements the HMAC algorithm as described by RFC 2104. +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved.
+# + +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord, tobytes + +from binascii import unhexlify + +from Cryptodome.Hash import MD5 +from Cryptodome.Hash import BLAKE2s +from Cryptodome.Util.strxor import strxor +from Cryptodome.Random import get_random_bytes + +__all__ = ['new', 'HMAC'] + + +class HMAC(object): + """An HMAC hash object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar digest_size: the size in bytes of the resulting MAC tag + :vartype digest_size: integer + """ + + def __init__(self, key, msg=b"", digestmod=None): + + if digestmod is None: + digestmod = MD5 + + if msg is None: + msg = b"" + + # Size of the MAC tag + self.digest_size = digestmod.digest_size + + self._digestmod = digestmod + + if isinstance(key, memoryview): + key = key.tobytes() + + try: + if len(key) <= digestmod.block_size: + # Step 1 or 2 + key_0 = key + b"\x00" * (digestmod.block_size - len(key)) + else: + # Step 3 + hash_k = digestmod.new(key).digest() + key_0 = hash_k + b"\x00" * (digestmod.block_size - len(hash_k)) + except AttributeError: + # Not all hash types have "block_size" + raise ValueError("Hash type incompatible to HMAC") + + # Step 4 + key_0_ipad = strxor(key_0, b"\x36" * len(key_0)) + + # Start step 5 and 6 + self._inner = digestmod.new(key_0_ipad) + self._inner.update(msg) + + # Step 7 + key_0_opad = strxor(key_0, b"\x5c" * len(key_0)) + + # Start step 8 and 9 + self._outer = digestmod.new(key_0_opad) + + def update(self, msg): + """Authenticate the next chunk of message. + + Args: + msg (byte string/byte array/memoryview): The next chunk of data + """ + + self._inner.update(msg) + return self + + def _pbkdf2_hmac_assist(self, first_digest, iterations): + """Carry out the expensive inner loop for PBKDF2-HMAC""" + + result = self._digestmod._pbkdf2_hmac_assist( + self._inner, + self._outer, + first_digest, + iterations) + return result + + def copy(self): + """Return a copy ("clone") of the HMAC object. + + The copy will have the same internal state as the original HMAC + object. + This can be used to efficiently compute the MAC tag of byte + strings that share a common initial substring.
+ + :return: An :class:`HMAC` + """ + + new_hmac = HMAC(b"fake key", digestmod=self._digestmod) + + # Synchronize the state + new_hmac._inner = self._inner.copy() + new_hmac._outer = self._outer.copy() + + return new_hmac + + def digest(self): + """Return the **binary** (non-printable) MAC tag of the message + authenticated so far. + + :return: The MAC tag digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + frozen_outer_hash = self._outer.copy() + frozen_outer_hash.update(self._inner.digest()) + return frozen_outer_hash.digest() + + def verify(self, mac_tag): + """Verify that a given **binary** MAC (computed by another party) + is valid. + + Args: + mac_tag (byte string/byte array/memoryview): the expected MAC of the message. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=self.digest()) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexdigest(self): + """Return the **printable** MAC tag of the message authenticated so far. + + :return: The MAC tag, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) + for x in tuple(self.digest())]) + + def hexverify(self, hex_mac_tag): + """Verify that a given **printable** MAC (computed by another party) + is valid. + + Args: + hex_mac_tag (string): the expected MAC of the message, + as a hexadecimal string. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + self.verify(unhexlify(tobytes(hex_mac_tag))) + + +def new(key, msg=b"", digestmod=None): + """Create a new MAC object. + + Args: + key (bytes/bytearray/memoryview): + key for the MAC object. + It must be long enough to match the expected security level of the + MAC. + msg (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to authenticate. + It is equivalent to an early call to :meth:`HMAC.update`. + digestmod (module): + The hash to use to implement the HMAC. + Default is :mod:`Cryptodome.Hash.MD5`. + + Returns: + An :class:`HMAC` object + """ + + return HMAC(key, msg, digestmod) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/HMAC.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/HMAC.pyi new file mode 100644 index 0000000..b577230 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/HMAC.pyi @@ -0,0 +1,25 @@ +from types import ModuleType +from typing import Union, Dict + +Buffer = Union[bytes, bytearray, memoryview] + +digest_size: int + +class HMAC(object): + digest_size: int + + def __init__(self, + key: Buffer, + msg: Buffer, + digestmod: ModuleType) -> None: ... + def update(self, msg: Buffer) -> HMAC: ... + def copy(self) -> HMAC: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + + +def new(key: Buffer, + msg: Buffer = ..., + digestmod: ModuleType = ...) -> HMAC: ...
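A minimal usage sketch for the HMAC module added above (illustrative only, not part of the diff; the key is a placeholder and SHA256 is chosen simply because Cryptodome.Hash ships it):

from Cryptodome.Hash import HMAC, SHA256

secret = b"Sixteen byte key"              # demo key; use a randomly generated key in practice
mac = HMAC.new(secret, digestmod=SHA256)  # the default digestmod is MD5, so pass one explicitly
mac.update(b"Hello, ")
mac.update(b"world")                      # equivalent to a single update(b"Hello, world")
tag = mac.hexdigest()

# Receiver side: recompute the MAC over the same message and compare with
# hexverify(), which raises ValueError on mismatch rather than returning False.
HMAC.new(secret, b"Hello, world", digestmod=SHA256).hexverify(tag)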
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC128.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC128.py new file mode 100644 index 0000000..afd91c4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC128.py @@ -0,0 +1,179 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from binascii import unhexlify + +from Cryptodome.Util.py3compat import bord, tobytes, is_bytes +from Cryptodome.Random import get_random_bytes + +from . import cSHAKE128, SHA3_256 +from .cSHAKE128 import _bytepad, _encode_str, _right_encode + + +class KMAC_Hash(object): + """A KMAC hash object. + Do not instantiate directly. + Use the :func:`new` function. + """ + + def __init__(self, data, key, mac_len, custom, + oid_variant, cshake, rate): + + # See https://tools.ietf.org/html/rfc8702 + self.oid = "2.16.840.1.101.3.4.2." + oid_variant + self.digest_size = mac_len + + self._mac = None + + partial_newX = _bytepad(_encode_str(tobytes(key)), rate) + self._cshake = cshake._new(partial_newX, custom, b"KMAC") + + if data: + self._cshake.update(data) + + def update(self, data): + """Authenticate the next chunk of message. + + Args: + data (bytes/bytearray/memoryview): The next chunk of the message to + authenticate. + """ + + if self._mac: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + self._cshake.update(data) + return self + + def digest(self): + """Return the **binary** (non-printable) MAC tag of the message. + + :return: The MAC tag. Binary form. + :rtype: byte string + """ + + if not self._mac: + self._cshake.update(_right_encode(self.digest_size * 8)) + self._mac = self._cshake.read(self.digest_size) + + return self._mac + + def hexdigest(self): + """Return the **printable** MAC tag of the message. + + :return: The MAC tag. Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in tuple(self.digest())]) + + def verify(self, mac_tag): + """Verify that a given **binary** MAC (computed by another party) + is valid. 
+ + Args: + mac_tag (bytes/bytearray/memoryview): the expected MAC of the message. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + secret = get_random_bytes(16) + + mac1 = SHA3_256.new(secret + mac_tag) + mac2 = SHA3_256.new(secret + self.digest()) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Verify that a given **printable** MAC (computed by another party) + is valid. + + Args: + hex_mac_tag (string): the expected MAC of the message, as a hexadecimal string. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + self.verify(unhexlify(tobytes(hex_mac_tag))) + + def new(self, **kwargs): + """Return a new instance of a KMAC hash object. + See :func:`new`. + """ + + if "mac_len" not in kwargs: + kwargs["mac_len"] = self.digest_size + + return new(**kwargs) + + +def new(**kwargs): + """Create a new KMAC128 object. + + Args: + key (bytes/bytearray/memoryview): + The key to use to compute the MAC. + It must be at least 128 bits long (16 bytes). + data (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to authenticate. + It is equivalent to an early call to :meth:`KMAC_Hash.update`. + mac_len (integer): + Optional. The size of the authentication tag, in bytes. + Default is 64. Minimum is 8. + custom (bytes/bytearray/memoryview): + Optional. A customization byte string (``S`` in SP 800-185). + + Returns: + A :class:`KMAC_Hash` hash object + """ + + key = kwargs.pop("key", None) + if not is_bytes(key): + raise TypeError("You must pass a key to KMAC128") + if len(key) < 16: + raise ValueError("The key must be at least 128 bits long (16 bytes)") + + data = kwargs.pop("data", None) + + mac_len = kwargs.pop("mac_len", 64) + if mac_len < 8: + raise ValueError("'mac_len' must be 8 bytes or more") + + custom = kwargs.pop("custom", b"") + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return KMAC_Hash(data, key, mac_len, custom, "19", cSHAKE128, 168) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC128.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC128.pyi new file mode 100644 index 0000000..8947dab --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC128.pyi @@ -0,0 +1,33 @@ +from typing import Union +from types import ModuleType + +Buffer = Union[bytes, bytearray, memoryview] + +class KMAC_Hash(object): + + def __init__(self, + data: Buffer, + key: Buffer, + mac_len: int, + custom: Buffer, + oid_variant: str, + cshake: ModuleType, + rate: int) -> None: ... + + def update(self, data: Buffer) -> KMAC_Hash: ... + + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + def new(self, + data: Buffer = ..., + mac_len: int = ..., + key: Buffer = ..., + custom: Buffer = ...) -> KMAC_Hash: ... + + +def new(key: Buffer, + data: Buffer = ..., + mac_len: int = ..., + custom: Buffer = ...) -> KMAC_Hash: ... 
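A short usage sketch for KMAC128 (illustrative, not part of the diff): since new() above consumes **kwargs, every parameter must be passed by keyword; the key and customization string are arbitrary examples.

from Cryptodome.Hash import KMAC128

key = b"0123456789abcdef"       # KMAC128 requires a key of at least 16 bytes
mac = KMAC128.new(key=key, mac_len=16, custom=b"email-v1")
mac.update(b"some message")
tag = mac.digest()              # after digest(), further update() calls raise TypeError

# Verification rebuilds the MAC with the same key/custom pair and uses the
# timing-safe verify(), which raises ValueError on mismatch.
KMAC128.new(key=key, mac_len=16, custom=b"email-v1",
            data=b"some message").verify(tag)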
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC256.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC256.py new file mode 100644 index 0000000..82da062 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC256.py @@ -0,0 +1,74 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util.py3compat import is_bytes + +from .KMAC128 import KMAC_Hash +from . import cSHAKE256 + + +def new(**kwargs): + """Create a new KMAC256 object. + + Args: + key (bytes/bytearray/memoryview): + The key to use to compute the MAC. + It must be at least 256 bits long (32 bytes). + data (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to authenticate. + It is equivalent to an early call to :meth:`KMAC_Hash.update`. + mac_len (integer): + Optional. The size of the authentication tag, in bytes. + Default is 64. Minimum is 8. + custom (bytes/bytearray/memoryview): + Optional. A customization byte string (``S`` in SP 800-185). 
+ + Returns: + A :class:`KMAC_Hash` hash object + """ + + key = kwargs.pop("key", None) + if not is_bytes(key): + raise TypeError("You must pass a key to KMAC256") + if len(key) < 32: + raise ValueError("The key must be at least 256 bits long (32 bytes)") + + data = kwargs.pop("data", None) + + mac_len = kwargs.pop("mac_len", 64) + if mac_len < 8: + raise ValueError("'mac_len' must be 8 bytes or more") + + custom = kwargs.pop("custom", b"") + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return KMAC_Hash(data, key, mac_len, custom, "20", cSHAKE256, 136) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC256.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC256.pyi new file mode 100644 index 0000000..86cc500 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/KMAC256.pyi @@ -0,0 +1,10 @@ +from typing import Union + +from .KMAC128 import KMAC_Hash + +Buffer = Union[bytes, bytearray, memoryview] + +def new(key: Buffer, + data: Buffer = ..., + mac_len: int = ..., + custom: Buffer = ...) -> KMAC_Hash: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/KangarooTwelve.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/KangarooTwelve.py new file mode 100644 index 0000000..44d935f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/KangarooTwelve.py @@ -0,0 +1,262 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util._raw_api import (VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Cryptodome.Util.number import long_to_bytes +from Cryptodome.Util.py3compat import bchr + +from .keccak import _raw_keccak_lib + + +def _length_encode(x): + if x == 0: + return b'\x00' + + S = long_to_bytes(x) + return S + bchr(len(S)) + + +# Possible states for a KangarooTwelve instance, which depend on the amount of data processed so far. +SHORT_MSG = 1 # Still within the first 8192 bytes, but it is not certain we will exceed them. 
+LONG_MSG_S0 = 2 # Still within the first 8192 bytes, and it is certain we will exceed them. +LONG_MSG_SX = 3 # Beyond the first 8192 bytes. +SQUEEZING = 4 # No more data to process. + + +class K12_XOF(object): + """A KangarooTwelve hash object. + Do not instantiate directly. + Use the :func:`new` function. + """ + + def __init__(self, data, custom): + + if custom == None: + custom = b'' + + self._custom = custom + _length_encode(len(custom)) + self._state = SHORT_MSG + self._padding = None # Final padding is only decided in read() + + # Internal hash that consumes FinalNode + self._hash1 = self._create_keccak() + self._length1 = 0 + + # Internal hash that produces CV_i (reset each time) + self._hash2 = None + self._length2 = 0 + + # Incremented by one for each 8192-byte block + self._ctr = 0 + + if data: + self.update(data) + + def _create_keccak(self): + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(32), # 32 bytes of capacity (256 bits) + c_ubyte(12)) # Reduced number of rounds + if result: + raise ValueError("Error %d while instantiating KangarooTwelve" + % result) + return SmartPointer(state.get(), _raw_keccak_lib.keccak_destroy) + + def _update(self, data, hash_obj): + result = _raw_keccak_lib.keccak_absorb(hash_obj.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating KangarooTwelve state" + % result) + + def _squeeze(self, hash_obj, length, padding): + bfr = create_string_buffer(length) + result = _raw_keccak_lib.keccak_squeeze(hash_obj.get(), + bfr, + c_size_t(length), + c_ubyte(padding)) + if result: + raise ValueError("Error %d while extracting from KangarooTwelve" + % result) + + return get_raw_buffer(bfr) + + def _reset(self, hash_obj): + result = _raw_keccak_lib.keccak_reset(hash_obj.get()) + if result: + raise ValueError("Error %d while resetting KangarooTwelve state" + % result) + + def update(self, data): + """Hash the next piece of data. + + .. note:: + For better performance, submit chunks with a length multiple of 8192 bytes. + + Args: + data (byte string/byte array/memoryview): The next chunk of the + message to hash. 
+ """ + + if self._state == SQUEEZING: + raise TypeError("You cannot call 'update' after the first 'read'") + + if self._state == SHORT_MSG: + next_length = self._length1 + len(data) + + if next_length + len(self._custom) <= 8192: + self._length1 = next_length + self._update(data, self._hash1) + return self + + # Switch to tree hashing + self._state = LONG_MSG_S0 + + if self._state == LONG_MSG_S0: + data_mem = memoryview(data) + assert(self._length1 < 8192) + dtc = min(len(data), 8192 - self._length1) + self._update(data_mem[:dtc], self._hash1) + self._length1 += dtc + + if self._length1 < 8192: + return self + + # Finish hashing S_0 and start S_1 + assert(self._length1 == 8192) + + divider = b'\x03' + b'\x00' * 7 + self._update(divider, self._hash1) + self._length1 += 8 + + self._hash2 = self._create_keccak() + self._length2 = 0 + self._ctr = 1 + + self._state = LONG_MSG_SX + return self.update(data_mem[dtc:]) + + # LONG_MSG_SX + assert(self._state == LONG_MSG_SX) + index = 0 + len_data = len(data) + + # All iteractions could actually run in parallel + data_mem = memoryview(data) + while index < len_data: + + new_index = min(index + 8192 - self._length2, len_data) + self._update(data_mem[index:new_index], self._hash2) + self._length2 += new_index - index + index = new_index + + if self._length2 == 8192: + cv_i = self._squeeze(self._hash2, 32, 0x0B) + self._update(cv_i, self._hash1) + self._length1 += 32 + self._reset(self._hash2) + self._length2 = 0 + self._ctr += 1 + + return self + + def read(self, length): + """ + Produce more bytes of the digest. + + .. note:: + You cannot use :meth:`update` anymore after the first call to + :meth:`read`. + + Args: + length (integer): the amount of bytes this method must return + + :return: the next piece of XOF output (of the given length) + :rtype: byte string + """ + + custom_was_consumed = False + + if self._state == SHORT_MSG: + self._update(self._custom, self._hash1) + self._padding = 0x07 + self._state = SQUEEZING + + if self._state == LONG_MSG_S0: + self.update(self._custom) + custom_was_consumed = True + assert(self._state == LONG_MSG_SX) + + if self._state == LONG_MSG_SX: + if not custom_was_consumed: + self.update(self._custom) + + # Is there still some leftover data in hash2? + if self._length2 > 0: + cv_i = self._squeeze(self._hash2, 32, 0x0B) + self._update(cv_i, self._hash1) + self._length1 += 32 + self._reset(self._hash2) + self._length2 = 0 + self._ctr += 1 + + trailer = _length_encode(self._ctr - 1) + b'\xFF\xFF' + self._update(trailer, self._hash1) + + self._padding = 0x06 + self._state = SQUEEZING + + return self._squeeze(self._hash1, length, self._padding) + + def new(self, data=None, custom=b''): + return type(self)(data, custom) + + +def new(data=None, custom=None): + """Return a fresh instance of a KangarooTwelve object. + + Args: + data (bytes/bytearray/memoryview): + Optional. + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + custom (bytes): + Optional. + A customization byte string. 
+ + :Return: A :class:`K12_XOF` object + """ + + return K12_XOF(data, custom) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/KangarooTwelve.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/KangarooTwelve.pyi new file mode 100644 index 0000000..8b3fd74 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/KangarooTwelve.pyi @@ -0,0 +1,16 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class K12_XOF(object): + def __init__(self, + data: Optional[Buffer] = ..., + custom: Optional[bytes] = ...) -> None: ... + def update(self, data: Buffer) -> K12_XOF: ... + def read(self, length: int) -> bytes: ... + def new(self, + data: Optional[Buffer] = ..., + custom: Optional[bytes] = ...) -> K12_XOF: ... + +def new(data: Optional[Buffer] = ..., + custom: Optional[Buffer] = ...) -> K12_XOF: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/MD2.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD2.py new file mode 100644 index 0000000..47ecc05 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD2.py @@ -0,0 +1,166 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_md2_lib = load_pycryptodome_raw_lib( + "Cryptodome.Hash._MD2", + """ + int md2_init(void **shaState); + int md2_destroy(void *shaState); + int md2_update(void *hs, + const uint8_t *buf, + size_t len); + int md2_digest(const void *shaState, + uint8_t digest[16]); + int md2_copy(const void *src, void *dst); + """) + + +class MD2Hash(object): + """An MD2 hash object. + Do not instantiate directly. Use the :func:`new` function.
+ + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 16 + # The internal block size of the hash algorithm in bytes. + block_size = 16 + # ASN.1 Object ID + oid = "1.2.840.113549.2.2" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_md2_lib.md2_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating MD2" + % result) + self._state = SmartPointer(state.get(), + _raw_md2_lib.md2_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_md2_lib.md2_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while instantiating MD2" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_md2_lib.md2_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while instantiating MD2" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = MD2Hash() + result = _raw_md2_lib.md2_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying MD2" % result) + return clone + + def new(self, data=None): + return MD2Hash(data) + + +def new(data=None): + """Create a new hash object. + + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`MD2Hash.update`. + :type data: bytes/bytearray/memoryview + + :Return: A :class:`MD2Hash` hash object + """ + + return MD2Hash().new(data) + +# The size of the resulting hash in bytes. +digest_size = MD2Hash.digest_size + +# The internal block size of the hash algorithm in bytes. +block_size = MD2Hash.block_size diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/MD2.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD2.pyi new file mode 100644 index 0000000..95a97a9 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD2.pyi @@ -0,0 +1,19 @@ +from typing import Union + +Buffer = Union[bytes, bytearray, memoryview] + +class MD2Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Buffer = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> MD2Hash: ...
+ def new(self, data: Buffer = ...) -> MD2Hash: ... + +def new(data: Buffer = ...) -> MD2Hash: ... +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/MD4.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD4.py new file mode 100644 index 0000000..668fa65 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD4.py @@ -0,0 +1,185 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +MD4 is specified in RFC1320_ and produces the 128-bit digest of a message. + + >>> from Cryptodome.Hash import MD4 + >>> + >>> h = MD4.new() + >>> h.update(b'Hello') + >>> print(h.hexdigest()) + +MD4 stands for Message Digest version 4, and it was invented by Rivest in 1990. +This algorithm is insecure. Do not use it for new designs. + +.. _RFC1320: http://tools.ietf.org/html/rfc1320 +""" + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_md4_lib = load_pycryptodome_raw_lib( + "Cryptodome.Hash._MD4", + """ + int md4_init(void **shaState); + int md4_destroy(void *shaState); + int md4_update(void *hs, + const uint8_t *buf, + size_t len); + int md4_digest(const void *shaState, + uint8_t digest[16]); + int md4_copy(const void *src, void *dst); + """) + + +class MD4Hash(object): + """Class that implements an MD4 hash + """ + + #: The size of the resulting hash in bytes. + digest_size = 16 + #: The internal block size of the hash algorithm in bytes. + block_size = 64 + #: ASN.1 Object ID + oid = "1.2.840.113549.2.4" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_md4_lib.md4_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating MD4" + % result) + self._state = SmartPointer(state.get(), + _raw_md4_lib.md4_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data.
+ + Repeated calls are equivalent to a single call with the concatenation + of all the arguments. In other words: + + >>> m.update(a); m.update(b) + + is equivalent to: + + >>> m.update(a+b) + + :Parameters: + data : byte string/byte array/memoryview + The next chunk of the message being hashed. + """ + + result = _raw_md4_lib.md4_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while instantiating MD4" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that + has been hashed so far. + + This method does not change the state of the hash object. + You can continue updating the object after calling this function. + + :Return: A byte string of `digest_size` bytes. It may contain non-ASCII + characters, including null bytes. + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_md4_lib.md4_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while instantiating MD4" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been + hashed so far. + + This method does not change the state of the hash object. + + :Return: A string of 2* `digest_size` characters. It contains only + hexadecimal ASCII digits. + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :Return: A hash object of the same type + """ + + clone = MD4Hash() + result = _raw_md4_lib.md4_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying MD4" % result) + return clone + + def new(self, data=None): + return MD4Hash(data) + + +def new(data=None): + """Return a fresh instance of the hash object. + + :Parameters: + data : byte string/byte array/memoryview + The very first chunk of the message to hash. + It is equivalent to an early call to `MD4Hash.update()`. + Optional. + + :Return: A `MD4Hash` object + """ + return MD4Hash().new(data) + +#: The size of the resulting hash in bytes. +digest_size = MD4Hash.digest_size + +#: The internal block size of the hash algorithm in bytes. +block_size = MD4Hash.block_size diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/MD4.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD4.pyi new file mode 100644 index 0000000..a9a7295 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD4.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class MD4Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> MD4Hash: ... + def new(self, data: Optional[Buffer] = ...) -> MD4Hash: ... + +def new(data: Optional[Buffer] = ...) -> MD4Hash: ... 
+digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/MD5.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD5.py new file mode 100644 index 0000000..8f573a9 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD5.py @@ -0,0 +1,184 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import * + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_md5_lib = load_pycryptodome_raw_lib("Cryptodome.Hash._MD5", + """ + #define MD5_DIGEST_SIZE 16 + + int MD5_init(void **shaState); + int MD5_destroy(void *shaState); + int MD5_update(void *hs, + const uint8_t *buf, + size_t len); + int MD5_digest(const void *shaState, + uint8_t digest[MD5_DIGEST_SIZE]); + int MD5_copy(const void *src, void *dst); + + int MD5_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t first_digest[MD5_DIGEST_SIZE], + uint8_t final_digest[MD5_DIGEST_SIZE], + size_t iterations); + """) + +class MD5Hash(object): + """A MD5 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 16 + # The internal block size of the hash algorithm in bytes. + block_size = 64 + # ASN.1 Object ID + oid = "1.2.840.113549.2.5" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_md5_lib.MD5_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating MD5" + % result) + self._state = SmartPointer(state.get(), + _raw_md5_lib.MD5_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_md5_lib.MD5_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while instantiating MD5" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. 
+ :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_md5_lib.MD5_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while instantiating MD5" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = MD5Hash() + result = _raw_md5_lib.MD5_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying MD5" % result) + return clone + + def new(self, data=None): + """Create a fresh MD5 hash object.""" + + return MD5Hash(data) + + +def new(data=None): + """Create a new hash object. + + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`MD5Hash.update`. + :type data: byte string/byte array/memoryview + + :Return: A :class:`MD5Hash` hash object + """ + return MD5Hash().new(data) + +# The size of the resulting hash in bytes. +digest_size = 16 + +# The internal block size of the hash algorithm in bytes. +block_size = 64 + + +def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): + """Compute the expensive inner loop in PBKDF2-HMAC.""" + + assert len(first_digest) == digest_size + assert iterations > 0 + + bfr = create_string_buffer(digest_size) + result = _raw_md5_lib.MD5_pbkdf2_hmac_assist( + inner._state.get(), + outer._state.get(), + first_digest, + bfr, + c_size_t(iterations)) + + if result: + raise ValueError("Error %d with PBKDF2-HMAC assist for MD5" % result) + + return get_raw_buffer(bfr) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/MD5.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD5.pyi new file mode 100644 index 0000000..d819556 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/MD5.pyi @@ -0,0 +1,19 @@ +from typing import Union + +Buffer = Union[bytes, bytearray, memoryview] + +class MD5Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Buffer = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> MD5Hash: ... + def new(self, data: Buffer = ...) -> MD5Hash: ... + +def new(data: Buffer = ...) -> MD5Hash: ... +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/Poly1305.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/Poly1305.py new file mode 100644 index 0000000..c03f522 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/Poly1305.py @@ -0,0 +1,217 @@ +# -*- coding: utf-8 -*- +# +# Hash/Poly1305.py - Implements the Poly1305 MAC +# +# =================================================================== +# The contents of this file are dedicated to the public domain.
To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from binascii import unhexlify + +from Cryptodome.Util.py3compat import bord, tobytes, _copy_bytes + +from Cryptodome.Hash import BLAKE2s +from Cryptodome.Random import get_random_bytes +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + + +_raw_poly1305 = load_pycryptodome_raw_lib("Cryptodome.Hash._poly1305", + """ + int poly1305_init(void **state, + const uint8_t *r, + size_t r_len, + const uint8_t *s, + size_t s_len); + int poly1305_destroy(void *state); + int poly1305_update(void *state, + const uint8_t *in, + size_t len); + int poly1305_digest(const void *state, + uint8_t *digest, + size_t len); + """) + + +class Poly1305_MAC(object): + """A Poly1305 MAC object. + Do not instantiate directly. Use the :func:`new` function. + + :ivar digest_size: the size in bytes of the resulting MAC tag + :vartype digest_size: integer + """ + + digest_size = 16 + + def __init__(self, r, s, data): + + if len(r) != 16: + raise ValueError("Parameter r is not 16 bytes long") + if len(s) != 16: + raise ValueError("Parameter s is not 16 bytes long") + + self._mac_tag = None + + state = VoidPointer() + result = _raw_poly1305.poly1305_init(state.address_of(), + c_uint8_ptr(r), + c_size_t(len(r)), + c_uint8_ptr(s), + c_size_t(len(s)) + ) + if result: + raise ValueError("Error %d while instantiating Poly1305" % result) + self._state = SmartPointer(state.get(), + _raw_poly1305.poly1305_destroy) + if data: + self.update(data) + + def update(self, data): + """Authenticate the next chunk of message. + + Args: + data (byte string/byte array/memoryview): The next chunk of data + """ + + if self._mac_tag: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_poly1305.poly1305_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing Poly1305 data" % result) + return self + + def copy(self): + raise NotImplementedError() + + def digest(self): + """Return the **binary** (non-printable) MAC tag of the message + authenticated so far. + + :return: The MAC tag digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + if self._mac_tag: + return self._mac_tag + + bfr = create_string_buffer(16) + result = _raw_poly1305.poly1305_digest(self._state.get(), + bfr, + c_size_t(len(bfr))) + if result: + raise ValueError("Error %d while creating Poly1305 digest" % result) + + self._mac_tag = get_raw_buffer(bfr) + return self._mac_tag + + def hexdigest(self): + """Return the **printable** MAC tag of the message authenticated so far.
+ + :return: The MAC tag, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) + for x in tuple(self.digest())]) + + def verify(self, mac_tag): + """Verify that a given **binary** MAC (computed by another party) + is valid. + + Args: + mac_tag (byte string/byte array/memoryview): the expected MAC of the message. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=mac_tag) + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=self.digest()) + + if mac1.digest() != mac2.digest(): + raise ValueError("MAC check failed") + + def hexverify(self, hex_mac_tag): + """Verify that a given **printable** MAC (computed by another party) + is valid. + + Args: + hex_mac_tag (string): the expected MAC of the message, + as a hexadecimal string. + + Raises: + ValueError: if the MAC does not match. It means that the message + has been tampered with or that the MAC key is incorrect. + """ + + self.verify(unhexlify(tobytes(hex_mac_tag))) + + + +def new(**kwargs): + """Create a new Poly1305 MAC object. + + Args: + key (bytes/bytearray/memoryview): + The 32-byte key for the Poly1305 object. + cipher (module from ``Cryptodome.Cipher``): + The cipher algorithm to use for deriving the Poly1305 + key pair *(r, s)*. + It can only be ``Cryptodome.Cipher.AES`` or ``Cryptodome.Cipher.ChaCha20``. + nonce (bytes/bytearray/memoryview): + Optional. The non-repeatable value to use for the MAC of this message. + It must be 16 bytes long for ``AES`` and 8 or 12 bytes for ``ChaCha20``. + If not passed, a random nonce is created; you will find it in the + ``nonce`` attribute of the new object. + data (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to authenticate. + It is equivalent to an early call to ``update()``. + + Returns: + A :class:`Poly1305_MAC` object + """ + + cipher = kwargs.pop("cipher", None) + if not hasattr(cipher, '_derive_Poly1305_key_pair'): + raise ValueError("Parameter 'cipher' must be AES or ChaCha20") + + cipher_key = kwargs.pop("key", None) + if cipher_key is None: + raise TypeError("You must pass a parameter 'key'") + + nonce = kwargs.pop("nonce", None) + data = kwargs.pop("data", None) + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + r, s, nonce = cipher._derive_Poly1305_key_pair(cipher_key, nonce) + + new_mac = Poly1305_MAC(r, s, data) + new_mac.nonce = _copy_bytes(None, None, nonce) # nonce may still be just a memoryview + return new_mac diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/Poly1305.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/Poly1305.pyi new file mode 100644 index 0000000..f97a14a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/Poly1305.pyi @@ -0,0 +1,24 @@ +from types import ModuleType +from typing import Union + +Buffer = Union[bytes, bytearray, memoryview] + +class Poly1305_MAC(object): + block_size: int + digest_size: int + oid: str + + def __init__(self, + r: Buffer, + s: Buffer, + data: Buffer) -> None: ... + def update(self, data: Buffer) -> Poly1305_MAC: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def verify(self, mac_tag: Buffer) -> None: ... + def hexverify(self, hex_mac_tag: str) -> None: ... + +def new(key: Buffer, + cipher: ModuleType, + nonce: Buffer = ..., + data: Buffer = ...)
-> Poly1305_MAC: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD.py new file mode 100644 index 0000000..35ad576 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD.py @@ -0,0 +1,26 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +# This file exists for backward compatibility with old code that refers to +# Cryptodome.Hash.RIPEMD + +"""Deprecated alias for `Cryptodome.Hash.RIPEMD160`""" + +from Cryptodome.Hash.RIPEMD160 import new, block_size, digest_size diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD.pyi new file mode 100644 index 0000000..cfb2252 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD.pyi @@ -0,0 +1,3 @@ +# This file exists for backward compatibility with old code that refers to +# Cryptodome.Hash.RIPEMD + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD160.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD160.py new file mode 100644 index 0000000..f959027 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD160.py @@ -0,0 +1,169 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_ripemd160_lib = load_pycryptodome_raw_lib( + "Cryptodome.Hash._RIPEMD160", + """ + int ripemd160_init(void **shaState); + int ripemd160_destroy(void *shaState); + int ripemd160_update(void *hs, + const uint8_t *buf, + size_t len); + int ripemd160_digest(const void *shaState, + uint8_t digest[20]); + int ripemd160_copy(const void *src, void *dst); + """) + + +class RIPEMD160Hash(object): + """A RIPEMD-160 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 20 + # The internal block size of the hash algorithm in bytes. + block_size = 64 + # ASN.1 Object ID + oid = "1.3.36.3.2.1" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_ripemd160_lib.ripemd160_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating RIPEMD160" + % result) + self._state = SmartPointer(state.get(), + _raw_ripemd160_lib.ripemd160_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_ripemd160_lib.ripemd160_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while instantiating ripemd160" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_ripemd160_lib.ripemd160_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while instantiating ripemd160" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. 
+ + :return: A hash object of the same type + """ + + clone = RIPEMD160Hash() + result = _raw_ripemd160_lib.ripemd160_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying ripemd160" % result) + return clone + + def new(self, data=None): + """Create a fresh RIPEMD-160 hash object.""" + + return RIPEMD160Hash(data) + + +def new(data=None): + """Create a new hash object. + + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`RIPEMD160Hash.update`. + :type data: byte string/byte array/memoryview + + :Return: A :class:`RIPEMD160Hash` hash object + """ + + return RIPEMD160Hash().new(data) + +# The size of the resulting hash in bytes. +digest_size = RIPEMD160Hash.digest_size + +# The internal block size of the hash algorithm in bytes. +block_size = RIPEMD160Hash.block_size diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD160.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD160.pyi new file mode 100644 index 0000000..b619473 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/RIPEMD160.pyi @@ -0,0 +1,19 @@ +from typing import Union + +Buffer = Union[bytes, bytearray, memoryview] + +class RIPEMD160Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Buffer = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> RIPEMD160Hash: ... + def new(self, data: Buffer = ...) -> RIPEMD160Hash: ... + +def new(data: Buffer = ...) -> RIPEMD160Hash: ... +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA.py new file mode 100644 index 0000000..95f8745 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA.py @@ -0,0 +1,24 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +# This file exists for backward compatibility with old code that refers to +# Cryptodome.Hash.SHA + +from Cryptodome.Hash.SHA1 import __doc__, new, block_size, digest_size diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA.pyi new file mode 100644 index 0000000..7d01a5f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA.pyi @@ -0,0 +1,4 @@ +# This file exists for backward compatibility with old code that refers to +# Cryptodome.Hash.SHA + +from Cryptodome.Hash.SHA1 import __doc__, new, block_size, digest_size diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA1.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA1.py new file mode 100644 index 0000000..dea51bc --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA1.py @@ -0,0 +1,185 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import * + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_sha1_lib = load_pycryptodome_raw_lib("Cryptodome.Hash._SHA1", + """ + #define SHA1_DIGEST_SIZE 20 + + int SHA1_init(void **shaState); + int SHA1_destroy(void *shaState); + int SHA1_update(void *hs, + const uint8_t *buf, + size_t len); + int SHA1_digest(const void *shaState, + uint8_t digest[SHA1_DIGEST_SIZE]); + int SHA1_copy(const void *src, void *dst); + + int SHA1_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t first_digest[SHA1_DIGEST_SIZE], + uint8_t final_digest[SHA1_DIGEST_SIZE], + size_t iterations); + """) + +class SHA1Hash(object): + """A SHA-1 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 20 + # The internal block size of the hash algorithm in bytes. 
+ block_size = 64 + # ASN.1 Object ID + oid = "1.3.14.3.2.26" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_sha1_lib.SHA1_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating SHA1" + % result) + self._state = SmartPointer(state.get(), + _raw_sha1_lib.SHA1_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_sha1_lib.SHA1_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while instantiating SHA1" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_sha1_lib.SHA1_digest(self._state.get(), + bfr) + if result: + raise ValueError("Error %d while instantiating SHA1" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = SHA1Hash() + result = _raw_sha1_lib.SHA1_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA1" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA-1 hash object.""" + + return SHA1Hash(data) + + +def new(data=None): + """Create a new hash object. + + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`SHA1Hash.update`. + :type data: byte string/byte array/memoryview + + :Return: A :class:`SHA1Hash` hash object + """ + return SHA1Hash().new(data) + + +# The size of the resulting hash in bytes. +digest_size = SHA1Hash.digest_size + +# The internal block size of the hash algorithm in bytes. 
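All of the classic hash modules in this vendored tree (RIPEMD160 above, SHA1 here, and the SHA-2 family below) expose the same small API: new(), update(), digest(), hexdigest() and copy(). A minimal usage sketch with arbitrary placeholder inputs; note that the legacy Cryptodome.Hash.SHA module is only a re-export of SHA1:

from Cryptodome.Hash import SHA, SHA1, RIPEMD160

# One-shot hashing: pass the whole message to new().
h = SHA1.new(b"hello world")
print(h.hexdigest())

# Incremental hashing: update() consumes the message in chunks and
# yields the same digest as hashing the concatenation.
h2 = SHA1.new()
h2.update(b"hello ")
h2.update(b"world")
assert h2.digest() == h.digest()

# The backward-compatibility SHA module resolves to SHA1.
assert SHA.new(b"hello world").digest() == h.digest()

# copy() clones the internal state, which saves work when many
# messages share a common prefix.
prefix = RIPEMD160.new(b"common prefix: ")
d1 = prefix.copy()
d2 = prefix.copy()
d1.update(b"message 1")
d2.update(b"message 2")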
+block_size = SHA1Hash.block_size + + +def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): + """Compute the expensive inner loop in PBKDF-HMAC.""" + + assert len(first_digest) == digest_size + assert iterations > 0 + + bfr = create_string_buffer(digest_size); + result = _raw_sha1_lib.SHA1_pbkdf2_hmac_assist( + inner._state.get(), + outer._state.get(), + first_digest, + bfr, + c_size_t(iterations)) + + if result: + raise ValueError("Error %d with PBKDF2-HMAC assist for SHA1" % result) + + return get_raw_buffer(bfr) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA1.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA1.pyi new file mode 100644 index 0000000..d6c8e25 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA1.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA1Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA1Hash: ... + def new(self, data: Optional[Buffer] = ...) -> SHA1Hash: ... + +def new(data: Optional[Buffer] = ...) -> SHA1Hash: ... +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA224.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA224.py new file mode 100644 index 0000000..fca7622 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA224.py @@ -0,0 +1,186 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_sha224_lib = load_pycryptodome_raw_lib("Cryptodome.Hash._SHA224", + """ + int SHA224_init(void **shaState); + int SHA224_destroy(void *shaState); + int SHA224_update(void *hs, + const uint8_t *buf, + size_t len); + int SHA224_digest(const void *shaState, + uint8_t *digest, + size_t digest_size); + int SHA224_copy(const void *src, void *dst); + + int SHA224_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t *first_digest, + uint8_t *final_digest, + size_t iterations, + size_t digest_size); + """) + +class SHA224Hash(object): + """A SHA-224 hash object. + Do not instantiate directly. + Use the :func:`new` function.
+ + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 28 + # The internal block size of the hash algorithm in bytes. + block_size = 64 + # ASN.1 Object ID + oid = '2.16.840.1.101.3.4.2.4' + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_sha224_lib.SHA224_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating SHA224" + % result) + self._state = SmartPointer(state.get(), + _raw_sha224_lib.SHA224_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_sha224_lib.SHA224_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing data with SHA224" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_sha224_lib.SHA224_digest(self._state.get(), + bfr, + c_size_t(self.digest_size)) + if result: + raise ValueError("Error %d while making SHA224 digest" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = SHA224Hash() + result = _raw_sha224_lib.SHA224_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA224" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA-224 hash object.""" + + return SHA224Hash(data) + + +def new(data=None): + """Create a new hash object. + + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`SHA224Hash.update`. + :type data: byte string/byte array/memoryview + + :Return: A :class:`SHA224Hash` hash object + """ + return SHA224Hash().new(data) + + +# The size of the resulting hash in bytes. +digest_size = SHA224Hash.digest_size + +# The internal block size of the hash algorithm in bytes. 
+block_size = SHA224Hash.block_size + + +def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): + """Compute the expensive inner loop in PBKDF-HMAC.""" + + assert iterations > 0 + + bfr = create_string_buffer(len(first_digest)); + result = _raw_sha224_lib.SHA224_pbkdf2_hmac_assist( + inner._state.get(), + outer._state.get(), + first_digest, + bfr, + c_size_t(iterations), + c_size_t(len(first_digest))) + + if result: + raise ValueError("Error %d with PBKDF2-HMAC assist for SHA224" % result) + + return get_raw_buffer(bfr) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA224.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA224.pyi new file mode 100644 index 0000000..613a7f9 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA224.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA224Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA224Hash: ... + def new(self, data: Optional[Buffer] = ...) -> SHA224Hash: ... + +def new(data: Optional[Buffer] = ...) -> SHA224Hash: ... +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA256.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA256.py new file mode 100644 index 0000000..c1a81b1 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA256.py @@ -0,0 +1,185 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_sha256_lib = load_pycryptodome_raw_lib("Cryptodome.Hash._SHA256", + """ + int SHA256_init(void **shaState); + int SHA256_destroy(void *shaState); + int SHA256_update(void *hs, + const uint8_t *buf, + size_t len); + int SHA256_digest(const void *shaState, + uint8_t *digest, + size_t digest_size); + int SHA256_copy(const void *src, void *dst); + + int SHA256_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t *first_digest, + uint8_t *final_digest, + size_t iterations, + size_t digest_size); + """) + +class SHA256Hash(object): + """A SHA-256 hash object. + Do not instantiate directly. Use the :func:`new` function. 
+ + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 32 + # The internal block size of the hash algorithm in bytes. + block_size = 64 + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.1" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_sha256_lib.SHA256_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating SHA256" + % result) + self._state = SmartPointer(state.get(), + _raw_sha256_lib.SHA256_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_sha256_lib.SHA256_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing data with SHA256" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_sha256_lib.SHA256_digest(self._state.get(), + bfr, + c_size_t(self.digest_size)) + if result: + raise ValueError("Error %d while making SHA256 digest" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = SHA256Hash() + result = _raw_sha256_lib.SHA256_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA256" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA-256 hash object.""" + + return SHA256Hash(data) + +def new(data=None): + """Create a new hash object. + + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`SHA256Hash.update`. + :type data: byte string/byte array/memoryview + + :Return: A :class:`SHA256Hash` hash object + """ + + return SHA256Hash().new(data) + + +# The size of the resulting hash in bytes. +digest_size = SHA256Hash.digest_size + +# The internal block size of the hash algorithm in bytes. 
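The _pbkdf2_hmac_assist helper that these SHA modules define is private plumbing: callers are expected to reach it indirectly through Cryptodome.Protocol.KDF.PBKDF2, whose hmac_hash_module argument routes the per-iteration HMAC loop through the module's C implementation. A sketch of that public entry point; dkLen and count are illustrative values, not recommendations:

from Cryptodome.Protocol.KDF import PBKDF2
from Cryptodome.Hash import SHA256

# hmac_hash_module selects which hash module's PBKDF2 assist is used.
key = PBKDF2(b"password", b"salt", dkLen=32, count=100000,
             hmac_hash_module=SHA256)
print(key.hex())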
+block_size = SHA256Hash.block_size + + +def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): + """Compute the expensive inner loop in PBKDF-HMAC.""" + + assert iterations > 0 + + bfr = create_string_buffer(len(first_digest)); + result = _raw_sha256_lib.SHA256_pbkdf2_hmac_assist( + inner._state.get(), + outer._state.get(), + first_digest, + bfr, + c_size_t(iterations), + c_size_t(len(first_digest))) + + if result: + raise ValueError("Error %d with PBKDF2-HMAC assist for SHA256" % result) + + return get_raw_buffer(bfr) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA256.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA256.pyi new file mode 100644 index 0000000..cbf21bf --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA256.pyi @@ -0,0 +1,18 @@ +from typing import Union, Optional + + +class SHA256Hash(object): + digest_size: int + block_size: int + oid: str + def __init__(self, data: Optional[Union[bytes, bytearray, memoryview]]=None) -> None: ... + def update(self, data: Union[bytes, bytearray, memoryview]) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA256Hash: ... + def new(self, data: Optional[Union[bytes, bytearray, memoryview]]=None) -> SHA256Hash: ... + +def new(data: Optional[Union[bytes, bytearray, memoryview]]=None) -> SHA256Hash: ... + +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA384.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA384.py new file mode 100644 index 0000000..711aa73 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA384.py @@ -0,0 +1,186 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_sha384_lib = load_pycryptodome_raw_lib("Cryptodome.Hash._SHA384", + """ + int SHA384_init(void **shaState); + int SHA384_destroy(void *shaState); + int SHA384_update(void *hs, + const uint8_t *buf, + size_t len); + int SHA384_digest(const void *shaState, + uint8_t *digest, + size_t digest_size); + int SHA384_copy(const void *src, void *dst); + + int SHA384_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t *first_digest, + uint8_t *final_digest, + size_t iterations, + size_t digest_size); + """) + +class SHA384Hash(object): + """A SHA-384 hash object. + Do not instantiate directly. 
Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. + digest_size = 48 + # The internal block size of the hash algorithm in bytes. + block_size = 128 + # ASN.1 Object ID + oid = '2.16.840.1.101.3.4.2.2' + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_sha384_lib.SHA384_init(state.address_of()) + if result: + raise ValueError("Error %d while instantiating SHA384" + % result) + self._state = SmartPointer(state.get(), + _raw_sha384_lib.SHA384_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_sha384_lib.SHA384_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing data with SHA384" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_sha384_lib.SHA384_digest(self._state.get(), + bfr, + c_size_t(self.digest_size)) + if result: + raise ValueError("Error %d while making SHA384 digest" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = SHA384Hash() + result = _raw_sha384_lib.SHA384_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA384" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA-384 hash object.""" + + return SHA384Hash(data) + + +def new(data=None): + """Create a new hash object. + + :parameter data: + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`SHA384Hash.update`. + :type data: byte string/byte array/memoryview + + :Return: A :class:`SHA384Hash` hash object + """ + + return SHA384Hash().new(data) + + +# The size of the resulting hash in bytes. +digest_size = SHA384Hash.digest_size + +# The internal block size of the hash algorithm in bytes. 
+block_size = SHA384Hash.block_size + + +def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): + """Compute the expensive inner loop in PBKDF-HMAC.""" + + assert iterations > 0 + + bfr = create_string_buffer(len(first_digest)); + result = _raw_sha384_lib.SHA384_pbkdf2_hmac_assist( + inner._state.get(), + outer._state.get(), + first_digest, + bfr, + c_size_t(iterations), + c_size_t(len(first_digest))) + + if result: + raise ValueError("Error %d with PBKDF2-HMAC assist for SHA384" % result) + + return get_raw_buffer(bfr) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA384.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA384.pyi new file mode 100644 index 0000000..c2aab9e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA384.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA384Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA384Hash: ... + def new(self, data: Optional[Buffer] = ...) -> SHA384Hash: ... + +def new(data: Optional[Buffer] = ...) -> SHA384Hash: ... +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_224.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_224.py new file mode 100644 index 0000000..34888c5 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_224.py @@ -0,0 +1,174 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Cryptodome.Hash.keccak import _raw_keccak_lib + +class SHA3_224_Hash(object): + """A SHA3-224 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. 
+ digest_size = 28 + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.7" + + # Input block size for HMAC + block_size = 144 + + def __init__(self, data, update_after_digest): + self._update_after_digest = update_after_digest + self._digest_done = False + self._padding = 0x06 + + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(self.digest_size * 2), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHA-3/224" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data)) + ) + if result: + raise ValueError("Error %d while updating SHA-3/224" + % result) + return self + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + self._digest_done = True + + bfr = create_string_buffer(self.digest_size) + result = _raw_keccak_lib.keccak_digest(self._state.get(), + bfr, + c_size_t(self.digest_size), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while instantiating SHA-3/224" + % result) + + self._digest_value = get_raw_buffer(bfr) + return self._digest_value + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = self.new() + result = _raw_keccak_lib.keccak_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA3-224" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA3-224 hash object.""" + + return type(self)(data, self._update_after_digest) + + +def new(*args, **kwargs): + """Create a new hash object. + + Args: + data (byte string/byte array/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + update_after_digest (boolean): + Whether :meth:`digest` can be followed by another :meth:`update` + (default: ``False``). + + :Return: A :class:`SHA3_224_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + if len(args) == 1: + if data: + raise ValueError("Initial data for hash specified twice") + data = args[0] + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return SHA3_224_Hash(data, update_after_digest) + +# The size of the resulting hash in bytes. 
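Unlike the SHA-2 classes above, the SHA-3 objects finalize on the first digest() call: a later update() raises TypeError unless update_after_digest=True was passed to new(). Note also that update() returns the object itself here, so calls can be chained. A small sketch:

from Cryptodome.Hash import SHA3_224

h = SHA3_224.new(data=b"part one, ")
h.update(b"part two")              # update() returns self for SHA-3 objects
print(h.hexdigest())

try:
    h.update(b"more")              # rejected: object already finalized
except TypeError as exc:
    print(exc)

# Opting in to updates after digest():
h2 = SHA3_224.new(update_after_digest=True)
h2.update(b"a").update(b"b")       # chaining via the returned self
first = h2.digest()
h2.update(b"c")                    # allowed because of the flag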
+digest_size = SHA3_224_Hash.digest_size + +# Input block size for HMAC +block_size = 144 diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_224.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_224.pyi new file mode 100644 index 0000000..2180821 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_224.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA3_224_Hash(object): + digest_size: int + block_size: int + oid: str + def __init__(self, data: Optional[Buffer], update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> SHA3_224_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA3_224_Hash: ... + def new(self, data: Optional[Buffer]) -> SHA3_224_Hash: ... + +def new(__data: Buffer = ..., update_after_digest: bool = ...) -> SHA3_224_Hash: ... + +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_256.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_256.py new file mode 100644 index 0000000..024962f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_256.py @@ -0,0 +1,174 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Cryptodome.Hash.keccak import _raw_keccak_lib + +class SHA3_256_Hash(object): + """A SHA3-256 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. 
+ digest_size = 32 + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.8" + + # Input block size for HMAC + block_size = 136 + + def __init__(self, data, update_after_digest): + self._update_after_digest = update_after_digest + self._digest_done = False + self._padding = 0x06 + + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(self.digest_size * 2), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHA-3/256" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data)) + ) + if result: + raise ValueError("Error %d while updating SHA-3/256" + % result) + return self + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + self._digest_done = True + + bfr = create_string_buffer(self.digest_size) + result = _raw_keccak_lib.keccak_digest(self._state.get(), + bfr, + c_size_t(self.digest_size), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while instantiating SHA-3/256" + % result) + + self._digest_value = get_raw_buffer(bfr) + return self._digest_value + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = self.new() + result = _raw_keccak_lib.keccak_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA3-256" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA3-256 hash object.""" + + return type(self)(data, self._update_after_digest) + + +def new(*args, **kwargs): + """Create a new hash object. + + Args: + data (byte string/byte array/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + update_after_digest (boolean): + Whether :meth:`digest` can be followed by another :meth:`update` + (default: ``False``). + + :Return: A :class:`SHA3_256_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + if len(args) == 1: + if data: + raise ValueError("Initial data for hash specified twice") + data = args[0] + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return SHA3_256_Hash(data, update_after_digest) + +# The size of the resulting hash in bytes. 
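The per-variant block_size constants (144, 136, 104 and 72 bytes) follow directly from the keccak_init call: the 1600-bit Keccak state is 200 bytes, each class reserves a capacity of digest_size * 2 bytes, and the remainder is the rate that HMAC uses as its input block size. A quick arithmetic check:

# rate = 200-byte Keccak-1600 state minus capacity (2 * digest_size)
for digest_size in (28, 32, 48, 64):           # SHA3-224/256/384/512
    print(digest_size, 200 - 2 * digest_size)  # -> 144, 136, 104, 72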
+digest_size = SHA3_256_Hash.digest_size + +# Input block size for HMAC +block_size = 136 diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_256.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_256.pyi new file mode 100644 index 0000000..88436bd --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_256.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA3_256_Hash(object): + digest_size: int + block_size: int + oid: str + def __init__(self, data: Optional[Buffer], update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> SHA3_256_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA3_256_Hash: ... + def new(self, data: Optional[Buffer]) -> SHA3_256_Hash: ... + +def new(__data: Buffer = ..., update_after_digest: bool = ...) -> SHA3_256_Hash: ... + +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_384.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_384.py new file mode 100644 index 0000000..26eeb79 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_384.py @@ -0,0 +1,179 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Cryptodome.Hash.keccak import _raw_keccak_lib + +class SHA3_384_Hash(object): + """A SHA3-384 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. 
+ digest_size = 48 + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.9" + + # Input block size for HMAC + block_size = 104 + + def __init__(self, data, update_after_digest): + self._update_after_digest = update_after_digest + self._digest_done = False + self._padding = 0x06 + + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(self.digest_size * 2), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHA-3/384" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating SHA-3/384" + % result) + return self + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + self._digest_done = True + + bfr = create_string_buffer(self.digest_size) + result = _raw_keccak_lib.keccak_digest(self._state.get(), + bfr, + c_size_t(self.digest_size), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while instantiating SHA-3/384" + % result) + + self._digest_value = get_raw_buffer(bfr) + return self._digest_value + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = self.new() + result = _raw_keccak_lib.keccak_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA3-384" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA3-384 hash object.""" + + return type(self)(data, self._update_after_digest) + + +def new(*args, **kwargs): + """Create a new hash object. + + Args: + data (byte string/byte array/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + update_after_digest (boolean): + Whether :meth:`digest` can be followed by another :meth:`update` + (default: ``False``). + + :Return: A :class:`SHA3_384_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + if len(args) == 1: + if data: + raise ValueError("Initial data for hash specified twice") + data = args[0] + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return SHA3_384_Hash(data, update_after_digest) + +# The size of the resulting hash in bytes.
+digest_size = SHA3_384_Hash.digest_size + +# Input block size for HMAC +block_size = 104 diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_384.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_384.pyi new file mode 100644 index 0000000..98d00c6 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_384.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA3_384_Hash(object): + digest_size: int + block_size: int + oid: str + def __init__(self, data: Optional[Buffer], update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> SHA3_384_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA3_384_Hash: ... + def new(self, data: Optional[Buffer]) -> SHA3_384_Hash: ... + +def new(__data: Buffer = ..., update_after_digest: bool = ...) -> SHA3_384_Hash: ... + +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_512.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_512.py new file mode 100644 index 0000000..99b1c37 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_512.py @@ -0,0 +1,174 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Cryptodome.Hash.keccak import _raw_keccak_lib + +class SHA3_512_Hash(object): + """A SHA3-512 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The size of the resulting hash in bytes. 
+ digest_size = 64 + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.10" + + # Input block size for HMAC + block_size = 72 + + def __init__(self, data, update_after_digest): + self._update_after_digest = update_after_digest + self._digest_done = False + self._padding = 0x06 + + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(self.digest_size * 2), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHA-3/512" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating SHA-3/512" + % result) + return self + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + self._digest_done = True + + bfr = create_string_buffer(self.digest_size) + result = _raw_keccak_lib.keccak_digest(self._state.get(), + bfr, + c_size_t(self.digest_size), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while instantiating SHA-3/512" + % result) + + self._digest_value = get_raw_buffer(bfr) + return self._digest_value + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = self.new() + result = _raw_keccak_lib.keccak_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA3-512" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA3-512 hash object.""" + + return type(self)(data, self._update_after_digest) + + +def new(*args, **kwargs): + """Create a new hash object. + + Args: + data (byte string/byte array/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + update_after_digest (boolean): + Whether :meth:`digest` can be followed by another :meth:`update` + (default: ``False``). + + :Return: A :class:`SHA3_512_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + if len(args) == 1: + if data: + raise ValueError("Initial data for hash specified twice") + data = args[0] + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return SHA3_512_Hash(data, update_after_digest) + +# The size of the resulting hash in bytes.
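The block_size attribute on the SHA-3 classes exists mainly so the modules can be passed as digestmod to Cryptodome.Hash.HMAC, which needs the input block size to pad the key. A brief sketch; the key and message are placeholders:

from Cryptodome.Hash import HMAC, SHA3_512

mac = HMAC.new(b"secret key", msg=b"message", digestmod=SHA3_512)
tag = mac.digest()

# Verification recomputes the tag and raises ValueError on mismatch.
HMAC.new(b"secret key", msg=b"message", digestmod=SHA3_512).verify(tag)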
+digest_size = SHA3_512_Hash.digest_size + +# Input block size for HMAC +block_size = 72 diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_512.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_512.pyi new file mode 100644 index 0000000..cdeec16 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA3_512.pyi @@ -0,0 +1,19 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA3_512_Hash(object): + digest_size: int + block_size: int + oid: str + def __init__(self, data: Optional[Buffer], update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> SHA3_512_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA3_512_Hash: ... + def new(self, data: Optional[Buffer]) -> SHA3_512_Hash: ... + +def new(__data: Buffer = ..., update_after_digest: bool = ...) -> SHA3_512_Hash: ... + +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA512.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA512.py new file mode 100644 index 0000000..5066197 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA512.py @@ -0,0 +1,204 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr) + +_raw_sha512_lib = load_pycryptodome_raw_lib("Cryptodome.Hash._SHA512", + """ + int SHA512_init(void **shaState, + size_t digest_size); + int SHA512_destroy(void *shaState); + int SHA512_update(void *hs, + const uint8_t *buf, + size_t len); + int SHA512_digest(const void *shaState, + uint8_t *digest, + size_t digest_size); + int SHA512_copy(const void *src, void *dst); + + int SHA512_pbkdf2_hmac_assist(const void *inner, + const void *outer, + const uint8_t *first_digest, + uint8_t *final_digest, + size_t iterations, + size_t digest_size); + """) + +class SHA512Hash(object): + """A SHA-512 hash object (possibly in its truncated version SHA-512/224 or + SHA-512/256). + Do not instantiate directly. Use the :func:`new` function.
+ + :ivar oid: ASN.1 Object ID + :vartype oid: string + + :ivar block_size: the size in bytes of the internal message block, + input to the compression function + :vartype block_size: integer + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + # The internal block size of the hash algorithm in bytes. + block_size = 128 + + def __init__(self, data, truncate): + self._truncate = truncate + + if truncate is None: + self.oid = "2.16.840.1.101.3.4.2.3" + self.digest_size = 64 + elif truncate == "224": + self.oid = "2.16.840.1.101.3.4.2.5" + self.digest_size = 28 + elif truncate == "256": + self.oid = "2.16.840.1.101.3.4.2.6" + self.digest_size = 32 + else: + raise ValueError("Incorrect truncation length. It must be '224' or '256'.") + + state = VoidPointer() + result = _raw_sha512_lib.SHA512_init(state.address_of(), + c_size_t(self.digest_size)) + if result: + raise ValueError("Error %d while instantiating SHA-512" + % result) + self._state = SmartPointer(state.get(), + _raw_sha512_lib.SHA512_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + result = _raw_sha512_lib.SHA512_update(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while hashing data with SHA512" + % result) + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + bfr = create_string_buffer(self.digest_size) + result = _raw_sha512_lib.SHA512_digest(self._state.get(), + bfr, + c_size_t(self.digest_size)) + if result: + raise ValueError("Error %d while making SHA512 digest" + % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def copy(self): + """Return a copy ("clone") of the hash object. + + The copy will have the same internal state as the original hash + object. + This can be used to efficiently compute the digests of strings that + share a common initial substring. + + :return: A hash object of the same type + """ + + clone = SHA512Hash(None, self._truncate) + result = _raw_sha512_lib.SHA512_copy(self._state.get(), + clone._state.get()) + if result: + raise ValueError("Error %d while copying SHA512" % result) + return clone + + def new(self, data=None): + """Create a fresh SHA-512 hash object.""" + + return SHA512Hash(data, self._truncate) + + +def new(data=None, truncate=None): + """Create a new hash object. + + Args: + data (bytes/bytearray/memoryview): + Optional. The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`SHA512Hash.update`. + truncate (string): + Optional. The desired length of the digest. It can be either "224" or + "256". If not present, the digest is 512 bits long. + Passing this parameter is **not** equivalent to simply truncating + the output digest. + + :Return: A :class:`SHA512Hash` hash object + """ + + return SHA512Hash(data, truncate) + + +# The size of the full SHA-512 hash in bytes. 
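As the docstring for new() warns, the truncate parameter is not a simple cut of the 512-bit output: SHA-512/224 and SHA-512/256 use different initialization vectors, so their digests share no prefix with full SHA-512. A sketch:

from Cryptodome.Hash import SHA512

full = SHA512.new(b"abc")                    # 64-byte digest
trunc = SHA512.new(b"abc", truncate="256")   # 32-byte digest, distinct IV

print(full.hexdigest())
print(trunc.hexdigest())
assert trunc.digest() != full.digest()[:32]  # truncation != slicing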
+digest_size = 64 + +# The internal block size of the hash algorithm in bytes. +block_size = 128 + + +def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): + """Compute the expensive inner loop in PBKDF-HMAC.""" + + assert iterations > 0 + + bfr = create_string_buffer(len(first_digest)) + result = _raw_sha512_lib.SHA512_pbkdf2_hmac_assist( + inner._state.get(), + outer._state.get(), + first_digest, + bfr, + c_size_t(iterations), + c_size_t(len(first_digest))) + + if result: + raise ValueError("Error %d with PBKDF2-HMAC assist for SHA512" % result) + + return get_raw_buffer(bfr) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA512.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA512.pyi new file mode 100644 index 0000000..f219ee9 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHA512.pyi @@ -0,0 +1,22 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHA512Hash(object): + digest_size: int + block_size: int + oid: str + + def __init__(self, + data: Optional[Buffer], + truncate: Optional[str]) -> None: ... + def update(self, data: Buffer) -> None: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def copy(self) -> SHA512Hash: ... + def new(self, data: Optional[Buffer] = ...) -> SHA512Hash: ... + +def new(data: Optional[Buffer] = ..., + truncate: Optional[str] = ...) -> SHA512Hash: ... +digest_size: int +block_size: int diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE128.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE128.py new file mode 100644 index 0000000..5bde2b6 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE128.py @@ -0,0 +1,129 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE.
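The _pbkdf2_hmac_assist hook above is the C fast path that Cryptodome.Protocol.KDF.PBKDF2 uses when SHA-512 is the PRF; a sketch of the public entry point (salt and iteration count are illustrative values, not recommendations):

from Cryptodome.Protocol.KDF import PBKDF2
from Cryptodome.Hash import SHA512

# Derive a 32-byte key from a passphrase; SHA-512 selects the assisted HMAC loop.
key = PBKDF2(b"passphrase", b"salt1234", dkLen=32, count=100000,
             hmac_hash_module=SHA512)
assert len(key) == 32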
+# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Cryptodome.Hash.keccak import _raw_keccak_lib + +class SHAKE128_XOF(object): + """A SHAKE128 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + """ + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.11" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(32), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHAKE128" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + self._is_squeezing = False + self._padding = 0x1F + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._is_squeezing: + raise TypeError("You cannot call 'update' after the first 'read'") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating SHAKE128 state" + % result) + return self + + def read(self, length): + """ + Compute the next piece of XOF output. + + .. note:: + You cannot use :meth:`update` anymore after the first call to + :meth:`read`. + + Args: + length (integer): the amount of bytes this method must return + + :return: the next piece of XOF output (of the given length) + :rtype: byte string + """ + + self._is_squeezing = True + bfr = create_string_buffer(length) + result = _raw_keccak_lib.keccak_squeeze(self._state.get(), + bfr, + c_size_t(length), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while extracting from SHAKE128" + % result) + + return get_raw_buffer(bfr) + + def new(self, data=None): + return type(self)(data=data) + + +def new(data=None): + """Return a fresh instance of a SHAKE128 object. + + Args: + data (bytes/bytearray/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + Optional. + + :Return: A :class:`SHAKE128_XOF` object + """ + + return SHAKE128_XOF(data=data) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE128.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE128.pyi new file mode 100644 index 0000000..f618881 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE128.pyi @@ -0,0 +1,13 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHAKE128_XOF(object): + oid: str + def __init__(self, + data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> SHAKE128_XOF: ... + def read(self, length: int) -> bytes: ... + def new(self, data: Optional[Buffer] = ...) -> SHAKE128_XOF: ... + +def new(data: Optional[Buffer] = ...) -> SHAKE128_XOF: ... 
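A minimal sketch of the XOF interface defined by the two SHAKE128 files above, assuming the vendored package is importable; note the one-way switch from absorbing to squeezing:

from Cryptodome.Hash import SHAKE128

xof = SHAKE128.new(b"arbitrary-length seed")
first = xof.read(16)    # first 16 bytes of output
more = xof.read(32)     # next 32 bytes; the stream continues rather than restarting
# xof.update(b"x")      # would now raise TypeError: no update after the first read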
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE256.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE256.py new file mode 100644 index 0000000..8c37f6a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE256.py @@ -0,0 +1,130 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Cryptodome.Hash.keccak import _raw_keccak_lib + +class SHAKE256_XOF(object): + """A SHAKE256 hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar oid: ASN.1 Object ID + :vartype oid: string + """ + + # ASN.1 Object ID + oid = "2.16.840.1.101.3.4.2.12" + + def __init__(self, data=None): + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(64), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating SHAKE256" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + self._is_squeezing = False + self._padding = 0x1F + + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._is_squeezing: + raise TypeError("You cannot call 'update' after the first 'read'") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating SHAKE256 state" + % result) + return self + + def read(self, length): + """ + Compute the next piece of XOF output. + + .. note:: + You cannot use :meth:`update` anymore after the first call to + :meth:`read`. 
+ + Args: + length (integer): the amount of bytes this method must return + + :return: the next piece of XOF output (of the given length) + :rtype: byte string + """ + + self._is_squeezing = True + bfr = create_string_buffer(length) + result = _raw_keccak_lib.keccak_squeeze(self._state.get(), + bfr, + c_size_t(length), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while extracting from SHAKE256" + % result) + + return get_raw_buffer(bfr) + + def new(self, data=None): + return type(self)(data=data) + + +def new(data=None): + """Return a fresh instance of a SHAKE256 object. + + Args: + data (bytes/bytearray/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + Optional. + + :Return: A :class:`SHAKE256_XOF` object + """ + + return SHAKE256_XOF(data=data) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE256.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE256.pyi new file mode 100644 index 0000000..029347a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/SHAKE256.pyi @@ -0,0 +1,13 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class SHAKE256_XOF(object): + oid: str + def __init__(self, + data: Optional[Buffer] = ...) -> None: ... + def update(self, data: Buffer) -> SHAKE256_XOF: ... + def read(self, length: int) -> bytes: ... + def new(self, data: Optional[Buffer] = ...) -> SHAKE256_XOF: ... + +def new(data: Optional[Buffer] = ...) -> SHAKE256_XOF: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash128.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash128.py new file mode 100644 index 0000000..5c910e4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash128.py @@ -0,0 +1,138 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util.py3compat import bord, is_bytes, tobytes + +from . 
import cSHAKE128 +from .cSHAKE128 import _encode_str, _right_encode + + +class TupleHash(object): + """A Tuple hash object. + Do not instantiate directly. + Use the :func:`new` function. + """ + + def __init__(self, custom, cshake, digest_size): + + self.digest_size = digest_size + + self._cshake = cshake._new(b'', custom, b'TupleHash') + self._digest = None + + def update(self, data): + """Authenticate the next byte string in the tuple. + + Args: + data (bytes/bytearray/memoryview): The next byte string. + """ + + if self._digest is not None: + raise TypeError("You cannot call 'update' after 'digest' or 'hexdigest'") + + if not is_bytes(data): + raise TypeError("You can only call 'update' on bytes") + + self._cshake.update(_encode_str(tobytes(data))) + + return self + + def digest(self): + """Return the **binary** (non-printable) digest of the tuple of byte strings. + + :return: The hash digest. Binary form. + :rtype: byte string + """ + + if self._digest is None: + self._cshake.update(_right_encode(self.digest_size * 8)) + self._digest = self._cshake.read(self.digest_size) + + return self._digest + + def hexdigest(self): + """Return the **printable** digest of the tuple of byte strings. + + :return: The hash digest. Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in tuple(self.digest())]) + + def new(self, **kwargs): + """Return a new instance of a TupleHash object. + See :func:`new`. + """ + + if "digest_bytes" not in kwargs and "digest_bits" not in kwargs: + kwargs["digest_bytes"] = self.digest_size + + return new(**kwargs) + + +def new(**kwargs): + """Create a new TupleHash128 object. + + Args: + digest_bytes (integer): + Optional. The size of the digest, in bytes. + Default is 64. Minimum is 8. + digest_bits (integer): + Optional and alternative to ``digest_bytes``. + The size of the digest, in bits (and in steps of 8). + Default is 512. Minimum is 64. + custom (bytes): + Optional. + A customization bytestring (``S`` in SP 800-185). + + :Return: A :class:`TupleHash` object + """ + + digest_bytes = kwargs.pop("digest_bytes", None) + digest_bits = kwargs.pop("digest_bits", None) + if None not in (digest_bytes, digest_bits): + raise TypeError("Only one digest parameter must be provided") + if (None, None) == (digest_bytes, digest_bits): + digest_bytes = 64 + if digest_bytes is not None: + if digest_bytes < 8: + raise ValueError("'digest_bytes' must be at least 8") + else: + if digest_bits < 64 or digest_bits % 8: + raise ValueError("'digest_bits' must be at least 64 " + "in steps of 8") + digest_bytes = digest_bits // 8 + + custom = kwargs.pop("custom", b'') + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return TupleHash(custom, cSHAKE128, digest_bytes) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash128.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash128.pyi new file mode 100644 index 0000000..3b1e81e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash128.pyi @@ -0,0 +1,22 @@ +from typing import Any, Union +from types import ModuleType + +Buffer = Union[bytes, bytearray, memoryview] + +class TupleHash(object): + digest_size: int + def __init__(self, + custom: bytes, + cshake: ModuleType, + digest_size: int) -> None: ... + def update(self, data: Buffer) -> TupleHash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def new(self, + digest_bytes: int = ..., + digest_bits: int = ..., + custom: bytes = ...) -> TupleHash: ...
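Because update() above passes every byte string through _encode_str, tuple boundaries are part of the hash; a sketch, assuming the vendored package is importable:

from Cryptodome.Hash import TupleHash128

one_piece = TupleHash128.new(digest_bytes=32)
one_piece.update(b"ab")
two_pieces = TupleHash128.new(digest_bytes=32)
two_pieces.update(b"a")
two_pieces.update(b"b")
# Each string is length-prefixed, so the tuple ("ab",) differs from ("a", "b").
assert one_piece.digest() != two_pieces.digest()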
+ +def new(digest_bytes: int = ..., + digest_bits: int = ..., + custom: bytes = ...) -> TupleHash: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash256.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash256.py new file mode 100644 index 0000000..9b4fba0 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash256.py @@ -0,0 +1,73 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from . import cSHAKE256 +from .TupleHash128 import TupleHash + + +def new(**kwargs): + """Create a new TupleHash256 object. + + Args: + digest_bytes (integer): + Optional. The size of the digest, in bytes. + Default is 64. Minimum is 8. + digest_bits (integer): + Optional and alternative to ``digest_bytes``. + The size of the digest, in bits (and in steps of 8). + Default is 512. Minimum is 64. + custom (bytes): + Optional. + A customization bytestring (``S`` in SP 800-185).
+ + :Return: A :class:`TupleHash` object + """ + + digest_bytes = kwargs.pop("digest_bytes", None) + digest_bits = kwargs.pop("digest_bits", None) + if None not in (digest_bytes, digest_bits): + raise TypeError("Only one digest parameter must be provided") + if (None, None) == (digest_bytes, digest_bits): + digest_bytes = 64 + if digest_bytes is not None: + if digest_bytes < 8: + raise ValueError("'digest_bytes' must be at least 8") + else: + if digest_bits < 64 or digest_bits % 8: + raise ValueError("'digest_bits' must be at least 64 " + "in steps of 8") + digest_bytes = digest_bits // 8 + + custom = kwargs.pop("custom", b'') + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return TupleHash(custom, cSHAKE256, digest_bytes) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash256.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash256.pyi new file mode 100644 index 0000000..82d943f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/TupleHash256.pyi @@ -0,0 +1,5 @@ +from .TupleHash128 import TupleHash + +def new(digest_bytes: int = ..., + digest_bits: int = ..., + custom: bytes = ...) -> TupleHash: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_BLAKE2b.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_BLAKE2b.abi3.so new file mode 100755 index 0000000..40cf664 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_BLAKE2b.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_BLAKE2s.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_BLAKE2s.abi3.so new file mode 100755 index 0000000..04a1ace Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_BLAKE2s.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_MD2.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_MD2.abi3.so new file mode 100755 index 0000000..1573ca3 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_MD2.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_MD4.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_MD4.abi3.so new file mode 100755 index 0000000..8f41e31 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_MD4.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_MD5.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_MD5.abi3.so new file mode 100755 index 0000000..b22cf36 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_MD5.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_RIPEMD160.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_RIPEMD160.abi3.so new file mode 100755 index 0000000..78faa00 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_RIPEMD160.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA1.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA1.abi3.so new file mode 100755 index 0000000..acd08ad Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA1.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA224.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA224.abi3.so new file mode 100755 index 0000000..9cf3ef6 Binary files /dev/null and
b/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA224.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA256.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA256.abi3.so new file mode 100755 index 0000000..6dffb17 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA256.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA384.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA384.abi3.so new file mode 100755 index 0000000..7c72fd0 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA384.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA512.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA512.abi3.so new file mode 100755 index 0000000..058653c Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_SHA512.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/__init__.py new file mode 100644 index 0000000..4bda084 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/__init__.py @@ -0,0 +1,24 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +__all__ = ['HMAC', 'MD2', 'MD4', 'MD5', 'RIPEMD160', 'SHA1', + 'SHA224', 'SHA256', 'SHA384', 'SHA512', 'CMAC', 'Poly1305', + 'cSHAKE128', 'cSHAKE256', 'KMAC128', 'KMAC256', + 'TupleHash128', 'TupleHash256', 'KangarooTwelve'] diff --git a/lib/site-packages/pip/_internal/resolution/resolvelib/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/__init__.pyi similarity index 100% rename from lib/site-packages/pip/_internal/resolution/resolvelib/__init__.py rename to python/lib/python3.11/site-packages/Cryptodome/Hash/__init__.pyi diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_ghash_clmul.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_ghash_clmul.abi3.so new file mode 100755 index 0000000..d13832c Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_ghash_clmul.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_ghash_portable.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_ghash_portable.abi3.so new file mode 100755 index 0000000..555c6fc Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_ghash_portable.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_keccak.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_keccak.abi3.so new file mode 100755 index 0000000..4e494b0 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_keccak.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/_poly1305.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Hash/_poly1305.abi3.so new file mode 100755 index 0000000..901b8c2 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Hash/_poly1305.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE128.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE128.py new file mode 100644 index 0000000..7c2f30a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE128.py @@ -0,0 +1,187 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util.py3compat import bchr + +from Cryptodome.Util._raw_api import (VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +from Cryptodome.Util.number import long_to_bytes + +from Cryptodome.Hash.keccak import _raw_keccak_lib + + +def _left_encode(x): + """Left encode function as defined in NIST SP 800-185""" + + assert (x < (1 << 2040) and x >= 0) + + # Get number of bytes needed to represent this integer. + num = 1 if x == 0 else (x.bit_length() + 7) // 8 + + return bchr(num) + long_to_bytes(x) + + +def _right_encode(x): + """Right encode function as defined in NIST SP 800-185""" + + assert (x < (1 << 2040) and x >= 0) + + # Get number of bytes needed to represent this integer. + num = 1 if x == 0 else (x.bit_length() + 7) // 8 + + return long_to_bytes(x) + bchr(num) + + +def _encode_str(x): + """Encode string function as defined in NIST SP 800-185""" + + bitlen = len(x) * 8 + if bitlen >= (1 << 2040): + raise ValueError("String too large to encode in cSHAKE") + + return _left_encode(bitlen) + x + + +def _bytepad(x, length): + """Zero pad byte string as defined in NIST SP 800-185""" + + to_pad = _left_encode(length) + x + + # Note: this implementation works with byte aligned strings, + # hence no additional bit padding is needed at this point. + npad = (length - len(to_pad) % length) % length + + return to_pad + b'\x00' * npad + + +class cSHAKE_XOF(object): + """A cSHAKE hash object. + Do not instantiate directly. + Use the :func:`new` function. + """ + + def __init__(self, data, custom, capacity, function): + state = VoidPointer() + + if custom or function: + prefix_unpad = _encode_str(function) + _encode_str(custom) + prefix = _bytepad(prefix_unpad, (1600 - capacity)//8) + self._padding = 0x04 + else: + prefix = None + self._padding = 0x1F # for SHAKE + + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(capacity//8), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating cSHAKE" + % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + self._is_squeezing = False + + if prefix: + self.update(prefix) + + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._is_squeezing: + raise TypeError("You cannot call 'update' after the first 'read'") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating %s state" + % (result, self.name)) + return self + + def read(self, length): + """ + Compute the next piece of XOF output. + + .. note:: + You cannot use :meth:`update` anymore after the first call to + :meth:`read`. 
+ + Args: + length (integer): the amount of bytes this method must return + + :return: the next piece of XOF output (of the given length) + :rtype: byte string + """ + + self._is_squeezing = True + bfr = create_string_buffer(length) + result = _raw_keccak_lib.keccak_squeeze(self._state.get(), + bfr, + c_size_t(length), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while extracting from %s" + % (result, self.name)) + + return get_raw_buffer(bfr) + + +def _new(data, custom, function): + # Use Keccak[256] + return cSHAKE_XOF(data, custom, 256, function) + + +def new(data=None, custom=None): + """Return a fresh instance of a cSHAKE128 object. + + Args: + data (bytes/bytearray/memoryview): + Optional. + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + custom (bytes): + Optional. + A customization bytestring (``S`` in SP 800-185). + + :Return: A :class:`cSHAKE_XOF` object + """ + + # Use Keccak[256] + return cSHAKE_XOF(data, custom, 256, b'') diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE128.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE128.pyi new file mode 100644 index 0000000..1452fea --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE128.pyi @@ -0,0 +1,14 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +class cSHAKE_XOF(object): + def __init__(self, + data: Optional[Buffer] = ..., + function: Optional[bytes] = ..., + custom: Optional[bytes] = ...) -> None: ... + def update(self, data: Buffer) -> cSHAKE_XOF: ... + def read(self, length: int) -> bytes: ... + +def new(data: Optional[Buffer] = ..., + custom: Optional[Buffer] = ...) -> cSHAKE_XOF: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE256.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE256.py new file mode 100644 index 0000000..a5b8701 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE256.py @@ -0,0 +1,56 @@ +# =================================================================== +# +# Copyright (c) 2021, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
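A sketch of domain separation via the customization string (``S`` in SP 800-185); it applies to the cSHAKE128 module above and equally to cSHAKE256 below, assuming the vendored package is importable:

from Cryptodome.Hash import cSHAKE128

a = cSHAKE128.new(data=b"payload", custom=b"context A")
b = cSHAKE128.new(data=b"payload", custom=b"context B")
# Different customization strings produce independent output streams.
assert a.read(32) != b.read(32)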
+# =================================================================== + +from Cryptodome.Util._raw_api import c_size_t +from Cryptodome.Hash.cSHAKE128 import cSHAKE_XOF + + +def _new(data, custom, function): + # Use Keccak[512] + return cSHAKE_XOF(data, custom, 512, function) + + +def new(data=None, custom=None): + """Return a fresh instance of a cSHAKE256 object. + + Args: + data (bytes/bytearray/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`update`. + Optional. + custom (bytes): + Optional. + A customization bytestring (``S`` in SP 800-185). + + :Return: A :class:`cSHAKE_XOF` object + """ + + # Use Keccak[512] + return cSHAKE_XOF(data, custom, 512, b'') diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE256.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE256.pyi new file mode 100644 index 0000000..b910bb6 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/cSHAKE256.pyi @@ -0,0 +1,8 @@ +from typing import Union, Optional + +from Cryptodome.Hash.cSHAKE128 import cSHAKE_XOF + +Buffer = Union[bytes, bytearray, memoryview] + +def new(data: Optional[Buffer] = ..., + custom: Optional[Buffer] = ...) -> cSHAKE_XOF: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/keccak.py b/python/lib/python3.11/site-packages/Cryptodome/Hash/keccak.py new file mode 100644 index 0000000..f2af202 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/keccak.py @@ -0,0 +1,181 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +from Cryptodome.Util.py3compat import bord + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + VoidPointer, SmartPointer, + create_string_buffer, + get_raw_buffer, c_size_t, + c_uint8_ptr, c_ubyte) + +_raw_keccak_lib = load_pycryptodome_raw_lib("Cryptodome.Hash._keccak", + """ + int keccak_init(void **state, + size_t capacity_bytes, + uint8_t rounds); + int keccak_destroy(void *state); + int keccak_absorb(void *state, + const uint8_t *in, + size_t len); + int keccak_squeeze(const void *state, + uint8_t *out, + size_t len, + uint8_t padding); + int keccak_digest(void *state, + uint8_t *digest, + size_t len, + uint8_t padding); + int keccak_copy(const void *src, void *dst); + int keccak_reset(void *state); + """) + +class Keccak_Hash(object): + """A Keccak hash object. + Do not instantiate directly. + Use the :func:`new` function. + + :ivar digest_size: the size in bytes of the resulting hash + :vartype digest_size: integer + """ + + def __init__(self, data, digest_bytes, update_after_digest): + # The size of the resulting hash in bytes. + self.digest_size = digest_bytes + + self._update_after_digest = update_after_digest + self._digest_done = False + self._padding = 0x01 + + state = VoidPointer() + result = _raw_keccak_lib.keccak_init(state.address_of(), + c_size_t(self.digest_size * 2), + c_ubyte(24)) + if result: + raise ValueError("Error %d while instantiating keccak" % result) + self._state = SmartPointer(state.get(), + _raw_keccak_lib.keccak_destroy) + if data: + self.update(data) + + def update(self, data): + """Continue hashing of a message by consuming the next chunk of data. + + Args: + data (byte string/byte array/memoryview): The next chunk of the message being hashed. + """ + + if self._digest_done and not self._update_after_digest: + raise TypeError("You can only call 'digest' or 'hexdigest' on this object") + + result = _raw_keccak_lib.keccak_absorb(self._state.get(), + c_uint8_ptr(data), + c_size_t(len(data))) + if result: + raise ValueError("Error %d while updating keccak" % result) + return self + + def digest(self): + """Return the **binary** (non-printable) digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Binary form. + :rtype: byte string + """ + + self._digest_done = True + bfr = create_string_buffer(self.digest_size) + result = _raw_keccak_lib.keccak_digest(self._state.get(), + bfr, + c_size_t(self.digest_size), + c_ubyte(self._padding)) + if result: + raise ValueError("Error %d while squeezing keccak" % result) + + return get_raw_buffer(bfr) + + def hexdigest(self): + """Return the **printable** digest of the message that has been hashed so far. + + :return: The hash digest, computed over the data processed so far. + Hexadecimal encoded. + :rtype: string + """ + + return "".join(["%02x" % bord(x) for x in self.digest()]) + + def new(self, **kwargs): + """Create a fresh Keccak hash object.""" + + if "digest_bytes" not in kwargs and "digest_bits" not in kwargs: + kwargs["digest_bytes"] = self.digest_size + + return new(**kwargs) + + +def new(**kwargs): + """Create a new hash object. + + Args: + data (bytes/bytearray/memoryview): + The very first chunk of the message to hash. + It is equivalent to an early call to :meth:`Keccak_Hash.update`. + digest_bytes (integer): + The size of the digest, in bytes (28, 32, 48, 64). + digest_bits (integer): + The size of the digest, in bits (224, 256, 384, 512). 
+ update_after_digest (boolean): + Whether :meth:`Keccak.digest` can be followed by another + :meth:`Keccak.update` (default: ``False``). + + :Return: A :class:`Keccak_Hash` hash object + """ + + data = kwargs.pop("data", None) + update_after_digest = kwargs.pop("update_after_digest", False) + + digest_bytes = kwargs.pop("digest_bytes", None) + digest_bits = kwargs.pop("digest_bits", None) + if None not in (digest_bytes, digest_bits): + raise TypeError("Only one digest parameter must be provided") + if (None, None) == (digest_bytes, digest_bits): + raise TypeError("Digest size (bits, bytes) not provided") + if digest_bytes is not None: + if digest_bytes not in (28, 32, 48, 64): + raise ValueError("'digest_bytes' must be: 28, 32, 48 or 64") + else: + if digest_bits not in (224, 256, 384, 512): + raise ValueError("'digest_bits' must be: 224, 256, 384 or 512") + digest_bytes = digest_bits // 8 + + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + return Keccak_Hash(data, digest_bytes, update_after_digest) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Hash/keccak.pyi b/python/lib/python3.11/site-packages/Cryptodome/Hash/keccak.pyi new file mode 100644 index 0000000..844d256 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Hash/keccak.pyi @@ -0,0 +1,23 @@ +from typing import Union, Any + +Buffer = Union[bytes, bytearray, memoryview] + +class Keccak_Hash(object): + digest_size: int + def __init__(self, + data: Buffer, + digest_bytes: int, + update_after_digest: bool) -> None: ... + def update(self, data: Buffer) -> Keccak_Hash: ... + def digest(self) -> bytes: ... + def hexdigest(self) -> str: ... + def new(self, + data: Buffer = ..., + digest_bytes: int = ..., + digest_bits: int = ..., + update_after_digest: bool = ...) -> Keccak_Hash: ... + +def new(data: Buffer = ..., + digest_bytes: int = ..., + digest_bits: int = ..., + update_after_digest: bool = ...) -> Keccak_Hash: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/IO/PEM.py b/python/lib/python3.11/site-packages/Cryptodome/IO/PEM.py new file mode 100644 index 0000000..7655368 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/IO/PEM.py @@ -0,0 +1,189 @@ +# +# Util/PEM.py : Privacy Enhanced Mail utilities +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
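The keccak module above absorbs with the pre-standard 0x01 padding (see self._padding), not the 0x06 domain byte that FIPS 202 adds, so its output differs from SHA-3 at the same digest size; a sketch, assuming the package's SHA3_256 module (not part of this hunk) is also present:

from Cryptodome.Hash import keccak, SHA3_256

k = keccak.new(data=b"abc", digest_bits=256)   # original Keccak-256
s = SHA3_256.new(b"abc")                       # FIPS 202 SHA3-256
assert k.digest() != s.digest()                # padding differs, so digests differ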
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = ['encode', 'decode'] + +import re +from binascii import a2b_base64, b2a_base64, hexlify, unhexlify + +from Cryptodome.Hash import MD5 +from Cryptodome.Util.Padding import pad, unpad +from Cryptodome.Cipher import DES, DES3, AES +from Cryptodome.Protocol.KDF import PBKDF1 +from Cryptodome.Random import get_random_bytes +from Cryptodome.Util.py3compat import tobytes, tostr + + +def encode(data, marker, passphrase=None, randfunc=None): + """Encode a piece of binary data into PEM format. + + Args: + data (byte string): + The piece of binary data to encode. + marker (string): + The marker for the PEM block (e.g. "PUBLIC KEY"). + Note that there is no official master list for all allowed markers. + Still, you can refer to the OpenSSL_ source code. + passphrase (byte string): + If given, the PEM block will be encrypted. The key is derived from + the passphrase. + randfunc (callable): + Random number generation function; it accepts an integer N and returns + a byte string of random data, N bytes long. If not given, a new one is + instantiated. + + Returns: + The PEM block, as a string. + + .. _OpenSSL: https://github.com/openssl/openssl/blob/master/include/openssl/pem.h + """ + + if randfunc is None: + randfunc = get_random_bytes + + out = "-----BEGIN %s-----\n" % marker + if passphrase: + # We only support 3DES for encryption + salt = randfunc(8) + key = PBKDF1(passphrase, salt, 16, 1, MD5) + key += PBKDF1(key + passphrase, salt, 8, 1, MD5) + objenc = DES3.new(key, DES3.MODE_CBC, salt) + out += "Proc-Type: 4,ENCRYPTED\nDEK-Info: DES-EDE3-CBC,%s\n\n" %\ + tostr(hexlify(salt).upper()) + # Encrypt with PKCS#7 padding + data = objenc.encrypt(pad(data, objenc.block_size)) + elif passphrase is not None: + raise ValueError("Empty password") + + # Each BASE64 line can take up to 64 characters (=48 bytes of data) + # b2a_base64 adds a new line character! + chunks = [tostr(b2a_base64(data[i:i + 48])) + for i in range(0, len(data), 48)] + out += "".join(chunks) + out += "-----END %s-----" % marker + return out + + +def _EVP_BytesToKey(data, salt, key_len): + d = [ b'' ] + m = (key_len + 15 ) // 16 + for _ in range(m): + nd = MD5.new(d[-1] + data + salt).digest() + d.append(nd) + return b"".join(d)[:key_len] + + +def decode(pem_data, passphrase=None): + """Decode a PEM block into binary. + + Args: + pem_data (string): + The PEM block. + passphrase (byte string): + If given and the PEM block is encrypted, + the key will be derived from the passphrase. + + Returns: + A tuple with the binary data, the marker string, and a boolean to + indicate if decryption was performed. + + Raises: + ValueError: if decoding fails, if the PEM file is encrypted and no passphrase has + been provided or if the passphrase is incorrect. 
+ """ + + # Verify Pre-Encapsulation Boundary + r = re.compile(r"\s*-----BEGIN (.*)-----\s+") + m = r.match(pem_data) + if not m: + raise ValueError("Not a valid PEM pre boundary") + marker = m.group(1) + + # Verify Post-Encapsulation Boundary + r = re.compile(r"-----END (.*)-----\s*$") + m = r.search(pem_data) + if not m or m.group(1) != marker: + raise ValueError("Not a valid PEM post boundary") + + # Removes spaces and slit on lines + lines = pem_data.replace(" ", '').split() + + # Decrypts, if necessary + if lines[1].startswith('Proc-Type:4,ENCRYPTED'): + if not passphrase: + raise ValueError("PEM is encrypted, but no passphrase available") + DEK = lines[2].split(':') + if len(DEK) != 2 or DEK[0] != 'DEK-Info': + raise ValueError("PEM encryption format not supported.") + algo, salt = DEK[1].split(',') + salt = unhexlify(tobytes(salt)) + + padding = True + + if algo == "DES-CBC": + key = _EVP_BytesToKey(passphrase, salt, 8) + objdec = DES.new(key, DES.MODE_CBC, salt) + elif algo == "DES-EDE3-CBC": + key = _EVP_BytesToKey(passphrase, salt, 24) + objdec = DES3.new(key, DES3.MODE_CBC, salt) + elif algo == "AES-128-CBC": + key = _EVP_BytesToKey(passphrase, salt[:8], 16) + objdec = AES.new(key, AES.MODE_CBC, salt) + elif algo == "AES-192-CBC": + key = _EVP_BytesToKey(passphrase, salt[:8], 24) + objdec = AES.new(key, AES.MODE_CBC, salt) + elif algo == "AES-256-CBC": + key = _EVP_BytesToKey(passphrase, salt[:8], 32) + objdec = AES.new(key, AES.MODE_CBC, salt) + elif algo.lower() == "id-aes256-gcm": + key = _EVP_BytesToKey(passphrase, salt[:8], 32) + objdec = AES.new(key, AES.MODE_GCM, nonce=salt) + padding = False + else: + raise ValueError("Unsupport PEM encryption algorithm (%s)." % algo) + lines = lines[2:] + else: + objdec = None + + # Decode body + data = a2b_base64(''.join(lines[1:-1])) + enc_flag = False + if objdec: + if padding: + data = unpad(objdec.decrypt(data), objdec.block_size) + else: + # There is no tag, so we don't use decrypt_and_verify + data = objdec.decrypt(data) + enc_flag = True + + return (data, marker, enc_flag) diff --git a/python/lib/python3.11/site-packages/Cryptodome/IO/PEM.pyi b/python/lib/python3.11/site-packages/Cryptodome/IO/PEM.pyi new file mode 100644 index 0000000..2e324c4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/IO/PEM.pyi @@ -0,0 +1,10 @@ +from typing import Tuple, Optional, Callable + +def encode(data: bytes, + marke: str, + passphrase: Optional[bytes] = ..., + randfunc: Optional[Callable[[int],bytes]] = ...) -> str: ... + + +def decode(pem_data: str, + passphrase: Optional[bytes] = ...) -> Tuple[bytes, str, bool]: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/IO/PKCS8.py b/python/lib/python3.11/site-packages/Cryptodome/IO/PKCS8.py new file mode 100644 index 0000000..d02aed9 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/IO/PKCS8.py @@ -0,0 +1,239 @@ +# +# PublicKey/PKCS8.py : PKCS#8 functions +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + + +from Cryptodome.Util.py3compat import * + +from Cryptodome.Util.asn1 import ( + DerNull, + DerSequence, + DerObjectId, + DerOctetString, + ) + +from Cryptodome.IO._PBES import PBES1, PBES2, PbesError + + +__all__ = ['wrap', 'unwrap'] + + +def wrap(private_key, key_oid, passphrase=None, protection=None, + prot_params=None, key_params=DerNull(), randfunc=None): + """Wrap a private key into a PKCS#8 blob (clear or encrypted). + + Args: + + private_key (byte string): + The private key encoded in binary form. The actual encoding is + algorithm specific. In most cases, it is DER. + + key_oid (string): + The object identifier (OID) of the private key to wrap. + It is a dotted string, like ``1.2.840.113549.1.1.1`` (for RSA keys). + + passphrase (bytes string or string): + The secret passphrase from which the wrapping key is derived. + Set it only if encryption is required. + + protection (string): + The identifier of the algorithm to use for securely wrapping the key. + The default value is ``PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC``. + + prot_params (dictionary): + Parameters for the protection algorithm. + + +------------------+-----------------------------------------------+ + | Key | Description | + +==================+===============================================+ + | iteration_count | The KDF algorithm is repeated several times to| + | | slow down brute force attacks on passwords | + | | (called *N* or CPU/memory cost in scrypt). | + | | The default value for PBKDF2 is 1000. | + | | The default value for scrypt is 16384. | + +------------------+-----------------------------------------------+ + | salt_size | Salt is used to thwart dictionary and rainbow | + | | attacks on passwords. The default value is 8 | + | | bytes. | + +------------------+-----------------------------------------------+ + | block_size | *(scrypt only)* Memory-cost (r). The default | + | | value is 8. | + +------------------+-----------------------------------------------+ + | parallelization | *(scrypt only)* CPU-cost (p). The default | + | | value is 1. | + +------------------+-----------------------------------------------+ + + key_params (DER object or None): + The ``parameters`` field to use in the ``AlgorithmIdentifier`` + SEQUENCE. If ``None``, no ``parameters`` field will be added. + By default, the ASN.1 type ``NULL`` is used. 
+ + randfunc (callable): + Random number generation function; it should accept a single integer + N and return a string of random data, N bytes long. + If not specified, a new RNG will be instantiated + from :mod:`Cryptodome.Random`. + + Return: + The PKCS#8-wrapped private key (possibly encrypted), as a byte string. + """ + + # + # PrivateKeyInfo ::= SEQUENCE { + # version Version, + # privateKeyAlgorithm PrivateKeyAlgorithmIdentifier, + # privateKey PrivateKey, + # attributes [0] IMPLICIT Attributes OPTIONAL + # } + # + if key_params is None: + algorithm = DerSequence([DerObjectId(key_oid)]) + else: + algorithm = DerSequence([DerObjectId(key_oid), key_params]) + + pk_info = DerSequence([ + 0, + algorithm, + DerOctetString(private_key) + ]) + pk_info_der = pk_info.encode() + + if passphrase is None: + return pk_info_der + + if not passphrase: + raise ValueError("Empty passphrase") + + # Encryption with PBES2 + passphrase = tobytes(passphrase) + if protection is None: + protection = 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC' + return PBES2.encrypt(pk_info_der, passphrase, + protection, prot_params, randfunc) + + +def unwrap(p8_private_key, passphrase=None): + """Unwrap a private key from a PKCS#8 blob (clear or encrypted). + + Args: + p8_private_key (byte string): + The private key wrapped into a PKCS#8 blob, DER encoded. + passphrase (byte string or string): + The passphrase to use to decrypt the blob (if it is encrypted). + + Return: + A tuple containing + + #. the algorithm identifier of the wrapped key (OID, dotted string) + #. the private key (byte string, DER encoded) + #. the associated parameters (byte string, DER encoded) or ``None`` + + Raises: + ValueError : if decoding fails + """ + + if passphrase: + passphrase = tobytes(passphrase) + + found = False + try: + p8_private_key = PBES1.decrypt(p8_private_key, passphrase) + found = True + except PbesError as e: + error_str = "PBES1[%s]" % str(e) + except ValueError: + error_str = "PBES1[Invalid]" + + if not found: + try: + p8_private_key = PBES2.decrypt(p8_private_key, passphrase) + found = True + except PbesError as e: + error_str += ",PBES2[%s]" % str(e) + except ValueError: + error_str += ",PBES2[Invalid]" + + if not found: + raise ValueError("Error decoding PKCS#8 (%s)" % error_str) + + pk_info = DerSequence().decode(p8_private_key, nr_elements=(2, 3, 4, 5)) + if len(pk_info) == 2 and not passphrase: + raise ValueError("Not a valid clear PKCS#8 structure " + "(maybe it is encrypted?)") + + # RFC5208, PKCS#8, version is v1(0) + # + # PrivateKeyInfo ::= SEQUENCE { + # version Version, + # privateKeyAlgorithm PrivateKeyAlgorithmIdentifier, + # privateKey PrivateKey, + # attributes [0] IMPLICIT Attributes OPTIONAL + # } + # + # RFC5915, Asymmetric Key Package, version is v2(1) + # + # OneAsymmetricKey ::= SEQUENCE { + # version Version, + # privateKeyAlgorithm PrivateKeyAlgorithmIdentifier, + # privateKey PrivateKey, + # attributes [0] Attributes OPTIONAL, + # ..., + # [[2: publicKey [1] PublicKey OPTIONAL ]], + # ... 
+ # } + + if pk_info[0] == 0: + if len(pk_info) not in (3, 4): + raise ValueError("Not a valid PrivateKeyInfo SEQUENCE") + elif pk_info[0] == 1: + if len(pk_info) not in (3, 4, 5): + raise ValueError("Not a valid PrivateKeyInfo SEQUENCE") + else: + raise ValueError("Not a valid PrivateKeyInfo SEQUENCE") + + algo = DerSequence().decode(pk_info[1], nr_elements=(1, 2)) + algo_oid = DerObjectId().decode(algo[0]).value + if len(algo) == 1: + algo_params = None + else: + try: + DerNull().decode(algo[1]) + algo_params = None + except (ValueError, TypeError): + # Not a NULL parameter: keep the raw DER encoding + algo_params = algo[1] + + # PrivateKey ::= OCTET STRING + private_key = DerOctetString().decode(pk_info[2]).payload + + # We ignore attributes and (for v2 only) publicKey + + return (algo_oid, private_key, algo_params) diff --git a/python/lib/python3.11/site-packages/Cryptodome/IO/PKCS8.pyi b/python/lib/python3.11/site-packages/Cryptodome/IO/PKCS8.pyi new file mode 100644 index 0000000..be716af --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/IO/PKCS8.pyi @@ -0,0 +1,14 @@ +from typing import Dict, Tuple, Optional, Union, Callable + +from Cryptodome.Util.asn1 import DerObject + +def wrap(private_key: bytes, + key_oid: str, + passphrase: Union[bytes, str] = ..., + protection: str = ..., + prot_params: Dict = ..., + key_params: Optional[DerObject] = ..., + randfunc: Optional[Callable[[int],bytes]] = ...) -> bytes: ... + + +def unwrap(p8_private_key: bytes, passphrase: Optional[Union[bytes, str]] = ...) -> Tuple[str, bytes, Optional[bytes]]: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/IO/_PBES.py b/python/lib/python3.11/site-packages/Cryptodome/IO/_PBES.py new file mode 100644 index 0000000..9ee5385 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/IO/_PBES.py @@ -0,0 +1,435 @@ +# +# PublicKey/_PBES.py : Password-Based Encryption functions +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE.
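A quick round-trip of the wrap()/unwrap() pair above; a minimal sketch (the 2048-bit RSA key, the RSA OID and the scrypt-based protection are illustrative choices, not module defaults):

    from Cryptodome.IO import PKCS8
    from Cryptodome.PublicKey import RSA

    key = RSA.generate(2048)
    # export_key(format='DER') returns the PKCS#1 encoding, which is the
    # algorithm-specific blob that wrap() expects for RSA keys.
    blob = PKCS8.wrap(key.export_key(format='DER'),
                      '1.2.840.113549.1.1.1',
                      passphrase='secret',
                      protection='scryptAndAES256-CBC')
    oid, private_key_der, params = PKCS8.unwrap(blob, passphrase='secret')
    assert oid == '1.2.840.113549.1.1.1' and params is None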
+# =================================================================== + +from Cryptodome import Random +from Cryptodome.Util.asn1 import ( + DerSequence, DerOctetString, + DerObjectId, DerInteger, + ) + +from Cryptodome.Util.Padding import pad, unpad +from Cryptodome.Hash import MD5, SHA1, SHA224, SHA256, SHA384, SHA512 +from Cryptodome.Cipher import DES, ARC2, DES3, AES +from Cryptodome.Protocol.KDF import PBKDF1, PBKDF2, scrypt + +_OID_PBE_WITH_MD5_AND_DES_CBC = "1.2.840.113549.1.5.3" +_OID_PBE_WITH_MD5_AND_RC2_CBC = "1.2.840.113549.1.5.6" +_OID_PBE_WITH_SHA1_AND_DES_CBC = "1.2.840.113549.1.5.10" +_OID_PBE_WITH_SHA1_AND_RC2_CBC = "1.2.840.113549.1.5.11" + +_OID_PBES2 = "1.2.840.113549.1.5.13" + +_OID_PBKDF2 = "1.2.840.113549.1.5.12" +_OID_SCRYPT = "1.3.6.1.4.1.11591.4.11" + +_OID_HMAC_SHA1 = "1.2.840.113549.2.7" +_OID_HMAC_SHA224 = "1.2.840.113549.2.8" +_OID_HMAC_SHA256 = "1.2.840.113549.2.9" +_OID_HMAC_SHA384 = "1.2.840.113549.2.10" +_OID_HMAC_SHA512 = "1.2.840.113549.2.11" + +_OID_DES_EDE3_CBC = "1.2.840.113549.3.7" +_OID_AES128_CBC = "2.16.840.1.101.3.4.1.2" +_OID_AES192_CBC = "2.16.840.1.101.3.4.1.22" +_OID_AES256_CBC = "2.16.840.1.101.3.4.1.42" + + +class PbesError(ValueError): + pass + +# These are the ASN.1 definitions used by the PBES1/2 logic: +# +# EncryptedPrivateKeyInfo ::= SEQUENCE { +# encryptionAlgorithm EncryptionAlgorithmIdentifier, +# encryptedData EncryptedData +# } +# +# EncryptionAlgorithmIdentifier ::= AlgorithmIdentifier +# +# EncryptedData ::= OCTET STRING +# +# AlgorithmIdentifier ::= SEQUENCE { +# algorithm OBJECT IDENTIFIER, +# parameters ANY DEFINED BY algorithm OPTIONAL +# } +# +# PBEParameter ::= SEQUENCE { +# salt OCTET STRING (SIZE(8)), +# iterationCount INTEGER +# } +# +# PBES2-params ::= SEQUENCE { +# keyDerivationFunc AlgorithmIdentifier {{PBES2-KDFs}}, +# encryptionScheme AlgorithmIdentifier {{PBES2-Encs}} +# } +# +# PBKDF2-params ::= SEQUENCE { +# salt CHOICE { +# specified OCTET STRING, +# otherSource AlgorithmIdentifier {{PBKDF2-SaltSources}} +# }, +# iterationCount INTEGER (1..MAX), +# keyLength INTEGER (1..MAX) OPTIONAL, +# prf AlgorithmIdentifier {{PBKDF2-PRFs}} DEFAULT algid-hmacWithSHA1 +# } +# +# scrypt-params ::= SEQUENCE { +# salt OCTET STRING, +# costParameter INTEGER (1..MAX), +# blockSize INTEGER (1..MAX), +# parallelizationParameter INTEGER (1..MAX), +# keyLength INTEGER (1..MAX) OPTIONAL +# } + +class PBES1(object): + """Deprecated encryption scheme with password-based key derivation + (originally defined in PKCS#5 v1.5, but still present in `v2.0`__). + + .. __: http://www.ietf.org/rfc/rfc2898.txt + """ + + @staticmethod + def decrypt(data, passphrase): + """Decrypt a piece of data using a passphrase and *PBES1*. + + The algorithm to use is automatically detected. + + :Parameters: + data : byte string + The piece of data to decrypt. + passphrase : byte string + The passphrase to use for decrypting the data. + :Returns: + The decrypted data, as a binary string. 
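+ :Raises PbesError: + If the blob does not use one of the four PBES1 schemes whose OIDs are listed at the top of this module.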
+ """ + + enc_private_key_info = DerSequence().decode(data) + encrypted_algorithm = DerSequence().decode(enc_private_key_info[0]) + encrypted_data = DerOctetString().decode(enc_private_key_info[1]).payload + + pbe_oid = DerObjectId().decode(encrypted_algorithm[0]).value + cipher_params = {} + if pbe_oid == _OID_PBE_WITH_MD5_AND_DES_CBC: + # PBE_MD5_DES_CBC + hashmod = MD5 + ciphermod = DES + elif pbe_oid == _OID_PBE_WITH_MD5_AND_RC2_CBC: + # PBE_MD5_RC2_CBC + hashmod = MD5 + ciphermod = ARC2 + cipher_params['effective_keylen'] = 64 + elif pbe_oid == _OID_PBE_WITH_SHA1_AND_DES_CBC: + # PBE_SHA1_DES_CBC + hashmod = SHA1 + ciphermod = DES + elif pbe_oid == _OID_PBE_WITH_SHA1_AND_RC2_CBC: + # PBE_SHA1_RC2_CBC + hashmod = SHA1 + ciphermod = ARC2 + cipher_params['effective_keylen'] = 64 + else: + raise PbesError("Unknown OID for PBES1") + + pbe_params = DerSequence().decode(encrypted_algorithm[1], nr_elements=2) + salt = DerOctetString().decode(pbe_params[0]).payload + iterations = pbe_params[1] + + key_iv = PBKDF1(passphrase, salt, 16, iterations, hashmod) + key, iv = key_iv[:8], key_iv[8:] + + cipher = ciphermod.new(key, ciphermod.MODE_CBC, iv, **cipher_params) + pt = cipher.decrypt(encrypted_data) + return unpad(pt, cipher.block_size) + + +class PBES2(object): + """Encryption scheme with password-based key derivation + (defined in `PKCS#5 v2.0`__). + + .. __: http://www.ietf.org/rfc/rfc2898.txt.""" + + @staticmethod + def encrypt(data, passphrase, protection, prot_params=None, randfunc=None): + """Encrypt a piece of data using a passphrase and *PBES2*. + + :Parameters: + data : byte string + The piece of data to encrypt. + passphrase : byte string + The passphrase to use for encrypting the data. + protection : string + The identifier of the encryption algorithm to use. + The default value is '``PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC``'. + prot_params : dictionary + Parameters of the protection algorithm. + + +------------------+-----------------------------------------------+ + | Key | Description | + +==================+===============================================+ + | iteration_count | The KDF algorithm is repeated several times to| + | | slow down brute force attacks on passwords | + | | (called *N* or CPU/memory cost in scrypt). | + | | | + | | The default value for PBKDF2 is 1 000. | + | | The default value for scrypt is 16 384. | + +------------------+-----------------------------------------------+ + | salt_size | Salt is used to thwart dictionary and rainbow | + | | attacks on passwords. The default value is 8 | + | | bytes. | + +------------------+-----------------------------------------------+ + | block_size | *(scrypt only)* Memory-cost (r). The default | + | | value is 8. | + +------------------+-----------------------------------------------+ + | parallelization | *(scrypt only)* CPU-cost (p). The default | + | | value is 1. | + +------------------+-----------------------------------------------+ + + + randfunc : callable + Random number generation function; it should accept + a single integer N and return a string of random data, + N bytes long. If not specified, a new RNG will be + instantiated from ``Cryptodome.Random``. + + :Returns: + The encrypted data, as a binary string. 
+ """ + + if prot_params is None: + prot_params = {} + + if randfunc is None: + randfunc = Random.new().read + + if protection == 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC': + key_size = 24 + module = DES3 + cipher_mode = DES3.MODE_CBC + enc_oid = _OID_DES_EDE3_CBC + elif protection in ('PBKDF2WithHMAC-SHA1AndAES128-CBC', + 'scryptAndAES128-CBC'): + key_size = 16 + module = AES + cipher_mode = AES.MODE_CBC + enc_oid = _OID_AES128_CBC + elif protection in ('PBKDF2WithHMAC-SHA1AndAES192-CBC', + 'scryptAndAES192-CBC'): + key_size = 24 + module = AES + cipher_mode = AES.MODE_CBC + enc_oid = _OID_AES192_CBC + elif protection in ('PBKDF2WithHMAC-SHA1AndAES256-CBC', + 'scryptAndAES256-CBC'): + key_size = 32 + module = AES + cipher_mode = AES.MODE_CBC + enc_oid = _OID_AES256_CBC + else: + raise ValueError("Unknown PBES2 mode") + + # Get random data + iv = randfunc(module.block_size) + salt = randfunc(prot_params.get("salt_size", 8)) + + # Derive key from password + if protection.startswith('PBKDF2'): + count = prot_params.get("iteration_count", 1000) + key = PBKDF2(passphrase, salt, key_size, count) + kdf_info = DerSequence([ + DerObjectId(_OID_PBKDF2), # PBKDF2 + DerSequence([ + DerOctetString(salt), + DerInteger(count) + ]) + ]) + else: + # It must be scrypt + count = prot_params.get("iteration_count", 16384) + scrypt_r = prot_params.get('block_size', 8) + scrypt_p = prot_params.get('parallelization', 1) + key = scrypt(passphrase, salt, key_size, + count, scrypt_r, scrypt_p) + kdf_info = DerSequence([ + DerObjectId(_OID_SCRYPT), # scrypt + DerSequence([ + DerOctetString(salt), + DerInteger(count), + DerInteger(scrypt_r), + DerInteger(scrypt_p) + ]) + ]) + + # Create cipher and use it + cipher = module.new(key, cipher_mode, iv) + encrypted_data = cipher.encrypt(pad(data, cipher.block_size)) + enc_info = DerSequence([ + DerObjectId(enc_oid), + DerOctetString(iv) + ]) + + # Result + enc_private_key_info = DerSequence([ + # encryptionAlgorithm + DerSequence([ + DerObjectId(_OID_PBES2), + DerSequence([ + kdf_info, + enc_info + ]), + ]), + DerOctetString(encrypted_data) + ]) + return enc_private_key_info.encode() + + @staticmethod + def decrypt(data, passphrase): + """Decrypt a piece of data using a passphrase and *PBES2*. + + The algorithm to use is automatically detected. + + :Parameters: + data : byte string + The piece of data to decrypt. + passphrase : byte string + The passphrase to use for decrypting the data. + :Returns: + The decrypted data, as a binary string. 
+ """ + + enc_private_key_info = DerSequence().decode(data, nr_elements=2) + enc_algo = DerSequence().decode(enc_private_key_info[0]) + encrypted_data = DerOctetString().decode(enc_private_key_info[1]).payload + + pbe_oid = DerObjectId().decode(enc_algo[0]).value + if pbe_oid != _OID_PBES2: + raise PbesError("Not a PBES2 object") + + pbes2_params = DerSequence().decode(enc_algo[1], nr_elements=2) + + ### Key Derivation Function selection + kdf_info = DerSequence().decode(pbes2_params[0], nr_elements=2) + kdf_oid = DerObjectId().decode(kdf_info[0]).value + + kdf_key_length = None + + # We only support PBKDF2 or scrypt + if kdf_oid == _OID_PBKDF2: + + pbkdf2_params = DerSequence().decode(kdf_info[1], nr_elements=(2, 3, 4)) + salt = DerOctetString().decode(pbkdf2_params[0]).payload + iteration_count = pbkdf2_params[1] + + left = len(pbkdf2_params) - 2 + idx = 2 + + if left > 0: + try: + kdf_key_length = pbkdf2_params[idx] - 0 + left -= 1 + idx += 1 + except TypeError: + pass + + # Default is HMAC-SHA1 + pbkdf2_prf_oid = "1.2.840.113549.2.7" + if left > 0: + pbkdf2_prf_algo_id = DerSequence().decode(pbkdf2_params[idx]) + pbkdf2_prf_oid = DerObjectId().decode(pbkdf2_prf_algo_id[0]).value + + elif kdf_oid == _OID_SCRYPT: + + scrypt_params = DerSequence().decode(kdf_info[1], nr_elements=(4, 5)) + salt = DerOctetString().decode(scrypt_params[0]).payload + iteration_count, scrypt_r, scrypt_p = [scrypt_params[x] + for x in (1, 2, 3)] + if len(scrypt_params) > 4: + kdf_key_length = scrypt_params[4] + else: + kdf_key_length = None + else: + raise PbesError("Unsupported PBES2 KDF") + + ### Cipher selection + enc_info = DerSequence().decode(pbes2_params[1]) + enc_oid = DerObjectId().decode(enc_info[0]).value + + if enc_oid == _OID_DES_EDE3_CBC: + # DES_EDE3_CBC + ciphermod = DES3 + key_size = 24 + elif enc_oid == _OID_AES128_CBC: + # AES128_CBC + ciphermod = AES + key_size = 16 + elif enc_oid == _OID_AES192_CBC: + # AES192_CBC + ciphermod = AES + key_size = 24 + elif enc_oid == _OID_AES256_CBC: + # AES256_CBC + ciphermod = AES + key_size = 32 + else: + raise PbesError("Unsupported PBES2 cipher") + + if kdf_key_length and kdf_key_length != key_size: + raise PbesError("Mismatch between PBES2 KDF parameters" + " and selected cipher") + + IV = DerOctetString().decode(enc_info[1]).payload + + # Create cipher + if kdf_oid == _OID_PBKDF2: + if pbkdf2_prf_oid == _OID_HMAC_SHA1: + hmac_hash_module = SHA1 + elif pbkdf2_prf_oid == _OID_HMAC_SHA224: + hmac_hash_module = SHA224 + elif pbkdf2_prf_oid == _OID_HMAC_SHA256: + hmac_hash_module = SHA256 + elif pbkdf2_prf_oid == _OID_HMAC_SHA384: + hmac_hash_module = SHA384 + elif pbkdf2_prf_oid == _OID_HMAC_SHA512: + hmac_hash_module = SHA512 + else: + raise PbesError("Unsupported HMAC %s" % pbkdf2_prf_oid) + + key = PBKDF2(passphrase, salt, key_size, iteration_count, + hmac_hash_module=hmac_hash_module) + else: + key = scrypt(passphrase, salt, key_size, iteration_count, + scrypt_r, scrypt_p) + cipher = ciphermod.new(key, ciphermod.MODE_CBC, IV) + + # Decrypt data + pt = cipher.decrypt(encrypted_data) + return unpad(pt, cipher.block_size) diff --git a/python/lib/python3.11/site-packages/Cryptodome/IO/_PBES.pyi b/python/lib/python3.11/site-packages/Cryptodome/IO/_PBES.pyi new file mode 100644 index 0000000..a8a34ce --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/IO/_PBES.pyi @@ -0,0 +1,19 @@ +from typing import Dict, Optional, Callable + +class PbesError(ValueError): + ... 
+ +class PBES1(object): + @staticmethod + def decrypt(data: bytes, passphrase: bytes) -> bytes: ... + +class PBES2(object): + @staticmethod + def encrypt(data: bytes, + passphrase: bytes, + protection: str, + prot_params: Optional[Dict] = ..., + randfunc: Optional[Callable[[int],bytes]] = ...) -> bytes: ... + + @staticmethod + def decrypt(data:bytes, passphrase: bytes) -> bytes: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/IO/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/IO/__init__.py new file mode 100644 index 0000000..85a0d0b --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/IO/__init__.py @@ -0,0 +1,31 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = ['PEM', 'PKCS8'] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/Numbers.py b/python/lib/python3.11/site-packages/Cryptodome/Math/Numbers.py new file mode 100644 index 0000000..c9ff848 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/Numbers.py @@ -0,0 +1,42 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = ["Integer"] + +try: + from Cryptodome.Math._IntegerGMP import IntegerGMP as Integer + from Cryptodome.Math._IntegerGMP import implementation as _implementation +except (ImportError, OSError, AttributeError): + try: + from Cryptodome.Math._IntegerCustom import IntegerCustom as Integer + from Cryptodome.Math._IntegerCustom import implementation as _implementation + except (ImportError, OSError): + from Cryptodome.Math._IntegerNative import IntegerNative as Integer + _implementation = {} diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/Numbers.pyi b/python/lib/python3.11/site-packages/Cryptodome/Math/Numbers.pyi new file mode 100644 index 0000000..b0206ca --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/Numbers.pyi @@ -0,0 +1,2 @@ +from Cryptodome.Math._IntegerBase import IntegerBase as Integer +__all__ = ['Integer'] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/Primality.py b/python/lib/python3.11/site-packages/Cryptodome/Math/Primality.py new file mode 100644 index 0000000..33814fa --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/Primality.py @@ -0,0 +1,369 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Functions to create and test prime numbers. 
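The try/except chain in Numbers.py above selects the fastest available Integer backend at import time (GMP, then the custom C extension, then pure Python). A small introspection sketch (``_implementation`` is a private name, used here only for illustration):

    from Cryptodome.Math import Numbers
    from Cryptodome.Math.Numbers import Integer

    # {'library': 'gmp', ...}, {'library': 'custom', ...}, or {} when
    # the pure-Python IntegerNative fallback was selected.
    print(Numbers._implementation)
    print(Integer(10).size_in_bits())   # -> 4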
+ +:undocumented: __package__ +""" + +from Cryptodome import Random +from Cryptodome.Math.Numbers import Integer + +from Cryptodome.Util.py3compat import iter_range + +COMPOSITE = 0 +PROBABLY_PRIME = 1 + + +def miller_rabin_test(candidate, iterations, randfunc=None): + """Perform a Miller-Rabin primality test on an integer. + + The test is specified in Section C.3.1 of `FIPS PUB 186-4`__. + + :Parameters: + candidate : integer + The number to test for primality. + iterations : integer + The maximum number of iterations to perform before + declaring a candidate a probable prime. + randfunc : callable + An RNG function where bases are taken from. + + :Returns: + ``Primality.COMPOSITE`` or ``Primality.PROBABLY_PRIME``. + + .. __: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf + """ + + if not isinstance(candidate, Integer): + candidate = Integer(candidate) + + if candidate in (1, 2, 3, 5): + return PROBABLY_PRIME + + if candidate.is_even(): + return COMPOSITE + + one = Integer(1) + minus_one = Integer(candidate - 1) + + if randfunc is None: + randfunc = Random.new().read + + # Step 1 and 2 + m = Integer(minus_one) + a = 0 + while m.is_even(): + m >>= 1 + a += 1 + + # Skip step 3 + + # Step 4 + for i in iter_range(iterations): + + # Step 4.1-2 + base = 1 + while base in (one, minus_one): + base = Integer.random_range(min_inclusive=2, + max_inclusive=candidate - 2, + randfunc=randfunc) + assert(2 <= base <= candidate - 2) + + # Step 4.3-4.4 + z = pow(base, m, candidate) + if z in (one, minus_one): + continue + + # Step 4.5 + for j in iter_range(1, a): + z = pow(z, 2, candidate) + if z == minus_one: + break + if z == one: + return COMPOSITE + else: + return COMPOSITE + + # Step 5 + return PROBABLY_PRIME + + +def lucas_test(candidate): + """Perform a Lucas primality test on an integer. + + The test is specified in Section C.3.3 of `FIPS PUB 186-4`__. + + :Parameters: + candidate : integer + The number to test for primality. + + :Returns: + ``Primality.COMPOSITE`` or ``Primality.PROBABLY_PRIME``. + + .. __: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf + """ + + if not isinstance(candidate, Integer): + candidate = Integer(candidate) + + # Step 1 + if candidate in (1, 2, 3, 5): + return PROBABLY_PRIME + if candidate.is_even() or candidate.is_perfect_square(): + return COMPOSITE + + # Step 2 + def alternate(): + value = 5 + while True: + yield value + if value > 0: + value += 2 + else: + value -= 2 + value = -value + + for D in alternate(): + if candidate in (D, -D): + continue + js = Integer.jacobi_symbol(D, candidate) + if js == 0: + return COMPOSITE + if js == -1: + break + # Found D. 
P=1 and Q=(1-D)/4 (note that Q is guaranteed to be an integer) + + # Step 3 + # This is \delta(n) = n - jacobi(D/n) + K = candidate + 1 + # Step 4 + r = K.size_in_bits() - 1 + # Step 5 + # U_1=1 and V_1=P + U_i = Integer(1) + V_i = Integer(1) + U_temp = Integer(0) + V_temp = Integer(0) + # Step 6 + for i in iter_range(r - 1, -1, -1): + # Square + # U_temp = U_i * V_i % candidate + U_temp.set(U_i) + U_temp *= V_i + U_temp %= candidate + # V_temp = (((V_i ** 2 + (U_i ** 2 * D)) * K) >> 1) % candidate + V_temp.set(U_i) + V_temp *= U_i + V_temp *= D + V_temp.multiply_accumulate(V_i, V_i) + if V_temp.is_odd(): + V_temp += candidate + V_temp >>= 1 + V_temp %= candidate + # Multiply + if K.get_bit(i): + # U_i = (((U_temp + V_temp) * K) >> 1) % candidate + U_i.set(U_temp) + U_i += V_temp + if U_i.is_odd(): + U_i += candidate + U_i >>= 1 + U_i %= candidate + # V_i = (((V_temp + U_temp * D) * K) >> 1) % candidate + V_i.set(V_temp) + V_i.multiply_accumulate(U_temp, D) + if V_i.is_odd(): + V_i += candidate + V_i >>= 1 + V_i %= candidate + else: + U_i.set(U_temp) + V_i.set(V_temp) + # Step 7 + if U_i == 0: + return PROBABLY_PRIME + return COMPOSITE + + +from Cryptodome.Util.number import sieve_base as _sieve_base_large +## The optimal number of small primes to use for the sieve +## is probably dependent on the platform and the candidate size +_sieve_base = set(_sieve_base_large[:100]) + + +def test_probable_prime(candidate, randfunc=None): + """Test if a number is prime. + + A number is qualified as prime if it passes a certain + number of Miller-Rabin tests (dependent on the size + of the number, but such that the probability of a false + positive is less than 10^-30) and a single Lucas test. + + For instance, a 1024-bit candidate will need to pass + 4 Miller-Rabin tests. + + :Parameters: + candidate : integer + The number to test for primality. + randfunc : callable + The routine to draw random bytes from to select Miller-Rabin bases. + :Returns: + ``PROBABLY_PRIME`` if the number is prime with very high probability. + ``COMPOSITE`` if the number is a composite. + """ + + if randfunc is None: + randfunc = Random.new().read + + if not isinstance(candidate, Integer): + candidate = Integer(candidate) + + # First, check trial division by the smallest primes + if int(candidate) in _sieve_base: + return PROBABLY_PRIME + try: + # map() is lazy in Python 3, so the trial divisions must be + # driven by an explicit loop + for small_prime in _sieve_base: + candidate.fail_if_divisible_by(small_prime) + except ValueError: + return COMPOSITE + + # These are the number of Miller-Rabin iterations s.t. p(k, t) < 1E-30, + # with p(k, t) being the probability that a randomly chosen k-bit number + # is composite but still survives t MR iterations. + mr_ranges = ((220, 30), (280, 20), (390, 15), (512, 10), + (620, 7), (740, 6), (890, 5), (1200, 4), + (1700, 3), (3700, 2)) + + bit_size = candidate.size_in_bits() + try: + mr_iterations = list(filter(lambda x: bit_size < x[0], + mr_ranges))[0][1] + except IndexError: + mr_iterations = 1 + + if miller_rabin_test(candidate, mr_iterations, + randfunc=randfunc) == COMPOSITE: + return COMPOSITE + if lucas_test(candidate) == COMPOSITE: + return COMPOSITE + return PROBABLY_PRIME + + +def generate_probable_prime(**kwargs): + """Generate a random probable prime. + + The prime will not have any specific properties + (e.g. it will not be a *strong* prime).
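+ + A quick usage sketch (the 512-bit size is illustrative):: + + p = generate_probable_prime(exact_bits=512) + assert test_probable_prime(p) == PROBABLY_PRIME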
+ + Random numbers are evaluated for primality until one + passes all tests, consisting of a certain number of + Miller-Rabin tests with random bases followed by + a single Lucas test. + + The number of Miller-Rabin iterations is chosen such that + the probability that the output number is a non-prime is + less than 1E-30 (roughly 2^{-100}). + + This approach is compliant with `FIPS PUB 186-4`__. + + :Keywords: + exact_bits : integer + The desired size in bits of the probable prime. + It must be at least 160. + randfunc : callable + An RNG function where candidate primes are taken from. + prime_filter : callable + A function that takes an Integer as parameter and returns + True if the number can be passed to further primality tests, + False if it should be immediately discarded. + + :Return: + A probable prime in the range 2^exact_bits > p > 2^(exact_bits-1). + + .. __: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf + """ + + exact_bits = kwargs.pop("exact_bits", None) + randfunc = kwargs.pop("randfunc", None) + prime_filter = kwargs.pop("prime_filter", lambda x: True) + if kwargs: + raise ValueError("Unknown parameters: " + str(kwargs.keys())) + + if exact_bits is None: + raise ValueError("Missing exact_bits parameter") + if exact_bits < 160: + raise ValueError("Prime number is not big enough.") + + if randfunc is None: + randfunc = Random.new().read + + result = COMPOSITE + while result == COMPOSITE: + candidate = Integer.random(exact_bits=exact_bits, + randfunc=randfunc) | 1 + if not prime_filter(candidate): + continue + result = test_probable_prime(candidate, randfunc) + return candidate + + +def generate_probable_safe_prime(**kwargs): + """Generate a random, probable safe prime. + + Note this operation is much slower than generating a simple prime. + + :Keywords: + exact_bits : integer + The desired size in bits of the probable safe prime. + randfunc : callable + An RNG function where candidate primes are taken from. + + :Return: + A probable safe prime in the range + 2^exact_bits > p > 2^(exact_bits-1). + """ + + exact_bits = kwargs.pop("exact_bits", None) + randfunc = kwargs.pop("randfunc", None) + if kwargs: + raise ValueError("Unknown parameters: " + str(kwargs.keys())) + + if randfunc is None: + randfunc = Random.new().read + + result = COMPOSITE + while result == COMPOSITE: + q = generate_probable_prime(exact_bits=exact_bits - 1, randfunc=randfunc) + candidate = q * 2 + 1 + if candidate.size_in_bits() != exact_bits: + continue + result = test_probable_prime(candidate, randfunc=randfunc) + return candidate diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/Primality.pyi b/python/lib/python3.11/site-packages/Cryptodome/Math/Primality.pyi new file mode 100644 index 0000000..7813483 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/Primality.pyi @@ -0,0 +1,18 @@ +from typing import Callable, Optional, Union, Set + +PrimeResult = int + +COMPOSITE: PrimeResult +PROBABLY_PRIME: PrimeResult + +def miller_rabin_test(candidate: int, iterations: int, randfunc: Optional[Callable[[int],bytes]]=None) -> PrimeResult: ... +def lucas_test(candidate: int) -> PrimeResult: ... +_sieve_base: Set[int] +def test_probable_prime(candidate: int, randfunc: Optional[Callable[[int],bytes]]=None) -> PrimeResult: ... +def generate_probable_prime(*, + exact_bits: int = ..., + randfunc: Callable[[int],bytes] = ..., + prime_filter: Callable[[int],bool] = ...) -> int: ... +def generate_probable_safe_prime(*, + exact_bits: int = ..., + randfunc: Callable[[int],bytes] = ...)
-> int: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerBase.py b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerBase.py new file mode 100644 index 0000000..7d78c4b --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerBase.py @@ -0,0 +1,392 @@ +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import abc + +from Cryptodome.Util.py3compat import iter_range, bord, bchr, ABC + +from Cryptodome import Random + + +class IntegerBase(ABC): + + # Conversions + @abc.abstractmethod + def __int__(self): + pass + + @abc.abstractmethod + def __str__(self): + pass + + @abc.abstractmethod + def __repr__(self): + pass + + @abc.abstractmethod + def to_bytes(self, block_size=0, byteorder='big'): + pass + + @staticmethod + @abc.abstractmethod + def from_bytes(byte_string, byteorder='big'): + pass + + # Relations + @abc.abstractmethod + def __eq__(self, term): + pass + + @abc.abstractmethod + def __ne__(self, term): + pass + + @abc.abstractmethod + def __lt__(self, term): + pass + + @abc.abstractmethod + def __le__(self, term): + pass + + @abc.abstractmethod + def __gt__(self, term): + pass + + @abc.abstractmethod + def __ge__(self, term): + pass + + @abc.abstractmethod + def __nonzero__(self): + pass + __bool__ = __nonzero__ + + @abc.abstractmethod + def is_negative(self): + pass + + # Arithmetic operations + @abc.abstractmethod + def __add__(self, term): + pass + + @abc.abstractmethod + def __sub__(self, term): + pass + + @abc.abstractmethod + def __mul__(self, factor): + pass + + @abc.abstractmethod + def __floordiv__(self, divisor): + pass + + @abc.abstractmethod + def __mod__(self, divisor): + pass + + @abc.abstractmethod + def inplace_pow(self, exponent, modulus=None): + pass + + @abc.abstractmethod + def __pow__(self, exponent, modulus=None): + pass + + @abc.abstractmethod + def __abs__(self): + pass + + @abc.abstractmethod + def sqrt(self, modulus=None): + pass + + @abc.abstractmethod + def __iadd__(self, term): + pass + + @abc.abstractmethod + def __isub__(self, term): + pass + 
+ @abc.abstractmethod + def __imul__(self, term): + pass + + @abc.abstractmethod + def __imod__(self, term): + pass + + # Boolean/bit operations + @abc.abstractmethod + def __and__(self, term): + pass + + @abc.abstractmethod + def __or__(self, term): + pass + + @abc.abstractmethod + def __rshift__(self, pos): + pass + + @abc.abstractmethod + def __irshift__(self, pos): + pass + + @abc.abstractmethod + def __lshift__(self, pos): + pass + + @abc.abstractmethod + def __ilshift__(self, pos): + pass + + @abc.abstractmethod + def get_bit(self, n): + pass + + # Extra + @abc.abstractmethod + def is_odd(self): + pass + + @abc.abstractmethod + def is_even(self): + pass + + @abc.abstractmethod + def size_in_bits(self): + pass + + @abc.abstractmethod + def size_in_bytes(self): + pass + + @abc.abstractmethod + def is_perfect_square(self): + pass + + @abc.abstractmethod + def fail_if_divisible_by(self, small_prime): + pass + + @abc.abstractmethod + def multiply_accumulate(self, a, b): + pass + + @abc.abstractmethod + def set(self, source): + pass + + @abc.abstractmethod + def inplace_inverse(self, modulus): + pass + + @abc.abstractmethod + def inverse(self, modulus): + pass + + @abc.abstractmethod + def gcd(self, term): + pass + + @abc.abstractmethod + def lcm(self, term): + pass + + @staticmethod + @abc.abstractmethod + def jacobi_symbol(a, n): + pass + + @staticmethod + def _tonelli_shanks(n, p): + """Tonelli-Shanks algorithm for computing the square root + of n modulo a prime p. + + n must be in the range [0..p-1]. + p must be odd. + + The return value r is the square root of n modulo p. If non-zero, + another solution will also exist (p-r). + + Note we cannot assume that p is really a prime: if it's not, + we either raise an exception or return a value that is indeed + a correct square root (the result is verified before returning). + """ + + # See https://rosettacode.org/wiki/Tonelli-Shanks_algorithm + + if n in (0, 1): + return n + + if p % 4 == 3: + root = pow(n, (p + 1) // 4, p) + if pow(root, 2, p) != n: + raise ValueError("Cannot compute square root") + return root + + s = 1 + q = (p - 1) // 2 + while not (q & 1): + s += 1 + q >>= 1 + + z = n.__class__(2) + while True: + euler = pow(z, (p - 1) // 2, p) + if euler == 1: + z += 1 + continue + if euler == p - 1: + break + # Most probably p is not a prime + raise ValueError("Cannot compute square root") + + m = s + c = pow(z, q, p) + t = pow(n, q, p) + r = pow(n, (q + 1) // 2, p) + + while t != 1: + for i in iter_range(0, m): + if pow(t, 2**i, p) == 1: + break + if i == m: + raise ValueError("Cannot compute square root of %d mod %d" % (n, p)) + b = pow(c, 2**(m - i - 1), p) + m = i + c = b**2 % p + t = (t * b**2) % p + r = (r * b) % p + + if pow(r, 2, p) != n: + raise ValueError("Cannot compute square root") + + return r + + @classmethod + def random(cls, **kwargs): + """Generate a random natural integer of a certain size. + + :Keywords: + exact_bits : positive integer + The length in bits of the resulting random Integer number. + The number is guaranteed to fulfil the relation: + + 2^bits > result >= 2^(bits - 1) + + max_bits : positive integer + The maximum length in bits of the resulting random Integer number. + The number is guaranteed to fulfil the relation: + + 2^bits > result >= 0 + + randfunc : callable + A function that returns a random byte string. The length of the + byte string is passed as parameter. Optional. + If not provided (or ``None``), randomness is read from the system RNG.
+ + :Return: an Integer object + """ + + exact_bits = kwargs.pop("exact_bits", None) + max_bits = kwargs.pop("max_bits", None) + randfunc = kwargs.pop("randfunc", None) + + if randfunc is None: + randfunc = Random.new().read + + if exact_bits is None and max_bits is None: + raise ValueError("Either 'exact_bits' or 'max_bits' must be specified") + + if exact_bits is not None and max_bits is not None: + raise ValueError("'exact_bits' and 'max_bits' are mutually exclusive") + + bits = exact_bits or max_bits + bytes_needed = ((bits - 1) // 8) + 1 + significant_bits_msb = 8 - (bytes_needed * 8 - bits) + msb = bord(randfunc(1)[0]) + if exact_bits is not None: + msb |= 1 << (significant_bits_msb - 1) + msb &= (1 << significant_bits_msb) - 1 + + return cls.from_bytes(bchr(msb) + randfunc(bytes_needed - 1)) + + @classmethod + def random_range(cls, **kwargs): + """Generate a random integer within a given interval. + + :Keywords: + min_inclusive : integer + The lower end of the interval (inclusive). + max_inclusive : integer + The higher end of the interval (inclusive). + max_exclusive : integer + The higher end of the interval (exclusive). + randfunc : callable + A function that returns a random byte string. The length of the + byte string is passed as parameter. Optional. + If not provided (or ``None``), randomness is read from the system RNG. + :Returns: + An Integer randomly taken in the given interval. + """ + + min_inclusive = kwargs.pop("min_inclusive", None) + max_inclusive = kwargs.pop("max_inclusive", None) + max_exclusive = kwargs.pop("max_exclusive", None) + randfunc = kwargs.pop("randfunc", None) + + if kwargs: + raise ValueError("Unknown keywords: " + str(kwargs.keys())) + if None not in (max_inclusive, max_exclusive): + raise ValueError("max_inclusive and max_exclusive cannot be both" + " specified") + if max_exclusive is not None: + max_inclusive = max_exclusive - 1 + if None in (min_inclusive, max_inclusive): + raise ValueError("Missing keyword to identify the interval") + + if randfunc is None: + randfunc = Random.new().read + + norm_maximum = max_inclusive - min_inclusive + bits_needed = cls(norm_maximum).size_in_bits() + + norm_candidate = -1 + while not 0 <= norm_candidate <= norm_maximum: + norm_candidate = cls.random( + max_bits=bits_needed, + randfunc=randfunc + ) + return norm_candidate + min_inclusive + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerBase.pyi b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerBase.pyi new file mode 100644 index 0000000..a42a48b --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerBase.pyi @@ -0,0 +1,63 @@ +from typing import Optional, Union, Callable + +RandFunc = Callable[[int],bytes] + +class IntegerBase: + + def __init__(self, value: Union[IntegerBase, int]): ... + + def __int__(self) -> int: ... + def __str__(self) -> str: ... + def __repr__(self) -> str: ... + def to_bytes(self, block_size: Optional[int]=0, byteorder: str= ...) -> bytes: ... + @staticmethod + def from_bytes(byte_string: bytes, byteorder: Optional[str] = ...) -> IntegerBase: ... + def __eq__(self, term: object) -> bool: ... + def __ne__(self, term: object) -> bool: ... + def __lt__(self, term: Union[IntegerBase, int]) -> bool: ... + def __le__(self, term: Union[IntegerBase, int]) -> bool: ... + def __gt__(self, term: Union[IntegerBase, int]) -> bool: ... + def __ge__(self, term: Union[IntegerBase, int]) -> bool: ... + def __nonzero__(self) -> bool: ... + def is_negative(self) -> bool: ...
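For reference, the two class methods above can be exercised directly; a minimal sketch (bit size and dice-style bounds are illustrative):

    from Cryptodome.Math.Numbers import Integer

    r = Integer.random(exact_bits=128)
    assert r.size_in_bits() == 128        # the top bit is forced to 1

    d = Integer.random_range(min_inclusive=1, max_inclusive=6)
    assert 1 <= d <= 6                    # rejection sampling keeps it in range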
+ def __add__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __sub__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __mul__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __floordiv__(self, divisor: Union[IntegerBase, int]) -> IntegerBase: ... + def __mod__(self, divisor: Union[IntegerBase, int]) -> IntegerBase: ... + def inplace_pow(self, exponent: int, modulus: Optional[Union[IntegerBase, int]]=None) -> IntegerBase: ... + def __pow__(self, exponent: int, modulus: Optional[int]) -> IntegerBase: ... + def __abs__(self) -> IntegerBase: ... + def sqrt(self, modulus: Optional[int]) -> IntegerBase: ... + def __iadd__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __isub__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __imul__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __imod__(self, divisor: Union[IntegerBase, int]) -> IntegerBase: ... + def __and__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __or__(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def __rshift__(self, pos: Union[IntegerBase, int]) -> IntegerBase: ... + def __irshift__(self, pos: Union[IntegerBase, int]) -> IntegerBase: ... + def __lshift__(self, pos: Union[IntegerBase, int]) -> IntegerBase: ... + def __ilshift__(self, pos: Union[IntegerBase, int]) -> IntegerBase: ... + def get_bit(self, n: int) -> bool: ... + def is_odd(self) -> bool: ... + def is_even(self) -> bool: ... + def size_in_bits(self) -> int: ... + def size_in_bytes(self) -> int: ... + def is_perfect_square(self) -> bool: ... + def fail_if_divisible_by(self, small_prime: Union[IntegerBase, int]) -> None: ... + def multiply_accumulate(self, a: Union[IntegerBase, int], b: Union[IntegerBase, int]) -> IntegerBase: ... + def set(self, source: Union[IntegerBase, int]) -> IntegerBase: ... + def inplace_inverse(self, modulus: Union[IntegerBase, int]) -> IntegerBase: ... + def inverse(self, modulus: Union[IntegerBase, int]) -> IntegerBase: ... + def gcd(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + def lcm(self, term: Union[IntegerBase, int]) -> IntegerBase: ... + @staticmethod + def jacobi_symbol(a: Union[IntegerBase, int], n: Union[IntegerBase, int]) -> IntegerBase: ... + @staticmethod + def _tonelli_shanks(n: Union[IntegerBase, int], p: Union[IntegerBase, int]) -> IntegerBase : ... + @classmethod + def random(cls, **kwargs: Union[int,RandFunc]) -> IntegerBase : ... + @classmethod + def random_range(cls, **kwargs: Union[int,RandFunc]) -> IntegerBase : ... + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerCustom.py b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerCustom.py new file mode 100644 index 0000000..0e23152 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerCustom.py @@ -0,0 +1,118 @@ +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. 
+# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from ._IntegerNative import IntegerNative + +from Cryptodome.Util.number import long_to_bytes, bytes_to_long + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + create_string_buffer, + get_raw_buffer, backend, + c_size_t, c_ulonglong) + + +from Cryptodome.Random.random import getrandbits + +c_defs = """ +int monty_pow(const uint8_t *base, + const uint8_t *exp, + const uint8_t *modulus, + uint8_t *out, + size_t len, + uint64_t seed); +""" + + +_raw_montgomery = load_pycryptodome_raw_lib("Cryptodome.Math._modexp", c_defs) +implementation = {"library": "custom", "api": backend} + + +class IntegerCustom(IntegerNative): + + @staticmethod + def from_bytes(byte_string, byteorder='big'): + if byteorder == 'big': + pass + elif byteorder == 'little': + byte_string = bytearray(byte_string) + byte_string.reverse() + else: + raise ValueError("Incorrect byteorder") + return IntegerCustom(bytes_to_long(byte_string)) + + def inplace_pow(self, exponent, modulus=None): + exp_value = int(exponent) + if exp_value < 0: + raise ValueError("Exponent must not be negative") + + # No modular reduction + if modulus is None: + self._value = pow(self._value, exp_value) + return self + + # With modular reduction + mod_value = int(modulus) + if mod_value < 0: + raise ValueError("Modulus must be positive") + if mod_value == 0: + raise ZeroDivisionError("Modulus cannot be zero") + + # C extension only works with odd moduli + if (mod_value & 1) == 0: + self._value = pow(self._value, exp_value, mod_value) + return self + + # C extension only works with bases smaller than modulus + if self._value >= mod_value: + self._value %= mod_value + + max_len = len(long_to_bytes(max(self._value, exp_value, mod_value))) + + base_b = long_to_bytes(self._value, max_len) + exp_b = long_to_bytes(exp_value, max_len) + modulus_b = long_to_bytes(mod_value, max_len) + + out = create_string_buffer(max_len) + + error = _raw_montgomery.monty_pow( + out, + base_b, + exp_b, + modulus_b, + c_size_t(max_len), + c_ulonglong(getrandbits(64)) + ) + + if error: + raise ValueError("monty_pow failed with error: %d" % error) + + result = bytes_to_long(get_raw_buffer(out)) + self._value = result + return self diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerCustom.pyi b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerCustom.pyi new file mode 100644 index 0000000..2dd75c7 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerCustom.pyi @@ -0,0 +1,8 @@ +from typing import Any + +from ._IntegerNative import IntegerNative + +_raw_montgomery = Any + +class IntegerCustom(IntegerNative): + pass diff --git 
a/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerGMP.py b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerGMP.py new file mode 100644 index 0000000..3ab7c59 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerGMP.py @@ -0,0 +1,762 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import sys + +from Cryptodome.Util.py3compat import tobytes, is_native_int + +from Cryptodome.Util._raw_api import (backend, load_lib, + get_raw_buffer, get_c_string, + null_pointer, create_string_buffer, + c_ulong, c_size_t, c_uint8_ptr) + +from ._IntegerBase import IntegerBase + +gmp_defs = """typedef unsigned long UNIX_ULONG; + typedef struct { int a; int b; void *c; } MPZ; + typedef MPZ mpz_t[1]; + typedef UNIX_ULONG mp_bitcnt_t; + + void __gmpz_init (mpz_t x); + void __gmpz_init_set (mpz_t rop, const mpz_t op); + void __gmpz_init_set_ui (mpz_t rop, UNIX_ULONG op); + + UNIX_ULONG __gmpz_get_ui (const mpz_t op); + void __gmpz_set (mpz_t rop, const mpz_t op); + void __gmpz_set_ui (mpz_t rop, UNIX_ULONG op); + void __gmpz_add (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_add_ui (mpz_t rop, const mpz_t op1, UNIX_ULONG op2); + void __gmpz_sub_ui (mpz_t rop, const mpz_t op1, UNIX_ULONG op2); + void __gmpz_addmul (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_addmul_ui (mpz_t rop, const mpz_t op1, UNIX_ULONG op2); + void __gmpz_submul_ui (mpz_t rop, const mpz_t op1, UNIX_ULONG op2); + void __gmpz_import (mpz_t rop, size_t count, int order, size_t size, + int endian, size_t nails, const void *op); + void * __gmpz_export (void *rop, size_t *countp, int order, + size_t size, + int endian, size_t nails, const mpz_t op); + size_t __gmpz_sizeinbase (const mpz_t op, int base); + void __gmpz_sub (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_mul (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_mul_ui (mpz_t rop, const mpz_t op1, UNIX_ULONG op2); + int __gmpz_cmp (const mpz_t op1, const mpz_t op2); + void __gmpz_powm (mpz_t rop, const mpz_t base, const mpz_t exp, 
const + mpz_t mod); + void __gmpz_powm_ui (mpz_t rop, const mpz_t base, UNIX_ULONG exp, + const mpz_t mod); + void __gmpz_pow_ui (mpz_t rop, const mpz_t base, UNIX_ULONG exp); + void __gmpz_sqrt(mpz_t rop, const mpz_t op); + void __gmpz_mod (mpz_t r, const mpz_t n, const mpz_t d); + void __gmpz_neg (mpz_t rop, const mpz_t op); + void __gmpz_abs (mpz_t rop, const mpz_t op); + void __gmpz_and (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_ior (mpz_t rop, const mpz_t op1, const mpz_t op2); + void __gmpz_clear (mpz_t x); + void __gmpz_tdiv_q_2exp (mpz_t q, const mpz_t n, mp_bitcnt_t b); + void __gmpz_fdiv_q (mpz_t q, const mpz_t n, const mpz_t d); + void __gmpz_mul_2exp (mpz_t rop, const mpz_t op1, mp_bitcnt_t op2); + int __gmpz_tstbit (const mpz_t op, mp_bitcnt_t bit_index); + int __gmpz_perfect_square_p (const mpz_t op); + int __gmpz_jacobi (const mpz_t a, const mpz_t b); + void __gmpz_gcd (mpz_t rop, const mpz_t op1, const mpz_t op2); + UNIX_ULONG __gmpz_gcd_ui (mpz_t rop, const mpz_t op1, + UNIX_ULONG op2); + void __gmpz_lcm (mpz_t rop, const mpz_t op1, const mpz_t op2); + int __gmpz_invert (mpz_t rop, const mpz_t op1, const mpz_t op2); + int __gmpz_divisible_p (const mpz_t n, const mpz_t d); + int __gmpz_divisible_ui_p (const mpz_t n, UNIX_ULONG d); + """ + +if sys.platform == "win32": + raise ImportError("Not using GMP on Windows") + +lib = load_lib("gmp", gmp_defs) +implementation = {"library": "gmp", "api": backend} + +if hasattr(lib, "__mpir_version"): + raise ImportError("MPIR library detected") + +# In order to create a function that returns a pointer to +# a new MPZ structure, we need to break the abstraction +# and know exactly what ffi backend we have +if implementation["api"] == "ctypes": + from ctypes import Structure, c_int, c_void_p, byref + + class _MPZ(Structure): + _fields_ = [('_mp_alloc', c_int), + ('_mp_size', c_int), + ('_mp_d', c_void_p)] + + def new_mpz(): + return byref(_MPZ()) + +else: + # We are using CFFI + from Cryptodome.Util._raw_api import ffi + + def new_mpz(): + return ffi.new("MPZ*") + + +# Lazy creation of GMP methods +class _GMP(object): + + def __getattr__(self, name): + if name.startswith("mpz_"): + func_name = "__gmpz_" + name[4:] + elif name.startswith("gmp_"): + func_name = "__gmp_" + name[4:] + else: + raise AttributeError("Attribute %s is invalid" % name) + func = getattr(lib, func_name) + setattr(self, name, func) + return func + + +_gmp = _GMP() + + +class IntegerGMP(IntegerBase): + """A fast, arbitrary precision integer""" + + _zero_mpz_p = new_mpz() + _gmp.mpz_init_set_ui(_zero_mpz_p, c_ulong(0)) + + def __init__(self, value): + """Initialize the integer to the given value.""" + + self._mpz_p = new_mpz() + self._initialized = False + + if isinstance(value, float): + raise ValueError("A floating point type is not a natural number") + + if is_native_int(value): + _gmp.mpz_init(self._mpz_p) + self._initialized = True + if value == 0: + return + + tmp = new_mpz() + _gmp.mpz_init(tmp) + + try: + positive = value >= 0 + reduce = abs(value) + slots = (reduce.bit_length() - 1) // 32 + 1 + + while slots > 0: + slots = slots - 1 + _gmp.mpz_set_ui(tmp, + c_ulong(0xFFFFFFFF & (reduce >> (slots * 32)))) + _gmp.mpz_mul_2exp(tmp, tmp, c_ulong(slots * 32)) + _gmp.mpz_add(self._mpz_p, self._mpz_p, tmp) + finally: + _gmp.mpz_clear(tmp) + + if not positive: + _gmp.mpz_neg(self._mpz_p, self._mpz_p) + + elif isinstance(value, IntegerGMP): + _gmp.mpz_init_set(self._mpz_p, value._mpz_p) + self._initialized = True + else: + raise NotImplementedError + + 
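+ # Note: values cross the Python/GMP boundary in 32-bit chunks, both in + # __init__ above and in __int__ below: mpz_set_ui/mpz_get_ui take a C + # "unsigned long", which is only guaranteed to hold 32 bits, hence the + # 0xFFFFFFFF masks and the 32-bit shifts. +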
+ + # Conversions + def __int__(self): + tmp = new_mpz() + _gmp.mpz_init_set(tmp, self._mpz_p) + + try: + value = 0 + slot = 0 + while _gmp.mpz_cmp(tmp, self._zero_mpz_p) != 0: + lsb = _gmp.mpz_get_ui(tmp) & 0xFFFFFFFF + value |= lsb << (slot * 32) + _gmp.mpz_tdiv_q_2exp(tmp, tmp, c_ulong(32)) + slot = slot + 1 + finally: + _gmp.mpz_clear(tmp) + + if self < 0: + value = -value + return int(value) + + def __str__(self): + return str(int(self)) + + def __repr__(self): + return "Integer(%s)" % str(self) + + # Only Python 2.x + def __hex__(self): + return hex(int(self)) + + # Only Python 3.x + def __index__(self): + return int(self) + + def to_bytes(self, block_size=0, byteorder='big'): + """Convert the number into a byte string. + + This method encodes the number in network order and prepends + as many zero bytes as required. It only works for non-negative + values. + + :Parameters: + block_size : integer + The exact size the output byte string must have. + If zero, the string has the minimal length. + byteorder : string + 'big' for big-endian integers (default), 'little' for little-endian. + :Returns: + A byte string. + :Raise ValueError: + If the value is negative or if ``block_size`` is + provided and the length of the byte string would exceed it. + """ + + if self < 0: + raise ValueError("Conversion only valid for non-negative numbers") + + buf_len = (_gmp.mpz_sizeinbase(self._mpz_p, 2) + 7) // 8 + if buf_len > block_size > 0: + raise ValueError("Number is too big to convert to byte string" + " of prescribed length") + buf = create_string_buffer(buf_len) + + + _gmp.mpz_export( + buf, + null_pointer, # Ignore countp + 1, # Big endian + c_size_t(1), # Each word is 1 byte long + 0, # Endianness within a word - not relevant + c_size_t(0), # No nails + self._mpz_p) + + result = b'\x00' * max(0, block_size - buf_len) + get_raw_buffer(buf) + if byteorder == 'big': + pass + elif byteorder == 'little': + result = bytearray(result) + result.reverse() + result = bytes(result) + else: + raise ValueError("Incorrect byteorder") + return result + + @staticmethod + def from_bytes(byte_string, byteorder='big'): + """Convert a byte string into a number. + + :Parameters: + byte_string : byte string + The input number, encoded in network order. + It can only be non-negative. + byteorder : string + 'big' for big-endian integers (default), 'little' for little-endian. + + :Return: + The ``Integer`` object carrying the same value as the input.
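+ + :Example: + A round-trip sketch (the values are illustrative):: + + n = IntegerGMP.from_bytes(b'\x01\x00') # 256 + assert int(n) == 256 + assert n.to_bytes(block_size=4) == b'\x00\x00\x01\x00'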
+ """ + result = IntegerGMP(0) + if byteorder == 'big': + pass + elif byteorder == 'little': + byte_string = bytearray(byte_string) + byte_string.reverse() + else: + raise ValueError("Incorrect byteorder") + _gmp.mpz_import( + result._mpz_p, + c_size_t(len(byte_string)), # Amount of words to read + 1, # Big endian + c_size_t(1), # Each word is 1 byte long + 0, # Endianess within a word - not relevant + c_size_t(0), # No nails + c_uint8_ptr(byte_string)) + return result + + # Relations + def _apply_and_return(self, func, term): + if not isinstance(term, IntegerGMP): + term = IntegerGMP(term) + return func(self._mpz_p, term._mpz_p) + + def __eq__(self, term): + if not (isinstance(term, IntegerGMP) or is_native_int(term)): + return False + return self._apply_and_return(_gmp.mpz_cmp, term) == 0 + + def __ne__(self, term): + if not (isinstance(term, IntegerGMP) or is_native_int(term)): + return True + return self._apply_and_return(_gmp.mpz_cmp, term) != 0 + + def __lt__(self, term): + return self._apply_and_return(_gmp.mpz_cmp, term) < 0 + + def __le__(self, term): + return self._apply_and_return(_gmp.mpz_cmp, term) <= 0 + + def __gt__(self, term): + return self._apply_and_return(_gmp.mpz_cmp, term) > 0 + + def __ge__(self, term): + return self._apply_and_return(_gmp.mpz_cmp, term) >= 0 + + def __nonzero__(self): + return _gmp.mpz_cmp(self._mpz_p, self._zero_mpz_p) != 0 + __bool__ = __nonzero__ + + def is_negative(self): + return _gmp.mpz_cmp(self._mpz_p, self._zero_mpz_p) < 0 + + # Arithmetic operations + def __add__(self, term): + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + try: + term = IntegerGMP(term) + except NotImplementedError: + return NotImplemented + _gmp.mpz_add(result._mpz_p, + self._mpz_p, + term._mpz_p) + return result + + def __sub__(self, term): + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + try: + term = IntegerGMP(term) + except NotImplementedError: + return NotImplemented + _gmp.mpz_sub(result._mpz_p, + self._mpz_p, + term._mpz_p) + return result + + def __mul__(self, term): + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + try: + term = IntegerGMP(term) + except NotImplementedError: + return NotImplemented + _gmp.mpz_mul(result._mpz_p, + self._mpz_p, + term._mpz_p) + return result + + def __floordiv__(self, divisor): + if not isinstance(divisor, IntegerGMP): + divisor = IntegerGMP(divisor) + if _gmp.mpz_cmp(divisor._mpz_p, + self._zero_mpz_p) == 0: + raise ZeroDivisionError("Division by zero") + result = IntegerGMP(0) + _gmp.mpz_fdiv_q(result._mpz_p, + self._mpz_p, + divisor._mpz_p) + return result + + def __mod__(self, divisor): + if not isinstance(divisor, IntegerGMP): + divisor = IntegerGMP(divisor) + comp = _gmp.mpz_cmp(divisor._mpz_p, + self._zero_mpz_p) + if comp == 0: + raise ZeroDivisionError("Division by zero") + if comp < 0: + raise ValueError("Modulus must be positive") + result = IntegerGMP(0) + _gmp.mpz_mod(result._mpz_p, + self._mpz_p, + divisor._mpz_p) + return result + + def inplace_pow(self, exponent, modulus=None): + + if modulus is None: + if exponent < 0: + raise ValueError("Exponent must not be negative") + + # Normal exponentiation + if exponent > 256: + raise ValueError("Exponent is too big") + _gmp.mpz_pow_ui(self._mpz_p, + self._mpz_p, # Base + c_ulong(int(exponent)) + ) + else: + # Modular exponentiation + if not isinstance(modulus, IntegerGMP): + modulus = IntegerGMP(modulus) + if not modulus: + raise ZeroDivisionError("Division by zero") + if modulus.is_negative(): + raise ValueError("Modulus 
must be positive") + if is_native_int(exponent): + if exponent < 0: + raise ValueError("Exponent must not be negative") + if exponent < 65536: + _gmp.mpz_powm_ui(self._mpz_p, + self._mpz_p, + c_ulong(exponent), + modulus._mpz_p) + return self + exponent = IntegerGMP(exponent) + elif exponent.is_negative(): + raise ValueError("Exponent must not be negative") + _gmp.mpz_powm(self._mpz_p, + self._mpz_p, + exponent._mpz_p, + modulus._mpz_p) + return self + + def __pow__(self, exponent, modulus=None): + result = IntegerGMP(self) + return result.inplace_pow(exponent, modulus) + + def __abs__(self): + result = IntegerGMP(0) + _gmp.mpz_abs(result._mpz_p, self._mpz_p) + return result + + def sqrt(self, modulus=None): + """Return the largest Integer that does not + exceed the square root""" + + if modulus is None: + if self < 0: + raise ValueError("Square root of negative value") + result = IntegerGMP(0) + _gmp.mpz_sqrt(result._mpz_p, + self._mpz_p) + else: + if modulus <= 0: + raise ValueError("Modulus must be positive") + modulus = int(modulus) + result = IntegerGMP(self._tonelli_shanks(int(self) % modulus, modulus)) + + return result + + def __iadd__(self, term): + if is_native_int(term): + if 0 <= term < 65536: + _gmp.mpz_add_ui(self._mpz_p, + self._mpz_p, + c_ulong(term)) + return self + if -65535 < term < 0: + _gmp.mpz_sub_ui(self._mpz_p, + self._mpz_p, + c_ulong(-term)) + return self + term = IntegerGMP(term) + _gmp.mpz_add(self._mpz_p, + self._mpz_p, + term._mpz_p) + return self + + def __isub__(self, term): + if is_native_int(term): + if 0 <= term < 65536: + _gmp.mpz_sub_ui(self._mpz_p, + self._mpz_p, + c_ulong(term)) + return self + if -65535 < term < 0: + _gmp.mpz_add_ui(self._mpz_p, + self._mpz_p, + c_ulong(-term)) + return self + term = IntegerGMP(term) + _gmp.mpz_sub(self._mpz_p, + self._mpz_p, + term._mpz_p) + return self + + def __imul__(self, term): + if is_native_int(term): + if 0 <= term < 65536: + _gmp.mpz_mul_ui(self._mpz_p, + self._mpz_p, + c_ulong(term)) + return self + if -65535 < term < 0: + _gmp.mpz_mul_ui(self._mpz_p, + self._mpz_p, + c_ulong(-term)) + _gmp.mpz_neg(self._mpz_p, self._mpz_p) + return self + term = IntegerGMP(term) + _gmp.mpz_mul(self._mpz_p, + self._mpz_p, + term._mpz_p) + return self + + def __imod__(self, divisor): + if not isinstance(divisor, IntegerGMP): + divisor = IntegerGMP(divisor) + comp = _gmp.mpz_cmp(divisor._mpz_p, + divisor._zero_mpz_p) + if comp == 0: + raise ZeroDivisionError("Division by zero") + if comp < 0: + raise ValueError("Modulus must be positive") + _gmp.mpz_mod(self._mpz_p, + self._mpz_p, + divisor._mpz_p) + return self + + # Boolean/bit operations + def __and__(self, term): + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + term = IntegerGMP(term) + _gmp.mpz_and(result._mpz_p, + self._mpz_p, + term._mpz_p) + return result + + def __or__(self, term): + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + term = IntegerGMP(term) + _gmp.mpz_ior(result._mpz_p, + self._mpz_p, + term._mpz_p) + return result + + def __rshift__(self, pos): + result = IntegerGMP(0) + if pos < 0: + raise ValueError("negative shift count") + if pos > 65536: + if self < 0: + return -1 + else: + return 0 + _gmp.mpz_tdiv_q_2exp(result._mpz_p, + self._mpz_p, + c_ulong(int(pos))) + return result + + def __irshift__(self, pos): + if pos < 0: + raise ValueError("negative shift count") + if pos > 65536: + if self < 0: + return -1 + else: + return 0 + _gmp.mpz_tdiv_q_2exp(self._mpz_p, + self._mpz_p, + c_ulong(int(pos))) + return self + + def 
__lshift__(self, pos): + result = IntegerGMP(0) + if not 0 <= pos < 65536: + raise ValueError("Incorrect shift count") + _gmp.mpz_mul_2exp(result._mpz_p, + self._mpz_p, + c_ulong(int(pos))) + return result + + def __ilshift__(self, pos): + if not 0 <= pos < 65536: + raise ValueError("Incorrect shift count") + _gmp.mpz_mul_2exp(self._mpz_p, + self._mpz_p, + c_ulong(int(pos))) + return self + + def get_bit(self, n): + """Return True if the n-th bit is set to 1. + Bit 0 is the least significant.""" + + if self < 0: + raise ValueError("no bit representation for negative values") + if n < 0: + raise ValueError("negative bit count") + if n > 65536: + return 0 + return bool(_gmp.mpz_tstbit(self._mpz_p, + c_ulong(int(n)))) + + # Extra + def is_odd(self): + return _gmp.mpz_tstbit(self._mpz_p, 0) == 1 + + def is_even(self): + return _gmp.mpz_tstbit(self._mpz_p, 0) == 0 + + def size_in_bits(self): + """Return the minimum number of bits that can encode the number.""" + + if self < 0: + raise ValueError("Conversion only valid for non-negative numbers") + return _gmp.mpz_sizeinbase(self._mpz_p, 2) + + def size_in_bytes(self): + """Return the minimum number of bytes that can encode the number.""" + return (self.size_in_bits() - 1) // 8 + 1 + + def is_perfect_square(self): + return _gmp.mpz_perfect_square_p(self._mpz_p) != 0 + + def fail_if_divisible_by(self, small_prime): + """Raise an exception if the small prime is a divisor.""" + + if is_native_int(small_prime): + if 0 < small_prime < 65536: + if _gmp.mpz_divisible_ui_p(self._mpz_p, + c_ulong(small_prime)): + raise ValueError("The value is composite") + return + small_prime = IntegerGMP(small_prime) + if _gmp.mpz_divisible_p(self._mpz_p, + small_prime._mpz_p): + raise ValueError("The value is composite") + + def multiply_accumulate(self, a, b): + """Increment the number by the product of a and b.""" + + if not isinstance(a, IntegerGMP): + a = IntegerGMP(a) + if is_native_int(b): + if 0 < b < 65536: + _gmp.mpz_addmul_ui(self._mpz_p, + a._mpz_p, + c_ulong(b)) + return self + if -65535 < b < 0: + _gmp.mpz_submul_ui(self._mpz_p, + a._mpz_p, + c_ulong(-b)) + return self + b = IntegerGMP(b) + _gmp.mpz_addmul(self._mpz_p, + a._mpz_p, + b._mpz_p) + return self + + def set(self, source): + """Set the Integer to have the given value""" + + if not isinstance(source, IntegerGMP): + source = IntegerGMP(source) + _gmp.mpz_set(self._mpz_p, + source._mpz_p) + return self + + def inplace_inverse(self, modulus): + """Compute the inverse of this number in the ring of + modulo integers. + + Raise an exception if no inverse exists. 
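# [Editor's note] inplace_inverse() below wraps mpz_invert(): it fails exactly
# when gcd(self, modulus) != 1. The same contract sketched with built-in
# integers (pow(x, -1, n) requires Python 3.8+):

x, n = 7, 40
inv = pow(x, -1, n)               # 23, since 7 * 23 == 161 == 4 * 40 + 1
assert (x * inv) % n == 1
try:
    pow(6, -1, 40)                # gcd(6, 40) == 2, so no inverse exists
except ValueError:
    pass                          # mirrors "No inverse value can be computed"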
+ """ + + if not isinstance(modulus, IntegerGMP): + modulus = IntegerGMP(modulus) + + comp = _gmp.mpz_cmp(modulus._mpz_p, + self._zero_mpz_p) + if comp == 0: + raise ZeroDivisionError("Modulus cannot be zero") + if comp < 0: + raise ValueError("Modulus must be positive") + + result = _gmp.mpz_invert(self._mpz_p, + self._mpz_p, + modulus._mpz_p) + if not result: + raise ValueError("No inverse value can be computed") + return self + + def inverse(self, modulus): + result = IntegerGMP(self) + result.inplace_inverse(modulus) + return result + + def gcd(self, term): + """Compute the greatest common denominator between this + number and another term.""" + + result = IntegerGMP(0) + if is_native_int(term): + if 0 < term < 65535: + _gmp.mpz_gcd_ui(result._mpz_p, + self._mpz_p, + c_ulong(term)) + return result + term = IntegerGMP(term) + _gmp.mpz_gcd(result._mpz_p, self._mpz_p, term._mpz_p) + return result + + def lcm(self, term): + """Compute the least common multiplier between this + number and another term.""" + + result = IntegerGMP(0) + if not isinstance(term, IntegerGMP): + term = IntegerGMP(term) + _gmp.mpz_lcm(result._mpz_p, self._mpz_p, term._mpz_p) + return result + + @staticmethod + def jacobi_symbol(a, n): + """Compute the Jacobi symbol""" + + if not isinstance(a, IntegerGMP): + a = IntegerGMP(a) + if not isinstance(n, IntegerGMP): + n = IntegerGMP(n) + if n <= 0 or n.is_even(): + raise ValueError("n must be positive odd for the Jacobi symbol") + return _gmp.mpz_jacobi(a._mpz_p, n._mpz_p) + + # Clean-up + def __del__(self): + + try: + if self._mpz_p is not None: + if self._initialized: + _gmp.mpz_clear(self._mpz_p) + + self._mpz_p = None + except AttributeError: + pass diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerGMP.pyi b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerGMP.pyi new file mode 100644 index 0000000..2181b47 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerGMP.pyi @@ -0,0 +1,3 @@ +from ._IntegerBase import IntegerBase +class IntegerGMP(IntegerBase): + pass diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerNative.py b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerNative.py new file mode 100644 index 0000000..0cf8173 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerNative.py @@ -0,0 +1,370 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from ._IntegerBase import IntegerBase + +from Cryptodome.Util.number import long_to_bytes, bytes_to_long, inverse, GCD + + +class IntegerNative(IntegerBase): + """A class to model a natural integer (including zero)""" + + def __init__(self, value): + if isinstance(value, float): + raise ValueError("A floating point type is not a natural number") + try: + self._value = value._value + except AttributeError: + self._value = value + + # Conversions + def __int__(self): + return self._value + + def __str__(self): + return str(int(self)) + + def __repr__(self): + return "Integer(%s)" % str(self) + + # Only Python 2.x + def __hex__(self): + return hex(self._value) + + # Only Python 3.x + def __index__(self): + return int(self._value) + + def to_bytes(self, block_size=0, byteorder='big'): + if self._value < 0: + raise ValueError("Conversion only valid for non-negative numbers") + result = long_to_bytes(self._value, block_size) + if len(result) > block_size > 0: + raise ValueError("Value too large to encode") + if byteorder == 'big': + pass + elif byteorder == 'little': + result = bytearray(result) + result.reverse() + result = bytes(result) + else: + raise ValueError("Incorrect byteorder") + return result + + @classmethod + def from_bytes(cls, byte_string, byteorder='big'): + if byteorder == 'big': + pass + elif byteorder == 'little': + byte_string = bytearray(byte_string) + byte_string.reverse() + else: + raise ValueError("Incorrect byteorder") + return cls(bytes_to_long(byte_string)) + + # Relations + def __eq__(self, term): + if term is None: + return False + return self._value == int(term) + + def __ne__(self, term): + return not self.__eq__(term) + + def __lt__(self, term): + return self._value < int(term) + + def __le__(self, term): + return self.__lt__(term) or self.__eq__(term) + + def __gt__(self, term): + return not self.__le__(term) + + def __ge__(self, term): + return not self.__lt__(term) + + def __nonzero__(self): + return self._value != 0 + __bool__ = __nonzero__ + + def is_negative(self): + return self._value < 0 + + # Arithmetic operations + def __add__(self, term): + try: + return self.__class__(self._value + int(term)) + except (ValueError, AttributeError, TypeError): + return NotImplemented + + def __sub__(self, term): + try: + return self.__class__(self._value - int(term)) + except (ValueError, AttributeError, TypeError): + return NotImplemented + + def __mul__(self, factor): + try: + return self.__class__(self._value * int(factor)) + except (ValueError, AttributeError, TypeError): + return NotImplemented + + def __floordiv__(self, divisor): + return self.__class__(self._value // int(divisor)) + + def __mod__(self, divisor): + divisor_value = int(divisor) + if divisor_value < 0: + raise ValueError("Modulus must be positive") + return self.__class__(self._value % divisor_value) + + def inplace_pow(self, exponent, modulus=None): + exp_value = int(exponent) + if exp_value < 0: + 
raise ValueError("Exponent must not be negative") + + if modulus is not None: + mod_value = int(modulus) + if mod_value < 0: + raise ValueError("Modulus must be positive") + if mod_value == 0: + raise ZeroDivisionError("Modulus cannot be zero") + else: + mod_value = None + self._value = pow(self._value, exp_value, mod_value) + return self + + def __pow__(self, exponent, modulus=None): + result = self.__class__(self) + return result.inplace_pow(exponent, modulus) + + def __abs__(self): + return abs(self._value) + + def sqrt(self, modulus=None): + + value = self._value + if modulus is None: + if value < 0: + raise ValueError("Square root of negative value") + # http://stackoverflow.com/questions/15390807/integer-square-root-in-python + + x = value + y = (x + 1) // 2 + while y < x: + x = y + y = (x + value // x) // 2 + result = x + else: + if modulus <= 0: + raise ValueError("Modulus must be positive") + result = self._tonelli_shanks(self % modulus, modulus) + + return self.__class__(result) + + def __iadd__(self, term): + self._value += int(term) + return self + + def __isub__(self, term): + self._value -= int(term) + return self + + def __imul__(self, term): + self._value *= int(term) + return self + + def __imod__(self, term): + modulus = int(term) + if modulus == 0: + raise ZeroDivisionError("Division by zero") + if modulus < 0: + raise ValueError("Modulus must be positive") + self._value %= modulus + return self + + # Boolean/bit operations + def __and__(self, term): + return self.__class__(self._value & int(term)) + + def __or__(self, term): + return self.__class__(self._value | int(term)) + + def __rshift__(self, pos): + try: + return self.__class__(self._value >> int(pos)) + except OverflowError: + if self._value >= 0: + return 0 + else: + return -1 + + def __irshift__(self, pos): + try: + self._value >>= int(pos) + except OverflowError: + if self._value >= 0: + return 0 + else: + return -1 + return self + + def __lshift__(self, pos): + try: + return self.__class__(self._value << int(pos)) + except OverflowError: + raise ValueError("Incorrect shift count") + + def __ilshift__(self, pos): + try: + self._value <<= int(pos) + except OverflowError: + raise ValueError("Incorrect shift count") + return self + + def get_bit(self, n): + if self._value < 0: + raise ValueError("no bit representation for negative values") + try: + try: + result = (self._value >> n._value) & 1 + if n._value < 0: + raise ValueError("negative bit count") + except AttributeError: + result = (self._value >> n) & 1 + if n < 0: + raise ValueError("negative bit count") + except OverflowError: + result = 0 + return result + + # Extra + def is_odd(self): + return (self._value & 1) == 1 + + def is_even(self): + return (self._value & 1) == 0 + + def size_in_bits(self): + + if self._value < 0: + raise ValueError("Conversion only valid for non-negative numbers") + + if self._value == 0: + return 1 + + return self._value.bit_length() + + def size_in_bytes(self): + return (self.size_in_bits() - 1) // 8 + 1 + + def is_perfect_square(self): + if self._value < 0: + return False + if self._value in (0, 1): + return True + + x = self._value // 2 + square_x = x ** 2 + + while square_x > self._value: + x = (square_x + self._value) // (2 * x) + square_x = x ** 2 + + return self._value == x ** 2 + + def fail_if_divisible_by(self, small_prime): + if (self._value % int(small_prime)) == 0: + raise ValueError("Value is composite") + + def multiply_accumulate(self, a, b): + self._value += int(a) * int(b) + return self + + def set(self, 
source): + self._value = int(source) + + def inplace_inverse(self, modulus): + self._value = inverse(self._value, int(modulus)) + return self + + def inverse(self, modulus): + result = self.__class__(self) + result.inplace_inverse(modulus) + return result + + def gcd(self, term): + return self.__class__(GCD(abs(self._value), abs(int(term)))) + + def lcm(self, term): + term = int(term) + if self._value == 0 or term == 0: + return self.__class__(0) + return self.__class__(abs((self._value * term) // self.gcd(term)._value)) + + @staticmethod + def jacobi_symbol(a, n): + a = int(a) + n = int(n) + + if n <= 0: + raise ValueError("n must be a positive integer") + + if (n & 1) == 0: + raise ValueError("n must be odd for the Jacobi symbol") + + # Step 1 + a = a % n + # Step 2 + if a == 1 or n == 1: + return 1 + # Step 3 + if a == 0: + return 0 + # Step 4 + e = 0 + a1 = a + while (a1 & 1) == 0: + a1 >>= 1 + e += 1 + # Step 5 + if (e & 1) == 0: + s = 1 + elif n % 8 in (1, 7): + s = 1 + else: + s = -1 + # Step 6 + if n % 4 == 3 and a1 % 4 == 3: + s = -s + # Step 7 + n1 = n % a1 + # Step 8 + return s * IntegerNative.jacobi_symbol(n1, a1) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerNative.pyi b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerNative.pyi new file mode 100644 index 0000000..3f65a39 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Math/_IntegerNative.pyi @@ -0,0 +1,3 @@ +from ._IntegerBase import IntegerBase +class IntegerNative(IntegerBase): + pass diff --git a/lib/site-packages/pip/_internal/utils/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/Math/__init__.py similarity index 100% rename from lib/site-packages/pip/_internal/utils/__init__.py rename to python/lib/python3.11/site-packages/Cryptodome/Math/__init__.py diff --git a/python/lib/python3.11/site-packages/Cryptodome/Math/_modexp.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Math/_modexp.abi3.so new file mode 100755 index 0000000..078fa6a Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Math/_modexp.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Protocol/KDF.py b/python/lib/python3.11/site-packages/Cryptodome/Protocol/KDF.py new file mode 100644 index 0000000..b6d747e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Protocol/KDF.py @@ -0,0 +1,642 @@ +# coding=utf-8 +# +# KDF.py : a collection of Key Derivation Functions +# +# Part of the Python Cryptography Toolkit +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
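# [Editor's note] The recursive jacobi_symbol() in _IntegerNative above follows
# the textbook quadratic-reciprocity algorithm (steps 1..8). A hedged
# cross-check via Euler's criterion, which ties the Jacobi symbol to the
# Legendre symbol for odd prime moduli (`legendre` is an illustrative helper):

def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion; p must be an odd prime."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

p = 23
for a in range(1, p):
    assert legendre(a, p) in (-1, 1)   # never 0, since gcd(a, p) == 1
assert legendre(2, 23) == 1            # 2 is a quadratic residue mod 23 (5*5 == 25)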
+# =================================================================== + +import re +import struct +from functools import reduce + +from Cryptodome.Util.py3compat import (tobytes, bord, _copy_bytes, iter_range, + tostr, bchr, bstr) + +from Cryptodome.Hash import SHA1, SHA256, HMAC, CMAC, BLAKE2s +from Cryptodome.Util.strxor import strxor +from Cryptodome.Random import get_random_bytes +from Cryptodome.Util.number import size as bit_size, long_to_bytes, bytes_to_long + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + create_string_buffer, + get_raw_buffer, c_size_t) + +_raw_salsa20_lib = load_pycryptodome_raw_lib("Cryptodome.Cipher._Salsa20", + """ + int Salsa20_8_core(const uint8_t *x, const uint8_t *y, + uint8_t *out); + """) + +_raw_scrypt_lib = load_pycryptodome_raw_lib("Cryptodome.Protocol._scrypt", + """ + typedef int (core_t)(const uint8_t [64], const uint8_t [64], uint8_t [64]); + int scryptROMix(const uint8_t *data_in, uint8_t *data_out, + size_t data_len, unsigned N, core_t *core); + """) + + +def PBKDF1(password, salt, dkLen, count=1000, hashAlgo=None): + """Derive one key from a password (or passphrase). + + This function performs key derivation according to an old version of + the PKCS#5 standard (v1.5) or `RFC2898 + <https://www.ietf.org/rfc/rfc2898.txt>`_. + + Args: + password (string): + The secret password to generate the key from. + salt (byte string): + An 8 byte string to use for better protection from dictionary attacks. + This value does not need to be kept secret, but it should be randomly + chosen for each derivation. + dkLen (integer): + The length of the desired key. The default is 16 bytes, suitable for + instance for :mod:`Cryptodome.Cipher.AES`. + count (integer): + The number of iterations to carry out. The recommendation is 1000 or + more. + hashAlgo (module): + The hash algorithm to use, as a module or an object from the :mod:`Cryptodome.Hash` package. + The digest length must be no shorter than ``dkLen``. + The default algorithm is :mod:`Cryptodome.Hash.SHA1`. + + Return: + A byte string of length ``dkLen`` that can be used as key. + """ + + if not hashAlgo: + hashAlgo = SHA1 + password = tobytes(password) + pHash = hashAlgo.new(password+salt) + digest = pHash.digest_size + if dkLen > digest: + raise TypeError("Selected hash algorithm has a too short digest (%d bytes)." % digest) + if len(salt) != 8: + raise ValueError("Salt is not 8 bytes long (%d bytes instead)." % len(salt)) + for i in iter_range(count-1): + pHash = pHash.new(pHash.digest()) + return pHash.digest()[:dkLen] + + +def PBKDF2(password, salt, dkLen=16, count=1000, prf=None, hmac_hash_module=None): + """Derive one or more keys from a password (or passphrase). + + This function performs key derivation according to the PKCS#5 standard (v2.0). + + Args: + password (string or byte string): + The secret password to generate the key from. + + Strings will be encoded as ISO 8859-1 (also known as Latin-1), + which does not allow any characters with codepoints > 255. + salt (string or byte string): + A (byte) string to use for better protection from dictionary attacks. + This value does not need to be kept secret, but it should be randomly + chosen for each derivation. It is recommended to use at least 16 bytes. + + Strings will be encoded as ISO 8859-1 (also known as Latin-1), + which does not allow any characters with codepoints > 255. + dkLen (integer): + The cumulative length of the keys to produce. 
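# [Editor's note] PBKDF1 above is plain iterated hashing: T_1 = H(password || salt),
# T_i = H(T_{i-1}), and the key is the first dkLen bytes of T_count. A minimal
# stdlib sketch of the same chain, assuming the module's default SHA-1:

import hashlib

def pbkdf1_sha1(password, salt, dk_len, count=1000):
    assert len(salt) == 8, "PKCS#5 v1.5 prescribes an 8-byte salt"
    t = hashlib.sha1(password + salt).digest()
    for _ in range(count - 1):
        t = hashlib.sha1(t).digest()
    assert dk_len <= len(t), "cannot request more bytes than one digest"
    return t[:dk_len]

assert len(pbkdf1_sha1(b"secret", b"saltsalt", 16)) == 16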
+
+            Due to a flaw in the PBKDF2 design, you should not request more bytes
+            than the ``prf`` can output. For instance, ``dkLen`` should not exceed
+            20 bytes in combination with ``HMAC-SHA1``.
+        count (integer):
+            The number of iterations to carry out. The higher the value, the slower
+            and the more secure the function becomes.
+
+            You should find the maximum number of iterations that keeps the
+            key derivation still acceptable on the slowest hardware you must support.
+
+            Although the default value is 1000, **it is recommended to use at least
+            1000000 (1 million) iterations**.
+        prf (callable):
+            A pseudorandom function. It must be a function that returns a
+            pseudorandom byte string from two parameters: a secret and a salt.
+            The slower the algorithm, the more secure the derivation function.
+            If not specified, **HMAC-SHA1** is used.
+        hmac_hash_module (module):
+            A module from ``Cryptodome.Hash`` implementing a Merkle-Damgard cryptographic
+            hash, which PBKDF2 must use in combination with HMAC.
+            This parameter is mutually exclusive with ``prf``.
+
+    Return:
+        A byte string of length ``dkLen`` that can be used as key material.
+        If you want multiple keys, just break up this string into segments of the desired length.
+    """
+
+    password = tobytes(password)
+    salt = tobytes(salt)
+
+    if prf and hmac_hash_module:
+        raise ValueError("'prf' and 'hmac_hash_module' are mutually exclusive")
+
+    if prf is None and hmac_hash_module is None:
+        hmac_hash_module = SHA1
+
+    if prf or not hasattr(hmac_hash_module, "_pbkdf2_hmac_assist"):
+        # Generic (and slow) implementation
+
+        if prf is None:
+            prf = lambda p,s: HMAC.new(p, s, hmac_hash_module).digest()
+
+        def link(s):
+            s[0], s[1] = s[1], prf(password, s[1])
+            return s[0]
+
+        key = b''
+        i = 1
+        while len(key) < dkLen:
+            s = [ prf(password, salt + struct.pack(">I", i)) ] * 2
+            key += reduce(strxor, (link(s) for j in range(count)) )
+            i += 1
+
+    else:
+        # Optimized implementation
+        key = b''
+        i = 1
+        while len(key)<dkLen:
+            base = HMAC.new(password, b"", hmac_hash_module)
+            first_digest = base.copy().update(salt + struct.pack(">I", i)).digest()
+            key += base._pbkdf2_hmac_assist(first_digest, count)
+            i += 1
+
+    return key[:dkLen]
+
+
+class _S2V(object):
+    """String-to-vector PRF as defined in `RFC5297`_.
+
+    This class implements a pseudorandom function family
+    based on CMAC that takes as input a vector of strings.
+
+    .. _RFC5297: http://tools.ietf.org/html/rfc5297
+    """
+
+    def __init__(self, key, ciphermod, cipher_params=None):
+        """Initialize the S2V PRF.
+
+        :Parameters:
+          key : byte string
+            A secret that can be used as key for CMACs
+            based on ciphers from ``ciphermod``.
+          ciphermod : module
+            A block cipher module from `Cryptodome.Cipher`.
+          cipher_params : dictionary
+            A set of extra parameters to use to create a cipher instance.
+        """
+
+        self._key = _copy_bytes(None, None, key)
+        self._ciphermod = ciphermod
+        self._last_string = self._cache = b'\x00' * ciphermod.block_size
+
+        # Max number of update() calls we can process
+        self._n_updates = ciphermod.block_size * 8 - 1
+
+        if cipher_params is None:
+            self._cipher_params = {}
+        else:
+            self._cipher_params = dict(cipher_params)
+
+    @staticmethod
+    def new(key, ciphermod):
+        """Create a new S2V PRF.
+
+        :Parameters:
+          key : byte string
+            A secret that can be used as key for CMACs
+            based on ciphers from ``ciphermod``.
+          ciphermod : module
+            A block cipher module from `Cryptodome.Cipher`.
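# [Editor's note] Per block i, the generic PBKDF2 path above computes
# U_1 = PRF(password, salt || INT_32_BE(i)) and XOR-folds U_1..U_count.
# hashlib implements the same RFC 2898 function, so the default HMAC-SHA1
# configuration can be cross-checked against a published RFC 6070 vector:

import hashlib

dk = hashlib.pbkdf2_hmac('sha1', b'password', b'salt', 1, dklen=20)
assert dk.hex() == '0c60c80f961f0e71f3a9b524af6012062fe037a6'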
+        """
+        return _S2V(key, ciphermod)
+
+    def _double(self, bs):
+        doubled = bytes_to_long(bs)<<1
+        if bord(bs[0]) & 0x80:
+            doubled ^= 0x87
+        return long_to_bytes(doubled, len(bs))[-len(bs):]
+
+    def update(self, item):
+        """Pass the next component of the vector.
+
+        The maximum number of components you can pass is equal to the block
+        length of the cipher (in bits) minus 1.
+
+        :Parameters:
+          item : byte string
+            The next component of the vector.
+        :Raise TypeError: when the limit on the number of components has been reached.
+        """
+
+        if self._n_updates == 0:
+            raise TypeError("Too many components passed to S2V")
+        self._n_updates -= 1
+
+        mac = CMAC.new(self._key,
+                       msg=self._last_string,
+                       ciphermod=self._ciphermod,
+                       cipher_params=self._cipher_params)
+        self._cache = strxor(self._double(self._cache), mac.digest())
+        self._last_string = _copy_bytes(None, None, item)
+
+    def derive(self):
+        """Derive a secret from the vector of components.
+
+        :Return: a byte string, as long as the block length of the cipher.
+        """
+
+        if len(self._last_string) >= 16:
+            # xorend
+            final = self._last_string[:-16] + strxor(self._last_string[-16:], self._cache)
+        else:
+            # zero-pad & xor
+            padded = (self._last_string + b'\x80' + b'\x00' * 15)[:16]
+            final = strxor(padded, self._double(self._cache))
+        mac = CMAC.new(self._key,
+                       msg=final,
+                       ciphermod=self._ciphermod,
+                       cipher_params=self._cipher_params)
+        return mac.digest()
+
+
+def HKDF(master, key_len, salt, hashmod, num_keys=1, context=None):
+    """Derive one or more keys from a master secret using
+    the HMAC-based KDF defined in RFC5869_.
+
+    Args:
+     master (byte string):
+        The unguessable value used by the KDF to generate the other keys.
+        It must be a high-entropy secret, though not necessarily uniform.
+        It must not be a password.
+     key_len (integer):
+        The length in bytes of every derived key.
+     salt (byte string):
+        A non-secret, reusable value that strengthens the randomness
+        extraction step.
+        Ideally, it is as long as the digest size of the chosen hash.
+        If empty, a string of zeroes is used.
+     hashmod (module):
+        A cryptographic hash algorithm from :mod:`Cryptodome.Hash`.
+        :mod:`Cryptodome.Hash.SHA512` is a good choice.
+     num_keys (integer):
+        The number of keys to derive. Every key is :data:`key_len` bytes long.
+        The maximum cumulative length of all keys is
+        255 times the digest size.
+     context (byte string):
+        Optional identifier describing what the keys are used for.
+
+    Return:
+        A byte string or a tuple of byte strings.
+
+    .. _RFC5869: http://tools.ietf.org/html/rfc5869
+    """
+
+    output_len = key_len * num_keys
+    if output_len > (255 * hashmod.digest_size):
+        raise ValueError("Too much secret data to derive")
+    if not salt:
+        salt = b'\x00' * hashmod.digest_size
+    if context is None:
+        context = b""
+
+    # Step 1: extract
+    hmac = HMAC.new(salt, master, digestmod=hashmod)
+    prk = hmac.digest()
+
+    # Step 2: expand
+    t = [ b"" ]
+    n = 1
+    tlen = 0
+    while tlen < output_len:
+        hmac = HMAC.new(prk, t[-1] + context + struct.pack('B', n), digestmod=hashmod)
+        t.append(hmac.digest())
+        tlen += hashmod.digest_size
+        n += 1
+    derived_output = b"".join(t)
+    if num_keys == 1:
+        return derived_output[:key_len]
+    kol = [derived_output[idx:idx + key_len]
+           for idx in iter_range(0, output_len, key_len)]
+    return list(kol[:num_keys])
+
+
+def scrypt(password, salt, key_len, N, r, p, num_keys=1):
+    """Derive one or more keys from a passphrase.
+
+    Args:
+     password (string):
+        The secret pass phrase to generate the keys from.
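# [Editor's note] HKDF above is RFC 5869 extract-then-expand. A compact
# stdlib sketch of both steps for SHA-256 (single-key case; `hkdf_sha256`
# is an illustrative helper, checked against RFC 5869 test case 1):

import hashlib, hmac

def hkdf_sha256(master, key_len, salt, context=b''):
    if not salt:
        salt = b'\x00' * 32
    prk = hmac.new(salt, master, hashlib.sha256).digest()      # step 1: extract
    t, okm, n = b'', b'', 1
    while len(okm) < key_len:                                  # step 2: expand
        t = hmac.new(prk, t + context + bytes([n]), hashlib.sha256).digest()
        okm += t
        n += 1
    return okm[:key_len]

okm = hkdf_sha256(bytes.fromhex('0b' * 22), 42,
                  bytes.fromhex('000102030405060708090a0b0c'),
                  bytes.fromhex('f0f1f2f3f4f5f6f7f8f9'))
assert okm.hex().startswith('3cb25f25faacd57a90434f64d0362f2a')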
+     salt (string):
+        A string to use for better protection from dictionary attacks.
+        This value does not need to be kept secret,
+        but it should be randomly chosen for each derivation.
+        It is recommended to be at least 16 bytes long.
+     key_len (integer):
+        The length in bytes of each derived key.
+     N (integer):
+        CPU/Memory cost parameter. It must be a power of 2 and less
+        than :math:`2^{32}`.
+     r (integer):
+        Block size parameter.
+     p (integer):
+        Parallelization parameter.
+        It must be no greater than :math:`(2^{32}-1)/(4r)`.
+     num_keys (integer):
+        The number of keys to derive. Every key is :data:`key_len` bytes long.
+        By default, only 1 key is generated.
+        The maximum cumulative length of all keys is :math:`(2^{32}-1)*32`
+        (that is, 128TB).
+
+    A good choice of parameters *(N, r, p)* was suggested
+    by Colin Percival in his `presentation in 2009`__:
+
+    - *( 2¹⁴, 8, 1 )* for interactive logins (≤100ms)
+    - *( 2²⁰, 8, 1 )* for file encryption (≤5s)
+
+    Return:
+        A byte string or a tuple of byte strings.
+
+    .. __: http://www.tarsnap.com/scrypt/scrypt-slides.pdf
+    """
+
+    if 2 ** (bit_size(N) - 1) != N:
+        raise ValueError("N must be a power of 2")
+    if N >= 2 ** 32:
+        raise ValueError("N is too big")
+    if p > ((2 ** 32 - 1) * 32) // (128 * r):
+        raise ValueError("p or r are too big")
+
+    prf_hmac_sha256 = lambda p, s: HMAC.new(p, s, SHA256).digest()
+
+    stage_1 = PBKDF2(password, salt, p * 128 * r, 1, prf=prf_hmac_sha256)
+
+    scryptROMix = _raw_scrypt_lib.scryptROMix
+    core = _raw_salsa20_lib.Salsa20_8_core
+
+    # Parallelize into p flows
+    data_out = []
+    for flow in iter_range(p):
+        idx = flow * 128 * r
+        buffer_out = create_string_buffer(128 * r)
+        result = scryptROMix(stage_1[idx : idx + 128 * r],
+                             buffer_out,
+                             c_size_t(128 * r),
+                             N,
+                             core)
+        if result:
+            raise ValueError("Error %X while running scrypt" % result)
+        data_out += [ get_raw_buffer(buffer_out) ]
+
+    dk = PBKDF2(password,
+                b"".join(data_out),
+                key_len * num_keys, 1,
+                prf=prf_hmac_sha256)
+
+    if num_keys == 1:
+        return dk
+
+    kol = [dk[idx:idx + key_len]
+           for idx in iter_range(0, key_len * num_keys, key_len)]
+    return kol
+
+
+def _bcrypt_encode(data):
+    s = "./ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
+
+    bits = []
+    for c in data:
+        bits_c = bin(bord(c))[2:].zfill(8)
+        bits.append(bstr(bits_c))
+    bits = b"".join(bits)
+
+    bits6 = [ bits[idx:idx+6] for idx in range(0, len(bits), 6) ]
+
+    result = []
+    for g in bits6[:-1]:
+        idx = int(g, 2)
+        result.append(s[idx])
+
+    g = bits6[-1]
+    idx = int(g, 2) << (6 - len(g))
+    result.append(s[idx])
+    result = "".join(result)
+
+    return tobytes(result)
+
+
+def _bcrypt_decode(data):
+    s = "./ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
+
+    bits = []
+    for c in tostr(data):
+        idx = s.find(c)
+        bits6 = bin(idx)[2:].zfill(6)
+        bits.append(bits6)
+    bits = "".join(bits)
+
+    modulo4 = len(data) % 4
+    if modulo4 == 1:
+        raise ValueError("Incorrect length")
+    elif modulo4 == 2:
+        bits = bits[:-4]
+    elif modulo4 == 3:
+        bits = bits[:-2]
+
+    bits8 = [ bits[idx:idx+8] for idx in range(0, len(bits), 8) ]
+
+    result = []
+    for g in bits8:
+        result.append(bchr(int(g, 2)))
+    result = b"".join(result)
+
+    return result
+
+
+def _bcrypt_hash(password, cost, salt, constant, invert):
+    from Cryptodome.Cipher import _EKSBlowfish
+
+    if len(password) > 72:
+        raise ValueError("The password is too long.
It must be 72 bytes at most.") + + if not (4 <= cost <= 31): + raise ValueError("bcrypt cost factor must be in the range 4..31") + + cipher = _EKSBlowfish.new(password, _EKSBlowfish.MODE_ECB, salt, cost, invert) + ctext = constant + for _ in range(64): + ctext = cipher.encrypt(ctext) + return ctext + + +def bcrypt(password, cost, salt=None): + """Hash a password into a key, using the OpenBSD bcrypt protocol. + + Args: + password (byte string or string): + The secret password or pass phrase. + It must be at most 72 bytes long. + It must not contain the zero byte. + Unicode strings will be encoded as UTF-8. + cost (integer): + The exponential factor that makes it slower to compute the hash. + It must be in the range 4 to 31. + A value of at least 12 is recommended. + salt (byte string): + Optional. Random byte string to thwarts dictionary and rainbow table + attacks. It must be 16 bytes long. + If not passed, a random value is generated. + + Return (byte string): + The bcrypt hash + + Raises: + ValueError: if password is longer than 72 bytes or if it contains the zero byte + + """ + + password = tobytes(password, "utf-8") + + if password.find(bchr(0)[0]) != -1: + raise ValueError("The password contains the zero byte") + + if len(password) < 72: + password += b"\x00" + + if salt is None: + salt = get_random_bytes(16) + if len(salt) != 16: + raise ValueError("bcrypt salt must be 16 bytes long") + + ctext = _bcrypt_hash(password, cost, salt, b"OrpheanBeholderScryDoubt", True) + + cost_enc = b"$" + bstr(str(cost).zfill(2)) + salt_enc = b"$" + _bcrypt_encode(salt) + hash_enc = _bcrypt_encode(ctext[:-1]) # only use 23 bytes, not 24 + return b"$2a" + cost_enc + salt_enc + hash_enc + + +def bcrypt_check(password, bcrypt_hash): + """Verify if the provided password matches the given bcrypt hash. + + Args: + password (byte string or string): + The secret password or pass phrase to test. + It must be at most 72 bytes long. + It must not contain the zero byte. + Unicode strings will be encoded as UTF-8. + bcrypt_hash (byte string, bytearray): + The reference bcrypt hash the password needs to be checked against. + + Raises: + ValueError: if the password does not match + """ + + bcrypt_hash = tobytes(bcrypt_hash) + + if len(bcrypt_hash) != 60: + raise ValueError("Incorrect length of the bcrypt hash: %d bytes instead of 60" % len(bcrypt_hash)) + + if bcrypt_hash[:4] != b'$2a$': + raise ValueError("Unsupported prefix") + + p = re.compile(br'\$2a\$([0-9][0-9])\$([A-Za-z0-9./]{22,22})([A-Za-z0-9./]{31,31})') + r = p.match(bcrypt_hash) + if not r: + raise ValueError("Incorrect bcrypt hash format") + + cost = int(r.group(1)) + if not (4 <= cost <= 31): + raise ValueError("Incorrect cost") + + salt = _bcrypt_decode(r.group(2)) + + bcrypt_hash2 = bcrypt(password, cost, salt) + + secret = get_random_bytes(16) + + mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=bcrypt_hash).digest() + mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=bcrypt_hash2).digest() + if mac1 != mac2: + raise ValueError("Incorrect bcrypt hash") + + +def SP800_108_Counter(master, key_len, prf, num_keys=None, label=b'', context=b''): + """Derive one or more keys from a master secret using + a pseudorandom function in Counter Mode, as specified in + `NIST SP 800-108r1 <https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-108r1.pdf>`_. + + Args: + master (byte string): + The secret value used by the KDF to derive the other keys. + It must not be a password. 
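# [Editor's note] A short usage sketch for the bcrypt()/bcrypt_check() pair
# defined above (assumes the Cryptodome package added by this diff is on the
# import path; cost=12 is a reasonable present-day default):

from Cryptodome.Protocol.KDF import bcrypt, bcrypt_check

pwd = b"correct horse battery staple"   # at most 72 bytes, no NUL byte
h = bcrypt(pwd, cost=12)                # 60-byte "$2a$12$..." hash, random salt
bcrypt_check(pwd, h)                    # returns silently on a match
try:
    bcrypt_check(b"wrong password", h)
except ValueError:
    pass                                # a mismatch raises ValueError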
+
+        The length of the secret must be consistent with the input expected by
+        the :data:`prf` function.
+     key_len (integer):
+        The length in bytes of each derived key.
+     prf (function):
+        A pseudorandom function that takes two byte strings as parameters:
+        the secret and an input. It returns another byte string.
+     num_keys (integer):
+        The number of keys to derive. Every key is :data:`key_len` bytes long.
+        By default, only 1 key is derived.
+     label (byte string):
+        Optional description of the purpose of the derived keys.
+        It must not contain zero bytes.
+     context (byte string):
+        Optional information pertaining to
+        the protocol that uses the keys, such as the identity of the
+        participants, nonces, session IDs, etc.
+        It must not contain zero bytes.
+
+    Return:
+        - a byte string (if ``num_keys`` is not specified), or
+        - a tuple of byte strings (if ``num_keys`` is specified).
+    """
+
+    if num_keys is None:
+        num_keys = 1
+
+    if label.find(b'\x00') != -1:
+        raise ValueError("Null byte found in label")
+
+    if context.find(b'\x00') != -1:
+        raise ValueError("Null byte found in context")
+
+    key_len_enc = long_to_bytes(key_len * num_keys * 8, 4)
+    output_len = key_len * num_keys
+
+    i = 1
+    dk = b""
+    while len(dk) < output_len:
+        info = long_to_bytes(i, 4) + label + b'\x00' + context + key_len_enc
+        dk += prf(master, info)
+        i += 1
+        if i > 0xFFFFFFFF:
+            raise ValueError("Overflow in SP800 108 counter")
+
+    if num_keys == 1:
+        return dk[:key_len]
+    else:
+        kol = [dk[idx:idx + key_len]
+               for idx in iter_range(0, output_len, key_len)]
+        return kol
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Protocol/KDF.pyi b/python/lib/python3.11/site-packages/Cryptodome/Protocol/KDF.pyi
new file mode 100644
index 0000000..df6c287
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/Protocol/KDF.pyi
@@ -0,0 +1,40 @@
+from types import ModuleType
+from typing import Optional, Callable, Tuple, Union, Dict, Any, ByteString, overload
+from typing_extensions import Literal
+
+RNG = Callable[[int], bytes]
+PRF = Callable[[bytes, bytes], bytes]
+
+def PBKDF1(password: str, salt: bytes, dkLen: int, count: Optional[int]=1000, hashAlgo: Optional[ModuleType]=None) -> bytes: ...
+def PBKDF2(password: str, salt: bytes, dkLen: Optional[int]=16, count: Optional[int]=1000, prf: Optional[RNG]=None, hmac_hash_module: Optional[ModuleType]=None) -> bytes: ...
+
+class _S2V(object):
+    def __init__(self, key: bytes, ciphermod: ModuleType, cipher_params: Optional[Dict[Any, Any]]=None) -> None: ...
+
+    @staticmethod
+    def new(key: bytes, ciphermod: ModuleType) -> None: ...
+    def update(self, item: bytes) -> None: ...
+    def derive(self) -> bytes: ...
+
+def HKDF(master: bytes, key_len: int, salt: bytes, hashmod: ModuleType, num_keys: Optional[int]=1, context: Optional[bytes]=None) -> Union[bytes, Tuple[bytes, ...]]: ...
+
+def scrypt(password: str, salt: str, key_len: int, N: int, r: int, p: int, num_keys: Optional[int]=1) -> Union[bytes, Tuple[bytes, ...]]: ...
+
+def _bcrypt_decode(data: bytes) -> bytes: ...
+def _bcrypt_hash(password: bytes, cost: int, salt: bytes, constant: bytes, invert: bool) -> bytes: ...
+def bcrypt(password: Union[bytes, str], cost: int, salt: Optional[bytes]=None) -> bytes: ...
+def bcrypt_check(password: Union[bytes, str], bcrypt_hash: Union[bytes, bytearray, str]) -> None: ...
+
+@overload
+def SP800_108_Counter(master: ByteString,
+                      key_len: int,
+                      prf: PRF,
+                      num_keys: Literal[None] = None,
+                      label: ByteString = b'', context: ByteString = b'') -> bytes: ...
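# [Editor's note] SP800_108_Counter() above accepts any PRF with the shape
# prf(secret, message) -> bytes and feeds it the fixed-data layout
# [i]_4 || label || 0x00 || context || [L]_4. A hedged usage sketch with
# HMAC-SHA256 as the PRF (the key, label, and context values are illustrative):

import hashlib, hmac

def prf_hmac_sha256(secret, message):
    return hmac.new(secret, message, hashlib.sha256).digest()

# from Cryptodome.Protocol.KDF import SP800_108_Counter
# k1, k2 = SP800_108_Counter(b'\x00' * 32, 16, prf_hmac_sha256,
#                            num_keys=2, label=b'enc', context=b'session-1')
# assert len(k1) == len(k2) == 16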
+ +@overload +def SP800_108_Counter(master: ByteString, + key_len: int, + prf: PRF, + num_keys: int, + label: ByteString = b'', context: ByteString = b'') -> Tuple[bytes]: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Protocol/SecretSharing.py b/python/lib/python3.11/site-packages/Cryptodome/Protocol/SecretSharing.py new file mode 100644 index 0000000..6fdc9b4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Protocol/SecretSharing.py @@ -0,0 +1,278 @@ +# +# SecretSharing.py : distribute a secret amongst a group of participants +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util.py3compat import is_native_int +from Cryptodome.Util import number +from Cryptodome.Util.number import long_to_bytes, bytes_to_long +from Cryptodome.Random import get_random_bytes as rng + + +def _mult_gf2(f1, f2): + """Multiply two polynomials in GF(2)""" + + # Ensure f2 is the smallest + if f2 > f1: + f1, f2 = f2, f1 + z = 0 + while f2: + if f2 & 1: + z ^= f1 + f1 <<= 1 + f2 >>= 1 + return z + + +def _div_gf2(a, b): + """ + Compute division of polynomials over GF(2). + Given a and b, it finds two polynomials q and r such that: + + a = b*q + r with deg(r)<deg(b) + """ + + if (a < b): + return 0, a + + deg = number.size + q = 0 + r = a + d = deg(b) + while deg(r) >= d: + s = 1 << (deg(r) - d) + q ^= s + r ^= _mult_gf2(b, s) + return (q, r) + + +class _Element(object): + """Element of GF(2^128) field""" + + # The irreducible polynomial defining this field is 1+x+x^2+x^7+x^128 + irr_poly = 1 + 2 + 4 + 128 + 2 ** 128 + + def __init__(self, encoded_value): + """Initialize the element to a certain value. + + The value passed as parameter is internally encoded as + a 128-bit integer, where each bit represents a polynomial + coefficient. The LSB is the constant coefficient. 
+ """ + + if is_native_int(encoded_value): + self._value = encoded_value + elif len(encoded_value) == 16: + self._value = bytes_to_long(encoded_value) + else: + raise ValueError("The encoded value must be an integer or a 16 byte string") + + def __eq__(self, other): + return self._value == other._value + + def __int__(self): + """Return the field element, encoded as a 128-bit integer.""" + return self._value + + def encode(self): + """Return the field element, encoded as a 16 byte string.""" + return long_to_bytes(self._value, 16) + + def __mul__(self, factor): + + f1 = self._value + f2 = factor._value + + # Make sure that f2 is the smallest, to speed up the loop + if f2 > f1: + f1, f2 = f2, f1 + + if self.irr_poly in (f1, f2): + return _Element(0) + + mask1 = 2 ** 128 + v, z = f1, 0 + while f2: + # if f2 ^ 1: z ^= v + mask2 = int(bin(f2 & 1)[2:] * 128, base=2) + z = (mask2 & (z ^ v)) | ((mask1 - mask2 - 1) & z) + v <<= 1 + # if v & mask1: v ^= self.irr_poly + mask3 = int(bin((v >> 128) & 1)[2:] * 128, base=2) + v = (mask3 & (v ^ self.irr_poly)) | ((mask1 - mask3 - 1) & v) + f2 >>= 1 + return _Element(z) + + def __add__(self, term): + return _Element(self._value ^ term._value) + + def inverse(self): + """Return the inverse of this element in GF(2^128).""" + + # We use the Extended GCD algorithm + # http://en.wikipedia.org/wiki/Polynomial_greatest_common_divisor + + if self._value == 0: + raise ValueError("Inversion of zero") + + r0, r1 = self._value, self.irr_poly + s0, s1 = 1, 0 + while r1 > 0: + q = _div_gf2(r0, r1)[0] + r0, r1 = r1, r0 ^ _mult_gf2(q, r1) + s0, s1 = s1, s0 ^ _mult_gf2(q, s1) + return _Element(s0) + + def __pow__(self, exponent): + result = _Element(self._value) + for _ in range(exponent - 1): + result = result * self + return result + + +class Shamir(object): + """Shamir's secret sharing scheme. + + A secret is split into ``n`` shares, and it is sufficient to collect + ``k`` of them to reconstruct the secret. + """ + + @staticmethod + def split(k, n, secret, ssss=False): + """Split a secret into ``n`` shares. + + The secret can be reconstructed later using just ``k`` shares + out of the original ``n``. + Each share must be kept confidential to the person it was + assigned to. + + Each share is associated to an index (starting from 1). + + Args: + k (integer): + The sufficient number of shares to reconstruct the secret (``k < n``). + n (integer): + The number of shares that this method will create. + secret (byte string): + A byte string of 16 bytes (e.g. the AES 128 key). + ssss (bool): + If ``True``, the shares can be used with the ``ssss`` utility. + Default: ``False``. + + Return (tuples): + ``n`` tuples. A tuple is meant for each participant and it contains two items: + + 1. the unique index (an integer) + 2. the share (a byte string, 16 bytes) + """ + + # + # We create a polynomial with random coefficients in GF(2^128): + # + # p(x) = \sum_{i=0}^{k-1} c_i * x^i + # + # c_0 is the encoded secret + # + + coeffs = [_Element(rng(16)) for i in range(k - 1)] + coeffs.append(_Element(secret)) + + # Each share is y_i = p(x_i) where x_i is the public index + # associated to each of the n users. 
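# [Editor's note] make_share(), defined just below, evaluates the polynomial
# with Horner's rule: share = (...(c_{k-1} * x + c_{k-2}) * x + ...) + c_0,
# where the constant term c_0 is the secret. The same evaluation order over
# plain integers (the real code does these steps with _Element operations
# in GF(2^128)):

def horner(coeffs_high_to_low, x):
    acc = 0
    for c in coeffs_high_to_low:
        acc = acc * x + c
    return acc

assert horner([3, 2, 7], 5) == 3 * 5**2 + 2 * 5 + 7 == 92   # p(x) = 3x^2 + 2x + 7

# End-to-end use of the scheme itself (assuming this package is importable):
# from Cryptodome.Protocol.SecretSharing import Shamir
# shares = Shamir.split(2, 5, b'\x00' * 16)     # 2-of-5 sharing of a 16-byte secret
# assert Shamir.combine(shares[:2]) == b'\x00' * 16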
+
+        def make_share(user, coeffs, ssss):
+            idx = _Element(user)
+            share = _Element(0)
+            for coeff in coeffs:
+                share = idx * share + coeff
+            if ssss:
+                share += _Element(user) ** len(coeffs)
+            return share.encode()
+
+        return [(i, make_share(i, coeffs, ssss)) for i in range(1, n + 1)]
+
+    @staticmethod
+    def combine(shares, ssss=False):
+        """Recombine a secret, if enough shares are presented.
+
+        Args:
+          shares (tuples):
+            The *k* tuples, each containing the index (an integer) and
+            the share (a byte string, 16 bytes long) that were assigned to
+            a participant.
+          ssss (bool):
+            If ``True``, the shares were produced by the ``ssss`` utility.
+            Default: ``False``.
+
+        Return:
+            The original secret, as a byte string (16 bytes long).
+        """
+
+        #
+        # Given k points (x,y), the interpolation polynomial of degree k-1 is:
+        #
+        # L(x) = \sum_{j=0}^{k-1} y_j * l_j(x)
+        #
+        # where:
+        #
+        # l_j(x) = \prod_{ \overset{0 \le m \le k-1}{m \ne j} }
+        #          \frac{x - x_m}{x_j - x_m}
+        #
+        # However, in this case we are purely interested in the constant
+        # coefficient of L(x).
+        #
+
+        k = len(shares)
+
+        gf_shares = []
+        for x in shares:
+            idx = _Element(x[0])
+            value = _Element(x[1])
+            if any(y[0] == idx for y in gf_shares):
+                raise ValueError("Duplicate share")
+            if ssss:
+                value += idx ** k
+            gf_shares.append((idx, value))
+
+        result = _Element(0)
+        for j in range(k):
+            x_j, y_j = gf_shares[j]
+
+            numerator = _Element(1)
+            denominator = _Element(1)
+
+            for m in range(k):
+                x_m = gf_shares[m][0]
+                if m != j:
+                    numerator *= x_m
+                    denominator *= x_j + x_m
+            result += y_j * numerator * denominator.inverse()
+        return result.encode()
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Protocol/SecretSharing.pyi b/python/lib/python3.11/site-packages/Cryptodome/Protocol/SecretSharing.pyi
new file mode 100644
index 0000000..5952c99
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/Protocol/SecretSharing.pyi
@@ -0,0 +1,22 @@
+from typing import Union, List, Tuple, Optional
+
+def _mult_gf2(f1: int, f2: int) -> int : ...
+def _div_gf2(a: int, b: int) -> int : ...
+
+class _Element(object):
+    irr_poly: int
+    def __init__(self, encoded_value: Union[int, bytes]) -> None: ...
+    def __eq__(self, other) -> bool: ...
+    def __int__(self) -> int: ...
+    def encode(self) -> bytes: ...
+    def __mul__(self, factor: int) -> _Element: ...
+    def __add__(self, term: _Element) -> _Element: ...
+    def inverse(self) -> _Element: ...
+    def __pow__(self, exponent) -> _Element: ...
+
+class Shamir(object):
+    @staticmethod
+    def split(k: int, n: int, secret: bytes, ssss: Optional[bool]) -> List[Tuple[int, bytes]]: ...
+    @staticmethod
+    def combine(shares: List[Tuple[int, bytes]], ssss: Optional[bool]) -> bytes: ...
+
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Protocol/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/Protocol/__init__.py
new file mode 100644
index 0000000..efdf034
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/Protocol/__init__.py
@@ -0,0 +1,31 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin <helderijs@gmail.com>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2.
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = ['KDF', 'SecretSharing'] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Protocol/__init__.pyi b/python/lib/python3.11/site-packages/Cryptodome/Protocol/__init__.pyi new file mode 100644 index 0000000..377ed90 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Protocol/__init__.pyi @@ -0,0 +1 @@ +__all__ = ['KDF.pyi', 'SecretSharing.pyi'] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Protocol/_scrypt.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Protocol/_scrypt.abi3.so new file mode 100755 index 0000000..baf83a8 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Protocol/_scrypt.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/DSA.py b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/DSA.py new file mode 100644 index 0000000..dddd304 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/DSA.py @@ -0,0 +1,682 @@ +# -*- coding: utf-8 -*- +# +# PublicKey/DSA.py : DSA signature primitive +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +__all__ = ['generate', 'construct', 'DsaKey', 'import_key' ] + +import binascii +import struct +import itertools + +from Cryptodome.Util.py3compat import bchr, bord, tobytes, tostr, iter_range + +from Cryptodome import Random +from Cryptodome.IO import PKCS8, PEM +from Cryptodome.Hash import SHA256 +from Cryptodome.Util.asn1 import ( + DerObject, DerSequence, + DerInteger, DerObjectId, + DerBitString, + ) + +from Cryptodome.Math.Numbers import Integer +from Cryptodome.Math.Primality import (test_probable_prime, COMPOSITE, + PROBABLY_PRIME) + +from Cryptodome.PublicKey import (_expand_subject_public_key_info, + _create_subject_public_key_info, + _extract_subject_public_key_info) + +# ; The following ASN.1 types are relevant for DSA +# +# SubjectPublicKeyInfo ::= SEQUENCE { +# algorithm AlgorithmIdentifier, +# subjectPublicKey BIT STRING +# } +# +# id-dsa ID ::= { iso(1) member-body(2) us(840) x9-57(10040) x9cm(4) 1 } +# +# ; See RFC3279 +# Dss-Parms ::= SEQUENCE { +# p INTEGER, +# q INTEGER, +# g INTEGER +# } +# +# DSAPublicKey ::= INTEGER +# +# DSSPrivatKey_OpenSSL ::= SEQUENCE +# version INTEGER, +# p INTEGER, +# q INTEGER, +# g INTEGER, +# y INTEGER, +# x INTEGER +# } +# + +class DsaKey(object): + r"""Class defining an actual DSA key. + Do not instantiate directly. + Use :func:`generate`, :func:`construct` or :func:`import_key` instead. + + :ivar p: DSA modulus + :vartype p: integer + + :ivar q: Order of the subgroup + :vartype q: integer + + :ivar g: Generator + :vartype g: integer + + :ivar y: Public key + :vartype y: integer + + :ivar x: Private key + :vartype x: integer + + :undocumented: exportKey, publickey + """ + + _keydata = ['y', 'g', 'p', 'q', 'x'] + + def __init__(self, key_dict): + input_set = set(key_dict.keys()) + public_set = set(('y' , 'g', 'p', 'q')) + if not public_set.issubset(input_set): + raise ValueError("Some DSA components are missing = %s" % + str(public_set - input_set)) + extra_set = input_set - public_set + if extra_set and extra_set != set(('x',)): + raise ValueError("Unknown DSA components = %s" % + str(extra_set - set(('x',)))) + self._key = dict(key_dict) + + def _sign(self, m, k): + if not self.has_private(): + raise TypeError("DSA public key cannot be used for signing") + if not (1 < k < self.q): + raise ValueError("k is not between 2 and q-1") + + x, q, p, g = [self._key[comp] for comp in ['x', 'q', 'p', 'g']] + + blind_factor = Integer.random_range(min_inclusive=1, + max_exclusive=q) + inv_blind_k = (blind_factor * k).inverse(q) + blind_x = x * blind_factor + + r = pow(g, k, p) % q # r = (g**k mod p) mod q + s = (inv_blind_k * (blind_factor * m + blind_x * r)) % q + return map(int, (r, s)) + + def _verify(self, m, sig): + r, s = sig + y, q, p, g = [self._key[comp] for comp in ['y', 'q', 'p', 'g']] + if not (0 < r < q) or not (0 < s < q): + return False + w = Integer(s).inverse(q) + u1 = (w * m) % q + u2 = (w * r) % q + v = (pow(g, u1, p) * pow(y, u2, p) % p) % q + return v == r + + def has_private(self): + """Whether this is a DSA private key""" + + return 'x' in self._key + + def can_encrypt(self): # legacy + return False + + def can_sign(self): # legacy + return True + + def public_key(self): + """A matching DSA public key. 
+ + Returns: + a new :class:`DsaKey` object + """ + + public_components = dict((k, self._key[k]) for k in ('y', 'g', 'p', 'q')) + return DsaKey(public_components) + + def __eq__(self, other): + if bool(self.has_private()) != bool(other.has_private()): + return False + + result = True + for comp in self._keydata: + # _key is a plain dict, so compare via .get(); getattr() on a dict + # would always return the default and make any two keys compare equal + result = result and (self._key.get(comp, None) == + other._key.get(comp, None)) + return result + + def __ne__(self, other): + return not self.__eq__(other) + + def __getstate__(self): + # DSA key is not picklable + from pickle import PicklingError + raise PicklingError + + def domain(self): + """The DSA domain parameters. + + Returns: + tuple : (p,q,g) + """ + + return tuple(int(self._key[comp]) for comp in ('p', 'q', 'g')) + + def __repr__(self): + attrs = [] + for k in self._keydata: + if k == 'p': + bits = Integer(self.p).size_in_bits() + attrs.append("p(%d)" % (bits,)) + elif hasattr(self, k): + attrs.append(k) + if self.has_private(): + attrs.append("private") + # PY3K: This is meant to be text, do not change to bytes (data) + return "<%s @0x%x %s>" % (self.__class__.__name__, id(self), ",".join(attrs)) + + def __getattr__(self, item): + try: + return int(self._key[item]) + except KeyError: + raise AttributeError(item) + + def export_key(self, format='PEM', pkcs8=None, passphrase=None, + protection=None, randfunc=None): + """Export this DSA key. + + Args: + format (string): + The encoding for the output: + + - *'PEM'* (default). ASCII as per `RFC1421`_/ `RFC1423`_. + - *'DER'*. Binary ASN.1 encoding. + - *'OpenSSH'*. ASCII one-liner as per `RFC4253`_. + Only suitable for public keys, not for private keys. + + passphrase (string): + *Private keys only*. The pass phrase to protect the output. + + pkcs8 (boolean): + *Private keys only*. If ``True`` (default), the key is encoded + with `PKCS#8`_. If ``False``, it is encoded in the custom + OpenSSL/OpenSSH container. + + protection (string): + *Only in combination with a pass phrase*. + The encryption scheme to use to protect the output. + + If :data:`pkcs8` takes value ``True``, this is the PKCS#8 + algorithm to use for deriving the secret and encrypting + the private DSA key. + For a complete list of algorithms, see :mod:`Cryptodome.IO.PKCS8`. + The default is *PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC*. + + If :data:`pkcs8` is ``False``, the obsolete PEM encryption scheme is + used. It is based on MD5 for key derivation, and Triple DES for + encryption. Parameter :data:`protection` is then ignored. + + The combination ``format='DER'`` and ``pkcs8=False`` is not allowed + if a passphrase is present. + + randfunc (callable): + A function that returns random bytes. + By default it is :func:`Cryptodome.Random.get_random_bytes`. + + Returns: + byte string : the encoded key + + Raises: + ValueError : when the format is unknown or when you try to encrypt a private + key with *DER* format and OpenSSL/OpenSSH. + + .. warning:: + If you don't provide a pass phrase, the private key will be + exported in the clear! + + .. _RFC1421: http://www.ietf.org/rfc/rfc1421.txt + .. _RFC1423: http://www.ietf.org/rfc/rfc1423.txt + .. _RFC4253: http://www.ietf.org/rfc/rfc4253.txt + ..
_`PKCS#8`: http://www.ietf.org/rfc/rfc5208.txt + """ + + if passphrase is not None: + passphrase = tobytes(passphrase) + + if randfunc is None: + randfunc = Random.get_random_bytes + + if format == 'OpenSSH': + tup1 = [self._key[x].to_bytes() for x in ('p', 'q', 'g', 'y')] + + def func(x): + if (bord(x[0]) & 0x80): + return bchr(0) + x + else: + return x + + tup2 = [func(x) for x in tup1] + keyparts = [b'ssh-dss'] + tup2 + keystring = b''.join( + [struct.pack(">I", len(kp)) + kp for kp in keyparts] + ) + return b'ssh-dss ' + binascii.b2a_base64(keystring)[:-1] + + # DER format is always used, even in case of PEM, which simply + # encodes it into BASE64. + params = DerSequence([self.p, self.q, self.g]) + if self.has_private(): + if pkcs8 is None: + pkcs8 = True + if pkcs8: + if not protection: + protection = 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC' + private_key = DerInteger(self.x).encode() + binary_key = PKCS8.wrap( + private_key, oid, passphrase, + protection, key_params=params, + randfunc=randfunc + ) + if passphrase: + key_type = 'ENCRYPTED PRIVATE' + else: + key_type = 'PRIVATE' + passphrase = None + else: + if format != 'PEM' and passphrase: + raise ValueError("DSA private key cannot be encrypted") + ints = [0, self.p, self.q, self.g, self.y, self.x] + binary_key = DerSequence(ints).encode() + key_type = "DSA PRIVATE" + else: + if pkcs8: + raise ValueError("PKCS#8 is only meaningful for private keys") + + binary_key = _create_subject_public_key_info(oid, + DerInteger(self.y), params) + key_type = "PUBLIC" + + if format == 'DER': + return binary_key + if format == 'PEM': + pem_str = PEM.encode( + binary_key, key_type + " KEY", + passphrase, randfunc + ) + return tobytes(pem_str) + raise ValueError("Unknown key format '%s'. Cannot export the DSA key." 
% format) + + # Backward-compatibility + exportKey = export_key + publickey = public_key + + # Methods defined in PyCryptodome that we don't support anymore + + def sign(self, M, K): + raise NotImplementedError("Use module Cryptodome.Signature.DSS instead") + + def verify(self, M, signature): + raise NotImplementedError("Use module Cryptodome.Signature.DSS instead") + + def encrypt(self, plaintext, K): + raise NotImplementedError + + def decrypt(self, ciphertext): + raise NotImplementedError + + def blind(self, M, B): + raise NotImplementedError + + def unblind(self, M, B): + raise NotImplementedError + + def size(self): + raise NotImplementedError + + +def _generate_domain(L, randfunc): + """Generate a new set of DSA domain parameters""" + + N = { 1024:160, 2048:224, 3072:256 }.get(L) + if N is None: + raise ValueError("Invalid modulus length (%d)" % L) + + outlen = SHA256.digest_size * 8 + n = (L + outlen - 1) // outlen - 1 # ceil(L/outlen) -1 + b_ = L - 1 - (n * outlen) + + # Generate q (A.1.1.2) + q = Integer(4) + upper_bit = 1 << (N - 1) + while test_probable_prime(q, randfunc) != PROBABLY_PRIME: + seed = randfunc(64) + U = Integer.from_bytes(SHA256.new(seed).digest()) & (upper_bit - 1) + q = U | upper_bit | 1 + + assert(q.size_in_bits() == N) + + # Generate p (A.1.1.2) + offset = 1 + upper_bit = 1 << (L - 1) + while True: + V = [ SHA256.new(seed + Integer(offset + j).to_bytes()).digest() + for j in iter_range(n + 1) ] + V = [ Integer.from_bytes(v) for v in V ] + W = sum([V[i] * (1 << (i * outlen)) for i in iter_range(n)], + (V[n] & ((1 << b_) - 1)) * (1 << (n * outlen))) + + X = Integer(W + upper_bit) # 2^{L-1} < X < 2^{L} + assert(X.size_in_bits() == L) + + c = X % (q * 2) + p = X - (c - 1) # 2q divides (p-1) + if p.size_in_bits() == L and \ + test_probable_prime(p, randfunc) == PROBABLY_PRIME: + break + offset += n + 1 + + # Generate g (A.2.3, index=1) + e = (p - 1) // q + for count in itertools.count(1): + U = seed + b"ggen" + bchr(1) + Integer(count).to_bytes() + W = Integer.from_bytes(SHA256.new(U).digest()) + g = pow(W, e, p) + if g != 1: + break + + return (p, q, g, seed) + + +def generate(bits, randfunc=None, domain=None): + """Generate a new DSA key pair. + + The algorithm follows Appendix A.1/A.2 and B.1 of `FIPS 186-4`_, + respectively for domain generation and key pair generation. + + Args: + bits (integer): + Key length, or size (in bits) of the DSA modulus *p*. + It must be 1024, 2048 or 3072. + + randfunc (callable): + Random number generation function; it accepts a single integer N + and return a string of random data N bytes long. + If not specified, :func:`Cryptodome.Random.get_random_bytes` is used. + + domain (tuple): + The DSA domain parameters *p*, *q* and *g* as a list of 3 + integers. Size of *p* and *q* must comply to `FIPS 186-4`_. + If not specified, the parameters are created anew. + + Returns: + :class:`DsaKey` : a new DSA key object + + Raises: + ValueError : when **bits** is too little, too big, or not a multiple of 64. + + .. 
_FIPS 186-4: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf + """ + + if randfunc is None: + randfunc = Random.get_random_bytes + + if domain: + p, q, g = map(Integer, domain) + + # Perform consistency check on domain parameters + # P and Q must be prime + fmt_error = test_probable_prime(p) == COMPOSITE + fmt_error |= test_probable_prime(q) == COMPOSITE + # Verify Lagrange's theorem for sub-group + fmt_error |= ((p - 1) % q) != 0 + fmt_error |= g <= 1 or g >= p + fmt_error |= pow(g, q, p) != 1 + if fmt_error: + raise ValueError("Invalid DSA domain parameters") + else: + p, q, g, _ = _generate_domain(bits, randfunc) + + L = p.size_in_bits() + N = q.size_in_bits() + + if L != bits: + raise ValueError("Mismatch between size of modulus (%d)" + " and 'bits' parameter (%d)" % (L, bits)) + + if (L, N) not in [(1024, 160), (2048, 224), + (2048, 256), (3072, 256)]: + raise ValueError("Lengths of p and q (%d, %d) are not compatible" + " to FIPS 186-3" % (L, N)) + + if not 1 < g < p: + raise ValueError("Incorrect DSA generator") + + # B.1.1 + c = Integer.random(exact_bits=N + 64, randfunc=randfunc) + x = c % (q - 1) + 1 # 1 <= x <= q-1 + y = pow(g, x, p) + + key_dict = { 'y':y, 'g':g, 'p':p, 'q':q, 'x':x } + return DsaKey(key_dict) + + +def construct(tup, consistency_check=True): + """Construct a DSA key from a tuple of valid DSA components. + + Args: + tup (tuple): + A tuple of long integers, with 4 or 5 items + in the following order: + + 1. Public key (*y*). + 2. Sub-group generator (*g*). + 3. Modulus, finite field order (*p*). + 4. Sub-group order (*q*). + 5. Private key (*x*). Optional. + + consistency_check (boolean): + If ``True``, the library will verify that the provided components + fulfil the main DSA properties. + + Raises: + ValueError: when the key being imported fails the most basic DSA validity checks.
+ + Returns: + :class:`DsaKey` : a DSA key object + """ + + key_dict = dict(zip(('y', 'g', 'p', 'q', 'x'), map(Integer, tup))) + key = DsaKey(key_dict) + + fmt_error = False + if consistency_check: + # P and Q must be prime + fmt_error = test_probable_prime(key.p) == COMPOSITE + fmt_error |= test_probable_prime(key.q) == COMPOSITE + # Verify Lagrange's theorem for sub-group + fmt_error |= ((key.p - 1) % key.q) != 0 + fmt_error |= key.g <= 1 or key.g >= key.p + fmt_error |= pow(key.g, key.q, key.p) != 1 + # Public key + fmt_error |= key.y <= 0 or key.y >= key.p + if hasattr(key, 'x'): + fmt_error |= key.x <= 0 or key.x >= key.q + fmt_error |= pow(key.g, key.x, key.p) != key.y + + if fmt_error: + raise ValueError("Invalid DSA key components") + + return key + + +# Dss-Parms ::= SEQUENCE { +# p OCTET STRING, +# q OCTET STRING, +# g OCTET STRING +# } +# DSAPublicKey ::= INTEGER -- public key, y + +def _import_openssl_private(encoded, passphrase, params): + if params: + raise ValueError("DSA private key already comes with parameters") + der = DerSequence().decode(encoded, nr_elements=6, only_ints_expected=True) + if der[0] != 0: + raise ValueError("No version found") + tup = [der[comp] for comp in (4, 3, 1, 2, 5)] + return construct(tup) + + +def _import_subjectPublicKeyInfo(encoded, passphrase, params): + + algoid, encoded_key, emb_params = _expand_subject_public_key_info(encoded) + if algoid != oid: + raise ValueError("No DSA subjectPublicKeyInfo") + if params and emb_params: + raise ValueError("Too many DSA parameters") + + y = DerInteger().decode(encoded_key).value + p, q, g = list(DerSequence().decode(params or emb_params)) + tup = (y, g, p, q) + return construct(tup) + + +def _import_x509_cert(encoded, passphrase, params): + + sp_info = _extract_subject_public_key_info(encoded) + return _import_subjectPublicKeyInfo(sp_info, None, params) + + +def _import_pkcs8(encoded, passphrase, params): + if params: + raise ValueError("PKCS#8 already includes parameters") + k = PKCS8.unwrap(encoded, passphrase) + if k[0] != oid: + raise ValueError("No PKCS#8 encoded DSA key") + x = DerInteger().decode(k[1]).value + p, q, g = list(DerSequence().decode(k[2])) + tup = (pow(g, x, p), g, p, q, x) + return construct(tup) + + +def _import_key_der(key_data, passphrase, params): + """Import a DSA key (public or private half), encoded in DER form.""" + + decodings = (_import_openssl_private, + _import_subjectPublicKeyInfo, + _import_x509_cert, + _import_pkcs8) + + for decoding in decodings: + try: + return decoding(key_data, passphrase, params) + except ValueError: + pass + + raise ValueError("DSA key format is not supported") + + +def import_key(extern_key, passphrase=None): + """Import a DSA key. + + Args: + extern_key (string or byte string): + The DSA key to import. + + The following formats are supported for a DSA **public** key: + + - X.509 certificate (binary DER or PEM) + - X.509 ``subjectPublicKeyInfo`` (binary DER or PEM) + - OpenSSH (ASCII one-liner, see `RFC4253`_) + + The following formats are supported for a DSA **private** key: + + - `PKCS#8`_ ``PrivateKeyInfo`` or ``EncryptedPrivateKeyInfo`` + DER SEQUENCE (binary or PEM) + - OpenSSL/OpenSSH custom format (binary or PEM) + + For details about the PEM encoding, see `RFC1421`_/`RFC1423`_. + + passphrase (string): + In case of an encrypted private key, this is the pass phrase + from which the decryption key is derived. + + Encryption may be applied either at the `PKCS#8`_ or at the PEM level. 
+ + Returns: + :class:`DsaKey` : a DSA key object + + Raises: + ValueError : when the given key cannot be parsed (possibly because + the pass phrase is wrong). + + .. _RFC1421: http://www.ietf.org/rfc/rfc1421.txt + .. _RFC1423: http://www.ietf.org/rfc/rfc1423.txt + .. _RFC4253: http://www.ietf.org/rfc/rfc4253.txt + .. _PKCS#8: http://www.ietf.org/rfc/rfc5208.txt + """ + + extern_key = tobytes(extern_key) + if passphrase is not None: + passphrase = tobytes(passphrase) + + if extern_key.startswith(b'-----'): + # This is probably a PEM encoded key + (der, marker, enc_flag) = PEM.decode(tostr(extern_key), passphrase) + if enc_flag: + passphrase = None + return _import_key_der(der, passphrase, None) + + if extern_key.startswith(b'ssh-dss '): + # This is probably a public OpenSSH key + keystring = binascii.a2b_base64(extern_key.split(b' ')[1]) + keyparts = [] + while len(keystring) > 4: + length = struct.unpack(">I", keystring[:4])[0] + keyparts.append(keystring[4:4 + length]) + keystring = keystring[4 + length:] + if keyparts[0] == b"ssh-dss": + tup = [Integer.from_bytes(keyparts[x]) for x in (4, 3, 1, 2)] + return construct(tup) + + if len(extern_key) > 0 and bord(extern_key[0]) == 0x30: + # This is probably a DER encoded key + return _import_key_der(extern_key, passphrase, None) + + raise ValueError("DSA key format is not supported") + + +# Backward compatibility +importKey = import_key + +#: `Object ID`_ for a DSA key. +#: +#: id-dsa ID ::= { iso(1) member-body(2) us(840) x9-57(10040) x9cm(4) 1 } +#: +#: .. _`Object ID`: http://www.alvestrand.no/objectid/1.2.840.10040.4.1.html +oid = "1.2.840.10040.4.1" diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/DSA.pyi b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/DSA.pyi new file mode 100644 index 0000000..354ac1f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/DSA.pyi @@ -0,0 +1,31 @@ +from typing import Dict, Tuple, Callable, Union, Optional + +__all__ = ['generate', 'construct', 'DsaKey', 'import_key' ] + +RNG = Callable[[int], bytes] + +class DsaKey(object): + def __init__(self, key_dict: Dict[str, int]) -> None: ... + def has_private(self) -> bool: ... + def can_encrypt(self) -> bool: ... # legacy + def can_sign(self) -> bool: ... # legacy + def public_key(self) -> DsaKey: ... + def __eq__(self, other: object) -> bool: ... + def __ne__(self, other: object) -> bool: ... + def __getstate__(self) -> None: ... + def domain(self) -> Tuple[int, int, int]: ... + def __repr__(self) -> str: ... + def __getattr__(self, item: str) -> int: ... + def export_key(self, format: Optional[str]="PEM", pkcs8: Optional[bool]=None, passphrase: Optional[str]=None, + protection: Optional[str]=None, randfunc: Optional[RNG]=None) -> bytes: ... + # Backward-compatibility + exportKey = export_key + publickey = public_key + +def generate(bits: int, randfunc: Optional[RNG]=None, domain: Optional[Tuple[int, int, int]]=None) -> DsaKey: ... +def construct(tup: Union[Tuple[int, int, int, int], Tuple[int, int, int, int, int]], consistency_check: Optional[bool]=True) -> DsaKey: ... +def import_key(extern_key: Union[str, bytes], passphrase: Optional[str]=None) -> DsaKey: ... 
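A minimal round-trip sketch of the DSA API defined and stubbed above (hedged: the key size and variable names are illustrative):

    from Cryptodome.PublicKey import DSA

    key = DSA.generate(2048)
    private_pem = key.export_key()               # clear-text PKCS#8 PEM by default
    public_pem = key.public_key().export_key()   # subjectPublicKeyInfo, PEM-wrapped
    restored = DSA.import_key(private_pem)
    assert restored.has_private() and restored.y == key.y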
+# Backward compatibility +importKey = import_key + +oid: str diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ECC.py b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ECC.py new file mode 100644 index 0000000..e9c57ba --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ECC.py @@ -0,0 +1,1800 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
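DsaKey.sign() and verify() above deliberately raise NotImplementedError and point to Cryptodome.Signature.DSS, which is where actual signing lives. A hedged sketch (hash choice and message are illustrative):

    from Cryptodome.PublicKey import DSA
    from Cryptodome.Signature import DSS
    from Cryptodome.Hash import SHA256

    key = DSA.generate(2048)
    h = SHA256.new(b"attack at dawn")
    signature = DSS.new(key, 'fips-186-3').sign(h)
    # verify() raises ValueError if the signature is invalid
    DSS.new(key.public_key(), 'fips-186-3').verify(h, signature)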
+# =================================================================== + +from __future__ import print_function + +import re +import struct +import binascii +from collections import namedtuple + +from Cryptodome.Util.py3compat import bord, tobytes, tostr, bchr, is_string +from Cryptodome.Util.number import bytes_to_long, long_to_bytes + +from Cryptodome.Math.Numbers import Integer +from Cryptodome.Util.asn1 import (DerObjectId, DerOctetString, DerSequence, + DerBitString) + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, VoidPointer, + SmartPointer, c_size_t, c_uint8_ptr, + c_ulonglong, null_pointer) + +from Cryptodome.PublicKey import (_expand_subject_public_key_info, + _create_subject_public_key_info, + _extract_subject_public_key_info) + +from Cryptodome.Hash import SHA512, SHAKE256 + +from Cryptodome.Random import get_random_bytes +from Cryptodome.Random.random import getrandbits + + +_ec_lib = load_pycryptodome_raw_lib("Cryptodome.PublicKey._ec_ws", """ +typedef void EcContext; +typedef void EcPoint; +int ec_ws_new_context(EcContext **pec_ctx, + const uint8_t *modulus, + const uint8_t *b, + const uint8_t *order, + size_t len, + uint64_t seed); +void ec_free_context(EcContext *ec_ctx); +int ec_ws_new_point(EcPoint **pecp, + const uint8_t *x, + const uint8_t *y, + size_t len, + const EcContext *ec_ctx); +void ec_ws_free_point(EcPoint *ecp); +int ec_ws_get_xy(uint8_t *x, + uint8_t *y, + size_t len, + const EcPoint *ecp); +int ec_ws_double(EcPoint *p); +int ec_ws_add(EcPoint *ecpa, EcPoint *ecpb); +int ec_ws_scalar(EcPoint *ecp, + const uint8_t *k, + size_t len, + uint64_t seed); +int ec_ws_clone(EcPoint **pecp2, const EcPoint *ecp); +int ec_ws_cmp(const EcPoint *ecp1, const EcPoint *ecp2); +int ec_ws_neg(EcPoint *p); +""") + +_ed25519_lib = load_pycryptodome_raw_lib("Cryptodome.PublicKey._ed25519", """ +typedef void Point; +int ed25519_new_point(Point **out, + const uint8_t x[32], + const uint8_t y[32], + size_t modsize, + const void *context); +int ed25519_clone(Point **P, const Point *Q); +void ed25519_free_point(Point *p); +int ed25519_cmp(const Point *p1, const Point *p2); +int ed25519_neg(Point *p); +int ed25519_get_xy(uint8_t *xb, uint8_t *yb, size_t modsize, Point *p); +int ed25519_double(Point *p); +int ed25519_add(Point *P1, const Point *P2); +int ed25519_scalar(Point *P, const uint8_t *scalar, size_t scalar_len, uint64_t seed); +""") + +_ed448_lib = load_pycryptodome_raw_lib("Cryptodome.PublicKey._ed448", """ +typedef void EcContext; +typedef void PointEd448; +int ed448_new_context(EcContext **pec_ctx); +void ed448_context(EcContext *ec_ctx); +void ed448_free_context(EcContext *ec_ctx); +int ed448_new_point(PointEd448 **out, + const uint8_t x[56], + const uint8_t y[56], + size_t len, + const EcContext *context); +int ed448_clone(PointEd448 **P, const PointEd448 *Q); +void ed448_free_point(PointEd448 *p); +int ed448_cmp(const PointEd448 *p1, const PointEd448 *p2); +int ed448_neg(PointEd448 *p); +int ed448_get_xy(uint8_t *xb, uint8_t *yb, size_t len, const PointEd448 *p); +int ed448_double(PointEd448 *p); +int ed448_add(PointEd448 *P1, const PointEd448 *P2); +int ed448_scalar(PointEd448 *P, const uint8_t *scalar, size_t scalar_len, uint64_t seed); +""") + + +def lib_func(ecc_obj, func_name): + if ecc_obj._curve.desc == "Ed25519": + result = getattr(_ed25519_lib, "ed25519_" + func_name) + elif ecc_obj._curve.desc == "Ed448": + result = getattr(_ed448_lib, "ed448_" + func_name) + else: + result = getattr(_ec_lib, "ec_ws_" + func_name) + return result + +# +# 
_curves is a database of curve parameters. Items are indexed by their +# human-friendly name, suchas "P-256". Each item has the following fields: +# - p: the prime number that defines the finite field for all modulo operations +# - b: the constant in the Short Weierstrass curve equation +# - order: the number of elements in the group with the generator below +# - Gx the affine coordinate X of the generator point +# - Gy the affine coordinate Y of the generator point +# - G the generator, as an EccPoint object +# - modulus_bits the minimum number of bits for encoding the modulus p +# - oid an ASCII string with the registered ASN.1 Object ID +# - context a raw pointer to memory holding a context for all curve operations (can be NULL) +# - desc an ASCII string describing the curve +# - openssh the ASCII string used in OpenSSH id files for public keys on this curve +# - name the ASCII string which is also a valid key in _curves + + +_Curve = namedtuple("_Curve", "p b order Gx Gy G modulus_bits oid context desc openssh name") +_curves = {} + + +p192_names = ["p192", "NIST P-192", "P-192", "prime192v1", "secp192r1", + "nistp192"] + + +def init_p192(): + p = 0xfffffffffffffffffffffffffffffffeffffffffffffffff + b = 0x64210519e59c80e70fa7e9ab72243049feb8deecc146b9b1 + order = 0xffffffffffffffffffffffff99def836146bc9b1b4d22831 + Gx = 0x188da80eb03090f67cbf20eb43a18800f4ff0afd82ff1012 + Gy = 0x07192b95ffc8da78631011ed6b24cdd573f977a11e794811 + + p192_modulus = long_to_bytes(p, 24) + p192_b = long_to_bytes(b, 24) + p192_order = long_to_bytes(order, 24) + + ec_p192_context = VoidPointer() + result = _ec_lib.ec_ws_new_context(ec_p192_context.address_of(), + c_uint8_ptr(p192_modulus), + c_uint8_ptr(p192_b), + c_uint8_ptr(p192_order), + c_size_t(len(p192_modulus)), + c_ulonglong(getrandbits(64)) + ) + if result: + raise ImportError("Error %d initializing P-192 context" % result) + + context = SmartPointer(ec_p192_context.get(), _ec_lib.ec_free_context) + p192 = _Curve(Integer(p), + Integer(b), + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 192, + "1.2.840.10045.3.1.1", # ANSI X9.62 / SEC2 + context, + "NIST P-192", + "ecdsa-sha2-nistp192", + "p192") + global p192_names + _curves.update(dict.fromkeys(p192_names, p192)) + + +init_p192() +del init_p192 + + +p224_names = ["p224", "NIST P-224", "P-224", "prime224v1", "secp224r1", + "nistp224"] + + +def init_p224(): + p = 0xffffffffffffffffffffffffffffffff000000000000000000000001 + b = 0xb4050a850c04b3abf54132565044b0b7d7bfd8ba270b39432355ffb4 + order = 0xffffffffffffffffffffffffffff16a2e0b8f03e13dd29455c5c2a3d + Gx = 0xb70e0cbd6bb4bf7f321390b94a03c1d356c21122343280d6115c1d21 + Gy = 0xbd376388b5f723fb4c22dfe6cd4375a05a07476444d5819985007e34 + + p224_modulus = long_to_bytes(p, 28) + p224_b = long_to_bytes(b, 28) + p224_order = long_to_bytes(order, 28) + + ec_p224_context = VoidPointer() + result = _ec_lib.ec_ws_new_context(ec_p224_context.address_of(), + c_uint8_ptr(p224_modulus), + c_uint8_ptr(p224_b), + c_uint8_ptr(p224_order), + c_size_t(len(p224_modulus)), + c_ulonglong(getrandbits(64)) + ) + if result: + raise ImportError("Error %d initializing P-224 context" % result) + + context = SmartPointer(ec_p224_context.get(), _ec_lib.ec_free_context) + p224 = _Curve(Integer(p), + Integer(b), + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 224, + "1.3.132.0.33", # SEC 2 + context, + "NIST P-224", + "ecdsa-sha2-nistp224", + "p224") + global p224_names + _curves.update(dict.fromkeys(p224_names, p224)) + + +init_p224() +del init_p224 + + 
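Each init_* function above registers one short-Weierstrass curve whose equation uses a = -3, i.e. y^2 = x^3 - 3x + b (mod p). A self-contained sanity check with plain integers (values copied from init_p192 above; no library calls involved):

    p = 0xfffffffffffffffffffffffffffffffeffffffffffffffff
    b = 0x64210519e59c80e70fa7e9ab72243049feb8deecc146b9b1
    Gx = 0x188da80eb03090f67cbf20eb43a18800f4ff0afd82ff1012
    Gy = 0x07192b95ffc8da78631011ed6b24cdd573f977a11e794811
    assert pow(Gy, 2, p) == (pow(Gx, 3, p) - 3 * Gx + b) % p  # generator lies on P-192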
+p256_names = ["p256", "NIST P-256", "P-256", "prime256v1", "secp256r1", + "nistp256"] + + +def init_p256(): + p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff + b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b + order = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551 + Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296 + Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5 + + p256_modulus = long_to_bytes(p, 32) + p256_b = long_to_bytes(b, 32) + p256_order = long_to_bytes(order, 32) + + ec_p256_context = VoidPointer() + result = _ec_lib.ec_ws_new_context(ec_p256_context.address_of(), + c_uint8_ptr(p256_modulus), + c_uint8_ptr(p256_b), + c_uint8_ptr(p256_order), + c_size_t(len(p256_modulus)), + c_ulonglong(getrandbits(64)) + ) + if result: + raise ImportError("Error %d initializing P-256 context" % result) + + context = SmartPointer(ec_p256_context.get(), _ec_lib.ec_free_context) + p256 = _Curve(Integer(p), + Integer(b), + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 256, + "1.2.840.10045.3.1.7", # ANSI X9.62 / SEC2 + context, + "NIST P-256", + "ecdsa-sha2-nistp256", + "p256") + global p256_names + _curves.update(dict.fromkeys(p256_names, p256)) + + +init_p256() +del init_p256 + + +p384_names = ["p384", "NIST P-384", "P-384", "prime384v1", "secp384r1", + "nistp384"] + + +def init_p384(): + p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffff0000000000000000ffffffff + b = 0xb3312fa7e23ee7e4988e056be3f82d19181d9c6efe8141120314088f5013875ac656398d8a2ed19d2a85c8edd3ec2aef + order = 0xffffffffffffffffffffffffffffffffffffffffffffffffc7634d81f4372ddf581a0db248b0a77aecec196accc52973 + Gx = 0xaa87ca22be8b05378eb1c71ef320ad746e1d3b628ba79b9859f741e082542a385502f25dbf55296c3a545e3872760aB7 + Gy = 0x3617de4a96262c6f5d9e98bf9292dc29f8f41dbd289a147ce9da3113b5f0b8c00a60b1ce1d7e819d7a431d7c90ea0e5F + + p384_modulus = long_to_bytes(p, 48) + p384_b = long_to_bytes(b, 48) + p384_order = long_to_bytes(order, 48) + + ec_p384_context = VoidPointer() + result = _ec_lib.ec_ws_new_context(ec_p384_context.address_of(), + c_uint8_ptr(p384_modulus), + c_uint8_ptr(p384_b), + c_uint8_ptr(p384_order), + c_size_t(len(p384_modulus)), + c_ulonglong(getrandbits(64)) + ) + if result: + raise ImportError("Error %d initializing P-384 context" % result) + + context = SmartPointer(ec_p384_context.get(), _ec_lib.ec_free_context) + p384 = _Curve(Integer(p), + Integer(b), + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 384, + "1.3.132.0.34", # SEC 2 + context, + "NIST P-384", + "ecdsa-sha2-nistp384", + "p384") + global p384_names + _curves.update(dict.fromkeys(p384_names, p384)) + + +init_p384() +del init_p384 + + +p521_names = ["p521", "NIST P-521", "P-521", "prime521v1", "secp521r1", + "nistp521"] + + +def init_p521(): + p = 0x000001ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff + b = 0x00000051953eb9618e1c9a1f929a21a0b68540eea2da725b99b315f3b8b489918ef109e156193951ec7e937b1652c0bd3bb1bf073573df883d2c34f1ef451fd46b503f00 + order = 0x000001fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffa51868783bf2f966b7fcc0148f709a5d03bb5c9b8899c47aebb6fb71e91386409 + Gx = 0x000000c6858e06b70404e9cd9e3ecb662395b4429c648139053fb521f828af606b4d3dbaa14b5e77efe75928fe1dc127a2ffa8de3348b3c1856a429bf97e7e31c2e5bd66 + Gy = 
0x0000011839296a789a3bc0045c8a5fb42c7d1bd998f54449579b446817afbd17273e662c97ee72995ef42640c550b9013fad0761353c7086a272c24088be94769fd16650 + + p521_modulus = long_to_bytes(p, 66) + p521_b = long_to_bytes(b, 66) + p521_order = long_to_bytes(order, 66) + + ec_p521_context = VoidPointer() + result = _ec_lib.ec_ws_new_context(ec_p521_context.address_of(), + c_uint8_ptr(p521_modulus), + c_uint8_ptr(p521_b), + c_uint8_ptr(p521_order), + c_size_t(len(p521_modulus)), + c_ulonglong(getrandbits(64)) + ) + if result: + raise ImportError("Error %d initializing P-521 context" % result) + + context = SmartPointer(ec_p521_context.get(), _ec_lib.ec_free_context) + p521 = _Curve(Integer(p), + Integer(b), + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 521, + "1.3.132.0.35", # SEC 2 + context, + "NIST P-521", + "ecdsa-sha2-nistp521", + "p521") + global p521_names + _curves.update(dict.fromkeys(p521_names, p521)) + + +init_p521() +del init_p521 + + +ed25519_names = ["ed25519", "Ed25519"] + + +def init_ed25519(): + p = 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed # 2**255 - 19 + order = 0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed + Gx = 0x216936d3cd6e53fec0a4e231fdd6dc5c692cc7609525a7b2c9562d608f25d51a + Gy = 0x6666666666666666666666666666666666666666666666666666666666666658 + + ed25519 = _Curve(Integer(p), + None, + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 255, + "1.3.101.112", # RFC8410 + None, + "Ed25519", # Used throughout; do not change + "ssh-ed25519", + "ed25519") + global ed25519_names + _curves.update(dict.fromkeys(ed25519_names, ed25519)) + + +init_ed25519() +del init_ed25519 + + +ed448_names = ["ed448", "Ed448"] + + +def init_ed448(): + p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffffffffffffffffffffffffffffffffffffffffffffffffffff # 2**448 - 2**224 - 1 + order = 0x3fffffffffffffffffffffffffffffffffffffffffffffffffffffff7cca23e9c44edb49aed63690216cc2728dc58f552378c292ab5844f3 + Gx = 0x4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e + Gy = 0x693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14 + + ed448_context = VoidPointer() + result = _ed448_lib.ed448_new_context(ed448_context.address_of()) + if result: + raise ImportError("Error %d initializing Ed448 context" % result) + + context = SmartPointer(ed448_context.get(), _ed448_lib.ed448_free_context) + + ed448 = _Curve(Integer(p), + None, + Integer(order), + Integer(Gx), + Integer(Gy), + None, + 448, + "1.3.101.113", # RFC8410 + context, + "Ed448", # Used throughout; do not change + None, + "ed448") + global ed448_names + _curves.update(dict.fromkeys(ed448_names, ed448)) + + +init_ed448() +del init_ed448 + + +class UnsupportedEccFeature(ValueError): + pass + + +class EccPoint(object): + """A class to model a point on an Elliptic Curve. 
+ + The class supports operators for: + + * Adding two points: ``R = S + T`` + * In-place addition: ``S += T`` + * Negating a point: ``R = -T`` + * Comparing two points: ``if S == T: ...`` or ``if S != T: ...`` + * Multiplying a point by a scalar: ``R = S*k`` + * In-place multiplication by a scalar: ``T *= k`` + + :ivar x: The affine X-coordinate of the ECC point + :vartype x: integer + + :ivar y: The affine Y-coordinate of the ECC point + :vartype y: integer + + :ivar xy: The tuple with affine X- and Y- coordinates + """ + + def __init__(self, x, y, curve="p256"): + + try: + self._curve = _curves[curve] + except KeyError: + raise ValueError("Unknown curve name %s" % str(curve)) + self._curve_name = curve + + modulus_bytes = self.size_in_bytes() + + xb = long_to_bytes(x, modulus_bytes) + yb = long_to_bytes(y, modulus_bytes) + if len(xb) != modulus_bytes or len(yb) != modulus_bytes: + raise ValueError("Incorrect coordinate length") + + new_point = lib_func(self, "new_point") + free_func = lib_func(self, "free_point") + + self._point = VoidPointer() + try: + context = self._curve.context.get() + except AttributeError: + context = null_pointer + result = new_point(self._point.address_of(), + c_uint8_ptr(xb), + c_uint8_ptr(yb), + c_size_t(modulus_bytes), + context) + + if result: + if result == 15: + raise ValueError("The EC point does not belong to the curve") + raise ValueError("Error %d while instantiating an EC point" % result) + + # Ensure that object disposal of this Python object will (eventually) + # free the memory allocated by the raw library for the EC point + self._point = SmartPointer(self._point.get(), free_func) + + def set(self, point): + clone = lib_func(self, "clone") + free_func = lib_func(self, "free_point") + + self._point = VoidPointer() + result = clone(self._point.address_of(), + point._point.get()) + + if result: + raise ValueError("Error %d while cloning an EC point" % result) + + self._point = SmartPointer(self._point.get(), free_func) + return self + + def __eq__(self, point): + if not isinstance(point, EccPoint): + return False + + cmp_func = lib_func(self, "cmp") + return 0 == cmp_func(self._point.get(), point._point.get()) + + # Only needed for Python 2 + def __ne__(self, point): + return not self == point + + def __neg__(self): + neg_func = lib_func(self, "neg") + np = self.copy() + result = neg_func(np._point.get()) + if result: + raise ValueError("Error %d while inverting an EC point" % result) + return np + + def copy(self): + """Return a copy of this point.""" + x, y = self.xy + np = EccPoint(x, y, self._curve_name) + return np + + def _is_eddsa(self): + return self._curve.name in ("ed25519", "ed448") + + def is_point_at_infinity(self): + """``True`` if this is the *point-at-infinity*.""" + + if self._is_eddsa(): + return self.x == 0 + else: + return self.xy == (0, 0) + + def point_at_infinity(self): + """Return the *point-at-infinity* for the curve.""" + + if self._is_eddsa(): + return EccPoint(0, 1, self._curve_name) + else: + return EccPoint(0, 0, self._curve_name) + + @property + def x(self): + return self.xy[0] + + @property + def y(self): + return self.xy[1] + + @property + def xy(self): + modulus_bytes = self.size_in_bytes() + xb = bytearray(modulus_bytes) + yb = bytearray(modulus_bytes) + get_xy = lib_func(self, "get_xy") + result = get_xy(c_uint8_ptr(xb), + c_uint8_ptr(yb), + c_size_t(modulus_bytes), + self._point.get()) + if result: + raise ValueError("Error %d while encoding an EC point" % result) + + return (Integer(bytes_to_long(xb)), 
Integer(bytes_to_long(yb))) + + def size_in_bytes(self): + """Size of each coordinate, in bytes.""" + return (self.size_in_bits() + 7) // 8 + + def size_in_bits(self): + """Size of each coordinate, in bits.""" + return self._curve.modulus_bits + + def double(self): + """Double this point (in-place operation). + + Returns: + This same object (to enable chaining). + """ + + double_func = lib_func(self, "double") + result = double_func(self._point.get()) + if result: + raise ValueError("Error %d while doubling an EC point" % result) + return self + + def __iadd__(self, point): + """Add a second point to this one""" + + add_func = lib_func(self, "add") + result = add_func(self._point.get(), point._point.get()) + if result: + if result == 16: + raise ValueError("EC points are not on the same curve") + raise ValueError("Error %d while adding two EC points" % result) + return self + + def __add__(self, point): + """Return a new point, the addition of this one and another""" + + np = self.copy() + np += point + return np + + def __imul__(self, scalar): + """Multiply this point by a scalar""" + + scalar_func = lib_func(self, "scalar") + if scalar < 0: + raise ValueError("Scalar multiplication is only defined for non-negative integers") + sb = long_to_bytes(scalar) + result = scalar_func(self._point.get(), + c_uint8_ptr(sb), + c_size_t(len(sb)), + c_ulonglong(getrandbits(64))) + if result: + raise ValueError("Error %d during scalar multiplication" % result) + return self + + def __mul__(self, scalar): + """Return a new point, the scalar product of this one""" + + np = self.copy() + np *= scalar + return np + + def __rmul__(self, left_hand): + return self.__mul__(left_hand) + + +# Last piece of initialization +p192_G = EccPoint(_curves['p192'].Gx, _curves['p192'].Gy, "p192") +p192 = _curves['p192']._replace(G=p192_G) +_curves.update(dict.fromkeys(p192_names, p192)) +del p192_G, p192, p192_names + +p224_G = EccPoint(_curves['p224'].Gx, _curves['p224'].Gy, "p224") +p224 = _curves['p224']._replace(G=p224_G) +_curves.update(dict.fromkeys(p224_names, p224)) +del p224_G, p224, p224_names + +p256_G = EccPoint(_curves['p256'].Gx, _curves['p256'].Gy, "p256") +p256 = _curves['p256']._replace(G=p256_G) +_curves.update(dict.fromkeys(p256_names, p256)) +del p256_G, p256, p256_names + +p384_G = EccPoint(_curves['p384'].Gx, _curves['p384'].Gy, "p384") +p384 = _curves['p384']._replace(G=p384_G) +_curves.update(dict.fromkeys(p384_names, p384)) +del p384_G, p384, p384_names + +p521_G = EccPoint(_curves['p521'].Gx, _curves['p521'].Gy, "p521") +p521 = _curves['p521']._replace(G=p521_G) +_curves.update(dict.fromkeys(p521_names, p521)) +del p521_G, p521, p521_names + +ed25519_G = EccPoint(_curves['Ed25519'].Gx, _curves['Ed25519'].Gy, "Ed25519") +ed25519 = _curves['Ed25519']._replace(G=ed25519_G) +_curves.update(dict.fromkeys(ed25519_names, ed25519)) +del ed25519_G, ed25519, ed25519_names + +ed448_G = EccPoint(_curves['Ed448'].Gx, _curves['Ed448'].Gy, "Ed448") +ed448 = _curves['Ed448']._replace(G=ed448_G) +_curves.update(dict.fromkeys(ed448_names, ed448)) +del ed448_G, ed448, ed448_names + + +class EccKey(object): + r"""Class defining an ECC key. + Do not instantiate directly. + Use :func:`generate`, :func:`construct` or :func:`import_key` instead. + + :ivar curve: The name of the curve as defined in the `ECC table`_. + :vartype curve: string + + :ivar pointQ: an ECC point representating the public component. 
+ :vartype pointQ: :class:`EccPoint` + + :ivar d: A scalar that represents the private component + in NIST P curves. It is smaller than the + order of the generator point. + :vartype d: integer + + :ivar seed: A seed that represents the private component + in EdDSA curves + (Ed25519, 32 bytes; Ed448, 57 bytes). + :vartype seed: bytes + """ + + def __init__(self, **kwargs): + """Create a new ECC key + + Keywords: + curve : string + The name of the curve. + d : integer + Mandatory for a private key on NIST P curves. + It must be in the range ``[1..order-1]``. + seed : bytes + Mandatory for a private key on the Ed25519 (32 bytes) + or Ed448 (57 bytes) curve. + point : EccPoint + Mandatory for a public key. If provided for a private key, + the implementation will NOT check whether it matches ``d``. + + Parameters ``d`` and ``seed`` are mutually exclusive. + """ + + kwargs_ = dict(kwargs) + curve_name = kwargs_.pop("curve", None) + self._d = kwargs_.pop("d", None) + self._seed = kwargs_.pop("seed", None) + self._point = kwargs_.pop("point", None) + if curve_name is None and self._point: + curve_name = self._point._curve_name + if kwargs_: + raise TypeError("Unknown parameters: " + str(kwargs_)) + + if curve_name not in _curves: + raise ValueError("Unsupported curve (%s)" % curve_name) + self._curve = _curves[curve_name] + self.curve = self._curve.desc + + count = int(self._d is not None) + int(self._seed is not None) + + if count == 0: + if self._point is None: + raise ValueError("At least one of the parameters 'point', 'd' or 'seed' must be specified") + return + + if count == 2: + raise ValueError("Parameters d and seed are mutually exclusive") + + # NIST P curves work with d, EdDSA works with seed + + if not self._is_eddsa(): + if self._seed is not None: + raise ValueError("Parameter 'seed' can only be used with Ed25519 or Ed448") + self._d = Integer(self._d) + if not 1 <= self._d < self._curve.order: + raise ValueError("Parameter d must be an integer smaller than the curve order") + else: + if self._d is not None: + raise ValueError("Parameter d can only be used with NIST P curves") + # RFC 8032, 5.1.5 + if self._curve.name == "ed25519": + if len(self._seed) != 32: + raise ValueError("Parameter seed must be 32 bytes long for Ed25519") + seed_hash = SHA512.new(self._seed).digest() # h + self._prefix = seed_hash[32:] + tmp = bytearray(seed_hash[:32]) + tmp[0] &= 0xF8 + tmp[31] = (tmp[31] & 0x7F) | 0x40 + # RFC 8032, 5.2.5 + elif self._curve.name == "ed448": + if len(self._seed) != 57: + raise ValueError("Parameter seed must be 57 bytes long for Ed448") + seed_hash = SHAKE256.new(self._seed).read(114) # h + self._prefix = seed_hash[57:] + tmp = bytearray(seed_hash[:57]) + tmp[0] &= 0xFC + tmp[55] |= 0x80 + tmp[56] = 0 + self._d = Integer.from_bytes(tmp, byteorder='little') + + def _is_eddsa(self): + return self._curve.desc in ("Ed25519", "Ed448") + + def __eq__(self, other): + if not isinstance(other, EccKey): + return False + + if other.has_private() != self.has_private(): + return False + + return other.pointQ == self.pointQ + + def __repr__(self): + if self.has_private(): + if self._is_eddsa(): + extra = ", seed=%s" % tostr(binascii.hexlify(self._seed)) + else: + extra = ", d=%d" % int(self._d) + else: + extra = "" + x, y = self.pointQ.xy + return "EccKey(curve='%s', point_x=%d, point_y=%d%s)" % (self._curve.desc, x, y, extra) + + def has_private(self): + """``True`` if this key can be used for making signatures or decrypting data.""" + + return self._d is not None + + #
ECDSA + def _sign(self, z, k): + assert 0 < k < self._curve.order + + order = self._curve.order + blind = Integer.random_range(min_inclusive=1, + max_exclusive=order) + + blind_d = self._d * blind + inv_blind_k = (blind * k).inverse(order) + + r = (self._curve.G * k).x % order + s = inv_blind_k * (blind * z + blind_d * r) % order + return (r, s) + + # ECDSA + def _verify(self, z, rs): + order = self._curve.order + sinv = rs[1].inverse(order) + point1 = self._curve.G * ((sinv * z) % order) + point2 = self.pointQ * ((sinv * rs[0]) % order) + return (point1 + point2).x == rs[0] + + @property + def d(self): + if not self.has_private(): + raise ValueError("This is not a private ECC key") + return self._d + + @property + def seed(self): + if not self.has_private(): + raise ValueError("This is not a private ECC key") + return self._seed + + @property + def pointQ(self): + if self._point is None: + self._point = self._curve.G * self._d + return self._point + + def public_key(self): + """A matching ECC public key. + + Returns: + a new :class:`EccKey` object + """ + + return EccKey(curve=self._curve.desc, point=self.pointQ) + + def _export_SEC1(self, compress): + if self._is_eddsa(): + raise ValueError("SEC1 format is unsupported for EdDSA curves") + + # See 2.2 in RFC5480 and 2.3.3 in SEC1 + # + # The first byte is: + # - 0x02: compressed, only X-coordinate, Y-coordinate is even + # - 0x03: compressed, only X-coordinate, Y-coordinate is odd + # - 0x04: uncompressed, X-coordinate is followed by Y-coordinate + # + # PAI is in theory encoded as 0x00. + + modulus_bytes = self.pointQ.size_in_bytes() + + if compress: + if self.pointQ.y.is_odd(): + first_byte = b'\x03' + else: + first_byte = b'\x02' + public_key = (first_byte + + self.pointQ.x.to_bytes(modulus_bytes)) + else: + public_key = (b'\x04' + + self.pointQ.x.to_bytes(modulus_bytes) + + self.pointQ.y.to_bytes(modulus_bytes)) + return public_key + + def _export_eddsa(self): + x, y = self.pointQ.xy + if self._curve.name == "ed25519": + result = bytearray(y.to_bytes(32, byteorder='little')) + result[31] = ((x & 1) << 7) | result[31] + elif self._curve.name == "ed448": + result = bytearray(y.to_bytes(57, byteorder='little')) + result[56] = (x & 1) << 7 + else: + raise ValueError("Not an EdDSA key to export") + return bytes(result) + + def _export_subjectPublicKeyInfo(self, compress): + if self._is_eddsa(): + oid = self._curve.oid + public_key = self._export_eddsa() + params = None + else: + oid = "1.2.840.10045.2.1" # unrestricted + public_key = self._export_SEC1(compress) + params = DerObjectId(self._curve.oid) + + return _create_subject_public_key_info(oid, + public_key, + params) + + def _export_rfc5915_private_der(self, include_ec_params=True): + + assert self.has_private() + + # ECPrivateKey ::= SEQUENCE { + # version INTEGER { ecPrivkeyVer1(1) } (ecPrivkeyVer1), + # privateKey OCTET STRING, + # parameters [0] ECParameters {{ NamedCurve }} OPTIONAL, + # publicKey [1] BIT STRING OPTIONAL + # } + + # Public key - uncompressed form + modulus_bytes = self.pointQ.size_in_bytes() + public_key = (b'\x04' + + self.pointQ.x.to_bytes(modulus_bytes) + + self.pointQ.y.to_bytes(modulus_bytes)) + + seq = [1, + DerOctetString(self.d.to_bytes(modulus_bytes)), + DerObjectId(self._curve.oid, explicit=0), + DerBitString(public_key, explicit=1)] + + if not include_ec_params: + del seq[2] + + return DerSequence(seq).encode() + + def _export_pkcs8(self, **kwargs): + from Cryptodome.IO import PKCS8 + + if kwargs.get('passphrase', None) is not None and 'protection' not 
in kwargs: + raise ValueError("At least the 'protection' parameter should be present") + + if self._is_eddsa(): + oid = self._curve.oid + private_key = DerOctetString(self._seed).encode() + params = None + else: + oid = "1.2.840.10045.2.1" # unrestricted + private_key = self._export_rfc5915_private_der(include_ec_params=False) + params = DerObjectId(self._curve.oid) + + result = PKCS8.wrap(private_key, + oid, + key_params=params, + **kwargs) + return result + + def _export_public_pem(self, compress): + from Cryptodome.IO import PEM + + encoded_der = self._export_subjectPublicKeyInfo(compress) + return PEM.encode(encoded_der, "PUBLIC KEY") + + def _export_private_pem(self, passphrase, **kwargs): + from Cryptodome.IO import PEM + + encoded_der = self._export_rfc5915_private_der() + return PEM.encode(encoded_der, "EC PRIVATE KEY", passphrase, **kwargs) + + def _export_private_clear_pkcs8_in_clear_pem(self): + from Cryptodome.IO import PEM + + encoded_der = self._export_pkcs8() + return PEM.encode(encoded_der, "PRIVATE KEY") + + def _export_private_encrypted_pkcs8_in_clear_pem(self, passphrase, **kwargs): + from Cryptodome.IO import PEM + + assert passphrase + if 'protection' not in kwargs: + raise ValueError("At least the 'protection' parameter should be present") + encoded_der = self._export_pkcs8(passphrase=passphrase, **kwargs) + return PEM.encode(encoded_der, "ENCRYPTED PRIVATE KEY") + + def _export_openssh(self, compress): + if self.has_private(): + raise ValueError("Cannot export OpenSSH private keys") + + desc = self._curve.openssh + + if desc is None: + raise ValueError("Cannot export %s keys as OpenSSH" % self._curve.name) + elif desc == "ssh-ed25519": + public_key = self._export_eddsa() + comps = (tobytes(desc), tobytes(public_key)) + else: + modulus_bytes = self.pointQ.size_in_bytes() + + if compress: + first_byte = 2 + self.pointQ.y.is_odd() + public_key = (bchr(first_byte) + + self.pointQ.x.to_bytes(modulus_bytes)) + else: + public_key = (b'\x04' + + self.pointQ.x.to_bytes(modulus_bytes) + + self.pointQ.y.to_bytes(modulus_bytes)) + + middle = desc.split("-")[2] + comps = (tobytes(desc), tobytes(middle), public_key) + + blob = b"".join([struct.pack(">I", len(x)) + x for x in comps]) + return desc + " " + tostr(binascii.b2a_base64(blob)) + + def export_key(self, **kwargs): + """Export this ECC key. + + Args: + format (string): + The format to use for encoding the key: + + - ``'DER'``. The key will be encoded in ASN.1 DER format (binary). + For a public key, the ASN.1 ``subjectPublicKeyInfo`` structure + defined in `RFC5480`_ will be used. + For a private key, the ASN.1 ``ECPrivateKey`` structure defined + in `RFC5915`_ is used instead (possibly within a PKCS#8 envelope, + see the ``use_pkcs8`` flag below). + - ``'PEM'``. The key will be encoded in a PEM_ envelope (ASCII). + - ``'OpenSSH'``. The key will be encoded in the OpenSSH_ format + (ASCII, public keys only). + - ``'SEC1'``. The public key (i.e., the EC point) will be encoded + into ``bytes`` according to Section 2.3.3 of `SEC1`_ + (which is a subset of the older X9.62 ITU standard). + Only for NIST P-curves. + - ``'raw'``. The public key will be encoded as ``bytes``, + without any metadata. + + * For NIST P-curves: equivalent to ``'SEC1'``. + * For EdDSA curves: ``bytes`` in the format defined in `RFC8032`_. + + passphrase (byte string or string): + The passphrase to use for protecting the private key. + + use_pkcs8 (boolean): + Only relevant for private keys. 
+ + If ``True`` (default and recommended), the `PKCS#8`_ representation + will be used. It must be ``True`` for EdDSA curves. + + protection (string): + When a private key is exported with password-protection + and PKCS#8 (both ``DER`` and ``PEM`` formats), this parameter MUST be + present and be a valid algorithm supported by :mod:`Cryptodome.IO.PKCS8`. + It is recommended to use ``PBKDF2WithHMAC-SHA1AndAES128-CBC``. + + compress (boolean): + If ``True``, the method returns a more compact representation + of the public key, with the X-coordinate only. + + If ``False`` (default), the method returns the full public key. + + This parameter is ignored for EdDSA curves, as compression is + mandatory. + + .. warning:: + If you don't provide a passphrase, the private key will be + exported in the clear! + + .. note:: + When exporting a private key with password-protection and `PKCS#8`_ + (both ``DER`` and ``PEM`` formats), any extra parameters + to ``export_key()`` will be passed to :mod:`Cryptodome.IO.PKCS8`. + + .. _PEM: http://www.ietf.org/rfc/rfc1421.txt + .. _`PEM encryption`: http://www.ietf.org/rfc/rfc1423.txt + .. _`PKCS#8`: http://www.ietf.org/rfc/rfc5208.txt + .. _OpenSSH: http://www.openssh.com/txt/rfc5656.txt + .. _RFC5480: https://tools.ietf.org/html/rfc5480 + .. _RFC5915: https://tools.ietf.org/html/rfc5915 + .. _RFC8032: https://tools.ietf.org/html/rfc8032 + .. _SEC1: https://www.secg.org/sec1-v2.pdf + + Returns: + A multi-line string (for ``'PEM'`` and ``'OpenSSH'``) or + ``bytes`` (for ``'DER'``, ``'SEC1'``, and ``'raw'``) with the encoded key. + """ + + args = kwargs.copy() + ext_format = args.pop("format") + if ext_format not in ("PEM", "DER", "OpenSSH", "SEC1", "raw"): + raise ValueError("Unknown format '%s'" % ext_format) + + compress = args.pop("compress", False) + + if self.has_private(): + passphrase = args.pop("passphrase", None) + if is_string(passphrase): + passphrase = tobytes(passphrase) + if not passphrase: + raise ValueError("Empty passphrase") + use_pkcs8 = args.pop("use_pkcs8", True) + + if not use_pkcs8 and self._is_eddsa(): + raise ValueError("'use_pkcs8' must be True for EdDSA curves") + + if ext_format == "PEM": + if use_pkcs8: + if passphrase: + return self._export_private_encrypted_pkcs8_in_clear_pem(passphrase, **args) + else: + return self._export_private_clear_pkcs8_in_clear_pem() + else: + return self._export_private_pem(passphrase, **args) + elif ext_format == "DER": + # DER + if passphrase and not use_pkcs8: + raise ValueError("Private keys can only be encrypted with DER using PKCS#8") + if use_pkcs8: + return self._export_pkcs8(passphrase=passphrase, **args) + else: + return self._export_rfc5915_private_der() + else: + raise ValueError("Private keys cannot be exported " + "in the '%s' format" % ext_format) + else: # Public key + if args: + raise ValueError("Unexpected parameters: '%s'" % args) + if ext_format == "PEM": + return self._export_public_pem(compress) + elif ext_format == "DER": + return self._export_subjectPublicKeyInfo(compress) + elif ext_format == "SEC1": + return self._export_SEC1(compress) + elif ext_format == "raw": + if self._curve.name in ('ed25519', 'ed448'): + return self._export_eddsa() + else: + return self._export_SEC1(compress) + else: + return self._export_openssh(compress) + + +def generate(**kwargs): + """Generate a new private key on the given curve. + + Args: + + curve (string): + Mandatory. It must be a curve name defined in the `ECC table`_. + + randfunc (callable): + Optional. The RNG to read randomness from. + If ``None``, :func:`Cryptodome.Random.get_random_bytes` is used.
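+
+    A hedged usage sketch (curve name and variable names are
+    illustrative)::
+
+        from Cryptodome.PublicKey import ECC
+
+        key = ECC.generate(curve='P-256')
+        private_pem = key.export_key(format='PEM')
+        public_pem = key.public_key().export_key(format='PEM')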
+ """ + + curve_name = kwargs.pop("curve") + curve = _curves[curve_name] + randfunc = kwargs.pop("randfunc", get_random_bytes) + if kwargs: + raise TypeError("Unknown parameters: " + str(kwargs)) + + if _curves[curve_name].name == "ed25519": + seed = randfunc(32) + new_key = EccKey(curve=curve_name, seed=seed) + elif _curves[curve_name].name == "ed448": + seed = randfunc(57) + new_key = EccKey(curve=curve_name, seed=seed) + else: + d = Integer.random_range(min_inclusive=1, + max_exclusive=curve.order, + randfunc=randfunc) + new_key = EccKey(curve=curve_name, d=d) + + return new_key + + +def construct(**kwargs): + """Build a new ECC key (private or public) starting + from some base components. + + In most cases, you will already have an existing key + which you can read in with :func:`import_key` instead + of this function. + + Args: + curve (string): + Mandatory. The name of the elliptic curve, as defined in the `ECC table`_. + + d (integer): + Mandatory for a private key and a NIST P-curve (e.g., P-256): + the integer in the range ``[1..order-1]`` that represents the key. + + seed (bytes): + Mandatory for a private key and an EdDSA curve. + It must be 32 bytes for Ed25519, and 57 bytes for Ed448. + + point_x (integer): + Mandatory for a public key: the X coordinate (affine) of the ECC point. + + point_y (integer): + Mandatory for a public key: the Y coordinate (affine) of the ECC point. + + Returns: + :class:`EccKey` : a new ECC key object + """ + + curve_name = kwargs["curve"] + curve = _curves[curve_name] + point_x = kwargs.pop("point_x", None) + point_y = kwargs.pop("point_y", None) + + if "point" in kwargs: + raise TypeError("Unknown keyword: point") + + if None not in (point_x, point_y): + # ValueError is raised if the point is not on the curve + kwargs["point"] = EccPoint(point_x, point_y, curve_name) + + new_key = EccKey(**kwargs) + + # Validate that the private key matches the public one + # because EccKey will not do that automatically + if new_key.has_private() and 'point' in kwargs: + pub_key = curve.G * new_key.d + if pub_key.xy != (point_x, point_y): + raise ValueError("Private and public ECC keys do not match") + + return new_key + + +def _import_public_der(ec_point, curve_oid=None, curve_name=None): + """Convert an encoded EC point into an EccKey object + + ec_point: byte string with the EC point (SEC1-encoded) + curve_oid: string with the OID of the curve + curve_name: string with the name of the curve + + Either curve_oid or curve_name must be specified + + """ + + for _curve_name, curve in _curves.items(): + if curve_oid and curve.oid == curve_oid: + break + if curve_name == _curve_name: + break + else: + if curve_oid: + raise UnsupportedEccFeature("Unsupported ECC curve (OID: %s)" % curve_oid) + else: + raise UnsupportedEccFeature("Unsupported ECC curve (%s)" % curve_name) + + # See 2.2 in RFC5480 and 2.3.3 in SEC1 + # The first byte is: + # - 0x02: compressed, only X-coordinate, Y-coordinate is even + # - 0x03: compressed, only X-coordinate, Y-coordinate is odd + # - 0x04: uncompressed, X-coordinate is followed by Y-coordinate + # + # PAI is in theory encoded as 0x00.
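+    # (A worked note on the decompression performed below: for types 0x02/0x03
+    # only X is transmitted; Y is recovered as a modular square root of
+    # x^3 - 3x + b mod p, and the parity recorded in the type byte
+    # (0x02 for even Y, 0x03 for odd Y) selects between the roots y and p - y.)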
+ + modulus_bytes = curve.p.size_in_bytes() + point_type = bord(ec_point[0]) + + # Uncompressed point + if point_type == 0x04: + if len(ec_point) != (1 + 2 * modulus_bytes): + raise ValueError("Incorrect EC point length") + x = Integer.from_bytes(ec_point[1:modulus_bytes+1]) + y = Integer.from_bytes(ec_point[modulus_bytes+1:]) + # Compressed point + elif point_type in (0x02, 0x03): + if len(ec_point) != (1 + modulus_bytes): + raise ValueError("Incorrect EC point length") + x = Integer.from_bytes(ec_point[1:]) + # Right now, we only support Short Weierstrass curves + y = (x**3 - x*3 + curve.b).sqrt(curve.p) + if point_type == 0x02 and y.is_odd(): + y = curve.p - y + if point_type == 0x03 and y.is_even(): + y = curve.p - y + else: + raise ValueError("Incorrect EC point encoding") + + return construct(curve=_curve_name, point_x=x, point_y=y) + + +def _import_subjectPublicKeyInfo(encoded, *kwargs): + """Convert a subjectPublicKeyInfo into an EccKey object""" + + # See RFC5480 + + # Parse the generic subjectPublicKeyInfo structure + oid, ec_point, params = _expand_subject_public_key_info(encoded) + + nist_p_oids = ( + "1.2.840.10045.2.1", # id-ecPublicKey (unrestricted) + "1.3.132.1.12", # id-ecDH + "1.3.132.1.13" # id-ecMQV + ) + eddsa_oids = { + "1.3.101.112": ("Ed25519", _import_ed25519_public_key), # id-Ed25519 + "1.3.101.113": ("Ed448", _import_ed448_public_key) # id-Ed448 + } + + if oid in nist_p_oids: + # See RFC5480 + + # Parameters are mandatory and encoded as ECParameters + # ECParameters ::= CHOICE { + # namedCurve OBJECT IDENTIFIER + # -- implicitCurve NULL + # -- specifiedCurve SpecifiedECDomain + # } + # implicitCurve and specifiedCurve are not supported (as per RFC) + if not params: + raise ValueError("Missing ECC parameters for ECC OID %s" % oid) + try: + curve_oid = DerObjectId().decode(params).value + except ValueError: + raise ValueError("Error decoding namedCurve") + + # ECPoint ::= OCTET STRING + return _import_public_der(ec_point, curve_oid=curve_oid) + + elif oid in eddsa_oids: + # See RFC8410 + curve_name, import_eddsa_public_key = eddsa_oids[oid] + + # Parameters must be absent + if params: + raise ValueError("Unexpected ECC parameters for ECC OID %s" % oid) + + x, y = import_eddsa_public_key(ec_point) + return construct(point_x=x, point_y=y, curve=curve_name) + else: + raise UnsupportedEccFeature("Unsupported ECC OID: %s" % oid) + + +def _import_rfc5915_der(encoded, passphrase, curve_oid=None): + + # See RFC5915 https://tools.ietf.org/html/rfc5915 + # + # ECPrivateKey ::= SEQUENCE { + # version INTEGER { ecPrivkeyVer1(1) } (ecPrivkeyVer1), + # privateKey OCTET STRING, + # parameters [0] ECParameters {{ NamedCurve }} OPTIONAL, + # publicKey [1] BIT STRING OPTIONAL + # } + + private_key = DerSequence().decode(encoded, nr_elements=(3, 4)) + if private_key[0] != 1: + raise ValueError("Incorrect ECC private key version") + + try: + parameters = DerObjectId(explicit=0).decode(private_key[2]).value + if curve_oid is not None and parameters != curve_oid: + raise ValueError("Curve mismatch") + curve_oid = parameters + except ValueError: + pass + + if curve_oid is None: + raise ValueError("No curve found") + + for curve_name, curve in _curves.items(): + if curve.oid == curve_oid: + break + else: + raise UnsupportedEccFeature("Unsupported ECC curve (OID: %s)" % curve_oid) + + scalar_bytes = DerOctetString().decode(private_key[1]).payload + modulus_bytes = curve.p.size_in_bytes() + if len(scalar_bytes) != modulus_bytes: + raise ValueError("Private key is too small") + d = 
Integer.from_bytes(scalar_bytes) + + # Decode public key (if any) + if len(private_key) > 2: + public_key_enc = DerBitString(explicit=1).decode(private_key[-1]).value + public_key = _import_public_der(public_key_enc, curve_oid=curve_oid) + point_x = public_key.pointQ.x + point_y = public_key.pointQ.y + else: + point_x = point_y = None + + return construct(curve=curve_name, d=d, point_x=point_x, point_y=point_y) + + +def _import_pkcs8(encoded, passphrase): + from Cryptodome.IO import PKCS8 + + algo_oid, private_key, params = PKCS8.unwrap(encoded, passphrase) + + nist_p_oids = ( + "1.2.840.10045.2.1", # id-ecPublicKey (unrestricted) + "1.3.132.1.12", # id-ecDH + "1.3.132.1.13" # id-ecMQV + ) + eddsa_oids = { + "1.3.101.112": "Ed25519", # id-Ed25519 + "1.3.101.113": "Ed448", # id-Ed448 + } + + if algo_oid in nist_p_oids: + curve_oid = DerObjectId().decode(params).value + return _import_rfc5915_der(private_key, passphrase, curve_oid) + elif algo_oid in eddsa_oids: + if params is not None: + raise ValueError("EdDSA ECC private key must not have parameters") + curve_oid = None + seed = DerOctetString().decode(private_key).payload + return construct(curve=eddsa_oids[algo_oid], seed=seed) + else: + raise UnsupportedEccFeature("Unsupported ECC purpose (OID: %s)" % algo_oid) + + +def _import_x509_cert(encoded, *kwargs): + + sp_info = _extract_subject_public_key_info(encoded) + return _import_subjectPublicKeyInfo(sp_info) + + +def _import_der(encoded, passphrase): + + try: + return _import_subjectPublicKeyInfo(encoded, passphrase) + except UnsupportedEccFeature as err: + raise err + except (ValueError, TypeError, IndexError): + pass + + try: + return _import_x509_cert(encoded, passphrase) + except UnsupportedEccFeature as err: + raise err + except (ValueError, TypeError, IndexError): + pass + + try: + return _import_rfc5915_der(encoded, passphrase) + except UnsupportedEccFeature as err: + raise err + except (ValueError, TypeError, IndexError): + pass + + try: + return _import_pkcs8(encoded, passphrase) + except UnsupportedEccFeature as err: + raise err + except (ValueError, TypeError, IndexError): + pass + + raise ValueError("Not an ECC DER key") + + +def _import_openssh_public(encoded): + parts = encoded.split(b' ') + if len(parts) not in (2, 3): + raise ValueError("Not an openssh public key") + + try: + keystring = binascii.a2b_base64(parts[1]) + + keyparts = [] + while len(keystring) > 4: + lk = struct.unpack(">I", keystring[:4])[0] + keyparts.append(keystring[4:4 + lk]) + keystring = keystring[4 + lk:] + + if parts[0] != keyparts[0]: + raise ValueError("Mismatch in openssh public key") + + # NIST P curves + if parts[0].startswith(b"ecdsa-sha2-"): + + for curve_name, curve in _curves.items(): + if curve.openssh is None: + continue + if not curve.openssh.startswith("ecdsa-sha2"): + continue + middle = tobytes(curve.openssh.split("-")[2]) + if keyparts[1] == middle: + break + else: + raise ValueError("Unsupported ECC curve: " + middle) + + ecc_key = _import_public_der(keyparts[2], curve_oid=curve.oid) + + # EdDSA + elif parts[0] == b"ssh-ed25519": + x, y = _import_ed25519_public_key(keyparts[1]) + ecc_key = construct(curve="Ed25519", point_x=x, point_y=y) + else: + raise ValueError("Unsupported SSH key type: " + parts[0]) + + except (IndexError, TypeError, binascii.Error): + raise ValueError("Error parsing SSH key type: " + parts[0]) + + return ecc_key + + +def _import_openssh_private_ecc(data, password): + + from ._openssh import (import_openssh_private_generic, + read_bytes, read_string, 
+                           check_padding)
+
+    key_type, decrypted = import_openssh_private_generic(data, password)
+
+    eddsa_keys = {
+        "ssh-ed25519": ("Ed25519", _import_ed25519_public_key, 32),
+    }
+
+    # https://datatracker.ietf.org/doc/html/draft-miller-ssh-agent-04
+    if key_type.startswith("ecdsa-sha2"):
+
+        ecdsa_curve_name, decrypted = read_string(decrypted)
+        if ecdsa_curve_name not in _curves:
+            raise UnsupportedEccFeature("Unsupported ECC curve %s" % ecdsa_curve_name)
+        curve = _curves[ecdsa_curve_name]
+        modulus_bytes = (curve.modulus_bits + 7) // 8
+
+        public_key, decrypted = read_bytes(decrypted)
+
+        if bord(public_key[0]) != 4:
+            raise ValueError("Only uncompressed OpenSSH EC keys are supported")
+        if len(public_key) != 2 * modulus_bytes + 1:
+            raise ValueError("Incorrect public key length")
+
+        point_x = Integer.from_bytes(public_key[1:1+modulus_bytes])
+        point_y = Integer.from_bytes(public_key[1+modulus_bytes:])
+
+        private_key, decrypted = read_bytes(decrypted)
+        d = Integer.from_bytes(private_key)
+
+        params = {'d': d, 'curve': ecdsa_curve_name}
+
+    elif key_type in eddsa_keys:
+
+        curve_name, import_eddsa_public_key, seed_len = eddsa_keys[key_type]
+
+        public_key, decrypted = read_bytes(decrypted)
+        point_x, point_y = import_eddsa_public_key(public_key)
+
+        private_public_key, decrypted = read_bytes(decrypted)
+        seed = private_public_key[:seed_len]
+
+        params = {'seed': seed, 'curve': curve_name}
+    else:
+        raise ValueError("Unsupported SSH agent key type: " + key_type)
+
+    _, padded = read_string(decrypted)  # Comment
+    check_padding(padded)
+
+    return construct(point_x=point_x, point_y=point_y, **params)
+
+
+def _import_ed25519_public_key(encoded):
+    """Import an Ed25519 ECC public key, encoded as raw bytes as described
+    in RFC8032_.
+
+    Args:
+      encoded (bytes):
+        The Ed25519 public key to import. It must be 32 bytes long.
+
+    Returns:
+      :class:`EccKey` : a new ECC key object
+
+    Raises:
+      ValueError: when the given key cannot be parsed.
+
+    .. _RFC8032: https://datatracker.ietf.org/doc/html/rfc8032
+    """
+
+    if len(encoded) != 32:
+        raise ValueError("Incorrect length. Only Ed25519 public keys are supported.")
+
+    p = Integer(0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed)  # 2**255 - 19
+    d = 37095705934669439343138083508754565189542113879843219016388785533085940283555
+
+    y = bytearray(encoded)
+    x_lsb = y[31] >> 7
+    y[31] &= 0x7F
+    point_y = Integer.from_bytes(y, byteorder='little')
+    if point_y >= p:
+        raise ValueError("Invalid Ed25519 key (y)")
+    if point_y == 1:
+        return 0, 1
+
+    u = (point_y**2 - 1) % p
+    v = ((point_y**2 % p) * d + 1) % p
+    try:
+        v_inv = v.inverse(p)
+        x2 = (u * v_inv) % p
+        point_x = Integer._tonelli_shanks(x2, p)
+        if (point_x & 1) != x_lsb:
+            point_x = p - point_x
+    except ValueError:
+        raise ValueError("Invalid Ed25519 public key")
+    return point_x, point_y
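+
+# A round-trip sketch for the raw Ed25519 decoder above (illustrative only;
+# eddsa.import_public_key() is the public entry point noted in import_key()):
+#
+#     from Cryptodome.Signature import eddsa
+#     key = generate(curve='ed25519')
+#     raw = key.public_key().export_key(format='raw')   # 32 bytes, RFC 8032
+#     assert eddsa.import_public_key(raw).pointQ == key.pointQ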
+
+
+def _import_ed448_public_key(encoded):
+    """Import an Ed448 ECC public key, encoded as raw bytes as described
+    in RFC8032_.
+
+    Args:
+      encoded (bytes):
+        The Ed448 public key to import. It must be 57 bytes long.
+
+    Returns:
+      :class:`EccKey` : a new ECC key object
+
+    Raises:
+      ValueError: when the given key cannot be parsed.
+
+    .. _RFC8032: https://datatracker.ietf.org/doc/html/rfc8032
+    """
+
+    if len(encoded) != 57:
+        raise ValueError("Incorrect length. Only Ed448 public keys are supported.")
+
+    p = Integer(0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffffffffffffffffffffffffffffffffffffffffffffffffffff)  # 2**448 - 2**224 - 1
+    d = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffeffffffffffffffffffffffffffffffffffffffffffffffffffff6756
+
+    y = encoded[:56]
+    x_lsb = bord(encoded[56]) >> 7
+    point_y = Integer.from_bytes(y, byteorder='little')
+    if point_y >= p:
+        raise ValueError("Invalid Ed448 key (y)")
+    if point_y == 1:
+        return 0, 1
+
+    u = (point_y**2 - 1) % p
+    v = ((point_y**2 % p) * d - 1) % p
+    try:
+        v_inv = v.inverse(p)
+        x2 = (u * v_inv) % p
+        point_x = Integer._tonelli_shanks(x2, p)
+        if (point_x & 1) != x_lsb:
+            point_x = p - point_x
+    except ValueError:
+        raise ValueError("Invalid Ed448 public key")
+    return point_x, point_y
+
+
+def import_key(encoded, passphrase=None, curve_name=None):
+    """Import an ECC key (public or private).
+
+    Args:
+      encoded (bytes or multi-line string):
+        The ECC key to import.
+        The function will try to automatically detect the right format.
+
+        Supported formats for an ECC **public** key:
+
+        * X.509 certificate: binary (DER) or ASCII (PEM).
+        * X.509 ``subjectPublicKeyInfo``: binary (DER) or ASCII (PEM).
+        * SEC1_ (or X9.62), as ``bytes``. NIST P curves only.
+          You must also provide the ``curve_name`` (with a value from the `ECC table`_).
+        * OpenSSH line, defined in RFC5656_ and RFC8709_ (ASCII).
+          This is normally the content of files like ``~/.ssh/id_ecdsa.pub``.
+
+        Supported formats for an ECC **private** key:
+
+        * A binary ``ECPrivateKey`` structure, as defined in `RFC5915`_ (DER).
+          NIST P curves only.
+        * A `PKCS#8`_ structure (or the more recent Asymmetric Key Package, RFC5958_): binary (DER) or ASCII (PEM).
+        * `OpenSSH 6.5`_ and newer versions (ASCII).
+
+        Private keys can be in the clear or password-protected.
+
+        For details about the PEM encoding, see `RFC1421`_/`RFC1423`_.
+
+      passphrase (byte string):
+        The passphrase to use for decrypting a private key.
+        Encryption may be applied at the PEM level (not recommended)
+        or at the PKCS#8 level (recommended).
+        This parameter is ignored if the key in input is not encrypted.
+
+      curve_name (string):
+        For a SEC1 encoding only. This is the name of the curve,
+        as defined in the `ECC table`_.
+
+    .. note::
+
+        To import EdDSA private and public keys, when encoded as raw ``bytes``, use:
+
+        * :func:`Cryptodome.Signature.eddsa.import_public_key`, or
+        * :func:`Cryptodome.Signature.eddsa.import_private_key`.
+
+    Returns:
+      :class:`EccKey` : a new ECC key object
+
+    Raises:
+      ValueError: when the given key cannot be parsed (possibly because
+        the pass phrase is wrong).
+
+    .. _RFC1421: https://datatracker.ietf.org/doc/html/rfc1421
+    .. _RFC1423: https://datatracker.ietf.org/doc/html/rfc1423
+    .. _RFC5915: https://datatracker.ietf.org/doc/html/rfc5915
+    .. _RFC5656: https://datatracker.ietf.org/doc/html/rfc5656
+    .. _RFC8709: https://datatracker.ietf.org/doc/html/rfc8709
+    .. _RFC5958: https://datatracker.ietf.org/doc/html/rfc5958
+    .. _`PKCS#8`: https://datatracker.ietf.org/doc/html/rfc5208
+    .. _`OpenSSH 6.5`: https://flak.tedunangst.com/post/new-openssh-key-format-and-bcrypt-pbkdf
+    ..
_SEC1: https://www.secg.org/sec1-v2.pdf + """ + + from Cryptodome.IO import PEM + + encoded = tobytes(encoded) + if passphrase is not None: + passphrase = tobytes(passphrase) + + # PEM + if encoded.startswith(b'-----BEGIN OPENSSH PRIVATE KEY'): + text_encoded = tostr(encoded) + openssh_encoded, marker, enc_flag = PEM.decode(text_encoded, passphrase) + result = _import_openssh_private_ecc(openssh_encoded, passphrase) + return result + + elif encoded.startswith(b'-----'): + + text_encoded = tostr(encoded) + + # Remove any EC PARAMETERS section + # Ignore its content because the curve type must be already given in the key + ecparams_start = "-----BEGIN EC PARAMETERS-----" + ecparams_end = "-----END EC PARAMETERS-----" + text_encoded = re.sub(ecparams_start + ".*?" + ecparams_end, "", + text_encoded, + flags=re.DOTALL) + + der_encoded, marker, enc_flag = PEM.decode(text_encoded, passphrase) + if enc_flag: + passphrase = None + try: + result = _import_der(der_encoded, passphrase) + except UnsupportedEccFeature as uef: + raise uef + except ValueError: + raise ValueError("Invalid DER encoding inside the PEM file") + return result + + # OpenSSH + if encoded.startswith((b'ecdsa-sha2-', b'ssh-ed25519')): + return _import_openssh_public(encoded) + + # DER + if len(encoded) > 0 and bord(encoded[0]) == 0x30: + return _import_der(encoded, passphrase) + + # SEC1 + if len(encoded) > 0 and bord(encoded[0]) in b'\x02\x03\x04': + if curve_name is None: + raise ValueError("No curve name was provided") + return _import_public_der(encoded, curve_name=curve_name) + + raise ValueError("ECC key format is not supported") + + +if __name__ == "__main__": + + import time + + d = 0xc51e4753afdec1e6b6c6a5b992f43f8dd0c7a8933072708b6522468b2ffb06fd + + point = _curves['p256'].G.copy() + count = 3000 + + start = time.time() + for x in range(count): + pointX = point * d + print("(P-256 G)", (time.time() - start) / count * 1000, "ms") + + start = time.time() + for x in range(count): + pointX = pointX * d + print("(P-256 arbitrary point)", (time.time() - start) / count * 1000, "ms") diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ECC.pyi b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ECC.pyi new file mode 100644 index 0000000..b0bfbec --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ECC.pyi @@ -0,0 +1,66 @@ +from typing import Union, Callable, Optional, NamedTuple, List, Tuple, Dict, NamedTuple, Any + +from Cryptodome.Math.Numbers import Integer + +RNG = Callable[[int], bytes] + +class UnsupportedEccFeature(ValueError): ... +class EccPoint(object): + def __init__(self, x: Union[int, Integer], y: Union[int, Integer], curve: Optional[str] = ...) -> None: ... + def set(self, point: EccPoint) -> EccPoint: ... + def __eq__(self, point: object) -> bool: ... + def __neg__(self) -> EccPoint: ... + def copy(self) -> EccPoint: ... + def is_point_at_infinity(self) -> bool: ... + def point_at_infinity(self) -> EccPoint: ... + @property + def x(self) -> int: ... + @property + def y(self) -> int: ... + @property + def xy(self) -> Tuple[int, int]: ... + def size_in_bytes(self) -> int: ... + def size_in_bits(self) -> int: ... + def double(self) -> EccPoint: ... + def __iadd__(self, point: EccPoint) -> EccPoint: ... + def __add__(self, point: EccPoint) -> EccPoint: ... + def __imul__(self, scalar: int) -> EccPoint: ... + def __mul__(self, scalar: int) -> EccPoint: ... 
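+
+# A sketch of EccPoint arithmetic as typed above (illustrative only; real
+# protocols should derive shared keys from the point with a KDF, not use
+# the raw coordinates directly):
+#
+#     from Cryptodome.PublicKey import ECC
+#     alice = ECC.generate(curve='p256')
+#     bob = ECC.generate(curve='p256')
+#     shared_a = bob.pointQ * alice.d     # EccPoint.__mul__ with a scalar
+#     shared_b = alice.pointQ * bob.d
+#     assert shared_a.xy == shared_b.xy   # both sides agree on the point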
+ +class EccKey(object): + curve: str + def __init__(self, *, curve: str = ..., d: int = ..., point: EccPoint = ...) -> None: ... + def __eq__(self, other: object) -> bool: ... + def __repr__(self) -> str: ... + def has_private(self) -> bool: ... + @property + def d(self) -> int: ... + @property + def pointQ(self) -> EccPoint: ... + def public_key(self) -> EccKey: ... + def export_key(self, **kwargs: Union[str, bytes, bool]) -> Union[str,bytes]: ... + + +_Curve = NamedTuple("_Curve", [('p', Integer), + ('order', Integer), + ('b', Integer), + ('Gx', Integer), + ('Gy', Integer), + ('G', EccPoint), + ('modulus_bits', int), + ('oid', str), + ('context', Any), + ('desc', str), + ('openssh', Union[str, None]), + ]) + +_curves : Dict[str, _Curve] + + +def generate(**kwargs: Union[str, RNG]) -> EccKey: ... +def construct(**kwargs: Union[str, int]) -> EccKey: ... +def import_key(encoded: Union[bytes, str], + passphrase: Optional[str]=None, + curve_name:Optional[str]=None) -> EccKey: ... +def _import_ed25519_public_key(encoded: bytes) -> EccKey: ... +def _import_ed448_public_key(encoded: bytes) -> EccKey: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ElGamal.py b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ElGamal.py new file mode 100644 index 0000000..95c219e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ElGamal.py @@ -0,0 +1,286 @@ +# +# ElGamal.py : ElGamal encryption/decryption and signatures +# +# Part of the Python Cryptography Toolkit +# +# Originally written by: A.M. Kuchling +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +__all__ = ['generate', 'construct', 'ElGamalKey'] + +from Cryptodome import Random +from Cryptodome.Math.Primality import ( generate_probable_safe_prime, + test_probable_prime, COMPOSITE ) +from Cryptodome.Math.Numbers import Integer + +# Generate an ElGamal key with N bits +def generate(bits, randfunc): + """Randomly generate a fresh, new ElGamal key. + + The key will be safe for use for both encryption and signature + (although it should be used for **only one** purpose). + + Args: + bits (int): + Key length, or size (in bits) of the modulus *p*. + The recommended value is 2048. + randfunc (callable): + Random number generation function; it should accept + a single integer *N* and return a string of random + *N* random bytes. 
+
+    Return:
+        an :class:`ElGamalKey` object
+    """
+
+    obj = ElGamalKey()
+
+    # Generate a safe prime p
+    # See Algorithm 4.86 in Handbook of Applied Cryptography
+    obj.p = generate_probable_safe_prime(exact_bits=bits, randfunc=randfunc)
+    q = (obj.p - 1) >> 1
+
+    # Generate generator g
+    while 1:
+        # Choose a square residue; it will generate a cyclic group of order q.
+        obj.g = pow(Integer.random_range(min_inclusive=2,
+                                         max_exclusive=obj.p,
+                                         randfunc=randfunc), 2, obj.p)
+
+        # We must avoid g=2 because of Bleichenbacher's attack described
+        # in "Generating ElGamal signatures without knowing the secret key",
+        # 1996
+        if obj.g in (1, 2):
+            continue
+
+        # Discard g if it divides p-1 because of the attack described
+        # in Note 11.67 (iii) in HAC
+        if (obj.p - 1) % obj.g == 0:
+            continue
+
+        # g^{-1} must not divide p-1 because of Khadir's attack
+        # described in "Conditions of the generator for forging ElGamal
+        # signature", 2011
+        ginv = obj.g.inverse(obj.p)
+        if (obj.p - 1) % ginv == 0:
+            continue
+
+        # Found
+        break
+
+    # Generate private key x
+    obj.x = Integer.random_range(min_inclusive=2,
+                                 max_exclusive=obj.p-1,
+                                 randfunc=randfunc)
+    # Generate public key y
+    obj.y = pow(obj.g, obj.x, obj.p)
+    return obj
+
+
+def construct(tup):
+    r"""Construct an ElGamal key from a tuple of valid ElGamal components.
+
+    The modulus *p* must be a prime.
+    The following conditions must apply:
+
+    .. math::
+
+        \begin{align}
+        &1 < g < p-1 \\
+        &g^{p-1} = 1 \text{ mod } p \\
+        &1 < x < p-1 \\
+        &g^x = y \text{ mod } p
+        \end{align}
+
+    Args:
+      tup (tuple):
+        A tuple with either 3 or 4 integers,
+        in the following order:
+
+        1. Modulus (*p*).
+        2. Generator (*g*).
+        3. Public key (*y*).
+        4. Private key (*x*). Optional.
+
+    Raises:
+        ValueError: when the key being imported fails the most basic ElGamal validity checks.
+
+    Returns:
+        an :class:`ElGamalKey` object
+    """
+
+    obj = ElGamalKey()
+    if len(tup) not in [3, 4]:
+        raise ValueError('argument for construct() wrong length')
+    for i in range(len(tup)):
+        field = obj._keydata[i]
+        setattr(obj, field, Integer(tup[i]))
+
+    fmt_error = test_probable_prime(obj.p) == COMPOSITE
+    fmt_error |= obj.g <= 1 or obj.g >= obj.p
+    fmt_error |= pow(obj.g, obj.p-1, obj.p) != 1
+    fmt_error |= obj.y < 1 or obj.y >= obj.p
+    if len(tup) == 4:
+        fmt_error |= obj.x <= 1 or obj.x >= obj.p
+        fmt_error |= pow(obj.g, obj.x, obj.p) != obj.y
+
+    if fmt_error:
+        raise ValueError("Invalid ElGamal key components")
+
+    return obj
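+
+# A usage sketch for generate()/construct() above (illustrative only;
+# 2048-bit safe-prime generation is slow and can take minutes):
+#
+#     from Cryptodome.Random import get_random_bytes
+#     key = generate(2048, get_random_bytes)
+#     public = construct((int(key.p), int(key.g), int(key.y)))
+#     assert not public.has_private()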
+
+class ElGamalKey(object):
+    r"""Class defining an ElGamal key.
+    Do not instantiate directly.
+    Use :func:`generate` or :func:`construct` instead.
+
+    :ivar p: Modulus
+    :vartype p: integer
+
+    :ivar g: Generator
+    :vartype g: integer
+
+    :ivar y: Public key component
+    :vartype y: integer
+
+    :ivar x: Private key component
+    :vartype x: integer
+    """
+
+    #: Dictionary of ElGamal parameters.
+    #:
+    #: A public key will only have the following entries:
+    #:
+    #:  - **y**, the public key.
+    #:  - **g**, the generator.
+    #:  - **p**, the modulus.
+    #:
+    #: A private key will also have:
+    #:
+    #:  - **x**, the private key.
+    _keydata = ['p', 'g', 'y', 'x']
+
+    def __init__(self, randfunc=None):
+        if randfunc is None:
+            randfunc = Random.new().read
+        self._randfunc = randfunc
+
+    def _encrypt(self, M, K):
+        a = pow(self.g, K, self.p)
+        b = (pow(self.y, K, self.p) * M) % self.p
+        return [int(a), int(b)]
+
+    def _decrypt(self, M):
+        if not hasattr(self, 'x'):
+            raise TypeError('Private key not available in this object')
+        r = Integer.random_range(min_inclusive=2,
+                                 max_exclusive=self.p-1,
+                                 randfunc=self._randfunc)
+        a_blind = (pow(self.g, r, self.p) * M[0]) % self.p
+        ax = pow(a_blind, self.x, self.p)
+        plaintext_blind = (ax.inverse(self.p) * M[1]) % self.p
+        plaintext = (plaintext_blind * pow(self.y, r, self.p)) % self.p
+        return int(plaintext)
+
+    def _sign(self, M, K):
+        if not hasattr(self, 'x'):
+            raise TypeError('Private key not available in this object')
+        p1 = self.p - 1
+        K = Integer(K)
+        if K.gcd(p1) != 1:
+            raise ValueError('Bad K value: GCD(K,p-1)!=1')
+        a = pow(self.g, K, self.p)
+        t = (Integer(M) - self.x * a) % p1
+        while t < 0:
+            t = t + p1
+        b = (t * K.inverse(p1)) % p1
+        return [int(a), int(b)]
+
+    def _verify(self, M, sig):
+        sig = [Integer(x) for x in sig]
+        if sig[0] < 1 or sig[0] > self.p - 1:
+            return 0
+        v1 = pow(self.y, sig[0], self.p)
+        v1 = (v1 * pow(sig[0], sig[1], self.p)) % self.p
+        v2 = pow(self.g, M, self.p)
+        if v1 == v2:
+            return 1
+        return 0
+
+    def has_private(self):
+        """Whether this is an ElGamal private key"""
+
+        if hasattr(self, 'x'):
+            return 1
+        else:
+            return 0
+
+    def can_encrypt(self):
+        return True
+
+    def can_sign(self):
+        return True
+
+    def publickey(self):
+        """A matching ElGamal public key.
+
+        Returns:
+            a new :class:`ElGamalKey` object
+        """
+        return construct((self.p, self.g, self.y))
+
+    def __eq__(self, other):
+        if bool(self.has_private()) != bool(other.has_private()):
+            return False
+
+        result = True
+        for comp in self._keydata:
+            result = result and (getattr(self, comp, None) ==
+                                 getattr(other, comp, None))
+        return result
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __getstate__(self):
+        # ElGamal key is not picklable
+        from pickle import PicklingError
+        raise PicklingError
+
+    # Methods defined in PyCryptodome that we don't support anymore
+
+    def sign(self, M, K):
+        raise NotImplementedError
+
+    def verify(self, M, signature):
+        raise NotImplementedError
+
+    def encrypt(self, plaintext, K):
+        raise NotImplementedError
+
+    def decrypt(self, ciphertext):
+        raise NotImplementedError
+
+    def blind(self, M, B):
+        raise NotImplementedError
+
+    def unblind(self, M, B):
+        raise NotImplementedError
+
+    def size(self):
+        raise NotImplementedError
diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ElGamal.pyi b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ElGamal.pyi
new file mode 100644
index 0000000..9048531
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/ElGamal.pyi
@@ -0,0 +1,18 @@
+from typing import Callable, Union, Tuple, Optional
+
+__all__ = ['generate', 'construct', 'ElGamalKey']
+
+RNG = Callable[[int], bytes]
+
+def generate(bits: int, randfunc: RNG) -> ElGamalKey: ...
+def construct(tup: Union[Tuple[int, int, int], Tuple[int, int, int, int]]) -> ElGamalKey: ...
+
+class ElGamalKey(object):
+    def __init__(self, randfunc: Optional[RNG]=None) -> None: ...
+    def has_private(self) -> bool: ...
+    def can_encrypt(self) -> bool: ...
+    def can_sign(self) -> bool: ...
+    def publickey(self) -> ElGamalKey: ...
+    def __eq__(self, other: object) -> bool: ...
+    def __ne__(self, other: object) -> bool: ...
+ def __getstate__(self) -> None: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/RSA.py b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/RSA.py new file mode 100644 index 0000000..7466e3a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/RSA.py @@ -0,0 +1,833 @@ +# -*- coding: utf-8 -*- +# =================================================================== +# +# Copyright (c) 2016, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = ['generate', 'construct', 'import_key', + 'RsaKey', 'oid'] + +import binascii +import struct + +from Cryptodome import Random +from Cryptodome.Util.py3compat import tobytes, bord, tostr +from Cryptodome.Util.asn1 import DerSequence, DerNull + +from Cryptodome.Math.Numbers import Integer +from Cryptodome.Math.Primality import (test_probable_prime, + generate_probable_prime, COMPOSITE) + +from Cryptodome.PublicKey import (_expand_subject_public_key_info, + _create_subject_public_key_info, + _extract_subject_public_key_info) + + +class RsaKey(object): + r"""Class defining an actual RSA key. + Do not instantiate directly. + Use :func:`generate`, :func:`construct` or :func:`import_key` instead. + + :ivar n: RSA modulus + :vartype n: integer + + :ivar e: RSA public exponent + :vartype e: integer + + :ivar d: RSA private exponent + :vartype d: integer + + :ivar p: First factor of the RSA modulus + :vartype p: integer + + :ivar q: Second factor of the RSA modulus + :vartype q: integer + + :ivar invp: Chinese remainder component (:math:`p^{-1} \text{mod } q`) + :vartype invp: integer + + :ivar invq: Chinese remainder component (:math:`q^{-1} \text{mod } p`) + :vartype invq: integer + + :ivar u: Same as ``invp`` + :vartype u: integer + + :undocumented: exportKey, publickey + """ + + def __init__(self, **kwargs): + """Build an RSA key. + + :Keywords: + n : integer + The modulus. + e : integer + The public exponent. + d : integer + The private exponent. Only required for private keys. + p : integer + The first factor of the modulus. Only required for private keys. + q : integer + The second factor of the modulus. Only required for private keys. 
+            u : integer
+                The CRT coefficient (inverse of p modulo q). Only required for
+                private keys.
+        """
+
+        input_set = set(kwargs.keys())
+        public_set = set(('n', 'e'))
+        private_set = public_set | set(('p', 'q', 'd', 'u'))
+        if input_set not in (private_set, public_set):
+            raise ValueError("Some RSA components are missing")
+        for component, value in kwargs.items():
+            setattr(self, "_" + component, value)
+        if input_set == private_set:
+            self._dp = self._d % (self._p - 1)  # = (e⁻¹) mod (p-1)
+            self._dq = self._d % (self._q - 1)  # = (e⁻¹) mod (q-1)
+            self._invq = None                   # will be computed on demand
+
+    @property
+    def n(self):
+        return int(self._n)
+
+    @property
+    def e(self):
+        return int(self._e)
+
+    @property
+    def d(self):
+        if not self.has_private():
+            raise AttributeError("No private exponent available for public keys")
+        return int(self._d)
+
+    @property
+    def p(self):
+        if not self.has_private():
+            raise AttributeError("No CRT component 'p' available for public keys")
+        return int(self._p)
+
+    @property
+    def q(self):
+        if not self.has_private():
+            raise AttributeError("No CRT component 'q' available for public keys")
+        return int(self._q)
+
+    @property
+    def dp(self):
+        if not self.has_private():
+            raise AttributeError("No CRT component 'dp' available for public keys")
+        return int(self._dp)
+
+    @property
+    def dq(self):
+        if not self.has_private():
+            raise AttributeError("No CRT component 'dq' available for public keys")
+        return int(self._dq)
+
+    @property
+    def invq(self):
+        if not self.has_private():
+            raise AttributeError("No CRT component 'invq' available for public keys")
+        if self._invq is None:
+            self._invq = self._q.inverse(self._p)
+        return int(self._invq)
+
+    @property
+    def invp(self):
+        return self.u
+
+    @property
+    def u(self):
+        if not self.has_private():
+            raise AttributeError("No CRT component 'u' available for public keys")
+        return int(self._u)
+
+    def size_in_bits(self):
+        """Size of the RSA modulus in bits"""
+        return self._n.size_in_bits()
+
+    def size_in_bytes(self):
+        """The minimal amount of bytes that can hold the RSA modulus"""
+        return (self._n.size_in_bits() - 1) // 8 + 1
+
+    def _encrypt(self, plaintext):
+        if not 0 <= plaintext < self._n:
+            raise ValueError("Plaintext too large")
+        return int(pow(Integer(plaintext), self._e, self._n))
+
+    def _decrypt(self, ciphertext):
+        if not 0 <= ciphertext < self._n:
+            raise ValueError("Ciphertext too large")
+        if not self.has_private():
+            raise TypeError("This is not a private key")
+
+        # Blinded RSA decryption (to prevent timing attacks):
+        # Step 1: Generate random secret blinding factor r,
+        # such that 0 < r < n-1
+        r = Integer.random_range(min_inclusive=1, max_exclusive=self._n)
+        # Step 2: Compute c' = c * r**e mod n
+        cp = Integer(ciphertext) * pow(r, self._e, self._n) % self._n
+        # Step 3: Compute m' = c'**d mod n       (normal RSA decryption)
+        m1 = pow(cp, self._dp, self._p)
+        m2 = pow(cp, self._dq, self._q)
+        h = ((m2 - m1) * self._u) % self._q
+        mp = h * self._p + m1
+        # Step 4: Compute m = m' * (r**(-1)) mod n
+        result = (r.inverse(self._n) * mp) % self._n
+        # Verify no faults occurred
+        if ciphertext != pow(result, self._e, self._n):
+            raise ValueError("Fault detected in RSA decryption")
+        return result
+
+    def has_private(self):
+        """Whether this is an RSA private key"""
+
+        return hasattr(self, "_d")
+
+    def can_encrypt(self):  # legacy
+        return True
+
+    def can_sign(self):     # legacy
+        return True
+
+    def public_key(self):
+        """A matching RSA public key.
+ + Returns: + a new :class:`RsaKey` object + """ + return RsaKey(n=self._n, e=self._e) + + def __eq__(self, other): + if self.has_private() != other.has_private(): + return False + if self.n != other.n or self.e != other.e: + return False + if not self.has_private(): + return True + return (self.d == other.d) + + def __ne__(self, other): + return not (self == other) + + def __getstate__(self): + # RSA key is not pickable + from pickle import PicklingError + raise PicklingError + + def __repr__(self): + if self.has_private(): + extra = ", d=%d, p=%d, q=%d, u=%d" % (int(self._d), int(self._p), + int(self._q), int(self._u)) + else: + extra = "" + return "RsaKey(n=%d, e=%d%s)" % (int(self._n), int(self._e), extra) + + def __str__(self): + if self.has_private(): + key_type = "Private" + else: + key_type = "Public" + return "%s RSA key at 0x%X" % (key_type, id(self)) + + def export_key(self, format='PEM', passphrase=None, pkcs=1, + protection=None, randfunc=None): + """Export this RSA key. + + Args: + format (string): + The format to use for wrapping the key: + + - *'PEM'*. (*Default*) Text encoding, done according to `RFC1421`_/`RFC1423`_. + - *'DER'*. Binary encoding. + - *'OpenSSH'*. Textual encoding, done according to OpenSSH specification. + Only suitable for public keys (not private keys). + + passphrase (string): + (*For private keys only*) The pass phrase used for protecting the output. + + pkcs (integer): + (*For private keys only*) The ASN.1 structure to use for + serializing the key. Note that even in case of PEM + encoding, there is an inner ASN.1 DER structure. + + With ``pkcs=1`` (*default*), the private key is encoded in a + simple `PKCS#1`_ structure (``RSAPrivateKey``). + + With ``pkcs=8``, the private key is encoded in a `PKCS#8`_ structure + (``PrivateKeyInfo``). + + .. note:: + This parameter is ignored for a public key. + For DER and PEM, an ASN.1 DER ``SubjectPublicKeyInfo`` + structure is always used. + + protection (string): + (*For private keys only*) + The encryption scheme to use for protecting the private key. + + If ``None`` (default), the behavior depends on :attr:`format`: + + - For *'DER'*, the *PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC* + scheme is used. The following operations are performed: + + 1. A 16 byte Triple DES key is derived from the passphrase + using :func:`Cryptodome.Protocol.KDF.PBKDF2` with 8 bytes salt, + and 1 000 iterations of :mod:`Cryptodome.Hash.HMAC`. + 2. The private key is encrypted using CBC. + 3. The encrypted key is encoded according to PKCS#8. + + - For *'PEM'*, the obsolete PEM encryption scheme is used. + It is based on MD5 for key derivation, and Triple DES for encryption. + + Specifying a value for :attr:`protection` is only meaningful for PKCS#8 + (that is, ``pkcs=8``) and only if a pass phrase is present too. + + The supported schemes for PKCS#8 are listed in the + :mod:`Cryptodome.IO.PKCS8` module (see :attr:`wrap_algo` parameter). + + randfunc (callable): + A function that provides random bytes. Only used for PEM encoding. + The default is :func:`Cryptodome.Random.get_random_bytes`. + + Returns: + byte string: the encoded key + + Raises: + ValueError:when the format is unknown or when you try to encrypt a private + key with *DER* format and PKCS#1. + + .. warning:: + If you don't provide a pass phrase, the private key will be + exported in the clear! + + .. _RFC1421: http://www.ietf.org/rfc/rfc1421.txt + .. _RFC1423: http://www.ietf.org/rfc/rfc1423.txt + .. _`PKCS#1`: http://www.ietf.org/rfc/rfc3447.txt + .. 
_`PKCS#8`: http://www.ietf.org/rfc/rfc5208.txt + """ + + if passphrase is not None: + passphrase = tobytes(passphrase) + + if randfunc is None: + randfunc = Random.get_random_bytes + + if format == 'OpenSSH': + e_bytes, n_bytes = [x.to_bytes() for x in (self._e, self._n)] + if bord(e_bytes[0]) & 0x80: + e_bytes = b'\x00' + e_bytes + if bord(n_bytes[0]) & 0x80: + n_bytes = b'\x00' + n_bytes + keyparts = [b'ssh-rsa', e_bytes, n_bytes] + keystring = b''.join([struct.pack(">I", len(kp)) + kp for kp in keyparts]) + return b'ssh-rsa ' + binascii.b2a_base64(keystring)[:-1] + + # DER format is always used, even in case of PEM, which simply + # encodes it into BASE64. + if self.has_private(): + binary_key = DerSequence([0, + self.n, + self.e, + self.d, + self.p, + self.q, + self.d % (self.p-1), + self.d % (self.q-1), + Integer(self.q).inverse(self.p) + ]).encode() + if pkcs == 1: + key_type = 'RSA PRIVATE KEY' + if format == 'DER' and passphrase: + raise ValueError("PKCS#1 private key cannot be encrypted") + else: # PKCS#8 + from Cryptodome.IO import PKCS8 + + if format == 'PEM' and protection is None: + key_type = 'PRIVATE KEY' + binary_key = PKCS8.wrap(binary_key, oid, None, + key_params=DerNull()) + else: + key_type = 'ENCRYPTED PRIVATE KEY' + if not protection: + protection = 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC' + binary_key = PKCS8.wrap(binary_key, oid, + passphrase, protection, + key_params=DerNull()) + passphrase = None + else: + key_type = "PUBLIC KEY" + binary_key = _create_subject_public_key_info(oid, + DerSequence([self.n, + self.e]), + DerNull() + ) + + if format == 'DER': + return binary_key + if format == 'PEM': + from Cryptodome.IO import PEM + + pem_str = PEM.encode(binary_key, key_type, passphrase, randfunc) + return tobytes(pem_str) + + raise ValueError("Unknown key format '%s'. Cannot export the RSA key." % format) + + # Backward compatibility + exportKey = export_key + publickey = public_key + + # Methods defined in PyCryptodome that we don't support anymore + def sign(self, M, K): + raise NotImplementedError("Use module Cryptodome.Signature.pkcs1_15 instead") + + def verify(self, M, signature): + raise NotImplementedError("Use module Cryptodome.Signature.pkcs1_15 instead") + + def encrypt(self, plaintext, K): + raise NotImplementedError("Use module Cryptodome.Cipher.PKCS1_OAEP instead") + + def decrypt(self, ciphertext): + raise NotImplementedError("Use module Cryptodome.Cipher.PKCS1_OAEP instead") + + def blind(self, M, B): + raise NotImplementedError + + def unblind(self, M, B): + raise NotImplementedError + + def size(self): + raise NotImplementedError + + +def generate(bits, randfunc=None, e=65537): + """Create a new RSA key pair. + + The algorithm closely follows NIST `FIPS 186-4`_ in its + sections B.3.1 and B.3.3. The modulus is the product of + two non-strong probable primes. + Each prime passes a suitable number of Miller-Rabin tests + with random bases and a single Lucas test. + + Args: + bits (integer): + Key length, or size (in bits) of the RSA modulus. + It must be at least 1024, but **2048 is recommended.** + The FIPS standard only defines 1024, 2048 and 3072. + randfunc (callable): + Function that returns random bytes. + The default is :func:`Cryptodome.Random.get_random_bytes`. + e (integer): + Public RSA exponent. It must be an odd positive integer. + It is typically a small number with very few ones in its + binary representation. + The FIPS standard requires the public exponent to be + at least 65537 (the default). 
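+
+    Example (a quick sketch; generating a 2048-bit key takes a few seconds):
+
+        >>> from Cryptodome.PublicKey import RSA
+        >>> key = RSA.generate(2048)
+        >>> key.size_in_bits()
+        2048
+        >>> key.public_key().has_private()
+        False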
+
+    Returns: an RSA key object (:class:`RsaKey`, with private key).
+
+    .. _FIPS 186-4: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf
+    """
+
+    if bits < 1024:
+        raise ValueError("RSA modulus length must be >= 1024")
+    if e % 2 == 0 or e < 3:
+        raise ValueError("RSA public exponent must be a positive, odd integer larger than 2.")
+
+    if randfunc is None:
+        randfunc = Random.get_random_bytes
+
+    d = n = Integer(1)
+    e = Integer(e)
+
+    while n.size_in_bits() != bits and d < (1 << (bits // 2)):
+        # Generate the prime factors of n: p and q.
+        # By construction, their product is always
+        # 2^{bits-1} < p*q < 2^bits.
+        size_q = bits // 2
+        size_p = bits - size_q
+
+        min_p = min_q = (Integer(1) << (2 * size_q - 1)).sqrt()
+        if size_q != size_p:
+            min_p = (Integer(1) << (2 * size_p - 1)).sqrt()
+
+        def filter_p(candidate):
+            return candidate > min_p and (candidate - 1).gcd(e) == 1
+
+        p = generate_probable_prime(exact_bits=size_p,
+                                    randfunc=randfunc,
+                                    prime_filter=filter_p)
+
+        min_distance = Integer(1) << (bits // 2 - 100)
+
+        def filter_q(candidate):
+            return (candidate > min_q and
+                    (candidate - 1).gcd(e) == 1 and
+                    abs(candidate - p) > min_distance)
+
+        q = generate_probable_prime(exact_bits=size_q,
+                                    randfunc=randfunc,
+                                    prime_filter=filter_q)
+
+        n = p * q
+        lcm = (p - 1).lcm(q - 1)
+        d = e.inverse(lcm)
+
+    if p > q:
+        p, q = q, p
+
+    u = p.inverse(q)
+
+    return RsaKey(n=n, e=e, d=d, p=p, q=q, u=u)
+
+
+def construct(rsa_components, consistency_check=True):
+    r"""Construct an RSA key from a tuple of valid RSA components.
+
+    The modulus **n** must be the product of two primes.
+    The public exponent **e** must be odd and larger than 1.
+
+    In case of a private key, the following equations must apply:
+
+    .. math::
+
+        \begin{align}
+        p*q &= n \\
+        e*d &\equiv 1 ( \text{mod lcm}(p-1, q-1) ) \\
+        p*u &\equiv 1 ( \text{mod } q)
+        \end{align}
+
+    Args:
+      rsa_components (tuple):
+        A tuple of integers, with at least 2 and no
+        more than 6 items. The items come in the following order:
+
+        1. RSA modulus *n*.
+        2. Public exponent *e*.
+        3. Private exponent *d*.
+           Only required if the key is private.
+        4. First factor of *n* (*p*).
+           Optional, but the other factor *q* must also be present.
+        5. Second factor of *n* (*q*). Optional.
+        6. CRT coefficient *u*, that is :math:`p^{-1} \text{mod }q`. Optional.
+
+      consistency_check (boolean):
+        If ``True``, the library will verify that the provided components
+        fulfil the main RSA properties.
+
+    Raises:
+        ValueError: when the key being imported fails the most basic RSA validity checks.
+
+    Returns: An RSA key object (:class:`RsaKey`).
+    """
+
+    class InputComps(object):
+        pass
+
+    input_comps = InputComps()
+    for (comp, value) in zip(('n', 'e', 'd', 'p', 'q', 'u'), rsa_components):
+        setattr(input_comps, comp, Integer(value))
+
+    n = input_comps.n
+    e = input_comps.e
+    if not hasattr(input_comps, 'd'):
+        key = RsaKey(n=n, e=e)
+    else:
+        d = input_comps.d
+        if hasattr(input_comps, 'q'):
+            p = input_comps.p
+            q = input_comps.q
+        else:
+            # Compute factors p and q from the private exponent d.
+            # We assume that n has no more than two factors.
+            # See 8.2.2(i) in Handbook of Applied Cryptography.
+            ktot = d * e - 1
+            # The quantity d*e-1 is a multiple of phi(n), even,
+            # and can be represented as t*2^s.
+            t = ktot
+            while t % 2 == 0:
+                t //= 2
+            # Cycle through all multiplicative inverses in Zn.
+            # The algorithm is non-deterministic, but there is a 50% chance
+            # any candidate a leads to successful factoring.
+ # See "Digitalized Signatures and Public Key Functions as Intractable + # as Factorization", M. Rabin, 1979 + spotted = False + a = Integer(2) + while not spotted and a < 100: + k = Integer(t) + # Cycle through all values a^{t*2^i}=a^k + while k < ktot: + cand = pow(a, k, n) + # Check if a^k is a non-trivial root of unity (mod n) + if cand != 1 and cand != (n - 1) and pow(cand, 2, n) == 1: + # We have found a number such that (cand-1)(cand+1)=0 (mod n). + # Either of the terms divides n. + p = Integer(n).gcd(cand + 1) + spotted = True + break + k *= 2 + # This value was not any good... let's try another! + a += 2 + if not spotted: + raise ValueError("Unable to compute factors p and q from exponent d.") + # Found ! + assert ((n % p) == 0) + q = n // p + + if hasattr(input_comps, 'u'): + u = input_comps.u + else: + u = p.inverse(q) + + # Build key object + key = RsaKey(n=n, e=e, d=d, p=p, q=q, u=u) + + # Verify consistency of the key + if consistency_check: + + # Modulus and public exponent must be coprime + if e <= 1 or e >= n: + raise ValueError("Invalid RSA public exponent") + if Integer(n).gcd(e) != 1: + raise ValueError("RSA public exponent is not coprime to modulus") + + # For RSA, modulus must be odd + if not n & 1: + raise ValueError("RSA modulus is not odd") + + if key.has_private(): + # Modulus and private exponent must be coprime + if d <= 1 or d >= n: + raise ValueError("Invalid RSA private exponent") + if Integer(n).gcd(d) != 1: + raise ValueError("RSA private exponent is not coprime to modulus") + # Modulus must be product of 2 primes + if p * q != n: + raise ValueError("RSA factors do not match modulus") + if test_probable_prime(p) == COMPOSITE: + raise ValueError("RSA factor p is composite") + if test_probable_prime(q) == COMPOSITE: + raise ValueError("RSA factor q is composite") + # See Carmichael theorem + phi = (p - 1) * (q - 1) + lcm = phi // (p - 1).gcd(q - 1) + if (e * d % int(lcm)) != 1: + raise ValueError("Invalid RSA condition") + if hasattr(key, 'u'): + # CRT coefficient + if u <= 1 or u >= q: + raise ValueError("Invalid RSA component u") + if (p * u % q) != 1: + raise ValueError("Invalid RSA component u with p") + + return key + + +def _import_pkcs1_private(encoded, *kwargs): + # RSAPrivateKey ::= SEQUENCE { + # version Version, + # modulus INTEGER, -- n + # publicExponent INTEGER, -- e + # privateExponent INTEGER, -- d + # prime1 INTEGER, -- p + # prime2 INTEGER, -- q + # exponent1 INTEGER, -- d mod (p-1) + # exponent2 INTEGER, -- d mod (q-1) + # coefficient INTEGER -- (inverse of q) mod p + # } + # + # Version ::= INTEGER + der = DerSequence().decode(encoded, nr_elements=9, only_ints_expected=True) + if der[0] != 0: + raise ValueError("No PKCS#1 encoding of an RSA private key") + return construct(der[1:6] + [Integer(der[4]).inverse(der[5])]) + + +def _import_pkcs1_public(encoded, *kwargs): + # RSAPublicKey ::= SEQUENCE { + # modulus INTEGER, -- n + # publicExponent INTEGER -- e + # } + der = DerSequence().decode(encoded, nr_elements=2, only_ints_expected=True) + return construct(der) + + +def _import_subjectPublicKeyInfo(encoded, *kwargs): + + algoid, encoded_key, params = _expand_subject_public_key_info(encoded) + if algoid != oid or params is not None: + raise ValueError("No RSA subjectPublicKeyInfo") + return _import_pkcs1_public(encoded_key) + + +def _import_x509_cert(encoded, *kwargs): + + sp_info = _extract_subject_public_key_info(encoded) + return _import_subjectPublicKeyInfo(sp_info) + + +def _import_pkcs8(encoded, passphrase): + from Cryptodome.IO 
import PKCS8
+
+    k = PKCS8.unwrap(encoded, passphrase)
+    if k[0] != oid:
+        raise ValueError("No PKCS#8 encoded RSA key")
+    return _import_keyDER(k[1], passphrase)
+
+
+def _import_keyDER(extern_key, passphrase):
+    """Import an RSA key (public or private half), encoded in DER form."""
+
+    decodings = (_import_pkcs1_private,
+                 _import_pkcs1_public,
+                 _import_subjectPublicKeyInfo,
+                 _import_x509_cert,
+                 _import_pkcs8)
+
+    for decoding in decodings:
+        try:
+            return decoding(extern_key, passphrase)
+        except ValueError:
+            pass
+
+    raise ValueError("RSA key format is not supported")
+
+
+def _import_openssh_private_rsa(data, password):
+
+    from ._openssh import (import_openssh_private_generic,
+                           read_bytes, read_string, check_padding)
+
+    ssh_name, decrypted = import_openssh_private_generic(data, password)
+
+    if ssh_name != "ssh-rsa":
+        raise ValueError("This SSH key is not RSA")
+
+    n, decrypted = read_bytes(decrypted)
+    e, decrypted = read_bytes(decrypted)
+    d, decrypted = read_bytes(decrypted)
+    iqmp, decrypted = read_bytes(decrypted)
+    p, decrypted = read_bytes(decrypted)
+    q, decrypted = read_bytes(decrypted)
+
+    _, padded = read_string(decrypted)  # Comment
+    check_padding(padded)
+
+    build = [Integer.from_bytes(x) for x in (n, e, d, q, p, iqmp)]
+    return construct(build)
+
+
+def import_key(extern_key, passphrase=None):
+    """Import an RSA key (public or private).
+
+    Args:
+      extern_key (string or byte string):
+        The RSA key to import.
+
+        The following formats are supported for an RSA **public key**:
+
+        - X.509 certificate (binary or PEM format)
+        - X.509 ``subjectPublicKeyInfo`` DER SEQUENCE (binary or PEM
+          encoding)
+        - `PKCS#1`_ ``RSAPublicKey`` DER SEQUENCE (binary or PEM encoding)
+        - An OpenSSH line (e.g. the content of ``~/.ssh/id_rsa.pub``, ASCII)
+
+        The following formats are supported for an RSA **private key**:
+
+        - PKCS#1 ``RSAPrivateKey`` DER SEQUENCE (binary or PEM encoding)
+        - `PKCS#8`_ ``PrivateKeyInfo`` or ``EncryptedPrivateKeyInfo``
+          DER SEQUENCE (binary or PEM encoding)
+        - OpenSSH (text format, introduced in `OpenSSH 6.5`_)
+
+        For details about the PEM encoding, see `RFC1421`_/`RFC1423`_.
+
+      passphrase (string or byte string):
+        For private keys only, the pass phrase that encrypts the key.
+
+    Returns: An RSA key object (:class:`RsaKey`).
+
+    Raises:
+        ValueError/IndexError/TypeError:
+            When the given key cannot be parsed (possibly because the pass
+            phrase is wrong).
+
+    .. _RFC1421: http://www.ietf.org/rfc/rfc1421.txt
+    .. _RFC1423: http://www.ietf.org/rfc/rfc1423.txt
+    .. _`PKCS#1`: http://www.ietf.org/rfc/rfc3447.txt
+    .. _`PKCS#8`: http://www.ietf.org/rfc/rfc5208.txt
+    .. _`OpenSSH 6.5`: https://flak.tedunangst.com/post/new-openssh-key-format-and-bcrypt-pbkdf
+    """
+
+    from Cryptodome.IO import PEM
+
+    extern_key = tobytes(extern_key)
+    if passphrase is not None:
+        passphrase = tobytes(passphrase)
+
+    if extern_key.startswith(b'-----BEGIN OPENSSH PRIVATE KEY'):
+        text_encoded = tostr(extern_key)
+        openssh_encoded, marker, enc_flag = PEM.decode(text_encoded, passphrase)
+        result = _import_openssh_private_rsa(openssh_encoded, passphrase)
+        return result
+
+    if extern_key.startswith(b'-----'):
+        # This is probably a PEM encoded key.
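+        # PEM.decode strips any PEM-level encryption; when enc_flag is set
+        # the payload is already in the clear, so the passphrase must not be
+        # reused for the inner decoder (hence passphrase = None below).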
+ (der, marker, enc_flag) = PEM.decode(tostr(extern_key), passphrase) + if enc_flag: + passphrase = None + return _import_keyDER(der, passphrase) + + if extern_key.startswith(b'ssh-rsa '): + # This is probably an OpenSSH key + keystring = binascii.a2b_base64(extern_key.split(b' ')[1]) + keyparts = [] + while len(keystring) > 4: + length = struct.unpack(">I", keystring[:4])[0] + keyparts.append(keystring[4:4 + length]) + keystring = keystring[4 + length:] + e = Integer.from_bytes(keyparts[1]) + n = Integer.from_bytes(keyparts[2]) + return construct([n, e]) + + if len(extern_key) > 0 and bord(extern_key[0]) == 0x30: + # This is probably a DER encoded key + return _import_keyDER(extern_key, passphrase) + + raise ValueError("RSA key format is not supported") + + +# Backward compatibility +importKey = import_key + +#: `Object ID`_ for the RSA encryption algorithm. This OID often indicates +#: a generic RSA key, even when such key will be actually used for digital +#: signatures. +#: +#: .. _`Object ID`: http://www.alvestrand.no/objectid/1.2.840.113549.1.1.1.html +oid = "1.2.840.113549.1.1.1" diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/RSA.pyi b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/RSA.pyi new file mode 100644 index 0000000..cef70ad --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/RSA.pyi @@ -0,0 +1,61 @@ +from typing import Callable, Union, Tuple, Optional + +from Cryptodome.Math.Numbers import Integer + +__all__ = ['generate', 'construct', 'import_key', + 'RsaKey', 'oid'] + +RNG = Callable[[int], bytes] + +class RsaKey(object): + def __init__(self, **kwargs: int) -> None: ... + + @property + def n(self) -> int: ... + @property + def e(self) -> int: ... + @property + def d(self) -> int: ... + @property + def p(self) -> int: ... + @property + def q(self) -> int: ... + @property + def u(self) -> int: ... + @property + def invp(self) -> int: ... + @property + def invq(self) -> int: ... + + def size_in_bits(self) -> int: ... + def size_in_bytes(self) -> int: ... + def has_private(self) -> bool: ... + def can_encrypt(self) -> bool: ... # legacy + def can_sign(self) -> bool:... # legacy + def public_key(self) -> RsaKey: ... + def __eq__(self, other: object) -> bool: ... + def __ne__(self, other: object) -> bool: ... + def __getstate__(self) -> None: ... + def __repr__(self) -> str: ... + def __str__(self) -> str: ... + def export_key(self, format: Optional[str]="PEM", passphrase: Optional[str]=None, pkcs: Optional[int]=1, + protection: Optional[str]=None, randfunc: Optional[RNG]=None) -> bytes: ... + + # Backward compatibility + exportKey = export_key + publickey = public_key + +Int = Union[int, Integer] + +def generate(bits: int, randfunc: Optional[RNG]=None, e: Optional[int]=65537) -> RsaKey: ... +def construct(rsa_components: Union[Tuple[Int, Int], # n, e + Tuple[Int, Int, Int], # n, e, d + Tuple[Int, Int, Int, Int, Int], # n, e, d, p, q + Tuple[Int, Int, Int, Int, Int, Int]], # n, e, d, p, q, crt_q + consistency_check: Optional[bool]=True) -> RsaKey: ... +def import_key(extern_key: Union[str, bytes], passphrase: Optional[str]=None) -> RsaKey: ... 
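+
+# A round-trip sketch matching the signatures above (illustrative only; the
+# protection scheme name comes from the export_key() docs in RSA.py):
+#
+#     from Cryptodome.PublicKey import RSA
+#     key = RSA.generate(2048)
+#     pem = key.export_key(passphrase='secret', pkcs=8,
+#                          protection='PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC')
+#     assert RSA.import_key(pem, passphrase='secret') == key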
+ +# Backward compatibility +importKey = import_key + +oid: str diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/__init__.py new file mode 100644 index 0000000..99b67a4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/__init__.py @@ -0,0 +1,94 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from Cryptodome.Util.asn1 import (DerSequence, DerInteger, DerBitString, + DerObjectId, DerNull) + + +def _expand_subject_public_key_info(encoded): + """Parse a SubjectPublicKeyInfo structure. + + It returns a triple with: + * OID (string) + * encoded public key (bytes) + * Algorithm parameters (bytes or None) + """ + + # + # SubjectPublicKeyInfo ::= SEQUENCE { + # algorithm AlgorithmIdentifier, + # subjectPublicKey BIT STRING + # } + # + # AlgorithmIdentifier ::= SEQUENCE { + # algorithm OBJECT IDENTIFIER, + # parameters ANY DEFINED BY algorithm OPTIONAL + # } + # + + spki = DerSequence().decode(encoded, nr_elements=2) + algo = DerSequence().decode(spki[0], nr_elements=(1,2)) + algo_oid = DerObjectId().decode(algo[0]) + spk = DerBitString().decode(spki[1]).value + + if len(algo) == 1: + algo_params = None + else: + try: + DerNull().decode(algo[1]) + algo_params = None + except: + algo_params = algo[1] + + return algo_oid.value, spk, algo_params + + +def _create_subject_public_key_info(algo_oid, public_key, params): + + if params is None: + algorithm = DerSequence([DerObjectId(algo_oid)]) + else: + algorithm = DerSequence([DerObjectId(algo_oid), params]) + + spki = DerSequence([algorithm, + DerBitString(public_key) + ]) + return spki.encode() + + +def _extract_subject_public_key_info(x509_certificate): + """Extract subjectPublicKeyInfo from a DER X.509 certificate.""" + + certificate = DerSequence().decode(x509_certificate, nr_elements=3) + tbs_certificate = DerSequence().decode(certificate[0], + nr_elements=range(6, 11)) + + index = 5 + try: + tbs_certificate[0] + 1 + # Version not present + version = 1 + except TypeError: + version = DerInteger(explicit=0).decode(tbs_certificate[0]).value + if version not in (2, 3): + raise ValueError("Incorrect X.509 certificate version") + index = 6 + + return tbs_certificate[index] diff --git a/lib/site-packages/pip/_vendor/chardet/cli/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/__init__.pyi similarity index 100% rename from lib/site-packages/pip/_vendor/chardet/cli/__init__.py rename to python/lib/python3.11/site-packages/Cryptodome/PublicKey/__init__.pyi diff --git 
a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_ec_ws.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_ec_ws.abi3.so new file mode 100755 index 0000000..536cd09 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_ec_ws.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_ed25519.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_ed25519.abi3.so new file mode 100755 index 0000000..e6fed96 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_ed25519.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_ed448.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_ed448.abi3.so new file mode 100755 index 0000000..89e7713 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_ed448.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_openssh.py b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_openssh.py new file mode 100644 index 0000000..53b16df --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_openssh.py @@ -0,0 +1,135 @@ +# =================================================================== +# +# Copyright (c) 2019, Helder Eijs <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import struct + +from Cryptodome.Cipher import AES +from Cryptodome.Hash import SHA512 +from Cryptodome.Protocol.KDF import _bcrypt_hash +from Cryptodome.Util.strxor import strxor +from Cryptodome.Util.py3compat import tostr, bchr, bord + + +def read_int4(data): + if len(data) < 4: + raise ValueError("Insufficient data") + value = struct.unpack(">I", data[:4])[0] + return value, data[4:] + + +def read_bytes(data): + size, data = read_int4(data) + if len(data) < size: + raise ValueError("Insufficient data (V)") + return data[:size], data[size:] + + +def read_string(data): + s, d = read_bytes(data) + return tostr(s), d + + +def check_padding(pad): + for v, x in enumerate(pad): + if bord(x) != ((v + 1) & 0xFF): + raise ValueError("Incorrect padding") + + +def import_openssh_private_generic(data, password): + # https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/PROTOCOL.key?annotate=HEAD + # https://github.com/openssh/openssh-portable/blob/master/sshkey.c + # https://coolaj86.com/articles/the-openssh-private-key-format/ + # https://coolaj86.com/articles/the-ssh-public-key-format/ + + if not data.startswith(b'openssh-key-v1\x00'): + raise ValueError("Incorrect magic value") + data = data[15:] + + ciphername, data = read_string(data) + kdfname, data = read_string(data) + kdfoptions, data = read_bytes(data) + number_of_keys, data = read_int4(data) + + if number_of_keys != 1: + raise ValueError("We only handle 1 key at a time") + + _, data = read_string(data) # Public key + encrypted, data = read_bytes(data) + if data: + raise ValueError("Too much data") + + if len(encrypted) % 8 != 0: + raise ValueError("Incorrect payload length") + + # Decrypt if necessary + if ciphername == 'none': + decrypted = encrypted + else: + if (ciphername, kdfname) != ('aes256-ctr', 'bcrypt'): + raise ValueError("Unsupported encryption scheme %s/%s" % (ciphername, kdfname)) + + salt, kdfoptions = read_bytes(kdfoptions) + iterations, kdfoptions = read_int4(kdfoptions) + + if len(salt) != 16: + raise ValueError("Incorrect salt length") + if kdfoptions: + raise ValueError("Too much data in kdfoptions") + + pwd_sha512 = SHA512.new(password).digest() + # We need 32+16 = 48 bytes, therefore 2 bcrypt outputs are sufficient + stripes = [] + constant = b"OxychromaticBlowfishSwatDynamite" + for count in range(1, 3): + salt_sha512 = SHA512.new(salt + struct.pack(">I", count)).digest() + out_le = _bcrypt_hash(pwd_sha512, 6, salt_sha512, constant, False) + out = struct.pack("<IIIIIIII", *struct.unpack(">IIIIIIII", out_le)) + acc = bytearray(out) + for _ in range(1, iterations): + out_le = _bcrypt_hash(pwd_sha512, 6, SHA512.new(out).digest(), constant, False) + out = struct.pack("<IIIIIIII", *struct.unpack(">IIIIIIII", out_le)) + strxor(acc, out, output=acc) + stripes.append(acc[:24]) + + result = b"".join([bchr(a)+bchr(b) for (a, b) in zip(*stripes)]) + + cipher = AES.new(result[:32], + AES.MODE_CTR, + nonce=b"", + initial_value=result[32:32+16]) + decrypted = cipher.decrypt(encrypted) + + checkint1, decrypted = read_int4(decrypted) + checkint2, decrypted = read_int4(decrypted) + if checkint1 != checkint2: + raise ValueError("Incorrect checksum") + ssh_name, decrypted = read_string(decrypted) + + return ssh_name, decrypted diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_openssh.pyi b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_openssh.pyi new file mode 100644 index 0000000..15f3677 --- /dev/null +++ 
b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_openssh.pyi @@ -0,0 +1,7 @@ +from typing import Tuple + +def read_int4(data: bytes) -> Tuple[int, bytes]: ... +def read_bytes(data: bytes) -> Tuple[bytes, bytes]: ... +def read_string(data: bytes) -> Tuple[str, bytes]: ... +def check_padding(pad: bytes) -> None: ... +def import_openssh_private_generic(data: bytes, password: bytes) -> Tuple[str, bytes]: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_x25519.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_x25519.abi3.so new file mode 100755 index 0000000..88975a0 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/PublicKey/_x25519.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Random/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/Random/__init__.py new file mode 100644 index 0000000..fd18d86 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Random/__init__.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# +# Random/__init__.py : PyCryptodome random number generation +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +__all__ = ['new', 'get_random_bytes'] + +from os import urandom + +class _UrandomRNG(object): + + def read(self, n): + """Return a random byte string of the desired size.""" + return urandom(n) + + def flush(self): + """Method provided for backward compatibility only.""" + pass + + def reinit(self): + """Method provided for backward compatibility only.""" + pass + + def close(self): + """Method provided for backward compatibility only.""" + pass + + +def new(*args, **kwargs): + """Return a file-like object that outputs cryptographically random bytes.""" + return _UrandomRNG() + + +def atfork(): + pass + + +#: Function that returns a random byte string of the desired size. +get_random_bytes = urandom + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Random/__init__.pyi b/python/lib/python3.11/site-packages/Cryptodome/Random/__init__.pyi new file mode 100644 index 0000000..ddc5b9b --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Random/__init__.pyi @@ -0,0 +1,19 @@ +from typing import Any + +__all__ = ['new', 'get_random_bytes'] + +from os import urandom + +class _UrandomRNG(object): + + def read(self, n: int) -> bytes:... + def flush(self) -> None: ... + def reinit(self) -> None: ... + def close(self) -> None: ... + +def new(*args: Any, **kwargs: Any) -> _UrandomRNG: ... + +def atfork() -> None: ... 
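Both entry points defined in Random/__init__.py above are thin wrappers around os.urandom, so the intended usage is direct; a minimal sketch (the byte counts are arbitrary):

from Cryptodome.Random import get_random_bytes, new

key = get_random_bytes(16)  # 16 bytes straight from os.urandom
rng = new()                 # file-like wrapper over the same source
more = rng.read(32)
assert len(key) == 16 and len(more) == 32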
+ +get_random_bytes = urandom + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Random/random.py b/python/lib/python3.11/site-packages/Cryptodome/Random/random.py new file mode 100644 index 0000000..da30795 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Random/random.py @@ -0,0 +1,138 @@ +# -*- coding: utf-8 -*- +# +# Random/random.py : Strong alternative for the standard 'random' module +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +__all__ = ['StrongRandom', 'getrandbits', 'randrange', 'randint', 'choice', 'shuffle', 'sample'] + +from Cryptodome import Random + +from Cryptodome.Util.py3compat import is_native_int + +class StrongRandom(object): + def __init__(self, rng=None, randfunc=None): + if randfunc is None and rng is None: + self._randfunc = None + elif randfunc is not None and rng is None: + self._randfunc = randfunc + elif randfunc is None and rng is not None: + self._randfunc = rng.read + else: + raise ValueError("Cannot specify both 'rng' and 'randfunc'") + + def getrandbits(self, k): + """Return an integer with k random bits.""" + + if self._randfunc is None: + self._randfunc = Random.new().read + mask = (1 << k) - 1 + return mask & bytes_to_long(self._randfunc(ceil_div(k, 8))) + + def randrange(self, *args): + """randrange([start,] stop[, step]): + Return a randomly-selected element from range(start, stop, step).""" + if len(args) == 3: + (start, stop, step) = args + elif len(args) == 2: + (start, stop) = args + step = 1 + elif len(args) == 1: + (stop,) = args + start = 0 + step = 1 + else: + raise TypeError("randrange expected at most 3 arguments, got %d" % (len(args),)) + if (not is_native_int(start) or not is_native_int(stop) or not + is_native_int(step)): + raise TypeError("randrange requires integer arguments") + if step == 0: + raise ValueError("randrange step argument must not be zero") + + num_choices = ceil_div(stop - start, step) + if num_choices < 0: + num_choices = 0 + if num_choices < 1: + raise ValueError("empty range for randrange(%r, %r, %r)" % (start, stop, step)) + + # Pick a random number in the range of possible numbers + r = num_choices + while r >= num_choices: + r = self.getrandbits(size(num_choices)) + + return start + (step * r) + + def randint(self, a, b): + """Return a random integer N such that a <= N <= b.""" + if not is_native_int(a) or not is_native_int(b): + raise TypeError("randint requires integer arguments") + N = self.randrange(a, b+1) + assert a <= N <= b + return N + + def choice(self, seq): + """Return a random element from a 
(non-empty) sequence. + + If the sequence is empty, raises IndexError. + """ + if len(seq) == 0: + raise IndexError("empty sequence") + return seq[self.randrange(len(seq))] + + def shuffle(self, x): + """Shuffle the sequence in place.""" + # Fisher-Yates shuffle. O(n) + # See http://en.wikipedia.org/wiki/Fisher-Yates_shuffle + # Working backwards from the end of the array, we choose a random item + # from the remaining items until all items have been chosen. + for i in range(len(x)-1, 0, -1): # iterate from len(x)-1 downto 1 + j = self.randrange(0, i+1) # choose random j such that 0 <= j <= i + x[i], x[j] = x[j], x[i] # exchange x[i] and x[j] + + def sample(self, population, k): + """Return a k-length list of unique elements chosen from the population sequence.""" + + num_choices = len(population) + if k > num_choices: + raise ValueError("sample larger than population") + + retval = [] + selected = {} # we emulate a set using a dict here + for i in range(k): + r = None + while r is None or r in selected: + r = self.randrange(num_choices) + retval.append(population[r]) + selected[r] = 1 + return retval + +_r = StrongRandom() +getrandbits = _r.getrandbits +randrange = _r.randrange +randint = _r.randint +choice = _r.choice +shuffle = _r.shuffle +sample = _r.sample + +# These are at the bottom to avoid problems with recursive imports +from Cryptodome.Util.number import ceil_div, bytes_to_long, long_to_bytes, size + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/Random/random.pyi b/python/lib/python3.11/site-packages/Cryptodome/Random/random.pyi new file mode 100644 index 0000000..9b7cf7e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Random/random.pyi @@ -0,0 +1,22 @@ +from typing import Callable, Tuple, Union, Sequence, Any, Optional, TypeVar + +__all__ = ['StrongRandom', 'getrandbits', 'randrange', 'randint', 'choice', 'shuffle', 'sample'] + +T = TypeVar('T') + +class StrongRandom(object): + def __init__(self, rng: Optional[Any]=None, randfunc: Optional[Callable]=None) -> None: ... # TODO What is rng? + def getrandbits(self, k: int) -> int: ... + def randrange(self, start: int, stop: int = ..., step: int = ...) -> int: ... + def randint(self, a: int, b: int) -> int: ... + def choice(self, seq: Sequence[T]) -> T: ... + def shuffle(self, x: Sequence) -> None: ... + def sample(self, population: Sequence, k: int) -> list: ... + +_r = StrongRandom() +getrandbits = _r.getrandbits +randrange = _r.randrange +randint = _r.randint +choice = _r.choice +shuffle = _r.shuffle +sample = _r.sample diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/__init__.py new file mode 100644 index 0000000..330fa22 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/__init__.py @@ -0,0 +1,60 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/__init__.py: Self-test for cipher modules +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved.
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test for cipher modules""" + +__revision__ = "$Id$" + +def get_tests(config={}): + tests = [] + from Cryptodome.SelfTest.Cipher import test_AES; tests += test_AES.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_ARC2; tests += test_ARC2.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_ARC4; tests += test_ARC4.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_Blowfish; tests += test_Blowfish.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_CAST; tests += test_CAST.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_DES3; tests += test_DES3.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_DES; tests += test_DES.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_Salsa20; tests += test_Salsa20.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_ChaCha20; tests += test_ChaCha20.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_ChaCha20_Poly1305; tests += test_ChaCha20_Poly1305.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_pkcs1_15; tests += test_pkcs1_15.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_pkcs1_oaep; tests += test_pkcs1_oaep.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_OCB; tests += test_OCB.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_CBC; tests += test_CBC.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_CFB; tests += test_CFB.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_OpenPGP; tests += test_OpenPGP.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_OFB; tests += test_OFB.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_CTR; tests += test_CTR.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_CCM; tests += test_CCM.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_EAX; tests += test_EAX.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_GCM; tests += test_GCM.get_tests(config=config) + from Cryptodome.SelfTest.Cipher import test_SIV; tests += test_SIV.get_tests(config=config) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/common.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/common.py new file mode 100644 index 0000000..a13d4fb --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/common.py @@ -0,0 +1,510 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/common.py: Common code for Cryptodome.SelfTest.Cipher +# +# Written in 2008 by Dwayne C.
Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-testing for PyCryptodome cipher modules""" + +import unittest +from binascii import a2b_hex, b2a_hex, hexlify + +from Cryptodome.Util.py3compat import b +from Cryptodome.Util.strxor import strxor_c + +class _NoDefault: pass # sentinel object +def _extract(d, k, default=_NoDefault): + """Get an item from a dictionary, and remove it from the dictionary.""" + try: + retval = d[k] + except KeyError: + if default is _NoDefault: + raise + return default + del d[k] + return retval + +# Generic cipher test case +class CipherSelfTest(unittest.TestCase): + + def __init__(self, module, params): + unittest.TestCase.__init__(self) + self.module = module + + # Extract the parameters + params = params.copy() + self.description = _extract(params, 'description') + self.key = b(_extract(params, 'key')) + self.plaintext = b(_extract(params, 'plaintext')) + self.ciphertext = b(_extract(params, 'ciphertext')) + self.module_name = _extract(params, 'module_name', None) + self.assoc_data = _extract(params, 'assoc_data', None) + self.mac = _extract(params, 'mac', None) + if self.assoc_data: + self.mac = b(self.mac) + + mode = _extract(params, 'mode', None) + self.mode_name = str(mode) + + if mode is not None: + # Block cipher + self.mode = getattr(self.module, "MODE_" + mode) + + self.iv = _extract(params, 'iv', None) + if self.iv is None: + self.iv = _extract(params, 'nonce', None) + if self.iv is not None: + self.iv = b(self.iv) + + else: + # Stream cipher + self.mode = None + self.iv = _extract(params, 'iv', None) + if self.iv is not None: + self.iv = b(self.iv) + + self.extra_params = params + + def shortDescription(self): + return self.description + + def _new(self): + params = self.extra_params.copy() + key = a2b_hex(self.key) + + old_style = [] + if self.mode is not None: + old_style = [ self.mode ] + if self.iv is not None: + old_style += [ a2b_hex(self.iv) ] + + return self.module.new(key, *old_style, **params) + + def isMode(self, name): + if not hasattr(self.module, "MODE_"+name): + return False + return self.mode == getattr(self.module, "MODE_"+name) + + def runTest(self): + plaintext = a2b_hex(self.plaintext) + ciphertext = a2b_hex(self.ciphertext) + assoc_data = [] + if self.assoc_data: + assoc_data = [ a2b_hex(b(x)) for x in self.assoc_data] + + ct = None + pt = None + + # + # Repeat the same encryption or decryption twice and verify + # that the result is always the same + # + for i in range(2): + cipher = self._new() + decipher = self._new() + + # Only AEAD modes + for comp in assoc_data: +
cipher.update(comp) + decipher.update(comp) + + ctX = b2a_hex(cipher.encrypt(plaintext)) + ptX = b2a_hex(decipher.decrypt(ciphertext)) + + if ct: + self.assertEqual(ct, ctX) + self.assertEqual(pt, ptX) + ct, pt = ctX, ptX + + self.assertEqual(self.ciphertext, ct) # encrypt + self.assertEqual(self.plaintext, pt) # decrypt + + if self.mac: + mac = b2a_hex(cipher.digest()) + self.assertEqual(self.mac, mac) + decipher.verify(a2b_hex(self.mac)) + +class CipherStreamingSelfTest(CipherSelfTest): + + def shortDescription(self): + desc = self.module_name + if self.mode is not None: + desc += " in %s mode" % (self.mode_name,) + return "%s should behave like a stream cipher" % (desc,) + + def runTest(self): + plaintext = a2b_hex(self.plaintext) + ciphertext = a2b_hex(self.ciphertext) + + # The cipher should work like a stream cipher + + # Test counter mode encryption, 3 bytes at a time + ct3 = [] + cipher = self._new() + for i in range(0, len(plaintext), 3): + ct3.append(cipher.encrypt(plaintext[i:i+3])) + ct3 = b2a_hex(b("").join(ct3)) + self.assertEqual(self.ciphertext, ct3) # encryption (3 bytes at a time) + + # Test counter mode decryption, 3 bytes at a time + pt3 = [] + cipher = self._new() + for i in range(0, len(ciphertext), 3): + pt3.append(cipher.decrypt(ciphertext[i:i+3])) + # PY3K: This is meant to be text, do not change to bytes (data) + pt3 = b2a_hex(b("").join(pt3)) + self.assertEqual(self.plaintext, pt3) # decryption (3 bytes at a time) + + +class RoundtripTest(unittest.TestCase): + def __init__(self, module, params): + from Cryptodome import Random + unittest.TestCase.__init__(self) + self.module = module + self.iv = Random.get_random_bytes(module.block_size) + self.key = b(params['key']) + self.plaintext = 100 * b(params['plaintext']) + self.module_name = params.get('module_name', None) + + def shortDescription(self): + return """%s .decrypt() output of .encrypt() should not be garbled""" % (self.module_name,) + + def runTest(self): + + ## ECB mode + mode = self.module.MODE_ECB + encryption_cipher = self.module.new(a2b_hex(self.key), mode) + ciphertext = encryption_cipher.encrypt(self.plaintext) + decryption_cipher = self.module.new(a2b_hex(self.key), mode) + decrypted_plaintext = decryption_cipher.decrypt(ciphertext) + self.assertEqual(self.plaintext, decrypted_plaintext) + + +class IVLengthTest(unittest.TestCase): + def __init__(self, module, params): + unittest.TestCase.__init__(self) + self.module = module + self.key = b(params['key']) + + def shortDescription(self): + return "Check that all modes except MODE_ECB and MODE_CTR require an IV of the proper length" + + def runTest(self): + self.assertRaises(TypeError, self.module.new, a2b_hex(self.key), + self.module.MODE_ECB, b("")) + + def _dummy_counter(self): + return "\0" * self.module.block_size + + +class NoDefaultECBTest(unittest.TestCase): + def __init__(self, module, params): + unittest.TestCase.__init__(self) + self.module = module + self.key = b(params['key']) + + def runTest(self): + self.assertRaises(TypeError, self.module.new, a2b_hex(self.key)) + + +class BlockSizeTest(unittest.TestCase): + def __init__(self, module, params): + unittest.TestCase.__init__(self) + self.module = module + self.key = a2b_hex(b(params['key'])) + + def runTest(self): + cipher = self.module.new(self.key, self.module.MODE_ECB) + self.assertEqual(cipher.block_size, self.module.block_size) + + +class ByteArrayTest(unittest.TestCase): + """Verify we can use bytearrays for encrypting and decrypting""" + + def __init__(self, module, params): +
unittest.TestCase.__init__(self) + self.module = module + + # Extract the parameters + params = params.copy() + self.description = _extract(params, 'description') + self.key = b(_extract(params, 'key')) + self.plaintext = b(_extract(params, 'plaintext')) + self.ciphertext = b(_extract(params, 'ciphertext')) + self.module_name = _extract(params, 'module_name', None) + self.assoc_data = _extract(params, 'assoc_data', None) + self.mac = _extract(params, 'mac', None) + if self.assoc_data: + self.mac = b(self.mac) + + mode = _extract(params, 'mode', None) + self.mode_name = str(mode) + + if mode is not None: + # Block cipher + self.mode = getattr(self.module, "MODE_" + mode) + + self.iv = _extract(params, 'iv', None) + if self.iv is None: + self.iv = _extract(params, 'nonce', None) + if self.iv is not None: + self.iv = b(self.iv) + else: + # Stream cipher + self.mode = None + self.iv = _extract(params, 'iv', None) + if self.iv is not None: + self.iv = b(self.iv) + + self.extra_params = params + + def _new(self): + params = self.extra_params.copy() + key = a2b_hex(self.key) + + old_style = [] + if self.mode is not None: + old_style = [ self.mode ] + if self.iv is not None: + old_style += [ a2b_hex(self.iv) ] + + return self.module.new(key, *old_style, **params) + + def runTest(self): + + plaintext = a2b_hex(self.plaintext) + ciphertext = a2b_hex(self.ciphertext) + assoc_data = [] + if self.assoc_data: + assoc_data = [ bytearray(a2b_hex(b(x))) for x in self.assoc_data] + + cipher = self._new() + decipher = self._new() + + # Only AEAD modes + for comp in assoc_data: + cipher.update(comp) + decipher.update(comp) + + ct = b2a_hex(cipher.encrypt(bytearray(plaintext))) + pt = b2a_hex(decipher.decrypt(bytearray(ciphertext))) + + self.assertEqual(self.ciphertext, ct) # encrypt + self.assertEqual(self.plaintext, pt) # decrypt + + if self.mac: + mac = b2a_hex(cipher.digest()) + self.assertEqual(self.mac, mac) + decipher.verify(bytearray(a2b_hex(self.mac))) + + +class MemoryviewTest(unittest.TestCase): + """Verify we can use memoryviews for encrypting and decrypting""" + + def __init__(self, module, params): + unittest.TestCase.__init__(self) + self.module = module + + # Extract the parameters + params = params.copy() + self.description = _extract(params, 'description') + self.key = b(_extract(params, 'key')) + self.plaintext = b(_extract(params, 'plaintext')) + self.ciphertext = b(_extract(params, 'ciphertext')) + self.module_name = _extract(params, 'module_name', None) + self.assoc_data = _extract(params, 'assoc_data', None) + self.mac = _extract(params, 'mac', None) + if self.assoc_data: + self.mac = b(self.mac) + + mode = _extract(params, 'mode', None) + self.mode_name = str(mode) + + if mode is not None: + # Block cipher + self.mode = getattr(self.module, "MODE_" + mode) + + self.iv = _extract(params, 'iv', None) + if self.iv is None: + self.iv = _extract(params, 'nonce', None) + if self.iv is not None: + self.iv = b(self.iv) + else: + # Stream cipher + self.mode = None + self.iv = _extract(params, 'iv', None) + if self.iv is not None: + self.iv = b(self.iv) + + self.extra_params = params + + def _new(self): + params = self.extra_params.copy() + key = a2b_hex(self.key) + + old_style = [] + if self.mode is not None: + old_style = [ self.mode ] + if self.iv is not None: + old_style += [ a2b_hex(self.iv) ] + + return self.module.new(key, *old_style, **params) + + def runTest(self): + + plaintext = a2b_hex(self.plaintext) + ciphertext = a2b_hex(self.ciphertext) + assoc_data = [] + if self.assoc_data: + 
assoc_data = [ memoryview(a2b_hex(b(x))) for x in self.assoc_data] + + cipher = self._new() + decipher = self._new() + + # Only AEAD modes + for comp in assoc_data: + cipher.update(comp) + decipher.update(comp) + + ct = b2a_hex(cipher.encrypt(memoryview(plaintext))) + pt = b2a_hex(decipher.decrypt(memoryview(ciphertext))) + + self.assertEqual(self.ciphertext, ct) # encrypt + self.assertEqual(self.plaintext, pt) # decrypt + + if self.mac: + mac = b2a_hex(cipher.digest()) + self.assertEqual(self.mac, mac) + decipher.verify(memoryview(a2b_hex(self.mac))) + + +def make_block_tests(module, module_name, test_data, additional_params=dict()): + tests = [] + extra_tests_added = False + for i in range(len(test_data)): + row = test_data[i] + + # Build the "params" dictionary with + # - plaintext + # - ciphertext + # - key + # - mode (default is ECB) + # - (optionally) description + # - (optionally) any other parameter that this cipher mode requires + params = {} + if len(row) == 3: + (params['plaintext'], params['ciphertext'], params['key']) = row + elif len(row) == 4: + (params['plaintext'], params['ciphertext'], params['key'], params['description']) = row + elif len(row) == 5: + (params['plaintext'], params['ciphertext'], params['key'], params['description'], extra_params) = row + params.update(extra_params) + else: + raise AssertionError("Unsupported tuple size %d" % (len(row),)) + + if not "mode" in params: + params["mode"] = "ECB" + + # Build the display-name for the test + p2 = params.copy() + p_key = _extract(p2, 'key') + p_plaintext = _extract(p2, 'plaintext') + p_ciphertext = _extract(p2, 'ciphertext') + p_mode = _extract(p2, 'mode') + p_description = _extract(p2, 'description', None) + + if p_description is not None: + description = p_description + elif p_mode == 'ECB' and not p2: + description = "p=%s, k=%s" % (p_plaintext, p_key) + else: + description = "p=%s, k=%s, %r" % (p_plaintext, p_key, p2) + name = "%s #%d: %s" % (module_name, i+1, description) + params['description'] = name + params['module_name'] = module_name + params.update(additional_params) + + # Add extra test(s) to the test suite before the current test + if not extra_tests_added: + tests += [ + RoundtripTest(module, params), + IVLengthTest(module, params), + NoDefaultECBTest(module, params), + ByteArrayTest(module, params), + BlockSizeTest(module, params), + ] + extra_tests_added = True + + # Add the current test to the test suite + tests.append(CipherSelfTest(module, params)) + + return tests + +def make_stream_tests(module, module_name, test_data): + tests = [] + extra_tests_added = False + for i in range(len(test_data)): + row = test_data[i] + + # Build the "params" dictionary + params = {} + if len(row) == 3: + (params['plaintext'], params['ciphertext'], params['key']) = row + elif len(row) == 4: + (params['plaintext'], params['ciphertext'], params['key'], params['description']) = row + elif len(row) == 5: + (params['plaintext'], params['ciphertext'], params['key'], params['description'], extra_params) = row + params.update(extra_params) + else: + raise AssertionError("Unsupported tuple size %d" % (len(row),)) + + # Build the display-name for the test + p2 = params.copy() + p_key = _extract(p2, 'key') + p_plaintext = _extract(p2, 'plaintext') + p_ciphertext = _extract(p2, 'ciphertext') + p_description = _extract(p2, 'description', None) + + if p_description is not None: + description = p_description + elif not p2: + description = "p=%s, k=%s" % (p_plaintext, p_key) + else: + description = "p=%s, k=%s, %r" % 
(p_plaintext, p_key, p2) + name = "%s #%d: %s" % (module_name, i+1, description) + params['description'] = name + params['module_name'] = module_name + + # Add extra test(s) to the test suite before the current test + if not extra_tests_added: + tests += [ + ByteArrayTest(module, params), + ] + + tests.append(MemoryviewTest(module, params)) + extra_tests_added = True + + # Add the test to the test suite + tests.append(CipherSelfTest(module, params)) + tests.append(CipherStreamingSelfTest(module, params)) + return tests + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_AES.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_AES.py new file mode 100644 index 0000000..bd6c40e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_AES.py @@ -0,0 +1,1351 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/AES.py: Self-test for the AES cipher +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Cipher.AES""" + +from __future__ import print_function + +import unittest +from Cryptodome.Hash import SHA256 +from Cryptodome.Cipher import AES +from Cryptodome.Util.py3compat import * +from binascii import hexlify + +# This is a list of (plaintext, ciphertext, key[, description[, params]]) tuples. 
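The comment above describes the row layout consumed by make_block_tests() in SelfTest/Cipher/common.py (added earlier in this diff): hex-encoded plaintext, ciphertext and key, then an optional description and an optional dict of extra cipher parameters, with the mode defaulting to ECB. A minimal sketch using the first FIPS 197 vector from the list below:

from Cryptodome.Cipher import AES
from Cryptodome.SelfTest.Cipher.common import make_block_tests

rows = [
    ('00112233445566778899aabbccddeeff',  # plaintext (hex)
     '69c4e0d86a7b0430d8cdb78070b4c55a',  # ciphertext (hex)
     '000102030405060708090a0b0c0d0e0f',  # key (hex)
     'FIPS 197 C.1 (AES-128)'),           # description
]
tests = make_block_tests(AES, "AES", rows)  # mode defaults to "ECB"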
+test_data = [ + # FIPS PUB 197 test vectors + # http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf + + ('00112233445566778899aabbccddeeff', '69c4e0d86a7b0430d8cdb78070b4c55a', + '000102030405060708090a0b0c0d0e0f', 'FIPS 197 C.1 (AES-128)'), + + ('00112233445566778899aabbccddeeff', 'dda97ca4864cdfe06eaf70a0ec0d7191', + '000102030405060708090a0b0c0d0e0f1011121314151617', + 'FIPS 197 C.2 (AES-192)'), + + ('00112233445566778899aabbccddeeff', '8ea2b7ca516745bfeafc49904b496089', + '000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f', + 'FIPS 197 C.3 (AES-256)'), + + # Rijndael128 test vectors + # Downloaded 2008-09-13 from + # http://www.iaik.tugraz.at/Research/krypto/AES/old/~rijmen/rijndael/testvalues.tar.gz + + # ecb_tbl.txt, KEYSIZE=128 + ('506812a45f08c889b97f5980038b8359', 'd8f532538289ef7d06b506a4fd5be9c9', + '00010203050607080a0b0c0d0f101112', + 'ecb-tbl-128: I=1'), + ('5c6d71ca30de8b8b00549984d2ec7d4b', '59ab30f4d4ee6e4ff9907ef65b1fb68c', + '14151617191a1b1c1e1f202123242526', + 'ecb-tbl-128: I=2'), + ('53f3f4c64f8616e4e7c56199f48f21f6', 'bf1ed2fcb2af3fd41443b56d85025cb1', + '28292a2b2d2e2f30323334353738393a', + 'ecb-tbl-128: I=3'), + ('a1eb65a3487165fb0f1c27ff9959f703', '7316632d5c32233edcb0780560eae8b2', + '3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-128: I=4'), + ('3553ecf0b1739558b08e350a98a39bfa', '408c073e3e2538072b72625e68b8364b', + '50515253555657585a5b5c5d5f606162', + 'ecb-tbl-128: I=5'), + ('67429969490b9711ae2b01dc497afde8', 'e1f94dfa776597beaca262f2f6366fea', + '64656667696a6b6c6e6f707173747576', + 'ecb-tbl-128: I=6'), + ('93385c1f2aec8bed192f5a8e161dd508', 'f29e986c6a1c27d7b29ffd7ee92b75f1', + '78797a7b7d7e7f80828384858788898a', + 'ecb-tbl-128: I=7'), + ('b5bf946be19beb8db3983b5f4c6e8ddb', '131c886a57f8c2e713aba6955e2b55b5', + '8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-128: I=8'), + ('41321ee10e21bd907227c4450ff42324', 'd2ab7662df9b8c740210e5eeb61c199d', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2', + 'ecb-tbl-128: I=9'), + ('00a82f59c91c8486d12c0a80124f6089', '14c10554b2859c484cab5869bbe7c470', + 'b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-128: I=10'), + ('7ce0fd076754691b4bbd9faf8a1372fe', 'db4d498f0a49cf55445d502c1f9ab3b5', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9da', + 'ecb-tbl-128: I=11'), + ('23605a8243d07764541bc5ad355b3129', '6d96fef7d66590a77a77bb2056667f7f', + 'dcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-128: I=12'), + ('12a8cfa23ea764fd876232b4e842bc44', '316fb68edba736c53e78477bf913725c', + 'f0f1f2f3f5f6f7f8fafbfcfdfe010002', + 'ecb-tbl-128: I=13'), + ('bcaf32415e8308b3723e5fdd853ccc80', '6936f2b93af8397fd3a771fc011c8c37', + '04050607090a0b0c0e0f101113141516', + 'ecb-tbl-128: I=14'), + ('89afae685d801ad747ace91fc49adde0', 'f3f92f7a9c59179c1fcc2c2ba0b082cd', + '2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-128: I=15'), + ('f521d07b484357c4a69e76124a634216', '6a95ea659ee3889158e7a9152ff04ebc', + '40414243454647484a4b4c4d4f505152', + 'ecb-tbl-128: I=16'), + ('3e23b3bc065bcc152407e23896d77783', '1959338344e945670678a5d432c90b93', + '54555657595a5b5c5e5f606163646566', + 'ecb-tbl-128: I=17'), + ('79f0fba002be1744670e7e99290d8f52', 'e49bddd2369b83ee66e6c75a1161b394', + '68696a6b6d6e6f70727374757778797a', + 'ecb-tbl-128: I=18'), + ('da23fe9d5bd63e1d72e3dafbe21a6c2a', 'd3388f19057ff704b70784164a74867d', + '7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-128: I=19'), + ('e3f5698ba90b6a022efd7db2c7e6c823', '23aa03e2d5e4cd24f3217e596480d1e1', + 'a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-128: I=20'), + ('bdc2691d4f1b73d2700679c3bcbf9c6e', 
'c84113d68b666ab2a50a8bdb222e91b9', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2', + 'ecb-tbl-128: I=21'), + ('ba74e02093217ee1ba1b42bd5624349a', 'ac02403981cd4340b507963db65cb7b6', + '08090a0b0d0e0f10121314151718191a', + 'ecb-tbl-128: I=22'), + ('b5c593b5851c57fbf8b3f57715e8f680', '8d1299236223359474011f6bf5088414', + '6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-128: I=23'), + ('3da9bd9cec072381788f9387c3bbf4ee', '5a1d6ab8605505f7977e55b9a54d9b90', + '80818283858687888a8b8c8d8f909192', + 'ecb-tbl-128: I=24'), + ('4197f3051121702ab65d316b3c637374', '72e9c2d519cf555e4208805aabe3b258', + '94959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-128: I=25'), + ('9f46c62ec4f6ee3f6e8c62554bc48ab7', 'a8f3e81c4a23a39ef4d745dffe026e80', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9ba', + 'ecb-tbl-128: I=26'), + ('0220673fe9e699a4ebc8e0dbeb6979c8', '546f646449d31458f9eb4ef5483aee6c', + 'bcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-128: I=27'), + ('b2b99171337ded9bc8c2c23ff6f18867', '4dbe4bc84ac797c0ee4efb7f1a07401c', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2', + 'ecb-tbl-128: I=28'), + ('a7facf4e301e984e5efeefd645b23505', '25e10bfb411bbd4d625ac8795c8ca3b3', + 'e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-128: I=29'), + ('f7c762e4a9819160fd7acfb6c4eedcdd', '315637405054ec803614e43def177579', + 'f8f9fafbfdfefe00020304050708090a', + 'ecb-tbl-128: I=30'), + ('9b64fc21ea08709f4915436faa70f1be', '60c5bc8a1410247295c6386c59e572a8', + '0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-128: I=31'), + ('52af2c3de07ee6777f55a4abfc100b3f', '01366fc8ca52dfe055d6a00a76471ba6', + '20212223252627282a2b2c2d2f303132', + 'ecb-tbl-128: I=32'), + ('2fca001224386c57aa3f968cbe2c816f', 'ecc46595516ec612449c3f581e7d42ff', + '34353637393a3b3c3e3f404143444546', + 'ecb-tbl-128: I=33'), + ('4149c73658a4a9c564342755ee2c132f', '6b7ffe4c602a154b06ee9c7dab5331c9', + '48494a4b4d4e4f50525354555758595a', + 'ecb-tbl-128: I=34'), + ('af60005a00a1772f7c07a48a923c23d2', '7da234c14039a240dd02dd0fbf84eb67', + '5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-128: I=35'), + ('6fccbc28363759914b6f0280afaf20c6', 'c7dc217d9e3604ffe7e91f080ecd5a3a', + '70717273757677787a7b7c7d7f808182', + 'ecb-tbl-128: I=36'), + ('7d82a43ddf4fefa2fc5947499884d386', '37785901863f5c81260ea41e7580cda5', + '84858687898a8b8c8e8f909193949596', + 'ecb-tbl-128: I=37'), + ('5d5a990eaab9093afe4ce254dfa49ef9', 'a07b9338e92ed105e6ad720fccce9fe4', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aa', + 'ecb-tbl-128: I=38'), + ('4cd1e2fd3f4434b553aae453f0ed1a02', 'ae0fb9722418cc21a7da816bbc61322c', + 'acadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-128: I=39'), + ('5a2c9a9641d4299125fa1b9363104b5e', 'c826a193080ff91ffb21f71d3373c877', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2', + 'ecb-tbl-128: I=40'), + ('b517fe34c0fa217d341740bfd4fe8dd4', '1181b11b0e494e8d8b0aa6b1d5ac2c48', + 'd4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-128: I=41'), + ('014baf2278a69d331d5180103643e99a', '6743c3d1519ab4f2cd9a78ab09a511bd', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fa', + 'ecb-tbl-128: I=42'), + ('b529bd8164f20d0aa443d4932116841c', 'dc55c076d52bacdf2eefd952946a439d', + 'fcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-128: I=43'), + ('2e596dcbb2f33d4216a1176d5bd1e456', '711b17b590ffc72b5c8e342b601e8003', + '10111213151617181a1b1c1d1f202122', + 'ecb-tbl-128: I=44'), + ('7274a1ea2b7ee2424e9a0e4673689143', '19983bb0950783a537e1339f4aa21c75', + '24252627292a2b2c2e2f303133343536', + 'ecb-tbl-128: I=45'), + ('ae20020bd4f13e9d90140bee3b5d26af', '3ba7762e15554169c0f4fa39164c410c', + '38393a3b3d3e3f40424344454748494a', + 'ecb-tbl-128: I=46'), + 
('baac065da7ac26e855e79c8849d75a02', 'a0564c41245afca7af8aa2e0e588ea89', + '4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-128: I=47'), + ('7c917d8d1d45fab9e2540e28832540cc', '5e36a42a2e099f54ae85ecd92e2381ed', + '60616263656667686a6b6c6d6f707172', + 'ecb-tbl-128: I=48'), + ('bde6f89e16daadb0e847a2a614566a91', '770036f878cd0f6ca2268172f106f2fe', + '74757677797a7b7c7e7f808183848586', + 'ecb-tbl-128: I=49'), + ('c9de163725f1f5be44ebb1db51d07fbc', '7e4e03908b716116443ccf7c94e7c259', + '88898a8b8d8e8f90929394959798999a', + 'ecb-tbl-128: I=50'), + ('3af57a58f0c07dffa669572b521e2b92', '482735a48c30613a242dd494c7f9185d', + '9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-128: I=51'), + ('3d5ebac306dde4604f1b4fbbbfcdae55', 'b4c0f6c9d4d7079addf9369fc081061d', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2', + 'ecb-tbl-128: I=52'), + ('c2dfa91bceb76a1183c995020ac0b556', 'd5810fe0509ac53edcd74f89962e6270', + 'c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-128: I=53'), + ('c70f54305885e9a0746d01ec56c8596b', '03f17a16b3f91848269ecdd38ebb2165', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9ea', + 'ecb-tbl-128: I=54'), + ('c4f81b610e98012ce000182050c0c2b2', 'da1248c3180348bad4a93b4d9856c9df', + 'ecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-128: I=55'), + ('eaab86b1d02a95d7404eff67489f97d4', '3d10d7b63f3452c06cdf6cce18be0c2c', + '00010203050607080a0b0c0d0f101112', + 'ecb-tbl-128: I=56'), + ('7c55bdb40b88870b52bec3738de82886', '4ab823e7477dfddc0e6789018fcb6258', + '14151617191a1b1c1e1f202123242526', + 'ecb-tbl-128: I=57'), + ('ba6eaa88371ff0a3bd875e3f2a975ce0', 'e6478ba56a77e70cfdaa5c843abde30e', + '28292a2b2d2e2f30323334353738393a', + 'ecb-tbl-128: I=58'), + ('08059130c4c24bd30cf0575e4e0373dc', '1673064895fbeaf7f09c5429ff75772d', + '3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-128: I=59'), + ('9a8eab004ef53093dfcf96f57e7eda82', '4488033ae9f2efd0ca9383bfca1a94e9', + '50515253555657585a5b5c5d5f606162', + 'ecb-tbl-128: I=60'), + ('0745b589e2400c25f117b1d796c28129', '978f3b8c8f9d6f46626cac3c0bcb9217', + '64656667696a6b6c6e6f707173747576', + 'ecb-tbl-128: I=61'), + ('2f1777781216cec3f044f134b1b92bbe', 'e08c8a7e582e15e5527f1d9e2eecb236', + '78797a7b7d7e7f80828384858788898a', + 'ecb-tbl-128: I=62'), + ('353a779ffc541b3a3805d90ce17580fc', 'cec155b76ac5ffda4cf4f9ca91e49a7a', + '8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-128: I=63'), + ('1a1eae4415cefcf08c4ac1c8f68bea8f', 'd5ac7165763225dd2a38cdc6862c29ad', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2', + 'ecb-tbl-128: I=64'), + ('e6e7e4e5b0b3b2b5d4d5aaab16111013', '03680fe19f7ce7275452020be70e8204', + 'b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-128: I=65'), + ('f8f9fafbfbf8f9e677767170efe0e1e2', '461df740c9781c388e94bb861ceb54f6', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9da', + 'ecb-tbl-128: I=66'), + ('63626160a1a2a3a445444b4a75727370', '451bd60367f96483042742219786a074', + 'dcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-128: I=67'), + ('717073720605040b2d2c2b2a05fafbf9', 'e4dfa42671a02e57ef173b85c0ea9f2b', + 'f0f1f2f3f5f6f7f8fafbfcfdfe010002', + 'ecb-tbl-128: I=68'), + ('78797a7beae9e8ef3736292891969794', 'ed11b89e76274282227d854700a78b9e', + '04050607090a0b0c0e0f101113141516', + 'ecb-tbl-128: I=69'), + ('838281803231300fdddcdbdaa0afaead', '433946eaa51ea47af33895f2b90b3b75', + '18191a1b1d1e1f20222324252728292a', + 'ecb-tbl-128: I=70'), + ('18191a1bbfbcbdba75747b7a7f78797a', '6bc6d616a5d7d0284a5910ab35022528', + '2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-128: I=71'), + ('848586879b989996a3a2a5a4849b9a99', 'd2a920ecfe919d354b5f49eae9719c98', + '40414243454647484a4b4c4d4f505152', + 'ecb-tbl-128: 
I=72'), + ('0001020322212027cacbf4f551565754', '3a061b17f6a92885efbd0676985b373d', + '54555657595a5b5c5e5f606163646566', + 'ecb-tbl-128: I=73'), + ('cecfcccdafacadb2515057564a454447', 'fadeec16e33ea2f4688499d157e20d8f', + '68696a6b6d6e6f70727374757778797a', + 'ecb-tbl-128: I=74'), + ('92939091cdcecfc813121d1c80878685', '5cdefede59601aa3c3cda36fa6b1fa13', + '7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-128: I=75'), + ('d2d3d0d16f6c6d6259585f5ed1eeefec', '9574b00039844d92ebba7ee8719265f8', + '90919293959697989a9b9c9d9fa0a1a2', + 'ecb-tbl-128: I=76'), + ('acadaeaf878485820f0e1110d5d2d3d0', '9a9cf33758671787e5006928188643fa', + 'a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-128: I=77'), + ('9091929364676619e6e7e0e1757a7b78', '2cddd634c846ba66bb46cbfea4a674f9', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9ca', + 'ecb-tbl-128: I=78'), + ('babbb8b98a89888f74757a7b92959497', 'd28bae029393c3e7e26e9fafbbb4b98f', + 'cccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-128: I=79'), + ('8d8c8f8e6e6d6c633b3a3d3ccad5d4d7', 'ec27529b1bee0a9ab6a0d73ebc82e9b7', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2', + 'ecb-tbl-128: I=80'), + ('86878485010203040808f7f767606162', '3cb25c09472aff6ee7e2b47ccd7ccb17', + 'f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-128: I=81'), + ('8e8f8c8d656667788a8b8c8d010e0f0c', 'dee33103a7283370d725e44ca38f8fe5', + '08090a0b0d0e0f10121314151718191a', + 'ecb-tbl-128: I=82'), + ('c8c9cacb858687807a7b7475e7e0e1e2', '27f9bcd1aac64bffc11e7815702c1a69', + '1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-128: I=83'), + ('6d6c6f6e5053525d8c8d8a8badd2d3d0', '5df534ffad4ed0749a9988e9849d0021', + '30313233353637383a3b3c3d3f404142', + 'ecb-tbl-128: I=84'), + ('28292a2b393a3b3c0607181903040506', 'a48bee75db04fb60ca2b80f752a8421b', + '44454647494a4b4c4e4f505153545556', + 'ecb-tbl-128: I=85'), + ('a5a4a7a6b0b3b28ddbdadddcbdb2b3b0', '024c8cf70bc86ee5ce03678cb7af45f9', + '58595a5b5d5e5f60626364656768696a', + 'ecb-tbl-128: I=86'), + ('323330316467666130313e3f2c2b2a29', '3c19ac0f8a3a3862ce577831301e166b', + '6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-128: I=87'), + ('27262524080b0a05171611100b141516', 'c5e355b796a57421d59ca6be82e73bca', + '80818283858687888a8b8c8d8f909192', + 'ecb-tbl-128: I=88'), + ('040506074142434435340b0aa3a4a5a6', 'd94033276417abfb05a69d15b6e386e2', + '94959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-128: I=89'), + ('242526271112130c61606766bdb2b3b0', '24b36559ea3a9b9b958fe6da3e5b8d85', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9ba', + 'ecb-tbl-128: I=90'), + ('4b4a4948252627209e9f9091cec9c8cb', '20fd4feaa0e8bf0cce7861d74ef4cb72', + 'bcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-128: I=91'), + ('68696a6b6665646b9f9e9998d9e6e7e4', '350e20d5174277b9ec314c501570a11d', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2', + 'ecb-tbl-128: I=92'), + ('34353637c5c6c7c0f0f1eeef7c7b7a79', '87a29d61b7c604d238fe73045a7efd57', + 'e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-128: I=93'), + ('32333031c2c1c13f0d0c0b0a050a0b08', '2c3164c1cc7d0064816bdc0faa362c52', + 'f8f9fafbfdfefe00020304050708090a', + 'ecb-tbl-128: I=94'), + ('cdcccfcebebdbcbbabaaa5a4181f1e1d', '195fe5e8a05a2ed594f6e4400eee10b3', + '0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-128: I=95'), + ('212023223635343ba0a1a6a7445b5a59', 'e4663df19b9a21a5a284c2bd7f905025', + '20212223252627282a2b2c2d2f303132', + 'ecb-tbl-128: I=96'), + ('0e0f0c0da8abaaad2f2e515002050407', '21b88714cfb4e2a933bd281a2c4743fd', + '34353637393a3b3c3e3f404143444546', + 'ecb-tbl-128: I=97'), + ('070605042a2928378e8f8889bdb2b3b0', 'cbfc3980d704fd0fc54378ab84e17870', + '48494a4b4d4e4f50525354555758595a', + 
'ecb-tbl-128: I=98'), + ('cbcac9c893909196a9a8a7a6a5a2a3a0', 'bc5144baa48bdeb8b63e22e03da418ef', + '5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-128: I=99'), + ('80818283c1c2c3cc9c9d9a9b0cf3f2f1', '5a1dbaef1ee2984b8395da3bdffa3ccc', + '70717273757677787a7b7c7d7f808182', + 'ecb-tbl-128: I=100'), + ('1213101125262720fafbe4e5b1b6b7b4', 'f0b11cd0729dfcc80cec903d97159574', + '84858687898a8b8c8e8f909193949596', + 'ecb-tbl-128: I=101'), + ('7f7e7d7c3033320d97969190222d2c2f', '9f95314acfddc6d1914b7f19a9cc8209', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aa', + 'ecb-tbl-128: I=102'), + ('4e4f4c4d484b4a4d81808f8e53545556', '595736f6f0f70914a94e9e007f022519', + 'acadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-128: I=103'), + ('dcdddedfb0b3b2bd15141312a1bebfbc', '1f19f57892cae586fcdfb4c694deb183', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2', + 'ecb-tbl-128: I=104'), + ('93929190282b2a2dc4c5fafb92959497', '540700ee1f6f3dab0b3eddf6caee1ef5', + 'd4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-128: I=105'), + ('f5f4f7f6c4c7c6d9373631307e717073', '14a342a91019a331687a2254e6626ca2', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fa', + 'ecb-tbl-128: I=106'), + ('93929190b6b5b4b364656a6b05020300', '7b25f3c3b2eea18d743ef283140f29ff', + 'fcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-128: I=107'), + ('babbb8b90d0e0f00a4a5a2a3043b3a39', '46c2587d66e5e6fa7f7ca6411ad28047', + '10111213151617181a1b1c1d1f202122', + 'ecb-tbl-128: I=108'), + ('d8d9dadb7f7c7d7a10110e0f787f7e7d', '09470e72229d954ed5ee73886dfeeba9', + '24252627292a2b2c2e2f303133343536', + 'ecb-tbl-128: I=109'), + ('fefffcfdefeced923b3a3d3c6768696a', 'd77c03de92d4d0d79ef8d4824ef365eb', + '38393a3b3d3e3f40424344454748494a', + 'ecb-tbl-128: I=110'), + ('d6d7d4d58a89888f96979899a5a2a3a0', '1d190219f290e0f1715d152d41a23593', + '4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-128: I=111'), + ('18191a1ba8abaaa5303136379b848586', 'a2cd332ce3a0818769616292e87f757b', + '60616263656667686a6b6c6d6f707172', + 'ecb-tbl-128: I=112'), + ('6b6a6968a4a7a6a1d6d72829b0b7b6b5', 'd54afa6ce60fbf9341a3690e21385102', + '74757677797a7b7c7e7f808183848586', + 'ecb-tbl-128: I=113'), + ('000102038a89889755545352a6a9a8ab', '06e5c364ded628a3f5e05e613e356f46', + '88898a8b8d8e8f90929394959798999a', + 'ecb-tbl-128: I=114'), + ('2d2c2f2eb3b0b1b6b6b7b8b9f2f5f4f7', 'eae63c0e62556dac85d221099896355a', + '9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-128: I=115'), + ('979695943536373856575051e09f9e9d', '1fed060e2c6fc93ee764403a889985a2', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2', + 'ecb-tbl-128: I=116'), + ('a4a5a6a7989b9a9db1b0afae7a7d7c7f', 'c25235c1a30fdec1c7cb5c5737b2a588', + 'c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-128: I=117'), + ('c1c0c3c2686b6a55a8a9aeafeae5e4e7', '796dbef95147d4d30873ad8b7b92efc0', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9ea', + 'ecb-tbl-128: I=118'), + ('c1c0c3c2141716118c8d828364636261', 'cbcf0fb34d98d0bd5c22ce37211a46bf', + 'ecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-128: I=119'), + ('93929190cccfcec196979091e0fffefd', '94b44da6466126cafa7c7fd09063fc24', + '00010203050607080a0b0c0d0f101112', + 'ecb-tbl-128: I=120'), + ('b4b5b6b7f9fafbfc25241b1a6e69686b', 'd78c5b5ebf9b4dbda6ae506c5074c8fe', + '14151617191a1b1c1e1f202123242526', + 'ecb-tbl-128: I=121'), + ('868784850704051ac7c6c1c08788898a', '6c27444c27204b043812cf8cf95f9769', + '28292a2b2d2e2f30323334353738393a', + 'ecb-tbl-128: I=122'), + ('f4f5f6f7aaa9a8affdfcf3f277707172', 'be94524ee5a2aa50bba8b75f4c0aebcf', + '3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-128: I=123'), + ('d3d2d1d00605040bc3c2c5c43e010003', 'a0aeaae91ba9f31f51aeb3588cf3a39e', + 
'50515253555657585a5b5c5d5f606162', + 'ecb-tbl-128: I=124'), + ('73727170424140476a6b74750d0a0b08', '275297779c28266ef9fe4c6a13c08488', + '64656667696a6b6c6e6f707173747576', + 'ecb-tbl-128: I=125'), + ('c2c3c0c10a0908f754555253a1aeafac', '86523d92bb8672cb01cf4a77fd725882', + '78797a7b7d7e7f80828384858788898a', + 'ecb-tbl-128: I=126'), + ('6d6c6f6ef8fbfafd82838c8df8fffefd', '4b8327640e9f33322a04dd96fcbf9a36', + '8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-128: I=127'), + ('f5f4f7f684878689a6a7a0a1d2cdcccf', 'ce52af650d088ca559425223f4d32694', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2', + 'ecb-tbl-128: I=128'), + + # ecb_tbl.txt, KEYSIZE=192 + ('2d33eef2c0430a8a9ebf45e809c40bb6', 'dff4945e0336df4c1c56bc700eff837f', + '00010203050607080a0b0c0d0f10111214151617191a1b1c', + 'ecb-tbl-192: I=1'), + ('6aa375d1fa155a61fb72353e0a5a8756', 'b6fddef4752765e347d5d2dc196d1252', + '1e1f20212324252628292a2b2d2e2f30323334353738393a', + 'ecb-tbl-192: I=2'), + ('bc3736518b9490dcb8ed60eb26758ed4', 'd23684e3d963b3afcf1a114aca90cbd6', + '3c3d3e3f41424344464748494b4c4d4e5051525355565758', + 'ecb-tbl-192: I=3'), + ('aa214402b46cffb9f761ec11263a311e', '3a7ac027753e2a18c2ceab9e17c11fd0', + '5a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-192: I=4'), + ('02aea86e572eeab66b2c3af5e9a46fd6', '8f6786bd007528ba26603c1601cdd0d8', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394', + 'ecb-tbl-192: I=5'), + ('e2aef6acc33b965c4fa1f91c75ff6f36', 'd17d073b01e71502e28b47ab551168b3', + '969798999b9c9d9ea0a1a2a3a5a6a7a8aaabacadafb0b1b2', + 'ecb-tbl-192: I=6'), + ('0659df46427162b9434865dd9499f91d', 'a469da517119fab95876f41d06d40ffa', + 'b4b5b6b7b9babbbcbebfc0c1c3c4c5c6c8c9cacbcdcecfd0', + 'ecb-tbl-192: I=7'), + ('49a44239c748feb456f59c276a5658df', '6091aa3b695c11f5c0b6ad26d3d862ff', + 'd2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-192: I=8'), + ('66208f6e9d04525bdedb2733b6a6be37', '70f9e67f9f8df1294131662dc6e69364', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c', + 'ecb-tbl-192: I=9'), + ('3393f8dfc729c97f5480b950bc9666b0', 'd154dcafad8b207fa5cbc95e9996b559', + '0e0f10111314151618191a1b1d1e1f20222324252728292a', + 'ecb-tbl-192: I=10'), + ('606834c8ce063f3234cf1145325dbd71', '4934d541e8b46fa339c805a7aeb9e5da', + '2c2d2e2f31323334363738393b3c3d3e4041424345464748', + 'ecb-tbl-192: I=11'), + ('fec1c04f529bbd17d8cecfcc4718b17f', '62564c738f3efe186e1a127a0c4d3c61', + '4a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-192: I=12'), + ('32df99b431ed5dc5acf8caf6dc6ce475', '07805aa043986eb23693e23bef8f3438', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384', + 'ecb-tbl-192: I=13'), + ('7fdc2b746f3f665296943b83710d1f82', 'df0b4931038bade848dee3b4b85aa44b', + '868788898b8c8d8e90919293959697989a9b9c9d9fa0a1a2', + 'ecb-tbl-192: I=14'), + ('8fba1510a3c5b87e2eaa3f7a91455ca2', '592d5fded76582e4143c65099309477c', + 'a4a5a6a7a9aaabacaeafb0b1b3b4b5b6b8b9babbbdbebfc0', + 'ecb-tbl-192: I=15'), + ('2c9b468b1c2eed92578d41b0716b223b', 'c9b8d6545580d3dfbcdd09b954ed4e92', + 'c2c3c4c5c7c8c9cacccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-192: I=16'), + ('0a2bbf0efc6bc0034f8a03433fca1b1a', '5dccd5d6eb7c1b42acb008201df707a0', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2f4f5f6f7f9fafbfc', + 'ecb-tbl-192: I=17'), + ('25260e1f31f4104d387222e70632504b', 'a2a91682ffeb6ed1d34340946829e6f9', + 'fefe01010304050608090a0b0d0e0f10121314151718191a', + 'ecb-tbl-192: I=18'), + ('c527d25a49f08a5228d338642ae65137', 'e45d185b797000348d9267960a68435d', + '1c1d1e1f21222324262728292b2c2d2e3031323335363738', + 'ecb-tbl-192: I=19'), + 
('3b49fc081432f5890d0e3d87e884a69e', '45e060dae5901cda8089e10d4f4c246b', + '3a3b3c3d3f40414244454647494a4b4c4e4f505153545556', + 'ecb-tbl-192: I=20'), + ('d173f9ed1e57597e166931df2754a083', 'f6951afacc0079a369c71fdcff45df50', + '58595a5b5d5e5f60626364656768696a6c6d6e6f71727374', + 'ecb-tbl-192: I=21'), + ('8c2b7cafa5afe7f13562daeae1adede0', '9e95e00f351d5b3ac3d0e22e626ddad6', + '767778797b7c7d7e80818283858687888a8b8c8d8f909192', + 'ecb-tbl-192: I=22'), + ('aaf4ec8c1a815aeb826cab741339532c', '9cb566ff26d92dad083b51fdc18c173c', + '94959697999a9b9c9e9fa0a1a3a4a5a6a8a9aaabadaeafb0', + 'ecb-tbl-192: I=23'), + ('40be8c5d9108e663f38f1a2395279ecf', 'c9c82766176a9b228eb9a974a010b4fb', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2e4e5e6e7e9eaebec', + 'ecb-tbl-192: I=24'), + ('0c8ad9bc32d43e04716753aa4cfbe351', 'd8e26aa02945881d5137f1c1e1386e88', + '2a2b2c2d2f30313234353637393a3b3c3e3f404143444546', + 'ecb-tbl-192: I=25'), + ('1407b1d5f87d63357c8dc7ebbaebbfee', 'c0e024ccd68ff5ffa4d139c355a77c55', + '48494a4b4d4e4f50525354555758595a5c5d5e5f61626364', + 'ecb-tbl-192: I=26'), + ('e62734d1ae3378c4549e939e6f123416', '0b18b3d16f491619da338640df391d43', + '84858687898a8b8c8e8f90919394959698999a9b9d9e9fa0', + 'ecb-tbl-192: I=27'), + ('5a752cff2a176db1a1de77f2d2cdee41', 'dbe09ac8f66027bf20cb6e434f252efc', + 'a2a3a4a5a7a8a9aaacadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-192: I=28'), + ('a9c8c3a4eabedc80c64730ddd018cd88', '6d04e5e43c5b9cbe05feb9606b6480fe', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2d4d5d6d7d9dadbdc', + 'ecb-tbl-192: I=29'), + ('ee9b3dbbdb86180072130834d305999a', 'dd1d6553b96be526d9fee0fbd7176866', + '1a1b1c1d1f20212224252627292a2b2c2e2f303133343536', + 'ecb-tbl-192: I=30'), + ('a7fa8c3586b8ebde7568ead6f634a879', '0260ca7e3f979fd015b0dd4690e16d2a', + '38393a3b3d3e3f40424344454748494a4c4d4e4f51525354', + 'ecb-tbl-192: I=31'), + ('37e0f4a87f127d45ac936fe7ad88c10a', '9893734de10edcc8a67c3b110b8b8cc6', + '929394959798999a9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-192: I=32'), + ('3f77d8b5d92bac148e4e46f697a535c5', '93b30b750516b2d18808d710c2ee84ef', + '464748494b4c4d4e50515253555657585a5b5c5d5f606162', + 'ecb-tbl-192: I=33'), + ('d25ebb686c40f7e2c4da1014936571ca', '16f65fa47be3cb5e6dfe7c6c37016c0e', + '828384858788898a8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-192: I=34'), + ('4f1c769d1e5b0552c7eca84dea26a549', 'f3847210d5391e2360608e5acb560581', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2b4b5b6b7b9babbbc', + 'ecb-tbl-192: I=35'), + ('8548e2f882d7584d0fafc54372b6633a', '8754462cd223366d0753913e6af2643d', + 'bebfc0c1c3c4c5c6c8c9cacbcdcecfd0d2d3d4d5d7d8d9da', + 'ecb-tbl-192: I=36'), + ('87d7a336cb476f177cd2a51af2a62cdf', '1ea20617468d1b806a1fd58145462017', + 'dcdddedfe1e2e3e4e6e7e8e9ebecedeef0f1f2f3f5f6f7f8', + 'ecb-tbl-192: I=37'), + ('03b1feac668c4e485c1065dfc22b44ee', '3b155d927355d737c6be9dda60136e2e', + 'fafbfcfdfe01000204050607090a0b0c0e0f101113141516', + 'ecb-tbl-192: I=38'), + ('bda15e66819fa72d653a6866aa287962', '26144f7b66daa91b6333dbd3850502b3', + '18191a1b1d1e1f20222324252728292a2c2d2e2f31323334', + 'ecb-tbl-192: I=39'), + ('4d0c7a0d2505b80bf8b62ceb12467f0a', 'e4f9a4ab52ced8134c649bf319ebcc90', + '363738393b3c3d3e40414243454647484a4b4c4d4f505152', + 'ecb-tbl-192: I=40'), + ('626d34c9429b37211330986466b94e5f', 'b9ddd29ac6128a6cab121e34a4c62b36', + '54555657595a5b5c5e5f60616364656668696a6b6d6e6f70', + 'ecb-tbl-192: I=41'), + ('333c3e6bf00656b088a17e5ff0e7f60a', '6fcddad898f2ce4eff51294f5eaaf5c9', + '727374757778797a7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-192: I=42'), + 
('687ed0cdc0d2a2bc8c466d05ef9d2891', 'c9a6fe2bf4028080bea6f7fc417bd7e3', + '90919293959697989a9b9c9d9fa0a1a2a4a5a6a7a9aaabac', + 'ecb-tbl-192: I=43'), + ('487830e78cc56c1693e64b2a6660c7b6', '6a2026846d8609d60f298a9c0673127f', + 'aeafb0b1b3b4b5b6b8b9babbbdbebfc0c2c3c4c5c7c8c9ca', + 'ecb-tbl-192: I=44'), + ('7a48d6b7b52b29392aa2072a32b66160', '2cb25c005e26efea44336c4c97a4240b', + 'cccdcecfd1d2d3d4d6d7d8d9dbdcdddee0e1e2e3e5e6e7e8', + 'ecb-tbl-192: I=45'), + ('907320e64c8c5314d10f8d7a11c8618d', '496967ab8680ddd73d09a0e4c7dcc8aa', + 'eaebecedeff0f1f2f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-192: I=46'), + ('b561f2ca2d6e65a4a98341f3ed9ff533', 'd5af94de93487d1f3a8c577cb84a66a4', + '08090a0b0d0e0f10121314151718191a1c1d1e1f21222324', + 'ecb-tbl-192: I=47'), + ('df769380d212792d026f049e2e3e48ef', '84bdac569cae2828705f267cc8376e90', + '262728292b2c2d2e30313233353637383a3b3c3d3f404142', + 'ecb-tbl-192: I=48'), + ('79f374bc445bdabf8fccb8843d6054c6', 'f7401dda5ad5ab712b7eb5d10c6f99b6', + '44454647494a4b4c4e4f50515354555658595a5b5d5e5f60', + 'ecb-tbl-192: I=49'), + ('4e02f1242fa56b05c68dbae8fe44c9d6', '1c9d54318539ebd4c3b5b7e37bf119f0', + '626364656768696a6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-192: I=50'), + ('cf73c93cbff57ac635a6f4ad2a4a1545', 'aca572d65fb2764cffd4a6eca090ea0d', + '80818283858687888a8b8c8d8f90919294959697999a9b9c', + 'ecb-tbl-192: I=51'), + ('9923548e2875750725b886566784c625', '36d9c627b8c2a886a10ccb36eae3dfbb', + '9e9fa0a1a3a4a5a6a8a9aaabadaeafb0b2b3b4b5b7b8b9ba', + 'ecb-tbl-192: I=52'), + ('4888336b723a022c9545320f836a4207', '010edbf5981e143a81d646e597a4a568', + 'bcbdbebfc1c2c3c4c6c7c8c9cbcccdced0d1d2d3d5d6d7d8', + 'ecb-tbl-192: I=53'), + ('f84d9a5561b0608b1160dee000c41ba8', '8db44d538dc20cc2f40f3067fd298e60', + 'dadbdcdddfe0e1e2e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-192: I=54'), + ('c23192a0418e30a19b45ae3e3625bf22', '930eb53bc71e6ac4b82972bdcd5aafb3', + 'f8f9fafbfdfefe00020304050708090a0c0d0e0f11121314', + 'ecb-tbl-192: I=55'), + ('b84e0690b28b0025381ad82a15e501a7', '6c42a81edcbc9517ccd89c30c95597b4', + '161718191b1c1d1e20212223252627282a2b2c2d2f303132', + 'ecb-tbl-192: I=56'), + ('acef5e5c108876c4f06269f865b8f0b0', 'da389847ad06df19d76ee119c71e1dd3', + '34353637393a3b3c3e3f40414344454648494a4b4d4e4f50', + 'ecb-tbl-192: I=57'), + ('0f1b3603e0f5ddea4548246153a5e064', 'e018fdae13d3118f9a5d1a647a3f0462', + '525354555758595a5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-192: I=58'), + ('fbb63893450d42b58c6d88cd3c1809e3', '2aa65db36264239d3846180fabdfad20', + '70717273757677787a7b7c7d7f80818284858687898a8b8c', + 'ecb-tbl-192: I=59'), + ('4bef736df150259dae0c91354e8a5f92', '1472163e9a4f780f1ceb44b07ecf4fdb', + '8e8f90919394959698999a9b9d9e9fa0a2a3a4a5a7a8a9aa', + 'ecb-tbl-192: I=60'), + ('7d2d46242056ef13d3c3fc93c128f4c7', 'c8273fdc8f3a9f72e91097614b62397c', + 'acadaeafb1b2b3b4b6b7b8b9bbbcbdbec0c1c2c3c5c6c7c8', + 'ecb-tbl-192: I=61'), + ('e9c1ba2df415657a256edb33934680fd', '66c8427dcd733aaf7b3470cb7d976e3f', + 'cacbcccdcfd0d1d2d4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-192: I=62'), + ('e23ee277b0aa0a1dfb81f7527c3514f1', '146131cb17f1424d4f8da91e6f80c1d0', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fafcfdfeff01020304', + 'ecb-tbl-192: I=63'), + ('3e7445b0b63caaf75e4a911e12106b4c', '2610d0ad83659081ae085266a88770dc', + '060708090b0c0d0e10111213151617181a1b1c1d1f202122', + 'ecb-tbl-192: I=64'), + ('767774752023222544455a5be6e1e0e3', '38a2b5a974b0575c5d733917fb0d4570', + '24252627292a2b2c2e2f30313334353638393a3b3d3e3f40', + 'ecb-tbl-192: I=65'), + 
('72737475717e7f7ce9e8ebea696a6b6c', 'e21d401ebc60de20d6c486e4f39a588b', + '424344454748494a4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-192: I=66'), + ('dfdedddc25262728c9c8cfcef1eeefec', 'e51d5f88c670b079c0ca1f0c2c4405a2', + '60616263656667686a6b6c6d6f70717274757677797a7b7c', + 'ecb-tbl-192: I=67'), + ('fffe0100707776755f5e5d5c7675746b', '246a94788a642fb3d1b823c8762380c8', + '7e7f80818384858688898a8b8d8e8f90929394959798999a', + 'ecb-tbl-192: I=68'), + ('e0e1e2e3424140479f9e9190292e2f2c', 'b80c391c5c41a4c3b30c68e0e3d7550f', + '9c9d9e9fa1a2a3a4a6a7a8a9abacadaeb0b1b2b3b5b6b7b8', + 'ecb-tbl-192: I=69'), + ('2120272690efeeed3b3a39384e4d4c4b', 'b77c4754fc64eb9a1154a9af0bb1f21c', + 'babbbcbdbfc0c1c2c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-192: I=70'), + ('ecedeeef5350516ea1a0a7a6a3acadae', 'fb554de520d159a06bf219fc7f34a02f', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9eaecedeeeff1f2f3f4', + 'ecb-tbl-192: I=71'), + ('32333c3d25222320e9e8ebeacecdccc3', 'a89fba152d76b4927beed160ddb76c57', + 'f6f7f8f9fbfcfdfe00010203050607080a0b0c0d0f101112', + 'ecb-tbl-192: I=72'), + ('40414243626160678a8bb4b511161714', '5676eab4a98d2e8473b3f3d46424247c', + '14151617191a1b1c1e1f20212324252628292a2b2d2e2f30', + 'ecb-tbl-192: I=73'), + ('94959293f5fafbf81f1e1d1c7c7f7e79', '4e8f068bd7ede52a639036ec86c33568', + '323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-192: I=74'), + ('bebfbcbd191a1b14cfcec9c8546b6a69', 'f0193c4d7aff1791ee4c07eb4a1824fc', + '50515253555657585a5b5c5d5f60616264656667696a6b6c', + 'ecb-tbl-192: I=75'), + ('2c2d3233898e8f8cbbbab9b8333031ce', 'ac8686eeca9ba761afe82d67b928c33f', + '6e6f70717374757678797a7b7d7e7f80828384858788898a', + 'ecb-tbl-192: I=76'), + ('84858687bfbcbdba37363938fdfafbf8', '5faf8573e33b145b6a369cd3606ab2c9', + '8c8d8e8f91929394969798999b9c9d9ea0a1a2a3a5a6a7a8', + 'ecb-tbl-192: I=77'), + ('828384857669686b909192930b08090e', '31587e9944ab1c16b844ecad0df2e7da', + 'aaabacadafb0b1b2b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-192: I=78'), + ('bebfbcbd9695948b707176779e919093', 'd017fecd91148aba37f6f3068aa67d8a', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9dadcdddedfe1e2e3e4', + 'ecb-tbl-192: I=79'), + ('8b8a85846067666521202322d0d3d2dd', '788ef2f021a73cba2794b616078a8500', + 'e6e7e8e9ebecedeef0f1f2f3f5f6f7f8fafbfcfdfe010002', + 'ecb-tbl-192: I=80'), + ('76777475f1f2f3f4f8f9e6e777707172', '5d1ef20dced6bcbc12131ac7c54788aa', + '04050607090a0b0c0e0f10111314151618191a1b1d1e1f20', + 'ecb-tbl-192: I=81'), + ('a4a5a2a34f404142b4b5b6b727242522', 'b3c8cf961faf9ea05fdde6d1e4d8f663', + '222324252728292a2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-192: I=82'), + ('94959697e1e2e3ec16171011839c9d9e', '143075c70605861c7fac6526199e459f', + '40414243454647484a4b4c4d4f50515254555657595a5b5c', + 'ecb-tbl-192: I=83'), + ('03023d3c06010003dedfdcddfffcfde2', 'a5ae12eade9a87268d898bfc8fc0252a', + '5e5f60616364656668696a6b6d6e6f70727374757778797a', + 'ecb-tbl-192: I=84'), + ('10111213f1f2f3f4cecfc0c1dbdcddde', '0924f7cf2e877a4819f5244a360dcea9', + '7c7d7e7f81828384868788898b8c8d8e9091929395969798', + 'ecb-tbl-192: I=85'), + ('67666160724d4c4f1d1c1f1e73707176', '3d9e9635afcc3e291cc7ab3f27d1c99a', + '9a9b9c9d9fa0a1a2a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-192: I=86'), + ('e6e7e4e5a8abaad584858283909f9e9d', '9d80feebf87510e2b8fb98bb54fd788c', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9cacccdcecfd1d2d3d4', + 'ecb-tbl-192: I=87'), + ('71707f7e565150537d7c7f7e6162636c', '5f9d1a082a1a37985f174002eca01309', + 'd6d7d8d9dbdcdddee0e1e2e3e5e6e7e8eaebecedeff0f1f2', + 'ecb-tbl-192: I=88'), + 
('64656667212223245555aaaa03040506', 'a390ebb1d1403930184a44b4876646e4', + 'f4f5f6f7f9fafbfcfefe01010304050608090a0b0d0e0f10', + 'ecb-tbl-192: I=89'), + ('9e9f9899aba4a5a6cfcecdcc2b28292e', '700fe918981c3195bb6c4bcb46b74e29', + '121314151718191a1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-192: I=90'), + ('c7c6c5c4d1d2d3dc626364653a454447', '907984406f7bf2d17fb1eb15b673d747', + '30313233353637383a3b3c3d3f40414244454647494a4b4c', + 'ecb-tbl-192: I=91'), + ('f6f7e8e9e0e7e6e51d1c1f1e5b585966', 'c32a956dcfc875c2ac7c7cc8b8cc26e1', + '4e4f50515354555658595a5b5d5e5f60626364656768696a', + 'ecb-tbl-192: I=92'), + ('bcbdbebf5d5e5f5868696667f4f3f2f1', '02646e2ebfa9b820cf8424e9b9b6eb51', + '6c6d6e6f71727374767778797b7c7d7e8081828385868788', + 'ecb-tbl-192: I=93'), + ('40414647b0afaead9b9a99989b98999e', '621fda3a5bbd54c6d3c685816bd4ead8', + '8a8b8c8d8f90919294959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-192: I=94'), + ('69686b6a0201001f0f0e0908b4bbbab9', 'd4e216040426dfaf18b152469bc5ac2f', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9babcbdbebfc1c2c3c4', + 'ecb-tbl-192: I=95'), + ('c7c6c9c8d8dfdedd5a5b5859bebdbcb3', '9d0635b9d33b6cdbd71f5d246ea17cc8', + 'c6c7c8c9cbcccdced0d1d2d3d5d6d7d8dadbdcdddfe0e1e2', + 'ecb-tbl-192: I=96'), + ('dedfdcdd787b7a7dfffee1e0b2b5b4b7', '10abad1bd9bae5448808765583a2cc1a', + 'e4e5e6e7e9eaebeceeeff0f1f3f4f5f6f8f9fafbfdfefe00', + 'ecb-tbl-192: I=97'), + ('4d4c4b4a606f6e6dd0d1d2d3fbf8f9fe', '6891889e16544e355ff65a793c39c9a8', + '020304050708090a0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-192: I=98'), + ('b7b6b5b4d7d4d5dae5e4e3e2e1fefffc', 'cc735582e68072c163cd9ddf46b91279', + '20212223252627282a2b2c2d2f30313234353637393a3b3c', + 'ecb-tbl-192: I=99'), + ('cecfb0b1f7f0f1f2aeafacad3e3d3c23', 'c5c68b9aeeb7f878df578efa562f9574', + '3e3f40414344454648494a4b4d4e4f50525354555758595a', + 'ecb-tbl-192: I=100'), + ('cacbc8c9cdcecfc812131c1d494e4f4c', '5f4764395a667a47d73452955d0d2ce8', + '5c5d5e5f61626364666768696b6c6d6e7071727375767778', + 'ecb-tbl-192: I=101'), + ('9d9c9b9ad22d2c2fb1b0b3b20c0f0e09', '701448331f66106cefddf1eb8267c357', + '7a7b7c7d7f80818284858687898a8b8c8e8f909193949596', + 'ecb-tbl-192: I=102'), + ('7a7b787964676659959493924f404142', 'cb3ee56d2e14b4e1941666f13379d657', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aaacadaeafb1b2b3b4', + 'ecb-tbl-192: I=103'), + ('aaaba4a5cec9c8cb1f1e1d1caba8a9a6', '9fe16efd18ab6e1981191851fedb0764', + 'b6b7b8b9bbbcbdbec0c1c2c3c5c6c7c8cacbcccdcfd0d1d2', + 'ecb-tbl-192: I=104'), + ('93929190282b2a2dc4c5fafb92959497', '3dc9ba24e1b223589b147adceb4c8e48', + 'd4d5d6d7d9dadbdcdedfe0e1e3e4e5e6e8e9eaebedeeeff0', + 'ecb-tbl-192: I=105'), + ('efeee9e8ded1d0d339383b3a888b8a8d', '1c333032682e7d4de5e5afc05c3e483c', + 'f2f3f4f5f7f8f9fafcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-192: I=106'), + ('7f7e7d7ca2a1a0af78797e7f112e2f2c', 'd593cc99a95afef7e92038e05a59d00a', + '10111213151617181a1b1c1d1f20212224252627292a2b2c', + 'ecb-tbl-192: I=107'), + ('84859a9b2b2c2d2e868784852625245b', '51e7f96f53b4353923452c222134e1ec', + '2e2f30313334353638393a3b3d3e3f40424344454748494a', + 'ecb-tbl-192: I=108'), + ('b0b1b2b3070405026869666710171615', '4075b357a1a2b473400c3b25f32f81a4', + '4c4d4e4f51525354565758595b5c5d5e6061626365666768', + 'ecb-tbl-192: I=109'), + ('acadaaabbda2a3a00d0c0f0e595a5b5c', '302e341a3ebcd74f0d55f61714570284', + '6a6b6c6d6f70717274757677797a7b7c7e7f808183848586', + 'ecb-tbl-192: I=110'), + ('121310115655544b5253545569666764', '57abdd8231280da01c5042b78cf76522', + '88898a8b8d8e8f90929394959798999a9c9d9e9fa1a2a3a4', + 'ecb-tbl-192: I=111'), + 
('dedfd0d166616063eaebe8e94142434c', '17f9ea7eea17ac1adf0e190fef799e92', + 'a6a7a8a9abacadaeb0b1b2b3b5b6b7b8babbbcbdbfc0c1c2', + 'ecb-tbl-192: I=112'), + ('dbdad9d81417161166677879e0e7e6e5', '2e1bdd563dd87ee5c338dd6d098d0a7a', + 'c4c5c6c7c9cacbcccecfd0d1d3d4d5d6d8d9dadbdddedfe0', + 'ecb-tbl-192: I=113'), + ('6a6b6c6de0efeeed2b2a2928c0c3c2c5', 'eb869996e6f8bfb2bfdd9e0c4504dbb2', + 'e2e3e4e5e7e8e9eaecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-192: I=114'), + ('b1b0b3b21714151a1a1b1c1d5649484b', 'c2e01549e9decf317468b3e018c61ba8', + '00010203050607080a0b0c0d0f10111214151617191a1b1c', + 'ecb-tbl-192: I=115'), + ('39380706a3a4a5a6c4c5c6c77271706f', '8da875d033c01dd463b244a1770f4a22', + '1e1f20212324252628292a2b2d2e2f30323334353738393a', + 'ecb-tbl-192: I=116'), + ('5c5d5e5f1013121539383736e2e5e4e7', '8ba0dcf3a186844f026d022f8839d696', + '3c3d3e3f41424344464748494b4c4d4e5051525355565758', + 'ecb-tbl-192: I=117'), + ('43424544ead5d4d72e2f2c2d64676661', 'e9691ff9a6cc6970e51670a0fd5b88c1', + '5a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-192: I=118'), + ('55545756989b9a65f8f9feff18171615', 'f2baec06faeed30f88ee63ba081a6e5b', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394', + 'ecb-tbl-192: I=119'), + ('05040b0a525554573c3d3e3f4a494847', '9c39d4c459ae5753394d6094adc21e78', + '969798999b9c9d9ea0a1a2a3a5a6a7a8aaabacadafb0b1b2', + 'ecb-tbl-192: I=120'), + ('14151617595a5b5c8584fbfa8e89888b', '6345b532a11904502ea43ba99c6bd2b2', + 'b4b5b6b7b9babbbcbebfc0c1c3c4c5c6c8c9cacbcdcecfd0', + 'ecb-tbl-192: I=121'), + ('7c7d7a7bfdf2f3f029282b2a51525354', '5ffae3061a95172e4070cedce1e428c8', + 'd2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-192: I=122'), + ('38393a3b1e1d1c1341404746c23d3c3e', '0a4566be4cdf9adce5dec865b5ab34cd', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c', + 'ecb-tbl-192: I=123'), + ('8d8c939240474645818083827c7f7e41', 'ca17fcce79b7404f2559b22928f126fb', + '0e0f10111314151618191a1b1d1e1f20222324252728292a', + 'ecb-tbl-192: I=124'), + ('3b3a39381a19181f32333c3d45424340', '97ca39b849ed73a6470a97c821d82f58', + '2c2d2e2f31323334363738393b3c3d3e4041424345464748', + 'ecb-tbl-192: I=125'), + ('f0f1f6f738272625828380817f7c7d7a', '8198cb06bc684c6d3e9b7989428dcf7a', + '4a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-192: I=126'), + ('89888b8a0407061966676061141b1a19', 'f53c464c705ee0f28d9a4c59374928bd', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384', + 'ecb-tbl-192: I=127'), + ('d3d2dddcaaadacaf9c9d9e9fe8ebeae5', '9adb3d4cca559bb98c3e2ed73dbf1154', + '868788898b8c8d8e90919293959697989a9b9c9d9fa0a1a2', + 'ecb-tbl-192: I=128'), + + # ecb_tbl.txt, KEYSIZE=256 + ('834eadfccac7e1b30664b1aba44815ab', '1946dabf6a03a2a2c3d0b05080aed6fc', + '00010203050607080a0b0c0d0f10111214151617191a1b1c1e1f202123242526', + 'ecb-tbl-256: I=1'), + ('d9dc4dba3021b05d67c0518f72b62bf1', '5ed301d747d3cc715445ebdec62f2fb4', + '28292a2b2d2e2f30323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-256: I=2'), + ('a291d86301a4a739f7392173aa3c604c', '6585c8f43d13a6beab6419fc5935b9d0', + '50515253555657585a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-256: I=3'), + ('4264b2696498de4df79788a9f83e9390', '2a5b56a596680fcc0e05f5e0f151ecae', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-256: I=4'), + ('ee9932b3721804d5a83ef5949245b6f6', 'f5d6ff414fd2c6181494d20c37f2b8c4', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-256: I=5'), + ('e6248f55c5fdcbca9cbbb01c88a2ea77', 
'85399c01f59fffb5204f19f8482f00b8', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-256: I=6'), + ('b8358e41b9dff65fd461d55a99266247', '92097b4c88a041ddf98144bc8d22e8e7', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c0e0f101113141516', + 'ecb-tbl-256: I=7'), + ('f0e2d72260af58e21e015ab3a4c0d906', '89bd5b73b356ab412aef9f76cea2d65c', + '18191a1b1d1e1f20222324252728292a2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-256: I=8'), + ('475b8b823ce8893db3c44a9f2a379ff7', '2536969093c55ff9454692f2fac2f530', + '40414243454647484a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-256: I=9'), + ('688f5281945812862f5f3076cf80412f', '07fc76a872843f3f6e0081ee9396d637', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-256: I=10'), + ('08d1d2bc750af553365d35e75afaceaa', 'e38ba8ec2aa741358dcc93e8f141c491', + '90919293959697989a9b9c9d9fa0a1a2a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-256: I=11'), + ('8707121f47cc3efceca5f9a8474950a1', 'd028ee23e4a89075d0b03e868d7d3a42', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9cacccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-256: I=12'), + ('e51aa0b135dba566939c3b6359a980c5', '8cd9423dfc459e547155c5d1d522e540', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-256: I=13'), + ('069a007fc76a459f98baf917fedf9521', '080e9517eb1677719acf728086040ae3', + '08090a0b0d0e0f10121314151718191a1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-256: I=14'), + ('726165c1723fbcf6c026d7d00b091027', '7c1700211a3991fc0ecded0ab3e576b0', + '30313233353637383a3b3c3d3f40414244454647494a4b4c4e4f505153545556', + 'ecb-tbl-256: I=15'), + ('d7c544de91d55cfcde1f84ca382200ce', 'dabcbcc855839251db51e224fbe87435', + '58595a5b5d5e5f60626364656768696a6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-256: I=16'), + ('fed3c9a161b9b5b2bd611b41dc9da357', '68d56fad0406947a4dd27a7448c10f1d', + '80818283858687888a8b8c8d8f90919294959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-256: I=17'), + ('4f634cdc6551043409f30b635832cf82', 'da9a11479844d1ffee24bbf3719a9925', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9babcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-256: I=18'), + ('109ce98db0dfb36734d9f3394711b4e6', '5e4ba572f8d23e738da9b05ba24b8d81', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-256: I=19'), + ('4ea6dfaba2d8a02ffdffa89835987242', 'a115a2065d667e3f0b883837a6e903f8', + '70717273757677787a7b7c7d7f80818284858687898a8b8c8e8f909193949596', + 'ecb-tbl-256: I=20'), + ('5ae094f54af58e6e3cdbf976dac6d9ef', '3e9e90dc33eac2437d86ad30b137e66e', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aaacadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-256: I=21'), + ('764d8e8e0f29926dbe5122e66354fdbe', '01ce82d8fbcdae824cb3c48e495c3692', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2d4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-256: I=22'), + ('3f0418f888cdf29a982bf6b75410d6a9', '0c9cff163ce936faaf083cfd3dea3117', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fafcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-256: I=23'), + ('e4a3e7cb12cdd56aa4a75197a9530220', '5131ba9bd48f2bba85560680df504b52', + '10111213151617181a1b1c1d1f20212224252627292a2b2c2e2f303133343536', + 'ecb-tbl-256: I=24'), + ('211677684aac1ec1a160f44c4ebf3f26', '9dc503bbf09823aec8a977a5ad26ccb2', + '38393a3b3d3e3f40424344454748494a4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-256: I=25'), + ('d21e439ff749ac8f18d6d4b105e03895', '9a6db0c0862e506a9e397225884041d7', + '60616263656667686a6b6c6d6f70717274757677797a7b7c7e7f808183848586', + 'ecb-tbl-256: I=26'), + ('d9f6ff44646c4725bd4c0103ff5552a7', 
'430bf9570804185e1ab6365fc6a6860c', + '88898a8b8d8e8f90929394959798999a9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-256: I=27'), + ('0b1256c2a00b976250cfc5b0c37ed382', '3525ebc02f4886e6a5a3762813e8ce8a', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-256: I=28'), + ('b056447ffc6dc4523a36cc2e972a3a79', '07fa265c763779cce224c7bad671027b', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9eaecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-256: I=29'), + ('5e25ca78f0de55802524d38da3fe4456', 'e8b72b4e8be243438c9fff1f0e205872', + '00010203050607080a0b0c0d0f10111214151617191a1b1c1e1f202123242526', + 'ecb-tbl-256: I=30'), + ('a5bcf4728fa5eaad8567c0dc24675f83', '109d4f999a0e11ace1f05e6b22cbcb50', + '28292a2b2d2e2f30323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-256: I=31'), + ('814e59f97ed84646b78b2ca022e9ca43', '45a5e8d4c3ed58403ff08d68a0cc4029', + '50515253555657585a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-256: I=32'), + ('15478beec58f4775c7a7f5d4395514d7', '196865964db3d417b6bd4d586bcb7634', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-256: I=33'), + ('253548ffca461c67c8cbc78cd59f4756', '60436ad45ac7d30d99195f815d98d2ae', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-256: I=34'), + ('fd7ad8d73b9b0f8cc41600640f503d65', 'bb07a23f0b61014b197620c185e2cd75', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-256: I=35'), + ('06199de52c6cbf8af954cd65830bcd56', '5bc0b2850129c854423aff0751fe343b', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c0e0f101113141516', + 'ecb-tbl-256: I=36'), + ('f17c4ffe48e44c61bd891e257e725794', '7541a78f96738e6417d2a24bd2beca40', + '18191a1b1d1e1f20222324252728292a2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-256: I=37'), + ('9a5b4a402a3e8a59be6bf5cd8154f029', 'b0a303054412882e464591f1546c5b9e', + '40414243454647484a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-256: I=38'), + ('79bd40b91a7e07dc939d441782ae6b17', '778c06d8a355eeee214fcea14b4e0eef', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-256: I=39'), + ('d8ceaaf8976e5fbe1012d8c84f323799', '09614206d15cbace63227d06db6beebb', + '90919293959697989a9b9c9d9fa0a1a2a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-256: I=40'), + ('3316e2751e2e388b083da23dd6ac3fbe', '41b97fb20e427a9fdbbb358d9262255d', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9cacccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-256: I=41'), + ('8b7cfbe37de7dca793521819242c5816', 'c1940f703d845f957652c2d64abd7adf', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-256: I=42'), + ('f23f033c0eebf8ec55752662fd58ce68', 'd2d44fcdae5332343366db297efcf21b', + '08090a0b0d0e0f10121314151718191a1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-256: I=43'), + ('59eb34f6c8bdbacc5fc6ad73a59a1301', 'ea8196b79dbe167b6aa9896e287eed2b', + '30313233353637383a3b3c3d3f40414244454647494a4b4c4e4f505153545556', + 'ecb-tbl-256: I=44'), + ('dcde8b6bd5cf7cc22d9505e3ce81261a', 'd6b0b0c4ba6c7dbe5ed467a1e3f06c2d', + '58595a5b5d5e5f60626364656768696a6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-256: I=45'), + ('e33cf7e524fed781e7042ff9f4b35dc7', 'ec51eb295250c22c2fb01816fb72bcae', + '80818283858687888a8b8c8d8f90919294959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-256: I=46'), + ('27963c8facdf73062867d164df6d064c', 'aded6630a07ce9c7408a155d3bd0d36f', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9babcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-256: I=47'), + ('77b1ce386b551b995f2f2a1da994eef8', 
'697c9245b9937f32f5d1c82319f0363a', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-256: I=48'), + ('f083388b013679efcf0bb9b15d52ae5c', 'aad5ad50c6262aaec30541a1b7b5b19c', + 'f8f9fafbfdfefe00020304050708090a0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-256: I=49'), + ('c5009e0dab55db0abdb636f2600290c8', '7d34b893855341ec625bd6875ac18c0d', + '20212223252627282a2b2c2d2f30313234353637393a3b3c3e3f404143444546', + 'ecb-tbl-256: I=50'), + ('7804881e26cd532d8514d3683f00f1b9', '7ef05105440f83862f5d780e88f02b41', + '48494a4b4d4e4f50525354555758595a5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-256: I=51'), + ('46cddcd73d1eb53e675ca012870a92a3', 'c377c06403382061af2c9c93a8e70df6', + '70717273757677787a7b7c7d7f80818284858687898a8b8c8e8f909193949596', + 'ecb-tbl-256: I=52'), + ('a9fb44062bb07fe130a8e8299eacb1ab', '1dbdb3ffdc052dacc83318853abc6de5', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aaacadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-256: I=53'), + ('2b6ff8d7a5cc3a28a22d5a6f221af26b', '69a6eab00432517d0bf483c91c0963c7', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2d4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-256: I=54'), + ('1a9527c29b8add4b0e3e656dbb2af8b4', '0797f41dc217c80446e1d514bd6ab197', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fafcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-256: I=55'), + ('7f99cf2c75244df015eb4b0c1050aeae', '9dfd76575902a637c01343c58e011a03', + '10111213151617181a1b1c1d1f20212224252627292a2b2c2e2f303133343536', + 'ecb-tbl-256: I=56'), + ('e84ff85b0d9454071909c1381646c4ed', 'acf4328ae78f34b9fa9b459747cc2658', + '38393a3b3d3e3f40424344454748494a4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-256: I=57'), + ('89afd40f99521280d5399b12404f6db4', 'b0479aea12bac4fe2384cf98995150c6', + '60616263656667686a6b6c6d6f70717274757677797a7b7c7e7f808183848586', + 'ecb-tbl-256: I=58'), + ('a09ef32dbc5119a35ab7fa38656f0329', '9dd52789efe3ffb99f33b3da5030109a', + '88898a8b8d8e8f90929394959798999a9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-256: I=59'), + ('61773457f068c376c7829b93e696e716', 'abbb755e4621ef8f1214c19f649fb9fd', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-256: I=60'), + ('a34f0cae726cce41dd498747d891b967', 'da27fb8174357bce2bed0e7354f380f9', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9eaecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-256: I=61'), + ('856f59496c7388ee2d2b1a27b7697847', 'c59a0663f0993838f6e5856593bdc5ef', + '00010203050607080a0b0c0d0f10111214151617191a1b1c1e1f202123242526', + 'ecb-tbl-256: I=62'), + ('cb090c593ef7720bd95908fb93b49df4', 'ed60b264b5213e831607a99c0ce5e57e', + '28292a2b2d2e2f30323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-256: I=63'), + ('a0ac75cd2f1923d460fc4d457ad95baf', 'e50548746846f3eb77b8c520640884ed', + '50515253555657585a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-256: I=64'), + ('2a2b282974777689e8e9eeef525d5c5f', '28282cc7d21d6a2923641e52d188ef0c', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-256: I=65'), + ('909192939390919e0f0e09089788898a', '0dfa5b02abb18e5a815305216d6d4f8e', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-256: I=66'), + ('777675748d8e8f907170777649464744', '7359635c0eecefe31d673395fb46fb99', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-256: I=67'), + ('717073720605040b2d2c2b2a05fafbf9', '73c679f7d5aef2745c9737bb4c47fb36', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c0e0f101113141516', + 'ecb-tbl-256: I=68'), + ('64656667fefdfcc31b1a1d1ca5aaaba8', 
'b192bd472a4d2eafb786e97458967626', + '18191a1b1d1e1f20222324252728292a2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-256: I=69'), + ('dbdad9d86a696867b5b4b3b2c8d7d6d5', '0ec327f6c8a2b147598ca3fde61dc6a4', + '40414243454647484a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-256: I=70'), + ('5c5d5e5fe3e0e1fe31303736333c3d3e', 'fc418eb3c41b859b38d4b6f646629729', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-256: I=71'), + ('545556574b48494673727574546b6a69', '30249e5ac282b1c981ea64b609f3a154', + '90919293959697989a9b9c9d9fa0a1a2a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-256: I=72'), + ('ecedeeefc6c5c4bb56575051f5fafbf8', '5e6e08646d12150776bb43c2d78a9703', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9cacccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-256: I=73'), + ('464744452724252ac9c8cfced2cdcccf', 'faeb3d5de652cd3447dceb343f30394a', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-256: I=74'), + ('e6e7e4e54142435c878681801c131211', 'a8e88706823f6993ef80d05c1c7b2cf0', + '08090a0b0d0e0f10121314151718191a1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-256: I=75'), + ('72737071cfcccdc2f9f8fffe710e0f0c', '8ced86677e6e00a1a1b15968f2d3cce6', + '30313233353637383a3b3c3d3f40414244454647494a4b4c4e4f505153545556', + 'ecb-tbl-256: I=76'), + ('505152537370714ec3c2c5c4010e0f0c', '9fc7c23858be03bdebb84e90db6786a9', + '58595a5b5d5e5f60626364656768696a6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-256: I=77'), + ('a8a9aaab5c5f5e51aeafa8a93d222320', 'b4fbd65b33f70d8cf7f1111ac4649c36', + '80818283858687888a8b8c8d8f90919294959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-256: I=78'), + ('dedfdcddf6f5f4eb10111617fef1f0f3', 'c5c32d5ed03c4b53cc8c1bd0ef0dbbf6', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9babcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-256: I=79'), + ('bdbcbfbe5e5d5c530b0a0d0cfac5c4c7', 'd1a7f03b773e5c212464b63709c6a891', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-256: I=80'), + ('8a8b8889050606f8f4f5f2f3636c6d6e', '6b7161d8745947ac6950438ea138d028', + 'f8f9fafbfdfefe00020304050708090a0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-256: I=81'), + ('a6a7a4a54d4e4f40b2b3b4b539262724', 'fd47a9f7e366ee7a09bc508b00460661', + '20212223252627282a2b2c2d2f30313234353637393a3b3c3e3f404143444546', + 'ecb-tbl-256: I=82'), + ('9c9d9e9fe9eaebf40e0f08099b949596', '00d40b003dc3a0d9310b659b98c7e416', + '48494a4b4d4e4f50525354555758595a5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-256: I=83'), + ('2d2c2f2e1013121dcccdcacbed121310', 'eea4c79dcc8e2bda691f20ac48be0717', + '70717273757677787a7b7c7d7f80818284858687898a8b8c8e8f909193949596', + 'ecb-tbl-256: I=84'), + ('f4f5f6f7edeeefd0eaebecedf7f8f9fa', 'e78f43b11c204403e5751f89d05a2509', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aaacadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-256: I=85'), + ('3d3c3f3e282b2a2573727574150a0b08', 'd0f0e3d1f1244bb979931e38dd1786ef', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2d4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-256: I=86'), + ('b6b7b4b5f8fbfae5b4b5b2b3a0afaead', '042e639dc4e1e4dde7b75b749ea6f765', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fafcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-256: I=87'), + ('b7b6b5b4989b9a95878681809ba4a5a6', 'bc032fdd0efe29503a980a7d07ab46a8', + '10111213151617181a1b1c1d1f20212224252627292a2b2c2e2f303133343536', + 'ecb-tbl-256: I=88'), + ('a8a9aaabe5e6e798e9e8efee4748494a', '0c93ac949c0da6446effb86183b6c910', + '38393a3b3d3e3f40424344454748494a4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-256: I=89'), + ('ecedeeefd9dadbd4b9b8bfbe657a7b78', 
'e0d343e14da75c917b4a5cec4810d7c2', + '60616263656667686a6b6c6d6f70717274757677797a7b7c7e7f808183848586', + 'ecb-tbl-256: I=90'), + ('7f7e7d7c696a6b74cacbcccd929d9c9f', '0eafb821748408279b937b626792e619', + '88898a8b8d8e8f90929394959798999a9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-256: I=91'), + ('08090a0b0605040bfffef9f8b9c6c7c4', 'fa1ac6e02d23b106a1fef18b274a553f', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-256: I=92'), + ('08090a0bf1f2f3ccfcfdfafb68676665', '0dadfe019cd12368075507df33c1a1e9', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9eaecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-256: I=93'), + ('cacbc8c93a393837050403020d121310', '3a0879b414465d9ffbaf86b33a63a1b9', + '00010203050607080a0b0c0d0f10111214151617191a1b1c1e1f202123242526', + 'ecb-tbl-256: I=94'), + ('e9e8ebea8281809f8f8e8988343b3a39', '62199fadc76d0be1805d3ba0b7d914bf', + '28292a2b2d2e2f30323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-256: I=95'), + ('515053524645444bd0d1d6d7340b0a09', '1b06d6c5d333e742730130cf78e719b4', + '50515253555657585a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-256: I=96'), + ('42434041ecefee1193929594c6c9c8cb', 'f1f848824c32e9dcdcbf21580f069329', + '78797a7b7d7e7f80828384858788898a8c8d8e8f91929394969798999b9c9d9e', + 'ecb-tbl-256: I=97'), + ('efeeedecc2c1c0cf76777071455a5b58', '1a09050cbd684f784d8e965e0782f28a', + 'a0a1a2a3a5a6a7a8aaabacadafb0b1b2b4b5b6b7b9babbbcbebfc0c1c3c4c5c6', + 'ecb-tbl-256: I=98'), + ('5f5e5d5c3f3c3d221d1c1b1a19161714', '79c2969e7ded2ba7d088f3f320692360', + 'c8c9cacbcdcecfd0d2d3d4d5d7d8d9dadcdddedfe1e2e3e4e6e7e8e9ebecedee', + 'ecb-tbl-256: I=99'), + ('000102034142434c1c1d1a1b8d727371', '091a658a2f7444c16accb669450c7b63', + 'f0f1f2f3f5f6f7f8fafbfcfdfe01000204050607090a0b0c0e0f101113141516', + 'ecb-tbl-256: I=100'), + ('8e8f8c8db1b2b38c56575051050a0b08', '97c1e3a72cca65fa977d5ed0e8a7bbfc', + '18191a1b1d1e1f20222324252728292a2c2d2e2f31323334363738393b3c3d3e', + 'ecb-tbl-256: I=101'), + ('a7a6a5a4e8ebeae57f7e7978cad5d4d7', '70c430c6db9a17828937305a2df91a2a', + '40414243454647484a4b4c4d4f50515254555657595a5b5c5e5f606163646566', + 'ecb-tbl-256: I=102'), + ('8a8b888994979689454443429f909192', '629553457fbe2479098571c7c903fde8', + '68696a6b6d6e6f70727374757778797a7c7d7e7f81828384868788898b8c8d8e', + 'ecb-tbl-256: I=103'), + ('8c8d8e8fe0e3e2ed45444342f1cecfcc', 'a25b25a61f612669e7d91265c7d476ba', + '90919293959697989a9b9c9d9fa0a1a2a4a5a6a7a9aaabacaeafb0b1b3b4b5b6', + 'ecb-tbl-256: I=104'), + ('fffefdfc4c4f4e31d8d9dedfb6b9b8bb', 'eb7e4e49b8ae0f024570dda293254fed', + 'b8b9babbbdbebfc0c2c3c4c5c7c8c9cacccdcecfd1d2d3d4d6d7d8d9dbdcddde', + 'ecb-tbl-256: I=105'), + ('fdfcfffecccfcec12f2e29286679787b', '38fe15d61cca84516e924adce5014f67', + 'e0e1e2e3e5e6e7e8eaebecedeff0f1f2f4f5f6f7f9fafbfcfefe010103040506', + 'ecb-tbl-256: I=106'), + ('67666564bab9b8a77071767719161714', '3ad208492249108c9f3ebeb167ad0583', + '08090a0b0d0e0f10121314151718191a1c1d1e1f21222324262728292b2c2d2e', + 'ecb-tbl-256: I=107'), + ('9a9b98992d2e2f2084858283245b5a59', '299ba9f9bf5ab05c3580fc26edd1ed12', + '30313233353637383a3b3c3d3f40414244454647494a4b4c4e4f505153545556', + 'ecb-tbl-256: I=108'), + ('a4a5a6a70b0809365c5d5a5b2c232221', '19dc705b857a60fb07717b2ea5717781', + '58595a5b5d5e5f60626364656768696a6c6d6e6f71727374767778797b7c7d7e', + 'ecb-tbl-256: I=109'), + ('464744455754555af3f2f5f4afb0b1b2', 'ffc8aeb885b5efcad06b6dbebf92e76b', + '80818283858687888a8b8c8d8f90919294959697999a9b9c9e9fa0a1a3a4a5a6', + 'ecb-tbl-256: I=110'), + 
('323330317675746b7273747549464744', 'f58900c5e0b385253ff2546250a0142b', + 'a8a9aaabadaeafb0b2b3b4b5b7b8b9babcbdbebfc1c2c3c4c6c7c8c9cbcccdce', + 'ecb-tbl-256: I=111'), + ('a8a9aaab181b1a15808186872b141516', '2ee67b56280bc462429cee6e3370cbc1', + 'd0d1d2d3d5d6d7d8dadbdcdddfe0e1e2e4e5e6e7e9eaebeceeeff0f1f3f4f5f6', + 'ecb-tbl-256: I=112'), + ('e7e6e5e4202323ddaaabacad343b3a39', '20db650a9c8e9a84ab4d25f7edc8f03f', + 'f8f9fafbfdfefe00020304050708090a0c0d0e0f11121314161718191b1c1d1e', + 'ecb-tbl-256: I=113'), + ('a8a9aaab2221202fedecebea1e010003', '3c36da169525cf818843805f25b78ae5', + '20212223252627282a2b2c2d2f30313234353637393a3b3c3e3f404143444546', + 'ecb-tbl-256: I=114'), + ('f9f8fbfa5f5c5d42424344450e010003', '9a781d960db9e45e37779042fea51922', + '48494a4b4d4e4f50525354555758595a5c5d5e5f61626364666768696b6c6d6e', + 'ecb-tbl-256: I=115'), + ('57565554f5f6f7f89697909120dfdedd', '6560395ec269c672a3c288226efdba77', + '70717273757677787a7b7c7d7f80818284858687898a8b8c8e8f909193949596', + 'ecb-tbl-256: I=116'), + ('f8f9fafbcccfcef1dddcdbda0e010003', '8c772b7a189ac544453d5916ebb27b9a', + '98999a9b9d9e9fa0a2a3a4a5a7a8a9aaacadaeafb1b2b3b4b6b7b8b9bbbcbdbe', + 'ecb-tbl-256: I=117'), + ('d9d8dbda7073727d80818687c2dddcdf', '77ca5468cc48e843d05f78eed9d6578f', + 'c0c1c2c3c5c6c7c8cacbcccdcfd0d1d2d4d5d6d7d9dadbdcdedfe0e1e3e4e5e6', + 'ecb-tbl-256: I=118'), + ('c5c4c7c6080b0a1588898e8f68676665', '72cdcc71dc82c60d4429c9e2d8195baa', + 'e8e9eaebedeeeff0f2f3f4f5f7f8f9fafcfdfeff01020304060708090b0c0d0e', + 'ecb-tbl-256: I=119'), + ('83828180dcdfded186878081f0cfcecd', '8080d68ce60e94b40b5b8b69eeb35afa', + '10111213151617181a1b1c1d1f20212224252627292a2b2c2e2f303133343536', + 'ecb-tbl-256: I=120'), + ('98999a9bdddedfa079787f7e0a050407', '44222d3cde299c04369d58ac0eba1e8e', + '38393a3b3d3e3f40424344454748494a4c4d4e4f51525354565758595b5c5d5e', + 'ecb-tbl-256: I=121'), + ('cecfcccd4f4c4d429f9e9998dfc0c1c2', '9b8721b0a8dfc691c5bc5885dbfcb27a', + '60616263656667686a6b6c6d6f70717274757677797a7b7c7e7f808183848586', + 'ecb-tbl-256: I=122'), + ('404142436665647b29282f2eaba4a5a6', '0dc015ce9a3a3414b5e62ec643384183', + '88898a8b8d8e8f90929394959798999a9c9d9e9fa1a2a3a4a6a7a8a9abacadae', + 'ecb-tbl-256: I=123'), + ('33323130e6e5e4eb23222524dea1a0a3', '705715448a8da412025ce38345c2a148', + 'b0b1b2b3b5b6b7b8babbbcbdbfc0c1c2c4c5c6c7c9cacbcccecfd0d1d3d4d5d6', + 'ecb-tbl-256: I=124'), + ('cfcecdccf6f5f4cbe6e7e0e199969794', 'c32b5b0b6fbae165266c569f4b6ecf0b', + 'd8d9dadbdddedfe0e2e3e4e5e7e8e9eaecedeeeff1f2f3f4f6f7f8f9fbfcfdfe', + 'ecb-tbl-256: I=125'), + ('babbb8b97271707fdcdddadb29363734', '4dca6c75192a01ddca9476af2a521e87', + '00010203050607080a0b0c0d0f10111214151617191a1b1c1e1f202123242526', + 'ecb-tbl-256: I=126'), + ('c9c8cbca4447465926272021545b5a59', '058691e627ecbc36ac07b6db423bd698', + '28292a2b2d2e2f30323334353738393a3c3d3e3f41424344464748494b4c4d4e', + 'ecb-tbl-256: I=127'), + ('050407067477767956575051221d1c1f', '7444527095838fe080fc2bcdd30847eb', + '50515253555657585a5b5c5d5f60616264656667696a6b6c6e6f707173747576', + 'ecb-tbl-256: I=128'), + + # NIST SP 800-38A test vectors, 2001 edition. Annex F.
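+    #
+    # The first block of the F.1.1 ECB-AES128 vector below can be spot-checked
+    # with the public AES API (an illustrative sketch, not part of the vendored
+    # test suite; it only assumes the Cryptodome package itself):
+    #
+    #   from binascii import unhexlify
+    #   from Cryptodome.Cipher import AES
+    #
+    #   key = unhexlify('2b7e151628aed2a6abf7158809cf4f3c')
+    #   pt = unhexlify('6bc1bee22e409f96e93d7e117393172a')
+    #   assert AES.new(key, AES.MODE_ECB).encrypt(pt) == \
+    #       unhexlify('3ad77bb40d7a3660a89ecaf32466ef97')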
+ + ('6bc1bee22e409f96e93d7e117393172a'+'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411e5fbc1191a0a52ef'+'f69f2445df4f9b17ad2b417be66c3710', + '3ad77bb40d7a3660a89ecaf32466ef97'+'f5d3d58503b9699de785895a96fdbaaf'+ + '43b1cd7f598ece23881b00e3ed030688'+'7b0c785e27e8ad3f8223207104725dd4', + '2b7e151628aed2a6abf7158809cf4f3c', + 'NIST 800-38A, F.1.1, ECB and AES-128'), + + ('6bc1bee22e409f96e93d7e117393172a'+'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411e5fbc1191a0a52ef'+'f69f2445df4f9b17ad2b417be66c3710', + 'bd334f1d6e45f25ff712a214571fa5cc'+'974104846d0ad3ad7734ecb3ecee4eef'+ + 'ef7afd2270e2e60adce0ba2face6444e'+'9a4b41ba738d6c72fb16691603c18e0e', + '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b', + 'NIST 800-38A, F.1.3, ECB and AES-192'), + + ('6bc1bee22e409f96e93d7e117393172a'+'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411e5fbc1191a0a52ef'+'f69f2445df4f9b17ad2b417be66c3710', + 'f3eed1bdb5d2a03c064b5a7e3db181f8'+'591ccb10d410ed26dc5ba74a31362870'+ + 'b6ed21b99ca6f4f9f153e7b1beafed1d'+'23304b7a39f9f3ff067d8d8f9e24ecc7', + '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4', + 'NIST 800-38A, F.1.5, ECB and AES-256'), + +] + +# Each test vector is also replicated 8 times so that multi-block inputs +# exercise the 8-lane (AES-NI) code path as well as the serial one. +test_data_8_lanes = [] +for td in test_data: + test_data_8_lanes.append((td[0] * 8, td[1] * 8, td[2], td[3])) +test_data += test_data_8_lanes + +class TestMultipleBlocks(unittest.TestCase): + + def __init__(self, use_aesni): + unittest.TestCase.__init__(self) + self.use_aesni = use_aesni + + def runTest(self): + # Encrypt data which is 8*2+4 blocks (320 bytes) long, so as to trigger + # (for the AESNI variant) both the path that parallelizes 8 lanes and + # the one that processes data serially + + tvs = [ + (b'a' * 16, 'c0b27011eb15bf144d2fc9fae80ea16d4c231cb230416c5fac02e6835ad9d7d0'), + (b'a' * 24, 'df8435ce361a78c535b41dcb57da952abbf9ee5954dc6fbcd75fd00fa626915d'), + (b'a' * 32, '211402de6c80db1f92ba255881178e1f70783b8cfd3b37808205e48b80486cd8') + ] + + for key, expected in tvs: + + cipher = AES.new(key, AES.MODE_ECB, use_aesni=self.use_aesni) + + pt = b"".join([ tobytes('{0:016x}'.format(x)) for x in range(20) ]) + ct = cipher.encrypt(pt) + self.assertEqual(SHA256.new(ct).hexdigest(), expected) + + +class TestIncompleteBlocks(unittest.TestCase): + + def __init__(self, use_aesni): + unittest.TestCase.__init__(self) + self.use_aesni = use_aesni + + def runTest(self): + # Encrypt data whose length is not a multiple of 16 bytes + + cipher = AES.new(b'4'*16, AES.MODE_ECB, use_aesni=self.use_aesni) + + for msg_len in range(1, 16): + self.assertRaises(ValueError, cipher.encrypt, b'1' * msg_len) + self.assertRaises(ValueError, cipher.encrypt, b'1' * (msg_len+16)) + self.assertRaises(ValueError, cipher.decrypt, b'1' * msg_len) + self.assertRaises(ValueError, cipher.decrypt, b'1' * (msg_len+16)) + + self.assertEqual(cipher.encrypt(b''), b'') + self.assertEqual(cipher.decrypt(b''), b'') + + +class TestOutput(unittest.TestCase): + + def __init__(self, use_aesni): + unittest.TestCase.__init__(self) + self.use_aesni = use_aesni + + def runTest(self): + # Encrypt/decrypt data and test the output parameter + + cipher = AES.new(b'4'*16, AES.MODE_ECB, use_aesni=self.use_aesni) + + pt = b'5' * 16 + ct = cipher.encrypt(pt) + + output = bytearray(16) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(16)) + cipher.encrypt(pt, output=output) + 
self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(15) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from Cryptodome.Util import _cpu_features + from .common import make_block_tests + + tests = make_block_tests(AES, "AES", test_data, {'use_aesni': False}) + tests += [ TestMultipleBlocks(False) ] + tests += [ TestIncompleteBlocks(False) ] + if _cpu_features.have_aes_ni(): + # Run tests with AES-NI instructions if they are available. + tests += make_block_tests(AES, "AESNI", test_data, {'use_aesni': True}) + tests += [ TestMultipleBlocks(True) ] + tests += [ TestIncompleteBlocks(True) ] + tests += [ TestOutput(True) ] + else: + print("Skipping AESNI tests") + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ARC2.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ARC2.py new file mode 100644 index 0000000..0072506 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ARC2.py @@ -0,0 +1,167 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/ARC2.py: Self-test for the Alleged-RC2 cipher +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Cipher.ARC2""" + +import unittest + +from Cryptodome.Util.py3compat import b, bchr + +from Cryptodome.Cipher import ARC2 + +# This is a list of (plaintext, ciphertext, key[, description[, extra_params]]) tuples. 
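+# For example, the first RFC 2268 vector below maps onto the ARC2 API like
+# this, with the extra_params dict supplying keyword arguments (an
+# illustrative sketch, not part of the vendored test suite; it only assumes
+# the Cryptodome package itself):
+#
+#   from binascii import hexlify, unhexlify
+#   from Cryptodome.Cipher import ARC2
+#
+#   # RFC2268-1: all-zero 8-byte key, 63-bit effective key length
+#   cipher = ARC2.new(unhexlify('0000000000000000'), ARC2.MODE_ECB,
+#                     effective_keylen=63)
+#   ct = cipher.encrypt(unhexlify('0000000000000000'))
+#   assert hexlify(ct) == b'ebb773f993278eff'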
+test_data = [ + # Test vectors from RFC 2268 + + # 63-bit effective key length + ('0000000000000000', 'ebb773f993278eff', '0000000000000000', + 'RFC2268-1', dict(effective_keylen=63)), + + # 64-bit effective key length + ('ffffffffffffffff', '278b27e42e2f0d49', 'ffffffffffffffff', + 'RFC2268-2', dict(effective_keylen=64)), + ('1000000000000001', '30649edf9be7d2c2', '3000000000000000', + 'RFC2268-3', dict(effective_keylen=64)), + # Disabled: the RFC2268-4 vector uses a 1-byte key, which ARC2.new rejects + # (keys must be at least 5 bytes long; see the KeyLength test below). + #('0000000000000000', '61a8a244adacccf0', '88', + # 'RFC2268-4', dict(effective_keylen=64)), + ('0000000000000000', '6ccf4308974c267f', '88bca90e90875a', + 'RFC2268-5', dict(effective_keylen=64)), + ('0000000000000000', '1a807d272bbe5db1', '88bca90e90875a7f0f79c384627bafb2', + 'RFC2268-6', dict(effective_keylen=64)), + + # 128-bit effective key length + ('0000000000000000', '2269552ab0f85ca6', '88bca90e90875a7f0f79c384627bafb2', + "RFC2268-7", dict(effective_keylen=128)), + ('0000000000000000', '5b78d3a43dfff1f1', + '88bca90e90875a7f0f79c384627bafb216f80a6f85920584c42fceb0be255daf1e', + "RFC2268-8", dict(effective_keylen=129)), + + # Test vectors from PyCrypto 2.0.1's testdata.py + # 1024-bit effective key length + ('0000000000000000', '624fb3e887419e48', '5068696c6970476c617373', + 'PCTv201-0'), + ('ffffffffffffffff', '79cadef44c4a5a85', '5068696c6970476c617373', + 'PCTv201-1'), + ('0001020304050607', '90411525b34e4c2c', '5068696c6970476c617373', + 'PCTv201-2'), + ('0011223344556677', '078656aaba61cbfb', '5068696c6970476c617373', + 'PCTv201-3'), + ('0000000000000000', 'd7bcc5dbb4d6e56a', 'ffffffffffffffff', + 'PCTv201-4'), + ('ffffffffffffffff', '7259018ec557b357', 'ffffffffffffffff', + 'PCTv201-5'), + ('0001020304050607', '93d20a497f2ccb62', 'ffffffffffffffff', + 'PCTv201-6'), + ('0011223344556677', 'cb15a7f819c0014d', 'ffffffffffffffff', + 'PCTv201-7'), + ('0000000000000000', '63ac98cdf3843a7a', 'ffffffffffffffff5065746572477265656e6177617953e5ffe553', + 'PCTv201-8'), + ('ffffffffffffffff', '3fb49e2fa12371dd', 'ffffffffffffffff5065746572477265656e6177617953e5ffe553', + 'PCTv201-9'), + ('0001020304050607', '46414781ab387d5f', 'ffffffffffffffff5065746572477265656e6177617953e5ffe553', + 'PCTv201-10'), + ('0011223344556677', 'be09dc81feaca271', 'ffffffffffffffff5065746572477265656e6177617953e5ffe553', + 'PCTv201-11'), + ('0000000000000000', 'e64221e608be30ab', '53e5ffe553', + 'PCTv201-12'), + ('ffffffffffffffff', '862bc60fdcd4d9a9', '53e5ffe553', + 'PCTv201-13'), + ('0001020304050607', '6a34da50fa5e47de', '53e5ffe553', + 'PCTv201-14'), + ('0011223344556677', '584644c34503122c', '53e5ffe553', + 'PCTv201-15'), +] + +class BufferOverflowTest(unittest.TestCase): + # Test a buffer overflow found in older versions of PyCrypto + + def runTest(self): + """ARC2 with keylength > 128""" + key = b("x") * 16384 + self.assertRaises(ValueError, ARC2.new, key, ARC2.MODE_ECB) + +class KeyLength(unittest.TestCase): + + def runTest(self): + ARC2.new(b'\x00' * 16, ARC2.MODE_ECB, effective_keylen=40) + self.assertRaises(ValueError, ARC2.new, bchr(0) * 4, ARC2.MODE_ECB) + self.assertRaises(ValueError, ARC2.new, bchr(0) * 129, ARC2.MODE_ECB) + + self.assertRaises(ValueError, ARC2.new, bchr(0) * 16, ARC2.MODE_ECB, + effective_keylen=39) + self.assertRaises(ValueError, ARC2.new, bchr(0) * 16, ARC2.MODE_ECB, + effective_keylen=1025) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/decrypt data and test the output parameter + + cipher = ARC2.new(b'4'*16, ARC2.MODE_ECB) + + pt = b'5' * 16 + ct = cipher.encrypt(pt) + + output = bytearray(16) + res =
cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(16)) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(7) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from Cryptodome.Cipher import ARC2 + from .common import make_block_tests + + tests = make_block_tests(ARC2, "ARC2", test_data) + tests.append(BufferOverflowTest()) + tests.append(KeyLength()) + tests += [TestOutput()] + + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ARC4.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ARC4.py new file mode 100644 index 0000000..a160c98 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ARC4.py @@ -0,0 +1,471 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/ARC4.py: Self-test for the Alleged-RC4 cipher +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Cipher.ARC4""" + +import unittest + +from Cryptodome.Util.py3compat import b +from Cryptodome.SelfTest.st_common import list_test_cases +from binascii import unhexlify + +from Cryptodome.Cipher import ARC4 + +# This is a list of (plaintext, ciphertext, key[, description]) tuples. +test_data = [ + # Test vectors from Eric Rescorla's message with the subject + # "RC4 compatibility testing", sent to the cypherpunks mailing list on + # September 13, 1994.
+ # http://cypherpunks.venona.com/date/1994/09/msg00420.html + + ('0123456789abcdef', '75b7878099e0c596', '0123456789abcdef', + 'Test vector 0'), + + ('0000000000000000', '7494c2e7104b0879', '0123456789abcdef', + 'Test vector 1'), + + ('0000000000000000', 'de188941a3375d3a', '0000000000000000', + 'Test vector 2'), + + ('00000000000000000000', 'd6a141a7ec3c38dfbd61', 'ef012345', + 'Test vector 3'), + + ('01' * 512, + '7595c3e6114a09780c4ad452338e1ffd9a1be9498f813d76533449b6778dcad8' + + 'c78a8d2ba9ac66085d0e53d59c26c2d1c490c1ebbe0ce66d1b6b1b13b6b919b8' + + '47c25a91447a95e75e4ef16779cde8bf0a95850e32af9689444fd377108f98fd' + + 'cbd4e726567500990bcc7e0ca3c4aaa304a387d20f3b8fbbcd42a1bd311d7a43' + + '03dda5ab078896ae80c18b0af66dff319616eb784e495ad2ce90d7f772a81747' + + 'b65f62093b1e0db9e5ba532fafec47508323e671327df9444432cb7367cec82f' + + '5d44c0d00b67d650a075cd4b70dedd77eb9b10231b6b5b741347396d62897421' + + 'd43df9b42e446e358e9c11a9b2184ecbef0cd8e7a877ef968f1390ec9b3d35a5' + + '585cb009290e2fcde7b5ec66d9084be44055a619d9dd7fc3166f9487f7cb2729' + + '12426445998514c15d53a18c864ce3a2b7555793988126520eacf2e3066e230c' + + '91bee4dd5304f5fd0405b35bd99c73135d3d9bc335ee049ef69b3867bf2d7bd1' + + 'eaa595d8bfc0066ff8d31509eb0c6caa006c807a623ef84c3d33c195d23ee320' + + 'c40de0558157c822d4b8c569d849aed59d4e0fd7f379586b4b7ff684ed6a189f' + + '7486d49b9c4bad9ba24b96abf924372c8a8fffb10d55354900a77a3db5f205e1' + + 'b99fcd8660863a159ad4abe40fa48934163ddde542a6585540fd683cbfd8c00f' + + '12129a284deacc4cdefe58be7137541c047126c8d49e2755ab181ab7e940b0c0', + '0123456789abcdef', + "Test vector 4"), + # shortest key - generated with arc4 package + ('7468697320697320616e206578616d706c65', + '7260677d38495a09585d69321e17eaf3cdd0', + '01'), +] + + +class RFC6229_Tests(unittest.TestCase): + # Test vectors from RFC 6229. Each test vector is a tuple with two items: + # the ARC4 key and a dictionary. The dictionary has keystream offsets as keys + # and the 16-byte keystream starting at the relevant offset as value. 
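+    #
+    # Each pair can be checked by slicing one long keystream at the given
+    # offset: RC4 is a stream cipher, so encrypting zero bytes yields the raw
+    # keystream (an illustrative sketch, not part of the vendored test suite;
+    # it only assumes the Cryptodome package itself):
+    #
+    #   from binascii import unhexlify
+    #   from Cryptodome.Cipher import ARC4
+    #
+    #   ks = ARC4.new(unhexlify('0102030405')).encrypt(b'\x00' * 4112)
+    #   assert ks[0:16] == unhexlify('b2396305f03dc027ccc3524a0a1118a8')
+    #   assert ks[4096:4112] == unhexlify('ff25b58995996707e51fbdf08b34d875')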
+ rfc6229_data = [ + # Page 3 + ( + '0102030405', + { + 0: 'b2 39 63 05 f0 3d c0 27 cc c3 52 4a 0a 11 18 a8', + 16: '69 82 94 4f 18 fc 82 d5 89 c4 03 a4 7a 0d 09 19', + 240: '28 cb 11 32 c9 6c e2 86 42 1d ca ad b8 b6 9e ae', + 256: '1c fc f6 2b 03 ed db 64 1d 77 df cf 7f 8d 8c 93', + 496: '42 b7 d0 cd d9 18 a8 a3 3d d5 17 81 c8 1f 40 41', + 512: '64 59 84 44 32 a7 da 92 3c fb 3e b4 98 06 61 f6', + 752: 'ec 10 32 7b de 2b ee fd 18 f9 27 76 80 45 7e 22', + 768: 'eb 62 63 8d 4f 0b a1 fe 9f ca 20 e0 5b f8 ff 2b', + 1008: '45 12 90 48 e6 a0 ed 0b 56 b4 90 33 8f 07 8d a5', + 1024: '30 ab bc c7 c2 0b 01 60 9f 23 ee 2d 5f 6b b7 df', + 1520: '32 94 f7 44 d8 f9 79 05 07 e7 0f 62 e5 bb ce ea', + 1536: 'd8 72 9d b4 18 82 25 9b ee 4f 82 53 25 f5 a1 30', + 2032: '1e b1 4a 0c 13 b3 bf 47 fa 2a 0b a9 3a d4 5b 8b', + 2048: 'cc 58 2f 8b a9 f2 65 e2 b1 be 91 12 e9 75 d2 d7', + 3056: 'f2 e3 0f 9b d1 02 ec bf 75 aa ad e9 bc 35 c4 3c', + 3072: 'ec 0e 11 c4 79 dc 32 9d c8 da 79 68 fe 96 56 81', + 4080: '06 83 26 a2 11 84 16 d2 1f 9d 04 b2 cd 1c a0 50', + 4096: 'ff 25 b5 89 95 99 67 07 e5 1f bd f0 8b 34 d8 75' + } + ), + # Page 4 + ( + '01020304050607', + { + 0: '29 3f 02 d4 7f 37 c9 b6 33 f2 af 52 85 fe b4 6b', + 16: 'e6 20 f1 39 0d 19 bd 84 e2 e0 fd 75 20 31 af c1', + 240: '91 4f 02 53 1c 92 18 81 0d f6 0f 67 e3 38 15 4c', + 256: 'd0 fd b5 83 07 3c e8 5a b8 39 17 74 0e c0 11 d5', + 496: '75 f8 14 11 e8 71 cf fa 70 b9 0c 74 c5 92 e4 54', + 512: '0b b8 72 02 93 8d ad 60 9e 87 a5 a1 b0 79 e5 e4', + 752: 'c2 91 12 46 b6 12 e7 e7 b9 03 df ed a1 da d8 66', + 768: '32 82 8f 91 50 2b 62 91 36 8d e8 08 1d e3 6f c2', + 1008: 'f3 b9 a7 e3 b2 97 bf 9a d8 04 51 2f 90 63 ef f1', + 1024: '8e cb 67 a9 ba 1f 55 a5 a0 67 e2 b0 26 a3 67 6f', + 1520: 'd2 aa 90 2b d4 2d 0d 7c fd 34 0c d4 58 10 52 9f', + 1536: '78 b2 72 c9 6e 42 ea b4 c6 0b d9 14 e3 9d 06 e3', + 2032: 'f4 33 2f d3 1a 07 93 96 ee 3c ee 3f 2a 4f f0 49', + 2048: '05 45 97 81 d4 1f da 7f 30 c1 be 7e 12 46 c6 23', + 3056: 'ad fd 38 68 b8 e5 14 85 d5 e6 10 01 7e 3d d6 09', + 3072: 'ad 26 58 1c 0c 5b e4 5f 4c ea 01 db 2f 38 05 d5', + 4080: 'f3 17 2c ef fc 3b 3d 99 7c 85 cc d5 af 1a 95 0c', + 4096: 'e7 4b 0b 97 31 22 7f d3 7c 0e c0 8a 47 dd d8 b8' + } + ), + ( + '0102030405060708', + { + 0: '97 ab 8a 1b f0 af b9 61 32 f2 f6 72 58 da 15 a8', + 16: '82 63 ef db 45 c4 a1 86 84 ef 87 e6 b1 9e 5b 09', + 240: '96 36 eb c9 84 19 26 f4 f7 d1 f3 62 bd df 6e 18', + 256: 'd0 a9 90 ff 2c 05 fe f5 b9 03 73 c9 ff 4b 87 0a', + 496: '73 23 9f 1d b7 f4 1d 80 b6 43 c0 c5 25 18 ec 63', + 512: '16 3b 31 99 23 a6 bd b4 52 7c 62 61 26 70 3c 0f', + 752: '49 d6 c8 af 0f 97 14 4a 87 df 21 d9 14 72 f9 66', + 768: '44 17 3a 10 3b 66 16 c5 d5 ad 1c ee 40 c8 63 d0', + 1008: '27 3c 9c 4b 27 f3 22 e4 e7 16 ef 53 a4 7d e7 a4', + 1024: 'c6 d0 e7 b2 26 25 9f a9 02 34 90 b2 61 67 ad 1d', + 1520: '1f e8 98 67 13 f0 7c 3d 9a e1 c1 63 ff 8c f9 d3', + 1536: '83 69 e1 a9 65 61 0b e8 87 fb d0 c7 91 62 aa fb', + 2032: '0a 01 27 ab b4 44 84 b9 fb ef 5a bc ae 1b 57 9f', + 2048: 'c2 cd ad c6 40 2e 8e e8 66 e1 f3 7b db 47 e4 2c', + 3056: '26 b5 1e a3 7d f8 e1 d6 f7 6f c3 b6 6a 74 29 b3', + 3072: 'bc 76 83 20 5d 4f 44 3d c1 f2 9d da 33 15 c8 7b', + 4080: 'd5 fa 5a 34 69 d2 9a aa f8 3d 23 58 9d b8 c8 5b', + 4096: '3f b4 6e 2c 8f 0f 06 8e dc e8 cd cd 7d fc 58 62' + } + ), + # Page 5 + ( + '0102030405060708090a', + { + 0: 'ed e3 b0 46 43 e5 86 cc 90 7d c2 18 51 70 99 02', + 16: '03 51 6b a7 8f 41 3b eb 22 3a a5 d4 d2 df 67 11', + 240: '3c fd 6c b5 8e e0 fd de 64 01 76 ad 00 00 04 4d', + 256: '48 53 2b 21 fb 60 79 c9 11 4c 
0f fd 9c 04 a1 ad', + 496: '3e 8c ea 98 01 71 09 97 90 84 b1 ef 92 f9 9d 86', + 512: 'e2 0f b4 9b db 33 7e e4 8b 8d 8d c0 f4 af ef fe', + 752: '5c 25 21 ea cd 79 66 f1 5e 05 65 44 be a0 d3 15', + 768: 'e0 67 a7 03 19 31 a2 46 a6 c3 87 5d 2f 67 8a cb', + 1008: 'a6 4f 70 af 88 ae 56 b6 f8 75 81 c0 e2 3e 6b 08', + 1024: 'f4 49 03 1d e3 12 81 4e c6 f3 19 29 1f 4a 05 16', + 1520: 'bd ae 85 92 4b 3c b1 d0 a2 e3 3a 30 c6 d7 95 99', + 1536: '8a 0f ed db ac 86 5a 09 bc d1 27 fb 56 2e d6 0a', + 2032: 'b5 5a 0a 5b 51 a1 2a 8b e3 48 99 c3 e0 47 51 1a', + 2048: 'd9 a0 9c ea 3c e7 5f e3 96 98 07 03 17 a7 13 39', + 3056: '55 22 25 ed 11 77 f4 45 84 ac 8c fa 6c 4e b5 fc', + 3072: '7e 82 cb ab fc 95 38 1b 08 09 98 44 21 29 c2 f8', + 4080: '1f 13 5e d1 4c e6 0a 91 36 9d 23 22 be f2 5e 3c', + 4096: '08 b6 be 45 12 4a 43 e2 eb 77 95 3f 84 dc 85 53' + } + ), + ( + '0102030405060708090a0b0c0d0e0f10', + { + 0: '9a c7 cc 9a 60 9d 1e f7 b2 93 28 99 cd e4 1b 97', + 16: '52 48 c4 95 90 14 12 6a 6e 8a 84 f1 1d 1a 9e 1c', + 240: '06 59 02 e4 b6 20 f6 cc 36 c8 58 9f 66 43 2f 2b', + 256: 'd3 9d 56 6b c6 bc e3 01 07 68 15 15 49 f3 87 3f', + 496: 'b6 d1 e6 c4 a5 e4 77 1c ad 79 53 8d f2 95 fb 11', + 512: 'c6 8c 1d 5c 55 9a 97 41 23 df 1d bc 52 a4 3b 89', + 752: 'c5 ec f8 8d e8 97 fd 57 fe d3 01 70 1b 82 a2 59', + 768: 'ec cb e1 3d e1 fc c9 1c 11 a0 b2 6c 0b c8 fa 4d', + 1008: 'e7 a7 25 74 f8 78 2a e2 6a ab cf 9e bc d6 60 65', + 1024: 'bd f0 32 4e 60 83 dc c6 d3 ce dd 3c a8 c5 3c 16', + 1520: 'b4 01 10 c4 19 0b 56 22 a9 61 16 b0 01 7e d2 97', + 1536: 'ff a0 b5 14 64 7e c0 4f 63 06 b8 92 ae 66 11 81', + 2032: 'd0 3d 1b c0 3c d3 3d 70 df f9 fa 5d 71 96 3e bd', + 2048: '8a 44 12 64 11 ea a7 8b d5 1e 8d 87 a8 87 9b f5', + 3056: 'fa be b7 60 28 ad e2 d0 e4 87 22 e4 6c 46 15 a3', + 3072: 'c0 5d 88 ab d5 03 57 f9 35 a6 3c 59 ee 53 76 23', + 4080: 'ff 38 26 5c 16 42 c1 ab e8 d3 c2 fe 5e 57 2b f8', + 4096: 'a3 6a 4c 30 1a e8 ac 13 61 0c cb c1 22 56 ca cc' + } + ), + # Page 6 + ( + '0102030405060708090a0b0c0d0e0f101112131415161718', + { + 0: '05 95 e5 7f e5 f0 bb 3c 70 6e da c8 a4 b2 db 11', + 16: 'df de 31 34 4a 1a f7 69 c7 4f 07 0a ee 9e 23 26', + 240: 'b0 6b 9b 1e 19 5d 13 d8 f4 a7 99 5c 45 53 ac 05', + 256: '6b d2 37 8e c3 41 c9 a4 2f 37 ba 79 f8 8a 32 ff', + 496: 'e7 0b ce 1d f7 64 5a db 5d 2c 41 30 21 5c 35 22', + 512: '9a 57 30 c7 fc b4 c9 af 51 ff da 89 c7 f1 ad 22', + 752: '04 85 05 5f d4 f6 f0 d9 63 ef 5a b9 a5 47 69 82', + 768: '59 1f c6 6b cd a1 0e 45 2b 03 d4 55 1f 6b 62 ac', + 1008: '27 53 cc 83 98 8a fa 3e 16 88 a1 d3 b4 2c 9a 02', + 1024: '93 61 0d 52 3d 1d 3f 00 62 b3 c2 a3 bb c7 c7 f0', + 1520: '96 c2 48 61 0a ad ed fe af 89 78 c0 3d e8 20 5a', + 1536: '0e 31 7b 3d 1c 73 b9 e9 a4 68 8f 29 6d 13 3a 19', + 2032: 'bd f0 e6 c3 cc a5 b5 b9 d5 33 b6 9c 56 ad a1 20', + 2048: '88 a2 18 b6 e2 ec e1 e6 24 6d 44 c7 59 d1 9b 10', + 3056: '68 66 39 7e 95 c1 40 53 4f 94 26 34 21 00 6e 40', + 3072: '32 cb 0a 1e 95 42 c6 b3 b8 b3 98 ab c3 b0 f1 d5', + 4080: '29 a0 b8 ae d5 4a 13 23 24 c6 2e 42 3f 54 b4 c8', + 4096: '3c b0 f3 b5 02 0a 98 b8 2a f9 fe 15 44 84 a1 68' + } + ), + ( + '0102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20', + { + 0: 'ea a6 bd 25 88 0b f9 3d 3f 5d 1e 4c a2 61 1d 91', + 16: 'cf a4 5c 9f 7e 71 4b 54 bd fa 80 02 7c b1 43 80', + 240: '11 4a e3 44 de d7 1b 35 f2 e6 0f eb ad 72 7f d8', + 256: '02 e1 e7 05 6b 0f 62 39 00 49 64 22 94 3e 97 b6', + 496: '91 cb 93 c7 87 96 4e 10 d9 52 7d 99 9c 6f 93 6b', + 512: '49 b1 8b 42 f8 e8 36 7c be b5 ef 10 4b a1 c7 cd', + 752: '87 08 4b 3b a7 00 ba de 95 56 10 67 
27 45 b3 74', + 768: 'e7 a7 b9 e9 ec 54 0d 5f f4 3b db 12 79 2d 1b 35', + 1008: 'c7 99 b5 96 73 8f 6b 01 8c 76 c7 4b 17 59 bd 90', + 1024: '7f ec 5b fd 9f 9b 89 ce 65 48 30 90 92 d7 e9 58', + 1520: '40 f2 50 b2 6d 1f 09 6a 4a fd 4c 34 0a 58 88 15', + 1536: '3e 34 13 5c 79 db 01 02 00 76 76 51 cf 26 30 73', + 2032: 'f6 56 ab cc f8 8d d8 27 02 7b 2c e9 17 d4 64 ec', + 2048: '18 b6 25 03 bf bc 07 7f ba bb 98 f2 0d 98 ab 34', + 3056: '8a ed 95 ee 5b 0d cb fb ef 4e b2 1d 3a 3f 52 f9', + 3072: '62 5a 1a b0 0e e3 9a 53 27 34 6b dd b0 1a 9c 18', + 4080: 'a1 3a 7c 79 c7 e1 19 b5 ab 02 96 ab 28 c3 00 b9', + 4096: 'f3 e4 c0 a2 e0 2d 1d 01 f7 f0 a7 46 18 af 2b 48' + } + ), + # Page 7 + ( + '833222772a', + { + 0: '80 ad 97 bd c9 73 df 8a 2e 87 9e 92 a4 97 ef da', + 16: '20 f0 60 c2 f2 e5 12 65 01 d3 d4 fe a1 0d 5f c0', + 240: 'fa a1 48 e9 90 46 18 1f ec 6b 20 85 f3 b2 0e d9', + 256: 'f0 da f5 ba b3 d5 96 83 98 57 84 6f 73 fb fe 5a', + 496: '1c 7e 2f c4 63 92 32 fe 29 75 84 b2 96 99 6b c8', + 512: '3d b9 b2 49 40 6c c8 ed ff ac 55 cc d3 22 ba 12', + 752: 'e4 f9 f7 e0 06 61 54 bb d1 25 b7 45 56 9b c8 97', + 768: '75 d5 ef 26 2b 44 c4 1a 9c f6 3a e1 45 68 e1 b9', + 1008: '6d a4 53 db f8 1e 82 33 4a 3d 88 66 cb 50 a1 e3', + 1024: '78 28 d0 74 11 9c ab 5c 22 b2 94 d7 a9 bf a0 bb', + 1520: 'ad b8 9c ea 9a 15 fb e6 17 29 5b d0 4b 8c a0 5c', + 1536: '62 51 d8 7f d4 aa ae 9a 7e 4a d5 c2 17 d3 f3 00', + 2032: 'e7 11 9b d6 dd 9b 22 af e8 f8 95 85 43 28 81 e2', + 2048: '78 5b 60 fd 7e c4 e9 fc b6 54 5f 35 0d 66 0f ab', + 3056: 'af ec c0 37 fd b7 b0 83 8e b3 d7 0b cd 26 83 82', + 3072: 'db c1 a7 b4 9d 57 35 8c c9 fa 6d 61 d7 3b 7c f0', + 4080: '63 49 d1 26 a3 7a fc ba 89 79 4f 98 04 91 4f dc', + 4096: 'bf 42 c3 01 8c 2f 7c 66 bf de 52 49 75 76 81 15' + } + ), + ( + '1910833222772a', + { + 0: 'bc 92 22 db d3 27 4d 8f c6 6d 14 cc bd a6 69 0b', + 16: '7a e6 27 41 0c 9a 2b e6 93 df 5b b7 48 5a 63 e3', + 240: '3f 09 31 aa 03 de fb 30 0f 06 01 03 82 6f 2a 64', + 256: 'be aa 9e c8 d5 9b b6 81 29 f3 02 7c 96 36 11 81', + 496: '74 e0 4d b4 6d 28 64 8d 7d ee 8a 00 64 b0 6c fe', + 512: '9b 5e 81 c6 2f e0 23 c5 5b e4 2f 87 bb f9 32 b8', + 752: 'ce 17 8f c1 82 6e fe cb c1 82 f5 79 99 a4 61 40', + 768: '8b df 55 cd 55 06 1c 06 db a6 be 11 de 4a 57 8a', + 1008: '62 6f 5f 4d ce 65 25 01 f3 08 7d 39 c9 2c c3 49', + 1024: '42 da ac 6a 8f 9a b9 a7 fd 13 7c 60 37 82 56 82', + 1520: 'cc 03 fd b7 91 92 a2 07 31 2f 53 f5 d4 dc 33 d9', + 1536: 'f7 0f 14 12 2a 1c 98 a3 15 5d 28 b8 a0 a8 a4 1d', + 2032: '2a 3a 30 7a b2 70 8a 9c 00 fe 0b 42 f9 c2 d6 a1', + 2048: '86 26 17 62 7d 22 61 ea b0 b1 24 65 97 ca 0a e9', + 3056: '55 f8 77 ce 4f 2e 1d db bf 8e 13 e2 cd e0 fd c8', + 3072: '1b 15 56 cb 93 5f 17 33 37 70 5f bb 5d 50 1f c1', + 4080: 'ec d0 e9 66 02 be 7f 8d 50 92 81 6c cc f2 c2 e9', + 4096: '02 78 81 fa b4 99 3a 1c 26 20 24 a9 4f ff 3f 61' + } + ), + # Page 8 + ( + '641910833222772a', + { + 0: 'bb f6 09 de 94 13 17 2d 07 66 0c b6 80 71 69 26', + 16: '46 10 1a 6d ab 43 11 5d 6c 52 2b 4f e9 36 04 a9', + 240: 'cb e1 ff f2 1c 96 f3 ee f6 1e 8f e0 54 2c bd f0', + 256: '34 79 38 bf fa 40 09 c5 12 cf b4 03 4b 0d d1 a7', + 496: '78 67 a7 86 d0 0a 71 47 90 4d 76 dd f1 e5 20 e3', + 512: '8d 3e 9e 1c ae fc cc b3 fb f8 d1 8f 64 12 0b 32', + 752: '94 23 37 f8 fd 76 f0 fa e8 c5 2d 79 54 81 06 72', + 768: 'b8 54 8c 10 f5 16 67 f6 e6 0e 18 2f a1 9b 30 f7', + 1008: '02 11 c7 c6 19 0c 9e fd 12 37 c3 4c 8f 2e 06 c4', + 1024: 'bd a6 4f 65 27 6d 2a ac b8 f9 02 12 20 3a 80 8e', + 1520: 'bd 38 20 f7 32 ff b5 3e c1 93 e7 9d 33 e2 7c 73', + 1536: 'd0 16 86 16 86 
19 07 d4 82 e3 6c da c8 cf 57 49', + 2032: '97 b0 f0 f2 24 b2 d2 31 71 14 80 8f b0 3a f7 a0', + 2048: 'e5 96 16 e4 69 78 79 39 a0 63 ce ea 9a f9 56 d1', + 3056: 'c4 7e 0d c1 66 09 19 c1 11 01 20 8f 9e 69 aa 1f', + 3072: '5a e4 f1 28 96 b8 37 9a 2a ad 89 b5 b5 53 d6 b0', + 4080: '6b 6b 09 8d 0c 29 3b c2 99 3d 80 bf 05 18 b6 d9', + 4096: '81 70 cc 3c cd 92 a6 98 62 1b 93 9d d3 8f e7 b9' + } + ), + ( + '8b37641910833222772a', + { + 0: 'ab 65 c2 6e dd b2 87 60 0d b2 fd a1 0d 1e 60 5c', + 16: 'bb 75 90 10 c2 96 58 f2 c7 2d 93 a2 d1 6d 29 30', + 240: 'b9 01 e8 03 6e d1 c3 83 cd 3c 4c 4d d0 a6 ab 05', + 256: '3d 25 ce 49 22 92 4c 55 f0 64 94 33 53 d7 8a 6c', + 496: '12 c1 aa 44 bb f8 7e 75 e6 11 f6 9b 2c 38 f4 9b', + 512: '28 f2 b3 43 4b 65 c0 98 77 47 00 44 c6 ea 17 0d', + 752: 'bd 9e f8 22 de 52 88 19 61 34 cf 8a f7 83 93 04', + 768: '67 55 9c 23 f0 52 15 84 70 a2 96 f7 25 73 5a 32', + 1008: '8b ab 26 fb c2 c1 2b 0f 13 e2 ab 18 5e ab f2 41', + 1024: '31 18 5a 6d 69 6f 0c fa 9b 42 80 8b 38 e1 32 a2', + 1520: '56 4d 3d ae 18 3c 52 34 c8 af 1e 51 06 1c 44 b5', + 1536: '3c 07 78 a7 b5 f7 2d 3c 23 a3 13 5c 7d 67 b9 f4', + 2032: 'f3 43 69 89 0f cf 16 fb 51 7d ca ae 44 63 b2 dd', + 2048: '02 f3 1c 81 e8 20 07 31 b8 99 b0 28 e7 91 bf a7', + 3056: '72 da 64 62 83 22 8c 14 30 08 53 70 17 95 61 6f', + 3072: '4e 0a 8c 6f 79 34 a7 88 e2 26 5e 81 d6 d0 c8 f4', + 4080: '43 8d d5 ea fe a0 11 1b 6f 36 b4 b9 38 da 2a 68', + 4096: '5f 6b fc 73 81 58 74 d9 71 00 f0 86 97 93 57 d8' + } + ), + # Page 9 + ( + 'ebb46227c6cc8b37641910833222772a', + { + 0: '72 0c 94 b6 3e df 44 e1 31 d9 50 ca 21 1a 5a 30', + 16: 'c3 66 fd ea cf 9c a8 04 36 be 7c 35 84 24 d2 0b', + 240: 'b3 39 4a 40 aa bf 75 cb a4 22 82 ef 25 a0 05 9f', + 256: '48 47 d8 1d a4 94 2d bc 24 9d ef c4 8c 92 2b 9f', + 496: '08 12 8c 46 9f 27 53 42 ad da 20 2b 2b 58 da 95', + 512: '97 0d ac ef 40 ad 98 72 3b ac 5d 69 55 b8 17 61', + 752: '3c b8 99 93 b0 7b 0c ed 93 de 13 d2 a1 10 13 ac', + 768: 'ef 2d 67 6f 15 45 c2 c1 3d c6 80 a0 2f 4a db fe', + 1008: 'b6 05 95 51 4f 24 bc 9f e5 22 a6 ca d7 39 36 44', + 1024: 'b5 15 a8 c5 01 17 54 f5 90 03 05 8b db 81 51 4e', + 1520: '3c 70 04 7e 8c bc 03 8e 3b 98 20 db 60 1d a4 95', + 1536: '11 75 da 6e e7 56 de 46 a5 3e 2b 07 56 60 b7 70', + 2032: '00 a5 42 bb a0 21 11 cc 2c 65 b3 8e bd ba 58 7e', + 2048: '58 65 fd bb 5b 48 06 41 04 e8 30 b3 80 f2 ae de', + 3056: '34 b2 1a d2 ad 44 e9 99 db 2d 7f 08 63 f0 d9 b6', + 3072: '84 a9 21 8f c3 6e 8a 5f 2c cf be ae 53 a2 7d 25', + 4080: 'a2 22 1a 11 b8 33 cc b4 98 a5 95 40 f0 54 5f 4a', + 4096: '5b be b4 78 7d 59 e5 37 3f db ea 6c 6f 75 c2 9b' + } + ), + ( + 'c109163908ebe51debb46227c6cc8b37641910833222772a', + { + 0: '54 b6 4e 6b 5a 20 b5 e2 ec 84 59 3d c7 98 9d a7', + 16: 'c1 35 ee e2 37 a8 54 65 ff 97 dc 03 92 4f 45 ce', + 240: 'cf cc 92 2f b4 a1 4a b4 5d 61 75 aa bb f2 d2 01', + 256: '83 7b 87 e2 a4 46 ad 0e f7 98 ac d0 2b 94 12 4f', + 496: '17 a6 db d6 64 92 6a 06 36 b3 f4 c3 7a 4f 46 94', + 512: '4a 5f 9f 26 ae ee d4 d4 a2 5f 63 2d 30 52 33 d9', + 752: '80 a3 d0 1e f0 0c 8e 9a 42 09 c1 7f 4e eb 35 8c', + 768: 'd1 5e 7d 5f fa aa bc 02 07 bf 20 0a 11 77 93 a2', + 1008: '34 96 82 bf 58 8e aa 52 d0 aa 15 60 34 6a ea fa', + 1024: 'f5 85 4c db 76 c8 89 e3 ad 63 35 4e 5f 72 75 e3', + 1520: '53 2c 7c ec cb 39 df 32 36 31 84 05 a4 b1 27 9c', + 1536: 'ba ef e6 d9 ce b6 51 84 22 60 e0 d1 e0 5e 3b 90', + 2032: 'e8 2d 8c 6d b5 4e 3c 63 3f 58 1c 95 2b a0 42 07', + 2048: '4b 16 e5 0a bd 38 1b d7 09 00 a9 cd 9a 62 cb 23', + 3056: '36 82 ee 33 bd 14 8b d9 f5 86 56 cd 8f 30 d9 fb', + 3072: '1e 
5a 0b 84 75 04 5d 9b 20 b2 62 86 24 ed fd 9e', + 4080: '63 ed d6 84 fb 82 62 82 fe 52 8f 9c 0e 92 37 bc', + 4096: 'e4 dd 2e 98 d6 96 0f ae 0b 43 54 54 56 74 33 91' + } + ), + # Page 10 + ( + '1ada31d5cf688221c109163908ebe51debb46227c6cc8b37641910833222772a', + { + 0: 'dd 5b cb 00 18 e9 22 d4 94 75 9d 7c 39 5d 02 d3', + 16: 'c8 44 6f 8f 77 ab f7 37 68 53 53 eb 89 a1 c9 eb', + 240: 'af 3e 30 f9 c0 95 04 59 38 15 15 75 c3 fb 90 98', + 256: 'f8 cb 62 74 db 99 b8 0b 1d 20 12 a9 8e d4 8f 0e', + 496: '25 c3 00 5a 1c b8 5d e0 76 25 98 39 ab 71 98 ab', + 512: '9d cb c1 83 e8 cb 99 4b 72 7b 75 be 31 80 76 9c', + 752: 'a1 d3 07 8d fa 91 69 50 3e d9 d4 49 1d ee 4e b2', + 768: '85 14 a5 49 58 58 09 6f 59 6e 4b cd 66 b1 06 65', + 1008: '5f 40 d5 9e c1 b0 3b 33 73 8e fa 60 b2 25 5d 31', + 1024: '34 77 c7 f7 64 a4 1b ac ef f9 0b f1 4f 92 b7 cc', + 1520: 'ac 4e 95 36 8d 99 b9 eb 78 b8 da 8f 81 ff a7 95', + 1536: '8c 3c 13 f8 c2 38 8b b7 3f 38 57 6e 65 b7 c4 46', + 2032: '13 c4 b9 c1 df b6 65 79 ed dd 8a 28 0b 9f 73 16', + 2048: 'dd d2 78 20 55 01 26 69 8e fa ad c6 4b 64 f6 6e', + 3056: 'f0 8f 2e 66 d2 8e d1 43 f3 a2 37 cf 9d e7 35 59', + 3072: '9e a3 6c 52 55 31 b8 80 ba 12 43 34 f5 7b 0b 70', + 4080: 'd5 a3 9e 3d fc c5 02 80 ba c4 a6 b5 aa 0d ca 7d', + 4096: '37 0b 1c 1f e6 55 91 6d 97 fd 0d 47 ca 1d 72 b8' + } + ) + ] + + def test_keystream(self): + for tv in self.rfc6229_data: + key = unhexlify(b((tv[0]))) + cipher = ARC4.new(key) + count = 0 + for offset in range(0, 4096+1, 16): + ct = cipher.encrypt(b('\x00')*16) + expected = tv[1].get(offset) + if expected: + expected = unhexlify(b(expected.replace(" ", ''))) + self.assertEqual(ct, expected) + count += 1 + self.assertEqual(count, len(tv[1])) + + +class Drop_Tests(unittest.TestCase): + key = b('\xAA')*16 + data = b('\x00')*5000 + + def setUp(self): + self.cipher = ARC4.new(self.key) + + def test_drop256_encrypt(self): + cipher_drop = ARC4.new(self.key, 256) + ct_drop = cipher_drop.encrypt(self.data[:16]) + ct = self.cipher.encrypt(self.data)[256:256+16] + self.assertEqual(ct_drop, ct) + + def test_drop256_decrypt(self): + cipher_drop = ARC4.new(self.key, 256) + pt_drop = cipher_drop.decrypt(self.data[:16]) + pt = self.cipher.decrypt(self.data)[256:256+16] + self.assertEqual(pt_drop, pt) + + +class KeyLength(unittest.TestCase): + + def runTest(self): + self.assertRaises(ValueError, ARC4.new, b'') + self.assertRaises(ValueError, ARC4.new, b'\x00' * 257) + + +def get_tests(config={}): + from .common import make_stream_tests + tests = make_stream_tests(ARC4, "ARC4", test_data) + tests += list_test_cases(RFC6229_Tests) + tests += list_test_cases(Drop_Tests) + tests.append(KeyLength()) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_Blowfish.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_Blowfish.py new file mode 100644 index 0000000..ca5c603 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_Blowfish.py @@ -0,0 +1,160 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/test_Blowfish.py: Self-test for the Blowfish cipher +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. 
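The RFC 6229 vectors above give 16-byte keystream samples at selected offsets, and `test_keystream` reproduces them by encrypting runs of zero bytes (RC4 XORs the keystream into the data, so zeros expose the keystream directly). The `Drop_Tests` then cover RC4-drop, where an initial, statistically biased stretch of keystream is discarded. A minimal sketch of the equivalence those tests assert, using only the `Cryptodome.Cipher.ARC4` API exercised above (the key is an arbitrary placeholder):

```python
from Cryptodome.Cipher import ARC4

key = b'\xAA' * 16                     # placeholder key, mirroring Drop_Tests

# Plain RC4: generate and throw away the first 256 keystream bytes by hand.
plain = ARC4.new(key)
plain.encrypt(b'\x00' * 256)           # encrypting zeros yields raw keystream
manual = plain.encrypt(b'\x00' * 16)

# RC4-drop[256]: let the cipher skip those bytes itself.
drop = ARC4.new(key, 256)
assert drop.encrypt(b'\x00' * 16) == manual
```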
To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Cipher.Blowfish""" + +import unittest + +from Cryptodome.Util.py3compat import bchr + +from Cryptodome.Cipher import Blowfish + +# This is a list of (plaintext, ciphertext, key) tuples. +test_data = [ + # Test vectors from http://www.schneier.com/code/vectors.txt + ('0000000000000000', '4ef997456198dd78', '0000000000000000'), + ('ffffffffffffffff', '51866fd5b85ecb8a', 'ffffffffffffffff'), + ('1000000000000001', '7d856f9a613063f2', '3000000000000000'), + ('1111111111111111', '2466dd878b963c9d', '1111111111111111'), + ('1111111111111111', '61f9c3802281b096', '0123456789abcdef'), + ('0123456789abcdef', '7d0cc630afda1ec7', '1111111111111111'), + ('0000000000000000', '4ef997456198dd78', '0000000000000000'), + ('0123456789abcdef', '0aceab0fc6a0a28d', 'fedcba9876543210'), + ('01a1d6d039776742', '59c68245eb05282b', '7ca110454a1a6e57'), + ('5cd54ca83def57da', 'b1b8cc0b250f09a0', '0131d9619dc1376e'), + ('0248d43806f67172', '1730e5778bea1da4', '07a1133e4a0b2686'), + ('51454b582ddf440a', 'a25e7856cf2651eb', '3849674c2602319e'), + ('42fd443059577fa2', '353882b109ce8f1a', '04b915ba43feb5b6'), + ('059b5e0851cf143a', '48f4d0884c379918', '0113b970fd34f2ce'), + ('0756d8e0774761d2', '432193b78951fc98', '0170f175468fb5e6'), + ('762514b829bf486a', '13f04154d69d1ae5', '43297fad38e373fe'), + ('3bdd119049372802', '2eedda93ffd39c79', '07a7137045da2a16'), + ('26955f6835af609a', 'd887e0393c2da6e3', '04689104c2fd3b2f'), + ('164d5e404f275232', '5f99d04f5b163969', '37d06bb516cb7546'), + ('6b056e18759f5cca', '4a057a3b24d3977b', '1f08260d1ac2465e'), + ('004bd6ef09176062', '452031c1e4fada8e', '584023641aba6176'), + ('480d39006ee762f2', '7555ae39f59b87bd', '025816164629b007'), + ('437540c8698f3cfa', '53c55f9cb49fc019', '49793ebc79b3258f'), + ('072d43a077075292', '7a8e7bfa937e89a3', '4fb05e1515ab73a7'), + ('02fe55778117f12a', 'cf9c5d7a4986adb5', '49e95d6d4ca229bf'), + ('1d9d5c5018f728c2', 'd1abb290658bc778', '018310dc409b26d6'), + ('305532286d6f295a', '55cb3774d13ef201', '1c587f1c13924fef'), + ('0123456789abcdef', 'fa34ec4847b268b2', '0101010101010101'), + ('0123456789abcdef', 'a790795108ea3cae', '1f1f1f1f0e0e0e0e'), + ('0123456789abcdef', 'c39e072d9fac631d', 'e0fee0fef1fef1fe'), + ('ffffffffffffffff', '014933e0cdaff6e4', '0000000000000000'), + ('0000000000000000', 'f21e9a77b71c49bc', 'ffffffffffffffff'), + ('0000000000000000', '245946885754369a', '0123456789abcdef'), + ('ffffffffffffffff', '6b5c5a9c5d9e0a5a', 'fedcba9876543210'), + #('fedcba9876543210', 'f9ad597c49db005e', 'f0'), + #('fedcba9876543210', 'e91d21c1d961a6d6', 'f0e1'), + #('fedcba9876543210', 'e9c2b70a1bc65cf3', 'f0e1d2'), + ('fedcba9876543210', 'be1e639408640f05', 'f0e1d2c3'), + ('fedcba9876543210', 
'b39e44481bdb1e6e', 'f0e1d2c3b4'), + ('fedcba9876543210', '9457aa83b1928c0d', 'f0e1d2c3b4a5'), + ('fedcba9876543210', '8bb77032f960629d', 'f0e1d2c3b4a596'), + ('fedcba9876543210', 'e87a244e2cc85e82', 'f0e1d2c3b4a59687'), + ('fedcba9876543210', '15750e7a4f4ec577', 'f0e1d2c3b4a5968778'), + ('fedcba9876543210', '122ba70b3ab64ae0', 'f0e1d2c3b4a596877869'), + ('fedcba9876543210', '3a833c9affc537f6', 'f0e1d2c3b4a5968778695a'), + ('fedcba9876543210', '9409da87a90f6bf2', 'f0e1d2c3b4a5968778695a4b'), + ('fedcba9876543210', '884f80625060b8b4', 'f0e1d2c3b4a5968778695a4b3c'), + ('fedcba9876543210', '1f85031c19e11968', 'f0e1d2c3b4a5968778695a4b3c2d'), + ('fedcba9876543210', '79d9373a714ca34f', 'f0e1d2c3b4a5968778695a4b3c2d1e'), + ('fedcba9876543210', '93142887ee3be15c', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f'), + ('fedcba9876543210', '03429e838ce2d14b', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f00'), + ('fedcba9876543210', 'a4299e27469ff67b', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f0011'), + ('fedcba9876543210', 'afd5aed1c1bc96a8', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f001122'), + ('fedcba9876543210', '10851c0e3858da9f', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f00112233'), + ('fedcba9876543210', 'e6f51ed79b9db21f', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f0011223344'), + ('fedcba9876543210', '64a6e14afd36b46f', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f001122334455'), + ('fedcba9876543210', '80c7d7d45a5479ad', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f00112233445566'), + ('fedcba9876543210', '05044b62fa52d080', + 'f0e1d2c3b4a5968778695a4b3c2d1e0f0011223344556677'), +] + + +class KeyLength(unittest.TestCase): + + def runTest(self): + self.assertRaises(ValueError, Blowfish.new, bchr(0) * 3, + Blowfish.MODE_ECB) + self.assertRaises(ValueError, Blowfish.new, bchr(0) * 57, + Blowfish.MODE_ECB) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + cipher = Blowfish.new(b'4'*16, Blowfish.MODE_ECB) + + pt = b'5' * 16 + ct = cipher.encrypt(pt) + + output = bytearray(16) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(16)) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(7) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from .common import make_block_tests + tests = make_block_tests(Blowfish, "Blowfish", test_data) + tests.append(KeyLength()) + tests += [TestOutput()] + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CAST.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CAST.py new file mode 100644 index 0000000..8bc21fd --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CAST.py @@ -0,0 +1,101 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/CAST.py: Self-test for the CAST-128 (CAST5) cipher +# +# Written in 2008 by Dwayne C. 
Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Cipher.CAST""" + +import unittest + +from Cryptodome.Util.py3compat import bchr + +from Cryptodome.Cipher import CAST + +# This is a list of (plaintext, ciphertext, key) tuples. +test_data = [ + # Test vectors from RFC 2144, B.1 + ('0123456789abcdef', '238b4fe5847e44b2', + '0123456712345678234567893456789a', + '128-bit key'), + + ('0123456789abcdef', 'eb6a711a2c02271b', + '01234567123456782345', + '80-bit key'), + + ('0123456789abcdef', '7ac816d16e9b302e', + '0123456712', + '40-bit key'), +] + + +class KeyLength(unittest.TestCase): + + def runTest(self): + self.assertRaises(ValueError, CAST.new, bchr(0) * 4, CAST.MODE_ECB) + self.assertRaises(ValueError, CAST.new, bchr(0) * 17, CAST.MODE_ECB) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + cipher = CAST.new(b'4'*16, CAST.MODE_ECB) + + pt = b'5' * 16 + ct = cipher.encrypt(pt) + + output = bytearray(16) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(16)) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(7) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from .common import make_block_tests + + tests = make_block_tests(CAST, "CAST", test_data) + tests.append(KeyLength()) + tests.append(TestOutput()) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CBC.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CBC.py new file mode 100644 index 0000000..f118eb6 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CBC.py @@ -0,0 +1,556 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. 
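The `TestOutput` cases in the Blowfish and CAST suites above both exercise the same pycryptodome convention: `encrypt()` and `decrypt()` accept an `output=` keyword naming a pre-allocated writable buffer, fill it in place, and return `None`. A short sketch of that calling pattern (key and plaintext are arbitrary placeholders):

```python
from Cryptodome.Cipher import CAST

cipher = CAST.new(b'0123456789abcdef', CAST.MODE_ECB)  # 128-bit key

pt = b'A' * 16                    # two 8-byte CAST blocks
buf = bytearray(len(pt))          # must be writable and exactly len(pt) long

assert cipher.encrypt(pt, output=buf) is None  # ciphertext written into buf

# As the tests assert, unsuitable buffers are rejected:
#   output=b'\x00' * 16  -> TypeError (read-only)
#   output=bytearray(7)  -> ValueError (wrong length)
```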
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Util.py3compat import tobytes, is_string +from Cryptodome.Cipher import AES, DES3, DES +from Cryptodome.Hash import SHAKE128 + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + +class BlockChainingTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + key_192 = get_tag_random("key_192", 24) + iv_128 = get_tag_random("iv_128", 16) + iv_64 = get_tag_random("iv_64", 8) + data_128 = get_tag_random("data_128", 16) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_loopback_64(self): + cipher = DES3.new(self.key_192, self.des3_mode, self.iv_64) + pt = get_tag_random("plaintext", 8 * 100) + ct = cipher.encrypt(pt) + + cipher = DES3.new(self.key_192, self.des3_mode, self.iv_64) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_iv(self): + # If not passed, the iv is created randomly + cipher = AES.new(self.key_128, self.aes_mode) + iv1 = cipher.iv + cipher = AES.new(self.key_128, self.aes_mode) + iv2 = cipher.iv + self.assertNotEqual(iv1, iv2) + self.assertEqual(len(iv1), 16) + + # IV can be passed in uppercase or lowercase + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + ct = cipher.encrypt(self.data_128) + + cipher = AES.new(self.key_128, self.aes_mode, iv=self.iv_128) + self.assertEqual(ct, cipher.encrypt(self.data_128)) + + cipher = AES.new(self.key_128, self.aes_mode, IV=self.iv_128) + self.assertEqual(ct, cipher.encrypt(self.data_128)) + + def test_iv_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_128, self.aes_mode, + iv = u'test1234567890-*') + + def test_only_one_iv(self): + # Only one IV/iv keyword allowed + self.assertRaises(TypeError, AES.new, self.key_128, self.aes_mode, + iv=self.iv_128, 
IV=self.iv_128) + + def test_iv_with_matching_length(self): + self.assertRaises(ValueError, AES.new, self.key_128, self.aes_mode, + b"") + self.assertRaises(ValueError, AES.new, self.key_128, self.aes_mode, + self.iv_128[:15]) + self.assertRaises(ValueError, AES.new, self.key_128, self.aes_mode, + self.iv_128 + b"0") + + def test_block_size_128(self): + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_block_size_64(self): + cipher = DES3.new(self.key_192, self.des3_mode, self.iv_64) + self.assertEqual(cipher.block_size, DES3.block_size) + + def test_unaligned_data_128(self): + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + for wrong_length in range(1,16): + self.assertRaises(ValueError, cipher.encrypt, b"5" * wrong_length) + + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + for wrong_length in range(1,16): + self.assertRaises(ValueError, cipher.decrypt, b"5" * wrong_length) + + def test_unaligned_data_64(self): + cipher = DES3.new(self.key_192, self.des3_mode, self.iv_64) + for wrong_length in range(1,8): + self.assertRaises(ValueError, cipher.encrypt, b"5" * wrong_length) + + cipher = DES3.new(self.key_192, self.des3_mode, self.iv_64) + for wrong_length in range(1,8): + self.assertRaises(ValueError, cipher.decrypt, b"5" * wrong_length) + + def test_IV_iv_attributes(self): + data = get_tag_random("data", 16 * 100) + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + getattr(cipher, func)(data) + self.assertEqual(cipher.iv, self.iv_128) + self.assertEqual(cipher.IV, self.iv_128) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, self.aes_mode, + self.iv_128, 7) + self.assertRaises(TypeError, AES.new, self.key_128, self.aes_mode, + iv=self.iv_128, unknown=7) + # But some are only known by the base cipher (e.g. 
use_aesni consumed by the AES module) + AES.new(self.key_128, self.aes_mode, iv=self.iv_128, use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_128, self.aes_mode, self.iv_128) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_bytearray(self): + data = b"1" * 128 + data_ba = bytearray(data) + + # Encrypt + key_ba = bytearray(self.key_128) + iv_ba = bytearray(self.iv_128) + + cipher1 = AES.new(self.key_128, self.aes_mode, self.iv_128) + ref1 = cipher1.encrypt(data) + + cipher2 = AES.new(key_ba, self.aes_mode, iv_ba) + key_ba[:3] = b'\xFF\xFF\xFF' + iv_ba[:3] = b'\xFF\xFF\xFF' + ref2 = cipher2.encrypt(data_ba) + + self.assertEqual(ref1, ref2) + self.assertEqual(cipher1.iv, cipher2.iv) + + # Decrypt + key_ba = bytearray(self.key_128) + iv_ba = bytearray(self.iv_128) + + cipher3 = AES.new(self.key_128, self.aes_mode, self.iv_128) + ref3 = cipher3.decrypt(data) + + cipher4 = AES.new(key_ba, self.aes_mode, iv_ba) + key_ba[:3] = b'\xFF\xFF\xFF' + iv_ba[:3] = b'\xFF\xFF\xFF' + ref4 = cipher4.decrypt(data_ba) + + self.assertEqual(ref3, ref4) + + def test_memoryview(self): + data = b"1" * 128 + data_mv = memoryview(bytearray(data)) + + # Encrypt + key_mv = memoryview(bytearray(self.key_128)) + iv_mv = memoryview(bytearray(self.iv_128)) + + cipher1 = AES.new(self.key_128, self.aes_mode, self.iv_128) + ref1 = cipher1.encrypt(data) + + cipher2 = AES.new(key_mv, self.aes_mode, iv_mv) + key_mv[:3] = b'\xFF\xFF\xFF' + iv_mv[:3] = b'\xFF\xFF\xFF' + ref2 = cipher2.encrypt(data_mv) + + self.assertEqual(ref1, ref2) + self.assertEqual(cipher1.iv, cipher2.iv) + + # Decrypt + key_mv = memoryview(bytearray(self.key_128)) + iv_mv = memoryview(bytearray(self.iv_128)) + + cipher3 = AES.new(self.key_128, self.aes_mode, self.iv_128) + ref3 = cipher3.decrypt(data) + + cipher4 = AES.new(key_mv, self.aes_mode, iv_mv) + key_mv[:3] = b'\xFF\xFF\xFF' + iv_mv[:3] = b'\xFF\xFF\xFF' + ref4 = cipher4.decrypt(data_mv) + + self.assertEqual(ref3, ref4) + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + ct = cipher.encrypt(pt) + + output = bytearray(128) + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + + def test_output_param_same_buffer(self): + + pt = b'5' * 128 + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + ct = cipher.encrypt(pt) + + pt_ba = bytearray(pt) + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + res = cipher.encrypt(pt_ba, output=pt_ba) + self.assertEqual(ct, pt_ba) + self.assertEqual(res, None) + + ct_ba = bytearray(ct) + cipher = 
AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + res = cipher.decrypt(ct_ba, output=ct_ba) + self.assertEqual(pt, ct_ba) + self.assertEqual(res, None) + + + def test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + ct = cipher.encrypt(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + LEN_PT = 128 + + pt = b'5' * LEN_PT + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + ct = cipher.encrypt(pt) + + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0' * LEN_PT) + + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0' * LEN_PT) + + shorter_output = bytearray(LEN_PT - 1) + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + cipher = AES.new(b'4'*16, self.aes_mode, iv=self.iv_128) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +class CbcTests(BlockChainingTests): + aes_mode = AES.MODE_CBC + des3_mode = DES3.MODE_CBC + + +class NistBlockChainingVectors(unittest.TestCase): + + def _do_kat_aes_test(self, file_name): + + test_vectors = load_test_vectors(("Cipher", "AES"), + file_name, + "AES CBC KAT", + { "count" : lambda x: int(x) } ) + if test_vectors is None: + return + + direction = None + for tv in test_vectors: + + # The test vector file contains some directive lines + if is_string(tv): + direction = tv + continue + + self.description = tv.desc + + cipher = AES.new(tv.key, self.aes_mode, tv.iv) + if direction == "[ENCRYPT]": + self.assertEqual(cipher.encrypt(tv.plaintext), tv.ciphertext) + elif direction == "[DECRYPT]": + self.assertEqual(cipher.decrypt(tv.ciphertext), tv.plaintext) + else: + assert False + + # See Section 6.4.2 in AESAVS + def _do_mct_aes_test(self, file_name): + + test_vectors = load_test_vectors(("Cipher", "AES"), + file_name, + "AES CBC Montecarlo", + { "count" : lambda x: int(x) } ) + if test_vectors is None: + return + + direction = None + for tv in test_vectors: + + # The test vector file contains some directive lines + if is_string(tv): + direction = tv + continue + + self.description = tv.desc + cipher = AES.new(tv.key, self.aes_mode, tv.iv) + + if direction == '[ENCRYPT]': + cts = [ tv.iv ] + for count in range(1000): + cts.append(cipher.encrypt(tv.plaintext)) + tv.plaintext = cts[-2] + self.assertEqual(cts[-1], tv.ciphertext) + elif direction == '[DECRYPT]': + pts = [ tv.iv] + for count in range(1000): + pts.append(cipher.decrypt(tv.ciphertext)) + tv.ciphertext = pts[-2] + self.assertEqual(pts[-1], tv.plaintext) + else: + assert False + + def _do_tdes_test(self, file_name): + + test_vectors = load_test_vectors(("Cipher", "TDES"), + file_name, + "TDES CBC KAT", + { "count" : lambda x: int(x) } ) + if test_vectors is None: + return + + direction = None + for tv in test_vectors: + + # The test vector file contains some directive lines + if is_string(tv): + direction = tv + continue + + self.description = tv.desc + if hasattr(tv, "keys"): + cipher = DES.new(tv.keys, self.des_mode, tv.iv) + else: + if tv.key1 != tv.key3: + key = tv.key1 + tv.key2 + tv.key3 # Option 3 + 
else: + key = tv.key1 + tv.key2 # Option 2 + cipher = DES3.new(key, self.des3_mode, tv.iv) + + if direction == "[ENCRYPT]": + self.assertEqual(cipher.encrypt(tv.plaintext), tv.ciphertext) + elif direction == "[DECRYPT]": + self.assertEqual(cipher.decrypt(tv.ciphertext), tv.plaintext) + else: + assert False + + +class NistCbcVectors(NistBlockChainingVectors): + aes_mode = AES.MODE_CBC + des_mode = DES.MODE_CBC + des3_mode = DES3.MODE_CBC + + +# Create one test method per file +nist_aes_kat_mmt_files = ( + # KAT + "CBCGFSbox128.rsp", + "CBCGFSbox192.rsp", + "CBCGFSbox256.rsp", + "CBCKeySbox128.rsp", + "CBCKeySbox192.rsp", + "CBCKeySbox256.rsp", + "CBCVarKey128.rsp", + "CBCVarKey192.rsp", + "CBCVarKey256.rsp", + "CBCVarTxt128.rsp", + "CBCVarTxt192.rsp", + "CBCVarTxt256.rsp", + # MMT + "CBCMMT128.rsp", + "CBCMMT192.rsp", + "CBCMMT256.rsp", + ) +nist_aes_mct_files = ( + "CBCMCT128.rsp", + "CBCMCT192.rsp", + "CBCMCT256.rsp", + ) + +for file_name in nist_aes_kat_mmt_files: + def new_func(self, file_name=file_name): + self._do_kat_aes_test(file_name) + setattr(NistCbcVectors, "test_AES_" + file_name, new_func) + +for file_name in nist_aes_mct_files: + def new_func(self, file_name=file_name): + self._do_mct_aes_test(file_name) + setattr(NistCbcVectors, "test_AES_" + file_name, new_func) +del file_name, new_func + +nist_tdes_files = ( + "TCBCMMT2.rsp", # 2TDES + "TCBCMMT3.rsp", # 3TDES + "TCBCinvperm.rsp", # Single DES + "TCBCpermop.rsp", + "TCBCsubtab.rsp", + "TCBCvarkey.rsp", + "TCBCvartext.rsp", + ) + +for file_name in nist_tdes_files: + def new_func(self, file_name=file_name): + self._do_tdes_test(file_name) + setattr(NistCbcVectors, "test_TDES_" + file_name, new_func) + +# END OF NIST CBC TEST VECTORS + + +class SP800TestVectors(unittest.TestCase): + """Class exercising the CBC test vectors found in Section F.2 + of NIST SP 800-38A""" + + def test_aes_128(self): + key = '2b7e151628aed2a6abf7158809cf4f3c' + iv = '000102030405060708090a0b0c0d0e0f' + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '7649abac8119b246cee98e9b12e9197d' +\ + '5086cb9b507219ee95db113a917678b2' +\ + '73bed6b8e3c1743b7116e69e22229516' +\ + '3ff1caa1681fac09120eca307586e1a7' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_192(self): + key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b' + iv = '000102030405060708090a0b0c0d0e0f' + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '4f021db243bc633d7178183a9fa071e8' +\ + 'b4d9ada9ad7dedf4e5e738763f69145a' +\ + '571b242012fb7ae07fa9baac3df102e0' +\ + '08b0e27988598881d920a9e64f5615cd' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_256(self): + key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4' + iv = 
'000102030405060708090a0b0c0d0e0f' + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = 'f58c4c04d6e5f1ba779eabfb5f7bfbd6' +\ + '9cfc4e967edb808d679f777bc6702c7d' +\ + '39f23369a9d9bacfa530e26304231461' +\ + 'b2eb05e2c39be9fcda6c19078c6a9d1b' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CBC, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(CbcTests) + if config.get('slow_tests'): + tests += list_test_cases(NistCbcVectors) + tests += list_test_cases(SP800TestVectors) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CCM.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CCM.py new file mode 100644 index 0000000..2615720 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CCM.py @@ -0,0 +1,936 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
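For orientation, the `SP800TestVectors` class in the CBC suite above hard-codes the AES-CBC examples from Appendix F.2 of NIST SP 800-38A. The same check, condensed to the first AES-128 block (all values copied from the test data above):

```python
from binascii import unhexlify
from Cryptodome.Cipher import AES

key = unhexlify('2b7e151628aed2a6abf7158809cf4f3c')
iv = unhexlify('000102030405060708090a0b0c0d0e0f')
pt = unhexlify('6bc1bee22e409f96e93d7e117393172a')

ct = AES.new(key, AES.MODE_CBC, iv).encrypt(pt)
assert ct == unhexlify('7649abac8119b246cee98e9b12e9197d')

# CBC objects are stateful, so decryption needs a fresh cipher with the same IV.
assert AES.new(key, AES.MODE_CBC, iv).decrypt(ct) == pt
```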
+# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors_wycheproof +from Cryptodome.Util.py3compat import tobytes, bchr +from Cryptodome.Cipher import AES +from Cryptodome.Hash import SHAKE128 + +from Cryptodome.Util.strxor import strxor + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class CcmTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # If not passed, the nonce is created randomly + cipher = AES.new(self.key_128, AES.MODE_CCM) + nonce1 = cipher.nonce + cipher = AES.new(self.key_128, AES.MODE_CCM) + nonce2 = cipher.nonce + self.assertEqual(len(nonce1), 11) + self.assertNotEqual(nonce1, nonce2) + + cipher = AES.new(self.key_128, AES.MODE_CCM, self.nonce_96) + ct = cipher.encrypt(self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertEqual(ct, cipher.encrypt(self.data)) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CCM, + nonce=u'test12345678') + + def test_nonce_length(self): + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CCM, + nonce=b"") + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CCM, + nonce=bchr(1) * 6) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CCM, + nonce=bchr(1) * 14) + for x in range(7, 13 + 1): + AES.new(self.key_128, AES.MODE_CCM, nonce=bchr(1) * x) + + def test_block_size(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_nonce_attribute(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, a 11 bytes long nonce is randomly generated + nonce1 = AES.new(self.key_128, AES.MODE_CCM).nonce + nonce2 = AES.new(self.key_128, AES.MODE_CCM).nonce + self.assertEqual(len(nonce1), 11) + self.assertNotEqual(nonce1, nonce2) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CCM, + self.nonce_96, 7) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, unknown=7) + + # But some are only known by the base cipher + # (e.g. 
use_aesni consumed by the AES module) + AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_mac_len(self): + # Invalid MAC length + for mac_len in range(3, 17 + 1, 2): + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, mac_len=mac_len) + + # Valid MAC length + for mac_len in range(4, 16 + 1, 2): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + mac_len=mac_len) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), mac_len) + + # Default MAC length + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Cryptodome.Util.strxor import strxor_c + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_longer_assoc_data_than_declared(self): + # More than zero + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=0) + self.assertRaises(ValueError, cipher.update, b"1") + + # Too large + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=15) + self.assertRaises(ValueError, cipher.update, self.data) + + def test_shorter_assoc_data_than_expected(self): + DATA_LEN = len(self.data) + + # With plaintext + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=DATA_LEN + 1) + cipher.update(self.data) + self.assertRaises(ValueError, cipher.encrypt, self.data) + + # With empty plaintext + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=DATA_LEN + 1) + cipher.update(self.data) + self.assertRaises(ValueError, cipher.digest) + + # With ciphertext + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=DATA_LEN + 1) + cipher.update(self.data) + self.assertRaises(ValueError, cipher.decrypt, self.data) + + # With empty ciphertext + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(self.data) + mac = cipher.digest() + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + assoc_len=DATA_LEN + 1) + 
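        # CCM commits to the associated-data length up front (it is encoded
        # into the MAC computation), so a cipher declared with
        # assoc_len=DATA_LEN + 1 that only receives DATA_LEN header bytes can
        # never reach a valid state: verify() below must raise ValueError.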
cipher.update(self.data) + self.assertRaises(ValueError, cipher.verify, mac) + + def test_shorter_and_longer_plaintext_than_declared(self): + DATA_LEN = len(self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=DATA_LEN + 1) + cipher.encrypt(self.data) + self.assertRaises(ValueError, cipher.digest) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=DATA_LEN - 1) + self.assertRaises(ValueError, cipher.encrypt, self.data) + + def test_shorter_ciphertext_than_declared(self): + DATA_LEN = len(self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=DATA_LEN + 1) + cipher.decrypt(ct) + self.assertRaises(ValueError, cipher.verify, mac) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=DATA_LEN - 1) + self.assertRaises(ValueError, cipher.decrypt, ct) + + def test_message_chunks(self): + # Validate that both associated data and plaintext/ciphertext + # can be broken up in chunks of arbitrary length + + auth_data = get_tag_random("authenticated data", 127) + plaintext = get_tag_random("plaintext", 127) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(auth_data) + ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) + + def break_up(data, chunk_length): + return [data[i:i+chunk_length] for i in range(0, len(data), + chunk_length)] + + # Encryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=127, assoc_len=127) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + pt2 = b"" + for chunk in break_up(ciphertext, chunk_length): + pt2 += cipher.decrypt(chunk) + self.assertEqual(plaintext, pt2) + cipher.verify(ref_mac) + + # Decryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96, + msg_len=127, assoc_len=127) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + ct2 = b"" + for chunk in break_up(plaintext, chunk_length): + ct2 += cipher.encrypt(chunk) + self.assertEqual(ciphertext, ct2) + self.assertEqual(cipher.digest(), ref_mac) + + def test_bytearray(self): + + # Encrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + data_ba = bytearray(self.data) + + cipher1 = AES.new(self.key_128, + AES.MODE_CCM, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + tag = cipher1.digest() + + cipher2 = AES.new(key_ba, + AES.MODE_CCM, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_ba) + data_ba[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + del data_ba + + cipher4 = AES.new(key_ba, + AES.MODE_CCM, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(bytearray(ct_test), bytearray(tag_test)) + + self.assertEqual(self.data, 
pt_test) + + def test_memoryview(self): + + # Encrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + data_mv = memoryview(bytearray(self.data)) + + cipher1 = AES.new(self.key_128, + AES.MODE_CCM, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + tag = cipher1.digest() + + cipher2 = AES.new(key_mv, + AES.MODE_CCM, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_mv) + data_mv[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + del data_mv + + cipher4 = AES.new(key_mv, + AES.MODE_CCM, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(memoryview(ct_test), memoryview(tag_test)) + + self.assertEqual(self.data, pt_test) + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + tag = cipher.digest() + + output = bytearray(128) + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + res, tag_out = cipher.encrypt_and_digest(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + self.assertEqual(tag, tag_out) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + res = cipher.decrypt_and_verify(ct, tag, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + def test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + + pt = b'5' * 16 + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(15) + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +class CcmFSMTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 
16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 16) + + def test_valid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + for assoc_len in (None, 0): + for msg_len in (None, len(self.data)): + # Verify path INIT->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + # Verify path INIT->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + for assoc_len in (None, len(self.data)): + for msg_len in (None, 0): + # Verify path INIT->UPDATE->DIGEST + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + cipher.update(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + cipher.update(self.data) + cipher.verify(mac) + + def test_valid_full_path(self): + # Fixed authenticated data, fixed plaintext + for assoc_len in (None, len(self.data)): + for msg_len in (None, len(self.data)): + # Verify path INIT->UPDATE->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + cipher.update(self.data) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + assoc_len=assoc_len, + msg_len=msg_len) + cipher.update(self.data) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + mac = cipher.digest() + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_encrypt_or_decrypt(self): + # Only possible if msg_len is declared in advance + for method_name in "encrypt", "decrypt": + for auth_data in (None, b"333", self.data, + self.data + b"3"): + if auth_data is None: + assoc_len = None + else: + assoc_len = len(auth_data) + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + msg_len=64, + assoc_len=assoc_len) + if auth_data is not None: + cipher.update(auth_data) + method = getattr(cipher, method_name) + method(self.data) + method(self.data) + method(self.data) + method(self.data) + + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(self.data) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(self.data) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # encrypt_and_digest + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(self.data) + ct, mac = cipher.encrypt_and_digest(self.data) + + # decrypt_and_verify + 
cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.update(self.data) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data, pt) + + def test_invalid_multiple_encrypt_decrypt_without_msg_len(self): + # Once per method, with or without assoc. data + for method_name in "encrypt", "decrypt": + for assoc_data_present in (True, False): + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96) + if assoc_data_present: + cipher.update(self.data) + method = getattr(cipher, method_name) + method(self.data) + self.assertRaises(TypeError, method, self.data) + + def test_invalid_mixing_encrypt_decrypt(self): + # Once per method, with or without assoc. data + for method1_name, method2_name in (("encrypt", "decrypt"), + ("decrypt", "encrypt")): + for assoc_data_present in (True, False): + cipher = AES.new(self.key_128, AES.MODE_CCM, + nonce=self.nonce_96, + msg_len=32) + if assoc_data_present: + cipher.update(self.data) + getattr(cipher, method1_name)(self.data) + self.assertRaises(TypeError, getattr(cipher, method2_name), + self.data) + + def test_invalid_encrypt_or_update_after_digest(self): + for method_name in "encrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.encrypt(self.data) + cipher.digest() + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data) + + def test_invalid_decrypt_or_update_after_verify(self): + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + for method_name in "decrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_CCM, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + +class TestVectors(unittest.TestCase): + """Class exercising the CCM test vectors found in Appendix C + of NIST SP 800-38C and in RFC 3610""" + + # List of test vectors, each made up of: + # - authenticated data + # - plaintext + # - ciphertext + # - MAC + # - AES key + # - nonce + test_vectors_hex = [ + # NIST SP 800 38C + ( '0001020304050607', + '20212223', + '7162015b', + '4dac255d', + '404142434445464748494a4b4c4d4e4f', + '10111213141516'), + ( '000102030405060708090a0b0c0d0e0f', + '202122232425262728292a2b2c2d2e2f', + 'd2a1f0e051ea5f62081a7792073d593d', + '1fc64fbfaccd', + '404142434445464748494a4b4c4d4e4f', + '1011121314151617'), + ( '000102030405060708090a0b0c0d0e0f10111213', + '202122232425262728292a2b2c2d2e2f3031323334353637', + 'e3b201a9f5b71a7a9b1ceaeccd97e70b6176aad9a4428aa5', + '484392fbc1b09951', + '404142434445464748494a4b4c4d4e4f', + '101112131415161718191a1b'), + ( (''.join(["%02X" % (x*16+y) for x in range(0,16) for y in range(0,16)]))*256, + '202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f', + '69915dad1e84c6376a68c2967e4dab615ae0fd1faec44cc484828529463ccf72', + 'b4ac6bec93e8598e7f0dadbcea5b', + '404142434445464748494a4b4c4d4e4f', + '101112131415161718191a1b1c'), + # RFC3610 + ( '0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e', + '588c979a61c663d2f066d0c2c0f989806d5f6b61dac384', + '17e8d12cfdf926e0', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000003020100a0a1a2a3a4a5'), + ( + 
'0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f', + '72c91a36e135f8cf291ca894085c87e3cc15c439c9e43a3b', + 'a091d56e10400916', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000004030201a0a1a2a3a4a5'), + ( '0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20', + '51b1e5f44a197d1da46b0f8e2d282ae871e838bb64da859657', + '4adaa76fbd9fb0c5', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000005040302A0A1A2A3A4A5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e', + 'a28c6865939a9a79faaa5c4c2a9d4a91cdac8c', + '96c861b9c9e61ef1', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000006050403a0a1a2a3a4a5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e1f', + 'dcf1fb7b5d9e23fb9d4e131253658ad86ebdca3e', + '51e83f077d9c2d93', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000007060504a0a1a2a3a4a5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e1f20', + '6fc1b011f006568b5171a42d953d469b2570a4bd87', + '405a0443ac91cb94', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000008070605a0a1a2a3a4a5'), + ( '0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e', + '0135d1b2c95f41d5d1d4fec185d166b8094e999dfed96c', + '048c56602c97acbb7490', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '00000009080706a0a1a2a3a4a5'), + ( '0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f', + '7b75399ac0831dd2f0bbd75879a2fd8f6cae6b6cd9b7db24', + 'c17b4433f434963f34b4', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '0000000a090807a0a1a2a3a4a5'), + ( '0001020304050607', + '08090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20', + '82531a60cc24945a4b8279181ab5c84df21ce7f9b73f42e197', + 'ea9c07e56b5eb17e5f4e', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '0000000b0a0908a0a1a2a3a4a5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e', + '07342594157785152b074098330abb141b947b', + '566aa9406b4d999988dd', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '0000000c0b0a09a0a1a2a3a4a5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e1f', + '676bb20380b0e301e8ab79590a396da78b834934', + 'f53aa2e9107a8b6c022c', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '0000000d0c0b0aa0a1a2a3a4a5'), + ( '000102030405060708090a0b', + '0c0d0e0f101112131415161718191a1b1c1d1e1f20', + 'c0ffa0d6f05bdb67f24d43a4338d2aa4bed7b20e43', + 'cd1aa31662e7ad65d6db', + 'c0c1c2c3c4c5c6c7c8c9cacbcccdcecf', + '0000000e0d0c0ba0a1a2a3a4a5'), + ( '0be1a88bace018b1', + '08e8cf97d820ea258460e96ad9cf5289054d895ceac47c', + '4cb97f86a2a4689a877947ab8091ef5386a6ffbdd080f8', + 'e78cf7cb0cddd7b3', + 'd7828d13b2b0bdc325a76236df93cc6b', + '00412b4ea9cdbe3c9696766cfa'), + ( '63018f76dc8a1bcb', + '9020ea6f91bdd85afa0039ba4baff9bfb79c7028949cd0ec', + '4ccb1e7ca981befaa0726c55d378061298c85c92814abc33', + 'c52ee81d7d77c08a', + 'd7828d13b2b0bdc325a76236df93cc6b', + '0033568ef7b2633c9696766cfa'), + ( 'aa6cfa36cae86b40', + 'b916e0eacc1c00d7dcec68ec0b3bbb1a02de8a2d1aa346132e', + 'b1d23a2220ddc0ac900d9aa03c61fcf4a559a4417767089708', + 'a776796edb723506', + 'd7828d13b2b0bdc325a76236df93cc6b', + '00103fe41336713c9696766cfa'), + ( 'd0d0735c531e1becf049c244', + '12daac5630efa5396f770ce1a66b21f7b2101c', + '14d253c3967b70609b7cbb7c49916028324526', + '9a6f49975bcadeaf', + 'd7828d13b2b0bdc325a76236df93cc6b', + '00764c63b8058e3c9696766cfa'), + ( '77b60f011c03e1525899bcae', + 'e88b6a46c78d63e52eb8c546efb5de6f75e9cc0d', + '5545ff1a085ee2efbf52b2e04bee1e2336c73e3f', + '762c0c7744fe7e3c', + 'd7828d13b2b0bdc325a76236df93cc6b', + 
'00f8b678094e3b3c9696766cfa'), + ( 'cd9044d2b71fdb8120ea60c0', + '6435acbafb11a82e2f071d7ca4a5ebd93a803ba87f', + '009769ecabdf48625594c59251e6035722675e04c8', + '47099e5ae0704551', + 'd7828d13b2b0bdc325a76236df93cc6b', + '00d560912d3f703c9696766cfa'), + ( 'd85bc7e69f944fb8', + '8a19b950bcf71a018e5e6701c91787659809d67dbedd18', + 'bc218daa947427b6db386a99ac1aef23ade0b52939cb6a', + '637cf9bec2408897c6ba', + 'd7828d13b2b0bdc325a76236df93cc6b', + '0042fff8f1951c3c9696766cfa'), + ( '74a0ebc9069f5b37', + '1761433c37c5a35fc1f39f406302eb907c6163be38c98437', + '5810e6fd25874022e80361a478e3e9cf484ab04f447efff6', + 'f0a477cc2fc9bf548944', + 'd7828d13b2b0bdc325a76236df93cc6b', + '00920f40e56cdc3c9696766cfa'), + ( '44a3aa3aae6475ca', + 'a434a8e58500c6e41530538862d686ea9e81301b5ae4226bfa', + 'f2beed7bc5098e83feb5b31608f8e29c38819a89c8e776f154', + '4d4151a4ed3a8b87b9ce', + 'd7828d13b2b0bdc325a76236df93cc6b', + '0027ca0c7120bc3c9696766cfa'), + ( 'ec46bb63b02520c33c49fd70', + 'b96b49e21d621741632875db7f6c9243d2d7c2', + '31d750a09da3ed7fddd49a2032aabf17ec8ebf', + '7d22c8088c666be5c197', + 'd7828d13b2b0bdc325a76236df93cc6b', + '005b8ccbcd9af83c9696766cfa'), + ( '47a65ac78b3d594227e85e71', + 'e2fcfbb880442c731bf95167c8ffd7895e337076', + 'e882f1dbd38ce3eda7c23f04dd65071eb41342ac', + 'df7e00dccec7ae52987d', + 'd7828d13b2b0bdc325a76236df93cc6b', + '003ebe94044b9a3c9696766cfa'), + ( '6e37a6ef546d955d34ab6059', + 'abf21c0b02feb88f856df4a37381bce3cc128517d4', + 'f32905b88a641b04b9c9ffb58cc390900f3da12ab1', + '6dce9e82efa16da62059', + 'd7828d13b2b0bdc325a76236df93cc6b', + '008d493b30ae8b3c9696766cfa'), + ] + + test_vectors = [[unhexlify(x) for x in tv] for tv in test_vectors_hex] + + def runTest(self): + for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: + # Encrypt + cipher = AES.new(key, AES.MODE_CCM, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + ct2, mac2 = cipher.encrypt_and_digest(pt) + self.assertEqual(ct, ct2) + self.assertEqual(mac, mac2) + + # Decrypt + cipher = AES.new(key, AES.MODE_CCM, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, **extra_params): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._extra_params = extra_params + self._id = "None" + + def setUp(self): + + def filter_tag(group): + return group['tagSize'] // 8 + + self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + "aes_ccm_test.json", + "Wycheproof AES CCM", + group_tag={'tag_size': filter_tag}) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt CCM Test #" + str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_CCM, tv.iv, mac_len=tv.tag_size, + **self._extra_params) + except ValueError as e: + if len(tv.iv) not in range(7, 13 + 1, 2) and "Length of parameter 'nonce'" in str(e): + assert not tv.valid + return + if tv.tag_size not in range(4, 16 + 1, 2) and "Parameter 'mac_len'" in str(e): + assert not tv.valid + return + raise e + + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(ct, tv.ct) + self.assertEqual(tag, tv.tag) + self.warn(tv) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt CCM Test #" + 
str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_CCM, tv.iv, mac_len=tv.tag_size, + **self._extra_params) + except ValueError as e: + if len(tv.iv) not in range(7, 13 + 1, 2) and "Length of parameter 'nonce'" in str(e): + assert not tv.valid + return + if tv.tag_size not in range(4, 16 + 1, 2) and "Parameter 'mac_len'" in str(e): + assert not tv.valid + return + raise e + + cipher.update(tv.aad) + try: + pt = cipher.decrypt_and_verify(tv.ct, tv.tag) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + self.warn(tv) + + def test_corrupt_decrypt(self, tv): + self._id = "Wycheproof Corrupt Decrypt CCM Test #" + str(tv.id) + if len(tv.iv) not in range(7, 13 + 1, 2) or len(tv.ct) == 0: + return + cipher = AES.new(tv.key, AES.MODE_CCM, tv.iv, mac_len=tv.tag_size, + **self._extra_params) + cipher.update(tv.aad) + ct_corrupt = strxor(tv.ct, b"\x00" * (len(tv.ct) - 1) + b"\x01") + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct_corrupt, tv.tag) + + def runTest(self): + + for tv in self.tv: + self.test_encrypt(tv) + self.test_decrypt(tv) + self.test_corrupt_decrypt(tv) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(CcmTests) + tests += list_test_cases(CcmFSMTests) + tests += [TestVectors()] + tests += [TestVectorsWycheproof(wycheproof_warnings)] + + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CFB.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CFB.py new file mode 100644 index 0000000..673bf8e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CFB.py @@ -0,0 +1,411 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
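# ---------------------------------------------------------------------------
# Editor's sketch (not part of the vendored file): the CCM FSM tests above
# rely on the fact that update()/encrypt()/decrypt() may only be called more
# than once when assoc_len and msg_len are declared up front. A minimal
# illustration of that usage follows; the key, nonce, and messages are
# arbitrary placeholders.
from Cryptodome.Cipher import AES

demo_key = b"k" * 16
demo_nonce = b"n" * 11        # CCM accepts nonces of 7 to 13 bytes
header = b"header"
body = b"first half" + b"second half"

enc = AES.new(demo_key, AES.MODE_CCM, nonce=demo_nonce,
              assoc_len=len(header), msg_len=len(body))
enc.update(header)            # associated data: authenticated, not encrypted
ct = enc.encrypt(b"first half") + enc.encrypt(b"second half")
tag = enc.digest()

dec = AES.new(demo_key, AES.MODE_CCM, nonce=demo_nonce,
              assoc_len=len(header), msg_len=len(body))
dec.update(header)
assert dec.decrypt(ct) == body
dec.verify(tag)               # raises ValueError if the tag does not match
# ---------------------------------------------------------------------------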
+# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Util.py3compat import tobytes, is_string +from Cryptodome.Cipher import AES, DES3, DES +from Cryptodome.Hash import SHAKE128 + +from Cryptodome.SelfTest.Cipher.test_CBC import BlockChainingTests + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class CfbTests(BlockChainingTests): + + aes_mode = AES.MODE_CFB + des3_mode = DES3.MODE_CFB + + # Redefine test_unaligned_data_128/64 + + def test_unaligned_data_128(self): + plaintexts = [ b"7777777" ] * 100 + + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=8) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=8) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=128) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=128) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_unaligned_data_64(self): + plaintexts = [ b"7777777" ] * 100 + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=8) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=8) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=64) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=64) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + # Extra + + def test_segment_size_128(self): + for bits in range(8, 129, 8): + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, + segment_size=bits) + + for bits in 0, 7, 9, 127, 129: + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CFB, + self.iv_128, + segment_size=bits) + + def test_segment_size_64(self): + for bits in range(8, 65, 8): + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, + segment_size=bits) + + for bits in 0, 7, 9, 63, 65: + self.assertRaises(ValueError, DES3.new, self.key_192, AES.MODE_CFB, + self.iv_64, + segment_size=bits) + + +class NistCfbVectors(unittest.TestCase): + + def _do_kat_aes_test(self, file_name, segment_size): + + test_vectors = load_test_vectors(("Cipher", "AES"), + file_name, + "AES CFB%d KAT" % segment_size, + { "count" : lambda x: int(x) } ) + if test_vectors is None: + return + + direction = None + for tv in test_vectors: + + # The test vector file contains some directive lines + if is_string(tv): + direction = tv + continue + + self.description = tv.desc + cipher = AES.new(tv.key, AES.MODE_CFB, tv.iv, + segment_size=segment_size) + if direction == "[ENCRYPT]": + self.assertEqual(cipher.encrypt(tv.plaintext), tv.ciphertext) + elif direction == "[DECRYPT]": + self.assertEqual(cipher.decrypt(tv.ciphertext), tv.plaintext) + else: + assert False + + # See Section 6.4.5 in AESAVS + def _do_mct_aes_test(self, file_name, segment_size): + + test_vectors = load_test_vectors(("Cipher", "AES"), + file_name, + "AES CFB%d Montecarlo" % segment_size, + { 
"count" : lambda x: int(x) } ) + if test_vectors is None: + return + + assert(segment_size in (8, 128)) + + direction = None + for tv in test_vectors: + + # The test vector file contains some directive lines + if is_string(tv): + direction = tv + continue + + self.description = tv.desc + cipher = AES.new(tv.key, AES.MODE_CFB, tv.iv, + segment_size=segment_size) + + def get_input(input_text, output_seq, j): + # CFB128 + if segment_size == 128: + if j >= 2: + return output_seq[-2] + return [input_text, tv.iv][j] + # CFB8 + if j == 0: + return input_text + elif j <= 16: + return tv.iv[j - 1:j] + return output_seq[j - 17] + + if direction == '[ENCRYPT]': + cts = [] + for j in range(1000): + plaintext = get_input(tv.plaintext, cts, j) + cts.append(cipher.encrypt(plaintext)) + self.assertEqual(cts[-1], tv.ciphertext) + elif direction == '[DECRYPT]': + pts = [] + for j in range(1000): + ciphertext = get_input(tv.ciphertext, pts, j) + pts.append(cipher.decrypt(ciphertext)) + self.assertEqual(pts[-1], tv.plaintext) + else: + assert False + + def _do_tdes_test(self, file_name, segment_size): + + test_vectors = load_test_vectors(("Cipher", "TDES"), + file_name, + "TDES CFB%d KAT" % segment_size, + { "count" : lambda x: int(x) } ) + if test_vectors is None: + return + + direction = None + for tv in test_vectors: + + # The test vector file contains some directive lines + if is_string(tv): + direction = tv + continue + + self.description = tv.desc + if hasattr(tv, "keys"): + cipher = DES.new(tv.keys, DES.MODE_CFB, tv.iv, + segment_size=segment_size) + else: + if tv.key1 != tv.key3: + key = tv.key1 + tv.key2 + tv.key3 # Option 3 + else: + key = tv.key1 + tv.key2 # Option 2 + cipher = DES3.new(key, DES3.MODE_CFB, tv.iv, + segment_size=segment_size) + if direction == "[ENCRYPT]": + self.assertEqual(cipher.encrypt(tv.plaintext), tv.ciphertext) + elif direction == "[DECRYPT]": + self.assertEqual(cipher.decrypt(tv.ciphertext), tv.plaintext) + else: + assert False + + +# Create one test method per file +nist_aes_kat_mmt_files = ( + # KAT + "CFB?GFSbox128.rsp", + "CFB?GFSbox192.rsp", + "CFB?GFSbox256.rsp", + "CFB?KeySbox128.rsp", + "CFB?KeySbox192.rsp", + "CFB?KeySbox256.rsp", + "CFB?VarKey128.rsp", + "CFB?VarKey192.rsp", + "CFB?VarKey256.rsp", + "CFB?VarTxt128.rsp", + "CFB?VarTxt192.rsp", + "CFB?VarTxt256.rsp", + # MMT + "CFB?MMT128.rsp", + "CFB?MMT192.rsp", + "CFB?MMT256.rsp", + ) +nist_aes_mct_files = ( + "CFB?MCT128.rsp", + "CFB?MCT192.rsp", + "CFB?MCT256.rsp", + ) + +for file_gen_name in nist_aes_kat_mmt_files: + for bits in "8", "128": + file_name = file_gen_name.replace("?", bits) + def new_func(self, file_name=file_name, bits=bits): + self._do_kat_aes_test(file_name, int(bits)) + setattr(NistCfbVectors, "test_AES_" + file_name, new_func) + +for file_gen_name in nist_aes_mct_files: + for bits in "8", "128": + file_name = file_gen_name.replace("?", bits) + def new_func(self, file_name=file_name, bits=bits): + self._do_mct_aes_test(file_name, int(bits)) + setattr(NistCfbVectors, "test_AES_" + file_name, new_func) +del file_name, new_func + +nist_tdes_files = ( + "TCFB?MMT2.rsp", # 2TDES + "TCFB?MMT3.rsp", # 3TDES + "TCFB?invperm.rsp", # Single DES + "TCFB?permop.rsp", + "TCFB?subtab.rsp", + "TCFB?varkey.rsp", + "TCFB?vartext.rsp", + ) + +for file_gen_name in nist_tdes_files: + for bits in "8", "64": + file_name = file_gen_name.replace("?", bits) + def new_func(self, file_name=file_name, bits=bits): + self._do_tdes_test(file_name, int(bits)) + setattr(NistCfbVectors, "test_TDES_" + file_name, new_func) + +# 
END OF NIST CFB TEST VECTORS + + +class SP800TestVectors(unittest.TestCase): + """Class exercising the CFB test vectors found in Section F.3 + of NIST SP 800-38A""" + + def test_aes_128_cfb8(self): + plaintext = '6bc1bee22e409f96e93d7e117393172aae2d' + ciphertext = '3b79424c9c0dd436bace9e0ed4586a4f32b9' + key = '2b7e151628aed2a6abf7158809cf4f3c' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_192_cfb8(self): + plaintext = '6bc1bee22e409f96e93d7e117393172aae2d' + ciphertext = 'cda2521ef0a905ca44cd057cbf0d47a0678a' + key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_256_cfb8(self): + plaintext = '6bc1bee22e409f96e93d7e117393172aae2d' + ciphertext = 'dc1f1a8520a64db55fcc8ac554844e889700' + key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=8) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_128_cfb128(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '3b3fd92eb72dad20333449f8e83cfb4a' +\ + 'c8a64537a0b3a93fcde3cdad9f1ce58b' +\ + '26751f67a3cbb140b1808cf187a4f4df' +\ + 'c04b05357c5d1c0eeac4c66f9ff7f2e6' + key = '2b7e151628aed2a6abf7158809cf4f3c' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_192_cfb128(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = 'cdc80d6fddf18cab34c25909c99a4174' +\ + '67ce7f7f81173621961a2b70171d3d7a' +\ + '2e1e8a1dd59b88b1c8e60fed1efac4c9' +\ + 'c05f9f9ca9834fa042ae8fba584b09ff' + key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + 
self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_256_cfb128(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + + ciphertext = 'dc7e84bfda79164b7ecd8486985d3860' +\ + '39ffed143b28b1c832113c6331e5407b' +\ + 'df10132415e54b92a13ed0a8267ae2f9' +\ + '75a385741ab9cef82031623d55b1e471' + key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(CfbTests) + if config.get('slow_tests'): + tests += list_test_cases(NistCfbVectors) + tests += list_test_cases(SP800TestVectors) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CTR.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CTR.py new file mode 100644 index 0000000..ef5be5d --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_CTR.py @@ -0,0 +1,472 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
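# ---------------------------------------------------------------------------
# Editor's sketch (not part of the vendored file): segment_size selects the
# CFB variant exercised by the vectors above (8 -> CFB-8, the default;
# 128 -> CFB-128), and both peers must agree on it. The key and IV below are
# arbitrary placeholders.
from Cryptodome.Cipher import AES

demo_key = b"0" * 16
demo_iv = b"1" * 16
msg = b"attack at dawn"

cfb8_ct = AES.new(demo_key, AES.MODE_CFB, demo_iv, segment_size=8).encrypt(msg)
cfb128_ct = AES.new(demo_key, AES.MODE_CFB, demo_iv, segment_size=128).encrypt(msg)

# Decryption only round-trips with a matching segment_size; the two variants
# generally produce different ciphertexts after the first byte.
assert AES.new(demo_key, AES.MODE_CFB, demo_iv, segment_size=8).decrypt(cfb8_ct) == msg
assert AES.new(demo_key, AES.MODE_CFB, demo_iv, segment_size=128).decrypt(cfb128_ct) == msg
# ---------------------------------------------------------------------------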
+# =================================================================== + +import unittest +from binascii import hexlify, unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Util.py3compat import tobytes, bchr +from Cryptodome.Cipher import AES, DES3 +from Cryptodome.Hash import SHAKE128, SHA256 +from Cryptodome.Util import Counter + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + +class CtrTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + key_192 = get_tag_random("key_192", 24) + nonce_32 = get_tag_random("nonce_32", 4) + nonce_64 = get_tag_random("nonce_64", 8) + ctr_64 = Counter.new(32, prefix=nonce_32) + ctr_128 = Counter.new(64, prefix=nonce_64) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_loopback_64(self): + cipher = DES3.new(self.key_192, DES3.MODE_CTR, counter=self.ctr_64) + pt = get_tag_random("plaintext", 8 * 100) + ct = cipher.encrypt(pt) + + cipher = DES3.new(self.key_192, DES3.MODE_CTR, counter=self.ctr_64) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_invalid_counter_parameter(self): + # Counter object is required for ciphers with short block size + self.assertRaises(TypeError, DES3.new, self.key_192, AES.MODE_CTR) + # Positional arguments are not allowed (Counter must be passed as + # keyword) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CTR, self.ctr_128) + + def test_nonce_attribute(self): + # Nonce attribute is the prefix passed to Counter (DES3) + cipher = DES3.new(self.key_192, DES3.MODE_CTR, counter=self.ctr_64) + self.assertEqual(cipher.nonce, self.nonce_32) + + # Nonce attribute is the prefix passed to Counter (AES) + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + self.assertEqual(cipher.nonce, self.nonce_64) + + # Nonce attribute is not defined if suffix is used in Counter + counter = Counter.new(64, prefix=self.nonce_32, suffix=self.nonce_32) + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + self.assertFalse(hasattr(cipher, "nonce")) + + def test_nonce_parameter(self): + # Nonce parameter becomes nonce attribute + cipher1 = AES.new(self.key_128, AES.MODE_CTR, nonce=self.nonce_64) + self.assertEqual(cipher1.nonce, self.nonce_64) + + counter = Counter.new(64, prefix=self.nonce_64, initial_value=0) + cipher2 = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + pt = get_tag_random("plaintext", 65536) + self.assertEqual(cipher1.encrypt(pt), cipher2.encrypt(pt)) + + # Nonce is implicitly created (for AES) when no parameters are passed + nonce1 = AES.new(self.key_128, AES.MODE_CTR).nonce + nonce2 = AES.new(self.key_128, AES.MODE_CTR).nonce + self.assertNotEqual(nonce1, nonce2) + self.assertEqual(len(nonce1), 8) + + # Nonce can be zero-length + cipher = AES.new(self.key_128, AES.MODE_CTR, nonce=b"") + self.assertEqual(b"", cipher.nonce) + cipher.encrypt(b'0'*300) + + # Nonce and Counter are mutually exclusive + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CTR, + counter=self.ctr_128, nonce=self.nonce_64) + + def test_initial_value_parameter(self): + # Test with nonce parameter + cipher1 = AES.new(self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, 
initial_value=0xFFFF) + counter = Counter.new(64, prefix=self.nonce_64, initial_value=0xFFFF) + cipher2 = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + pt = get_tag_random("plaintext", 65536) + self.assertEqual(cipher1.encrypt(pt), cipher2.encrypt(pt)) + + # Test without nonce parameter + cipher1 = AES.new(self.key_128, AES.MODE_CTR, + initial_value=0xFFFF) + counter = Counter.new(64, prefix=cipher1.nonce, initial_value=0xFFFF) + cipher2 = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + pt = get_tag_random("plaintext", 65536) + self.assertEqual(cipher1.encrypt(pt), cipher2.encrypt(pt)) + + # Initial_value and Counter are mutually exclusive + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CTR, + counter=self.ctr_128, initial_value=0) + + def test_initial_value_bytes_parameter(self): + # Same result as when passing an integer + cipher1 = AES.new(self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, + initial_value=b"\x00"*6+b"\xFF\xFF") + cipher2 = AES.new(self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, initial_value=0xFFFF) + pt = get_tag_random("plaintext", 65536) + self.assertEqual(cipher1.encrypt(pt), cipher2.encrypt(pt)) + + # Fail if the iv is too large + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + initial_value=b"5"*17) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, initial_value=b"5"*9) + + # Fail if the iv is too short + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + initial_value=b"5"*15) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, initial_value=b"5"*7) + + def test_iv_with_matching_length(self): + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + counter=Counter.new(120)) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_CTR, + counter=Counter.new(136)) + + def test_block_size_128(self): + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_block_size_64(self): + cipher = DES3.new(self.key_192, DES3.MODE_CTR, counter=self.ctr_64) + self.assertEqual(cipher.block_size, DES3.block_size) + + def test_unaligned_data_128(self): + plaintexts = [ b"7777777" ] * 100 + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_unaligned_data_64(self): + plaintexts = [ b"7777777" ] * 100 + cipher = DES3.new(self.key_192, AES.MODE_CTR, counter=self.ctr_64) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, AES.MODE_CTR, counter=self.ctr_64) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + cipher = DES3.new(self.key_192, AES.MODE_CTR, counter=self.ctr_64) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, AES.MODE_CTR, counter=self.ctr_64) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, 
self.key_128, AES.MODE_CTR, + 7, counter=self.ctr_128) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_CTR, + counter=self.ctr_128, unknown=7) + # But some are only known by the base cipher (e.g. use_aesni consumed by the AES module) + AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128, use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=self.ctr_128) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_wrap_around(self): + # Counter is only 8 bits, so we can only encrypt/decrypt 256 blocks (=4096 bytes) + counter = Counter.new(8, prefix=bchr(9) * 15) + max_bytes = 4096 + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + cipher.encrypt(b'9' * max_bytes) + self.assertRaises(OverflowError, cipher.encrypt, b'9') + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + self.assertRaises(OverflowError, cipher.encrypt, b'9' * (max_bytes + 1)) + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + cipher.decrypt(b'9' * max_bytes) + self.assertRaises(OverflowError, cipher.decrypt, b'9') + + cipher = AES.new(self.key_128, AES.MODE_CTR, counter=counter) + self.assertRaises(OverflowError, cipher.decrypt, b'9' * (max_bytes + 1)) + + def test_bytearray(self): + data = b"1" * 16 + iv = b"\x00" * 6 + b"\xFF\xFF" + + # Encrypt + cipher1 = AES.new(self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, + initial_value=iv) + ref1 = cipher1.encrypt(data) + + cipher2 = AES.new(self.key_128, AES.MODE_CTR, + nonce=bytearray(self.nonce_64), + initial_value=bytearray(iv)) + ref2 = cipher2.encrypt(bytearray(data)) + + self.assertEqual(ref1, ref2) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + cipher3 = AES.new(self.key_128, AES.MODE_CTR, + nonce=self.nonce_64, + initial_value=iv) + ref3 = cipher3.decrypt(data) + + cipher4 = AES.new(self.key_128, AES.MODE_CTR, + nonce=bytearray(self.nonce_64), + initial_value=bytearray(iv)) + ref4 = cipher4.decrypt(bytearray(data)) + + self.assertEqual(ref3, ref4) + + def test_very_long_data(self): + cipher = AES.new(b'A' * 32, AES.MODE_CTR, nonce=b'') + ct = cipher.encrypt(b'B' * 1000000) + digest = SHA256.new(ct).hexdigest() + self.assertEqual(digest, "96204fc470476561a3a8f3b6fe6d24be85c87510b638142d1d0fb90989f8a6a6") + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + ct = cipher.encrypt(pt) + + output = bytearray(128) + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + def test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + ct = cipher.encrypt(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = 
AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + LEN_PT = 128 + + pt = b'5' * LEN_PT + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + ct = cipher.encrypt(pt) + + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0' * LEN_PT) + + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0' * LEN_PT) + + shorter_output = bytearray(LEN_PT - 1) + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + cipher = AES.new(b'4'*16, AES.MODE_CTR, nonce=self.nonce_64) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +class SP800TestVectors(unittest.TestCase): + """Class exercising the CTR test vectors found in Section F.5 + of NIST SP 800-38A""" + + def test_aes_128(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '874d6191b620e3261bef6864990db6ce' +\ + '9806f66b7970fdff8617187bb9fffdff' +\ + '5ae4df3edbd5d35e5b4f09020db03eab' +\ + '1e031dda2fbe03d1792170a0f3009cee' + key = '2b7e151628aed2a6abf7158809cf4f3c' + counter = Counter.new(nbits=16, + prefix=unhexlify('f0f1f2f3f4f5f6f7f8f9fafbfcfd'), + initial_value=0xfeff) + + key = unhexlify(key) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_192(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '1abc932417521ca24f2b0459fe7e6e0b' +\ + '090339ec0aa6faefd5ccc2c6f4ce8e94' +\ + '1e36b26bd1ebc670d1bd1d665620abf7' +\ + '4f78a7f6d29809585a97daec58c6b050' + key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b' + counter = Counter.new(nbits=16, + prefix=unhexlify('f0f1f2f3f4f5f6f7f8f9fafbfcfd'), + initial_value=0xfeff) + + key = unhexlify(key) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + def test_aes_256(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '601ec313775789a5b7a7f504bbf3d228' +\ + 'f443e3ca4d62b59aca84e990cacaf5c5' +\ + '2b0930daa23de94ce87017ba2d84988d' +\ + 'dfc9c58db67aada613c2dd08457941a6' + key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4' + counter = Counter.new(nbits=16, + prefix=unhexlify('f0f1f2f3f4f5f6f7f8f9fafbfcfd'), + initial_value=0xfeff) + key = unhexlify(key) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + 
self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + +class RFC3686TestVectors(unittest.TestCase): + + # Each item is a test vector with: + # - plaintext + # - ciphertext + # - key (AES 128, 192 or 256 bits) + # - counter prefix (4 byte nonce + 8 byte nonce) + data = ( + ('53696e676c6520626c6f636b206d7367', + 'e4095d4fb7a7b3792d6175a3261311b8', + 'ae6852f8121067cc4bf7a5765577f39e', + '000000300000000000000000'), + ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f', + '5104a106168a72d9790d41ee8edad388eb2e1efc46da57c8fce630df9141be28', + '7e24067817fae0d743d6ce1f32539163', + '006cb6dbc0543b59da48d90b'), + ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20212223', + 'c1cf48a89f2ffdd9cf4652e9efdb72d74540a42bde6d7836d59a5ceaaef3105325b2072f', + '7691be035e5020a8ac6e618529f9a0dc', + '00e0017b27777f3f4a1786f0'), + ('53696e676c6520626c6f636b206d7367', + '4b55384fe259c9c84e7935a003cbe928', + '16af5b145fc9f579c175f93e3bfb0eed863d06ccfdb78515', + '0000004836733c147d6d93cb'), + ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f', + '453243fc609b23327edfaafa7131cd9f8490701c5ad4a79cfc1fe0ff42f4fb00', + '7c5cb2401b3dc33c19e7340819e0f69c678c3db8e6f6a91a', + '0096b03b020c6eadc2cb500d'), + ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20212223', + '96893fc55e5c722f540b7dd1ddf7e758d288bc95c69165884536c811662f2188abee0935', + '02bf391ee8ecb159b959617b0965279bf59b60a786d3e0fe', + '0007bdfd5cbd60278dcc0912'), + ('53696e676c6520626c6f636b206d7367', + '145ad01dbf824ec7560863dc71e3e0c0', + '776beff2851db06f4c8a0542c8696f6c6a81af1eec96b4d37fc1d689e6c1c104', + '00000060db5672c97aa8f0b2'), + ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f', + 'f05e231b3894612c49ee000b804eb2a9b8306b508f839d6a5530831d9344af1c', + 'f6d66d6bd52d59bb0796365879eff886c66dd51a5b6a99744b50590c87a23884', + '00faac24c1585ef15a43d875'), + ('000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20212223', + 'eb6c52821d0bbbf7ce7594462aca4faab407df866569fd07f48cc0b583d6071f1ec0e6b8', + 'ff7a617ce69148e4f1726e2f43581de2aa62d9f805532edff1eed687fb54153d', + '001cc5b751a51d70a1c11148') + ) + + bindata = [] + for tv in data: + bindata.append([unhexlify(x) for x in tv]) + + def runTest(self): + for pt, ct, key, prefix in self.bindata: + counter = Counter.new(32, prefix=prefix) + cipher = AES.new(key, AES.MODE_CTR, counter=counter) + result = cipher.encrypt(pt) + self.assertEqual(hexlify(ct), hexlify(result)) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(CtrTests) + tests += list_test_cases(SP800TestVectors) + tests += [ RFC3686TestVectors() ] + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20.py new file mode 100644 index 0000000..92c6f3c --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20.py @@ -0,0 +1,529 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. 
Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import os +import re +import unittest +from binascii import hexlify, unhexlify + +from Cryptodome.Util.py3compat import b, tobytes, bchr +from Cryptodome.Util.strxor import strxor_c +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Cipher import ChaCha20 + + +class ChaCha20Test(unittest.TestCase): + + def test_new_positive(self): + cipher = ChaCha20.new(key=b("0")*32, nonce=b"0"*8) + self.assertEqual(cipher.nonce, b"0" * 8) + cipher = ChaCha20.new(key=b("0")*32, nonce=b"0"*12) + self.assertEqual(cipher.nonce, b"0" * 12) + + def test_new_negative(self): + new = ChaCha20.new + self.assertRaises(TypeError, new) + self.assertRaises(TypeError, new, nonce=b("0")) + self.assertRaises(ValueError, new, nonce=b("0")*8, key=b("0")) + self.assertRaises(ValueError, new, nonce=b("0"), key=b("0")*32) + + def test_default_nonce(self): + cipher1 = ChaCha20.new(key=bchr(1) * 32) + cipher2 = ChaCha20.new(key=bchr(1) * 32) + self.assertEqual(len(cipher1.nonce), 8) + self.assertNotEqual(cipher1.nonce, cipher2.nonce) + + def test_nonce(self): + key = b'A' * 32 + + nonce1 = b'P' * 8 + cipher1 = ChaCha20.new(key=key, nonce=nonce1) + self.assertEqual(nonce1, cipher1.nonce) + + nonce2 = b'Q' * 12 + cipher2 = ChaCha20.new(key=key, nonce=nonce2) + self.assertEqual(nonce2, cipher2.nonce) + + def test_either_encrypt_or_decrypt(self): + """Verify that a cipher cannot be used for both decrypting and encrypting""" + + c1 = ChaCha20.new(key=b("5") * 32, nonce=b("6") * 8) + c1.encrypt(b("8")) + self.assertRaises(TypeError, c1.decrypt, b("9")) + + c2 = ChaCha20.new(key=b("5") * 32, nonce=b("6") * 8) + c2.decrypt(b("8")) + self.assertRaises(TypeError, c2.encrypt, b("9")) + + def test_round_trip(self): + pt = b("A") * 1024 + c1 = ChaCha20.new(key=b("5") * 32, nonce=b("6") * 8) + c2 = ChaCha20.new(key=b("5") * 32, nonce=b("6") * 8) + ct = c1.encrypt(pt) + self.assertEqual(c2.decrypt(ct), pt) + + self.assertEqual(c1.encrypt(b("")), b("")) + self.assertEqual(c2.decrypt(b("")), b("")) + + def test_streaming(self): + """Verify that an arbitrary number of bytes can be encrypted/decrypted""" + from Cryptodome.Hash import SHA1 + + segments = (1, 3, 5, 7, 11, 17, 23) + total = sum(segments) + + pt = b("") + while len(pt) < total: + pt += SHA1.new(pt).digest() + + cipher1 = ChaCha20.new(key=b("7") * 32, nonce=b("t") * 
8) + ct = cipher1.encrypt(pt) + + cipher2 = ChaCha20.new(key=b("7") * 32, nonce=b("t") * 8) + cipher3 = ChaCha20.new(key=b("7") * 32, nonce=b("t") * 8) + idx = 0 + for segment in segments: + self.assertEqual(cipher2.decrypt(ct[idx:idx+segment]), pt[idx:idx+segment]) + self.assertEqual(cipher3.encrypt(pt[idx:idx+segment]), ct[idx:idx+segment]) + idx += segment + + def test_seek(self): + cipher1 = ChaCha20.new(key=b("9") * 32, nonce=b("e") * 8) + + offset = 64 * 900 + 7 + pt = b("1") * 64 + + cipher1.encrypt(b("0") * offset) + ct1 = cipher1.encrypt(pt) + + cipher2 = ChaCha20.new(key=b("9") * 32, nonce=b("e") * 8) + cipher2.seek(offset) + ct2 = cipher2.encrypt(pt) + + self.assertEqual(ct1, ct2) + + def test_seek_tv(self): + # Test Vector #4, A.1 from + # http://tools.ietf.org/html/draft-nir-cfrg-chacha20-poly1305-04 + key = bchr(0) + bchr(255) + bchr(0) * 30 + nonce = bchr(0) * 8 + cipher = ChaCha20.new(key=key, nonce=nonce) + cipher.seek(64 * 2) + expected_key_stream = unhexlify(b( + "72d54dfbf12ec44b362692df94137f32" + "8fea8da73990265ec1bbbea1ae9af0ca" + "13b25aa26cb4a648cb9b9d1be65b2c09" + "24a66c54d545ec1b7374f4872e99f096" + )) + ct = cipher.encrypt(bchr(0) * len(expected_key_stream)) + self.assertEqual(expected_key_stream, ct) + + def test_rfc7539(self): + # from https://tools.ietf.org/html/rfc7539 Annex A.1 + # Each item is: key, nonce, block #, plaintext, ciphertext + tvs = [ + # Test Vector #1 + ( + "00"*32, + "00"*12, + 0, + "00"*16*4, + "76b8e0ada0f13d90405d6ae55386bd28" + "bdd219b8a08ded1aa836efcc8b770dc7" + "da41597c5157488d7724e03fb8d84a37" + "6a43b8f41518a11cc387b669b2ee6586" + ), + # Test Vector #2 + ( + "00"*31 + "01", + "00"*11 + "02", + 1, + "416e79207375626d697373696f6e2074" + "6f20746865204945544620696e74656e" + "6465642062792074686520436f6e7472" + "696275746f7220666f72207075626c69" + "636174696f6e20617320616c6c206f72" + "2070617274206f6620616e2049455446" + "20496e7465726e65742d447261667420" + "6f722052464320616e6420616e792073" + "746174656d656e74206d616465207769" + "7468696e2074686520636f6e74657874" + "206f6620616e20494554462061637469" + "7669747920697320636f6e7369646572" + "656420616e20224945544620436f6e74" + "7269627574696f6e222e205375636820" + "73746174656d656e747320696e636c75" + "6465206f72616c2073746174656d656e" + "747320696e2049455446207365737369" + "6f6e732c2061732077656c6c20617320" + "7772697474656e20616e6420656c6563" + "74726f6e696320636f6d6d756e696361" + "74696f6e73206d61646520617420616e" + "792074696d65206f7220706c6163652c" + "20776869636820617265206164647265" + "7373656420746f", + "a3fbf07df3fa2fde4f376ca23e827370" + "41605d9f4f4f57bd8cff2c1d4b7955ec" + "2a97948bd3722915c8f3d337f7d37005" + "0e9e96d647b7c39f56e031ca5eb6250d" + "4042e02785ececfa4b4bb5e8ead0440e" + "20b6e8db09d881a7c6132f420e527950" + "42bdfa7773d8a9051447b3291ce1411c" + "680465552aa6c405b7764d5e87bea85a" + "d00f8449ed8f72d0d662ab052691ca66" + "424bc86d2df80ea41f43abf937d3259d" + "c4b2d0dfb48a6c9139ddd7f76966e928" + "e635553ba76c5c879d7b35d49eb2e62b" + "0871cdac638939e25e8a1e0ef9d5280f" + "a8ca328b351c3c765989cbcf3daa8b6c" + "cc3aaf9f3979c92b3720fc88dc95ed84" + "a1be059c6499b9fda236e7e818b04b0b" + "c39c1e876b193bfe5569753f88128cc0" + "8aaa9b63d1a16f80ef2554d7189c411f" + "5869ca52c5b83fa36ff216b9c1d30062" + "bebcfd2dc5bce0911934fda79a86f6e6" + "98ced759c3ff9b6477338f3da4f9cd85" + "14ea9982ccafb341b2384dd902f3d1ab" + "7ac61dd29c6f21ba5b862f3730e37cfd" + "c4fd806c22f221" + ), + # Test Vector #3 + ( + "1c9240a5eb55d38af333888604f6b5f0" + "473917c1402b80099dca5cbc207075c0", + "00"*11 + "02", + 42, + 
"2754776173206272696c6c69672c2061" + "6e642074686520736c6974687920746f" + "7665730a446964206779726520616e64" + "2067696d626c6520696e207468652077" + "6162653a0a416c6c206d696d73792077" + "6572652074686520626f726f676f7665" + "732c0a416e6420746865206d6f6d6520" + "7261746873206f757467726162652e", + "62e6347f95ed87a45ffae7426f27a1df" + "5fb69110044c0d73118effa95b01e5cf" + "166d3df2d721caf9b21e5fb14c616871" + "fd84c54f9d65b283196c7fe4f60553eb" + "f39c6402c42234e32a356b3e764312a6" + "1a5532055716ead6962568f87d3f3f77" + "04c6a8d1bcd1bf4d50d6154b6da731b1" + "87b58dfd728afa36757a797ac188d1" + ) + ] + + for tv in tvs: + key = unhexlify(tv[0]) + nonce = unhexlify(tv[1]) + offset = tv[2] * 64 + pt = unhexlify(tv[3]) + ct_expect = unhexlify(tv[4]) + + cipher = ChaCha20.new(key=key, nonce=nonce) + if offset != 0: + cipher.seek(offset) + ct = cipher.encrypt(pt) + assert(ct == ct_expect) + + +class XChaCha20Test(unittest.TestCase): + + # From https://tools.ietf.org/html/draft-arciszewski-xchacha-03 + + def test_hchacha20(self): + # Section 2.2.1 + + from Cryptodome.Cipher.ChaCha20 import _HChaCha20 + + key = b"00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f:10:11:12:13:14:15:16:17:18:19:1a:1b:1c:1d:1e:1f" + key = unhexlify(key.replace(b":", b"")) + + nonce = b"00:00:00:09:00:00:00:4a:00:00:00:00:31:41:59:27" + nonce = unhexlify(nonce.replace(b":", b"")) + + subkey = _HChaCha20(key, nonce) + + expected = b"82413b42 27b27bfe d30e4250 8a877d73 a0f9e4d5 8a74a853 c12ec413 26d3ecdc" + expected = unhexlify(expected.replace(b" ", b"")) + + self.assertEqual(subkey, expected) + + def test_nonce(self): + key = b'A' * 32 + nonce = b'P' * 24 + cipher = ChaCha20.new(key=key, nonce=nonce) + self.assertEqual(nonce, cipher.nonce) + + def test_encrypt(self): + # Section A.3.2 + + pt = b""" + 5468652064686f6c65202870726f6e6f756e6365642022646f6c652229206973 + 20616c736f206b6e6f776e2061732074686520417369617469632077696c6420 + 646f672c2072656420646f672c20616e642077686973746c696e6720646f672e + 2049742069732061626f7574207468652073697a65206f662061204765726d61 + 6e20736865706865726420627574206c6f6f6b73206d6f7265206c696b652061 + 206c6f6e672d6c656767656420666f782e205468697320686967686c7920656c + 757369766520616e6420736b696c6c6564206a756d70657220697320636c6173 + 736966696564207769746820776f6c7665732c20636f796f7465732c206a6163 + 6b616c732c20616e6420666f78657320696e20746865207461786f6e6f6d6963 + 2066616d696c792043616e696461652e""" + pt = unhexlify(pt.replace(b"\n", b"").replace(b" ", b"")) + + key = unhexlify(b"808182838485868788898a8b8c8d8e8f909192939495969798999a9b9c9d9e9f") + iv = unhexlify(b"404142434445464748494a4b4c4d4e4f5051525354555658") + + ct = b""" + 7d0a2e6b7f7c65a236542630294e063b7ab9b555a5d5149aa21e4ae1e4fbce87 + ecc8e08a8b5e350abe622b2ffa617b202cfad72032a3037e76ffdcdc4376ee05 + 3a190d7e46ca1de04144850381b9cb29f051915386b8a710b8ac4d027b8b050f + 7cba5854e028d564e453b8a968824173fc16488b8970cac828f11ae53cabd201 + 12f87107df24ee6183d2274fe4c8b1485534ef2c5fbc1ec24bfc3663efaa08bc + 047d29d25043532db8391a8a3d776bf4372a6955827ccb0cdd4af403a7ce4c63 + d595c75a43e045f0cce1f29c8b93bd65afc5974922f214a40b7c402cdb91ae73 + c0b63615cdad0480680f16515a7ace9d39236464328a37743ffc28f4ddb324f4 + d0f5bbdc270c65b1749a6efff1fbaa09536175ccd29fb9e6057b307320d31683 + 8a9c71f70b5b5907a66f7ea49aadc409""" + ct = unhexlify(ct.replace(b"\n", b"").replace(b" ", b"")) + + cipher = ChaCha20.new(key=key, nonce=iv) + cipher.seek(64) # Counter = 1 + ct_test = cipher.encrypt(pt) + self.assertEqual(ct, ct_test) + + +class ByteArrayTest(unittest.TestCase): + 
"""Verify we can encrypt or decrypt bytearrays""" + + def runTest(self): + + data = b"0123" + key = b"9" * 32 + nonce = b"t" * 8 + + # Encryption + data_ba = bytearray(data) + key_ba = bytearray(key) + nonce_ba = bytearray(nonce) + + cipher1 = ChaCha20.new(key=key, nonce=nonce) + ct = cipher1.encrypt(data) + + cipher2 = ChaCha20.new(key=key_ba, nonce=nonce_ba) + key_ba[:1] = b'\xFF' + nonce_ba[:1] = b'\xFF' + ct_test = cipher2.encrypt(data_ba) + + self.assertEqual(ct, ct_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decryption + key_ba = bytearray(key) + nonce_ba = bytearray(nonce) + ct_ba = bytearray(ct) + + cipher3 = ChaCha20.new(key=key_ba, nonce=nonce_ba) + key_ba[:1] = b'\xFF' + nonce_ba[:1] = b'\xFF' + pt_test = cipher3.decrypt(ct_ba) + + self.assertEqual(data, pt_test) + + +class MemoryviewTest(unittest.TestCase): + """Verify we can encrypt or decrypt bytearrays""" + + def runTest(self): + + data = b"0123" + key = b"9" * 32 + nonce = b"t" * 8 + + # Encryption + data_mv = memoryview(bytearray(data)) + key_mv = memoryview(bytearray(key)) + nonce_mv = memoryview(bytearray(nonce)) + + cipher1 = ChaCha20.new(key=key, nonce=nonce) + ct = cipher1.encrypt(data) + + cipher2 = ChaCha20.new(key=key_mv, nonce=nonce_mv) + key_mv[:1] = b'\xFF' + nonce_mv[:1] = b'\xFF' + ct_test = cipher2.encrypt(data_mv) + + self.assertEqual(ct, ct_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decryption + key_mv = memoryview(bytearray(key)) + nonce_mv = memoryview(bytearray(nonce)) + ct_mv = memoryview(bytearray(ct)) + + cipher3 = ChaCha20.new(key=key_mv, nonce=nonce_mv) + key_mv[:1] = b'\xFF' + nonce_mv[:1] = b'\xFF' + pt_test = cipher3.decrypt(ct_mv) + + self.assertEqual(data, pt_test) + + +class ChaCha20_AGL_NIR(unittest.TestCase): + + # From http://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-04 + # and http://tools.ietf.org/html/draft-nir-cfrg-chacha20-poly1305-04 + tv = [ + ( "00" * 32, + "00" * 8, + "76b8e0ada0f13d90405d6ae55386bd28bdd219b8a08ded1aa836efcc" + "8b770dc7da41597c5157488d7724e03fb8d84a376a43b8f41518a11c" + "c387b669b2ee6586" + "9f07e7be5551387a98ba977c732d080d" + "cb0f29a048e3656912c6533e32ee7aed" + "29b721769ce64e43d57133b074d839d5" + "31ed1f28510afb45ace10a1f4b794d6f" + ), + ( "00" * 31 + "01", + "00" * 8, + "4540f05a9f1fb296d7736e7b208e3c96eb4fe1834688d2604f450952" + "ed432d41bbe2a0b6ea7566d2a5d1e7e20d42af2c53d792b1c43fea81" + "7e9ad275ae546963" + "3aeb5224ecf849929b9d828db1ced4dd" + "832025e8018b8160b82284f3c949aa5a" + "8eca00bbb4a73bdad192b5c42f73f2fd" + "4e273644c8b36125a64addeb006c13a0" + ), + ( "00" * 32, + "00" * 7 + "01", + "de9cba7bf3d69ef5e786dc63973f653a0b49e015adbff7134fcb7df1" + "37821031e85a050278a7084527214f73efc7fa5b5277062eb7a0433e" + "445f41e3" + ), + ( "00" * 32, + "01" + "00" * 7, + "ef3fdfd6c61578fbf5cf35bd3dd33b8009631634d21e42ac33960bd1" + "38e50d32111e4caf237ee53ca8ad6426194a88545ddc497a0b466e7d" + "6bbdb0041b2f586b" + ), + ( "000102030405060708090a0b0c0d0e0f101112131415161718191a1b" + "1c1d1e1f", + "0001020304050607", + "f798a189f195e66982105ffb640bb7757f579da31602fc93ec01ac56" + "f85ac3c134a4547b733b46413042c9440049176905d3be59ea1c53f1" + "5916155c2be8241a38008b9a26bc35941e2444177c8ade6689de9526" + "4986d95889fb60e84629c9bd9a5acb1cc118be563eb9b3a4a472f82e" + "09a7e778492b562ef7130e88dfe031c79db9d4f7c7a899151b9a4750" + "32b63fc385245fe054e3dd5a97a5f576fe064025d3ce042c566ab2c5" + "07b138db853e3d6959660996546cc9c4a6eafdc777c040d70eaf46f7" + "6dad3979e5c5360c3317166a1c894c94a371876a94df7628fe4eaaf2" + 
"ccb27d5aaae0ad7ad0f9d4b6ad3b54098746d4524d38407a6deb3ab7" + "8fab78c9" + ), + ( "00" * 32, + "00" * 7 + "02", + "c2c64d378cd536374ae204b9ef933fcd" + "1a8b2288b3dfa49672ab765b54ee27c7" + "8a970e0e955c14f3a88e741b97c286f7" + "5f8fc299e8148362fa198a39531bed6d" + ), + ] + + def runTest(self): + for (key, nonce, stream) in self.tv: + c = ChaCha20.new(key=unhexlify(b(key)), nonce=unhexlify(b(nonce))) + ct = unhexlify(b(stream)) + pt = b("\x00") * len(ct) + self.assertEqual(c.encrypt(pt), ct) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + key = b'4' * 32 + nonce = b'5' * 8 + cipher = ChaCha20.new(key=key, nonce=nonce) + + pt = b'5' * 300 + ct = cipher.encrypt(pt) + + output = bytearray(len(pt)) + cipher = ChaCha20.new(key=key, nonce=nonce) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = ChaCha20.new(key=key, nonce=nonce) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(len(pt))) + cipher = ChaCha20.new(key=key, nonce=nonce) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = ChaCha20.new(key=key, nonce=nonce) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + cipher = ChaCha20.new(key=key, nonce=nonce) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*len(pt)) + + cipher = ChaCha20.new(key=key, nonce=nonce) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*len(pt)) + + shorter_output = bytearray(len(pt) - 1) + + cipher = ChaCha20.new(key=key, nonce=nonce) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + + cipher = ChaCha20.new(key=key, nonce=nonce) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(ChaCha20Test) + tests += list_test_cases(XChaCha20Test) + tests.append(ChaCha20_AGL_NIR()) + tests.append(ByteArrayTest()) + tests.append(MemoryviewTest()) + tests.append(TestOutput()) + + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20_Poly1305.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20_Poly1305.py new file mode 100644 index 0000000..495028a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_ChaCha20_Poly1305.py @@ -0,0 +1,776 @@ +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. 
+# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors_wycheproof +from Cryptodome.Util.py3compat import tobytes +from Cryptodome.Cipher import ChaCha20_Poly1305 +from Cryptodome.Hash import SHAKE128 + +from Cryptodome.Util.strxor import strxor + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class ChaCha20Poly1305Tests(unittest.TestCase): + + key_256 = get_tag_random("key_256", 32) + nonce_96 = get_tag_random("nonce_96", 12) + data_128 = get_tag_random("data_128", 16) + + def test_loopback(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # Nonce can only be 8 or 12 bytes + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=b'H' * 8) + self.assertEqual(len(cipher.nonce), 8) + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=b'H' * 12) + self.assertEqual(len(cipher.nonce), 12) + + # If not passed, the nonce is created randomly + cipher = ChaCha20_Poly1305.new(key=self.key_256) + nonce1 = cipher.nonce + cipher = ChaCha20_Poly1305.new(key=self.key_256) + nonce2 = cipher.nonce + self.assertEqual(len(nonce1), 12) + self.assertNotEqual(nonce1, nonce2) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data_128) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + self.assertEqual(ct, cipher.encrypt(self.data_128)) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, + ChaCha20_Poly1305.new, + key=self.key_256, + nonce=u'test12345678') + + def test_nonce_length(self): + # nonce can only be 8 or 12 bytes long + self.assertRaises(ValueError, + ChaCha20_Poly1305.new, + key=self.key_256, + nonce=b'0' * 7) + self.assertRaises(ValueError, + ChaCha20_Poly1305.new, + key=self.key_256, + nonce=b'') + + def test_block_size(self): + # Not based on block ciphers + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + self.assertFalse(hasattr(cipher, 'block_size')) + + def test_nonce_attribute(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, a 12 bytes long nonce is randomly generated + nonce1 = ChaCha20_Poly1305.new(key=self.key_256).nonce + nonce2 = ChaCha20_Poly1305.new(key=self.key_256).nonce + 
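# A hypothetical application-side sketch of the nonce contract exercised by
# the tests above (illustrative only; demo_* names are not part of the vendored
# suite, and only calls already used in these tests appear): when no nonce is
# passed, ChaCha20_Poly1305 generates a random 12-byte one, and the caller must
# store it, together with the 16-byte tag, next to the ciphertext so the
# receiving side can rebuild the cipher.
from Cryptodome.Cipher import ChaCha20_Poly1305
from Cryptodome.Random import get_random_bytes

demo_key = get_random_bytes(32)                  # 256-bit key

enc = ChaCha20_Poly1305.new(key=demo_key)        # nonce chosen randomly
demo_ct, demo_tag = enc.encrypt_and_digest(b"attack at dawn")
blob = enc.nonce + demo_tag + demo_ct            # nonce || tag || ciphertext

demo_nonce, demo_tag, demo_ct = blob[:12], blob[12:28], blob[28:]
dec = ChaCha20_Poly1305.new(key=demo_key, nonce=demo_nonce)
assert dec.decrypt_and_verify(demo_ct, demo_tag) == b"attack at dawn"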
self.assertEqual(len(nonce1), 12) + self.assertNotEqual(nonce1, nonce2) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, + ChaCha20_Poly1305.new, + key=self.key_256, + param=9) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_data_must_be_bytes(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_mac_len(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data_128) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Cryptodome.Util.strxor import strxor_c + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data_128) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_message_chunks(self): + # Validate that both associated data and plaintext/ciphertext + # can be broken up in chunks of arbitrary length + + auth_data = get_tag_random("authenticated data", 127) + plaintext = get_tag_random("plaintext", 127) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(auth_data) + ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) + + def break_up(data, chunk_length): + return [data[i:i+chunk_length] for i in range(0, len(data), + chunk_length)] + + # Encryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + pt2 = b"" + for chunk in break_up(ciphertext, chunk_length): + pt2 += cipher.decrypt(chunk) + self.assertEqual(plaintext, pt2) + cipher.verify(ref_mac) + + # Decryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + ct2 = b"" + for chunk in break_up(plaintext, chunk_length): + ct2 += cipher.encrypt(chunk) + self.assertEqual(ciphertext, ct2) + self.assertEqual(cipher.digest(), ref_mac) + + def test_bytearray(self): + + # Encrypt + key_ba = bytearray(self.key_256) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data_128) + data_ba = bytearray(self.data_128) + + cipher1 = ChaCha20_Poly1305.new(key=self.key_256, + 
nonce=self.nonce_96) + cipher1.update(self.data_128) + ct = cipher1.encrypt(self.data_128) + tag = cipher1.digest() + + cipher2 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + key_ba[:3] = b'\xFF\xFF\xFF' + nonce_ba[:3] = b'\xFF\xFF\xFF' + cipher2.update(header_ba) + header_ba[:3] = b'\xFF\xFF\xFF' + ct_test = cipher2.encrypt(data_ba) + data_ba[:3] = b'\x99\x99\x99' + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_ba = bytearray(self.key_256) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data_128) + ct_ba = bytearray(ct) + tag_ba = bytearray(tag) + del data_ba + + cipher3 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + key_ba[:3] = b'\xFF\xFF\xFF' + nonce_ba[:3] = b'\xFF\xFF\xFF' + cipher3.update(header_ba) + header_ba[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt(ct_ba) + ct_ba[:3] = b'\xFF\xFF\xFF' + cipher3.verify(tag_ba) + + self.assertEqual(pt_test, self.data_128) + + def test_memoryview(self): + + # Encrypt + key_mv = memoryview(bytearray(self.key_256)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data_128)) + data_mv = memoryview(bytearray(self.data_128)) + + cipher1 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher1.update(self.data_128) + ct = cipher1.encrypt(self.data_128) + tag = cipher1.digest() + + cipher2 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + key_mv[:3] = b'\xFF\xFF\xFF' + nonce_mv[:3] = b'\xFF\xFF\xFF' + cipher2.update(header_mv) + header_mv[:3] = b'\xFF\xFF\xFF' + ct_test = cipher2.encrypt(data_mv) + data_mv[:3] = b'\x99\x99\x99' + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_mv = memoryview(bytearray(self.key_256)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data_128)) + ct_mv = memoryview(bytearray(ct)) + tag_mv = memoryview(bytearray(tag)) + del data_mv + + cipher3 = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + key_mv[:3] = b'\xFF\xFF\xFF' + nonce_mv[:3] = b'\xFF\xFF\xFF' + cipher3.update(header_mv) + header_mv[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt(ct_mv) + ct_mv[:3] = b'\x99\x99\x99' + cipher3.verify(tag_mv) + + self.assertEqual(pt_test, self.data_128) + + +class XChaCha20Poly1305Tests(unittest.TestCase): + + def test_nonce(self): + # Nonce can only be 24 bytes + cipher = ChaCha20_Poly1305.new(key=b'Y' * 32, + nonce=b'H' * 24) + self.assertEqual(len(cipher.nonce), 24) + self.assertEqual(cipher.nonce, b'H' * 24) + + def test_encrypt(self): + # From https://tools.ietf.org/html/draft-arciszewski-xchacha-03 + # Section A.3.1 + + pt = b""" + 4c616469657320616e642047656e746c656d656e206f662074686520636c6173 + 73206f66202739393a204966204920636f756c64206f6666657220796f75206f + 6e6c79206f6e652074697020666f7220746865206675747572652c2073756e73 + 637265656e20776f756c642062652069742e""" + pt = unhexlify(pt.replace(b"\n", b"").replace(b" ", b"")) + + aad = unhexlify(b"50515253c0c1c2c3c4c5c6c7") + key = unhexlify(b"808182838485868788898a8b8c8d8e8f909192939495969798999a9b9c9d9e9f") + iv = unhexlify(b"404142434445464748494a4b4c4d4e4f5051525354555657") + + ct = b""" + bd6d179d3e83d43b9576579493c0e939572a1700252bfaccbed2902c21396cbb + 731c7f1b0b4aa6440bf3a82f4eda7e39ae64c6708c54c216cb96b72e1213b452 + 
2f8c9ba40db5d945b11b69b982c1bb9e3f3fac2bc369488f76b2383565d3fff9 + 21f9664c97637da9768812f615c68b13b52e""" + ct = unhexlify(ct.replace(b"\n", b"").replace(b" ", b"")) + + tag = unhexlify(b"c0875924c1c7987947deafd8780acf49") + + cipher = ChaCha20_Poly1305.new(key=key, nonce=iv) + cipher.update(aad) + ct_test, tag_test = cipher.encrypt_and_digest(pt) + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + + cipher = ChaCha20_Poly1305.new(key=key, nonce=iv) + cipher.update(aad) + cipher.decrypt_and_verify(ct, tag) + + +class ChaCha20Poly1305FSMTests(unittest.TestCase): + + key_256 = get_tag_random("key_256", 32) + nonce_96 = get_tag_random("nonce_96", 12) + data_128 = get_tag_random("data_128", 16) + + def test_valid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + # Verify path INIT->ENCRYPT->DIGEST + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + # Verify path INIT->DECRYPT->VERIFY + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + # Verify path INIT->UPDATE->DIGEST + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + cipher.verify(mac) + + def test_valid_full_path(self): + # Fixed authenticated data, fixed plaintext + # Verify path INIT->UPDATE->ENCRYPT->DIGEST + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + # Verify path INIT->UPDATE->DECRYPT->VERIFY + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + mac = cipher.digest() + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_encrypt_or_decrypt(self): + for method_name in "encrypt", "decrypt": + for auth_data in (None, b"333", self.data_128, + self.data_128 + b"3"): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + if auth_data is not None: + cipher.update(auth_data) + method = getattr(cipher, method_name) + method(self.data_128) + method(self.data_128) + method(self.data_128) + method(self.data_128) + + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # encrypt_and_digest + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + 
cipher.update(self.data_128) + ct, mac = cipher.encrypt_and_digest(self.data_128) + + # decrypt_and_verify + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.update(self.data_128) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data_128, pt) + + def test_invalid_mixing_encrypt_decrypt(self): + # Once per method, with or without assoc. data + for method1_name, method2_name in (("encrypt", "decrypt"), + ("decrypt", "encrypt")): + for assoc_data_present in (True, False): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + if assoc_data_present: + cipher.update(self.data_128) + getattr(cipher, method1_name)(self.data_128) + self.assertRaises(TypeError, getattr(cipher, method2_name), + self.data_128) + + def test_invalid_encrypt_or_update_after_digest(self): + for method_name in "encrypt", "update": + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.encrypt(self.data_128) + cipher.digest() + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data_128) + + def test_invalid_decrypt_or_update_after_verify(self): + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + for method_name in "decrypt", "update": + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + cipher = ChaCha20_Poly1305.new(key=self.key_256, + nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + +def compact(x): + return unhexlify(x.replace(" ", "").replace(":", "")) + + +class TestVectorsRFC(unittest.TestCase): + """Test cases from RFC7539""" + + # AAD, PT, CT, MAC, KEY, NONCE + test_vectors_hex = [ + ( '50 51 52 53 c0 c1 c2 c3 c4 c5 c6 c7', + '4c 61 64 69 65 73 20 61 6e 64 20 47 65 6e 74 6c' + '65 6d 65 6e 20 6f 66 20 74 68 65 20 63 6c 61 73' + '73 20 6f 66 20 27 39 39 3a 20 49 66 20 49 20 63' + '6f 75 6c 64 20 6f 66 66 65 72 20 79 6f 75 20 6f' + '6e 6c 79 20 6f 6e 65 20 74 69 70 20 66 6f 72 20' + '74 68 65 20 66 75 74 75 72 65 2c 20 73 75 6e 73' + '63 72 65 65 6e 20 77 6f 75 6c 64 20 62 65 20 69' + '74 2e', + 'd3 1a 8d 34 64 8e 60 db 7b 86 af bc 53 ef 7e c2' + 'a4 ad ed 51 29 6e 08 fe a9 e2 b5 a7 36 ee 62 d6' + '3d be a4 5e 8c a9 67 12 82 fa fb 69 da 92 72 8b' + '1a 71 de 0a 9e 06 0b 29 05 d6 a5 b6 7e cd 3b 36' + '92 dd bd 7f 2d 77 8b 8c 98 03 ae e3 28 09 1b 58' + 'fa b3 24 e4 fa d6 75 94 55 85 80 8b 48 31 d7 bc' + '3f f4 de f0 8e 4b 7a 9d e5 76 d2 65 86 ce c6 4b' + '61 16', + '1a:e1:0b:59:4f:09:e2:6a:7e:90:2e:cb:d0:60:06:91', + '80 81 82 83 84 85 86 87 88 89 8a 8b 8c 8d 8e 8f' + '90 91 92 93 94 95 96 97 98 99 9a 9b 9c 9d 9e 9f', + '07 00 00 00' + '40 41 42 43 44 45 46 47', + ), + ( 'f3 33 88 86 00 00 00 00 00 00 4e 91', + '49 6e 74 65 72 6e 65 74 2d 44 72 61 66 74 73 20' + '61 72 65 20 64 72 61 66 74 20 64 6f 63 75 6d 65' + '6e 74 73 20 76 61 6c 69 64 20 66 6f 72 20 61 20' + '6d 61 78 69 6d 75 6d 20 6f 66 20 73 69 78 20 6d' + '6f 6e 74 68 73 20 61 6e 64 20 6d 61 79 20 62 65' + '20 75 70 64 61 
74 65 64 2c 20 72 65 70 6c 61 63' + '65 64 2c 20 6f 72 20 6f 62 73 6f 6c 65 74 65 64' + '20 62 79 20 6f 74 68 65 72 20 64 6f 63 75 6d 65' + '6e 74 73 20 61 74 20 61 6e 79 20 74 69 6d 65 2e' + '20 49 74 20 69 73 20 69 6e 61 70 70 72 6f 70 72' + '69 61 74 65 20 74 6f 20 75 73 65 20 49 6e 74 65' + '72 6e 65 74 2d 44 72 61 66 74 73 20 61 73 20 72' + '65 66 65 72 65 6e 63 65 20 6d 61 74 65 72 69 61' + '6c 20 6f 72 20 74 6f 20 63 69 74 65 20 74 68 65' + '6d 20 6f 74 68 65 72 20 74 68 61 6e 20 61 73 20' + '2f e2 80 9c 77 6f 72 6b 20 69 6e 20 70 72 6f 67' + '72 65 73 73 2e 2f e2 80 9d', + '64 a0 86 15 75 86 1a f4 60 f0 62 c7 9b e6 43 bd' + '5e 80 5c fd 34 5c f3 89 f1 08 67 0a c7 6c 8c b2' + '4c 6c fc 18 75 5d 43 ee a0 9e e9 4e 38 2d 26 b0' + 'bd b7 b7 3c 32 1b 01 00 d4 f0 3b 7f 35 58 94 cf' + '33 2f 83 0e 71 0b 97 ce 98 c8 a8 4a bd 0b 94 81' + '14 ad 17 6e 00 8d 33 bd 60 f9 82 b1 ff 37 c8 55' + '97 97 a0 6e f4 f0 ef 61 c1 86 32 4e 2b 35 06 38' + '36 06 90 7b 6a 7c 02 b0 f9 f6 15 7b 53 c8 67 e4' + 'b9 16 6c 76 7b 80 4d 46 a5 9b 52 16 cd e7 a4 e9' + '90 40 c5 a4 04 33 22 5e e2 82 a1 b0 a0 6c 52 3e' + 'af 45 34 d7 f8 3f a1 15 5b 00 47 71 8c bc 54 6a' + '0d 07 2b 04 b3 56 4e ea 1b 42 22 73 f5 48 27 1a' + '0b b2 31 60 53 fa 76 99 19 55 eb d6 31 59 43 4e' + 'ce bb 4e 46 6d ae 5a 10 73 a6 72 76 27 09 7a 10' + '49 e6 17 d9 1d 36 10 94 fa 68 f0 ff 77 98 71 30' + '30 5b ea ba 2e da 04 df 99 7b 71 4d 6c 6f 2c 29' + 'a6 ad 5c b4 02 2b 02 70 9b', + 'ee ad 9d 67 89 0c bb 22 39 23 36 fe a1 85 1f 38', + '1c 92 40 a5 eb 55 d3 8a f3 33 88 86 04 f6 b5 f0' + '47 39 17 c1 40 2b 80 09 9d ca 5c bc 20 70 75 c0', + '00 00 00 00 01 02 03 04 05 06 07 08', + ) + ] + + test_vectors = [[unhexlify(x.replace(" ", "").replace(":", "")) for x in tv] for tv in test_vectors_hex] + + def runTest(self): + for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: + # Encrypt + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + cipher.update(assoc_data) + ct2, mac2 = cipher.encrypt_and_digest(pt) + self.assertEqual(ct, ct2) + self.assertEqual(mac, mac2) + + # Decrypt + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + cipher.update(assoc_data) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" + + def load_tests(self, filename): + + def filter_tag(group): + return group['tagSize'] // 8 + + def filter_algo(root): + return root['algorithm'] + + result = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + filename, + "Wycheproof ChaCha20-Poly1305", + root_tag={'algo': filter_algo}, + group_tag={'tag_size': filter_tag}) + return result + + def setUp(self): + self.tv = [] + self.tv.extend(self.load_tests("chacha20_poly1305_test.json")) + self.tv.extend(self.load_tests("xchacha20_poly1305_test.json")) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt %s Test #%s" % (tv.algo, tv.id) + + try: + cipher = ChaCha20_Poly1305.new(key=tv.key, nonce=tv.iv) + except ValueError as e: + assert len(tv.iv) not in (8, 12) and "Nonce must be" in str(e) + return + + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(ct, tv.ct) + self.assertEqual(tag, 
tv.tag) + self.warn(tv) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt %s Test #%s" % (tv.algo, tv.id) + + try: + cipher = ChaCha20_Poly1305.new(key=tv.key, nonce=tv.iv) + except ValueError as e: + assert len(tv.iv) not in (8, 12) and "Nonce must be" in str(e) + return + + cipher.update(tv.aad) + try: + pt = cipher.decrypt_and_verify(tv.ct, tv.tag) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + self.warn(tv) + + def test_corrupt_decrypt(self, tv): + self._id = "Wycheproof Corrupt Decrypt ChaCha20-Poly1305 Test #" + str(tv.id) + if len(tv.iv) == 0 or len(tv.ct) < 1: + return + cipher = ChaCha20_Poly1305.new(key=tv.key, nonce=tv.iv) + cipher.update(tv.aad) + ct_corrupt = strxor(tv.ct, b"\x00" * (len(tv.ct) - 1) + b"\x01") + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct_corrupt, tv.tag) + + def runTest(self): + + for tv in self.tv: + self.test_encrypt(tv) + self.test_decrypt(tv) + self.test_corrupt_decrypt(tv) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + key = b'4' * 32 + nonce = b'5' * 12 + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + + pt = b'5' * 16 + ct = cipher.encrypt(pt) + + output = bytearray(16) + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(16)) + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(7) + + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + + cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(ChaCha20Poly1305Tests) + tests += list_test_cases(XChaCha20Poly1305Tests) + tests += list_test_cases(ChaCha20Poly1305FSMTests) + tests += [TestVectorsRFC()] + tests += [TestVectorsWycheproof(wycheproof_warnings)] + tests += [TestOutput()] + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_DES.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_DES.py new file mode 100644 index 0000000..df1313a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_DES.py @@ -0,0 +1,374 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/DES.py: Self-test for the (Single) DES cipher +# +# Written in 2008 by Dwayne C.
Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Cipher.DES""" + +import unittest + +from Cryptodome.Cipher import DES + +# This is a list of (plaintext, ciphertext, key, description) tuples. +SP800_17_B1_KEY = '01' * 8 +SP800_17_B2_PT = '00' * 8 +test_data = [ + # Test vectors from Appendix A of NIST SP 800-17 + # "Modes of Operation Validation System (MOVS): Requirements and Procedures" + # http://csrc.nist.gov/publications/nistpubs/800-17/800-17.pdf + + # Appendix A - "Sample Round Outputs for the DES" + ('0000000000000000', '82dcbafbdeab6602', '10316e028c8f3b4a', + "NIST SP800-17 A"), + + # Table B.1 - Variable Plaintext Known Answer Test + ('8000000000000000', '95f8a5e5dd31d900', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #0'), + ('4000000000000000', 'dd7f121ca5015619', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #1'), + ('2000000000000000', '2e8653104f3834ea', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #2'), + ('1000000000000000', '4bd388ff6cd81d4f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #3'), + ('0800000000000000', '20b9e767b2fb1456', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #4'), + ('0400000000000000', '55579380d77138ef', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #5'), + ('0200000000000000', '6cc5defaaf04512f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #6'), + ('0100000000000000', '0d9f279ba5d87260', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #7'), + ('0080000000000000', 'd9031b0271bd5a0a', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #8'), + ('0040000000000000', '424250b37c3dd951', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #9'), + ('0020000000000000', 'b8061b7ecd9a21e5', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #10'), + ('0010000000000000', 'f15d0f286b65bd28', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #11'), + ('0008000000000000', 'add0cc8d6e5deba1', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #12'), + ('0004000000000000', 'e6d5f82752ad63d1', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #13'), + ('0002000000000000', 'ecbfe3bd3f591a5e', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #14'), + ('0001000000000000', 'f356834379d165cd', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #15'), + ('0000800000000000', '2b9f982f20037fa9', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #16'), + ('0000400000000000', '889de068a16f0be6', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #17'), + ('0000200000000000', 'e19e275d846a1298', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #18'), + ('0000100000000000', '329a8ed523d71aec', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #19'), + ('0000080000000000', 'e7fce22557d23c97', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #20'), + ('0000040000000000', '12a9f5817ff2d65d', SP800_17_B1_KEY, + 'NIST 
SP800-17 B.1 #21'), + ('0000020000000000', 'a484c3ad38dc9c19', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #22'), + ('0000010000000000', 'fbe00a8a1ef8ad72', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #23'), + ('0000008000000000', '750d079407521363', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #24'), + ('0000004000000000', '64feed9c724c2faf', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #25'), + ('0000002000000000', 'f02b263b328e2b60', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #26'), + ('0000001000000000', '9d64555a9a10b852', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #27'), + ('0000000800000000', 'd106ff0bed5255d7', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #28'), + ('0000000400000000', 'e1652c6b138c64a5', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #29'), + ('0000000200000000', 'e428581186ec8f46', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #30'), + ('0000000100000000', 'aeb5f5ede22d1a36', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #31'), + ('0000000080000000', 'e943d7568aec0c5c', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #32'), + ('0000000040000000', 'df98c8276f54b04b', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #33'), + ('0000000020000000', 'b160e4680f6c696f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #34'), + ('0000000010000000', 'fa0752b07d9c4ab8', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #35'), + ('0000000008000000', 'ca3a2b036dbc8502', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #36'), + ('0000000004000000', '5e0905517bb59bcf', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #37'), + ('0000000002000000', '814eeb3b91d90726', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #38'), + ('0000000001000000', '4d49db1532919c9f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #39'), + ('0000000000800000', '25eb5fc3f8cf0621', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #40'), + ('0000000000400000', 'ab6a20c0620d1c6f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #41'), + ('0000000000200000', '79e90dbc98f92cca', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #42'), + ('0000000000100000', '866ecedd8072bb0e', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #43'), + ('0000000000080000', '8b54536f2f3e64a8', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #44'), + ('0000000000040000', 'ea51d3975595b86b', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #45'), + ('0000000000020000', 'caffc6ac4542de31', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #46'), + ('0000000000010000', '8dd45a2ddf90796c', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #47'), + ('0000000000008000', '1029d55e880ec2d0', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #48'), + ('0000000000004000', '5d86cb23639dbea9', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #49'), + ('0000000000002000', '1d1ca853ae7c0c5f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #50'), + ('0000000000001000', 'ce332329248f3228', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #51'), + ('0000000000000800', '8405d1abe24fb942', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #52'), + ('0000000000000400', 'e643d78090ca4207', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #53'), + ('0000000000000200', '48221b9937748a23', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #54'), + ('0000000000000100', 'dd7c0bbd61fafd54', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #55'), + ('0000000000000080', '2fbc291a570db5c4', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #56'), + ('0000000000000040', 'e07c30d7e4e26e12', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #57'), + ('0000000000000020', '0953e2258e8e90a1', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #58'), + ('0000000000000010', '5b711bc4ceebf2ee', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #59'), + ('0000000000000008', 'cc083f1e6d9e85f6', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #60'), + ('0000000000000004', 'd2fd8867d50d2dfe', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #61'), + 
('0000000000000002', '06e7ea22ce92708f', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #62'), + ('0000000000000001', '166b40b44aba4bd6', SP800_17_B1_KEY, + 'NIST SP800-17 B.1 #63'), + + # Table B.2 - Variable Key Known Answer Test + (SP800_17_B2_PT, '95a8d72813daa94d', '8001010101010101', + 'NIST SP800-17 B.2 #0'), + (SP800_17_B2_PT, '0eec1487dd8c26d5', '4001010101010101', + 'NIST SP800-17 B.2 #1'), + (SP800_17_B2_PT, '7ad16ffb79c45926', '2001010101010101', + 'NIST SP800-17 B.2 #2'), + (SP800_17_B2_PT, 'd3746294ca6a6cf3', '1001010101010101', + 'NIST SP800-17 B.2 #3'), + (SP800_17_B2_PT, '809f5f873c1fd761', '0801010101010101', + 'NIST SP800-17 B.2 #4'), + (SP800_17_B2_PT, 'c02faffec989d1fc', '0401010101010101', + 'NIST SP800-17 B.2 #5'), + (SP800_17_B2_PT, '4615aa1d33e72f10', '0201010101010101', + 'NIST SP800-17 B.2 #6'), + (SP800_17_B2_PT, '2055123350c00858', '0180010101010101', + 'NIST SP800-17 B.2 #7'), + (SP800_17_B2_PT, 'df3b99d6577397c8', '0140010101010101', + 'NIST SP800-17 B.2 #8'), + (SP800_17_B2_PT, '31fe17369b5288c9', '0120010101010101', + 'NIST SP800-17 B.2 #9'), + (SP800_17_B2_PT, 'dfdd3cc64dae1642', '0110010101010101', + 'NIST SP800-17 B.2 #10'), + (SP800_17_B2_PT, '178c83ce2b399d94', '0108010101010101', + 'NIST SP800-17 B.2 #11'), + (SP800_17_B2_PT, '50f636324a9b7f80', '0104010101010101', + 'NIST SP800-17 B.2 #12'), + (SP800_17_B2_PT, 'a8468ee3bc18f06d', '0102010101010101', + 'NIST SP800-17 B.2 #13'), + (SP800_17_B2_PT, 'a2dc9e92fd3cde92', '0101800101010101', + 'NIST SP800-17 B.2 #14'), + (SP800_17_B2_PT, 'cac09f797d031287', '0101400101010101', + 'NIST SP800-17 B.2 #15'), + (SP800_17_B2_PT, '90ba680b22aeb525', '0101200101010101', + 'NIST SP800-17 B.2 #16'), + (SP800_17_B2_PT, 'ce7a24f350e280b6', '0101100101010101', + 'NIST SP800-17 B.2 #17'), + (SP800_17_B2_PT, '882bff0aa01a0b87', '0101080101010101', + 'NIST SP800-17 B.2 #18'), + (SP800_17_B2_PT, '25610288924511c2', '0101040101010101', + 'NIST SP800-17 B.2 #19'), + (SP800_17_B2_PT, 'c71516c29c75d170', '0101020101010101', + 'NIST SP800-17 B.2 #20'), + (SP800_17_B2_PT, '5199c29a52c9f059', '0101018001010101', + 'NIST SP800-17 B.2 #21'), + (SP800_17_B2_PT, 'c22f0a294a71f29f', '0101014001010101', + 'NIST SP800-17 B.2 #22'), + (SP800_17_B2_PT, 'ee371483714c02ea', '0101012001010101', + 'NIST SP800-17 B.2 #23'), + (SP800_17_B2_PT, 'a81fbd448f9e522f', '0101011001010101', + 'NIST SP800-17 B.2 #24'), + (SP800_17_B2_PT, '4f644c92e192dfed', '0101010801010101', + 'NIST SP800-17 B.2 #25'), + (SP800_17_B2_PT, '1afa9a66a6df92ae', '0101010401010101', + 'NIST SP800-17 B.2 #26'), + (SP800_17_B2_PT, 'b3c1cc715cb879d8', '0101010201010101', + 'NIST SP800-17 B.2 #27'), + (SP800_17_B2_PT, '19d032e64ab0bd8b', '0101010180010101', + 'NIST SP800-17 B.2 #28'), + (SP800_17_B2_PT, '3cfaa7a7dc8720dc', '0101010140010101', + 'NIST SP800-17 B.2 #29'), + (SP800_17_B2_PT, 'b7265f7f447ac6f3', '0101010120010101', + 'NIST SP800-17 B.2 #30'), + (SP800_17_B2_PT, '9db73b3c0d163f54', '0101010110010101', + 'NIST SP800-17 B.2 #31'), + (SP800_17_B2_PT, '8181b65babf4a975', '0101010108010101', + 'NIST SP800-17 B.2 #32'), + (SP800_17_B2_PT, '93c9b64042eaa240', '0101010104010101', + 'NIST SP800-17 B.2 #33'), + (SP800_17_B2_PT, '5570530829705592', '0101010102010101', + 'NIST SP800-17 B.2 #34'), + (SP800_17_B2_PT, '8638809e878787a0', '0101010101800101', + 'NIST SP800-17 B.2 #35'), + (SP800_17_B2_PT, '41b9a79af79ac208', '0101010101400101', + 'NIST SP800-17 B.2 #36'), + (SP800_17_B2_PT, '7a9be42f2009a892', '0101010101200101', + 'NIST SP800-17 B.2 #37'), + (SP800_17_B2_PT, 
'29038d56ba6d2745', '0101010101100101', + 'NIST SP800-17 B.2 #38'), + (SP800_17_B2_PT, '5495c6abf1e5df51', '0101010101080101', + 'NIST SP800-17 B.2 #39'), + (SP800_17_B2_PT, 'ae13dbd561488933', '0101010101040101', + 'NIST SP800-17 B.2 #40'), + (SP800_17_B2_PT, '024d1ffa8904e389', '0101010101020101', + 'NIST SP800-17 B.2 #41'), + (SP800_17_B2_PT, 'd1399712f99bf02e', '0101010101018001', + 'NIST SP800-17 B.2 #42'), + (SP800_17_B2_PT, '14c1d7c1cffec79e', '0101010101014001', + 'NIST SP800-17 B.2 #43'), + (SP800_17_B2_PT, '1de5279dae3bed6f', '0101010101012001', + 'NIST SP800-17 B.2 #44'), + (SP800_17_B2_PT, 'e941a33f85501303', '0101010101011001', + 'NIST SP800-17 B.2 #45'), + (SP800_17_B2_PT, 'da99dbbc9a03f379', '0101010101010801', + 'NIST SP800-17 B.2 #46'), + (SP800_17_B2_PT, 'b7fc92f91d8e92e9', '0101010101010401', + 'NIST SP800-17 B.2 #47'), + (SP800_17_B2_PT, 'ae8e5caa3ca04e85', '0101010101010201', + 'NIST SP800-17 B.2 #48'), + (SP800_17_B2_PT, '9cc62df43b6eed74', '0101010101010180', + 'NIST SP800-17 B.2 #49'), + (SP800_17_B2_PT, 'd863dbb5c59a91a0', '0101010101010140', + 'NIST SP800-17 B.2 #50'), + (SP800_17_B2_PT, 'a1ab2190545b91d7', '0101010101010120', + 'NIST SP800-17 B.2 #51'), + (SP800_17_B2_PT, '0875041e64c570f7', '0101010101010110', + 'NIST SP800-17 B.2 #52'), + (SP800_17_B2_PT, '5a594528bebef1cc', '0101010101010108', + 'NIST SP800-17 B.2 #53'), + (SP800_17_B2_PT, 'fcdb3291de21f0c0', '0101010101010104', + 'NIST SP800-17 B.2 #54'), + (SP800_17_B2_PT, '869efd7f9f265a09', '0101010101010102', + 'NIST SP800-17 B.2 #55'), +] + +class RonRivestTest(unittest.TestCase): + """ Ronald L. Rivest's DES test, see + http://people.csail.mit.edu/rivest/Destest.txt + ABSTRACT + -------- + + We present a simple way to test the correctness of a DES implementation: + Use the recurrence relation: + + X0 = 9474B8E8C73BCA7D (hexadecimal) + + X(i+1) = IF (i is even) THEN E(Xi,Xi) ELSE D(Xi,Xi) + + to compute a sequence of 64-bit values: X0, X1, X2, ..., X16. Here + E(X,K) denotes the DES encryption of X using key K, and D(X,K) denotes + the DES decryption of X using key K. If you obtain + + X16 = 1B1A2DDB4C642438 + + your implementation does not have any of the 36,568 possible single-fault + errors described herein. 
+ """ + def runTest(self): + from binascii import b2a_hex + + X = [] + X[0:] = [b'\x94\x74\xB8\xE8\xC7\x3B\xCA\x7D'] + + for i in range(16): + c = DES.new(X[i],DES.MODE_ECB) + if not (i&1): # (num&1) returns 1 for odd numbers + X[i+1:] = [c.encrypt(X[i])] # even + else: + X[i+1:] = [c.decrypt(X[i])] # odd + + self.assertEqual(b2a_hex(X[16]), + b2a_hex(b'\x1B\x1A\x2D\xDB\x4C\x64\x24\x38')) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + cipher = DES.new(b'4'*8, DES.MODE_ECB) + + pt = b'5' * 8 + ct = cipher.encrypt(pt) + + output = bytearray(8) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(8)) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*8) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*8) + + shorter_output = bytearray(7) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from .common import make_block_tests + tests = make_block_tests(DES, "DES", test_data) + tests += [RonRivestTest()] + tests += [TestOutput()] + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_DES3.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_DES3.py new file mode 100644 index 0000000..8f8479b --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_DES3.py @@ -0,0 +1,195 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/DES3.py: Self-test for the Triple-DES cipher +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-test suite for Cryptodome.Cipher.DES3""" + +import unittest +from binascii import hexlify, unhexlify + +from Cryptodome.Cipher import DES3 + +from Cryptodome.Util.strxor import strxor_c +from Cryptodome.Util.py3compat import bchr, tostr +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases + +# This is a list of (plaintext, ciphertext, key, description) tuples. +test_data = [ + # Test vector from Appendix B of NIST SP 800-67 + # "Recommendation for the Triple Data Encryption Algorithm (TDEA) Block + # Cipher" + # http://csrc.nist.gov/publications/nistpubs/800-67/SP800-67.pdf + ('54686520717566636b2062726f776e20666f78206a756d70', + 'a826fd8ce53b855fcce21c8112256fe668d5c05dd9b6b900', + '0123456789abcdef23456789abcdef01456789abcdef0123', + 'NIST SP800-67 B.1'), + + # This test is designed to test the DES3 API, not the correctness of the + # output. + ('21e81b7ade88a259', '5c577d4d9b20c0f8', + '9b397ebf81b1181e282f4bb8adbadc6b', 'Two-key 3DES'), +] + +# NIST CAVP test vectors + +nist_tdes_mmt_files = ("TECBMMT2.rsp", "TECBMMT3.rsp") + +for tdes_file in nist_tdes_mmt_files: + + test_vectors = load_test_vectors( + ("Cipher", "TDES"), + tdes_file, + "TDES ECB (%s)" % tdes_file, + {"count": lambda x: int(x)}) or [] + + for index, tv in enumerate(test_vectors): + + # The test vector file contains some directive lines + if isinstance(tv, str): + continue + + key = tv.key1 + tv.key2 + tv.key3 + test_data_item = (tostr(hexlify(tv.plaintext)), + tostr(hexlify(tv.ciphertext)), + tostr(hexlify(key)), + "%s (%s)" % (tdes_file, index)) + test_data.append(test_data_item) + + +class CheckParity(unittest.TestCase): + + def test_parity_option2(self): + before_2k = unhexlify("CABF326FA56734324FFCCABCDEFACABF") + after_2k = DES3.adjust_key_parity(before_2k) + self.assertEqual(after_2k, + unhexlify("CBBF326EA46734324FFDCBBCDFFBCBBF")) + + def test_parity_option3(self): + before_3k = unhexlify("AAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCC") + after_3k = DES3.adjust_key_parity(before_3k) + self.assertEqual(after_3k, + unhexlify("ABABABABABABABABBABABABABABABABACDCDCDCDCDCDCDCD")) + + def test_degradation(self): + sub_key1 = bchr(1) * 8 + sub_key2 = bchr(255) * 8 + + # K1 == K2 + self.assertRaises(ValueError, DES3.adjust_key_parity, + sub_key1 * 2 + sub_key2) + + # K2 == K3 + self.assertRaises(ValueError, DES3.adjust_key_parity, + sub_key1 + sub_key2 * 2) + + # K1 == K2 == K3 + self.assertRaises(ValueError, DES3.adjust_key_parity, + sub_key1 * 3) + + # K1 == K2 (with different parity) + self.assertRaises(ValueError, DES3.adjust_key_parity, + sub_key1 + strxor_c(sub_key1, 1) + sub_key2) + + +class DegenerateToDESTest(unittest.TestCase): + + def runTest(self): + sub_key1 = bchr(1) * 8 + sub_key2 = bchr(255) * 8 + + # K1 == K2 + self.assertRaises(ValueError, DES3.new, + sub_key1 * 2 + sub_key2, + DES3.MODE_ECB) + + # K2 == K3 + self.assertRaises(ValueError, DES3.new, + sub_key1 + sub_key2 * 2, + DES3.MODE_ECB) + + # K1 == K2 == K3 + self.assertRaises(ValueError, DES3.new, + sub_key1 * 3, + DES3.MODE_ECB) + + # K2 == K3 (parity is ignored) + self.assertRaises(ValueError, DES3.new, + sub_key1 + sub_key2 + strxor_c(sub_key2, 0x1), + DES3.MODE_ECB) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + cipher = DES3.new(b'4'*8 + b'G'*8 + b'T'*8, DES3.MODE_ECB) + + pt = b'5' * 16 + ct = cipher.encrypt(pt) + + 
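# A hypothetical sketch of the output-buffer contract these TestOutput cases
# verify (illustrative only; it mirrors the key and mode of the test above but
# is not part of the vendored suite): encrypt() and decrypt() accept a
# writable, exactly-sized buffer via output= and then return None instead of
# allocating a new bytes object.
from Cryptodome.Cipher import DES3

demo = DES3.new(b'4' * 8 + b'G' * 8 + b'T' * 8, DES3.MODE_ECB)
demo_pt = b'01234567' * 2                  # multiple of the 8-byte block size

buf = bytearray(len(demo_pt))              # caller-owned destination buffer
assert demo.encrypt(demo_pt, output=buf) is None   # ciphertext written in place

out = bytearray(len(demo_pt))
demo.decrypt(bytes(buf), output=out)       # same contract on the decrypt path
assert bytes(out) == demo_pt               # round-trips back to the plaintext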
output = bytearray(16) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(16)) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*16) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*16) + + shorter_output = bytearray(7) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + from .common import make_block_tests + + tests = [] + tests = make_block_tests(DES3, "DES3", test_data) + tests.append(DegenerateToDESTest()) + tests += list_test_cases(CheckParity) + tests += [TestOutput()] + return tests + + +if __name__ == '__main__': + import unittest + + def suite(): + return unittest.TestSuite(get_tests()) + + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_EAX.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_EAX.py new file mode 100644 index 0000000..4127a88 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_EAX.py @@ -0,0 +1,773 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE.
+# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors_wycheproof +from Cryptodome.Util.py3compat import tobytes, bchr +from Cryptodome.Cipher import AES, DES3 +from Cryptodome.Hash import SHAKE128 + +from Cryptodome.Util.strxor import strxor + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class EaxTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + key_192 = get_tag_random("key_192", 24) + nonce_96 = get_tag_random("nonce_128", 12) + data_128 = get_tag_random("data_128", 16) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_loopback_64(self): + cipher = DES3.new(self.key_192, DES3.MODE_EAX, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 8 * 100) + ct = cipher.encrypt(pt) + + cipher = DES3.new(self.key_192, DES3.MODE_EAX, nonce=self.nonce_96) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # If not passed, the nonce is created randomly + cipher = AES.new(self.key_128, AES.MODE_EAX) + nonce1 = cipher.nonce + cipher = AES.new(self.key_128, AES.MODE_EAX) + nonce2 = cipher.nonce + self.assertEqual(len(nonce1), 16) + self.assertNotEqual(nonce1, nonce2) + + cipher = AES.new(self.key_128, AES.MODE_EAX, self.nonce_96) + ct = cipher.encrypt(self.data_128) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertEqual(ct, cipher.encrypt(self.data_128)) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_EAX, + nonce=u'test12345678') + + def test_nonce_length(self): + # nonce can be of any length (but not empty) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_EAX, + nonce=b"") + + for x in range(1, 128): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=bchr(1) * x) + cipher.encrypt(bchr(1)) + + def test_block_size_128(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_block_size_64(self): + cipher = DES3.new(self.key_192, DES3.MODE_EAX, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, DES3.block_size) + + def test_nonce_attribute(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, a 16 bytes long nonce is randomly generated + nonce1 = AES.new(self.key_128, AES.MODE_EAX).nonce + nonce2 = AES.new(self.key_128, AES.MODE_EAX).nonce + self.assertEqual(len(nonce1), 16) + self.assertNotEqual(nonce1, nonce2) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_EAX, + self.nonce_96, 7) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_EAX, + nonce=self.nonce_96, unknown=7) + + # But some are only known by the base cipher + # (e.g.
use_aesni consumed by the AES module) + AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96, + use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_mac_len(self): + # Invalid MAC length + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_EAX, + nonce=self.nonce_96, mac_len=2-1) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_EAX, + nonce=self.nonce_96, mac_len=16+1) + + # Valid MAC length + for mac_len in range(2, 16 + 1): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96, + mac_len=mac_len) + _, mac = cipher.encrypt_and_digest(self.data_128) + self.assertEqual(len(mac), mac_len) + + # Default MAC length + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data_128) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Cryptodome.Util.strxor import strxor_c + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data_128) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_message_chunks(self): + # Validate that both associated data and plaintext/ciphertext + # can be broken up in chunks of arbitrary length + + auth_data = get_tag_random("authenticated data", 127) + plaintext = get_tag_random("plaintext", 127) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.update(auth_data) + ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) + + def break_up(data, chunk_length): + return [data[i:i+chunk_length] for i in range(0, len(data), + chunk_length)] + + # Encryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + pt2 = b"" + for chunk in break_up(ciphertext, chunk_length): + pt2 += cipher.decrypt(chunk) + self.assertEqual(plaintext, pt2) + cipher.verify(ref_mac) + + # Decryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + ct2 = b"" + for chunk in break_up(plaintext, chunk_length): + ct2 += cipher.encrypt(chunk) + 
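For orientation, the behaviour these EAX cases check reduces to the usual AEAD round trip: a fresh nonce per message, optional associated data fed through update(), and a tag whose length may be anywhere from 2 to 16 bytes. A hedged sketch, with key, header, and message as placeholders:

```python
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)

enc = AES.new(key, AES.MODE_EAX, mac_len=8)   # nonce auto-generated; truncated tag
enc.update(b"header")                         # associated data: authenticated, not encrypted
ct, tag = enc.encrypt_and_digest(b"attack at dawn")

dec = AES.new(key, AES.MODE_EAX, nonce=enc.nonce, mac_len=8)
dec.update(b"header")
pt = dec.decrypt_and_verify(ct, tag)          # raises ValueError on any tampering
assert pt == b"attack at dawn"
```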
self.assertEqual(ciphertext, ct2) + self.assertEqual(cipher.digest(), ref_mac) + + def test_bytearray(self): + + # Encrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data_128) + data_ba = bytearray(self.data_128) + + cipher1 = AES.new(self.key_128, + AES.MODE_EAX, + nonce=self.nonce_96) + cipher1.update(self.data_128) + ct = cipher1.encrypt(self.data_128) + tag = cipher1.digest() + + cipher2 = AES.new(key_ba, + AES.MODE_EAX, + nonce=nonce_ba) + key_ba[:3] = b'\xFF\xFF\xFF' + nonce_ba[:3] = b'\xFF\xFF\xFF' + cipher2.update(header_ba) + header_ba[:3] = b'\xFF\xFF\xFF' + ct_test = cipher2.encrypt(data_ba) + data_ba[:3] = b'\x99\x99\x99' + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data_128) + ct_ba = bytearray(ct) + tag_ba = bytearray(tag) + del data_ba + + cipher3 = AES.new(key_ba, + AES.MODE_EAX, + nonce=nonce_ba) + key_ba[:3] = b'\xFF\xFF\xFF' + nonce_ba[:3] = b'\xFF\xFF\xFF' + cipher3.update(header_ba) + header_ba[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt(ct_ba) + ct_ba[:3] = b'\xFF\xFF\xFF' + cipher3.verify(tag_ba) + + self.assertEqual(pt_test, self.data_128) + + def test_memoryview(self): + + # Encrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data_128)) + data_mv = memoryview(bytearray(self.data_128)) + + cipher1 = AES.new(self.key_128, + AES.MODE_EAX, + nonce=self.nonce_96) + cipher1.update(self.data_128) + ct = cipher1.encrypt(self.data_128) + tag = cipher1.digest() + + cipher2 = AES.new(key_mv, + AES.MODE_EAX, + nonce=nonce_mv) + key_mv[:3] = b'\xFF\xFF\xFF' + nonce_mv[:3] = b'\xFF\xFF\xFF' + cipher2.update(header_mv) + header_mv[:3] = b'\xFF\xFF\xFF' + ct_test = cipher2.encrypt(data_mv) + data_mv[:3] = b'\x99\x99\x99' + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data_128)) + ct_mv = memoryview(bytearray(ct)) + tag_mv = memoryview(bytearray(tag)) + del data_mv + + cipher3 = AES.new(key_mv, + AES.MODE_EAX, + nonce=nonce_mv) + key_mv[:3] = b'\xFF\xFF\xFF' + nonce_mv[:3] = b'\xFF\xFF\xFF' + cipher3.update(header_mv) + header_mv[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt(ct_mv) + ct_mv[:3] = b'\x99\x99\x99' + cipher3.verify(tag_mv) + + self.assertEqual(pt_test, self.data_128) + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + tag = cipher.digest() + + output = bytearray(128) + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + res, tag_out = cipher.encrypt_and_digest(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + self.assertEqual(tag, tag_out) + + cipher = 
AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + res = cipher.decrypt_and_verify(ct, tag, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + def test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + LEN_PT = 16 + + pt = b'5' * LEN_PT + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0' * LEN_PT) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0' * LEN_PT) + + shorter_output = bytearray(LEN_PT - 1) + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +class EaxFSMTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data_128 = get_tag_random("data_128", 16) + + def test_valid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + # Verify path INIT->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + # Verify path INIT->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + # Verify path INIT->UPDATE->DIGEST + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + cipher.update(self.data_128) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + cipher.update(self.data_128) + cipher.verify(mac) + + def test_valid_full_path(self): + # Fixed authenticated data, fixed plaintext + # Verify path INIT->UPDATE->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + cipher.update(self.data_128) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + # Verify path INIT->UPDATE->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + cipher.update(self.data_128) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + mac = cipher.digest() + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_encrypt_or_decrypt(self): + for method_name in "encrypt", "decrypt": + for auth_data in (None, b"333", self.data_128, + self.data_128 + b"3"): + if auth_data is None: + assoc_len = None + else: 
+ assoc_len = len(auth_data) + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + if auth_data is not None: + cipher.update(auth_data) + method = getattr(cipher, method_name) + method(self.data_128) + method(self.data_128) + method(self.data_128) + method(self.data_128) + + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.update(self.data_128) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.update(self.data_128) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # encrypt_and_digest + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.update(self.data_128) + ct, mac = cipher.encrypt_and_digest(self.data_128) + + # decrypt_and_verify + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.update(self.data_128) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data_128, pt) + + def test_invalid_mixing_encrypt_decrypt(self): + # Once per method, with or without assoc. data + for method1_name, method2_name in (("encrypt", "decrypt"), + ("decrypt", "encrypt")): + for assoc_data_present in (True, False): + cipher = AES.new(self.key_128, AES.MODE_EAX, + nonce=self.nonce_96) + if assoc_data_present: + cipher.update(self.data_128) + getattr(cipher, method1_name)(self.data_128) + self.assertRaises(TypeError, getattr(cipher, method2_name), + self.data_128) + + def test_invalid_encrypt_or_update_after_digest(self): + for method_name in "encrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.encrypt(self.data_128) + cipher.digest() + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data_128) + + def test_invalid_decrypt_or_update_after_verify(self): + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + ct = cipher.encrypt(self.data_128) + mac = cipher.digest() + + for method_name in "decrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + cipher = AES.new(self.key_128, AES.MODE_EAX, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data_128) + + +class TestVectorsPaper(unittest.TestCase): + """Class exercising the EAX test vectors found in + http://www.cs.ucdavis.edu/~rogaway/papers/eax.pdf""" + + test_vectors_hex = [ + ( '6bfb914fd07eae6b', + '', + '', + 'e037830e8389f27b025a2d6527e79d01', + '233952dee4d5ed5f9b9c6d6ff80ff478', + '62EC67F9C3A4A407FCB2A8C49031A8B3' + ), + ( + 'fa3bfd4806eb53fa', + 'f7fb', + '19dd', + '5c4c9331049d0bdab0277408f67967e5', + '91945d3f4dcbee0bf45ef52255f095a4', + 'BECAF043B0A23D843194BA972C66DEBD' + ), + ( '234a3463c1264ac6', + '1a47cb4933', + 'd851d5bae0', + '3a59f238a23e39199dc9266626c40f80', + '01f74ad64077f2e704c0f60ada3dd523', + '70C3DB4F0D26368400A10ED05D2BFF5E' + ), + ( + '33cce2eabff5a79d', + '481c9e39b1', + '632a9d131a', + 'd4c168a4225d8e1ff755939974a7bede', + 'd07cf6cbb7f313bdde66b727afd3c5e8', + '8408DFFF3C1A2B1292DC199E46B7D617' + ), + ( + 
'aeb96eaebe2970e9', + '40d0c07da5e4', + '071dfe16c675', + 'cb0677e536f73afe6a14b74ee49844dd', + '35b6d0580005bbc12b0587124557d2c2', + 'FDB6B06676EEDC5C61D74276E1F8E816' + ), + ( + 'd4482d1ca78dce0f', + '4de3b35c3fc039245bd1fb7d', + '835bb4f15d743e350e728414', + 'abb8644fd6ccb86947c5e10590210a4f', + 'bd8e6e11475e60b268784c38c62feb22', + '6EAC5C93072D8E8513F750935E46DA1B' + ), + ( + '65d2017990d62528', + '8b0a79306c9ce7ed99dae4f87f8dd61636', + '02083e3979da014812f59f11d52630da30', + '137327d10649b0aa6e1c181db617d7f2', + '7c77d6e813bed5ac98baa417477a2e7d', + '1A8C98DCD73D38393B2BF1569DEEFC19' + ), + ( + '54b9f04e6a09189a', + '1bda122bce8a8dbaf1877d962b8592dd2d56', + '2ec47b2c4954a489afc7ba4897edcdae8cc3', + '3b60450599bd02c96382902aef7f832a', + '5fff20cafab119ca2fc73549e20f5b0d', + 'DDE59B97D722156D4D9AFF2BC7559826' + ), + ( + '899a175897561d7e', + '6cf36720872b8513f6eab1a8a44438d5ef11', + '0de18fd0fdd91e7af19f1d8ee8733938b1e8', + 'e7f6d2231618102fdb7fe55ff1991700', + 'a4a4782bcffd3ec5e7ef6d8c34a56123', + 'B781FCF2F75FA5A8DE97A9CA48E522EC' + ), + ( + '126735fcc320d25a', + 'ca40d7446e545ffaed3bd12a740a659ffbbb3ceab7', + 'cb8920f87a6c75cff39627b56e3ed197c552d295a7', + 'cfc46afc253b4652b1af3795b124ab6e', + '8395fcf1e95bebd697bd010bc766aac3', + '22E7ADD93CFC6393C57EC0B3C17D6B44' + ), + ] + + test_vectors = [[unhexlify(x) for x in tv] for tv in test_vectors_hex] + + def runTest(self): + for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: + # Encrypt + cipher = AES.new(key, AES.MODE_EAX, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + ct2, mac2 = cipher.encrypt_and_digest(pt) + self.assertEqual(ct, ct2) + self.assertEqual(mac, mac2) + + # Decrypt + cipher = AES.new(key, AES.MODE_EAX, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" + + def setUp(self): + + def filter_tag(group): + return group['tagSize'] // 8 + + self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + "aes_eax_test.json", + "Wycheproof EAX", + group_tag={'tag_size': filter_tag}) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt EAX Test #" + str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_EAX, tv.iv, mac_len=tv.tag_size) + except ValueError as e: + assert len(tv.iv) == 0 and "Nonce cannot be empty" in str(e) + return + + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(ct, tv.ct) + self.assertEqual(tag, tv.tag) + self.warn(tv) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt EAX Test #" + str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_EAX, tv.iv, mac_len=tv.tag_size) + except ValueError as e: + assert len(tv.iv) == 0 and "Nonce cannot be empty" in str(e) + return + + cipher.update(tv.aad) + try: + pt = cipher.decrypt_and_verify(tv.ct, tv.tag) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + self.warn(tv) + + def test_corrupt_decrypt(self, tv): + self._id = "Wycheproof Corrupt Decrypt EAX Test #" + str(tv.id) + if len(tv.iv) == 0 or len(tv.ct) < 1: + return + cipher = 
AES.new(tv.key, AES.MODE_EAX, tv.iv, mac_len=tv.tag_size) + cipher.update(tv.aad) + ct_corrupt = strxor(tv.ct, b"\x00" * (len(tv.ct) - 1) + b"\x01") + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct_corrupt, tv.tag) + + def runTest(self): + + for tv in self.tv: + self.test_encrypt(tv) + self.test_decrypt(tv) + self.test_corrupt_decrypt(tv) + + +class TestOtherCiphers(unittest.TestCase): + + @classmethod + def create_test(cls, name, factory, key_size): + + def test_template(self, factory=factory, key_size=key_size): + cipher = factory.new(get_tag_random("cipher", key_size), + factory.MODE_EAX, + nonce=b"nonce") + ct, mac = cipher.encrypt_and_digest(b"plaintext") + + cipher = factory.new(get_tag_random("cipher", key_size), + factory.MODE_EAX, + nonce=b"nonce") + pt2 = cipher.decrypt_and_verify(ct, mac) + + self.assertEqual(b"plaintext", pt2) + + setattr(cls, "test_" + name, test_template) + + +from Cryptodome.Cipher import DES, DES3, ARC2, CAST, Blowfish + +TestOtherCiphers.create_test("DES_" + str(DES.key_size), DES, DES.key_size) +for ks in DES3.key_size: + TestOtherCiphers.create_test("DES3_" + str(ks), DES3, ks) +for ks in ARC2.key_size: + TestOtherCiphers.create_test("ARC2_" + str(ks), ARC2, ks) +for ks in CAST.key_size: + TestOtherCiphers.create_test("CAST_" + str(ks), CAST, ks) +for ks in Blowfish.key_size: + TestOtherCiphers.create_test("Blowfish_" + str(ks), Blowfish, ks) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(EaxTests) + tests += list_test_cases(EaxFSMTests) + tests += [ TestVectorsPaper() ] + tests += [ TestVectorsWycheproof(wycheproof_warnings) ] + tests += list_test_cases(TestOtherCiphers) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_GCM.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_GCM.py new file mode 100644 index 0000000..ac8e741 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_GCM.py @@ -0,0 +1,951 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
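The TestOtherCiphers factory above is worth a note: EAX is defined for any block size, so the same calling pattern works with the 64-bit block ciphers (DES, DES3, ARC2, CAST, Blowfish) as with AES. A rough standalone equivalent of one generated test, with illustrative key and nonce sizes:

```python
from Cryptodome.Cipher import Blowfish
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)              # Blowfish accepts keys of 4..56 bytes
nonce = get_random_bytes(11)            # EAX nonces may have any non-zero length

enc = Blowfish.new(key, Blowfish.MODE_EAX, nonce=nonce)
ct, tag = enc.encrypt_and_digest(b"plaintext")

dec = Blowfish.new(key, Blowfish.MODE_EAX, nonce=nonce)
assert dec.decrypt_and_verify(ct, tag) == b"plaintext"
```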
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from __future__ import print_function + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors, load_test_vectors_wycheproof + +from Cryptodome.Util.py3compat import tobytes, bchr +from Cryptodome.Cipher import AES +from Cryptodome.Hash import SHAKE128, SHA256 + +from Cryptodome.Util.strxor import strxor + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class GcmTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # Nonce is optional (a random one will be created) + AES.new(self.key_128, AES.MODE_GCM) + + cipher = AES.new(self.key_128, AES.MODE_GCM, self.nonce_96) + ct = cipher.encrypt(self.data) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertEqual(ct, cipher.encrypt(self.data)) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_GCM, + nonce=u'test12345678') + + def test_nonce_length(self): + # nonce can be of any length (but not empty) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_GCM, + nonce=b"") + + for x in range(1, 128): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=bchr(1) * x) + cipher.encrypt(bchr(1)) + + def test_block_size_128(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_nonce_attribute(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, a 16 bytes long nonce is randomly generated + nonce1 = AES.new(self.key_128, AES.MODE_GCM).nonce + nonce2 = AES.new(self.key_128, AES.MODE_GCM).nonce + self.assertEqual(len(nonce1), 16) + self.assertNotEqual(nonce1, nonce2) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_GCM, + self.nonce_96, 7) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_GCM, + nonce=self.nonce_96, unknown=7) + + # But some are only known by the base cipher + # (e.g.
use_aesni consumed by the AES module) + AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96, + use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + result = getattr(cipher, func)(b"") + self.assertEqual(result, b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_mac_len(self): + # Invalid MAC length + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_GCM, + nonce=self.nonce_96, mac_len=3) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_GCM, + nonce=self.nonce_96, mac_len=16+1) + + # Valid MAC length + for mac_len in range(5, 16 + 1): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96, + mac_len=mac_len) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), mac_len) + + # Default MAC length + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Cryptodome.Util.strxor import strxor_c + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_message_chunks(self): + # Validate that both associated data and plaintext/ciphertext + # can be broken up in chunks of arbitrary length + + auth_data = get_tag_random("authenticated data", 127) + plaintext = get_tag_random("plaintext", 127) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.update(auth_data) + ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) + + def break_up(data, chunk_length): + return [data[i:i+chunk_length] for i in range(0, len(data), + chunk_length)] + + # Encryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + pt2 = b"" + for chunk in break_up(ciphertext, chunk_length): + pt2 += cipher.decrypt(chunk) + self.assertEqual(plaintext, pt2) + cipher.verify(ref_mac) + + # Decryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + ct2 = b"" + for chunk in break_up(plaintext, chunk_length): + ct2 += cipher.encrypt(chunk) + 
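test_message_chunks above is the contract that makes streaming callers work: update() and encrypt()/decrypt() may each be invoked any number of times with arbitrarily sized chunks, and the result must match the one-shot computation. A compact sketch of the same idea, with all values below as placeholders:

```python
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)
nonce = get_random_bytes(12)            # 96-bit nonces are the common GCM choice

enc = AES.new(key, AES.MODE_GCM, nonce=nonce)
for chunk in (b"head", b"er"):          # associated data, fed piecewise
    enc.update(chunk)
ct = enc.encrypt(b"hello ") + enc.encrypt(b"world")   # plaintext, fed piecewise
tag = enc.digest()

dec = AES.new(key, AES.MODE_GCM, nonce=nonce)
dec.update(b"header")                   # same AAD, different chunking
assert dec.decrypt_and_verify(ct, tag) == b"hello world"
```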
self.assertEqual(ciphertext, ct2) + self.assertEqual(cipher.digest(), ref_mac) + + def test_bytearray(self): + + # Encrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + data_ba = bytearray(self.data) + + cipher1 = AES.new(self.key_128, + AES.MODE_GCM, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + tag = cipher1.digest() + + cipher2 = AES.new(key_ba, + AES.MODE_GCM, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_ba) + data_ba[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + del data_ba + + cipher4 = AES.new(key_ba, + AES.MODE_GCM, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(bytearray(ct_test), bytearray(tag_test)) + + self.assertEqual(self.data, pt_test) + + def test_memoryview(self): + + # Encrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + data_mv = memoryview(bytearray(self.data)) + + cipher1 = AES.new(self.key_128, + AES.MODE_GCM, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + tag = cipher1.digest() + + cipher2 = AES.new(key_mv, + AES.MODE_GCM, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_mv) + data_mv[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + del data_mv + + cipher4 = AES.new(key_mv, + AES.MODE_GCM, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(memoryview(ct_test), memoryview(tag_test)) + + self.assertEqual(self.data, pt_test) + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + tag = cipher.digest() + + output = bytearray(128) + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + res, tag_out = cipher.encrypt_and_digest(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + self.assertEqual(tag, tag_out) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + res = cipher.decrypt_and_verify(ct, tag, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + def 
test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + LEN_PT = 128 + + pt = b'5' * LEN_PT + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + ct = cipher.encrypt(pt) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0' * LEN_PT) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0' * LEN_PT) + + shorter_output = bytearray(LEN_PT - 1) + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +class GcmFSMTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_valid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + # Verify path INIT->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + # Verify path INIT->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + # Verify path INIT->UPDATE->DIGEST + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + cipher.update(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.verify(mac) + + def test_valid_full_path(self): + # Fixed authenticated data, fixed plaintext + # Verify path INIT->UPDATE->ENCRYPT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + cipher.update(self.data) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.decrypt(ct) + cipher.verify(mac) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + mac = cipher.digest() + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_encrypt_or_decrypt(self): + for method_name in "encrypt", "decrypt": + for auth_data in (None, b"333", self.data, + self.data + b"3"): + if auth_data is None: + assoc_len = None + else: + assoc_len = len(auth_data) + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + if auth_data is not None: + cipher.update(auth_data) + method = getattr(cipher, method_name) + method(self.data) + 
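The three output-parameter tests above (test_output_param, test_output_param_memoryview, test_output_param_neg) fix the same contract GCM shares with the other AEAD modes: the buffer must be writable and exactly the size of the input, encrypt_and_digest() returns (None, tag) when it writes in place, and decrypt_and_verify() returns None. A hedged sketch:

```python
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes

key = get_random_bytes(16)
nonce = get_random_bytes(12)
pt = b"5" * 32

buf = bytearray(len(pt))                       # ciphertext lands here
res, tag = AES.new(key, AES.MODE_GCM, nonce=nonce).encrypt_and_digest(pt, output=buf)
assert res is None                             # nothing returned besides the tag

out = bytearray(len(pt))                       # plaintext lands here
AES.new(key, AES.MODE_GCM, nonce=nonce).decrypt_and_verify(bytes(buf), tag, output=out)
assert bytes(out) == pt
```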
method(self.data) + method(self.data) + method(self.data) + + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.update(self.data) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.update(self.data) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # encrypt_and_digest + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.update(self.data) + ct, mac = cipher.encrypt_and_digest(self.data) + + # decrypt_and_verify + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.update(self.data) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data, pt) + + def test_invalid_mixing_encrypt_decrypt(self): + # Once per method, with or without assoc. data + for method1_name, method2_name in (("encrypt", "decrypt"), + ("decrypt", "encrypt")): + for assoc_data_present in (True, False): + cipher = AES.new(self.key_128, AES.MODE_GCM, + nonce=self.nonce_96) + if assoc_data_present: + cipher.update(self.data) + getattr(cipher, method1_name)(self.data) + self.assertRaises(TypeError, getattr(cipher, method2_name), + self.data) + + def test_invalid_encrypt_or_update_after_digest(self): + for method_name in "encrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.encrypt(self.data) + cipher.digest() + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data) + + def test_invalid_decrypt_or_update_after_verify(self): + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + mac = cipher.digest() + + for method_name in "decrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_GCM, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + +class TestVectors(unittest.TestCase): + """Class exercising the GCM test vectors found in + http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-revised-spec.pdf""" + + # List of test vectors, each made up of: + # - authenticated data + # - plaintext + # - ciphertext + # - MAC + # - AES key + # - nonce + test_vectors_hex = [ + ( + '', + '', + '', + '58e2fccefa7e3061367f1d57a4e7455a', + '00000000000000000000000000000000', + '000000000000000000000000' + ), + ( + '', + '00000000000000000000000000000000', + '0388dace60b6a392f328c2b971b2fe78', + 'ab6e47d42cec13bdf53a67b21257bddf', + '00000000000000000000000000000000', + '000000000000000000000000' + ), + ( + '', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b391aafd255', + '42831ec2217774244b7221b784d0d49ce3aa212f2c02a4e035c17e2329aca12e' + + '21d514b25466931c7d8f6a5aac84aa051ba30b396a0aac973d58e091473f5985', + '4d5c2af327cd64a62cf35abd2ba6fab4', + 'feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 
'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '42831ec2217774244b7221b784d0d49ce3aa212f2c02a4e035c17e2329aca12e' + + '21d514b25466931c7d8f6a5aac84aa051ba30b396a0aac973d58e091', + '5bc94fbc3221a5db94fae95ae7121a47', + 'feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '61353b4c2806934a777ff51fa22a4755699b2a714fcdc6f83766e5f97b6c7423' + + '73806900e49f24b22b097544d4896b424989b5e1ebac0f07c23f4598', + '3612d2e79e3b0785561be14aaca2fccb', + 'feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbad' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '8ce24998625615b603a033aca13fb894be9112a5c3a211a8ba262a3cca7e2ca7' + + '01e4a9a4fba43c90ccdcb281d48c7c6fd62875d2aca417034c34aee5', + '619cc5aefffe0bfa462af43c1699d050', + 'feffe9928665731c6d6a8f9467308308', + '9313225df88406e555909c5aff5269aa' + + '6a7a9538534f7da1e4c303d2a318a728c3c0c95156809539fcf0e2429a6b5254' + + '16aedbf5a0de6a57a637b39b' + ), + ( + '', + '', + '', + 'cd33b28ac773f74ba00ed1f312572435', + '000000000000000000000000000000000000000000000000', + '000000000000000000000000' + ), + ( + '', + '00000000000000000000000000000000', + '98e7247c07f0fe411c267e4384b0f600', + '2ff58d80033927ab8ef4d4587514f0fb', + '000000000000000000000000000000000000000000000000', + '000000000000000000000000' + ), + ( + '', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b391aafd255', + '3980ca0b3c00e841eb06fac4872a2757859e1ceaa6efd984628593b40ca1e19c' + + '7d773d00c144c525ac619d18c84a3f4718e2448b2fe324d9ccda2710acade256', + '9924a7c8587336bfb118024db8674a14', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '3980ca0b3c00e841eb06fac4872a2757859e1ceaa6efd984628593b40ca1e19c' + + '7d773d00c144c525ac619d18c84a3f4718e2448b2fe324d9ccda2710', + '2519498e80f1478f37ba55bd6d27618c', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '0f10f599ae14a154ed24b36e25324db8c566632ef2bbb34f8347280fc4507057' + + 'fddc29df9a471f75c66541d4d4dad1c9e93a19a58e8b473fa0f062f7', + '65dcc57fcf623a24094fcca40d3533f8', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c', + 'cafebabefacedbad' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + 'd27e88681ce3243c4830165a8fdcf9ff1de9a1d8e6b447ef6ef7b79828666e45' + + '81e79012af34ddd9e2f037589b292db3e67c036745fa22e7e9b7373b', + 'dcf566ff291c25bbb8568fc3d376a6d9', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c', + '9313225df88406e555909c5aff5269aa' + + '6a7a9538534f7da1e4c303d2a318a728c3c0c95156809539fcf0e2429a6b5254' + + '16aedbf5a0de6a57a637b39b' + ), + ( + '', + '', + 
'', + '530f8afbc74536b9a963b4f1c4cb738b', + '0000000000000000000000000000000000000000000000000000000000000000', + '000000000000000000000000' + ), + ( + '', + '00000000000000000000000000000000', + 'cea7403d4d606b6e074ec5d3baf39d18', + 'd0d1c8a799996bf0265b98b5d48ab919', + '0000000000000000000000000000000000000000000000000000000000000000', + '000000000000000000000000' + ), + ( '', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b391aafd255', + '522dc1f099567d07f47f37a32a84427d643a8cdcbfe5c0c97598a2bd2555d1aa' + + '8cb08e48590dbb3da7b08b1056828838c5f61e6393ba7a0abcc9f662898015ad', + 'b094dac5d93471bdec1a502270e3cc6c', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '522dc1f099567d07f47f37a32a84427d643a8cdcbfe5c0c97598a2bd2555d1aa' + + '8cb08e48590dbb3da7b08b1056828838c5f61e6393ba7a0abcc9f662', + '76fc6ece0f4e1768cddf8853bb2d551b', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbaddecaf888' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + 'c3762df1ca787d32ae47c13bf19844cbaf1ae14d0b976afac52ff7d79bba9de0' + + 'feb582d33934a4f0954cc2363bc73f7862ac430e64abe499f47c9b1f', + '3a337dbf46a792c45e454913fe2ea8f2', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', + 'cafebabefacedbad' + ), + ( + 'feedfacedeadbeeffeedfacedeadbeefabaddad2', + 'd9313225f88406e5a55909c5aff5269a86a7a9531534f7da2e4c303d8a318a72' + + '1c3c0c95956809532fcf0e2449a6b525b16aedf5aa0de657ba637b39', + '5a8def2f0c9e53f1f75d7853659e2a20eeb2b22aafde6419a058ab4f6f746bf4' + + '0fc0c3b780f244452da3ebf1c5d82cdea2418997200ef82e44ae7e3f', + 'a44a8266ee1c8eb0c8b5d4cf5ae9f19a', + 'feffe9928665731c6d6a8f9467308308feffe9928665731c6d6a8f9467308308', + '9313225df88406e555909c5aff5269aa' + + '6a7a9538534f7da1e4c303d2a318a728c3c0c95156809539fcf0e2429a6b5254' + + '16aedbf5a0de6a57a637b39b' + ) + ] + + test_vectors = [[unhexlify(x) for x in tv] for tv in test_vectors_hex] + + def runTest(self): + for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: + + # Encrypt + cipher = AES.new(key, AES.MODE_GCM, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + ct2, mac2 = cipher.encrypt_and_digest(pt) + self.assertEqual(ct, ct2) + self.assertEqual(mac, mac2) + + # Decrypt + cipher = AES.new(key, AES.MODE_GCM, nonce, mac_len=len(mac)) + cipher.update(assoc_data) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + +class TestVectorsGueronKrasnov(unittest.TestCase): + """Class exercising the GCM test vectors found in + 'The fragility of AES-GCM authentication algorithm', Gueron, Krasnov + https://eprint.iacr.org/2013/157.pdf""" + + def test_1(self): + key = unhexlify("3da6c536d6295579c0959a7043efb503") + iv = unhexlify("2b926197d34e091ef722db94") + aad = unhexlify("00000000000000000000000000000000" + + "000102030405060708090a0b0c0d0e0f" + + "101112131415161718191a1b1c1d1e1f" + + "202122232425262728292a2b2c2d2e2f" + + "303132333435363738393a3b3c3d3e3f") + digest = unhexlify("69dd586555ce3fcc89663801a71d957b") + + cipher = AES.new(key, AES.MODE_GCM, iv).update(aad) + self.assertEqual(digest, cipher.digest()) + + def 
test_2(self): + key = unhexlify("843ffcf5d2b72694d19ed01d01249412") + iv = unhexlify("dbcca32ebf9b804617c3aa9e") + aad = unhexlify("00000000000000000000000000000000" + + "101112131415161718191a1b1c1d1e1f") + pt = unhexlify("000102030405060708090a0b0c0d0e0f" + + "101112131415161718191a1b1c1d1e1f" + + "202122232425262728292a2b2c2d2e2f" + + "303132333435363738393a3b3c3d3e3f" + + "404142434445464748494a4b4c4d4e4f") + ct = unhexlify("6268c6fa2a80b2d137467f092f657ac0" + + "4d89be2beaa623d61b5a868c8f03ff95" + + "d3dcee23ad2f1ab3a6c80eaf4b140eb0" + + "5de3457f0fbc111a6b43d0763aa422a3" + + "013cf1dc37fe417d1fbfc449b75d4cc5") + digest = unhexlify("3b629ccfbc1119b7319e1dce2cd6fd6d") + + cipher = AES.new(key, AES.MODE_GCM, iv).update(aad) + ct2, digest2 = cipher.encrypt_and_digest(pt) + + self.assertEqual(ct, ct2) + self.assertEqual(digest, digest2) + + +class NISTTestVectorsGCM(unittest.TestCase): + + def __init__(self, a): + self.use_clmul = True + unittest.TestCase.__init__(self, a) + + +class NISTTestVectorsGCM_no_clmul(unittest.TestCase): + + def __init__(self, a): + self.use_clmul = False + unittest.TestCase.__init__(self, a) + + +test_vectors_nist = load_test_vectors( + ("Cipher", "AES"), + "gcmDecrypt128.rsp", + "GCM decrypt", + {"count": lambda x: int(x)}) or [] + +test_vectors_nist += load_test_vectors( + ("Cipher", "AES"), + "gcmEncryptExtIV128.rsp", + "GCM encrypt", + {"count": lambda x: int(x)}) or [] + +for idx, tv in enumerate(test_vectors_nist): + + # The test vector file contains some directive lines + if isinstance(tv, str): + continue + + def single_test(self, tv=tv): + + self.description = tv.desc + cipher = AES.new(tv.key, AES.MODE_GCM, nonce=tv.iv, + mac_len=len(tv.tag), use_clmul=self.use_clmul) + cipher.update(tv.aad) + if "FAIL" in tv.others: + self.assertRaises(ValueError, cipher.decrypt_and_verify, + tv.ct, tv.tag) + else: + pt = cipher.decrypt_and_verify(tv.ct, tv.tag) + self.assertEqual(pt, tv.pt) + + setattr(NISTTestVectorsGCM, "test_%d" % idx, single_test) + setattr(NISTTestVectorsGCM_no_clmul, "test_%d" % idx, single_test) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, **extra_params): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._extra_params = extra_params + self._id = "None" + + def setUp(self): + + def filter_tag(group): + return group['tagSize'] // 8 + + self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + "aes_gcm_test.json", + "Wycheproof GCM", + group_tag={'tag_size': filter_tag}) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt GCM Test #" + str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_GCM, tv.iv, mac_len=tv.tag_size, + **self._extra_params) + except ValueError as e: + if len(tv.iv) == 0 and "Nonce cannot be empty" in str(e): + return + raise e + + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(ct, tv.ct) + self.assertEqual(tag, tv.tag) + self.warn(tv) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt GCM Test #" + str(tv.id) + + try: + cipher = AES.new(tv.key, AES.MODE_GCM, tv.iv, mac_len=tv.tag_size, + **self._extra_params) + except ValueError as e: + if len(tv.iv) == 0 and "Nonce cannot be empty" in str(e): + return + raise e + + 
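The Wycheproof classes here all follow the same driver shape: load the JSON vectors once, then for each vector expect decrypt_and_verify() to succeed exactly when the vector is flagged valid. A stripped-down sketch of that loop, assuming the same loader and vector attributes (key, iv, aad, msg, ct, tag, tag_size, valid) the suite relies on:

```python
from Cryptodome.Cipher import AES
from Cryptodome.SelfTest.loader import load_test_vectors_wycheproof

def check_wycheproof_gcm():
    vectors = load_test_vectors_wycheproof(
        ("Cipher", "wycheproof"), "aes_gcm_test.json", "Wycheproof GCM",
        group_tag={'tag_size': lambda group: group['tagSize'] // 8})
    for tv in vectors:
        if len(tv.iv) == 0:
            continue                            # GCM rejects empty nonces outright
        cipher = AES.new(tv.key, AES.MODE_GCM, tv.iv, mac_len=tv.tag_size)
        cipher.update(tv.aad)
        try:
            pt = cipher.decrypt_and_verify(tv.ct, tv.tag)
        except ValueError:
            assert not tv.valid                 # invalid vectors must fail the tag check
        else:
            assert tv.valid and pt == tv.msg
```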
cipher.update(tv.aad) + try: + pt = cipher.decrypt_and_verify(tv.ct, tv.tag) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + self.warn(tv) + + def test_corrupt_decrypt(self, tv): + self._id = "Wycheproof Corrupt Decrypt GCM Test #" + str(tv.id) + if len(tv.iv) == 0 or len(tv.ct) < 1: + return + cipher = AES.new(tv.key, AES.MODE_GCM, tv.iv, mac_len=tv.tag_size, + **self._extra_params) + cipher.update(tv.aad) + ct_corrupt = strxor(tv.ct, b"\x00" * (len(tv.ct) - 1) + b"\x01") + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct_corrupt, tv.tag) + + def runTest(self): + + for tv in self.tv: + self.test_encrypt(tv) + self.test_decrypt(tv) + self.test_corrupt_decrypt(tv) + + +class TestVariableLength(unittest.TestCase): + + def __init__(self, **extra_params): + unittest.TestCase.__init__(self) + self._extra_params = extra_params + + def runTest(self): + key = b'0' * 16 + h = SHA256.new() + + for length in range(160): + nonce = '{0:04d}'.format(length).encode('utf-8') + data = bchr(length) * length + cipher = AES.new(key, AES.MODE_GCM, nonce=nonce, **self._extra_params) + ct, tag = cipher.encrypt_and_digest(data) + h.update(ct) + h.update(tag) + + self.assertEqual(h.hexdigest(), "7b7eb1ffbe67a2e53a912067c0ec8e62ebc7ce4d83490ea7426941349811bdf4") + + +def get_tests(config={}): + from Cryptodome.Util import _cpu_features + + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(GcmTests) + tests += list_test_cases(GcmFSMTests) + tests += [TestVectors()] + tests += [TestVectorsWycheproof(wycheproof_warnings)] + tests += list_test_cases(TestVectorsGueronKrasnov) + tests += [TestVariableLength()] + if config.get('slow_tests'): + tests += list_test_cases(NISTTestVectorsGCM) + + if _cpu_features.have_clmul(): + tests += [TestVectorsWycheproof(wycheproof_warnings, use_clmul=False)] + tests += [TestVariableLength(use_clmul=False)] + if config.get('slow_tests'): + tests += list_test_cases(NISTTestVectorsGCM_no_clmul) + else: + print("Skipping test of PCLMULQDQ in AES GCM") + + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OCB.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OCB.py new file mode 100644 index 0000000..1f2ffbc --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OCB.py @@ -0,0 +1,845 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
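get_tests() above runs the GCM suite twice when the CPU supports carry-less multiplication: once with the PCLMULQDQ-accelerated GHASH and once with use_clmul=False to force the portable implementation. The two paths must agree bit for bit, which a few lines can demonstrate (sketch only; key, nonce, and data are arbitrary):

```python
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes
from Cryptodome.Util import _cpu_features

key, nonce, data = get_random_bytes(16), get_random_bytes(12), b"x" * 64

ct1, tag1 = AES.new(key, AES.MODE_GCM, nonce=nonce).encrypt_and_digest(data)
if _cpu_features.have_clmul():
    ct2, tag2 = AES.new(key, AES.MODE_GCM, nonce=nonce,
                        use_clmul=False).encrypt_and_digest(data)
    assert (ct1, tag1) == (ct2, tag2)   # software GHASH must match the accelerated one
```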
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.Util.py3compat import b, tobytes, bchr +from Cryptodome.Util.number import long_to_bytes +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Cipher import AES +from Cryptodome.Hash import SHAKE128 + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class OcbTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct, mac = cipher.encrypt_and_digest(pt) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # Nonce is optional + AES.new(self.key_128, AES.MODE_OCB) + + cipher = AES.new(self.key_128, AES.MODE_OCB, self.nonce_96) + ct = cipher.encrypt(self.data) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertEqual(ct, cipher.encrypt(self.data)) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_OCB, + nonce=u'test12345678') + + def test_nonce_length(self): + # nonce cannot be empty + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, + nonce=b("")) + + # nonce can be up to 15 bytes long + for length in range(1, 16): + AES.new(self.key_128, AES.MODE_OCB, nonce=self.data[:length]) + + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, + nonce=self.data) + + def test_block_size_128(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, AES.block_size) + + # By default, a 15 bytes long nonce is randomly generated + nonce1 = AES.new(self.key_128, AES.MODE_OCB).nonce + nonce2 = AES.new(self.key_128, AES.MODE_OCB).nonce + self.assertEqual(len(nonce1), 15) + self.assertNotEqual(nonce1, nonce2) + + def test_nonce_attribute(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, a 15 bytes long nonce is randomly generated + nonce1 = AES.new(self.key_128, AES.MODE_OCB).nonce + nonce2 = AES.new(self.key_128, AES.MODE_OCB).nonce + self.assertEqual(len(nonce1), 15) + self.assertNotEqual(nonce1, nonce2) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_OCB, + self.nonce_96, 7) + self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_OCB, + nonce=self.nonce_96, unknown=7) + + # But some are only known by the base cipher + # (e.g. 
use_aesni consumed by the AES module) + AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96, + use_aesni=False) + + def test_null_encryption_decryption(self): + for func in "encrypt", "decrypt": + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + result = getattr(cipher, func)(b("")) + self.assertEqual(result, b("")) + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.encrypt(b("xyz")) + self.assertRaises(TypeError, cipher.decrypt, b("xyz")) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.decrypt(b("xyz")) + self.assertRaises(TypeError, cipher.encrypt, b("xyz")) + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') + + def test_mac_len(self): + # Invalid MAC length + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, + nonce=self.nonce_96, mac_len=7) + self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, + nonce=self.nonce_96, mac_len=16+1) + + # Valid MAC length + for mac_len in range(8, 16 + 1): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96, + mac_len=mac_len) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), mac_len) + + # Default MAC length + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Cryptodome.Util.strxor import strxor_c + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_message_chunks(self): + # Validate that both associated data and plaintext/ciphertext + # can be broken up in chunks of arbitrary length + + auth_data = get_tag_random("authenticated data", 127) + plaintext = get_tag_random("plaintext", 127) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.update(auth_data) + ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) + + def break_up(data, chunk_length): + return [data[i:i+chunk_length] for i in range(0, len(data), + chunk_length)] + + # Encryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + pt2 = b("") + for chunk in break_up(ciphertext, chunk_length): + pt2 += cipher.decrypt(chunk) + pt2 += cipher.decrypt() + self.assertEqual(plaintext, pt2) + cipher.verify(ref_mac) + + # Decryption + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + + for chunk in break_up(auth_data, chunk_length): + cipher.update(chunk) + ct2 = b("") + for chunk in break_up(plaintext, chunk_length): + 
ct2 += cipher.encrypt(chunk) + ct2 += cipher.encrypt() + self.assertEqual(ciphertext, ct2) + self.assertEqual(cipher.digest(), ref_mac) + + def test_bytearray(self): + + # Encrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + data_ba = bytearray(self.data) + + cipher1 = AES.new(self.key_128, + AES.MODE_OCB, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + cipher1.encrypt() + tag = cipher1.digest() + + cipher2 = AES.new(key_ba, + AES.MODE_OCB, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_ba) + cipher2.encrypt() + data_ba[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_ba = bytearray(self.key_128) + nonce_ba = bytearray(self.nonce_96) + header_ba = bytearray(self.data) + del data_ba + + cipher4 = AES.new(key_ba, + AES.MODE_OCB, + nonce=nonce_ba) + key_ba[:3] = b"\xFF\xFF\xFF" + nonce_ba[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_ba) + header_ba[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(bytearray(ct_test), bytearray(tag_test)) + + self.assertEqual(self.data, pt_test) + + def test_memoryview(self): + + # Encrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + data_mv = memoryview(bytearray(self.data)) + + cipher1 = AES.new(self.key_128, + AES.MODE_OCB, + nonce=self.nonce_96) + cipher1.update(self.data) + ct = cipher1.encrypt(self.data) + cipher1.encrypt() + tag = cipher1.digest() + + cipher2 = AES.new(key_mv, + AES.MODE_OCB, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher2.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + ct_test = cipher2.encrypt(data_mv) + cipher2.encrypt() + data_mv[:3] = b"\xFF\xFF\xFF" + tag_test = cipher2.digest() + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key_mv = memoryview(bytearray(self.key_128)) + nonce_mv = memoryview(bytearray(self.nonce_96)) + header_mv = memoryview(bytearray(self.data)) + del data_mv + + cipher4 = AES.new(key_mv, + AES.MODE_OCB, + nonce=nonce_mv) + key_mv[:3] = b"\xFF\xFF\xFF" + nonce_mv[:3] = b"\xFF\xFF\xFF" + cipher4.update(header_mv) + header_mv[:3] = b"\xFF\xFF\xFF" + pt_test = cipher4.decrypt_and_verify(memoryview(ct_test), memoryview(tag_test)) + + self.assertEqual(self.data, pt_test) + + +class OcbFSMTests(unittest.TestCase): + + key_128 = get_tag_random("key_128", 16) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_valid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + # Verify path INIT->ENCRYPT->ENCRYPT(NONE)->DIGEST + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + ct += cipher.encrypt() + mac = cipher.digest() + + # Verify path INIT->DECRYPT->DECRYPT(NONE)->VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.decrypt() + cipher.verify(mac) + + def test_invalid_init_encrypt_decrypt_digest_verify(self): + # No authenticated data, fixed plaintext + # Verify path INIT->ENCRYPT->DIGEST + cipher = AES.new(self.key_128,
AES.MODE_OCB, + nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + self.assertRaises(TypeError, cipher.digest) + + # Verify path INIT->DECRYPT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.decrypt(ct) + self.assertRaises(TypeError, cipher.verify) + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + # Verify path INIT->UPDATE->DIGEST + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.verify(mac) + + def test_valid_full_path(self): + # Fixed authenticated data, fixed plaintext + # Verify path INIT->UPDATE->ENCRYPT->ENCRYPT(NONE)->DIGEST + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + ct = cipher.encrypt(self.data) + ct += cipher.encrypt() + mac = cipher.digest() + + # Verify path INIT->UPDATE->DECRYPT->DECRYPT(NONE)->VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.decrypt(ct) + cipher.decrypt() + cipher.verify(mac) + + # Verify path INIT->UPDATE->ENCRYPT->ENCRYPT_AND_DIGEST + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + ct1 = cipher.encrypt(self.data[:2]) + ct2, mac = cipher.encrypt_and_digest(self.data[2:]) + + # Verify path INIT->UPDATE->DECRYPT->DECRYPT_AND_VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.decrypt(ct1) + cipher.decrypt_and_verify(ct2, mac) + + def test_invalid_encrypt_after_final(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.encrypt(self.data) + cipher.encrypt() + self.assertRaises(TypeError, cipher.encrypt, self.data) + + def test_invalid_decrypt_after_final(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.decrypt(self.data) + cipher.decrypt() + self.assertRaises(TypeError, cipher.decrypt, self.data) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + mac = cipher.digest() + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_encrypt_or_decrypt(self): + for method_name in "encrypt", "decrypt": + for auth_data in (None, b("333"), self.data, + self.data + b("3")): + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + if auth_data is not None: + cipher.update(auth_data) + method = getattr(cipher, method_name) + method(self.data) + method(self.data) + method(self.data) + method(self.data) + method() + + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.update(self.data) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.update(self.data) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # 
encrypt_and_digest + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.update(self.data) + ct, mac = cipher.encrypt_and_digest(self.data) + + # decrypt_and_verify + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.update(self.data) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data, pt) + + def test_invalid_mixing_encrypt_decrypt(self): + # Once per method, with or without assoc. data + for method1_name, method2_name in (("encrypt", "decrypt"), + ("decrypt", "encrypt")): + for assoc_data_present in (True, False): + cipher = AES.new(self.key_128, AES.MODE_OCB, + nonce=self.nonce_96) + if assoc_data_present: + cipher.update(self.data) + getattr(cipher, method1_name)(self.data) + self.assertRaises(TypeError, getattr(cipher, method2_name), + self.data) + + def test_invalid_encrypt_or_update_after_digest(self): + for method_name in "encrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.encrypt(self.data) + cipher.encrypt() + cipher.digest() + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data) + + def test_invalid_decrypt_or_update_after_verify(self): + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + ct = cipher.encrypt(self.data) + ct += cipher.encrypt() + mac = cipher.digest() + + for method_name in "decrypt", "update": + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.decrypt(ct) + cipher.decrypt() + cipher.verify(mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, mac) + self.assertRaises(TypeError, getattr(cipher, method_name), + self.data) + + +def algo_rfc7253(keylen, taglen, noncelen): + """Implement the algorithm at page 18 of RFC 7253""" + + key = bchr(0) * (keylen // 8 - 1) + bchr(taglen) + C = b"" + + for i in range(128): + S = bchr(0) * i + + N = long_to_bytes(3 * i + 1, noncelen // 8) + cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) + cipher.update(S) + C += cipher.encrypt(S) + cipher.encrypt() + cipher.digest() + + N = long_to_bytes(3 * i + 2, noncelen // 8) + cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) + C += cipher.encrypt(S) + cipher.encrypt() + cipher.digest() + + N = long_to_bytes(3 * i + 3, noncelen // 8) + cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) + cipher.update(S) + C += cipher.encrypt() + cipher.digest() + + N = long_to_bytes(385, noncelen // 8) + cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) + cipher.update(C) + return cipher.encrypt() + cipher.digest() + + +class OcbRfc7253Test(unittest.TestCase): + + # Tuple with + # - nonce + # - authenticated data + # - plaintext + # - ciphertext and 16 byte MAC tag + tv1_key = "000102030405060708090A0B0C0D0E0F" + tv1 = ( + ( + "BBAA99887766554433221100", + "", + "", + "785407BFFFC8AD9EDCC5520AC9111EE6" + ), + ( + "BBAA99887766554433221101", + "0001020304050607", + "0001020304050607", + "6820B3657B6F615A5725BDA0D3B4EB3A257C9AF1F8F03009" + ), + ( + "BBAA99887766554433221102", + "0001020304050607", + "", + "81017F8203F081277152FADE694A0A00" + ), + ( + "BBAA99887766554433221103", + "", + "0001020304050607", + "45DD69F8F5AAE72414054CD1F35D82760B2CD00D2F99BFA9" + ), + ( + "BBAA99887766554433221104", + "000102030405060708090A0B0C0D0E0F", + 
"000102030405060708090A0B0C0D0E0F", + "571D535B60B277188BE5147170A9A22C3AD7A4FF3835B8C5" + "701C1CCEC8FC3358" + ), + ( + "BBAA99887766554433221105", + "000102030405060708090A0B0C0D0E0F", + "", + "8CF761B6902EF764462AD86498CA6B97" + ), + ( + "BBAA99887766554433221106", + "", + "000102030405060708090A0B0C0D0E0F", + "5CE88EC2E0692706A915C00AEB8B2396F40E1C743F52436B" + "DF06D8FA1ECA343D" + ), + ( + "BBAA99887766554433221107", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "1CA2207308C87C010756104D8840CE1952F09673A448A122" + "C92C62241051F57356D7F3C90BB0E07F" + ), + ( + "BBAA99887766554433221108", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "", + "6DC225A071FC1B9F7C69F93B0F1E10DE" + ), + ( + "BBAA99887766554433221109", + "", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "221BD0DE7FA6FE993ECCD769460A0AF2D6CDED0C395B1C3C" + "E725F32494B9F914D85C0B1EB38357FF" + ), + ( + "BBAA9988776655443322110A", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F", + "BD6F6C496201C69296C11EFD138A467ABD3C707924B964DE" + "AFFC40319AF5A48540FBBA186C5553C68AD9F592A79A4240" + ), + ( + "BBAA9988776655443322110B", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F", + "", + "FE80690BEE8A485D11F32965BC9D2A32" + ), + ( + "BBAA9988776655443322110C", + "", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F", + "2942BFC773BDA23CABC6ACFD9BFD5835BD300F0973792EF4" + "6040C53F1432BCDFB5E1DDE3BC18A5F840B52E653444D5DF" + ), + ( + "BBAA9988776655443322110D", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "D5CA91748410C1751FF8A2F618255B68A0A12E093FF45460" + "6E59F9C1D0DDC54B65E8628E568BAD7AED07BA06A4A69483" + "A7035490C5769E60" + ), + ( + "BBAA9988776655443322110E", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "", + "C5CD9D1850C141E358649994EE701B68" + ), + ( + "BBAA9988776655443322110F", + "", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "4412923493C57D5DE0D700F753CCE0D1D2D95060122E9F15" + "A5DDBFC5787E50B5CC55EE507BCB084E479AD363AC366B95" + "A98CA5F3000B1479" + ) + ) + + # Tuple with + # - key + # - nonce + # - authenticated data + # - plaintext + # - ciphertext and 12 byte MAC tag + tv2 = ( + "0F0E0D0C0B0A09080706050403020100", + "BBAA9988776655443322110D", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627", + "1792A4E31E0755FB03E31B22116E6C2DDF9EFD6E33D536F1" + "A0124B0A55BAE884ED93481529C76B6AD0C515F4D1CDD4FD" + "AC4F02AA" + ) + + # Tuple with + # - key length + # - MAC tag length + # - Expected output + tv3 = ( + (128, 128, "67E944D23256C5E0B6C61FA22FDF1EA2"), + (192, 128, "F673F2C3E7174AAE7BAE986CA9F29E17"), + (256, 128, "D90EB8E9C977C88B79DD793D7FFA161C"), + (128, 96, "77A3D8E73589158D25D01209"), + (192, 96, "05D56EAD2752C86BE6932C5E"), + (256, 96, "5458359AC23B0CBA9E6330DD"), + (128, 64, "192C9B7BD90BA06A"), + (192, 64, "0066BC6E0EF34E24"), + (256, 64, "7D4EA5D445501CBE"), + ) + + def test1(self): + key = unhexlify(b(self.tv1_key)) + for tv in self.tv1: + nonce, aad, pt, ct = [unhexlify(b(x)) for x in tv] + ct, mac_tag = ct[:-16], ct[-16:] + + 
cipher = AES.new(key, AES.MODE_OCB, nonce=nonce) + cipher.update(aad) + ct2 = cipher.encrypt(pt) + cipher.encrypt() + self.assertEqual(ct, ct2) + self.assertEqual(mac_tag, cipher.digest()) + + cipher = AES.new(key, AES.MODE_OCB, nonce=nonce) + cipher.update(aad) + pt2 = cipher.decrypt(ct) + cipher.decrypt() + self.assertEqual(pt, pt2) + cipher.verify(mac_tag) + + def test2(self): + + key, nonce, aad, pt, ct = [unhexlify(b(x)) for x in self.tv2] + ct, mac_tag = ct[:-12], ct[-12:] + + cipher = AES.new(key, AES.MODE_OCB, nonce=nonce, mac_len=12) + cipher.update(aad) + ct2 = cipher.encrypt(pt) + cipher.encrypt() + self.assertEqual(ct, ct2) + self.assertEqual(mac_tag, cipher.digest()) + + cipher = AES.new(key, AES.MODE_OCB, nonce=nonce, mac_len=12) + cipher.update(aad) + pt2 = cipher.decrypt(ct) + cipher.decrypt() + self.assertEqual(pt, pt2) + cipher.verify(mac_tag) + + def test3(self): + for keylen, taglen, result in self.tv3: + result2 = algo_rfc7253(keylen, taglen, 96) + self.assertEqual(unhexlify(b(result)), result2) + + +class OcbDkgTest(unittest.TestCase): + """Test vectors from https://gitlab.com/dkg/ocb-test-vectors""" + + def test_1_2(self): + tvs = [] + for fi in (1, 2): + for nb in (104, 112, 120): + tv_file = load_test_vectors(("Cipher", "AES"), + "test-vector-%d-nonce%d.txt" % (fi, nb), + "DKG tests, %d, %d bits" % (fi, nb), + {}) + if tv_file is None: + break + key = tv_file[0].k + for tv in tv_file[1:]: + tv.k = key + tvs.append(tv) + + for tv in tvs: + k, n, a, p, c = tv.k, tv.n, tv.a, tv.p, tv.c + mac_len = len(c) - len(p) + cipher = AES.new(k, AES.MODE_OCB, nonce=n, mac_len=mac_len) + cipher.update(a) + c_out, tag_out = cipher.encrypt_and_digest(p) + self.assertEqual(c, c_out + tag_out) + + def test_3(self): + + def check(keylen, taglen, noncelen, exp): + result = algo_rfc7253(keylen, taglen, noncelen) + self.assertEqual(result, unhexlify(exp)) + + # test-vector-3-nonce104.txt + check(128, 128, 104, "C47F5F0341E15326D4D1C46F47F05062") + check(192, 128, 104, "95B9167A38EB80495DFC561A8486E109") + check(256, 128, 104, "AFE1CDDB97028FD92F8FB3C8CFBA7D83") + check(128, 96, 104, "F471B4983BA80946DF217A54") + check(192, 96, 104, "5AE828BC51C24D85FA5CC7B2") + check(256, 96, 104, "8C8335982E2B734616CAD14C") + check(128, 64, 104, "B553F74B85FD1E5B") + check(192, 64, 104, "3B49D20E513531F9") + check(256, 64, 104, "ED6DA5B1216BF8BB") + + # test-vector-3-nonce112.txt + check(128, 128, 112, "CA8AFCA031BAC3F480A583BD6C50A547") + check(192, 128, 112, "D170C1DF356308079DA9A3F619147148") + check(256, 128, 112, "57F94381F2F9231EFB04AECD323757C3") + check(128, 96, 112, "3A618B2531ED39F260C750DC") + check(192, 96, 112, "9071EB89FEDBADDA88FD286E") + check(256, 96, 112, "FDF0EFB97F21A39AC4BAB5AC") + check(128, 64, 112, "FAB2FF3A8DD82A13") + check(192, 64, 112, "AC01D912BD0737D3") + check(256, 64, 112, "9D1FD0B500EA4ECF") + + # test-vector-3-nonce120.txt + check(128, 128, 120, "9E043A7140A25FB91F43BCC9DD7E0F46") + check(192, 128, 120, "680000E53908323A7F396B955B8EC641") + check(256, 128, 120, "8304B97FAACDA56E676602E1878A7E6F") + check(128, 96, 120, "81F978AC9867E825D339847D") + check(192, 96, 120, "EFCF2D60B24926ADA48CF5B1") + check(256, 96, 120, "84961DC56E917B165E58C174") + check(128, 64, 120, "227AEE6C9D905A61") + check(192, 64, 120, "541DE691B9E1A2F9") + check(256, 64, 120, "B0E761381C7129FC") + + def test_2_bugfix(self): + nonce = unhexlify("EEDDCCBBAA9988776655443322110D") + key = unhexlify("0F0E0D0C0B0A09080706050403020100") + A = unhexlify("000102030405060708090A0B0C0D0E0F1011121314151617" + 
"18191A1B1C1D1E1F2021222324252627") + P = unhexlify("000102030405060708090A0B0C0D0E0F1011121314151617" + "18191A1B1C1D1E1F2021222324252627") + C = unhexlify("07E903BFC49552411ABC865F5ECE60F6FAD1F5A9F14D3070" + "FA2F1308A563207FFE14C1EEA44B22059C7484319D8A2C53" + "C236A7B3") + mac_len = len(C) - len(P) + + # Prior to version 3.17, a nonce of maximum length (15 bytes) + # was actually used as a 14 byte nonce. The last byte was erroneously + # ignored. + buggy_result = unhexlify("BA015C4E5AE54D76C890AE81BD40DC57" + "03EDC30E8AC2A58BC5D8FA4D61C5BAE6" + "C39BEAC435B2FD56A2A5085C1B135D77" + "0C8264B7") + cipher = AES.new(key, AES.MODE_OCB, nonce=nonce[:-1], mac_len=mac_len) + cipher.update(A) + C_out2, tag_out2 = cipher.encrypt_and_digest(P) + self.assertEqual(buggy_result, C_out2 + tag_out2) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(OcbTests) + tests += list_test_cases(OcbFSMTests) + tests += list_test_cases(OcbRfc7253Test) + tests += list_test_cases(OcbDkgTest) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OFB.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OFB.py new file mode 100644 index 0000000..9a8ef0a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OFB.py @@ -0,0 +1,238 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Util.py3compat import tobytes +from Cryptodome.Cipher import AES, DES3, DES +from Cryptodome.Hash import SHAKE128 +from Cryptodome.SelfTest.loader import load_test_vectors_wycheproof + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + +from Cryptodome.SelfTest.Cipher.test_CBC import BlockChainingTests + +class OfbTests(BlockChainingTests): + + aes_mode = AES.MODE_OFB + des3_mode = DES3.MODE_OFB + + # Redefine test_unaligned_data_128/64 + + def test_unaligned_data_128(self): + plaintexts = [ b"7777777" ] * 100 + + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=8) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=8) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=128) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_CFB, self.iv_128, segment_size=128) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_unaligned_data_64(self): + plaintexts = [ b"7777777" ] * 100 + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=8) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=8) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=64) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, DES3.MODE_CFB, self.iv_64, segment_size=64) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + +from Cryptodome.SelfTest.Cipher.test_CBC import NistBlockChainingVectors + +class NistOfbVectors(NistBlockChainingVectors): + aes_mode = AES.MODE_OFB + des_mode = DES.MODE_OFB + des3_mode = DES3.MODE_OFB + + +# Create one test method per file +nist_aes_kat_mmt_files = ( + # KAT + "OFBGFSbox128.rsp", + "OFBGFSbox192.rsp", + "OFBGFSbox256.rsp", + "OFBKeySbox128.rsp", + "OFBKeySbox192.rsp", + "OFBKeySbox256.rsp", + "OFBVarKey128.rsp", + "OFBVarKey192.rsp", + "OFBVarKey256.rsp", + "OFBVarTxt128.rsp", + "OFBVarTxt192.rsp", + "OFBVarTxt256.rsp", + # MMT + "OFBMMT128.rsp", + "OFBMMT192.rsp", + "OFBMMT256.rsp", + ) +nist_aes_mct_files = ( + "OFBMCT128.rsp", + "OFBMCT192.rsp", + "OFBMCT256.rsp", + ) + +for file_name in nist_aes_kat_mmt_files: + def new_func(self, file_name=file_name): + self._do_kat_aes_test(file_name) + setattr(NistOfbVectors, "test_AES_" + file_name, new_func) + +for file_name in nist_aes_mct_files: + def new_func(self, file_name=file_name): + self._do_mct_aes_test(file_name) + setattr(NistOfbVectors, "test_AES_" + file_name, new_func) +del file_name, new_func + +nist_tdes_files = ( + "TOFBMMT2.rsp", # 2TDES + "TOFBMMT3.rsp", # 3TDES + "TOFBinvperm.rsp", # Single DES + "TOFBpermop.rsp", + "TOFBsubtab.rsp", + "TOFBvarkey.rsp", + "TOFBvartext.rsp", + ) + +for file_name in nist_tdes_files: + def new_func(self, file_name=file_name): + self._do_tdes_test(file_name) + setattr(NistOfbVectors, "test_TDES_" + file_name, new_func) + +# END OF NIST OFB TEST VECTORS + + +class SP800TestVectors(unittest.TestCase): + 
"""Class exercising the OFB test vectors found in Section F.4 + of NIST SP 800-3A""" + + def test_aes_128(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = '3b3fd92eb72dad20333449f8e83cfb4a' +\ + '7789508d16918f03f53c52dac54ed825' +\ + '9740051e9c5fecf64344f7a82260edcc' +\ + '304c6528f659c77866a510d9c1d6ae5e' + key = '2b7e151628aed2a6abf7158809cf4f3c' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext[:-8]), ciphertext[:-8]) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext[:-8]), plaintext[:-8]) + + def test_aes_192(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = 'cdc80d6fddf18cab34c25909c99a4174' +\ + 'fcc28b8d4c63837c09e81700c1100401' +\ + '8d9a9aeac0f6596f559c6d4daf59a5f2' +\ + '6d9f200857ca6c3e9cac524bd9acc92a' + key = '8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext[:-8]), ciphertext[:-8]) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext[:-8]), plaintext[:-8]) + + def test_aes_256(self): + plaintext = '6bc1bee22e409f96e93d7e117393172a' +\ + 'ae2d8a571e03ac9c9eb76fac45af8e51' +\ + '30c81c46a35ce411e5fbc1191a0a52ef' +\ + 'f69f2445df4f9b17ad2b417be66c3710' + ciphertext = 'dc7e84bfda79164b7ecd8486985d3860' +\ + '4febdc6740d20b3ac88f6ad82a4fb08d' +\ + '71ab47a086e86eedf39d1c5bba97c408' +\ + '0126141d67f37be8538f5a8be740e484' + key = '603deb1015ca71be2b73aef0857d77811f352c073b6108d72d9810a30914dff4' + iv = '000102030405060708090a0b0c0d0e0f' + + key = unhexlify(key) + iv = unhexlify(iv) + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext), ciphertext) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext), plaintext) + + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.encrypt(plaintext[:-8]), ciphertext[:-8]) + cipher = AES.new(key, AES.MODE_OFB, iv) + self.assertEqual(cipher.decrypt(ciphertext[:-8]), plaintext[:-8]) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(OfbTests) + if config.get('slow_tests'): + tests += list_test_cases(NistOfbVectors) + tests += list_test_cases(SP800TestVectors) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OpenPGP.py 
b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OpenPGP.py new file mode 100644 index 0000000..4090a1a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_OpenPGP.py @@ -0,0 +1,218 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Util.py3compat import tobytes +from Cryptodome.Cipher import AES, DES3, DES +from Cryptodome.Hash import SHAKE128 + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +from Cryptodome.SelfTest.Cipher.test_CBC import BlockChainingTests + +class OpenPGPTests(BlockChainingTests): + + aes_mode = AES.MODE_OPENPGP + des3_mode = DES3.MODE_OPENPGP + + # Redefine test_unaligned_data_128/64 + + key_128 = get_tag_random("key_128", 16) + key_192 = get_tag_random("key_192", 24) + iv_128 = get_tag_random("iv_128", 16) + iv_64 = get_tag_random("iv_64", 8) + data_128 = get_tag_random("data_128", 16) + + def test_loopback_128(self): + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + pt = get_tag_random("plaintext", 16 * 100) + ct = cipher.encrypt(pt) + + eiv, ct = ct[:18], ct[18:] + + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, eiv) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_loopback_64(self): + cipher = DES3.new(self.key_192, DES3.MODE_OPENPGP, self.iv_64) + pt = get_tag_random("plaintext", 8 * 100) + ct = cipher.encrypt(pt) + + eiv, ct = ct[:10], ct[10:] + + cipher = DES3.new(self.key_192, DES3.MODE_OPENPGP, eiv) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def test_IV_iv_attributes(self): + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + eiv = cipher.encrypt(b"") + self.assertEqual(cipher.iv, self.iv_128) + + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, eiv) + self.assertEqual(cipher.iv, self.iv_128) + + def test_null_encryption_decryption(self): + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + eiv = 
cipher.encrypt(b"") + + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, eiv) + self.assertEqual(cipher.decrypt(b""), b"") + + def test_either_encrypt_or_decrypt(self): + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + eiv = cipher.encrypt(b"") + self.assertRaises(TypeError, cipher.decrypt, b"") + + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, eiv) + cipher.decrypt(b"") + self.assertRaises(TypeError, cipher.encrypt, b"") + + def test_unaligned_data_128(self): + plaintexts = [ b"7777777" ] * 100 + + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = AES.new(self.key_128, AES.MODE_OPENPGP, self.iv_128) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_unaligned_data_64(self): + plaintexts = [ b"7777777" ] * 100 + + cipher = DES3.new(self.key_192, DES3.MODE_OPENPGP, self.iv_64) + ciphertexts = [ cipher.encrypt(x) for x in plaintexts ] + cipher = DES3.new(self.key_192, DES3.MODE_OPENPGP, self.iv_64) + self.assertEqual(b"".join(ciphertexts), cipher.encrypt(b"".join(plaintexts))) + + def test_output_param(self): + pass + + def test_output_param_same_buffer(self): + pass + + def test_output_param_memoryview(self): + pass + + def test_output_param_neg(self): + pass + + +class TestVectors(unittest.TestCase): + + def test_aes(self): + # The following test vectors have been generated with gpg v1.4.0. + # The command line used was: + # + # gpg -c -z 0 --cipher-algo AES --passphrase secret_passphrase \ + # --disable-mdc --s2k-mode 0 --output ct pt + # + # As a result, the content of the file 'pt' is encrypted with a key derived + # from 'secret_passphrase' and written to file 'ct'. + # Test vectors must be extracted from 'ct', which is a collection of + # TLVs (see RFC4880 for all details): + # - the encrypted data (with the encrypted IV as prefix) is the payload + # of the TLV with tag 9 (Symmetrically Encrypted Data Packet). + # This is the ciphertext in the test vector. + # - inside the encrypted part, there is a further layer of TLVs. One must + # look for tag 11 (Literal Data Packet); in its payload, after a short + # but time-dependent header, there is the content of file 'pt'. + # In the test vector, the plaintext is the complete set of TLVs that gets + # encrypted. It is not just the content of 'pt'. + # - the key is the leftmost 16 bytes of the SHA1 digest of the password. + # The test vector contains this shortened digest. + # + # Note that encryption uses a clear IV, and decryption an encrypted IV + + plaintext = 'ac18620270744fb4f647426c61636b4361745768697465436174' + ciphertext = 'dc6b9e1f095de609765c59983db5956ae4f63aea7405389d2ebb' + key = '5baa61e4c9b93f3f0682250b6cf8331b' + iv = '3d7d3e62282add7eb203eeba5c800733' + encrypted_iv='fd934601ef49cb58b6d9aebca6056bdb96ef' + + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + key = unhexlify(key) + iv = unhexlify(iv) + encrypted_iv = unhexlify(encrypted_iv) + + cipher = AES.new(key, AES.MODE_OPENPGP, iv) + ct = cipher.encrypt(plaintext) + self.assertEqual(ct[:18], encrypted_iv) + self.assertEqual(ct[18:], ciphertext) + + cipher = AES.new(key, AES.MODE_OPENPGP, encrypted_iv) + pt = cipher.decrypt(ciphertext) + self.assertEqual(pt, plaintext) + + def test_des3(self): + # The following test vectors have been generated with gpg v1.4.0.
+ # The command line used was: + # gpg -c -z 0 --cipher-algo 3DES --passphrase secret_passphrase \ + # --disable-mdc --s2k-mode 0 --output ct pt + # For an explanation, see test_AES.py . + + plaintext = 'ac1762037074324fb53ba3596f73656d69746556616c6c6579' + ciphertext = '9979238528357b90e2e0be549cb0b2d5999b9a4a447e5c5c7d' + key = '7ade65b460f5ea9be35f9e14aa883a2048e3824aa616c0b2' + iv='cd47e2afb8b7e4b0' + encrypted_iv='6a7eef0b58050e8b904a' + + plaintext = unhexlify(plaintext) + ciphertext = unhexlify(ciphertext) + key = unhexlify(key) + iv = unhexlify(iv) + encrypted_iv = unhexlify(encrypted_iv) + + cipher = DES3.new(key, DES3.MODE_OPENPGP, iv) + ct = cipher.encrypt(plaintext) + self.assertEqual(ct[:10], encrypted_iv) + self.assertEqual(ct[10:], ciphertext) + + cipher = DES3.new(key, DES3.MODE_OPENPGP, encrypted_iv) + pt = cipher.decrypt(ciphertext) + self.assertEqual(pt, plaintext) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(OpenPGPTests) + tests += list_test_cases(TestVectors) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_SIV.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_SIV.py new file mode 100644 index 0000000..d4bb5a9 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_SIV.py @@ -0,0 +1,552 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import json +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors_wycheproof + +from Cryptodome.Util.py3compat import tobytes, bchr +from Cryptodome.Cipher import AES +from Cryptodome.Hash import SHAKE128 + +from Cryptodome.Util.strxor import strxor + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class SivTests(unittest.TestCase): + + key_256 = get_tag_random("key_256", 32) + key_384 = get_tag_random("key_384", 48) + key_512 = get_tag_random("key_512", 64) + nonce_96 = get_tag_random("nonce_128", 12) + data = get_tag_random("data", 128) + + def test_loopback_128(self): + for key in self.key_256, self.key_384, self.key_512: + cipher = AES.new(key, AES.MODE_SIV, nonce=self.nonce_96) + pt = get_tag_random("plaintext", 16 * 100) + ct, mac = cipher.encrypt_and_digest(pt) + + cipher = AES.new(key, AES.MODE_SIV, nonce=self.nonce_96) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + def test_nonce(self): + # Deterministic encryption + AES.new(self.key_256, AES.MODE_SIV) + + cipher = AES.new(self.key_256, AES.MODE_SIV, self.nonce_96) + ct1, tag1 = cipher.encrypt_and_digest(self.data) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct2, tag2 = cipher.encrypt_and_digest(self.data) + self.assertEqual(ct1 + tag1, ct2 + tag2) + + def test_nonce_must_be_bytes(self): + self.assertRaises(TypeError, AES.new, self.key_256, AES.MODE_SIV, + nonce=u'test12345678') + + def test_nonce_length(self): + # nonce can be of any length (but not empty) + self.assertRaises(ValueError, AES.new, self.key_256, AES.MODE_SIV, + nonce=b"") + + for x in range(1, 128): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=bchr(1) * x) + cipher.encrypt_and_digest(b'\x01') + + def test_block_size_128(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertEqual(cipher.block_size, AES.block_size) + + def test_nonce_attribute(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertEqual(cipher.nonce, self.nonce_96) + + # By default, no nonce is randomly generated + self.assertFalse(hasattr(AES.new(self.key_256, AES.MODE_SIV), "nonce")) + + def test_unknown_parameters(self): + self.assertRaises(TypeError, AES.new, self.key_256, AES.MODE_SIV, + self.nonce_96, 7) + self.assertRaises(TypeError, AES.new, self.key_256, AES.MODE_SIV, + nonce=self.nonce_96, unknown=7) + + # But some are only known by the base cipher + # (e.g. 
use_aesni consumed by the AES module) + AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96, + use_aesni=False) + + def test_encrypt_excludes_decrypt(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data) + self.assertRaises(TypeError, cipher.decrypt, self.data) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.encrypt_and_digest(self.data) + self.assertRaises(TypeError, cipher.decrypt_and_verify, + self.data, self.data) + + def test_data_must_be_bytes(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt_and_verify, + u'test1234567890-*', b"xxxx") + + def test_mac_len(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + _, mac = cipher.encrypt_and_digest(self.data) + self.assertEqual(len(mac), 16) + + def test_invalid_mac(self): + from Cryptodome.Util.strxor import strxor_c + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct, mac = cipher.encrypt_and_digest(self.data) + + invalid_mac = strxor_c(mac, 0x01) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, + invalid_mac) + + def test_hex_mac(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + mac_hex = cipher.hexdigest() + self.assertEqual(cipher.digest(), unhexlify(mac_hex)) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.hexverify(mac_hex) + + def test_bytearray(self): + + # Encrypt + key = bytearray(self.key_256) + nonce = bytearray(self.nonce_96) + data = bytearray(self.data) + header = bytearray(self.data) + + cipher1 = AES.new(self.key_256, + AES.MODE_SIV, + nonce=self.nonce_96) + cipher1.update(self.data) + ct, tag = cipher1.encrypt_and_digest(self.data) + + cipher2 = AES.new(key, + AES.MODE_SIV, + nonce=nonce) + key[:3] = b'\xFF\xFF\xFF' + nonce[:3] = b'\xFF\xFF\xFF' + cipher2.update(header) + header[:3] = b'\xFF\xFF\xFF' + ct_test, tag_test = cipher2.encrypt_and_digest(data) + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key = bytearray(self.key_256) + nonce = bytearray(self.nonce_96) + header = bytearray(self.data) + ct_ba = bytearray(ct) + tag_ba = bytearray(tag) + + cipher3 = AES.new(key, + AES.MODE_SIV, + nonce=nonce) + key[:3] = b'\xFF\xFF\xFF' + nonce[:3] = b'\xFF\xFF\xFF' + cipher3.update(header) + header[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt_and_verify(ct_ba, tag_ba) + + self.assertEqual(self.data, pt_test) + + def test_memoryview(self): + + # Encrypt + key = memoryview(bytearray(self.key_256)) + nonce = memoryview(bytearray(self.nonce_96)) + data = memoryview(bytearray(self.data)) + header = memoryview(bytearray(self.data)) + + cipher1 = AES.new(self.key_256, + AES.MODE_SIV, + nonce=self.nonce_96) + cipher1.update(self.data) + ct, tag = cipher1.encrypt_and_digest(self.data) + + cipher2 = AES.new(key, + AES.MODE_SIV, + nonce=nonce) + key[:3] = b'\xFF\xFF\xFF' + nonce[:3] = b'\xFF\xFF\xFF' + cipher2.update(header) + header[:3] = b'\xFF\xFF\xFF' + ct_test, tag_test= cipher2.encrypt_and_digest(data) + + self.assertEqual(ct, ct_test) + self.assertEqual(tag, tag_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decrypt + key = 
memoryview(bytearray(self.key_256)) + nonce = memoryview(bytearray(self.nonce_96)) + header = memoryview(bytearray(self.data)) + ct_ba = memoryview(bytearray(ct)) + tag_ba = memoryview(bytearray(tag)) + + cipher3 = AES.new(key, + AES.MODE_SIV, + nonce=nonce) + key[:3] = b'\xFF\xFF\xFF' + nonce[:3] = b'\xFF\xFF\xFF' + cipher3.update(header) + header[:3] = b'\xFF\xFF\xFF' + pt_test = cipher3.decrypt_and_verify(ct_ba, tag_ba) + + self.assertEqual(self.data, pt_test) + + def test_output_param(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct, tag = cipher.encrypt_and_digest(pt) + + output = bytearray(128) + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + res, tag_out = cipher.encrypt_and_digest(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + self.assertEqual(tag, tag_out) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + res = cipher.decrypt_and_verify(ct, tag, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + def test_output_param_memoryview(self): + + pt = b'5' * 128 + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct, tag = cipher.encrypt_and_digest(pt) + + output = memoryview(bytearray(128)) + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.encrypt_and_digest(pt, output=output) + self.assertEqual(ct, output) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, tag, output=output) + self.assertEqual(pt, output) + + def test_output_param_neg(self): + LEN_PT = 128 + + pt = b'5' * LEN_PT + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct, tag = cipher.encrypt_and_digest(pt) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt_and_digest, pt, output=b'0' * LEN_PT) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt_and_verify, ct, tag, output=b'0' * LEN_PT) + + shorter_output = bytearray(LEN_PT - 1) + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.encrypt_and_digest, pt, output=shorter_output) + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, tag, output=shorter_output) + + +class SivFSMTests(unittest.TestCase): + + key_256 = get_tag_random("key_256", 32) + nonce_96 = get_tag_random("nonce_96", 12) + data = get_tag_random("data", 128) + + def test_invalid_init_encrypt(self): + # Path INIT->ENCRYPT fails + cipher = AES.new(self.key_256, AES.MODE_SIV, + nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.encrypt, b"xxx") + + def test_invalid_init_decrypt(self): + # Path INIT->DECRYPT fails + cipher = AES.new(self.key_256, AES.MODE_SIV, + nonce=self.nonce_96) + self.assertRaises(TypeError, cipher.decrypt, b"xxx") + + def test_valid_init_update_digest_verify(self): + # No plaintext, fixed authenticated data + # Verify path INIT->UPDATE->DIGEST + cipher = AES.new(self.key_256, AES.MODE_SIV, + nonce=self.nonce_96) + cipher.update(self.data) + mac = cipher.digest() + + # Verify path INIT->UPDATE->VERIFY + cipher = AES.new(self.key_256, AES.MODE_SIV, + nonce=self.nonce_96) + cipher.update(self.data) + cipher.verify(mac) + + def test_valid_init_digest(self): + # Verify path INIT->DIGEST + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + 
cipher.digest() + + def test_valid_init_verify(self): + # Verify path INIT->VERIFY + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + mac = cipher.digest() + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.verify(mac) + + def test_valid_multiple_digest_or_verify(self): + # Multiple calls to digest + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.update(self.data) + first_mac = cipher.digest() + for x in range(4): + self.assertEqual(first_mac, cipher.digest()) + + # Multiple calls to verify + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.update(self.data) + for x in range(5): + cipher.verify(first_mac) + + def test_valid_encrypt_and_digest_decrypt_and_verify(self): + # encrypt_and_digest + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.update(self.data) + ct, mac = cipher.encrypt_and_digest(self.data) + + # decrypt_and_verify + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.update(self.data) + pt = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(self.data, pt) + + def test_invalid_multiple_encrypt_and_digest(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct, tag = cipher.encrypt_and_digest(self.data) + self.assertRaises(TypeError, cipher.encrypt_and_digest, b'') + + def test_invalid_multiple_decrypt_and_verify(self): + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + ct, tag = cipher.encrypt_and_digest(self.data) + + cipher = AES.new(self.key_256, AES.MODE_SIV, nonce=self.nonce_96) + cipher.decrypt_and_verify(ct, tag) + self.assertRaises(TypeError, cipher.decrypt_and_verify, ct, tag) + + +def transform(tv): + new_tv = [[unhexlify(x) for x in tv[0].split("-")]] + new_tv += [ unhexlify(x) for x in tv[1:5]] + if tv[5]: + nonce = unhexlify(tv[5]) + else: + nonce = None + new_tv += [ nonce ] + return new_tv + + +class TestVectors(unittest.TestCase): + """Class exercising the SIV test vectors found in RFC5297""" + + # This is a list of tuples with 6 items: + # + # 1. Associated data: one or more hex strings, joined by dashes ('-') + # 2. Plaintext (hex) + # 3. Ciphertext (hex) + # 4. MAC tag (hex) + # 5. Key (hex): for AES-SIV, the two AES subkeys concatenated + # 6. Nonce: a hex string, or None for deterministic (nonce-less) encryption + # + # transform() above converts each tuple to the binary form used by runTest().
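+ # For example (illustrative values): transform() turns a header string + # like '0011-2233' into [[unhexlify('0011'), unhexlify('2233')]], and + # runTest() below feeds each component to cipher.update() in order.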
+ # + test_vectors_hex = [ + ( + '101112131415161718191a1b1c1d1e1f2021222324252627', + '112233445566778899aabbccddee', + '40c02b9690c4dc04daef7f6afe5c', + '85632d07c6e8f37f950acd320a2ecc93', + 'fffefdfcfbfaf9f8f7f6f5f4f3f2f1f0f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff', + None + ), + ( + '00112233445566778899aabbccddeeffdeaddadadeaddadaffeeddccbbaa9988' + + '7766554433221100-102030405060708090a0', + '7468697320697320736f6d6520706c61696e7465787420746f20656e63727970' + + '74207573696e67205349562d414553', + 'cb900f2fddbe404326601965c889bf17dba77ceb094fa663b7a3f748ba8af829' + + 'ea64ad544a272e9c485b62a3fd5c0d', + '7bdb6e3b432667eb06f4d14bff2fbd0f', + '7f7e7d7c7b7a79787776757473727170404142434445464748494a4b4c4d4e4f', + '09f911029d74e35bd84156c5635688c0' + ), + ] + + test_vectors = [ transform(tv) for tv in test_vectors_hex ] + + def runTest(self): + for assoc_data, pt, ct, mac, key, nonce in self.test_vectors: + + # Encrypt + cipher = AES.new(key, AES.MODE_SIV, nonce=nonce) + for x in assoc_data: + cipher.update(x) + ct2, mac2 = cipher.encrypt_and_digest(pt) + self.assertEqual(ct, ct2) + self.assertEqual(mac, mac2) + + # Decrypt + cipher = AES.new(key, AES.MODE_SIV, nonce=nonce) + for x in assoc_data: + cipher.update(x) + pt2 = cipher.decrypt_and_verify(ct, mac) + self.assertEqual(pt, pt2) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self): + unittest.TestCase.__init__(self) + self._id = "None" + + def setUp(self): + self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + "aes_siv_cmac_test.json", + "Wycheproof AES SIV") + + def shortDescription(self): + return self._id + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt AES-SIV Test #" + str(tv.id) + + cipher = AES.new(tv.key, AES.MODE_SIV) + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(tag + ct, tv.ct) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt AES_SIV Test #" + str(tv.id) + + cipher = AES.new(tv.key, AES.MODE_SIV) + cipher.update(tv.aad) + try: + pt = cipher.decrypt_and_verify(tv.ct[16:], tv.ct[:16]) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + + def runTest(self): + + for tv in self.tv: + self.test_encrypt(tv) + self.test_decrypt(tv) + + +class TestVectorsWycheproof2(unittest.TestCase): + + def __init__(self): + unittest.TestCase.__init__(self) + self._id = "None" + + def setUp(self): + self.tv = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + "aead_aes_siv_cmac_test.json", + "Wycheproof AEAD SIV") + + def shortDescription(self): + return self._id + + def test_encrypt(self, tv): + self._id = "Wycheproof Encrypt AEAD-AES-SIV Test #" + str(tv.id) + + cipher = AES.new(tv.key, AES.MODE_SIV, nonce=tv.iv) + cipher.update(tv.aad) + ct, tag = cipher.encrypt_and_digest(tv.msg) + if tv.valid: + self.assertEqual(ct, tv.ct) + self.assertEqual(tag, tv.tag) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt AEAD-AES-SIV Test #" + str(tv.id) + + cipher = AES.new(tv.key, AES.MODE_SIV, nonce=tv.iv) + cipher.update(tv.aad) + try: + pt = cipher.decrypt_and_verify(tv.ct, tv.tag) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + + def runTest(self): + + for tv in self.tv: + self.test_encrypt(tv) + self.test_decrypt(tv) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(SivTests) + tests += list_test_cases(SivFSMTests) + tests += [ 
TestVectors() ] + tests += [ TestVectorsWycheproof() ] + tests += [ TestVectorsWycheproof2() ] + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_Salsa20.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_Salsa20.py new file mode 100644 index 0000000..a444906 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_Salsa20.py @@ -0,0 +1,367 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/Salsa20.py: Self-test for the Salsa20 stream cipher +# +# Written in 2013 by Fabrizio Tarizzo <fabrizio@fabriziotarizzo.org> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Cipher.Salsa20""" + +import unittest + +from Cryptodome.Util.py3compat import bchr + +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Cipher import Salsa20 + +from .common import make_stream_tests + +# This is a list of (plaintext, ciphertext, key[, description[, params]]) +# tuples. 
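+# Each tuple below is meant to be consumed by make_stream_tests() (imported +# above from .common). As a quick, illustrative sketch (unhexlify would come +# from binascii), the first vector amounts to: +# +# key = unhexlify('80' + '00' * 15) # 128 bits key, set 1, vector 0 +# cipher = Salsa20.new(key, unhexlify('00' * 8)) +# assert cipher.encrypt(b'\x00' * 512).hex().startswith('4dfa5e48')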
+test_data = [ + # Test vectors are taken from + # http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/verified.test-vectors + ( '00' * 512, + '4dfa5e481da23ea09a31022050859936da52fcee218005164f267cb65f5cfd7f' + + '2b4f97e0ff16924a52df269515110a07f9e460bc65ef95da58f740b7d1dbb0aa' + + 'd64cec189c7eb8c6bbf3d7376c80a481d43e628701f6a27afb9fe23919f24114' + + '8db44f70d7063efcc3dd55a0893a613c3c6fe1c127bd6f59910589293bb6ef9e' + + 'e24819066dee1a64f49b0bbad5988635272b169af861f85df881939f29ada6fd' + + '0241410e8d332ae4798d929434a2630de451ec4e0169694cbaa7ebb121ea6a2b' + + 'da9c1581f429e0a00f7d67e23b730676783b262e8eb43a25f55fb90b3e753aef' + + '8c6713ec66c51881111593ccb3e8cb8f8de124080501eeeb389c4bcb6977cf95' + + '7d5789631eb4554400e1e025935dfa7b3e9039d61bdc58a8697d36815bf1985c' + + 'efdf7ae112e5bb81e37ecf0616ce7147fc08a93a367e08631f23c03b00a8da2f' + + 'aa5024e5c8d30aca43fc2d5082067b21b234bc741d68fb292c6012c3764ccee3' + + '1e364a5403e00cfee338a21a01e7d3cefd5a770ca0ab48c435ea6116435f7ad8' + + '30b217b49f978a68e207ed9f462af7fb195b2115fe8f24f152e4ddc32202d6f2' + + 'b52fafbcfbc202d8a259a611e901d3f62d065eb13f09bbc45cd45119b843efaa' + + 'b375703739daced4dd4059fd71c3c47fc2f9939670fad4a46066adcc6a564578' + + '3308b90ffb72be04a6b147cbe38cc0c3b9267c296a92a7c69873f9f263be9703', + '80000000000000000000000000000000', + '128 bits key, set 1, vector 0', + dict (iv='00'*8)), + + ( '00' * 512, + 'e3be8fdd8beca2e3ea8ef9475b29a6e7003951e1097a5c38d23b7a5fad9f6844' + + 'b22c97559e2723c7cbbd3fe4fc8d9a0744652a83e72a9c461876af4d7ef1a117' + + '8da2b74eef1b6283e7e20166abcae538e9716e4669e2816b6b20c5c356802001' + + 'cc1403a9a117d12a2669f456366d6ebb0f1246f1265150f793cdb4b253e348ae' + + '203d89bc025e802a7e0e00621d70aa36b7e07cb1e7d5b38d5e222b8b0e4b8407' + + '0142b1e29504767d76824850320b5368129fdd74e861b498e3be8d16f2d7d169' + + '57be81f47b17d9ae7c4ff15429a73e10acf250ed3a90a93c711308a74c6216a9' + + 'ed84cd126da7f28e8abf8bb63517e1ca98e712f4fb2e1a6aed9fdc73291faa17' + + '958211c4ba2ebd5838c635edb81f513a91a294e194f1c039aeec657dce40aa7e' + + '7c0af57cacefa40c9f14b71a4b3456a63e162ec7d8d10b8ffb1810d71001b618' + + '2f9f73da53b85405c11f7b2d890fa8ae0c7f2e926d8a98c7ec4e91b65120e988' + + '349631a700c6facec3471cb0413656e75e309456584084d7e12c5b43a41c43ed' + + '9a048abd9b880da65f6a665a20fe7b77cd292fe62cae644b7f7df69f32bdb331' + + '903e6505ce44fdc293920c6a9ec7057e23df7dad298f82ddf4efb7fdc7bfc622' + + '696afcfd0cddcc83c7e77f11a649d79acdc3354e9635ff137e929933a0bd6f53' + + '77efa105a3a4266b7c0d089d08f1e855cc32b15b93784a36e56a76cc64bc8477', + '8000000000000000000000000000000000000000000000000000000000000000', + '256 bits key, set 1, vector 0', + dict (iv='00'*8)), + + ( '00' * 512, + '169060ccb42bea7bee4d8012a02f3635eb7bca12859fa159cd559094b3507db8' + + '01735d1a1300102a9c9415546829cbd2021ba217b39b81d89c55b13d0c603359' + + '3f84159a3c84f4b4f4a0edcd9d38ff261a737909e0b66d68b5cac496f3a5be99' + + 'cb12c321ab711afaab36cc0947955e1a9bb952ed54425e7711279fbc81bb83f5' + + '6e55cea44e6daddb05858a153ea6213b3350c12aa1a83ef2726f09485fa71790' + + 'f9b9f922c7dda1113b1f9d56658ed3402803f511bc1f122601d5e7f0ff036e23' + + '23ef24bb24195b9fd574823cd8a40c29d86bd35c191e2038779ff696c712b6d8' + + '2e7014dbe1ac5d527af076c088c4a8d44317958189f6ef54933a7e0816b5b916' + + 'd8f12ed8afe9422b85e5cc9b8adec9d6cfabe8dbc1082bccc02f5a7266aa074c' + + 'a284e583a35837798cc0e69d4ce937653b8cdd65ce414b89138615ccb165ad19' + + '3c6b9c3d05eef4be921a10ea811fe61d11c6867600188e065daff90b509ec56b' + + 'd41e7e8968c478c78d590c2d2ee24ea009c8f49bc3d81672cfc47895a9e21c9a' + + 
'471ebf8e294bee5d2de436ac8d052bf31111b345f1da23c3a4d13b9fc5f0900a' + + 'a298f98f538973b8fad40d4d159777de2cfe2a3dead1645ddb49794827dba040' + + 'f70a0ff4ecd155e0f033604693a51e2363880e2ecf98699e7174af7c2c6b0fc6' + + '59ae329599a3949272a37b9b2183a0910922a3f325ae124dcbdd735364055ceb', + '09090909090909090909090909090909', + '128 bits key, set 2, vector 9', + dict (iv='00'*8)), + + ( '00' * 512, + '7041e747ceb22ed7812985465f50333124f971da1c5d6efe5ca201b886f31046' + + 'e757e5c3ec914f60ed1f6bce2819b6810953f12b8ba1199bf82d746a8b8a88f1' + + '142002978ec4c35b95dc2c82990f9e847a0ab45f2ca72625f5190c820f29f3aa' + + 'f5f0b5572b06b70a144f2a240c3b3098d4831fa1ce1459f8d1df226a6a79b0ab' + + '41e91799ef31b5ff3d756c19126b19025858ee70fbd69f2be955cb011c005e31' + + '32b271b378f39b0cb594e95c99ce6ff17735a541891845bbf0450afcb4a850b9' + + '4ee90afb713ae7e01295c74381180a3816d7020d5a396c0d97aaa783eaabb6ec' + + '44d5111157f2212d1b1b8fca7893e8b520cd482418c272ab119b569a2b9598eb' + + '355624d12e79adab81153b58cd22eaf1b2a32395dedc4a1c66f4d274070b9800' + + 'ea95766f0245a8295f8aadb36ddbbdfa936417c8dbc6235d19494036964d3e70' + + 'b125b0f800c3d53881d9d11e7970f827c2f9556935cd29e927b0aceb8cae5fd4' + + '0fd88a8854010a33db94c96c98735858f1c5df6844f864feaca8f41539313e7f' + + '3c0610214912cd5e6362197646207e2d64cd5b26c9dfe0822629dcbeb16662e8' + + '9ff5bf5cf2e499138a5e27bd5027329d0e68ddf53103e9e409523662e27f61f6' + + '5cf38c1232023e6a6ef66c315bcb2a4328642faabb7ca1e889e039e7c444b34b' + + 'b3443f596ac730f3df3dfcdb343c307c80f76e43e8898c5e8f43dc3bb280add0', + '0909090909090909090909090909090909090909090909090909090909090909', + '256 bits key, set 2, vector 9', + dict (iv='00'*8)), + + ( '00' * 1024, + '71daee5142d0728b41b6597933ebf467e43279e30978677078941602629cbf68' + + 'b73d6bd2c95f118d2b3e6ec955dabb6dc61c4143bc9a9b32b99dbe6866166dc0' + + '8631b7d6553050303d7252c264d3a90d26c853634813e09ad7545a6ce7e84a5d' + + 'fc75ec43431207d5319970b0faadb0e1510625bb54372c8515e28e2accf0a993' + + '0ad15f431874923d2a59e20d9f2a5367dba6051564f150287debb1db536ff9b0' + + '9ad981f25e5010d85d76ee0c305f755b25e6f09341e0812f95c94f42eead346e' + + '81f39c58c5faa2c88953dc0cac90469db2063cb5cdb22c9eae22afbf0506fca4' + + '1dc710b846fbdfe3c46883dd118f3a5e8b11b6afd9e71680d8666557301a2daa' + + 'fb9496c559784d35a035360885f9b17bd7191977deea932b981ebdb29057ae3c' + + '92cfeff5e6c5d0cb62f209ce342d4e35c69646ccd14e53350e488bb310a32f8b' + + '0248e70acc5b473df537ced3f81a014d4083932bedd62ed0e447b6766cd2604b' + + '706e9b346c4468beb46a34ecf1610ebd38331d52bf33346afec15eefb2a7699e' + + '8759db5a1f636a48a039688e39de34d995df9f27ed9edc8dd795e39e53d9d925' + + 'b278010565ff665269042f05096d94da3433d957ec13d2fd82a0066283d0d1ee' + + 'b81bf0ef133b7fd90248b8ffb499b2414cd4fa003093ff0864575a43749bf596' + + '02f26c717fa96b1d057697db08ebc3fa664a016a67dcef8807577cc3a09385d3' + + 'f4dc79b34364bb3b166ce65fe1dd28e3950fe6fa81063f7b16ce1c0e6daac1f8' + + '188455b77752045e863c9b256ad92bc6e2d08314c5bba191c274f42dfbb3d652' + + 'bb771956555e880f84cd8b827a4c5a52f3a099fa0259bd4aac3efd541f191170' + + '4412d6e85fbcc628b335875b9fef24807f6e1bc66c3186159e1e7f5a13913e02' + + 'd241ce2efdbcaa275039fb14eac5923d17ffbc7f1abd3b45e92127575bfbabf9' + + '3a257ebef0aa1437b326e41b585af572f7239c33b32981a1577a4f629b027e1e' + + 'b49d58cc497e944d79cef44357c2bf25442ab779651e991147bf79d6fd3a8868' + + '0cd3b1748e07fd10d78aceef6db8a5e563570d40127f754146c34a440f2a991a' + + '23fa39d365141f255041f2135c5cba4373452c114da1801bacca38610e3a6524' + + '2b822d32de4ab5a7d3cf9b61b37493c863bd12e2cae10530cddcda2cb7a5436b' + + 
'ef8988d4d24e8cdc31b2d2a3586340bc5141f8f6632d0dd543bfed81eb471ba1' + + 'f3dc2225a15ffddcc03eb48f44e27e2aa390598adf83f15c6608a5f18d4dfcf0' + + 'f547d467a4d70b281c83a595d7660d0b62de78b9cca023cca89d7b1f83484638' + + '0e228c25f049184a612ef5bb3d37454e6cfa5b10dceda619d898a699b3c8981a' + + '173407844bb89b4287bf57dd6600c79e352c681d74b03fa7ea0d7bf6ad69f8a6' + + '8ecb001963bd2dd8a2baa0083ec09751cd9742402ad716be16d5c052304cfca1', + '0F62B5085BAE0154A7FA4DA0F34699EC', + '128 bits key, Set 6, vector# 3', + dict (iv='288FF65DC42B92F9')), + + ( '00' * 1024, + '5e5e71f90199340304abb22a37b6625bf883fb89ce3b21f54a10b81066ef87da' + + '30b77699aa7379da595c77dd59542da208e5954f89e40eb7aa80a84a6176663f' + + 'd910cde567cf1ff60f7040548d8f376bfd1f44c4774aac37410ede7d5c3463fc' + + '4508a603201d8495ad257894e5eb1914b53e8da5e4bf2bc83ac87ce55cc67df7' + + '093d9853d2a83a9c8be969175df7c807a17156df768445dd0874a9271c6537f5' + + 'ce0466473582375f067fa4fcdaf65dbc0139cd75e8c21a482f28c0fb8c3d9f94' + + '22606cc8e88fe28fe73ec3cb10ff0e8cc5f2a49e540f007265c65b7130bfdb98' + + '795b1df9522da46e48b30e55d9f0d787955ece720205b29c85f3ad9be33b4459' + + '7d21b54d06c9a60b04b8e640c64e566e51566730e86cf128ab14174f91bd8981' + + 'a6fb00fe587bbd6c38b5a1dfdb04ea7e61536fd229f957aa9b070ca931358e85' + + '11b92c53c523cb54828fb1513c5636fa9a0645b4a3c922c0db94986d92f314ff' + + '7852c03b231e4dceea5dd8cced621869cff818daf3c270ff3c8be2e5c74be767' + + 'a4e1fdf3327a934fe31e46df5a74ae2021cee021d958c4f615263d99a5ddae7f' + + 'eab45e6eccbafefe4761c57750847b7e75ee2e2f14333c0779ce4678f47b1e1b' + + '760a03a5f17d6e91d4b42313b3f1077ee270e432fe04917ed1fc8babebf7c941' + + '42b80dfb44a28a2a3e59093027606f6860bfb8c2e5897078cfccda7314c70035' + + 'f137de6f05daa035891d5f6f76e1df0fce1112a2ff0ac2bd3534b5d1bf4c7165' + + 'fb40a1b6eacb7f295711c4907ae457514a7010f3a342b4427593d61ba993bc59' + + '8bd09c56b9ee53aac5dd861fa4b4bb53888952a4aa9d8ca8671582de716270e1' + + '97375b3ee49e51fa2bf4ef32015dd9a764d966aa2ae541592d0aa650849e99ca' + + '5c6c39beebf516457cc32fe4c105bff314a12f1ec94bdf4d626f5d9b1cbbde42' + + 'e5733f0885765ba29e2e82c829d312f5fc7e180679ac84826c08d0a644b326d0' + + '44da0fdcc75fa53cfe4ced0437fa4df5a7ecbca8b4cb7c4a9ecf9a60d00a56eb' + + '81da52adc21f508dbb60a9503a3cc94a896616d86020d5b0e5c637329b6d396a' + + '41a21ba2c4a9493cf33fa2d4f10f77d5b12fdad7e478ccfe79b74851fc96a7ca' + + '6320c5efd561a222c0ab0fb44bbda0e42149611d2262bb7d1719150fa798718a' + + '0eec63ee297cad459869c8b0f06c4e2b56cbac03cd2605b2a924efedf85ec8f1' + + '9b0b6c90e7cbd933223ffeb1b3a3f9677657905829294c4c70acdb8b0891b47d' + + '0875d0cd6c0f4efe2917fc44b581ef0d1e4280197065d07da34ab33283364552' + + 'efad0bd9257b059acdd0a6f246812feb69e7e76065f27dbc2eee94da9cc41835' + + 'bf826e36e5cebe5d4d6a37a6a666246290ce51a0c082718ab0ec855668db1add' + + 'a658e5f257e0db39384d02e6145c4c00eaa079098f6d820d872de711b6ed08cf', + '0F62B5085BAE0154A7FA4DA0F34699EC3F92E5388BDE3184D72A7DD02376C91C', + '256 bits key, Set 6, vector# 3', + dict (iv='288FF65DC42B92F9')), + +] + + +class KeyLength(unittest.TestCase): + + def runTest(self): + + nonce = bchr(0) * 8 + for key_length in (15, 30, 33): + key = bchr(1) * key_length + self.assertRaises(ValueError, Salsa20.new, key, nonce) + + +class NonceTests(unittest.TestCase): + + def test_invalid_nonce_length(self): + key = bchr(1) * 16 + self.assertRaises(ValueError, Salsa20.new, key, bchr(0) * 7) + self.assertRaises(ValueError, Salsa20.new, key, bchr(0) * 9) + + def test_default_nonce(self): + + cipher1 = Salsa20.new(bchr(1) * 16) + cipher2 = Salsa20.new(bchr(1) * 16) + self.assertEqual(len(cipher1.nonce), 
8) + self.assertNotEqual(cipher1.nonce, cipher2.nonce) + + +class ByteArrayTest(unittest.TestCase): + """Verify we can encrypt or decrypt bytearrays""" + + def runTest(self): + + data = b"0123" + key = b"9" * 32 + nonce = b"t" * 8 + + # Encryption + data_ba = bytearray(data) + key_ba = bytearray(key) + nonce_ba = bytearray(nonce) + + cipher1 = Salsa20.new(key=key, nonce=nonce) + ct = cipher1.encrypt(data) + + cipher2 = Salsa20.new(key=key_ba, nonce=nonce_ba) + key_ba[:1] = b'\xFF' + nonce_ba[:1] = b'\xFF' + ct_test = cipher2.encrypt(data_ba) + + self.assertEqual(ct, ct_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decryption + key_ba = bytearray(key) + nonce_ba = bytearray(nonce) + ct_ba = bytearray(ct) + + cipher3 = Salsa20.new(key=key_ba, nonce=nonce_ba) + key_ba[:1] = b'\xFF' + nonce_ba[:1] = b'\xFF' + pt_test = cipher3.decrypt(ct_ba) + + self.assertEqual(data, pt_test) + + +class MemoryviewTest(unittest.TestCase): + """Verify we can encrypt or decrypt memoryviews""" + + def runTest(self): + + data = b"0123" + key = b"9" * 32 + nonce = b"t" * 8 + + # Encryption + data_mv = memoryview(bytearray(data)) + key_mv = memoryview(bytearray(key)) + nonce_mv = memoryview(bytearray(nonce)) + + cipher1 = Salsa20.new(key=key, nonce=nonce) + ct = cipher1.encrypt(data) + + cipher2 = Salsa20.new(key=key_mv, nonce=nonce_mv) + key_mv[:1] = b'\xFF' + nonce_mv[:1] = b'\xFF' + ct_test = cipher2.encrypt(data_mv) + + self.assertEqual(ct, ct_test) + self.assertEqual(cipher1.nonce, cipher2.nonce) + + # Decryption + key_mv = memoryview(bytearray(key)) + nonce_mv = memoryview(bytearray(nonce)) + ct_mv = memoryview(bytearray(ct)) + + cipher3 = Salsa20.new(key=key_mv, nonce=nonce_mv) + key_mv[:1] = b'\xFF' + nonce_mv[:1] = b'\xFF' + pt_test = cipher3.decrypt(ct_mv) + + self.assertEqual(data, pt_test) + + +class TestOutput(unittest.TestCase): + + def runTest(self): + # Encrypt/Decrypt data and test output parameter + + key = b'4' * 32 + nonce = b'5' * 8 + cipher = Salsa20.new(key=key, nonce=nonce) + + pt = b'5' * 300 + ct = cipher.encrypt(pt) + + output = bytearray(len(pt)) + cipher = Salsa20.new(key=key, nonce=nonce) + res = cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + self.assertEqual(res, None) + + cipher = Salsa20.new(key=key, nonce=nonce) + res = cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + self.assertEqual(res, None) + + output = memoryview(bytearray(len(pt))) + cipher = Salsa20.new(key=key, nonce=nonce) + cipher.encrypt(pt, output=output) + self.assertEqual(ct, output) + + cipher = Salsa20.new(key=key, nonce=nonce) + cipher.decrypt(ct, output=output) + self.assertEqual(pt, output) + + cipher = Salsa20.new(key=key, nonce=nonce) + self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*len(pt)) + + cipher = Salsa20.new(key=key, nonce=nonce) + self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*len(ct)) + + shorter_output = bytearray(len(pt) - 1) + + cipher = Salsa20.new(key=key, nonce=nonce) + self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) + + cipher = Salsa20.new(key=key, nonce=nonce) + self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) + + +def get_tests(config={}): + tests = make_stream_tests(Salsa20, "Salsa20", test_data) + tests.append(KeyLength()) + tests += list_test_cases(NonceTests) + tests.append(ByteArrayTest()) + tests.append(MemoryviewTest()) + tests.append(TestOutput()) + + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda:
unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_15.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_15.py new file mode 100644 index 0000000..12c09dd --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_15.py @@ -0,0 +1,283 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/test_pkcs1_15.py: Self-test for PKCS#1 v1.5 encryption +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from __future__ import print_function + +import unittest + +from Cryptodome.PublicKey import RSA +from Cryptodome.SelfTest.st_common import list_test_cases, a2b_hex +from Cryptodome import Random +from Cryptodome.Cipher import PKCS1_v1_5 as PKCS +from Cryptodome.Util.py3compat import b +from Cryptodome.Util.number import bytes_to_long, long_to_bytes +from Cryptodome.SelfTest.loader import load_test_vectors_wycheproof + + +def rws(t): + """Remove white spaces, tabs, and new lines from a string""" + for c in ['\n', '\t', ' ']: + t = t.replace(c, '') + return t + + +def t2b(t): + """Convert a text string with bytes in hex form to a byte string""" + clean = b(rws(t)) + if len(clean) % 2 == 1: + raise ValueError("Even number of characters expected") + return a2b_hex(clean) + + +class PKCS1_15_Tests(unittest.TestCase): + + def setUp(self): + self.rng = Random.new().read + self.key1024 = RSA.generate(1024, self.rng) + + # List of tuples with test data for PKCS#1 v1.5. 
+ # Each tuple is made up by: + # Item #0: dictionary with RSA key component, or key to import + # Item #1: plaintext + # Item #2: ciphertext + # Item #3: random data + + _testData = ( + + # + # Generated with openssl 0.9.8o + # + ( + # Private key + '''-----BEGIN RSA PRIVATE KEY----- +MIICXAIBAAKBgQDAiAnvIAOvqVwJTaYzsKnefZftgtXGE2hPJppGsWl78yz9jeXY +W/FxX/gTPURArNhdnhP6n3p2ZaDIBrO2zizbgIXs0IsljTTcr4vnI8fMXzyNUOjA +zP3nzMqZDZK6757XQAobOssMkBFqRWwilT/3DsBhRpl3iMUhF+wvpTSHewIDAQAB +AoGAC4HV/inOrpgTvSab8Wj0riyZgQOZ3U3ZpSlsfR8ra9Ib9Uee3jCYnKscu6Gk +y6zI/cdt8EPJ4PuwAWSNJzbpbVaDvUq25OD+CX8/uRT08yBS4J8TzBitZJTD4lS7 +atdTnKT0Wmwk+u8tDbhvMKwnUHdJLcuIsycts9rwJVapUtkCQQDvDpx2JMun0YKG +uUttjmL8oJ3U0m3ZvMdVwBecA0eebZb1l2J5PvI3EJD97eKe91Nsw8T3lwpoN40k +IocSVDklAkEAzi1HLHE6EzVPOe5+Y0kGvrIYRRhncOb72vCvBZvD6wLZpQgqo6c4 +d3XHFBBQWA6xcvQb5w+VVEJZzw64y25sHwJBAMYReRl6SzL0qA0wIYrYWrOt8JeQ +8mthulcWHXmqTgC6FEXP9Es5GD7/fuKl4wqLKZgIbH4nqvvGay7xXLCXD/ECQH9a +1JYNMtRen5unSAbIOxRcKkWz92F0LKpm9ZW/S9vFHO+mBcClMGoKJHiuQxLBsLbT +NtEZfSJZAeS2sUtn3/0CQDb2M2zNBTF8LlM0nxmh0k9VGm5TVIyBEMcipmvOgqIs +HKukWBcq9f/UOmS0oEhai/6g+Uf7VHJdWaeO5LzuvwU= +-----END RSA PRIVATE KEY-----''', + # Plaintext + '''THIS IS PLAINTEXT\x0A''', + # Ciphertext + '''3f dc fd 3c cd 5c 9b 12 af 65 32 e3 f7 d0 da 36 + 8f 8f d9 e3 13 1c 7f c8 b3 f9 c1 08 e4 eb 79 9c + 91 89 1f 96 3b 94 77 61 99 a4 b1 ee 5d e6 17 c9 + 5d 0a b5 63 52 0a eb 00 45 38 2a fb b0 71 3d 11 + f7 a1 9e a7 69 b3 af 61 c0 bb 04 5b 5d 4b 27 44 + 1f 5b 97 89 ba 6a 08 95 ee 4f a2 eb 56 64 e5 0f + da 7c f9 9a 61 61 06 62 ed a0 bc 5f aa 6c 31 78 + 70 28 1a bb 98 3c e3 6a 60 3c d1 0b 0f 5a f4 75''', + # Random data + '''eb d7 7d 86 a4 35 23 a3 54 7e 02 0b 42 1d + 61 6c af 67 b8 4e 17 56 80 66 36 04 64 34 26 8a + 47 dd 44 b3 1a b2 17 60 f4 91 2e e2 b5 95 64 cc + f9 da c8 70 94 54 86 4c ef 5b 08 7d 18 c4 ab 8d + 04 06 33 8f ca 15 5f 52 60 8a a1 0c f5 08 b5 4c + bb 99 b8 94 25 04 9c e6 01 75 e6 f9 63 7a 65 61 + 13 8a a7 47 77 81 ae 0d b8 2c 4d 50 a5''' + ), + ) + + def testEncrypt1(self): + for test in self._testData: + # Build the key + key = RSA.importKey(test[0]) + # RNG that takes its random numbers from a pool given + # at initialization + class randGen: + def __init__(self, data): + self.data = data + self.idx = 0 + def __call__(self, N): + r = self.data[self.idx:self.idx+N] + self.idx += N + return r + # The real test + cipher = PKCS.new(key, randfunc=randGen(t2b(test[3]))) + ct = cipher.encrypt(b(test[1])) + self.assertEqual(ct, t2b(test[2])) + + def testEncrypt2(self): + # Verify that encryption fails if plaintext is too long + pt = '\x00'*(128-11+1) + cipher = PKCS.new(self.key1024) + self.assertRaises(ValueError, cipher.encrypt, pt) + + def testVerify1(self): + for test in self._testData: + key = RSA.importKey(test[0]) + expected_pt = b(test[1]) + ct = t2b(test[2]) + cipher = PKCS.new(key) + + # The real test + pt = cipher.decrypt(ct, None) + self.assertEqual(pt, expected_pt) + + pt = cipher.decrypt(ct, b'\xFF' * len(expected_pt)) + self.assertEqual(pt, expected_pt) + + def testVerify2(self): + # Verify that decryption fails if ciphertext is not as long as + # RSA modulus + cipher = PKCS.new(self.key1024) + self.assertRaises(ValueError, cipher.decrypt, '\x00'*127, "---") + self.assertRaises(ValueError, cipher.decrypt, '\x00'*129, "---") + + # Verify that decryption fails if there are fewer than 8 non-zero padding + # bytes + pt = b('\x00\x02' + '\xFF'*7 + '\x00' + '\x45'*118) + pt_int = bytes_to_long(pt) + ct_int = self.key1024._encrypt(pt_int) + ct = long_to_bytes(ct_int, 128) +
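# The mangled block above passes the RSA step but fails unpadding, so + # decrypt() must return the caller-supplied sentinel ("---") rather than raise. +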
self.assertEqual(b"---", cipher.decrypt(ct, b"---")) + + def testEncryptVerify1(self): + # Encrypt/Verify messages of length [0..RSAlen-11] + # and therefore padding [8..117] + for pt_len in range(0, 128 - 11 + 1): + pt = self.rng(pt_len) + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(pt) + pt2 = cipher.decrypt(ct, b'\xAA' * pt_len) + self.assertEqual(pt, pt2) + + def test_encrypt_verify_exp_pt_len(self): + + cipher = PKCS.new(self.key1024) + pt = b'5' * 16 + ct = cipher.encrypt(pt) + sentinel = b'\xAA' * 16 + + pt_A = cipher.decrypt(ct, sentinel, 16) + self.assertEqual(pt, pt_A) + + pt_B = cipher.decrypt(ct, sentinel, 15) + self.assertEqual(sentinel, pt_B) + + pt_C = cipher.decrypt(ct, sentinel, 17) + self.assertEqual(sentinel, pt_C) + + def testByteArray(self): + pt = b"XER" + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(bytearray(pt)) + pt2 = cipher.decrypt(bytearray(ct), '\xFF' * len(pt)) + self.assertEqual(pt, pt2) + + def testMemoryview(self): + pt = b"XER" + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(memoryview(bytearray(pt))) + pt2 = cipher.decrypt(memoryview(bytearray(ct)), b'\xFF' * len(pt)) + self.assertEqual(pt, pt2) + + def test_return_type(self): + pt = b"XYZ" + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(pt) + self.assertTrue(isinstance(ct, bytes)) + pt2 = cipher.decrypt(ct, b'\xAA' * 3) + self.assertTrue(isinstance(pt2, bytes)) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, skip_slow_tests): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._skip_slow_tests = skip_slow_tests + self._id = "None" + + def load_tests(self, filename): + + def filter_rsa(group): + return RSA.import_key(group['privateKeyPem']) + + result = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + filename, + "Wycheproof PKCS#1v1.5 (%s)" % filename, + group_tag={'rsa_key': filter_rsa} + ) + return result + + def setUp(self): + self.tv = [] + self.tv.extend(self.load_tests("rsa_pkcs1_2048_test.json")) + if not self._skip_slow_tests: + self.tv.extend(self.load_tests("rsa_pkcs1_3072_test.json")) + self.tv.extend(self.load_tests("rsa_pkcs1_4096_test.json")) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt PKCS#1v1.5 Test #%s" % tv.id + sentinel = b'\xAA' * max(3, len(tv.msg)) + cipher = PKCS.new(tv.rsa_key) + try: + pt = cipher.decrypt(tv.ct, sentinel=sentinel) + except ValueError: + assert not tv.valid + else: + if pt == sentinel: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + self.warn(tv) + + def runTest(self): + + for tv in self.tv: + self.test_decrypt(tv) + + +def get_tests(config={}): + skip_slow_tests = not config.get('slow_tests') + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(PKCS1_15_Tests) + tests += [TestVectorsWycheproof(wycheproof_warnings, skip_slow_tests)] + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_oaep.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_oaep.py new file mode 100644 index 0000000..aa00c9c 
--- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Cipher/test_pkcs1_oaep.py @@ -0,0 +1,506 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Cipher/test_pkcs1_oaep.py: Self-test for PKCS#1 OAEP encryption +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +import unittest + +from Cryptodome.SelfTest.st_common import list_test_cases, a2b_hex +from Cryptodome.SelfTest.loader import load_test_vectors_wycheproof + +from Cryptodome.PublicKey import RSA +from Cryptodome.Cipher import PKCS1_OAEP as PKCS +from Cryptodome.Hash import MD2, MD5, SHA1, SHA256, RIPEMD160, SHA224, SHA384, SHA512 +from Cryptodome import Random +from Cryptodome.Signature.pss import MGF1 + +from Cryptodome.Util.py3compat import b, bchr + + +def rws(t): + """Remove white spaces, tabs, and new lines from a string""" + for c in ['\n', '\t', ' ']: + t = t.replace(c, '') + return t + + +def t2b(t): + """Convert a text string with bytes in hex form to a byte string""" + clean = rws(t) + if len(clean) % 2 == 1: + raise ValueError("Even number of characters expected") + return a2b_hex(clean) + + +class PKCS1_OAEP_Tests(unittest.TestCase): + + def setUp(self): + self.rng = Random.new().read + self.key1024 = RSA.generate(1024, self.rng) + + # List of tuples with test data for PKCS#1 OAEP + # Each tuple is made up by: + # Item #0: dictionary with RSA key component + # Item #1: plaintext + # Item #2: ciphertext + # Item #3: random data (=seed) + # Item #4: hash object + + _testData = ( + + # + # From in oaep-int.txt to be found in + # ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1-vec.zip + # + ( + # Private key + { + 'n':'''bb f8 2f 09 06 82 ce 9c 23 38 ac 2b 9d a8 71 f7 + 36 8d 07 ee d4 10 43 a4 40 d6 b6 f0 74 54 f5 1f + b8 df ba af 03 5c 02 ab 61 ea 48 ce eb 6f cd 48 + 76 ed 52 0d 60 e1 ec 46 19 71 9d 8a 5b 8b 80 7f + af b8 e0 a3 df c7 37 72 3e e6 b4 b7 d9 3a 25 84 + ee 6a 64 9d 06 09 53 74 88 34 b2 45 45 98 39 4e + e0 aa b1 2d 7b 61 a5 1f 52 7a 9a 41 f6 c1 68 7f + e2 53 72 98 ca 2a 8f 59 46 f8 e5 fd 09 1d bd cb''', + # Public key + 'e':'11', + # In the test vector, only p and q were given... 
+ # d is computed offline as e^{-1} mod (p-1)(q-1) + 'd':'''a5dafc5341faf289c4b988db30c1cdf83f31251e0 + 668b42784813801579641b29410b3c7998d6bc465745e5c3 + 92669d6870da2c082a939e37fdcb82ec93edac97ff3ad595 + 0accfbc111c76f1a9529444e56aaf68c56c092cd38dc3bef + 5d20a939926ed4f74a13eddfbe1a1cecc4894af9428c2b7b + 8883fe4463a4bc85b1cb3c1''' + } + , + # Plaintext + '''d4 36 e9 95 69 fd 32 a7 c8 a0 5b bc 90 d3 2c 49''', + # Ciphertext + '''12 53 e0 4d c0 a5 39 7b b4 4a 7a b8 7e 9b f2 a0 + 39 a3 3d 1e 99 6f c8 2a 94 cc d3 00 74 c9 5d f7 + 63 72 20 17 06 9e 52 68 da 5d 1c 0b 4f 87 2c f6 + 53 c1 1d f8 23 14 a6 79 68 df ea e2 8d ef 04 bb + 6d 84 b1 c3 1d 65 4a 19 70 e5 78 3b d6 eb 96 a0 + 24 c2 ca 2f 4a 90 fe 9f 2e f5 c9 c1 40 e5 bb 48 + da 95 36 ad 87 00 c8 4f c9 13 0a de a7 4e 55 8d + 51 a7 4d df 85 d8 b5 0d e9 68 38 d6 06 3e 09 55''', + # Random + '''aa fd 12 f6 59 ca e6 34 89 b4 79 e5 07 6d de c2 + f0 6c b5 8f''', + # Hash + SHA1, + ), + + # + # From in oaep-vect.txt to be found in Example 1.1 + # ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1-vec.zip + # + ( + # Private key + { + 'n':'''a8 b3 b2 84 af 8e b5 0b 38 70 34 a8 60 f1 46 c4 + 91 9f 31 87 63 cd 6c 55 98 c8 ae 48 11 a1 e0 ab + c4 c7 e0 b0 82 d6 93 a5 e7 fc ed 67 5c f4 66 85 + 12 77 2c 0c bc 64 a7 42 c6 c6 30 f5 33 c8 cc 72 + f6 2a e8 33 c4 0b f2 58 42 e9 84 bb 78 bd bf 97 + c0 10 7d 55 bd b6 62 f5 c4 e0 fa b9 84 5c b5 14 + 8e f7 39 2d d3 aa ff 93 ae 1e 6b 66 7b b3 d4 24 + 76 16 d4 f5 ba 10 d4 cf d2 26 de 88 d3 9f 16 fb''', + 'e':'''01 00 01''', + 'd':'''53 33 9c fd b7 9f c8 46 6a 65 5c 73 16 ac a8 5c + 55 fd 8f 6d d8 98 fd af 11 95 17 ef 4f 52 e8 fd + 8e 25 8d f9 3f ee 18 0f a0 e4 ab 29 69 3c d8 3b + 15 2a 55 3d 4a c4 d1 81 2b 8b 9f a5 af 0e 7f 55 + fe 73 04 df 41 57 09 26 f3 31 1f 15 c4 d6 5a 73 + 2c 48 31 16 ee 3d 3d 2d 0a f3 54 9a d9 bf 7c bf + b7 8a d8 84 f8 4d 5b eb 04 72 4d c7 36 9b 31 de + f3 7d 0c f5 39 e9 cf cd d3 de 65 37 29 ea d5 d1 ''' + } + , + # Plaintext + '''66 28 19 4e 12 07 3d b0 3b a9 4c da 9e f9 53 23 + 97 d5 0d ba 79 b9 87 00 4a fe fe 34''', + # Ciphertext + '''35 4f e6 7b 4a 12 6d 5d 35 fe 36 c7 77 79 1a 3f + 7b a1 3d ef 48 4e 2d 39 08 af f7 22 fa d4 68 fb + 21 69 6d e9 5d 0b e9 11 c2 d3 17 4f 8a fc c2 01 + 03 5f 7b 6d 8e 69 40 2d e5 45 16 18 c2 1a 53 5f + a9 d7 bf c5 b8 dd 9f c2 43 f8 cf 92 7d b3 13 22 + d6 e8 81 ea a9 1a 99 61 70 e6 57 a0 5a 26 64 26 + d9 8c 88 00 3f 84 77 c1 22 70 94 a0 d9 fa 1e 8c + 40 24 30 9c e1 ec cc b5 21 00 35 d4 7a c7 2e 8a''', + # Random + '''18 b7 76 ea 21 06 9d 69 77 6a 33 e9 6b ad 48 e1 + dd a0 a5 ef''', + SHA1 + ), + + # + # From in oaep-vect.txt to be found in Example 2.1 + # ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1-vec.zip + # + ( + # Private key + { + 'n':'''01 94 7c 7f ce 90 42 5f 47 27 9e 70 85 1f 25 d5 + e6 23 16 fe 8a 1d f1 93 71 e3 e6 28 e2 60 54 3e + 49 01 ef 60 81 f6 8c 0b 81 41 19 0d 2a e8 da ba + 7d 12 50 ec 6d b6 36 e9 44 ec 37 22 87 7c 7c 1d + 0a 67 f1 4b 16 94 c5 f0 37 94 51 a4 3e 49 a3 2d + de 83 67 0b 73 da 91 a1 c9 9b c2 3b 43 6a 60 05 + 5c 61 0f 0b af 99 c1 a0 79 56 5b 95 a3 f1 52 66 + 32 d1 d4 da 60 f2 0e da 25 e6 53 c4 f0 02 76 6f + 45''', + 'e':'''01 00 01''', + 'd':'''08 23 f2 0f ad b5 da 89 08 8a 9d 00 89 3e 21 fa + 4a 1b 11 fb c9 3c 64 a3 be 0b aa ea 97 fb 3b 93 + c3 ff 71 37 04 c1 9c 96 3c 1d 10 7a ae 99 05 47 + 39 f7 9e 02 e1 86 de 86 f8 7a 6d de fe a6 d8 cc + d1 d3 c8 1a 47 bf a7 25 5b e2 06 01 a4 a4 b2 f0 + 8a 16 7b 5e 27 9d 71 5b 1b 45 5b dd 7e ab 24 59 + 41 d9 76 8b 9a ce fb 3c cd a5 95 2d a3 ce e7 25 + 25 b4 50 16 63 a8 ee 15 c9 e9 92 d9 
24 62 fe 39''' + }, + # Plaintext + '''8f f0 0c aa 60 5c 70 28 30 63 4d 9a 6c 3d 42 c6 + 52 b5 8c f1 d9 2f ec 57 0b ee e7''', + # Ciphertext + '''01 81 af 89 22 b9 fc b4 d7 9d 92 eb e1 98 15 99 + 2f c0 c1 43 9d 8b cd 49 13 98 a0 f4 ad 3a 32 9a + 5b d9 38 55 60 db 53 26 83 c8 b7 da 04 e4 b1 2a + ed 6a ac df 47 1c 34 c9 cd a8 91 ad dc c2 df 34 + 56 65 3a a6 38 2e 9a e5 9b 54 45 52 57 eb 09 9d + 56 2b be 10 45 3f 2b 6d 13 c5 9c 02 e1 0f 1f 8a + bb 5d a0 d0 57 09 32 da cf 2d 09 01 db 72 9d 0f + ef cc 05 4e 70 96 8e a5 40 c8 1b 04 bc ae fe 72 + 0e''', + # Random + '''8c 40 7b 5e c2 89 9e 50 99 c5 3e 8c e7 93 bf 94 + e7 1b 17 82''', + SHA1 + ), + + # + # From in oaep-vect.txt to be found in Example 10.1 + # ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1-vec.zip + # + ( + # Private key + { + 'n':'''ae 45 ed 56 01 ce c6 b8 cc 05 f8 03 93 5c 67 4d + db e0 d7 5c 4c 09 fd 79 51 fc 6b 0c ae c3 13 a8 + df 39 97 0c 51 8b ff ba 5e d6 8f 3f 0d 7f 22 a4 + 02 9d 41 3f 1a e0 7e 4e be 9e 41 77 ce 23 e7 f5 + 40 4b 56 9e 4e e1 bd cf 3c 1f b0 3e f1 13 80 2d + 4f 85 5e b9 b5 13 4b 5a 7c 80 85 ad ca e6 fa 2f + a1 41 7e c3 76 3b e1 71 b0 c6 2b 76 0e de 23 c1 + 2a d9 2b 98 08 84 c6 41 f5 a8 fa c2 6b da d4 a0 + 33 81 a2 2f e1 b7 54 88 50 94 c8 25 06 d4 01 9a + 53 5a 28 6a fe b2 71 bb 9b a5 92 de 18 dc f6 00 + c2 ae ea e5 6e 02 f7 cf 79 fc 14 cf 3b dc 7c d8 + 4f eb bb f9 50 ca 90 30 4b 22 19 a7 aa 06 3a ef + a2 c3 c1 98 0e 56 0c d6 4a fe 77 95 85 b6 10 76 + 57 b9 57 85 7e fd e6 01 09 88 ab 7d e4 17 fc 88 + d8 f3 84 c4 e6 e7 2c 3f 94 3e 0c 31 c0 c4 a5 cc + 36 f8 79 d8 a3 ac 9d 7d 59 86 0e aa da 6b 83 bb''', + 'e':'''01 00 01''', + 'd':'''05 6b 04 21 6f e5 f3 54 ac 77 25 0a 4b 6b 0c 85 + 25 a8 5c 59 b0 bd 80 c5 64 50 a2 2d 5f 43 8e 59 + 6a 33 3a a8 75 e2 91 dd 43 f4 8c b8 8b 9d 5f c0 + d4 99 f9 fc d1 c3 97 f9 af c0 70 cd 9e 39 8c 8d + 19 e6 1d b7 c7 41 0a 6b 26 75 df bf 5d 34 5b 80 + 4d 20 1a dd 50 2d 5c e2 df cb 09 1c e9 99 7b be + be 57 30 6f 38 3e 4d 58 81 03 f0 36 f7 e8 5d 19 + 34 d1 52 a3 23 e4 a8 db 45 1d 6f 4a 5b 1b 0f 10 + 2c c1 50 e0 2f ee e2 b8 8d ea 4a d4 c1 ba cc b2 + 4d 84 07 2d 14 e1 d2 4a 67 71 f7 40 8e e3 05 64 + fb 86 d4 39 3a 34 bc f0 b7 88 50 1d 19 33 03 f1 + 3a 22 84 b0 01 f0 f6 49 ea f7 93 28 d4 ac 5c 43 + 0a b4 41 49 20 a9 46 0e d1 b7 bc 40 ec 65 3e 87 + 6d 09 ab c5 09 ae 45 b5 25 19 01 16 a0 c2 61 01 + 84 82 98 50 9c 1c 3b f3 a4 83 e7 27 40 54 e1 5e + 97 07 50 36 e9 89 f6 09 32 80 7b 52 57 75 1e 79''' + }, + # Plaintext + '''8b ba 6b f8 2a 6c 0f 86 d5 f1 75 6e 97 95 68 70 + b0 89 53 b0 6b 4e b2 05 bc 16 94 ee''', + # Ciphertext + '''53 ea 5d c0 8c d2 60 fb 3b 85 85 67 28 7f a9 15 + 52 c3 0b 2f eb fb a2 13 f0 ae 87 70 2d 06 8d 19 + ba b0 7f e5 74 52 3d fb 42 13 9d 68 c3 c5 af ee + e0 bf e4 cb 79 69 cb f3 82 b8 04 d6 e6 13 96 14 + 4e 2d 0e 60 74 1f 89 93 c3 01 4b 58 b9 b1 95 7a + 8b ab cd 23 af 85 4f 4c 35 6f b1 66 2a a7 2b fc + c7 e5 86 55 9d c4 28 0d 16 0c 12 67 85 a7 23 eb + ee be ff 71 f1 15 94 44 0a ae f8 7d 10 79 3a 87 + 74 a2 39 d4 a0 4c 87 fe 14 67 b9 da f8 52 08 ec + 6c 72 55 79 4a 96 cc 29 14 2f 9a 8b d4 18 e3 c1 + fd 67 34 4b 0c d0 82 9d f3 b2 be c6 02 53 19 62 + 93 c6 b3 4d 3f 75 d3 2f 21 3d d4 5c 62 73 d5 05 + ad f4 cc ed 10 57 cb 75 8f c2 6a ee fa 44 12 55 + ed 4e 64 c1 99 ee 07 5e 7f 16 64 61 82 fd b4 64 + 73 9b 68 ab 5d af f0 e6 3e 95 52 01 68 24 f0 54 + bf 4d 3c 8c 90 a9 7b b6 b6 55 32 84 eb 42 9f cc''', + # Random + '''47 e1 ab 71 19 fe e5 6c 95 ee 5e aa d8 6f 40 d0 + aa 63 bd 33''', + SHA1 + ), + ) + + def testEncrypt1(self): + # Verify encryption using all test vectors + 
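# The stub RNG below replays the vector's seed bytes (test[3]) verbatim, + # making OAEP deterministic so ciphertexts can be compared byte-for-byte. +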
for test in self._testData: + # Build the key + comps = [int(rws(test[0][x]), 16) for x in ('n', 'e')] + key = RSA.construct(comps) + + # RNG that takes its random numbers from a pool given + # at initialization + class randGen: + + def __init__(self, data): + self.data = data + self.idx = 0 + + def __call__(self, N): + r = self.data[self.idx:self.idx+N] + self.idx += N + return r + + # The real test + cipher = PKCS.new(key, test[4], randfunc=randGen(t2b(test[3]))) + ct = cipher.encrypt(t2b(test[1])) + self.assertEqual(ct, t2b(test[2])) + + def testEncrypt2(self): + # Verify that encryption fails if plaintext is too long + pt = '\x00'*(128-2*20-2+1) + cipher = PKCS.new(self.key1024) + self.assertRaises(ValueError, cipher.encrypt, pt) + + def testDecrypt1(self): + # Verify decryption using all test vectors + for test in self._testData: + # Build the key + comps = [int(rws(test[0][x]),16) for x in ('n', 'e', 'd')] + key = RSA.construct(comps) + # The real test + cipher = PKCS.new(key, test[4]) + pt = cipher.decrypt(t2b(test[2])) + self.assertEqual(pt, t2b(test[1])) + + def testDecrypt2(self): + # Simplest possible negative tests + for ct_size in (127, 128, 129): + cipher = PKCS.new(self.key1024) + self.assertRaises(ValueError, cipher.decrypt, bchr(0x00)*ct_size) + + def testEncryptDecrypt1(self): + # Encrypt/Decrypt messages of length [0..128-2*20-2] + for pt_len in range(0, 128-2*20-2): + pt = self.rng(pt_len) + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(pt) + pt2 = cipher.decrypt(ct) + self.assertEqual(pt, pt2) + + def testEncryptDecrypt2(self): + # Helper function to monitor what's requested from RNG + global asked + + def localRng(N): + global asked + asked += N + return self.rng(N) + + # Verify that OAEP is friendly to all hashes + for hashmod in (MD2, MD5, SHA1, SHA256, RIPEMD160): + # Verify that encrypt() asks for as many random bytes + # as the hash output size + asked = 0 + pt = self.rng(40) + cipher = PKCS.new(self.key1024, hashmod, randfunc=localRng) + ct = cipher.encrypt(pt) + self.assertEqual(cipher.decrypt(ct), pt) + self.assertEqual(asked, hashmod.digest_size) + + def testEncryptDecrypt3(self): + # Verify that OAEP supports labels + pt = self.rng(35) + xlabel = self.rng(22) + cipher = PKCS.new(self.key1024, label=xlabel) + ct = cipher.encrypt(pt) + self.assertEqual(cipher.decrypt(ct), pt) + + def testEncryptDecrypt4(self): + # Verify that encrypt() uses the custom MGF + global mgfcalls + # Helper function to monitor what's requested from MGF + + def newMGF(seed, maskLen): + global mgfcalls + mgfcalls += 1 + return b'\x00' * maskLen + + mgfcalls = 0 + pt = self.rng(32) + cipher = PKCS.new(self.key1024, mgfunc=newMGF) + ct = cipher.encrypt(pt) + self.assertEqual(mgfcalls, 2) + self.assertEqual(cipher.decrypt(ct), pt) + + def testByteArray(self): + pt = b("XER") + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(bytearray(pt)) + pt2 = cipher.decrypt(bytearray(ct)) + self.assertEqual(pt, pt2) + + def testMemoryview(self): + pt = b("XER") + cipher = PKCS.new(self.key1024) + ct = cipher.encrypt(memoryview(bytearray(pt))) + pt2 = cipher.decrypt(memoryview(bytearray(ct))) + self.assertEqual(pt, pt2) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, skip_slow_tests): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._skip_slow_tests = skip_slow_tests + self._id = "None" + + def load_tests(self, filename): + + def filter_rsa(group): + return RSA.import_key(group['privateKeyPem']) + + def
filter_sha(group): + if group['sha'] == "SHA-1": + return SHA1 + elif group['sha'] == "SHA-224": + return SHA224 + elif group['sha'] == "SHA-256": + return SHA256 + elif group['sha'] == "SHA-384": + return SHA384 + elif group['sha'] == "SHA-512": + return SHA512 + else: + raise ValueError("Unknown sha " + group['sha']) + + def filter_mgf(group): + if group['mgfSha'] == "SHA-1": + return lambda x, y: MGF1(x, y, SHA1) + elif group['mgfSha'] == "SHA-224": + return lambda x, y: MGF1(x, y, SHA224) + elif group['mgfSha'] == "SHA-256": + return lambda x, y: MGF1(x, y, SHA256) + elif group['mgfSha'] == "SHA-384": + return lambda x, y: MGF1(x, y, SHA384) + elif group['mgfSha'] == "SHA-512": + return lambda x, y: MGF1(x, y, SHA512) + else: + raise ValueError("Unknown mgf/sha " + group['mgfSha']) + + def filter_algo(group): + return "%s with MGF1/%s" % (group['sha'], group['mgfSha']) + + result = load_test_vectors_wycheproof(("Cipher", "wycheproof"), + filename, + "Wycheproof PKCS#1 OAEP (%s)" % filename, + group_tag={'rsa_key': filter_rsa, + 'hash_mod': filter_sha, + 'mgf': filter_mgf, + 'algo': filter_algo} + ) + return result + + def setUp(self): + self.tv = [] + self.tv.extend(self.load_tests("rsa_oaep_2048_sha1_mgf1sha1_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_2048_sha224_mgf1sha1_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_2048_sha224_mgf1sha224_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_2048_sha256_mgf1sha1_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_2048_sha256_mgf1sha256_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_2048_sha384_mgf1sha1_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_2048_sha384_mgf1sha384_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_2048_sha512_mgf1sha1_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_2048_sha512_mgf1sha512_test.json")) + if not self._skip_slow_tests: + self.tv.extend(self.load_tests("rsa_oaep_3072_sha256_mgf1sha1_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_3072_sha256_mgf1sha256_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_3072_sha512_mgf1sha1_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_3072_sha512_mgf1sha512_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_4096_sha256_mgf1sha1_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_4096_sha256_mgf1sha256_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_4096_sha512_mgf1sha1_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_4096_sha512_mgf1sha512_test.json")) + self.tv.extend(self.load_tests("rsa_oaep_misc_test.json")) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_decrypt(self, tv): + self._id = "Wycheproof Decrypt %s Test #%s" % (tv.algo, tv.id) + + cipher = PKCS.new(tv.rsa_key, hashAlgo=tv.hash_mod, mgfunc=tv.mgf, label=tv.label) + try: + pt = cipher.decrypt(tv.ct) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.assertEqual(pt, tv.msg) + self.warn(tv) + + def runTest(self): + + for tv in self.tv: + self.test_decrypt(tv) + + +def get_tests(config={}): + skip_slow_tests = not config.get('slow_tests') + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(PKCS1_OAEP_Tests) + tests += [TestVectorsWycheproof(wycheproof_warnings, skip_slow_tests)]
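+ # Usage note: get_tests({'slow_tests': True}) also exercises the 3072- and + # 4096-bit Wycheproof files that setUp() skips by default.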
+ return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/__init__.py new file mode 100644 index 0000000..d4d9f2e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/__init__.py @@ -0,0 +1,61 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/__init__.py: Self-test for hash modules +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test for hash modules""" + +__revision__ = "$Id$" + +def get_tests(config={}): + tests = [] + from Cryptodome.SelfTest.Hash import test_HMAC; tests += test_HMAC.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_CMAC; tests += test_CMAC.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_MD2; tests += test_MD2.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_MD4; tests += test_MD4.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_MD5; tests += test_MD5.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_RIPEMD160; tests += test_RIPEMD160.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_SHA1; tests += test_SHA1.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_SHA224; tests += test_SHA224.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_SHA256; tests += test_SHA256.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_SHA384; tests += test_SHA384.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_SHA512; tests += test_SHA512.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_SHA3_224; tests += test_SHA3_224.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_SHA3_256; tests += test_SHA3_256.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_SHA3_384; tests += test_SHA3_384.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_SHA3_512; tests += test_SHA3_512.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_keccak; tests += test_keccak.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_SHAKE; tests += test_SHAKE.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_BLAKE2; tests += test_BLAKE2.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_Poly1305; tests +=
test_Poly1305.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_cSHAKE; tests += test_cSHAKE.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_KMAC; tests += test_KMAC.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_TupleHash; tests += test_TupleHash.get_tests(config=config) + from Cryptodome.SelfTest.Hash import test_KangarooTwelve; tests += test_KangarooTwelve.get_tests(config=config) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/common.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/common.py new file mode 100644 index 0000000..4ed9234 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/common.py @@ -0,0 +1,290 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/common.py: Common code for Cryptodome.SelfTest.Hash +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-testing for PyCryptodome hash modules""" + +import re +import sys +import unittest +import binascii +import Cryptodome.Hash +from binascii import hexlify, unhexlify +from Cryptodome.Util.py3compat import b, tobytes +from Cryptodome.Util.strxor import strxor_c + +def t2b(hex_string): + shorter = re.sub(br'\s+', b'', tobytes(hex_string)) + return unhexlify(shorter) + + +class HashDigestSizeSelfTest(unittest.TestCase): + + def __init__(self, hashmod, description, expected, extra_params): + unittest.TestCase.__init__(self) + self.hashmod = hashmod + self.expected = expected + self.description = description + self.extra_params = extra_params + + def shortDescription(self): + return self.description + + def runTest(self): + if "truncate" not in self.extra_params: + self.assertTrue(hasattr(self.hashmod, "digest_size")) + self.assertEqual(self.hashmod.digest_size, self.expected) + h = self.hashmod.new(**self.extra_params) + self.assertTrue(hasattr(h, "digest_size")) + self.assertEqual(h.digest_size, self.expected) + + +class HashSelfTest(unittest.TestCase): + + def __init__(self, hashmod, description, expected, input, extra_params): + unittest.TestCase.__init__(self) + self.hashmod = hashmod + self.expected = expected.lower() + self.input = input + self.description = description + self.extra_params = extra_params + + def shortDescription(self): + return self.description + + def runTest(self): + h = self.hashmod.new(**self.extra_params) + h.update(self.input) + + out1 = binascii.b2a_hex(h.digest()) + out2 = h.hexdigest() + + h = self.hashmod.new(self.input, **self.extra_params) + + out3 = h.hexdigest() + out4 = binascii.b2a_hex(h.digest()) + + # PY3K: hexdigest() should return str(), and digest() bytes + self.assertEqual(self.expected, out1) # h = .new(); h.update(data); h.digest() + if sys.version_info[0] == 2: + self.assertEqual(self.expected, out2) # h = .new(); h.update(data); h.hexdigest() + self.assertEqual(self.expected, out3) # h = .new(data); h.hexdigest() + else: + self.assertEqual(self.expected.decode(), out2) # h = .new(); h.update(data); h.hexdigest() + self.assertEqual(self.expected.decode(), out3) # h = .new(data); h.hexdigest() + self.assertEqual(self.expected, out4) # h = .new(data); h.digest() + + # Verify that the .new() method produces a fresh hash object, except + # for MD5 and SHA1, which are hashlib objects. (But test any .new() + # method that does exist.) 
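+ # (h.new() restarts hashing with the same parameters, so replaying the + # same input must reproduce the expected digest.)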
+ if self.hashmod.__name__ not in ('Cryptodome.Hash.MD5', 'Cryptodome.Hash.SHA1') or hasattr(h, 'new'): + h2 = h.new() + h2.update(self.input) + out5 = binascii.b2a_hex(h2.digest()) + self.assertEqual(self.expected, out5) + + +class HashTestOID(unittest.TestCase): + def __init__(self, hashmod, oid, extra_params): + unittest.TestCase.__init__(self) + self.hashmod = hashmod + self.oid = oid + self.extra_params = extra_params + + def runTest(self): + h = self.hashmod.new(**self.extra_params) + self.assertEqual(h.oid, self.oid) + + +class ByteArrayTest(unittest.TestCase): + + def __init__(self, module, extra_params): + unittest.TestCase.__init__(self) + self.module = module + self.extra_params = extra_params + + def runTest(self): + data = b("\x00\x01\x02") + + # Data can be a bytearray (during initialization) + ba = bytearray(data) + + h1 = self.module.new(data, **self.extra_params) + h2 = self.module.new(ba, **self.extra_params) + ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a bytearray (during operation) + ba = bytearray(data) + + h1 = self.module.new(**self.extra_params) + h2 = self.module.new(**self.extra_params) + + h1.update(data) + h2.update(ba) + + ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +class MemoryViewTest(unittest.TestCase): + + def __init__(self, module, extra_params): + unittest.TestCase.__init__(self) + self.module = module + self.extra_params = extra_params + + def runTest(self): + + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in get_mv_ro, get_mv_rw: + + # Data can be a memoryview (during initialization) + mv = get_mv(data) + + h1 = self.module.new(data, **self.extra_params) + h2 = self.module.new(mv, **self.extra_params) + if not mv.readonly: + mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a memoryview (during operation) + mv = get_mv(data) + + h1 = self.module.new(**self.extra_params) + h2 = self.module.new(**self.extra_params) + h1.update(data) + h2.update(mv) + if not mv.readonly: + mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +class MACSelfTest(unittest.TestCase): + + def __init__(self, module, description, result, data, key, params): + unittest.TestCase.__init__(self) + self.module = module + self.result = t2b(result) + self.data = t2b(data) + self.key = t2b(key) + self.params = params + self.description = description + + def shortDescription(self): + return self.description + + def runTest(self): + + result_hex = hexlify(self.result) + + # Verify result + h = self.module.new(self.key, **self.params) + h.update(self.data) + self.assertEqual(self.result, h.digest()) + self.assertEqual(hexlify(self.result).decode('ascii'), h.hexdigest()) + + # Verify that correct MAC does not raise any exception + h.verify(self.result) + h.hexverify(result_hex) + + # Verify that incorrect MAC does raise ValueError exception + wrong_mac = strxor_c(self.result, 255) + self.assertRaises(ValueError, h.verify, wrong_mac) + self.assertRaises(ValueError, h.hexverify, "4556") + + # Verify again, with data passed to new() + h = self.module.new(self.key, self.data, **self.params) + self.assertEqual(self.result, h.digest()) + self.assertEqual(hexlify(self.result).decode('ascii'), h.hexdigest()) + + # Test .copy() + try: + h = self.module.new(self.key, self.data, **self.params) + h2 = h.copy() + h3 = h.copy() + + # Verify that changing the copy does not change the original + 
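# (h2 absorbs extra data below; h3, an untouched copy, must still + # produce the reference MAC.) +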
h2.update(b"bla") + self.assertEqual(h3.digest(), self.result) + + # Verify that both can reach the same state + h.update(b"bla") + self.assertEqual(h.digest(), h2.digest()) + except NotImplementedError: + pass + + # PY3K: Check that hexdigest() returns str and digest() returns bytes + self.assertTrue(isinstance(h.digest(), type(b""))) + self.assertTrue(isinstance(h.hexdigest(), type(""))) + + # PY3K: Check that .hexverify() accepts bytes or str + h.hexverify(h.hexdigest()) + h.hexverify(h.hexdigest().encode('ascii')) + + +def make_hash_tests(module, module_name, test_data, digest_size, oid=None, + extra_params={}): + tests = [] + for i in range(len(test_data)): + row = test_data[i] + (expected, input) = map(tobytes,row[0:2]) + if len(row) < 3: + description = repr(input) + else: + description = row[2] + name = "%s #%d: %s" % (module_name, i+1, description) + tests.append(HashSelfTest(module, name, expected, input, extra_params)) + + name = "%s #%d: digest_size" % (module_name, len(test_data) + 1) + tests.append(HashDigestSizeSelfTest(module, name, digest_size, extra_params)) + + if oid is not None: + tests.append(HashTestOID(module, oid, extra_params)) + + tests.append(ByteArrayTest(module, extra_params)) + + tests.append(MemoryViewTest(module, extra_params)) + + return tests + + +def make_mac_tests(module, module_name, test_data): + tests = [] + for i, row in enumerate(test_data): + if len(row) == 4: + (key, data, results, description, params) = list(row) + [ {} ] + else: + (key, data, results, description, params) = row + name = "%s #%d: %s" % (module_name, i+1, description) + tests.append(MACSelfTest(module, name, results, data, key, params)) + return tests + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_BLAKE2.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_BLAKE2.py new file mode 100644 index 0000000..e5ed63b --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_BLAKE2.py @@ -0,0 +1,482 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import os +import re +import unittest +import warnings +from binascii import unhexlify, hexlify + +from Cryptodome.Util.py3compat import tobytes +from Cryptodome.Util.strxor import strxor_c +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Hash import BLAKE2b, BLAKE2s + + +class Blake2Test(unittest.TestCase): + + def test_new_positive(self): + + h = self.BLAKE2.new(digest_bits=self.max_bits) + for new_func in self.BLAKE2.new, h.new: + + for dbits in range(8, self.max_bits + 1, 8): + hobj = new_func(digest_bits=dbits) + self.assertEqual(hobj.digest_size, dbits // 8) + + for dbytes in range(1, self.max_bytes + 1): + hobj = new_func(digest_bytes=dbytes) + self.assertEqual(hobj.digest_size, dbytes) + + digest1 = new_func(data=b"\x90", digest_bytes=self.max_bytes).digest() + digest2 = new_func(digest_bytes=self.max_bytes).update(b"\x90").digest() + self.assertEqual(digest1, digest2) + + new_func(data=b"A", key=b"5", digest_bytes=self.max_bytes) + + hobj = h.new() + self.assertEqual(hobj.digest_size, self.max_bytes) + + def test_new_negative(self): + + h = self.BLAKE2.new(digest_bits=self.max_bits) + for new_func in self.BLAKE2.new, h.new: + self.assertRaises(TypeError, new_func, + digest_bytes=self.max_bytes, + digest_bits=self.max_bits) + self.assertRaises(ValueError, new_func, digest_bytes=0) + self.assertRaises(ValueError, new_func, + digest_bytes=self.max_bytes + 1) + self.assertRaises(ValueError, new_func, digest_bits=7) + self.assertRaises(ValueError, new_func, digest_bits=15) + self.assertRaises(ValueError, new_func, + digest_bits=self.max_bits + 1) + self.assertRaises(TypeError, new_func, + digest_bytes=self.max_bytes, + key=u"string") + self.assertRaises(TypeError, new_func, + digest_bytes=self.max_bytes, + data=u"string") + + def test_default_digest_size(self): + digest = self.BLAKE2.new(data=b'abc').digest() + self.assertEqual(len(digest), self.max_bytes) + + def test_update(self): + pieces = [b"\x0A" * 200, b"\x14" * 300] + h = self.BLAKE2.new(digest_bytes=self.max_bytes) + h.update(pieces[0]).update(pieces[1]) + digest = h.digest() + h = self.BLAKE2.new(digest_bytes=self.max_bytes) + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.digest(), digest) + + def test_update_negative(self): + h = self.BLAKE2.new(digest_bytes=self.max_bytes) + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = self.BLAKE2.new(digest_bytes=self.max_bytes) + digest = h.digest() + + # hexdigest does not change the state + self.assertEqual(h.digest(), digest) + # digest returns a byte string + self.assertTrue(isinstance(digest, type(b"digest"))) + + def test_update_after_digest(self): + msg = b"rrrrttt" + + # Normally, update() cannot be done after digest() + h = self.BLAKE2.new(digest_bits=256, data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = self.BLAKE2.new(digest_bits=256, data=msg).digest() + + # With the proper flag, it is allowed + h = self.BLAKE2.new(digest_bits=256, data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... 
and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + def test_hex_digest(self): + mac = self.BLAKE2.new(digest_bits=self.max_bits) + digest = mac.digest() + hexdigest = mac.hexdigest() + + # hexdigest is equivalent to digest + self.assertEqual(hexlify(digest), tobytes(hexdigest)) + # hexdigest does not change the state + self.assertEqual(mac.hexdigest(), hexdigest) + # hexdigest returns a string + self.assertTrue(isinstance(hexdigest, type("digest"))) + + def test_verify(self): + h = self.BLAKE2.new(digest_bytes=self.max_bytes, key=b"4") + mac = h.digest() + h.verify(mac) + wrong_mac = strxor_c(mac, 255) + self.assertRaises(ValueError, h.verify, wrong_mac) + + def test_hexverify(self): + h = self.BLAKE2.new(digest_bytes=self.max_bytes, key=b"4") + mac = h.hexdigest() + h.hexverify(mac) + self.assertRaises(ValueError, h.hexverify, "4556") + + def test_oid(self): + + prefix = "1.3.6.1.4.1.1722.12.2." + self.oid_variant + "." + + for digest_bits in self.digest_bits_oid: + h = self.BLAKE2.new(digest_bits=digest_bits) + self.assertEqual(h.oid, prefix + str(digest_bits // 8)) + + h = self.BLAKE2.new(digest_bits=digest_bits, key=b"secret") + self.assertRaises(AttributeError, lambda: h.oid) + + for digest_bits in (8, self.max_bits): + if digest_bits in self.digest_bits_oid: + continue + self.assertRaises(AttributeError, lambda: h.oid) + + def test_bytearray(self): + + key = b'0' * 16 + data = b"\x00\x01\x02" + + # Data and key can be a bytearray (during initialization) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = self.BLAKE2.new(data=data, key=key) + h2 = self.BLAKE2.new(data=data_ba, key=key_ba) + key_ba[:1] = b'\xFF' + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a bytearray (during operation) + data_ba = bytearray(data) + + h1 = self.BLAKE2.new() + h2 = self.BLAKE2.new() + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + def test_memoryview(self): + + key = b'0' * 16 + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data and key can be a memoryview (during initialization) + key_mv = get_mv(key) + data_mv = get_mv(data) + + h1 = self.BLAKE2.new(data=data, key=key) + h2 = self.BLAKE2.new(data=data_mv, key=key_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + key_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = self.BLAKE2.new() + h2 = self.BLAKE2.new() + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + +class Blake2bTest(Blake2Test): + #: Module + BLAKE2 = BLAKE2b + #: Max output size (in bits) + max_bits = 512 + #: Max output size (in bytes) + max_bytes = 64 + #: Bit size of the digests for which an ASN OID exists + digest_bits_oid = (160, 256, 384, 512) + # http://tools.ietf.org/html/draft-saarinen-blake2-02 + oid_variant = "1" + + +class Blake2sTest(Blake2Test): + #: Module + BLAKE2 = BLAKE2s + #: Max output size (in bits) + max_bits = 256 + #: Max output size (in bytes) + max_bytes = 32 + #: Bit size of the digests for which an ASN OID exists + digest_bits_oid = (128, 160, 224, 256) + # http://tools.ietf.org/html/draft-saarinen-blake2-02 + oid_variant = "2" + + 
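+# The official BLAKE2 vector files list one case per three lines ("in:", +# "key:", "hash:", each a tab plus a hex field); the loader below consumes +# them in that fixed in/key/hash rotation.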
+class Blake2OfficialTestVector(unittest.TestCase): + + def _load_tests(self, test_vector_file): + expected = "in" + test_vectors = [] + with open(test_vector_file, "rt") as test_vector_fd: + for line_number, line in enumerate(test_vector_fd): + + if line.strip() == "" or line.startswith("#"): + continue + + res = re.match("%s:\t([0-9A-Fa-f]*)" % expected, line) + if not res: + raise ValueError("Incorrect test vector format (line %d)" + % line_number) + + if res.group(1): + bin_value = unhexlify(tobytes(res.group(1))) + else: + bin_value = b"" + if expected == "in": + input_data = bin_value + expected = "key" + elif expected == "key": + key = bin_value + expected = "hash" + else: + result = bin_value + expected = "in" + test_vectors.append((input_data, key, result)) + return test_vectors + + def setUp(self): + + dir_comps = ("Hash", self.name) + file_name = self.name.lower() + "-test.txt" + self.description = "%s tests" % self.name + + try: + import pycryptodome_test_vectors # type: ignore + except ImportError: + warnings.warn("Warning: skipping extended tests for %s" % self.name, + UserWarning) + self.test_vectors = [] + return + + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + self.test_vectors = self._load_tests(full_file_name) + + def runTest(self): + for (input_data, key, result) in self.test_vectors: + mac = self.BLAKE2.new(key=key, digest_bytes=self.max_bytes) + mac.update(input_data) + self.assertEqual(mac.digest(), result) + + +class Blake2bOfficialTestVector(Blake2OfficialTestVector): + #: Module + BLAKE2 = BLAKE2b + #: Hash name + name = "BLAKE2b" + #: Max digest size + max_bytes = 64 + + +class Blake2sOfficialTestVector(Blake2OfficialTestVector): + #: Module + BLAKE2 = BLAKE2s + #: Hash name + name = "BLAKE2s" + #: Max digest size + max_bytes = 32 + + +class Blake2TestVector1(unittest.TestCase): + + def _load_tests(self, test_vector_file): + test_vectors = [] + with open(test_vector_file, "rt") as test_vector_fd: + for line_number, line in enumerate(test_vector_fd): + if line.strip() == "" or line.startswith("#"): + continue + res = re.match("digest: ([0-9A-Fa-f]*)", line) + if not res: + raise ValueError("Incorrect test vector format (line %d)" + % line_number) + + test_vectors.append(unhexlify(tobytes(res.group(1)))) + return test_vectors + + def setUp(self): + dir_comps = ("Hash", self.name) + file_name = "tv1.txt" + self.description = "%s tests" % self.name + + try: + import pycryptodome_test_vectors + except ImportError: + warnings.warn("Warning: skipping extended tests for %s" % self.name, + UserWarning) + self.test_vectors = [] + return + + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + self.test_vectors = self._load_tests(full_file_name) + + def runTest(self): + + for tv in self.test_vectors: + digest_bytes = len(tv) + next_data = b"" + for _ in range(100): + h = self.BLAKE2.new(digest_bytes=digest_bytes) + h.update(next_data) + next_data = h.digest() + next_data + self.assertEqual(h.digest(), tv) + + +class Blake2bTestVector1(Blake2TestVector1): + #: Module + BLAKE2 = BLAKE2b + #: Hash name + name = "BLAKE2b" + + +class Blake2sTestVector1(Blake2TestVector1): + #: Module + BLAKE2 = BLAKE2s + #: Hash name + name = "BLAKE2s" + + +class Blake2TestVector2(unittest.TestCase): + + def _load_tests(self, test_vector_file): + test_vectors = [] + with open(test_vector_file, "rt") as 
test_vector_fd: + for line_number, line in enumerate(test_vector_fd): + if line.strip() == "" or line.startswith("#"): + continue + res = re.match(r"digest\(([0-9]+)\): ([0-9A-Fa-f]*)", line) + if not res: + raise ValueError("Incorrect test vector format (line %d)" + % line_number) + key_size = int(res.group(1)) + result = unhexlify(tobytes(res.group(2))) + test_vectors.append((key_size, result)) + return test_vectors + + def setUp(self): + dir_comps = ("Hash", self.name) + file_name = "tv2.txt" + self.description = "%s tests" % self.name + + try: + import pycryptodome_test_vectors # type: ignore + except ImportError: + warnings.warn("Warning: skipping extended tests for %s" % self.name, + UserWarning) + self.test_vectors = [] + return + + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + self.test_vectors = self._load_tests(full_file_name) + + def runTest(self): + + for key_size, result in self.test_vectors: + next_data = b"" + for _ in range(100): + h = self.BLAKE2.new(digest_bytes=self.max_bytes, + key=b"A" * key_size) + h.update(next_data) + next_data = h.digest() + next_data + self.assertEqual(h.digest(), result) + + +class Blake2bTestVector2(Blake2TestVector2): + #: Module + BLAKE2 = BLAKE2b + #: Hash name + name = "BLAKE2b" + #: Max digest size in bytes + max_bytes = 64 + + +class Blake2sTestVector2(Blake2TestVector2): + #: Module + BLAKE2 = BLAKE2s + #: Hash name + name = "BLAKE2s" + #: Max digest size in bytes + max_bytes = 32 + + +def get_tests(config={}): + tests = [] + + tests += list_test_cases(Blake2bTest) + tests.append(Blake2bOfficialTestVector()) + tests.append(Blake2bTestVector1()) + tests.append(Blake2bTestVector2()) + + tests += list_test_cases(Blake2sTest) + tests.append(Blake2sOfficialTestVector()) + tests.append(Blake2sTestVector1()) + tests.append(Blake2sTestVector2()) + + return tests + + +if __name__ == '__main__': + import unittest + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_CMAC.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_CMAC.py new file mode 100644 index 0000000..f88f1cd --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_CMAC.py @@ -0,0 +1,448 @@ +# +# SelfTest/Hash/CMAC.py: Self-test for the CMAC module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash.CMAC""" + +import json +import unittest +from binascii import unhexlify + +from Cryptodome.Util.py3compat import tobytes + +from Cryptodome.Hash import CMAC +from Cryptodome.Cipher import AES, DES3 +from Cryptodome.Hash import SHAKE128 + +from Cryptodome.Util.strxor import strxor + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors_wycheproof + +# This is a list of (key, data, result, description, module) tuples. +test_data = [ + + ## Test vectors from RFC 4493 ## + ## They are also in NIST SP 800 38B D.2 ## + ( '2b7e151628aed2a6abf7158809cf4f3c', + '', + 'bb1d6929e95937287fa37d129b756746', + 'RFC 4493 #1', + AES + ), + + ( '2b7e151628aed2a6abf7158809cf4f3c', + '6bc1bee22e409f96e93d7e117393172a', + '070a16b46b4d4144f79bdd9dd04a287c', + 'RFC 4493 #2', + AES + ), + + ( '2b7e151628aed2a6abf7158809cf4f3c', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411', + 'dfa66747de9ae63030ca32611497c827', + 'RFC 4493 #3', + AES + ), + + ( '2b7e151628aed2a6abf7158809cf4f3c', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411e5fbc1191a0a52ef'+ + 'f69f2445df4f9b17ad2b417be66c3710', + '51f0bebf7e3b9d92fc49741779363cfe', + 'RFC 4493 #4', + AES + ), + + ## The rest of Appendix D of NIST SP 800 38B + ## was not totally correct. + ## Values in Examples 14, 15, 18, and 19 were wrong.
+ ## The updated test values are published in: + ## http://csrc.nist.gov/publications/nistpubs/800-38B/Updated_CMAC_Examples.pdf + + ( '8e73b0f7da0e6452c810f32b809079e5'+ + '62f8ead2522c6b7b', + '', + 'd17ddf46adaacde531cac483de7a9367', + 'NIST SP 800 38B D.2 Example 5', + AES + ), + + ( '8e73b0f7da0e6452c810f32b809079e5'+ + '62f8ead2522c6b7b', + '6bc1bee22e409f96e93d7e117393172a', + '9e99a7bf31e710900662f65e617c5184', + 'NIST SP 800 38B D.2 Example 6', + AES + ), + + ( '8e73b0f7da0e6452c810f32b809079e5'+ + '62f8ead2522c6b7b', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411', + '8a1de5be2eb31aad089a82e6ee908b0e', + 'NIST SP 800 38B D.2 Example 7', + AES + ), + + ( '8e73b0f7da0e6452c810f32b809079e5'+ + '62f8ead2522c6b7b', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411e5fbc1191a0a52ef'+ + 'f69f2445df4f9b17ad2b417be66c3710', + 'a1d5df0eed790f794d77589659f39a11', + 'NIST SP 800 38B D.2 Example 8', + AES + ), + + ( '603deb1015ca71be2b73aef0857d7781'+ + '1f352c073b6108d72d9810a30914dff4', + '', + '028962f61b7bf89efc6b551f4667d983', + 'NIST SP 800 38B D.3 Example 9', + AES + ), + + ( '603deb1015ca71be2b73aef0857d7781'+ + '1f352c073b6108d72d9810a30914dff4', + '6bc1bee22e409f96e93d7e117393172a', + '28a7023f452e8f82bd4bf28d8c37c35c', + 'NIST SP 800 38B D.3 Example 10', + AES + ), + + ( '603deb1015ca71be2b73aef0857d7781'+ + '1f352c073b6108d72d9810a30914dff4', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411', + 'aaf3d8f1de5640c232f5b169b9c911e6', + 'NIST SP 800 38B D.3 Example 11', + AES + ), + + ( '603deb1015ca71be2b73aef0857d7781'+ + '1f352c073b6108d72d9810a30914dff4', + '6bc1bee22e409f96e93d7e117393172a'+ + 'ae2d8a571e03ac9c9eb76fac45af8e51'+ + '30c81c46a35ce411e5fbc1191a0a52ef'+ + 'f69f2445df4f9b17ad2b417be66c3710', + 'e1992190549f6ed5696a2c056c315410', + 'NIST SP 800 38B D.3 Example 12', + AES + ), + + ( '8aa83bf8cbda1062'+ + '0bc1bf19fbb6cd58'+ + 'bc313d4a371ca8b5', + '', + 'b7a688e122ffaf95', + 'NIST SP 800 38B D.4 Example 13', + DES3 + ), + + ( '8aa83bf8cbda1062'+ + '0bc1bf19fbb6cd58'+ + 'bc313d4a371ca8b5', + '6bc1bee22e409f96', + '8e8f293136283797', + 'NIST SP 800 38B D.4 Example 14', + DES3 + ), + + ( '8aa83bf8cbda1062'+ + '0bc1bf19fbb6cd58'+ + 'bc313d4a371ca8b5', + '6bc1bee22e409f96'+ + 'e93d7e117393172a'+ + 'ae2d8a57', + '743ddbe0ce2dc2ed', + 'NIST SP 800 38B D.4 Example 15', + DES3 + ), + + ( '8aa83bf8cbda1062'+ + '0bc1bf19fbb6cd58'+ + 'bc313d4a371ca8b5', + '6bc1bee22e409f96'+ + 'e93d7e117393172a'+ + 'ae2d8a571e03ac9c'+ + '9eb76fac45af8e51', + '33e6b1092400eae5', + 'NIST SP 800 38B D.4 Example 16', + DES3 + ), + + ( '4cf15134a2850dd5'+ + '8a3d10ba80570d38', + '', + 'bd2ebf9a3ba00361', + 'NIST SP 800 38B D.7 Example 17', + DES3 + ), + + ( '4cf15134a2850dd5'+ + '8a3d10ba80570d38', + '6bc1bee22e409f96', + '4ff2ab813c53ce83', + 'NIST SP 800 38B D.7 Example 18', + DES3 + ), + + ( '4cf15134a2850dd5'+ + '8a3d10ba80570d38', + '6bc1bee22e409f96'+ + 'e93d7e117393172a'+ + 'ae2d8a57', + '62dd1b471902bd4e', + 'NIST SP 800 38B D.7 Example 19', + DES3 + ), + + ( '4cf15134a2850dd5'+ + '8a3d10ba80570d38', + '6bc1bee22e409f96'+ + 'e93d7e117393172a'+ + 'ae2d8a571e03ac9c'+ + '9eb76fac45af8e51', + '31b1e431dabc4eb8', + 'NIST SP 800 38B D.7 Example 20', + DES3 + ), + +] + + +def get_tag_random(tag, length): + return SHAKE128.new(data=tobytes(tag)).read(length) + + +class TestCMAC(unittest.TestCase): + + def test_internal_caching(self): + """Verify that internal 
caching is implemented correctly""" + + data_to_mac = get_tag_random("data_to_mac", 128) + key = get_tag_random("key", 16) + ref_mac = CMAC.new(key, msg=data_to_mac, ciphermod=AES).digest() + + # Break up in chunks of different length + # The result must always be the same + for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: + + chunks = [data_to_mac[i:i+chunk_length] for i in + range(0, len(data_to_mac), chunk_length)] + + mac = CMAC.new(key, ciphermod=AES) + for chunk in chunks: + mac.update(chunk) + self.assertEqual(ref_mac, mac.digest()) + + def test_update_after_digest(self): + msg = b"rrrrttt" + key = b"4" * 16 + + # Normally, update() cannot be done after digest() + h = CMAC.new(key, msg[:4], ciphermod=AES) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = CMAC.new(key, msg, ciphermod=AES).digest() + + # With the proper flag, it is allowed + h2 = CMAC.new(key, msg[:4], ciphermod=AES, update_after_digest=True) + self.assertEqual(h2.digest(), dig1) + # ... and the subsequent digest applies to the entire message + # up to that point + h2.update(msg[4:]) + self.assertEqual(h2.digest(), dig2) + + +class ByteArrayTests(unittest.TestCase): + + def runTest(self): + + key = b"0" * 16 + data = b"\x00\x01\x02" + + # Data and key can be a bytearray (during initialization) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = CMAC.new(key, data, ciphermod=AES) + h2 = CMAC.new(key_ba, data_ba, ciphermod=AES) + key_ba[:1] = b'\xFF' + data_ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a bytearray (during operation) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = CMAC.new(key, ciphermod=AES) + h2 = CMAC.new(key, ciphermod=AES) + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +class MemoryViewTests(unittest.TestCase): + + def runTest(self): + + key = b"0" * 16 + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data and key can be a memoryview (during initialization) + key_mv = get_mv(key) + data_mv = get_mv(data) + + h1 = CMAC.new(key, data, ciphermod=AES) + h2 = CMAC.new(key_mv, data_mv, ciphermod=AES) + if not data_mv.readonly: + key_mv[:1] = b'\xFF' + data_mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = CMAC.new(key, ciphermod=AES) + h2 = CMAC.new(key, ciphermod=AES) + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" + + def setUp(self): + + def filter_tag(group): + return group['tagSize'] // 8 + + self.tv = load_test_vectors_wycheproof(("Hash", "wycheproof"), + "aes_cmac_test.json", + "Wycheproof CMAC", + group_tag={'tag_size': filter_tag}) + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_create_mac(self, tv): + self._id = "Wycheproof MAC creation Test #" + str(tv.id) + + try: + tag = CMAC.new(tv.key, tv.msg, ciphermod=AES, mac_len=tv.tag_size).digest() + except 
ValueError as e: + if len(tv.key) not in (16, 24, 32) and "key length" in str(e): + return + raise e + if tv.valid: + self.assertEqual(tag, tv.tag) + self.warn(tv) + + def test_verify_mac(self, tv): + self._id = "Wycheproof MAC verification Test #" + str(tv.id) + + try: + mac = CMAC.new(tv.key, tv.msg, ciphermod=AES, mac_len=tv.tag_size) + except ValueError as e: + if len(tv.key) not in (16, 24, 32) and "key length" in str(e): + return + raise e + try: + mac.verify(tv.tag) + except ValueError: + assert not tv.valid + else: + assert tv.valid + self.warn(tv) + + def runTest(self): + + for tv in self.tv: + self.test_create_mac(tv) + self.test_verify_mac(tv) + + +def get_tests(config={}): + global test_data + import types + from .common import make_mac_tests + + wycheproof_warnings = config.get('wycheproof_warnings') + + # Add new() parameters to the back of each test vector + params_test_data = [] + for row in test_data: + t = list(row) + t[4] = dict(ciphermod=t[4]) + params_test_data.append(t) + + tests = make_mac_tests(CMAC, "CMAC", params_test_data) + tests.append(ByteArrayTests()) + tests += list_test_cases(TestCMAC) + tests.append(MemoryViewTests()) + tests += [ TestVectorsWycheproof(wycheproof_warnings) ] + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_HMAC.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_HMAC.py new file mode 100644 index 0000000..ecec1a8 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_HMAC.py @@ -0,0 +1,548 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/HMAC.py: Self-test for the HMAC module +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash.HMAC""" + +import unittest +from binascii import hexlify +from Cryptodome.Util.py3compat import tostr, tobytes + +from Cryptodome.Hash import (HMAC, MD5, SHA1, SHA256, + SHA224, SHA384, SHA512, + RIPEMD160, + SHA3_224, SHA3_256, SHA3_384, SHA3_512) + + +hash_modules = dict(MD5=MD5, SHA1=SHA1, SHA256=SHA256, + SHA224=SHA224, SHA384=SHA384, SHA512=SHA512, + RIPEMD160=RIPEMD160, + SHA3_224=SHA3_224, SHA3_256=SHA3_256, + SHA3_384=SHA3_384, SHA3_512=SHA3_512) + +default_hash = None + +def xl(text): + return tostr(hexlify(tobytes(text))) + +# This is a list of (key, data, results, description) tuples.
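+# For example, the first RFC 2202 MD5 entry below is exercised roughly like +# this (a sketch only; unhexlify from binascii is assumed, and the harness +# below does the equivalent via make_mac_tests): +# +# key, data, results, desc = test_data[1] # 'RFC 2202 #1-MD5' +# mac = HMAC.new(unhexlify(key), unhexlify(data), digestmod=MD5) +# assert mac.hexdigest() == results['MD5']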
+test_data = [ + ## Test vectors from RFC 2202 ## + # Test that the default hashmod is MD5 + ('0b' * 16, + '4869205468657265', + dict(default_hash='9294727a3638bb1c13f48ef8158bfc9d'), + 'default-is-MD5'), + + # Test case 1 (MD5) + ('0b' * 16, + '4869205468657265', + dict(MD5='9294727a3638bb1c13f48ef8158bfc9d'), + 'RFC 2202 #1-MD5 (HMAC-MD5)'), + + # Test case 1 (SHA1) + ('0b' * 20, + '4869205468657265', + dict(SHA1='b617318655057264e28bc0b6fb378c8ef146be00'), + 'RFC 2202 #1-SHA1 (HMAC-SHA1)'), + + # Test case 2 + ('4a656665', + '7768617420646f2079612077616e7420666f72206e6f7468696e673f', + dict(MD5='750c783e6ab0b503eaa86e310a5db738', + SHA1='effcdf6ae5eb2fa2d27416d5f184df9c259a7c79'), + 'RFC 2202 #2 (HMAC-MD5/SHA1)'), + + # Test case 3 (MD5) + ('aa' * 16, + 'dd' * 50, + dict(MD5='56be34521d144c88dbb8c733f0e8b3f6'), + 'RFC 2202 #3-MD5 (HMAC-MD5)'), + + # Test case 3 (SHA1) + ('aa' * 20, + 'dd' * 50, + dict(SHA1='125d7342b9ac11cd91a39af48aa17b4f63f175d3'), + 'RFC 2202 #3-SHA1 (HMAC-SHA1)'), + + # Test case 4 + ('0102030405060708090a0b0c0d0e0f10111213141516171819', + 'cd' * 50, + dict(MD5='697eaf0aca3a3aea3a75164746ffaa79', + SHA1='4c9007f4026250c6bc8414f9bf50c86c2d7235da'), + 'RFC 2202 #4 (HMAC-MD5/SHA1)'), + + # Test case 5 (MD5) + ('0c' * 16, + '546573742057697468205472756e636174696f6e', + dict(MD5='56461ef2342edc00f9bab995690efd4c'), + 'RFC 2202 #5-MD5 (HMAC-MD5)'), + + # Test case 5 (SHA1) + # NB: We do not implement hash truncation, so we only test the full hash here. + ('0c' * 20, + '546573742057697468205472756e636174696f6e', + dict(SHA1='4c1a03424b55e07fe7f27be1d58bb9324a9a5a04'), + 'RFC 2202 #5-SHA1 (HMAC-SHA1)'), + + # Test case 6 + ('aa' * 80, + '54657374205573696e67204c6172676572205468616e20426c6f636b2d53697a' + + '65204b6579202d2048617368204b6579204669727374', + dict(MD5='6b1ab7fe4bd7bf8f0b62e6ce61b9d0cd', + SHA1='aa4ae5e15272d00e95705637ce8a3b55ed402112'), + 'RFC 2202 #6 (HMAC-MD5/SHA1)'), + + # Test case 7 + ('aa' * 80, + '54657374205573696e67204c6172676572205468616e20426c6f636b2d53697a' + + '65204b657920616e64204c6172676572205468616e204f6e6520426c6f636b2d' + + '53697a652044617461', + dict(MD5='6f630fad67cda0ee1fb1f562db3aa53e', + SHA1='e8e99d0f45237d786d6bbaa7965c7808bbff1a91'), + 'RFC 2202 #7 (HMAC-MD5/SHA1)'), + + ## Test vectors from RFC 4231 ## + # 4.2. Test Case 1 + ('0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b', + '4869205468657265', + dict(SHA256=''' + b0344c61d8db38535ca8afceaf0bf12b + 881dc200c9833da726e9376c2e32cff7 + '''), + 'RFC 4231 #1 (HMAC-SHA256)'), + + # 4.3. Test Case 2 - Test with a key shorter than the length of the HMAC + # output. + ('4a656665', + '7768617420646f2079612077616e7420666f72206e6f7468696e673f', + dict(SHA256=''' + 5bdcc146bf60754e6a042426089575c7 + 5a003f089d2739839dec58b964ec3843 + '''), + 'RFC 4231 #2 (HMAC-SHA256)'), + + # 4.4. Test Case 3 - Test with a combined length of key and data that is + # larger than 64 bytes (= block-size of SHA-224 and SHA-256). + ('aa' * 20, + 'dd' * 50, + dict(SHA256=''' + 773ea91e36800e46854db8ebd09181a7 + 2959098b3ef8c122d9635514ced565fe + '''), + 'RFC 4231 #3 (HMAC-SHA256)'), + + # 4.5. Test Case 4 - Test with a combined length of key and data that is + # larger than 64 bytes (= block-size of SHA-224 and SHA-256). + ('0102030405060708090a0b0c0d0e0f10111213141516171819', + 'cd' * 50, + dict(SHA256=''' + 82558a389a443c0ea4cc819899f2083a + 85f0faa3e578f8077a2e3ff46729665b + '''), + 'RFC 4231 #4 (HMAC-SHA256)'), + + # 4.6. Test Case 5 - Test with a truncation of output to 128 bits. 
+ # + # Not included because we do not implement hash truncation. + # + + # 4.7. Test Case 6 - Test with a key larger than 128 bytes (= block-size of + # SHA-384 and SHA-512). + ('aa' * 131, + '54657374205573696e67204c6172676572205468616e20426c6f636b2d53697a' + + '65204b6579202d2048617368204b6579204669727374', + dict(SHA256=''' + 60e431591ee0b67f0d8a26aacbf5b77f + 8e0bc6213728c5140546040f0ee37f54 + '''), + 'RFC 4231 #6 (HMAC-SHA256)'), + + # 4.8. Test Case 7 - Test with a key and data that is larger than 128 bytes + # (= block-size of SHA-384 and SHA-512). + ('aa' * 131, + '5468697320697320612074657374207573696e672061206c6172676572207468' + + '616e20626c6f636b2d73697a65206b657920616e642061206c61726765722074' + + '68616e20626c6f636b2d73697a6520646174612e20546865206b6579206e6565' + + '647320746f20626520686173686564206265666f7265206265696e6720757365' + + '642062792074686520484d414320616c676f726974686d2e', + dict(SHA256=''' + 9b09ffa71b942fcb27635fbcd5b0e944 + bfdc63644f0713938a7f51535c3a35e2 + '''), + 'RFC 4231 #7 (HMAC-SHA256)'), + + # Test case 8 (SHA224) + ('4a656665', + '7768617420646f2079612077616e74' + + '20666f72206e6f7468696e673f', + dict(SHA224='a30e01098bc6dbbf45690f3a7e9e6d0f8bbea2a39e6148008fd05e44'), + 'RFC 4634 8.4 SHA224 (HMAC-SHA224)'), + + # Test case 9 (SHA384) + ('4a656665', + '7768617420646f2079612077616e74' + + '20666f72206e6f7468696e673f', + dict(SHA384='af45d2e376484031617f78d2b58a6b1b9c7ef464f5a01b47e42ec3736322445e8e2240ca5e69e2c78b3239ecfab21649'), + 'RFC 4634 8.4 SHA384 (HMAC-SHA384)'), + + # Test case 10 (SHA512) + ('4a656665', + '7768617420646f2079612077616e74' + + '20666f72206e6f7468696e673f', + dict(SHA512='164b7a7bfcf819e2e395fbe73b56e0a387bd64222e831fd610270cd7ea2505549758bf75c05a994a6d034f65f8f0e6fdcaeab1a34d4a6b4b636e070a38bce737'), + 'RFC 4634 8.4 SHA512 (HMAC-SHA512)'), + + # Test case 11 (RIPEMD) + ('0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b', + xl("Hi There"), + dict(RIPEMD160='24cb4bd67d20fc1a5d2ed7732dcc39377f0a5668'), + 'RFC 2286 #1 (HMAC-RIPEMD)'), + + # Test case 12 (RIPEMD) + (xl("Jefe"), + xl("what do ya want for nothing?"), + dict(RIPEMD160='dda6c0213a485a9e24f4742064a7f033b43c4069'), + 'RFC 2286 #2 (HMAC-RIPEMD)'), + + # Test case 13 (RIPEMD) + ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', + 'dd' * 50, + dict(RIPEMD160='b0b105360de759960ab4f35298e116e295d8e7c1'), + 'RFC 2286 #3 (HMAC-RIPEMD)'), + + # Test case 14 (RIPEMD) + ('0102030405060708090a0b0c0d0e0f10111213141516171819', + 'cd' * 50, + dict(RIPEMD160='d5ca862f4d21d5e610e18b4cf1beb97a4365ecf4'), + 'RFC 2286 #4 (HMAC-RIPEMD)'), + + # Test case 15 (RIPEMD) + ('0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c', + xl("Test With Truncation"), + dict(RIPEMD160='7619693978f91d90539ae786500ff3d8e0518e39'), + 'RFC 2286 #5 (HMAC-RIPEMD)'), + + # Test case 16 (RIPEMD) + ('aa' * 80, + xl("Test Using Larger Than Block-Size Key - Hash Key First"), + dict(RIPEMD160='6466ca07ac5eac29e1bd523e5ada7605b791fd8b'), + 'RFC 2286 #6 (HMAC-RIPEMD)'), + + # Test case 17 (RIPEMD) + ('aa' * 80, + xl("Test Using Larger Than Block-Size Key and Larger Than One Block-Size Data"), + dict(RIPEMD160='69ea60798d71616cce5fd0871e23754cd75d5a0a'), + 'RFC 2286 #7 (HMAC-RIPEMD)'), + + # From https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Standards-and-Guidelines/documents/examples/HMAC_SHA3-224.pdf + ( + '000102030405060708090a0b0c0d0e0f' + '101112131415161718191a1b', + xl('Sample message for keylen<blocklen'), + dict(SHA3_224='332cfd59347fdb8e576e77260be4aba2d6dc53117b3bfb52c6d18c04'), + 'NIST CSRC Sample #1 (SHA3-224)' + ), + ( 
+ '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f'\ + '202122232425262728292a2b2c2d2e2f'\ + '303132333435363738393a3b3c3d3e3f'\ + '404142434445464748494a4b4c4d4e4f'\ + '505152535455565758595a5b5c5d5e5f'\ + '606162636465666768696a6b6c6d6e6f'\ + '707172737475767778797a7b7c7d7e7f'\ + '808182838485868788898a8b8c8d8e8f', + xl('Sample message for keylen=blocklen'), + dict(SHA3_224='d8b733bcf66c644a12323d564e24dcf3fc75f231f3b67968359100c7'), + 'NIST CSRC Sample #2 (SHA3-224)' + ), + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f'\ + '202122232425262728292a2b2c2d2e2f'\ + '303132333435363738393a3b3c3d3e3f'\ + '404142434445464748494a4b4c4d4e4f'\ + '505152535455565758595a5b5c5d5e5f'\ + '606162636465666768696a6b6c6d6e6f'\ + '707172737475767778797a7b7c7d7e7f'\ + '808182838485868788898a8b8c8d8e8f'\ + '909192939495969798999a9b9c9d9e9f'\ + 'a0a1a2a3a4a5a6a7a8a9aaab', + xl('Sample message for keylen>blocklen'), + dict(SHA3_224='078695eecc227c636ad31d063a15dd05a7e819a66ec6d8de1e193e59'), + 'NIST CSRC Sample #3 (SHA3-224)' + ), + + # From https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Standards-and-Guidelines/documents/examples/HMAC_SHA3-256.pdf + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f', + xl('Sample message for keylen<blocklen'), + dict(SHA3_256='4fe8e202c4f058e8dddc23d8c34e467343e23555e24fc2f025d598f558f67205'), + 'NIST CSRC Sample #1 (SHA3-256)' + ), + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f'\ + '202122232425262728292a2b2c2d2e2f'\ + '303132333435363738393a3b3c3d3e3f'\ + '404142434445464748494a4b4c4d4e4f'\ + '505152535455565758595a5b5c5d5e5f'\ + '606162636465666768696a6b6c6d6e6f'\ + '707172737475767778797a7b7c7d7e7f'\ + '8081828384858687', + xl('Sample message for keylen=blocklen'), + dict(SHA3_256='68b94e2e538a9be4103bebb5aa016d47961d4d1aa906061313b557f8af2c3faa'), + 'NIST CSRC Sample #2 (SHA3-256)' + ), + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f'\ + '202122232425262728292a2b2c2d2e2f'\ + '303132333435363738393a3b3c3d3e3f'\ + '404142434445464748494a4b4c4d4e4f'\ + '505152535455565758595a5b5c5d5e5f'\ + '606162636465666768696a6b6c6d6e6f'\ + '707172737475767778797a7b7c7d7e7f'\ + '808182838485868788898a8b8c8d8e8f'\ + '909192939495969798999a9b9c9d9e9f'\ + 'a0a1a2a3a4a5a6a7', + xl('Sample message for keylen>blocklen'), + dict(SHA3_256='9bcf2c238e235c3ce88404e813bd2f3a97185ac6f238c63d6229a00b07974258'), + 'NIST CSRC Sample #3 (SHA3-256)' + ), + + # From https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Standards-and-Guidelines/documents/examples/HMAC_SHA3-384.pdf + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f' + '202122232425262728292a2b2c2d2e2f', + xl('Sample message for keylen<blocklen'), + dict(SHA3_384='d588a3c51f3f2d906e8298c1199aa8ff6296218127f6b38a90b6afe2c5617725bc99987f79b22a557b6520db710b7f42'), + 'NIST CSRC Sample #1 (SHA3-384)' + ), + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f'\ + '202122232425262728292a2b2c2d2e2f'\ + '303132333435363738393a3b3c3d3e3f'\ + '404142434445464748494a4b4c4d4e4f'\ + '505152535455565758595a5b5c5d5e5f'\ + '6061626364656667', + xl('Sample message for keylen=blocklen'), + dict(SHA3_384='a27d24b592e8c8cbf6d4ce6fc5bf62d8fc98bf2d486640d9eb8099e24047837f5f3bffbe92dcce90b4ed5b1e7e44fa90'), + 'NIST CSRC Sample #2 (SHA3-384)' + ), + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f'\ + '202122232425262728292a2b2c2d2e2f'\ + 
'303132333435363738393a3b3c3d3e3f'\ + '404142434445464748494a4b4c4d4e4f'\ + '505152535455565758595a5b5c5d5e5f'\ + '606162636465666768696a6b6c6d6e6f'\ + '707172737475767778797a7b7c7d7e7f'\ + '808182838485868788898a8b8c8d8e8f'\ + '9091929394959697', + xl('Sample message for keylen>blocklen'), + dict(SHA3_384='e5ae4c739f455279368ebf36d4f5354c95aa184c899d3870e460ebc288ef1f9470053f73f7c6da2a71bcaec38ce7d6ac'), + 'NIST CSRC Sample #3 (SHA3-384)' + ), + + # From https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Standards-and-Guidelines/documents/examples/HMAC_SHA3-512.pdf + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f'\ + '202122232425262728292a2b2c2d2e2f'\ + '303132333435363738393a3b3c3d3e3f', + xl('Sample message for keylen<blocklen'), + dict(SHA3_512='4efd629d6c71bf86162658f29943b1c308ce27cdfa6db0d9c3ce81763f9cbce5f7ebe9868031db1a8f8eb7b6b95e5c5e3f657a8996c86a2f6527e307f0213196'), + 'NIST CSRC Sample #1 (SHA3-512)' + ), + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f'\ + '202122232425262728292a2b2c2d2e2f'\ + '303132333435363738393a3b3c3d3e3f'\ + '4041424344454647', + xl('Sample message for keylen=blocklen'), + dict(SHA3_512='544e257ea2a3e5ea19a590e6a24b724ce6327757723fe2751b75bf007d80f6b360744bf1b7a88ea585f9765b47911976d3191cf83c039f5ffab0d29cc9d9b6da'), + 'NIST CSRC Sample #2 (SHA3-512)' + ), + ( + '000102030405060708090a0b0c0d0e0f'\ + '101112131415161718191a1b1c1d1e1f'\ + '202122232425262728292a2b2c2d2e2f'\ + '303132333435363738393a3b3c3d3e3f'\ + '404142434445464748494a4b4c4d4e4f'\ + '505152535455565758595a5b5c5d5e5f'\ + '606162636465666768696a6b6c6d6e6f'\ + '707172737475767778797a7b7c7d7e7f'\ + '8081828384858687', + xl('Sample message for keylen>blocklen'), + dict(SHA3_512='5f464f5e5b7848e3885e49b2c385f0694985d0e38966242dc4a5fe3fea4b37d46b65ceced5dcf59438dd840bab22269f0ba7febdb9fcf74602a35666b2a32915'), + 'NIST CSRC Sample #3 (SHA3-512)' + ), + +] + + +class HMAC_Module_and_Instance_Test(unittest.TestCase): + """Test the HMAC construction and verify that it does not + matter if you initialize it with a hash module or + with a hash instance.
+ + See https://bugs.launchpad.net/pycrypto/+bug/1209399 + """ + + def __init__(self, hashmods): + """Initialize the test with a dictionary of hash modules + indexed by their names""" + + unittest.TestCase.__init__(self) + self.hashmods = hashmods + self.description = "" + + def shortDescription(self): + return self.description + + def runTest(self): + key = b"\x90\x91\x92\x93" * 4 + payload = b"\x00" * 100 + + for hashname, hashmod in self.hashmods.items(): + if hashmod is None: + continue + self.description = "Test HMAC in combination with " + hashname + one = HMAC.new(key, payload, hashmod).digest() + two = HMAC.new(key, payload, hashmod.new()).digest() + self.assertEqual(one, two) + + +class HMAC_None(unittest.TestCase): + + def runTest(self): + + key = b"\x04" * 20 + one = HMAC.new(key, b"", SHA1).digest() + two = HMAC.new(key, None, SHA1).digest() + self.assertEqual(one, two) + + +class ByteArrayTests(unittest.TestCase): + + def runTest(self): + + key = b"0" * 16 + data = b"\x00\x01\x02" + + # Data and key can be a bytearray (during initialization) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = HMAC.new(key, data) + h2 = HMAC.new(key_ba, data_ba) + key_ba[:1] = b'\xFF' + data_ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a bytearray (during operation) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = HMAC.new(key) + h2 = HMAC.new(key) + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +class MemoryViewTests(unittest.TestCase): + + def runTest(self): + + key = b"0" * 16 + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data and key can be a memoryview (during initialization) + key_mv = get_mv(key) + data_mv = get_mv(data) + + h1 = HMAC.new(key, data) + h2 = HMAC.new(key_mv, data_mv) + if not data_mv.readonly: + key_mv[:1] = b'\xFF' + data_mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = HMAC.new(key) + h2 = HMAC.new(key) + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + self.assertEqual(h1.digest(), h2.digest()) + + +def get_tests(config={}): + global test_data + import types + from .common import make_mac_tests + + # A test vector contains multiple results, each one for a + # different hash algorithm. 
+ # Here we expand each test vector into multiple ones, + # and add the relevant parameters that will be passed to new() + exp_test_data = [] + for row in test_data: + for modname in row[2].keys(): + t = list(row) + t[2] = row[2][modname] + t.append(dict(digestmod=globals()[modname])) + exp_test_data.append(t) + tests = make_mac_tests(HMAC, "HMAC", exp_test_data) + tests.append(HMAC_Module_and_Instance_Test(hash_modules)) + tests.append(HMAC_None()) + + tests.append(ByteArrayTests()) + tests.append(MemoryViewTests()) + + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_KMAC.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_KMAC.py new file mode 100644 index 0000000..0543a4c --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_KMAC.py @@ -0,0 +1,346 @@ +import unittest +from binascii import unhexlify, hexlify + +from Cryptodome.Util.py3compat import tobytes +from Cryptodome.Util.strxor import strxor_c +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Hash import KMAC128, KMAC256 + + +class KMACTest(unittest.TestCase): + + def new(self, *args, **kwargs): + return self.KMAC.new(key=b'X' * (self.minimum_key_bits // 8), *args, **kwargs) + + def test_new_positive(self): + + key = b'X' * 32 + + h = self.new() + for new_func in self.KMAC.new, h.new: + + for dbytes in range(self.minimum_bytes, 128 + 1): + hobj = new_func(key=key, mac_len=dbytes) + self.assertEqual(hobj.digest_size, dbytes) + + digest1 = new_func(key=key, data=b"\x90").digest() + digest2 = new_func(key=key).update(b"\x90").digest() + self.assertEqual(digest1, digest2) + + new_func(data=b"A", key=key, custom=b"g") + + hobj = h.new(key=key) + self.assertEqual(hobj.digest_size, self.default_bytes) + + def test_new_negative(self): + + h = self.new() + for new_func in self.KMAC.new, h.new: + self.assertRaises(ValueError, new_func, key=b'X'*32, + mac_len=0) + self.assertRaises(ValueError, new_func, key=b'X'*32, + mac_len=self.minimum_bytes - 1) + self.assertRaises(TypeError, new_func, + key=u"string") + self.assertRaises(TypeError, new_func, + data=u"string") + + def test_default_digest_size(self): + digest = self.new(data=b'abc').digest() + self.assertEqual(len(digest), self.default_bytes) + + def test_update(self): + pieces = [b"\x0A" * 200, b"\x14" * 300] + h = self.new() + h.update(pieces[0]).update(pieces[1]) + digest = h.digest() + h = self.new() + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.digest(), digest) + + def test_update_negative(self): + h = self.new() + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = self.new() + digest = h.digest() + + # digest does not change the state + self.assertEqual(h.digest(), digest) + # digest returns a byte string + self.assertTrue(isinstance(digest, type(b"digest"))) + + def test_update_after_digest(self): + msg = b"rrrrttt" + + # Normally, update() cannot be done after digest() + h = self.new(mac_len=32, data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, dig1) + + def test_hex_digest(self): + mac = self.new() + digest = mac.digest() + hexdigest = mac.hexdigest() + + # hexdigest is equivalent to digest + self.assertEqual(hexlify(digest), tobytes(hexdigest)) + # hexdigest does not change the state + self.assertEqual(mac.hexdigest(), hexdigest) + # hexdigest returns a string +
self.assertTrue(isinstance(hexdigest, type("digest"))) + + def test_verify(self): + h = self.new() + mac = h.digest() + h.verify(mac) + wrong_mac = strxor_c(mac, 255) + self.assertRaises(ValueError, h.verify, wrong_mac) + + def test_hexverify(self): + h = self.new() + mac = h.hexdigest() + h.hexverify(mac) + self.assertRaises(ValueError, h.hexverify, "4556") + + def test_oid(self): + + oid = "2.16.840.1.101.3.4.2." + self.oid_variant + h = self.new() + self.assertEqual(h.oid, oid) + + def test_bytearray(self): + + key = b'0' * 32 + data = b"\x00\x01\x02" + + # Data and key can be a bytearray (during initialization) + key_ba = bytearray(key) + data_ba = bytearray(data) + + h1 = self.KMAC.new(data=data, key=key) + h2 = self.KMAC.new(data=data_ba, key=key_ba) + key_ba[:1] = b'\xFF' + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a bytearray (during operation) + data_ba = bytearray(data) + + h1 = self.new() + h2 = self.new() + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + def test_memoryview(self): + + key = b'0' * 32 + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data and key can be a memoryview (during initialization) + key_mv = get_mv(key) + data_mv = get_mv(data) + + h1 = self.KMAC.new(data=data, key=key) + h2 = self.KMAC.new(data=data_mv, key=key_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + key_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = self.new() + h2 = self.new() + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + +class KMAC128Test(KMACTest): + + KMAC = KMAC128 + + minimum_key_bits = 128 + + minimum_bytes = 8 + default_bytes = 64 + + oid_variant = "19" + + +class KMAC256Test(KMACTest): + + KMAC = KMAC256 + + minimum_key_bits = 256 + + minimum_bytes = 8 + default_bytes = 64 + + oid_variant = "20" + + +class NISTExampleTestVectors(unittest.TestCase): + + # https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Standards-and-Guidelines/documents/examples/KMAC_samples.pdf + test_data = [ + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03", + "", + "E5 78 0B 0D 3E A6 F7 D3 A4 29 C5 70 6A A4 3A 00" + "FA DB D7 D4 96 28 83 9E 31 87 24 3F 45 6E E1 4E", + "Sample #1 NIST", + KMAC128 + ), + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03", + "My Tagged Application", + "3B 1F BA 96 3C D8 B0 B5 9E 8C 1A 6D 71 88 8B 71" + "43 65 1A F8 BA 0A 70 70 C0 97 9E 28 11 32 4A A5", + "Sample #2 NIST", + KMAC128 + ), + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F" + "10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F" + "20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F" + "30 31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F" + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F" + "60 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F" + "70 71 72 73 74 75 76 77 78 79 7A 7B 7C 7D 7E 7F" + "80 81 82 83 84 85 86 87 88 89 8A 8B 8C 8D 8E 8F" + "90 91 92 93 94 95 96 97 98 99 9A 9B 
9C 9D 9E 9F" + "A0 A1 A2 A3 A4 A5 A6 A7 A8 A9 AA AB AC AD AE AF" + "B0 B1 B2 B3 B4 B5 B6 B7 B8 B9 BA BB BC BD BE BF" + "C0 C1 C2 C3 C4 C5 C6 C7", + "My Tagged Application", + "1F 5B 4E 6C CA 02 20 9E 0D CB 5C A6 35 B8 9A 15" + "E2 71 EC C7 60 07 1D FD 80 5F AA 38 F9 72 92 30", + "Sample #3 NIST", + KMAC128 + ), + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03", + "My Tagged Application", + "20 C5 70 C3 13 46 F7 03 C9 AC 36 C6 1C 03 CB 64" + "C3 97 0D 0C FC 78 7E 9B 79 59 9D 27 3A 68 D2 F7" + "F6 9D 4C C3 DE 9D 10 4A 35 16 89 F2 7C F6 F5 95" + "1F 01 03 F3 3F 4F 24 87 10 24 D9 C2 77 73 A8 DD", + "Sample #4 NIST", + KMAC256 + ), + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F" + "10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F" + "20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F" + "30 31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F" + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F" + "60 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F" + "70 71 72 73 74 75 76 77 78 79 7A 7B 7C 7D 7E 7F" + "80 81 82 83 84 85 86 87 88 89 8A 8B 8C 8D 8E 8F" + "90 91 92 93 94 95 96 97 98 99 9A 9B 9C 9D 9E 9F" + "A0 A1 A2 A3 A4 A5 A6 A7 A8 A9 AA AB AC AD AE AF" + "B0 B1 B2 B3 B4 B5 B6 B7 B8 B9 BA BB BC BD BE BF" + "C0 C1 C2 C3 C4 C5 C6 C7", + "", + "75 35 8C F3 9E 41 49 4E 94 97 07 92 7C EE 0A F2" + "0A 3F F5 53 90 4C 86 B0 8F 21 CC 41 4B CF D6 91" + "58 9D 27 CF 5E 15 36 9C BB FF 8B 9A 4C 2E B1 78" + "00 85 5D 02 35 FF 63 5D A8 25 33 EC 6B 75 9B 69", + "Sample #5 NIST", + KMAC256 + ), + ( + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F", + "00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F" + "10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F" + "20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F" + "30 31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F" + "40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F" + "50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F" + "60 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F" + "70 71 72 73 74 75 76 77 78 79 7A 7B 7C 7D 7E 7F" + "80 81 82 83 84 85 86 87 88 89 8A 8B 8C 8D 8E 8F" + "90 91 92 93 94 95 96 97 98 99 9A 9B 9C 9D 9E 9F" + "A0 A1 A2 A3 A4 A5 A6 A7 A8 A9 AA AB AC AD AE AF" + "B0 B1 B2 B3 B4 B5 B6 B7 B8 B9 BA BB BC BD BE BF" + "C0 C1 C2 C3 C4 C5 C6 C7", + "My Tagged Application", + "B5 86 18 F7 1F 92 E1 D5 6C 1B 8C 55 DD D7 CD 18" + "8B 97 B4 CA 4D 99 83 1E B2 69 9A 83 7D A2 E4 D9" + "70 FB AC FD E5 00 33 AE A5 85 F1 A2 70 85 10 C3" + "2D 07 88 08 01 BD 18 28 98 FE 47 68 76 FC 89 65", + "Sample #6 NIST", + KMAC256 + ), + ] + + def setUp(self): + td = [] + for key, data, custom, mac, text, module in self.test_data: + ni = ( + unhexlify(key.replace(" ", "")), + unhexlify(data.replace(" ", "")), + custom.encode(), + unhexlify(mac.replace(" ", "")), + text, + module + ) + td.append(ni) + self.test_data = td + + def runTest(self): + + for key, data, custom, mac, text, module in self.test_data: + h = module.new(data=data, key=key, custom=custom, mac_len=len(mac)) + mac_tag = h.digest() + self.assertEqual(mac_tag, mac, msg=text) + + +def get_tests(config={}): + tests = [] + + tests += list_test_cases(KMAC128Test) + tests += list_test_cases(KMAC256Test) + tests.append(NISTExampleTestVectors()) + + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + 
unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_KangarooTwelve.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_KangarooTwelve.py new file mode 100644 index 0000000..d247217 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_KangarooTwelve.py @@ -0,0 +1,324 @@ +# =================================================================== +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash.KangarooTwelve""" + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Hash import KangarooTwelve as K12 +from Cryptodome.Util.py3compat import b, bchr + + +class KangarooTwelveTest(unittest.TestCase): + + def test_length_encode(self): + self.assertEqual(K12._length_encode(0), b'\x00') + self.assertEqual(K12._length_encode(12), b'\x0C\x01') + self.assertEqual(K12._length_encode(65538), b'\x01\x00\x02\x03') + + def test_new_positive(self): + + xof1 = K12.new() + xof2 = K12.new(data=b("90")) + xof3 = K12.new().update(b("90")) + + self.assertNotEqual(xof1.read(10), xof2.read(10)) + xof3.read(10) + self.assertEqual(xof2.read(10), xof3.read(10)) + + xof1 = K12.new() + ref = xof1.read(10) + xof2 = K12.new(custom=b("")) + xof3 = K12.new(custom=b("foo")) + + self.assertEqual(ref, xof2.read(10)) + self.assertNotEqual(ref, xof3.read(10)) + + xof1 = K12.new(custom=b("foo")) + xof2 = K12.new(custom=b("foo"), data=b("90")) + xof3 = K12.new(custom=b("foo")).update(b("90")) + + self.assertNotEqual(xof1.read(10), xof2.read(10)) + xof3.read(10) + self.assertEqual(xof2.read(10), xof3.read(10)) + + def test_update(self): + pieces = [bchr(10) * 200, bchr(20) * 300] + h = K12.new() + h.update(pieces[0]).update(pieces[1]) + digest = h.read(10) + h = K12.new() + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.read(10), digest) + + def test_update_negative(self): + h = K12.new() + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = K12.new() + digest = h.read(90) + + # read returns a byte string of the right length + self.assertTrue(isinstance(digest, 
type(b("digest")))) + self.assertEqual(len(digest), 90) + + def test_update_after_read(self): + mac = K12.new() + mac.update(b("rrrr")) + mac.read(90) + self.assertRaises(TypeError, mac.update, b("ttt")) + + +def txt2bin(txt): + clean = txt.replace(" ", "").replace("\n", "").replace("\r", "") + return unhexlify(clean) + + +def ptn(n): + res = bytearray(n) + pattern = b"".join([bchr(x) for x in range(0, 0xFB)]) + for base in range(0, n - 0xFB, 0xFB): + res[base:base + 0xFB] = pattern + remain = n % 0xFB + if remain: + base = (n // 0xFB) * 0xFB + res[base:] = pattern[:remain] + assert(len(res) == n) + return res + + +def chunked(source, size): + for i in range(0, len(source), size): + yield source[i:i+size] + + +# https://github.com/XKCP/XKCP/blob/master/tests/TestVectors/KangarooTwelve.txt +class KangarooTwelveTV(unittest.TestCase): + + def test_zero_1(self): + tv = """1A C2 D4 50 FC 3B 42 05 D1 9D A7 BF CA 1B 37 51 + 3C 08 03 57 7A C7 16 7F 06 FE 2C E1 F0 EF 39 E5""" + + btv = txt2bin(tv) + res = K12.new().read(32) + self.assertEqual(res, btv) + + def test_zero_2(self): + tv = """1A C2 D4 50 FC 3B 42 05 D1 9D A7 BF CA 1B 37 51 + 3C 08 03 57 7A C7 16 7F 06 FE 2C E1 F0 EF 39 E5 + 42 69 C0 56 B8 C8 2E 48 27 60 38 B6 D2 92 96 6C + C0 7A 3D 46 45 27 2E 31 FF 38 50 81 39 EB 0A 71""" + + btv = txt2bin(tv) + res = K12.new().read(64) + self.assertEqual(res, btv) + + def test_zero_3(self): + tv = """E8 DC 56 36 42 F7 22 8C 84 68 4C 89 84 05 D3 A8 + 34 79 91 58 C0 79 B1 28 80 27 7A 1D 28 E2 FF 6D""" + + btv = txt2bin(tv) + res = K12.new().read(10032) + self.assertEqual(res[-32:], btv) + + def test_ptn_1(self): + tv = """2B DA 92 45 0E 8B 14 7F 8A 7C B6 29 E7 84 A0 58 + EF CA 7C F7 D8 21 8E 02 D3 45 DF AA 65 24 4A 1F""" + + btv = txt2bin(tv) + res = K12.new(data=ptn(1)).read(32) + self.assertEqual(res, btv) + + def test_ptn_17(self): + tv = """6B F7 5F A2 23 91 98 DB 47 72 E3 64 78 F8 E1 9B + 0F 37 12 05 F6 A9 A9 3A 27 3F 51 DF 37 12 28 88""" + + btv = txt2bin(tv) + res = K12.new(data=ptn(17)).read(32) + self.assertEqual(res, btv) + + def test_ptn_17_2(self): + tv = """0C 31 5E BC DE DB F6 14 26 DE 7D CF 8F B7 25 D1 + E7 46 75 D7 F5 32 7A 50 67 F3 67 B1 08 EC B6 7C""" + + btv = txt2bin(tv) + res = K12.new(data=ptn(17**2)).read(32) + self.assertEqual(res, btv) + + def test_ptn_17_3(self): + tv = """CB 55 2E 2E C7 7D 99 10 70 1D 57 8B 45 7D DF 77 + 2C 12 E3 22 E4 EE 7F E4 17 F9 2C 75 8F 0D 59 D0""" + + btv = txt2bin(tv) + res = K12.new(data=ptn(17**3)).read(32) + self.assertEqual(res, btv) + + def test_ptn_17_4(self): + tv = """87 01 04 5E 22 20 53 45 FF 4D DA 05 55 5C BB 5C + 3A F1 A7 71 C2 B8 9B AE F3 7D B4 3D 99 98 B9 FE""" + + btv = txt2bin(tv) + data = ptn(17**4) + + # All at once + res = K12.new(data=data).read(32) + self.assertEqual(res, btv) + + # Byte by byte + k12 = K12.new() + for x in data: + k12.update(bchr(x)) + res = k12.read(32) + self.assertEqual(res, btv) + + # Chunks of various prime sizes + for chunk_size in (13, 17, 19, 23, 31): + k12 = K12.new() + for x in chunked(data, chunk_size): + k12.update(x) + res = k12.read(32) + self.assertEqual(res, btv) + + def test_ptn_17_5(self): + tv = """84 4D 61 09 33 B1 B9 96 3C BD EB 5A E3 B6 B0 5C + C7 CB D6 7C EE DF 88 3E B6 78 A0 A8 E0 37 16 82""" + + btv = txt2bin(tv) + data = ptn(17**5) + + # All at once + res = K12.new(data=data).read(32) + self.assertEqual(res, btv) + + # Chunks + k12 = K12.new() + for chunk in chunked(data, 8192): + k12.update(chunk) + res = k12.read(32) + self.assertEqual(res, btv) + + def test_ptn_17_6(self): + tv = 
"""3C 39 07 82 A8 A4 E8 9F A6 36 7F 72 FE AA F1 32 + 55 C8 D9 58 78 48 1D 3C D8 CE 85 F5 8E 88 0A F8""" + + btv = txt2bin(tv) + data = ptn(17**6) + + # All at once + res = K12.new(data=data).read(32) + self.assertEqual(res, btv) + + def test_ptn_c_1(self): + tv = """FA B6 58 DB 63 E9 4A 24 61 88 BF 7A F6 9A 13 30 + 45 F4 6E E9 84 C5 6E 3C 33 28 CA AF 1A A1 A5 83""" + + btv = txt2bin(tv) + custom = ptn(1) + + # All at once + res = K12.new(custom=custom).read(32) + self.assertEqual(res, btv) + + def test_ptn_c_41(self): + tv = """D8 48 C5 06 8C ED 73 6F 44 62 15 9B 98 67 FD 4C + 20 B8 08 AC C3 D5 BC 48 E0 B0 6B A0 A3 76 2E C4""" + + btv = txt2bin(tv) + custom = ptn(41) + + # All at once + res = K12.new(data=b'\xFF', custom=custom).read(32) + self.assertEqual(res, btv) + + def test_ptn_c_41_2(self): + tv = """C3 89 E5 00 9A E5 71 20 85 4C 2E 8C 64 67 0A C0 + 13 58 CF 4C 1B AF 89 44 7A 72 42 34 DC 7C ED 74""" + + btv = txt2bin(tv) + custom = ptn(41**2) + + # All at once + res = K12.new(data=b'\xFF' * 3, custom=custom).read(32) + self.assertEqual(res, btv) + + def test_ptn_c_41_3(self): + tv = """75 D2 F8 6A 2E 64 45 66 72 6B 4F BC FC 56 57 B9 + DB CF 07 0C 7B 0D CA 06 45 0A B2 91 D7 44 3B CF""" + + btv = txt2bin(tv) + custom = ptn(41**3) + + # All at once + res = K12.new(data=b'\xFF' * 7, custom=custom).read(32) + self.assertEqual(res, btv) + + ### + + def test_1(self): + tv = "fd608f91d81904a9916e78a18f65c157a78d63f93d8f6367db0524526a5ea2bb" + + btv = txt2bin(tv) + res = K12.new(data=b'', custom=ptn(100)).read(32) + self.assertEqual(res, btv) + + def test_2(self): + tv4 = "5a4ec9a649f81916d4ce1553492962f7868abf8dd1ceb2f0cb3682ea95cda6a6" + tv3 = "441688fe4fe4ae9425eb3105eb445eb2b3a6f67b66eff8e74ebfbc49371f6d4c" + tv2 = "17269a57759af0214c84a0fd9bc851f4d95f80554cfed4e7da8a6ee1ff080131" + tv1 = "33826990c09dc712ba7224f0d9be319e2720de95a4c1afbd2211507dae1c703a" + tv0 = "9f4d3aba908ddc096e4d3a71da954f917b9752f05052b9d26d916a6fbc75bf3e" + + res = K12.new(data=b'A' * (8192 - 4), custom=b'B').read(32) + self.assertEqual(res, txt2bin(tv4)) + + res = K12.new(data=b'A' * (8192 - 3), custom=b'B').read(32) + self.assertEqual(res, txt2bin(tv3)) + + res = K12.new(data=b'A' * (8192 - 2), custom=b'B').read(32) + self.assertEqual(res, txt2bin(tv2)) + + res = K12.new(data=b'A' * (8192 - 1), custom=b'B').read(32) + self.assertEqual(res, txt2bin(tv1)) + + res = K12.new(data=b'A' * (8192 - 0), custom=b'B').read(32) + self.assertEqual(res, txt2bin(tv0)) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(KangarooTwelveTest) + tests += list_test_cases(KangarooTwelveTV) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD2.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD2.py new file mode 100644 index 0000000..beae38a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD2.py @@ -0,0 +1,62 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/MD2.py: Self-test for the MD2 hash function +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. 
To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash.MD2""" + +from Cryptodome.Util.py3compat import * + +# This is a list of (expected_result, input[, description]) tuples. +test_data = [ + # Test vectors from RFC 1319 + ('8350e5a3e24c153df2275c9f80692773', '', "'' (empty string)"), + ('32ec01ec4a6dac72c0ab96fb34c0b5d1', 'a'), + ('da853b0d3f88d99b30283a69e6ded6bb', 'abc'), + ('ab4f496bfb2a530b219ff33031fe06b0', 'message digest'), + + ('4e8ddff3650292ab5a4108c3aa47940b', 'abcdefghijklmnopqrstuvwxyz', + 'a-z'), + + ('da33def2a42df13975352846c30338cd', + 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', + 'A-Z, a-z, 0-9'), + + ('d5976f79d83d3a0dc9806c3c66f3efd8', + '1234567890123456789012345678901234567890123456' + + '7890123456789012345678901234567890', + "'1234567890' * 8"), +] + +def get_tests(config={}): + from Cryptodome.Hash import MD2 + from .common import make_hash_tests + return make_hash_tests(MD2, "MD2", test_data, + digest_size=16, + oid="1.2.840.113549.2.2") + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD4.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD4.py new file mode 100644 index 0000000..41de977 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD4.py @@ -0,0 +1,64 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/MD4.py: Self-test for the MD4 hash function +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-test suite for Cryptodome.Hash.MD4""" + +__revision__ = "$Id$" + +from Cryptodome.Util.py3compat import * + +# This is a list of (expected_result, input[, description]) tuples. +test_data = [ + # Test vectors from RFC 1320 + ('31d6cfe0d16ae931b73c59d7e0c089c0', '', "'' (empty string)"), + ('bde52cb31de33e46245e05fbdbd6fb24', 'a'), + ('a448017aaf21d8525fc10ae87aa6729d', 'abc'), + ('d9130a8164549fe818874806e1c7014b', 'message digest'), + + ('d79e1c308aa5bbcdeea8ed63df412da9', 'abcdefghijklmnopqrstuvwxyz', + 'a-z'), + + ('043f8582f241db351ce627e153e7f0e4', + 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', + 'A-Z, a-z, 0-9'), + + ('e33b4ddc9c38f2199c3e7b164fcc0536', + '1234567890123456789012345678901234567890123456' + + '7890123456789012345678901234567890', + "'1234567890' * 8"), +] + +def get_tests(config={}): + from Cryptodome.Hash import MD4 + from .common import make_hash_tests + return make_hash_tests(MD4, "MD4", test_data, + digest_size=16, + oid="1.2.840.113549.2.4") + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD5.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD5.py new file mode 100644 index 0000000..3f7a005 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_MD5.py @@ -0,0 +1,94 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/MD5.py: Self-test for the MD5 hash function +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash.MD5""" + +from Cryptodome.Util.py3compat import * +from Cryptodome.Hash import MD5 +from binascii import unhexlify +import unittest +from Cryptodome.SelfTest.st_common import list_test_cases + + +# This is a list of (expected_result, input[, description]) tuples. 
+test_data = [ + # Test vectors from RFC 1321 + ('d41d8cd98f00b204e9800998ecf8427e', '', "'' (empty string)"), + ('0cc175b9c0f1b6a831c399e269772661', 'a'), + ('900150983cd24fb0d6963f7d28e17f72', 'abc'), + ('f96b697d7cb7938d525a2f31aaf161d0', 'message digest'), + + ('c3fcd3d76192e4007dfb496cca67e13b', 'abcdefghijklmnopqrstuvwxyz', + 'a-z'), + + ('d174ab98d277d9f5a5611c2c9f419d9f', + 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', + 'A-Z, a-z, 0-9'), + + ('57edf4a22be3c955ac49da2e2107b67a', + '1234567890123456789012345678901234567890123456' + + '7890123456789012345678901234567890', + "'1234567890' * 8"), + + # https://www.cosic.esat.kuleuven.be/nessie/testvectors/hash/md5/Md5-128.unverified.test-vectors + ('57EDF4A22BE3C955AC49DA2E2107B67A', '1234567890' * 8, 'Set 1, vector #7'), + ('7707D6AE4E027C70EEA2A935C2296F21', 'a'*1000000, 'Set 1, vector #8'), +] + + +class Md5IterTest(unittest.TestCase): + + def runTest(self): + message = b("\x00") * 16 + result1 = "4AE71336E44BF9BF79D2752E234818A5".lower() + result2 = "1A83F51285E4D89403D00C46EF8508FE".lower() + + h = MD5.new(message) + message = h.digest() + self.assertEqual(h.hexdigest(), result1) + + for _ in range(99999): + h = MD5.new(message) + message = h.digest() + + self.assertEqual(h.hexdigest(), result2) + + +def get_tests(config={}): + from .common import make_hash_tests + + tests = make_hash_tests(MD5, "MD5", test_data, + digest_size=16, + oid="1.2.840.113549.2.5") + if config.get('slow_tests'): + tests += [ Md5IterTest() ] + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_Poly1305.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_Poly1305.py new file mode 100644 index 0000000..19cacb4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_Poly1305.py @@ -0,0 +1,542 @@ +# +# SelfTest/Hash/test_Poly1305.py: Self-test for the Poly1305 module +# +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash._Poly1305""" + +import json +import unittest +from binascii import unhexlify, hexlify + +from .common import make_mac_tests +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Hash import Poly1305 +from Cryptodome.Cipher import AES, ChaCha20 + +from Cryptodome.Util.py3compat import tobytes +from Cryptodome.Util.strxor import strxor_c + +# This is a list of (r+s keypair, data, result, description, keywords) tuples. +test_data_basic = [ + ( + "85d6be7857556d337f4452fe42d506a80103808afb0db2fd4abff6af4149f51b", + hexlify(b"Cryptographic Forum Research Group").decode(), + "a8061dc1305136c6c22b8baf0c0127a9", + "RFC7539" + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "0000000000000000000000000000000000000000000000000000000000000000", + "49ec78090e481ec6c26b33b91ccc0307", + "https://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-00#section-7 A", + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "48656c6c6f20776f726c6421", + "a6f745008f81c916a20dcc74eef2b2f0", + "https://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-00#section-7 B", + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "", + "6b657920666f7220506f6c7931333035", + "Generated with pure Python", + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "FF", + "f7e4e0ef4c46d106219da3d1bdaeb3ff", + "Generated with pure Python", + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "FF00", + "7471eceeb22988fc936da1d6e838b70e", + "Generated with pure Python", + ), + ( + "746869732069732033322d62797465206b657920666f7220506f6c7931333035", + "AA" * 17, + "32590bc07cb2afaccca3f67f122975fe", + "Generated with pure Python", + ), + ( + "00" * 32, + "00" * 64, + "00" * 16, + "RFC7539 A.3 #1", + ), + ( + "0000000000000000000000000000000036e5f6b5c5e06070f0efca96227a863e", + hexlify( + b"Any submission t" + b"o the IETF inten" + b"ded by the Contr" + b"ibutor for publi" + b"cation as all or" + b" part of an IETF" + b" Internet-Draft " + b"or RFC and any s" + b"tatement made wi" + b"thin the context" + b" of an IETF acti" + b"vity is consider" + b"ed an \"IETF Cont" + b"ribution\". 
Such " + b"statements inclu" + b"de oral statemen" + b"ts in IETF sessi" + b"ons, as well as " + b"written and elec" + b"tronic communica" + b"tions made at an" + b"y time or place," + b" which are addre" + b"ssed to").decode(), + "36e5f6b5c5e06070f0efca96227a863e", + "RFC7539 A.3 #2", + ), + ( + "36e5f6b5c5e06070f0efca96227a863e00000000000000000000000000000000", + hexlify( + b"Any submission t" + b"o the IETF inten" + b"ded by the Contr" + b"ibutor for publi" + b"cation as all or" + b" part of an IETF" + b" Internet-Draft " + b"or RFC and any s" + b"tatement made wi" + b"thin the context" + b" of an IETF acti" + b"vity is consider" + b"ed an \"IETF Cont" + b"ribution\". Such " + b"statements inclu" + b"de oral statemen" + b"ts in IETF sessi" + b"ons, as well as " + b"written and elec" + b"tronic communica" + b"tions made at an" + b"y time or place," + b" which are addre" + b"ssed to").decode(), + "f3477e7cd95417af89a6b8794c310cf0", + "RFC7539 A.3 #3", + ), + ( + "1c9240a5eb55d38af333888604f6b5f0473917c1402b80099dca5cbc207075c0", + "2754776173206272696c6c69672c2061" + "6e642074686520736c6974687920746f" + "7665730a446964206779726520616e64" + "2067696d626c6520696e207468652077" + "6162653a0a416c6c206d696d73792077" + "6572652074686520626f726f676f7665" + "732c0a416e6420746865206d6f6d6520" + "7261746873206f757467726162652e", + "4541669a7eaaee61e708dc7cbcc5eb62", + "RFC7539 A.3 #4", + ), + ( + "02" + "00" * 31, + "FF" * 16, + "03" + "00" * 15, + "RFC7539 A.3 #5", + ), + ( + "02" + "00" * 15 + "FF" * 16, + "02" + "00" * 15, + "03" + "00" * 15, + "RFC7539 A.3 #6", + ), + ( + "01" + "00" * 31, + "FF" * 16 + "F0" + "FF" * 15 + "11" + "00" * 15, + "05" + "00" * 15, + "RFC7539 A.3 #7", + ), + ( + "01" + "00" * 31, + "FF" * 16 + "FB" + "FE" * 15 + "01" * 16, + "00" * 16, + "RFC7539 A.3 #8", + ), + ( + "02" + "00" * 31, + "FD" + "FF" * 15, + "FA" + "FF" * 15, + "RFC7539 A.3 #9", + ), + ( + "01 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00" + "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00", + "E3 35 94 D7 50 5E 43 B9 00 00 00 00 00 00 00 00" + "33 94 D7 50 5E 43 79 CD 01 00 00 00 00 00 00 00" + "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00" + "01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00", + "14 00 00 00 00 00 00 00 55 00 00 00 00 00 00 00", + "RFC7539 A.3 #10", + ), + ( + "01 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00" + "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00", + "E3 35 94 D7 50 5E 43 B9 00 00 00 00 00 00 00 00" + "33 94 D7 50 5E 43 79 CD 01 00 00 00 00 00 00 00" + "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00", + "13" + "00" * 15, + "RFC7539 A.3 #11", + ), +] + +# This is a list of (key(k+r), data, result, description, keywords) tuples. 
+test_data_aes = [ + ( + "ec074c835580741701425b623235add6851fc40c3467ac0be05cc20404f3f700", + "f3f6", + "f4c633c3044fc145f84f335cb81953de", + "http://cr.yp.to/mac/poly1305-20050329.pdf", + { 'cipher':AES, 'nonce':unhexlify("fb447350c4e868c52ac3275cf9d4327e") } + ), + ( + "75deaa25c09f208e1dc4ce6b5cad3fbfa0f3080000f46400d0c7e9076c834403", + "", + "dd3fab2251f11ac759f0887129cc2ee7", + "http://cr.yp.to/mac/poly1305-20050329.pdf", + { 'cipher':AES, 'nonce':unhexlify("61ee09218d29b0aaed7e154a2c5509cc") } + ), + ( + "6acb5f61a7176dd320c5c1eb2edcdc7448443d0bb0d21109c89a100b5ce2c208", + "663cea190ffb83d89593f3f476b6bc24" + "d7e679107ea26adb8caf6652d0656136", + "0ee1c16bb73f0f4fd19881753c01cdbe", + "http://cr.yp.to/mac/poly1305-20050329.pdf", + { 'cipher':AES, 'nonce':unhexlify("ae212a55399729595dea458bc621ff0e") } + ), + ( + "e1a5668a4d5b66a5f68cc5424ed5982d12976a08c4426d0ce8a82407c4f48207", + "ab0812724a7f1e342742cbed374d94d1" + "36c6b8795d45b3819830f2c04491faf0" + "990c62e48b8018b2c3e4a0fa3134cb67" + "fa83e158c994d961c4cb21095c1bf9", + "5154ad0d2cb26e01274fc51148491f1b", + "http://cr.yp.to/mac/poly1305-20050329.pdf", + { 'cipher':AES, 'nonce':unhexlify("9ae831e743978d3a23527c7128149e3a") } + ), +] + +test_data_chacha20 = [ + ( + "00" * 32, + "FF" * 15, + "13cc5bbadc36b03a5163928f0bcb65aa", + "RFC7539 A.4 #1", + { 'cipher':ChaCha20, 'nonce':unhexlify("00" * 12) } + ), + ( + "00" * 31 + "01", + "FF" * 15, + "0baf33c1d6df211bdd50a6767e98e00a", + "RFC7539 A.4 #2", + { 'cipher':ChaCha20, 'nonce':unhexlify("00" * 11 + "02") } + ), + ( + "1c 92 40 a5 eb 55 d3 8a f3 33 88 86 04 f6 b5 f0" + "47 39 17 c1 40 2b 80 09 9d ca 5c bc 20 70 75 c0", + "FF" * 15, + "e8b4c6db226cd8939e65e02eebf834ce", + "RFC7539 A.4 #3", + { 'cipher':ChaCha20, 'nonce':unhexlify("00" * 11 + "02") } + ), + ( + "1c 92 40 a5 eb 55 d3 8a f3 33 88 86 04 f6 b5 f0" + "47 39 17 c1 40 2b 80 09 9d ca 5c bc 20 70 75 c0", + "f3 33 88 86 00 00 00 00 00 00 4e 91 00 00 00 00" + "64 a0 86 15 75 86 1a f4 60 f0 62 c7 9b e6 43 bd" + "5e 80 5c fd 34 5c f3 89 f1 08 67 0a c7 6c 8c b2" + "4c 6c fc 18 75 5d 43 ee a0 9e e9 4e 38 2d 26 b0" + "bd b7 b7 3c 32 1b 01 00 d4 f0 3b 7f 35 58 94 cf" + "33 2f 83 0e 71 0b 97 ce 98 c8 a8 4a bd 0b 94 81" + "14 ad 17 6e 00 8d 33 bd 60 f9 82 b1 ff 37 c8 55" + "97 97 a0 6e f4 f0 ef 61 c1 86 32 4e 2b 35 06 38" + "36 06 90 7b 6a 7c 02 b0 f9 f6 15 7b 53 c8 67 e4" + "b9 16 6c 76 7b 80 4d 46 a5 9b 52 16 cd e7 a4 e9" + "90 40 c5 a4 04 33 22 5e e2 82 a1 b0 a0 6c 52 3e" + "af 45 34 d7 f8 3f a1 15 5b 00 47 71 8c bc 54 6a" + "0d 07 2b 04 b3 56 4e ea 1b 42 22 73 f5 48 27 1a" + "0b b2 31 60 53 fa 76 99 19 55 eb d6 31 59 43 4e" + "ce bb 4e 46 6d ae 5a 10 73 a6 72 76 27 09 7a 10" + "49 e6 17 d9 1d 36 10 94 fa 68 f0 ff 77 98 71 30" + "30 5b ea ba 2e da 04 df 99 7b 71 4d 6c 6f 2c 29" + "a6 ad 5c b4 02 2b 02 70 9b 00 00 00 00 00 00 00" + "0c 00 00 00 00 00 00 00 09 01 00 00 00 00 00 00", + "ee ad 9d 67 89 0c bb 22 39 23 36 fe a1 85 1f 38", + "RFC7539 A.5", + { 'cipher':ChaCha20, 'nonce':unhexlify("000000000102030405060708") } + ), +] + + +class Poly1305Test_AES(unittest.TestCase): + + key = b'\x11' * 32 + + def test_new_positive(self): + + data = b'r' * 100 + + h1 = Poly1305.new(key=self.key, cipher=AES) + self.assertEqual(h1.digest_size, 16) + self.assertEqual(len(h1.nonce), 16) + d1 = h1.update(data).digest() + self.assertEqual(len(d1), 16) + + h2 = Poly1305.new(key=self.key, nonce=h1.nonce, data=data, cipher=AES) + d2 = h2.digest() + self.assertEqual(h1.nonce, h2.nonce) + self.assertEqual(d1, d2) + + def test_new_negative(self): + from 
Cryptodome.Cipher import DES3 + + self.assertRaises(ValueError, Poly1305.new, key=self.key[:31], cipher=AES) + self.assertRaises(ValueError, Poly1305.new, key=self.key, cipher=DES3) + self.assertRaises(ValueError, Poly1305.new, key=self.key, nonce=b'1' * 15, cipher=AES) + self.assertRaises(TypeError, Poly1305.new, key=u"2" * 32, cipher=AES) + self.assertRaises(TypeError, Poly1305.new, key=self.key, data=u"2" * 100, cipher=AES) + + def test_update(self): + pieces = [b"\x0A" * 200, b"\x14" * 300] + h1 = Poly1305.new(key=self.key, cipher=AES) + h1.update(pieces[0]).update(pieces[1]) + d1 = h1.digest() + + h2 = Poly1305.new(key=self.key, cipher=AES, nonce=h1.nonce) + h2.update(pieces[0] + pieces[1]) + d2 = h2.digest() + self.assertEqual(d1, d2) + + def test_update_negative(self): + h = Poly1305.new(key=self.key, cipher=AES) + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = Poly1305.new(key=self.key, cipher=AES) + digest = h.digest() + + # hexdigest does not change the state + self.assertEqual(h.digest(), digest) + # digest returns a byte string + self.assertTrue(isinstance(digest, type(b"digest"))) + + def test_update_after_digest(self): + msg=b"rrrrttt" + + # Normally, update() cannot be done after digest() + h = Poly1305.new(key=self.key, data=msg[:4], cipher=AES) + h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + + def test_hex_digest(self): + mac = Poly1305.new(key=self.key, cipher=AES) + digest = mac.digest() + hexdigest = mac.hexdigest() + + # hexdigest is equivalent to digest + self.assertEqual(hexlify(digest), tobytes(hexdigest)) + # hexdigest does not change the state + self.assertEqual(mac.hexdigest(), hexdigest) + # hexdigest returns a string + self.assertTrue(isinstance(hexdigest, type("digest"))) + + def test_verify(self): + h = Poly1305.new(key=self.key, cipher=AES) + mac = h.digest() + h.verify(mac) + wrong_mac = strxor_c(mac, 255) + self.assertRaises(ValueError, h.verify, wrong_mac) + + def test_hexverify(self): + h = Poly1305.new(key=self.key, cipher=AES) + mac = h.hexdigest() + h.hexverify(mac) + self.assertRaises(ValueError, h.hexverify, "4556") + + def test_bytearray(self): + + data = b"\x00\x01\x02" + h0 = Poly1305.new(key=self.key, data=data, cipher=AES) + d_ref = h0.digest() + + # Data and key can be a bytearray (during initialization) + key_ba = bytearray(self.key) + data_ba = bytearray(data) + + h1 = Poly1305.new(key=self.key, data=data, cipher=AES, nonce=h0.nonce) + h2 = Poly1305.new(key=key_ba, data=data_ba, cipher=AES, nonce=h0.nonce) + key_ba[:1] = b'\xFF' + data_ba[:1] = b'\xEE' + + self.assertEqual(h1.digest(), d_ref) + self.assertEqual(h2.digest(), d_ref) + + # Data can be a bytearray (during operation) + data_ba = bytearray(data) + + h1 = Poly1305.new(key=self.key, cipher=AES) + h2 = Poly1305.new(key=self.key, cipher=AES, nonce=h1.nonce) + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + def test_memoryview(self): + + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data and key can be a memoryview (during initialization) + key_mv = get_mv(self.key) + data_mv = get_mv(data) + + h1 = Poly1305.new(key=self.key, data=data, cipher=AES) + h2 = Poly1305.new(key=key_mv, data=data_mv, cipher=AES, + nonce=h1.nonce) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + key_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), 
h2.digest()) + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = Poly1305.new(key=self.key, cipher=AES) + h2 = Poly1305.new(key=self.key, cipher=AES, nonce=h1.nonce) + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + +class Poly1305Test_ChaCha20(unittest.TestCase): + + key = b'\x11' * 32 + + def test_new_positive(self): + data = b'r' * 100 + + h1 = Poly1305.new(key=self.key, cipher=ChaCha20) + self.assertEqual(h1.digest_size, 16) + self.assertEqual(len(h1.nonce), 12) + + h2 = Poly1305.new(key=self.key, cipher=ChaCha20, nonce = b'8' * 8) + self.assertEqual(len(h2.nonce), 8) + self.assertEqual(h2.nonce, b'8' * 8) + + def test_new_negative(self): + + self.assertRaises(ValueError, Poly1305.new, key=self.key, nonce=b'1' * 7, cipher=ChaCha20) + + +# +# make_mac_tests() expect a new() function with signature new(key, data, +# **kwargs), and we need to adapt Poly1305's, as it only uses keywords +# +class Poly1305_New(object): + + @staticmethod + def new(key, *data, **kwds): + _kwds = dict(kwds) + if len(data) == 1: + _kwds['data'] = data[0] + _kwds['key'] = key + return Poly1305.new(**_kwds) + + +class Poly1305_Basic(object): + + @staticmethod + def new(key, *data, **kwds): + from Cryptodome.Hash.Poly1305 import Poly1305_MAC + + if len(data) == 1: + msg = data[0] + else: + msg = None + + return Poly1305_MAC(key[:16], key[16:], msg) + + +class Poly1305AES_MC(unittest.TestCase): + + def runTest(self): + tag = unhexlify(b"fb447350c4e868c52ac3275cf9d4327e") + + msg = b'' + for msg_len in range(5000 + 1): + key = tag + strxor_c(tag, 0xFF) + nonce = tag[::-1] + if msg_len > 0: + msg = msg + tobytes(tag[0]) + auth = Poly1305.new(key=key, nonce=nonce, cipher=AES, data=msg) + tag = auth.digest() + + # Compare against output of original DJB's poly1305aes-20050218 + self.assertEqual("CDFA436DDD629C7DC20E1128530BAED2", auth.hexdigest().upper()) + + +def get_tests(config={}): + tests = make_mac_tests(Poly1305_Basic, "Poly1305", test_data_basic) + tests += make_mac_tests(Poly1305_New, "Poly1305", test_data_aes) + tests += make_mac_tests(Poly1305_New, "Poly1305", test_data_chacha20) + tests += [ Poly1305AES_MC() ] + tests += list_test_cases(Poly1305Test_AES) + tests += list_test_cases(Poly1305Test_ChaCha20) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_RIPEMD160.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_RIPEMD160.py new file mode 100644 index 0000000..c05a877 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_RIPEMD160.py @@ -0,0 +1,71 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_RIPEMD160.py: Self-test for the RIPEMD-160 hash function +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. 
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +#"""Self-test suite for Cryptodome.Hash.RIPEMD160""" + +from Cryptodome.Util.py3compat import * + +# This is a list of (expected_result, input[, description]) tuples. +test_data = [ + # Test vectors downloaded 2008-09-12 from + # http://homes.esat.kuleuven.be/~bosselae/ripemd160.html + ('9c1185a5c5e9fc54612808977ee8f548b2258d31', '', "'' (empty string)"), + ('0bdc9d2d256b3ee9daae347be6f4dc835a467ffe', 'a'), + ('8eb208f7e05d987a9b044a8e98c6b087f15a0bfc', 'abc'), + ('5d0689ef49d2fae572b881b123a85ffa21595f36', 'message digest'), + + ('f71c27109c692c1b56bbdceb5b9d2865b3708dbc', + 'abcdefghijklmnopqrstuvwxyz', + 'a-z'), + + ('12a053384a9c0c88e405a06c27dcf49ada62eb2b', + 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq', + 'abcdbcd...pnopq'), + + ('b0e20b6e3116640286ed3a87a5713079b21f5189', + 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', + 'A-Z, a-z, 0-9'), + + ('9b752e45573d4b39f4dbd3323cab82bf63326bfb', + '1234567890' * 8, + "'1234567890' * 8"), + + ('52783243c1697bdbe16d37f97f68f08325dc1528', + 'a' * 10**6, + '"a" * 10**6'), +] + +def get_tests(config={}): + from Cryptodome.Hash import RIPEMD160 + from .common import make_hash_tests + return make_hash_tests(RIPEMD160, "RIPEMD160", test_data, + digest_size=20, + oid="1.3.36.3.2.1") + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA1.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA1.py new file mode 100644 index 0000000..a879e68 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA1.py @@ -0,0 +1,84 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/SHA1.py: Self-test for the SHA-1 hash function +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-test suite for Cryptodome.Hash.SHA""" + +from binascii import hexlify + +from Cryptodome.SelfTest.loader import load_test_vectors + +# Test vectors from various sources +# This is a list of (expected_result, input[, description]) tuples. +test_data_various = [ + # FIPS PUB 180-2, A.1 - "One-Block Message" + ('a9993e364706816aba3e25717850c26c9cd0d89d', 'abc'), + + # FIPS PUB 180-2, A.2 - "Multi-Block Message" + ('84983e441c3bd26ebaae4aa1f95129e5e54670f1', + 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq'), + + # FIPS PUB 180-2, A.3 - "Long Message" +# ('34aa973cd4c4daa4f61eeb2bdbad27316534016f', +# 'a' * 10**6, +# '"a" * 10**6'), + + # RFC 3174: Section 7.3, "TEST4" (multiple of 512 bits) + ('dea356a2cddd90c7a7ecedc5ebb563934f460452', + '01234567' * 80, + '"01234567" * 80'), +] + +def get_tests(config={}): + from Cryptodome.Hash import SHA1 + from .common import make_hash_tests + + tests = [] + + test_vectors = load_test_vectors(("Hash", "SHA1"), + "SHA1ShortMsg.rsp", + "KAT SHA-1", + { "len" : lambda x: int(x) } ) or [] + + test_data = test_data_various[:] + for tv in test_vectors: + try: + if tv.startswith('['): + continue + except AttributeError: + pass + if tv.len == 0: + tv.msg = b"" + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests = make_hash_tests(SHA1, "SHA1", test_data, + digest_size=20, + oid="1.3.14.3.2.26") + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA224.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA224.py new file mode 100644 index 0000000..da32423 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA224.py @@ -0,0 +1,63 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA224.py: Self-test for the SHA-224 hash function +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash.SHA224""" + +# Test vectors from various sources +# This is a list of (expected_result, input[, description]) tuples. 
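A note on the load_test_vectors() loop in the SHA-1 get_tests() above: NIST .rsp files interleave section headers with the actual records, which is why bare strings beginning with '[' are skipped. Illustratively (the exact header text is an assumption):

# SHA1ShortMsg.rsp, roughly:
#
#   [L = 20]
#   Len = 0
#   Msg = 00
#   MD = da39a3ee5e6b4b0d3255bfef95601890afd80709
#
# Parsed records expose .len, .msg and .md attributes; the '[L = 20]'
# section line arrives as a plain string, hence the tv.startswith('[')
# guard, and a record with Len = 0 gets its msg forced to b"".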
+test_data = [ + + # RFC 3874: Section 3.1, "Test Vector #1 + ('23097d223405d8228642a477bda255b32aadbce4bda0b3f7e36c9da7', 'abc'), + + # RFC 3874: Section 3.2, "Test Vector #2 + ('75388b16512776cc5dba5da1fd890150b0c6455cb4f58b1952522525', 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq'), + + # RFC 3874: Section 3.3, "Test Vector #3 + ('20794655980c91d8bbb4c1ea97618a4bf03f42581948b2ee4ee7ad67', 'a' * 10**6, "'a' * 10**6"), + + # Examples from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm + ('d14a028c2a3a2bc9476102bb288234c415a2b01f828ea62ac5b3e42f', ''), + + ('49b08defa65e644cbf8a2dd9270bdededabc741997d1dadd42026d7b', + 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern'), + + ('58911e7fccf2971a7d07f93162d8bd13568e71aa8fc86fc1fe9043d1', + 'Frank jagt im komplett verwahrlosten Taxi quer durch Bayern'), + +] + +def get_tests(config={}): + from Cryptodome.Hash import SHA224 + from .common import make_hash_tests + return make_hash_tests(SHA224, "SHA224", test_data, + digest_size=28, + oid='2.16.840.1.101.3.4.2.4') + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA256.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA256.py new file mode 100644 index 0000000..23d1145 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA256.py @@ -0,0 +1,94 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA256.py: Self-test for the SHA-256 hash function +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash.SHA256""" + +import unittest +from Cryptodome.Util.py3compat import * + +class LargeSHA256Test(unittest.TestCase): + def runTest(self): + """SHA256: 512/520 MiB test""" + from Cryptodome.Hash import SHA256 + zeros = bchr(0x00) * (1024*1024) + + h = SHA256.new(zeros) + for i in range(511): + h.update(zeros) + + # This test vector is from PyCrypto's old testdata.py file. + self.assertEqual('9acca8e8c22201155389f65abbf6bc9723edc7384ead80503839f49dcc56d767', h.hexdigest()) # 512 MiB + + for i in range(8): + h.update(zeros) + + # This test vector is from PyCrypto's old testdata.py file. 
+ self.assertEqual('abf51ad954b246009dfe5a50ecd582fd5b8f1b8b27f30393853c3ef721e7fa6e', h.hexdigest()) # 520 MiB + +def get_tests(config={}): + # Test vectors from FIPS PUB 180-2 + # This is a list of (expected_result, input[, description]) tuples. + test_data = [ + # FIPS PUB 180-2, B.1 - "One-Block Message" + ('ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad', + 'abc'), + + # FIPS PUB 180-2, B.2 - "Multi-Block Message" + ('248d6a61d20638b8e5c026930c3e6039a33ce45964ff2167f6ecedd419db06c1', + 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq'), + + # FIPS PUB 180-2, B.3 - "Long Message" + ('cdc76e5c9914fb9281a1c7e284d73e67f1809a48a497200e046d39ccc7112cd0', + 'a' * 10**6, + '"a" * 10**6'), + + # Test for an old PyCryptodome bug. + ('f7fd017a3c721ce7ff03f3552c0813adcc48b7f33f07e5e2ba71e23ea393d103', + 'This message is precisely 55 bytes long, to test a bug.', + 'Length = 55 (mod 64)'), + + # Example from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm + ('e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', ''), + + ('d32b568cd1b96d459e7291ebf4b25d007f275c9f13149beeb782fac0716613f8', + 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern'), + ] + + from Cryptodome.Hash import SHA256 + from .common import make_hash_tests + tests = make_hash_tests(SHA256, "SHA256", test_data, + digest_size=32, + oid="2.16.840.1.101.3.4.2.1") + + if config.get('slow_tests'): + tests += [LargeSHA256Test()] + + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA384.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA384.py new file mode 100644 index 0000000..5233d13 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA384.py @@ -0,0 +1,61 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA.py: Self-test for the SHA-384 hash function +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash.SHA384""" + +# Test vectors from various sources +# This is a list of (expected_result, input[, description]) tuples. 
+test_data = [ + + # RFC 4634: Section Page 8.4, "Test 1" + ('cb00753f45a35e8bb5a03d699ac65007272c32ab0eded1631a8b605a43ff5bed8086072ba1e7cc2358baeca134c825a7', 'abc'), + + # RFC 4634: Section Page 8.4, "Test 2.2" + ('09330c33f71147e83d192fc782cd1b4753111b173b3b05d22fa08086e3b0f712fcc7c71a557e2db966c3e9fa91746039', 'abcdefghbcdefghicdefghijdefghijkefghijklfghijklmghijklmnhijklmnoijklmnopjklmnopqklmnopqrlmnopqrsmnopqrstnopqrstu'), + + # RFC 4634: Section Page 8.4, "Test 3" + ('9d0e1809716474cb086e834e310a4a1ced149e9c00f248527972cec5704c2a5b07b8b3dc38ecc4ebae97ddd87f3d8985', 'a' * 10**6, "'a' * 10**6"), + + # Taken from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm + ('38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b', ''), + + # Example from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm + ('71e8383a4cea32d6fd6877495db2ee353542f46fa44bc23100bca48f3366b84e809f0708e81041f427c6d5219a286677', + 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern'), + +] + +def get_tests(config={}): + from Cryptodome.Hash import SHA384 + from .common import make_hash_tests + return make_hash_tests(SHA384, "SHA384", test_data, + digest_size=48, + oid='2.16.840.1.101.3.4.2.2') + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_224.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_224.py new file mode 100644 index 0000000..3141880 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_224.py @@ -0,0 +1,79 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA3_224.py: Self-test for the SHA-3/224 hash function +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-test suite for Cryptodome.Hash.SHA3_224""" + +import unittest +from binascii import hexlify + +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Hash import SHA3_224 as SHA3 +from Cryptodome.Util.py3compat import b + + +class APITest(unittest.TestCase): + + def test_update_after_digest(self): + msg=b("rrrrttt") + + # Normally, update() cannot be done after digest() + h = SHA3.new(data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = SHA3.new(data=msg).digest() + + # With the proper flag, it is allowed + h = SHA3.new(data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + +def get_tests(config={}): + from .common import make_hash_tests + + tests = [] + + test_vectors = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHA3-224.txt", + "KAT SHA-3 224", + { "len" : lambda x: int(x) } ) or [] + + test_data = [] + for tv in test_vectors: + if tv.len == 0: + tv.msg = b("") + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests += make_hash_tests(SHA3, "SHA3_224", test_data, + digest_size=SHA3.digest_size, + oid="2.16.840.1.101.3.4.2.7") + tests += list_test_cases(APITest) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_256.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_256.py new file mode 100644 index 0000000..9dee551 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_256.py @@ -0,0 +1,80 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA3_256.py: Self-test for the SHA-3/256 hash function +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
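The APITest class repeated for each SHA-3 variant pins down the update_after_digest contract. A usage sketch, shown for SHA3_256; the flag behaves the same for the other digest sizes:

from Cryptodome.Hash import SHA3_256

h = SHA3_256.new(data=b"rrrr", update_after_digest=True)
partial = h.digest()                 # digest of b"rrrr"
h.update(b"ttt")                     # permitted only because of the flag
assert h.digest() == SHA3_256.new(data=b"rrrrttt").digest()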
+# =================================================================== + +"""Self-test suite for Cryptodome.Hash.SHA3_256""" + +import unittest +from binascii import hexlify + +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Hash import SHA3_256 as SHA3 +from Cryptodome.Util.py3compat import b + + +class APITest(unittest.TestCase): + + def test_update_after_digest(self): + msg=b("rrrrttt") + + # Normally, update() cannot be done after digest() + h = SHA3.new(data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = SHA3.new(data=msg).digest() + + # With the proper flag, it is allowed + h = SHA3.new(data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + +def get_tests(config={}): + from .common import make_hash_tests + + tests = [] + + test_vectors = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHA3-256.txt", + "KAT SHA-3 256", + { "len" : lambda x: int(x) } ) or [] + + test_data = [] + for tv in test_vectors: + if tv.len == 0: + tv.msg = b("") + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + + tests += make_hash_tests(SHA3, "SHA3_256", test_data, + digest_size=SHA3.digest_size, + oid="2.16.840.1.101.3.4.2.8") + tests += list_test_cases(APITest) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_384.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_384.py new file mode 100644 index 0000000..c5030b5 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_384.py @@ -0,0 +1,79 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA3_384.py: Self-test for the SHA-3/384 hash function +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-test suite for Cryptodome.Hash.SHA3_384""" + +import unittest +from binascii import hexlify + +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Hash import SHA3_384 as SHA3 +from Cryptodome.Util.py3compat import b + + +class APITest(unittest.TestCase): + + def test_update_after_digest(self): + msg=b("rrrrttt") + + # Normally, update() cannot be done after digest() + h = SHA3.new(data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = SHA3.new(data=msg).digest() + + # With the proper flag, it is allowed + h = SHA3.new(data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + +def get_tests(config={}): + from .common import make_hash_tests + + tests = [] + + test_vectors = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHA3-384.txt", + "KAT SHA-3 384", + { "len" : lambda x: int(x) } ) or [] + + test_data = [] + for tv in test_vectors: + if tv.len == 0: + tv.msg = b("") + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests += make_hash_tests(SHA3, "SHA3_384", test_data, + digest_size=SHA3.digest_size, + oid="2.16.840.1.101.3.4.2.9") + tests += list_test_cases(APITest) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_512.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_512.py new file mode 100644 index 0000000..b7a57f8 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA3_512.py @@ -0,0 +1,79 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA3_512.py: Self-test for the SHA-3/512 hash function +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-test suite for Cryptodome.Hash.SHA3_512""" + +import unittest +from binascii import hexlify + +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Hash import SHA3_512 as SHA3 +from Cryptodome.Util.py3compat import b + + +class APITest(unittest.TestCase): + + def test_update_after_digest(self): + msg=b("rrrrttt") + + # Normally, update() cannot be done after digest() + h = SHA3.new(data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = SHA3.new(data=msg).digest() + + # With the proper flag, it is allowed + h = SHA3.new(data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ... and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + +def get_tests(config={}): + from .common import make_hash_tests + + tests = [] + + test_vectors = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHA3-512.txt", + "KAT SHA-3 512", + { "len" : lambda x: int(x) } ) or [] + + test_data = [] + for tv in test_vectors: + if tv.len == 0: + tv.msg = b("") + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests += make_hash_tests(SHA3, "SHA3_512", test_data, + digest_size=SHA3.digest_size, + oid="2.16.840.1.101.3.4.2.10") + tests += list_test_cases(APITest) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA512.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA512.py new file mode 100644 index 0000000..e6c74b3 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHA512.py @@ -0,0 +1,140 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Hash/test_SHA512.py: Self-test for the SHA-512 hash function +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash.SHA512""" + +from binascii import hexlify + +from Cryptodome.Hash import SHA512 +from .common import make_hash_tests +from Cryptodome.SelfTest.loader import load_test_vectors + +# Test vectors from various sources +# This is a list of (expected_result, input[, description]) tuples. 
+test_data_512_other = [ + + # RFC 4634: Section Page 8.4, "Test 1" + ('ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f', 'abc'), + + # RFC 4634: Section Page 8.4, "Test 2.1" + ('8e959b75dae313da8cf4f72814fc143f8f7779c6eb9f7fa17299aeadb6889018501d289e4900f7e4331b99dec4b5433ac7d329eeb6dd26545e96e55b874be909', 'abcdefghbcdefghicdefghijdefghijkefghijklfghijklmghijklmnhijklmnoijklmnopjklmnopqklmnopqrlmnopqrsmnopqrstnopqrstu'), + + # RFC 4634: Section Page 8.4, "Test 3" + ('e718483d0ce769644e2e42c7bc15b4638e1f98b13b2044285632a803afa973ebde0ff244877ea60a4cb0432ce577c31beb009c5c2c49aa2e4eadb217ad8cc09b', 'a' * 10**6, "'a' * 10**6"), + + # Taken from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm + ('cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e', ''), + + ('af9ed2de700433b803240a552b41b5a472a6ef3fe1431a722b2063c75e9f07451f67a28e37d09cde769424c96aea6f8971389db9e1993d6c565c3c71b855723c', 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern'), +] + + +def get_tests_SHA512(): + + test_vectors = load_test_vectors(("Hash", "SHA2"), + "SHA512ShortMsg.rsp", + "KAT SHA-512", + {"len": lambda x: int(x)}) or [] + + test_data = test_data_512_other[:] + for tv in test_vectors: + try: + if tv.startswith('['): + continue + except AttributeError: + pass + if tv.len == 0: + tv.msg = b"" + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests = make_hash_tests(SHA512, "SHA512", test_data, + digest_size=64, + oid="2.16.840.1.101.3.4.2.3") + return tests + + +def get_tests_SHA512_224(): + + test_vectors = load_test_vectors(("Hash", "SHA2"), + "SHA512_224ShortMsg.rsp", + "KAT SHA-512/224", + {"len": lambda x: int(x)}) or [] + + test_data = [] + for tv in test_vectors: + try: + if tv.startswith('['): + continue + except AttributeError: + pass + if tv.len == 0: + tv.msg = b"" + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests = make_hash_tests(SHA512, "SHA512/224", test_data, + digest_size=28, + oid="2.16.840.1.101.3.4.2.5", + extra_params={ "truncate" : "224" }) + return tests + + +def get_tests_SHA512_256(): + + test_vectors = load_test_vectors(("Hash", "SHA2"), + "SHA512_256ShortMsg.rsp", + "KAT SHA-512/256", + {"len": lambda x: int(x)}) or [] + + test_data = [] + for tv in test_vectors: + try: + if tv.startswith('['): + continue + except AttributeError: + pass + if tv.len == 0: + tv.msg = b"" + test_data.append((hexlify(tv.md), tv.msg, tv.desc)) + + tests = make_hash_tests(SHA512, "SHA512/256", test_data, + digest_size=32, + oid="2.16.840.1.101.3.4.2.6", + extra_params={ "truncate" : "256" }) + return tests + + +def get_tests(config={}): + + tests = [] + tests += get_tests_SHA512() + tests += get_tests_SHA512_224() + tests += get_tests_SHA512_256() + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHAKE.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHAKE.py new file mode 100644 index 0000000..2283308 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_SHAKE.py @@ -0,0 +1,143 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. 
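get_tests_SHA512_224() and get_tests_SHA512_256() above pass the truncate extra parameter through to SHA512.new(). A minimal sketch of what that parameter does at the call site:

from Cryptodome.Hash import SHA512

h224 = SHA512.new(data=b"abc", truncate="224")   # SHA-512/224
h256 = SHA512.new(data=b"abc", truncate="256")   # SHA-512/256
assert h224.digest_size == 28
assert h256.digest_size == 32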
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test suite for Cryptodome.Hash.SHAKE128 and SHAKE256""" + +import unittest +from binascii import hexlify, unhexlify + +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Hash import SHAKE128, SHAKE256 +from Cryptodome.Util.py3compat import b, bchr, bord, tobytes + +class SHAKETest(unittest.TestCase): + + def test_new_positive(self): + + xof1 = self.shake.new() + xof2 = self.shake.new(data=b("90")) + xof3 = self.shake.new().update(b("90")) + + self.assertNotEqual(xof1.read(10), xof2.read(10)) + xof3.read(10) + self.assertEqual(xof2.read(10), xof3.read(10)) + + def test_update(self): + pieces = [bchr(10) * 200, bchr(20) * 300] + h = self.shake.new() + h.update(pieces[0]).update(pieces[1]) + digest = h.read(10) + h = self.shake.new() + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.read(10), digest) + + def test_update_negative(self): + h = self.shake.new() + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = self.shake.new() + digest = h.read(90) + + # read returns a byte string of the right length + self.assertTrue(isinstance(digest, type(b("digest")))) + self.assertEqual(len(digest), 90) + + def test_update_after_read(self): + mac = self.shake.new() + mac.update(b("rrrr")) + mac.read(90) + self.assertRaises(TypeError, mac.update, b("ttt")) + + +class SHAKE128Test(SHAKETest): + shake = SHAKE128 + + +class SHAKE256Test(SHAKETest): + shake = SHAKE256 + + +class SHAKEVectors(unittest.TestCase): + pass + + +test_vectors_128 = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHAKE128.txt", + "Short Messages KAT SHAKE128", + { "len" : lambda x: int(x) } ) or [] + +for idx, tv in enumerate(test_vectors_128): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = SHAKE128.new(data=data) + digest = hobj.read(len(result)) + self.assertEqual(digest, result) + + setattr(SHAKEVectors, "test_128_%d" % idx, new_test) + + +test_vectors_256 = load_test_vectors(("Hash", "SHA3"), + "ShortMsgKAT_SHAKE256.txt", + "Short Messages KAT SHAKE256", + { "len" : lambda 
x: int(x) } ) or [] + +for idx, tv in enumerate(test_vectors_256): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = SHAKE256.new(data=data) + digest = hobj.read(len(result)) + self.assertEqual(digest, result) + + setattr(SHAKEVectors, "test_256_%d" % idx, new_test) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(SHAKE128Test) + tests += list_test_cases(SHAKE256Test) + tests += list_test_cases(SHAKEVectors) + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_TupleHash.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_TupleHash.py new file mode 100644 index 0000000..b5e6a0a --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_TupleHash.py @@ -0,0 +1,286 @@ +import unittest +from binascii import unhexlify, hexlify + +from Cryptodome.Util.py3compat import tobytes +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Hash import TupleHash128, TupleHash256 + + +class TupleHashTest(unittest.TestCase): + + def new(self, *args, **kwargs): + return self.TupleHash.new(*args, **kwargs) + + def test_new_positive(self): + + h = self.new() + for new_func in self.TupleHash.new, h.new: + + for dbits in range(64, 1024 + 1, 8): + hobj = new_func(digest_bits=dbits) + self.assertEqual(hobj.digest_size * 8, dbits) + + for dbytes in range(8, 128 + 1): + hobj = new_func(digest_bytes=dbytes) + self.assertEqual(hobj.digest_size, dbytes) + + hobj = h.new() + self.assertEqual(hobj.digest_size, self.default_bytes) + + def test_new_negative(self): + + h = self.new() + for new_func in self.TupleHash.new, h.new: + self.assertRaises(TypeError, new_func, + digest_bytes=self.minimum_bytes, + digest_bits=self.minimum_bits) + self.assertRaises(ValueError, new_func, digest_bytes=0) + self.assertRaises(ValueError, new_func, + digest_bits=self.minimum_bits + 7) + self.assertRaises(ValueError, new_func, + digest_bits=self.minimum_bits - 8) + self.assertRaises(ValueError, new_func, + digest_bits=self.minimum_bytes - 1) + + def test_default_digest_size(self): + digest = self.new().digest() + self.assertEqual(len(digest), self.default_bytes) + + def test_update(self): + h = self.new() + h.update(b'') + h.digest() + + h = self.new() + h.update(b'') + h.update(b'STRING1') + h.update(b'STRING2') + mac1 = h.digest() + + h = self.new() + h.update(b'STRING1') + h.update(b'STRING2') + mac2 = h.digest() + + self.assertNotEqual(mac1, mac2) + + def test_update_negative(self): + h = self.new() + self.assertRaises(TypeError, h.update, u"string") + self.assertRaises(TypeError, h.update, None) + + def test_digest(self): + h = self.new() + digest = h.digest() + + # digest() does not change the state + self.assertEqual(h.digest(), digest) + # digest returns a byte string + self.assertTrue(isinstance(digest, type(b"digest"))) + + def test_update_after_digest(self): + msg = b"rrrrttt" + + # Normally, update() cannot be done after digest() + h = self.new() + h.update(msg) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, dig1) + + def test_hex_digest(self): + mac = self.new() + digest = mac.digest() + hexdigest = mac.hexdigest() + + # hexdigest is equivalent to digest + self.assertEqual(hexlify(digest), tobytes(hexdigest)) + # hexdigest does not change the state +
self.assertEqual(mac.hexdigest(), hexdigest) + # hexdigest returns a string + self.assertTrue(isinstance(hexdigest, type("digest"))) + + def test_bytearray(self): + + data = b"\x00\x01\x02" + + # Data can be a bytearray (during operation) + data_ba = bytearray(data) + + h1 = self.new() + h2 = self.new() + h1.update(data) + h2.update(data_ba) + data_ba[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + def test_memoryview(self): + + data = b"\x00\x01\x02" + + def get_mv_ro(data): + return memoryview(data) + + def get_mv_rw(data): + return memoryview(bytearray(data)) + + for get_mv in (get_mv_ro, get_mv_rw): + + # Data can be a memoryview (during operation) + data_mv = get_mv(data) + + h1 = self.new() + h2 = self.new() + h1.update(data) + h2.update(data_mv) + if not data_mv.readonly: + data_mv[:1] = b'\xFF' + + self.assertEqual(h1.digest(), h2.digest()) + + +class TupleHash128Test(TupleHashTest): + + TupleHash = TupleHash128 + + minimum_bytes = 8 + default_bytes = 64 + + minimum_bits = 64 + default_bits = 512 + + +class TupleHash256Test(TupleHashTest): + + TupleHash = TupleHash256 + + minimum_bytes = 8 + default_bytes = 64 + + minimum_bits = 64 + default_bits = 512 + + +class NISTExampleTestVectors(unittest.TestCase): + + # http://csrc.nist.gov/groups/ST/toolkit/documents/Examples/TupleHash_samples.pdf + test_data = [ + ( + ( + "00 01 02", + "10 11 12 13 14 15", + ), + "", + "C5 D8 78 6C 1A FB 9B 82 11 1A B3 4B 65 B2 C0 04" + "8F A6 4E 6D 48 E2 63 26 4C E1 70 7D 3F FC 8E D1", + "KMAC128 Sample #1 NIST", + TupleHash128 + ), + ( + ( + "00 01 02", + "10 11 12 13 14 15", + ), + "My Tuple App", + "75 CD B2 0F F4 DB 11 54 E8 41 D7 58 E2 41 60 C5" + "4B AE 86 EB 8C 13 E7 F5 F4 0E B3 55 88 E9 6D FB", + "KMAC128 Sample #2 NIST", + TupleHash128 + ), + ( + ( + "00 01 02", + "10 11 12 13 14 15", + "20 21 22 23 24 25 26 27 28", + ), + "My Tuple App", + "E6 0F 20 2C 89 A2 63 1E DA 8D 4C 58 8C A5 FD 07" + "F3 9E 51 51 99 8D EC CF 97 3A DB 38 04 BB 6E 84", + "KMAC128 Sample #3 NIST", + TupleHash128 + ), + ( + ( + "00 01 02", + "10 11 12 13 14 15", + ), + "", + "CF B7 05 8C AC A5 E6 68 F8 1A 12 A2 0A 21 95 CE" + "97 A9 25 F1 DB A3 E7 44 9A 56 F8 22 01 EC 60 73" + "11 AC 26 96 B1 AB 5E A2 35 2D F1 42 3B DE 7B D4" + "BB 78 C9 AE D1 A8 53 C7 86 72 F9 EB 23 BB E1 94", + "KMAC256 Sample #4 NIST", + TupleHash256 + ), + ( + ( + "00 01 02", + "10 11 12 13 14 15", + ), + "My Tuple App", + "14 7C 21 91 D5 ED 7E FD 98 DB D9 6D 7A B5 A1 16" + "92 57 6F 5F E2 A5 06 5F 3E 33 DE 6B BA 9F 3A A1" + "C4 E9 A0 68 A2 89 C6 1C 95 AA B3 0A EE 1E 41 0B" + "0B 60 7D E3 62 0E 24 A4 E3 BF 98 52 A1 D4 36 7E", + "KMAC256 Sample #5 NIST", + TupleHash256 + ), + ( + ( + "00 01 02", + "10 11 12 13 14 15", + "20 21 22 23 24 25 26 27 28", + ), + "My Tuple App", + "45 00 0B E6 3F 9B 6B FD 89 F5 47 17 67 0F 69 A9" + "BC 76 35 91 A4 F0 5C 50 D6 88 91 A7 44 BC C6 E7" + "D6 D5 B5 E8 2C 01 8D A9 99 ED 35 B0 BB 49 C9 67" + "8E 52 6A BD 8E 85 C1 3E D2 54 02 1D B9 E7 90 CE", + "KMAC256 Sample #6 NIST", + TupleHash256 + ), + + + + ] + + def setUp(self): + td = [] + for tv_in in self.test_data: + tv_out = [None] * len(tv_in) + + tv_out[0] = [] + for string in tv_in[0]: + tv_out[0].append(unhexlify(string.replace(" ", ""))) + + tv_out[1] = tobytes(tv_in[1]) # Custom + tv_out[2] = unhexlify(tv_in[2].replace(" ", "")) + tv_out[3] = tv_in[3] + tv_out[4] = tv_in[4] + td.append(tv_out) + self.test_data = td + + def runTest(self): + + for data, custom, digest, text, module in self.test_data: + hd = module.new(custom=custom, 
digest_bytes=len(digest)) + for string in data: + hd.update(string) + self.assertEqual(hd.digest(), digest, msg=text) + + +def get_tests(config={}): + tests = [] + + tests += list_test_cases(TupleHash128Test) + tests += list_test_cases(TupleHash256Test) + tests.append(NISTExampleTestVectors()) + + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_cSHAKE.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_cSHAKE.py new file mode 100644 index 0000000..6797160 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_cSHAKE.py @@ -0,0 +1,178 @@ +# =================================================================== +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +"""Self-test suite for Cryptodome.Hash.cSHAKE128 and cSHAKE256""" + +import unittest + +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Hash import cSHAKE128, cSHAKE256, SHAKE128, SHAKE256 +from Cryptodome.Util.py3compat import b, bchr, tobytes + + +class cSHAKETest(unittest.TestCase): + + def test_left_encode(self): + from Cryptodome.Hash.cSHAKE128 import _left_encode + self.assertEqual(_left_encode(0), b'\x01\x00') + self.assertEqual(_left_encode(1), b'\x01\x01') + self.assertEqual(_left_encode(256), b'\x02\x01\x00') + + def test_bytepad(self): + from Cryptodome.Hash.cSHAKE128 import _bytepad + self.assertEqual(_bytepad(b'', 4), b'\x01\x04\x00\x00') + self.assertEqual(_bytepad(b'A', 4), b'\x01\x04A\x00') + self.assertEqual(_bytepad(b'AA', 4), b'\x01\x04AA') + self.assertEqual(_bytepad(b'AAA', 4), b'\x01\x04AAA\x00\x00\x00') + self.assertEqual(_bytepad(b'AAAA', 4), b'\x01\x04AAAA\x00\x00') + self.assertEqual(_bytepad(b'AAAAA', 4), b'\x01\x04AAAAA\x00') + self.assertEqual(_bytepad(b'AAAAAA', 4), b'\x01\x04AAAAAA') + self.assertEqual(_bytepad(b'AAAAAAA', 4), b'\x01\x04AAAAAAA\x00\x00\x00') + + def test_new_positive(self): + + xof1 = self.cshake.new() + xof2 = self.cshake.new(data=b("90")) + xof3 = self.cshake.new().update(b("90")) + + self.assertNotEqual(xof1.read(10), xof2.read(10)) + xof3.read(10) + self.assertEqual(xof2.read(10), xof3.read(10)) + + xof1 = self.cshake.new() + ref = xof1.read(10) + xof2 = self.cshake.new(custom=b("")) + xof3 = self.cshake.new(custom=b("foo")) + + self.assertEqual(ref, xof2.read(10)) + self.assertNotEqual(ref, xof3.read(10)) + + xof1 = self.cshake.new(custom=b("foo")) + xof2 = self.cshake.new(custom=b("foo"), data=b("90")) + xof3 = self.cshake.new(custom=b("foo")).update(b("90")) + + self.assertNotEqual(xof1.read(10), xof2.read(10)) + xof3.read(10) + self.assertEqual(xof2.read(10), xof3.read(10)) + + def test_update(self): + pieces = [bchr(10) * 200, bchr(20) * 300] + h = self.cshake.new() + h.update(pieces[0]).update(pieces[1]) + digest = h.read(10) + h = self.cshake.new() + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.read(10), digest) + + def test_update_negative(self): + h = self.cshake.new() + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = self.cshake.new() + digest = h.read(90) + + # read returns a byte string of the right length + self.assertTrue(isinstance(digest, type(b("digest")))) + self.assertEqual(len(digest), 90) + + def test_update_after_read(self): + mac = self.cshake.new() + mac.update(b("rrrr")) + mac.read(90) + self.assertRaises(TypeError, mac.update, b("ttt")) + + def test_shake(self): + # When no customization string is passed, results must match SHAKE + for digest_len in range(64): + xof1 = self.cshake.new(b'TEST') + xof2 = self.shake.new(b'TEST') + self.assertEqual(xof1.read(digest_len), xof2.read(digest_len)) + + +class cSHAKE128Test(cSHAKETest): + cshake = cSHAKE128 + shake = SHAKE128 + + +class cSHAKE256Test(cSHAKETest): + cshake = cSHAKE256 + shake = SHAKE256 + + +class cSHAKEVectors(unittest.TestCase): + pass + + +vector_files = [("ShortMsgSamples_cSHAKE128.txt", "Short Message Samples cSHAKE128", "128_cshake", cSHAKE128), + ("ShortMsgSamples_cSHAKE256.txt", "Short Message Samples cSHAKE256", "256_cshake", cSHAKE256), + ("CustomMsgSamples_cSHAKE128.txt", "Custom Message Samples cSHAKE128", "custom_128_cshake", cSHAKE128), + 
("CustomMsgSamples_cSHAKE256.txt", "Custom Message Samples cSHAKE256", "custom_256_cshake", cSHAKE256), + ] + +for file, descr, tag, test_class in vector_files: + + test_vectors = load_test_vectors(("Hash", "SHA3"), file, descr, + {"len": lambda x: int(x), + "nlen": lambda x: int(x), + "slen": lambda x: int(x)}) or [] + + for idx, tv in enumerate(test_vectors): + if getattr(tv, "len", 0) == 0: + data = b("") + else: + data = tobytes(tv.msg) + assert(tv.len == len(tv.msg)*8) + if getattr(tv, "nlen", 0) != 0: + raise ValueError("Unsupported cSHAKE test vector") + if getattr(tv, "slen", 0) == 0: + custom = b("") + else: + custom = tobytes(tv.s) + assert(tv.slen == len(tv.s)*8) + + def new_test(self, data=data, result=tv.md, custom=custom, test_class=test_class): + hobj = test_class.new(data=data, custom=custom) + digest = hobj.read(len(result)) + self.assertEqual(digest, result) + + setattr(cSHAKEVectors, "test_%s_%d" % (tag, idx), new_test) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(cSHAKE128Test) + tests += list_test_cases(cSHAKE256Test) + tests += list_test_cases(cSHAKEVectors) + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_keccak.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_keccak.py new file mode 100644 index 0000000..dcc0d13 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Hash/test_keccak.py @@ -0,0 +1,250 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +"""Self-test suite for Cryptodome.Hash.keccak""" + +import unittest +from binascii import hexlify, unhexlify + +from Cryptodome.SelfTest.loader import load_test_vectors +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Hash import keccak +from Cryptodome.Util.py3compat import b, tobytes, bchr + +class KeccakTest(unittest.TestCase): + + def test_new_positive(self): + + for digest_bits in (224, 256, 384, 512): + hobj = keccak.new(digest_bits=digest_bits) + self.assertEqual(hobj.digest_size, digest_bits // 8) + + hobj2 = hobj.new() + self.assertEqual(hobj2.digest_size, digest_bits // 8) + + for digest_bytes in (28, 32, 48, 64): + hobj = keccak.new(digest_bytes=digest_bytes) + self.assertEqual(hobj.digest_size, digest_bytes) + + hobj2 = hobj.new() + self.assertEqual(hobj2.digest_size, digest_bytes) + + def test_new_positive2(self): + + digest1 = keccak.new(data=b("\x90"), digest_bytes=64).digest() + digest2 = keccak.new(digest_bytes=64).update(b("\x90")).digest() + self.assertEqual(digest1, digest2) + + def test_new_negative(self): + + # keccak.new needs digest size + self.assertRaises(TypeError, keccak.new) + + h = keccak.new(digest_bits=512) + + # Either bits or bytes can be specified + self.assertRaises(TypeError, keccak.new, + digest_bytes=64, + digest_bits=512) + + # Range + self.assertRaises(ValueError, keccak.new, digest_bytes=0) + self.assertRaises(ValueError, keccak.new, digest_bytes=1) + self.assertRaises(ValueError, keccak.new, digest_bytes=65) + self.assertRaises(ValueError, keccak.new, digest_bits=0) + self.assertRaises(ValueError, keccak.new, digest_bits=1) + self.assertRaises(ValueError, keccak.new, digest_bits=513) + + def test_update(self): + pieces = [bchr(10) * 200, bchr(20) * 300] + h = keccak.new(digest_bytes=64) + h.update(pieces[0]).update(pieces[1]) + digest = h.digest() + h = keccak.new(digest_bytes=64) + h.update(pieces[0] + pieces[1]) + self.assertEqual(h.digest(), digest) + + def test_update_negative(self): + h = keccak.new(digest_bytes=64) + self.assertRaises(TypeError, h.update, u"string") + + def test_digest(self): + h = keccak.new(digest_bytes=64) + digest = h.digest() + + # digest() does not change the state + self.assertEqual(h.digest(), digest) + # digest returns a byte string + self.assertTrue(isinstance(digest, type(b("digest")))) + + def test_hex_digest(self): + mac = keccak.new(digest_bits=512) + digest = mac.digest() + hexdigest = mac.hexdigest() + + # hexdigest is equivalent to digest + self.assertEqual(hexlify(digest), tobytes(hexdigest)) + # hexdigest does not change the state + self.assertEqual(mac.hexdigest(), hexdigest) + # hexdigest returns a string + self.assertTrue(isinstance(hexdigest, type("digest"))) + + def test_update_after_digest(self): + msg = b("rrrrttt") + + # Normally, update() cannot be done after digest() + h = keccak.new(digest_bits=512, data=msg[:4]) + dig1 = h.digest() + self.assertRaises(TypeError, h.update, msg[4:]) + dig2 = keccak.new(digest_bits=512, data=msg).digest() + + # With the proper flag, it is allowed + h = keccak.new(digest_bits=512, data=msg[:4], update_after_digest=True) + self.assertEqual(h.digest(), dig1) + # ...
and the subsequent digest applies to the entire message + # up to that point + h.update(msg[4:]) + self.assertEqual(h.digest(), dig2) + + +class KeccakVectors(unittest.TestCase): + pass + + # TODO: add ExtremelyLong tests + + +test_vectors_224 = load_test_vectors(("Hash", "keccak"), + "ShortMsgKAT_224.txt", + "Short Messages KAT 224", + {"len": lambda x: int(x)}) or [] + +test_vectors_224 += load_test_vectors(("Hash", "keccak"), + "LongMsgKAT_224.txt", + "Long Messages KAT 224", + {"len": lambda x: int(x)}) or [] + +for idx, tv in enumerate(test_vectors_224): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = keccak.new(digest_bits=224, data=data) + self.assertEqual(hobj.digest(), result) + + setattr(KeccakVectors, "test_224_%d" % idx, new_test) + +# --- + +test_vectors_256 = load_test_vectors(("Hash", "keccak"), + "ShortMsgKAT_256.txt", + "Short Messages KAT 256", + { "len" : lambda x: int(x) } ) or [] + +test_vectors_256 += load_test_vectors(("Hash", "keccak"), + "LongMsgKAT_256.txt", + "Long Messages KAT 256", + { "len" : lambda x: int(x) } ) or [] + +for idx, tv in enumerate(test_vectors_256): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = keccak.new(digest_bits=256, data=data) + self.assertEqual(hobj.digest(), result) + + setattr(KeccakVectors, "test_256_%d" % idx, new_test) + + +# --- + +test_vectors_384 = load_test_vectors(("Hash", "keccak"), + "ShortMsgKAT_384.txt", + "Short Messages KAT 384", + {"len": lambda x: int(x)}) or [] + +test_vectors_384 += load_test_vectors(("Hash", "keccak"), + "LongMsgKAT_384.txt", + "Long Messages KAT 384", + {"len": lambda x: int(x)}) or [] + +for idx, tv in enumerate(test_vectors_384): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = keccak.new(digest_bits=384, data=data) + self.assertEqual(hobj.digest(), result) + + setattr(KeccakVectors, "test_384_%d" % idx, new_test) + +# --- + +test_vectors_512 = load_test_vectors(("Hash", "keccak"), + "ShortMsgKAT_512.txt", + "Short Messages KAT 512", + {"len": lambda x: int(x)}) or [] + +test_vectors_512 += load_test_vectors(("Hash", "keccak"), + "LongMsgKAT_512.txt", + "Long Messages KAT 512", + {"len": lambda x: int(x)}) or [] + +for idx, tv in enumerate(test_vectors_512): + if tv.len == 0: + data = b("") + else: + data = tobytes(tv.msg) + + def new_test(self, data=data, result=tv.md): + hobj = keccak.new(digest_bits=512, data=data) + self.assertEqual(hobj.digest(), result) + + setattr(KeccakVectors, "test_512_%d" % idx, new_test) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(KeccakTest) + tests += list_test_cases(KeccakVectors) + return tests + + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/__init__.py new file mode 100644 index 0000000..f15f141 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/__init__.py @@ -0,0 +1,47 @@ +# +# SelfTest/IO/__init__.py: Self-test for input/output module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test for I/O""" + +def get_tests(config={}): + tests = [] + from Cryptodome.SelfTest.IO import test_PKCS8; tests += test_PKCS8.get_tests(config=config) + from Cryptodome.SelfTest.IO import test_PBES; tests += test_PBES.get_tests(config=config) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + + diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/test_PBES.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/test_PBES.py new file mode 100644 index 0000000..bd055ab --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/test_PBES.py @@ -0,0 +1,93 @@ +# +# SelfTest/IO/test_PBES.py: Self-test for the _PBES module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-tests for Cryptodome.IO._PBES module""" + +import unittest +from Cryptodome.Util.py3compat import * + +from Cryptodome.IO._PBES import PBES2 + + +class TestPBES2(unittest.TestCase): + + def setUp(self): + self.ref = b("Test data") + self.passphrase = b("Passphrase") + + def test1(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + def test2(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'PBKDF2WithHMAC-SHA1AndAES128-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + def test3(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'PBKDF2WithHMAC-SHA1AndAES192-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + def test4(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'scryptAndAES128-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + def test5(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'scryptAndAES192-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + def test6(self): + ct = PBES2.encrypt(self.ref, self.passphrase, + 'scryptAndAES256-CBC') + pt = PBES2.decrypt(ct, self.passphrase) + self.assertEqual(self.ref, pt) + + +def get_tests(config={}): + from Cryptodome.SelfTest.st_common import list_test_cases + listTests = [] + listTests += list_test_cases(TestPBES2) + return listTests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/test_PKCS8.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/test_PKCS8.py new file mode 100644 index 0000000..3cef9bc --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/IO/test_PKCS8.py @@ -0,0 +1,425 @@ +# +# SelfTest/IO/test_PKCS8.py: Self-test for the PKCS8 module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-tests for Cryptodome.IO.PKCS8 module""" + +import unittest +from binascii import unhexlify + +from Cryptodome.Util.py3compat import * +from Cryptodome.IO import PKCS8 + +from Cryptodome.Util.asn1 import DerNull + +oid_key = '1.2.840.113549.1.1.1' + +# Original RSA key (in DER format) +# hexdump -v -e '32/1 "%02x" "\n"' key.der +clear_key=""" +308201ab020100025a00b94a7f7075ab9e79e8196f47be707781e80dd965cf16 +0c951a870b71783b6aaabbd550c0e65e5a3dfe15b8620009f6d7e5efec42a3f0 +6fe20faeebb0c356e79cdec6db4dd427e82d8ae4a5b90996227b8ba54ccfc4d2 +5c08050203010001025a00afa09c70d528299b7552fe766b5d20f9a221d66938 +c3b68371d48515359863ff96f0978d700e08cd6fd3d8a3f97066fc2e0d5f78eb +3a50b8e17ba297b24d1b8e9cdfd18d608668198d724ad15863ef0329195dee89 +3f039395022d0ebe0518df702a8b25954301ec60a97efdcec8eaa4f2e76ca7e8 +8dfbc3f7e0bb83f9a0e8dc47c0f8c746e9df6b022d0c9195de13f09b7be1fdd7 +1f56ae7d973e08bd9fd2c3dfd8936bb05be9cc67bd32d663c7f00d70932a0be3 +c24f022d0ac334eb6cabf1933633db007b763227b0d9971a9ea36aca8b669ec9 +4fcf16352f6b3dcae28e4bd6137db4ddd3022d0400a09f15ee7b351a2481cb03 +09920905c236d09c87afd3022f3afc2a19e3b746672b635238956ee7e6dd62d5 +022d0cd88ed14fcfbda5bbf0257f700147137bbab9c797af7df866704b889aa3 +7e2e93df3ff1a0fd3490111dcdbc4c +""" + +# Same key as above, wrapped in PKCS#8 but w/o password +# +# openssl pkcs8 -topk8 -inform DER -nocrypt -in key.der -outform DER -out keyp8.der +# hexdump -v -e '32/1 "%02x" "\n"' keyp8.der +wrapped_clear_key=""" +308201c5020100300d06092a864886f70d0101010500048201af308201ab0201 +00025a00b94a7f7075ab9e79e8196f47be707781e80dd965cf160c951a870b71 +783b6aaabbd550c0e65e5a3dfe15b8620009f6d7e5efec42a3f06fe20faeebb0 +c356e79cdec6db4dd427e82d8ae4a5b90996227b8ba54ccfc4d25c0805020301 +0001025a00afa09c70d528299b7552fe766b5d20f9a221d66938c3b68371d485 +15359863ff96f0978d700e08cd6fd3d8a3f97066fc2e0d5f78eb3a50b8e17ba2 +97b24d1b8e9cdfd18d608668198d724ad15863ef0329195dee893f039395022d +0ebe0518df702a8b25954301ec60a97efdcec8eaa4f2e76ca7e88dfbc3f7e0bb +83f9a0e8dc47c0f8c746e9df6b022d0c9195de13f09b7be1fdd71f56ae7d973e +08bd9fd2c3dfd8936bb05be9cc67bd32d663c7f00d70932a0be3c24f022d0ac3 +34eb6cabf1933633db007b763227b0d9971a9ea36aca8b669ec94fcf16352f6b +3dcae28e4bd6137db4ddd3022d0400a09f15ee7b351a2481cb0309920905c236 +d09c87afd3022f3afc2a19e3b746672b635238956ee7e6dd62d5022d0cd88ed1 +4fcfbda5bbf0257f700147137bbab9c797af7df866704b889aa37e2e93df3ff1 +a0fd3490111dcdbc4c +""" + +### +# +# The key above will now be encrypted with different algorithms. +# The password is always 'TestTest'. 
+# +# Each item in the wrapped_enc_keys list contains: +# * wrap algorithm +# * iteration count +# * IV +# * Salt +# * Expected result +### +wrapped_enc_keys = [] + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -outform DER -out keyenc.der -v2 des3 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC', +2048, +"47EA7227D8B22E2F", # IV +"E3F7A838AB911A4D", # Salt +""" +30820216304006092a864886f70d01050d3033301b06092a864886f70d01050c +300e0408e3f7a838ab911a4d02020800301406082a864886f70d0307040847ea +7227d8b22e2f048201d0ea388b374d2d0e4ceb7a5139f850fdff274884a6e6c0 +64326e09d00dbba9018834edb5a51a6ae3d1806e6e91eebf33788ce71fee0637 +a2ebf58859dd32afc644110c390274a6128b50c39b8d907823810ec471bada86 +6f5b75d8ea04ad310fad2e73621696db8e426cd511ee93ec1714a1a7db45e036 +4bf20d178d1f16bbb250b32c2d200093169d588de65f7d99aad9ddd0104b44f1 +326962e1520dfac3c2a800e8a14f678dff2b3d0bb23f69da635bf2a643ac934e +219a447d2f4460b67149e860e54f365da130763deefa649c72b0dcd48966a2d3 +4a477444782e3e66df5a582b07bbb19778a79bd355074ce331f4a82eb966b0c4 +52a09eab6116f2722064d314ae433b3d6e81d2436e93fdf446112663cde93b87 +9c8be44beb45f18e2c78fee9b016033f01ecda51b9b142091fa69f65ab784d2c +5ad8d34be6f7f1464adfc1e0ef3f7848f40d3bdea4412758f2fcb655c93d8f4d +f6fa48fc5aa4b75dd1c017ab79ac9d737233a6d668f5364ccf47786debd37334 +9c10c9e6efbe78430a61f71c89948aa32cdc3cc7338cf994147819ce7ab23450 +c8f7d9b94c3bb377d17a3fa204b601526317824b142ff6bc843fa7815ece89c0 +839573f234dac8d80cc571a045353d61db904a4398d8ef3df5ac +""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der -outform DER -out keyenc.der +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'skip encryption', # pbeWithMD5AndDES-CBC, only decoding is supported +-1, +"", +"", +""" +308201f1301b06092a864886f70d010503300e0408f9b990c89af1d41b020208 +00048201d0c6267fe8592903891933d559e71a7ca68b2e39150f19daca0f7921 +52f97e249d72f670d5140e9150433310ed7c7ee51927693fd39884cb9551cea5 +a7b746f7edf199f8787d4787a35dad930d7db057b2118851211b645ac8b90fa6 +b0e7d49ac8567cbd5fff226e87aa9129a0f52c45e9307752e8575c3b0ff756b7 +31fda6942d15ecb6b27ea19370ccc79773f47891e80d22b440d81259c4c28eac +e0ca839524116bcf52d8c566e49a95ddb0e5493437279a770a39fd333f3fca91 +55884fad0ba5aaf273121f893059d37dd417da7dcfd0d6fa7494968f13b2cc95 +65633f2c891340193e5ec00e4ee0b0e90b3b93da362a4906360845771ade1754 +9df79140be5993f3424c012598eadd3e7c7c0b4db2c72cf103d7943a5cf61420 +93370b9702386c3dd4eb0a47f34b579624a46a108b2d13921fa1b367495fe345 +6aa128aa70f8ca80ae13eb301e96c380724ce67c54380bbea2316c1faf4d058e +b4ca2e23442047606b9bc4b3bf65b432cb271bea4eb35dd3eb360d3be8612a87 +a50e96a2264490aeabdc07c6e78e5dbf4fe3388726d0e2a228346bf3c2907d68 +2a6276b22ae883fb30fa611f4e4193e7a08480fcd7db48308bacbd72bf4807aa +11fd394859f97d22982f7fe890b2e2a0f7e7ffb693 +""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v1 PBE-SHA1-RC2-64 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'skip encryption', # pbeWithSHA1AndRC2-CBC, only decoding is supported +-1, +"", +"", +""" +308201f1301b06092a864886f70d01050b300e04083ee943bdae185008020208 +00048201d0e4614d9371d3ff10ceabc2f6a7a13a0f449f9a714144e46518ea55 +e3e6f0cde24031d01ef1f37ec40081449ef01914faf45983dde0d2bc496712de +8dd15a5527dff4721d9016c13f34fb93e3ce68577e30146266d71b539f854e56 +753a192cf126ed4812734d86f81884374f1100772f78d0646e9946407637c565
+d070acab413c55952f7237437f2e48cae7fa0ff8d370de2bf446dd08049a3663 +d9c813ac197468c02e2b687e7ca994cf7f03f01b6eca87dbfed94502c2094157 +ea39f73fe4e591df1a68b04d19d9adab90bb9898467c1464ad20bf2b8fb9a5ff +d3ec91847d1c67fd768a4b9cfb46572eccc83806601372b6fad0243f58f623b7 +1c5809dea0feb8278fe27e5560eed8448dc93f5612f546e5dd7c5f6404365eb2 +5bf3396814367ae8b15c5c432b57eaed1f882c05c7f6517ee9e42b87b7b8d071 +9d6125d1b52f7b2cca1f6bd5f584334bf90bce1a7d938274cafe27b68e629698 +b16e27ae528db28593af9adcfccbebb3b9e1f2af5cd5531b51968389caa6c091 +e7de1f1b96f0d258e54e540d961a7c0ef51fda45d6da5fddd33e9bbfd3a5f8d7 +d7ab2e971de495cddbc86d38444fee9f0ac097b00adaf7802dabe0cff5b43b45 +4f26b7b547016f89be52676866189911c53e2f2477""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v1 PBE-MD5-RC2-64 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'skip encryption', # pbeWithMD5AndRC2-CBC, only decoding is supported +-1, +"", +"", +""" +308201f1301b06092a864886f70d010506300e0408f5cd2fee56d9b4b8020208 +00048201d086454942d6166a19d6b108465bd111e7080911f573d54b1369c676 +df28600e84936bfec04f91023ff16499e2e07178c340904f12ffa6886ab66228 +32bf43c2bff5a0ed14e765918cf5fc543ad49566246f7eb3fc044fa5a9c25f40 +8fc8c8296b91658d3bb1067c0aba008c4fefd9e2bcdbbbd63fdc8085482bccf4 +f150cec9a084259ad441a017e5d81a1034ef2484696a7a50863836d0eeda45cd +8cee8ecabfed703f8d9d4bbdf3a767d32a0ccdc38550ee2928d7fe3fa27eda5b +5c7899e75ad55d076d2c2d3c37d6da3d95236081f9671dab9a99afdb1cbc890e +332d1a91105d9a8ce08b6027aa07367bd1daec3059cb51f5d896124da16971e4 +0ca4bcadb06c854bdf39f42dd24174011414e51626d198775eff3449a982df7b +ace874e77e045eb6d7c3faef0750792b29a068a6291f7275df1123fac5789c51 +27ace42836d81633faf9daf38f6787fff0394ea484bbcd465b57d4dbee3cf8df +b77d1db287b3a6264c466805be5a4fe85cfbca180699859280f2dd8e2c2c10b5 +7a7d2ac670c6039d41952fbb0e4f99b560ebe1d020e1b96d02403283819c00cc +529c51f0b0101555e4c58002ba3c6e3c12e3fde1aec94382792e96d9666a2b33 +3dc397b22ecab67ee38a552fec29a1d4ff8719c748""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v1 PBE-SHA1-DES +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'skip encryption', # pbeWithSHA1AndDES-CBC, only decoding is supported +-1, +"", +"", +""" +308201f1301b06092a864886f70d01050a300e04089bacc9cf1e8f734e020208 +00048201d03e502f3ceafe8fd19ab2939576bfdded26d719b2441db1459688f5 +9673218b41ec1f739edf1e460bd927bc28470c87b2d4fc8ea02ba17b47a63c49 +c5c1bee40529dadfd3ef8b4472c730bc136678c78abfb34670ec9d7dcd17ee3f +892f93f2629e6e0f4b24ecb9f954069bf722f466dece3913bb6abbd2c471d9a5 +c5eea89b14aaccda43d30b0dd0f6eb6e9850d9747aa8aa8414c383ad01c374ee +26d3552abec9ba22669cc9622ccf2921e3d0c8ecd1a70e861956de0bec6104b5 +b649ac994970c83f8a9e84b14a7dff7843d4ca3dd4af87cea43b5657e15ae0b5 +a940ce5047f006ab3596506600724764f23757205fe374fee04911336d655acc +03e159ec27789191d1517c4f3f9122f5242d44d25eab8f0658cafb928566ca0e +8f6589aa0c0ab13ca7a618008ae3eafd4671ee8fe0b562e70b3623b0e2a16eee +97fd388087d2e03530c9fe7db6e52eccc7c48fd701ede35e08922861a9508d12 +bc8bbf24f0c6bee6e63dbcb489b603d4c4a78ce45bf2eab1d5d10456c42a65a8 +3a606f4e4b9b46eb13b57f2624b651859d3d2d5192b45dbd5a2ead14ff20ca76 +48f321309aa56d8c0c4a192b580821cc6c70c75e6f19d1c5414da898ec4dd39d +b0eb93d6ba387a80702dfd2db610757ba340f63230 +""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v2 aes128 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# 
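+#
+# (Editorial sketch, not part of the upstream pycryptodome suite.)
+# Each openssl invocation above can also be reproduced in pure Python
+# through PKCS8.wrap(), exactly as test4 below does: feeding the fixed
+# IV and salt through `randfunc` makes the DER output deterministic.
+# The helper name _rewrap_sample is hypothetical; PKCS8, DerNull and
+# oid_key are already imported/defined at the top of this file, and the
+# IV-before-salt order mirrors the Rng(t[2]+t[3]) feed used by test4.
+def _rewrap_sample(clear_key_der, protection, iteration_count, iv, salt):
+    stream = bytearray(iv + salt)
+
+    def randfunc(n):
+        # Serve the fixed IV/salt bytes instead of fresh randomness.
+        out = bytes(stream[:n])
+        del stream[:n]
+        return out
+
+    return PKCS8.wrap(clear_key_der, oid_key, b("TestTest"),
+                      protection=protection,
+                      prot_params={'iteration_count': iteration_count},
+                      key_params=DerNull(),
+                      randfunc=randfunc)
+#
+# For instance, _rewrap_sample(txt2bin(clear_key),
+# 'PBKDF2WithHMAC-SHA1AndAES128-CBC', 2048, IV, Salt) should equal the
+# DER dump in the entry below (txt2bin is defined further down).
+#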
+wrapped_enc_keys.append(( +'PBKDF2WithHMAC-SHA1AndAES128-CBC', +2048, +"4F66EE5D3BCD531FE6EBF4B4E73016B8", # IV +"479F25156176C53A", # Salt +""" +3082021f304906092a864886f70d01050d303c301b06092a864886f70d01050c +300e0408479f25156176c53a02020800301d060960864801650304010204104f +66ee5d3bcd531fe6ebf4b4e73016b8048201d0e33cfa560423f589d097d21533 +3b880a5ebac5b2ac58b4e73b0d787aee7764f034fe34ca1d1bd845c0a7c3316f +afbfb2129e03dcaf5a5031394206492828dacef1e04639bee5935e0f46114202 +10bc6c37182f4889be11c5d0486c398f4be952e5740f65de9d8edeb275e2b406 +e19bc29ad5ebb97fa536344fc3d84c7e755696f12b810898de4e6f069b8a81c8 +0aab0d45d7d062303aaa4a10c2ce84fdb5a03114039cfe138e38bb15b2ced717 +93549cdad85e730b14d9e2198b663dfdc8d04a4349eb3de59b076ad40b116d4a +25ed917c576bc7c883c95ef0f1180e28fc9981bea069594c309f1aa1b253ceab +a2f0313bb1372bcb51a745056be93d77a1f235a762a45e8856512d436b2ca0f7 +dd60fbed394ba28978d2a2b984b028529d0a58d93aba46c6bbd4ac1e4013cbaa +63b00988bc5f11ccc40141c346762d2b28f64435d4be98ec17c1884985e3807e +e550db606600993efccf6de0dfc2d2d70b5336a3b018fa415d6bdd59f5777118 +16806b7bc17c4c7e20ad7176ebfa5a1aa3f6bc10f04b77afd443944642ac9cca +d740e082b4a3bbb8bafdd34a0b3c5f2f3c2aceccccdccd092b78994b845bfa61 +706c3b9df5165ed1dbcbf1244fe41fc9bf993f52f7658e2f87e1baaeacb0f562 +9d905c +""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v2 aes192 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'PBKDF2WithHMAC-SHA1AndAES192-CBC', +2048, +"5CFC2A4FF7B63201A4A8A5B021148186", # IV +"D718541C264944CE", # Salt +""" +3082021f304906092a864886f70d01050d303c301b06092a864886f70d01050c +300e0408d718541c264944ce02020800301d060960864801650304011604105c +fc2a4ff7b63201a4a8a5b021148186048201d08e74aaa21b8bcfb15b9790fe95 +b0e09ddb0f189b6fb1682fdb9f122b804650ddec3c67a1df093a828b3e5fbcc6 +286abbcc5354c482fd796d972e919ca8a5eba1eaa2293af1d648013ddad72106 +75622264dfba55dafdda39e338f058f1bdb9846041ffff803797d3fdf3693135 +8a192729ea8346a7e5e58e925a2e2e4af0818581859e8215d87370eb4194a5ff +bae900857d4c591dbc651a241865a817eaede9987c9f9ae4f95c0bf930eea88c +4d7596e535ffb7ca369988aba75027a96b9d0bc9c8b0b75f359067fd145a378b +02aaa15e9db7a23176224da48a83249005460cc6e429168657f2efa8b1af7537 +d7d7042f2d683e8271b21d591090963eeb57aea6172f88da139e1614d6a7d1a2 +1002d5a7a93d6d21156e2b4777f6fc069287a85a1538c46b7722ccde591ab55c +630e1ceeb1ac42d1b41f3f654e9da86b5efced43775ea68b2594e50e4005e052 +0fe753c0898120c2c07265367ff157f6538a1e4080d6f9d1ca9eb51939c9574e +f2e4e1e87c1434affd5808563cddd376776dbbf790c6a40028f311a8b58dafa2 +0970ed34acd6e3e89d063987893b2b9570ddb8cc032b05a723bba9444933ebf3 +c624204be72f4190e0245197d0cb772bec933fd8442445f9a28bd042d5a3a1e9 +9a8a07 +""" +)) + +# +# openssl pkcs8 -topk8 -passin pass:TestTest -inform DER -in key.der +# -outform DER -out keyenc.der -v2 aes256 +# hexdump -v -e '32/1 "%02x" "\n"' keyenc.der +# +wrapped_enc_keys.append(( +'PBKDF2WithHMAC-SHA1AndAES256-CBC', +2048, +"323351F94462AC563E053A056252C2C4", # IV +"02A6CD0D12E727B5", # Salt +""" +3082021f304906092a864886f70d01050d303c301b06092a864886f70d01050c +300e040802a6cd0d12e727b502020800301d060960864801650304012a041032 +3351f94462ac563e053a056252c2c4048201d07f4ef1c7be21aae738a20c5632 +b8bdbbb9083b6e7f68822267b1f481fd27fdafd61a90660de6e4058790e4c912 +bf3f319a7c37e6eb3d956daaa143865020d554bf6215e8d7492359aaeef45d6e +d85a686ed26c0bf7c18d071d827a86f0b73e1db0c0e7f3d42201544093302a90 +551ad530692468c47ac15c69500b8ca67d4a17b64d15cecc035ae50b768a36cf
+07c395afa091e9e6f86f665455fbdc1b21ad79c0908b73da5de75a9b43508d5d +44dc97a870cd3cd9f01ca24452e9b11c1b4982946702cfcbfda5b2fcc0203fb5 +0b52a115760bd635c94d4c95ac2c640ee9a04ffaf6ccff5a8d953dd5d88ca478 +c377811c521f2191639c643d657a9e364af88bb7c14a356c2b0b4870a23c2f54 +d41f8157afff731471dccc6058b15e1151bcf84b39b5e622a3a1d65859c912a5 +591b85e034a1f6af664f030a6bfc8c3d20c70f32b54bcf4da9c2da83cef49cf8 +e9a74f0e5d358fe50b88acdce6a9db9a7ad61536212fc5f877ebfc7957b8bda4 +b1582a0f10d515a20ee06cf768db9c977aa6fbdca7540d611ff953012d009dac +e8abd059f8e8ffea637c9c7721f817aaf0bb23403e26a0ef0ff0e2037da67d41 +af728481f53443551a9bff4cea023164e9622b5441a309e1f4bff98e5bf76677 +8d7cd9 +""" +)) + +def txt2bin(inputs): + s = b('').join([b(x) for x in inputs if not (x in '\n\r\t ')]) + return unhexlify(s) + +class Rng: + def __init__(self, output): + self.output=output + self.idx=0 + def __call__(self, n): + output = self.output[self.idx:self.idx+n] + self.idx += n + return output + +class PKCS8_Decrypt(unittest.TestCase): + + def setUp(self): + self.oid_key = oid_key + self.clear_key = txt2bin(clear_key) + self.wrapped_clear_key = txt2bin(wrapped_clear_key) + self.wrapped_enc_keys = [] + for t in wrapped_enc_keys: + self.wrapped_enc_keys.append(( + t[0], + t[1], + txt2bin(t[2]), + txt2bin(t[3]), + txt2bin(t[4]) + )) + + ### NO ENCRYPTION + + def test1(self): + """Verify unwrapping w/o encryption""" + res1, res2, res3 = PKCS8.unwrap(self.wrapped_clear_key) + self.assertEqual(res1, self.oid_key) + self.assertEqual(res2, self.clear_key) + + def test2(self): + """Verify wrapping w/o encryption""" + wrapped = PKCS8.wrap(self.clear_key, self.oid_key) + res1, res2, res3 = PKCS8.unwrap(wrapped) + self.assertEqual(res1, self.oid_key) + self.assertEqual(res2, self.clear_key) + + ## ENCRYPTION + + def test3(self): + """Verify unwrapping with encryption""" + + for t in self.wrapped_enc_keys: + res1, res2, res3 = PKCS8.unwrap(t[4], b("TestTest")) + self.assertEqual(res1, self.oid_key) + self.assertEqual(res2, self.clear_key) + + def test4(self): + """Verify wrapping with encryption""" + + for t in self.wrapped_enc_keys: + if t[0] == 'skip encryption': + continue + rng = Rng(t[2]+t[3]) + params = { 'iteration_count':t[1] } + wrapped = PKCS8.wrap( + self.clear_key, + self.oid_key, + b("TestTest"), + protection=t[0], + prot_params=params, + key_params=DerNull(), + randfunc=rng) + self.assertEqual(wrapped, t[4]) + +def get_tests(config={}): + from Cryptodome.SelfTest.st_common import list_test_cases + listTests = [] + listTests += list_test_cases(PKCS8_Decrypt) + return listTests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/__init__.py new file mode 100644 index 0000000..f5aa3aa --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/__init__.py @@ -0,0 +1,49 @@ +# +# SelfTest/Math/__init__.py: Self-test for math module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2.
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test for Math""" + +def get_tests(config={}): + tests = [] + from Cryptodome.SelfTest.Math import test_Numbers + from Cryptodome.SelfTest.Math import test_Primality + from Cryptodome.SelfTest.Math import test_modexp + tests += test_Numbers.get_tests(config=config) + tests += test_Primality.get_tests(config=config) + tests += test_modexp.get_tests(config=config) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_Numbers.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_Numbers.py new file mode 100644 index 0000000..9802570 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_Numbers.py @@ -0,0 +1,797 @@ +# +# SelfTest/Math/test_Numbers.py: Self-test for Numbers module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +"""Self-test for Math.Numbers""" + +import sys +import unittest + +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Util.py3compat import * + +from Cryptodome.Math._IntegerNative import IntegerNative + + +class TestIntegerBase(unittest.TestCase): + + def setUp(self): + raise NotImplementedError("To be implemented") + + def Integers(self, *arg): + return map(self.Integer, arg) + + def test_init_and_equality(self): + Integer = self.Integer + + v1 = Integer(23) + v2 = Integer(v1) + v3 = Integer(-9) + self.assertRaises(ValueError, Integer, 1.0) + + v4 = Integer(10**10) + v5 = Integer(-10**10) + + v6 = Integer(0xFFFF) + v7 = Integer(0xFFFFFFFF) + v8 = Integer(0xFFFFFFFFFFFFFFFF) + + self.assertEqual(v1, v1) + self.assertEqual(v1, 23) + self.assertEqual(v1, v2) + self.assertEqual(v3, -9) + self.assertEqual(v4, 10 ** 10) + self.assertEqual(v5, -10 ** 10) + self.assertEqual(v6, 0xFFFF) + self.assertEqual(v7, 0xFFFFFFFF) + self.assertEqual(v8, 0xFFFFFFFFFFFFFFFF) + + self.assertFalse(v1 == v4) + + # Init and comparison between Integer's + v6 = Integer(v1) + self.assertEqual(v1, v6) + + self.assertFalse(Integer(0) == None) + + def test_conversion_to_int(self): + v1, v2 = self.Integers(-23, 2 ** 1000) + self.assertEqual(int(v1), -23) + self.assertEqual(int(v2), 2 ** 1000) + + def test_equality_with_ints(self): + v1, v2, v3 = self.Integers(23, -89, 2 ** 1000) + self.assertTrue(v1 == 23) + self.assertTrue(v2 == -89) + self.assertFalse(v1 == 24) + self.assertTrue(v3 == 2 ** 1000) + + def test_conversion_to_str(self): + v1, v2, v3, v4 = self.Integers(20, 0, -20, 2 ** 1000) + self.assertTrue(str(v1) == "20") + self.assertTrue(str(v2) == "0") + self.assertTrue(str(v3) == "-20") + self.assertTrue(str(v4) == "10715086071862673209484250490600018105614048117055336074437503883703510511249361224931983788156958581275946729175531468251871452856923140435984577574698574803934567774824230985421074605062371141877954182153046474983581941267398767559165543946077062914571196477686542167660429831652624386837205668069376") + + def test_repr(self): + v1, v2 = self.Integers(-1, 2**80) + self.assertEqual(repr(v1), "Integer(-1)") + self.assertEqual(repr(v2), "Integer(1208925819614629174706176)") + + def test_conversion_to_bytes(self): + Integer = self.Integer + + v1 = Integer(0x17) + self.assertEqual(b("\x17"), v1.to_bytes()) + + v2 = Integer(0xFFFE) + self.assertEqual(b("\xFF\xFE"), v2.to_bytes()) + self.assertEqual(b("\x00\xFF\xFE"), v2.to_bytes(3)) + self.assertRaises(ValueError, v2.to_bytes, 1) + + self.assertEqual(b("\xFE\xFF"), v2.to_bytes(byteorder='little')) + self.assertEqual(b("\xFE\xFF\x00"), v2.to_bytes(3, byteorder='little')) + + v3 = Integer(-90) + self.assertRaises(ValueError, v3.to_bytes) + self.assertRaises(ValueError, v3.to_bytes, byteorder='bittle') + + def test_conversion_from_bytes(self): + Integer = self.Integer + + v1 = Integer.from_bytes(b"\x00") + self.assertTrue(isinstance(v1, Integer)) + self.assertEqual(0, v1) + + v2 = Integer.from_bytes(b"\x00\x01") + self.assertEqual(1, v2) + + v3 = Integer.from_bytes(b"\xFF\xFF") + self.assertEqual(0xFFFF, v3) + + v4 = Integer.from_bytes(b"\x00\x01", 'big') + self.assertEqual(1, v4) + + v5 = Integer.from_bytes(b"\x00\x01", byteorder='big') + self.assertEqual(1, v5) + + v6 = Integer.from_bytes(b"\x00\x01", byteorder='little') + self.assertEqual(0x0100, v6) + + self.assertRaises(ValueError, Integer.from_bytes, b'\x09', 'bittle') + + def test_inequality(self): + # 
Test Integer!=Integer and Integer!=int + v1, v2, v3, v4 = self.Integers(89, 89, 90, -8) + self.assertTrue(v1 != v3) + self.assertTrue(v1 != 90) + self.assertFalse(v1 != v2) + self.assertFalse(v1 != 89) + self.assertTrue(v1 != v4) + self.assertTrue(v4 != v1) + self.assertTrue(self.Integer(0) != None) + + def test_less_than(self): + # Test Integer<Integer and Integer<int + v1, v2, v3, v4, v5 = self.Integers(13, 13, 14, -8, 2 ** 10) + self.assertTrue(v1 < v3) + self.assertTrue(v1 < 14) + self.assertFalse(v1 < v2) + self.assertFalse(v1 < 13) + self.assertTrue(v4 < v1) + self.assertFalse(v1 < v4) + self.assertTrue(v1 < v5) + self.assertFalse(v5 < v1) + + def test_less_than_or_equal(self): + # Test Integer<=Integer and Integer<=int + v1, v2, v3, v4, v5 = self.Integers(13, 13, 14, -4, 2 ** 10) + self.assertTrue(v1 <= v1) + self.assertTrue(v1 <= 13) + self.assertTrue(v1 <= v2) + self.assertTrue(v1 <= 14) + self.assertTrue(v1 <= v3) + self.assertFalse(v1 <= v4) + self.assertTrue(v1 <= v5) + self.assertFalse(v5 <= v1) + + def test_more_than(self): + # Test Integer>Integer and Integer>int + v1, v2, v3, v4, v5 = self.Integers(13, 13, 14, -8, 2 ** 10) + self.assertTrue(v3 > v1) + self.assertTrue(v3 > 13) + self.assertFalse(v1 > v1) + self.assertFalse(v1 > v2) + self.assertFalse(v1 > 13) + self.assertTrue(v1 > v4) + self.assertFalse(v4 > v1) + self.assertTrue(v5 > v1) + self.assertFalse(v1 > v5) + + def test_more_than_or_equal(self): + # Test Integer>=Integer and Integer>=int + v1, v2, v3, v4 = self.Integers(13, 13, 14, -4) + self.assertTrue(v3 >= v1) + self.assertTrue(v3 >= 13) + self.assertTrue(v1 >= v2) + self.assertTrue(v1 >= v1) + self.assertTrue(v1 >= 13) + self.assertFalse(v4 >= v1) + + def test_bool(self): + v1, v2, v3, v4 = self.Integers(0, 10, -9, 2 ** 10) + self.assertFalse(v1) + self.assertFalse(bool(v1)) + self.assertTrue(v2) + self.assertTrue(bool(v2)) + self.assertTrue(v3) + self.assertTrue(v4) + + def test_is_negative(self): + v1, v2, v3, v4, v5 = self.Integers(-3 ** 100, -3, 0, 3, 3**100) + self.assertTrue(v1.is_negative()) + self.assertTrue(v2.is_negative()) + self.assertFalse(v4.is_negative()) + self.assertFalse(v5.is_negative()) + + def test_addition(self): + # Test Integer+Integer and Integer+int + v1, v2, v3 = self.Integers(7, 90, -7) + self.assertTrue(isinstance(v1 + v2, self.Integer)) + self.assertEqual(v1 + v2, 97) + self.assertEqual(v1 + 90, 97) + self.assertEqual(v1 + v3, 0) + self.assertEqual(v1 + (-7), 0) + self.assertEqual(v1 + 2 ** 10, 2 ** 10 + 7) + + def test_subtraction(self): + # Test Integer-Integer and Integer-int + v1, v2, v3 = self.Integers(7, 90, -7) + self.assertTrue(isinstance(v1 - v2, self.Integer)) + self.assertEqual(v2 - v1, 83) + self.assertEqual(v2 - 7, 83) + self.assertEqual(v2 - v3, 97) + self.assertEqual(v1 - (-7), 14) + self.assertEqual(v1 - 2 ** 10, 7 - 2 ** 10) + + def test_multiplication(self): + # Test Integer*Integer and Integer*int + v1, v2, v3, v4 = self.Integers(4, 5, -2, 2 ** 10) + self.assertTrue(isinstance(v1 * v2, self.Integer)) + self.assertEqual(v1 * v2, 20) + self.assertEqual(v1 * 5, 20) + self.assertEqual(v1 * -2, -8) + self.assertEqual(v1 * 2 ** 10, 4 * (2 ** 10)) + + def test_floor_div(self): + v1, v2, v3 = self.Integers(3, 8, 2 ** 80) + self.assertTrue(isinstance(v1 // v2, self.Integer)) + self.assertEqual(v2 // v1, 2) + self.assertEqual(v2 // 3, 2) + self.assertEqual(v2 // -3, -3) + self.assertEqual(v3 // 2 ** 79, 2) + self.assertRaises(ZeroDivisionError, lambda: v1 // 0) + + def test_remainder(self): + # Test Integer%Integer and
Integer%int + v1, v2, v3 = self.Integers(23, 5, -4) + self.assertTrue(isinstance(v1 % v2, self.Integer)) + self.assertEqual(v1 % v2, 3) + self.assertEqual(v1 % 5, 3) + self.assertEqual(v3 % 5, 1) + self.assertEqual(v1 % 2 ** 10, 23) + self.assertRaises(ZeroDivisionError, lambda: v1 % 0) + self.assertRaises(ValueError, lambda: v1 % -6) + + def test_simple_exponentiation(self): + v1, v2, v3 = self.Integers(4, 3, -2) + self.assertTrue(isinstance(v1 ** v2, self.Integer)) + self.assertEqual(v1 ** v2, 64) + self.assertEqual(pow(v1, v2), 64) + self.assertEqual(v1 ** 3, 64) + self.assertEqual(pow(v1, 3), 64) + self.assertEqual(v3 ** 2, 4) + self.assertEqual(v3 ** 3, -8) + + self.assertRaises(ValueError, pow, v1, -3) + + def test_modular_exponentiation(self): + v1, v2, v3 = self.Integers(23, 5, 17) + + self.assertTrue(isinstance(pow(v1, v2, v3), self.Integer)) + self.assertEqual(pow(v1, v2, v3), 7) + self.assertEqual(pow(v1, 5, v3), 7) + self.assertEqual(pow(v1, v2, 17), 7) + self.assertEqual(pow(v1, 5, 17), 7) + self.assertEqual(pow(v1, 0, 17), 1) + self.assertEqual(pow(v1, 1, 2 ** 80), 23) + self.assertEqual(pow(v1, 2 ** 80, 89298), 17689) + + self.assertRaises(ZeroDivisionError, pow, v1, 5, 0) + self.assertRaises(ValueError, pow, v1, 5, -4) + self.assertRaises(ValueError, pow, v1, -3, 8) + + def test_inplace_exponentiation(self): + v1 = self.Integer(4) + v1.inplace_pow(2) + self.assertEqual(v1, 16) + + v1 = self.Integer(4) + v1.inplace_pow(2, 15) + self.assertEqual(v1, 1) + + def test_abs(self): + v1, v2, v3, v4, v5 = self.Integers(-2 ** 100, -2, 0, 2, 2 ** 100) + self.assertEqual(abs(v1), 2 ** 100) + self.assertEqual(abs(v2), 2) + self.assertEqual(abs(v3), 0) + self.assertEqual(abs(v4), 2) + self.assertEqual(abs(v5), 2 ** 100) + + def test_sqrt(self): + v1, v2, v3, v4 = self.Integers(-2, 0, 49, 10**100) + + self.assertRaises(ValueError, v1.sqrt) + self.assertEqual(v2.sqrt(), 0) + self.assertEqual(v3.sqrt(), 7) + self.assertEqual(v4.sqrt(), 10**50) + + def test_sqrt_module(self): + + # Invalid modulus (non-positive) + self.assertRaises(ValueError, self.Integer(5).sqrt, 0) + self.assertRaises(ValueError, self.Integer(5).sqrt, -1) + + # Simple cases + assert self.Integer(0).sqrt(5) == 0 + assert self.Integer(1).sqrt(5) in (1, 4) + + # Test with all quadratic residues in several fields + for p in (11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53): + for i in range(0, p): + square = i**2 % p + res = self.Integer(square).sqrt(p) + assert res in (i, p - i) + + # 2 is a non-quadratic residue in Z_11 + self.assertRaises(ValueError, self.Integer(2).sqrt, 11) + + # 10 is not a prime + self.assertRaises(ValueError, self.Integer(4).sqrt, 10) + + # 4 and 7 are the square roots of 5 modulo 11 + assert self.Integer(5 - 11).sqrt(11) in (4, 7) + assert self.Integer(5 + 11).sqrt(11) in (4, 7) + + def test_in_place_add(self): + v1, v2 = self.Integers(10, 20) + + v1 += v2 + self.assertEqual(v1, 30) + v1 += 10 + self.assertEqual(v1, 40) + v1 += -1 + self.assertEqual(v1, 39) + v1 += 2 ** 1000 + self.assertEqual(v1, 39 + 2 ** 1000) + + def test_in_place_sub(self): + v1, v2 = self.Integers(10, 20) + + v1 -= v2 + self.assertEqual(v1, -10) + v1 -= -100 + self.assertEqual(v1, 90) + v1 -= 90000 + self.assertEqual(v1, -89910) + v1 -= -100000 + self.assertEqual(v1, 10090) + + def test_in_place_mul(self): + v1, v2 = self.Integers(3, 5) + + v1 *= v2 + self.assertEqual(v1, 15) + v1 *= 2 + self.assertEqual(v1, 30) + v1 *= -2 + self.assertEqual(v1, -60) + v1 *= 2 ** 1000 + self.assertEqual(v1, -60 * (2 ** 1000)) + + def
test_in_place_modulus(self): + v1, v2 = self.Integers(20, 7) + + v1 %= v2 + self.assertEqual(v1, 6) + v1 %= 2 ** 1000 + self.assertEqual(v1, 6) + v1 %= 2 + self.assertEqual(v1, 0) + def t(): + v3 = self.Integer(9) + v3 %= 0 + self.assertRaises(ZeroDivisionError, t) + + def test_and(self): + v1, v2, v3 = self.Integers(0xF4, 0x31, -0xF) + self.assertTrue(isinstance(v1 & v2, self.Integer)) + self.assertEqual(v1 & v2, 0x30) + self.assertEqual(v1 & 0x31, 0x30) + self.assertEqual(v1 & v3, 0xF0) + self.assertEqual(v1 & -0xF, 0xF0) + self.assertEqual(v3 & -0xF, -0xF) + self.assertEqual(v2 & (2 ** 1000 + 0x31), 0x31) + + def test_or(self): + v1, v2, v3 = self.Integers(0x40, 0x82, -0xF) + self.assertTrue(isinstance(v1 | v2, self.Integer)) + self.assertEqual(v1 | v2, 0xC2) + self.assertEqual(v1 | 0x82, 0xC2) + self.assertEqual(v2 | v3, -0xD) + self.assertEqual(v2 | 2 ** 1000, 2 ** 1000 + 0x82) + + def test_right_shift(self): + v1, v2, v3 = self.Integers(0x10, 1, -0x10) + self.assertEqual(v1 >> 0, v1) + self.assertTrue(isinstance(v1 >> v2, self.Integer)) + self.assertEqual(v1 >> v2, 0x08) + self.assertEqual(v1 >> 1, 0x08) + self.assertRaises(ValueError, lambda: v1 >> -1) + self.assertEqual(v1 >> (2 ** 1000), 0) + + self.assertEqual(v3 >> 1, -0x08) + self.assertEqual(v3 >> (2 ** 1000), -1) + + def test_in_place_right_shift(self): + v1, v2, v3 = self.Integers(0x10, 1, -0x10) + v1 >>= 0 + self.assertEqual(v1, 0x10) + v1 >>= 1 + self.assertEqual(v1, 0x08) + v1 >>= v2 + self.assertEqual(v1, 0x04) + v3 >>= 1 + self.assertEqual(v3, -0x08) + def l(): + v4 = self.Integer(0x90) + v4 >>= -1 + self.assertRaises(ValueError, l) + def m1(): + v4 = self.Integer(0x90) + v4 >>= 2 ** 1000 + return v4 + self.assertEqual(0, m1()) + def m2(): + v4 = self.Integer(-1) + v4 >>= 2 ** 1000 + return v4 + self.assertEqual(-1, m2()) + + def _test_left_shift(self): + v1, v2, v3 = self.Integers(0x10, 1, -0x10) + self.assertEqual(v1 << 0, v1) + self.assertTrue(isinstance(v1 << v2, self.Integer)) + self.assertEqual(v1 << v2, 0x20) + self.assertEqual(v1 << 1, 0x20) + self.assertEqual(v3 << 1, -0x20) + self.assertRaises(ValueError, lambda: v1 << -1) + self.assertRaises(ValueError, lambda: v1 << (2 ** 1000)) + + def test_in_place_left_shift(self): + v1, v2, v3 = self.Integers(0x10, 1, -0x10) + v1 <<= 0 + self.assertEqual(v1, 0x10) + v1 <<= 1 + self.assertEqual(v1, 0x20) + v1 <<= v2 + self.assertEqual(v1, 0x40) + v3 <<= 1 + self.assertEqual(v3, -0x20) + def l(): + v4 = self.Integer(0x90) + v4 <<= -1 + self.assertRaises(ValueError, l) + def m(): + v4 = self.Integer(0x90) + v4 <<= 2 ** 1000 + self.assertRaises(ValueError, m) + + + def test_get_bit(self): + v1, v2, v3 = self.Integers(0x102, -3, 1) + self.assertEqual(v1.get_bit(0), 0) + self.assertEqual(v1.get_bit(1), 1) + self.assertEqual(v1.get_bit(v3), 1) + self.assertEqual(v1.get_bit(8), 1) + self.assertEqual(v1.get_bit(9), 0) + + self.assertRaises(ValueError, v1.get_bit, -1) + self.assertEqual(v1.get_bit(2 ** 1000), 0) + + self.assertRaises(ValueError, v2.get_bit, -1) + self.assertRaises(ValueError, v2.get_bit, 0) + self.assertRaises(ValueError, v2.get_bit, 1) + self.assertRaises(ValueError, v2.get_bit, 2 * 1000) + + def test_odd_even(self): + v1, v2, v3, v4, v5 = self.Integers(0, 4, 17, -4, -17) + + self.assertTrue(v1.is_even()) + self.assertTrue(v2.is_even()) + self.assertFalse(v3.is_even()) + self.assertTrue(v4.is_even()) + self.assertFalse(v5.is_even()) + + self.assertFalse(v1.is_odd()) + self.assertFalse(v2.is_odd()) + self.assertTrue(v3.is_odd()) + self.assertFalse(v4.is_odd()) + 
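The get_bit() cases above pin down the bit-indexing contract: bits are counted from the least-significant end, positions past the top bit read as zero, and negative values (or positions) are rejected. A minimal sketch of that contract with plain Python ints, not the Cryptodome API:

    # Hypothetical helper mirroring what Integer.get_bit(i) is expected
    # to return for non-negative values: bit i, LSB first.
    def get_bit(value, position):
        if value < 0 or position < 0:
            raise ValueError("negative values are not supported")
        return (value >> position) & 1

    assert get_bit(0x102, 0) == 0   # 0x102 == 0b1_0000_0010
    assert get_bit(0x102, 1) == 1
    assert get_bit(0x102, 8) == 1
    assert get_bit(0x102, 9) == 0   # beyond the most significant bit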
self.assertTrue(v5.is_odd()) + + def test_size_in_bits(self): + v1, v2, v3, v4 = self.Integers(0, 1, 0x100, -90) + self.assertEqual(v1.size_in_bits(), 1) + self.assertEqual(v2.size_in_bits(), 1) + self.assertEqual(v3.size_in_bits(), 9) + self.assertRaises(ValueError, v4.size_in_bits) + + def test_size_in_bytes(self): + v1, v2, v3, v4, v5, v6 = self.Integers(0, 1, 0xFF, 0x1FF, 0x10000, -9) + self.assertEqual(v1.size_in_bytes(), 1) + self.assertEqual(v2.size_in_bytes(), 1) + self.assertEqual(v3.size_in_bytes(), 1) + self.assertEqual(v4.size_in_bytes(), 2) + self.assertEqual(v5.size_in_bytes(), 3) + self.assertRaises(ValueError, v6.size_in_bytes) + + def test_perfect_square(self): + + self.assertFalse(self.Integer(-9).is_perfect_square()) + self.assertTrue(self.Integer(0).is_perfect_square()) + self.assertTrue(self.Integer(1).is_perfect_square()) + self.assertFalse(self.Integer(2).is_perfect_square()) + self.assertFalse(self.Integer(3).is_perfect_square()) + self.assertTrue(self.Integer(4).is_perfect_square()) + self.assertTrue(self.Integer(39*39).is_perfect_square()) + self.assertFalse(self.Integer(39*39+1).is_perfect_square()) + + for x in range(100, 1000): + self.assertFalse(self.Integer(x**2+1).is_perfect_square()) + self.assertTrue(self.Integer(x**2).is_perfect_square()) + + def test_fail_if_divisible_by(self): + v1, v2, v3 = self.Integers(12, -12, 4) + + # No failure expected + v1.fail_if_divisible_by(7) + v2.fail_if_divisible_by(7) + v2.fail_if_divisible_by(2 ** 80) + + # Failure expected + self.assertRaises(ValueError, v1.fail_if_divisible_by, 4) + self.assertRaises(ValueError, v1.fail_if_divisible_by, v3) + + def test_multiply_accumulate(self): + v1, v2, v3 = self.Integers(4, 3, 2) + v1.multiply_accumulate(v2, v3) + self.assertEqual(v1, 10) + v1.multiply_accumulate(v2, 2) + self.assertEqual(v1, 16) + v1.multiply_accumulate(3, v3) + self.assertEqual(v1, 22) + v1.multiply_accumulate(1, -2) + self.assertEqual(v1, 20) + v1.multiply_accumulate(-2, 1) + self.assertEqual(v1, 18) + v1.multiply_accumulate(1, 2 ** 1000) + self.assertEqual(v1, 18 + 2 ** 1000) + v1.multiply_accumulate(2 ** 1000, 1) + self.assertEqual(v1, 18 + 2 ** 1001) + + def test_set(self): + v1, v2 = self.Integers(3, 6) + v1.set(v2) + self.assertEqual(v1, 6) + v1.set(9) + self.assertEqual(v1, 9) + v1.set(-2) + self.assertEqual(v1, -2) + v1.set(2 ** 1000) + self.assertEqual(v1, 2 ** 1000) + + def test_inverse(self): + v1, v2, v3, v4, v5, v6 = self.Integers(2, 5, -3, 0, 723872, 3433) + + self.assertTrue(isinstance(v1.inverse(v2), self.Integer)) + self.assertEqual(v1.inverse(v2), 3) + self.assertEqual(v1.inverse(5), 3) + self.assertEqual(v3.inverse(5), 3) + self.assertEqual(v5.inverse(92929921), 58610507) + self.assertEqual(v6.inverse(9912), 5353) + + self.assertRaises(ValueError, v2.inverse, 10) + self.assertRaises(ValueError, v1.inverse, -3) + self.assertRaises(ValueError, v4.inverse, 10) + self.assertRaises(ZeroDivisionError, v2.inverse, 0) + + def test_inplace_inverse(self): + v1, v2 = self.Integers(2, 5) + + v1.inplace_inverse(v2) + self.assertEqual(v1, 3) + + def test_gcd(self): + v1, v2, v3, v4 = self.Integers(6, 10, 17, -2) + self.assertTrue(isinstance(v1.gcd(v2), self.Integer)) + self.assertEqual(v1.gcd(v2), 2) + self.assertEqual(v1.gcd(10), 2) + self.assertEqual(v1.gcd(v3), 1) + self.assertEqual(v1.gcd(-2), 2) + self.assertEqual(v4.gcd(6), 2) + + def test_lcm(self): + v1, v2, v3, v4, v5 = self.Integers(6, 10, 17, -2, 0) + self.assertTrue(isinstance(v1.lcm(v2), self.Integer)) + self.assertEqual(v1.lcm(v2), 30) +
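test_inverse() above only checks results; the expected values are what the standard extended-Euclidean inverse produces, which Python 3.8+ also exposes as pow(x, -1, m). A standalone sketch with plain ints (not the Cryptodome API), reusing the vectors from the test:

    # Extended-Euclid modular inverse; raises when gcd(a, m) != 1,
    # matching the ValueError the tests expect for non-invertible values.
    def modinv(a, m):
        if m <= 0:
            raise ValueError("modulus must be positive")
        r0, r1 = a % m, m
        s0, s1 = 1, 0
        while r1:
            q = r0 // r1
            r0, r1 = r1, r0 - q * r1
            s0, s1 = s1, s0 - q * s1
        if r0 != 1:
            raise ValueError("no inverse exists")
        return s0 % m

    assert modinv(2, 5) == 3                      # v1.inverse(v2) above
    assert modinv(-3, 5) == 3                     # negatives reduce first
    assert modinv(723872, 92929921) == 58610507   # large vector above
    assert modinv(2, 5) == pow(2, -1, 5)          # Python 3.8+ built-in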
self.assertEqual(v1.lcm(10), 30) + self.assertEqual(v1.lcm(v3), 102) + self.assertEqual(v1.lcm(-2), 6) + self.assertEqual(v4.lcm(6), 6) + self.assertEqual(v1.lcm(0), 0) + self.assertEqual(v5.lcm(0), 0) + + def test_jacobi_symbol(self): + + data = ( + (1001, 1, 1), + (19, 45, 1), + (8, 21, -1), + (5, 21, 1), + (610, 987, -1), + (1001, 9907, -1), + (5, 3439601197, -1) + ) + + js = self.Integer.jacobi_symbol + + # Jacobi symbol is always 1 for k==1 or n==1 + for k in range(1, 30): + self.assertEqual(js(k, 1), 1) + for n in range(1, 30, 2): + self.assertEqual(js(1, n), 1) + + # Fail if n is not positive odd + self.assertRaises(ValueError, js, 6, -2) + self.assertRaises(ValueError, js, 6, -1) + self.assertRaises(ValueError, js, 6, 0) + self.assertRaises(ValueError, js, 0, 0) + self.assertRaises(ValueError, js, 6, 2) + self.assertRaises(ValueError, js, 6, 4) + self.assertRaises(ValueError, js, 6, 6) + self.assertRaises(ValueError, js, 6, 8) + + for tv in data: + self.assertEqual(js(tv[0], tv[1]), tv[2]) + self.assertEqual(js(self.Integer(tv[0]), tv[1]), tv[2]) + self.assertEqual(js(tv[0], self.Integer(tv[1])), tv[2]) + + def test_jacobi_symbol_wikipedia(self): + + # Test vectors from https://en.wikipedia.org/wiki/Jacobi_symbol + tv = [ + (3, [(1, 1), (2, -1), (3, 0), (4, 1), (5, -1), (6, 0), (7, 1), (8, -1), (9, 0), (10, 1), (11, -1), (12, 0), (13, 1), (14, -1), (15, 0), (16, 1), (17, -1), (18, 0), (19, 1), (20, -1), (21, 0), (22, 1), (23, -1), (24, 0), (25, 1), (26, -1), (27, 0), (28, 1), (29, -1), (30, 0)]), + (5, [(1, 1), (2, -1), (3, -1), (4, 1), (5, 0), (6, 1), (7, -1), (8, -1), (9, 1), (10, 0), (11, 1), (12, -1), (13, -1), (14, 1), (15, 0), (16, 1), (17, -1), (18, -1), (19, 1), (20, 0), (21, 1), (22, -1), (23, -1), (24, 1), (25, 0), (26, 1), (27, -1), (28, -1), (29, 1), (30, 0)]), + (7, [(1, 1), (2, 1), (3, -1), (4, 1), (5, -1), (6, -1), (7, 0), (8, 1), (9, 1), (10, -1), (11, 1), (12, -1), (13, -1), (14, 0), (15, 1), (16, 1), (17, -1), (18, 1), (19, -1), (20, -1), (21, 0), (22, 1), (23, 1), (24, -1), (25, 1), (26, -1), (27, -1), (28, 0), (29, 1), (30, 1)]), + (9, [(1, 1), (2, 1), (3, 0), (4, 1), (5, 1), (6, 0), (7, 1), (8, 1), (9, 0), (10, 1), (11, 1), (12, 0), (13, 1), (14, 1), (15, 0), (16, 1), (17, 1), (18, 0), (19, 1), (20, 1), (21, 0), (22, 1), (23, 1), (24, 0), (25, 1), (26, 1), (27, 0), (28, 1), (29, 1), (30, 0)]), + (11, [(1, 1), (2, -1), (3, 1), (4, 1), (5, 1), (6, -1), (7, -1), (8, -1), (9, 1), (10, -1), (11, 0), (12, 1), (13, -1), (14, 1), (15, 1), (16, 1), (17, -1), (18, -1), (19, -1), (20, 1), (21, -1), (22, 0), (23, 1), (24, -1), (25, 1), (26, 1), (27, 1), (28, -1), (29, -1), (30, -1)]), + (13, [(1, 1), (2, -1), (3, 1), (4, 1), (5, -1), (6, -1), (7, -1), (8, -1), (9, 1), (10, 1), (11, -1), (12, 1), (13, 0), (14, 1), (15, -1), (16, 1), (17, 1), (18, -1), (19, -1), (20, -1), (21, -1), (22, 1), (23, 1), (24, -1), (25, 1), (26, 0), (27, 1), (28, -1), (29, 1), (30, 1)]), + (15, [(1, 1), (2, 1), (3, 0), (4, 1), (5, 0), (6, 0), (7, -1), (8, 1), (9, 0), (10, 0), (11, -1), (12, 0), (13, -1), (14, -1), (15, 0), (16, 1), (17, 1), (18, 0), (19, 1), (20, 0), (21, 0), (22, -1), (23, 1), (24, 0), (25, 0), (26, -1), (27, 0), (28, -1), (29, -1), (30, 0)]), + (17, [(1, 1), (2, 1), (3, -1), (4, 1), (5, -1), (6, -1), (7, -1), (8, 1), (9, 1), (10, -1), (11, -1), (12, -1), (13, 1), (14, -1), (15, 1), (16, 1), (17, 0), (18, 1), (19, 1), (20, -1), (21, 1), (22, -1), (23, -1), (24, -1), (25, 1), (26, 1), (27, -1), (28, -1), (29, -1), (30, 1)]), + (19, [(1, 1), (2, -1), (3, -1), (4, 1), (5, 1), (6, 
1), (7, 1), (8, -1), (9, 1), (10, -1), (11, 1), (12, -1), (13, -1), (14, -1), (15, -1), (16, 1), (17, 1), (18, -1), (19, 0), (20, 1), (21, -1), (22, -1), (23, 1), (24, 1), (25, 1), (26, 1), (27, -1), (28, 1), (29, -1), (30, 1)]), + (21, [(1, 1), (2, -1), (3, 0), (4, 1), (5, 1), (6, 0), (7, 0), (8, -1), (9, 0), (10, -1), (11, -1), (12, 0), (13, -1), (14, 0), (15, 0), (16, 1), (17, 1), (18, 0), (19, -1), (20, 1), (21, 0), (22, 1), (23, -1), (24, 0), (25, 1), (26, 1), (27, 0), (28, 0), (29, -1), (30, 0)]), + (23, [(1, 1), (2, 1), (3, 1), (4, 1), (5, -1), (6, 1), (7, -1), (8, 1), (9, 1), (10, -1), (11, -1), (12, 1), (13, 1), (14, -1), (15, -1), (16, 1), (17, -1), (18, 1), (19, -1), (20, -1), (21, -1), (22, -1), (23, 0), (24, 1), (25, 1), (26, 1), (27, 1), (28, -1), (29, 1), (30, -1)]), + (25, [(1, 1), (2, 1), (3, 1), (4, 1), (5, 0), (6, 1), (7, 1), (8, 1), (9, 1), (10, 0), (11, 1), (12, 1), (13, 1), (14, 1), (15, 0), (16, 1), (17, 1), (18, 1), (19, 1), (20, 0), (21, 1), (22, 1), (23, 1), (24, 1), (25, 0), (26, 1), (27, 1), (28, 1), (29, 1), (30, 0)]), + (27, [(1, 1), (2, -1), (3, 0), (4, 1), (5, -1), (6, 0), (7, 1), (8, -1), (9, 0), (10, 1), (11, -1), (12, 0), (13, 1), (14, -1), (15, 0), (16, 1), (17, -1), (18, 0), (19, 1), (20, -1), (21, 0), (22, 1), (23, -1), (24, 0), (25, 1), (26, -1), (27, 0), (28, 1), (29, -1), (30, 0)]), + (29, [(1, 1), (2, -1), (3, -1), (4, 1), (5, 1), (6, 1), (7, 1), (8, -1), (9, 1), (10, -1), (11, -1), (12, -1), (13, 1), (14, -1), (15, -1), (16, 1), (17, -1), (18, -1), (19, -1), (20, 1), (21, -1), (22, 1), (23, 1), (24, 1), (25, 1), (26, -1), (27, -1), (28, 1), (29, 0), (30, 1)]), + ] + + js = self.Integer.jacobi_symbol + + for n, kj in tv: + for k, j in kj: + self.assertEqual(js(k, n), j) + + def test_hex(self): + v1, = self.Integers(0x10) + self.assertEqual(hex(v1), "0x10") + + +class TestIntegerInt(TestIntegerBase): + + def setUp(self): + self.Integer = IntegerNative + + +class testIntegerRandom(unittest.TestCase): + + def test_random_exact_bits(self): + + for _ in range(1000): + a = IntegerNative.random(exact_bits=8) + self.assertFalse(a < 128) + self.assertFalse(a >= 256) + + for bits_value in range(1024, 1024 + 8): + a = IntegerNative.random(exact_bits=bits_value) + self.assertFalse(a < 2**(bits_value - 1)) + self.assertFalse(a >= 2**bits_value) + + def test_random_max_bits(self): + + flag = False + for _ in range(1000): + a = IntegerNative.random(max_bits=8) + flag = flag or a < 128 + self.assertFalse(a>=256) + self.assertTrue(flag) + + for bits_value in range(1024, 1024 + 8): + a = IntegerNative.random(max_bits=bits_value) + self.assertFalse(a >= 2**bits_value) + + def test_random_bits_custom_rng(self): + + class CustomRNG(object): + def __init__(self): + self.counter = 0 + + def __call__(self, size): + self.counter += size + return bchr(0) * size + + custom_rng = CustomRNG() + a = IntegerNative.random(exact_bits=32, randfunc=custom_rng) + self.assertEqual(custom_rng.counter, 4) + + def test_random_range(self): + + func = IntegerNative.random_range + + for x in range(200): + a = func(min_inclusive=1, max_inclusive=15) + self.assertTrue(1 <= a <= 15) + + for x in range(200): + a = func(min_inclusive=1, max_exclusive=15) + self.assertTrue(1 <= a < 15) + + self.assertRaises(ValueError, func, min_inclusive=1, max_inclusive=2, + max_exclusive=3) + self.assertRaises(ValueError, func, max_inclusive=2, max_exclusive=3) + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestIntegerInt) + + try: + from Cryptodome.Math._IntegerGMP import IntegerGMP + + 
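testIntegerRandom above fixes the semantics of the two size arguments: exact_bits=n forces the top bit on (so 2**(n-1) <= x < 2**n), while max_bits=n only caps the result from above. Exercised directly with the pure-Python backend, using only the keyword API the tests themselves call:

    from Cryptodome.Math._IntegerNative import IntegerNative

    x = IntegerNative.random(exact_bits=16)
    assert 2**15 <= x < 2**16          # top bit is always set

    y = IntegerNative.random(max_bits=16)
    assert 0 <= y < 2**16              # only an upper bound

    z = IntegerNative.random_range(min_inclusive=1, max_exclusive=15)
    assert 1 <= z < 15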
class TestIntegerGMP(TestIntegerBase): + def setUp(self): + self.Integer = IntegerGMP + + tests += list_test_cases(TestIntegerGMP) + except (ImportError, OSError) as e: + if sys.platform == "win32": + sys.stdout.write("Skipping GMP tests on Windows\n") + else: + sys.stdout.write("Skipping GMP tests (%s)\n" % str(e) ) + + try: + from Cryptodome.Math._IntegerCustom import IntegerCustom + + class TestIntegerCustomModexp(TestIntegerBase): + def setUp(self): + self.Integer = IntegerCustom + + tests += list_test_cases(TestIntegerCustomModexp) + except (ImportError, OSError) as e: + sys.stdout.write("Skipping custom modexp tests (%s)\n" % str(e) ) + + tests += list_test_cases(testIntegerRandom) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_Primality.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_Primality.py new file mode 100644 index 0000000..475d1d4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_Primality.py @@ -0,0 +1,118 @@ +# +# SelfTest/Math/test_Primality.py: Self-test for Primality module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
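get_tests() above re-runs the whole TestIntegerBase suite once per backend that actually loads, catching both ImportError and OSError because a missing shared library can surface as either. The same guarded import gives application code the fastest available implementation; a minimal sketch using only the modules named above (Cryptodome.Math.Numbers performs an equivalent selection when it exports Integer):

    # Prefer the GMP-backed implementation; fall back to pure Python
    # when the native library is absent or fails to load.
    try:
        from Cryptodome.Math._IntegerGMP import IntegerGMP as Integer
    except (ImportError, OSError):
        from Cryptodome.Math._IntegerNative import IntegerNative as Integer

    assert Integer(2).size_in_bits() == 2   # works with either backend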
+# =================================================================== + +"""Self-test for Math.Numbers""" + +import unittest + +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Util.py3compat import * + +from Cryptodome.Math.Numbers import Integer +from Cryptodome.Math.Primality import ( + PROBABLY_PRIME, COMPOSITE, + miller_rabin_test, lucas_test, + test_probable_prime, + generate_probable_prime, + generate_probable_safe_prime, + ) + + +class TestPrimality(unittest.TestCase): + + primes = (1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 2**127-1, 175637383534939453397801320455508570374088202376942372758907369518414308188137781042871856139027160010343454418881888953150175357127346872102307696660678617989191485418582475696230580407111841072614783095326672517315988762029036079794994990250662362650625650262324085116467511357592728695033227611029693067539) + composites = (0, 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 7*23, (2**19-1)*(2**67-1), 9746347772161,) + + def test_miller_rabin(self): + for prime in self.primes: + self.assertEqual(miller_rabin_test(prime, 3), PROBABLY_PRIME) + for composite in self.composites: + self.assertEqual(miller_rabin_test(composite, 3), COMPOSITE) + self.assertRaises(ValueError, miller_rabin_test, -1, 3) + + def test_lucas(self): + for prime in self.primes: + res = lucas_test(prime) + self.assertEqual(res, PROBABLY_PRIME) + for composite in self.composites: + res = lucas_test(composite) + self.assertEqual(res, COMPOSITE) + self.assertRaises(ValueError, lucas_test, -1) + + def test_is_prime(self): + primes = (170141183460469231731687303715884105727, + 19175002942688032928599, + 1363005552434666078217421284621279933627102780881053358473, + 2 ** 521 - 1) + for p in primes: + self.assertEqual(test_probable_prime(p), PROBABLY_PRIME) + + not_primes = ( + 4754868377601046732119933839981363081972014948522510826417784001, + 1334733877147062382486934807105197899496002201113849920496510541601, + 260849323075371835669784094383812120359260783810157225730623388382401, + ) + for np in not_primes: + self.assertEqual(test_probable_prime(np), COMPOSITE) + + from Cryptodome.Util.number import sieve_base + for p in sieve_base[:100]: + res = test_probable_prime(p) + self.assertEqual(res, PROBABLY_PRIME) + + def test_generate_prime_bit_size(self): + p = generate_probable_prime(exact_bits=512) + self.assertEqual(p.size_in_bits(), 512) + + def test_generate_prime_filter(self): + def ending_with_one(number): + return number % 10 == 1 + + for x in range(20): + q = generate_probable_prime(exact_bits=160, + prime_filter=ending_with_one) + self.assertEqual(q % 10, 1) + + def test_generate_safe_prime(self): + p = generate_probable_safe_prime(exact_bits=161) + self.assertEqual(p.size_in_bits(), 161) + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestPrimality) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_modexp.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_modexp.py new file mode 100644 index 0000000..d63f43c --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Math/test_modexp.py @@ -0,0 +1,201 @@ +# +# SelfTest/Math/test_modexp.py: Self-test for module exponentiation +# +# =================================================================== +# +# Copyright (c) 2017, Helder Eijs <helderijs@gmail.com> +# All rights reserved. 
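TestPrimality above treats miller_rabin_test and lucas_test as black boxes and only checks verdicts against known primes and composites. For orientation, a compact Miller-Rabin in plain Python; this is illustrative only, not the library's implementation, and not constant-time:

    import random

    def is_probable_prime(n, rounds=25):
        # One Miller-Rabin round errs on a composite with probability at
        # most 1/4, so 25 random bases make a wrong PROBABLY_PRIME verdict
        # astronomically unlikely.
        if n < 2:
            return False
        if n in (2, 3):
            return True
        if n % 2 == 0:
            return False
        d, s = n - 1, 0
        while d % 2 == 0:               # write n - 1 = d * 2**s, d odd
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False            # witness found: definitely composite
        return True                     # probably prime

    assert is_probable_prime(2**127 - 1)           # prime from the vectors
    assert not is_probable_prime(9746347772161)    # composite from the vectors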
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Self-test for the custom module exponentiation""" + +import unittest + +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Util.number import long_to_bytes, bytes_to_long + +from Cryptodome.Util.py3compat import * + +from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, + create_string_buffer, + get_raw_buffer, + c_size_t, + c_ulonglong) + +from Cryptodome.Hash import SHAKE128 +from Cryptodome.Math.Numbers import Integer +from Cryptodome.Math._IntegerCustom import _raw_montgomery + +from Cryptodome.Random.random import StrongRandom + + +def create_rng(tag): + rng = StrongRandom(SHAKE128.new(data=tag)) + return rng + +class ExceptionModulus(ValueError): + pass + +def monty_pow(base, exp, modulus): + max_len = len(long_to_bytes(max(base, exp, modulus))) + + base_b, exp_b, modulus_b = [ long_to_bytes(x, max_len) for x in + (base, exp, modulus) ] + + out = create_string_buffer(max_len) + error = _raw_montgomery.monty_pow( + out, + base_b, + exp_b, + modulus_b, + c_size_t(max_len), + c_ulonglong(32) + ) + + if error == 17: + raise ExceptionModulus() + if error: + raise ValueError("monty_pow failed with error: %d" % error) + + result = bytes_to_long(get_raw_buffer(out)) + return result + +exponent1 = 0x2ce0af628901460a419a08ef950d498b9fd6f271a1a52ac293b86fe5c60efe8e8ba93fa1ebe1eb3d614d2e7b328cb60a2591440e163441a190ecf101ceec245f600fffdcf3f5b3a17a7baeacb96a424db1d7ec985e8ec998bb479fecfffed6a75f9a90fc97062fd973303bce855ad7b8d8272a94025e8532be9aabd54a183f303538d2a7e621b4131d59e823a4625f39bd7d518d7784f7c3a8f19061da74974ff42fa1c063dec2db97d461e291a7d6e721708a5229de166c1246363372854e27f3f08ae274bc16bfd205b028a4d81386494433d516dfbb35f495acba5e4e1d1843cb3c3129b6642a85fc7244ce5845fac071c7f622e4ee12ac43fabeeaa0cd01 +modulus1 = 
0xd66691b20071be4d66d4b71032b37fa007cfabf579fcb91e50bfc2753b3f0ce7be74e216aef7e26d4ae180bc20d7bd3ea88a6cbf6f87380e613c8979b5b043b200a8ff8856a3b12875e36e98a7569f3852d028e967551000b02c19e9fa52e83115b89309aabb1e1cf1e2cb6369d637d46775ce4523ea31f64ad2794cbc365dd8a35e007ed3b57695877fbf102dbeb8b3212491398e494314e93726926e1383f8abb5889bea954eb8c0ca1c62c8e9d83f41888095c5e645ed6d32515fe0c58c1368cad84694e18da43668c6f43e61d7c9bca633ddcda7aef5b79bc396d4a9f48e2a9abe0836cc455e435305357228e93d25aaed46b952defae0f57339bf26f5a9 + + +class TestModExp(unittest.TestCase): + + def test_small(self): + self.assertEqual(1, monty_pow(11,12,19)) + + def test_large_1(self): + base = 0xfffffffffffffffffffffffffffffffffffffffffffffffffff + expected = pow(base, exponent1, modulus1) + result = monty_pow(base, exponent1, modulus1) + self.assertEqual(result, expected) + + def test_zero_exp(self): + base = 0xfffffffffffffffffffffffffffffffffffffffffffffffffff + result = monty_pow(base, 0, modulus1) + self.assertEqual(result, 1) + + def test_zero_base(self): + result = monty_pow(0, exponent1, modulus1) + self.assertEqual(result, 0) + + def test_zero_modulus(self): + base = 0xfffffffffffffffffffffffffffffffffffffffffffffffff + self.assertRaises(ExceptionModulus, monty_pow, base, exponent1, 0) + self.assertRaises(ExceptionModulus, monty_pow, 0, 0, 0) + + def test_larger_exponent(self): + base = modulus1 - 0xFFFFFFF + expected = pow(base, modulus1<<64, modulus1) + result = monty_pow(base, modulus1<<64, modulus1) + self.assertEqual(result, expected) + + def test_even_modulus(self): + base = modulus1 >> 4 + self.assertRaises(ExceptionModulus, monty_pow, base, exponent1, modulus1-1) + + def test_several_lengths(self): + prng = SHAKE128.new().update(b('Test')) + for length in range(1, 100): + modulus2 = Integer.from_bytes(prng.read(length)) | 1 + base = Integer.from_bytes(prng.read(length)) % modulus2 + exponent2 = Integer.from_bytes(prng.read(length)) + + expected = pow(base, exponent2, modulus2) + result = monty_pow(base, exponent2, modulus2) + self.assertEqual(result, expected) + + def test_variable_exponent(self): + prng = create_rng(b('Test variable exponent')) + for i in range(20): + for j in range(7): + modulus = prng.getrandbits(8*30) | 1 + base = prng.getrandbits(8*30) % modulus + exponent = prng.getrandbits(i*8+j) + + expected = pow(base, exponent, modulus) + result = monty_pow(base, exponent, modulus) + self.assertEqual(result, expected) + + exponent ^= (1 << (i*8+j)) - 1 + + expected = pow(base, exponent, modulus) + result = monty_pow(base, exponent, modulus) + self.assertEqual(result, expected) + + def test_stress_63(self): + prng = create_rng(b('Test 63')) + length = 63 + for _ in range(2000): + modulus = prng.getrandbits(8*length) | 1 + base = prng.getrandbits(8*length) % modulus + exponent = prng.getrandbits(8*length) + + expected = pow(base, exponent, modulus) + result = monty_pow(base, exponent, modulus) + self.assertEqual(result, expected) + + def test_stress_64(self): + prng = create_rng(b('Test 64')) + length = 64 + for _ in range(2000): + modulus = prng.getrandbits(8*length) | 1 + base = prng.getrandbits(8*length) % modulus + exponent = prng.getrandbits(8*length) + + expected = pow(base, exponent, modulus) + result = monty_pow(base, exponent, modulus) + self.assertEqual(result, expected) + + def test_stress_65(self): + prng = create_rng(b('Test 65')) + length = 65 + for _ in range(2000): + modulus = prng.getrandbits(8*length) | 1 + base = prng.getrandbits(8*length) % modulus + exponent = 
prng.getrandbits(8*length) + + expected = pow(base, exponent, modulus) + result = monty_pow(base, exponent, modulus) + self.assertEqual(result, expected) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestModExp) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/__init__.py new file mode 100644 index 0000000..18cf8f5 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/__init__.py @@ -0,0 +1,44 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Protocol/__init__.py: Self-tests for Cryptodome.Protocol +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test for Cryptodome.Protocol""" + +__revision__ = "$Id$" + +def get_tests(config={}): + tests = [] + from Cryptodome.SelfTest.Protocol import test_rfc1751; tests += test_rfc1751.get_tests(config=config) + from Cryptodome.SelfTest.Protocol import test_KDF; tests += test_KDF.get_tests(config=config) + + from Cryptodome.SelfTest.Protocol import test_SecretSharing; + tests += test_SecretSharing.get_tests(config=config) + + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_KDF.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_KDF.py new file mode 100644 index 0000000..f2c5b11 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_KDF.py @@ -0,0 +1,807 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Protocol/test_KDF.py: Self-test for key derivation functions +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. 
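Every stress test above follows the same oracle pattern: draw (base, exponent, modulus) with the modulus forced odd via "| 1", then compare monty_pow against CPython's built-in pow. The odd-modulus requirement comes from Montgomery arithmetic itself, where R = 2**k must be invertible modulo m. A toy single-word REDC showing the idea (illustrative only; the C code under test works limb by limb, and pow(-m, -1, r) needs Python 3.8+):

    def redc(t, m, r_bits):
        # Montgomery reduction: returns t * R**-1 mod m for R = 2**r_bits,
        # valid for 0 <= t < m * R and odd m.
        r = 1 << r_bits
        m_prime = pow(-m, -1, r)          # m * m_prime == -1 (mod r)
        u = (t + ((t * m_prime) % r) * m) >> r_bits
        return u - m if u >= m else u

    m, r_bits = 97, 8                     # odd modulus, R = 256 > m
    a, b = 23, 45
    a_mont = (a << r_bits) % m            # to Montgomery form: a*R mod m
    b_mont = (b << r_bits) % m
    prod_mont = redc(a_mont * b_mont, m, r_bits)
    assert redc(prod_mont, m, r_bits) == (a * b) % m   # back out of Montgomery form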
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +import re +import unittest +from binascii import unhexlify + +from Cryptodome.Util.py3compat import b, bchr + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors, load_test_vectors_wycheproof +from Cryptodome.Hash import SHA1, HMAC, SHA256, MD5, SHA224, SHA384, SHA512 +from Cryptodome.Cipher import AES, DES3 + +from Cryptodome.Protocol.KDF import (PBKDF1, PBKDF2, _S2V, HKDF, scrypt, + bcrypt, bcrypt_check, + SP800_108_Counter) + +from Cryptodome.Protocol.KDF import _bcrypt_decode + + +def t2b(t): + if t is None: + return None + t2 = t.replace(" ", "").replace("\n", "") + return unhexlify(b(t2)) + + +class TestVector(object): + pass + + +class PBKDF1_Tests(unittest.TestCase): + + # List of tuples with test data. + # Each tuple is made up by: + # Item #0: a pass phrase + # Item #1: salt (8 bytes encoded in hex) + # Item #2: output key length + # Item #3: iterations to use + # Item #4: expected result (encoded in hex) + _testData = ( + # From http://www.di-mgt.com.au/cryptoKDFs.html#examplespbkdf + ("password", "78578E5A5D63CB06", 16, 1000, "DC19847E05C64D2FAF10EBFB4A3D2A20"), + ) + + def test1(self): + v = self._testData[0] + res = PBKDF1(v[0], t2b(v[1]), v[2], v[3], SHA1) + self.assertEqual(res, t2b(v[4])) + + +class PBKDF2_Tests(unittest.TestCase): + + # List of tuples with test data. + # Each tuple is made up by: + # Item #0: a pass phrase + # Item #1: salt (encoded in hex) + # Item #2: output key length + # Item #3: iterations to use + # Item #4: hash module + # Item #5: expected result (encoded in hex) + _testData = ( + # From http://www.di-mgt.com.au/cryptoKDFs.html#examplespbkdf + ("password","78578E5A5D63CB06",24,2048, SHA1, "BFDE6BE94DF7E11DD409BCE20A0255EC327CB936FFE93643"), + # From RFC 6050 + ("password","73616c74", 20, 1, SHA1, "0c60c80f961f0e71f3a9b524af6012062fe037a6"), + ("password","73616c74", 20, 2, SHA1, "ea6c014dc72d6f8ccd1ed92ace1d41f0d8de8957"), + ("password","73616c74", 20, 4096, SHA1, "4b007901b765489abead49d926f721d065a429c1"), + ("passwordPASSWORDpassword","73616c7453414c5473616c7453414c5473616c7453414c5473616c7453414c5473616c74", + 25, 4096, SHA1, "3d2eec4fe41c849b80c8d83662c0e44a8b291a964cf2f07038"), + ( 'pass\x00word',"7361006c74",16,4096, SHA1, "56fa6aa75548099dcc37d7f03425e0c3"), + # From draft-josefsson-scrypt-kdf-01, Chapter 10 + ( 'passwd', '73616c74', 64, 1, SHA256, "55ac046e56e3089fec1691c22544b605f94185216dde0465e68b9d57c20dacbc49ca9cccf179b645991664b39d77ef317c71b845b1e30bd509112041d3a19783"), + ( 'Password', '4e61436c', 64, 80000, SHA256, "4ddcd8f60b98be21830cee5ef22701f9641a4418d04c0414aeff08876b34ab56a1d425a1225833549adb841b51c9b3176a272bdebba1d078478f62b397f33c8d"), + ) + + def test1(self): + # Test only for HMAC-SHA1 as PRF + + def prf_SHA1(p,s): + return HMAC.new(p,s,SHA1).digest() + + def prf_SHA256(p,s): + return HMAC.new(p,s,SHA256).digest() + + for i in range(len(self._testData)): + v = self._testData[i] + password = v[0] + salt = t2b(v[1]) + out_len = v[2] + iters = v[3] + hash_mod = v[4] + expected = t2b(v[5]) + + if hash_mod is SHA1: + res = PBKDF2(password, salt, out_len, iters) + self.assertEqual(res, expected) + + res = PBKDF2(password, 
salt, out_len, iters, prf_SHA1) + self.assertEqual(res, expected) + else: + res = PBKDF2(password, salt, out_len, iters, prf_SHA256) + self.assertEqual(res, expected) + + def test2(self): + # Verify that prf and hmac_hash_module are mutual exclusive + def prf_SHA1(p,s): + return HMAC.new(p,s,SHA1).digest() + + self.assertRaises(ValueError, PBKDF2, b("xxx"), b("yyy"), 16, 100, + prf=prf_SHA1, hmac_hash_module=SHA1) + + def test3(self): + # Verify that hmac_hash_module works like prf + + password = b("xxx") + salt = b("yyy") + + for hashmod in (MD5, SHA1, SHA224, SHA256, SHA384, SHA512): + + pr1 = PBKDF2(password, salt, 16, 100, + prf=lambda p, s: HMAC.new(p,s,hashmod).digest()) + pr2 = PBKDF2(password, salt, 16, 100, hmac_hash_module=hashmod) + + self.assertEqual(pr1, pr2) + + def test4(self): + # Verify that PBKDF2 can take bytes or strings as password or salt + k1 = PBKDF2("xxx", b("yyy"), 16, 10) + k2 = PBKDF2(b("xxx"), b("yyy"), 16, 10) + self.assertEqual(k1, k2) + + k1 = PBKDF2(b("xxx"), "yyy", 16, 10) + k2 = PBKDF2(b("xxx"), b("yyy"), 16, 10) + self.assertEqual(k1, k2) + + +class S2V_Tests(unittest.TestCase): + + # Sequence of test vectors. + # Each test vector is made up by: + # Item #0: a tuple of strings + # Item #1: an AES key + # Item #2: the result + # Item #3: the cipher module S2V is based on + # Everything is hex encoded + _testData = [ + + # RFC5297, A.1 + ( + ( '101112131415161718191a1b1c1d1e1f2021222324252627', + '112233445566778899aabbccddee' ), + 'fffefdfcfbfaf9f8f7f6f5f4f3f2f1f0', + '85632d07c6e8f37f950acd320a2ecc93', + AES + ), + + # RFC5297, A.2 + ( + ( '00112233445566778899aabbccddeeffdeaddadadeaddadaffeeddcc'+ + 'bbaa99887766554433221100', + '102030405060708090a0', + '09f911029d74e35bd84156c5635688c0', + '7468697320697320736f6d6520706c61'+ + '696e7465787420746f20656e63727970'+ + '74207573696e67205349562d414553'), + '7f7e7d7c7b7a79787776757473727170', + '7bdb6e3b432667eb06f4d14bff2fbd0f', + AES + ), + + ] + + def test1(self): + """Verify correctness of test vector""" + for tv in self._testData: + s2v = _S2V.new(t2b(tv[1]), tv[3]) + for s in tv[0]: + s2v.update(t2b(s)) + result = s2v.derive() + self.assertEqual(result, t2b(tv[2])) + + def test2(self): + """Verify that no more than 127(AES) and 63(TDES) + components are accepted.""" + key = bchr(0) * 8 + bchr(255) * 8 + for module in (AES, DES3): + s2v = _S2V.new(key, module) + max_comps = module.block_size*8-1 + for i in range(max_comps): + s2v.update(b("XX")) + self.assertRaises(TypeError, s2v.update, b("YY")) + + +class HKDF_Tests(unittest.TestCase): + + # Test vectors from RFC5869, Appendix A + # Each tuple is made up by: + # Item #0: hash module + # Item #1: secret + # Item #2: salt + # Item #3: context + # Item #4: expected result + _test_vector = ( + ( + SHA256, + "0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b", + "000102030405060708090a0b0c", + "f0f1f2f3f4f5f6f7f8f9", + 42, + "3cb25f25faacd57a90434f64d0362f2a" + + "2d2d0a90cf1a5a4c5db02d56ecc4c5bf" + + "34007208d5b887185865" + ), + ( + SHA256, + "000102030405060708090a0b0c0d0e0f" + + "101112131415161718191a1b1c1d1e1f" + + "202122232425262728292a2b2c2d2e2f" + + "303132333435363738393a3b3c3d3e3f" + + "404142434445464748494a4b4c4d4e4f", + "606162636465666768696a6b6c6d6e6f" + + "707172737475767778797a7b7c7d7e7f" + + "808182838485868788898a8b8c8d8e8f" + + "909192939495969798999a9b9c9d9e9f" + + "a0a1a2a3a4a5a6a7a8a9aaabacadaeaf", + "b0b1b2b3b4b5b6b7b8b9babbbcbdbebf" + + "c0c1c2c3c4c5c6c7c8c9cacbcccdcecf" + + "d0d1d2d3d4d5d6d7d8d9dadbdcdddedf" + + 
"e0e1e2e3e4e5e6e7e8e9eaebecedeeef" + + "f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff", + 82, + "b11e398dc80327a1c8e7f78c596a4934" + + "4f012eda2d4efad8a050cc4c19afa97c" + + "59045a99cac7827271cb41c65e590e09" + + "da3275600c2f09b8367793a9aca3db71" + + "cc30c58179ec3e87c14c01d5c1f3434f" + + "1d87" + ), + ( + SHA256, + "0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b", + None, + None, + 42, + "8da4e775a563c18f715f802a063c5a31" + + "b8a11f5c5ee1879ec3454e5f3c738d2d" + + "9d201395faa4b61a96c8" + ), + ( + SHA1, + "0b0b0b0b0b0b0b0b0b0b0b", + "000102030405060708090a0b0c", + "f0f1f2f3f4f5f6f7f8f9", + 42, + "085a01ea1b10f36933068b56efa5ad81" + + "a4f14b822f5b091568a9cdd4f155fda2" + + "c22e422478d305f3f896" + ), + ( + SHA1, + "000102030405060708090a0b0c0d0e0f" + + "101112131415161718191a1b1c1d1e1f" + + "202122232425262728292a2b2c2d2e2f" + + "303132333435363738393a3b3c3d3e3f" + + "404142434445464748494a4b4c4d4e4f", + "606162636465666768696a6b6c6d6e6f" + + "707172737475767778797a7b7c7d7e7f" + + "808182838485868788898a8b8c8d8e8f" + + "909192939495969798999a9b9c9d9e9f" + + "a0a1a2a3a4a5a6a7a8a9aaabacadaeaf", + "b0b1b2b3b4b5b6b7b8b9babbbcbdbebf" + + "c0c1c2c3c4c5c6c7c8c9cacbcccdcecf" + + "d0d1d2d3d4d5d6d7d8d9dadbdcdddedf" + + "e0e1e2e3e4e5e6e7e8e9eaebecedeeef" + + "f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff", + 82, + "0bd770a74d1160f7c9f12cd5912a06eb" + + "ff6adcae899d92191fe4305673ba2ffe" + + "8fa3f1a4e5ad79f3f334b3b202b2173c" + + "486ea37ce3d397ed034c7f9dfeb15c5e" + + "927336d0441f4c4300e2cff0d0900b52" + + "d3b4" + ), + ( + SHA1, + "0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b", + "", + "", + 42, + "0ac1af7002b3d761d1e55298da9d0506" + + "b9ae52057220a306e07b6b87e8df21d0" + + "ea00033de03984d34918" + ), + ( + SHA1, + "0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c0c", + None, + "", + 42, + "2c91117204d745f3500d636a62f64f0a" + + "b3bae548aa53d423b0d1f27ebba6f5e5" + + "673a081d70cce7acfc48" + ) + ) + + def test1(self): + for tv in self._test_vector: + secret, salt, info, exp = [ t2b(tv[x]) for x in (1,2,3,5) ] + key_len, hashmod = [ tv[x] for x in (4,0) ] + + output = HKDF(secret, key_len, salt, hashmod, 1, info) + self.assertEqual(output, exp) + + def test2(self): + ref = HKDF(b("XXXXXX"), 12, b("YYYY"), SHA1) + + # Same output, but this time split over 2 keys + key1, key2 = HKDF(b("XXXXXX"), 6, b("YYYY"), SHA1, 2) + self.assertEqual((ref[:6], ref[6:]), (key1, key2)) + + # Same output, but this time split over 3 keys + key1, key2, key3 = HKDF(b("XXXXXX"), 4, b("YYYY"), SHA1, 3) + self.assertEqual((ref[:4], ref[4:8], ref[8:]), (key1, key2, key3)) + + +class scrypt_Tests(unittest.TestCase): + + # Test vectors taken from + # https://tools.ietf.org/html/rfc7914 + # - password + # - salt + # - N + # - r + # - p + data = ( + ( + "", + "", + 16, # 2K + 1, + 1, + """ + 77 d6 57 62 38 65 7b 20 3b 19 ca 42 c1 8a 04 97 + f1 6b 48 44 e3 07 4a e8 df df fa 3f ed e2 14 42 + fc d0 06 9d ed 09 48 f8 32 6a 75 3a 0f c8 1f 17 + e8 d3 e0 fb 2e 0d 36 28 cf 35 e2 0c 38 d1 89 06 + """ + ), + ( + "password", + "NaCl", + 1024, # 1M + 8, + 16, + """ + fd ba be 1c 9d 34 72 00 78 56 e7 19 0d 01 e9 fe + 7c 6a d7 cb c8 23 78 30 e7 73 76 63 4b 37 31 62 + 2e af 30 d9 2e 22 a3 88 6f f1 09 27 9d 98 30 da + c7 27 af b9 4a 83 ee 6d 83 60 cb df a2 cc 06 40 + """ + ), + ( + "pleaseletmein", + "SodiumChloride", + 16384, # 16M + 8, + 1, + """ + 70 23 bd cb 3a fd 73 48 46 1c 06 cd 81 fd 38 eb + fd a8 fb ba 90 4f 8e 3e a9 b5 43 f6 54 5d a1 f2 + d5 43 29 55 61 3f 0f cf 62 d4 97 05 24 2a 9a f9 + e6 1e 85 dc 0d 65 1e 40 df cf 01 7b 45 57 58 87 + """ + ), + ( + "pleaseletmein", + 
"SodiumChloride", + 1048576, # 1G + 8, + 1, + """ + 21 01 cb 9b 6a 51 1a ae ad db be 09 cf 70 f8 81 + ec 56 8d 57 4a 2f fd 4d ab e5 ee 98 20 ad aa 47 + 8e 56 fd 8f 4b a5 d0 9f fa 1c 6d 92 7c 40 f4 c3 + 37 30 40 49 e8 a9 52 fb cb f4 5c 6f a7 7a 41 a4 + """ + ), + ) + + def setUp(self): + new_test_vectors = [] + for tv in self.data: + new_tv = TestVector() + new_tv.P = b(tv[0]) + new_tv.S = b(tv[1]) + new_tv.N = tv[2] + new_tv.r = tv[3] + new_tv.p = tv[4] + new_tv.output = t2b(tv[5]) + new_tv.dkLen = len(new_tv.output) + new_test_vectors.append(new_tv) + self.data = new_test_vectors + + def test2(self): + + for tv in self.data: + try: + output = scrypt(tv.P, tv.S, tv.dkLen, tv.N, tv.r, tv.p) + except ValueError as e: + if " 2 " in str(e) and tv.N >= 1048576: + import warnings + warnings.warn("Not enough memory to unit test scrypt() with N=1048576", RuntimeWarning) + continue + else: + raise e + self.assertEqual(output, tv.output) + + def test3(self): + ref = scrypt(b("password"), b("salt"), 12, 16, 1, 1) + + # Same output, but this time split over 2 keys + key1, key2 = scrypt(b("password"), b("salt"), 6, 16, 1, 1, 2) + self.assertEqual((ref[:6], ref[6:]), (key1, key2)) + + # Same output, but this time split over 3 keys + key1, key2, key3 = scrypt(b("password"), b("salt"), 4, 16, 1, 1, 3) + self.assertEqual((ref[:4], ref[4:8], ref[8:]), (key1, key2, key3)) + + +class bcrypt_Tests(unittest.TestCase): + + def test_negative_cases(self): + self.assertRaises(ValueError, bcrypt, b"1" * 73, 10) + self.assertRaises(ValueError, bcrypt, b"1" * 10, 3) + self.assertRaises(ValueError, bcrypt, b"1" * 10, 32) + self.assertRaises(ValueError, bcrypt, b"1" * 10, 4, salt=b"") + self.assertRaises(ValueError, bcrypt, b"1" * 10, 4, salt=b"1") + self.assertRaises(ValueError, bcrypt, b"1" * 10, 4, salt=b"1" * 17) + self.assertRaises(ValueError, bcrypt, b"1\x00" * 10, 4) + + def test_bytearray_mismatch(self): + ref = bcrypt("pwd", 4) + bcrypt_check("pwd", ref) + bref = bytearray(ref) + bcrypt_check("pwd", bref) + + wrong = ref[:-1] + bchr(bref[-1] ^ 0x01) + self.assertRaises(ValueError, bcrypt_check, "pwd", wrong) + + wrong = b"x" + ref[1:] + self.assertRaises(ValueError, bcrypt_check, "pwd", wrong) + + # https://github.com/patrickfav/bcrypt/wiki/Published-Test-Vectors + + def test_empty_password(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"", 4, b"zVHmKQtGGQob.b/Nc7l9NO", b"$2a$04$zVHmKQtGGQob.b/Nc7l9NO8UlrYcW05FiuCj/SxsFO/ZtiN9.mNzy"), + (b"", 5, b"zVHmKQtGGQob.b/Nc7l9NO", b"$2a$05$zVHmKQtGGQob.b/Nc7l9NOWES.1hkVBgy5IWImh9DOjKNU8atY4Iy"), + (b"", 6, b"zVHmKQtGGQob.b/Nc7l9NO", b"$2a$06$zVHmKQtGGQob.b/Nc7l9NOjOl7l4oz3WSh5fJ6414Uw8IXRAUoiaO"), + (b"", 7, b"zVHmKQtGGQob.b/Nc7l9NO", b"$2a$07$zVHmKQtGGQob.b/Nc7l9NOBsj1dQpBA1HYNGpIETIByoNX9jc.hOi"), + (b"", 8, b"zVHmKQtGGQob.b/Nc7l9NO", b"$2a$08$zVHmKQtGGQob.b/Nc7l9NOiLTUh/9MDpX86/DLyEzyiFjqjBFePgO"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_random_password_and_salt_short_pw(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"<.S.2K(Zq'", 4, b"VYAclAMpaXY/oqAo9yUpku", b"$2a$04$VYAclAMpaXY/oqAo9yUpkuWmoYywaPzyhu56HxXpVltnBIfmO9tgu"), + (b"5.rApO%5jA", 5, b"kVNDrnYKvbNr5AIcxNzeIu", b"$2a$05$kVNDrnYKvbNr5AIcxNzeIuRcyIF5cZk6UrwHGxENbxP5dVv.WQM/G"), + (b"oW++kSrQW^", 6, b"QLKkRMH9Am6irtPeSKN5sO", b"$2a$06$QLKkRMH9Am6irtPeSKN5sObJGr3j47cO6Pdf5JZ0AsJXuze0IbsNm"), + (b"ggJ\\KbTnDG", 7, 
b"4H896R09bzjhapgCPS/LYu", b"$2a$07$4H896R09bzjhapgCPS/LYuMzAQluVgR5iu/ALF8L8Aln6lzzYXwbq"), + (b"49b0:;VkH/", 8, b"hfvO2retKrSrx5f2RXikWe", b"$2a$08$hfvO2retKrSrx5f2RXikWeFWdtSesPlbj08t/uXxCeZoHRWDz/xFe"), + (b">9N^5jc##'", 9, b"XZLvl7rMB3EvM0c1.JHivu", b"$2a$09$XZLvl7rMB3EvM0c1.JHivuIDPJWeNJPTVrpjZIEVRYYB/mF6cYgJK"), + (b"\\$ch)s4WXp", 10, b"aIjpMOLK5qiS9zjhcHR5TO", b"$2a$10$aIjpMOLK5qiS9zjhcHR5TOU7v2NFDmcsBmSFDt5EHOgp/jeTF3O/q"), + (b"RYoj\\_>2P7", 12, b"esIAHiQAJNNBrsr5V13l7.", b"$2a$12$esIAHiQAJNNBrsr5V13l7.RFWWJI2BZFtQlkFyiWXjou05GyuREZa"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_random_password_and_salt_long_pw(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"^Q&\"]A`%/A(BVGt>QaX0M-#<Q148&f", 4, b"vrRP5vQxyD4LrqiLd/oWRO", b"$2a$04$vrRP5vQxyD4LrqiLd/oWROgrrGINsw3gb4Ga5x2sn01jNmiLVECl6"), + (b"nZa!rRf\\U;OL;R?>1ghq_+\":Y0CRmY", 5, b"YuQvhokOGVnevctykUYpKu", b"$2a$05$YuQvhokOGVnevctykUYpKutZD2pWeGGYn3auyLOasguMY3/0BbIyq"), + (b"F%uN/j>[GuB7-jB'_Yj!Tnb7Y!u^6)", 6, b"5L3vpQ0tG9O7k5gQ8nAHAe", b"$2a$06$5L3vpQ0tG9O7k5gQ8nAHAe9xxQiOcOLh8LGcI0PLWhIznsDt.S.C6"), + (b"Z>BobP32ub\"Cfe*Q<<WUq3rc=[GJr-", 7, b"hp8IdLueqE6qFh1zYycUZ.", b"$2a$07$hp8IdLueqE6qFh1zYycUZ.twmUH8eSTPQAEpdNXKMlwms9XfKqfea"), + (b"Ik&8N['7*[1aCc1lOm8\\jWeD*H$eZM", 8, b"2ANDTYCB9m7vf0Prh7rSru", b"$2a$08$2ANDTYCB9m7vf0Prh7rSrupqpO3jJOkIz2oW/QHB4lCmK7qMytGV6"), + (b"O)=%3[E$*q+>-q-=tRSjOBh8\\mLNW.", 9, b"nArqOfdCsD9kIbVnAixnwe", b"$2a$09$nArqOfdCsD9kIbVnAixnwe6s8QvyPYWtQBpEXKir2OJF9/oNBsEFe"), + (b"/MH51`!BP&0tj3%YCA;Xk%e3S`o\\EI", 10, b"ePiAc.s.yoBi3B6p1iQUCe", b"$2a$10$ePiAc.s.yoBi3B6p1iQUCezn3mraLwpVJ5XGelVyYFKyp5FZn/y.u"), + (b"ptAP\"mcg6oH.\";c0U2_oll.OKi<!ku", 12, b"aroG/pwwPj1tU5fl9a9pkO", b"$2a$12$aroG/pwwPj1tU5fl9a9pkO4rydAmkXRj/LqfHZOSnR6LGAZ.z.jwa"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_same_password_and_random_salt(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"Q/A:k3DP;X@=<0\"hg&9c", 4, b"wbgDTvLMtyjQlNK7fjqwyO", b"$2a$04$wbgDTvLMtyjQlNK7fjqwyOakBoACQuYh11.VsKNarF4xUIOBWgD6S"), + (b"Q/A:k3DP;X@=<0\"hg&9c", 5, b"zbAaOmloOhxiKItjznRqru", b"$2a$05$zbAaOmloOhxiKItjznRqrunRqHlu3MAa7pMGv26Rr3WwyfGcwoRm6"), + (b"Q/A:k3DP;X@=<0\"hg&9c", 6, b"aOK0bWUvLI0qLkc3ti5jyu", b"$2a$06$aOK0bWUvLI0qLkc3ti5jyuAIQoqRzuqoK09kQqQ6Ou/YKDhW50/qa"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_same_password_and_salt_increasing_cost_factor(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"o<&+X'F4AQ8H,LU,N`&r", 4, b"BK5u.QHk1Driey7bvnFTH.", b"$2a$04$BK5u.QHk1Driey7bvnFTH.3smGwxd91PtoK2GxH5nZ7pcBsYX4lMq"), + (b"o<&+X'F4AQ8H,LU,N`&r", 5, b"BK5u.QHk1Driey7bvnFTH.", b"$2a$05$BK5u.QHk1Driey7bvnFTH.t5P.jZvFBMzDB1IY4PwkkRPOyVbEtFG"), + (b"o<&+X'F4AQ8H,LU,N`&r", 6, b"BK5u.QHk1Driey7bvnFTH.", b"$2a$06$BK5u.QHk1Driey7bvnFTH.6Ea1Z5db2p25CPXZbxb/3OyKQagg3pa"), + (b"o<&+X'F4AQ8H,LU,N`&r", 7, b"BK5u.QHk1Driey7bvnFTH.", b"$2a$07$BK5u.QHk1Driey7bvnFTH.sruuQi8Lhv/0LWKDvNp3AGFk7ltdkm6"), + (b"o<&+X'F4AQ8H,LU,N`&r", 8, b"BK5u.QHk1Driey7bvnFTH.", b"$2a$08$BK5u.QHk1Driey7bvnFTH.IE7KsaUzc4m7gzAMlyUPUeiYyACWe0q"), + (b"o<&+X'F4AQ8H,LU,N`&r", 9, 
b"BK5u.QHk1Driey7bvnFTH.", b"$2a$09$BK5u.QHk1Driey7bvnFTH.1v4Xj1dwkp44QNg0cVAoQt4FQMMrvnS"), + (b"o<&+X'F4AQ8H,LU,N`&r", 10, b"BK5u.QHk1Driey7bvnFTH.", b"$2a$10$BK5u.QHk1Driey7bvnFTH.ESINe9YntUMcVgFDfkC.Vbhc9vMhNX2"), + (b"o<&+X'F4AQ8H,LU,N`&r", 12, b"BK5u.QHk1Driey7bvnFTH.", b"$2a$12$BK5u.QHk1Driey7bvnFTH.QM1/nnGe/f5cTzb6XTTi/vMzcAnycqG"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_long_passwords(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"g*3Q45=\"8NNgpT&mbMJ$Omfr.#ZeW?FP=CE$#roHd?97uL0F-]`?u73c\"\\[.\"*)qU34@VG", + 4, b"T2XJ5MOWvHQZRijl8LIKkO", b"$2a$04$T2XJ5MOWvHQZRijl8LIKkOQKIyX75KBfuLsuRYOJz5OjwBNF2lM8a"), + (b"\\M+*8;&QE=Ll[>5?Ui\"^ai#iQH7ZFtNMfs3AROnIncE9\"BNNoEgO[[*Yk8;RQ(#S,;I+aT", + 5, b"wgkOlGNXIVE2fWkT3gyRoO", b"$2a$05$wgkOlGNXIVE2fWkT3gyRoOqWi4gbi1Wv2Q2Jx3xVs3apl1w.Wtj8C"), + (b"M.E1=dt<.L0Q&p;94NfGm_Oo23+Kpl@M5?WIAL.[@/:'S)W96G8N^AWb7_smmC]>7#fGoB", + 6, b"W9zTCl35nEvUukhhFzkKMe", b"$2a$06$W9zTCl35nEvUukhhFzkKMekjT9/pj7M0lihRVEZrX3m8/SBNZRX7i"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_increasing_password_length(self): + # password, cost, salt, bcrypt hash + tvs = [ + (b"a", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.l4WvgHIVg17ZawDIrDM2IjlE64GDNQS"), + (b"aa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.AyUxBk.ThHlsLvRTH7IqcG7yVHJ3SXq"), + (b"aaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.BxOVac5xPB6XFdRc/ZrzM9FgZkqmvbW"), + (b"aaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.Qbr209bpCtfl5hN7UQlG/L4xiD3AKau"), + (b"aaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.oWszihPjDZI0ypReKsaDOW1jBl7oOii"), + (b"aaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ./k.Xxn9YiqtV/sxh3EHbnOHd0Qsq27K"), + (b"aaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.PYJqRFQbgRbIjMd5VNKmdKS4sBVOyDe"), + (b"aaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ..VMYfzaw1wP/SGxowpLeGf13fxCCt.q"), + (b"aaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.5B0p054nO5WgAD1n04XslDY/bqY9RJi"), + (b"aaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.INBTgqm7sdlBJDg.J5mLMSRK25ri04y"), + (b"aaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.s3y7CdFD0OR5p6rsZw/eZ.Dla40KLfm"), + (b"aaaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.Jx742Djra6Q7PqJWnTAS.85c28g.Siq"), + (b"aaaaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.oKMXW3EZcPHcUV0ib5vDBnh9HojXnLu"), + (b"aaaaaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.w6nIjWpDPNSH5pZUvLjC1q25ONEQpeS"), + (b"aaaaaaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.k1b2/r9A/hxdwKEKurg6OCn4MwMdiGq"), + (b"aaaaaaaaaaaaaaaa", 4, b"5DCebwootqWMCp59ISrMJ.", b"$2a$04$5DCebwootqWMCp59ISrMJ.3prCNHVX1Ws.7Hm2bJxFUnQOX9f7DFa"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_non_ascii_characters(self): + # password, cost, salt, bcrypt hash + tvs = [ + ("àèìòùÀÈÌÒÙáéíóúýÃÉÃÓÚÃðÃ", 4, 
b"D3qS2aoTVyqM7z8v8crLm.", b"$2a$04$D3qS2aoTVyqM7z8v8crLm.3nKt4CzBZJbyFB.ZebmfCvRw7BGs.Xm"), + ("àèìòùÀÈÌÒÙáéíóúýÃÉÃÓÚÃðÃ", 5, b"VA1FujiOCMPkUHQ8kF7IaO", b"$2a$05$VA1FujiOCMPkUHQ8kF7IaOg7NGaNvpxwWzSluQutxEVmbZItRTsAa"), + ("àèìòùÀÈÌÒÙáéíóúýÃÉÃÓÚÃðÃ", 6, b"TXiaNrPeBSz5ugiQlehRt.", b"$2a$06$TXiaNrPeBSz5ugiQlehRt.gwpeDQnXWteQL4z2FulouBr6G7D9KUi"), + ("âêîôûÂÊÎÔÛãñõÃÑÕäëïöüÿ", 4, b"YTn1Qlvps8e1odqMn6G5x.", b"$2a$04$YTn1Qlvps8e1odqMn6G5x.85pqKql6w773EZJAExk7/BatYAI4tyO"), + ("âêîôûÂÊÎÔÛãñõÃÑÕäëïöüÿ", 5, b"C.8k5vJKD2NtfrRI9o17DO", b"$2a$05$C.8k5vJKD2NtfrRI9o17DOfIW0XnwItA529vJnh2jzYTb1QdoY0py"), + ("âêîôûÂÊÎÔÛãñõÃÑÕäëïöüÿ", 6, b"xqfRPj3RYAgwurrhcA6uRO", b"$2a$06$xqfRPj3RYAgwurrhcA6uROtGlXDp/U6/gkoDYHwlubtcVcNft5.vW"), + ("ÄËÃÖÜŸåÅæÆœŒßçÇøØ¢¿¡€", 4, b"y8vGgMmr9EdyxP9rmMKjH.", b"$2a$04$y8vGgMmr9EdyxP9rmMKjH.wv2y3r7yRD79gykQtmb3N3zrwjKsyay"), + ("ÄËÃÖÜŸåÅæÆœŒßçÇøØ¢¿¡€", 5, b"iYH4XIKAOOm/xPQs7xKP1u", b"$2a$05$iYH4XIKAOOm/xPQs7xKP1upD0cWyMn3Jf0ZWiizXbEkVpS41K1dcO"), + ("ÄËÃÖÜŸåÅæÆœŒßçÇøØ¢¿¡€", 6, b"wCOob.D0VV8twafNDB2ape", b"$2a$06$wCOob.D0VV8twafNDB2apegiGD5nqF6Y1e6K95q6Y.R8C4QGd265q"), + ("ΔημοσιεÏθηκεστηνΕφημεÏίδατης", 4, b"E5SQtS6P4568MDXW7cyUp.", b"$2a$04$E5SQtS6P4568MDXW7cyUp.18wfDisKZBxifnPZjAI1d/KTYMfHPYO"), + ("ÐБбВвГгДдЕеÐёЖжЗзИиЙйКкЛлМмÐ", 4, b"03e26gQFHhQwRNf81/ww9.", b"$2a$04$03e26gQFHhQwRNf81/ww9.p1UbrNwxpzWjLuT.zpTLH4t/w5WhAhC"), + ("нОоПпРрСÑТтУуФфХхЦцЧчШшЩщЪъЫыЬьЭÑЮю", 4, b"PHNoJwpXCfe32nUtLv2Upu", b"$2a$04$PHNoJwpXCfe32nUtLv2UpuhJXOzd4k7IdFwnEpYwfJVCZ/f/.8Pje"), + ("電电電島岛島兔兔兎龜龟亀國国国å€åŒºåŒº", 4, b"wU4/0i1TmNl2u.1jIwBX.u", b"$2a$04$wU4/0i1TmNl2u.1jIwBX.uZUaOL3Rc5ID7nlQRloQh6q5wwhV/zLW"), + ("诶比伊艾弗豆è´å°”ç»´å¾è‰¾å°ºå¼€è‰¾ä¸ç»´è´¼å¾·", 4, b"P4kreGLhCd26d4WIy7DJXu", b"$2a$04$P4kreGLhCd26d4WIy7DJXusPkhxLvBouzV6OXkL5EB0jux0osjsry"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, result) + bcrypt_check(password, result) + + def test_special_case_salt(self): + # password, cost, salt, bcrypt hash + tvs = [ + ("-O_=*N!2JP", 4, b"......................", b"$2a$04$......................JjuKLOX9OOwo5PceZZXSkaLDvdmgb82"), + ("7B[$Q<4b>U", 5, b"......................", b"$2a$05$......................DRiedDQZRL3xq5A5FL8y7/6NM8a2Y5W"), + (">d5-I_8^.h", 6, b"......................", b"$2a$06$......................5Mq1Ng8jgDY.uHNU4h5p/x6BedzNH2W"), + (")V`/UM/]1t", 4, b".OC/.OC/.OC/.OC/.OC/.O", b"$2a$04$.OC/.OC/.OC/.OC/.OC/.OQIvKRDAam.Hm5/IaV/.hc7P8gwwIbmi"), + (":@t2.bWuH]", 5, b".OC/.OC/.OC/.OC/.OC/.O", b"$2a$05$.OC/.OC/.OC/.OC/.OC/.ONDbUvdOchUiKmQORX6BlkPofa/QxW9e"), + ("b(#KljF5s\"", 6, b".OC/.OC/.OC/.OC/.OC/.O", b"$2a$06$.OC/.OC/.OC/.OC/.OC/.OHfTd9e7svOu34vi1PCvOcAEq07ST7.K"), + ("@3YaJ^Xs]*", 4, b"eGA.eGA.eGA.eGA.eGA.e.", b"$2a$04$eGA.eGA.eGA.eGA.eGA.e.stcmvh.R70m.0jbfSFVxlONdj1iws0C"), + ("'\"5\\!k*C(p", 5, b"eGA.eGA.eGA.eGA.eGA.e.", b"$2a$05$eGA.eGA.eGA.eGA.eGA.e.vR37mVSbfdHwu.F0sNMvgn8oruQRghy"), + ("edEu7C?$'W", 6, b"eGA.eGA.eGA.eGA.eGA.e.", b"$2a$06$eGA.eGA.eGA.eGA.eGA.e.tSq0FN8MWHQXJXNFnHTPQKtA.n2a..G"), + ("N7dHmg\\PI^", 4, b"999999999999999999999u", b"$2a$04$999999999999999999999uCZfA/pLrlyngNDMq89r1uUk.bQ9icOu"), + ("\"eJuHh!)7*", 5, b"999999999999999999999u", b"$2a$05$999999999999999999999uj8Pfx.ufrJFAoWFLjapYBS5vVEQQ/hK"), + ("ZeDRJ:_tu:", 6, b"999999999999999999999u", b"$2a$06$999999999999999999999u6RB0P9UmbdbQgjoQFEJsrvrKe.BoU6q"), + ] + + for (idx, (password, cost, salt64, result)) in enumerate(tvs): + x = bcrypt(password, cost, salt=_bcrypt_decode(salt64)) + self.assertEqual(x, 
+class TestVectorsHKDFWycheproof(unittest.TestCase):
+
+ def __init__(self, wycheproof_warnings):
+ unittest.TestCase.__init__(self)
+ self._wycheproof_warnings = wycheproof_warnings
+ self._id = "None"
+
+ def add_tests(self, filename):
+
+ def filter_algo(root):
+ algo_name = root['algorithm']
+ if algo_name == "HKDF-SHA-1":
+ return SHA1
+ elif algo_name == "HKDF-SHA-256":
+ return SHA256
+ elif algo_name == "HKDF-SHA-384":
+ return SHA384
+ elif algo_name == "HKDF-SHA-512":
+ return SHA512
+ else:
+ raise ValueError("Unknown algorithm " + algo_name)
+
+ def filter_size(unit):
+ return int(unit['size'])
+
+ result = load_test_vectors_wycheproof(("Protocol", "wycheproof"),
+ filename,
+ "Wycheproof HKDF (%s)" % filename,
+ root_tag={'hash_module': filter_algo},
+ unit_tag={'size': filter_size})
+ return result
+
+ def setUp(self):
+ self.tv = []
+ self.add_tests("hkdf_sha1_test.json")
+ self.add_tests("hkdf_sha256_test.json")
+ self.add_tests("hkdf_sha384_test.json")
+ self.add_tests("hkdf_sha512_test.json")
+
+ def shortDescription(self):
+ return self._id
+
+ def warn(self, tv):
+ if tv.warning and self._wycheproof_warnings:
+ import warnings
+ warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment))
+
+ def test_verify(self, tv):
+ self._id = "Wycheproof HKDF Test #%d (%s, %s)" % (tv.id, tv.comment, tv.filename)
+
+ try:
+ key = HKDF(tv.ikm, tv.size, tv.salt, tv.hash_module, 1, tv.info)
+ except ValueError:
+ assert not tv.valid
+ else:
+ if key != tv.okm:
+ assert not tv.valid
+ else:
+ assert tv.valid
+ self.warn(tv)
+
+ def runTest(self):
+ for tv in self.tv:
+ self.test_verify(tv)
+
+
+def load_hash_by_name(hash_name):
+ return __import__("Cryptodome.Hash." + hash_name, globals(), locals(), ["new"])
+
+
+class SP800_108_Counter_Tests(unittest.TestCase):
+
+ def test_negative_zeroes(self):
+ def prf(s, x):
+ return HMAC.new(s, x, SHA256).digest()
+
+ self.assertRaises(ValueError, SP800_108_Counter, b'0' * 16, 1, prf,
+ label=b'A\x00B')
+ self.assertRaises(ValueError, SP800_108_Counter, b'0' * 16, 1, prf,
+ context=b'A\x00B')
+
+ def test_multiple_keys(self):
+ def prf(s, x):
+ return HMAC.new(s, x, SHA256).digest()
+
+ key = b'0' * 16
+ expected = SP800_108_Counter(key, 2*3*23, prf)
+ for r in (1, 2, 3, 23):
+ dks = SP800_108_Counter(key, r, prf, 138//r)
+ self.assertEqual(len(dks), 138//r)
+ self.assertEqual(len(dks[0]), r)
+ self.assertEqual(b''.join(dks), expected)
+
+
+def add_tests_sp800_108_counter(cls):
+
+ test_vectors_sp800_108_counter = load_test_vectors(("Protocol", ),
+ "KDF_SP800_108_COUNTER.txt",
+ "NIST SP 800 108 KDF Counter Mode",
+ {'count': lambda x: int(x)},
+ ) or []
+
+ mac_type = None
+ for idx, tv in enumerate(test_vectors_sp800_108_counter):
+
+ if isinstance(tv, str):
+ res = re.match(r"\[HMAC-(SHA-[0-9]+)\]", tv)
+ if res:
+ hash_name = res.group(1).replace("-", "")
+ hash_module = load_hash_by_name(hash_name)
+ mac_type = "hmac"
+ continue
+ res = re.match(r"\[CMAC-AES-128\]", tv)
+ if res:
+ mac_type = "cmac"
+ continue
+ assert res
+
+ if mac_type == "hmac":
+ def prf(s, x, hash_module=hash_module):
+ return HMAC.new(s, x, hash_module).digest()
+ elif mac_type == "cmac":
+ def prf(s, x, hash_module=hash_module):
+ return CMAC.new(s, x, AES).digest()
+ continue
+
+ def kdf_test(self, prf=prf, kin=tv.kin, label=tv.label,
+ context=tv.context, kout=tv.kout, count=tv.count):
+ result = SP800_108_Counter(kin, len(kout), prf, 1, label, context)
+ assert(len(result) == len(kout))
+ self.assertEqual(result, kout)
+
+ setattr(cls, "test_kdf_sp800_108_counter_%d" % idx, kdf_test)
+
+
+add_tests_sp800_108_counter(SP800_108_Counter_Tests)
+
+
+def get_tests(config={}):
+ wycheproof_warnings = config.get('wycheproof_warnings')
+
+ if not config.get('slow_tests'):
+ PBKDF2_Tests._testData = PBKDF2_Tests._testData[:3]
+ scrypt_Tests.data = scrypt_Tests.data[:3]
+
+ tests = []
+ tests += list_test_cases(PBKDF1_Tests)
+ tests += list_test_cases(PBKDF2_Tests)
+ tests += list_test_cases(S2V_Tests)
+ tests += list_test_cases(HKDF_Tests)
+ tests += [TestVectorsHKDFWycheproof(wycheproof_warnings)]
+ tests += list_test_cases(scrypt_Tests)
+ tests += list_test_cases(bcrypt_Tests)
+ tests += list_test_cases(SP800_108_Counter_Tests)
+
+ return tests
+
+
+if __name__ == '__main__':
+ suite = lambda: unittest.TestSuite(get_tests())
+ unittest.main(defaultTest='suite')
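The SP 800-108 counter-mode KDF tested above takes the PRF as a caller-supplied callable, so the choice of MAC stays with the caller. A minimal sketch of a direct call, with illustrative inputs; note that, as the tests assert, neither the label nor the context may contain a zero byte:

    from Cryptodome.Hash import HMAC, SHA256
    from Cryptodome.Protocol.KDF import SP800_108_Counter

    def prf(s, x):
        # HMAC-SHA256 as the PRF, keyed with s over message x
        return HMAC.new(s, x, SHA256).digest()

    # One 32-byte subkey, separated from other uses by label and context
    subkey = SP800_108_Counter(b'0' * 16, 32, prf, 1, b"enc", b"session")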
diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_SecretSharing.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_SecretSharing.py new file mode 100644 index 0000000..57d97df --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_SecretSharing.py @@ -0,0 +1,267 @@
+#
+# SelfTest/Protocol/test_secret_sharing.py: Self-test for secret sharing protocols
+#
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin <helderijs@gmail.com>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+from unittest import main, TestCase, TestSuite
+from binascii import unhexlify, hexlify
+
+from Cryptodome.Util.py3compat import *
+from Cryptodome.SelfTest.st_common import list_test_cases
+
+from Cryptodome.Protocol.SecretSharing import Shamir, _Element, \
+ _mult_gf2, _div_gf2
+
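+# A compact view of the (k, n) threshold interface exercised below: the
+# secret must be exactly 16 bytes, Shamir.split() returns n indexed shares,
+# and any k of them recombine, e.g.
+#
+#     shares = Shamir.split(2, 5, b'\x00' * 16)   # [(1, b'...'), ..., (5, b'...')]
+#     assert Shamir.combine(shares[:2]) == b'\x00' * 16
+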
+class GF2_Tests(TestCase):
+
+ def test_mult_gf2(self):
+ # Prove mult by zero
+ x = _mult_gf2(0,0)
+ self.assertEqual(x, 0)
+
+ # Prove mult by unity
+ x = _mult_gf2(34, 1)
+ self.assertEqual(x, 34)
+
+ z = 3 # (x+1)
+ y = _mult_gf2(z, z)
+ self.assertEqual(y, 5) # (x+1)^2 = x^2 + 1
+ y = _mult_gf2(y, z)
+ self.assertEqual(y, 15) # (x+1)^3 = x^3 + x^2 + x + 1
+ y = _mult_gf2(y, z)
+ self.assertEqual(y, 17) # (x+1)^4 = x^4 + 1
+
+ # Prove linearity works
+ comps = [1, 4, 128, 2**34]
+ sum_comps = 1+4+128+2**34
+ y = 908
+ z = _mult_gf2(sum_comps, y)
+ w = 0
+ for x in comps:
+ w ^= _mult_gf2(x, y)
+ self.assertEqual(w, z)
+
+ def test_div_gf2(self):
+ from Cryptodome.Util.number import size as deg
+
+ x, y = _div_gf2(567, 7)
+ self.assertTrue(deg(y) < deg(7))
+
+ w = _mult_gf2(x, 7) ^ y
+ self.assertEqual(567, w)
+
+ x, y = _div_gf2(7, 567)
+ self.assertEqual(x, 0)
+ self.assertEqual(y, 7)
+
+class Element_Tests(TestCase):
+
+ def test1(self):
+ # Test encodings
+ e = _Element(256)
+ self.assertEqual(int(e), 256)
+ self.assertEqual(e.encode(), bchr(0)*14 + b("\x01\x00"))
+
+ e = _Element(bchr(0)*14 + b("\x01\x10"))
+ self.assertEqual(int(e), 0x110)
+ self.assertEqual(e.encode(), bchr(0)*14 + b("\x01\x10"))
+
+ # Only a 16-byte string is a valid encoding
+ self.assertRaises(ValueError, _Element, bchr(0))
+
+ def test2(self):
+ # Test addition
+ e = _Element(0x10)
+ f = _Element(0x0A)
+ self.assertEqual(int(e+f), 0x1A)
+
+ def test3(self):
+ # Test multiplication
+ zero = _Element(0)
+ one = _Element(1)
+ two = _Element(2)
+
+ x = _Element(6) * zero
+ self.assertEqual(int(x), 0)
+
+ x = _Element(6) * one
+ self.assertEqual(int(x), 6)
+
+ x = _Element(2**127) * two
+ self.assertEqual(int(x), 1 + 2 + 4 + 128)
+
+ def test4(self):
+ # Test inversion
+ one = _Element(1)
+
+ x = one.inverse()
+ self.assertEqual(int(x), 1)
+
+ x = _Element(82323923)
+ y = x.inverse()
+ self.assertEqual(int(x * y), 1)
+
+class Shamir_Tests(TestCase):
+
+ def test1(self):
+ # Test splitting
+ shares = Shamir.split(2, 3, bchr(90)*16)
+ self.assertEqual(len(shares), 3)
+ for index in range(3):
+ self.assertEqual(shares[index][0], index+1)
+ self.assertEqual(len(shares[index][1]), 16)
+
+ def test2(self):
+ # Test recombine
+ from itertools import permutations
+
+ test_vectors = (
+ (2, "d9fe73909bae28b3757854c0af7ad405",
+ "1-594ae8964294174d95c33756d2504170",
+ "2-d897459d29da574eb40e93ec552ffe6e",
+ "3-5823de9bf0e068b054b5f07a28056b1b",
+ "4-db2c1f8bff46d748f795da995bd080cb"),
+ (2, "bf4f902d9a7efafd1f3ffd9291fd5de9",
+ "1-557bd3b0748064b533469722d1cc7935",
+ "2-6b2717164783c66d47cd28f2119f14d0",
+ "3-8113548ba97d58256bb4424251ae300c",
+ "4-179e9e5a218483ddaeda57539139cf04"),
+ (3, "ec96aa5c14c9faa699354cf1da74e904",
+ "1-64579fbf1908d66f7239bf6e2b4e41e1",
+ "2-6cd9428df8017b52322561e8c672ae3e",
+ "3-e418776ef5c0579bd9299277374806dd",
+ "4-ab3f77a0107398d23b323e581bb43f5d",
+ "5-23fe42431db2b41bd03ecdc7ea8e97ac"),
+ (3, "44cf249b68b80fcdc27b47be60c2c145",
+ "1-d6515a3905cd755119b86e311c801e31",
+ "2-16693d9ac9f10c254036ced5f8917fa3",
+ "3-84f74338a48476b99bf5e75a84d3a0d1",
+ "4-3fe8878dc4a5d35811cf3cbcd33dbe52",
+ "5-ad76f92fa9d0a9c4ca0c1533af7f6132"),
+ (5,
"5398717c982db935d968eebe53a47f5a", + "1-be7be2dd4c068e7ef576aaa1b1c11b01", + "2-f821f5848441cb98b3eb467e2733ee21", + "3-25ee52f53e203f6e29a0297b5ab486b5", + "4-fc9fb58ef74dab947fbf9acd9d5d83cd", + "5-b1949cce46d81552e65f248d3f74cc5c", + "6-d64797f59977c4d4a7956ad916da7699", + "7-ab608a6546a8b9af8820ff832b1135c7"), + (5, "4a78db90fbf35da5545d2fb728e87596", + "1-08daf9a25d8aa184cfbf02b30a0ed6a0", + "2-dda28261e36f0b14168c2cf153fb734e", + "3-e9fdec5505d674a57f9836c417c1ecaa", + "4-4dce5636ae06dee42d2c82e65f06c735", + "5-3963dc118afc2ba798fa1d452b28ef00", + "6-6dfe6ff5b09e94d2f84c382b12f42424", + "7-6faea9d4d4a4e201bf6c90b9000630c3"), + (10, "eccbf6d66d680b49b073c4f1ddf804aa", + "01-7d8ac32fe4ae209ead1f3220fda34466", + "02-f9144e76988aad647d2e61353a6e96d5", + "03-b14c3b80179203363922d60760271c98", + "04-770bb2a8c28f6cee89e00f4d5cc7f861", + "05-6e3d7073ea368334ef67467871c66799", + "06-248792bc74a98ce024477c13c8fb5f8d", + "07-fcea4640d2db820c0604851e293d2487", + "08-2776c36fb714bb1f8525a0be36fc7dba", + "09-6ee7ac8be773e473a4bf75ee5f065762", + "10-33657fc073354cf91d4a68c735aacfc8", + "11-7645c65094a5868bf225c516fdee2d0c", + "12-840485aacb8226631ecd9c70e3018086"), + (10, "377e63bdbb5f7d4dc58a483d035212bb", + "01-32c53260103be431c843b1a633afe3bd", + "02-0107eb16cb8695084d452d2cc50bc7d6", + "03-df1e5c66cd755287fb0446faccd72a06", + "04-361bbcd5d40797f49dfa1898652da197", + "05-160d3ad1512f7dec7fd9344aed318591", + "06-659af6d95df4f25beca4fb9bfee3b7e8", + "07-37f3b208977bad50b3724566b72bfa9d", + "08-6c1de2dfc69c2986142c26a8248eb316", + "09-5e19220837a396bd4bc8cd685ff314c3", + "10-86e7b864fb0f3d628e46d50c1ba92f1c", + "11-065d0082c80b1aea18f4abe0c49df72e", + "12-84a09430c1d20ea9f388f3123c3733a3"), + ) + + def get_share(p): + pos = p.find('-') + return int(p[:pos]), unhexlify(p[pos + 1:]) + + for tv in test_vectors: + k = tv[0] + secret = unhexlify(tv[1]) + max_perms = 10 + for perm, shares_idx in enumerate(permutations(range(2, len(tv)), k)): + if perm > max_perms: + break + shares = [ get_share(tv[x]) for x in shares_idx ] + result = Shamir.combine(shares, True) + self.assertEqual(secret, result) + + def test3(self): + # Loopback split/recombine + secret = unhexlify(b("000102030405060708090a0b0c0d0e0f")) + + shares = Shamir.split(2, 3, secret) + + secret2 = Shamir.combine(shares[:2]) + self.assertEqual(secret, secret2) + + secret3 = Shamir.combine([ shares[0], shares[2] ]) + self.assertEqual(secret, secret3) + + def test4(self): + # Loopback split/recombine (SSSS) + secret = unhexlify(b("000102030405060708090a0b0c0d0e0f")) + + shares = Shamir.split(2, 3, secret, ssss=True) + + secret2 = Shamir.combine(shares[:2], ssss=True) + self.assertEqual(secret, secret2) + + def test5(self): + # Detect duplicate shares + secret = unhexlify(b("000102030405060708090a0b0c0d0e0f")) + + shares = Shamir.split(2, 3, secret) + self.assertRaises(ValueError, Shamir.combine, (shares[0], shares[0])) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(GF2_Tests) + tests += list_test_cases(Element_Tests) + tests += list_test_cases(Shamir_Tests) + return tests + +if __name__ == '__main__': + suite = lambda: TestSuite(get_tests()) + main(defaultTest='suite') + diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_rfc1751.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_rfc1751.py new file mode 100644 index 0000000..a79769c --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Protocol/test_rfc1751.py @@ -0,0 +1,62 @@ +# +# Test script 
for Cryptodome.Util.RFC1751. +# +# Part of the Python Cryptography Toolkit +# +# Written by Andrew Kuchling and others +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +__revision__ = "$Id$" + +import binascii +import unittest +from Cryptodome.Util import RFC1751 +from Cryptodome.Util.py3compat import * + +test_data = [('EB33F77EE73D4053', 'TIDE ITCH SLOW REIN RULE MOT'), + ('CCAC2AED591056BE4F90FD441C534766', + 'RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE'), + ('EFF81F9BFBC65350920CDD7416DE8009', + 'TROD MUTE TAIL WARM CHAR KONG HAAG CITY BORE O TEAL AWL') + ] + +class RFC1751Test_k2e (unittest.TestCase): + + def runTest (self): + "Check converting keys to English" + for key, words in test_data: + key=binascii.a2b_hex(b(key)) + self.assertEqual(RFC1751.key_to_english(key), words) + +class RFC1751Test_e2k (unittest.TestCase): + + def runTest (self): + "Check converting English strings to keys" + for key, words in test_data: + key=binascii.a2b_hex(b(key)) + self.assertEqual(RFC1751.english_to_key(words), key) + +# class RFC1751Test + +def get_tests(config={}): + return [RFC1751Test_k2e(), RFC1751Test_e2k()] + +if __name__ == "__main__": + unittest.main() diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/__init__.py new file mode 100644 index 0000000..7cdd320 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/__init__.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/__init__.py: Self-test for public key crypto +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-test for public-key crypto""" + +import unittest +from Cryptodome.SelfTest.PublicKey import (test_DSA, test_RSA, + test_ECC_NIST, test_ECC_25519, test_ECC_448, + test_import_DSA, test_import_RSA, + test_import_ECC, test_ElGamal) + + +def get_tests(config={}): + tests = [] + tests += test_DSA.get_tests(config=config) + tests += test_RSA.get_tests(config=config) + tests += test_ECC_NIST.get_tests(config=config) + tests += test_ECC_25519.get_tests(config=config) + tests += test_ECC_448.get_tests(config=config) + + tests += test_import_DSA.get_tests(config=config) + tests += test_import_RSA.get_tests(config=config) + tests += test_import_ECC.get_tests(config=config) + + tests += test_ElGamal.get_tests(config=config) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_DSA.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_DSA.py new file mode 100644 index 0000000..160d882 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_DSA.py @@ -0,0 +1,247 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/test_DSA.py: Self-test for the DSA primitive +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.PublicKey.DSA""" + +import os +from Cryptodome.Util.py3compat import * + +import unittest +from Cryptodome.SelfTest.st_common import list_test_cases, a2b_hex, b2a_hex + +def _sws(s): + """Remove whitespace from a text or byte string""" + if isinstance(s,str): + return "".join(s.split()) + else: + return b("").join(s.split()) + +class DSATest(unittest.TestCase): + # Test vector from "Appendix 5. Example of the DSA" of + # "Digital Signature Standard (DSS)", + # U.S. Department of Commerce/National Institute of Standards and Technology + # FIPS 186-2 (+Change Notice), 2000 January 27. 
+ # http://csrc.nist.gov/publications/fips/fips186-2/fips186-2-change1.pdf + + y = _sws("""19131871 d75b1612 a819f29d 78d1b0d7 346f7aa7 7bb62a85 + 9bfd6c56 75da9d21 2d3a36ef 1672ef66 0b8c7c25 5cc0ec74 + 858fba33 f44c0669 9630a76b 030ee333""") + + g = _sws("""626d0278 39ea0a13 413163a5 5b4cb500 299d5522 956cefcb + 3bff10f3 99ce2c2e 71cb9de5 fa24babf 58e5b795 21925c9c + c42e9f6f 464b088c c572af53 e6d78802""") + + p = _sws("""8df2a494 492276aa 3d25759b b06869cb eac0d83a fb8d0cf7 + cbb8324f 0d7882e5 d0762fc5 b7210eaf c2e9adac 32ab7aac + 49693dfb f83724c2 ec0736ee 31c80291""") + + q = _sws("""c773218c 737ec8ee 993b4f2d ed30f48e dace915f""") + + x = _sws("""2070b322 3dba372f de1c0ffc 7b2e3b49 8b260614""") + + k = _sws("""358dad57 1462710f 50e254cf 1a376b2b deaadfbf""") + k_inverse = _sws("""0d516729 8202e49b 4116ac10 4fc3f415 ae52f917""") + m = b2a_hex(b("abc")) + m_hash = _sws("""a9993e36 4706816a ba3e2571 7850c26c 9cd0d89d""") + r = _sws("""8bac1ab6 6410435c b7181f95 b16ab97c 92b341c0""") + s = _sws("""41e2345f 1f56df24 58f426d1 55b4ba2d b6dcd8c8""") + + def setUp(self): + global DSA, Random, bytes_to_long, size + from Cryptodome.PublicKey import DSA + from Cryptodome import Random + from Cryptodome.Util.number import bytes_to_long, inverse, size + + self.dsa = DSA + + def test_generate_1arg(self): + """DSA (default implementation) generated key (1 argument)""" + dsaObj = self.dsa.generate(1024) + self._check_private_key(dsaObj) + pub = dsaObj.public_key() + self._check_public_key(pub) + + def test_generate_2arg(self): + """DSA (default implementation) generated key (2 arguments)""" + dsaObj = self.dsa.generate(1024, Random.new().read) + self._check_private_key(dsaObj) + pub = dsaObj.public_key() + self._check_public_key(pub) + + def test_construct_4tuple(self): + """DSA (default implementation) constructed key (4-tuple)""" + (y, g, p, q) = [bytes_to_long(a2b_hex(param)) for param in (self.y, self.g, self.p, self.q)] + dsaObj = self.dsa.construct((y, g, p, q)) + self._test_verification(dsaObj) + + def test_construct_5tuple(self): + """DSA (default implementation) constructed key (5-tuple)""" + (y, g, p, q, x) = [bytes_to_long(a2b_hex(param)) for param in (self.y, self.g, self.p, self.q, self.x)] + dsaObj = self.dsa.construct((y, g, p, q, x)) + self._test_signing(dsaObj) + self._test_verification(dsaObj) + + def test_construct_bad_key4(self): + (y, g, p, q) = [bytes_to_long(a2b_hex(param)) for param in (self.y, self.g, self.p, self.q)] + tup = (y, g, p+1, q) + self.assertRaises(ValueError, self.dsa.construct, tup) + + tup = (y, g, p, q+1) + self.assertRaises(ValueError, self.dsa.construct, tup) + + tup = (y, 1, p, q) + self.assertRaises(ValueError, self.dsa.construct, tup) + + def test_construct_bad_key5(self): + (y, g, p, q, x) = [bytes_to_long(a2b_hex(param)) for param in (self.y, self.g, self.p, self.q, self.x)] + tup = (y, g, p, q, x+1) + self.assertRaises(ValueError, self.dsa.construct, tup) + + tup = (y, g, p, q, q+10) + self.assertRaises(ValueError, self.dsa.construct, tup) + + def _check_private_key(self, dsaObj): + # Check capabilities + self.assertEqual(1, dsaObj.has_private()) + self.assertEqual(1, dsaObj.can_sign()) + self.assertEqual(0, dsaObj.can_encrypt()) + + # Sanity check key data + self.assertEqual(1, dsaObj.p > dsaObj.q) # p > q + self.assertEqual(160, size(dsaObj.q)) # size(q) == 160 bits + self.assertEqual(0, (dsaObj.p - 1) % dsaObj.q) # q is a divisor of p-1 + self.assertEqual(dsaObj.y, pow(dsaObj.g, dsaObj.x, dsaObj.p)) # y == g**x mod p + self.assertEqual(1, 0 < 
dsaObj.x < dsaObj.q) # 0 < x < q + + def _check_public_key(self, dsaObj): + k = bytes_to_long(a2b_hex(self.k)) + m_hash = bytes_to_long(a2b_hex(self.m_hash)) + + # Check capabilities + self.assertEqual(0, dsaObj.has_private()) + self.assertEqual(1, dsaObj.can_sign()) + self.assertEqual(0, dsaObj.can_encrypt()) + + # Check that private parameters are all missing + self.assertEqual(0, hasattr(dsaObj, 'x')) + + # Sanity check key data + self.assertEqual(1, dsaObj.p > dsaObj.q) # p > q + self.assertEqual(160, size(dsaObj.q)) # size(q) == 160 bits + self.assertEqual(0, (dsaObj.p - 1) % dsaObj.q) # q is a divisor of p-1 + + # Public-only key objects should raise an error when .sign() is called + self.assertRaises(TypeError, dsaObj._sign, m_hash, k) + + # Check __eq__ and __ne__ + self.assertEqual(dsaObj.public_key() == dsaObj.public_key(),True) # assert_ + self.assertEqual(dsaObj.public_key() != dsaObj.public_key(),False) # assertFalse + + self.assertEqual(dsaObj.public_key(), dsaObj.publickey()) + + def _test_signing(self, dsaObj): + k = bytes_to_long(a2b_hex(self.k)) + m_hash = bytes_to_long(a2b_hex(self.m_hash)) + r = bytes_to_long(a2b_hex(self.r)) + s = bytes_to_long(a2b_hex(self.s)) + (r_out, s_out) = dsaObj._sign(m_hash, k) + self.assertEqual((r, s), (r_out, s_out)) + + def _test_verification(self, dsaObj): + m_hash = bytes_to_long(a2b_hex(self.m_hash)) + r = bytes_to_long(a2b_hex(self.r)) + s = bytes_to_long(a2b_hex(self.s)) + self.assertTrue(dsaObj._verify(m_hash, (r, s))) + self.assertFalse(dsaObj._verify(m_hash + 1, (r, s))) + + def test_repr(self): + (y, g, p, q) = [bytes_to_long(a2b_hex(param)) for param in (self.y, self.g, self.p, self.q)] + dsaObj = self.dsa.construct((y, g, p, q)) + repr(dsaObj) + + +class DSADomainTest(unittest.TestCase): + + def test_domain1(self): + """Verify we can generate new keys in a given domain""" + dsa_key_1 = DSA.generate(1024) + domain_params = dsa_key_1.domain() + + dsa_key_2 = DSA.generate(1024, domain=domain_params) + self.assertEqual(dsa_key_1.p, dsa_key_2.p) + self.assertEqual(dsa_key_1.q, dsa_key_2.q) + self.assertEqual(dsa_key_1.g, dsa_key_2.g) + + self.assertEqual(dsa_key_1.domain(), dsa_key_2.domain()) + + def _get_weak_domain(self): + + from Cryptodome.Math.Numbers import Integer + from Cryptodome.Math import Primality + + p = Integer(4) + while p.size_in_bits() != 1024 or Primality.test_probable_prime(p) != Primality.PROBABLY_PRIME: + q1 = Integer.random(exact_bits=80) + q2 = Integer.random(exact_bits=80) + q = q1 * q2 + z = Integer.random(exact_bits=1024-160) + p = z * q + 1 + + h = Integer(2) + g = 1 + while g == 1: + g = pow(h, z, p) + h += 1 + + return (p, q, g) + + + def test_generate_error_weak_domain(self): + """Verify that domain parameters with composite q are rejected""" + + domain_params = self._get_weak_domain() + self.assertRaises(ValueError, DSA.generate, 1024, domain=domain_params) + + + def test_construct_error_weak_domain(self): + """Verify that domain parameters with composite q are rejected""" + + from Cryptodome.Math.Numbers import Integer + + p, q, g = self._get_weak_domain() + y = pow(g, 89, p) + self.assertRaises(ValueError, DSA.construct, (y, g, p, q)) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(DSATest) + tests += list_test_cases(DSADomainTest) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git 
a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_25519.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_25519.py new file mode 100644 index 0000000..1362e58 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_25519.py @@ -0,0 +1,333 @@ +# =================================================================== +# +# Copyright (c) 2022, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors + +from Cryptodome.PublicKey import ECC +from Cryptodome.PublicKey.ECC import EccPoint, _curves, EccKey + +from Cryptodome.Math.Numbers import Integer + +from Cryptodome.Hash import SHAKE128 + + +class TestEccPoint_Ed25519(unittest.TestCase): + + Gxy = {"x": 15112221349535400772501151409588531511454012693041857206046113283949847762202, + "y": 46316835694926478169428394003475163141307993866256225615783033603165251855960} + + G2xy = {"x": 24727413235106541002554574571675588834622768167397638456726423682521233608206, + "y": 15549675580280190176352668710449542251549572066445060580507079593062643049417} + + G3xy = {"x": 46896733464454938657123544595386787789046198280132665686241321779790909858396, + "y": 8324843778533443976490377120369201138301417226297555316741202210403726505172} + + pointG = EccPoint(Gxy['x'], Gxy['y'], curve="Ed25519") + pointG2 = EccPoint(G2xy['x'], G2xy['y'], curve="Ed25519") + pointG3 = EccPoint(G3xy['x'], G3xy['y'], curve="Ed25519") + + def test_init_xy(self): + EccPoint(self.Gxy['x'], self.Gxy['y'], curve="Ed25519") + + # Neutral point + pai = EccPoint(0, 1, curve="Ed25519") + self.assertEqual(pai.x, 0) + self.assertEqual(pai.y, 1) + self.assertEqual(pai.xy, (0, 1)) + + # G + bp = self.pointG.copy() + self.assertEqual(bp.x, 15112221349535400772501151409588531511454012693041857206046113283949847762202) + self.assertEqual(bp.y, 46316835694926478169428394003475163141307993866256225615783033603165251855960) + self.assertEqual(bp.xy, (bp.x, bp.y)) + + # 2G + bp2 = self.pointG2.copy() + self.assertEqual(bp2.x, 24727413235106541002554574571675588834622768167397638456726423682521233608206) + self.assertEqual(bp2.y, 15549675580280190176352668710449542251549572066445060580507079593062643049417) + self.assertEqual(bp2.xy, (bp2.x, bp2.y)) + + # 5G + EccPoint(x=33467004535436536005251147249499675200073690106659565782908757308821616914995, + y=43097193783671926753355113395909008640284023746042808659097434958891230611693, + curve="Ed25519") + + # Catch if point is not on the curve + self.assertRaises(ValueError, EccPoint, 34, 35, curve="Ed25519") + + def test_set(self): + pointW = EccPoint(0, 1, curve="Ed25519") + pointW.set(self.pointG) + self.assertEqual(pointW.x, self.pointG.x) + self.assertEqual(pointW.y, self.pointG.y) + + def test_copy(self): + pointW = self.pointG.copy() + self.assertEqual(pointW.x, self.pointG.x) + self.assertEqual(pointW.y, self.pointG.y) + + def test_equal(self): + pointH = self.pointG.copy() + pointI = self.pointG2.copy() + self.assertEqual(self.pointG, pointH) + self.assertNotEqual(self.pointG, pointI) + + def test_pai(self): + pai = EccPoint(0, 1, curve="Ed25519") + self.assertTrue(pai.is_point_at_infinity()) + self.assertEqual(pai, pai.point_at_infinity()) + + def test_negate(self): + negG = -self.pointG + sum = self.pointG + negG + self.assertTrue(sum.is_point_at_infinity()) + + def test_addition(self): + self.assertEqual(self.pointG + self.pointG2, self.pointG3) + self.assertEqual(self.pointG2 + self.pointG, self.pointG3) + self.assertEqual(self.pointG2 + self.pointG.point_at_infinity(), self.pointG2) + self.assertEqual(self.pointG.point_at_infinity() + self.pointG2, self.pointG2) + + G5 = self.pointG2 + self.pointG3 + self.assertEqual(G5.x, 
33467004535436536005251147249499675200073690106659565782908757308821616914995) + self.assertEqual(G5.y, 43097193783671926753355113395909008640284023746042808659097434958891230611693) + + def test_inplace_addition(self): + pointH = self.pointG.copy() + pointH += self.pointG + self.assertEqual(pointH, self.pointG2) + pointH += self.pointG + self.assertEqual(pointH, self.pointG3) + pointH += self.pointG.point_at_infinity() + self.assertEqual(pointH, self.pointG3) + + def test_doubling(self): + pointH = self.pointG.copy() + pointH.double() + self.assertEqual(pointH.x, self.pointG2.x) + self.assertEqual(pointH.y, self.pointG2.y) + + # 2*0 + pai = self.pointG.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + def test_scalar_multiply(self): + d = 0 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0) + self.assertEqual(pointH.y, 1) + + d = 1 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG.x) + self.assertEqual(pointH.y, self.pointG.y) + + d = 2 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG2.x) + self.assertEqual(pointH.y, self.pointG2.y) + + d = 3 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG3.x) + self.assertEqual(pointH.y, self.pointG3.y) + + d = 4 + pointH = d * self.pointG + self.assertEqual(pointH.x, 14582954232372986451776170844943001818709880559417862259286374126315108956272) + self.assertEqual(pointH.y, 32483318716863467900234833297694612235682047836132991208333042722294373421359) + + d = 5 + pointH = d * self.pointG + self.assertEqual(pointH.x, 33467004535436536005251147249499675200073690106659565782908757308821616914995) + self.assertEqual(pointH.y, 43097193783671926753355113395909008640284023746042808659097434958891230611693) + + d = 10 + pointH = d * self.pointG + self.assertEqual(pointH.x, 43500613248243327786121022071801015118933854441360174117148262713429272820047) + self.assertEqual(pointH.y, 45005105423099817237495816771148012388779685712352441364231470781391834741548) + + d = 20 + pointH = d * self.pointG + self.assertEqual(pointH.x, 46694936775300686710656303283485882876784402425210400817529601134760286812591) + self.assertEqual(pointH.y, 8786390172762935853260670851718824721296437982862763585171334833968259029560) + + d = 255 + pointH = d * self.pointG + self.assertEqual(pointH.x, 36843863416400016952258312492144504209624961884991522125275155377549541182230) + self.assertEqual(pointH.y, 22327030283879720808995671630924669697661065034121040761798775626517750047180) + + d = 256 + pointH = d * self.pointG + self.assertEqual(pointH.x, 42740085206947573681423002599456489563927820004573071834350074001818321593686) + self.assertEqual(pointH.y, 6935684722522267618220753829624209639984359598320562595061366101608187623111) + + def test_sizes(self): + self.assertEqual(self.pointG.size_in_bits(), 255) + self.assertEqual(self.pointG.size_in_bytes(), 32) + + +class TestEccKey_Ed25519(unittest.TestCase): + + def test_private_key(self): + seed = unhexlify("9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60") + Px = 38815646466658113194383306759739515082307681141926459231621296960732224964046 + Py = 11903303657706407974989296177215005343713679411332034699907763981919547054807 + + key = EccKey(curve="Ed25519", seed=seed) + self.assertEqual(key.seed, seed) + self.assertEqual(key.d, 36144925721603087658594284515452164870581325872720374094707712194495455132720) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, Px) + self.assertEqual(key.pointQ.y, Py) 
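+ # The same private key can also be constructed with the public point
+ # passed in explicitly; it must agree with the point derived from the seed.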
+ + point = EccPoint(Px, Py, "ed25519") + key = EccKey(curve="Ed25519", seed=seed, point=point) + self.assertEqual(key.d, 36144925721603087658594284515452164870581325872720374094707712194495455132720) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="ed25519", seed=seed) + + # Must not accept d parameter + self.assertRaises(ValueError, EccKey, curve="ed25519", d=1) + + def test_public_key(self): + point = EccPoint(_curves['ed25519'].Gx, _curves['ed25519'].Gy, curve='ed25519') + key = EccKey(curve="ed25519", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + priv_key = EccKey(curve="ed25519", seed=b'H'*32) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_seed(self): + self.assertRaises(ValueError, lambda: EccKey(curve="ed25519", seed=b'H' * 31)) + + def test_equality(self): + private_key = ECC.construct(seed=b'H'*32, curve="Ed25519") + private_key2 = ECC.construct(seed=b'H'*32, curve="ed25519") + private_key3 = ECC.construct(seed=b'C'*32, curve="Ed25519") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + def test_name_consistency(self): + key = ECC.generate(curve='ed25519') + self.assertIn("curve='Ed25519'", repr(key)) + self.assertEqual(key.curve, 'Ed25519') + self.assertEqual(key.public_key().curve, 'Ed25519') + + +class TestEccModule_Ed25519(unittest.TestCase): + + def test_generate(self): + key = ECC.generate(curve="Ed25519") + self.assertTrue(key.has_private()) + point = EccPoint(_curves['Ed25519'].Gx, _curves['Ed25519'].Gy, curve="Ed25519") * key.d + self.assertEqual(key.pointQ, point) + + # Always random + key2 = ECC.generate(curve="Ed25519") + self.assertNotEqual(key, key2) + + # Other names + ECC.generate(curve="Ed25519") + + # Random source + key1 = ECC.generate(curve="Ed25519", randfunc=SHAKE128.new().read) + key2 = ECC.generate(curve="Ed25519", randfunc=SHAKE128.new().read) + self.assertEqual(key1, key2) + + def test_construct(self): + seed = unhexlify("9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60") + Px = 38815646466658113194383306759739515082307681141926459231621296960732224964046 + Py = 11903303657706407974989296177215005343713679411332034699907763981919547054807 + d = 36144925721603087658594284515452164870581325872720374094707712194495455132720 + point = EccPoint(Px, Py, curve="Ed25519") + + # Private key only + key = ECC.construct(curve="Ed25519", seed=seed) + self.assertEqual(key.pointQ, point) + self.assertTrue(key.has_private()) + + # Public key only + key = ECC.construct(curve="Ed25519", point_x=Px, point_y=Py) + self.assertEqual(key.pointQ, point) + self.assertFalse(key.has_private()) + + # Private and public key + key = ECC.construct(curve="Ed25519", seed=seed, point_x=Px, point_y=Py) + self.assertEqual(key.pointQ, point) + self.assertTrue(key.has_private()) + + # Other names + key = ECC.construct(curve="ed25519", seed=seed) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['ed25519'].Gx, point_y=_curves['ed25519'].Gy) + + 
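+ # All three constructions below must fail: (10, 4) is not on Ed25519,
+ # an explicit d is not accepted for this curve (private keys are seeds),
+ # and the seed must be exactly 32 bytes.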
self.assertRaises(ValueError, ECC.construct, curve="Ed25519", **coord) + self.assertRaises(ValueError, ECC.construct, curve="Ed25519", d=2, **coordG) + self.assertRaises(ValueError, ECC.construct, curve="Ed25519", seed=b'H'*31) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestEccPoint_Ed25519) + tests += list_test_cases(TestEccKey_Ed25519) + tests += list_test_cases(TestEccModule_Ed25519) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_448.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_448.py new file mode 100644 index 0000000..fbebaea --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_448.py @@ -0,0 +1,333 @@ +# =================================================================== +# +# Copyright (c) 2022, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors + +from Cryptodome.PublicKey import ECC +from Cryptodome.PublicKey.ECC import EccPoint, _curves, EccKey + +from Cryptodome.Math.Numbers import Integer + +from Cryptodome.Hash import SHAKE128 + + +class TestEccPoint_Ed448(unittest.TestCase): + + Gxy = {"x": 0x4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e, + "y": 0x693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14} + + G2xy = {"x": 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa955555555555555555555555555555555555555555555555555555555, + "y": 0xae05e9634ad7048db359d6205086c2b0036ed7a035884dd7b7e36d728ad8c4b80d6565833a2a3098bbbcb2bed1cda06bdaeafbcdea9386ed} + + G3xy = {"x": 0x865886b9108af6455bd64316cb6943332241b8b8cda82c7e2ba077a4a3fcfe8daa9cbf7f6271fd6e862b769465da8575728173286ff2f8f, + "y": 0xe005a8dbd5125cf706cbda7ad43aa6449a4a8d952356c3b9fce43c82ec4e1d58bb3a331bdb6767f0bffa9a68fed02dafb822ac13588ed6fc} + + pointG = EccPoint(Gxy['x'], Gxy['y'], curve="Ed448") + pointG2 = EccPoint(G2xy['x'], G2xy['y'], curve="Ed448") + pointG3 = EccPoint(G3xy['x'], G3xy['y'], curve="Ed448") + + def test_init_xy(self): + EccPoint(self.Gxy['x'], self.Gxy['y'], curve="Ed448") + + # Neutral point + pai = EccPoint(0, 1, curve="Ed448") + self.assertEqual(pai.x, 0) + self.assertEqual(pai.y, 1) + self.assertEqual(pai.xy, (0, 1)) + + # G + bp = self.pointG.copy() + self.assertEqual(bp.x, 0x4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e) + self.assertEqual(bp.y, 0x693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14) + self.assertEqual(bp.xy, (bp.x, bp.y)) + + # 2G + bp2 = self.pointG2.copy() + self.assertEqual(bp2.x, 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa955555555555555555555555555555555555555555555555555555555) + self.assertEqual(bp2.y, 0xae05e9634ad7048db359d6205086c2b0036ed7a035884dd7b7e36d728ad8c4b80d6565833a2a3098bbbcb2bed1cda06bdaeafbcdea9386ed) + self.assertEqual(bp2.xy, (bp2.x, bp2.y)) + + # 5G + EccPoint(x=0x7a9f9335a48dcb0e2ba7601eedb50def80cbcf728562ada756d761e8958812808bc0d57a920c3c96f07b2d8cefc6f950d0a99d1092030034, + y=0xadfd751a2517edd3b9109ce4fd580ade260ca1823ab18fced86551f7b698017127d7a4ee59d2b33c58405512881f225443b4731472f435eb, + curve="Ed448") + + # Catch if point is not on the curve + self.assertRaises(ValueError, EccPoint, 34, 35, curve="Ed448") + + def test_set(self): + pointW = EccPoint(0, 1, curve="Ed448") + pointW.set(self.pointG) + self.assertEqual(pointW.x, self.pointG.x) + self.assertEqual(pointW.y, self.pointG.y) + + def test_copy(self): + pointW = self.pointG.copy() + self.assertEqual(pointW.x, self.pointG.x) + self.assertEqual(pointW.y, self.pointG.y) + + def test_equal(self): + pointH = self.pointG.copy() + pointI = self.pointG2.copy() + self.assertEqual(self.pointG, pointH) + self.assertNotEqual(self.pointG, pointI) + + def test_pai(self): + pai = EccPoint(0, 1, curve="Ed448") + self.assertTrue(pai.is_point_at_infinity()) + self.assertEqual(pai, pai.point_at_infinity()) + + def test_negate(self): + negG = -self.pointG + sum = self.pointG + negG + self.assertTrue(sum.is_point_at_infinity()) + + def 
test_addition(self): + self.assertEqual(self.pointG + self.pointG2, self.pointG3) + self.assertEqual(self.pointG2 + self.pointG, self.pointG3) + self.assertEqual(self.pointG2 + self.pointG.point_at_infinity(), self.pointG2) + self.assertEqual(self.pointG.point_at_infinity() + self.pointG2, self.pointG2) + + G5 = self.pointG2 + self.pointG3 + self.assertEqual(G5.x, 0x7a9f9335a48dcb0e2ba7601eedb50def80cbcf728562ada756d761e8958812808bc0d57a920c3c96f07b2d8cefc6f950d0a99d1092030034) + self.assertEqual(G5.y, 0xadfd751a2517edd3b9109ce4fd580ade260ca1823ab18fced86551f7b698017127d7a4ee59d2b33c58405512881f225443b4731472f435eb) + + def test_inplace_addition(self): + pointH = self.pointG.copy() + pointH += self.pointG + self.assertEqual(pointH, self.pointG2) + pointH += self.pointG + self.assertEqual(pointH, self.pointG3) + pointH += self.pointG.point_at_infinity() + self.assertEqual(pointH, self.pointG3) + + def test_doubling(self): + pointH = self.pointG.copy() + pointH.double() + self.assertEqual(pointH.x, self.pointG2.x) + self.assertEqual(pointH.y, self.pointG2.y) + + # 2*0 + pai = self.pointG.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + def test_scalar_multiply(self): + d = 0 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0) + self.assertEqual(pointH.y, 1) + + d = 1 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG.x) + self.assertEqual(pointH.y, self.pointG.y) + + d = 2 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG2.x) + self.assertEqual(pointH.y, self.pointG2.y) + + d = 3 + pointH = d * self.pointG + self.assertEqual(pointH.x, self.pointG3.x) + self.assertEqual(pointH.y, self.pointG3.y) + + d = 4 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0x49dcbc5c6c0cce2c1419a17226f929ea255a09cf4e0891c693fda4be70c74cc301b7bdf1515dd8ba21aee1798949e120e2ce42ac48ba7f30) + self.assertEqual(pointH.y, 0xd49077e4accde527164b33a5de021b979cb7c02f0457d845c90dc3227b8a5bc1c0d8f97ea1ca9472b5d444285d0d4f5b32e236f86de51839) + + d = 5 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0x7a9f9335a48dcb0e2ba7601eedb50def80cbcf728562ada756d761e8958812808bc0d57a920c3c96f07b2d8cefc6f950d0a99d1092030034) + self.assertEqual(pointH.y, 0xadfd751a2517edd3b9109ce4fd580ade260ca1823ab18fced86551f7b698017127d7a4ee59d2b33c58405512881f225443b4731472f435eb) + + d = 10 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0x77486f9d19f6411cdd35d30d1c3235f71936452c787e5c034134d3e8172278aca61622bc805761ce3dab65118a0122d73b403165d0ed303d) + self.assertEqual(pointH.y, 0x4d2fea0b026be11024f1f0fe7e94e618e8ac17381ada1d1bf7ee293a68ff5d0bf93c1997dc1aabdc0c7e6381428d85b6b1954a89e4cddf67) + + d = 20 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0x3c236422354600fe6763defcc1503737e4ed89e262d0de3ec1e552020f2a56fe3b9e1e012d021072598c3c2821e18268bb8fb8339c0d1216) + self.assertEqual(pointH.y, 0xb555b9721f630ccb05fc466de4c74d3d2781e69eca88e1b040844f04cab39fd946f91c688fa42402bb38fb9c3e61231017020b219b4396e1) + + d = 255 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0xbeb7f8388b05cd9c1aa2e3c0dcf31e2b563659361826225390e7748654f627d5c36cbe627e9019936b56d15d4dad7c337c09bac64ff4197f) + self.assertEqual(pointH.y, 0x1e37312b2dd4e9440c43c6e7725fc4fa3d11e582d4863f1d018e28f50c0efdb1f53f9b01ada7c87fa162b1f0d72401015d57613d25f1ad53) + + d = 256 + pointH = d * self.pointG + self.assertEqual(pointH.x, 0xf19c34feb56730e3e2be761ac0a2a2b24853b281dda019fc35a5ab58e3696beb39609ae756b0d20fb7ccf0d79aaf5f3bca2e4fdb25bfac1c) + 
self.assertEqual(pointH.y, 0x3beb69cc9111bffcaddc61d363ce6fe5dd44da4aadce78f52e92e985d5442344ced72c4611ed0daac9f4f5661eab73d7a12d25ce8a30241e) + + def test_sizes(self): + self.assertEqual(self.pointG.size_in_bits(), 448) + self.assertEqual(self.pointG.size_in_bytes(), 56) + + +class TestEccKey_Ed448(unittest.TestCase): + + def test_private_key(self): + seed = unhexlify("4adf5d37ac6785e83e99a924f92676d366a78690af59c92b6bdf14f9cdbcf26fdad478109607583d633b60078d61d51d81b7509c5433b0d4c9") + Px = 0x72a01eea003a35f9ac44231dc4aae2a382f351d80bf32508175b0855edcf389aa2bbf308dd961ce361a6e7c2091bc78957f6ebcf3002a617 + Py = 0x9e0d08d84586e9aeefecacb41d049b831f1a3ee0c3eada63e34557b30702b50ab59fb372feff7c30b8cbb7dd51afbe88444ec56238722ec1 + + key = EccKey(curve="Ed448", seed=seed) + self.assertEqual(key.seed, seed) + self.assertEqual(key.d, 0xb07cf179604f83433186e5178760c759c15125ee54ff6f8dcde46e872b709ac82ed0bd0a4e036d774034dcb18a9fb11894657a1485895f80) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, Px) + self.assertEqual(key.pointQ.y, Py) + + point = EccPoint(Px, Py, "ed448") + key = EccKey(curve="Ed448", seed=seed, point=point) + self.assertEqual(key.d, 0xb07cf179604f83433186e5178760c759c15125ee54ff6f8dcde46e872b709ac82ed0bd0a4e036d774034dcb18a9fb11894657a1485895f80) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="ed448", seed=seed) + + # Must not accept d parameter + self.assertRaises(ValueError, EccKey, curve="ed448", d=1) + + def test_public_key(self): + point = EccPoint(_curves['ed448'].Gx, _curves['ed448'].Gy, curve='ed448') + key = EccKey(curve="ed448", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + priv_key = EccKey(curve="ed448", seed=b'H'*57) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_seed(self): + self.assertRaises(ValueError, lambda: EccKey(curve="ed448", seed=b'H' * 56)) + + def test_equality(self): + private_key = ECC.construct(seed=b'H'*57, curve="Ed448") + private_key2 = ECC.construct(seed=b'H'*57, curve="ed448") + private_key3 = ECC.construct(seed=b'C'*57, curve="Ed448") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + def test_name_consistency(self): + key = ECC.generate(curve='ed448') + self.assertIn("curve='Ed448'", repr(key)) + self.assertEqual(key.curve, 'Ed448') + self.assertEqual(key.public_key().curve, 'Ed448') + + +class TestEccModule_Ed448(unittest.TestCase): + + def test_generate(self): + key = ECC.generate(curve="Ed448") + self.assertTrue(key.has_private()) + point = EccPoint(_curves['Ed448'].Gx, _curves['Ed448'].Gy, curve="Ed448") * key.d + self.assertEqual(key.pointQ, point) + + # Always random + key2 = ECC.generate(curve="Ed448") + self.assertNotEqual(key, key2) + + # Other names + ECC.generate(curve="Ed448") + + # Random source + key1 = ECC.generate(curve="Ed448", randfunc=SHAKE128.new().read) + key2 = ECC.generate(curve="Ed448", randfunc=SHAKE128.new().read) + self.assertEqual(key1, key2) + + def test_construct(self): + seed = 
unhexlify("4adf5d37ac6785e83e99a924f92676d366a78690af59c92b6bdf14f9cdbcf26fdad478109607583d633b60078d61d51d81b7509c5433b0d4c9") + Px = 0x72a01eea003a35f9ac44231dc4aae2a382f351d80bf32508175b0855edcf389aa2bbf308dd961ce361a6e7c2091bc78957f6ebcf3002a617 + Py = 0x9e0d08d84586e9aeefecacb41d049b831f1a3ee0c3eada63e34557b30702b50ab59fb372feff7c30b8cbb7dd51afbe88444ec56238722ec1 + d = 0xb07cf179604f83433186e5178760c759c15125ee54ff6f8dcde46e872b709ac82ed0bd0a4e036d774034dcb18a9fb11894657a1485895f80 + point = EccPoint(Px, Py, curve="Ed448") + + # Private key only + key = ECC.construct(curve="Ed448", seed=seed) + self.assertEqual(key.pointQ, point) + self.assertTrue(key.has_private()) + + # Public key only + key = ECC.construct(curve="Ed448", point_x=Px, point_y=Py) + self.assertEqual(key.pointQ, point) + self.assertFalse(key.has_private()) + + # Private and public key + key = ECC.construct(curve="Ed448", seed=seed, point_x=Px, point_y=Py) + self.assertEqual(key.pointQ, point) + self.assertTrue(key.has_private()) + + # Other names + key = ECC.construct(curve="ed448", seed=seed) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['ed448'].Gx, point_y=_curves['ed448'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="Ed448", **coord) + self.assertRaises(ValueError, ECC.construct, curve="Ed448", d=2, **coordG) + self.assertRaises(ValueError, ECC.construct, curve="Ed448", seed=b'H'*58) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestEccPoint_Ed448) + tests += list_test_cases(TestEccKey_Ed448) + tests += list_test_cases(TestEccModule_Ed448) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_NIST.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_NIST.py new file mode 100644 index 0000000..cadbd12 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ECC_NIST.py @@ -0,0 +1,1425 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors + +from Cryptodome.PublicKey import ECC +from Cryptodome.PublicKey.ECC import EccPoint, _curves, EccKey + +from Cryptodome.Math.Numbers import Integer + + +class TestEccPoint(unittest.TestCase): + + def test_mix(self): + + p1 = ECC.generate(curve='P-256').pointQ + p2 = ECC.generate(curve='P-384').pointQ + + try: + p1 + p2 + assert(False) + except ValueError as e: + assert "not on the same curve" in str(e) + + try: + p1 += p2 + assert(False) + except ValueError as e: + assert "not on the same curve" in str(e) + + class OtherKeyType: + pass + + self.assertFalse(p1 == OtherKeyType()) + self.assertTrue(p1 != OtherKeyType()) + + def test_repr(self): + p1 = ECC.construct(curve='P-256', + d=75467964919405407085864614198393977741148485328036093939970922195112333446269, + point_x=20573031766139722500939782666697015100983491952082159880539639074939225934381, + point_y=108863130203210779921520632367477406025152638284581252625277850513266505911389) + self.assertEqual(repr(p1), "EccKey(curve='NIST P-256', point_x=20573031766139722500939782666697015100983491952082159880539639074939225934381, point_y=108863130203210779921520632367477406025152638284581252625277850513266505911389, d=75467964919405407085864614198393977741148485328036093939970922195112333446269)") + + +class TestEccPoint_NIST_P192(unittest.TestCase): + """Tests defined in section 4.1 of https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.9073&rep=rep1&type=pdf""" + + pointS = EccPoint( + 0xd458e7d127ae671b0c330266d246769353a012073e97acf8, + 0x325930500d851f336bddc050cf7fb11b5673a1645086df3b, + curve='p192') + + pointT = EccPoint( + 0xf22c4395213e9ebe67ddecdd87fdbd01be16fb059b9753a4, + 0x264424096af2b3597796db48f8dfb41fa9cecc97691a9c79, + curve='p192') + + def test_set(self): + pointW = EccPoint(0, 0) + pointW.set(self.pointS) + self.assertEqual(pointW, self.pointS) + + def test_copy(self): + pointW = self.pointS.copy() + self.assertEqual(pointW, self.pointS) + pointW.set(self.pointT) + self.assertEqual(pointW, self.pointT) + self.assertNotEqual(self.pointS, self.pointT) + + def test_negate(self): + negS = -self.pointS + sum = self.pointS + negS + self.assertEqual(sum, self.pointS.point_at_infinity()) + + def test_addition(self): + pointRx = 0x48e1e4096b9b8e5ca9d0f1f077b8abf58e843894de4d0290 + pointRy = 0x408fa77c797cd7dbfb16aa48a3648d3d63c94117d7b6aa4b + + pointR = self.pointS + self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS + pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai + self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai + pai + self.assertEqual(pointR, pai) + + def test_inplace_addition(self): + 
pointRx = 0x48e1e4096b9b8e5ca9d0f1f077b8abf58e843894de4d0290 + pointRy = 0x408fa77c797cd7dbfb16aa48a3648d3d63c94117d7b6aa4b + + pointR = self.pointS.copy() + pointR += self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS.copy() + pointR += pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai.copy() + pointR += self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai.copy() + pointR += pai + self.assertEqual(pointR, pai) + + def test_doubling(self): + pointRx = 0x30c5bc6b8c7da25354b373dc14dd8a0eba42d25a3f6e6962 + pointRy = 0x0dde14bc4249a721c407aedbf011e2ddbbcb2968c9d889cf + + pointR = self.pointS.copy() + pointR.double() + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 2*0 + pai = self.pointS.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + # S + S + pointR = self.pointS.copy() + pointR += pointR + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_scalar_multiply(self): + d = 0xa78a236d60baec0c5dd41b33a542463a8255391af64c74ee + pointRx = 0x1faee4205a4f669d2d0a8f25e3bcec9a62a6952965bf6d31 + pointRy = 0x5ff2cdfa508a2581892367087c696f179e7a4d7e8260fb06 + + pointR = self.pointS * d + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 0*S + pai = self.pointS.point_at_infinity() + pointR = self.pointS * 0 + self.assertEqual(pointR, pai) + + # -1*S + self.assertRaises(ValueError, lambda: self.pointS * -1) + + # Reverse order + pointR = d * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pointR = Integer(d) * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_joint_scalar_multiply(self): + d = 0xa78a236d60baec0c5dd41b33a542463a8255391af64c74ee + e = 0xc4be3d53ec3089e71e4de8ceab7cce889bc393cd85b972bc + pointRx = 0x019f64eed8fa9b72b7dfea82c17c9bfa60ecb9e1778b5bde + pointRy = 0x16590c5fcd8655fa4ced33fb800e2a7e3c61f35d83503644 + + pointR = self.pointS * d + self.pointT * e + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_sizes(self): + self.assertEqual(self.pointS.size_in_bits(), 192) + self.assertEqual(self.pointS.size_in_bytes(), 24) + + +class TestEccPoint_NIST_P224(unittest.TestCase): + """Tests defined in section 4.2 of https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.9073&rep=rep1&type=pdf""" + + pointS = EccPoint( + 0x6eca814ba59a930843dc814edd6c97da95518df3c6fdf16e9a10bb5b, + 0xef4b497f0963bc8b6aec0ca0f259b89cd80994147e05dc6b64d7bf22, + curve='p224') + + pointT = EccPoint( + 0xb72b25aea5cb03fb88d7e842002969648e6ef23c5d39ac903826bd6d, + 0xc42a8a4d34984f0b71b5b4091af7dceb33ea729c1a2dc8b434f10c34, + curve='p224') + + def test_set(self): + pointW = EccPoint(0, 0) + pointW.set(self.pointS) + self.assertEqual(pointW, self.pointS) + + def test_copy(self): + pointW = self.pointS.copy() + self.assertEqual(pointW, self.pointS) + pointW.set(self.pointT) + self.assertEqual(pointW, self.pointT) + self.assertNotEqual(self.pointS, self.pointT) + + def test_negate(self): + negS = -self.pointS + sum = self.pointS + negS + self.assertEqual(sum, self.pointS.point_at_infinity()) + + def test_addition(self): + pointRx = 0x236f26d9e84c2f7d776b107bd478ee0a6d2bcfcaa2162afae8d2fd15 + pointRy = 0xe53cc0a7904ce6c3746f6a97471297a0b7d5cdf8d536ae25bb0fda70 + + pointR = self.pointS + 
self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS + pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai + self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai + pai + self.assertEqual(pointR, pai) + + def test_inplace_addition(self): + pointRx = 0x236f26d9e84c2f7d776b107bd478ee0a6d2bcfcaa2162afae8d2fd15 + pointRy = 0xe53cc0a7904ce6c3746f6a97471297a0b7d5cdf8d536ae25bb0fda70 + + pointR = self.pointS.copy() + pointR += self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS.copy() + pointR += pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai.copy() + pointR += self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai.copy() + pointR += pai + self.assertEqual(pointR, pai) + + def test_doubling(self): + pointRx = 0xa9c96f2117dee0f27ca56850ebb46efad8ee26852f165e29cb5cdfc7 + pointRy = 0xadf18c84cf77ced4d76d4930417d9579207840bf49bfbf5837dfdd7d + + pointR = self.pointS.copy() + pointR.double() + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 2*0 + pai = self.pointS.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + # S + S + pointR = self.pointS.copy() + pointR += pointR + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_scalar_multiply(self): + d = 0xa78ccc30eaca0fcc8e36b2dd6fbb03df06d37f52711e6363aaf1d73b + pointRx = 0x96a7625e92a8d72bff1113abdb95777e736a14c6fdaacc392702bca4 + pointRy = 0x0f8e5702942a3c5e13cd2fd5801915258b43dfadc70d15dbada3ed10 + + pointR = self.pointS * d + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 0*S + pai = self.pointS.point_at_infinity() + pointR = self.pointS * 0 + self.assertEqual(pointR, pai) + + # -1*S + self.assertRaises(ValueError, lambda: self.pointS * -1) + + # Reverse order + pointR = d * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pointR = Integer(d) * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_joint_scalar_multiply(self): + d = 0xa78ccc30eaca0fcc8e36b2dd6fbb03df06d37f52711e6363aaf1d73b + e = 0x54d549ffc08c96592519d73e71e8e0703fc8177fa88aa77a6ed35736 + pointRx = 0xdbfe2958c7b2cda1302a67ea3ffd94c918c5b350ab838d52e288c83e + pointRy = 0x2f521b83ac3b0549ff4895abcc7f0c5a861aacb87acbc5b8147bb18b + + pointR = self.pointS * d + self.pointT * e + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_sizes(self): + self.assertEqual(self.pointS.size_in_bits(), 224) + self.assertEqual(self.pointS.size_in_bytes(), 28) + + +class TestEccPoint_NIST_P256(unittest.TestCase): + """Tests defined in section 4.3 of https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.9073&rep=rep1&type=pdf""" + + pointS = EccPoint( + 0xde2444bebc8d36e682edd27e0f271508617519b3221a8fa0b77cab3989da97c9, + 0xc093ae7ff36e5380fc01a5aad1e66659702de80f53cec576b6350b243042a256) + + pointT = EccPoint( + 0x55a8b00f8da1d44e62f6b3b25316212e39540dc861c89575bb8cf92e35e0986b, + 0x5421c3209c2d6c704835d82ac4c3dd90f61a8a52598b9e7ab656e9d8c8b24316) + + def test_set(self): + pointW = EccPoint(0, 0) + pointW.set(self.pointS) + self.assertEqual(pointW, self.pointS) + + def test_copy(self): + pointW = self.pointS.copy() +
self.assertEqual(pointW, self.pointS) + pointW.set(self.pointT) + self.assertEqual(pointW, self.pointT) + self.assertNotEqual(self.pointS, self.pointT) + + def test_negate(self): + negS = -self.pointS + sum = self.pointS + negS + self.assertEqual(sum, self.pointS.point_at_infinity()) + + def test_addition(self): + pointRx = 0x72b13dd4354b6b81745195e98cc5ba6970349191ac476bd4553cf35a545a067e + pointRy = 0x8d585cbb2e1327d75241a8a122d7620dc33b13315aa5c9d46d013011744ac264 + + pointR = self.pointS + self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS + pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai + self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai + pai + self.assertEqual(pointR, pai) + + def test_inplace_addition(self): + pointRx = 0x72b13dd4354b6b81745195e98cc5ba6970349191ac476bd4553cf35a545a067e + pointRy = 0x8d585cbb2e1327d75241a8a122d7620dc33b13315aa5c9d46d013011744ac264 + + pointR = self.pointS.copy() + pointR += self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS.copy() + pointR += pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai.copy() + pointR += self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai.copy() + pointR += pai + self.assertEqual(pointR, pai) + + def test_doubling(self): + pointRx = 0x7669e6901606ee3ba1a8eef1e0024c33df6c22f3b17481b82a860ffcdb6127b0 + pointRy = 0xfa878162187a54f6c39f6ee0072f33de389ef3eecd03023de10ca2c1db61d0c7 + + pointR = self.pointS.copy() + pointR.double() + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 2*0 + pai = self.pointS.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + # S + S + pointR = self.pointS.copy() + pointR += pointR + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_scalar_multiply(self): + d = 0xc51e4753afdec1e6b6c6a5b992f43f8dd0c7a8933072708b6522468b2ffb06fd + pointRx = 0x51d08d5f2d4278882946d88d83c97d11e62becc3cfc18bedacc89ba34eeca03f + pointRy = 0x75ee68eb8bf626aa5b673ab51f6e744e06f8fcf8a6c0cf3035beca956a7b41d5 + + pointR = self.pointS * d + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 0*S + pai = self.pointS.point_at_infinity() + pointR = self.pointS * 0 + self.assertEqual(pointR, pai) + + # -1*S + self.assertRaises(ValueError, lambda: self.pointS * -1) + + # Reverse order + pointR = d * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pointR = Integer(d) * self.pointS + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_joint_scalar_multiply(self): + d = 0xc51e4753afdec1e6b6c6a5b992f43f8dd0c7a8933072708b6522468b2ffb06fd + e = 0xd37f628ece72a462f0145cbefe3f0b355ee8332d37acdd83a358016aea029db7 + pointRx = 0xd867b4679221009234939221b8046245efcf58413daacbeff857b8588341f6b8 + pointRy = 0xf2504055c03cede12d22720dad69c745106b6607ec7e50dd35d54bd80f615275 + + pointR = self.pointS * d + self.pointT * e + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_sizes(self): + self.assertEqual(self.pointS.size_in_bits(), 256) + self.assertEqual(self.pointS.size_in_bytes(), 32) + + +class TestEccPoint_NIST_P384(unittest.TestCase): + """Tests defined in section 4.4 of
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.9073&rep=rep1&type=pdf""" + + pointS = EccPoint( + 0xfba203b81bbd23f2b3be971cc23997e1ae4d89e69cb6f92385dda82768ada415ebab4167459da98e62b1332d1e73cb0e, + 0x5ffedbaefdeba603e7923e06cdb5d0c65b22301429293376d5c6944e3fa6259f162b4788de6987fd59aed5e4b5285e45, + "p384") + + pointT = EccPoint( + 0xaacc05202e7fda6fc73d82f0a66220527da8117ee8f8330ead7d20ee6f255f582d8bd38c5a7f2b40bcdb68ba13d81051, + 0x84009a263fefba7c2c57cffa5db3634d286131afc0fca8d25afa22a7b5dce0d9470da89233cee178592f49b6fecb5092, + "p384") + + def test_set(self): + pointW = EccPoint(0, 0, "p384") + pointW.set(self.pointS) + self.assertEqual(pointW, self.pointS) + + def test_copy(self): + pointW = self.pointS.copy() + self.assertEqual(pointW, self.pointS) + pointW.set(self.pointT) + self.assertEqual(pointW, self.pointT) + self.assertNotEqual(self.pointS, self.pointT) + + def test_negate(self): + negS = -self.pointS + sum = self.pointS + negS + self.assertEqual(sum, self.pointS.point_at_infinity()) + + def test_addition(self): + pointRx = 0x12dc5ce7acdfc5844d939f40b4df012e68f865b89c3213ba97090a247a2fc009075cf471cd2e85c489979b65ee0b5eed + pointRy = 0x167312e58fe0c0afa248f2854e3cddcb557f983b3189b67f21eee01341e7e9fe67f6ee81b36988efa406945c8804a4b0 + + pointR = self.pointS + self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS + pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai + self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai + pai + self.assertEqual(pointR, pai) + + # NOTE (editor's comment): disabled via the leading underscore; the expected + # coordinates below are the P-256 addition vectors, not P-384 ones. + def _test_inplace_addition(self): + pointRx = 0x72b13dd4354b6b81745195e98cc5ba6970349191ac476bd4553cf35a545a067e + pointRy = 0x8d585cbb2e1327d75241a8a122d7620dc33b13315aa5c9d46d013011744ac264 + + pointR = self.pointS.copy() + pointR += self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS.copy() + pointR += pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai.copy() + pointR += self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai.copy() + pointR += pai + self.assertEqual(pointR, pai) + + def test_doubling(self): + pointRx = 0x2a2111b1e0aa8b2fc5a1975516bc4d58017ff96b25e1bdff3c229d5fac3bacc319dcbec29f9478f42dee597b4641504c + pointRy = 0xfa2e3d9dc84db8954ce8085ef28d7184fddfd1344b4d4797343af9b5f9d837520b450f726443e4114bd4e5bdb2f65ddd + + pointR = self.pointS.copy() + pointR.double() + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 2*0 + pai = self.pointS.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + # S + S + pointR = self.pointS.copy() + pointR += pointR + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_scalar_multiply(self): + d = 0xa4ebcae5a665983493ab3e626085a24c104311a761b5a8fdac052ed1f111a5c44f76f45659d2d111a61b5fdd97583480 + pointRx = 0xe4f77e7ffeb7f0958910e3a680d677a477191df166160ff7ef6bb5261f791aa7b45e3e653d151b95dad3d93ca0290ef2 + pointRy = 0xac7dee41d8c5f4a7d5836960a773cfc1376289d3373f8cf7417b0c6207ac32e913856612fc9ff2e357eb2ee05cf9667f + + pointR = self.pointS * d + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 0*S + pai = self.pointS.point_at_infinity() + pointR = self.pointS * 0 + self.assertEqual(pointR, pai) + + # -1*S +
self.assertRaises(ValueError, lambda: self.pointS * -1) + + def test_joint_scalar_multiply(self): + d = 0xa4ebcae5a665983493ab3e626085a24c104311a761b5a8fdac052ed1f111a5c44f76f45659d2d111a61b5fdd97583480 + e = 0xafcf88119a3a76c87acbd6008e1349b29f4ba9aa0e12ce89bcfcae2180b38d81ab8cf15095301a182afbc6893e75385d + pointRx = 0x917ea28bcd641741ae5d18c2f1bd917ba68d34f0f0577387dc81260462aea60e2417b8bdc5d954fc729d211db23a02dc + pointRy = 0x1a29f7ce6d074654d77b40888c73e92546c8f16a5ff6bcbd307f758d4aee684beff26f6742f597e2585c86da908f7186 + + pointR = self.pointS * d + self.pointT * e + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_sizes(self): + self.assertEqual(self.pointS.size_in_bits(), 384) + self.assertEqual(self.pointS.size_in_bytes(), 48) + + +class TestEccPoint_NIST_P521(unittest.TestCase): + """Tests defined in section 4.5 of https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.9073&rep=rep1&type=pdf""" + + pointS = EccPoint( + 0x000001d5c693f66c08ed03ad0f031f937443458f601fd098d3d0227b4bf62873af50740b0bb84aa157fc847bcf8dc16a8b2b8bfd8e2d0a7d39af04b089930ef6dad5c1b4, + 0x00000144b7770963c63a39248865ff36b074151eac33549b224af5c8664c54012b818ed037b2b7c1a63ac89ebaa11e07db89fcee5b556e49764ee3fa66ea7ae61ac01823, + "p521") + + pointT = EccPoint( + 0x000000f411f2ac2eb971a267b80297ba67c322dba4bb21cec8b70073bf88fc1ca5fde3ba09e5df6d39acb2c0762c03d7bc224a3e197feaf760d6324006fe3be9a548c7d5, + 0x000001fdf842769c707c93c630df6d02eff399a06f1b36fb9684f0b373ed064889629abb92b1ae328fdb45534268384943f0e9222afe03259b32274d35d1b9584c65e305, + "p521") + + def test_set(self): + pointW = EccPoint(0, 0) + pointW.set(self.pointS) + self.assertEqual(pointW, self.pointS) + + def test_copy(self): + pointW = self.pointS.copy() + self.assertEqual(pointW, self.pointS) + pointW.set(self.pointT) + self.assertEqual(pointW, self.pointT) + self.assertNotEqual(self.pointS, self.pointT) + + def test_negate(self): + negS = -self.pointS + sum = self.pointS + negS + self.assertEqual(sum, self.pointS.point_at_infinity()) + + def test_addition(self): + pointRx = 0x000001264ae115ba9cbc2ee56e6f0059e24b52c8046321602c59a339cfb757c89a59c358a9a8e1f86d384b3f3b255ea3f73670c6dc9f45d46b6a196dc37bbe0f6b2dd9e9 + pointRy = 0x00000062a9c72b8f9f88a271690bfa017a6466c31b9cadc2fc544744aeb817072349cfddc5ad0e81b03f1897bd9c8c6efbdf68237dc3bb00445979fb373b20c9a967ac55 + + pointR = self.pointS + self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS + pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai + self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai + pai + self.assertEqual(pointR, pai) + + def test_inplace_addition(self): + pointRx = 0x000001264ae115ba9cbc2ee56e6f0059e24b52c8046321602c59a339cfb757c89a59c358a9a8e1f86d384b3f3b255ea3f73670c6dc9f45d46b6a196dc37bbe0f6b2dd9e9 + pointRy = 0x00000062a9c72b8f9f88a271690bfa017a6466c31b9cadc2fc544744aeb817072349cfddc5ad0e81b03f1897bd9c8c6efbdf68237dc3bb00445979fb373b20c9a967ac55 + + pointR = self.pointS.copy() + pointR += self.pointT + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + pai = pointR.point_at_infinity() + + # S + 0 + pointR = self.pointS.copy() + pointR += pai + self.assertEqual(pointR, self.pointS) + + # 0 + S + pointR = pai.copy() + pointR += self.pointS + self.assertEqual(pointR, self.pointS) + + # 0 + 0 + pointR = pai.copy() + pointR += pai + self.assertEqual(pointR, pai) + +
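+    def test_scalar_multiply_reference(self):
+        # Editor's illustrative sketch (not part of the upstream suite): a
+        # textbook left-to-right double-and-add, built only from the double()
+        # and '+' operations exercised above, cross-checked against EccPoint's
+        # own scalar multiplication. The scalar d is the same one used in
+        # test_scalar_multiply below.
+        d = 0x000001eb7f81785c9629f136a7e8f8c674957109735554111a2a866fa5a166699419bfa9936c78b62653964df0d6da940a695c7294d41b2d6600de6dfcf0edcfc89fdcb1
+        acc = self.pointS.point_at_infinity().copy()
+        for bit in bin(d)[2:]:
+            acc.double()                   # acc = 2*acc
+            if bit == '1':
+                acc = acc + self.pointS    # acc = acc + S
+        self.assertEqual(acc, self.pointS * d)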
def test_doubling(self): + pointRx = 0x0000012879442f2450c119e7119a5f738be1f1eba9e9d7c6cf41b325d9ce6d643106e9d61124a91a96bcf201305a9dee55fa79136dc700831e54c3ca4ff2646bd3c36bc6 + pointRy = 0x0000019864a8b8855c2479cbefe375ae553e2393271ed36fadfc4494fc0583f6bd03598896f39854abeae5f9a6515a021e2c0eef139e71de610143f53382f4104dccb543 + + pointR = self.pointS.copy() + pointR.double() + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 2*0 + pai = self.pointS.point_at_infinity() + pointR = pai.copy() + pointR.double() + self.assertEqual(pointR, pai) + + # S + S + pointR = self.pointS.copy() + pointR += pointR + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_scalar_multiply(self): + d = 0x000001eb7f81785c9629f136a7e8f8c674957109735554111a2a866fa5a166699419bfa9936c78b62653964df0d6da940a695c7294d41b2d6600de6dfcf0edcfc89fdcb1 + pointRx = 0x00000091b15d09d0ca0353f8f96b93cdb13497b0a4bb582ae9ebefa35eee61bf7b7d041b8ec34c6c00c0c0671c4ae063318fb75be87af4fe859608c95f0ab4774f8c95bb + pointRy = 0x00000130f8f8b5e1abb4dd94f6baaf654a2d5810411e77b7423965e0c7fd79ec1ae563c207bd255ee9828eb7a03fed565240d2cc80ddd2cecbb2eb50f0951f75ad87977f + + pointR = self.pointS * d + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + # 0*S + pai = self.pointS.point_at_infinity() + pointR = self.pointS * 0 + self.assertEqual(pointR, pai) + + # -1*S + self.assertRaises(ValueError, lambda: self.pointS * -1) + + def test_joint_scalar_multiply(self): + d = 0x000001eb7f81785c9629f136a7e8f8c674957109735554111a2a866fa5a166699419bfa9936c78b62653964df0d6da940a695c7294d41b2d6600de6dfcf0edcfc89fdcb1 + e = 0x00000137e6b73d38f153c3a7575615812608f2bab3229c92e21c0d1c83cfad9261dbb17bb77a63682000031b9122c2f0cdab2af72314be95254de4291a8f85f7c70412e3 + pointRx = 0x0000009d3802642b3bea152beb9e05fba247790f7fc168072d363340133402f2585588dc1385d40ebcb8552f8db02b23d687cae46185b27528adb1bf9729716e4eba653d + pointRy = 0x0000000fe44344e79da6f49d87c1063744e5957d9ac0a505bafa8281c9ce9ff25ad53f8da084a2deb0923e46501de5797850c61b229023dd9cf7fc7f04cd35ebb026d89d + + pointR = self.pointS * d + pointR += self.pointT * e + self.assertEqual(pointR.x, pointRx) + self.assertEqual(pointR.y, pointRy) + + def test_sizes(self): + self.assertEqual(self.pointS.size_in_bits(), 521) + self.assertEqual(self.pointS.size_in_bytes(), 66) + + +class TestEccPoint_PAI_P192(unittest.TestCase): + """Test vectors from http://point-at-infinity.org/ecc/nisttv""" + + curve = _curves['p192'] + pointG = EccPoint(curve.Gx, curve.Gy, "p192") + + +tv_pai = load_test_vectors(("PublicKey", "ECC"), + "point-at-infinity.org-P192.txt", + "P-192 tests from point-at-infinity.org", + {"k": lambda k: int(k), + "x": lambda x: int(x, 16), + "y": lambda y: int(y, 16)}) or [] +for tv in tv_pai: + def new_test(self, scalar=tv.k, x=tv.x, y=tv.y): + result = self.pointG * scalar + self.assertEqual(result.x, x) + self.assertEqual(result.y, y) + setattr(TestEccPoint_PAI_P192, "test_%d" % tv.count, new_test) + + +class TestEccPoint_PAI_P224(unittest.TestCase): + """Test vectors from http://point-at-infinity.org/ecc/nisttv""" + + curve = _curves['p224'] + pointG = EccPoint(curve.Gx, curve.Gy, "p224") + + +tv_pai = load_test_vectors(("PublicKey", "ECC"), + "point-at-infinity.org-P224.txt", + "P-224 tests from point-at-infinity.org", + {"k": lambda k: int(k), + "x": lambda x: int(x, 16), + "y": lambda y: int(y, 16)}) or [] +for tv in tv_pai: + def new_test(self, scalar=tv.k, x=tv.x, y=tv.y): + result = self.pointG * scalar +
self.assertEqual(result.x, x) + self.assertEqual(result.y, y) + setattr(TestEccPoint_PAI_P224, "test_%d" % tv.count, new_test) + + +class TestEccPoint_PAI_P256(unittest.TestCase): + """Test vectors from http://point-at-infinity.org/ecc/nisttv""" + + curve = _curves['p256'] + pointG = EccPoint(curve.Gx, curve.Gy, "p256") + + +tv_pai = load_test_vectors(("PublicKey", "ECC"), + "point-at-infinity.org-P256.txt", + "P-256 tests from point-at-infinity.org", + {"k": lambda k: int(k), + "x": lambda x: int(x, 16), + "y": lambda y: int(y, 16)}) or [] +for tv in tv_pai: + def new_test(self, scalar=tv.k, x=tv.x, y=tv.y): + result = self.pointG * scalar + self.assertEqual(result.x, x) + self.assertEqual(result.y, y) + setattr(TestEccPoint_PAI_P256, "test_%d" % tv.count, new_test) + + +class TestEccPoint_PAI_P384(unittest.TestCase): + """Test vectors from http://point-at-infinity.org/ecc/nisttv""" + + curve = _curves['p384'] + pointG = EccPoint(curve.Gx, curve.Gy, "p384") + + +tv_pai = load_test_vectors(("PublicKey", "ECC"), + "point-at-infinity.org-P384.txt", + "P-384 tests from point-at-infinity.org", + {"k": lambda k: int(k), + "x": lambda x: int(x, 16), + "y": lambda y: int(y, 16)}) or [] +for tv in tv_pai: + def new_test(self, scalar=tv.k, x=tv.x, y=tv.y): + result = self.pointG * scalar + self.assertEqual(result.x, x) + self.assertEqual(result.y, y) + setattr(TestEccPoint_PAI_P384, "test_%d" % tv.count, new_test) + + +class TestEccPoint_PAI_P521(unittest.TestCase): + """Test vectors from http://point-at-infinity.org/ecc/nisttv""" + + curve = _curves['p521'] + pointG = EccPoint(curve.Gx, curve.Gy, "p521") + + +tv_pai = load_test_vectors(("PublicKey", "ECC"), + "point-at-infinity.org-P521.txt", + "P-521 tests from point-at-infinity.org", + {"k": lambda k: int(k), + "x": lambda x: int(x, 16), + "y": lambda y: int(y, 16)}) or [] +for tv in tv_pai: + def new_test(self, scalar=tv.k, x=tv.x, y=tv.y): + result = self.pointG * scalar + self.assertEqual(result.x, x) + self.assertEqual(result.y, y) + setattr(TestEccPoint_PAI_P521, "test_%d" % tv.count, new_test) + + +class TestEccKey_P192(unittest.TestCase): + + def test_private_key(self): + + key = EccKey(curve="P-192", d=1) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, _curves['p192'].Gx) + self.assertEqual(key.pointQ.y, _curves['p192'].Gy) + + point = EccPoint(_curves['p192'].Gx, _curves['p192'].Gy, curve='P-192') + key = EccKey(curve="P-192", d=1, point=point) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="secp192r1", d=1) + key = EccKey(curve="prime192v1", d=1) + + def test_public_key(self): + + point = EccPoint(_curves['p192'].Gx, _curves['p192'].Gy, curve='P-192') + key = EccKey(curve="P-192", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + + priv_key = EccKey(curve="P-192", d=3) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_curve(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-193", d=1)) + + def test_invalid_d(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-192", d=0)) + self.assertRaises(ValueError, lambda: EccKey(curve="P-192", + d=_curves['p192'].order)) + + def test_equality(self): + + private_key = ECC.construct(d=3, curve="P-192") + private_key2 = ECC.construct(d=3, curve="P-192") + 
private_key3 = ECC.construct(d=4, curve="P-192") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + def test_name_consistency(self): + key = ECC.generate(curve='p192') + self.assertIn("curve='NIST P-192'", repr(key)) + self.assertEqual(key.curve, 'NIST P-192') + self.assertEqual(key.public_key().curve, 'NIST P-192') + + +class TestEccKey_P224(unittest.TestCase): + + def test_private_key(self): + + key = EccKey(curve="P-224", d=1) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, _curves['p224'].Gx) + self.assertEqual(key.pointQ.y, _curves['p224'].Gy) + + point = EccPoint(_curves['p224'].Gx, _curves['p224'].Gy, curve='P-224') + key = EccKey(curve="P-224", d=1, point=point) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="secp224r1", d=1) + key = EccKey(curve="prime224v1", d=1) + + def test_public_key(self): + + point = EccPoint(_curves['p224'].Gx, _curves['p224'].Gy, curve='P-224') + key = EccKey(curve="P-224", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + + priv_key = EccKey(curve="P-224", d=3) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_curve(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-225", d=1)) + + def test_invalid_d(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-224", d=0)) + self.assertRaises(ValueError, lambda: EccKey(curve="P-224", + d=_curves['p224'].order)) + + def test_equality(self): + + private_key = ECC.construct(d=3, curve="P-224") + private_key2 = ECC.construct(d=3, curve="P-224") + private_key3 = ECC.construct(d=4, curve="P-224") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + def test_name_consistency(self): + key = ECC.generate(curve='p224') + self.assertIn("curve='NIST P-224'", repr(key)) + self.assertEqual(key.curve, 'NIST P-224') + self.assertEqual(key.public_key().curve, 'NIST P-224') + + +class TestEccKey_P256(unittest.TestCase): + + def test_private_key(self): + + key = EccKey(curve="P-256", d=1) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, _curves['p256'].Gx) + self.assertEqual(key.pointQ.y, _curves['p256'].Gy) + + point = EccPoint(_curves['p256'].Gx, _curves['p256'].Gy) + key = EccKey(curve="P-256", d=1, point=point) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="secp256r1", d=1) + key = EccKey(curve="prime256v1", d=1) + + # Must not accept seed parameter (only the EdDSA curves take a seed) + self.assertRaises(ValueError, EccKey, curve="p256", seed=b'H'*32) + + def test_public_key(self): + + point = EccPoint(_curves['p256'].Gx, _curves['p256'].Gy) + key =
EccKey(curve="P-256", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + + priv_key = EccKey(curve="P-256", d=3) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_curve(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-257", d=1)) + + def test_invalid_d(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-256", d=0)) + self.assertRaises(ValueError, lambda: EccKey(curve="P-256", d=_curves['p256'].order)) + + def test_equality(self): + + private_key = ECC.construct(d=3, curve="P-256") + private_key2 = ECC.construct(d=3, curve="P-256") + private_key3 = ECC.construct(d=4, curve="P-256") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + def test_name_consistency(self): + key = ECC.generate(curve='p256') + self.assertIn("curve='NIST P-256'", repr(key)) + self.assertEqual(key.curve, 'NIST P-256') + self.assertEqual(key.public_key().curve, 'NIST P-256') + + +class TestEccKey_P384(unittest.TestCase): + + def test_private_key(self): + + p384 = _curves['p384'] + + key = EccKey(curve="P-384", d=1) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, p384.Gx) + self.assertEqual(key.pointQ.y, p384.Gy) + + point = EccPoint(p384.Gx, p384.Gy, "p384") + key = EccKey(curve="P-384", d=1, point=point) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="p384", d=1) + key = EccKey(curve="secp384r1", d=1) + key = EccKey(curve="prime384v1", d=1) + + def test_public_key(self): + + p384 = _curves['p384'] + point = EccPoint(p384.Gx, p384.Gy, 'p384') + key = EccKey(curve="P-384", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + + priv_key = EccKey(curve="P-384", d=3) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_curve(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-385", d=1)) + + def test_invalid_d(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-384", d=0)) + self.assertRaises(ValueError, lambda: EccKey(curve="P-384", + d=_curves['p384'].order)) + + def test_equality(self): + + private_key = ECC.construct(d=3, curve="P-384") + private_key2 = ECC.construct(d=3, curve="P-384") + private_key3 = ECC.construct(d=4, curve="P-384") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + def test_name_consistency(self): + key = ECC.generate(curve='p384') + self.assertIn("curve='NIST P-384'", repr(key)) + self.assertEqual(key.curve, 'NIST P-384') + self.assertEqual(key.public_key().curve, 'NIST P-384') + + +class TestEccKey_P521(unittest.TestCase): + + def 
test_private_key(self): + + p521 = _curves['p521'] + + key = EccKey(curve="P-521", d=1) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ.x, p521.Gx) + self.assertEqual(key.pointQ.y, p521.Gy) + + point = EccPoint(p521.Gx, p521.Gy, "p521") + key = EccKey(curve="P-521", d=1, point=point) + self.assertEqual(key.d, 1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, point) + + # Other names + key = EccKey(curve="p521", d=1) + key = EccKey(curve="secp521r1", d=1) + key = EccKey(curve="prime521v1", d=1) + + def test_public_key(self): + + p521 = _curves['p521'] + point = EccPoint(p521.Gx, p521.Gy, 'p521') + key = EccKey(curve="P-521", point=point) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, point) + + def test_public_key_derived(self): + + priv_key = EccKey(curve="P-521", d=3) + pub_key = priv_key.public_key() + self.assertFalse(pub_key.has_private()) + self.assertEqual(priv_key.pointQ, pub_key.pointQ) + + def test_invalid_curve(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-522", d=1)) + + def test_invalid_d(self): + self.assertRaises(ValueError, lambda: EccKey(curve="P-521", d=0)) + self.assertRaises(ValueError, lambda: EccKey(curve="P-521", + d=_curves['p521'].order)) + + def test_equality(self): + + private_key = ECC.construct(d=3, curve="P-521") + private_key2 = ECC.construct(d=3, curve="P-521") + private_key3 = ECC.construct(d=4, curve="P-521") + + public_key = private_key.public_key() + public_key2 = private_key2.public_key() + public_key3 = private_key3.public_key() + + self.assertEqual(private_key, private_key2) + self.assertNotEqual(private_key, private_key3) + + self.assertEqual(public_key, public_key2) + self.assertNotEqual(public_key, public_key3) + + self.assertNotEqual(public_key, private_key) + + def test_name_consistency(self): + key = ECC.generate(curve='p521') + self.assertIn("curve='NIST P-521'", repr(key)) + self.assertEqual(key.curve, 'NIST P-521') + self.assertEqual(key.public_key().curve, 'NIST P-521') + + +class TestEccModule_P192(unittest.TestCase): + + def test_generate(self): + + key = ECC.generate(curve="P-192") + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, EccPoint(_curves['p192'].Gx, + _curves['p192'].Gy, + "P-192") * key.d, + "p192") + + # Other names + ECC.generate(curve="secp192r1") + ECC.generate(curve="prime192v1") + + def test_construct(self): + + key = ECC.construct(curve="P-192", d=1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, _curves['p192'].G) + + key = ECC.construct(curve="P-192", point_x=_curves['p192'].Gx, + point_y=_curves['p192'].Gy) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, _curves['p192'].G) + + # Other names + ECC.construct(curve="p192", d=1) + ECC.construct(curve="secp192r1", d=1) + ECC.construct(curve="prime192v1", d=1) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['p192'].Gx, point_y=_curves['p192'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="P-192", **coord) + self.assertRaises(ValueError, ECC.construct, curve="P-192", d=2, **coordG) + + +class TestEccModule_P224(unittest.TestCase): + + def test_generate(self): + + key = ECC.generate(curve="P-224") + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, EccPoint(_curves['p224'].Gx, + _curves['p224'].Gy, + "P-224") * key.d, + "p224") + + # Other names + ECC.generate(curve="secp224r1") + ECC.generate(curve="prime224v1") + + def
test_construct(self): + + key = ECC.construct(curve="P-224", d=1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, _curves['p224'].G) + + key = ECC.construct(curve="P-224", point_x=_curves['p224'].Gx, + point_y=_curves['p224'].Gy) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, _curves['p224'].G) + + # Other names + ECC.construct(curve="p224", d=1) + ECC.construct(curve="secp224r1", d=1) + ECC.construct(curve="prime224v1", d=1) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['p224'].Gx, point_y=_curves['p224'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="P-224", **coord) + self.assertRaises(ValueError, ECC.construct, curve="P-224", d=2, **coordG) + + +class TestEccModule_P256(unittest.TestCase): + + def test_generate(self): + + key = ECC.generate(curve="P-256") + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, EccPoint(_curves['p256'].Gx, + _curves['p256'].Gy) * key.d, + "p256") + + # Other names + ECC.generate(curve="secp256r1") + ECC.generate(curve="prime256v1") + + def test_construct(self): + + key = ECC.construct(curve="P-256", d=1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, _curves['p256'].G) + + key = ECC.construct(curve="P-256", point_x=_curves['p256'].Gx, + point_y=_curves['p256'].Gy) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, _curves['p256'].G) + + # Other names + ECC.construct(curve="p256", d=1) + ECC.construct(curve="secp256r1", d=1) + ECC.construct(curve="prime256v1", d=1) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['p256'].Gx, point_y=_curves['p256'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="P-256", **coord) + self.assertRaises(ValueError, ECC.construct, curve="P-256", d=2, **coordG) + + +class TestEccModule_P384(unittest.TestCase): + + def test_generate(self): + + curve = _curves['p384'] + key = ECC.generate(curve="P-384") + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, EccPoint(curve.Gx, curve.Gy, "p384") * key.d) + + # Other names + ECC.generate(curve="secp384r1") + ECC.generate(curve="prime384v1") + + def test_construct(self): + + curve = _curves['p384'] + key = ECC.construct(curve="P-384", d=1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, _curves['p384'].G) + + key = ECC.construct(curve="P-384", point_x=curve.Gx, point_y=curve.Gy) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, curve.G) + + # Other names + ECC.construct(curve="p384", d=1) + ECC.construct(curve="secp384r1", d=1) + ECC.construct(curve="prime384v1", d=1) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['p384'].Gx, point_y=_curves['p384'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="P-384", **coord) + self.assertRaises(ValueError, ECC.construct, curve="P-384", d=2, **coordG) + + +class TestEccModule_P521(unittest.TestCase): + + def test_generate(self): + + curve = _curves['p521'] + key = ECC.generate(curve="P-521") + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, EccPoint(curve.Gx, curve.Gy, "p521") * key.d) + + # Other names + ECC.generate(curve="secp521r1") + ECC.generate(curve="prime521v1") + + def test_construct(self): + + curve = _curves['p521'] + key = ECC.construct(curve="P-521", d=1) + self.assertTrue(key.has_private()) + self.assertEqual(key.pointQ, _curves['p521'].G) + + key = 
ECC.construct(curve="P-521", point_x=curve.Gx, point_y=curve.Gy) + self.assertFalse(key.has_private()) + self.assertEqual(key.pointQ, curve.G) + + # Other names + ECC.construct(curve="p521", d=1) + ECC.construct(curve="secp521r1", d=1) + ECC.construct(curve="prime521v1", d=1) + + def test_negative_construct(self): + coord = dict(point_x=10, point_y=4) + coordG = dict(point_x=_curves['p521'].Gx, point_y=_curves['p521'].Gy) + + self.assertRaises(ValueError, ECC.construct, curve="P-521", **coord) + self.assertRaises(ValueError, ECC.construct, curve="P-521", d=2, **coordG) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestEccPoint) + tests += list_test_cases(TestEccPoint_NIST_P192) + tests += list_test_cases(TestEccPoint_NIST_P224) + tests += list_test_cases(TestEccPoint_NIST_P256) + tests += list_test_cases(TestEccPoint_NIST_P384) + tests += list_test_cases(TestEccPoint_NIST_P521) + tests += list_test_cases(TestEccPoint_PAI_P192) + tests += list_test_cases(TestEccPoint_PAI_P224) + tests += list_test_cases(TestEccPoint_PAI_P256) + tests += list_test_cases(TestEccPoint_PAI_P384) + tests += list_test_cases(TestEccPoint_PAI_P521) + tests += list_test_cases(TestEccKey_P192) + tests += list_test_cases(TestEccKey_P224) + tests += list_test_cases(TestEccKey_P256) + tests += list_test_cases(TestEccKey_P384) + tests += list_test_cases(TestEccKey_P521) + tests += list_test_cases(TestEccModule_P192) + tests += list_test_cases(TestEccModule_P224) + tests += list_test_cases(TestEccModule_P256) + tests += list_test_cases(TestEccModule_P384) + tests += list_test_cases(TestEccModule_P521) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ElGamal.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ElGamal.py new file mode 100644 index 0000000..67d2e0b --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_ElGamal.py @@ -0,0 +1,217 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/test_ElGamal.py: Self-test for the ElGamal primitive +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Self-test suite for Cryptodome.PublicKey.ElGamal""" + +__revision__ = "$Id$" + +import unittest +from Cryptodome.SelfTest.st_common import list_test_cases, a2b_hex, b2a_hex +from Cryptodome import Random +from Cryptodome.PublicKey import ElGamal +from Cryptodome.Util.number import bytes_to_long +from Cryptodome.Util.py3compat import * + +class ElGamalTest(unittest.TestCase): + + # + # Test vectors + # + # There seem to be no real ElGamal test vectors available in the + # public domain. The following test vectors have been generated + # with libgcrypt 1.5.0. + # + # Encryption + tve=[ + { + # 256 bits + 'p' :'BA4CAEAAED8CBE952AFD2126C63EB3B345D65C2A0A73D2A3AD4138B6D09BD933', + 'g' :'05', + 'y' :'60D063600ECED7C7C55146020E7A31C4476E9793BEAED420FEC9E77604CAE4EF', + 'x' :'1D391BA2EE3C37FE1BA175A69B2C73A11238AD77675932', + 'k' :'F5893C5BAB4131264066F57AB3D8AD89E391A0B68A68A1', + 'pt' :'48656C6C6F207468657265', + 'ct1':'32BFD5F487966CEA9E9356715788C491EC515E4ED48B58F0F00971E93AAA5EC7', + 'ct2':'7BE8FBFF317C93E82FCEF9BD515284BA506603FEA25D01C0CB874A31F315EE68' + }, + + { + # 512 bits + 'p' :'F1B18AE9F7B4E08FDA9A04832F4E919D89462FD31BF12F92791A93519F75076D6CE3942689CDFF2F344CAFF0F82D01864F69F3AECF566C774CBACF728B81A227', + 'g' :'07', + 'y' :'688628C676E4F05D630E1BE39D0066178CA7AA83836B645DE5ADD359B4825A12B02EF4252E4E6FA9BEC1DB0BE90F6D7C8629CABB6E531F472B2664868156E20C', + 'x' :'14E60B1BDFD33436C0DA8A22FDC14A2CCDBBED0627CE68', + 'k' :'38DBF14E1F319BDA9BAB33EEEADCAF6B2EA5250577ACE7', + 'pt' :'48656C6C6F207468657265', + 'ct1':'290F8530C2CC312EC46178724F196F308AD4C523CEABB001FACB0506BFED676083FE0F27AC688B5C749AB3CB8A80CD6F7094DBA421FB19442F5A413E06A9772B', + 'ct2':'1D69AAAD1DC50493FB1B8E8721D621D683F3BF1321BE21BC4A43E11B40C9D4D9C80DE3AAC2AB60D31782B16B61112E68220889D53C4C3136EE6F6CE61F8A23A0' + } + ] + + # Signature + tvs=[ + { + # 256 bits + 'p' :'D2F3C41EA66530838A704A48FFAC9334F4701ECE3A97CEE4C69DD01AE7129DD7', + 'g' :'05', + 'y' :'C3F9417DC0DAFEA6A05C1D2333B7A95E63B3F4F28CC962254B3256984D1012E7', + 'x' :'165E4A39BE44D5A2D8B1332D416BC559616F536BC735BB', + 'k' :'C7F0C794A7EAD726E25A47FF8928013680E73C51DD3D7D99BFDA8F492585928F', + 'h' :'48656C6C6F207468657265', + 'sig1':'35CA98133779E2073EF31165AFCDEB764DD54E96ADE851715495F9C635E1E7C2', + 'sig2':'0135B88B1151279FE5D8078D4FC685EE81177EE9802AB123A73925FC1CB059A7', + }, + { + # 512 bits + 'p' :'E24CF3A4B8A6AF749DCA6D714282FE4AABEEE44A53BB6ED15FBE32B5D3C3EF9CC4124A2ECA331F3C1C1B667ACA3766825217E7B5F9856648D95F05330C6A19CF', + 'g' :'0B', + 'y' :'2AD3A1049CA5D4ED207B2431C79A8719BB4073D4A94E450EA6CEE8A760EB07ADB67C0D52C275EE85D7B52789061EE45F2F37D9B2AE522A51C28329766BFE68AC', + 'x' :'16CBB4F46D9ECCF24FF9F7E63CAA3BD8936341555062AB', + 'k' :'8A3D89A4E429FD2476D7D717251FB79BF900FFE77444E6BB8299DC3F84D0DD57ABAB50732AE158EA52F5B9E7D8813E81FD9F79470AE22F8F1CF9AEC820A78C69', + 'h' :'48656C6C6F207468657265', + 'sig1':'BE001AABAFFF976EC9016198FBFEA14CBEF96B000CCC0063D3324016F9E91FE80D8F9325812ED24DDB2B4D4CF4430B169880B3CE88313B53255BD4EC0378586F', + 'sig2':'5E266F3F837BA204E3BBB6DBECC0611429D96F8C7CE8F4EFDF9D4CB681C2A954468A357BF4242CEC7418B51DFC081BCD21299EF5B5A0DDEF3A139A1817503DDE', + } + ] + + def test_generate_180(self): + self._test_random_key(180) + + def test_encryption(self): + for tv in self.tve: + d = self.convert_tv(tv, True) + key = ElGamal.construct(d['key']) + ct = key._encrypt(d['pt'], d['k']) + self.assertEqual(ct[0], d['ct1']) + self.assertEqual(ct[1], d['ct2']) + + def 
test_decryption(self): + for tv in self.tve: + d = self.convert_tv(tv, True) + key = ElGamal.construct(d['key']) + pt = key._decrypt((d['ct1'], d['ct2'])) + self.assertEqual(pt, d['pt']) + + def test_signing(self): + for tv in self.tvs: + d = self.convert_tv(tv, True) + key = ElGamal.construct(d['key']) + sig1, sig2 = key._sign(d['h'], d['k']) + self.assertEqual(sig1, d['sig1']) + self.assertEqual(sig2, d['sig2']) + + def test_verification(self): + for tv in self.tvs: + d = self.convert_tv(tv, True) + key = ElGamal.construct(d['key']) + # Positive test + res = key._verify( d['h'], (d['sig1'],d['sig2']) ) + self.assertTrue(res) + # Negative test + res = key._verify( d['h'], (d['sig1']+1,d['sig2']) ) + self.assertFalse(res) + + def test_bad_key3(self): + tup0 = list(self.convert_tv(self.tvs[0], 1)['key'])[:3] + tup = tup0[:] # work on a copy so each case tests a single corruption + tup[0] += 1 # p += 1 (not prime) + self.assertRaises(ValueError, ElGamal.construct, tup) + + tup = tup0[:] + tup[1] = 1 # g = 1 + self.assertRaises(ValueError, ElGamal.construct, tup) + + tup = tup0[:] + tup[2] = tup[0]*2 # y = 2*p + self.assertRaises(ValueError, ElGamal.construct, tup) + + def test_bad_key4(self): + tup = tup0 = list(self.convert_tv(self.tvs[0], 1)['key']) + tup[3] += 1 # x += 1 + self.assertRaises(ValueError, ElGamal.construct, tup) + + def convert_tv(self, tv, as_longs=0): + """Convert a test vector from textual form (hexadecimal ascii) + to either integers or byte strings.""" + key_comps = 'p','g','y','x' + tv2 = {} + for c in tv.keys(): + tv2[c] = a2b_hex(tv[c]) + if as_longs or c in key_comps or c in ('sig1','sig2'): + tv2[c] = bytes_to_long(tv2[c]) + tv2['key']=[] + for c in key_comps: + tv2['key'] += [tv2[c]] + del tv2[c] + return tv2 + + def _test_random_key(self, bits): + elgObj = ElGamal.generate(bits, Random.new().read) + self._check_private_key(elgObj) + self._exercise_primitive(elgObj) + pub = elgObj.publickey() + self._check_public_key(pub) + self._exercise_public_primitive(elgObj) + + def _check_private_key(self, elgObj): + + # Check capabilities + self.assertTrue(elgObj.has_private()) + + # Sanity check key data + self.assertTrue(1<elgObj.g<(elgObj.p-1)) + self.assertEqual(pow(elgObj.g, elgObj.p-1, elgObj.p), 1) + self.assertTrue(1<elgObj.x<(elgObj.p-1)) + self.assertEqual(pow(elgObj.g, elgObj.x, elgObj.p), elgObj.y) + + def _check_public_key(self, elgObj): + + # Check capabilities + self.assertFalse(elgObj.has_private()) + + # Sanity check key data + self.assertTrue(1<elgObj.g<(elgObj.p-1)) + self.assertEqual(pow(elgObj.g, elgObj.p-1, elgObj.p), 1) + + def _exercise_primitive(self, elgObj): + # Test encryption/decryption + plaintext = 127218 + ciphertext = elgObj._encrypt(plaintext, 123456789) + plaintextP = elgObj._decrypt(ciphertext) + self.assertEqual(plaintext, plaintextP) + + # Test signature/verification + signature = elgObj._sign(plaintext, 987654321) + elgObj._verify(plaintext, signature) + + def _exercise_public_primitive(self, elgObj): + plaintext = 92987276 + ciphertext = elgObj._encrypt(plaintext, 123456789) + +def get_tests(config={}): + tests = [] + tests += list_test_cases(ElGamalTest) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_RSA.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_RSA.py new file mode 100644 index 0000000..b77dbbf --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_RSA.py @@ -0,0
+1,320 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/test_RSA.py: Self-test for the RSA primitive +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test suite for Cryptodome.PublicKey.RSA""" + +__revision__ = "$Id$" + +import os +import pickle +from pickle import PicklingError +from Cryptodome.Util.py3compat import * + +import unittest +from Cryptodome.SelfTest.st_common import list_test_cases, a2b_hex, b2a_hex + +class RSATest(unittest.TestCase): + # Test vectors from "RSA-OAEP and RSA-PSS test vectors (.zip file)" + # ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1-vec.zip + # See RSADSI's PKCS#1 page at + # http://www.rsa.com/rsalabs/node.asp?id=2125 + + # from oaep-int.txt + + # TODO: PyCryptodome treats the message as starting *after* the leading "00" + # TODO: That behaviour should probably be changed in the future. 
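+    # Editor's note (illustrative, not upstream code): setUp() below rebuilds the
+    # remaining private-key parameters from (n, e, p) alone. The same arithmetic
+    # with toy primes p=11, q=13, e=7 is easy to check by hand:
+    #
+    #     n = p*q = 143
+    #     d = inverse(e, (p-1)*(q-1)) = inverse(7, 120) = 103   (7*103 = 721 = 6*120 + 1)
+    #     u = inverse(p, q) = inverse(11, 13) = 6               (11*6  =  66 = 5*13  + 1)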
+ plaintext = """ + eb 7a 19 ac e9 e3 00 63 50 e3 29 50 4b 45 e2 + ca 82 31 0b 26 dc d8 7d 5c 68 f1 ee a8 f5 52 67 + c3 1b 2e 8b b4 25 1f 84 d7 e0 b2 c0 46 26 f5 af + f9 3e dc fb 25 c9 c2 b3 ff 8a e1 0e 83 9a 2d db + 4c dc fe 4f f4 77 28 b4 a1 b7 c1 36 2b aa d2 9a + b4 8d 28 69 d5 02 41 21 43 58 11 59 1b e3 92 f9 + 82 fb 3e 87 d0 95 ae b4 04 48 db 97 2f 3a c1 4f + 7b c2 75 19 52 81 ce 32 d2 f1 b7 6d 4d 35 3e 2d + """ + + ciphertext = """ + 12 53 e0 4d c0 a5 39 7b b4 4a 7a b8 7e 9b f2 a0 + 39 a3 3d 1e 99 6f c8 2a 94 cc d3 00 74 c9 5d f7 + 63 72 20 17 06 9e 52 68 da 5d 1c 0b 4f 87 2c f6 + 53 c1 1d f8 23 14 a6 79 68 df ea e2 8d ef 04 bb + 6d 84 b1 c3 1d 65 4a 19 70 e5 78 3b d6 eb 96 a0 + 24 c2 ca 2f 4a 90 fe 9f 2e f5 c9 c1 40 e5 bb 48 + da 95 36 ad 87 00 c8 4f c9 13 0a de a7 4e 55 8d + 51 a7 4d df 85 d8 b5 0d e9 68 38 d6 06 3e 09 55 + """ + + modulus = """ + bb f8 2f 09 06 82 ce 9c 23 38 ac 2b 9d a8 71 f7 + 36 8d 07 ee d4 10 43 a4 40 d6 b6 f0 74 54 f5 1f + b8 df ba af 03 5c 02 ab 61 ea 48 ce eb 6f cd 48 + 76 ed 52 0d 60 e1 ec 46 19 71 9d 8a 5b 8b 80 7f + af b8 e0 a3 df c7 37 72 3e e6 b4 b7 d9 3a 25 84 + ee 6a 64 9d 06 09 53 74 88 34 b2 45 45 98 39 4e + e0 aa b1 2d 7b 61 a5 1f 52 7a 9a 41 f6 c1 68 7f + e2 53 72 98 ca 2a 8f 59 46 f8 e5 fd 09 1d bd cb + """ + + e = 0x11 # public exponent + + prime_factor = """ + c9 7f b1 f0 27 f4 53 f6 34 12 33 ea aa d1 d9 35 + 3f 6c 42 d0 88 66 b1 d0 5a 0f 20 35 02 8b 9d 86 + 98 40 b4 16 66 b4 2e 92 ea 0d a3 b4 32 04 b5 cf + ce 33 52 52 4d 04 16 a5 a4 41 e7 00 af 46 15 03 + """ + + def setUp(self): + global RSA, Random, bytes_to_long + from Cryptodome.PublicKey import RSA + from Cryptodome import Random + from Cryptodome.Util.number import bytes_to_long, inverse + self.n = bytes_to_long(a2b_hex(self.modulus)) + self.p = bytes_to_long(a2b_hex(self.prime_factor)) + + # Compute q, d, and u from n, e, and p + self.q = self.n // self.p + self.d = inverse(self.e, (self.p-1)*(self.q-1)) + self.u = inverse(self.p, self.q) # u = p**-1 (mod q) + + self.rsa = RSA + + def test_generate_1arg(self): + """RSA (default implementation) generated key (1 argument)""" + rsaObj = self.rsa.generate(1024) + self._check_private_key(rsaObj) + self._exercise_primitive(rsaObj) + pub = rsaObj.public_key() + self._check_public_key(pub) + self._exercise_public_primitive(rsaObj) + + def test_generate_2arg(self): + """RSA (default implementation) generated key (2 arguments)""" + rsaObj = self.rsa.generate(1024, Random.new().read) + self._check_private_key(rsaObj) + self._exercise_primitive(rsaObj) + pub = rsaObj.public_key() + self._check_public_key(pub) + self._exercise_public_primitive(rsaObj) + + def test_generate_3args(self): + rsaObj = self.rsa.generate(1024, Random.new().read,e=65537) + self._check_private_key(rsaObj) + self._exercise_primitive(rsaObj) + pub = rsaObj.public_key() + self._check_public_key(pub) + self._exercise_public_primitive(rsaObj) + self.assertEqual(65537,rsaObj.e) + + def test_construct_2tuple(self): + """RSA (default implementation) constructed key (2-tuple)""" + pub = self.rsa.construct((self.n, self.e)) + self._check_public_key(pub) + self._check_encryption(pub) + + def test_construct_3tuple(self): + """RSA (default implementation) constructed key (3-tuple)""" + rsaObj = self.rsa.construct((self.n, self.e, self.d)) + self._check_encryption(rsaObj) + self._check_decryption(rsaObj) + + def test_construct_4tuple(self): + """RSA (default implementation) constructed key (4-tuple)""" + rsaObj = self.rsa.construct((self.n, self.e, self.d, self.p)) +
self._check_encryption(rsaObj) + self._check_decryption(rsaObj) + + def test_construct_5tuple(self): + """RSA (default implementation) constructed key (5-tuple)""" + rsaObj = self.rsa.construct((self.n, self.e, self.d, self.p, self.q)) + self._check_private_key(rsaObj) + self._check_encryption(rsaObj) + self._check_decryption(rsaObj) + + def test_construct_6tuple(self): + """RSA (default implementation) constructed key (6-tuple)""" + rsaObj = self.rsa.construct((self.n, self.e, self.d, self.p, self.q, self.u)) + self._check_private_key(rsaObj) + self._check_encryption(rsaObj) + self._check_decryption(rsaObj) + + def test_construct_bad_key2(self): + tup = (self.n, 1) + self.assertRaises(ValueError, self.rsa.construct, tup) + + # An even modulus is wrong + tup = (self.n+1, self.e) + self.assertRaises(ValueError, self.rsa.construct, tup) + + def test_construct_bad_key3(self): + tup = (self.n, self.e, self.d+1) + self.assertRaises(ValueError, self.rsa.construct, tup) + + def test_construct_bad_key5(self): + tup = (self.n, self.e, self.d, self.p, self.p) + self.assertRaises(ValueError, self.rsa.construct, tup) + + tup = (self.p*self.p, self.e, self.p, self.p) + self.assertRaises(ValueError, self.rsa.construct, tup) + + tup = (self.p*self.p, 3, self.p, self.q) + self.assertRaises(ValueError, self.rsa.construct, tup) + + def test_construct_bad_key6(self): + tup = (self.n, self.e, self.d, self.p, self.q, 10) + self.assertRaises(ValueError, self.rsa.construct, tup) + + from Cryptodome.Util.number import inverse + tup = (self.n, self.e, self.d, self.p, self.q, inverse(self.q, self.p)) + self.assertRaises(ValueError, self.rsa.construct, tup) + + def test_factoring(self): + rsaObj = self.rsa.construct([self.n, self.e, self.d]) + self.assertTrue(rsaObj.p==self.p or rsaObj.p==self.q) + self.assertTrue(rsaObj.q==self.p or rsaObj.q==self.q) + self.assertTrue(rsaObj.q*rsaObj.p == self.n) + + self.assertRaises(ValueError, self.rsa.construct, [self.n, self.e, self.n-1]) + + def test_repr(self): + rsaObj = self.rsa.construct((self.n, self.e, self.d, self.p, self.q)) + repr(rsaObj) + + def test_serialization(self): + """RSA keys are unpicklable""" + + rsa_key = self.rsa.generate(1024) + self.assertRaises(PicklingError, pickle.dumps, rsa_key) + + def test_raw_rsa_boundary(self): + # The argument of every RSA raw operation (encrypt/decrypt) must be + # non-negative and no larger than the modulus + rsa_obj = self.rsa.generate(1024) + + self.assertRaises(ValueError, rsa_obj._decrypt, rsa_obj.n) + self.assertRaises(ValueError, rsa_obj._encrypt, rsa_obj.n) + + self.assertRaises(ValueError, rsa_obj._decrypt, -1) + self.assertRaises(ValueError, rsa_obj._encrypt, -1) + + def test_size(self): + pub = self.rsa.construct((self.n, self.e)) + self.assertEqual(pub.size_in_bits(), 1024) + self.assertEqual(pub.size_in_bytes(), 128) + + def _check_private_key(self, rsaObj): + from Cryptodome.Math.Numbers import Integer + + # Check capabilities + self.assertEqual(1, rsaObj.has_private()) + + # Sanity check key data + self.assertEqual(rsaObj.n, rsaObj.p * rsaObj.q) # n = pq + lcm = int(Integer(rsaObj.p-1).lcm(rsaObj.q-1)) + self.assertEqual(1, rsaObj.d * rsaObj.e % lcm) # ed = 1 (mod LCM(p-1, q-1)) + self.assertEqual(1, rsaObj.p * rsaObj.u % rsaObj.q) # pu = 1 (mod q) + self.assertEqual(1, rsaObj.p > 1) # p > 1 + self.assertEqual(1, rsaObj.q > 1) # q > 1 + self.assertEqual(1, rsaObj.e > 1) # e > 1 + self.assertEqual(1, rsaObj.d > 1) # d > 1 + + self.assertEqual(rsaObj.u, rsaObj.invp) + self.assertEqual(1, rsaObj.q * rsaObj.invq %
rsaObj.p) + + def _check_public_key(self, rsaObj): + ciphertext = a2b_hex(self.ciphertext) + + # Check capabilities + self.assertEqual(0, rsaObj.has_private()) + + # Check that the public components n and e are accessible + self.assertEqual(rsaObj.n, rsaObj.n) + self.assertEqual(rsaObj.e, rsaObj.e) + + # Check that private parameters are all missing + self.assertEqual(0, hasattr(rsaObj, 'd')) + self.assertEqual(0, hasattr(rsaObj, 'p')) + self.assertEqual(0, hasattr(rsaObj, 'q')) + self.assertEqual(0, hasattr(rsaObj, 'u')) + + # Sanity check key data + self.assertEqual(1, rsaObj.e > 1) # e > 1 + + # Public keys should not be able to sign or decrypt + self.assertRaises(TypeError, rsaObj._decrypt, + bytes_to_long(ciphertext)) + + # Check __eq__ and __ne__ + self.assertEqual(rsaObj.public_key() == rsaObj.public_key(), True) + self.assertEqual(rsaObj.public_key() != rsaObj.public_key(), False) + + self.assertEqual(rsaObj.publickey(), rsaObj.public_key()) + + def _exercise_primitive(self, rsaObj): + # Since we're using a randomly-generated key, we can't check the test + # vector, but we can make sure encryption and decryption are inverse + # operations. + ciphertext = bytes_to_long(a2b_hex(self.ciphertext)) + + # Test decryption + plaintext = rsaObj._decrypt(ciphertext) + + # Test encryption + new_ciphertext2 = rsaObj._encrypt(plaintext) + self.assertEqual(ciphertext, new_ciphertext2) + + def _exercise_public_primitive(self, rsaObj): + plaintext = a2b_hex(self.plaintext) + + # Test encryption + new_ciphertext2 = rsaObj._encrypt(bytes_to_long(plaintext)) + + def _check_encryption(self, rsaObj): + plaintext = a2b_hex(self.plaintext) + ciphertext = a2b_hex(self.ciphertext) + + # Test encryption + new_ciphertext2 = rsaObj._encrypt(bytes_to_long(plaintext)) + self.assertEqual(bytes_to_long(ciphertext), new_ciphertext2) + + def _check_decryption(self, rsaObj): + plaintext = bytes_to_long(a2b_hex(self.plaintext)) + ciphertext = bytes_to_long(a2b_hex(self.ciphertext)) + + # Test plain decryption + new_plaintext = rsaObj._decrypt(ciphertext) + self.assertEqual(plaintext, new_plaintext) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(RSATest) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_DSA.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_DSA.py new file mode 100644 index 0000000..5ff0113 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_DSA.py @@ -0,0 +1,554 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/test_import_DSA.py: Self-test for importing DSA keys +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +import unittest +import re + +from Cryptodome.PublicKey import DSA +from Cryptodome.SelfTest.st_common import * +from Cryptodome.Util.py3compat import * + +from binascii import unhexlify + +class ImportKeyTests(unittest.TestCase): + + y = 92137165128186062214622779787483327510946462589285775188003362705875131352591574106484271700740858696583623951844732128165434284507709057439633739849986759064015013893156866539696757799934634945787496920169462601722830899660681779448742875054459716726855443681559131362852474817534616736104831095601710736729 + p = 162452170958135306109773853318304545923250830605675936228618290525164105310663722368377131295055868997377338797580997938253236213714988311430600065853662861806894003694743806769284131194035848116051021923956699231855223389086646903420682639786976554552864568460372266462812137447840653688476258666833303658691 + q = 988791743931120302950649732173330531512663554851 + g = 85583152299197514738065570254868711517748965097380456700369348466136657764813442044039878840094809620913085570225318356734366886985903212775602770761953571967834823306046501307810937486758039063386311593890777319935391363872375452381836756832784184928202587843258855704771836753434368484556809100537243908232 + x = 540873410045082450874416847965843801027716145253 + + def setUp(self): + + # It is easier to write test vectors in text form, + # and convert them to byte strigs dynamically here + for mname, mvalue in ImportKeyTests.__dict__.items(): + if mname[:4] in ('der_', 'pem_', 'ssh_'): + if mname[:4] == 'der_': + mvalue = unhexlify(tobytes(mvalue)) + mvalue = tobytes(mvalue) + setattr(self, mname, mvalue) + + # 1. SubjectPublicKeyInfo + der_public=\ + '308201b73082012b06072a8648ce3804013082011e02818100e756ee1717f4b6'+\ + '794c7c214724a19763742c45572b4b3f8ff3b44f3be9f44ce039a2757695ec91'+\ + '5697da74ef914fcd1b05660e2419c761d639f45d2d79b802dbd23e7ab8b81b47'+\ + '9a380e1f30932584ba2a0b955032342ebc83cb5ca906e7b0d7cd6fe656cecb4c'+\ + '8b5a77123a8c6750a481e3b06057aff6aa6eba620b832d60c3021500ad32f48c'+\ + 'd3ae0c45a198a61fa4b5e20320763b2302818079dfdc3d614fe635fceb7eaeae'+\ + '3718dc2efefb45282993ac6749dc83c223d8c1887296316b3b0b54466cf444f3'+\ + '4b82e3554d0b90a778faaf1306f025dae6a3e36c7f93dd5bac4052b92370040a'+\ + 'ca70b8d5820599711900efbc961812c355dd9beffe0981da85c5548074b41c56'+\ + 'ae43fd300d89262e4efd89943f99a651b03888038185000281810083352a69a1'+\ + '32f34843d2a0eb995bff4e2f083a73f0049d2c91ea2f0ce43d144abda48199e4'+\ + 'b003c570a8af83303d45105f606c5c48d925a40ed9c2630c2fa4cdbf838539de'+\ + 'b9a29f919085f2046369f627ca84b2cb1e2c7940564b670f963ab1164d4e2ca2'+\ + 'bf6ffd39f12f548928bf4d2d1b5e6980b4f1be4c92a91986fba559' + + def testImportKey1(self): + key_obj = DSA.importKey(self.der_public) + self.assertFalse(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + + def testExportKey1(self): + tup = (self.y, self.g, self.p, self.q) + key = DSA.construct(tup) + encoded = key.export_key('DER') + self.assertEqual(self.der_public, encoded) + + # 2. 
+ pem_public="""\ +-----BEGIN PUBLIC KEY----- +MIIBtzCCASsGByqGSM44BAEwggEeAoGBAOdW7hcX9LZ5THwhRyShl2N0LEVXK0s/ +j/O0Tzvp9EzgOaJ1dpXskVaX2nTvkU/NGwVmDiQZx2HWOfRdLXm4AtvSPnq4uBtH +mjgOHzCTJYS6KguVUDI0LryDy1ypBuew181v5lbOy0yLWncSOoxnUKSB47BgV6/2 +qm66YguDLWDDAhUArTL0jNOuDEWhmKYfpLXiAyB2OyMCgYB539w9YU/mNfzrfq6u +NxjcLv77RSgpk6xnSdyDwiPYwYhyljFrOwtURmz0RPNLguNVTQuQp3j6rxMG8CXa +5qPjbH+T3VusQFK5I3AECspwuNWCBZlxGQDvvJYYEsNV3Zvv/gmB2oXFVIB0tBxW +rkP9MA2JJi5O/YmUP5mmUbA4iAOBhQACgYEAgzUqaaEy80hD0qDrmVv/Ti8IOnPw +BJ0skeovDOQ9FEq9pIGZ5LADxXCor4MwPUUQX2BsXEjZJaQO2cJjDC+kzb+DhTne +uaKfkZCF8gRjafYnyoSyyx4seUBWS2cPljqxFk1OLKK/b/058S9UiSi/TS0bXmmA +tPG+TJKpGYb7pVk= +-----END PUBLIC KEY-----""" + + def testImportKey2(self): + for pem in (self.pem_public, tostr(self.pem_public)): + key_obj = DSA.importKey(pem) + self.assertFalse(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + + def testExportKey2(self): + tup = (self.y, self.g, self.p, self.q) + key = DSA.construct(tup) + encoded = key.export_key('PEM') + self.assertEqual(self.pem_public, encoded) + + # 3. OpenSSL/OpenSSH format + der_private=\ + '308201bb02010002818100e756ee1717f4b6794c7c214724a19763742c45572b'+\ + '4b3f8ff3b44f3be9f44ce039a2757695ec915697da74ef914fcd1b05660e2419'+\ + 'c761d639f45d2d79b802dbd23e7ab8b81b479a380e1f30932584ba2a0b955032'+\ + '342ebc83cb5ca906e7b0d7cd6fe656cecb4c8b5a77123a8c6750a481e3b06057'+\ + 'aff6aa6eba620b832d60c3021500ad32f48cd3ae0c45a198a61fa4b5e2032076'+\ + '3b2302818079dfdc3d614fe635fceb7eaeae3718dc2efefb45282993ac6749dc'+\ + '83c223d8c1887296316b3b0b54466cf444f34b82e3554d0b90a778faaf1306f0'+\ + '25dae6a3e36c7f93dd5bac4052b92370040aca70b8d5820599711900efbc9618'+\ + '12c355dd9beffe0981da85c5548074b41c56ae43fd300d89262e4efd89943f99'+\ + 'a651b038880281810083352a69a132f34843d2a0eb995bff4e2f083a73f0049d'+\ + '2c91ea2f0ce43d144abda48199e4b003c570a8af83303d45105f606c5c48d925'+\ + 'a40ed9c2630c2fa4cdbf838539deb9a29f919085f2046369f627ca84b2cb1e2c'+\ + '7940564b670f963ab1164d4e2ca2bf6ffd39f12f548928bf4d2d1b5e6980b4f1'+\ + 'be4c92a91986fba55902145ebd9a3f0b82069d98420986b314215025756065' + + def testImportKey3(self): + key_obj = DSA.importKey(self.der_private) + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey3(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + encoded = key.export_key('DER', pkcs8=False) + self.assertEqual(self.der_private, encoded) + + # 4. 
+ pem_private="""\ +-----BEGIN DSA PRIVATE KEY----- +MIIBuwIBAAKBgQDnVu4XF/S2eUx8IUckoZdjdCxFVytLP4/ztE876fRM4DmidXaV +7JFWl9p075FPzRsFZg4kGcdh1jn0XS15uALb0j56uLgbR5o4Dh8wkyWEuioLlVAy +NC68g8tcqQbnsNfNb+ZWzstMi1p3EjqMZ1CkgeOwYFev9qpuumILgy1gwwIVAK0y +9IzTrgxFoZimH6S14gMgdjsjAoGAed/cPWFP5jX8636urjcY3C7++0UoKZOsZ0nc +g8Ij2MGIcpYxazsLVEZs9ETzS4LjVU0LkKd4+q8TBvAl2uaj42x/k91brEBSuSNw +BArKcLjVggWZcRkA77yWGBLDVd2b7/4JgdqFxVSAdLQcVq5D/TANiSYuTv2JlD+Z +plGwOIgCgYEAgzUqaaEy80hD0qDrmVv/Ti8IOnPwBJ0skeovDOQ9FEq9pIGZ5LAD +xXCor4MwPUUQX2BsXEjZJaQO2cJjDC+kzb+DhTneuaKfkZCF8gRjafYnyoSyyx4s +eUBWS2cPljqxFk1OLKK/b/058S9UiSi/TS0bXmmAtPG+TJKpGYb7pVkCFF69mj8L +ggadmEIJhrMUIVAldWBl +-----END DSA PRIVATE KEY-----""" + + def testImportKey4(self): + for pem in (self.pem_private, tostr(self.pem_private)): + key_obj = DSA.importKey(pem) + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey4(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + encoded = key.export_key('PEM', pkcs8=False) + self.assertEqual(self.pem_private, encoded) + + # 5. PKCS8 (unencrypted) + der_pkcs8=\ + '3082014a0201003082012b06072a8648ce3804013082011e02818100e756ee17'+\ + '17f4b6794c7c214724a19763742c45572b4b3f8ff3b44f3be9f44ce039a27576'+\ + '95ec915697da74ef914fcd1b05660e2419c761d639f45d2d79b802dbd23e7ab8'+\ + 'b81b479a380e1f30932584ba2a0b955032342ebc83cb5ca906e7b0d7cd6fe656'+\ + 'cecb4c8b5a77123a8c6750a481e3b06057aff6aa6eba620b832d60c3021500ad'+\ + '32f48cd3ae0c45a198a61fa4b5e20320763b2302818079dfdc3d614fe635fceb'+\ + '7eaeae3718dc2efefb45282993ac6749dc83c223d8c1887296316b3b0b54466c'+\ + 'f444f34b82e3554d0b90a778faaf1306f025dae6a3e36c7f93dd5bac4052b923'+\ + '70040aca70b8d5820599711900efbc961812c355dd9beffe0981da85c5548074'+\ + 'b41c56ae43fd300d89262e4efd89943f99a651b03888041602145ebd9a3f0b82'+\ + '069d98420986b314215025756065' + + def testImportKey5(self): + key_obj = DSA.importKey(self.der_pkcs8) + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey5(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + encoded = key.export_key('DER') + self.assertEqual(self.der_pkcs8, encoded) + encoded = key.export_key('DER', pkcs8=True) + self.assertEqual(self.der_pkcs8, encoded) + + # 6. 
+ pem_pkcs8="""\ +-----BEGIN PRIVATE KEY----- +MIIBSgIBADCCASsGByqGSM44BAEwggEeAoGBAOdW7hcX9LZ5THwhRyShl2N0LEVX +K0s/j/O0Tzvp9EzgOaJ1dpXskVaX2nTvkU/NGwVmDiQZx2HWOfRdLXm4AtvSPnq4 +uBtHmjgOHzCTJYS6KguVUDI0LryDy1ypBuew181v5lbOy0yLWncSOoxnUKSB47Bg +V6/2qm66YguDLWDDAhUArTL0jNOuDEWhmKYfpLXiAyB2OyMCgYB539w9YU/mNfzr +fq6uNxjcLv77RSgpk6xnSdyDwiPYwYhyljFrOwtURmz0RPNLguNVTQuQp3j6rxMG +8CXa5qPjbH+T3VusQFK5I3AECspwuNWCBZlxGQDvvJYYEsNV3Zvv/gmB2oXFVIB0 +tBxWrkP9MA2JJi5O/YmUP5mmUbA4iAQWAhRevZo/C4IGnZhCCYazFCFQJXVgZQ== +-----END PRIVATE KEY-----""" + + def testImportKey6(self): + for pem in (self.pem_pkcs8, tostr(self.pem_pkcs8)): + key_obj = DSA.importKey(pem) + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey6(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + encoded = key.export_key('PEM') + self.assertEqual(self.pem_pkcs8, encoded) + encoded = key.export_key('PEM', pkcs8=True) + self.assertEqual(self.pem_pkcs8, encoded) + + # 7. OpenSSH/RFC4253 + ssh_pub="""ssh-dss AAAAB3NzaC1kc3MAAACBAOdW7hcX9LZ5THwhRyShl2N0LEVXK0s/j/O0Tzvp9EzgOaJ1dpXskVaX2nTvkU/NGwVmDiQZx2HWOfRdLXm4AtvSPnq4uBtHmjgOHzCTJYS6KguVUDI0LryDy1ypBuew181v5lbOy0yLWncSOoxnUKSB47BgV6/2qm66YguDLWDDAAAAFQCtMvSM064MRaGYph+kteIDIHY7IwAAAIB539w9YU/mNfzrfq6uNxjcLv77RSgpk6xnSdyDwiPYwYhyljFrOwtURmz0RPNLguNVTQuQp3j6rxMG8CXa5qPjbH+T3VusQFK5I3AECspwuNWCBZlxGQDvvJYYEsNV3Zvv/gmB2oXFVIB0tBxWrkP9MA2JJi5O/YmUP5mmUbA4iAAAAIEAgzUqaaEy80hD0qDrmVv/Ti8IOnPwBJ0skeovDOQ9FEq9pIGZ5LADxXCor4MwPUUQX2BsXEjZJaQO2cJjDC+kzb+DhTneuaKfkZCF8gRjafYnyoSyyx4seUBWS2cPljqxFk1OLKK/b/058S9UiSi/TS0bXmmAtPG+TJKpGYb7pVk=""" + + def testImportKey7(self): + for ssh in (self.ssh_pub, tostr(self.ssh_pub)): + key_obj = DSA.importKey(ssh) + self.assertFalse(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + + def testExportKey7(self): + tup = (self.y, self.g, self.p, self.q) + key = DSA.construct(tup) + encoded = key.export_key('OpenSSH') + self.assertEqual(self.ssh_pub, encoded) + + # 8. 
Encrypted OpenSSL/OpenSSH + pem_private_encrypted="""\ +-----BEGIN DSA PRIVATE KEY----- +Proc-Type: 4,ENCRYPTED +DEK-Info: AES-128-CBC,70B6908939D65E9F2EB999E8729788CE + +4V6GHRDpCrdZ8MBjbyp5AlGUrjvr2Pn2e2zVxy5RBt4FBj9/pa0ae0nnyUPMLSUU +kKyOR0topRYTVRLElm4qVrb5uNZ3hRwfbklr+pSrB7O9eHz9V5sfOQxyODS07JxK +k1OdOs70/ouMXLF9EWfAZOmWUccZKHNblUwg1p1UrZIz5jXw4dUE/zqhvXh6d+iC +ADsICaBCjCrRQJKDp50h3+ndQjkYBKVH+pj8TiQ79U7lAvdp3+iMghQN6YXs9mdI +gFpWw/f97oWM4GHZFqHJ+VSMNFjBiFhAvYV587d7Lk4dhD8sCfbxj42PnfRgUItc +nnPqHxmhMQozBWzYM4mQuo3XbF2WlsNFbOzFVyGhw1Bx1s91qvXBVWJh2ozrW0s6 +HYDV7ZkcTml/4kjA/d+mve6LZ8kuuR1qCiZx6rkffhh1gDN/1Xz3HVvIy/dQ+h9s +5zp7PwUoWbhqp3WCOr156P6gR8qo7OlT6wMh33FSXK/mxikHK136fV2shwTKQVII +rJBvXpj8nACUmi7scKuTWGeUoXa+dwTZVVe+b+L2U1ZM7+h/neTJiXn7u99PFUwu +xVJtxaV37m3aXxtCsPnbBg== +-----END DSA PRIVATE KEY-----""" + + def testImportKey8(self): + for pem in (self.pem_private_encrypted, tostr(self.pem_private_encrypted)): + key_obj = DSA.importKey(pem, "PWDTEST") + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey8(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + encoded = key.export_key('PEM', pkcs8=False, passphrase="PWDTEST") + key = DSA.importKey(encoded, "PWDTEST") + self.assertEqual(self.y, key.y) + self.assertEqual(self.p, key.p) + self.assertEqual(self.q, key.q) + self.assertEqual(self.g, key.g) + self.assertEqual(self.x, key.x) + + # 9. Encrypted PKCS8 + # pbeWithMD5AndDES-CBC + pem_pkcs8_encrypted="""\ +-----BEGIN ENCRYPTED PRIVATE KEY----- +MIIBcTAbBgkqhkiG9w0BBQMwDgQI0GC3BJ/jSw8CAggABIIBUHc1cXZpExIE9tC7 +7ryiW+5ihtF2Ekurq3e408GYSAu5smJjN2bvQXmzRFBz8W38K8eMf1sbWroZ4+zn +kZSbb9nSm5kAa8lR2+oF2k+WRswMR/PTC3f/D9STO2X0QxdrzKgIHEcSGSHp5jTx +aVvbkCDHo9vhBTl6S3ogZ48As/MEro76+9igUwJ1jNhIQZPJ7e20QH5qDpQFFJN4 +CKl2ENSEuwGiqBszItFy4dqH0g63ZGZV/xt9wSO9Rd7SK/EbA/dklOxBa5Y/VItM +gnIhs9XDMoGYyn6F023EicNJm6g/bVQk81BTTma4tm+12TKGdYm+QkeZvCOMZylr +Wv67cKwO3cAXt5C3QXMDgYR64XvuaT5h7C0igMp2afSXJlnbHEbFxQVJlv83T4FM +eZ4k+NQDbEL8GiHmFxzDWQAuPPZKJWEEEV2p/To+WOh+kSDHQw== +-----END ENCRYPTED PRIVATE KEY-----""" + + def testImportKey9(self): + for pem in (self.pem_pkcs8_encrypted, tostr(self.pem_pkcs8_encrypted)): + key_obj = DSA.importKey(pem, "PWDTEST") + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + # 10. 
Encrypted PKCS8 + # pkcs5PBES2 / + # pkcs5PBKDF2 (rounds=1000, salt=D725BF1B6B8239F4) / + # des-EDE3-CBC (iv=27A1C66C42AFEECE) + # + der_pkcs8_encrypted=\ + '30820196304006092a864886f70d01050d3033301b06092a864886f70d01050c'+\ + '300e0408d725bf1b6b8239f4020203e8301406082a864886f70d0307040827a1'+\ + 'c66c42afeece048201505cacfde7bf8edabb3e0d387950dc872662ea7e9b1ed4'+\ + '400d2e7e6186284b64668d8d0328c33a9d9397e6f03df7cb68268b0a06b4e22f'+\ + '7d132821449ecf998a8b696dbc6dd2b19e66d7eb2edfeb4153c1771d49702395'+\ + '4f36072868b5fcccf93413a5ac4b2eb47d4b3f681c6bd67ae363ed776f45ae47'+\ + '174a00098a7c930a50f820b227ddf50f9742d8e950d02586ff2dac0e3c372248'+\ + 'e5f9b6a7a02f4004f20c87913e0f7b52bccc209b95d478256a890b31d4c9adec'+\ + '21a4d157a179a93a3dad06f94f3ce486b46dfa7fc15fd852dd7680bbb2f17478'+\ + '7e71bd8dbaf81eca7518d76c1d26256e95424864ba45ca5d47d7c5a421be02fa'+\ + 'b94ab01e18593f66cf9094eb5c94b9ecf3aa08b854a195cf87612fbe5e96c426'+\ + '2b0d573e52dc71ba3f5e468c601e816c49b7d32c698b22175e89aaef0c443770'+\ + '5ef2f88a116d99d8e2869a4fd09a771b84b49e4ccb79aadcb1c9' + + def testImportKey10(self): + key_obj = DSA.importKey(self.der_pkcs8_encrypted, "PWDTEST") + self.assertTrue(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + self.assertEqual(self.x, key_obj.x) + + def testExportKey10(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + randfunc = BytesIO(unhexlify(b("27A1C66C42AFEECE") + b("D725BF1B6B8239F4"))).read + encoded = key.export_key('DER', pkcs8=True, passphrase="PWDTEST", randfunc=randfunc) + self.assertEqual(self.der_pkcs8_encrypted, encoded) + + # ---- + + def testImportError1(self): + self.assertRaises(ValueError, DSA.importKey, self.der_pkcs8_encrypted, "wrongpwd") + + def testExportError2(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + self.assertRaises(ValueError, key.export_key, 'DER', pkcs8=False, passphrase="PWDTEST") + + def test_import_key(self): + """Verify importKey is an alias to import_key""" + + key_obj = DSA.import_key(self.der_public) + self.assertFalse(key_obj.has_private()) + self.assertEqual(self.y, key_obj.y) + self.assertEqual(self.p, key_obj.p) + self.assertEqual(self.q, key_obj.q) + self.assertEqual(self.g, key_obj.g) + + def test_exportKey(self): + tup = (self.y, self.g, self.p, self.q, self.x) + key = DSA.construct(tup) + self.assertEqual(key.exportKey(), key.export_key()) + + + def test_import_empty(self): + self.assertRaises(ValueError, DSA.import_key, b'') + + +class ImportKeyFromX509Cert(unittest.TestCase): + + def test_x509v1(self): + + # Sample V1 certificate with a 1024 bit DSA key + x509_v1_cert = """ +-----BEGIN CERTIFICATE----- +MIIDUjCCArsCAQIwDQYJKoZIhvcNAQEFBQAwfjENMAsGA1UEChMEQWNtZTELMAkG +A1UECxMCUkQxHDAaBgkqhkiG9w0BCQEWDXNwYW1AYWNtZS5vcmcxEzARBgNVBAcT +Ck1ldHJvcG9saXMxETAPBgNVBAgTCE5ldyBZb3JrMQswCQYDVQQGEwJVUzENMAsG +A1UEAxMEdGVzdDAeFw0xNDA3MTEyMDM4NDNaFw0xNzA0MDYyMDM4NDNaME0xCzAJ +BgNVBAYTAlVTMREwDwYDVQQIEwhOZXcgWW9yazENMAsGA1UEChMEQWNtZTELMAkG +A1UECxMCUkQxDzANBgNVBAMTBnBvbGFuZDCCAbYwggErBgcqhkjOOAQBMIIBHgKB +gQDOrN4Ox4+t3T6wKeHfhzArhcrNEFMQ4Ss+4PIKyimDy9Bn64WPkL1B/9dvYIga +23GLu6tVJmXo6EdJnVOHEMhr99EeOwuDWWeP7Awq7RSlKEejokr4BEzMTW/tExSD +cO6/GI7xzh0eTH+VTTPDfyrJMYCkh0rJAfCP+5xrmPNetwIVALtXYOV1yoRrzJ2Q +M5uEjidH6GiZAoGAfUqA1SAm5g5U68SILMVX9l5rq0OpB0waBMpJQ31/R/yXNDqo +c3gGWZTOJFU4IzwNpGhrGNADUByz/lc1SAOAdEJIr0JVrhbGewQjB4pWqoLGbBKz 
+RoavTNDc/zD7SYa12evWDHADwvlXoeQg+lWop1zS8OqaDC7aLGKpWN3/m8kDgYQA +AoGAKoirPAfcp1rbbl4y2FFAIktfW8f4+T7d2iKSg73aiVfujhNOt1Zz1lfC0NI2 +eonLWO3tAM4XGKf1TLjb5UXngGn40okPsaA81YE6ZIKm20ywjlOY3QkAEdMaLVY3 +9PJvM8RGB9m7pLKxyHfGMfF40MVN4222zKeGp7xhM0CNiCUwDQYJKoZIhvcNAQEF +BQADgYEAfbNZfpYa2KlALEM1FZnwvQDvJHntHz8LdeJ4WM7CXDlKi67wY2HKM30w +s2xej75imkVOFd1kF2d0A8sjfriXLVIt1Hwq9ANZomhu4Edx0xpH8tqdh/bDtnM2 +TmduZNY9OWkb07h0CtWD6Zt8fhRllVsSSrlWd/2or7FXNC5weFQ= +-----END CERTIFICATE----- + """.strip() + + # DSA public key as dumped by openssl + y_str = """ +2a:88:ab:3c:07:dc:a7:5a:db:6e:5e:32:d8:51:40: +22:4b:5f:5b:c7:f8:f9:3e:dd:da:22:92:83:bd:da: +89:57:ee:8e:13:4e:b7:56:73:d6:57:c2:d0:d2:36: +7a:89:cb:58:ed:ed:00:ce:17:18:a7:f5:4c:b8:db: +e5:45:e7:80:69:f8:d2:89:0f:b1:a0:3c:d5:81:3a: +64:82:a6:db:4c:b0:8e:53:98:dd:09:00:11:d3:1a: +2d:56:37:f4:f2:6f:33:c4:46:07:d9:bb:a4:b2:b1: +c8:77:c6:31:f1:78:d0:c5:4d:e3:6d:b6:cc:a7:86: +a7:bc:61:33:40:8d:88:25 + """ + p_str = """ +00:ce:ac:de:0e:c7:8f:ad:dd:3e:b0:29:e1:df:87: +30:2b:85:ca:cd:10:53:10:e1:2b:3e:e0:f2:0a:ca: +29:83:cb:d0:67:eb:85:8f:90:bd:41:ff:d7:6f:60: +88:1a:db:71:8b:bb:ab:55:26:65:e8:e8:47:49:9d: +53:87:10:c8:6b:f7:d1:1e:3b:0b:83:59:67:8f:ec: +0c:2a:ed:14:a5:28:47:a3:a2:4a:f8:04:4c:cc:4d: +6f:ed:13:14:83:70:ee:bf:18:8e:f1:ce:1d:1e:4c: +7f:95:4d:33:c3:7f:2a:c9:31:80:a4:87:4a:c9:01: +f0:8f:fb:9c:6b:98:f3:5e:b7 + """ + q_str = """ +00:bb:57:60:e5:75:ca:84:6b:cc:9d:90:33:9b:84: +8e:27:47:e8:68:99 + """ + g_str = """ +7d:4a:80:d5:20:26:e6:0e:54:eb:c4:88:2c:c5:57: +f6:5e:6b:ab:43:a9:07:4c:1a:04:ca:49:43:7d:7f: +47:fc:97:34:3a:a8:73:78:06:59:94:ce:24:55:38: +23:3c:0d:a4:68:6b:18:d0:03:50:1c:b3:fe:57:35: +48:03:80:74:42:48:af:42:55:ae:16:c6:7b:04:23: +07:8a:56:aa:82:c6:6c:12:b3:46:86:af:4c:d0:dc: +ff:30:fb:49:86:b5:d9:eb:d6:0c:70:03:c2:f9:57: +a1:e4:20:fa:55:a8:a7:5c:d2:f0:ea:9a:0c:2e:da: +2c:62:a9:58:dd:ff:9b:c9 + """ + + key = DSA.importKey(x509_v1_cert) + for comp_name in ('y', 'p', 'q', 'g'): + comp_str = locals()[comp_name + "_str"] + comp = int(re.sub("[^0-9a-f]", "", comp_str), 16) + self.assertEqual(getattr(key, comp_name), comp) + self.assertFalse(key.has_private()) + + def test_x509v3(self): + + # Sample V3 certificate with a 1024 bit DSA key + x509_v3_cert = """ +-----BEGIN CERTIFICATE----- +MIIFhjCCA26gAwIBAgIBAzANBgkqhkiG9w0BAQsFADBhMQswCQYDVQQGEwJVUzEL +MAkGA1UECAwCTUQxEjAQBgNVBAcMCUJhbHRpbW9yZTEQMA4GA1UEAwwHVGVzdCBD +QTEfMB0GCSqGSIb3DQEJARYQdGVzdEBleGFtcGxlLmNvbTAeFw0xNDA3MTMyMDUz +MjBaFw0xNzA0MDgyMDUzMjBaMEAxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJNRDES +MBAGA1UEBwwJQmFsdGltb3JlMRAwDgYDVQQDDAdhdXN0cmlhMIIBtjCCASsGByqG +SM44BAEwggEeAoGBALfd8gyEpVPA0ZI69Kp3nyJcu5N0ZZ3K1K9hleQLNqKEcZOh +7a/C2J1TPdmHTLJ0rAwBZ1nWxnARSgRphziGDFspKCYQwYcSMz8KoFgvXbXpuchy +oFACiQ2LqZnc5MakuLQtLcQciSYGYj3zmZdYMoa904F1aDWr+DxQI6DVC3/bAhUA +hqXMCJ6fQK3G2O9S3/CC/yVZXCsCgYBRXROl3R2khX7l10LQjDEgo3B1IzjXU/jP +McMBl6XO+nBJXxr/scbq8Ajiv7LTnGpSjgryHtvfj887kfvo8QbSS3kp3vq5uSqI +ui7E7r3jguWaLj616AG1HWOctXJUjqsiabZwsp2h09gHTzmHEXBOmiARu8xFxKAH +xsuo7onAbwOBhAACgYBylWjWSnKHE8mHx1A5m/0GQx6xnhWIe3+MJAnEhRGxA2J4 +SCsfWU0OwglIQToh1z5uUU9oDi9cYgNPBevOFRnDhc2yaJY6VAYnI+D+6J5IU6Yd +0iaG/iSc4sV4bFr0axcPpse3SN0XaQxiKeSFBfFnoMqL+dd9Gb3QPZSllBcVD6OB +1TCB0jAdBgNVHQ4EFgQUx5wN0Puotv388M9Tp/fsPbZpzAUwHwYDVR0jBBgwFoAU +a0hkif3RMaraiWtsOOZZlLu9wJwwCQYDVR0TBAIwADALBgNVHQ8EBAMCBeAwSgYD +VR0RBEMwQYILZXhhbXBsZS5jb22CD3d3dy5leGFtcGxlLmNvbYIQbWFpbC5leGFt +cGxlLmNvbYIPZnRwLmV4YW1wbGUuY29tMCwGCWCGSAGG+EIBDQQfFh1PcGVuU1NM +IEdlbmVyYXRlZCBDZXJ0aWZpY2F0ZTANBgkqhkiG9w0BAQsFAAOCAgEAyWf1TiJI 
+aNEIA9o/PG8/JiGASTS2/HBVTJbkq03k6NkJVk/GxC1DPziTUJ+CdWlHWcAi1EOW +Ach3QxNDRrVfCOfCMDgElIO1094/reJgdFYG00LRi8QkRJuxANV7YS4tLudhyHJC +kR2lhdMNmEuzWK+s2y+5cLrdm7qdvdENQCcV67uvGPx4sc+EaE7x13SczKjWBtbo +QCs6JTOW+EkPRl4Zo27K4OIZ43/J+GxvwU9QUVH3wPVdbbLNw+QeTFBYMTEcxyc4 +kv50HPBFaithziXBFyvdIs19FjkFzu0Uz/e0zb1+vMzQlJMD94HVOrMnIj5Sb2cL +KKdYXS4uhxFJmdV091Xur5JkYYwEzuaGav7J3zOzYutrIGTgDluLCvA+VQkRcTsy +jZ065SkY/v+38QHp+cmm8WRluupJTs8wYzVp6Fu0iFaaK7ztFmaZmHpiPIfDFjva +aCIgzzT5NweJd/b71A2SyzHXJ14zBXsr1PMylMp2TpHIidhuuNuQL6I0HaollB4M +Z3FsVBMhVDw4Z76qnFPr8mZE2tar33hSlJI/3pS/bBiukuBk8U7VB0X8OqaUnP3C +7b2Z4G8GtqDVcKGMzkvMjT4n9rKd/Le+qHSsQOGO9W/0LB7UDAZSwUsfAPnoBgdS +5t9tIomLCOstByXi+gGZue1TcdCa3Ph4kO0= +-----END CERTIFICATE----- + """.strip() + + # DSA public key as dumped by openssl + y_str = """ +72:95:68:d6:4a:72:87:13:c9:87:c7:50:39:9b:fd: +06:43:1e:b1:9e:15:88:7b:7f:8c:24:09:c4:85:11: +b1:03:62:78:48:2b:1f:59:4d:0e:c2:09:48:41:3a: +21:d7:3e:6e:51:4f:68:0e:2f:5c:62:03:4f:05:eb: +ce:15:19:c3:85:cd:b2:68:96:3a:54:06:27:23:e0: +fe:e8:9e:48:53:a6:1d:d2:26:86:fe:24:9c:e2:c5: +78:6c:5a:f4:6b:17:0f:a6:c7:b7:48:dd:17:69:0c: +62:29:e4:85:05:f1:67:a0:ca:8b:f9:d7:7d:19:bd: +d0:3d:94:a5:94:17:15:0f + """ + p_str = """ +00:b7:dd:f2:0c:84:a5:53:c0:d1:92:3a:f4:aa:77: +9f:22:5c:bb:93:74:65:9d:ca:d4:af:61:95:e4:0b: +36:a2:84:71:93:a1:ed:af:c2:d8:9d:53:3d:d9:87: +4c:b2:74:ac:0c:01:67:59:d6:c6:70:11:4a:04:69: +87:38:86:0c:5b:29:28:26:10:c1:87:12:33:3f:0a: +a0:58:2f:5d:b5:e9:b9:c8:72:a0:50:02:89:0d:8b: +a9:99:dc:e4:c6:a4:b8:b4:2d:2d:c4:1c:89:26:06: +62:3d:f3:99:97:58:32:86:bd:d3:81:75:68:35:ab: +f8:3c:50:23:a0:d5:0b:7f:db + """ + q_str = """ +00:86:a5:cc:08:9e:9f:40:ad:c6:d8:ef:52:df:f0: +82:ff:25:59:5c:2b + """ + g_str = """ +51:5d:13:a5:dd:1d:a4:85:7e:e5:d7:42:d0:8c:31: +20:a3:70:75:23:38:d7:53:f8:cf:31:c3:01:97:a5: +ce:fa:70:49:5f:1a:ff:b1:c6:ea:f0:08:e2:bf:b2: +d3:9c:6a:52:8e:0a:f2:1e:db:df:8f:cf:3b:91:fb: +e8:f1:06:d2:4b:79:29:de:fa:b9:b9:2a:88:ba:2e: +c4:ee:bd:e3:82:e5:9a:2e:3e:b5:e8:01:b5:1d:63: +9c:b5:72:54:8e:ab:22:69:b6:70:b2:9d:a1:d3:d8: +07:4f:39:87:11:70:4e:9a:20:11:bb:cc:45:c4:a0: +07:c6:cb:a8:ee:89:c0:6f + """ + + key = DSA.importKey(x509_v3_cert) + for comp_name in ('y', 'p', 'q', 'g'): + comp_str = locals()[comp_name + "_str"] + comp = int(re.sub("[^0-9a-f]", "", comp_str), 16) + self.assertEqual(getattr(key, comp_name), comp) + self.assertFalse(key.has_private()) + + +if __name__ == '__main__': + unittest.main() + +def get_tests(config={}): + tests = [] + tests += list_test_cases(ImportKeyTests) + tests += list_test_cases(ImportKeyFromX509Cert) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_ECC.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_ECC.py new file mode 100644 index 0000000..9e3d6ad --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_ECC.py @@ -0,0 +1,2653 @@ +# =================================================================== +# +# Copyright (c) 2015, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import os +import errno +import warnings +import unittest +from binascii import unhexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Util.py3compat import bord, tostr, FileNotFoundError +from Cryptodome.Util.asn1 import DerSequence, DerBitString +from Cryptodome.Util.number import bytes_to_long +from Cryptodome.Hash import SHAKE128 + +from Cryptodome.PublicKey import ECC + +try: + import pycryptodome_test_vectors # type: ignore + test_vectors_available = True +except ImportError: + test_vectors_available = False + + +class MissingTestVectorException(ValueError): + pass + + +def load_file(file_name, mode="rb"): + results = None + + try: + if not test_vectors_available: + raise FileNotFoundError(errno.ENOENT, + os.strerror(errno.ENOENT), + file_name) + + dir_comps = ("PublicKey", "ECC") + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + with open(full_file_name, mode) as file_in: + results = file_in.read() + + except FileNotFoundError: + warnings.warn("Warning: skipping extended tests for ECC", + UserWarning, + stacklevel=2) + + if results is None: + raise MissingTestVectorException("Missing %s" % file_name) + + return results + + +def compact(lines): + ext = b"".join(lines) + return unhexlify(tostr(ext).replace(" ", "").replace(":", "")) + + +def create_ref_keys_p192(): + key_len = 24 + key_lines = load_file("ecc_p192.txt").splitlines() + private_key_d = bytes_to_long(compact(key_lines[2:4])) + public_key_xy = compact(key_lines[5:9]) + assert bord(public_key_xy[0]) == 4 # Uncompressed + public_key_x = bytes_to_long(public_key_xy[1:key_len+1]) + public_key_y = bytes_to_long(public_key_xy[key_len+1:]) + + return (ECC.construct(curve="P-192", d=private_key_d), + ECC.construct(curve="P-192", point_x=public_key_x, point_y=public_key_y)) + + +def create_ref_keys_p224(): + key_len = 28 + key_lines = load_file("ecc_p224.txt").splitlines() + private_key_d = bytes_to_long(compact(key_lines[2:4])) + public_key_xy = compact(key_lines[5:9]) + assert bord(public_key_xy[0]) == 4 # Uncompressed + public_key_x = bytes_to_long(public_key_xy[1:key_len+1]) + public_key_y = bytes_to_long(public_key_xy[key_len+1:]) + + return (ECC.construct(curve="P-224", d=private_key_d), + ECC.construct(curve="P-224", point_x=public_key_x, point_y=public_key_y)) + + +def create_ref_keys_p256(): + key_len = 32 + key_lines = 
load_file("ecc_p256.txt").splitlines() + private_key_d = bytes_to_long(compact(key_lines[2:5])) + public_key_xy = compact(key_lines[6:11]) + assert bord(public_key_xy[0]) == 4 # Uncompressed + public_key_x = bytes_to_long(public_key_xy[1:key_len+1]) + public_key_y = bytes_to_long(public_key_xy[key_len+1:]) + + return (ECC.construct(curve="P-256", d=private_key_d), + ECC.construct(curve="P-256", point_x=public_key_x, point_y=public_key_y)) + + +def create_ref_keys_p384(): + key_len = 48 + key_lines = load_file("ecc_p384.txt").splitlines() + private_key_d = bytes_to_long(compact(key_lines[2:6])) + public_key_xy = compact(key_lines[7:14]) + assert bord(public_key_xy[0]) == 4 # Uncompressed + public_key_x = bytes_to_long(public_key_xy[1:key_len+1]) + public_key_y = bytes_to_long(public_key_xy[key_len+1:]) + + return (ECC.construct(curve="P-384", d=private_key_d), + ECC.construct(curve="P-384", point_x=public_key_x, point_y=public_key_y)) + + +def create_ref_keys_p521(): + key_len = 66 + key_lines = load_file("ecc_p521.txt").splitlines() + private_key_d = bytes_to_long(compact(key_lines[2:7])) + public_key_xy = compact(key_lines[8:17]) + assert bord(public_key_xy[0]) == 4 # Uncompressed + public_key_x = bytes_to_long(public_key_xy[1:key_len+1]) + public_key_y = bytes_to_long(public_key_xy[key_len+1:]) + + return (ECC.construct(curve="P-521", d=private_key_d), + ECC.construct(curve="P-521", point_x=public_key_x, point_y=public_key_y)) + + +def create_ref_keys_ed25519(): + key_lines = load_file("ecc_ed25519.txt").splitlines() + seed = compact(key_lines[5:8]) + key = ECC.construct(curve="Ed25519", seed=seed) + return (key, key.public_key()) + + +def create_ref_keys_ed448(): + key_lines = load_file("ecc_ed448.txt").splitlines() + seed = compact(key_lines[6:10]) + key = ECC.construct(curve="Ed448", seed=seed) + return (key, key.public_key()) + + +# Create reference key pair +# ref_private, ref_public = create_ref_keys_p521() + +def get_fixed_prng(): + return SHAKE128.new().update(b"SEED").read + + +def extract_bitstring_from_spki(data): + seq = DerSequence() + seq.decode(data) + bs = DerBitString() + bs.decode(seq[1]) + return bs.value + + +class TestImport(unittest.TestCase): + + def test_empty(self): + self.assertRaises(ValueError, ECC.import_key, b"") + + def test_mismatch(self): + # The private key does not match the public key + mismatch = """-----BEGIN PRIVATE KEY----- +MIG2AgEAMBAGByqGSM49AgEGBSuBBAAiBIGeMIGbAgEBBDAAAAAAAAAAAAAAAAAA +AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJChZANiAAQarFRaqflo +I+d61SRvU8Za2EurxtW20eZzca7dnNYMYf3boIkDuAUU7FfO7l0/4iGzzvfUinng +o4N+LZfQYcTxmdwlkWOrfzCjtHDix6EznPO/LlxTsV+zfTJ/ijTjeXk= +-----END PRIVATE KEY-----""" + self.assertRaises(ValueError, ECC.import_key, mismatch) + + +class TestImport_P192(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_P192, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p192() + + def test_import_public_der(self): + key_file = load_file("ecc_p192_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_sec1_uncompressed(self): + key_file = load_file("ecc_p192_public.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(key_file, curve_name='P192') + self.assertEqual(self.ref_public, key) + + def 
test_import_sec1_compressed(self): + key_file = load_file("ecc_p192_public_compressed.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(value, curve_name='P192') + self.assertEqual(self.ref_public, key) + + def test_import_rfc5915_der(self): + key_file = load_file("ecc_p192_private.der") + + key = ECC._import_rfc5915_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_clear(self): + key_file = load_file("ecc_p192_private_p8_clear.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_in_pem_clear(self): + key_file = load_file("ecc_p192_private_p8_clear.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_p192_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_p192_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_p192_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_p192_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_p192_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256", "aes256_gcm": + key_file = load_file("ecc_p192_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_p192_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + +class TestImport_P224(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_P224, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p224() + + def test_import_public_der(self): + key_file = load_file("ecc_p224_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_sec1_uncompressed(self): + key_file = load_file("ecc_p224_public.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(value, curve_name='P224') + self.assertEqual(self.ref_public, key) + + def test_import_sec1_compressed(self): + key_file = load_file("ecc_p224_public_compressed.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(value, curve_name='P224') + 
self.assertEqual(self.ref_public, key) + + def test_import_rfc5915_der(self): + key_file = load_file("ecc_p224_private.der") + + key = ECC._import_rfc5915_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_clear(self): + key_file = load_file("ecc_p224_private_p8_clear.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_in_pem_clear(self): + key_file = load_file("ecc_p224_private_p8_clear.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_p224_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_p224_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_p224_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_p224_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_p224_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256", "aes256_gcm": + key_file = load_file("ecc_p224_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_p224_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + +class TestImport_P256(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_P256, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p256() + + def test_import_public_der(self): + key_file = load_file("ecc_p256_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_sec1_uncompressed(self): + key_file = load_file("ecc_p256_public.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(value, curve_name='P256') + self.assertEqual(self.ref_public, key) + + def test_import_sec1_compressed(self): + key_file = load_file("ecc_p256_public_compressed.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(value, curve_name='P256') + self.assertEqual(self.ref_public, key) + + def test_import_rfc5915_der(self): + key_file = load_file("ecc_p256_private.der") + + key = ECC._import_rfc5915_der(key_file, None) + 
self.assertEqual(self.ref_private, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_clear(self): + key_file = load_file("ecc_p256_private_p8_clear.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_in_pem_clear(self): + key_file = load_file("ecc_p256_private_p8_clear.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_p256_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_p256_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_p256_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_p256_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_p256_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_with_ecparams(self): + key_file = load_file("ecc_p256_private_ecparams.pem") + key = ECC.import_key(key_file) + # We just check if the import succeeds + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256", "aes256_gcm": + key_file = load_file("ecc_p256_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_p256_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_public(self): + key_file = load_file("ecc_p256_public_openssh.txt") + + key = ECC._import_openssh_public(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_private_clear(self): + key_file = load_file("ecc_p256_private_openssh.pem") + key_file_old = load_file("ecc_p256_private_openssh_old.pem") + + key = ECC.import_key(key_file) + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + def test_import_openssh_private_password(self): + key_file = load_file("ecc_p256_private_openssh_pwd.pem") + key_file_old = load_file("ecc_p256_private_openssh_pwd_old.pem") + + key = ECC.import_key(key_file, b"password") + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + +class TestImport_P384(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_P384, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p384() + + def test_import_public_der(self): + key_file = load_file("ecc_p384_public.der") + + key = 
ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_sec1_uncompressed(self): + key_file = load_file("ecc_p384_public.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(value, curve_name='P384') + self.assertEqual(self.ref_public, key) + + def test_import_sec1_compressed(self): + key_file = load_file("ecc_p384_public_compressed.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(value, curve_name='P384') + self.assertEqual(self.ref_public, key) + + def test_import_rfc5915_der(self): + key_file = load_file("ecc_p384_private.der") + + key = ECC._import_rfc5915_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_clear(self): + key_file = load_file("ecc_p384_private_p8_clear.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_in_pem_clear(self): + key_file = load_file("ecc_p384_private_p8_clear.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_p384_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_p384_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_p384_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_p384_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_p384_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256", "aes256_gcm": + key_file = load_file("ecc_p384_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_p384_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_public(self): + key_file = load_file("ecc_p384_public_openssh.txt") + + key = ECC._import_openssh_public(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_private_clear(self): + key_file = load_file("ecc_p384_private_openssh.pem") + key_file_old = load_file("ecc_p384_private_openssh_old.pem") + + key = ECC.import_key(key_file) + key_old = ECC.import_key(key_file_old) + 
self.assertEqual(key, key_old) + + def test_import_openssh_private_password(self): + key_file = load_file("ecc_p384_private_openssh_pwd.pem") + key_file_old = load_file("ecc_p384_private_openssh_pwd_old.pem") + + key = ECC.import_key(key_file, b"password") + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + +class TestImport_P521(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_P521, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p521() + + def test_import_public_der(self): + key_file = load_file("ecc_p521_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_sec1_uncompressed(self): + key_file = load_file("ecc_p521_public.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(value, curve_name='P521') + self.assertEqual(self.ref_public, key) + + def test_import_sec1_compressed(self): + key_file = load_file("ecc_p521_public_compressed.der") + value = extract_bitstring_from_spki(key_file) + key = ECC.import_key(value, curve_name='P521') + self.assertEqual(self.ref_public, key) + + def test_import_rfc5915_der(self): + key_file = load_file("ecc_p521_private.der") + + key = ECC._import_rfc5915_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_clear(self): + key_file = load_file("ecc_p521_private_p8_clear.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_in_pem_clear(self): + key_file = load_file("ecc_p521_private_p8_clear.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_p521_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_p521_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_p521_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_p521_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_p521_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256", "aes256_gcm": + key_file = load_file("ecc_p521_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def 
test_import_x509_pem(self): + key_file = load_file("ecc_p521_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_public(self): + key_file = load_file("ecc_p521_public_openssh.txt") + + key = ECC._import_openssh_public(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_openssh_private_clear(self): + key_file = load_file("ecc_p521_private_openssh.pem") + key_file_old = load_file("ecc_p521_private_openssh_old.pem") + + key = ECC.import_key(key_file) + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + def test_import_openssh_private_password(self): + key_file = load_file("ecc_p521_private_openssh_pwd.pem") + key_file_old = load_file("ecc_p521_private_openssh_pwd_old.pem") + + key = ECC.import_key(key_file, b"password") + key_old = ECC.import_key(key_file_old) + self.assertEqual(key, key_old) + + +class TestExport_P192(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_P192, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p192() + + def test_export_public_der_uncompressed(self): + key_file = load_file("ecc_p192_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(False) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_der_compressed(self): + key_file = load_file("ecc_p192_public.der") + pub_key = ECC.import_key(key_file) + key_file_compressed = pub_key.export_key(format="DER", compress=True) + + key_file_compressed_ref = load_file("ecc_p192_public_compressed.der") + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_public_sec1_uncompressed(self): + key_file = load_file("ecc_p192_public.der") + value = extract_bitstring_from_spki(key_file) + + encoded = self.ref_public.export_key(format="SEC1") + self.assertEqual(value, encoded) + + def test_export_public_sec1_compressed(self): + key_file = load_file("ecc_p192_public.der") + encoded = self.ref_public.export_key(format="SEC1", compress=True) + + key_file_compressed_ref = load_file("ecc_p192_public_compressed.der") + value = extract_bitstring_from_spki(key_file_compressed_ref) + self.assertEqual(value, encoded) + + def test_export_rfc5915_private_der(self): + key_file = load_file("ecc_p192_private.der") + + encoded = self.ref_private._export_rfc5915_private_der() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_p192_private_p8_clear.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = 
self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem_uncompressed(self): + key_file = load_file("ecc_p192_public.pem", "rt").strip() + + encoded = self.ref_private._export_public_pem(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="PEM", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_pem_compressed(self): + key_file = load_file("ecc_p192_public.pem", "rt").strip() + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="PEM", compress=True) + key_file_compressed_ref = load_file("ecc_p192_public_compressed.pem", "rt").strip() + + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_p192_private.pem", "rt").strip() + + encoded = self.ref_private._export_private_pem(None) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private._export_private_pem(passphrase=b"secret") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "EC PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + use_pkcs8=False) + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_private_pkcs8_and_pem_1(self): + # PKCS8 inside PEM with both unencrypted + key_file = load_file("ecc_p192_private_p8_clear.pem", "rt").strip() + + encoded = self.ref_private._export_private_clear_pkcs8_in_clear_pem() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_and_pem_2(self): + # PKCS8 inside PEM with PKCS8 encryption + encoded = self.ref_private._export_private_encrypted_pkcs8_in_clear_pem("secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + # --- + + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + 
randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase=b"secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.ref_private.export_key(format="PEM", passphrase="secret", + use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # DER format but no PKCS#8 + self.assertRaises(ValueError, self.ref_private.export_key, format="DER", + passphrase="secret", + use_pkcs8=False, + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # Incorrect parameters for public keys + self.assertRaises(ValueError, self.ref_public.export_key, format="DER", + use_pkcs8=False) + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + def test_compressed_curve(self): + + # Compressed P-192 curve (Y-point is even) + pem1 = """-----BEGIN EC PRIVATE KEY----- + MF8CAQEEGHvhXmIW95JxZYfd4AUPu9BwknjuvS36aqAKBggqhkjOPQMBAaE0AzIA + BLJZCyTu35DQIlqvMlBynn3k1Ig+dWfg/brRhHecxptrbloqFSP8ITw0CwbGF+2X + 5g== + -----END EC PRIVATE KEY-----""" + + # Compressed P-192 curve (Y-point is odd) + pem2 = """-----BEGIN EC PRIVATE KEY----- + MF8CAQEEGA3rAotUaWl7d47eX6tz9JmLzOMJwl13XaAKBggqhkjOPQMBAaE0AzIA + BG4tHlTBBBGokcWmGm2xubVB0NvPC/Ou5AYwivs+3iCxmEjsymVAj6iiuX2Lxr6g + /Q== + -----END EC PRIVATE KEY-----""" + + key1 = ECC.import_key(pem1) + low16 = int(key1.pointQ.y % 65536) + self.assertEqual(low16, 0x97E6) + + key2 = ECC.import_key(pem2) + low16 = int(key2.pointQ.y % 65536) + self.assertEqual(low16, 0xA0FD) + + +class TestExport_P224(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_P224, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p224() + + def test_export_public_der_uncompressed(self): + key_file = load_file("ecc_p224_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(False) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_der_compressed(self): + key_file = load_file("ecc_p224_public.der") + pub_key = ECC.import_key(key_file) + key_file_compressed = pub_key.export_key(format="DER", compress=True) + + key_file_compressed_ref = load_file("ecc_p224_public_compressed.der") + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_public_sec1_uncompressed(self): + key_file = load_file("ecc_p224_public.der") + value = extract_bitstring_from_spki(key_file) + + encoded = self.ref_public.export_key(format="SEC1") + self.assertEqual(value, encoded) + + def test_export_public_sec1_compressed(self): + key_file = 
load_file("ecc_p224_public.der") + encoded = self.ref_public.export_key(format="SEC1", compress=True) + + key_file_compressed_ref = load_file("ecc_p224_public_compressed.der") + value = extract_bitstring_from_spki(key_file_compressed_ref) + self.assertEqual(value, encoded) + + def test_export_rfc5915_private_der(self): + key_file = load_file("ecc_p224_private.der") + + encoded = self.ref_private._export_rfc5915_private_der() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_p224_private_p8_clear.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem_uncompressed(self): + key_file = load_file("ecc_p224_public.pem", "rt").strip() + + encoded = self.ref_private._export_public_pem(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="PEM", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_pem_compressed(self): + key_file = load_file("ecc_p224_public.pem", "rt").strip() + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="PEM", compress=True) + key_file_compressed_ref = load_file("ecc_p224_public_compressed.pem", "rt").strip() + + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_p224_private.pem", "rt").strip() + + encoded = self.ref_private._export_private_pem(None) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private._export_private_pem(passphrase=b"secret") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "EC PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + use_pkcs8=False) + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_private_pkcs8_and_pem_1(self): + # PKCS8 inside PEM with both unencrypted + key_file = load_file("ecc_p224_private_p8_clear.pem", "rt").strip() + + encoded = self.ref_private._export_private_clear_pkcs8_in_clear_pem() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM") + self.assertEqual(key_file, 
encoded) + + def test_export_private_pkcs8_and_pem_2(self): + # PKCS8 inside PEM with PKCS8 encryption + encoded = self.ref_private._export_private_encrypted_pkcs8_in_clear_pem("secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + # --- + + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase=b"secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.ref_private.export_key(format="PEM", passphrase="secret", + use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # DER format but no PKCS#8 + self.assertRaises(ValueError, self.ref_private.export_key, format="DER", + passphrase="secret", + use_pkcs8=False, + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # Incorrect parameters for public keys + self.assertRaises(ValueError, self.ref_public.export_key, format="DER", + use_pkcs8=False) + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + def test_compressed_curve(self): + + # Compressed P-224 curve (Y-point is even) + pem1 = """-----BEGIN EC PRIVATE KEY----- + MGgCAQEEHPYicBNI9nd6wDKAX2l+f3A0Q+KWUQeMqSt5GoOgBwYFK4EEACGhPAM6 + AATCL6rUIDT14zXKoS5GQUMDP/tpc+1iI/FyEZikt2roKDkhU5q08srmqaysbfJN + eUr7Xf1lnCVGag== + -----END EC PRIVATE KEY-----""" + + # Compressed P-224 curve (Y-point is odd) + pem2 = """-----BEGIN EC PRIVATE KEY----- + MGgCAQEEHEFjbaVPLJ3ngZyCibCvT0RLUqSlHjC5Z3e0FtugBwYFK4EEACGhPAM6 + AAT5IvL2V6m48y1JLMGr6ZbnOqNKP9hMf9mxyVkk6/SaRoBoJVkXrNIpYL0P7DS7 + QF8E/OGeZRwvow== + -----END EC PRIVATE KEY-----""" + + key1 = ECC.import_key(pem1) + low16 = int(key1.pointQ.y % 65536) + self.assertEqual(low16, 0x466A) + + key2 = ECC.import_key(pem2) + low16 = int(key2.pointQ.y % 65536) + 
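# Aside: a minimal, self-contained sketch of the SEC1 point-compression
# rule that the compressed-curve checks in these tests depend on. It
# assumes only a recent pycryptodomex install; the key and variable names
# below are illustrative, not taken from the test vectors. In compressed
# form the leading byte is 0x02 for an even Y coordinate and 0x03 for an
# odd one, so only X plus one parity bit is stored.
from Cryptodome.PublicKey import ECC

k = ECC.generate(curve="P-224")
sec1 = k.public_key().export_key(format="SEC1", compress=True)
assert sec1[0] == (2 if int(k.pointQ.y) % 2 == 0 else 3)
# Decompression on import recovers the full point, including Y.
assert ECC.import_key(sec1, curve_name="P-224") == k.public_key()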
self.assertEqual(low16, 0x2FA3) + + +class TestExport_P256(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_P256, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p256() + + def test_export_public_der_uncompressed(self): + key_file = load_file("ecc_p256_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(False) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_der_compressed(self): + key_file = load_file("ecc_p256_public.der") + pub_key = ECC.import_key(key_file) + key_file_compressed = pub_key.export_key(format="DER", compress=True) + + key_file_compressed_ref = load_file("ecc_p256_public_compressed.der") + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_public_sec1_uncompressed(self): + key_file = load_file("ecc_p256_public.der") + value = extract_bitstring_from_spki(key_file) + + encoded = self.ref_public.export_key(format="SEC1") + self.assertEqual(value, encoded) + + def test_export_public_sec1_compressed(self): + key_file = load_file("ecc_p256_public.der") + encoded = self.ref_public.export_key(format="SEC1", compress=True) + + key_file_compressed_ref = load_file("ecc_p256_public_compressed.der") + value = extract_bitstring_from_spki(key_file_compressed_ref) + self.assertEqual(value, encoded) + + def test_export_rfc5915_private_der(self): + key_file = load_file("ecc_p256_private.der") + + encoded = self.ref_private._export_rfc5915_private_der() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_p256_private_p8_clear.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem_uncompressed(self): + key_file = load_file("ecc_p256_public.pem", "rt").strip() + + encoded = self.ref_private._export_public_pem(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="PEM", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_pem_compressed(self): + key_file = load_file("ecc_p256_public.pem", "rt").strip() + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="PEM", compress=True) + key_file_compressed_ref = load_file("ecc_p256_public_compressed.pem", "rt").strip() + + self.assertEqual(key_file_compressed, 
key_file_compressed_ref) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_p256_private.pem", "rt").strip() + + encoded = self.ref_private._export_private_pem(None) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private._export_private_pem(passphrase=b"secret") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "EC PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + use_pkcs8=False) + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_private_pkcs8_and_pem_1(self): + # PKCS8 inside PEM with both unencrypted + key_file = load_file("ecc_p256_private_p8_clear.pem", "rt").strip() + + encoded = self.ref_private._export_private_clear_pkcs8_in_clear_pem() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_and_pem_2(self): + # PKCS8 inside PEM with PKCS8 encryption + encoded = self.ref_private._export_private_encrypted_pkcs8_in_clear_pem("secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_openssh_uncompressed(self): + key_file = load_file("ecc_p256_public_openssh.txt", "rt") + + encoded = self.ref_public._export_openssh(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="OpenSSH") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="OpenSSH", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_openssh_compressed(self): + key_file = load_file("ecc_p256_public_openssh.txt", "rt") + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="OpenSSH", compress=True) + assert len(key_file) > len(key_file_compressed) + self.assertEqual(pub_key, ECC.import_key(key_file_compressed)) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + # --- + + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def 
test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase=b"secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.ref_private.export_key(format="PEM", passphrase="secret", + use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # DER format but no PKCS#8 + self.assertRaises(ValueError, self.ref_private.export_key, format="DER", + passphrase="secret", + use_pkcs8=False, + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # Incorrect parameters for public keys + self.assertRaises(ValueError, self.ref_public.export_key, format="DER", + use_pkcs8=False) + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # No private keys with OpenSSH + self.assertRaises(ValueError, self.ref_private.export_key, format="OpenSSH", + passphrase="secret") + + + def test_compressed_curve(self): + + # Compressed P-256 curve (Y-point is even) + pem1 = """-----BEGIN EC PRIVATE KEY----- + MFcCAQEEIHTuc09jC51xXomV6MVCDN+DpAAvSmaJWZPTEHM6D5H1oAoGCCqGSM49 + AwEHoSQDIgACWFuGbHe8yJ43rir7PMTE9w8vHz0BSpXHq90Xi7/s+a0= + -----END EC PRIVATE KEY-----""" + + # Compressed P-256 curve (Y-point is odd) + pem2 = """-----BEGIN EC PRIVATE KEY----- + MFcCAQEEIFggiPN9SQP+FAPTCPp08fRUz7rHp2qNBRcBJ1DXhb3ZoAoGCCqGSM49 + AwEHoSQDIgADLpph1trTIlVfa8NJvlMUPyWvL+wP+pW3BJITUL/wj9A= + -----END EC PRIVATE KEY-----""" + + key1 = ECC.import_key(pem1) + low16 = int(key1.pointQ.y % 65536) + self.assertEqual(low16, 0xA6FC) + + key2 = ECC.import_key(pem2) + low16 = int(key2.pointQ.y % 65536) + self.assertEqual(low16, 0x6E57) + + +class TestExport_P384(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_P384, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p384() + + def test_export_public_der_uncompressed(self): + key_file = load_file("ecc_p384_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(False) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_der_compressed(self): + key_file = load_file("ecc_p384_public.der") + pub_key = ECC.import_key(key_file) + key_file_compressed = pub_key.export_key(format="DER", compress=True) + + key_file_compressed_ref = load_file("ecc_p384_public_compressed.der") + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_public_sec1_uncompressed(self): + key_file = load_file("ecc_p384_public.der") + value = extract_bitstring_from_spki(key_file) + + encoded = self.ref_public.export_key(format="SEC1") + self.assertEqual(value, encoded) + + def test_export_public_sec1_compressed(self): + key_file = load_file("ecc_p384_public.der") + encoded = self.ref_public.export_key(format="SEC1", compress=True) + + 
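# Aside: a compact sketch of the argument contract pinned down by the
# test_error_params1 cases above (assumption: pycryptodomex installed;
# 'priv' is a throwaway key, not a reference vector). With the default
# use_pkcs8=True, a passphrase without an explicit 'protection' scheme
# is rejected, while the legacy PEM path (use_pkcs8=False) needs none.
from Cryptodome.PublicKey import ECC

priv = ECC.generate(curve="P-256")
try:
    priv.export_key(format="PEM", passphrase="secret")  # missing 'protection'
except ValueError:
    pass  # rejected, as the tests expect
pem = priv.export_key(format="PEM", passphrase="secret",
                      protection="PBKDF2WithHMAC-SHA1AndAES128-CBC")
assert "ENCRYPTED PRIVATE KEY" in pem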
key_file_compressed_ref = load_file("ecc_p384_public_compressed.der") + value = extract_bitstring_from_spki(key_file_compressed_ref) + self.assertEqual(value, encoded) + + def test_export_rfc5915_private_der(self): + key_file = load_file("ecc_p384_private.der") + + encoded = self.ref_private._export_rfc5915_private_der() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_p384_private_p8_clear.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem_uncompressed(self): + key_file = load_file("ecc_p384_public.pem", "rt").strip() + + encoded = self.ref_private._export_public_pem(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="PEM", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_pem_compressed(self): + key_file = load_file("ecc_p384_public.pem", "rt").strip() + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="PEM", compress=True) + key_file_compressed_ref = load_file("ecc_p384_public_compressed.pem", "rt").strip() + + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_p384_private.pem", "rt").strip() + + encoded = self.ref_private._export_private_pem(None) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private._export_private_pem(passphrase=b"secret") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "EC PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + use_pkcs8=False) + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_private_pkcs8_and_pem_1(self): + # PKCS8 inside PEM with both unencrypted + key_file = load_file("ecc_p384_private_p8_clear.pem", "rt").strip() + + encoded = self.ref_private._export_private_clear_pkcs8_in_clear_pem() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_and_pem_2(self): + # PKCS8 inside PEM with PKCS8 encryption + 
encoded = self.ref_private._export_private_encrypted_pkcs8_in_clear_pem("secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_openssh_uncompressed(self): + key_file = load_file("ecc_p384_public_openssh.txt", "rt") + + encoded = self.ref_public._export_openssh(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="OpenSSH") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="OpenSSH", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_openssh_compressed(self): + key_file = load_file("ecc_p384_public_openssh.txt", "rt") + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="OpenSSH", compress=True) + assert len(key_file) > len(key_file_compressed) + self.assertEqual(pub_key, ECC.import_key(key_file_compressed)) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + # --- + + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase=b"secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.ref_private.export_key(format="PEM", passphrase="secret", + use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # DER format but no PKCS#8 + self.assertRaises(ValueError, self.ref_private.export_key, format="DER", + passphrase="secret", + use_pkcs8=False, + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # Incorrect parameters for public keys + self.assertRaises(ValueError, self.ref_public.export_key, format="DER", + use_pkcs8=False) + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # No private keys with OpenSSH + self.assertRaises(ValueError, self.ref_private.export_key, format="OpenSSH", + 
passphrase="secret") + + def test_compressed_curve(self): + + # Compressed P-384 curve (Y-point is even) + # openssl ecparam -name secp384p1 -genkey -noout -conv_form compressed -out /tmp/a.pem + # openssl ec -in /tmp/a.pem -text -noout + pem1 = """-----BEGIN EC PRIVATE KEY----- +MIGkAgEBBDAM0lEIhvXuekK2SWtdbgOcZtBaxa9TxfpO/GcDFZLCJ3JVXaTgwken +QT+C+XLtD6WgBwYFK4EEACKhZANiAATs0kZMhFDu8DoBC21jrSDPyAUn4aXZ/DM4 +ylhDfWmb4LEbeszXceIzfhIUaaGs5y1xXaqf5KXTiAAYx2pKUzAAM9lcGUHCGKJG +k4AgUmVJON29XoUilcFrzjDmuye3B6Q= +-----END EC PRIVATE KEY-----""" + + # Compressed P-384 curve (Y-point is odd) + pem2 = """-----BEGIN EC PRIVATE KEY----- +MIGkAgEBBDDHPFTslYLltE16fHdSDTtE/2HTmd3M8mqy5MttAm4wZ833KXiGS9oe +kFdx9sNV0KygBwYFK4EEACKhZANiAASLIE5RqVMtNhtBH/u/p/ifqOAlKnK/+RrQ +YC46ZRsnKNayw3wATdPjgja7L/DSII3nZK0G6KOOVwJBznT/e+zudUJYhZKaBLRx +/bgXyxUtYClOXxb1Y/5N7txLstYRyP0= +-----END EC PRIVATE KEY-----""" + + key1 = ECC.import_key(pem1) + low16 = int(key1.pointQ.y % 65536) + self.assertEqual(low16, 0x07a4) + + key2 = ECC.import_key(pem2) + low16 = int(key2.pointQ.y % 65536) + self.assertEqual(low16, 0xc8fd) + + +class TestExport_P521(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_P521, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_p521() + + def test_export_public_der_uncompressed(self): + key_file = load_file("ecc_p521_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(False) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_der_compressed(self): + key_file = load_file("ecc_p521_public.der") + pub_key = ECC.import_key(key_file) + key_file_compressed = pub_key.export_key(format="DER", compress=True) + + key_file_compressed_ref = load_file("ecc_p521_public_compressed.der") + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_public_sec1_uncompressed(self): + key_file = load_file("ecc_p521_public.der") + value = extract_bitstring_from_spki(key_file) + + encoded = self.ref_public.export_key(format="SEC1") + self.assertEqual(value, encoded) + + encoded = self.ref_public.export_key(format="raw") + self.assertEqual(value, encoded) + + def test_export_public_sec1_compressed(self): + key_file = load_file("ecc_p521_public.der") + encoded = self.ref_public.export_key(format="SEC1", compress=True) + + key_file_compressed_ref = load_file("ecc_p521_public_compressed.der") + value = extract_bitstring_from_spki(key_file_compressed_ref) + self.assertEqual(value, encoded) + + encoded = self.ref_public.export_key(format="raw", compress=True) + self.assertEqual(value, encoded) + + def test_export_rfc5915_private_der(self): + key_file = load_file("ecc_p521_private.der") + + encoded = self.ref_private._export_rfc5915_private_der() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_p521_private_p8_clear.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + 
protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem_uncompressed(self): + key_file = load_file("ecc_p521_public.pem", "rt").strip() + + encoded = self.ref_private._export_public_pem(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="PEM", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_pem_compressed(self): + key_file = load_file("ecc_p521_public.pem", "rt").strip() + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="PEM", compress=True) + key_file_compressed_ref = load_file("ecc_p521_public_compressed.pem", "rt").strip() + + self.assertEqual(key_file_compressed, key_file_compressed_ref) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_p521_private.pem", "rt").strip() + + encoded = self.ref_private._export_private_pem(None) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", use_pkcs8=False) + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private._export_private_pem(passphrase=b"secret") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "EC PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + use_pkcs8=False) + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_private_pkcs8_and_pem_1(self): + # PKCS8 inside PEM with both unencrypted + key_file = load_file("ecc_p521_private_p8_clear.pem", "rt").strip() + + encoded = self.ref_private._export_private_clear_pkcs8_in_clear_pem() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM") + self.assertEqual(key_file, encoded) + + def test_export_private_pkcs8_and_pem_2(self): + # PKCS8 inside PEM with PKCS8 encryption + encoded = self.ref_private._export_private_encrypted_pkcs8_in_clear_pem("secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_openssh_uncompressed(self): + key_file = load_file("ecc_p521_public_openssh.txt", "rt") + + encoded = self.ref_public._export_openssh(False) + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_public.export_key(format="OpenSSH") + 
self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="OpenSSH", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_openssh_compressed(self): + key_file = load_file("ecc_p521_public_openssh.txt", "rt") + pub_key = ECC.import_key(key_file) + + key_file_compressed = pub_key.export_key(format="OpenSSH", compress=True) + assert len(key_file) > len(key_file_compressed) + self.assertEqual(pub_key, ECC.import_key(key_file_compressed)) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + # --- + + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase="secret", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + use_pkcs8=False, + passphrase=b"secret", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.ref_private.export_key(format="PEM", passphrase="secret", + use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # DER format but no PKCS#8 + self.assertRaises(ValueError, self.ref_private.export_key, format="DER", + passphrase="secret", + use_pkcs8=False, + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # Incorrect parameters for public keys + self.assertRaises(ValueError, self.ref_public.export_key, format="DER", + use_pkcs8=False) + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # No private keys with OpenSSH + self.assertRaises(ValueError, self.ref_private.export_key, format="OpenSSH", + passphrase="secret") + + def test_compressed_curve(self): + + # Compressed P-521 curve (Y-point is even) + # openssl ecparam -name secp521r1 -genkey -noout -conv_form compressed -out /tmp/a.pem + # openssl ec -in /tmp/a.pem -text -noout + pem1 = """-----BEGIN EC PRIVATE KEY----- +MIHcAgEBBEIAnm1CEjVjvNfXEN730p+D6su5l+mOztdc5XmTEoti+s2R4GQ4mAv3 +0zYLvyklvOHw0+yy8d0cyGEJGb8T3ZVKmg2gBwYFK4EEACOhgYkDgYYABAHzjTI1 +ckxQ3Togi0LAxiG0PucdBBBs5oIy3df95xv6SInp70z+4qQ2EltEmdNMssH8eOrl +M5CYdZ6nbcHMVaJUvQEzTrYxvFjOgJiOd+E9eBWbLkbMNqsh1UKVO6HbMbW0ohCI +uGxO8tM6r3w89/qzpG2SvFM/fvv3mIR30wSZDD84qA== +-----END EC PRIVATE KEY-----""" + + # Compressed P-521 curve (Y-point is odd) + pem2 = """-----BEGIN EC PRIVATE KEY----- +MIHcAgEBBEIB84OfhJluLBRLn3+cC/RQ37C2SfQVP/t0gQK2tCsTf5avRcWYRrOJ +PmX9lNnkC0Hobd75QFRmdxrB0Wd1/M4jZOWgBwYFK4EEACOhgYkDgYYABAAMZcdJ 
+1YLCGHt3bHCEzdidVy6+brlJIbv1aQ9fPQLF7WKNv4c8w3H8d5a2+SDZilBOsk5c
+6cNJDMz2ExWQvxl4CwDJtJGt1+LHVKFGy73NANqVxMbRu+2F8lOxkNp/ziFTbVyV
+vv6oYkMIIi7r5oQWAiQDrR2mlrrFDL9V7GH/r8SWQw==
+-----END EC PRIVATE KEY-----"""
+
+        key1 = ECC.import_key(pem1)
+        low16 = int(key1.pointQ.y % 65536)
+        self.assertEqual(low16, 0x38a8)
+
+        key2 = ECC.import_key(pem2)
+        low16 = int(key2.pointQ.y % 65536)
+        self.assertEqual(low16, 0x9643)
+
+
+class TestImport_Ed25519(unittest.TestCase):
+
+    def __init__(self, *args, **kwargs):
+        super(TestImport_Ed25519, self).__init__(*args, **kwargs)
+        self.ref_private, self.ref_public = create_ref_keys_ed25519()
+
+    def test_import_public_der(self):
+        key_file = load_file("ecc_ed25519_public.der")
+
+        key = ECC._import_subjectPublicKeyInfo(key_file)
+        self.assertEqual(self.ref_public, key)
+
+        key = ECC._import_der(key_file, None)
+        self.assertEqual(self.ref_public, key)
+
+        key = ECC.import_key(key_file)
+        self.assertEqual(self.ref_public, key)
+
+    def test_import_pkcs8_der(self):
+        key_file = load_file("ecc_ed25519_private.der")
+
+        key = ECC._import_der(key_file, None)
+        self.assertEqual(self.ref_private, key)
+
+        key = ECC.import_key(key_file)
+        self.assertEqual(self.ref_private, key)
+
+    def test_import_private_pkcs8_encrypted_1(self):
+        key_file = load_file("ecc_ed25519_private_p8.der")
+
+        key = ECC._import_der(key_file, "secret")
+        self.assertEqual(self.ref_private, key)
+
+        key = ECC.import_key(key_file, "secret")
+        self.assertEqual(self.ref_private, key)
+
+    def test_import_private_pkcs8_encrypted_2(self):
+        key_file = load_file("ecc_ed25519_private_p8.pem")
+
+        key = ECC.import_key(key_file, "secret")
+        self.assertEqual(self.ref_private, key)
+
+    def test_import_x509_der(self):
+        key_file = load_file("ecc_ed25519_x509.der")
+
+        key = ECC._import_der(key_file, None)
+        self.assertEqual(self.ref_public, key)
+
+        key = ECC.import_key(key_file)
+        self.assertEqual(self.ref_public, key)
+
+    def test_import_public_pem(self):
+        key_file = load_file("ecc_ed25519_public.pem")
+
+        key = ECC.import_key(key_file)
+        self.assertEqual(self.ref_public, key)
+
+    def test_import_private_pem(self):
+        key_file = load_file("ecc_ed25519_private.pem")
+
+        key = ECC.import_key(key_file)
+        self.assertEqual(self.ref_private, key)
+
+    def test_import_private_pem_encrypted(self):
+        for algo in "des3", "aes128", "aes192", "aes256":
+            key_file = load_file("ecc_ed25519_private_enc_%s.pem" % algo)
+
+            key = ECC.import_key(key_file, "secret")
+            self.assertEqual(self.ref_private, key)
+
+            key = ECC.import_key(tostr(key_file), b"secret")
+            self.assertEqual(self.ref_private, key)
+
+    def test_import_x509_pem(self):
+        key_file = load_file("ecc_ed25519_x509.pem")
+
+        key = ECC.import_key(key_file)
+        self.assertEqual(self.ref_public, key)
+
+    def test_import_openssh_public(self):
+        key_file = load_file("ecc_ed25519_public_openssh.txt")
+        key = ECC._import_openssh_public(key_file)
+        self.assertFalse(key.has_private())
+        key = ECC.import_key(key_file)
+        self.assertFalse(key.has_private())
+
+    def test_import_openssh_private_clear(self):
+        key_file = load_file("ecc_ed25519_private_openssh.pem")
+        key = ECC.import_key(key_file)
+
+    def test_import_openssh_private_password(self):
+        key_file = load_file("ecc_ed25519_private_openssh_pwd.pem")
+        key = ECC.import_key(key_file, b"password")
+
+
+class TestExport_Ed25519(unittest.TestCase):
+
+    def __init__(self, *args, **kwargs):
+        super(TestExport_Ed25519, self).__init__(*args, **kwargs)
+        self.ref_private, self.ref_public = create_ref_keys_ed25519()
+
+    def
test_export_public_der(self): + key_file = load_file("ecc_ed25519_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(True) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_sec1(self): + self.assertRaises(ValueError, self.ref_public.export_key, format="SEC1") + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_ed25519_private.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + self.assertRaises(ValueError, self.ref_private.export_key, + format="DER", use_pkcs8=False) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem(self): + key_file_ref = load_file("ecc_ed25519_public.pem", "rt").strip() + key_file = self.ref_public.export_key(format="PEM").strip() + self.assertEqual(key_file_ref, key_file) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_ed25519_private.pem", "rt").strip() + encoded = self.ref_private.export_key(format="PEM").strip() + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private.export_key(format="PEM", + passphrase=b"secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_openssh(self): + key_file = load_file("ecc_ed25519_public_openssh.txt", "rt") + public_key = ECC.import_key(key_file) + key_file = " ".join(key_file.split(' ')[:2]) # remove comment + + encoded = public_key._export_openssh(False) + self.assertEqual(key_file, encoded.strip()) + + encoded = public_key.export_key(format="OpenSSH") + self.assertEqual(key_file, encoded.strip()) + + def test_export_raw(self): + encoded = self.ref_public.export_key(format='raw') + self.assertEqual(encoded, unhexlify(b'bc85b8cf585d20a4de47e84d1cb6183f63d9ba96223fcbc886e363ffdea20cff')) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + 
protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase=b"secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # No private keys with OpenSSH + self.assertRaises(ValueError, self.ref_private.export_key, format="OpenSSH", + passphrase="secret") + + +class TestImport_Ed448(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestImport_Ed448, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_ed448() + + def test_import_public_der(self): + key_file = load_file("ecc_ed448_public.der") + + key = ECC._import_subjectPublicKeyInfo(key_file) + self.assertEqual(self.ref_public, key) + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_pkcs8_der(self): + key_file = load_file("ecc_ed448_private.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_1(self): + key_file = load_file("ecc_ed448_private_p8.der") + + key = ECC._import_der(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_private_pkcs8_encrypted_2(self): + key_file = load_file("ecc_ed448_private_p8.pem") + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_der(self): + key_file = load_file("ecc_ed448_x509.der") + + key = ECC._import_der(key_file, None) + self.assertEqual(self.ref_public, key) + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_public_pem(self): + key_file = load_file("ecc_ed448_public.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + def test_import_private_pem(self): + key_file = load_file("ecc_ed448_private.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_private, key) + + def test_import_private_pem_encrypted(self): + for algo in "des3", "aes128", "aes192", "aes256": + key_file = load_file("ecc_ed448_private_enc_%s.pem" % algo) + + key = ECC.import_key(key_file, "secret") + self.assertEqual(self.ref_private, key) + + key = ECC.import_key(tostr(key_file), b"secret") + self.assertEqual(self.ref_private, key) + + def test_import_x509_pem(self): + key_file = load_file("ecc_ed448_x509.pem") + + key = ECC.import_key(key_file) + self.assertEqual(self.ref_public, key) + + +class TestExport_Ed448(unittest.TestCase): + + def __init__(self, *args, **kwargs): + super(TestExport_Ed448, self).__init__(*args, **kwargs) + self.ref_private, self.ref_public = create_ref_keys_ed448() + + def test_export_public_der(self): + key_file = 
load_file("ecc_ed448_public.der") + + encoded = self.ref_public._export_subjectPublicKeyInfo(True) + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER") + self.assertEqual(key_file, encoded) + + encoded = self.ref_public.export_key(format="DER", compress=False) + self.assertEqual(key_file, encoded) + + def test_export_public_sec1(self): + self.assertRaises(ValueError, self.ref_public.export_key, format="SEC1") + + def test_export_private_pkcs8_clear(self): + key_file = load_file("ecc_ed448_private.der") + + encoded = self.ref_private._export_pkcs8() + self.assertEqual(key_file, encoded) + + # --- + + encoded = self.ref_private.export_key(format="DER") + self.assertEqual(key_file, encoded) + + self.assertRaises(ValueError, self.ref_private.export_key, + format="DER", use_pkcs8=False) + + def test_export_private_pkcs8_encrypted(self): + encoded = self.ref_private._export_pkcs8(passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC._import_pkcs8, encoded, None) + + decoded = ECC._import_pkcs8(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + # --- + + encoded = self.ref_private.export_key(format="DER", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_public_pem(self): + key_file_ref = load_file("ecc_ed448_public.pem", "rt").strip() + key_file = self.ref_public.export_key(format="PEM").strip() + self.assertEqual(key_file_ref, key_file) + + def test_export_private_pem_clear(self): + key_file = load_file("ecc_ed448_private.pem", "rt").strip() + encoded = self.ref_private.export_key(format="PEM").strip() + self.assertEqual(key_file, encoded) + + def test_export_private_pem_encrypted(self): + encoded = self.ref_private.export_key(format="PEM", + passphrase=b"secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # This should prove that the output is password-protected + self.assertRaises(ValueError, ECC.import_key, encoded) + + assert "ENCRYPTED PRIVATE KEY" in encoded + + decoded = ECC.import_key(encoded, "secret") + self.assertEqual(self.ref_private, decoded) + + def test_export_openssh(self): + # Not supported + self.assertRaises(ValueError, self.ref_public.export_key, format="OpenSSH") + + def test_export_raw(self): + encoded = self.ref_public.export_key(format='raw') + self.assertEqual(encoded, unhexlify(b'899014ddc0a0e1260cfc1085afdf952019e9fd63372e3e366e26dad32b176624884330a14617237e3081febd9d1a15069e7499433d2f55dd80')) + + def test_prng(self): + # Test that password-protected containers use the provided PRNG + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_byte_or_string_passphrase(self): + encoded1 = self.ref_private.export_key(format="PEM", + passphrase="secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + encoded2 = self.ref_private.export_key(format="PEM", + passphrase=b"secret", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC", + randfunc=get_fixed_prng()) + self.assertEqual(encoded1, encoded2) + + def test_error_params1(self): + # 
Unknown format + self.assertRaises(ValueError, self.ref_private.export_key, format="XXX") + + # Missing 'protection' parameter when PKCS#8 is used + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="secret") + + # Empty password + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", use_pkcs8=False) + self.assertRaises(ValueError, self.ref_private.export_key, format="PEM", + passphrase="", + protection="PBKDF2WithHMAC-SHA1AndAES128-CBC") + + # No private keys with OpenSSH + self.assertRaises(ValueError, self.ref_private.export_key, format="OpenSSH", + passphrase="secret") + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(TestImport) + try: + tests += list_test_cases(TestImport_P192) + tests += list_test_cases(TestImport_P224) + tests += list_test_cases(TestImport_P256) + tests += list_test_cases(TestImport_P384) + tests += list_test_cases(TestImport_P521) + tests += list_test_cases(TestImport_Ed25519) + tests += list_test_cases(TestImport_Ed448) + + tests += list_test_cases(TestExport_P192) + tests += list_test_cases(TestExport_P224) + tests += list_test_cases(TestExport_P256) + tests += list_test_cases(TestExport_P384) + tests += list_test_cases(TestExport_P521) + tests += list_test_cases(TestExport_Ed25519) + tests += list_test_cases(TestExport_Ed448) + + except MissingTestVectorException: + pass + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_RSA.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_RSA.py new file mode 100644 index 0000000..a1a2238 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/PublicKey/test_import_RSA.py @@ -0,0 +1,590 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/PublicKey/test_importKey.py: Self-test for importing RSA keys +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +import os +import re +import errno +import warnings +import unittest + +from Cryptodome.PublicKey import RSA +from Cryptodome.SelfTest.st_common import a2b_hex, list_test_cases +from Cryptodome.Util.py3compat import b, tostr, FileNotFoundError +from Cryptodome.Util.number import inverse +from Cryptodome.Util import asn1 + +try: + import pycryptodome_test_vectors # type: ignore + test_vectors_available = True +except ImportError: + test_vectors_available = False + + +def load_file(file_name, mode="rb"): + results = None + + try: + if not test_vectors_available: + raise FileNotFoundError(errno.ENOENT, + os.strerror(errno.ENOENT), + file_name) + + dir_comps = ("PublicKey", "RSA") + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + with open(full_file_name, mode) as file_in: + results = file_in.read() + + except FileNotFoundError: + warnings.warn("Warning: skipping extended tests for RSA", + UserWarning, + stacklevel=2) + + return results + + +def der2pem(der, text='PUBLIC'): + import binascii + chunks = [binascii.b2a_base64(der[i:i+48]) for i in range(0, len(der), 48)] + pem = b('-----BEGIN %s KEY-----\n' % text) + pem += b('').join(chunks) + pem += b('-----END %s KEY-----' % text) + return pem + + +class ImportKeyTests(unittest.TestCase): + # 512-bit RSA key generated with openssl + rsaKeyPEM = u'''-----BEGIN RSA PRIVATE KEY----- +MIIBOwIBAAJBAL8eJ5AKoIsjURpcEoGubZMxLD7+kT+TLr7UkvEtFrRhDDKMtuII +q19FrL4pUIMymPMSLBn3hJLe30Dw48GQM4UCAwEAAQJACUSDEp8RTe32ftq8IwG8 +Wojl5mAd1wFiIOrZ/Uv8b963WJOJiuQcVN29vxU5+My9GPZ7RA3hrDBEAoHUDPrI +OQIhAPIPLz4dphiD9imAkivY31Rc5AfHJiQRA7XixTcjEkojAiEAyh/pJHks/Mlr ++rdPNEpotBjfV4M4BkgGAA/ipcmaAjcCIQCHvhwwKVBLzzTscT2HeUdEeBMoiXXK +JACAr3sJQJGxIQIgarRp+m1WSKV1MciwMaTOnbU7wxFs9DP1pva76lYBzgUCIQC9 +n0CnZCJ6IZYqSt0H5N7+Q+2Ro64nuwV/OSQfM6sBwQ== +-----END RSA PRIVATE KEY-----''' + + # As above, but this is actually an unencrypted PKCS#8 key + rsaKeyPEM8 = u'''-----BEGIN PRIVATE KEY----- +MIIBVQIBADANBgkqhkiG9w0BAQEFAASCAT8wggE7AgEAAkEAvx4nkAqgiyNRGlwS +ga5tkzEsPv6RP5MuvtSS8S0WtGEMMoy24girX0WsvilQgzKY8xIsGfeEkt7fQPDj +wZAzhQIDAQABAkAJRIMSnxFN7fZ+2rwjAbxaiOXmYB3XAWIg6tn9S/xv3rdYk4mK +5BxU3b2/FTn4zL0Y9ntEDeGsMEQCgdQM+sg5AiEA8g8vPh2mGIP2KYCSK9jfVFzk +B8cmJBEDteLFNyMSSiMCIQDKH+kkeSz8yWv6t080Smi0GN9XgzgGSAYAD+KlyZoC +NwIhAIe+HDApUEvPNOxxPYd5R0R4EyiJdcokAICvewlAkbEhAiBqtGn6bVZIpXUx +yLAxpM6dtTvDEWz0M/Wm9rvqVgHOBQIhAL2fQKdkInohlipK3Qfk3v5D7ZGjrie7 +BX85JB8zqwHB +-----END PRIVATE KEY-----''' + + # The same RSA private key as in rsaKeyPEM, but now encrypted + rsaKeyEncryptedPEM = ( + + # PEM encryption + # With DES and passphrase 'test' + ('test', u'''-----BEGIN RSA PRIVATE KEY----- +Proc-Type: 4,ENCRYPTED +DEK-Info: DES-CBC,AF8F9A40BD2FA2FC + +Ckl9ex1kaVEWhYC2QBmfaF+YPiR4NFkRXA7nj3dcnuFEzBnY5XULupqQpQI3qbfA +u8GYS7+b3toWWiHZivHbAAUBPDIZG9hKDyB9Sq2VMARGsX1yW1zhNvZLIiVJzUHs +C6NxQ1IJWOXzTew/xM2I26kPwHIvadq+/VaT8gLQdjdH0jOiVNaevjWnLgrn1mLP +BCNRMdcexozWtAFNNqSzfW58MJL2OdMi21ED184EFytIc1BlB+FZiGZduwKGuaKy +9bMbdb/1PSvsSzPsqW7KSSrTw6MgJAFJg6lzIYvR5F4poTVBxwBX3+EyEmShiaNY +IRX3TgQI0IjrVuLmvlZKbGWP18FXj7I7k9tSsNOOzllTTdq3ny5vgM3A+ynfAaxp +dysKznQ6P+IoqML1WxAID4aGRMWka+uArOJ148Rbj9s= +-----END RSA PRIVATE KEY-----'''), + + # PKCS8 encryption + ('winter', u'''-----BEGIN ENCRYPTED PRIVATE KEY----- +MIIBpjBABgkqhkiG9w0BBQ0wMzAbBgkqhkiG9w0BBQwwDgQIeZIsbW3O+JcCAggA +MBQGCCqGSIb3DQMHBAgSM2p0D8FilgSCAWBhFyP2tiGKVpGj3mO8qIBzinU60ApR 
+3unvP+N6j7LVgnV2lFGaXbJ6a1PbQXe+2D6DUyBLo8EMXrKKVLqOMGkFMHc0UaV6 +R6MmrsRDrbOqdpTuVRW+NVd5J9kQQh4xnfU/QrcPPt7vpJvSf4GzG0n666Ki50OV +M/feuVlIiyGXY6UWdVDpcOV72cq02eNUs/1JWdh2uEBvA9fCL0c07RnMrdT+CbJQ +NjJ7f8ULtp7xvR9O3Al/yJ4Wv3i4VxF1f3MCXzhlUD4I0ONlr0kJWgeQ80q/cWhw +ntvgJwnCn2XR1h6LA8Wp+0ghDTsL2NhJpWd78zClGhyU4r3hqu1XDjoXa7YCXCix +jCV15+ViDJzlNCwg+W6lRg18sSLkCT7alviIE0U5tHc6UPbbHwT5QqAxAABaP+nZ +CGqJGyiwBzrKebjgSm/KRd4C91XqcsysyH2kKPfT51MLAoD4xelOURBP +-----END ENCRYPTED PRIVATE KEY-----''' + ), + ) + + rsaPublicKeyPEM = u'''-----BEGIN PUBLIC KEY----- +MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL8eJ5AKoIsjURpcEoGubZMxLD7+kT+T +Lr7UkvEtFrRhDDKMtuIIq19FrL4pUIMymPMSLBn3hJLe30Dw48GQM4UCAwEAAQ== +-----END PUBLIC KEY-----''' + + # Obtained using 'ssh-keygen -i -m PKCS8 -f rsaPublicKeyPEM' + rsaPublicKeyOpenSSH = b('''ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAQQC/HieQCqCLI1EaXBKBrm2TMSw+/pE/ky6+1JLxLRa0YQwyjLbiCKtfRay+KVCDMpjzEiwZ94SS3t9A8OPBkDOF comment\n''') + + # The private key, in PKCS#1 format encoded with DER + rsaKeyDER = a2b_hex( + '''3082013b020100024100bf1e27900aa08b23511a5c1281ae6d93312c3efe + 913f932ebed492f12d16b4610c328cb6e208ab5f45acbe2950833298f312 + 2c19f78492dedf40f0e3c190338502030100010240094483129f114dedf6 + 7edabc2301bc5a88e5e6601dd7016220ead9fd4bfc6fdeb75893898ae41c + 54ddbdbf1539f8ccbd18f67b440de1ac30440281d40cfac839022100f20f + 2f3e1da61883f62980922bd8df545ce407c726241103b5e2c53723124a23 + 022100ca1fe924792cfcc96bfab74f344a68b418df578338064806000fe2 + a5c99a023702210087be1c3029504bcf34ec713d877947447813288975ca + 240080af7b094091b12102206ab469fa6d5648a57531c8b031a4ce9db53b + c3116cf433f5a6f6bbea5601ce05022100bd9f40a764227a21962a4add07 + e4defe43ed91a3ae27bb057f39241f33ab01c1 + '''.replace(" ","")) + + # The private key, in unencrypted PKCS#8 format encoded with DER + rsaKeyDER8 = a2b_hex( + '''30820155020100300d06092a864886f70d01010105000482013f3082013 + b020100024100bf1e27900aa08b23511a5c1281ae6d93312c3efe913f932 + ebed492f12d16b4610c328cb6e208ab5f45acbe2950833298f3122c19f78 + 492dedf40f0e3c190338502030100010240094483129f114dedf67edabc2 + 301bc5a88e5e6601dd7016220ead9fd4bfc6fdeb75893898ae41c54ddbdb + f1539f8ccbd18f67b440de1ac30440281d40cfac839022100f20f2f3e1da + 61883f62980922bd8df545ce407c726241103b5e2c53723124a23022100c + a1fe924792cfcc96bfab74f344a68b418df578338064806000fe2a5c99a0 + 23702210087be1c3029504bcf34ec713d877947447813288975ca240080a + f7b094091b12102206ab469fa6d5648a57531c8b031a4ce9db53bc3116cf + 433f5a6f6bbea5601ce05022100bd9f40a764227a21962a4add07e4defe4 + 3ed91a3ae27bb057f39241f33ab01c1 + '''.replace(" ","")) + + rsaPublicKeyDER = a2b_hex( + '''305c300d06092a864886f70d0101010500034b003048024100bf1e27900a + a08b23511a5c1281ae6d93312c3efe913f932ebed492f12d16b4610c328c + b6e208ab5f45acbe2950833298f3122c19f78492dedf40f0e3c190338502 + 03010001 + '''.replace(" ","")) + + n = int('BF 1E 27 90 0A A0 8B 23 51 1A 5C 12 81 AE 6D 93 31 2C 3E FE 91 3F 93 2E BE D4 92 F1 2D 16 B4 61 0C 32 8C B6 E2 08 AB 5F 45 AC BE 29 50 83 32 98 F3 12 2C 19 F7 84 92 DE DF 40 F0 E3 C1 90 33 85'.replace(" ",""),16) + e = 65537 + d = int('09 44 83 12 9F 11 4D ED F6 7E DA BC 23 01 BC 5A 88 E5 E6 60 1D D7 01 62 20 EA D9 FD 4B FC 6F DE B7 58 93 89 8A E4 1C 54 DD BD BF 15 39 F8 CC BD 18 F6 7B 44 0D E1 AC 30 44 02 81 D4 0C FA C8 39'.replace(" ",""),16) + p = int('00 F2 0F 2F 3E 1D A6 18 83 F6 29 80 92 2B D8 DF 54 5C E4 07 C7 26 24 11 03 B5 E2 C5 37 23 12 4A 23'.replace(" ",""),16) + q = int('00 CA 1F E9 24 79 2C FC C9 6B FA B7 4F 34 4A 68 B4 18 DF 57 83 38 06 48 06 00 0F E2 A5 C9 9A 02 37'.replace(" ",""),16) + 
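+    # Sanity relations among the components above; RSA.construct()
+    # re-checks these whenever the tests below rebuild the key:
+    #
+    #   p * q == n
+    #   (e * d) % lcm(p - 1, q - 1) == 1
+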
+    # This is q^{-1} mod p. fastmath and slowmath use pInv (p^{-1}
+    # mod q) instead!
+    qInv = int('00 BD 9F 40 A7 64 22 7A 21 96 2A 4A DD 07 E4 DE FE 43 ED 91 A3 AE 27 BB 05 7F 39 24 1F 33 AB 01 C1'.replace(" ",""),16)
+    pInv = inverse(p,q)
+
+    def testImportKey1(self):
+        """Verify import of RSAPrivateKey DER SEQUENCE"""
+        key = RSA.importKey(self.rsaKeyDER)
+        self.assertTrue(key.has_private())
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+        self.assertEqual(key.d, self.d)
+        self.assertEqual(key.p, self.p)
+        self.assertEqual(key.q, self.q)
+
+    def testImportKey2(self):
+        """Verify import of SubjectPublicKeyInfo DER SEQUENCE"""
+        key = RSA.importKey(self.rsaPublicKeyDER)
+        self.assertFalse(key.has_private())
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+
+    def testImportKey3unicode(self):
+        """Verify import of RSAPrivateKey DER SEQUENCE, encoded with PEM as unicode"""
+        key = RSA.importKey(self.rsaKeyPEM)
+        self.assertEqual(key.has_private(), True)
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+        self.assertEqual(key.d, self.d)
+        self.assertEqual(key.p, self.p)
+        self.assertEqual(key.q, self.q)
+
+    def testImportKey3bytes(self):
+        """Verify import of RSAPrivateKey DER SEQUENCE, encoded with PEM as byte string"""
+        key = RSA.importKey(b(self.rsaKeyPEM))
+        self.assertEqual(key.has_private(), True)
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+        self.assertEqual(key.d, self.d)
+        self.assertEqual(key.p, self.p)
+        self.assertEqual(key.q, self.q)
+
+    def testImportKey4unicode(self):
+        """Verify import of SubjectPublicKeyInfo DER SEQUENCE, encoded with PEM as unicode"""
+        key = RSA.importKey(self.rsaPublicKeyPEM)
+        self.assertEqual(key.has_private(), False)
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+
+    def testImportKey4bytes(self):
+        """Verify import of SubjectPublicKeyInfo DER SEQUENCE, encoded with PEM as byte string"""
+        key = RSA.importKey(b(self.rsaPublicKeyPEM))
+        self.assertEqual(key.has_private(), False)
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+
+    def testImportKey5(self):
+        """Verifies that the imported key is still a valid RSA pair"""
+        key = RSA.importKey(self.rsaKeyPEM)
+        idem = key._encrypt(key._decrypt(89))
+        self.assertEqual(idem, 89)
+
+    def testImportKey6(self):
+        """Verifies that the imported key is still a valid RSA pair"""
+        key = RSA.importKey(self.rsaKeyDER)
+        idem = key._encrypt(key._decrypt(65))
+        self.assertEqual(idem, 65)
+
+    def testImportKey7(self):
+        """Verify import of OpenSSH public key"""
+        key = RSA.importKey(self.rsaPublicKeyOpenSSH)
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+
+    def testImportKey8(self):
+        """Verify import of encrypted PrivateKeyInfo DER SEQUENCE"""
+        for t in self.rsaKeyEncryptedPEM:
+            key = RSA.importKey(t[1], t[0])
+            self.assertTrue(key.has_private())
+            self.assertEqual(key.n, self.n)
+            self.assertEqual(key.e, self.e)
+            self.assertEqual(key.d, self.d)
+            self.assertEqual(key.p, self.p)
+            self.assertEqual(key.q, self.q)
+
+    def testImportKey9(self):
+        """Verify import of unencrypted PrivateKeyInfo DER SEQUENCE"""
+        key = RSA.importKey(self.rsaKeyDER8)
+        self.assertTrue(key.has_private())
+        self.assertEqual(key.n, self.n)
+        self.assertEqual(key.e, self.e)
+        self.assertEqual(key.d, self.d)
+        self.assertEqual(key.p, self.p)
+        self.assertEqual(key.q, self.q)
+
+    def testImportKey10(self):
+        """Verify import of unencrypted PrivateKeyInfo DER
SEQUENCE, encoded with PEM""" + key = RSA.importKey(self.rsaKeyPEM8) + self.assertTrue(key.has_private()) + self.assertEqual(key.n, self.n) + self.assertEqual(key.e, self.e) + self.assertEqual(key.d, self.d) + self.assertEqual(key.p, self.p) + self.assertEqual(key.q, self.q) + + def testImportKey11(self): + """Verify import of RSAPublicKey DER SEQUENCE""" + der = asn1.DerSequence([17, 3]).encode() + key = RSA.importKey(der) + self.assertEqual(key.n, 17) + self.assertEqual(key.e, 3) + + def testImportKey12(self): + """Verify import of RSAPublicKey DER SEQUENCE, encoded with PEM""" + der = asn1.DerSequence([17, 3]).encode() + pem = der2pem(der) + key = RSA.importKey(pem) + self.assertEqual(key.n, 17) + self.assertEqual(key.e, 3) + + def test_import_key_windows_cr_lf(self): + pem_cr_lf = "\r\n".join(self.rsaKeyPEM.splitlines()) + key = RSA.importKey(pem_cr_lf) + self.assertEqual(key.n, self.n) + self.assertEqual(key.e, self.e) + self.assertEqual(key.d, self.d) + self.assertEqual(key.p, self.p) + self.assertEqual(key.q, self.q) + + def test_import_empty(self): + self.assertRaises(ValueError, RSA.import_key, b"") + + ### + def testExportKey1(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + derKey = key.export_key("DER") + self.assertEqual(derKey, self.rsaKeyDER) + + def testExportKey2(self): + key = RSA.construct([self.n, self.e]) + derKey = key.export_key("DER") + self.assertEqual(derKey, self.rsaPublicKeyDER) + + def testExportKey3(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + pemKey = key.export_key("PEM") + self.assertEqual(pemKey, b(self.rsaKeyPEM)) + + def testExportKey4(self): + key = RSA.construct([self.n, self.e]) + pemKey = key.export_key("PEM") + self.assertEqual(pemKey, b(self.rsaPublicKeyPEM)) + + def testExportKey5(self): + key = RSA.construct([self.n, self.e]) + openssh_1 = key.export_key("OpenSSH").split() + openssh_2 = self.rsaPublicKeyOpenSSH.split() + self.assertEqual(openssh_1[0], openssh_2[0]) + self.assertEqual(openssh_1[1], openssh_2[1]) + + def testExportKey7(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + derKey = key.export_key("DER", pkcs=8) + self.assertEqual(derKey, self.rsaKeyDER8) + + def testExportKey8(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + pemKey = key.export_key("PEM", pkcs=8) + self.assertEqual(pemKey, b(self.rsaKeyPEM8)) + + def testExportKey9(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + self.assertRaises(ValueError, key.export_key, "invalid-format") + + def testExportKey10(self): + # Export and re-import the encrypted key. It must match. + # PEM envelope, PKCS#1, old PEM encryption + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + outkey = key.export_key('PEM', 'test') + self.assertTrue(tostr(outkey).find('4,ENCRYPTED')!=-1) + self.assertTrue(tostr(outkey).find('BEGIN RSA PRIVATE KEY')!=-1) + inkey = RSA.importKey(outkey, 'test') + self.assertEqual(key.n, inkey.n) + self.assertEqual(key.e, inkey.e) + self.assertEqual(key.d, inkey.d) + + def testExportKey11(self): + # Export and re-import the encrypted key. It must match. 
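+        # (Same as testExportKey10, but with pkcs=1 passed explicitly;
+        # PKCS#1 is the default container, so the output must be
+        # structurally identical.)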
+        # PEM envelope, PKCS#1, old PEM encryption
+        key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv])
+        outkey = key.export_key('PEM', 'test', pkcs=1)
+        self.assertTrue(tostr(outkey).find('4,ENCRYPTED')!=-1)
+        self.assertTrue(tostr(outkey).find('BEGIN RSA PRIVATE KEY')!=-1)
+        inkey = RSA.importKey(outkey, 'test')
+        self.assertEqual(key.n, inkey.n)
+        self.assertEqual(key.e, inkey.e)
+        self.assertEqual(key.d, inkey.d)
+
+    def testExportKey12(self):
+        # Export and re-import the encrypted key. It must match.
+        # PEM envelope, PKCS#8, old PEM encryption
+        key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv])
+        outkey = key.export_key('PEM', 'test', pkcs=8)
+        self.assertTrue(tostr(outkey).find('4,ENCRYPTED')!=-1)
+        self.assertTrue(tostr(outkey).find('BEGIN PRIVATE KEY')!=-1)
+        inkey = RSA.importKey(outkey, 'test')
+        self.assertEqual(key.n, inkey.n)
+        self.assertEqual(key.e, inkey.e)
+        self.assertEqual(key.d, inkey.d)
+
+    def testExportKey13(self):
+        # Export and re-import the encrypted key. It must match.
+        # PEM envelope, PKCS#8, PKCS#8 encryption
+        key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv])
+        outkey = key.export_key('PEM', 'test', pkcs=8,
+                                protection='PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC')
+        self.assertTrue(tostr(outkey).find('4,ENCRYPTED')==-1)
+        self.assertTrue(tostr(outkey).find('BEGIN ENCRYPTED PRIVATE KEY')!=-1)
+        inkey = RSA.importKey(outkey, 'test')
+        self.assertEqual(key.n, inkey.n)
+        self.assertEqual(key.e, inkey.e)
+        self.assertEqual(key.d, inkey.d)
+
+    def testExportKey14(self):
+        # Export and re-import the encrypted key. It must match.
+        # DER envelope, PKCS#8, PKCS#8 encryption
+        key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv])
+        outkey = key.export_key('DER', 'test', pkcs=8)
+        inkey = RSA.importKey(outkey, 'test')
+        self.assertEqual(key.n, inkey.n)
+        self.assertEqual(key.e, inkey.e)
+        self.assertEqual(key.d, inkey.d)
+
+    def testExportKey15(self):
+        # Verify that an error condition is detected when trying to
+        # use a password with DER encoding and PKCS#1.
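+        # (A plain PKCS#1 RSAPrivateKey structure has no field for
+        # encryption parameters: passphrase protection requires either
+        # the PEM envelope or a PKCS#8 container, so this combination
+        # must be rejected with ValueError.)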
+ key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + self.assertRaises(ValueError, key.export_key, 'DER', 'test', 1) + + def test_import_key(self): + """Verify that import_key is an alias to importKey""" + key = RSA.import_key(self.rsaPublicKeyDER) + self.assertFalse(key.has_private()) + self.assertEqual(key.n, self.n) + self.assertEqual(key.e, self.e) + + def test_import_key_ba_mv(self): + """Verify that import_key can be used on bytearrays and memoryviews""" + key = RSA.import_key(bytearray(self.rsaPublicKeyDER)) + key = RSA.import_key(memoryview(self.rsaPublicKeyDER)) + + def test_exportKey(self): + key = RSA.construct([self.n, self.e, self.d, self.p, self.q, self.pInv]) + self.assertEqual(key.export_key(), key.exportKey()) + + +class ImportKeyFromX509Cert(unittest.TestCase): + + def test_x509v1(self): + + # Sample V1 certificate with a 1024 bit RSA key + x509_v1_cert = """ +-----BEGIN CERTIFICATE----- +MIICOjCCAaMCAQEwDQYJKoZIhvcNAQEEBQAwfjENMAsGA1UEChMEQWNtZTELMAkG +A1UECxMCUkQxHDAaBgkqhkiG9w0BCQEWDXNwYW1AYWNtZS5vcmcxEzARBgNVBAcT +Ck1ldHJvcG9saXMxETAPBgNVBAgTCE5ldyBZb3JrMQswCQYDVQQGEwJVUzENMAsG +A1UEAxMEdGVzdDAeFw0xNDA3MTExOTU3MjRaFw0xNzA0MDYxOTU3MjRaME0xCzAJ +BgNVBAYTAlVTMREwDwYDVQQIEwhOZXcgWW9yazENMAsGA1UEChMEQWNtZTELMAkG +A1UECxMCUkQxDzANBgNVBAMTBmxhdHZpYTCBnzANBgkqhkiG9w0BAQEFAAOBjQAw +gYkCgYEAyG+kytdRj3TFbRmHDYp3TXugVQ81chew0qeOxZWOz80IjtWpgdOaCvKW +NCuc8wUR9BWrEQW+39SaRMLiQfQtyFSQZijc3nsEBu/Lo4uWZ0W/FHDRVSvkJA/V +Ex5NL5ikI+wbUeCV5KajGNDalZ8F1pk32+CBs8h1xNx5DyxuEHUCAwEAATANBgkq +hkiG9w0BAQQFAAOBgQCVQF9Y//Q4Psy+umEM38pIlbZ2hxC5xNz/MbVPwuCkNcGn +KYNpQJP+JyVTsPpO8RLZsAQDzRueMI3S7fbbwTzAflN0z19wvblvu93xkaBytVok +9VBAH28olVhy9b1MMeg2WOt5sUEQaFNPnwwsyiY9+HsRpvpRnPSQF+kyYVsshQ== +-----END CERTIFICATE----- + """.strip() + + # RSA public key as dumped by openssl + exponent = 65537 + modulus_str = """ +00:c8:6f:a4:ca:d7:51:8f:74:c5:6d:19:87:0d:8a: +77:4d:7b:a0:55:0f:35:72:17:b0:d2:a7:8e:c5:95: +8e:cf:cd:08:8e:d5:a9:81:d3:9a:0a:f2:96:34:2b: +9c:f3:05:11:f4:15:ab:11:05:be:df:d4:9a:44:c2: +e2:41:f4:2d:c8:54:90:66:28:dc:de:7b:04:06:ef: +cb:a3:8b:96:67:45:bf:14:70:d1:55:2b:e4:24:0f: +d5:13:1e:4d:2f:98:a4:23:ec:1b:51:e0:95:e4:a6: +a3:18:d0:da:95:9f:05:d6:99:37:db:e0:81:b3:c8: +75:c4:dc:79:0f:2c:6e:10:75 + """ + modulus = int(re.sub("[^0-9a-f]","", modulus_str), 16) + + key = RSA.importKey(x509_v1_cert) + self.assertEqual(key.e, exponent) + self.assertEqual(key.n, modulus) + self.assertFalse(key.has_private()) + + def test_x509v3(self): + + # Sample V3 certificate with a 1024 bit RSA key + x509_v3_cert = """ +-----BEGIN CERTIFICATE----- +MIIEcjCCAlqgAwIBAgIBATANBgkqhkiG9w0BAQsFADBhMQswCQYDVQQGEwJVUzEL +MAkGA1UECAwCTUQxEjAQBgNVBAcMCUJhbHRpbW9yZTEQMA4GA1UEAwwHVGVzdCBD +QTEfMB0GCSqGSIb3DQEJARYQdGVzdEBleGFtcGxlLmNvbTAeFw0xNDA3MTIwOTM1 +MTJaFw0xNzA0MDcwOTM1MTJaMEQxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJNRDES +MBAGA1UEBwwJQmFsdGltb3JlMRQwEgYDVQQDDAtUZXN0IFNlcnZlcjCBnzANBgkq +hkiG9w0BAQEFAAOBjQAwgYkCgYEA/S7GJV2OcFdyNMQ4K75KrYFtMEn3VnEFdPHa +jyS37XlMxSh0oS4GeTGVUCJInl5Cpsv8WQdh03FfeOdvzp5IZ46OcjeOPiWnmjgl +2G5j7e2bDH7RSchGV+OD6Fb1Agvuu2/9iy8fdf3rPQ/7eAddzKUrzwacVbnW+tg2 +QtSXKRcCAwEAAaOB1TCB0jAdBgNVHQ4EFgQU/WwCX7FfWMIPDFfJ+I8a2COG+l8w +HwYDVR0jBBgwFoAUa0hkif3RMaraiWtsOOZZlLu9wJwwCQYDVR0TBAIwADALBgNV +HQ8EBAMCBeAwSgYDVR0RBEMwQYILZXhhbXBsZS5jb22CD3d3dy5leGFtcGxlLmNv +bYIQbWFpbC5leGFtcGxlLmNvbYIPZnRwLmV4YW1wbGUuY29tMCwGCWCGSAGG+EIB +DQQfFh1PcGVuU1NMIEdlbmVyYXRlZCBDZXJ0aWZpY2F0ZTANBgkqhkiG9w0BAQsF +AAOCAgEAvO6xfdsGbnoK4My3eJthodTAjMjPwFVY133LH04QLcCv54TxKhtUg1fi 
+PgdjVe1HpTytPBfXy2bSZbXAN0abZCtw1rYrnn7o1g2pN8iypVq3zVn0iMTzQzxs +zEPO3bpR/UhNSf90PmCsS5rqZpAAnXSaAy1ClwHWk/0eG2pYkhE1m1ABVMN2lsAW +e9WxGk6IFqaI9O37NYQwmEypMs4DC+ECJEvbPFiqi3n0gbXCZJJ6omDA5xJldaYK +Oa7KR3s/qjBsu9UAiWpLBuFoSTHIF2aeRKRFmUdmzwo43eVPep65pY6eQ4AdL2RF +rqEuINbGlzI5oQyYhu71IwB+iPZXaZZPlwjLgOsuad/p2hOgDb5WxUi8FnDPursQ +ujfpIpmrOP/zpvvQWnwePI3lI+5n41kTBSbefXEdv6rXpHk3QRzB90uPxnXPdxSC +16ASA8bQT5an/1AgoE3k9CrcD2K0EmgaX0YI0HUhkyzbkg34EhpWJ6vvRUbRiNRo +9cIbt/ya9Y9u0Ja8GLXv6dwX0l0IdJMkL8KifXUFAVCujp1FBrr/gdmwQn8itANy ++qbnWSxmOvtaY0zcaFAcONuHva0h51/WqXOMO1eb8PhR4HIIYU8p1oBwQp7dSni8 +THDi1F+GG5PsymMDj5cWK42f+QzjVw5PrVmFqqrrEoMlx8DWh5Y= +-----END CERTIFICATE----- +""".strip() + + # RSA public key as dumped by openssl + exponent = 65537 + modulus_str = """ +00:fd:2e:c6:25:5d:8e:70:57:72:34:c4:38:2b:be: +4a:ad:81:6d:30:49:f7:56:71:05:74:f1:da:8f:24: +b7:ed:79:4c:c5:28:74:a1:2e:06:79:31:95:50:22: +48:9e:5e:42:a6:cb:fc:59:07:61:d3:71:5f:78:e7: +6f:ce:9e:48:67:8e:8e:72:37:8e:3e:25:a7:9a:38: +25:d8:6e:63:ed:ed:9b:0c:7e:d1:49:c8:46:57:e3: +83:e8:56:f5:02:0b:ee:bb:6f:fd:8b:2f:1f:75:fd: +eb:3d:0f:fb:78:07:5d:cc:a5:2b:cf:06:9c:55:b9: +d6:fa:d8:36:42:d4:97:29:17 + """ + modulus = int(re.sub("[^0-9a-f]","", modulus_str), 16) + + key = RSA.importKey(x509_v3_cert) + self.assertEqual(key.e, exponent) + self.assertEqual(key.n, modulus) + self.assertFalse(key.has_private()) + + +class TestImport_2048(unittest.TestCase): + + def test_import_openssh_public(self): + key_file_ref = load_file("rsa2048_private.pem") + key_file = load_file("rsa2048_public_openssh.txt") + + # Skip test if test vectors are not installed + if None in (key_file_ref, key_file): + return + + key_ref = RSA.import_key(key_file_ref).public_key() + key = RSA.import_key(key_file) + self.assertEqual(key_ref, key) + + def test_import_openssh_private_clear(self): + key_file = load_file("rsa2048_private_openssh.pem") + key_file_old = load_file("rsa2048_private_openssh_old.pem") + + # Skip test if test vectors are not installed + if None in (key_file_old, key_file): + return + + key = RSA.import_key(key_file) + key_old = RSA.import_key(key_file_old) + + self.assertEqual(key, key_old) + + def test_import_openssh_private_password(self): + key_file = load_file("rsa2048_private_openssh_pwd.pem") + key_file_old = load_file("rsa2048_private_openssh_pwd_old.pem") + + # Skip test if test vectors are not installed + if None in (key_file_old, key_file): + return + + key = RSA.import_key(key_file, b"password") + key_old = RSA.import_key(key_file_old) + self.assertEqual(key, key_old) + + +if __name__ == '__main__': + unittest.main() + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(ImportKeyTests) + tests += list_test_cases(ImportKeyFromX509Cert) + tests += list_test_cases(TestImport_2048) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Random/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Random/__init__.py new file mode 100644 index 0000000..763ee9c --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Random/__init__.py @@ -0,0 +1,39 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Random/__init__.py: Self-test for random number generation modules +# +# Written in 2008 by Dwayne C. 
Litzenberger <dlitz@dlitz.net>
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""Self-test for random number generators"""
+
+__revision__ = "$Id$"
+
+def get_tests(config={}):
+    tests = []
+    from Cryptodome.SelfTest.Random import test_random; tests += test_random.get_tests(config=config)
+    return tests
+
+if __name__ == '__main__':
+    import unittest
+    suite = lambda: unittest.TestSuite(get_tests())
+    unittest.main(defaultTest='suite')
+
+# vim:set ts=4 sw=4 sts=4 expandtab:
diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Random/test_random.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Random/test_random.py
new file mode 100644
index 0000000..30e9194
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Random/test_random.py
@@ -0,0 +1,167 @@
+# -*- coding: utf-8 -*-
+#
+# SelfTest/Random/test_random.py: Self-test for the Cryptodome.Random.new() function
+#
+# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""Self-test suite for Cryptodome.Random.new()"""
+
+import sys
+import unittest
+from Cryptodome.Util.py3compat import b
+
+class SimpleTest(unittest.TestCase):
+    def runTest(self):
+        """Cryptodome.Random.new()"""
+        # Import the Random module and try to use it
+        from Cryptodome import Random
+        randobj = Random.new()
+        x = randobj.read(16)
+        y = randobj.read(16)
+        self.assertNotEqual(x, y)
+        z = Random.get_random_bytes(16)
+        self.assertNotEqual(x, z)
+        self.assertNotEqual(y, z)
+        # Test the Random.random module, which
+        # implements a subset of Python's random API
+        # Not implemented:
+        # seed(), getstate(), setstate(), jumpahead()
+        # random(), uniform(), triangular(), betavariate()
+        # expovariate(), gammavariate(), gauss(),
+        # lognormvariate(), normalvariate(),
+        # vonmisesvariate(), paretovariate()
+        # weibullvariate()
+        # WichmannHill(), whseed(), SystemRandom()
+        from Cryptodome.Random import random
+        x = random.getrandbits(16*8)
+        y = random.getrandbits(16*8)
+        self.assertNotEqual(x, y)
+        # Test randrange
+        if x>y:
+            start = y
+            stop = x
+        else:
+            start = x
+            stop = y
+        for step in range(1,10):
+            x = random.randrange(start,stop,step)
+            y = random.randrange(start,stop,step)
+            self.assertNotEqual(x, y)
+            self.assertEqual(start <= x < stop, True)
+            self.assertEqual(start <= y < stop, True)
+            self.assertEqual((x - start) % step, 0)
+            self.assertEqual((y - start) % step, 0)
+        for i in range(10):
+            self.assertEqual(random.randrange(1,2), 1)
+        self.assertRaises(ValueError, random.randrange, start, start)
+        self.assertRaises(ValueError, random.randrange, stop, start, step)
+        self.assertRaises(TypeError, random.randrange, start, stop, step, step)
+        self.assertRaises(TypeError, random.randrange, start, stop, "1")
+        self.assertRaises(TypeError, random.randrange, "1", stop, step)
+        self.assertRaises(TypeError, random.randrange, 1, "2", step)
+        self.assertRaises(ValueError, random.randrange, start, stop, 0)
+        # Test randint
+        x = random.randint(start,stop)
+        y = random.randint(start,stop)
+        self.assertNotEqual(x, y)
+        self.assertEqual(start <= x <= stop, True)
+        self.assertEqual(start <= y <= stop, True)
+        for i in range(10):
+            self.assertEqual(random.randint(1,1), 1)
+        self.assertRaises(ValueError, random.randint, stop, start)
+        self.assertRaises(TypeError, random.randint, start, stop, step)
+        self.assertRaises(TypeError, random.randint, "1", stop)
+        self.assertRaises(TypeError, random.randint, 1, "2")
+        # Test choice
+        seq = range(10000)
+        x = random.choice(seq)
+        y = random.choice(seq)
+        self.assertNotEqual(x, y)
+        self.assertEqual(x in seq, True)
+        self.assertEqual(y in seq, True)
+        for i in range(10):
+            self.assertEqual(random.choice((1,2,3)) in (1,2,3), True)
+            self.assertEqual(random.choice([1,2,3]) in [1,2,3], True)
+            if sys.version_info[0] == 3:
+                self.assertEqual(random.choice(bytearray(b('123'))) in bytearray(b('123')), True)
+        self.assertEqual(1, random.choice([1]))
+        self.assertRaises(IndexError, random.choice, [])
+        self.assertRaises(TypeError, random.choice, 1)
+        # Test shuffle. Lacks random parameter to specify function.
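+        # (The stdlib version of shuffle() historically accepted an
+        # optional random() callable to drive the permutation; the
+        # Cryptodome version always draws from the CSPRNG instead,
+        # hence no such parameter.)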
+ # Make copies of seq + seq = range(500) + x = list(seq) + y = list(seq) + random.shuffle(x) + random.shuffle(y) + self.assertNotEqual(x, y) + self.assertEqual(len(seq), len(x)) + self.assertEqual(len(seq), len(y)) + for i in range(len(seq)): + self.assertEqual(x[i] in seq, True) + self.assertEqual(y[i] in seq, True) + self.assertEqual(seq[i] in x, True) + self.assertEqual(seq[i] in y, True) + z = [1] + random.shuffle(z) + self.assertEqual(z, [1]) + if sys.version_info[0] == 3: + z = bytearray(b('12')) + random.shuffle(z) + self.assertEqual(b('1') in z, True) + self.assertRaises(TypeError, random.shuffle, b('12')) + self.assertRaises(TypeError, random.shuffle, 1) + self.assertRaises(TypeError, random.shuffle, "11") + self.assertRaises(TypeError, random.shuffle, (1,2)) + # 2to3 wraps a list() around it, alas - but I want to shoot + # myself in the foot here! :D + # if sys.version_info[0] == 3: + # self.assertRaises(TypeError, random.shuffle, range(3)) + # Test sample + x = random.sample(seq, 20) + y = random.sample(seq, 20) + self.assertNotEqual(x, y) + for i in range(20): + self.assertEqual(x[i] in seq, True) + self.assertEqual(y[i] in seq, True) + z = random.sample([1], 1) + self.assertEqual(z, [1]) + z = random.sample((1,2,3), 1) + self.assertEqual(z[0] in (1,2,3), True) + z = random.sample("123", 1) + self.assertEqual(z[0] in "123", True) + z = random.sample(range(3), 1) + self.assertEqual(z[0] in range(3), True) + if sys.version_info[0] == 3: + z = random.sample(b("123"), 1) + self.assertEqual(z[0] in b("123"), True) + z = random.sample(bytearray(b("123")), 1) + self.assertEqual(z[0] in bytearray(b("123")), True) + self.assertRaises(TypeError, random.sample, 1) + +def get_tests(config={}): + return [SimpleTest()] + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/__init__.py new file mode 100644 index 0000000..83cf0f3 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/__init__.py @@ -0,0 +1,41 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Signature/__init__.py: Self-test for signature modules +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test for signature modules""" + +import unittest +from . 
import test_pkcs1_15, test_pss, test_dss, test_eddsa + + +def get_tests(config={}): + tests = [] + tests += test_pkcs1_15.get_tests(config=config) + tests += test_pss.get_tests(config=config) + tests += test_dss.get_tests(config=config) + tests += test_eddsa.get_tests(config=config) + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_dss.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_dss.py new file mode 100644 index 0000000..156ee67 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_dss.py @@ -0,0 +1,1369 @@ +# +# SelfTest/Signature/test_dss.py: Self-test for DSS signatures +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import re +import unittest +from binascii import hexlify, unhexlify + +from Cryptodome.Util.py3compat import tobytes, bord, bchr + +from Cryptodome.Hash import (SHA1, SHA224, SHA256, SHA384, SHA512, + SHA3_224, SHA3_256, SHA3_384, SHA3_512) +from Cryptodome.Signature import DSS +from Cryptodome.PublicKey import DSA, ECC +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors, load_test_vectors_wycheproof +from Cryptodome.Util.number import bytes_to_long, long_to_bytes + + +def t2b(hexstring): + ws = hexstring.replace(" ", "").replace("\n", "") + return unhexlify(tobytes(ws)) + + +def t2l(hexstring): + ws = hexstring.replace(" ", "").replace("\n", "") + return int(ws, 16) + + +def load_hash_by_name(hash_name): + return __import__("Cryptodome.Hash." + hash_name, globals(), locals(), ["new"]) + + +class StrRNG: + + def __init__(self, randomness): + length = len(randomness) + self._idx = 0 + # Fix required to get the right K (see how randint() works!) 
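+        # (The test vectors specify the nonce K itself, but the signer
+        # draws K through a randint()-style call that maps the bytes
+        # returned by this RNG into [1, q-1], in effect adding one to
+        # the value assembled from them; storing K - 1 here makes the
+        # signer come up with exactly the K from the vector.)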
+ self._randomness = long_to_bytes(bytes_to_long(randomness) - 1, length) + + def __call__(self, n): + out = self._randomness[self._idx:self._idx + n] + self._idx += n + return out + + +class FIPS_DSA_Tests(unittest.TestCase): + + # 1st 1024 bit key from SigGen.txt + P = 0xa8f9cd201e5e35d892f85f80e4db2599a5676a3b1d4f190330ed3256b26d0e80a0e49a8fffaaad2a24f472d2573241d4d6d6c7480c80b4c67bb4479c15ada7ea8424d2502fa01472e760241713dab025ae1b02e1703a1435f62ddf4ee4c1b664066eb22f2e3bf28bb70a2a76e4fd5ebe2d1229681b5b06439ac9c7e9d8bde283 + Q = 0xf85f0f83ac4df7ea0cdf8f469bfeeaea14156495 + G = 0x2b3152ff6c62f14622b8f48e59f8af46883b38e79b8c74deeae9df131f8b856e3ad6c8455dab87cc0da8ac973417ce4f7878557d6cdf40b35b4a0ca3eb310c6a95d68ce284ad4e25ea28591611ee08b8444bd64b25f3f7c572410ddfb39cc728b9c936f85f419129869929cdb909a6a3a99bbe089216368171bd0ba81de4fe33 + X = 0xc53eae6d45323164c7d07af5715703744a63fc3a + Y = 0x313fd9ebca91574e1c2eebe1517c57e0c21b0209872140c5328761bbb2450b33f1b18b409ce9ab7c4cd8fda3391e8e34868357c199e16a6b2eba06d6749def791d79e95d3a4d09b24c392ad89dbf100995ae19c01062056bb14bce005e8731efde175f95b975089bdcdaea562b32786d96f5a31aedf75364008ad4fffebb970b + + key_pub = DSA.construct((Y, G, P, Q)) + key_priv = DSA.construct((Y, G, P, Q, X)) + + def shortDescription(self): + return "FIPS DSA Tests" + + def test_loopback(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv, 'fips-186-3') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub, 'fips-186-3') + verifier.verify(hashed_msg, signature) + + def test_negative_unapproved_hashes(self): + """Verify that unapproved hashes are rejected""" + + from Cryptodome.Hash import RIPEMD160 + + self.description = "Unapproved hash (RIPEMD160) test" + hash_obj = RIPEMD160.new() + signer = DSS.new(self.key_priv, 'fips-186-3') + self.assertRaises(ValueError, signer.sign, hash_obj) + self.assertRaises(ValueError, signer.verify, hash_obj, b"\x00" * 40) + + def test_negative_unknown_modes_encodings(self): + """Verify that unknown modes/encodings are rejected""" + + self.description = "Unknown mode test" + self.assertRaises(ValueError, DSS.new, self.key_priv, 'fips-186-0') + + self.description = "Unknown encoding test" + self.assertRaises(ValueError, DSS.new, self.key_priv, 'fips-186-3', 'xml') + + def test_asn1_encoding(self): + """Verify ASN.1 encoding""" + + self.description = "ASN.1 encoding test" + hash_obj = SHA1.new() + signer = DSS.new(self.key_priv, 'fips-186-3', 'der') + signature = signer.sign(hash_obj) + + # Verify that output looks like a DER SEQUENCE + self.assertEqual(bord(signature[0]), 48) + signer.verify(hash_obj, signature) + + # Verify that ASN.1 parsing fails as expected + signature = bchr(7) + signature[1:] + self.assertRaises(ValueError, signer.verify, hash_obj, signature) + + def test_sign_verify(self): + """Verify public/private method""" + + self.description = "can_sign() test" + signer = DSS.new(self.key_priv, 'fips-186-3') + self.assertTrue(signer.can_sign()) + + signer = DSS.new(self.key_pub, 'fips-186-3') + self.assertFalse(signer.can_sign()) + + try: + signer.sign(SHA256.new(b'xyz')) + except TypeError as e: + msg = str(e) + else: + msg = "" + self.assertTrue("Private key is needed" in msg) + + +class FIPS_DSA_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_verify = load_test_vectors(("Signature", "DSA"), + "FIPS_186_3_SigVer.rsp", + "Signature Verification 186-3", + {'result': lambda x: x}) or [] + +for idx, tv in enumerate(test_vectors_verify): + + if isinstance(tv, str): + res = 
re.match(r"\[mod = L=([0-9]+), N=([0-9]+), ([a-zA-Z0-9-]+)\]", tv) + assert(res) + hash_name = res.group(3).replace("-", "") + hash_module = load_hash_by_name(hash_name) + continue + + if hasattr(tv, "p"): + modulus = tv.p + generator = tv.g + suborder = tv.q + continue + + hash_obj = hash_module.new(tv.msg) + + comps = [bytes_to_long(x) for x in (tv.y, generator, modulus, suborder)] + key = DSA.construct(comps, False) # type: ignore + verifier = DSS.new(key, 'fips-186-3') + + def positive_test(self, verifier=verifier, hash_obj=hash_obj, signature=tv.r+tv.s): + verifier.verify(hash_obj, signature) + + def negative_test(self, verifier=verifier, hash_obj=hash_obj, signature=tv.r+tv.s): + self.assertRaises(ValueError, verifier.verify, hash_obj, signature) + + if tv.result == 'p': + setattr(FIPS_DSA_Tests_KAT, "test_verify_positive_%d" % idx, positive_test) + else: + setattr(FIPS_DSA_Tests_KAT, "test_verify_negative_%d" % idx, negative_test) + + +test_vectors_sign = load_test_vectors(("Signature", "DSA"), + "FIPS_186_3_SigGen.txt", + "Signature Creation 186-3", + {}) or [] + +for idx, tv in enumerate(test_vectors_sign): + + if isinstance(tv, str): + res = re.match(r"\[mod = L=([0-9]+), N=([0-9]+), ([a-zA-Z0-9-]+)\]", tv) + assert(res) + hash_name = res.group(3).replace("-", "") + hash_module = load_hash_by_name(hash_name) + continue + + if hasattr(tv, "p"): + modulus = tv.p + generator = tv.g + suborder = tv.q + continue + + hash_obj = hash_module.new(tv.msg) + comps_dsa = [bytes_to_long(x) for x in (tv.y, generator, modulus, suborder, tv.x)] + key = DSA.construct(comps_dsa, False) # type: ignore + signer = DSS.new(key, 'fips-186-3', randfunc=StrRNG(tv.k)) + + def new_test(self, signer=signer, hash_obj=hash_obj, signature=tv.r+tv.s): + self.assertEqual(signer.sign(hash_obj), signature) + setattr(FIPS_DSA_Tests_KAT, "test_sign_%d" % idx, new_test) + + +class FIPS_ECDSA_Tests(unittest.TestCase): + + key_priv = ECC.generate(curve="P-256") + key_pub = key_priv.public_key() + + def shortDescription(self): + return "FIPS ECDSA Tests" + + def test_loopback(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv, 'fips-186-3') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub, 'fips-186-3') + verifier.verify(hashed_msg, signature) + + def test_negative_unapproved_hashes(self): + """Verify that unapproved hashes are rejected""" + + from Cryptodome.Hash import SHA1 + + self.description = "Unapproved hash (SHA-1) test" + hash_obj = SHA1.new() + signer = DSS.new(self.key_priv, 'fips-186-3') + self.assertRaises(ValueError, signer.sign, hash_obj) + self.assertRaises(ValueError, signer.verify, hash_obj, b"\x00" * 40) + + def test_negative_eddsa_key(self): + key = ECC.generate(curve="ed25519") + self.assertRaises(ValueError, DSS.new, key, 'fips-186-3') + + def test_sign_verify(self): + """Verify public/private method""" + + self.description = "can_sign() test" + signer = DSS.new(self.key_priv, 'fips-186-3') + self.assertTrue(signer.can_sign()) + + signer = DSS.new(self.key_pub, 'fips-186-3') + self.assertFalse(signer.can_sign()) + self.assertRaises(TypeError, signer.sign, SHA256.new(b'xyz')) + + try: + signer.sign(SHA256.new(b'xyz')) + except TypeError as e: + msg = str(e) + else: + msg = "" + self.assertTrue("Private key is needed" in msg) + + def test_negative_unknown_modes_encodings(self): + """Verify that unknown modes/encodings are rejected""" + + self.description = "Unknown mode test" + self.assertRaises(ValueError, DSS.new, self.key_priv, 'fips-186-0') + + 
self.description = "Unknown encoding test" + self.assertRaises(ValueError, DSS.new, self.key_priv, 'fips-186-3', 'xml') + + def test_asn1_encoding(self): + """Verify ASN.1 encoding""" + + self.description = "ASN.1 encoding test" + hash_obj = SHA256.new() + signer = DSS.new(self.key_priv, 'fips-186-3', 'der') + signature = signer.sign(hash_obj) + + # Verify that output looks like a DER SEQUENCE + self.assertEqual(bord(signature[0]), 48) + signer.verify(hash_obj, signature) + + # Verify that ASN.1 parsing fails as expected + signature = bchr(7) + signature[1:] + self.assertRaises(ValueError, signer.verify, hash_obj, signature) + + +class FIPS_ECDSA_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_verify = load_test_vectors(("Signature", "ECDSA"), + "SigVer.rsp", + "ECDSA Signature Verification 186-3", + {'result': lambda x: x, + 'qx': lambda x: int(x, 16), + 'qy': lambda x: int(x, 16), + }) or [] +test_vectors_verify += load_test_vectors(("Signature", "ECDSA"), + "SigVer_TruncatedSHAs.rsp", + "ECDSA Signature Verification 186-3", + {'result': lambda x: x, + 'qx': lambda x: int(x, 16), + 'qy': lambda x: int(x, 16), + }) or [] + + +for idx, tv in enumerate(test_vectors_verify): + + if isinstance(tv, str): + res = re.match(r"\[(P-[0-9]+),(SHA-[0-9]+)\]", tv) + assert res + curve_name = res.group(1) + hash_name = res.group(2).replace("-", "") + if hash_name in ("SHA512224", "SHA512256"): + truncate = hash_name[-3:] + hash_name = hash_name[:-3] + else: + truncate = None + hash_module = load_hash_by_name(hash_name) + continue + + if truncate is None: + hash_obj = hash_module.new(tv.msg) + else: + hash_obj = hash_module.new(tv.msg, truncate=truncate) + ecc_key = ECC.construct(curve=curve_name, point_x=tv.qx, point_y=tv.qy) + verifier = DSS.new(ecc_key, 'fips-186-3') + + def positive_test(self, verifier=verifier, hash_obj=hash_obj, signature=tv.r+tv.s): + verifier.verify(hash_obj, signature) + + def negative_test(self, verifier=verifier, hash_obj=hash_obj, signature=tv.r+tv.s): + self.assertRaises(ValueError, verifier.verify, hash_obj, signature) + + if tv.result.startswith('p'): + setattr(FIPS_ECDSA_Tests_KAT, "test_verify_positive_%d" % idx, positive_test) + else: + setattr(FIPS_ECDSA_Tests_KAT, "test_verify_negative_%d" % idx, negative_test) + + +test_vectors_sign = load_test_vectors(("Signature", "ECDSA"), + "SigGen.txt", + "ECDSA Signature Verification 186-3", + {'d': lambda x: int(x, 16)}) or [] + +for idx, tv in enumerate(test_vectors_sign): + + if isinstance(tv, str): + res = re.match(r"\[(P-[0-9]+),(SHA-[0-9]+)\]", tv) + assert res + curve_name = res.group(1) + hash_name = res.group(2).replace("-", "") + hash_module = load_hash_by_name(hash_name) + continue + + hash_obj = hash_module.new(tv.msg) + ecc_key = ECC.construct(curve=curve_name, d=tv.d) + signer = DSS.new(ecc_key, 'fips-186-3', randfunc=StrRNG(tv.k)) + + def sign_test(self, signer=signer, hash_obj=hash_obj, signature=tv.r+tv.s): + self.assertEqual(signer.sign(hash_obj), signature) + setattr(FIPS_ECDSA_Tests_KAT, "test_sign_%d" % idx, sign_test) + + +class Det_DSA_Tests(unittest.TestCase): + """Tests from rfc6979""" + + # Each key is (p, q, g, x, y, desc) + keys = [ + ( + """ + 86F5CA03DCFEB225063FF830A0C769B9DD9D6153AD91D7CE27F787C43278B447 + E6533B86B18BED6E8A48B784A14C252C5BE0DBF60B86D6385BD2F12FB763ED88 + 73ABFD3F5BA2E0A8C0A59082EAC056935E529DAF7C610467899C77ADEDFC846C + 881870B7B19B2B58F9BE0521A17002E3BDD6B86685EE90B3D9A1B02B782B1779""", + "996F967F6C8E388D9E28D01E205FBA957A5698B1", + """ + 
07B0F92546150B62514BB771E2A0C0CE387F03BDA6C56B505209FF25FD3C133D + 89BBCD97E904E09114D9A7DEFDEADFC9078EA544D2E401AEECC40BB9FBBF78FD + 87995A10A1C27CB7789B594BA7EFB5C4326A9FE59A070E136DB77175464ADCA4 + 17BE5DCE2F40D10A46A3A3943F26AB7FD9C0398FF8C76EE0A56826A8A88F1DBD""", + "411602CB19A6CCC34494D79D98EF1E7ED5AF25F7", + """ + 5DF5E01DED31D0297E274E1691C192FE5868FEF9E19A84776454B100CF16F653 + 92195A38B90523E2542EE61871C0440CB87C322FC4B4D2EC5E1E7EC766E1BE8D + 4CE935437DC11C3C8FD426338933EBFE739CB3465F4D3668C5E473508253B1E6 + 82F65CBDC4FAE93C2EA212390E54905A86E2223170B44EAA7DA5DD9FFCFB7F3B""", + "DSA1024" + ), + ( + """ + 9DB6FB5951B66BB6FE1E140F1D2CE5502374161FD6538DF1648218642F0B5C48 + C8F7A41AADFA187324B87674FA1822B00F1ECF8136943D7C55757264E5A1A44F + FE012E9936E00C1D3E9310B01C7D179805D3058B2A9F4BB6F9716BFE6117C6B5 + B3CC4D9BE341104AD4A80AD6C94E005F4B993E14F091EB51743BF33050C38DE2 + 35567E1B34C3D6A5C0CEAA1A0F368213C3D19843D0B4B09DCB9FC72D39C8DE41 + F1BF14D4BB4563CA28371621CAD3324B6A2D392145BEBFAC748805236F5CA2FE + 92B871CD8F9C36D3292B5509CA8CAA77A2ADFC7BFD77DDA6F71125A7456FEA15 + 3E433256A2261C6A06ED3693797E7995FAD5AABBCFBE3EDA2741E375404AE25B""", + "F2C3119374CE76C9356990B465374A17F23F9ED35089BD969F61C6DDE9998C1F", + """ + 5C7FF6B06F8F143FE8288433493E4769C4D988ACE5BE25A0E24809670716C613 + D7B0CEE6932F8FAA7C44D2CB24523DA53FBE4F6EC3595892D1AA58C4328A06C4 + 6A15662E7EAA703A1DECF8BBB2D05DBE2EB956C142A338661D10461C0D135472 + 085057F3494309FFA73C611F78B32ADBB5740C361C9F35BE90997DB2014E2EF5 + AA61782F52ABEB8BD6432C4DD097BC5423B285DAFB60DC364E8161F4A2A35ACA + 3A10B1C4D203CC76A470A33AFDCBDD92959859ABD8B56E1725252D78EAC66E71 + BA9AE3F1DD2487199874393CD4D832186800654760E1E34C09E4D155179F9EC0 + DC4473F996BDCE6EED1CABED8B6F116F7AD9CF505DF0F998E34AB27514B0FFE7""", + "69C7548C21D0DFEA6B9A51C9EAD4E27C33D3B3F180316E5BCAB92C933F0E4DBC", + """ + 667098C654426C78D7F8201EAC6C203EF030D43605032C2F1FA937E5237DBD94 + 9F34A0A2564FE126DC8B715C5141802CE0979C8246463C40E6B6BDAA2513FA61 + 1728716C2E4FD53BC95B89E69949D96512E873B9C8F8DFD499CC312882561ADE + CB31F658E934C0C197F2C4D96B05CBAD67381E7B768891E4DA3843D24D94CDFB + 5126E9B8BF21E8358EE0E0A30EF13FD6A664C0DCE3731F7FB49A4845A4FD8254 + 687972A2D382599C9BAC4E0ED7998193078913032558134976410B89D2C171D1 + 23AC35FD977219597AA7D15C1A9A428E59194F75C721EBCBCFAE44696A499AFA + 74E04299F132026601638CB87AB79190D4A0986315DA8EEC6561C938996BEADF""", + "DSA2048" + ), + ] + + # This is a sequence of items: + # message, k, r, s, hash module + signatures = [ + ( + "sample", + "7BDB6B0FF756E1BB5D53583EF979082F9AD5BD5B", + "2E1A0C2562B2912CAAF89186FB0F42001585DA55", + "29EFB6B0AFF2D7A68EB70CA313022253B9A88DF5", + SHA1, + 'DSA1024' + ), + ( + "sample", + "562097C06782D60C3037BA7BE104774344687649", + "4BC3B686AEA70145856814A6F1BB53346F02101E", + "410697B92295D994D21EDD2F4ADA85566F6F94C1", + SHA224, + 'DSA1024' + ), + ( + "sample", + "519BA0546D0C39202A7D34D7DFA5E760B318BCFB", + "81F2F5850BE5BC123C43F71A3033E9384611C545", + "4CDD914B65EB6C66A8AAAD27299BEE6B035F5E89", + SHA256, + 'DSA1024' + ), + ( + "sample", + "95897CD7BBB944AA932DBC579C1C09EB6FCFC595", + "07F2108557EE0E3921BC1774F1CA9B410B4CE65A", + "54DF70456C86FAC10FAB47C1949AB83F2C6F7595", + SHA384, + 'DSA1024' + ), + ( + "sample", + "09ECE7CA27D0F5A4DD4E556C9DF1D21D28104F8B", + "16C3491F9B8C3FBBDD5E7A7B667057F0D8EE8E1B", + "02C36A127A7B89EDBB72E4FFBC71DABC7D4FC69C", + SHA512, + 'DSA1024' + ), + ( + "test", + "5C842DF4F9E344EE09F056838B42C7A17F4A6433", + "42AB2052FD43E123F0607F115052A67DCD9C5C77", + "183916B0230D45B9931491D4C6B0BD2FB4AAF088", 
+ SHA1, + 'DSA1024' + ), + ( + "test", + "4598B8EFC1A53BC8AECD58D1ABBB0C0C71E67297", + "6868E9964E36C1689F6037F91F28D5F2C30610F2", + "49CEC3ACDC83018C5BD2674ECAAD35B8CD22940F", + SHA224, + 'DSA1024' + ), + ( + "test", + "5A67592E8128E03A417B0484410FB72C0B630E1A", + "22518C127299B0F6FDC9872B282B9E70D0790812", + "6837EC18F150D55DE95B5E29BE7AF5D01E4FE160", + SHA256, + 'DSA1024' + ), + ( + "test", + "220156B761F6CA5E6C9F1B9CF9C24BE25F98CD89", + "854CF929B58D73C3CBFDC421E8D5430CD6DB5E66", + "91D0E0F53E22F898D158380676A871A157CDA622", + SHA384, + 'DSA1024' + ), + ( + "test", + "65D2C2EEB175E370F28C75BFCDC028D22C7DBE9C", + "8EA47E475BA8AC6F2D821DA3BD212D11A3DEB9A0", + "7C670C7AD72B6C050C109E1790008097125433E8", + SHA512, + 'DSA1024' + ), + ( + "sample", + "888FA6F7738A41BDC9846466ABDB8174C0338250AE50CE955CA16230F9CBD53E", + "3A1B2DBD7489D6ED7E608FD036C83AF396E290DBD602408E8677DAABD6E7445A", + "D26FCBA19FA3E3058FFC02CA1596CDBB6E0D20CB37B06054F7E36DED0CDBBCCF", + SHA1, + 'DSA2048' + ), + ( + "sample", + "BC372967702082E1AA4FCE892209F71AE4AD25A6DFD869334E6F153BD0C4D806", + "DC9F4DEADA8D8FF588E98FED0AB690FFCE858DC8C79376450EB6B76C24537E2C", + "A65A9C3BC7BABE286B195D5DA68616DA8D47FA0097F36DD19F517327DC848CEC", + SHA224, + 'DSA2048' + ), + ( + "sample", + "8926A27C40484216F052F4427CFD5647338B7B3939BC6573AF4333569D597C52", + "EACE8BDBBE353C432A795D9EC556C6D021F7A03F42C36E9BC87E4AC7932CC809", + "7081E175455F9247B812B74583E9E94F9EA79BD640DC962533B0680793A38D53", + SHA256, + 'DSA2048' + ), + ( + "sample", + "C345D5AB3DA0A5BCB7EC8F8FB7A7E96069E03B206371EF7D83E39068EC564920", + "B2DA945E91858834FD9BF616EBAC151EDBC4B45D27D0DD4A7F6A22739F45C00B", + "19048B63D9FD6BCA1D9BAE3664E1BCB97F7276C306130969F63F38FA8319021B", + SHA384, + 'DSA2048' + ), + ( + "sample", + "5A12994431785485B3F5F067221517791B85A597B7A9436995C89ED0374668FC", + "2016ED092DC5FB669B8EFB3D1F31A91EECB199879BE0CF78F02BA062CB4C942E", + "D0C76F84B5F091E141572A639A4FB8C230807EEA7D55C8A154A224400AFF2351", + SHA512, + 'DSA2048' + ), + ( + "test", + "6EEA486F9D41A037B2C640BC5645694FF8FF4B98D066A25F76BE641CCB24BA4F", + "C18270A93CFC6063F57A4DFA86024F700D980E4CF4E2CB65A504397273D98EA0", + "414F22E5F31A8B6D33295C7539C1C1BA3A6160D7D68D50AC0D3A5BEAC2884FAA", + SHA1, + 'DSA2048' + ), + ( + "test", + "06BD4C05ED74719106223BE33F2D95DA6B3B541DAD7BFBD7AC508213B6DA6670", + "272ABA31572F6CC55E30BF616B7A265312018DD325BE031BE0CC82AA17870EA3", + "E9CC286A52CCE201586722D36D1E917EB96A4EBDB47932F9576AC645B3A60806", + SHA224, + 'DSA2048' + ), + ( + "test", + "1D6CE6DDA1C5D37307839CD03AB0A5CBB18E60D800937D67DFB4479AAC8DEAD7", + "8190012A1969F9957D56FCCAAD223186F423398D58EF5B3CEFD5A4146A4476F0", + "7452A53F7075D417B4B013B278D1BB8BBD21863F5E7B1CEE679CF2188E1AB19E", + SHA256, + 'DSA2048' + ), + ( + "test", + "206E61F73DBE1B2DC8BE736B22B079E9DACD974DB00EEBBC5B64CAD39CF9F91C", + "239E66DDBE8F8C230A3D071D601B6FFBDFB5901F94D444C6AF56F732BEB954BE", + "6BD737513D5E72FE85D1C750E0F73921FE299B945AAD1C802F15C26A43D34961", + SHA384, + 'DSA2048' + ), + ( + "test", + "AFF1651E4CD6036D57AA8B2A05CCF1A9D5A40166340ECBBDC55BE10B568AA0AA", + "89EC4BB1400ECCFF8E7D9AA515CD1DE7803F2DAFF09693EE7FD1353E90A68307", + "C9F0BDABCC0D880BB137A994CC7F3980CE91CC10FAF529FC46565B15CEA854E1", + SHA512, + 'DSA2048' + ) + ] + + def setUp(self): + # Convert DSA key components from hex strings to integers + # Each key is (p, q, g, x, y, desc) + + from collections import namedtuple + + TestKey = namedtuple('TestKey', 'p q g x y') + new_keys = {} + for k in self.keys: + tk = TestKey(*[t2l(y) for y in k[:-1]]) + 
new_keys[k[-1]] = tk + self.keys = new_keys + + # Convert signature encoding + TestSig = namedtuple('TestSig', 'message nonce result module test_key') + new_signatures = [] + for message, nonce, r, s, module, test_key in self.signatures: + tsig = TestSig( + tobytes(message), + t2l(nonce), + t2b(r) + t2b(s), + module, + self.keys[test_key] + ) + new_signatures.append(tsig) + self.signatures = new_signatures + + def test1(self): + q = 0x4000000000000000000020108A2E0CC0D99F8A5EF + x = 0x09A4D6792295A7F730FC3F2B49CBC0F62E862272F + p = 2 * q + 1 + y = pow(2, x, p) + key = DSA.construct([pow(y, 2, p), 2, p, q, x], False) + signer = DSS.new(key, 'deterministic-rfc6979') + + # Test _int2octets + self.assertEqual(hexlify(signer._int2octets(x)), + b'009a4d6792295a7f730fc3f2b49cbc0f62e862272f') + + # Test _bits2octets + h1 = SHA256.new(b"sample").digest() + self.assertEqual(hexlify(signer._bits2octets(h1)), + b'01795edf0d54db760f156d0dac04c0322b3a204224') + + def test2(self): + + for sig in self.signatures: + tk = sig.test_key + key = DSA.construct([tk.y, tk.g, tk.p, tk.q, tk.x], False) + signer = DSS.new(key, 'deterministic-rfc6979') + + hash_obj = sig.module.new(sig.message) + result = signer.sign(hash_obj) + self.assertEqual(sig.result, result) + + +class Det_ECDSA_Tests(unittest.TestCase): + + key_priv_p192 = ECC.construct(curve="P-192", d=0x6FAB034934E4C0FC9AE67F5B5659A9D7D1FEFD187EE09FD4) + key_pub_p192 = key_priv_p192.public_key() + + key_priv_p224 = ECC.construct(curve="P-224", d=0xF220266E1105BFE3083E03EC7A3A654651F45E37167E88600BF257C1) + key_pub_p224 = key_priv_p224.public_key() + + key_priv_p256 = ECC.construct(curve="P-256", d=0xC9AFA9D845BA75166B5C215767B1D6934E50C3DB36E89B127B8A622B120F6721) + key_pub_p256 = key_priv_p256.public_key() + + key_priv_p384 = ECC.construct(curve="P-384", d=0x6B9D3DAD2E1B8C1C05B19875B6659F4DE23C3B667BF297BA9AA47740787137D896D5724E4C70A825F872C9EA60D2EDF5) + key_pub_p384 = key_priv_p384.public_key() + + key_priv_p521 = ECC.construct(curve="P-521", d=0x0FAD06DAA62BA3B25D2FB40133DA757205DE67F5BB0018FEE8C86E1B68C7E75CAA896EB32F1F47C70855836A6D16FCC1466F6D8FBEC67DB89EC0C08B0E996B83538) + key_pub_p521 = key_priv_p521.public_key() + + # This is a sequence of items: + # message, k, r, s, hash module + # taken from RFC6979 + signatures_p192_ = ( + ( + "sample", + "37D7CA00D2C7B0E5E412AC03BD44BA837FDD5B28CD3B0021", + "98C6BD12B23EAF5E2A2045132086BE3EB8EBD62ABF6698FF", + "57A22B07DEA9530F8DE9471B1DC6624472E8E2844BC25B64", + SHA1 + ), + ( + "sample", + "4381526B3FC1E7128F202E194505592F01D5FF4C5AF015D8", + "A1F00DAD97AEEC91C95585F36200C65F3C01812AA60378F5", + "E07EC1304C7C6C9DEBBE980B9692668F81D4DE7922A0F97A", + SHA224 + ), + ( + "sample", + "32B1B6D7D42A05CB449065727A84804FB1A3E34D8F261496", + "4B0B8CE98A92866A2820E20AA6B75B56382E0F9BFD5ECB55", + "CCDB006926EA9565CBADC840829D8C384E06DE1F1E381B85", + SHA256 + ), + ( + "sample", + "4730005C4FCB01834C063A7B6760096DBE284B8252EF4311", + "DA63BF0B9ABCF948FBB1E9167F136145F7A20426DCC287D5", + "C3AA2C960972BD7A2003A57E1C4C77F0578F8AE95E31EC5E", + SHA384 + ), + ( + "sample", + "A2AC7AB055E4F20692D49209544C203A7D1F2C0BFBC75DB1", + "4D60C5AB1996BD848343B31C00850205E2EA6922DAC2E4B8", + "3F6E837448F027A1BF4B34E796E32A811CBB4050908D8F67", + SHA512 + ), + ( + "test", + "D9CF9C3D3297D3260773A1DA7418DB5537AB8DD93DE7FA25", + "0F2141A0EBBC44D2E1AF90A50EBCFCE5E197B3B7D4DE036D", + "EB18BC9E1F3D7387500CB99CF5F7C157070A8961E38700B7", + SHA1 + ), + ( + "test", + "F5DC805F76EF851800700CCE82E7B98D8911B7D510059FBE", + 
"6945A1C1D1B2206B8145548F633BB61CEF04891BAF26ED34", + "B7FB7FDFC339C0B9BD61A9F5A8EAF9BE58FC5CBA2CB15293", + SHA224 + ), + ( + "test", + "5C4CE89CF56D9E7C77C8585339B006B97B5F0680B4306C6C", + "3A718BD8B4926C3B52EE6BBE67EF79B18CB6EB62B1AD97AE", + "5662E6848A4A19B1F1AE2F72ACD4B8BBE50F1EAC65D9124F", + SHA256 + ), + ( + "test", + "5AFEFB5D3393261B828DB6C91FBC68C230727B030C975693", + "B234B60B4DB75A733E19280A7A6034BD6B1EE88AF5332367", + "7994090B2D59BB782BE57E74A44C9A1C700413F8ABEFE77A", + SHA384 + ), + ( + "test", + "0758753A5254759C7CFBAD2E2D9B0792EEE44136C9480527", + "FE4F4AE86A58B6507946715934FE2D8FF9D95B6B098FE739", + "74CF5605C98FBA0E1EF34D4B5A1577A7DCF59457CAE52290", + SHA512 + ) + ) + + signatures_p224_ = ( + ( + "sample", + "7EEFADD91110D8DE6C2C470831387C50D3357F7F4D477054B8B426BC", + "22226F9D40A96E19C4A301CE5B74B115303C0F3A4FD30FC257FB57AC", + "66D1CDD83E3AF75605DD6E2FEFF196D30AA7ED7A2EDF7AF475403D69", + SHA1 + ), + ( + "sample", + "C1D1F2F10881088301880506805FEB4825FE09ACB6816C36991AA06D", + "1CDFE6662DDE1E4A1EC4CDEDF6A1F5A2FB7FBD9145C12113E6ABFD3E", + "A6694FD7718A21053F225D3F46197CA699D45006C06F871808F43EBC", + SHA224 + ), + ( + "sample", + "AD3029E0278F80643DE33917CE6908C70A8FF50A411F06E41DEDFCDC", + "61AA3DA010E8E8406C656BC477A7A7189895E7E840CDFE8FF42307BA", + "BC814050DAB5D23770879494F9E0A680DC1AF7161991BDE692B10101", + SHA256 + ), + ( + "sample", + "52B40F5A9D3D13040F494E83D3906C6079F29981035C7BD51E5CAC40", + "0B115E5E36F0F9EC81F1325A5952878D745E19D7BB3EABFABA77E953", + "830F34CCDFE826CCFDC81EB4129772E20E122348A2BBD889A1B1AF1D", + SHA384 + ), + ( + "sample", + "9DB103FFEDEDF9CFDBA05184F925400C1653B8501BAB89CEA0FBEC14", + "074BD1D979D5F32BF958DDC61E4FB4872ADCAFEB2256497CDAC30397", + "A4CECA196C3D5A1FF31027B33185DC8EE43F288B21AB342E5D8EB084", + SHA512 + ), + ( + "test", + "2519178F82C3F0E4F87ED5883A4E114E5B7A6E374043D8EFD329C253", + "DEAA646EC2AF2EA8AD53ED66B2E2DDAA49A12EFD8356561451F3E21C", + "95987796F6CF2062AB8135271DE56AE55366C045F6D9593F53787BD2", + SHA1 + ), + ( + "test", + "DF8B38D40DCA3E077D0AC520BF56B6D565134D9B5F2EAE0D34900524", + "C441CE8E261DED634E4CF84910E4C5D1D22C5CF3B732BB204DBEF019", + "902F42847A63BDC5F6046ADA114953120F99442D76510150F372A3F4", + SHA224 + ), + ( + "test", + "FF86F57924DA248D6E44E8154EB69F0AE2AEBAEE9931D0B5A969F904", + "AD04DDE87B84747A243A631EA47A1BA6D1FAA059149AD2440DE6FBA6", + "178D49B1AE90E3D8B629BE3DB5683915F4E8C99FDF6E666CF37ADCFD", + SHA256 + ), + ( + "test", + "7046742B839478C1B5BD31DB2E862AD868E1A45C863585B5F22BDC2D", + "389B92682E399B26518A95506B52C03BC9379A9DADF3391A21FB0EA4", + "414A718ED3249FF6DBC5B50C27F71F01F070944DA22AB1F78F559AAB", + SHA384 + ), + ( + "test", + "E39C2AA4EA6BE2306C72126D40ED77BF9739BB4D6EF2BBB1DCB6169D", + "049F050477C5ADD858CAC56208394B5A55BAEBBE887FDF765047C17C", + "077EB13E7005929CEFA3CD0403C7CDCC077ADF4E44F3C41B2F60ECFF", + SHA512 + ) + ) + + signatures_p256_ = ( + ( + "sample", + "882905F1227FD620FBF2ABF21244F0BA83D0DC3A9103DBBEE43A1FB858109DB4", + "61340C88C3AAEBEB4F6D667F672CA9759A6CCAA9FA8811313039EE4A35471D32", + "6D7F147DAC089441BB2E2FE8F7A3FA264B9C475098FDCF6E00D7C996E1B8B7EB", + SHA1 + ), + ( + "sample", + "103F90EE9DC52E5E7FB5132B7033C63066D194321491862059967C715985D473", + "53B2FFF5D1752B2C689DF257C04C40A587FABABB3F6FC2702F1343AF7CA9AA3F", + "B9AFB64FDC03DC1A131C7D2386D11E349F070AA432A4ACC918BEA988BF75C74C", + SHA224 + ), + ( + "sample", + "A6E3C57DD01ABE90086538398355DD4C3B17AA873382B0F24D6129493D8AAD60", + "EFD48B2AACB6A8FD1140DD9CD45E81D69D2C877B56AAF991C34D0EA84EAF3716", + 
"F7CB1C942D657C41D436C7A1B6E29F65F3E900DBB9AFF4064DC4AB2F843ACDA8", + SHA256 + ), + ( + "sample", + "09F634B188CEFD98E7EC88B1AA9852D734D0BC272F7D2A47DECC6EBEB375AAD4", + "0EAFEA039B20E9B42309FB1D89E213057CBF973DC0CFC8F129EDDDC800EF7719", + "4861F0491E6998B9455193E34E7B0D284DDD7149A74B95B9261F13ABDE940954", + SHA384 + ), + ( + "sample", + "5FA81C63109BADB88C1F367B47DA606DA28CAD69AA22C4FE6AD7DF73A7173AA5", + "8496A60B5E9B47C825488827E0495B0E3FA109EC4568FD3F8D1097678EB97F00", + "2362AB1ADBE2B8ADF9CB9EDAB740EA6049C028114F2460F96554F61FAE3302FE", + SHA512 + ), + ( + "test", + "8C9520267C55D6B980DF741E56B4ADEE114D84FBFA2E62137954164028632A2E", + "0CBCC86FD6ABD1D99E703E1EC50069EE5C0B4BA4B9AC60E409E8EC5910D81A89", + "01B9D7B73DFAA60D5651EC4591A0136F87653E0FD780C3B1BC872FFDEAE479B1", + SHA1 + ), + ( + "test", + "669F4426F2688B8BE0DB3A6BD1989BDAEFFF84B649EEB84F3DD26080F667FAA7", + "C37EDB6F0AE79D47C3C27E962FA269BB4F441770357E114EE511F662EC34A692", + "C820053A05791E521FCAAD6042D40AEA1D6B1A540138558F47D0719800E18F2D", + SHA224 + ), + ( + "test", + "D16B6AE827F17175E040871A1C7EC3500192C4C92677336EC2537ACAEE0008E0", + "F1ABB023518351CD71D881567B1EA663ED3EFCF6C5132B354F28D3B0B7D38367", + "019F4113742A2B14BD25926B49C649155F267E60D3814B4C0CC84250E46F0083", + SHA256 + ), + ( + "test", + "16AEFFA357260B04B1DD199693960740066C1A8F3E8EDD79070AA914D361B3B8", + "83910E8B48BB0C74244EBDF7F07A1C5413D61472BD941EF3920E623FBCCEBEB6", + "8DDBEC54CF8CD5874883841D712142A56A8D0F218F5003CB0296B6B509619F2C", + SHA384 + ), + ( + "test", + "6915D11632ACA3C40D5D51C08DAF9C555933819548784480E93499000D9F0B7F", + "461D93F31B6540894788FD206C07CFA0CC35F46FA3C91816FFF1040AD1581A04", + "39AF9F15DE0DB8D97E72719C74820D304CE5226E32DEDAE67519E840D1194E55", + SHA512 + ) + ) + + signatures_p384_ = ( + ( + "sample", + "4471EF7518BB2C7C20F62EAE1C387AD0C5E8E470995DB4ACF694466E6AB096630F29E5938D25106C3C340045A2DB01A7", + "EC748D839243D6FBEF4FC5C4859A7DFFD7F3ABDDF72014540C16D73309834FA37B9BA002899F6FDA3A4A9386790D4EB2", + "A3BCFA947BEEF4732BF247AC17F71676CB31A847B9FF0CBC9C9ED4C1A5B3FACF26F49CA031D4857570CCB5CA4424A443", + SHA1 + ), + ( + "sample", + "A4E4D2F0E729EB786B31FC20AD5D849E304450E0AE8E3E341134A5C1AFA03CAB8083EE4E3C45B06A5899EA56C51B5879", + "42356E76B55A6D9B4631C865445DBE54E056D3B3431766D0509244793C3F9366450F76EE3DE43F5A125333A6BE060122", + "9DA0C81787064021E78DF658F2FBB0B042BF304665DB721F077A4298B095E4834C082C03D83028EFBF93A3C23940CA8D", + SHA224 + ), + ( + "sample", + "180AE9F9AEC5438A44BC159A1FCB277C7BE54FA20E7CF404B490650A8ACC414E375572342863C899F9F2EDF9747A9B60", + "21B13D1E013C7FA1392D03C5F99AF8B30C570C6F98D4EA8E354B63A21D3DAA33BDE1E888E63355D92FA2B3C36D8FB2CD", + "F3AA443FB107745BF4BD77CB3891674632068A10CA67E3D45DB2266FA7D1FEEBEFDC63ECCD1AC42EC0CB8668A4FA0AB0", + SHA256 + ), + ( + "sample", + "94ED910D1A099DAD3254E9242AE85ABDE4BA15168EAF0CA87A555FD56D10FBCA2907E3E83BA95368623B8C4686915CF9", + "94EDBB92A5ECB8AAD4736E56C691916B3F88140666CE9FA73D64C4EA95AD133C81A648152E44ACF96E36DD1E80FABE46", + "99EF4AEB15F178CEA1FE40DB2603138F130E740A19624526203B6351D0A3A94FA329C145786E679E7B82C71A38628AC8", + SHA384 + ), + ( + "sample", + "92FC3C7183A883E24216D1141F1A8976C5B0DD797DFA597E3D7B32198BD35331A4E966532593A52980D0E3AAA5E10EC3", + "ED0959D5880AB2D869AE7F6C2915C6D60F96507F9CB3E047C0046861DA4A799CFE30F35CC900056D7C99CD7882433709", + "512C8CCEEE3890A84058CE1E22DBC2198F42323CE8ACA9135329F03C068E5112DC7CC3EF3446DEFCEB01A45C2667FDD5", + SHA512 + ), + ( + "test", + 
"66CC2C8F4D303FC962E5FF6A27BD79F84EC812DDAE58CF5243B64A4AD8094D47EC3727F3A3C186C15054492E30698497", + "4BC35D3A50EF4E30576F58CD96CE6BF638025EE624004A1F7789A8B8E43D0678ACD9D29876DAF46638645F7F404B11C7", + "D5A6326C494ED3FF614703878961C0FDE7B2C278F9A65FD8C4B7186201A2991695BA1C84541327E966FA7B50F7382282", + SHA1 + ), + ( + "test", + "18FA39DB95AA5F561F30FA3591DC59C0FA3653A80DAFFA0B48D1A4C6DFCBFF6E3D33BE4DC5EB8886A8ECD093F2935726", + "E8C9D0B6EA72A0E7837FEA1D14A1A9557F29FAA45D3E7EE888FC5BF954B5E62464A9A817C47FF78B8C11066B24080E72", + "07041D4A7A0379AC7232FF72E6F77B6DDB8F09B16CCE0EC3286B2BD43FA8C6141C53EA5ABEF0D8231077A04540A96B66", + SHA224 + ), + ( + "test", + "0CFAC37587532347DC3389FDC98286BBA8C73807285B184C83E62E26C401C0FAA48DD070BA79921A3457ABFF2D630AD7", + "6D6DEFAC9AB64DABAFE36C6BF510352A4CC27001263638E5B16D9BB51D451559F918EEDAF2293BE5B475CC8F0188636B", + "2D46F3BECBCC523D5F1A1256BF0C9B024D879BA9E838144C8BA6BAEB4B53B47D51AB373F9845C0514EEFB14024787265", + SHA256 + ), + ( + "test", + "015EE46A5BF88773ED9123A5AB0807962D193719503C527B031B4C2D225092ADA71F4A459BC0DA98ADB95837DB8312EA", + "8203B63D3C853E8D77227FB377BCF7B7B772E97892A80F36AB775D509D7A5FEB0542A7F0812998DA8F1DD3CA3CF023DB", + "DDD0760448D42D8A43AF45AF836FCE4DE8BE06B485E9B61B827C2F13173923E06A739F040649A667BF3B828246BAA5A5", + SHA384 + ), + ( + "test", + "3780C4F67CB15518B6ACAE34C9F83568D2E12E47DEAB6C50A4E4EE5319D1E8CE0E2CC8A136036DC4B9C00E6888F66B6C", + "A0D5D090C9980FAF3C2CE57B7AE951D31977DD11C775D314AF55F76C676447D06FB6495CD21B4B6E340FC236584FB277", + "976984E59B4C77B0E8E4460DCA3D9F20E07B9BB1F63BEEFAF576F6B2E8B224634A2092CD3792E0159AD9CEE37659C736", + SHA512 + ), + ) + + signatures_p521_ = ( + ( + "sample", + "0089C071B419E1C2820962321787258469511958E80582E95D8378E0C2CCDB3CB42BEDE42F50E3FA3C71F5A76724281D31D9C89F0F91FC1BE4918DB1C03A5838D0F9", + "00343B6EC45728975EA5CBA6659BBB6062A5FF89EEA58BE3C80B619F322C87910FE092F7D45BB0F8EEE01ED3F20BABEC079D202AE677B243AB40B5431D497C55D75D", + "00E7B0E675A9B24413D448B8CC119D2BF7B2D2DF032741C096634D6D65D0DBE3D5694625FB9E8104D3B842C1B0E2D0B98BEA19341E8676AEF66AE4EBA3D5475D5D16", + SHA1 + ), + ( + "sample", + "0121415EC2CD7726330A61F7F3FA5DE14BE9436019C4DB8CB4041F3B54CF31BE0493EE3F427FB906393D895A19C9523F3A1D54BB8702BD4AA9C99DAB2597B92113F3", + "01776331CFCDF927D666E032E00CF776187BC9FDD8E69D0DABB4109FFE1B5E2A30715F4CC923A4A5E94D2503E9ACFED92857B7F31D7152E0F8C00C15FF3D87E2ED2E", + "0050CB5265417FE2320BBB5A122B8E1A32BD699089851128E360E620A30C7E17BA41A666AF126CE100E5799B153B60528D5300D08489CA9178FB610A2006C254B41F", + SHA224 + ), + ( + "sample", + "00EDF38AFCAAECAB4383358B34D67C9F2216C8382AAEA44A3DAD5FDC9C32575761793FEF24EB0FC276DFC4F6E3EC476752F043CF01415387470BCBD8678ED2C7E1A0", + "01511BB4D675114FE266FC4372B87682BAECC01D3CC62CF2303C92B3526012659D16876E25C7C1E57648F23B73564D67F61C6F14D527D54972810421E7D87589E1A7", + "004A171143A83163D6DF460AAF61522695F207A58B95C0644D87E52AA1A347916E4F7A72930B1BC06DBE22CE3F58264AFD23704CBB63B29B931F7DE6C9D949A7ECFC", + SHA256 + ), + ( + "sample", + "01546A108BC23A15D6F21872F7DED661FA8431DDBD922D0DCDB77CC878C8553FFAD064C95A920A750AC9137E527390D2D92F153E66196966EA554D9ADFCB109C4211", + "01EA842A0E17D2DE4F92C15315C63DDF72685C18195C2BB95E572B9C5136CA4B4B576AD712A52BE9730627D16054BA40CC0B8D3FF035B12AE75168397F5D50C67451", + "01F21A3CEE066E1961025FB048BD5FE2B7924D0CD797BABE0A83B66F1E35EEAF5FDE143FA85DC394A7DEE766523393784484BDF3E00114A1C857CDE1AA203DB65D61", + SHA384 + ), + ( + "sample", + 
"01DAE2EA071F8110DC26882D4D5EAE0621A3256FC8847FB9022E2B7D28E6F10198B1574FDD03A9053C08A1854A168AA5A57470EC97DD5CE090124EF52A2F7ECBFFD3", + "00C328FAFCBD79DD77850370C46325D987CB525569FB63C5D3BC53950E6D4C5F174E25A1EE9017B5D450606ADD152B534931D7D4E8455CC91F9B15BF05EC36E377FA", + "00617CCE7CF5064806C467F678D3B4080D6F1CC50AF26CA209417308281B68AF282623EAA63E5B5C0723D8B8C37FF0777B1A20F8CCB1DCCC43997F1EE0E44DA4A67A", + SHA512 + ), + ( + "test", + "00BB9F2BF4FE1038CCF4DABD7139A56F6FD8BB1386561BD3C6A4FC818B20DF5DDBA80795A947107A1AB9D12DAA615B1ADE4F7A9DC05E8E6311150F47F5C57CE8B222", + "013BAD9F29ABE20DE37EBEB823C252CA0F63361284015A3BF430A46AAA80B87B0693F0694BD88AFE4E661FC33B094CD3B7963BED5A727ED8BD6A3A202ABE009D0367", + "01E9BB81FF7944CA409AD138DBBEE228E1AFCC0C890FC78EC8604639CB0DBDC90F717A99EAD9D272855D00162EE9527567DD6A92CBD629805C0445282BBC916797FF", + SHA1 + ), + ( + "test", + "0040D09FCF3C8A5F62CF4FB223CBBB2B9937F6B0577C27020A99602C25A01136987E452988781484EDBBCF1C47E554E7FC901BC3085E5206D9F619CFF07E73D6F706", + "01C7ED902E123E6815546065A2C4AF977B22AA8EADDB68B2C1110E7EA44D42086BFE4A34B67DDC0E17E96536E358219B23A706C6A6E16BA77B65E1C595D43CAE17FB", + "0177336676304FCB343CE028B38E7B4FBA76C1C1B277DA18CAD2A8478B2A9A9F5BEC0F3BA04F35DB3E4263569EC6AADE8C92746E4C82F8299AE1B8F1739F8FD519A4", + SHA224 + ), + ( + "test", + "001DE74955EFAABC4C4F17F8E84D881D1310B5392D7700275F82F145C61E843841AF09035BF7A6210F5A431A6A9E81C9323354A9E69135D44EBD2FCAA7731B909258", + "000E871C4A14F993C6C7369501900C4BC1E9C7B0B4BA44E04868B30B41D8071042EB28C4C250411D0CE08CD197E4188EA4876F279F90B3D8D74A3C76E6F1E4656AA8", + "00CD52DBAA33B063C3A6CD8058A1FB0A46A4754B034FCC644766CA14DA8CA5CA9FDE00E88C1AD60CCBA759025299079D7A427EC3CC5B619BFBC828E7769BCD694E86", + SHA256 + ), + ( + "test", + "01F1FC4A349A7DA9A9E116BFDD055DC08E78252FF8E23AC276AC88B1770AE0B5DCEB1ED14A4916B769A523CE1E90BA22846AF11DF8B300C38818F713DADD85DE0C88", + "014BEE21A18B6D8B3C93FAB08D43E739707953244FDBE924FA926D76669E7AC8C89DF62ED8975C2D8397A65A49DCC09F6B0AC62272741924D479354D74FF6075578C", + "0133330865C067A0EAF72362A65E2D7BC4E461E8C8995C3B6226A21BD1AA78F0ED94FE536A0DCA35534F0CD1510C41525D163FE9D74D134881E35141ED5E8E95B979", + SHA384 + ), + ( + "test", + "016200813020EC986863BEDFC1B121F605C1215645018AEA1A7B215A564DE9EB1B38A67AA1128B80CE391C4FB71187654AAA3431027BFC7F395766CA988C964DC56D", + "013E99020ABF5CEE7525D16B69B229652AB6BDF2AFFCAEF38773B4B7D08725F10CDB93482FDCC54EDCEE91ECA4166B2A7C6265EF0CE2BD7051B7CEF945BABD47EE6D", + "01FBD0013C674AA79CB39849527916CE301C66EA7CE8B80682786AD60F98F7E78A19CA69EFF5C57400E3B3A0AD66CE0978214D13BAF4E9AC60752F7B155E2DE4DCE3", + SHA512 + ), + ) + + signatures_p192 = [] + for a, b, c, d, e in signatures_p192_: + new_tv = (tobytes(a), unhexlify(b), unhexlify(c), unhexlify(d), e) + signatures_p192.append(new_tv) + + signatures_p224 = [] + for a, b, c, d, e in signatures_p224_: + new_tv = (tobytes(a), unhexlify(b), unhexlify(c), unhexlify(d), e) + signatures_p224.append(new_tv) + + signatures_p256 = [] + for a, b, c, d, e in signatures_p256_: + new_tv = (tobytes(a), unhexlify(b), unhexlify(c), unhexlify(d), e) + signatures_p256.append(new_tv) + + signatures_p384 = [] + for a, b, c, d, e in signatures_p384_: + new_tv = (tobytes(a), unhexlify(b), unhexlify(c), unhexlify(d), e) + signatures_p384.append(new_tv) + + signatures_p521 = [] + for a, b, c, d, e in signatures_p521_: + new_tv = (tobytes(a), unhexlify(b), unhexlify(c), unhexlify(d), e) + signatures_p521.append(new_tv) + + def shortDescription(self): + return "Deterministic ECDSA Tests" + + def 
test_loopback_p192(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv_p192, 'deterministic-rfc6979') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub_p192, 'deterministic-rfc6979') + verifier.verify(hashed_msg, signature) + + def test_loopback_p224(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv_p224, 'deterministic-rfc6979') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub_p224, 'deterministic-rfc6979') + verifier.verify(hashed_msg, signature) + + def test_loopback_p256(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv_p256, 'deterministic-rfc6979') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub_p256, 'deterministic-rfc6979') + verifier.verify(hashed_msg, signature) + + def test_loopback_p384(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv_p384, 'deterministic-rfc6979') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub_p384, 'deterministic-rfc6979') + verifier.verify(hashed_msg, signature) + + def test_loopback_p521(self): + hashed_msg = SHA512.new(b"test") + signer = DSS.new(self.key_priv_p521, 'deterministic-rfc6979') + signature = signer.sign(hashed_msg) + + verifier = DSS.new(self.key_pub_p521, 'deterministic-rfc6979') + verifier.verify(hashed_msg, signature) + + def test_data_rfc6979_p192(self): + signer = DSS.new(self.key_priv_p192, 'deterministic-rfc6979') + for message, k, r, s, module in self.signatures_p192: + hash_obj = module.new(message) + result = signer.sign(hash_obj) + self.assertEqual(r + s, result) + + def test_data_rfc6979_p224(self): + signer = DSS.new(self.key_priv_p224, 'deterministic-rfc6979') + for message, k, r, s, module in self.signatures_p224: + hash_obj = module.new(message) + result = signer.sign(hash_obj) + self.assertEqual(r + s, result) + + def test_data_rfc6979_p256(self): + signer = DSS.new(self.key_priv_p256, 'deterministic-rfc6979') + for message, k, r, s, module in self.signatures_p256: + hash_obj = module.new(message) + result = signer.sign(hash_obj) + self.assertEqual(r + s, result) + + def test_data_rfc6979_p384(self): + signer = DSS.new(self.key_priv_p384, 'deterministic-rfc6979') + for message, k, r, s, module in self.signatures_p384: + hash_obj = module.new(message) + result = signer.sign(hash_obj) + self.assertEqual(r + s, result) + + def test_data_rfc6979_p521(self): + signer = DSS.new(self.key_priv_p521, 'deterministic-rfc6979') + for message, k, r, s, module in self.signatures_p521: + hash_obj = module.new(message) + result = signer.sign(hash_obj) + self.assertEqual(r + s, result) + + +def get_hash_module(hash_name): + if hash_name == "SHA-512": + hash_module = SHA512 + elif hash_name == "SHA-512/224": + hash_module = SHA512.new(truncate="224") + elif hash_name == "SHA-512/256": + hash_module = SHA512.new(truncate="256") + elif hash_name == "SHA-384": + hash_module = SHA384 + elif hash_name == "SHA-256": + hash_module = SHA256 + elif hash_name == "SHA-224": + hash_module = SHA224 + elif hash_name == "SHA-1": + hash_module = SHA1 + elif hash_name == "SHA3-224": + hash_module = SHA3_224 + elif hash_name == "SHA3-256": + hash_module = SHA3_256 + elif hash_name == "SHA3-384": + hash_module = SHA3_384 + elif hash_name == "SHA3-512": + hash_module = SHA3_512 + else: + raise ValueError("Unknown hash algorithm: " + hash_name) + return hash_module + + +class TestVectorsDSAWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, 
slow_tests): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._slow_tests = slow_tests + self._id = "None" + self.tv = [] + + def setUp(self): + + def filter_dsa(group): + return DSA.import_key(group['keyPem']) + + def filter_sha(group): + return get_hash_module(group['sha']) + + def filter_type(group): + sig_type = group['type'] + if sig_type != 'DsaVerify': + raise ValueError("Unknown signature type " + sig_type) + return sig_type + + result = load_test_vectors_wycheproof(("Signature", "wycheproof"), + "dsa_test.json", + "Wycheproof DSA signature", + group_tag={'key': filter_dsa, + 'hash_module': filter_sha, + 'sig_type': filter_type}) + self.tv += result + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_verify(self, tv): + self._id = "Wycheproof DSA Test #" + str(tv.id) + + hashed_msg = tv.hash_module.new(tv.msg) + signer = DSS.new(tv.key, 'fips-186-3', encoding='der') + try: + signature = signer.verify(hashed_msg, tv.sig) + except ValueError as e: + if tv.warning: + return + assert not tv.valid + else: + assert tv.valid + self.warn(tv) + + def runTest(self): + for tv in self.tv: + self.test_verify(tv) + + +class TestVectorsECDSAWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings, slow_tests): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._slow_tests = slow_tests + self._id = "None" + + def add_tests(self, filename): + + def filter_ecc(group): + # These are the only curves we accept to skip + if group['key']['curve'] in ('secp224k1', 'secp256k1', + 'brainpoolP224r1', 'brainpoolP224t1', + 'brainpoolP256r1', 'brainpoolP256t1', + 'brainpoolP320r1', 'brainpoolP320t1', + 'brainpoolP384r1', 'brainpoolP384t1', + 'brainpoolP512r1', 'brainpoolP512t1', + ): + return None + return ECC.import_key(group['keyPem']) + + def filter_sha(group): + return get_hash_module(group['sha']) + + def filter_encoding(group): + encoding_name = group['type'] + if encoding_name == "EcdsaVerify": + return "der" + elif encoding_name == "EcdsaP1363Verify": + return "binary" + else: + raise ValueError("Unknown signature type " + encoding_name) + + result = load_test_vectors_wycheproof(("Signature", "wycheproof"), + filename, + "Wycheproof ECDSA signature (%s)" % filename, + group_tag={'key': filter_ecc, + 'hash_module': filter_sha, + 'encoding': filter_encoding, + }) + self.tv += result + + def setUp(self): + self.tv = [] + self.add_tests("ecdsa_secp224r1_sha224_p1363_test.json") + self.add_tests("ecdsa_secp224r1_sha224_test.json") + if self._slow_tests: + self.add_tests("ecdsa_secp224r1_sha256_p1363_test.json") + self.add_tests("ecdsa_secp224r1_sha256_test.json") + self.add_tests("ecdsa_secp224r1_sha3_224_test.json") + self.add_tests("ecdsa_secp224r1_sha3_256_test.json") + self.add_tests("ecdsa_secp224r1_sha3_512_test.json") + self.add_tests("ecdsa_secp224r1_sha512_p1363_test.json") + self.add_tests("ecdsa_secp224r1_sha512_test.json") + self.add_tests("ecdsa_secp256r1_sha256_p1363_test.json") + self.add_tests("ecdsa_secp256r1_sha256_test.json") + self.add_tests("ecdsa_secp256r1_sha3_256_test.json") + self.add_tests("ecdsa_secp256r1_sha3_512_test.json") + self.add_tests("ecdsa_secp256r1_sha512_p1363_test.json") + self.add_tests("ecdsa_secp256r1_sha512_test.json") + if self._slow_tests: + self.add_tests("ecdsa_secp384r1_sha3_384_test.json") + 
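# The "*_p1363_*" vector files carry IEEE P1363 signatures (the
+ # fixed-length concatenation r || s); the remaining files use ASN.1/DER.
+ # filter_encoding() above turns the Wycheproof group type into the
+ # matching argument for the verifier, i.e.
+ #     DSS.new(key, 'fips-186-3', encoding='binary')   # raw r || s
+ #     DSS.new(key, 'fips-186-3', encoding='der')      # DER SEQUENCE of r, s
+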
self.add_tests("ecdsa_secp384r1_sha3_512_test.json") + self.add_tests("ecdsa_secp384r1_sha384_p1363_test.json") + self.add_tests("ecdsa_secp384r1_sha384_test.json") + self.add_tests("ecdsa_secp384r1_sha512_p1363_test.json") + self.add_tests("ecdsa_secp384r1_sha512_test.json") + if self._slow_tests: + self.add_tests("ecdsa_secp521r1_sha3_512_test.json") + self.add_tests("ecdsa_secp521r1_sha512_p1363_test.json") + self.add_tests("ecdsa_secp521r1_sha512_test.json") + self.add_tests("ecdsa_test.json") + self.add_tests("ecdsa_webcrypto_test.json") + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_verify(self, tv): + self._id = "Wycheproof ECDSA Test #%d (%s, %s)" % (tv.id, tv.comment, tv.filename) + + # Skip tests with unsupported curves + if tv.key is None: + return + + hashed_msg = tv.hash_module.new(tv.msg) + signer = DSS.new(tv.key, 'fips-186-3', encoding=tv.encoding) + try: + signature = signer.verify(hashed_msg, tv.sig) + except ValueError as e: + if tv.warning: + return + if tv.comment == "k*G has a large x-coordinate": + return + assert not tv.valid + else: + assert tv.valid + self.warn(tv) + + def runTest(self): + for tv in self.tv: + self.test_verify(tv) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(FIPS_DSA_Tests) + tests += list_test_cases(FIPS_ECDSA_Tests) + tests += list_test_cases(Det_DSA_Tests) + tests += list_test_cases(Det_ECDSA_Tests) + + slow_tests = config.get('slow_tests') + if slow_tests: + tests += list_test_cases(FIPS_DSA_Tests_KAT) + tests += list_test_cases(FIPS_ECDSA_Tests_KAT) + + tests += [TestVectorsDSAWycheproof(wycheproof_warnings, slow_tests)] + tests += [TestVectorsECDSAWycheproof(wycheproof_warnings, slow_tests)] + + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_eddsa.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_eddsa.py new file mode 100644 index 0000000..015e247 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_eddsa.py @@ -0,0 +1,578 @@ +# +# Copyright (c) 2022, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest +from binascii import unhexlify + +from Cryptodome.PublicKey import ECC +from Cryptodome.Signature import eddsa +from Cryptodome.Hash import SHA512, SHAKE256 +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors_wycheproof +from Cryptodome.Util.number import bytes_to_long + +rfc8032_tv_str = ( + # 7.1 Ed25519 + ( + "9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60", + "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a", + "", + None, + "", + "e5564300c360ac729086e2cc806e828a" + "84877f1eb8e5d974d873e06522490155" + "5fb8821590a33bacc61e39701cf9b46b" + "d25bf5f0595bbe24655141438e7a100b" + ), + ( + "4ccd089b28ff96da9db6c346ec114e0f5b8a319f35aba624da8cf6ed4fb8a6fb", + "3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c", + "72", + None, + "", + "92a009a9f0d4cab8720e820b5f642540" + "a2b27b5416503f8fb3762223ebdb69da" + "085ac1e43e15996e458f3613d0f11d8c" + "387b2eaeb4302aeeb00d291612bb0c00" + ), + ( + "c5aa8df43f9f837bedb7442f31dcb7b166d38535076f094b85ce3a2e0b4458f7", + "fc51cd8e6218a1a38da47ed00230f0580816ed13ba3303ac5deb911548908025", + "af82", + None, + "", + "6291d657deec24024827e69c3abe01a3" + "0ce548a284743a445e3680d7db5ac3ac" + "18ff9b538d16f290ae67f760984dc659" + "4a7c15e9716ed28dc027beceea1ec40a" + ), + ( + "f5e5767cf153319517630f226876b86c8160cc583bc013744c6bf255f5cc0ee5", + "278117fc144c72340f67d0f2316e8386ceffbf2b2428c9c51fef7c597f1d426e", + "08b8b2b733424243760fe426a4b54908" + "632110a66c2f6591eabd3345e3e4eb98" + "fa6e264bf09efe12ee50f8f54e9f77b1" + "e355f6c50544e23fb1433ddf73be84d8" + "79de7c0046dc4996d9e773f4bc9efe57" + "38829adb26c81b37c93a1b270b20329d" + "658675fc6ea534e0810a4432826bf58c" + "941efb65d57a338bbd2e26640f89ffbc" + "1a858efcb8550ee3a5e1998bd177e93a" + "7363c344fe6b199ee5d02e82d522c4fe" + "ba15452f80288a821a579116ec6dad2b" + "3b310da903401aa62100ab5d1a36553e" + "06203b33890cc9b832f79ef80560ccb9" + "a39ce767967ed628c6ad573cb116dbef" + "efd75499da96bd68a8a97b928a8bbc10" + "3b6621fcde2beca1231d206be6cd9ec7" + "aff6f6c94fcd7204ed3455c68c83f4a4" + "1da4af2b74ef5c53f1d8ac70bdcb7ed1" + "85ce81bd84359d44254d95629e9855a9" + "4a7c1958d1f8ada5d0532ed8a5aa3fb2" + "d17ba70eb6248e594e1a2297acbbb39d" + "502f1a8c6eb6f1ce22b3de1a1f40cc24" + "554119a831a9aad6079cad88425de6bd" + "e1a9187ebb6092cf67bf2b13fd65f270" + "88d78b7e883c8759d2c4f5c65adb7553" + "878ad575f9fad878e80a0c9ba63bcbcc" + "2732e69485bbc9c90bfbd62481d9089b" + "eccf80cfe2df16a2cf65bd92dd597b07" + "07e0917af48bbb75fed413d238f5555a" + "7a569d80c3414a8d0859dc65a46128ba" + "b27af87a71314f318c782b23ebfe808b" + "82b0ce26401d2e22f04d83d1255dc51a" + "ddd3b75a2b1ae0784504df543af8969b" + "e3ea7082ff7fc9888c144da2af58429e" + "c96031dbcad3dad9af0dcbaaaf268cb8" + "fcffead94f3c7ca495e056a9b47acdb7" + "51fb73e666c6c655ade8297297d07ad1" + "ba5e43f1bca32301651339e22904cc8c" + "42f58c30c04aafdb038dda0847dd988d" + 
"cda6f3bfd15c4b4c4525004aa06eeff8" + "ca61783aacec57fb3d1f92b0fe2fd1a8" + "5f6724517b65e614ad6808d6f6ee34df" + "f7310fdc82aebfd904b01e1dc54b2927" + "094b2db68d6f903b68401adebf5a7e08" + "d78ff4ef5d63653a65040cf9bfd4aca7" + "984a74d37145986780fc0b16ac451649" + "de6188a7dbdf191f64b5fc5e2ab47b57" + "f7f7276cd419c17a3ca8e1b939ae49e4" + "88acba6b965610b5480109c8b17b80e1" + "b7b750dfc7598d5d5011fd2dcc5600a3" + "2ef5b52a1ecc820e308aa342721aac09" + "43bf6686b64b2579376504ccc493d97e" + "6aed3fb0f9cd71a43dd497f01f17c0e2" + "cb3797aa2a2f256656168e6c496afc5f" + "b93246f6b1116398a346f1a641f3b041" + "e989f7914f90cc2c7fff357876e506b5" + "0d334ba77c225bc307ba537152f3f161" + "0e4eafe595f6d9d90d11faa933a15ef1" + "369546868a7f3a45a96768d40fd9d034" + "12c091c6315cf4fde7cb68606937380d" + "b2eaaa707b4c4185c32eddcdd306705e" + "4dc1ffc872eeee475a64dfac86aba41c" + "0618983f8741c5ef68d3a101e8a3b8ca" + "c60c905c15fc910840b94c00a0b9d0", + None, + "", + "0aab4c900501b3e24d7cdf4663326a3a" + "87df5e4843b2cbdb67cbf6e460fec350" + "aa5371b1508f9f4528ecea23c436d94b" + "5e8fcd4f681e30a6ac00a9704a188a03" + ), + # 7.2 Ed25519ctx + ( + "0305334e381af78f141cb666f6199f57" + "bc3495335a256a95bd2a55bf546663f6", + "dfc9425e4f968f7f0c29f0259cf5f9ae" + "d6851c2bb4ad8bfb860cfee0ab248292", + "f726936d19c800494e3fdaff20b276a8", + None, + "666f6f", + "55a4cc2f70a54e04288c5f4cd1e45a7b" + "b520b36292911876cada7323198dd87a" + "8b36950b95130022907a7fb7c4e9b2d5" + "f6cca685a587b4b21f4b888e4e7edb0d" + ), + ( + "0305334e381af78f141cb666f6199f57" + "bc3495335a256a95bd2a55bf546663f6", + "dfc9425e4f968f7f0c29f0259cf5f9ae" + "d6851c2bb4ad8bfb860cfee0ab248292", + "f726936d19c800494e3fdaff20b276a8", + None, + "626172", + "fc60d5872fc46b3aa69f8b5b4351d580" + "8f92bcc044606db097abab6dbcb1aee3" + "216c48e8b3b66431b5b186d1d28f8ee1" + "5a5ca2df6668346291c2043d4eb3e90d" + ), + ( + "0305334e381af78f141cb666f6199f57" + "bc3495335a256a95bd2a55bf546663f6", + "dfc9425e4f968f7f0c29f0259cf5f9ae" + "d6851c2bb4ad8bfb860cfee0ab248292", + "508e9e6882b979fea900f62adceaca35", + None, + "666f6f", + "8b70c1cc8310e1de20ac53ce28ae6e72" + "07f33c3295e03bb5c0732a1d20dc6490" + "8922a8b052cf99b7c4fe107a5abb5b2c" + "4085ae75890d02df26269d8945f84b0b" + ), + ( + "ab9c2853ce297ddab85c993b3ae14bca" + "d39b2c682beabc27d6d4eb20711d6560", + "0f1d1274943b91415889152e893d80e9" + "3275a1fc0b65fd71b4b0dda10ad7d772", + "f726936d19c800494e3fdaff20b276a8", + None, + "666f6f", + "21655b5f1aa965996b3f97b3c849eafb" + "a922a0a62992f73b3d1b73106a84ad85" + "e9b86a7b6005ea868337ff2d20a7f5fb" + "d4cd10b0be49a68da2b2e0dc0ad8960f" + ), + # 7.3 Ed25519ph + ( + "833fe62409237b9d62ec77587520911e" + "9a759cec1d19755b7da901b96dca3d42", + "ec172b93ad5e563bf4932c70e1245034" + "c35467ef2efd4d64ebf819683467e2bf", + "616263", + SHA512, + "", + "98a70222f0b8121aa9d30f813d683f80" + "9e462b469c7ff87639499bb94e6dae41" + "31f85042463c2a355a2003d062adf5aa" + "a10b8c61e636062aaad11c2a26083406" + ), + # 7.4 Ed448 + ( + "6c82a562cb808d10d632be89c8513ebf6c929f34ddfa8c9f63c9960ef6e348a3" + "528c8a3fcc2f044e39a3fc5b94492f8f032e7549a20098f95b", + "5fd7449b59b461fd2ce787ec616ad46a1da1342485a70e1f8a0ea75d80e96778" + "edf124769b46c7061bd6783df1e50f6cd1fa1abeafe8256180", + "", + None, + "", + "533a37f6bbe457251f023c0d88f976ae2dfb504a843e34d2074fd823d41a591f" + "2b233f034f628281f2fd7a22ddd47d7828c59bd0a21bfd3980ff0d2028d4b18a" + "9df63e006c5d1c2d345b925d8dc00b4104852db99ac5c7cdda8530a113a0f4db" + "b61149f05a7363268c71d95808ff2e652600" + ), + ( + "c4eab05d357007c632f3dbb48489924d552b08fe0c353a0d4a1f00acda2c463a" + 
"fbea67c5e8d2877c5e3bc397a659949ef8021e954e0a12274e", + "43ba28f430cdff456ae531545f7ecd0ac834a55d9358c0372bfa0c6c6798c086" + "6aea01eb00742802b8438ea4cb82169c235160627b4c3a9480", + "03", + None, + "", + "26b8f91727bd62897af15e41eb43c377efb9c610d48f2335cb0bd0087810f435" + "2541b143c4b981b7e18f62de8ccdf633fc1bf037ab7cd779805e0dbcc0aae1cb" + "cee1afb2e027df36bc04dcecbf154336c19f0af7e0a6472905e799f1953d2a0f" + "f3348ab21aa4adafd1d234441cf807c03a00", + ), + ( + "c4eab05d357007c632f3dbb48489924d552b08fe0c353a0d4a1f00acda2c463a" + "fbea67c5e8d2877c5e3bc397a659949ef8021e954e0a12274e", + "43ba28f430cdff456ae531545f7ecd0ac834a55d9358c0372bfa0c6c6798c086" + "6aea01eb00742802b8438ea4cb82169c235160627b4c3a9480", + "03", + None, + "666f6f", + "d4f8f6131770dd46f40867d6fd5d5055de43541f8c5e35abbcd001b32a89f7d2" + "151f7647f11d8ca2ae279fb842d607217fce6e042f6815ea000c85741de5c8da" + "1144a6a1aba7f96de42505d7a7298524fda538fccbbb754f578c1cad10d54d0d" + "5428407e85dcbc98a49155c13764e66c3c00", + ), + ( + "cd23d24f714274e744343237b93290f511f6425f98e64459ff203e8985083ffd" + "f60500553abc0e05cd02184bdb89c4ccd67e187951267eb328", + "dcea9e78f35a1bf3499a831b10b86c90aac01cd84b67a0109b55a36e9328b1e3" + "65fce161d71ce7131a543ea4cb5f7e9f1d8b00696447001400", + "0c3e544074ec63b0265e0c", + None, + "", + "1f0a8888ce25e8d458a21130879b840a9089d999aaba039eaf3e3afa090a09d3" + "89dba82c4ff2ae8ac5cdfb7c55e94d5d961a29fe0109941e00b8dbdeea6d3b05" + "1068df7254c0cdc129cbe62db2dc957dbb47b51fd3f213fb8698f064774250a5" + "028961c9bf8ffd973fe5d5c206492b140e00", + ), + ( + "258cdd4ada32ed9c9ff54e63756ae582fb8fab2ac721f2c8e676a72768513d93" + "9f63dddb55609133f29adf86ec9929dccb52c1c5fd2ff7e21b", + "3ba16da0c6f2cc1f30187740756f5e798d6bc5fc015d7c63cc9510ee3fd44adc" + "24d8e968b6e46e6f94d19b945361726bd75e149ef09817f580", + "64a65f3cdedcdd66811e2915", + None, + "", + "7eeeab7c4e50fb799b418ee5e3197ff6bf15d43a14c34389b59dd1a7b1b85b4a" + "e90438aca634bea45e3a2695f1270f07fdcdf7c62b8efeaf00b45c2c96ba457e" + "b1a8bf075a3db28e5c24f6b923ed4ad747c3c9e03c7079efb87cb110d3a99861" + "e72003cbae6d6b8b827e4e6c143064ff3c00", + ), + ( + "7ef4e84544236752fbb56b8f31a23a10e42814f5f55ca037cdcc11c64c9a3b29" + "49c1bb60700314611732a6c2fea98eebc0266a11a93970100e", + "b3da079b0aa493a5772029f0467baebee5a8112d9d3a22532361da294f7bb381" + "5c5dc59e176b4d9f381ca0938e13c6c07b174be65dfa578e80", + "64a65f3cdedcdd66811e2915e7", + None, + "", + "6a12066f55331b6c22acd5d5bfc5d71228fbda80ae8dec26bdd306743c5027cb" + "4890810c162c027468675ecf645a83176c0d7323a2ccde2d80efe5a1268e8aca" + "1d6fbc194d3f77c44986eb4ab4177919ad8bec33eb47bbb5fc6e28196fd1caf5" + "6b4e7e0ba5519234d047155ac727a1053100", + ), + ( + "d65df341ad13e008567688baedda8e9dcdc17dc024974ea5b4227b6530e339bf" + "f21f99e68ca6968f3cca6dfe0fb9f4fab4fa135d5542ea3f01", + "df9705f58edbab802c7f8363cfe5560ab1c6132c20a9f1dd163483a26f8ac53a" + "39d6808bf4a1dfbd261b099bb03b3fb50906cb28bd8a081f00", + "bd0f6a3747cd561bdddf4640a332461a4a30a12a434cd0bf40d766d9c6d458e5" + "512204a30c17d1f50b5079631f64eb3112182da3005835461113718d1a5ef944", + None, + "", + "554bc2480860b49eab8532d2a533b7d578ef473eeb58c98bb2d0e1ce488a98b1" + "8dfde9b9b90775e67f47d4a1c3482058efc9f40d2ca033a0801b63d45b3b722e" + "f552bad3b4ccb667da350192b61c508cf7b6b5adadc2c8d9a446ef003fb05cba" + "5f30e88e36ec2703b349ca229c2670833900", + ), + ( + "2ec5fe3c17045abdb136a5e6a913e32ab75ae68b53d2fc149b77e504132d3756" + "9b7e766ba74a19bd6162343a21c8590aa9cebca9014c636df5", + "79756f014dcfe2079f5dd9e718be4171e2ef2486a08f25186f6bff43a9936b9b" + 
"fe12402b08ae65798a3d81e22e9ec80e7690862ef3d4ed3a00", + "15777532b0bdd0d1389f636c5f6b9ba734c90af572877e2d272dd078aa1e567c" + "fa80e12928bb542330e8409f3174504107ecd5efac61ae7504dabe2a602ede89" + "e5cca6257a7c77e27a702b3ae39fc769fc54f2395ae6a1178cab4738e543072f" + "c1c177fe71e92e25bf03e4ecb72f47b64d0465aaea4c7fad372536c8ba516a60" + "39c3c2a39f0e4d832be432dfa9a706a6e5c7e19f397964ca4258002f7c0541b5" + "90316dbc5622b6b2a6fe7a4abffd96105eca76ea7b98816af0748c10df048ce0" + "12d901015a51f189f3888145c03650aa23ce894c3bd889e030d565071c59f409" + "a9981b51878fd6fc110624dcbcde0bf7a69ccce38fabdf86f3bef6044819de11", + None, + "", + "c650ddbb0601c19ca11439e1640dd931f43c518ea5bea70d3dcde5f4191fe53f" + "00cf966546b72bcc7d58be2b9badef28743954e3a44a23f880e8d4f1cfce2d7a" + "61452d26da05896f0a50da66a239a8a188b6d825b3305ad77b73fbac0836ecc6" + "0987fd08527c1a8e80d5823e65cafe2a3d00", + ), + ( + "872d093780f5d3730df7c212664b37b8a0f24f56810daa8382cd4fa3f77634ec" + "44dc54f1c2ed9bea86fafb7632d8be199ea165f5ad55dd9ce8", + "a81b2e8a70a5ac94ffdbcc9badfc3feb0801f258578bb114ad44ece1ec0e799d" + "a08effb81c5d685c0c56f64eecaef8cdf11cc38737838cf400", + "6ddf802e1aae4986935f7f981ba3f0351d6273c0a0c22c9c0e8339168e675412" + "a3debfaf435ed651558007db4384b650fcc07e3b586a27a4f7a00ac8a6fec2cd" + "86ae4bf1570c41e6a40c931db27b2faa15a8cedd52cff7362c4e6e23daec0fbc" + "3a79b6806e316efcc7b68119bf46bc76a26067a53f296dafdbdc11c77f7777e9" + "72660cf4b6a9b369a6665f02e0cc9b6edfad136b4fabe723d2813db3136cfde9" + "b6d044322fee2947952e031b73ab5c603349b307bdc27bc6cb8b8bbd7bd32321" + "9b8033a581b59eadebb09b3c4f3d2277d4f0343624acc817804728b25ab79717" + "2b4c5c21a22f9c7839d64300232eb66e53f31c723fa37fe387c7d3e50bdf9813" + "a30e5bb12cf4cd930c40cfb4e1fc622592a49588794494d56d24ea4b40c89fc0" + "596cc9ebb961c8cb10adde976a5d602b1c3f85b9b9a001ed3c6a4d3b1437f520" + "96cd1956d042a597d561a596ecd3d1735a8d570ea0ec27225a2c4aaff26306d1" + "526c1af3ca6d9cf5a2c98f47e1c46db9a33234cfd4d81f2c98538a09ebe76998" + "d0d8fd25997c7d255c6d66ece6fa56f11144950f027795e653008f4bd7ca2dee" + "85d8e90f3dc315130ce2a00375a318c7c3d97be2c8ce5b6db41a6254ff264fa6" + "155baee3b0773c0f497c573f19bb4f4240281f0b1f4f7be857a4e59d416c06b4" + "c50fa09e1810ddc6b1467baeac5a3668d11b6ecaa901440016f389f80acc4db9" + "77025e7f5924388c7e340a732e554440e76570f8dd71b7d640b3450d1fd5f041" + "0a18f9a3494f707c717b79b4bf75c98400b096b21653b5d217cf3565c9597456" + "f70703497a078763829bc01bb1cbc8fa04eadc9a6e3f6699587a9e75c94e5bab" + "0036e0b2e711392cff0047d0d6b05bd2a588bc109718954259f1d86678a579a3" + "120f19cfb2963f177aeb70f2d4844826262e51b80271272068ef5b3856fa8535" + "aa2a88b2d41f2a0e2fda7624c2850272ac4a2f561f8f2f7a318bfd5caf969614" + "9e4ac824ad3460538fdc25421beec2cc6818162d06bbed0c40a387192349db67" + "a118bada6cd5ab0140ee273204f628aad1c135f770279a651e24d8c14d75a605" + "9d76b96a6fd857def5e0b354b27ab937a5815d16b5fae407ff18222c6d1ed263" + "be68c95f32d908bd895cd76207ae726487567f9a67dad79abec316f683b17f2d" + "02bf07e0ac8b5bc6162cf94697b3c27cd1fea49b27f23ba2901871962506520c" + "392da8b6ad0d99f7013fbc06c2c17a569500c8a7696481c1cd33e9b14e40b82e" + "79a5f5db82571ba97bae3ad3e0479515bb0e2b0f3bfcd1fd33034efc6245eddd" + "7ee2086ddae2600d8ca73e214e8c2b0bdb2b047c6a464a562ed77b73d2d841c4" + "b34973551257713b753632efba348169abc90a68f42611a40126d7cb21b58695" + "568186f7e569d2ff0f9e745d0487dd2eb997cafc5abf9dd102e62ff66cba87", + None, + "", + "e301345a41a39a4d72fff8df69c98075a0cc082b802fc9b2b6bc503f926b65bd" + "df7f4c8f1cb49f6396afc8a70abe6d8aef0db478d4c6b2970076c6a0484fe76d" + 
"76b3a97625d79f1ce240e7c576750d295528286f719b413de9ada3e8eb78ed57" + "3603ce30d8bb761785dc30dbc320869e1a00" + ), + # 7.5 Ed448ph + ( + "833fe62409237b9d62ec77587520911e9a759cec1d19755b7da901b96dca3d42" + "ef7822e0d5104127dc05d6dbefde69e3ab2cec7c867c6e2c49", + "259b71c19f83ef77a7abd26524cbdb3161b590a48f7d17de3ee0ba9c52beb743" + "c09428a131d6b1b57303d90d8132c276d5ed3d5d01c0f53880", + "616263", + SHAKE256, + "", + "822f6901f7480f3d5f562c592994d9693602875614483256505600bbc281ae38" + "1f54d6bce2ea911574932f52a4e6cadd78769375ec3ffd1b801a0d9b3f4030cd" + "433964b6457ea39476511214f97469b57dd32dbc560a9a94d00bff07620464a3" + "ad203df7dc7ce360c3cd3696d9d9fab90f00" + ), + ( + "833fe62409237b9d62ec77587520911e9a759cec1d19755b7da901b96dca3d42" + "ef7822e0d5104127dc05d6dbefde69e3ab2cec7c867c6e2c49", + "259b71c19f83ef77a7abd26524cbdb3161b590a48f7d17de3ee0ba9c52beb743" + "c09428a131d6b1b57303d90d8132c276d5ed3d5d01c0f53880", + "616263", + SHAKE256, + "666f6f", + "c32299d46ec8ff02b54540982814dce9a05812f81962b649d528095916a2aa48" + "1065b1580423ef927ecf0af5888f90da0f6a9a85ad5dc3f280d91224ba9911a3" + "653d00e484e2ce232521481c8658df304bb7745a73514cdb9bf3e15784ab7128" + "4f8d0704a608c54a6b62d97beb511d132100", + ), +) + + +rfc8032_tv_bytes = [] +for tv_str in rfc8032_tv_str: + rfc8032_tv_bytes.append([unhexlify(i) if isinstance(i, str) else i for i in tv_str]) + + +class TestEdDSA(unittest.TestCase): + + def test_sign(self): + for sk, _, msg, hashmod, ctx, exp_signature in rfc8032_tv_bytes: + key = eddsa.import_private_key(sk) + signer = eddsa.new(key, 'rfc8032', context=ctx) + if hashmod is None: + # PureEdDSA + signature = signer.sign(msg) + else: + # HashEdDSA + hashobj = hashmod.new(msg) + signature = signer.sign(hashobj) + self.assertEqual(exp_signature, signature) + + def test_verify(self): + for _, pk, msg, hashmod, ctx, exp_signature in rfc8032_tv_bytes: + key = eddsa.import_public_key(pk) + verifier = eddsa.new(key, 'rfc8032', context=ctx) + if hashmod is None: + # PureEdDSA + verifier.verify(msg, exp_signature) + else: + # HashEdDSA + hashobj = hashmod.new(msg) + verifier.verify(hashobj, exp_signature) + + def test_negative(self): + key = ECC.generate(curve="ed25519") + self.assertRaises(ValueError, eddsa.new, key, 'rfc9999') + + nist_key = ECC.generate(curve="p256") + self.assertRaises(ValueError, eddsa.new, nist_key, 'rfc8032') + + +class TestExport_Ed25519(unittest.TestCase): + + def test_raw(self): + key = ECC.generate(curve="Ed25519") + x, y = key.pointQ.xy + raw = bytearray(key._export_eddsa()) + sign_x = raw[31] >> 7 + raw[31] &= 0x7F + yt = bytes_to_long(raw[::-1]) + self.assertEqual(y, yt) + self.assertEqual(x & 1, sign_x) + + key = ECC.construct(point_x=0, point_y=1, curve="Ed25519") + out = key._export_eddsa() + self.assertEqual(b'\x01' + b'\x00' * 31, out) + + +class TestExport_Ed448(unittest.TestCase): + + def test_raw(self): + key = ECC.generate(curve="Ed448") + x, y = key.pointQ.xy + raw = bytearray(key._export_eddsa()) + sign_x = raw[56] >> 7 + raw[56] &= 0x7F + yt = bytes_to_long(raw[::-1]) + self.assertEqual(y, yt) + self.assertEqual(x & 1, sign_x) + + key = ECC.construct(point_x=0, point_y=1, curve="Ed448") + out = key._export_eddsa() + self.assertEqual(b'\x01' + b'\x00' * 56, out) + + +class TestImport_Ed25519(unittest.TestCase): + + def test_raw(self): + Px = 24407857220263921307776619664228778204996144802740950419837658238229122415920 + Py = 56480760040633817885061096979765646085062883740629155052073094891081309750690 + encoded = b'\xa2\x05\xd6\x00\xe1 
\xe1\xc0\xff\x96\xee?V\x8e\xba/\xd3\x89\x06\xd7\xc4c\xe8$\xc2d\xd7a1\xfa\xde|' + key = eddsa.import_public_key(encoded) + self.assertEqual(Py, key.pointQ.y) + self.assertEqual(Px, key.pointQ.x) + + encoded = b'\x01' + b'\x00' * 31 + key = eddsa.import_public_key(encoded) + self.assertEqual(1, key.pointQ.y) + self.assertEqual(0, key.pointQ.x) + + +class TestImport_Ed448(unittest.TestCase): + + def test_raw(self): + Px = 0x153f42025aba3b0daecaa5cd79458b3146c7c9378c16c17b4a59bc3561113d90c169045bc12966c3f93e140c2ca0a3acc33d9205b9daf9b1 + Py = 0x38f5c0015d3dedd576c232810dd90373b5b1d631a12894c043b7be529cbae03ede177d8fa490b56131dbcb2465d2aba777ef839fc1719b25 + encoded = unhexlify("259b71c19f83ef77a7abd26524cbdb31" + "61b590a48f7d17de3ee0ba9c52beb743" + "c09428a131d6b1b57303d90d8132c276" + "d5ed3d5d01c0f53880") + key = eddsa.import_public_key(encoded) + self.assertEqual(Py, key.pointQ.y) + self.assertEqual(Px, key.pointQ.x) + + encoded = b'\x01' + b'\x00' * 56 + key = eddsa.import_public_key(encoded) + self.assertEqual(1, key.pointQ.y) + self.assertEqual(0, key.pointQ.x) + + +class TestVectorsEdDSAWycheproof(unittest.TestCase): + + def add_tests(self, filename): + + def pk(group): + elem = group['key']['pk'] + return unhexlify(elem) + + def sk(group): + elem = group['key']['sk'] + return unhexlify(elem) + + result = load_test_vectors_wycheproof(("Signature", "wycheproof"), + filename, + "Wycheproof ECDSA signature (%s)" + % filename, + group_tag={'pk': pk, 'sk': sk}) + self.tv += result + + def setUp(self): + self.tv = [] + self.add_tests("eddsa_test.json") + self.add_tests("ed448_test.json") + + def test_sign(self, tv): + if not tv.valid: + return + + self._id = "Wycheproof EdDSA Sign Test #%d (%s, %s)" % (tv.id, tv.comment, tv.filename) + key = eddsa.import_private_key(tv.sk) + signer = eddsa.new(key, 'rfc8032') + signature = signer.sign(tv.msg) + self.assertEqual(signature, tv.sig) + + def test_verify(self, tv): + self._id = "Wycheproof EdDSA Verify Test #%d (%s, %s)" % (tv.id, tv.comment, tv.filename) + key = eddsa.import_public_key(tv.pk) + verifier = eddsa.new(key, 'rfc8032') + try: + verifier.verify(tv.msg, tv.sig) + except ValueError: + assert not tv.valid + else: + assert tv.valid + + def runTest(self): + for tv in self.tv: + self.test_sign(tv) + self.test_verify(tv) + + +def get_tests(config={}): + + tests = [] + tests += list_test_cases(TestExport_Ed25519) + tests += list_test_cases(TestExport_Ed448) + tests += list_test_cases(TestImport_Ed25519) + tests += list_test_cases(TestImport_Ed448) + tests += list_test_cases(TestEdDSA) + tests += [TestVectorsEdDSAWycheproof()] + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_pkcs1_15.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_pkcs1_15.py new file mode 100644 index 0000000..3a3e30b --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_pkcs1_15.py @@ -0,0 +1,348 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. 
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+import json
+import unittest
+from binascii import unhexlify
+
+from Cryptodome.Util.py3compat import bchr
+from Cryptodome.Util.number import bytes_to_long
+from Cryptodome.Util.strxor import strxor
+from Cryptodome.SelfTest.st_common import list_test_cases
+from Cryptodome.SelfTest.loader import load_test_vectors, load_test_vectors_wycheproof
+
+from Cryptodome.Hash import (SHA1, SHA224, SHA256, SHA384, SHA512, SHA3_384,
+ SHA3_224, SHA3_256, SHA3_512)
+from Cryptodome.PublicKey import RSA
+from Cryptodome.Signature import pkcs1_15
+from Cryptodome.Signature import PKCS1_v1_5
+
+from Cryptodome.Util._file_system import pycryptodome_filename
+
+
+def load_hash_by_name(hash_name):
+ return __import__("Cryptodome.Hash."
+ hash_name, globals(), locals(), ["new"]) + + +class FIPS_PKCS1_Verify_Tests(unittest.TestCase): + + def shortDescription(self): + return "FIPS PKCS1 Tests (Verify)" + + def test_can_sign(self): + test_public_key = RSA.generate(1024).public_key() + verifier = pkcs1_15.new(test_public_key) + self.assertEqual(verifier.can_sign(), False) + + +class FIPS_PKCS1_Verify_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_verify = load_test_vectors(("Signature", "PKCS1-v1.5"), + "SigVer15_186-3.rsp", + "Signature Verification 186-3", + {'shaalg': lambda x: x, + 'd': lambda x: int(x), + 'result': lambda x: x}) or [] + + +for count, tv in enumerate(test_vectors_verify): + if isinstance(tv, str): + continue + if hasattr(tv, "n"): + modulus = tv.n + continue + + hash_module = load_hash_by_name(tv.shaalg.upper()) + hash_obj = hash_module.new(tv.msg) + public_key = RSA.construct([bytes_to_long(x) for x in (modulus, tv.e)]) # type: ignore + verifier = pkcs1_15.new(public_key) + + def positive_test(self, hash_obj=hash_obj, verifier=verifier, signature=tv.s): + verifier.verify(hash_obj, signature) + + def negative_test(self, hash_obj=hash_obj, verifier=verifier, signature=tv.s): + self.assertRaises(ValueError, verifier.verify, hash_obj, signature) + + if tv.result == 'f': + setattr(FIPS_PKCS1_Verify_Tests_KAT, "test_negative_%d" % count, negative_test) + else: + setattr(FIPS_PKCS1_Verify_Tests_KAT, "test_positive_%d" % count, positive_test) + + +class FIPS_PKCS1_Sign_Tests(unittest.TestCase): + + def shortDescription(self): + return "FIPS PKCS1 Tests (Sign)" + + def test_can_sign(self): + test_private_key = RSA.generate(1024) + signer = pkcs1_15.new(test_private_key) + self.assertEqual(signer.can_sign(), True) + + +class FIPS_PKCS1_Sign_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_sign = load_test_vectors(("Signature", "PKCS1-v1.5"), + "SigGen15_186-2.txt", + "Signature Generation 186-2", + {'shaalg': lambda x: x}) or [] + +test_vectors_sign += load_test_vectors(("Signature", "PKCS1-v1.5"), + "SigGen15_186-3.txt", + "Signature Generation 186-3", + {'shaalg': lambda x: x}) or [] + +for count, tv in enumerate(test_vectors_sign): + if isinstance(tv, str): + continue + if hasattr(tv, "n"): + modulus = tv.n + continue + if hasattr(tv, "e"): + private_key = RSA.construct([bytes_to_long(x) for x in (modulus, tv.e, tv.d)]) # type: ignore + signer = pkcs1_15.new(private_key) + continue + + hash_module = load_hash_by_name(tv.shaalg.upper()) + hash_obj = hash_module.new(tv.msg) + + def new_test(self, hash_obj=hash_obj, signer=signer, result=tv.s): + signature = signer.sign(hash_obj) + self.assertEqual(signature, result) + + setattr(FIPS_PKCS1_Sign_Tests_KAT, "test_%d" % count, new_test) + + +class PKCS1_15_NoParams(unittest.TestCase): + """Verify that PKCS#1 v1.5 signatures pass even without NULL parameters in + the algorithm identifier (PyCrypto/LP bug #1119552).""" + + rsakey = """-----BEGIN RSA PRIVATE KEY----- + MIIBOwIBAAJBAL8eJ5AKoIsjURpcEoGubZMxLD7+kT+TLr7UkvEtFrRhDDKMtuII + q19FrL4pUIMymPMSLBn3hJLe30Dw48GQM4UCAwEAAQJACUSDEp8RTe32ftq8IwG8 + Wojl5mAd1wFiIOrZ/Uv8b963WJOJiuQcVN29vxU5+My9GPZ7RA3hrDBEAoHUDPrI + OQIhAPIPLz4dphiD9imAkivY31Rc5AfHJiQRA7XixTcjEkojAiEAyh/pJHks/Mlr + +rdPNEpotBjfV4M4BkgGAA/ipcmaAjcCIQCHvhwwKVBLzzTscT2HeUdEeBMoiXXK + JACAr3sJQJGxIQIgarRp+m1WSKV1MciwMaTOnbU7wxFs9DP1pva76lYBzgUCIQC9 + n0CnZCJ6IZYqSt0H5N7+Q+2Ro64nuwV/OSQfM6sBwQ== + -----END RSA PRIVATE KEY-----""" + + msg = b"This is a test\x0a" + + # PKCS1 v1.5 signature of the message computed using SHA-1. 
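+ # (This is the interoperability case from LP bug #1119552 cited in the
+ # class docstring; verify() is expected to accept it all the same.)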
+ # The digestAlgorithm SEQUENCE does NOT contain the NULL parameter. + sig_str = "a287a13517f716e72fb14eea8e33a8db4a4643314607e7ca3e3e28"\ + "1893db74013dda8b855fd99f6fecedcb25fcb7a434f35cd0a101f8"\ + "b19348e0bd7b6f152dfc" + signature = unhexlify(sig_str) + + def runTest(self): + verifier = pkcs1_15.new(RSA.importKey(self.rsakey)) + hashed = SHA1.new(self.msg) + verifier.verify(hashed, self.signature) + + +class PKCS1_Legacy_Module_Tests(unittest.TestCase): + """Verify that the legacy module Cryptodome.Signature.PKCS1_v1_5 + behaves as expected. The only difference is that the verify() + method returns True/False and does not raise exceptions.""" + + def shortDescription(self): + return "Test legacy Cryptodome.Signature.PKCS1_v1_5" + + def runTest(self): + key = RSA.importKey(PKCS1_15_NoParams.rsakey) + hashed = SHA1.new(b"Test") + good_signature = PKCS1_v1_5.new(key).sign(hashed) + verifier = PKCS1_v1_5.new(key.public_key()) + + self.assertEqual(verifier.verify(hashed, good_signature), True) + + # Flip a few bits in the signature + bad_signature = strxor(good_signature, bchr(1) * len(good_signature)) + self.assertEqual(verifier.verify(hashed, bad_signature), False) + + +class PKCS1_All_Hashes_Tests(unittest.TestCase): + + def shortDescription(self): + return "Test PKCS#1v1.5 signature in combination with all hashes" + + def runTest(self): + + key = RSA.generate(1024) + signer = pkcs1_15.new(key) + hash_names = ("MD2", "MD4", "MD5", "RIPEMD160", "SHA1", + "SHA224", "SHA256", "SHA384", "SHA512", + "SHA3_224", "SHA3_256", "SHA3_384", "SHA3_512") + + for name in hash_names: + hashed = load_hash_by_name(name).new(b"Test") + signer.sign(hashed) + + from Cryptodome.Hash import BLAKE2b, BLAKE2s + for hash_size in (20, 32, 48, 64): + hashed_b = BLAKE2b.new(digest_bytes=hash_size, data=b"Test") + signer.sign(hashed_b) + for hash_size in (16, 20, 28, 32): + hashed_s = BLAKE2s.new(digest_bytes=hash_size, data=b"Test") + signer.sign(hashed_s) + + +class TestVectorsWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" + + def setUp(self): + self.tv = [] + self.add_tests("rsa_sig_gen_misc_test.json") + self.add_tests("rsa_signature_2048_sha224_test.json") + self.add_tests("rsa_signature_2048_sha256_test.json") + self.add_tests("rsa_signature_2048_sha384_test.json") + self.add_tests("rsa_signature_2048_sha3_224_test.json") + self.add_tests("rsa_signature_2048_sha3_256_test.json") + self.add_tests("rsa_signature_2048_sha3_384_test.json") + self.add_tests("rsa_signature_2048_sha3_512_test.json") + self.add_tests("rsa_signature_2048_sha512_test.json") + self.add_tests("rsa_signature_2048_sha512_224_test.json") + self.add_tests("rsa_signature_2048_sha512_256_test.json") + self.add_tests("rsa_signature_3072_sha256_test.json") + self.add_tests("rsa_signature_3072_sha384_test.json") + self.add_tests("rsa_signature_3072_sha3_256_test.json") + self.add_tests("rsa_signature_3072_sha3_384_test.json") + self.add_tests("rsa_signature_3072_sha3_512_test.json") + self.add_tests("rsa_signature_3072_sha512_test.json") + self.add_tests("rsa_signature_3072_sha512_256_test.json") + self.add_tests("rsa_signature_4096_sha384_test.json") + self.add_tests("rsa_signature_4096_sha512_test.json") + self.add_tests("rsa_signature_4096_sha512_256_test.json") + self.add_tests("rsa_signature_test.json") + + def add_tests(self, filename): + + def filter_rsa(group): + return RSA.import_key(group['keyPem']) + + 
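# filter_sha below returns either a hash module or, for the truncated
+ # SHA-512 variants, a hash object; both expose new(), so test_verify
+ # can call tv.hash_mod.new() on either one.
+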
def filter_sha(group):
+ hash_name = group['sha']
+ if hash_name == "SHA-512":
+ return SHA512
+ elif hash_name == "SHA-512/224":
+ return SHA512.new(truncate="224")
+ elif hash_name == "SHA-512/256":
+ return SHA512.new(truncate="256")
+ elif hash_name == "SHA3-512":
+ return SHA3_512
+ elif hash_name == "SHA-384":
+ return SHA384
+ elif hash_name == "SHA3-384":
+ return SHA3_384
+ elif hash_name == "SHA-256":
+ return SHA256
+ elif hash_name == "SHA3-256":
+ return SHA3_256
+ elif hash_name == "SHA-224":
+ return SHA224
+ elif hash_name == "SHA3-224":
+ return SHA3_224
+ elif hash_name == "SHA-1":
+ return SHA1
+ else:
+ raise ValueError("Unknown hash algorithm: " + hash_name)
+
+ def filter_type(group):
+ type_name = group['type']
+ if type_name not in ("RsassaPkcs1Verify", "RsassaPkcs1Generate"):
+ raise ValueError("Unknown type name " + type_name)
+
+ result = load_test_vectors_wycheproof(("Signature", "wycheproof"),
+ filename,
+ "Wycheproof PKCS#1v1.5 signature (%s)" % filename,
+ group_tag={'rsa_key': filter_rsa,
+ 'hash_mod': filter_sha,
+ 'type': filter_type})
+ self.tv += result
+
+ def shortDescription(self):
+ return self._id
+
+ def warn(self, tv):
+ if tv.warning and self._wycheproof_warnings:
+ import warnings
+ warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment))
+
+ def test_verify(self, tv):
+ self._id = "Wycheproof RSA PKCS#1 Test #" + str(tv.id)
+
+ hashed_msg = tv.hash_mod.new(tv.msg)
+ signer = pkcs1_15.new(tv.rsa_key)
+ try:
+ signature = signer.verify(hashed_msg, tv.sig)
+ except ValueError as e:
+ if tv.warning:
+ return
+ assert not tv.valid
+ else:
+ assert tv.valid
+ self.warn(tv)
+
+ def runTest(self):
+ for tv in self.tv:
+ self.test_verify(tv)
+
+
+def get_tests(config={}):
+ wycheproof_warnings = config.get('wycheproof_warnings')
+
+ tests = []
+ tests += list_test_cases(FIPS_PKCS1_Verify_Tests)
+ tests += list_test_cases(FIPS_PKCS1_Sign_Tests)
+ tests += list_test_cases(PKCS1_15_NoParams)
+ tests += list_test_cases(PKCS1_Legacy_Module_Tests)
+ tests += list_test_cases(PKCS1_All_Hashes_Tests)
+ tests += [TestVectorsWycheproof(wycheproof_warnings)]
+
+ if config.get('slow_tests'):
+ tests += list_test_cases(FIPS_PKCS1_Verify_Tests_KAT)
+ tests += list_test_cases(FIPS_PKCS1_Sign_Tests_KAT)
+
+ return tests
+
+if __name__ == '__main__':
+ def suite():
+ return unittest.TestSuite(get_tests())
+ unittest.main(defaultTest='suite')
diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_pss.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_pss.py
new file mode 100644
index 0000000..c3b1ce5
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Signature/test_pss.py
@@ -0,0 +1,377 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin <helderijs@gmail.com>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import unittest + +from Cryptodome.Util.py3compat import b, bchr +from Cryptodome.Util.number import bytes_to_long +from Cryptodome.Util.strxor import strxor +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.SelfTest.loader import load_test_vectors, load_test_vectors_wycheproof + +from Cryptodome.Hash import SHA1, SHA224, SHA256, SHA384, SHA512 +from Cryptodome.PublicKey import RSA +from Cryptodome.Signature import pss +from Cryptodome.Signature import PKCS1_PSS + +from Cryptodome.Signature.pss import MGF1 + + +def load_hash_by_name(hash_name): + return __import__("Cryptodome.Hash." + hash_name, globals(), locals(), ["new"]) + + +class PRNG(object): + + def __init__(self, stream): + self.stream = stream + self.idx = 0 + + def __call__(self, rnd_size): + result = self.stream[self.idx:self.idx + rnd_size] + self.idx += rnd_size + return result + + +class PSS_Tests(unittest.TestCase): + + rsa_key = b'-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAsvI34FgiTK8+txBvmooNGpNwk23YTU51dwNZi5yha3W4lA/Q\nvcZrDalkmD7ekWQwnduxVKa6pRSI13KBgeUOIqJoGXSWhntEtY3FEwvWOHW5AE7Q\njUzTzCiYT6TVaCcpa/7YLai+p6ai2g5f5Zfh4jSawa9uYeuggFygQq4IVW796MgV\nyqxYMM/arEj+/sKz3Viua9Rp9fFosertCYCX4DUTgW0mX9bwEnEOgjSI3pLOPXz1\n8vx+DRZS5wMCmwCUa0sKonLn3cAUPq+sGix7+eo7T0Z12MU8ud7IYVX/75r3cXiF\nPaYE2q8Le0kgOApIXbb+x74x0rNgyIh1yGygkwIDAQABAoIBABz4t1A0pLT6qHI2\nEIOaNz3mwhK0dZEqkz0GB1Dhtoax5ATgvKCFB98J3lYB08IBURe1snOsnMpOVUtg\naBRSM+QqnCUG6bnzKjAkuFP5liDE+oNQv1YpKp9CsUovuzdmI8Au3ewihl+ZTIN2\nUVNYMEOR1b5m+z2SSwWNOYsiJwpBrT7zkpdlDyjat7FiiPhMMIMXjhQFVxURMIcB\njUBtPzGvV/PG90cVDWi1wRGeeP1dDqti/jsnvykQ15KW1MqGrpeNKRmDdTy/Ucl1\nWIoYklKw3U456lgZ/rDTDB818+Tlnk35z4yF7d5ANPM8CKfqOPcnO1BCKVFzf4eq\n54wvUtkCgYEA1Zv2lp06l7rXMsvNtyYQjbFChezRDRnPwZmN4NCdRtTgGG1G0Ryd\nYz6WWoPGqZp0b4LAaaHd3W2GTcpXF8WXMKfMX1W+tMAxMozfsXRKMcHoypwuS5wT\nfJRXJCG4pvd57AB0iVUEJW2we+uGKU5Zxcx//id2nXGCpoRyViIplQsCgYEA1nVC\neHupHChht0Fh4N09cGqZHZzuwXjOUMzR3Vsfz+4WzVS3NvIgN4g5YgmQFOeKwo5y\niRq5yvubcNdFvf85eHWClg0zPAyxJCVUWigCrrOanGEhJo6re4idJvNVzu4Ucg0v\n6B3SJ1HsCda+ZSNz24bSyqRep8A+RoAaoVSFx5kCgYEAn3RvXPs9s+obnqWYiPF3\nRe5etE6Vt2vfNKwFxx6zaR6bsmBQjuUHcABWiHb6I71S0bMPI0tbrWGG8ibrYKl1\nNTLtUvVVCOS3VP7oNTWT9RTFTAnOXU7DFSo+6o/poWn3r36ff6zhDXeWWMr2OXtt\ndEQ1/2lCGEGVv+v61eVmmQUCgYABFHITPTwqwiFL1O5zPWnzyPWgaovhOYSAb6eW\n38CXQXGn8wdBJZL39J2lWrr4//l45VK6UgIhfYbY2JynSkO10ZGow8RARygVMILu\nOUlaK9lZdDvAf/NpGdUAvzTtZ9F+iYZ2OsA2JnlzyzsGM1l//3vMPWukmJk3ral0\nqoJJ8QKBgGRG3eVHnIegBbFVuMDp2NTcfuSuDVUQ1fGAwtPiFa8u81IodJnMk2pq\niXu2+0ytNA/M+SVrAnE2AgIzcaJbtr0p2srkuVM7KMWnG1vWFNjtXN8fAhf/joOv\nD+NmPL/N4uE57e40tbiU/H7KdyZaDt+5QiTmdhuyAe6CBjKsF2jy\n-----END RSA PRIVATE KEY-----' + msg = b'AAA' + tag = 
b'\x00[c5\xd8\xb0\x8b!D\x81\x83\x07\xc0\xdd\xb9\xb4\xb2`\x92\xe7\x02\xf1\xe1P\xea\xc3\xf0\xe3>\xddX5\xdd\x8e\xc5\x89\xef\xf3\xc2\xdc\xfeP\x02\x7f\x12+\xc9\xaf\xbb\xec\xfe\xb0\xa5\xb9\x08\x11P\x8fL\xee5\x9b\xb0k{=_\xd2\x14\xfb\x01R\xb7\xfe\x14}b\x03\x8d5Y\x89~}\xfc\xf2l\xd01-\xbd\xeb\x11\xcdV\x11\xe9l\x19k/o5\xa2\x0f\x15\xe7Q$\t=\xec\x1dAB\x19\xa5P\x9a\xaf\xa3G\x86"\xd6~\xf0<p5\x00\x86\xe0\xf3\x99\xc7+\xcfc,\\\x13)v\xcd\xff\x08o\x90\xc5\xd1\xca\x869\xf45\x1e\xfd\xa2\xf1n\xa3\xa6e\xc5\x11Q\xe4@\xbd\x17\x83x\xc9\x9b\xb5\xc7\xea\x03U\x9b\xa0\xccC\x17\xc9T\x86/\x05\x1c\xc7\x95hC\xf9b1\xbb\x05\xc3\xf0\x9a>j\xfcqkbs\x13\x84b\xe4\xbdm(\xed`\xa4F\xfb\x8f.\xe1\x8c)/_\x9eS\x98\xa4v\xb8\xdc\xfe\xf7/D\x18\x19\xb3T\x97:\xe2\x96s\xe8<\xa2\xb4\xb9\xf8/' + + def test_positive_1(self): + key = RSA.import_key(self.rsa_key) + h = SHA256.new(self.msg) + verifier = pss.new(key) + verifier.verify(h, self.tag) + + def test_negative_1(self): + key = RSA.import_key(self.rsa_key) + h = SHA256.new(self.msg + b'A') + verifier = pss.new(key) + tag = bytearray(self.tag) + self.assertRaises(ValueError, verifier.verify, h, tag) + + def test_negative_2(self): + key = RSA.import_key(self.rsa_key) + h = SHA256.new(self.msg) + verifier = pss.new(key, salt_bytes=1000) + tag = bytearray(self.tag) + self.assertRaises(ValueError, verifier.verify, h, tag) + + +class FIPS_PKCS1_Verify_Tests(unittest.TestCase): + + def shortDescription(self): + return "FIPS PKCS1 Tests (Verify)" + + def verify_positive(self, hashmod, message, public_key, salt, signature): + prng = PRNG(salt) + hashed = hashmod.new(message) + verifier = pss.new(public_key, salt_bytes=len(salt), rand_func=prng) + verifier.verify(hashed, signature) + + def verify_negative(self, hashmod, message, public_key, salt, signature): + prng = PRNG(salt) + hashed = hashmod.new(message) + verifier = pss.new(public_key, salt_bytes=len(salt), rand_func=prng) + self.assertRaises(ValueError, verifier.verify, hashed, signature) + + def test_can_sign(self): + test_public_key = RSA.generate(1024).public_key() + verifier = pss.new(test_public_key) + self.assertEqual(verifier.can_sign(), False) + + +class FIPS_PKCS1_Verify_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_verify = load_test_vectors(("Signature", "PKCS1-PSS"), + "SigVerPSS_186-3.rsp", + "Signature Verification 186-3", + {'shaalg': lambda x: x, + 'result': lambda x: x}) or [] + + +for count, tv in enumerate(test_vectors_verify): + if isinstance(tv, str): + continue + if hasattr(tv, "n"): + modulus = tv.n + continue + if hasattr(tv, "p"): + continue + + hash_module = load_hash_by_name(tv.shaalg.upper()) + hash_obj = hash_module.new(tv.msg) + public_key = RSA.construct([bytes_to_long(x) for x in (modulus, tv.e)]) # type: ignore + if tv.saltval != b("\x00"): + prng = PRNG(tv.saltval) + verifier = pss.new(public_key, salt_bytes=len(tv.saltval), rand_func=prng) + else: + verifier = pss.new(public_key, salt_bytes=0) + + def positive_test(self, hash_obj=hash_obj, verifier=verifier, signature=tv.s): + verifier.verify(hash_obj, signature) + + def negative_test(self, hash_obj=hash_obj, verifier=verifier, signature=tv.s): + self.assertRaises(ValueError, verifier.verify, hash_obj, signature) + + if tv.result == 'p': + setattr(FIPS_PKCS1_Verify_Tests_KAT, "test_positive_%d" % count, positive_test) + else: + setattr(FIPS_PKCS1_Verify_Tests_KAT, "test_negative_%d" % count, negative_test) + + +class FIPS_PKCS1_Sign_Tests(unittest.TestCase): + + def shortDescription(self): + return "FIPS PKCS1 Tests (Sign)" + + def test_can_sign(self): 
+ test_private_key = RSA.generate(1024) + signer = pss.new(test_private_key) + self.assertEqual(signer.can_sign(), True) + + +class FIPS_PKCS1_Sign_Tests_KAT(unittest.TestCase): + pass + + +test_vectors_sign = load_test_vectors(("Signature", "PKCS1-PSS"), + "SigGenPSS_186-2.txt", + "Signature Generation 186-2", + {'shaalg': lambda x: x}) or [] + +test_vectors_sign += load_test_vectors(("Signature", "PKCS1-PSS"), + "SigGenPSS_186-3.txt", + "Signature Generation 186-3", + {'shaalg': lambda x: x}) or [] + +for count, tv in enumerate(test_vectors_sign): + if isinstance(tv, str): + continue + if hasattr(tv, "n"): + modulus = tv.n + continue + if hasattr(tv, "e"): + private_key = RSA.construct([bytes_to_long(x) for x in (modulus, tv.e, tv.d)]) # type: ignore + continue + + hash_module = load_hash_by_name(tv.shaalg.upper()) + hash_obj = hash_module.new(tv.msg) + if tv.saltval != b("\x00"): + prng = PRNG(tv.saltval) + signer = pss.new(private_key, salt_bytes=len(tv.saltval), rand_func=prng) + else: + signer = pss.new(private_key, salt_bytes=0) + + def new_test(self, hash_obj=hash_obj, signer=signer, result=tv.s): + signature = signer.sign(hash_obj) + self.assertEqual(signature, result) + + setattr(FIPS_PKCS1_Sign_Tests_KAT, "test_%d" % count, new_test) + + +class PKCS1_Legacy_Module_Tests(unittest.TestCase): + """Verify that the legacy module Cryptodome.Signature.PKCS1_PSS + behaves as expected. The only difference is that the verify() + method returns True/False and does not raise exceptions.""" + + def shortDescription(self): + return "Test legacy Cryptodome.Signature.PKCS1_PSS" + + def runTest(self): + key = RSA.generate(1024) + hashed = SHA1.new(b("Test")) + good_signature = PKCS1_PSS.new(key).sign(hashed) + verifier = PKCS1_PSS.new(key.public_key()) + + self.assertEqual(verifier.verify(hashed, good_signature), True) + + # Flip a few bits in the signature + bad_signature = strxor(good_signature, bchr(1) * len(good_signature)) + self.assertEqual(verifier.verify(hashed, bad_signature), False) + + +class PKCS1_All_Hashes_Tests(unittest.TestCase): + + def shortDescription(self): + return "Test PKCS#1 PSS signature in combination with all hashes" + + def runTest(self): + + key = RSA.generate(1280) + signer = pss.new(key) + hash_names = ("MD2", "MD4", "MD5", "RIPEMD160", "SHA1", + "SHA224", "SHA256", "SHA384", "SHA512", + "SHA3_224", "SHA3_256", "SHA3_384", "SHA3_512") + + for name in hash_names: + hashed = load_hash_by_name(name).new(b("Test")) + signer.sign(hashed) + + from Cryptodome.Hash import BLAKE2b, BLAKE2s + for hash_size in (20, 32, 48, 64): + hashed_b = BLAKE2b.new(digest_bytes=hash_size, data=b("Test")) + signer.sign(hashed_b) + for hash_size in (16, 20, 28, 32): + hashed_s = BLAKE2s.new(digest_bytes=hash_size, data=b("Test")) + signer.sign(hashed_s) + + +def get_hash_module(hash_name): + if hash_name == "SHA-512": + hash_module = SHA512 + elif hash_name == "SHA-512/224": + hash_module = SHA512.new(truncate="224") + elif hash_name == "SHA-512/256": + hash_module = SHA512.new(truncate="256") + elif hash_name == "SHA-384": + hash_module = SHA384 + elif hash_name == "SHA-256": + hash_module = SHA256 + elif hash_name == "SHA-224": + hash_module = SHA224 + elif hash_name == "SHA-1": + hash_module = SHA1 + else: + raise ValueError("Unknown hash algorithm: " + hash_name) + return hash_module + + +class TestVectorsPSSWycheproof(unittest.TestCase): + + def __init__(self, wycheproof_warnings): + unittest.TestCase.__init__(self) + self._wycheproof_warnings = wycheproof_warnings + self._id = "None" 
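
> Reviewer note (sketch; the helper name is ours): `PKCS1_Legacy_Module_Tests` above pins down the one behavioural difference between the legacy `Cryptodome.Signature.PKCS1_PSS` module and the newer `pss` API — legacy `verify()` returns `True`/`False`, while the new API raises `ValueError`. Bridging the two is a small adapter:

```python
from Cryptodome.Signature import pss

def pss_verifies(public_key, hashed, signature):
    """Boolean wrapper over the exception-based pss API (name is ours)."""
    try:
        pss.new(public_key).verify(hashed, signature)
        return True
    except ValueError:
        return False
```
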
+ + def add_tests(self, filename): + + def filter_rsa(group): + return RSA.import_key(group['keyPem']) + + def filter_sha(group): + return get_hash_module(group['sha']) + + def filter_type(group): + type_name = group['type'] + if type_name not in ("RsassaPssVerify", ): + raise ValueError("Unknown type name " + type_name) + + def filter_slen(group): + return group['sLen'] + + def filter_mgf(group): + mgf = group['mgf'] + if mgf not in ("MGF1", ): + raise ValueError("Unknown MGF " + mgf) + mgf1_hash = get_hash_module(group['mgfSha']) + + def mgf(x, y, mh=mgf1_hash): + return MGF1(x, y, mh) + + return mgf + + result = load_test_vectors_wycheproof(("Signature", "wycheproof"), + filename, + "Wycheproof PSS signature (%s)" % filename, + group_tag={'key': filter_rsa, + 'hash_module': filter_sha, + 'sLen': filter_slen, + 'mgf': filter_mgf, + 'type': filter_type}) + return result + + def setUp(self): + self.tv = [] + self.add_tests("rsa_pss_2048_sha1_mgf1_20_test.json") + self.add_tests("rsa_pss_2048_sha256_mgf1_0_test.json") + self.add_tests("rsa_pss_2048_sha256_mgf1_32_test.json") + self.add_tests("rsa_pss_2048_sha512_256_mgf1_28_test.json") + self.add_tests("rsa_pss_2048_sha512_256_mgf1_32_test.json") + self.add_tests("rsa_pss_3072_sha256_mgf1_32_test.json") + self.add_tests("rsa_pss_4096_sha256_mgf1_32_test.json") + self.add_tests("rsa_pss_4096_sha512_mgf1_32_test.json") + self.add_tests("rsa_pss_misc_test.json") + + def shortDescription(self): + return self._id + + def warn(self, tv): + if tv.warning and self._wycheproof_warnings: + import warnings + warnings.warn("Wycheproof warning: %s (%s)" % (self._id, tv.comment)) + + def test_verify(self, tv): + self._id = "Wycheproof RSA PSS Test #%d (%s)" % (tv.id, tv.comment) + + hashed_msg = tv.hash_module.new(tv.msg) + signer = pss.new(tv.key, mask_func=tv.mgf, salt_bytes=tv.sLen) + try: + signature = signer.verify(hashed_msg, tv.sig) + except ValueError as e: + if tv.warning: + return + assert not tv.valid + else: + assert tv.valid + self.warn(tv) + + def runTest(self): + for tv in self.tv: + self.test_verify(tv) + + +def get_tests(config={}): + wycheproof_warnings = config.get('wycheproof_warnings') + + tests = [] + tests += list_test_cases(PSS_Tests) + tests += list_test_cases(FIPS_PKCS1_Verify_Tests) + tests += list_test_cases(FIPS_PKCS1_Sign_Tests) + tests += list_test_cases(PKCS1_Legacy_Module_Tests) + tests += list_test_cases(PKCS1_All_Hashes_Tests) + + if config.get('slow_tests'): + tests += list_test_cases(FIPS_PKCS1_Verify_Tests_KAT) + tests += list_test_cases(FIPS_PKCS1_Sign_Tests_KAT) + + tests += [TestVectorsPSSWycheproof(wycheproof_warnings)] + + return tests + + +if __name__ == '__main__': + def suite(): + return unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/__init__.py new file mode 100644 index 0000000..e52c490 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/__init__.py @@ -0,0 +1,46 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Util/__init__.py: Self-test for utility modules +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. 
To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-test for utility modules""" + +__revision__ = "$Id$" + +import os + +def get_tests(config={}): + tests = [] + from Cryptodome.SelfTest.Util import test_number; tests += test_number.get_tests(config=config) + from Cryptodome.SelfTest.Util import test_Counter; tests += test_Counter.get_tests(config=config) + from Cryptodome.SelfTest.Util import test_Padding; tests += test_Padding.get_tests(config=config) + from Cryptodome.SelfTest.Util import test_strxor; tests += test_strxor.get_tests(config=config) + from Cryptodome.SelfTest.Util import test_asn1; tests += test_asn1.get_tests(config=config) + from Cryptodome.SelfTest.Util import test_rfc1751; tests += test_rfc1751.get_tests(config=config) + return tests + +if __name__ == '__main__': + import unittest + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_Counter.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_Counter.py new file mode 100644 index 0000000..0d1e089 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_Counter.py @@ -0,0 +1,67 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Util/test_Counter: Self-test for the Cryptodome.Util.Counter module +# +# Written in 2009 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
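
> Reviewer note: every `SelfTest` package added in this diff follows the same convention as the `Util/__init__.py` above — a module-level `get_tests(config={})` that returns plain `unittest` cases, which parent packages concatenate into one suite. A sketch of running just this `Util` sub-suite on its own:

```python
import unittest

from Cryptodome.SelfTest.Util import get_tests

# Aggregate the Util self-tests the same way the package root does
suite = unittest.TestSuite(get_tests(config={}))
unittest.TextTestRunner(verbosity=1).run(suite)
```
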
+# =================================================================== + +"""Self-tests for Cryptodome.Util.Counter""" + +from Cryptodome.Util.py3compat import * + +import unittest + +class CounterTests(unittest.TestCase): + def setUp(self): + global Counter + from Cryptodome.Util import Counter + + def test_BE(self): + """Big endian""" + c = Counter.new(128) + c = Counter.new(128, little_endian=False) + + def test_LE(self): + """Little endian""" + c = Counter.new(128, little_endian=True) + + def test_nbits(self): + c = Counter.new(nbits=128) + self.assertRaises(ValueError, Counter.new, 129) + + def test_prefix(self): + c = Counter.new(128, prefix=b("xx")) + + def test_suffix(self): + c = Counter.new(128, suffix=b("xx")) + + def test_iv(self): + c = Counter.new(128, initial_value=2) + self.assertRaises(ValueError, Counter.new, 16, initial_value=0x1FFFF) + +def get_tests(config={}): + from Cryptodome.SelfTest.st_common import list_test_cases + return list_test_cases(CounterTests) + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_Padding.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_Padding.py new file mode 100644 index 0000000..d6a794e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_Padding.py @@ -0,0 +1,154 @@ +# +# SelfTest/Util/test_Padding.py: Self-test for padding functions +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
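
> Reviewer note (sketch with a throwaway key and nonce): the `Counter` objects exercised in `test_Counter.py` above only get constructor-level checks because their real consumer is a CTR-mode cipher. An 8-byte prefix plus a 64-bit counter fills the full 128-bit AES block:

```python
from Cryptodome.Cipher import AES
from Cryptodome.Random import get_random_bytes
from Cryptodome.Util import Counter

key = get_random_bytes(16)
nonce = get_random_bytes(8)

# 8-byte prefix + 64-bit counter = one 128-bit AES block
ctr = Counter.new(64, prefix=nonce)
ciphertext = AES.new(key, AES.MODE_CTR, counter=ctr).encrypt(b"attack at dawn")

ctr = Counter.new(64, prefix=nonce)   # counters are stateful: rebuild to decrypt
plaintext = AES.new(key, AES.MODE_CTR, counter=ctr).decrypt(ciphertext)
assert plaintext == b"attack at dawn"
```
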
+# =================================================================== + +import unittest +from binascii import unhexlify as uh + +from Cryptodome.Util.py3compat import * +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Util.Padding import pad, unpad + +class PKCS7_Tests(unittest.TestCase): + + def test1(self): + padded = pad(b(""), 4) + self.assertTrue(padded == uh(b("04040404"))) + padded = pad(b(""), 4, 'pkcs7') + self.assertTrue(padded == uh(b("04040404"))) + back = unpad(padded, 4) + self.assertTrue(back == b("")) + + def test2(self): + padded = pad(uh(b("12345678")), 4) + self.assertTrue(padded == uh(b("1234567804040404"))) + back = unpad(padded, 4) + self.assertTrue(back == uh(b("12345678"))) + + def test3(self): + padded = pad(uh(b("123456")), 4) + self.assertTrue(padded == uh(b("12345601"))) + back = unpad(padded, 4) + self.assertTrue(back == uh(b("123456"))) + + def test4(self): + padded = pad(uh(b("1234567890")), 4) + self.assertTrue(padded == uh(b("1234567890030303"))) + back = unpad(padded, 4) + self.assertTrue(back == uh(b("1234567890"))) + + def testn1(self): + self.assertRaises(ValueError, pad, uh(b("12")), 4, 'pkcs8') + + def testn2(self): + self.assertRaises(ValueError, unpad, b("\0\0\0"), 4) + self.assertRaises(ValueError, unpad, b(""), 4) + + def testn3(self): + self.assertRaises(ValueError, unpad, b("123456\x02"), 4) + self.assertRaises(ValueError, unpad, b("123456\x00"), 4) + self.assertRaises(ValueError, unpad, b("123456\x05\x05\x05\x05\x05"), 4) + +class X923_Tests(unittest.TestCase): + + def test1(self): + padded = pad(b(""), 4, 'x923') + self.assertTrue(padded == uh(b("00000004"))) + back = unpad(padded, 4, 'x923') + self.assertTrue(back == b("")) + + def test2(self): + padded = pad(uh(b("12345678")), 4, 'x923') + self.assertTrue(padded == uh(b("1234567800000004"))) + back = unpad(padded, 4, 'x923') + self.assertTrue(back == uh(b("12345678"))) + + def test3(self): + padded = pad(uh(b("123456")), 4, 'x923') + self.assertTrue(padded == uh(b("12345601"))) + back = unpad(padded, 4, 'x923') + self.assertTrue(back == uh(b("123456"))) + + def test4(self): + padded = pad(uh(b("1234567890")), 4, 'x923') + self.assertTrue(padded == uh(b("1234567890000003"))) + back = unpad(padded, 4, 'x923') + self.assertTrue(back == uh(b("1234567890"))) + + def testn1(self): + self.assertRaises(ValueError, unpad, b("123456\x02"), 4, 'x923') + self.assertRaises(ValueError, unpad, b("123456\x00"), 4, 'x923') + self.assertRaises(ValueError, unpad, b("123456\x00\x00\x00\x00\x05"), 4, 'x923') + self.assertRaises(ValueError, unpad, b(""), 4, 'x923') + +class ISO7816_Tests(unittest.TestCase): + + def test1(self): + padded = pad(b(""), 4, 'iso7816') + self.assertTrue(padded == uh(b("80000000"))) + back = unpad(padded, 4, 'iso7816') + self.assertTrue(back == b("")) + + def test2(self): + padded = pad(uh(b("12345678")), 4, 'iso7816') + self.assertTrue(padded == uh(b("1234567880000000"))) + back = unpad(padded, 4, 'iso7816') + self.assertTrue(back == uh(b("12345678"))) + + def test3(self): + padded = pad(uh(b("123456")), 4, 'iso7816') + self.assertTrue(padded == uh(b("12345680"))) + #import pdb; pdb.set_trace() + back = unpad(padded, 4, 'iso7816') + self.assertTrue(back == uh(b("123456"))) + + def test4(self): + padded = pad(uh(b("1234567890")), 4, 'iso7816') + self.assertTrue(padded == uh(b("1234567890800000"))) + back = unpad(padded, 4, 'iso7816') + self.assertTrue(back == uh(b("1234567890"))) + + def testn1(self): + self.assertRaises(ValueError, unpad, b("123456\x81"), 
4, 'iso7816') + self.assertRaises(ValueError, unpad, b(""), 4, 'iso7816') + +def get_tests(config={}): + tests = [] + tests += list_test_cases(PKCS7_Tests) + tests += list_test_cases(X923_Tests) + tests += list_test_cases(ISO7816_Tests) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_asn1.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_asn1.py new file mode 100644 index 0000000..811ac84 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_asn1.py @@ -0,0 +1,851 @@ +# +# SelfTest/Util/test_asn.py: Self-test for the Cryptodome.Util.asn1 module +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
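
> Reviewer note: the three padding styles covered by `test_Padding.py` above differ only in the filler bytes; with a single byte of padding, PKCS#7 and X9.23 happen to coincide. A sketch on the same 3-byte input the tests use:

```python
from Cryptodome.Util.Padding import pad, unpad

data = bytes.fromhex("123456")
assert pad(data, 4, "pkcs7").hex()   == "12345601"  # pad bytes repeat the pad length
assert pad(data, 4, "x923").hex()    == "12345601"  # zeros, last byte = pad length
assert pad(data, 4, "iso7816").hex() == "12345680"  # 0x80 marker, then zeros
assert unpad(pad(data, 4, "x923"), 4, "x923") == data
```
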
+# =================================================================== + +"""Self-tests for Cryptodome.Util.asn1""" + +import unittest + +from Cryptodome.Util.py3compat import * +from Cryptodome.Util.asn1 import (DerObject, DerSetOf, DerInteger, + DerBitString, + DerObjectId, DerNull, DerOctetString, + DerSequence, DerBoolean) + +class DerObjectTests(unittest.TestCase): + + def testObjInit1(self): + # Fail with invalid tag format (must be 1 byte) + self.assertRaises(ValueError, DerObject, b('\x00\x99')) + # Fail with invalid implicit tag (must be <0x1F) + self.assertRaises(ValueError, DerObject, 0x1F) + + # ------ + + def testObjEncode1(self): + # No payload + der = DerObject(b('\x02')) + self.assertEqual(der.encode(), b('\x02\x00')) + # Small payload (primitive) + der.payload = b('\x45') + self.assertEqual(der.encode(), b('\x02\x01\x45')) + # Invariant + self.assertEqual(der.encode(), b('\x02\x01\x45')) + # Initialize with numerical tag + der = DerObject(0x04) + der.payload = b('\x45') + self.assertEqual(der.encode(), b('\x04\x01\x45')) + # Initialize with constructed type + der = DerObject(b('\x10'), constructed=True) + self.assertEqual(der.encode(), b('\x30\x00')) + + def testObjEncode2(self): + # Initialize with payload + der = DerObject(0x03, b('\x12\x12')) + self.assertEqual(der.encode(), b('\x03\x02\x12\x12')) + + def testObjEncode3(self): + # Long payload + der = DerObject(b('\x10')) + der.payload = b("0")*128 + self.assertEqual(der.encode(), b('\x10\x81\x80' + "0"*128)) + + def testObjEncode4(self): + # Implicit tags (constructed) + der = DerObject(0x10, implicit=1, constructed=True) + der.payload = b('ppll') + self.assertEqual(der.encode(), b('\xa1\x04ppll')) + # Implicit tags (primitive) + der = DerObject(0x02, implicit=0x1E, constructed=False) + der.payload = b('ppll') + self.assertEqual(der.encode(), b('\x9E\x04ppll')) + + def testObjEncode5(self): + # Encode type with explicit tag + der = DerObject(0x10, explicit=5) + der.payload = b("xxll") + self.assertEqual(der.encode(), b("\xa5\x06\x10\x04xxll")) + + # ----- + + def testObjDecode1(self): + # Decode short payload + der = DerObject(0x02) + der.decode(b('\x02\x02\x01\x02')) + self.assertEqual(der.payload, b("\x01\x02")) + self.assertEqual(der._tag_octet, 0x02) + + def testObjDecode2(self): + # Decode long payload + der = DerObject(0x02) + der.decode(b('\x02\x81\x80' + "1"*128)) + self.assertEqual(der.payload, b("1")*128) + self.assertEqual(der._tag_octet, 0x02) + + def testObjDecode3(self): + # Decode payload with too much data gives error + der = DerObject(0x02) + self.assertRaises(ValueError, der.decode, b('\x02\x02\x01\x02\xFF')) + # Decode payload with too little data gives error + der = DerObject(0x02) + self.assertRaises(ValueError, der.decode, b('\x02\x02\x01')) + + def testObjDecode4(self): + # Decode implicit tag (primitive) + der = DerObject(0x02, constructed=False, implicit=0xF) + self.assertRaises(ValueError, der.decode, b('\x02\x02\x01\x02')) + der.decode(b('\x8F\x01\x00')) + self.assertEqual(der.payload, b('\x00')) + # Decode implicit tag (constructed) + der = DerObject(0x02, constructed=True, implicit=0xF) + self.assertRaises(ValueError, der.decode, b('\x02\x02\x01\x02')) + der.decode(b('\xAF\x01\x00')) + self.assertEqual(der.payload, b('\x00')) + + def testObjDecode5(self): + # Decode payload with unexpected tag gives error + der = DerObject(0x02) + self.assertRaises(ValueError, der.decode, b('\x03\x02\x01\x02')) + + def testObjDecode6(self): + # Arbitrary DER object + der = DerObject() + 
der.decode(b('\x65\x01\x88')) + self.assertEqual(der._tag_octet, 0x65) + self.assertEqual(der.payload, b('\x88')) + + def testObjDecode7(self): + # Decode explicit tag + der = DerObject(0x10, explicit=5) + der.decode(b("\xa5\x06\x10\x04xxll")) + self.assertEqual(der._inner_tag_octet, 0x10) + self.assertEqual(der.payload, b('xxll')) + + # Explicit tag may be 0 + der = DerObject(0x10, explicit=0) + der.decode(b("\xa0\x06\x10\x04xxll")) + self.assertEqual(der._inner_tag_octet, 0x10) + self.assertEqual(der.payload, b('xxll')) + + def testObjDecode8(self): + # Verify that decode returns the object + der = DerObject(0x02) + self.assertEqual(der, der.decode(b('\x02\x02\x01\x02'))) + +class DerIntegerTests(unittest.TestCase): + + def testInit1(self): + der = DerInteger(1) + self.assertEqual(der.encode(), b('\x02\x01\x01')) + + def testEncode1(self): + # Single-byte integers + # Value 0 + der = DerInteger(0) + self.assertEqual(der.encode(), b('\x02\x01\x00')) + # Value 1 + der = DerInteger(1) + self.assertEqual(der.encode(), b('\x02\x01\x01')) + # Value 127 + der = DerInteger(127) + self.assertEqual(der.encode(), b('\x02\x01\x7F')) + + def testEncode2(self): + # Multi-byte integers + # Value 128 + der = DerInteger(128) + self.assertEqual(der.encode(), b('\x02\x02\x00\x80')) + # Value 0x180 + der = DerInteger(0x180) + self.assertEqual(der.encode(), b('\x02\x02\x01\x80')) + # One very long integer + der = DerInteger(2**2048) + self.assertEqual(der.encode(), + b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) + + def testEncode3(self): + # Negative integers + # Value -1 + der = DerInteger(-1) + self.assertEqual(der.encode(), b('\x02\x01\xFF')) + # Value -128 + der = DerInteger(-128) + self.assertEqual(der.encode(), b('\x02\x01\x80')) + # Value + der = DerInteger(-87873) + self.assertEqual(der.encode(), b('\x02\x03\xFE\xA8\xBF')) + + def testEncode4(self): + # Explicit encoding + number = DerInteger(0x34, explicit=3) + self.assertEqual(number.encode(), b('\xa3\x03\x02\x01\x34')) + + # ----- + + def testDecode1(self): + # Single-byte integer + der = DerInteger() + # Value 0 + der.decode(b('\x02\x01\x00')) + self.assertEqual(der.value, 0) + # Value 1 + der.decode(b('\x02\x01\x01')) + self.assertEqual(der.value, 1) + # Value 127 + der.decode(b('\x02\x01\x7F')) + self.assertEqual(der.value, 127) + + def testDecode2(self): + # Multi-byte integer + der = DerInteger() 
+ # Value 0x180L + der.decode(b('\x02\x02\x01\x80')) + self.assertEqual(der.value,0x180) + # One very long integer + der.decode( + b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) + self.assertEqual(der.value,2**2048) + + def testDecode3(self): + # Negative integer + der = DerInteger() + # Value -1 + der.decode(b('\x02\x01\xFF')) + self.assertEqual(der.value, -1) + # Value -32768 + der.decode(b('\x02\x02\x80\x00')) + self.assertEqual(der.value, -32768) + + def testDecode5(self): + # We still accept BER integer format + der = DerInteger() + # Redundant leading zeroes + der.decode(b('\x02\x02\x00\x01')) + self.assertEqual(der.value, 1) + # Redundant leading 0xFF + der.decode(b('\x02\x02\xFF\xFF')) + self.assertEqual(der.value, -1) + # Empty payload + der.decode(b('\x02\x00')) + self.assertEqual(der.value, 0) + + def testDecode6(self): + # Explicit encoding + number = DerInteger(explicit=3) + number.decode(b('\xa3\x03\x02\x01\x34')) + self.assertEqual(number.value, 0x34) + + def testDecode7(self): + # Verify decode returns the DerInteger + der = DerInteger() + self.assertEqual(der, der.decode(b('\x02\x01\x7F'))) + + ### + + def testStrict1(self): + number = DerInteger() + + number.decode(b'\x02\x02\x00\x01') + number.decode(b'\x02\x02\x00\x7F') + self.assertRaises(ValueError, number.decode, b'\x02\x02\x00\x01', strict=True) + self.assertRaises(ValueError, number.decode, b'\x02\x02\x00\x7F', strict=True) + + ### + + def testErrDecode1(self): + # Wide length field + der = DerInteger() + self.assertRaises(ValueError, der.decode, b('\x02\x81\x01\x01')) + + +class DerSequenceTests(unittest.TestCase): + + def testInit1(self): + der = DerSequence([1, DerInteger(2), b('0\x00')]) + self.assertEqual(der.encode(), b('0\x08\x02\x01\x01\x02\x01\x020\x00')) + + def testEncode1(self): + # Empty sequence + der = DerSequence() + self.assertEqual(der.encode(), b('0\x00')) + self.assertFalse(der.hasOnlyInts()) + # One single-byte integer (zero) + der.append(0) + self.assertEqual(der.encode(), b('0\x03\x02\x01\x00')) + self.assertEqual(der.hasInts(),1) + self.assertEqual(der.hasInts(False),1) + self.assertTrue(der.hasOnlyInts()) + self.assertTrue(der.hasOnlyInts(False)) + # Invariant + self.assertEqual(der.encode(), b('0\x03\x02\x01\x00')) + + def testEncode2(self): + # Indexing + der = DerSequence() + der.append(0) + der[0] = 1 + self.assertEqual(len(der),1) + 
self.assertEqual(der[0],1) + self.assertEqual(der[-1],1) + self.assertEqual(der.encode(), b('0\x03\x02\x01\x01')) + # + der[:] = [1] + self.assertEqual(len(der),1) + self.assertEqual(der[0],1) + self.assertEqual(der.encode(), b('0\x03\x02\x01\x01')) + + def testEncode3(self): + # One multi-byte integer (non-zero) + der = DerSequence() + der.append(0x180) + self.assertEqual(der.encode(), b('0\x04\x02\x02\x01\x80')) + + def testEncode4(self): + # One very long integer + der = DerSequence() + der.append(2**2048) + self.assertEqual(der.encode(), b('0\x82\x01\x05')+ + b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) + + def testEncode5(self): + der = DerSequence() + der += 1 + der += b('\x30\x00') + self.assertEqual(der.encode(), b('\x30\x05\x02\x01\x01\x30\x00')) + + def testEncode6(self): + # Two positive integers + der = DerSequence() + der.append(0x180) + der.append(0xFF) + self.assertEqual(der.encode(), b('0\x08\x02\x02\x01\x80\x02\x02\x00\xff')) + self.assertTrue(der.hasOnlyInts()) + self.assertTrue(der.hasOnlyInts(False)) + # Two mixed integers + der = DerSequence() + der.append(2) + der.append(-2) + self.assertEqual(der.encode(), b('0\x06\x02\x01\x02\x02\x01\xFE')) + self.assertEqual(der.hasInts(), 1) + self.assertEqual(der.hasInts(False), 2) + self.assertFalse(der.hasOnlyInts()) + self.assertTrue(der.hasOnlyInts(False)) + # + der.append(0x01) + der[1:] = [9,8] + self.assertEqual(len(der),3) + self.assertEqual(der[1:],[9,8]) + self.assertEqual(der[1:-1],[9]) + self.assertEqual(der.encode(), b('0\x09\x02\x01\x02\x02\x01\x09\x02\x01\x08')) + + def testEncode7(self): + # One integer and another type (already encoded) + der = DerSequence() + der.append(0x180) + der.append(b('0\x03\x02\x01\x05')) + self.assertEqual(der.encode(), b('0\x09\x02\x02\x01\x800\x03\x02\x01\x05')) + self.assertFalse(der.hasOnlyInts()) + + def testEncode8(self): + # One integer and another type (yet to encode) + der = DerSequence() + der.append(0x180) + der.append(DerSequence([5])) + self.assertEqual(der.encode(), b('0\x09\x02\x02\x01\x800\x03\x02\x01\x05')) + self.assertFalse(der.hasOnlyInts()) + + #### + + def testDecode1(self): + # Empty sequence + der = DerSequence() + der.decode(b('0\x00')) + self.assertEqual(len(der),0) + # One single-byte integer (zero) + der.decode(b('0\x03\x02\x01\x00')) + self.assertEqual(len(der),1) + self.assertEqual(der[0],0) + # Invariant + 
der.decode(b('0\x03\x02\x01\x00')) + self.assertEqual(len(der),1) + self.assertEqual(der[0],0) + + def testDecode2(self): + # One single-byte integer (non-zero) + der = DerSequence() + der.decode(b('0\x03\x02\x01\x7f')) + self.assertEqual(len(der),1) + self.assertEqual(der[0],127) + + def testDecode4(self): + # One very long integer + der = DerSequence() + der.decode(b('0\x82\x01\x05')+ + b('\x02\x82\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')+ + b('\x00\x00\x00\x00\x00\x00\x00\x00\x00')) + self.assertEqual(len(der),1) + self.assertEqual(der[0],2**2048) + + def testDecode6(self): + # Two integers + der = DerSequence() + der.decode(b('0\x08\x02\x02\x01\x80\x02\x02\x00\xff')) + self.assertEqual(len(der),2) + self.assertEqual(der[0],0x180) + self.assertEqual(der[1],0xFF) + + def testDecode7(self): + # One integer and 2 other types + der = DerSequence() + der.decode(b('0\x0A\x02\x02\x01\x80\x24\x02\xb6\x63\x12\x00')) + self.assertEqual(len(der),3) + self.assertEqual(der[0],0x180) + self.assertEqual(der[1],b('\x24\x02\xb6\x63')) + self.assertEqual(der[2],b('\x12\x00')) + + def testDecode8(self): + # Only 2 other types + der = DerSequence() + der.decode(b('0\x06\x24\x02\xb6\x63\x12\x00')) + self.assertEqual(len(der),2) + self.assertEqual(der[0],b('\x24\x02\xb6\x63')) + self.assertEqual(der[1],b('\x12\x00')) + self.assertEqual(der.hasInts(), 0) + self.assertEqual(der.hasInts(False), 0) + self.assertFalse(der.hasOnlyInts()) + self.assertFalse(der.hasOnlyInts(False)) + + def testDecode9(self): + # Verify that decode returns itself + der = DerSequence() + self.assertEqual(der, der.decode(b('0\x06\x24\x02\xb6\x63\x12\x00'))) + + ### + + def testErrDecode1(self): + # Not a sequence + der = DerSequence() + self.assertRaises(ValueError, der.decode, b('')) + self.assertRaises(ValueError, der.decode, b('\x00')) + self.assertRaises(ValueError, der.decode, b('\x30')) + + def testErrDecode2(self): + der = DerSequence() + # Too much data + self.assertRaises(ValueError, der.decode, b('\x30\x00\x00')) + + def testErrDecode3(self): + # Wrong length format + der = DerSequence() + # Missing length in sub-item + self.assertRaises(ValueError, der.decode, b('\x30\x04\x02\x01\x01\x00')) + # Valid BER, but invalid DER length + self.assertRaises(ValueError, der.decode, b('\x30\x81\x03\x02\x01\x01')) + self.assertRaises(ValueError, der.decode, b('\x30\x04\x02\x81\x01\x01')) + + def test_expected_nr_elements(self): + der_bin = 
DerSequence([1, 2, 3]).encode() + + DerSequence().decode(der_bin, nr_elements=3) + DerSequence().decode(der_bin, nr_elements=(2,3)) + self.assertRaises(ValueError, DerSequence().decode, der_bin, nr_elements=1) + self.assertRaises(ValueError, DerSequence().decode, der_bin, nr_elements=(4,5)) + + def test_expected_only_integers(self): + + der_bin1 = DerSequence([1, 2, 3]).encode() + der_bin2 = DerSequence([1, 2, DerSequence([3, 4])]).encode() + + DerSequence().decode(der_bin1, only_ints_expected=True) + DerSequence().decode(der_bin1, only_ints_expected=False) + DerSequence().decode(der_bin2, only_ints_expected=False) + self.assertRaises(ValueError, DerSequence().decode, der_bin2, only_ints_expected=True) + + +class DerOctetStringTests(unittest.TestCase): + + def testInit1(self): + der = DerOctetString(b('\xFF')) + self.assertEqual(der.encode(), b('\x04\x01\xFF')) + + def testEncode1(self): + # Empty sequence + der = DerOctetString() + self.assertEqual(der.encode(), b('\x04\x00')) + # Small payload + der.payload = b('\x01\x02') + self.assertEqual(der.encode(), b('\x04\x02\x01\x02')) + + #### + + def testDecode1(self): + # Empty sequence + der = DerOctetString() + der.decode(b('\x04\x00')) + self.assertEqual(der.payload, b('')) + # Small payload + der.decode(b('\x04\x02\x01\x02')) + self.assertEqual(der.payload, b('\x01\x02')) + + def testDecode2(self): + # Verify that decode returns the object + der = DerOctetString() + self.assertEqual(der, der.decode(b('\x04\x00'))) + + def testErrDecode1(self): + # No leftovers allowed + der = DerOctetString() + self.assertRaises(ValueError, der.decode, b('\x04\x01\x01\xff')) + +class DerNullTests(unittest.TestCase): + + def testEncode1(self): + der = DerNull() + self.assertEqual(der.encode(), b('\x05\x00')) + + #### + + def testDecode1(self): + # Empty sequence + der = DerNull() + self.assertEqual(der, der.decode(b('\x05\x00'))) + +class DerObjectIdTests(unittest.TestCase): + + def testInit1(self): + der = DerObjectId("1.1") + self.assertEqual(der.encode(), b'\x06\x01)') + + def testEncode1(self): + der = DerObjectId('1.2.840.113549.1.1.1') + self.assertEqual(der.encode(), b'\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01') + + der = DerObjectId() + der.value = '1.2.840.113549.1.1.1' + self.assertEqual(der.encode(), b'\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01') + + der = DerObjectId('2.999.1234') + self.assertEqual(der.encode(), b'\x06\x04\x88\x37\x89\x52') + + def testEncode2(self): + der = DerObjectId('3.4') + self.assertRaises(ValueError, der.encode) + + der = DerObjectId('1.40') + self.assertRaises(ValueError, der.encode) + + #### + + def testDecode1(self): + # Empty sequence + der = DerObjectId() + der.decode(b'\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01') + self.assertEqual(der.value, '1.2.840.113549.1.1.1') + + def testDecode2(self): + # Verify that decode returns the object + der = DerObjectId() + self.assertEqual(der, + der.decode(b'\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01')) + + def testDecode3(self): + der = DerObjectId() + der.decode(b'\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x00\x01') + self.assertEqual(der.value, '1.2.840.113549.1.0.1') + + def testDecode4(self): + der = DerObjectId() + der.decode(b'\x06\x04\x88\x37\x89\x52') + self.assertEqual(der.value, '2.999.1234') + + +class DerBitStringTests(unittest.TestCase): + + def testInit1(self): + der = DerBitString(b("\xFF")) + self.assertEqual(der.encode(), b('\x03\x02\x00\xFF')) + + def testInit2(self): + der = DerBitString(DerInteger(1)) + self.assertEqual(der.encode(), 
b('\x03\x04\x00\x02\x01\x01')) + + def testEncode1(self): + # Empty sequence + der = DerBitString() + self.assertEqual(der.encode(), b('\x03\x01\x00')) + # Small payload + der = DerBitString(b('\x01\x02')) + self.assertEqual(der.encode(), b('\x03\x03\x00\x01\x02')) + # Small payload + der = DerBitString() + der.value = b('\x01\x02') + self.assertEqual(der.encode(), b('\x03\x03\x00\x01\x02')) + + #### + + def testDecode1(self): + # Empty sequence + der = DerBitString() + der.decode(b('\x03\x00')) + self.assertEqual(der.value, b('')) + # Small payload + der.decode(b('\x03\x03\x00\x01\x02')) + self.assertEqual(der.value, b('\x01\x02')) + + def testDecode2(self): + # Verify that decode returns the object + der = DerBitString() + self.assertEqual(der, der.decode(b('\x03\x00'))) + + +class DerSetOfTests(unittest.TestCase): + + def testInit1(self): + der = DerSetOf([DerInteger(1), DerInteger(2)]) + self.assertEqual(der.encode(), b('1\x06\x02\x01\x01\x02\x01\x02')) + + def testEncode1(self): + # Empty set + der = DerSetOf() + self.assertEqual(der.encode(), b('1\x00')) + # One single-byte integer (zero) + der.add(0) + self.assertEqual(der.encode(), b('1\x03\x02\x01\x00')) + # Invariant + self.assertEqual(der.encode(), b('1\x03\x02\x01\x00')) + + def testEncode2(self): + # Two integers + der = DerSetOf() + der.add(0x180) + der.add(0xFF) + self.assertEqual(der.encode(), b('1\x08\x02\x02\x00\xff\x02\x02\x01\x80')) + # Initialize with integers + der = DerSetOf([0x180, 0xFF]) + self.assertEqual(der.encode(), b('1\x08\x02\x02\x00\xff\x02\x02\x01\x80')) + + def testEncode3(self): + # One integer and another type (no matter what it is) + der = DerSetOf() + der.add(0x180) + self.assertRaises(ValueError, der.add, b('\x00\x02\x00\x00')) + + def testEncode4(self): + # Only non integers + der = DerSetOf() + der.add(b('\x01\x00')) + der.add(b('\x01\x01\x01')) + self.assertEqual(der.encode(), b('1\x05\x01\x00\x01\x01\x01')) + + #### + + def testDecode1(self): + # Empty sequence + der = DerSetOf() + der.decode(b('1\x00')) + self.assertEqual(len(der),0) + # One single-byte integer (zero) + der.decode(b('1\x03\x02\x01\x00')) + self.assertEqual(len(der),1) + self.assertEqual(list(der),[0]) + + def testDecode2(self): + # Two integers + der = DerSetOf() + der.decode(b('1\x08\x02\x02\x01\x80\x02\x02\x00\xff')) + self.assertEqual(len(der),2) + l = list(der) + self.assertTrue(0x180 in l) + self.assertTrue(0xFF in l) + + def testDecode3(self): + # One integer and 2 other types + der = DerSetOf() + #import pdb; pdb.set_trace() + self.assertRaises(ValueError, der.decode, + b('0\x0A\x02\x02\x01\x80\x24\x02\xb6\x63\x12\x00')) + + def testDecode4(self): + # Verify that decode returns the object + der = DerSetOf() + self.assertEqual(der, + der.decode(b('1\x08\x02\x02\x01\x80\x02\x02\x00\xff'))) + + ### + + def testErrDecode1(self): + # No leftovers allowed + der = DerSetOf() + self.assertRaises(ValueError, der.decode, + b('1\x08\x02\x02\x01\x80\x02\x02\x00\xff\xAA')) + + +class DerBooleanTests(unittest.TestCase): + + def testEncode1(self): + der = DerBoolean(False) + self.assertEqual(der.encode(), b'\x01\x01\x00') + + def testEncode2(self): + der = DerBoolean(True) + self.assertEqual(der.encode(), b'\x01\x01\xFF') + + def testEncode3(self): + der = DerBoolean(False, implicit=0x12) + self.assertEqual(der.encode(), b'\x92\x01\x00') + + def testEncode4(self): + der = DerBoolean(False, explicit=0x05) + self.assertEqual(der.encode(), b'\xA5\x03\x01\x01\x00') + #### + + def testDecode1(self): + der = DerBoolean() + 
der.decode(b'\x01\x01\x00') + self.assertEqual(der.value, False) + + def testDecode2(self): + der = DerBoolean() + der.decode(b'\x01\x01\xFF') + self.assertEqual(der.value, True) + + def testDecode3(self): + der = DerBoolean(implicit=0x12) + der.decode(b'\x92\x01\x00') + self.assertEqual(der.value, False) + + def testDecode4(self): + der = DerBoolean(explicit=0x05) + der.decode(b'\xA5\x03\x01\x01\x00') + self.assertEqual(der.value, False) + + def testErrorDecode1(self): + der = DerBoolean() + # Wrong tag + self.assertRaises(ValueError, der.decode, b'\x02\x01\x00') + + def testErrorDecode2(self): + der = DerBoolean() + # Payload too long + self.assertRaises(ValueError, der.decode, b'\x01\x01\x00\xFF') + + +def get_tests(config={}): + from Cryptodome.SelfTest.st_common import list_test_cases + listTests = [] + listTests += list_test_cases(DerObjectTests) + listTests += list_test_cases(DerIntegerTests) + listTests += list_test_cases(DerSequenceTests) + listTests += list_test_cases(DerOctetStringTests) + listTests += list_test_cases(DerNullTests) + listTests += list_test_cases(DerObjectIdTests) + listTests += list_test_cases(DerBitStringTests) + listTests += list_test_cases(DerSetOfTests) + listTests += list_test_cases(DerBooleanTests) + return listTests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_number.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_number.py new file mode 100644 index 0000000..8221443 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_number.py @@ -0,0 +1,192 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/Util/test_number.py: Self-test for parts of the Cryptodome.Util.number module +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self-tests for (some of) Cryptodome.Util.number""" + +import math +import unittest + +from Cryptodome.Util.py3compat import * +from Cryptodome.SelfTest.st_common import list_test_cases + +from Cryptodome.Util import number +from Cryptodome.Util.number import long_to_bytes + + +class MyError(Exception): + """Dummy exception used for tests""" + +# NB: In some places, we compare tuples instead of just output values so that +# if any inputs cause a test failure, we'll be able to tell which ones. 
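
> Reviewer note (sketch): the `Util.number` helpers exercised below compose into a simple round trip — `size()` returns a bit length, and the optional second argument of `long_to_bytes` left-pads the result with zeros to a multiple of that block size:

```python
from Cryptodome.Util.number import bytes_to_long, long_to_bytes, size

n = bytes_to_long(b"\x01\x00")             # 256
assert size(n) == 9                        # size() is a bit length
assert long_to_bytes(n) == b"\x01\x00"
assert long_to_bytes(n, 4) == b"\x00\x00\x01\x00"   # padded to the blocksize
```
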
+ +class MiscTests(unittest.TestCase): + + def test_ceil_div(self): + """Util.number.ceil_div""" + self.assertRaises(TypeError, number.ceil_div, "1", 1) + self.assertRaises(ZeroDivisionError, number.ceil_div, 1, 0) + self.assertRaises(ZeroDivisionError, number.ceil_div, -1, 0) + + # b = 1 + self.assertEqual(0, number.ceil_div(0, 1)) + self.assertEqual(1, number.ceil_div(1, 1)) + self.assertEqual(2, number.ceil_div(2, 1)) + self.assertEqual(3, number.ceil_div(3, 1)) + + # b = 2 + self.assertEqual(0, number.ceil_div(0, 2)) + self.assertEqual(1, number.ceil_div(1, 2)) + self.assertEqual(1, number.ceil_div(2, 2)) + self.assertEqual(2, number.ceil_div(3, 2)) + self.assertEqual(2, number.ceil_div(4, 2)) + self.assertEqual(3, number.ceil_div(5, 2)) + + # b = 3 + self.assertEqual(0, number.ceil_div(0, 3)) + self.assertEqual(1, number.ceil_div(1, 3)) + self.assertEqual(1, number.ceil_div(2, 3)) + self.assertEqual(1, number.ceil_div(3, 3)) + self.assertEqual(2, number.ceil_div(4, 3)) + self.assertEqual(2, number.ceil_div(5, 3)) + self.assertEqual(2, number.ceil_div(6, 3)) + self.assertEqual(3, number.ceil_div(7, 3)) + + # b = 4 + self.assertEqual(0, number.ceil_div(0, 4)) + self.assertEqual(1, number.ceil_div(1, 4)) + self.assertEqual(1, number.ceil_div(2, 4)) + self.assertEqual(1, number.ceil_div(3, 4)) + self.assertEqual(1, number.ceil_div(4, 4)) + self.assertEqual(2, number.ceil_div(5, 4)) + self.assertEqual(2, number.ceil_div(6, 4)) + self.assertEqual(2, number.ceil_div(7, 4)) + self.assertEqual(2, number.ceil_div(8, 4)) + self.assertEqual(3, number.ceil_div(9, 4)) + + def test_getPrime(self): + """Util.number.getPrime""" + self.assertRaises(ValueError, number.getPrime, -100) + self.assertRaises(ValueError, number.getPrime, 0) + self.assertRaises(ValueError, number.getPrime, 1) + + bits = 4 + for i in range(100): + x = number.getPrime(bits) + self.assertEqual(x >= (1 << bits - 1), 1) + self.assertEqual(x < (1 << bits), 1) + + bits = 512 + x = number.getPrime(bits) + self.assertNotEqual(x % 2, 0) + self.assertEqual(x >= (1 << bits - 1), 1) + self.assertEqual(x < (1 << bits), 1) + + def test_getStrongPrime(self): + """Util.number.getStrongPrime""" + self.assertRaises(ValueError, number.getStrongPrime, 256) + self.assertRaises(ValueError, number.getStrongPrime, 513) + bits = 512 + x = number.getStrongPrime(bits) + self.assertNotEqual(x % 2, 0) + self.assertEqual(x > (1 << bits-1)-1, 1) + self.assertEqual(x < (1 << bits), 1) + e = 2**16+1 + x = number.getStrongPrime(bits, e) + self.assertEqual(number.GCD(x-1, e), 1) + self.assertNotEqual(x % 2, 0) + self.assertEqual(x > (1 << bits-1)-1, 1) + self.assertEqual(x < (1 << bits), 1) + e = 2**16+2 + x = number.getStrongPrime(bits, e) + self.assertEqual(number.GCD((x-1)>>1, e), 1) + self.assertNotEqual(x % 2, 0) + self.assertEqual(x > (1 << bits-1)-1, 1) + self.assertEqual(x < (1 << bits), 1) + + def test_isPrime(self): + """Util.number.isPrime""" + self.assertEqual(number.isPrime(-3), False) # Regression test: negative numbers should not be prime + self.assertEqual(number.isPrime(-2), False) # Regression test: negative numbers should not be prime + self.assertEqual(number.isPrime(1), False) # Regression test: isPrime(1) caused some versions of PyCryptodome to crash. 
+ self.assertEqual(number.isPrime(2), True) + self.assertEqual(number.isPrime(3), True) + self.assertEqual(number.isPrime(4), False) + self.assertEqual(number.isPrime(2**1279-1), True) + self.assertEqual(number.isPrime(-(2**1279-1)), False) # Regression test: negative numbers should not be prime + # test some known gmp pseudo-primes taken from + # http://www.trnicely.net/misc/mpzspsp.html + for composite in (43 * 127 * 211, 61 * 151 * 211, 15259 * 30517, + 346141 * 692281, 1007119 * 2014237, 3589477 * 7178953, + 4859419 * 9718837, 2730439 * 5460877, + 245127919 * 490255837, 963939391 * 1927878781, + 4186358431 * 8372716861, 1576820467 * 3153640933): + self.assertEqual(number.isPrime(int(composite)), False) + + def test_size(self): + self.assertEqual(number.size(2),2) + self.assertEqual(number.size(3),2) + self.assertEqual(number.size(0xa2),8) + self.assertEqual(number.size(0xa2ba40),8*3) + self.assertEqual(number.size(0xa2ba40ee07e3b2bd2f02ce227f36a195024486e49c19cb41bbbdfbba98b22b0e577c2eeaffa20d883a76e65e394c69d4b3c05a1e8fadda27edb2a42bc000fe888b9b32c22d15add0cd76b3e7936e19955b220dd17d4ea904b1ec102b2e4de7751222aa99151024c7cb41cc5ea21d00eeb41f7c800834d2c6e06bce3bce7ea9a5), 1024) + self.assertRaises(ValueError, number.size, -1) + + +class LongTests(unittest.TestCase): + + def test1(self): + self.assertEqual(long_to_bytes(0), b'\x00') + self.assertEqual(long_to_bytes(1), b'\x01') + self.assertEqual(long_to_bytes(0x100), b'\x01\x00') + self.assertEqual(long_to_bytes(0xFF00000000), b'\xFF\x00\x00\x00\x00') + self.assertEqual(long_to_bytes(0xFF00000000), b'\xFF\x00\x00\x00\x00') + self.assertEqual(long_to_bytes(0x1122334455667788), b'\x11\x22\x33\x44\x55\x66\x77\x88') + self.assertEqual(long_to_bytes(0x112233445566778899), b'\x11\x22\x33\x44\x55\x66\x77\x88\x99') + + def test2(self): + self.assertEqual(long_to_bytes(0, 1), b'\x00') + self.assertEqual(long_to_bytes(0, 2), b'\x00\x00') + self.assertEqual(long_to_bytes(1, 3), b'\x00\x00\x01') + self.assertEqual(long_to_bytes(65535, 2), b'\xFF\xFF') + self.assertEqual(long_to_bytes(65536, 2), b'\x00\x01\x00\x00') + self.assertEqual(long_to_bytes(0x100, 1), b'\x01\x00') + self.assertEqual(long_to_bytes(0xFF00000001, 6), b'\x00\xFF\x00\x00\x00\x01') + self.assertEqual(long_to_bytes(0xFF00000001, 8), b'\x00\x00\x00\xFF\x00\x00\x00\x01') + self.assertEqual(long_to_bytes(0xFF00000001, 10), b'\x00\x00\x00\x00\x00\xFF\x00\x00\x00\x01') + self.assertEqual(long_to_bytes(0xFF00000001, 11), b'\x00\x00\x00\x00\x00\x00\xFF\x00\x00\x00\x01') + + def test_err1(self): + self.assertRaises(ValueError, long_to_bytes, -1) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(MiscTests) + tests += list_test_cases(LongTests) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_rfc1751.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_rfc1751.py new file mode 100644 index 0000000..43b137d --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_rfc1751.py @@ -0,0 +1,38 @@ +import unittest + +import binascii +from Cryptodome.Util.RFC1751 import key_to_english, english_to_key + + +class RFC1751_Tests(unittest.TestCase): + + def test1(self): + data = [ + ('EB33F77EE73D4053', 'TIDE ITCH SLOW REIN RULE MOT'), + ('CCAC2AED591056BE4F90FD441C534766', 'RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE'), + 
('EFF81F9BFBC65350920CDD7416DE8009', 'TROD MUTE TAIL WARM CHAR KONG HAAG CITY BORE O TEAL AWL') + ] + + for key_hex, words in data: + key_bin = binascii.a2b_hex(key_hex) + + w2 = key_to_english(key_bin) + self.assertEqual(w2, words) + + k2 = english_to_key(words) + self.assertEqual(k2, key_bin) + + def test_error_key_to_english(self): + + self.assertRaises(ValueError, key_to_english, b'0' * 7) + + +def get_tests(config={}): + from Cryptodome.SelfTest.st_common import list_test_cases + tests = list_test_cases(RFC1751_Tests) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_strxor.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_strxor.py new file mode 100644 index 0000000..6a96129 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/Util/test_strxor.py @@ -0,0 +1,280 @@ +# +# SelfTest/Util/test_strxor.py: Self-test for XORing +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import unittest +from binascii import unhexlify, hexlify + +from Cryptodome.SelfTest.st_common import list_test_cases +from Cryptodome.Util.strxor import strxor, strxor_c + + +class StrxorTests(unittest.TestCase): + + def test1(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"383d4ba020573314395b") + result = unhexlify(b"c70ed123c59a7fcb6f12") + self.assertEqual(strxor(term1, term2), result) + self.assertEqual(strxor(term2, term1), result) + + def test2(self): + es = b"" + self.assertEqual(strxor(es, es), es) + + def test3(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + all_zeros = b"\x00" * len(term1) + self.assertEqual(strxor(term1, term1), all_zeros) + + def test_wrong_length(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"ff339a83e5cd4cdf564990") + self.assertRaises(ValueError, strxor, term1, term2) + + def test_bytearray(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term1_ba = bytearray(term1) + term2 = unhexlify(b"383d4ba020573314395b") + result = unhexlify(b"c70ed123c59a7fcb6f12") + + self.assertEqual(strxor(term1_ba, term2), result) + + def test_memoryview(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term1_mv = memoryview(term1) + term2 = unhexlify(b"383d4ba020573314395b") + result = unhexlify(b"c70ed123c59a7fcb6f12") + + self.assertEqual(strxor(term1_mv, term2), result) + + def test_output_bytearray(self): + """Verify result can be stored in pre-allocated memory""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"383d4ba020573314395b") + original_term1 = term1[:] + original_term2 = term2[:] + expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") + output = bytearray(len(term1)) + + result = strxor(term1, term2, output=output) + + self.assertEqual(result, None) + self.assertEqual(output, expected_xor) + self.assertEqual(term1, original_term1) + self.assertEqual(term2, original_term2) + + def test_output_memoryview(self): + """Verify result can be stored in pre-allocated memory""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"383d4ba020573314395b") + original_term1 = term1[:] + original_term2 = term2[:] + expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") + output = memoryview(bytearray(len(term1))) + + result = strxor(term1, term2, output=output) + + self.assertEqual(result, None) + self.assertEqual(output, expected_xor) + self.assertEqual(term1, original_term1) + self.assertEqual(term2, original_term2) + + def test_output_overlapping_bytearray(self): + """Verify result can be stored in overlapping memory""" + + term1 = bytearray(unhexlify(b"ff339a83e5cd4cdf5649")) + term2 = unhexlify(b"383d4ba020573314395b") + original_term2 = term2[:] + expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") + + result = strxor(term1, term2, output=term1) + + self.assertEqual(result, None) + self.assertEqual(term1, expected_xor) + self.assertEqual(term2, original_term2) + + def test_output_overlapping_memoryview(self): + """Verify result can be stored in overlapping memory""" + + term1 = memoryview(bytearray(unhexlify(b"ff339a83e5cd4cdf5649"))) + term2 = unhexlify(b"383d4ba020573314395b") + original_term2 = term2[:] + expected_xor = unhexlify(b"c70ed123c59a7fcb6f12") + + result = strxor(term1, term2, output=term1) + + self.assertEqual(result, None) + self.assertEqual(term1, expected_xor) + self.assertEqual(term2, original_term2) + + def test_output_ro_bytes(self): + """Verify result cannot be stored in read-only 
memory""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"383d4ba020573314395b") + + self.assertRaises(TypeError, strxor, term1, term2, output=term1) + + def test_output_ro_memoryview(self): + """Verify result cannot be stored in read-only memory""" + + term1 = memoryview(unhexlify(b"ff339a83e5cd4cdf5649")) + term2 = unhexlify(b"383d4ba020573314395b") + + self.assertRaises(TypeError, strxor, term1, term2, output=term1) + + def test_output_incorrect_length(self): + """Verify result cannot be stored in memory of incorrect length""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term2 = unhexlify(b"383d4ba020573314395b") + output = bytearray(len(term1) - 1) + + self.assertRaises(ValueError, strxor, term1, term2, output=output) + + +class Strxor_cTests(unittest.TestCase): + + def test1(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + result = unhexlify(b"be72dbc2a48c0d9e1708") + self.assertEqual(strxor_c(term1, 65), result) + + def test2(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + self.assertEqual(strxor_c(term1, 0), term1) + + def test3(self): + self.assertEqual(strxor_c(b"", 90), b"") + + def test_wrong_range(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + self.assertRaises(ValueError, strxor_c, term1, -1) + self.assertRaises(ValueError, strxor_c, term1, 256) + + def test_bytearray(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term1_ba = bytearray(term1) + result = unhexlify(b"be72dbc2a48c0d9e1708") + + self.assertEqual(strxor_c(term1_ba, 65), result) + + def test_memoryview(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + term1_mv = memoryview(term1) + result = unhexlify(b"be72dbc2a48c0d9e1708") + + self.assertEqual(strxor_c(term1_mv, 65), result) + + def test_output_bytearray(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + original_term1 = term1[:] + expected_result = unhexlify(b"be72dbc2a48c0d9e1708") + output = bytearray(len(term1)) + + result = strxor_c(term1, 65, output=output) + + self.assertEqual(result, None) + self.assertEqual(output, expected_result) + self.assertEqual(term1, original_term1) + + def test_output_memoryview(self): + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + original_term1 = term1[:] + expected_result = unhexlify(b"be72dbc2a48c0d9e1708") + output = memoryview(bytearray(len(term1))) + + result = strxor_c(term1, 65, output=output) + + self.assertEqual(result, None) + self.assertEqual(output, expected_result) + self.assertEqual(term1, original_term1) + + def test_output_overlapping_bytearray(self): + """Verify result can be stored in overlapping memory""" + + term1 = bytearray(unhexlify(b"ff339a83e5cd4cdf5649")) + expected_xor = unhexlify(b"be72dbc2a48c0d9e1708") + + result = strxor_c(term1, 65, output=term1) + + self.assertEqual(result, None) + self.assertEqual(term1, expected_xor) + + def test_output_overlapping_memoryview(self): + """Verify result can be stored in overlapping memory""" + + term1 = memoryview(bytearray(unhexlify(b"ff339a83e5cd4cdf5649"))) + expected_xor = unhexlify(b"be72dbc2a48c0d9e1708") + + result = strxor_c(term1, 65, output=term1) + + self.assertEqual(result, None) + self.assertEqual(term1, expected_xor) + + def test_output_ro_bytes(self): + """Verify result cannot be stored in read-only memory""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + + self.assertRaises(TypeError, strxor_c, term1, 65, output=term1) + + def test_output_ro_memoryview(self): + """Verify result cannot be stored in read-only memory""" + + term1 = memoryview(unhexlify(b"ff339a83e5cd4cdf5649")) + term2 = 
unhexlify(b"383d4ba020573314395b") + + self.assertRaises(TypeError, strxor_c, term1, 65, output=term1) + + def test_output_incorrect_length(self): + """Verify result cannot be stored in memory of incorrect length""" + + term1 = unhexlify(b"ff339a83e5cd4cdf5649") + output = bytearray(len(term1) - 1) + + self.assertRaises(ValueError, strxor_c, term1, 65, output=output) + + +def get_tests(config={}): + tests = [] + tests += list_test_cases(StrxorTests) + tests += list_test_cases(Strxor_cTests) + return tests + + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/__init__.py new file mode 100644 index 0000000..dcc6ce6 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/__init__.py @@ -0,0 +1,97 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/__init__.py: Self-test for PyCrypto +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Self tests + +These tests should perform quickly and can ideally be used every time an +application runs. +""" + +__revision__ = "$Id$" + +import sys +import unittest +from Cryptodome.Util.py3compat import StringIO + +class SelfTestError(Exception): + def __init__(self, message, result): + Exception.__init__(self, message, result) + self.message = message + self.result = result + +def run(module=None, verbosity=0, stream=None, tests=None, config=None, **kwargs): + """Execute self-tests. + + This raises SelfTestError if any test is unsuccessful. + + You may optionally pass in a sub-module of SelfTest if you only want to + perform some of the tests. 
For example, the following would test only the + hash modules: + + Cryptodome.SelfTest.run(Cryptodome.SelfTest.Hash) + + """ + + if config is None: + config = {} + suite = unittest.TestSuite() + if module is None: + if tests is None: + tests = get_tests(config=config) + suite.addTests(tests) + else: + if tests is None: + suite.addTests(module.get_tests(config=config)) + else: + raise ValueError("'module' and 'tests' arguments are mutually exclusive") + if stream is None: + kwargs['stream'] = StringIO() + else: + kwargs['stream'] = stream + runner = unittest.TextTestRunner(verbosity=verbosity, **kwargs) + result = runner.run(suite) + if not result.wasSuccessful(): + if stream is None: + sys.stderr.write(kwargs['stream'].getvalue()) + raise SelfTestError("Self-test failed", result) + return result + +def get_tests(config={}): + tests = [] + from Cryptodome.SelfTest import Cipher; tests += Cipher.get_tests(config=config) + from Cryptodome.SelfTest import Hash; tests += Hash.get_tests(config=config) + from Cryptodome.SelfTest import Protocol; tests += Protocol.get_tests(config=config) + from Cryptodome.SelfTest import PublicKey; tests += PublicKey.get_tests(config=config) + from Cryptodome.SelfTest import Random; tests += Random.get_tests(config=config) + from Cryptodome.SelfTest import Util; tests += Util.get_tests(config=config) + from Cryptodome.SelfTest import Signature; tests += Signature.get_tests(config=config) + from Cryptodome.SelfTest import IO; tests += IO.get_tests(config=config) + from Cryptodome.SelfTest import Math; tests += Math.get_tests(config=config) + return tests + +if __name__ == '__main__': + suite = lambda: unittest.TestSuite(get_tests()) + unittest.main(defaultTest='suite') + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/__main__.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/__main__.py new file mode 100644 index 0000000..28f0d37 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/__main__.py @@ -0,0 +1,38 @@ +#! /usr/bin/env python +# +# __main__.py : Stand-along loader for PyCryptodome test suite +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +from __future__ import print_function + +import sys + +from Cryptodome import SelfTest + +slow_tests = not "--skip-slow-tests" in sys.argv +if not slow_tests: + print("Skipping slow tests") + +wycheproof_warnings = "--wycheproof-warnings" in sys.argv +if wycheproof_warnings: + print("Printing Wycheproof warnings") + +config = {'slow_tests' : slow_tests, 'wycheproof_warnings' : wycheproof_warnings } +SelfTest.run(stream=sys.stdout, verbosity=1, config=config) diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/loader.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/loader.py new file mode 100644 index 0000000..34d5bd9 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/loader.py @@ -0,0 +1,235 @@ +# =================================================================== +# +# Copyright (c) 2016, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import os +import re +import json +import errno +import binascii +import warnings +from binascii import unhexlify +from Cryptodome.Util.py3compat import FileNotFoundError + + +try: + import pycryptodome_test_vectors # type: ignore + test_vectors_available = True +except ImportError: + test_vectors_available = False + + +def _load_tests(dir_comps, file_in, description, conversions): + """Load and parse a test vector file + + Return a list of objects, one per group of adjacent + KV lines or for a single line in the form "[.*]". + + For a group of lines, the object has one attribute per line. 
+ """ + + line_number = 0 + results = [] + + class TestVector(object): + def __init__(self, description, count): + self.desc = description + self.count = count + self.others = [] + + test_vector = None + count = 0 + new_group = True + + while True: + line_number += 1 + line = file_in.readline() + if not line: + if test_vector is not None: + results.append(test_vector) + break + line = line.strip() + + # Skip comments and empty lines + if line.startswith('#') or not line: + new_group = True + continue + + if line.startswith("["): + if test_vector is not None: + results.append(test_vector) + test_vector = None + results.append(line) + continue + + if new_group: + count += 1 + new_group = False + if test_vector is not None: + results.append(test_vector) + test_vector = TestVector("%s (#%d)" % (description, count), count) + + res = re.match("([A-Za-z0-9]+) = ?(.*)", line) + if not res: + test_vector.others += [line] + else: + token = res.group(1).lower() + data = res.group(2).lower() + + conversion = conversions.get(token, None) + if conversion is None: + if len(data) % 2 != 0: + data = "0" + data + setattr(test_vector, token, binascii.unhexlify(data)) + else: + setattr(test_vector, token, conversion(data)) + + # This line is ignored + return results + + +def load_test_vectors(dir_comps, file_name, description, conversions): + """Load and parse a test vector file, formatted using the NIST style. + + Args: + dir_comps (list of strings): + The path components under the ``pycryptodome_test_vectors`` package. + For instance ``("Cipher", "AES")``. + file_name (string): + The name of the file with the test vectors. + description (string): + A description applicable to the test vectors in the file. + conversions (dictionary): + The dictionary contains functions. + Values in the file that have an entry in this dictionary + will be converted usign the matching function. + Otherwise, values will be considered as hexadecimal and + converted to binary. + + Returns: + A list of test vector objects. + + The file is formatted in the following way: + + - Lines starting with "#" are comments and will be ignored. + - Each test vector is a sequence of 1 or more adjacent lines, where + each lines is an assignement. + - Test vectors are separated by an empty line, a comment, or + a line starting with "[". + + A test vector object has the following attributes: + + - desc (string): description + - counter (int): the order of the test vector in the file (from 1) + - others (list): zero or more lines of the test vector that were not assignments + - left-hand side of each assignment (lowercase): the value of the + assignement, either converted or bytes. 
+ """ + + results = None + + try: + if not test_vectors_available: + raise FileNotFoundError(errno.ENOENT, + os.strerror(errno.ENOENT), + file_name) + + description = "%s test (%s)" % (description, file_name) + + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + with open(full_file_name) as file_in: + results = _load_tests(dir_comps, file_in, description, conversions) + + except FileNotFoundError: + warnings.warn("Warning: skipping extended tests for " + description, + UserWarning, + stacklevel=2) + + return results + + +def load_test_vectors_wycheproof(dir_comps, file_name, description, + root_tag={}, group_tag={}, unit_tag={}): + + result = [] + try: + if not test_vectors_available: + raise FileNotFoundError(errno.ENOENT, + os.strerror(errno.ENOENT), + file_name) + + init_dir = os.path.dirname(pycryptodome_test_vectors.__file__) + full_file_name = os.path.join(os.path.join(init_dir, *dir_comps), file_name) + with open(full_file_name) as file_in: + tv_tree = json.load(file_in) + + except FileNotFoundError: + warnings.warn("Warning: skipping extended tests for " + description, + UserWarning, + stacklevel=2) + return result + + class TestVector(object): + pass + + common_root = {} + for k, v in root_tag.items(): + common_root[k] = v(tv_tree) + + for group in tv_tree['testGroups']: + + common_group = {} + for k, v in group_tag.items(): + common_group[k] = v(group) + + for test in group['tests']: + tv = TestVector() + + for k, v in common_root.items(): + setattr(tv, k, v) + for k, v in common_group.items(): + setattr(tv, k, v) + + tv.id = test['tcId'] + tv.comment = test['comment'] + for attr in 'key', 'iv', 'aad', 'msg', 'ct', 'tag', 'label', 'ikm', 'salt', 'info', 'okm', 'sig': + if attr in test: + setattr(tv, attr, unhexlify(test[attr])) + tv.filename = file_name + + for k, v in unit_tag.items(): + setattr(tv, k, v(test)) + + tv.valid = test['result'] != "invalid" + tv.warning = test['result'] == "acceptable" + result.append(tv) + + return result + diff --git a/python/lib/python3.11/site-packages/Cryptodome/SelfTest/st_common.py b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/st_common.py new file mode 100644 index 0000000..3565251 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/SelfTest/st_common.py @@ -0,0 +1,55 @@ +# -*- coding: utf-8 -*- +# +# SelfTest/st_common.py: Common functions for SelfTest modules +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+# =================================================================== + +"""Common functions for SelfTest modules""" + +import unittest +import binascii +from Cryptodome.Util.py3compat import b + + +def list_test_cases(class_): + """Return a list of TestCase instances given a TestCase class + + This is useful when you have defined test* methods on your TestCase class. + """ + return unittest.TestLoader().loadTestsFromTestCase(class_) + +def strip_whitespace(s): + """Remove whitespace from a text or byte string""" + if isinstance(s,str): + return b("".join(s.split())) + else: + return b("").join(s.split()) + +def a2b_hex(s): + """Convert hexadecimal to binary, ignoring whitespace""" + return binascii.a2b_hex(strip_whitespace(s)) + +def b2a_hex(s): + """Convert binary to hexadecimal""" + # For completeness + return binascii.b2a_hex(s) + +# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/DSS.py b/python/lib/python3.11/site-packages/Cryptodome/Signature/DSS.py new file mode 100644 index 0000000..67f23ac --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/DSS.py @@ -0,0 +1,403 @@ +# +# Signature/DSS.py : DSS.py +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util.asn1 import DerSequence +from Cryptodome.Util.number import long_to_bytes +from Cryptodome.Math.Numbers import Integer + +from Cryptodome.Hash import HMAC +from Cryptodome.PublicKey.ECC import EccKey +from Cryptodome.PublicKey.DSA import DsaKey + +__all__ = ['DssSigScheme', 'new'] + + +class DssSigScheme(object): + """A (EC)DSA signature object. + Do not instantiate directly. + Use :func:`Cryptodome.Signature.DSS.new`. + """ + + def __init__(self, key, encoding, order): + """Create a new Digital Signature Standard (DSS) object. + + Do not instantiate this object directly, + use `Cryptodome.Signature.DSS.new` instead. 
+ """ + + self._key = key + self._encoding = encoding + self._order = order + + self._order_bits = self._order.size_in_bits() + self._order_bytes = (self._order_bits - 1) // 8 + 1 + + def can_sign(self): + """Return ``True`` if this signature object can be used + for signing messages.""" + + return self._key.has_private() + + def _compute_nonce(self, msg_hash): + raise NotImplementedError("To be provided by subclasses") + + def _valid_hash(self, msg_hash): + raise NotImplementedError("To be provided by subclasses") + + def sign(self, msg_hash): + """Compute the DSA/ECDSA signature of a message. + + Args: + msg_hash (hash object): + The hash that was carried out over the message. + The object belongs to the :mod:`Cryptodome.Hash` package. + Under mode ``'fips-186-3'``, the hash must be a FIPS + approved secure hash (SHA-2 or SHA-3). + + :return: The signature as ``bytes`` + :raise ValueError: if the hash algorithm is incompatible to the (EC)DSA key + :raise TypeError: if the (EC)DSA key has no private half + """ + + if not self._key.has_private(): + raise TypeError("Private key is needed to sign") + + if not self._valid_hash(msg_hash): + raise ValueError("Hash is not sufficiently strong") + + # Generate the nonce k (critical!) + nonce = self._compute_nonce(msg_hash) + + # Perform signature using the raw API + z = Integer.from_bytes(msg_hash.digest()[:self._order_bytes]) + sig_pair = self._key._sign(z, nonce) + + # Encode the signature into a single byte string + if self._encoding == 'binary': + output = b"".join([long_to_bytes(x, self._order_bytes) + for x in sig_pair]) + else: + # Dss-sig ::= SEQUENCE { + # r INTEGER, + # s INTEGER + # } + # Ecdsa-Sig-Value ::= SEQUENCE { + # r INTEGER, + # s INTEGER + # } + output = DerSequence(sig_pair).encode() + + return output + + def verify(self, msg_hash, signature): + """Check if a certain (EC)DSA signature is authentic. + + Args: + msg_hash (hash object): + The hash that was carried out over the message. + This is an object belonging to the :mod:`Cryptodome.Hash` module. + Under mode ``'fips-186-3'``, the hash must be a FIPS + approved secure hash (SHA-2 or SHA-3). + + signature (``bytes``): + The signature that needs to be validated. 
+ + :raise ValueError: if the signature is not authentic + """ + + if not self._valid_hash(msg_hash): + raise ValueError("Hash is not sufficiently strong") + + if self._encoding == 'binary': + if len(signature) != (2 * self._order_bytes): + raise ValueError("The signature is not authentic (length)") + r_prime, s_prime = [Integer.from_bytes(x) + for x in (signature[:self._order_bytes], + signature[self._order_bytes:])] + else: + try: + der_seq = DerSequence().decode(signature, strict=True) + except (ValueError, IndexError): + raise ValueError("The signature is not authentic (DER)") + if len(der_seq) != 2 or not der_seq.hasOnlyInts(): + raise ValueError("The signature is not authentic (DER content)") + r_prime, s_prime = Integer(der_seq[0]), Integer(der_seq[1]) + + if not (0 < r_prime < self._order) or not (0 < s_prime < self._order): + raise ValueError("The signature is not authentic (d)") + + z = Integer.from_bytes(msg_hash.digest()[:self._order_bytes]) + result = self._key._verify(z, (r_prime, s_prime)) + if not result: + raise ValueError("The signature is not authentic") + # Make PyCryptodome code to fail + return False + + +class DeterministicDsaSigScheme(DssSigScheme): + # Also applicable to ECDSA + + def __init__(self, key, encoding, order, private_key): + super(DeterministicDsaSigScheme, self).__init__(key, encoding, order) + self._private_key = private_key + + def _bits2int(self, bstr): + """See 2.3.2 in RFC6979""" + + result = Integer.from_bytes(bstr) + q_len = self._order.size_in_bits() + b_len = len(bstr) * 8 + if b_len > q_len: + # Only keep leftmost q_len bits + result >>= (b_len - q_len) + return result + + def _int2octets(self, int_mod_q): + """See 2.3.3 in RFC6979""" + + assert 0 < int_mod_q < self._order + return long_to_bytes(int_mod_q, self._order_bytes) + + def _bits2octets(self, bstr): + """See 2.3.4 in RFC6979""" + + z1 = self._bits2int(bstr) + if z1 < self._order: + z2 = z1 + else: + z2 = z1 - self._order + return self._int2octets(z2) + + def _compute_nonce(self, mhash): + """Generate k in a deterministic way""" + + # See section 3.2 in RFC6979.txt + # Step a + h1 = mhash.digest() + # Step b + mask_v = b'\x01' * mhash.digest_size + # Step c + nonce_k = b'\x00' * mhash.digest_size + + for int_oct in (b'\x00', b'\x01'): + # Step d/f + nonce_k = HMAC.new(nonce_k, + mask_v + int_oct + + self._int2octets(self._private_key) + + self._bits2octets(h1), mhash).digest() + # Step e/g + mask_v = HMAC.new(nonce_k, mask_v, mhash).digest() + + nonce = -1 + while not (0 < nonce < self._order): + # Step h.C (second part) + if nonce != -1: + nonce_k = HMAC.new(nonce_k, mask_v + b'\x00', + mhash).digest() + mask_v = HMAC.new(nonce_k, mask_v, mhash).digest() + + # Step h.A + mask_t = b"" + + # Step h.B + while len(mask_t) < self._order_bytes: + mask_v = HMAC.new(nonce_k, mask_v, mhash).digest() + mask_t += mask_v + + # Step h.C (first part) + nonce = self._bits2int(mask_t) + return nonce + + def _valid_hash(self, msg_hash): + return True + + +class FipsDsaSigScheme(DssSigScheme): + + #: List of L (bit length of p) and N (bit length of q) combinations + #: that are allowed by FIPS 186-3. The security level is provided in + #: Table 2 of FIPS 800-57 (rev3). 
+ _fips_186_3_L_N = ( + (1024, 160), # 80 bits (SHA-1 or stronger) + (2048, 224), # 112 bits (SHA-224 or stronger) + (2048, 256), # 128 bits (SHA-256 or stronger) + (3072, 256) # 256 bits (SHA-512) + ) + + def __init__(self, key, encoding, order, randfunc): + super(FipsDsaSigScheme, self).__init__(key, encoding, order) + self._randfunc = randfunc + + L = Integer(key.p).size_in_bits() + if (L, self._order_bits) not in self._fips_186_3_L_N: + error = ("L/N (%d, %d) is not compliant to FIPS 186-3" + % (L, self._order_bits)) + raise ValueError(error) + + def _compute_nonce(self, msg_hash): + # hash is not used + return Integer.random_range(min_inclusive=1, + max_exclusive=self._order, + randfunc=self._randfunc) + + def _valid_hash(self, msg_hash): + """Verify that SHA-1, SHA-2 or SHA-3 are used""" + return (msg_hash.oid == "1.3.14.3.2.26" or + msg_hash.oid.startswith("2.16.840.1.101.3.4.2.")) + + +class FipsEcDsaSigScheme(DssSigScheme): + + def __init__(self, key, encoding, order, randfunc): + super(FipsEcDsaSigScheme, self).__init__(key, encoding, order) + self._randfunc = randfunc + + def _compute_nonce(self, msg_hash): + return Integer.random_range(min_inclusive=1, + max_exclusive=self._key._curve.order, + randfunc=self._randfunc) + + def _valid_hash(self, msg_hash): + """Verify that the strength of the hash matches or exceeds + the strength of the EC. We fail if the hash is too weak.""" + + modulus_bits = self._key.pointQ.size_in_bits() + + # SHS: SHA-2, SHA-3, truncated SHA-512 + sha224 = ("2.16.840.1.101.3.4.2.4", "2.16.840.1.101.3.4.2.7", "2.16.840.1.101.3.4.2.5") + sha256 = ("2.16.840.1.101.3.4.2.1", "2.16.840.1.101.3.4.2.8", "2.16.840.1.101.3.4.2.6") + sha384 = ("2.16.840.1.101.3.4.2.2", "2.16.840.1.101.3.4.2.9") + sha512 = ("2.16.840.1.101.3.4.2.3", "2.16.840.1.101.3.4.2.10") + shs = sha224 + sha256 + sha384 + sha512 + + try: + result = msg_hash.oid in shs + except AttributeError: + result = False + return result + + +def new(key, mode, encoding='binary', randfunc=None): + """Create a signature object :class:`DssSigScheme` that + can perform (EC)DSA signature or verification. + + .. note:: + Refer to `NIST SP 800 Part 1 Rev 4`_ (or newer release) for an + overview of the recommended key lengths. + + Args: + key (:class:`Cryptodome.PublicKey.DSA` or :class:`Cryptodome.PublicKey.ECC`): + The key to use for computing the signature (*private* keys only) + or for verifying one. + For DSA keys, let ``L`` and ``N`` be the bit lengths of the modulus ``p`` + and of ``q``: the pair ``(L,N)`` must appear in the following list, + in compliance to section 4.2 of `FIPS 186-4`_: + + - (1024, 160) *legacy only; do not create new signatures with this* + - (2048, 224) *deprecated; do not create new signatures with this* + - (2048, 256) + - (3072, 256) + + For ECC, only keys over P-224, P-256, P-384, and P-521 are accepted. + + mode (string): + The parameter can take these values: + + - ``'fips-186-3'``. The signature generation is randomized and carried out + according to `FIPS 186-3`_: the nonce ``k`` is taken from the RNG. + - ``'deterministic-rfc6979'``. The signature generation is not + randomized. See RFC6979_. + + encoding (string): + How the signature is encoded. This value determines the output of + :meth:`sign` and the input to :meth:`verify`. + + The following values are accepted: + + - ``'binary'`` (default), the signature is the raw concatenation + of ``r`` and ``s``. It is defined in the IEEE P.1363 standard. + For DSA, the size in bytes of the signature is ``N/4`` bytes + (e.g. 
64 for ``N=256``). + For ECDSA, the signature is always twice the length of a point + coordinate (e.g. 64 bytes for P-256). + + - ``'der'``, the signature is an ASN.1 DER SEQUENCE + with two INTEGERs (``r`` and ``s``). It is defined in RFC3279_. + The size of the signature is variable. + + randfunc (callable): + A function that returns random ``bytes`` of a given length. + If omitted, the internal RNG is used. + Only applicable for the *'fips-186-3'* mode. + + .. _FIPS 186-3: http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf + .. _FIPS 186-4: http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf + .. _NIST SP 800 Part 1 Rev 4: http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt1r4.pdf + .. _RFC6979: http://tools.ietf.org/html/rfc6979 + .. _RFC3279: https://tools.ietf.org/html/rfc3279#section-2.2.2 + """ + + # The goal of the 'mode' parameter is to avoid making + # the current version of the standard the default. + # + # Over time, that version will be superseded by (for instance) + # FIPS 186-4, and it would be odd to have -3 as the default. + + if encoding not in ('binary', 'der'): + raise ValueError("Unknown encoding '%s'" % encoding) + + if isinstance(key, EccKey): + order = key._curve.order + private_key_attr = 'd' + if key._curve.name == "ed25519": + raise ValueError("ECC key is not on a NIST P curve") + elif isinstance(key, DsaKey): + order = Integer(key.q) + private_key_attr = 'x' + else: + raise ValueError("Unsupported key type " + str(type(key))) + + if key.has_private(): + private_key = getattr(key, private_key_attr) + else: + private_key = None + + if mode == 'deterministic-rfc6979': + return DeterministicDsaSigScheme(key, encoding, order, private_key) + elif mode == 'fips-186-3': + if isinstance(key, EccKey): + return FipsEcDsaSigScheme(key, encoding, order, randfunc) + else: + return FipsDsaSigScheme(key, encoding, order, randfunc) + else: + raise ValueError("Unknown DSS mode '%s'" % mode) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/DSS.pyi b/python/lib/python3.11/site-packages/Cryptodome/Signature/DSS.pyi new file mode 100644 index 0000000..52ecc8f --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/DSS.pyi @@ -0,0 +1,27 @@ +from typing import Union, Optional, Callable +from typing_extensions import Protocol + +from Cryptodome.PublicKey.DSA import DsaKey +from Cryptodome.PublicKey.ECC import EccKey + +class Hash(Protocol): + def digest(self) -> bytes: ... + +__all__ = ['new'] + +class DssSigScheme: + def __init__(self, key: Union[DsaKey, EccKey], encoding: str, order: int) -> None: ... + def can_sign(self) -> bool: ... + def sign(self, msg_hash: Hash) -> bytes: ... + def verify(self, msg_hash: Hash, signature: bytes) -> bool: ... + +class DeterministicDsaSigScheme(DssSigScheme): + def __init__(self, key, encoding, order, private_key) -> None: ... + +class FipsDsaSigScheme(DssSigScheme): + def __init__(self, key: DsaKey, encoding: str, order: int, randfunc: Callable) -> None: ... + +class FipsEcDsaSigScheme(DssSigScheme): + def __init__(self, key: EccKey, encoding: str, order: int, randfunc: Callable) -> None: ... + +def new(key: Union[DsaKey, EccKey], mode: str, encoding: Optional[str]='binary', randfunc: Optional[Callable]=None) -> Union[DeterministicDsaSigScheme, FipsDsaSigScheme, FipsEcDsaSigScheme]: ...
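The `new()` factory above is the only supported entry point for (EC)DSA: it selects the scheme class from the key type and the `mode` string. A minimal sketch of the intended call pattern follows (the message and curve choice are illustrative; it assumes the `pycryptodomex` package, which provides the `Cryptodome` namespace, is installed):

```python
# ECDSA over P-256 with the DSS module defined above.
from Cryptodome.Hash import SHA256
from Cryptodome.PublicKey import ECC
from Cryptodome.Signature import DSS

key = ECC.generate(curve='P-256')           # private key; P-256 is illustrative
h = SHA256.new(b'message to authenticate')  # FIPS-approved hash, as new() requires

# 'fips-186-3' draws the nonce k from the RNG; 'deterministic-rfc6979'
# would instead derive it from the key and the message (no RNG needed).
signer = DSS.new(key, 'fips-186-3')
signature = signer.sign(h)                  # 64 bytes for P-256 ('binary' encoding)

verifier = DSS.new(key.public_key(), 'fips-186-3')
try:
    verifier.verify(SHA256.new(b'message to authenticate'), signature)
    print('The signature is authentic.')
except ValueError:
    print('The signature is not authentic.')
```

Note that `verify()` signals failure by raising `ValueError` rather than returning `False`, so callers need `try`/`except`; the DER encoding is chosen the same way, by passing `encoding='der'` to `DSS.new`.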
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_PSS.py b/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_PSS.py new file mode 100644 index 0000000..1e7e5b5 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_PSS.py @@ -0,0 +1,55 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Legacy module for PKCS#1 PSS signatures. + +:undocumented: __package__ +""" + +import types + +from Cryptodome.Signature import pss + + +def _pycrypto_verify(self, hash_object, signature): + try: + self._verify(hash_object, signature) + except (ValueError, TypeError): + return False + return True + + +def new(rsa_key, mgfunc=None, saltLen=None, randfunc=None): + pkcs1 = pss.new(rsa_key, mask_func=mgfunc, + salt_bytes=saltLen, rand_func=randfunc) + pkcs1._verify = pkcs1.verify + pkcs1.verify = types.MethodType(_pycrypto_verify, pkcs1) + return pkcs1 diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_PSS.pyi b/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_PSS.pyi new file mode 100644 index 0000000..e7424f5 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_PSS.pyi @@ -0,0 +1,28 @@ +from typing import Union, Callable, Optional +from typing_extensions import Protocol + +from Cryptodome.PublicKey.RSA import RsaKey + + +class Hash(Protocol): + def digest(self) -> bytes: ... + def update(self, bytes) -> None: ... + + +class HashModule(Protocol): + @staticmethod + def new(data: Optional[bytes]) -> Hash: ... + + +MaskFunction = Callable[[bytes, int, Union[Hash, HashModule]], bytes] +RndFunction = Callable[[int], bytes] + +class PSS_SigScheme: + def __init__(self, key: RsaKey, mgfunc: MaskFunction, saltLen: int, randfunc: RndFunction) -> None: ... + def can_sign(self) -> bool: ... + def sign(self, msg_hash: Hash) -> bytes: ... + def verify(self, msg_hash: Hash, signature: bytes) -> bool: ... 
+ + + +def new(rsa_key: RsaKey, mgfunc: Optional[MaskFunction]=None, saltLen: Optional[int]=None, randfunc: Optional[RndFunction]=None) -> PSS_SigScheme: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_v1_5.py b/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_v1_5.py new file mode 100644 index 0000000..d560663 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_v1_5.py @@ -0,0 +1,53 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +""" +Legacy module for PKCS#1 v1.5 signatures. + +:undocumented: __package__ +""" + +import types + +from Cryptodome.Signature import pkcs1_15 + +def _pycrypto_verify(self, hash_object, signature): + try: + self._verify(hash_object, signature) + except (ValueError, TypeError): + return False + return True + +def new(rsa_key): + pkcs1 = pkcs1_15.new(rsa_key) + pkcs1._verify = pkcs1.verify + pkcs1.verify = types.MethodType(_pycrypto_verify, pkcs1) + return pkcs1 + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_v1_5.pyi b/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_v1_5.pyi new file mode 100644 index 0000000..d02555c --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/PKCS1_v1_5.pyi @@ -0,0 +1,16 @@ +from typing import Optional +from typing_extensions import Protocol + +from Cryptodome.PublicKey.RSA import RsaKey + +class Hash(Protocol): + def digest(self) -> bytes: ... + +class PKCS115_SigScheme: + def __init__(self, rsa_key: RsaKey) -> None: ... + def can_sign(self) -> bool: ... + def sign(self, msg_hash: Hash) -> bytes: ... + def verify(self, msg_hash: Hash, signature: bytes) -> bool: ... + + +def new(rsa_key: RsaKey) -> PKCS115_SigScheme: ... 
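Both `PKCS1_PSS` and `PKCS1_v1_5` above are thin compatibility shims: they build the modern `pss`/`pkcs1_15` objects and monkey-patch `verify` so it returns a boolean, as PyCrypto did, instead of raising. A short sketch of that behavioral difference when porting old code (assumes the `pycryptodomex` package is installed; key size and payload are illustrative):

```python
# Legacy vs. modern PKCS#1 v1.5 verification behaviour.
from Cryptodome.Hash import SHA256
from Cryptodome.PublicKey import RSA
from Cryptodome.Signature import PKCS1_v1_5   # legacy shim: verify() -> bool
from Cryptodome.Signature import pkcs1_15    # modern API: verify() raises

key = RSA.generate(2048)
h = SHA256.new(b'payload')
signature = pkcs1_15.new(key).sign(h)

# Legacy (PyCrypto-compatible): a boolean result, never an exception.
assert PKCS1_v1_5.new(key.publickey()).verify(h, signature) is True
assert PKCS1_v1_5.new(key.publickey()).verify(h, signature + b'x') is False

# Modern: success returns None, failure raises ValueError.
try:
    pkcs1_15.new(key.publickey()).verify(h, signature + b'x')
except ValueError:
    print('invalid signature rejected')
```

The same pattern applies to `PKCS1_PSS.new()` versus `pss.new()`.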
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/Signature/__init__.py new file mode 100644 index 0000000..11ca64c --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/__init__.py @@ -0,0 +1,36 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +"""Digital signature protocols + +A collection of standardized protocols to carry out digital signatures. +""" + +__all__ = ['PKCS1_v1_5', 'PKCS1_PSS', 'DSS', 'pkcs1_15', 'pss', 'eddsa'] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/eddsa.py b/python/lib/python3.11/site-packages/Cryptodome/Signature/eddsa.py new file mode 100644 index 0000000..638b96b --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/eddsa.py @@ -0,0 +1,343 @@ +# =================================================================== +# +# Copyright (c) 2022, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Math.Numbers import Integer + +from Cryptodome.Hash import SHA512, SHAKE256 +from Cryptodome.Util.py3compat import bchr, is_bytes +from Cryptodome.PublicKey.ECC import (EccKey, + construct, + _import_ed25519_public_key, + _import_ed448_public_key) + + +def import_public_key(encoded): + """Create a new Ed25519 or Ed448 public key object, + starting from the key encoded as raw ``bytes``, + in the format described in RFC8032. + + Args: + encoded (bytes): + The EdDSA public key to import. + It must be 32 bytes for Ed25519, and 57 bytes for Ed448. + + Returns: + :class:`Cryptodome.PublicKey.EccKey` : a new ECC key object. + + Raises: + ValueError: when the given key cannot be parsed. + """ + + if len(encoded) == 32: + x, y = _import_ed25519_public_key(encoded) + curve_name = "Ed25519" + elif len(encoded) == 57: + x, y = _import_ed448_public_key(encoded) + curve_name = "Ed448" + else: + raise ValueError("Not an EdDSA key (%d bytes)" % len(encoded)) + return construct(curve=curve_name, point_x=x, point_y=y) + + +def import_private_key(encoded): + """Create a new Ed25519 or Ed448 private key object, + starting from the key encoded as raw ``bytes``, + in the format described in RFC8032. + + Args: + encoded (bytes): + The EdDSA private key to import. + It must be 32 bytes for Ed25519, and 57 bytes for Ed448. + + Returns: + :class:`Cryptodome.PublicKey.EccKey` : a new ECC key object. + + Raises: + ValueError: when the given key cannot be parsed. + """ + + if len(encoded) == 32: + curve_name = "ed25519" + elif len(encoded) == 57: + curve_name = "ed448" + else: + raise ValueError("Incorrect length. Only EdDSA private keys are supported.") + + # Note that the private key is truly a sequence of random bytes, + # so we cannot check its correctness in any way. + + return construct(seed=encoded, curve=curve_name) + + +class EdDSASigScheme(object): + """An EdDSA signature object. + Do not instantiate directly. + Use :func:`Cryptodome.Signature.eddsa.new`. + """ + + def __init__(self, key, context): + """Create a new EdDSA object. + + Do not instantiate this object directly, + use `Cryptodome.Signature.DSS.new` instead. + """ + + self._key = key + self._context = context + self._A = key._export_eddsa() + self._order = key._curve.order + + def can_sign(self): + """Return ``True`` if this signature object can be used + for signing messages.""" + + return self._key.has_private() + + def sign(self, msg_or_hash): + """Compute the EdDSA signature of a message. + + Args: + msg_or_hash (bytes or a hash object): + The message to sign (``bytes``, in case of *PureEdDSA*) or + the hash that was carried out over the message (hash object, for *HashEdDSA*). + + The hash object must be :class:`Cryptodome.Hash.SHA512` for Ed25519, + and :class:`Cryptodome.Hash.SHAKE256` object for Ed448. + + :return: The signature as ``bytes``. It is always 64 bytes for Ed25519, and 114 bytes for Ed448. 
+ :raise TypeError: if the EdDSA key has no private half + """ + + if not self._key.has_private(): + raise TypeError("Private key is needed to sign") + + if self._key._curve.name == "ed25519": + ph = isinstance(msg_or_hash, SHA512.SHA512Hash) + if not (ph or is_bytes(msg_or_hash)): + raise TypeError("'msg_or_hash' must be bytes of a SHA-512 hash") + eddsa_sign_method = self._sign_ed25519 + + elif self._key._curve.name == "ed448": + ph = isinstance(msg_or_hash, SHAKE256.SHAKE256_XOF) + if not (ph or is_bytes(msg_or_hash)): + raise TypeError("'msg_or_hash' must be bytes of a SHAKE256 hash") + eddsa_sign_method = self._sign_ed448 + + else: + raise ValueError("Incorrect curve for EdDSA") + + return eddsa_sign_method(msg_or_hash, ph) + + def _sign_ed25519(self, msg_or_hash, ph): + + if self._context or ph: + flag = int(ph) + # dom2(flag, self._context) + dom2 = b'SigEd25519 no Ed25519 collisions' + bchr(flag) + \ + bchr(len(self._context)) + self._context + else: + dom2 = b'' + + PHM = msg_or_hash.digest() if ph else msg_or_hash + + # See RFC 8032, section 5.1.6 + + # Step 2 + r_hash = SHA512.new(dom2 + self._key._prefix + PHM).digest() + r = Integer.from_bytes(r_hash, 'little') % self._order + # Step 3 + R_pk = EccKey(point=r * self._key._curve.G)._export_eddsa() + # Step 4 + k_hash = SHA512.new(dom2 + R_pk + self._A + PHM).digest() + k = Integer.from_bytes(k_hash, 'little') % self._order + # Step 5 + s = (r + k * self._key.d) % self._order + + return R_pk + s.to_bytes(32, 'little') + + def _sign_ed448(self, msg_or_hash, ph): + + flag = int(ph) + # dom4(flag, self._context) + dom4 = b'SigEd448' + bchr(flag) + \ + bchr(len(self._context)) + self._context + + PHM = msg_or_hash.read(64) if ph else msg_or_hash + + # See RFC 8032, section 5.2.6 + + # Step 2 + r_hash = SHAKE256.new(dom4 + self._key._prefix + PHM).read(114) + r = Integer.from_bytes(r_hash, 'little') % self._order + # Step 3 + R_pk = EccKey(point=r * self._key._curve.G)._export_eddsa() + # Step 4 + k_hash = SHAKE256.new(dom4 + R_pk + self._A + PHM).read(114) + k = Integer.from_bytes(k_hash, 'little') % self._order + # Step 5 + s = (r + k * self._key.d) % self._order + + return R_pk + s.to_bytes(57, 'little') + + def verify(self, msg_or_hash, signature): + """Check if an EdDSA signature is authentic. + + Args: + msg_or_hash (bytes or a hash object): + The message to verify (``bytes``, in case of *PureEdDSA*) or + the hash that was carried out over the message (hash object, for *HashEdDSA*). + + The hash object must be :class:`Cryptodome.Hash.SHA512` object for Ed25519, + and :class:`Cryptodome.Hash.SHAKE256` for Ed448. + + signature (``bytes``): + The signature that needs to be validated. + It must be 64 bytes for Ed25519, and 114 bytes for Ed448. 
+ + :raise ValueError: if the signature is not authentic + """ + + if self._key._curve.name == "ed25519": + ph = isinstance(msg_or_hash, SHA512.SHA512Hash) + if not (ph or is_bytes(msg_or_hash)): + raise TypeError("'msg_or_hash' must be bytes of a SHA-512 hash") + eddsa_verify_method = self._verify_ed25519 + + elif self._key._curve.name == "ed448": + ph = isinstance(msg_or_hash, SHAKE256.SHAKE256_XOF) + if not (ph or is_bytes(msg_or_hash)): + raise TypeError("'msg_or_hash' must be bytes of a SHAKE256 hash") + eddsa_verify_method = self._verify_ed448 + + else: + raise ValueError("Incorrect curve for EdDSA") + + return eddsa_verify_method(msg_or_hash, signature, ph) + + def _verify_ed25519(self, msg_or_hash, signature, ph): + + if len(signature) != 64: + raise ValueError("The signature is not authentic (length)") + + if self._context or ph: + flag = int(ph) + dom2 = b'SigEd25519 no Ed25519 collisions' + bchr(flag) + \ + bchr(len(self._context)) + self._context + else: + dom2 = b'' + + PHM = msg_or_hash.digest() if ph else msg_or_hash + + # Section 5.1.7 + + # Step 1 + try: + R = import_public_key(signature[:32]).pointQ + except ValueError: + raise ValueError("The signature is not authentic (R)") + s = Integer.from_bytes(signature[32:], 'little') + if s > self._order: + raise ValueError("The signature is not authentic (S)") + # Step 2 + k_hash = SHA512.new(dom2 + signature[:32] + self._A + PHM).digest() + k = Integer.from_bytes(k_hash, 'little') % self._order + # Step 3 + point1 = s * 8 * self._key._curve.G + # OPTIMIZE: with double-scalar multiplication, with no SCA + # countermeasures because it is public values + point2 = 8 * R + k * 8 * self._key.pointQ + if point1 != point2: + raise ValueError("The signature is not authentic") + + def _verify_ed448(self, msg_or_hash, signature, ph): + + if len(signature) != 114: + raise ValueError("The signature is not authentic (length)") + + flag = int(ph) + # dom4(flag, self._context) + dom4 = b'SigEd448' + bchr(flag) + \ + bchr(len(self._context)) + self._context + + PHM = msg_or_hash.read(64) if ph else msg_or_hash + + # Section 5.2.7 + + # Step 1 + try: + R = import_public_key(signature[:57]).pointQ + except ValueError: + raise ValueError("The signature is not authentic (R)") + s = Integer.from_bytes(signature[57:], 'little') + if s > self._order: + raise ValueError("The signature is not authentic (S)") + # Step 2 + k_hash = SHAKE256.new(dom4 + signature[:57] + self._A + PHM).read(114) + k = Integer.from_bytes(k_hash, 'little') % self._order + # Step 3 + point1 = s * 8 * self._key._curve.G + # OPTIMIZE: with double-scalar multiplication, with no SCA + # countermeasures because it is public values + point2 = 8 * R + k * 8 * self._key.pointQ + if point1 != point2: + raise ValueError("The signature is not authentic") + + +def new(key, mode, context=None): + """Create a signature object :class:`EdDSASigScheme` that + can perform or verify an EdDSA signature. + + Args: + key (:class:`Cryptodome.PublicKey.ECC` object): + The key to use for computing the signature (*private* keys only) + or for verifying one. + The key must be on the curve ``Ed25519`` or ``Ed448``. + + mode (string): + This parameter must be ``'rfc8032'``. + + context (bytes): + Up to 255 bytes of `context <https://datatracker.ietf.org/doc/html/rfc8032#page-41>`_, + which is a constant byte string to segregate different protocols or + different applications of the same key. 
+ """ + + if not isinstance(key, EccKey) or not key._is_eddsa(): + raise ValueError("EdDSA can only be used with EdDSA keys") + + if mode != 'rfc8032': + raise ValueError("Mode must be 'rfc8032'") + + if context is None: + context = b'' + elif len(context) > 255: + raise ValueError("Context for EdDSA must not be longer than 255 bytes") + + return EdDSASigScheme(key, context) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/eddsa.pyi b/python/lib/python3.11/site-packages/Cryptodome/Signature/eddsa.pyi new file mode 100644 index 0000000..809a7ad --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/eddsa.pyi @@ -0,0 +1,21 @@ +from typing import Union, Optional +from typing_extensions import Protocol +from Cryptodome.PublicKey.ECC import EccKey + +class Hash(Protocol): + def digest(self) -> bytes: ... + +class XOF(Protocol): + def read(self, len: int) -> bytes: ... + +def import_public_key(encoded: bytes) -> EccKey: ... +def import_private_key(encoded: bytes) -> EccKey: ... + +class EdDSASigScheme(object): + + def __init__(self, key: EccKey, context: bytes) -> None: ... + def can_sign(self) -> bool: ... + def sign(self, msg_or_hash: Union[bytes, Hash, XOF]) -> bytes: ... + def verify(self, msg_or_hash: Union[bytes, Hash, XOF], signature: bytes) -> None: ... + +def new(key: EccKey, mode: str, context: Optional[bytes]=None) -> EdDSASigScheme: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/pkcs1_15.py b/python/lib/python3.11/site-packages/Cryptodome/Signature/pkcs1_15.py new file mode 100644 index 0000000..ae9257e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/pkcs1_15.py @@ -0,0 +1,222 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +import Cryptodome.Util.number +from Cryptodome.Util.number import ceil_div, bytes_to_long, long_to_bytes +from Cryptodome.Util.asn1 import DerSequence, DerNull, DerOctetString, DerObjectId + +class PKCS115_SigScheme: + """A signature object for ``RSASSA-PKCS1-v1_5``. + Do not instantiate directly. 
+ Use :func:`Cryptodome.Signature.pkcs1_15.new`. + """ + + def __init__(self, rsa_key): + """Initialize this PKCS#1 v1.5 signature scheme object. + + :Parameters: + rsa_key : an RSA key object + Creation of signatures is only possible if this is a *private* + RSA key. Verification of signatures is always possible. + """ + self._key = rsa_key + + def can_sign(self): + """Return ``True`` if this object can be used to sign messages.""" + return self._key.has_private() + + def sign(self, msg_hash): + """Create the PKCS#1 v1.5 signature of a message. + + This function is also called ``RSASSA-PKCS1-V1_5-SIGN`` and + it is specified in + `section 8.2.1 of RFC8017 <https://tools.ietf.org/html/rfc8017#page-36>`_. + + :parameter msg_hash: + This is an object from the :mod:`Cryptodome.Hash` package. + It has been used to digest the message to sign. + :type msg_hash: hash object + + :return: the signature encoded as a *byte string*. + :raise ValueError: if the RSA key is not long enough for the given hash algorithm. + :raise TypeError: if the RSA key has no private half. + """ + + # See 8.2.1 in RFC3447 + modBits = Cryptodome.Util.number.size(self._key.n) + k = ceil_div(modBits,8) # Convert from bits to bytes + + # Step 1 + em = _EMSA_PKCS1_V1_5_ENCODE(msg_hash, k) + # Step 2a (OS2IP) + em_int = bytes_to_long(em) + # Step 2b (RSASP1) + m_int = self._key._decrypt(em_int) + # Step 2c (I2OSP) + signature = long_to_bytes(m_int, k) + return signature + + def verify(self, msg_hash, signature): + """Check if the PKCS#1 v1.5 signature over a message is valid. + + This function is also called ``RSASSA-PKCS1-V1_5-VERIFY`` and + it is specified in + `section 8.2.2 of RFC8017 <https://tools.ietf.org/html/rfc8017#page-37>`_. + + :parameter msg_hash: + The hash that was carried out over the message. This is an object + belonging to the :mod:`Cryptodome.Hash` module. + :type msg_hash: hash object + + :parameter signature: + The signature that needs to be validated. + :type signature: byte string + + :raise ValueError: if the signature is not valid. + """ + + # See 8.2.2 in RFC3447 + modBits = Cryptodome.Util.number.size(self._key.n) + k = ceil_div(modBits, 8) # Convert from bits to bytes + + # Step 1 + if len(signature) != k: + raise ValueError("Invalid signature") + # Step 2a (OS2IP) + signature_int = bytes_to_long(signature) + # Step 2b (RSAVP1) + em_int = self._key._encrypt(signature_int) + # Step 2c (I2OSP) + em1 = long_to_bytes(em_int, k) + # Step 3 + try: + possible_em1 = [ _EMSA_PKCS1_V1_5_ENCODE(msg_hash, k, True) ] + # MD2/4/5 hashes always require NULL params in AlgorithmIdentifier. + # For all others, it is optional. + try: + algorithm_is_md = msg_hash.oid.startswith('1.2.840.113549.2.') + except AttributeError: + algorithm_is_md = False + if not algorithm_is_md: + possible_em1.append(_EMSA_PKCS1_V1_5_ENCODE(msg_hash, k, False)) + except ValueError: + raise ValueError("Invalid signature") + # Step 4 + # By comparing the full encodings (as opposed to checking each + # of its components one at a time) we avoid attacks on the padding + # scheme like Bleichenbacher's (see http://www.mail-archive.com/cryptography@metzdowd.com/msg06537). + # + if em1 not in possible_em1: + raise ValueError("Invalid signature") + + +def _EMSA_PKCS1_V1_5_ENCODE(msg_hash, emLen, with_hash_parameters=True): + """ + Implement the ``EMSA-PKCS1-V1_5-ENCODE`` function, as defined + in PKCS#1 v2.1 (RFC3447, 9.2). + + ``_EMSA-PKCS1-V1_5-ENCODE`` actually accepts the message ``M`` as input, + and hashes it internally.
Here, we expect that the message has already + been hashed instead. + + :Parameters: + msg_hash : hash object + The hash object that holds the digest of the message being signed. + emLen : int + The length the final encoding must have, in bytes. + with_hash_parameters : bool + If True (default), include NULL parameters for the hash + algorithm in the ``digestAlgorithm`` SEQUENCE. + + :attention: the early standard (RFC2313) stated that ``DigestInfo`` + had to be BER-encoded. This means that old signatures + might have length tags in indefinite form, which + is not supported in DER. Such encoding cannot be + reproduced by this function. + + :Return: An ``emLen`` byte long string that encodes the hash. + """ + + # First, build the ASN.1 DER object DigestInfo: + # + # DigestInfo ::= SEQUENCE { + # digestAlgorithm AlgorithmIdentifier, + # digest OCTET STRING + # } + # + # where digestAlgorithm identifies the hash function and shall be an + # algorithm ID with an OID in the set PKCS1-v1-5DigestAlgorithms. + # + # PKCS1-v1-5DigestAlgorithms ALGORITHM-IDENTIFIER ::= { + # { OID id-md2 PARAMETERS NULL }| + # { OID id-md5 PARAMETERS NULL }| + # { OID id-sha1 PARAMETERS NULL }| + # { OID id-sha256 PARAMETERS NULL }| + # { OID id-sha384 PARAMETERS NULL }| + # { OID id-sha512 PARAMETERS NULL } + # } + # + # Appendix B.1 also says that for SHA-1/-2 algorithms, the parameters + # should be omitted. They may be present, but when they are, they shall + # have NULL value. + + digestAlgo = DerSequence([ DerObjectId(msg_hash.oid).encode() ]) + + if with_hash_parameters: + digestAlgo.append(DerNull().encode()) + + digest = DerOctetString(msg_hash.digest()) + digestInfo = DerSequence([ + digestAlgo.encode(), + digest.encode() + ]).encode() + + # We need at least 11 bytes for the remaining data: 3 fixed bytes and + # at least 8 bytes of padding. + if emLen<len(digestInfo)+11: + raise TypeError("Selected hash algorithm has a too long digest (%d bytes)." % len(digest)) + PS = b'\xFF' * (emLen - len(digestInfo) - 3) + return b'\x00\x01' + PS + b'\x00' + digestInfo + +def new(rsa_key): + """Create a signature object for creating + or verifying PKCS#1 v1.5 signatures. + + :parameter rsa_key: + The RSA key to use for signing or verifying the message. + This is a :class:`Cryptodome.PublicKey.RSA` object. + Signing is only possible when ``rsa_key`` is a **private** RSA key. + :type rsa_key: RSA object + + :return: a :class:`PKCS115_SigScheme` signature object + """ + return PKCS115_SigScheme(rsa_key) + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/pkcs1_15.pyi b/python/lib/python3.11/site-packages/Cryptodome/Signature/pkcs1_15.pyi new file mode 100644 index 0000000..04faf60 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/pkcs1_15.pyi @@ -0,0 +1,17 @@ +from typing import Optional +from typing_extensions import Protocol + +from Cryptodome.PublicKey.RSA import RsaKey + +class Hash(Protocol): + def digest(self) -> bytes: ... + +class PKCS115_SigScheme: + def __init__(self, rsa_key: RsaKey) -> None: ... + def can_sign(self) -> bool: ... + def sign(self, msg_hash: Hash) -> bytes: ... + def verify(self, msg_hash: Hash, signature: bytes) -> None: ... + +def _EMSA_PKCS1_V1_5_ENCODE(msg_hash: Hash, emLen: int, with_hash_parameters: Optional[bool]=True) -> bytes: ... + +def new(rsa_key: RsaKey) -> PKCS115_SigScheme: ...
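The pkcs1_15 module vendored above follows the usual PyCryptodome flow: hash the message first, then hand the hash object to the scheme returned by ``new()``. A minimal usage sketch (illustrative only; the 2048-bit key and the message are made up for this example and are not part of the diff)::

    from Cryptodome.PublicKey import RSA
    from Cryptodome.Hash import SHA256
    from Cryptodome.Signature import pkcs1_15

    key = RSA.generate(2048)                   # signing needs the private half
    msg_hash = SHA256.new(b"message to sign")  # a hash object, not raw bytes

    signature = pkcs1_15.new(key).sign(msg_hash)

    # verify() returns None on success and raises ValueError on failure
    try:
        pkcs1_15.new(key.publickey()).verify(msg_hash, signature)
        print("valid signature")
    except ValueError:
        print("invalid signature")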
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/pss.py b/python/lib/python3.11/site-packages/Cryptodome/Signature/pss.py new file mode 100644 index 0000000..0b05ed2 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/pss.py @@ -0,0 +1,386 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util.py3compat import bchr, bord, iter_range +import Cryptodome.Util.number +from Cryptodome.Util.number import (ceil_div, + long_to_bytes, + bytes_to_long + ) +from Cryptodome.Util.strxor import strxor +from Cryptodome import Random + + +class PSS_SigScheme: + """A signature object for ``RSASSA-PSS``. + Do not instantiate directly. + Use :func:`Cryptodome.Signature.pss.new`. + """ + + def __init__(self, key, mgfunc, saltLen, randfunc): + """Initialize this PKCS#1 PSS signature scheme object. + + :Parameters: + key : an RSA key object + If a private half is given, both signing and + verification are possible. + If a public half is given, only verification is possible. + mgfunc : callable + A mask generation function that accepts two parameters: + a string to use as seed, and the length of the mask to + generate, in bytes. + saltLen : integer + Length of the salt, in bytes. + randfunc : callable + A function that returns random bytes. + """ + + self._key = key + self._saltLen = saltLen + self._mgfunc = mgfunc + self._randfunc = randfunc + + def can_sign(self): + """Return ``True`` if this object can be used to sign messages.""" + return self._key.has_private() + + def sign(self, msg_hash): + """Create the PKCS#1 PSS signature of a message. + + This function is also called ``RSASSA-PSS-SIGN`` and + it is specified in + `section 8.1.1 of RFC8017 <https://tools.ietf.org/html/rfc8017#section-8.1.1>`_. + + :parameter msg_hash: + This is an object from the :mod:`Cryptodome.Hash` package. + It has been used to digest the message to sign. + :type msg_hash: hash object + + :return: the signature encoded as a *byte string*.
+ :raise ValueError: if the RSA key is not long enough for the given hash algorithm. + :raise TypeError: if the RSA key has no private half. + """ + + # Set defaults for salt length and mask generation function + if self._saltLen is None: + sLen = msg_hash.digest_size + else: + sLen = self._saltLen + + if self._mgfunc is None: + mgf = lambda x, y: MGF1(x, y, msg_hash) + else: + mgf = self._mgfunc + + modBits = Cryptodome.Util.number.size(self._key.n) + + # See 8.1.1 in RFC3447 + k = ceil_div(modBits, 8) # k is length in bytes of the modulus + # Step 1 + em = _EMSA_PSS_ENCODE(msg_hash, modBits-1, self._randfunc, mgf, sLen) + # Step 2a (OS2IP) + em_int = bytes_to_long(em) + # Step 2b (RSASP1) + m_int = self._key._decrypt(em_int) + # Step 2c (I2OSP) + signature = long_to_bytes(m_int, k) + return signature + + def verify(self, msg_hash, signature): + """Check if the PKCS#1 PSS signature over a message is valid. + + This function is also called ``RSASSA-PSS-VERIFY`` and + it is specified in + `section 8.1.2 of RFC8017 <https://tools.ietf.org/html/rfc8017#section-8.1.2>`_. + + :parameter msg_hash: + The hash that was carried out over the message. This is an object + belonging to the :mod:`Cryptodome.Hash` module. + :type msg_hash: hash object + + :parameter signature: + The signature that needs to be validated. + :type signature: bytes + + :raise ValueError: if the signature is not valid. + """ + + # Set defaults for salt length and mask generation function + if self._saltLen is None: + sLen = msg_hash.digest_size + else: + sLen = self._saltLen + if self._mgfunc: + mgf = self._mgfunc + else: + mgf = lambda x, y: MGF1(x, y, msg_hash) + + modBits = Cryptodome.Util.number.size(self._key.n) + + # See 8.1.2 in RFC3447 + k = ceil_div(modBits, 8) # Convert from bits to bytes + # Step 1 + if len(signature) != k: + raise ValueError("Incorrect signature") + # Step 2a (OS2IP) + signature_int = bytes_to_long(signature) + # Step 2b (RSAVP1) + em_int = self._key._encrypt(signature_int) + # Step 2c (I2OSP) + emLen = ceil_div(modBits - 1, 8) + em = long_to_bytes(em_int, emLen) + # Step 3/4 + _EMSA_PSS_VERIFY(msg_hash, em, modBits-1, mgf, sLen) + + +def MGF1(mgfSeed, maskLen, hash_gen): + """Mask Generation Function, described in `B.2.1 of RFC8017 + <https://tools.ietf.org/html/rfc8017>`_. + + :param mgfSeed: + seed from which the mask is generated + :type mgfSeed: byte string + + :param maskLen: + intended length in bytes of the mask + :type maskLen: integer + + :param hash_gen: + A module or a hash object from :mod:`Cryptodome.Hash` + :type hash_gen: hash module or hash object + + :return: the mask, as a *byte string* + """ + + T = b"" + for counter in iter_range(ceil_div(maskLen, hash_gen.digest_size)): + c = long_to_bytes(counter, 4) + hobj = hash_gen.new() + hobj.update(mgfSeed + c) + T = T + hobj.digest() + assert len(T) >= maskLen + return T[:maskLen] + + +def _EMSA_PSS_ENCODE(mhash, emBits, randFunc, mgf, sLen): + r""" + Implement the ``EMSA-PSS-ENCODE`` function, as defined + in PKCS#1 v2.1 (RFC3447, 9.1.1). + + The original ``EMSA-PSS-ENCODE`` actually accepts the message ``M`` + as input, and hashes it internally. Here, we expect that the message + has already been hashed instead. + + :Parameters: + mhash : hash object + The hash object that holds the digest of the message being signed. + emBits : int + Maximum length of the final encoding, in bits. + randFunc : callable + An RNG function that accepts an int as its only parameter, and returns + a string of random bytes, to be used as salt.
+ mgf : callable + A mask generation function that accepts two parameters: a string to + use as seed, and the length of the mask to generate, in bytes. + sLen : int + Length of the salt, in bytes. + + :Return: An ``emLen`` byte long string that encodes the hash + (with ``emLen = \ceil(emBits/8)``). + + :Raise ValueError: + When the digest or salt length is too big. + """ + + emLen = ceil_div(emBits, 8) + + # Bitmask of the leading bits that must be zero (there are 8*emLen - emBits of them) + lmask = 0 + for i in iter_range(8*emLen-emBits): + lmask = lmask >> 1 | 0x80 + + # Steps 1 and 2 have already been done + # Step 3 + if emLen < mhash.digest_size+sLen+2: + raise ValueError("Digest or salt length are too long" + " for given key size.") + # Step 4 + salt = randFunc(sLen) + # Step 5 + m_prime = bchr(0)*8 + mhash.digest() + salt + # Step 6 + h = mhash.new() + h.update(m_prime) + # Step 7 + ps = bchr(0)*(emLen-sLen-mhash.digest_size-2) + # Step 8 + db = ps + bchr(1) + salt + # Step 9 + dbMask = mgf(h.digest(), emLen-mhash.digest_size-1) + # Step 10 + maskedDB = strxor(db, dbMask) + # Step 11 + maskedDB = bchr(bord(maskedDB[0]) & ~lmask) + maskedDB[1:] + # Step 12 + em = maskedDB + h.digest() + bchr(0xBC) + return em + + +def _EMSA_PSS_VERIFY(mhash, em, emBits, mgf, sLen): + """ + Implement the ``EMSA-PSS-VERIFY`` function, as defined + in PKCS#1 v2.1 (RFC3447, 9.1.2). + + ``EMSA-PSS-VERIFY`` actually accepts the message ``M`` as input, + and hashes it internally. Here, we expect that the message has already + been hashed instead. + + :Parameters: + mhash : hash object + The hash object that holds the digest of the message to be verified. + em : string + The encoded message, as recovered from the signature that is + being verified. + emBits : int + Length of the final encoding (em), in bits. + mgf : callable + A mask generation function that accepts two parameters: a string to + use as seed, and the length of the mask to generate, in bytes. + sLen : int + Length of the salt, in bytes. + + :Raise ValueError: + When the encoding is inconsistent, or the digest or salt lengths + are too big. + """ + + emLen = ceil_div(emBits, 8) + + # Bitmask of the leading bits that must be zero (there are 8*emLen - emBits of them) + lmask = 0 + for i in iter_range(8*emLen-emBits): + lmask = lmask >> 1 | 0x80 + + # Steps 1 and 2 have already been done + # Step 3 + if emLen < mhash.digest_size+sLen+2: + raise ValueError("Incorrect signature") + # Step 4 + if ord(em[-1:]) != 0xBC: + raise ValueError("Incorrect signature") + # Step 5 + maskedDB = em[:emLen-mhash.digest_size-1] + h = em[emLen-mhash.digest_size-1:-1] + # Step 6 + if lmask & bord(em[0]): + raise ValueError("Incorrect signature") + # Step 7 + dbMask = mgf(h, emLen-mhash.digest_size-1) + # Step 8 + db = strxor(maskedDB, dbMask) + # Step 9 + db = bchr(bord(db[0]) & ~lmask) + db[1:] + # Step 10 + if not db.startswith(bchr(0)*(emLen-mhash.digest_size-sLen-2) + bchr(1)): + raise ValueError("Incorrect signature") + # Step 11 + if sLen > 0: + salt = db[-sLen:] + else: + salt = b"" + # Step 12 + m_prime = bchr(0)*8 + mhash.digest() + salt + # Step 13 + hobj = mhash.new() + hobj.update(m_prime) + hp = hobj.digest() + # Step 14 + if h != hp: + raise ValueError("Incorrect signature") + + +def new(rsa_key, **kwargs): + """Create an object for making or verifying PKCS#1 PSS signatures. + + :parameter rsa_key: + The RSA key to use for signing or verifying the message. + This is a :class:`Cryptodome.PublicKey.RSA` object. + Signing is only possible when ``rsa_key`` is a **private** RSA key.
+ :type rsa_key: RSA object + + :Keyword Arguments: + + * *mask_func* (``callable``) -- + A function that returns the mask (as `bytes`). + It must accept two parameters: a seed (as `bytes`) + and the length of the data to return. + + If not specified, it will be the function :func:`MGF1` defined in + `RFC8017 <https://tools.ietf.org/html/rfc8017#page-67>`_ and + combined with the same hash algorithm applied to the + message to sign or verify. + + If you want to use a different function, for instance still :func:`MGF1` + but together with another hash, you can do:: + + from Cryptodome.Hash import SHA256 + from Cryptodome.Signature.pss import MGF1 + mgf = lambda x, y: MGF1(x, y, SHA256) + + * *salt_bytes* (``integer``) -- + Length of the salt, in bytes. + It is a value between 0 and ``emLen - hLen - 2``, where ``emLen`` + is the size of the RSA modulus and ``hLen`` is the size of the digest + applied to the message to sign or verify. + + The salt is generated internally; you don't need to provide it. + + If not specified, the salt length will be ``hLen``. + If it is zero, the signature scheme becomes deterministic. + + Note that in some implementations such as OpenSSL the default + salt length is ``emLen - hLen - 2`` (even though it is not more + secure than ``hLen``). + + * *rand_func* (``callable``) -- + A function that returns random ``bytes``, of the desired length. + The default is :func:`Cryptodome.Random.get_random_bytes`. + + :return: a :class:`PSS_SigScheme` signature object + """ + + mask_func = kwargs.pop("mask_func", None) + salt_len = kwargs.pop("salt_bytes", None) + rand_func = kwargs.pop("rand_func", None) + if rand_func is None: + rand_func = Random.get_random_bytes + if kwargs: + raise ValueError("Unknown keywords: " + str(kwargs.keys())) + return PSS_SigScheme(rsa_key, mask_func, salt_len, rand_func) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Signature/pss.pyi b/python/lib/python3.11/site-packages/Cryptodome/Signature/pss.pyi new file mode 100644 index 0000000..84a960e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Signature/pss.pyi @@ -0,0 +1,30 @@ +from typing import Union, Callable, Optional +from typing_extensions import Protocol + +from Cryptodome.PublicKey.RSA import RsaKey + + +class Hash(Protocol): + def digest(self) -> bytes: ... + def update(self, bytes) -> None: ... + + +class HashModule(Protocol): + @staticmethod + def new(data: Optional[bytes]) -> Hash: ... + + +MaskFunction = Callable[[bytes, int, Union[Hash, HashModule]], bytes] +RndFunction = Callable[[int], bytes] + +class PSS_SigScheme: + def __init__(self, key: RsaKey, mgfunc: MaskFunction, saltLen: int, randfunc: RndFunction) -> None: ... + def can_sign(self) -> bool: ... + def sign(self, msg_hash: Hash) -> bytes: ... + def verify(self, msg_hash: Hash, signature: bytes) -> None: ... + + +MGF1: MaskFunction +def _EMSA_PSS_ENCODE(mhash: Hash, emBits: int, randFunc: RndFunction, mgf: MaskFunction, sLen: int) -> bytes: ... +def _EMSA_PSS_VERIFY(mhash: Hash, em: bytes, emBits: int, mgf: MaskFunction, sLen: int) -> None: ... +def new(rsa_key: RsaKey, **kwargs: Union[MaskFunction, RndFunction, int]) -> PSS_SigScheme: ...
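The pss module has the same shape as pkcs1_15 but is randomized by default: each ``sign()`` call draws a fresh salt of ``digest_size`` bytes via ``rand_func``, so two signatures over the same message differ (pass ``salt_bytes=0`` for a deterministic scheme). A short sketch under the same assumptions as the pkcs1_15 example above::

    from Cryptodome.PublicKey import RSA
    from Cryptodome.Hash import SHA256
    from Cryptodome.Signature import pss

    key = RSA.generate(2048)
    msg_hash = SHA256.new(b"message to sign")

    signature = pss.new(key).sign(msg_hash)  # salted, hence non-deterministic

    try:
        pss.new(key.publickey()).verify(msg_hash, signature)
        print("valid signature")
    except ValueError:
        print("invalid signature")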
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/Counter.py b/python/lib/python3.11/site-packages/Cryptodome/Util/Counter.py new file mode 100644 index 0000000..c67bc95 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/Counter.py @@ -0,0 +1,77 @@ +# -*- coding: ascii -*- +# +# Util/Counter.py : Fast counter for use with CTR-mode ciphers +# +# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +def new(nbits, prefix=b"", suffix=b"", initial_value=1, little_endian=False, allow_wraparound=False): + """Create a stateful counter block function suitable for CTR encryption modes. + + Each call to the function returns the next counter block. + Each counter block is made up of three parts: + + +------+--------------+------+ + |prefix| counter value|suffix| + +------+--------------+------+ + + The counter value is incremented by 1 at each call. + + Args: + nbits (integer): + Length of the desired counter value, in bits. It must be a multiple of 8. + prefix (byte string): + The constant prefix of the counter block. By default, no prefix is + used. + suffix (byte string): + The constant suffix of the counter block. By default, no suffix is + used. + initial_value (integer): + The initial value of the counter. Default value is 1. + Its length in bits must not exceed the argument ``nbits``. + little_endian (boolean): + If ``True``, the counter number will be encoded in little endian format. + If ``False`` (default), in big endian format. + allow_wraparound (boolean): + This parameter is ignored. + Returns: + An object that can be passed with the :data:`counter` parameter to a CTR mode + cipher. + + It must hold that *len(prefix) + nbits//8 + len(suffix)* matches the + block size of the underlying block cipher.
+ """ + + if (nbits % 8) != 0: + raise ValueError("'nbits' must be a multiple of 8") + + iv_bl = initial_value.bit_length() + if iv_bl > nbits: + raise ValueError("Initial value takes %d bits but it is longer than " + "the counter (%d bits)" % + (iv_bl, nbits)) + + # Ignore wraparound + return {"counter_len": nbits // 8, + "prefix": prefix, + "suffix": suffix, + "initial_value": initial_value, + "little_endian": little_endian + } diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/Counter.pyi b/python/lib/python3.11/site-packages/Cryptodome/Util/Counter.pyi new file mode 100644 index 0000000..fa2ffdd --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/Counter.pyi @@ -0,0 +1,5 @@ +from typing import Optional, Union, Dict + +def new(nbits: int, prefix: Optional[bytes]=..., suffix: Optional[bytes]=..., initial_value: Optional[int]=1, + little_endian: Optional[bool]=False, allow_wraparound: Optional[bool]=False) -> \ + Dict[str, Union[int, bytes, bool]]: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/Padding.py b/python/lib/python3.11/site-packages/Cryptodome/Util/Padding.py new file mode 100644 index 0000000..b525475 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/Padding.py @@ -0,0 +1,108 @@ +# +# Util/Padding.py : Functions to manage padding +# +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +__all__ = [ 'pad', 'unpad' ] + +from Cryptodome.Util.py3compat import * + + +def pad(data_to_pad, block_size, style='pkcs7'): + """Apply standard padding. + + Args: + data_to_pad (byte string): + The data that needs to be padded. + block_size (integer): + The block boundary to use for padding. The output length is guaranteed + to be a multiple of :data:`block_size`. + style (string): + Padding algorithm. It can be *'pkcs7'* (default), *'iso7816'* or *'x923'*. + + Return: + byte string : the original data with the appropriate padding added at the end. 
+ """ + + padding_len = block_size-len(data_to_pad)%block_size + if style == 'pkcs7': + padding = bchr(padding_len)*padding_len + elif style == 'x923': + padding = bchr(0)*(padding_len-1) + bchr(padding_len) + elif style == 'iso7816': + padding = bchr(128) + bchr(0)*(padding_len-1) + else: + raise ValueError("Unknown padding style") + return data_to_pad + padding + + +def unpad(padded_data, block_size, style='pkcs7'): + """Remove standard padding. + + Args: + padded_data (byte string): + A piece of data with padding that needs to be stripped. + block_size (integer): + The block boundary to use for padding. The input length + must be a multiple of :data:`block_size`. + style (string): + Padding algorithm. It can be *'pkcs7'* (default), *'iso7816'* or *'x923'*. + Return: + byte string : data without padding. + Raises: + ValueError: if the padding is incorrect. + """ + + pdata_len = len(padded_data) + if pdata_len == 0: + raise ValueError("Zero-length input cannot be unpadded") + if pdata_len % block_size: + raise ValueError("Input data is not padded") + if style in ('pkcs7', 'x923'): + padding_len = bord(padded_data[-1]) + if padding_len<1 or padding_len>min(block_size, pdata_len): + raise ValueError("Padding is incorrect.") + if style == 'pkcs7': + if padded_data[-padding_len:]!=bchr(padding_len)*padding_len: + raise ValueError("PKCS#7 padding is incorrect.") + else: + if padded_data[-padding_len:-1]!=bchr(0)*(padding_len-1): + raise ValueError("ANSI X.923 padding is incorrect.") + elif style == 'iso7816': + padding_len = pdata_len - padded_data.rfind(bchr(128)) + if padding_len<1 or padding_len>min(block_size, pdata_len): + raise ValueError("Padding is incorrect.") + if padding_len>1 and padded_data[1-padding_len:]!=bchr(0)*(padding_len-1): + raise ValueError("ISO 7816-4 padding is incorrect.") + else: + raise ValueError("Unknown padding style") + return padded_data[:-padding_len] + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/Padding.pyi b/python/lib/python3.11/site-packages/Cryptodome/Util/Padding.pyi new file mode 100644 index 0000000..4d8d30d --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/Padding.pyi @@ -0,0 +1,6 @@ +from typing import Optional + +__all__ = [ 'pad', 'unpad' ] + +def pad(data_to_pad: bytes, block_size: int, style: Optional[str]='pkcs7') -> bytes: ... +def unpad(padded_data: bytes, block_size: int, style: Optional[str]='pkcs7') -> bytes: ... \ No newline at end of file diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/RFC1751.py b/python/lib/python3.11/site-packages/Cryptodome/Util/RFC1751.py new file mode 100644 index 0000000..10859c3 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/RFC1751.py @@ -0,0 +1,386 @@ +# rfc1751.py : Converts between 128-bit strings and a human-readable +# sequence of words, as defined in RFC1751: "A Convention for +# Human-Readable 128-bit Keys", by Daniel L. McDonald. +# +# Part of the Python Cryptography Toolkit +# +# Written by Andrew M. Kuchling and others +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. 
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +from __future__ import print_function + +import binascii + +from Cryptodome.Util.py3compat import bord, bchr + +binary = {0: '0000', 1: '0001', 2: '0010', 3: '0011', 4: '0100', 5: '0101', + 6: '0110', 7: '0111', 8: '1000', 9: '1001', 10: '1010', 11: '1011', + 12: '1100', 13: '1101', 14: '1110', 15: '1111'} + + +def _key2bin(s): + "Convert a key into a string of binary digits" + kl = map(lambda x: bord(x), s) + kl = map(lambda x: binary[x >> 4] + binary[x & 15], kl) + return ''.join(kl) + + +def _extract(key, start, length): + """Extract a bitstring (Python 2.x) / bytestring (Python 3.x) from a string of binary digits, and return its + numeric value.""" + + result = 0 + for y in key[start:start+length]: + result = result * 2 + ord(y) - 48 + return result + + +def key_to_english(key): + """Transform an arbitrary key into a string containing English words. + + Example:: + + >>> from Cryptodome.Util.RFC1751 import key_to_english + >>> key_to_english(b'66666666') + 'RAM LOIS GOAD CREW CARE HIT' + + Args: + key (byte string): + The key to convert. Its length must be a multiple of 8. + Return: + A string of English words. + """ + + if len(key) % 8 != 0: + raise ValueError('The length of the key must be a multiple of 8.') + + english = '' + for index in range(0, len(key), 8): # Loop over 8-byte subkeys + subkey = key[index:index + 8] + # Compute the parity of the key + skbin = _key2bin(subkey) + p = 0 + for i in range(0, 64, 2): + p = p + _extract(skbin, i, 2) + # Append parity bits to the subkey + skbin = _key2bin(subkey + bchr((p << 6) & 255)) + for i in range(0, 64, 11): + english = english + wordlist[_extract(skbin, i, 11)] + ' ' + + return english.strip() + + +def english_to_key(s): + """Transform a string into a corresponding key. + + Example:: + + >>> from Cryptodome.Util.RFC1751 import english_to_key + >>> english_to_key('RAM LOIS GOAD CREW CARE HIT') + b'66666666' + + Args: + s (string): the string with the words separated by whitespace; + the number of words must be a multiple of 6. + Return: + A byte string.
+ """ + + L = s.upper().split() + key = b'' + for index in range(0, len(L), 6): + sublist = L[index:index + 6] + char = 9 * [0] + bits = 0 + for i in sublist: + index = wordlist.index(i) + shift = (8 - (bits + 11) % 8) % 8 + y = index << shift + cl, cc, cr = (y >> 16), (y >> 8) & 0xff, y & 0xff + if (shift > 5): + char[bits >> 3] = char[bits >> 3] | cl + char[(bits >> 3) + 1] = char[(bits >> 3) + 1] | cc + char[(bits >> 3) + 2] = char[(bits >> 3) + 2] | cr + elif shift > -3: + char[bits >> 3] = char[bits >> 3] | cc + char[(bits >> 3) + 1] = char[(bits >> 3) + 1] | cr + else: + char[bits >> 3] = char[bits >> 3] | cr + bits = bits + 11 + + subkey = b'' + for y in char: + subkey = subkey + bchr(y) + + # Check the parity of the resulting key + skbin = _key2bin(subkey) + p = 0 + for i in range(0, 64, 2): + p = p + _extract(skbin, i, 2) + if (p & 3) != _extract(skbin, 64, 2): + raise ValueError("Parity error in resulting key") + key = key + subkey[0:8] + return key + + +wordlist = [ + "A", "ABE", "ACE", "ACT", "AD", "ADA", "ADD", + "AGO", "AID", "AIM", "AIR", "ALL", "ALP", "AM", "AMY", "AN", "ANA", + "AND", "ANN", "ANT", "ANY", "APE", "APS", "APT", "ARC", "ARE", "ARK", + "ARM", "ART", "AS", "ASH", "ASK", "AT", "ATE", "AUG", "AUK", "AVE", + "AWE", "AWK", "AWL", "AWN", "AX", "AYE", "BAD", "BAG", "BAH", "BAM", + "BAN", "BAR", "BAT", "BAY", "BE", "BED", "BEE", "BEG", "BEN", "BET", + "BEY", "BIB", "BID", "BIG", "BIN", "BIT", "BOB", "BOG", "BON", "BOO", + "BOP", "BOW", "BOY", "BUB", "BUD", "BUG", "BUM", "BUN", "BUS", "BUT", + "BUY", "BY", "BYE", "CAB", "CAL", "CAM", "CAN", "CAP", "CAR", "CAT", + "CAW", "COD", "COG", "COL", "CON", "COO", "COP", "COT", "COW", "COY", + "CRY", "CUB", "CUE", "CUP", "CUR", "CUT", "DAB", "DAD", "DAM", "DAN", + "DAR", "DAY", "DEE", "DEL", "DEN", "DES", "DEW", "DID", "DIE", "DIG", + "DIN", "DIP", "DO", "DOE", "DOG", "DON", "DOT", "DOW", "DRY", "DUB", + "DUD", "DUE", "DUG", "DUN", "EAR", "EAT", "ED", "EEL", "EGG", "EGO", + "ELI", "ELK", "ELM", "ELY", "EM", "END", "EST", "ETC", "EVA", "EVE", + "EWE", "EYE", "FAD", "FAN", "FAR", "FAT", "FAY", "FED", "FEE", "FEW", + "FIB", "FIG", "FIN", "FIR", "FIT", "FLO", "FLY", "FOE", "FOG", "FOR", + "FRY", "FUM", "FUN", "FUR", "GAB", "GAD", "GAG", "GAL", "GAM", "GAP", + "GAS", "GAY", "GEE", "GEL", "GEM", "GET", "GIG", "GIL", "GIN", "GO", + "GOT", "GUM", "GUN", "GUS", "GUT", "GUY", "GYM", "GYP", "HA", "HAD", + "HAL", "HAM", "HAN", "HAP", "HAS", "HAT", "HAW", "HAY", "HE", "HEM", + "HEN", "HER", "HEW", "HEY", "HI", "HID", "HIM", "HIP", "HIS", "HIT", + "HO", "HOB", "HOC", "HOE", "HOG", "HOP", "HOT", "HOW", "HUB", "HUE", + "HUG", "HUH", "HUM", "HUT", "I", "ICY", "IDA", "IF", "IKE", "ILL", + "INK", "INN", "IO", "ION", "IQ", "IRA", "IRE", "IRK", "IS", "IT", + "ITS", "IVY", "JAB", "JAG", "JAM", "JAN", "JAR", "JAW", "JAY", "JET", + "JIG", "JIM", "JO", "JOB", "JOE", "JOG", "JOT", "JOY", "JUG", "JUT", + "KAY", "KEG", "KEN", "KEY", "KID", "KIM", "KIN", "KIT", "LA", "LAB", + "LAC", "LAD", "LAG", "LAM", "LAP", "LAW", "LAY", "LEA", "LED", "LEE", + "LEG", "LEN", "LEO", "LET", "LEW", "LID", "LIE", "LIN", "LIP", "LIT", + "LO", "LOB", "LOG", "LOP", "LOS", "LOT", "LOU", "LOW", "LOY", "LUG", + "LYE", "MA", "MAC", "MAD", "MAE", "MAN", "MAO", "MAP", "MAT", "MAW", + "MAY", "ME", "MEG", "MEL", "MEN", "MET", "MEW", "MID", "MIN", "MIT", + "MOB", "MOD", "MOE", "MOO", "MOP", "MOS", "MOT", "MOW", "MUD", "MUG", + "MUM", "MY", "NAB", "NAG", "NAN", "NAP", "NAT", "NAY", "NE", "NED", + "NEE", "NET", "NEW", "NIB", "NIL", "NIP", "NIT", "NO", "NOB", "NOD", + "NON", "NOR", "NOT", 
"NOV", "NOW", "NU", "NUN", "NUT", "O", "OAF", + "OAK", "OAR", "OAT", "ODD", "ODE", "OF", "OFF", "OFT", "OH", "OIL", + "OK", "OLD", "ON", "ONE", "OR", "ORB", "ORE", "ORR", "OS", "OTT", + "OUR", "OUT", "OVA", "OW", "OWE", "OWL", "OWN", "OX", "PA", "PAD", + "PAL", "PAM", "PAN", "PAP", "PAR", "PAT", "PAW", "PAY", "PEA", "PEG", + "PEN", "PEP", "PER", "PET", "PEW", "PHI", "PI", "PIE", "PIN", "PIT", + "PLY", "PO", "POD", "POE", "POP", "POT", "POW", "PRO", "PRY", "PUB", + "PUG", "PUN", "PUP", "PUT", "QUO", "RAG", "RAM", "RAN", "RAP", "RAT", + "RAW", "RAY", "REB", "RED", "REP", "RET", "RIB", "RID", "RIG", "RIM", + "RIO", "RIP", "ROB", "ROD", "ROE", "RON", "ROT", "ROW", "ROY", "RUB", + "RUE", "RUG", "RUM", "RUN", "RYE", "SAC", "SAD", "SAG", "SAL", "SAM", + "SAN", "SAP", "SAT", "SAW", "SAY", "SEA", "SEC", "SEE", "SEN", "SET", + "SEW", "SHE", "SHY", "SIN", "SIP", "SIR", "SIS", "SIT", "SKI", "SKY", + "SLY", "SO", "SOB", "SOD", "SON", "SOP", "SOW", "SOY", "SPA", "SPY", + "SUB", "SUD", "SUE", "SUM", "SUN", "SUP", "TAB", "TAD", "TAG", "TAN", + "TAP", "TAR", "TEA", "TED", "TEE", "TEN", "THE", "THY", "TIC", "TIE", + "TIM", "TIN", "TIP", "TO", "TOE", "TOG", "TOM", "TON", "TOO", "TOP", + "TOW", "TOY", "TRY", "TUB", "TUG", "TUM", "TUN", "TWO", "UN", "UP", + "US", "USE", "VAN", "VAT", "VET", "VIE", "WAD", "WAG", "WAR", "WAS", + "WAY", "WE", "WEB", "WED", "WEE", "WET", "WHO", "WHY", "WIN", "WIT", + "WOK", "WON", "WOO", "WOW", "WRY", "WU", "YAM", "YAP", "YAW", "YE", + "YEA", "YES", "YET", "YOU", "ABED", "ABEL", "ABET", "ABLE", "ABUT", + "ACHE", "ACID", "ACME", "ACRE", "ACTA", "ACTS", "ADAM", "ADDS", + "ADEN", "AFAR", "AFRO", "AGEE", "AHEM", "AHOY", "AIDA", "AIDE", + "AIDS", "AIRY", "AJAR", "AKIN", "ALAN", "ALEC", "ALGA", "ALIA", + "ALLY", "ALMA", "ALOE", "ALSO", "ALTO", "ALUM", "ALVA", "AMEN", + "AMES", "AMID", "AMMO", "AMOK", "AMOS", "AMRA", "ANDY", "ANEW", + "ANNA", "ANNE", "ANTE", "ANTI", "AQUA", "ARAB", "ARCH", "AREA", + "ARGO", "ARID", "ARMY", "ARTS", "ARTY", "ASIA", "ASKS", "ATOM", + "AUNT", "AURA", "AUTO", "AVER", "AVID", "AVIS", "AVON", "AVOW", + "AWAY", "AWRY", "BABE", "BABY", "BACH", "BACK", "BADE", "BAIL", + "BAIT", "BAKE", "BALD", "BALE", "BALI", "BALK", "BALL", "BALM", + "BAND", "BANE", "BANG", "BANK", "BARB", "BARD", "BARE", "BARK", + "BARN", "BARR", "BASE", "BASH", "BASK", "BASS", "BATE", "BATH", + "BAWD", "BAWL", "BEAD", "BEAK", "BEAM", "BEAN", "BEAR", "BEAT", + "BEAU", "BECK", "BEEF", "BEEN", "BEER", + "BEET", "BELA", "BELL", "BELT", "BEND", "BENT", "BERG", "BERN", + "BERT", "BESS", "BEST", "BETA", "BETH", "BHOY", "BIAS", "BIDE", + "BIEN", "BILE", "BILK", "BILL", "BIND", "BING", "BIRD", "BITE", + "BITS", "BLAB", "BLAT", "BLED", "BLEW", "BLOB", "BLOC", "BLOT", + "BLOW", "BLUE", "BLUM", "BLUR", "BOAR", "BOAT", "BOCA", "BOCK", + "BODE", "BODY", "BOGY", "BOHR", "BOIL", "BOLD", "BOLO", "BOLT", + "BOMB", "BONA", "BOND", "BONE", "BONG", "BONN", "BONY", "BOOK", + "BOOM", "BOON", "BOOT", "BORE", "BORG", "BORN", "BOSE", "BOSS", + "BOTH", "BOUT", "BOWL", "BOYD", "BRAD", "BRAE", "BRAG", "BRAN", + "BRAY", "BRED", "BREW", "BRIG", "BRIM", "BROW", "BUCK", "BUDD", + "BUFF", "BULB", "BULK", "BULL", "BUNK", "BUNT", "BUOY", "BURG", + "BURL", "BURN", "BURR", "BURT", "BURY", "BUSH", "BUSS", "BUST", + "BUSY", "BYTE", "CADY", "CAFE", "CAGE", "CAIN", "CAKE", "CALF", + "CALL", "CALM", "CAME", "CANE", "CANT", "CARD", "CARE", "CARL", + "CARR", "CART", "CASE", "CASH", "CASK", "CAST", "CAVE", "CEIL", + "CELL", "CENT", "CERN", "CHAD", "CHAR", "CHAT", "CHAW", "CHEF", + "CHEN", "CHEW", "CHIC", "CHIN", "CHOU", "CHOW", "CHUB", 
"CHUG", + "CHUM", "CITE", "CITY", "CLAD", "CLAM", "CLAN", "CLAW", "CLAY", + "CLOD", "CLOG", "CLOT", "CLUB", "CLUE", "COAL", "COAT", "COCA", + "COCK", "COCO", "CODA", "CODE", "CODY", "COED", "COIL", "COIN", + "COKE", "COLA", "COLD", "COLT", "COMA", "COMB", "COME", "COOK", + "COOL", "COON", "COOT", "CORD", "CORE", "CORK", "CORN", "COST", + "COVE", "COWL", "CRAB", "CRAG", "CRAM", "CRAY", "CREW", "CRIB", + "CROW", "CRUD", "CUBA", "CUBE", "CUFF", "CULL", "CULT", "CUNY", + "CURB", "CURD", "CURE", "CURL", "CURT", "CUTS", "DADE", "DALE", + "DAME", "DANA", "DANE", "DANG", "DANK", "DARE", "DARK", "DARN", + "DART", "DASH", "DATA", "DATE", "DAVE", "DAVY", "DAWN", "DAYS", + "DEAD", "DEAF", "DEAL", "DEAN", "DEAR", "DEBT", "DECK", "DEED", + "DEEM", "DEER", "DEFT", "DEFY", "DELL", "DENT", "DENY", "DESK", + "DIAL", "DICE", "DIED", "DIET", "DIME", "DINE", "DING", "DINT", + "DIRE", "DIRT", "DISC", "DISH", "DISK", "DIVE", "DOCK", "DOES", + "DOLE", "DOLL", "DOLT", "DOME", "DONE", "DOOM", "DOOR", "DORA", + "DOSE", "DOTE", "DOUG", "DOUR", "DOVE", "DOWN", "DRAB", "DRAG", + "DRAM", "DRAW", "DREW", "DRUB", "DRUG", "DRUM", "DUAL", "DUCK", + "DUCT", "DUEL", "DUET", "DUKE", "DULL", "DUMB", "DUNE", "DUNK", + "DUSK", "DUST", "DUTY", "EACH", "EARL", "EARN", "EASE", "EAST", + "EASY", "EBEN", "ECHO", "EDDY", "EDEN", "EDGE", "EDGY", "EDIT", + "EDNA", "EGAN", "ELAN", "ELBA", "ELLA", "ELSE", "EMIL", "EMIT", + "EMMA", "ENDS", "ERIC", "EROS", "EVEN", "EVER", "EVIL", "EYED", + "FACE", "FACT", "FADE", "FAIL", "FAIN", "FAIR", "FAKE", "FALL", + "FAME", "FANG", "FARM", "FAST", "FATE", "FAWN", "FEAR", "FEAT", + "FEED", "FEEL", "FEET", "FELL", "FELT", "FEND", "FERN", "FEST", + "FEUD", "FIEF", "FIGS", "FILE", "FILL", "FILM", "FIND", "FINE", + "FINK", "FIRE", "FIRM", "FISH", "FISK", "FIST", "FITS", "FIVE", + "FLAG", "FLAK", "FLAM", "FLAT", "FLAW", "FLEA", "FLED", "FLEW", + "FLIT", "FLOC", "FLOG", "FLOW", "FLUB", "FLUE", "FOAL", "FOAM", + "FOGY", "FOIL", "FOLD", "FOLK", "FOND", "FONT", "FOOD", "FOOL", + "FOOT", "FORD", "FORE", "FORK", "FORM", "FORT", "FOSS", "FOUL", + "FOUR", "FOWL", "FRAU", "FRAY", "FRED", "FREE", "FRET", "FREY", + "FROG", "FROM", "FUEL", "FULL", "FUME", "FUND", "FUNK", "FURY", + "FUSE", "FUSS", "GAFF", "GAGE", "GAIL", "GAIN", "GAIT", "GALA", + "GALE", "GALL", "GALT", "GAME", "GANG", "GARB", "GARY", "GASH", + "GATE", "GAUL", "GAUR", "GAVE", "GAWK", "GEAR", "GELD", "GENE", + "GENT", "GERM", "GETS", "GIBE", "GIFT", "GILD", "GILL", "GILT", + "GINA", "GIRD", "GIRL", "GIST", "GIVE", "GLAD", "GLEE", "GLEN", + "GLIB", "GLOB", "GLOM", "GLOW", "GLUE", "GLUM", "GLUT", "GOAD", + "GOAL", "GOAT", "GOER", "GOES", "GOLD", "GOLF", "GONE", "GONG", + "GOOD", "GOOF", "GORE", "GORY", "GOSH", "GOUT", "GOWN", "GRAB", + "GRAD", "GRAY", "GREG", "GREW", "GREY", "GRID", "GRIM", "GRIN", + "GRIT", "GROW", "GRUB", "GULF", "GULL", "GUNK", "GURU", "GUSH", + "GUST", "GWEN", "GWYN", "HAAG", "HAAS", "HACK", "HAIL", "HAIR", + "HALE", "HALF", "HALL", "HALO", "HALT", "HAND", "HANG", "HANK", + "HANS", "HARD", "HARK", "HARM", "HART", "HASH", "HAST", "HATE", + "HATH", "HAUL", "HAVE", "HAWK", "HAYS", "HEAD", "HEAL", "HEAR", + "HEAT", "HEBE", "HECK", "HEED", "HEEL", "HEFT", "HELD", "HELL", + "HELM", "HERB", "HERD", "HERE", "HERO", "HERS", "HESS", "HEWN", + "HICK", "HIDE", "HIGH", "HIKE", "HILL", "HILT", "HIND", "HINT", + "HIRE", "HISS", "HIVE", "HOBO", "HOCK", "HOFF", "HOLD", "HOLE", + "HOLM", "HOLT", "HOME", "HONE", "HONK", "HOOD", "HOOF", "HOOK", + "HOOT", "HORN", "HOSE", "HOST", "HOUR", "HOVE", "HOWE", "HOWL", + "HOYT", "HUCK", "HUED", "HUFF", "HUGE", 
"HUGH", "HUGO", "HULK", + "HULL", "HUNK", "HUNT", "HURD", "HURL", "HURT", "HUSH", "HYDE", + "HYMN", "IBIS", "ICON", "IDEA", "IDLE", "IFFY", "INCA", "INCH", + "INTO", "IONS", "IOTA", "IOWA", "IRIS", "IRMA", "IRON", "ISLE", + "ITCH", "ITEM", "IVAN", "JACK", "JADE", "JAIL", "JAKE", "JANE", + "JAVA", "JEAN", "JEFF", "JERK", "JESS", "JEST", "JIBE", "JILL", + "JILT", "JIVE", "JOAN", "JOBS", "JOCK", "JOEL", "JOEY", "JOHN", + "JOIN", "JOKE", "JOLT", "JOVE", "JUDD", "JUDE", "JUDO", "JUDY", + "JUJU", "JUKE", "JULY", "JUNE", "JUNK", "JUNO", "JURY", "JUST", + "JUTE", "KAHN", "KALE", "KANE", "KANT", "KARL", "KATE", "KEEL", + "KEEN", "KENO", "KENT", "KERN", "KERR", "KEYS", "KICK", "KILL", + "KIND", "KING", "KIRK", "KISS", "KITE", "KLAN", "KNEE", "KNEW", + "KNIT", "KNOB", "KNOT", "KNOW", "KOCH", "KONG", "KUDO", "KURD", + "KURT", "KYLE", "LACE", "LACK", "LACY", "LADY", "LAID", "LAIN", + "LAIR", "LAKE", "LAMB", "LAME", "LAND", "LANE", "LANG", "LARD", + "LARK", "LASS", "LAST", "LATE", "LAUD", "LAVA", "LAWN", "LAWS", + "LAYS", "LEAD", "LEAF", "LEAK", "LEAN", "LEAR", "LEEK", "LEER", + "LEFT", "LEND", "LENS", "LENT", "LEON", "LESK", "LESS", "LEST", + "LETS", "LIAR", "LICE", "LICK", "LIED", "LIEN", "LIES", "LIEU", + "LIFE", "LIFT", "LIKE", "LILA", "LILT", "LILY", "LIMA", "LIMB", + "LIME", "LIND", "LINE", "LINK", "LINT", "LION", "LISA", "LIST", + "LIVE", "LOAD", "LOAF", "LOAM", "LOAN", "LOCK", "LOFT", "LOGE", + "LOIS", "LOLA", "LONE", "LONG", "LOOK", "LOON", "LOOT", "LORD", + "LORE", "LOSE", "LOSS", "LOST", "LOUD", "LOVE", "LOWE", "LUCK", + "LUCY", "LUGE", "LUKE", "LULU", "LUND", "LUNG", "LURA", "LURE", + "LURK", "LUSH", "LUST", "LYLE", "LYNN", "LYON", "LYRA", "MACE", + "MADE", "MAGI", "MAID", "MAIL", "MAIN", "MAKE", "MALE", "MALI", + "MALL", "MALT", "MANA", "MANN", "MANY", "MARC", "MARE", "MARK", + "MARS", "MART", "MARY", "MASH", "MASK", "MASS", "MAST", "MATE", + "MATH", "MAUL", "MAYO", "MEAD", "MEAL", "MEAN", "MEAT", "MEEK", + "MEET", "MELD", "MELT", "MEMO", "MEND", "MENU", "MERT", "MESH", + "MESS", "MICE", "MIKE", "MILD", "MILE", "MILK", "MILL", "MILT", + "MIMI", "MIND", "MINE", "MINI", "MINK", "MINT", "MIRE", "MISS", + "MIST", "MITE", "MITT", "MOAN", "MOAT", "MOCK", "MODE", "MOLD", + "MOLE", "MOLL", "MOLT", "MONA", "MONK", "MONT", "MOOD", "MOON", + "MOOR", "MOOT", "MORE", "MORN", "MORT", "MOSS", "MOST", "MOTH", + "MOVE", "MUCH", "MUCK", "MUDD", "MUFF", "MULE", "MULL", "MURK", + "MUSH", "MUST", "MUTE", "MUTT", "MYRA", "MYTH", "NAGY", "NAIL", + "NAIR", "NAME", "NARY", "NASH", "NAVE", "NAVY", "NEAL", "NEAR", + "NEAT", "NECK", "NEED", "NEIL", "NELL", "NEON", "NERO", "NESS", + "NEST", "NEWS", "NEWT", "NIBS", "NICE", "NICK", "NILE", "NINA", + "NINE", "NOAH", "NODE", "NOEL", "NOLL", "NONE", "NOOK", "NOON", + "NORM", "NOSE", "NOTE", "NOUN", "NOVA", "NUDE", "NULL", "NUMB", + "OATH", "OBEY", "OBOE", "ODIN", "OHIO", "OILY", "OINT", "OKAY", + "OLAF", "OLDY", "OLGA", "OLIN", "OMAN", "OMEN", "OMIT", "ONCE", + "ONES", "ONLY", "ONTO", "ONUS", "ORAL", "ORGY", "OSLO", "OTIS", + "OTTO", "OUCH", "OUST", "OUTS", "OVAL", "OVEN", "OVER", "OWLY", + "OWNS", "QUAD", "QUIT", "QUOD", "RACE", "RACK", "RACY", "RAFT", + "RAGE", "RAID", "RAIL", "RAIN", "RAKE", "RANK", "RANT", "RARE", + "RASH", "RATE", "RAVE", "RAYS", "READ", "REAL", "REAM", "REAR", + "RECK", "REED", "REEF", "REEK", "REEL", "REID", "REIN", "RENA", + "REND", "RENT", "REST", "RICE", "RICH", "RICK", "RIDE", "RIFT", + "RILL", "RIME", "RING", "RINK", "RISE", "RISK", "RITE", "ROAD", + "ROAM", "ROAR", "ROBE", "ROCK", "RODE", "ROIL", "ROLL", "ROME", + "ROOD", "ROOF", "ROOK", 
"ROOM", "ROOT", "ROSA", "ROSE", "ROSS", + "ROSY", "ROTH", "ROUT", "ROVE", "ROWE", "ROWS", "RUBE", "RUBY", + "RUDE", "RUDY", "RUIN", "RULE", "RUNG", "RUNS", "RUNT", "RUSE", + "RUSH", "RUSK", "RUSS", "RUST", "RUTH", "SACK", "SAFE", "SAGE", + "SAID", "SAIL", "SALE", "SALK", "SALT", "SAME", "SAND", "SANE", + "SANG", "SANK", "SARA", "SAUL", "SAVE", "SAYS", "SCAN", "SCAR", + "SCAT", "SCOT", "SEAL", "SEAM", "SEAR", "SEAT", "SEED", "SEEK", + "SEEM", "SEEN", "SEES", "SELF", "SELL", "SEND", "SENT", "SETS", + "SEWN", "SHAG", "SHAM", "SHAW", "SHAY", "SHED", "SHIM", "SHIN", + "SHOD", "SHOE", "SHOT", "SHOW", "SHUN", "SHUT", "SICK", "SIDE", + "SIFT", "SIGH", "SIGN", "SILK", "SILL", "SILO", "SILT", "SINE", + "SING", "SINK", "SIRE", "SITE", "SITS", "SITU", "SKAT", "SKEW", + "SKID", "SKIM", "SKIN", "SKIT", "SLAB", "SLAM", "SLAT", "SLAY", + "SLED", "SLEW", "SLID", "SLIM", "SLIT", "SLOB", "SLOG", "SLOT", + "SLOW", "SLUG", "SLUM", "SLUR", "SMOG", "SMUG", "SNAG", "SNOB", + "SNOW", "SNUB", "SNUG", "SOAK", "SOAR", "SOCK", "SODA", "SOFA", + "SOFT", "SOIL", "SOLD", "SOME", "SONG", "SOON", "SOOT", "SORE", + "SORT", "SOUL", "SOUR", "SOWN", "STAB", "STAG", "STAN", "STAR", + "STAY", "STEM", "STEW", "STIR", "STOW", "STUB", "STUN", "SUCH", + "SUDS", "SUIT", "SULK", "SUMS", "SUNG", "SUNK", "SURE", "SURF", + "SWAB", "SWAG", "SWAM", "SWAN", "SWAT", "SWAY", "SWIM", "SWUM", + "TACK", "TACT", "TAIL", "TAKE", "TALE", "TALK", "TALL", "TANK", + "TASK", "TATE", "TAUT", "TEAL", "TEAM", "TEAR", "TECH", "TEEM", + "TEEN", "TEET", "TELL", "TEND", "TENT", "TERM", "TERN", "TESS", + "TEST", "THAN", "THAT", "THEE", "THEM", "THEN", "THEY", "THIN", + "THIS", "THUD", "THUG", "TICK", "TIDE", "TIDY", "TIED", "TIER", + "TILE", "TILL", "TILT", "TIME", "TINA", "TINE", "TINT", "TINY", + "TIRE", "TOAD", "TOGO", "TOIL", "TOLD", "TOLL", "TONE", "TONG", + "TONY", "TOOK", "TOOL", "TOOT", "TORE", "TORN", "TOTE", "TOUR", + "TOUT", "TOWN", "TRAG", "TRAM", "TRAY", "TREE", "TREK", "TRIG", + "TRIM", "TRIO", "TROD", "TROT", "TROY", "TRUE", "TUBA", "TUBE", + "TUCK", "TUFT", "TUNA", "TUNE", "TUNG", "TURF", "TURN", "TUSK", + "TWIG", "TWIN", "TWIT", "ULAN", "UNIT", "URGE", "USED", "USER", + "USES", "UTAH", "VAIL", "VAIN", "VALE", "VARY", "VASE", "VAST", + "VEAL", "VEDA", "VEIL", "VEIN", "VEND", "VENT", "VERB", "VERY", + "VETO", "VICE", "VIEW", "VINE", "VISE", "VOID", "VOLT", "VOTE", + "WACK", "WADE", "WAGE", "WAIL", "WAIT", "WAKE", "WALE", "WALK", + "WALL", "WALT", "WAND", "WANE", "WANG", "WANT", "WARD", "WARM", + "WARN", "WART", "WASH", "WAST", "WATS", "WATT", "WAVE", "WAVY", + "WAYS", "WEAK", "WEAL", "WEAN", "WEAR", "WEED", "WEEK", "WEIR", + "WELD", "WELL", "WELT", "WENT", "WERE", "WERT", "WEST", "WHAM", + "WHAT", "WHEE", "WHEN", "WHET", "WHOA", "WHOM", "WICK", "WIFE", + "WILD", "WILL", "WIND", "WINE", "WING", "WINK", "WINO", "WIRE", + "WISE", "WISH", "WITH", "WOLF", "WONT", "WOOD", "WOOL", "WORD", + "WORE", "WORK", "WORM", "WORN", "WOVE", "WRIT", "WYNN", "YALE", + "YANG", "YANK", "YARD", "YARN", "YAWL", "YAWN", "YEAH", "YEAR", + "YELL", "YOGA", "YOKE" ] diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/RFC1751.pyi b/python/lib/python3.11/site-packages/Cryptodome/Util/RFC1751.pyi new file mode 100644 index 0000000..6ad07ff --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/RFC1751.pyi @@ -0,0 +1,7 @@ +from typing import Dict, List + +binary: Dict[int, str] +wordlist: List[str] + +def key_to_english(key: bytes) -> str: ... +def english_to_key(s: str) -> bytes: ... 
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/Util/__init__.py new file mode 100644 index 0000000..1862b82 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/__init__.py @@ -0,0 +1,41 @@ +# -*- coding: utf-8 -*- +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +"""Miscellaneous modules + +Contains useful modules that don't belong in any of the +other Cryptodome.* subpackages. + +======================== ============================================= +Module Description +======================== ============================================= +`Cryptodome.Util.number` Number-theoretic functions (primality testing, etc.) +`Cryptodome.Util.Counter` Fast counter functions for CTR cipher modes. +`Cryptodome.Util.RFC1751` Converts between 128-bit keys and human-readable + strings of words. +`Cryptodome.Util.asn1` Minimal support for ASN.1 DER encoding +`Cryptodome.Util.Padding` Set of functions for adding and removing padding. +======================== ============================================= + +:undocumented: _galois, _number_new, cpuid, py3compat, _raw_api +""" + +__all__ = ['RFC1751', 'number', 'strxor', 'asn1', 'Counter', 'Padding'] + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/_cpu_features.py b/python/lib/python3.11/site-packages/Cryptodome/Util/_cpu_features.py new file mode 100644 index 0000000..4794a02 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/_cpu_features.py @@ -0,0 +1,46 @@ +# =================================================================== +# +# Copyright (c) 2018, Helder Eijs <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. +# =================================================================== + +from Cryptodome.Util._raw_api import load_pycryptodome_raw_lib + + +_raw_cpuid_lib = load_pycryptodome_raw_lib("Cryptodome.Util._cpuid_c", + """ + int have_aes_ni(void); + int have_clmul(void); + """) + + +def have_aes_ni(): + return _raw_cpuid_lib.have_aes_ni() + + +def have_clmul(): + return _raw_cpuid_lib.have_clmul() diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/_cpu_features.pyi b/python/lib/python3.11/site-packages/Cryptodome/Util/_cpu_features.pyi new file mode 100644 index 0000000..10e669e --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/_cpu_features.pyi @@ -0,0 +1,2 @@ +def have_aes_ni() -> int: ... +def have_clmul() -> int: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/_cpuid_c.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Util/_cpuid_c.abi3.so new file mode 100755 index 0000000..51e31b7 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Util/_cpuid_c.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/_file_system.py b/python/lib/python3.11/site-packages/Cryptodome/Util/_file_system.py new file mode 100644 index 0000000..282f0dc --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/_file_system.py @@ -0,0 +1,54 @@ +# =================================================================== +# +# Copyright (c) 2016, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE. 
+# =================================================================== + +import os + + +def pycryptodome_filename(dir_comps, filename): + """Return the complete file name for the module + + dir_comps : list of string + The list of directory names in the PyCryptodome package. + The first element must be "Cryptodome". + + filename : string + The filename (including extension) in the target directory. + """ + + if dir_comps[0] != "Cryptodome": + raise ValueError("Only available for modules under 'Cryptodome'") + + dir_comps = list(dir_comps[1:]) + [filename] + + util_lib, _ = os.path.split(os.path.abspath(__file__)) + root_lib = os.path.join(util_lib, "..") + + return os.path.join(root_lib, *dir_comps) + diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/_file_system.pyi b/python/lib/python3.11/site-packages/Cryptodome/Util/_file_system.pyi new file mode 100644 index 0000000..d54a126 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/_file_system.pyi @@ -0,0 +1,4 @@ +from typing import List + + +def pycryptodome_filename(dir_comps: List[str], filename: str) -> str: ... \ No newline at end of file diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/_raw_api.py b/python/lib/python3.11/site-packages/Cryptodome/Util/_raw_api.py new file mode 100644 index 0000000..c2e0187 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/_raw_api.py @@ -0,0 +1,319 @@ +# =================================================================== +# +# Copyright (c) 2014, Legrandin <helderijs@gmail.com> +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS +# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN +# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +# POSSIBILITY OF SUCH DAMAGE.
+# =================================================================== + +import os +import abc +import sys +from Cryptodome.Util.py3compat import byte_string +from Cryptodome.Util._file_system import pycryptodome_filename + +# +# List of file suffixes for Python extensions +# +if sys.version_info[0] < 3: + + import imp + extension_suffixes = [] + for ext, mod, typ in imp.get_suffixes(): + if typ == imp.C_EXTENSION: + extension_suffixes.append(ext) + +else: + + from importlib import machinery + extension_suffixes = machinery.EXTENSION_SUFFIXES + +# Which types with buffer interface we support (apart from byte strings) +_buffer_type = (bytearray, memoryview) + + +class _VoidPointer(object): + @abc.abstractmethod + def get(self): + """Return the memory location we point to""" + return + + @abc.abstractmethod + def address_of(self): + """Return a raw pointer to this pointer""" + return + + +try: + # Starting from v2.18, pycparser (used by cffi for in-line ABI mode) + # stops working correctly when PYOPTIMIZE==2 or the parameter -OO is + # passed. In that case, we fall back to ctypes. + # Note that PyPy ships with an old version of pycparser so we can keep + # using cffi there. + # See https://github.com/Legrandin/pycryptodome/issues/228 + if '__pypy__' not in sys.builtin_module_names and sys.flags.optimize == 2: + raise ImportError("CFFI with optimize=2 fails due to pycparser bug.") + + from cffi import FFI + + ffi = FFI() + null_pointer = ffi.NULL + uint8_t_type = ffi.typeof(ffi.new("const uint8_t*")) + + _Array = ffi.new("uint8_t[1]").__class__.__bases__ + + def load_lib(name, cdecl): + """Load a shared library and return a handle to it. + + @name, either an absolute path or the name of a library + in the system search path. + + @cdecl, the C function declarations. 
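+ + A minimal sketch of a call (illustrative only; the library name and + declaration below are assumptions, not fixed by this module): + + lib = load_lib("libm.so.6", "double cos(double x);") + lib.cos(0.0)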
+ """ + + if hasattr(ffi, "RTLD_DEEPBIND") and not os.getenv('PYCRYPTODOME_DISABLE_DEEPBIND'): + lib = ffi.dlopen(name, ffi.RTLD_DEEPBIND) + else: + lib = ffi.dlopen(name) + ffi.cdef(cdecl) + return lib + + def c_ulong(x): + """Convert a Python integer to unsigned long""" + return x + + c_ulonglong = c_ulong + c_uint = c_ulong + c_ubyte = c_ulong + + def c_size_t(x): + """Convert a Python integer to size_t""" + return x + + def create_string_buffer(init_or_size, size=None): + """Allocate the given amount of bytes (initially set to 0)""" + + if isinstance(init_or_size, bytes): + size = max(len(init_or_size) + 1, size) + result = ffi.new("uint8_t[]", size) + result[:] = init_or_size + else: + if size: + raise ValueError("Size must be specified once only") + result = ffi.new("uint8_t[]", init_or_size) + return result + + def get_c_string(c_string): + """Convert a C string into a Python byte sequence""" + return ffi.string(c_string) + + def get_raw_buffer(buf): + """Convert a C buffer into a Python byte sequence""" + return ffi.buffer(buf)[:] + + def c_uint8_ptr(data): + if isinstance(data, _buffer_type): + # This only works for cffi >= 1.7 + return ffi.cast(uint8_t_type, ffi.from_buffer(data)) + elif byte_string(data) or isinstance(data, _Array): + return data + else: + raise TypeError("Object type %s cannot be passed to C code" % type(data)) + + class VoidPointer_cffi(_VoidPointer): + """Model a newly allocated pointer to void""" + + def __init__(self): + self._pp = ffi.new("void *[1]") + + def get(self): + return self._pp[0] + + def address_of(self): + return self._pp + + def VoidPointer(): + return VoidPointer_cffi() + + backend = "cffi" + +except ImportError: + + import ctypes + from ctypes import (CDLL, c_void_p, byref, c_ulong, c_ulonglong, c_size_t, + create_string_buffer, c_ubyte, c_uint) + from ctypes.util import find_library + from ctypes import Array as _Array + + null_pointer = None + cached_architecture = [] + + def c_ubyte(c): + if not (0 <= c < 256): + raise OverflowError() + return ctypes.c_ubyte(c) + + def load_lib(name, cdecl): + if not cached_architecture: + # platform.architecture() creates a subprocess, so caching the + # result makes successive imports faster. + import platform + cached_architecture[:] = platform.architecture() + bits, linkage = cached_architecture + if "." 
not in name and not linkage.startswith("Win"): + full_name = find_library(name) + if full_name is None: + raise OSError("Cannot load library '%s'" % name) + name = full_name + return CDLL(name) + + def get_c_string(c_string): + return c_string.value + + def get_raw_buffer(buf): + return buf.raw + + # ---- Get raw pointer --- + + _c_ssize_t = ctypes.c_ssize_t + + _PyBUF_SIMPLE = 0 + _PyObject_GetBuffer = ctypes.pythonapi.PyObject_GetBuffer + _PyBuffer_Release = ctypes.pythonapi.PyBuffer_Release + _py_object = ctypes.py_object + _c_ssize_p = ctypes.POINTER(_c_ssize_t) + + # See Include/object.h for CPython + # and https://github.com/pallets/click/blob/master/src/click/_winconsole.py + class _Py_buffer(ctypes.Structure): + _fields_ = [ + ('buf', c_void_p), + ('obj', ctypes.py_object), + ('len', _c_ssize_t), + ('itemsize', _c_ssize_t), + ('readonly', ctypes.c_int), + ('ndim', ctypes.c_int), + ('format', ctypes.c_char_p), + ('shape', _c_ssize_p), + ('strides', _c_ssize_p), + ('suboffsets', _c_ssize_p), + ('internal', c_void_p) + ] + + # Extra field for CPython 2.6/2.7 + if sys.version_info[0] == 2: + _fields_.insert(-1, ('smalltable', _c_ssize_t * 2)) + + def c_uint8_ptr(data): + if byte_string(data) or isinstance(data, _Array): + return data + elif isinstance(data, _buffer_type): + obj = _py_object(data) + buf = _Py_buffer() + _PyObject_GetBuffer(obj, byref(buf), _PyBUF_SIMPLE) + try: + buffer_type = ctypes.c_ubyte * buf.len + return buffer_type.from_address(buf.buf) + finally: + _PyBuffer_Release(byref(buf)) + else: + raise TypeError("Object type %s cannot be passed to C code" % type(data)) + + # --- + + class VoidPointer_ctypes(_VoidPointer): + """Model a newly allocated pointer to void""" + + def __init__(self): + self._p = c_void_p() + + def get(self): + return self._p + + def address_of(self): + return byref(self._p) + + def VoidPointer(): + return VoidPointer_ctypes() + + backend = "ctypes" + + +class SmartPointer(object): + """Class to hold a non-managed piece of memory""" + + def __init__(self, raw_pointer, destructor): + self._raw_pointer = raw_pointer + self._destructor = destructor + + def get(self): + return self._raw_pointer + + def release(self): + rp, self._raw_pointer = self._raw_pointer, None + return rp + + def __del__(self): + try: + if self._raw_pointer is not None: + self._destructor(self._raw_pointer) + self._raw_pointer = None + except AttributeError: + pass + + +def load_pycryptodome_raw_lib(name, cdecl): + """Load a shared library and return a handle to it. + + @name, the name of the library expressed as a PyCryptodome module, + for instance Cryptodome.Cipher._raw_cbc. + + @cdecl, the C function declarations. 
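+ + A minimal sketch of a call (the module and declaration below mirror + the _cpu_features module earlier in this package, for illustration + only): + + lib = load_pycryptodome_raw_lib("Cryptodome.Util._cpuid_c", + "int have_aes_ni(void);")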
+ """ + + split = name.split(".") + dir_comps, basename = split[:-1], split[-1] + attempts = [] + for ext in extension_suffixes: + try: + filename = basename + ext + full_name = pycryptodome_filename(dir_comps, filename) + if not os.path.isfile(full_name): + attempts.append("Not found '%s'" % filename) + continue + return load_lib(full_name, cdecl) + except OSError as exp: + attempts.append("Cannot load '%s': %s" % (filename, str(exp))) + raise OSError("Cannot load native module '%s': %s" % (name, ", ".join(attempts))) + + +def is_buffer(x): + """Return True if object x supports the buffer interface""" + return isinstance(x, (bytes, bytearray, memoryview)) + + +def is_writeable_buffer(x): + return (isinstance(x, bytearray) or + (isinstance(x, memoryview) and not x.readonly)) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/_raw_api.pyi b/python/lib/python3.11/site-packages/Cryptodome/Util/_raw_api.pyi new file mode 100644 index 0000000..2bc5301 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/_raw_api.pyi @@ -0,0 +1,27 @@ +from typing import Any, Optional, Union + +def load_lib(name: str, cdecl: str) -> Any : ... +def c_ulong(x: int ) -> Any : ... +def c_ulonglong(x: int ) -> Any : ... +def c_size_t(x: int) -> Any : ... +def create_string_buffer(init_or_size: Union[bytes,int], size: Optional[int]) -> Any : ... +def get_c_string(c_string: Any) -> bytes : ... +def get_raw_buffer(buf: Any) -> bytes : ... +def c_uint8_ptr(data: Union[bytes, memoryview, bytearray]) -> Any : ... + +class VoidPointer(object): + def get(self) -> Any : ... + def address_of(self) -> Any : ... + +class SmartPointer(object): + def __init__(self, raw_pointer: Any, destructor: Any) -> None : ... + def get(self) -> Any : ... + def release(self) -> Any : ... + +backend : str +null_pointer : Any +ffi: Any + +def load_pycryptodome_raw_lib(name: str, cdecl: str) -> Any : ... +def is_buffer(x: Any) -> bool : ... +def is_writeable_buffer(x: Any) -> bool : ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/_strxor.abi3.so b/python/lib/python3.11/site-packages/Cryptodome/Util/_strxor.abi3.so new file mode 100755 index 0000000..f0f3784 Binary files /dev/null and b/python/lib/python3.11/site-packages/Cryptodome/Util/_strxor.abi3.so differ diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/asn1.py b/python/lib/python3.11/site-packages/Cryptodome/Util/asn1.py new file mode 100644 index 0000000..36f2d72 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/asn1.py @@ -0,0 +1,1064 @@ +# -*- coding: ascii -*- +# +# Util/asn1.py : Minimal support for ASN.1 DER binary encoding. +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== + +import struct + +from Cryptodome.Util.py3compat import byte_string, bchr, bord + +from Cryptodome.Util.number import long_to_bytes, bytes_to_long + +__all__ = ['DerObject', 'DerInteger', 'DerBoolean', 'DerOctetString', + 'DerNull', 'DerSequence', 'DerObjectId', 'DerBitString', 'DerSetOf'] + +# Useful references: +# - https://luca.ntop.org/Teaching/Appunti/asn1.html +# - https://letsencrypt.org/docs/a-warm-welcome-to-asn1-and-der/ +# - https://www.zytrax.com/tech/survival/asn1.html +# - https://www.oss.com/asn1/resources/books-whitepapers-pubs/larmouth-asn1-book.pdf +# - https://www.itu.int/ITU-T/studygroups/com17/languages/X.690-0207.pdf +# - https://misc.daniel-marschall.de/asn.1/oid-converter/online.php + +def _is_number(x, only_non_negative=False): + test = 0 + try: + test = x + test + except TypeError: + return False + return not only_non_negative or x >= 0 + + +class BytesIO_EOF(object): + """This class differs from BytesIO in that a ValueError exception is + raised whenever EOF is reached.""" + + def __init__(self, initial_bytes): + self._buffer = initial_bytes + self._index = 0 + self._bookmark = None + + def set_bookmark(self): + self._bookmark = self._index + + def data_since_bookmark(self): + assert self._bookmark is not None + return self._buffer[self._bookmark:self._index] + + def remaining_data(self): + return len(self._buffer) - self._index + + def read(self, length): + new_index = self._index + length + if new_index > len(self._buffer): + raise ValueError("Not enough data for DER decoding: expected %d bytes and found %d" % (new_index, len(self._buffer))) + + result = self._buffer[self._index:new_index] + self._index = new_index + return result + + def read_byte(self): + return bord(self.read(1)[0]) + + +class DerObject(object): + """Base class for defining a single DER object. + + This class should never be directly instantiated. + """ + + def __init__(self, asn1Id=None, payload=b'', implicit=None, + constructed=False, explicit=None): + """Initialize the DER object according to a specific ASN.1 type. + + :Parameters: + asn1Id : integer or byte + The universal DER tag number for this object + (e.g. 0x10 for a SEQUENCE). + If None, the tag is not known yet. + + payload : byte string + The initial payload of the object (that is, + the content octets). + If not specified, the payload is empty. + + implicit : integer or byte + The IMPLICIT tag number (< 0x1F) to use for the encoded object. + It overrides the universal tag *asn1Id*. + It cannot be combined with the ``explicit`` parameter. + By default, there is no IMPLICIT tag. + + constructed : bool + True when the ASN.1 type is *constructed*. + False when it is *primitive* (default). + + explicit : integer or byte + The EXPLICIT tag number (< 0x1F) to use for the encoded object. + It cannot be combined with the ``implicit`` parameter. + By default, there is no EXPLICIT tag.
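+ + Example (illustrative): an IMPLICIT context-specific tag 0 on a + one-byte payload:: + + >>> DerObject(asn1Id=0x04, payload=b'\\xaa', implicit=0).encode() + b'\\x80\\x01\\xaa'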
+ """ + + if asn1Id is None: + # The tag octet will be read in with ``decode`` + self._tag_octet = None + return + asn1Id = self._convertTag(asn1Id) + + self.payload = payload + + # In a BER/DER identifier octet: + # * bits 4-0 contain the tag value + # * bit 5 is set if the type is 'constructed' + # and unset if 'primitive' + # * bits 7-6 depend on the encoding class + # + # Class | Bit 7, Bit 6 + # ---------------------------------- + # universal | 0 0 + # application | 0 1 + # context-spec | 1 0 (default for IMPLICIT/EXPLICIT) + # private | 1 1 + # + + constructed_bit = 0x20 if constructed else 0x00 + + if None not in (explicit, implicit): + raise ValueError("Explicit and implicit tags are" + " mutually exclusive") + + if implicit is not None: + # IMPLICIT tag overrides asn1Id + self._tag_octet = 0x80 | constructed_bit | self._convertTag(implicit) + elif explicit is not None: + # 'constructed bit' is always asserted for an EXPLICIT tag + self._tag_octet = 0x80 | 0x20 | self._convertTag(explicit) + self._inner_tag_octet = constructed_bit | asn1Id + else: + # Neither IMPLICIT nor EXPLICIT + self._tag_octet = constructed_bit | asn1Id + + def _convertTag(self, tag): + """Check if *tag* is a real DER tag (5 bits). + Convert it from a character to number if necessary. + """ + if not _is_number(tag): + if len(tag) == 1: + tag = bord(tag[0]) + # Ensure that tag is a low tag + if not (_is_number(tag) and 0 <= tag < 0x1F): + raise ValueError("Wrong DER tag") + return tag + + @staticmethod + def _definite_form(length): + """Build length octets according to BER/DER + definite form. + """ + if length > 127: + encoding = long_to_bytes(length) + return bchr(len(encoding) + 128) + encoding + return bchr(length) + + def encode(self): + """Return this DER element, fully encoded as a binary byte string.""" + + # Concatenate identifier octets, length octets, + # and contents octets + + output_payload = self.payload + + # In case of an EXTERNAL tag, first encode the inner + # element. + if hasattr(self, "_inner_tag_octet"): + output_payload = (bchr(self._inner_tag_octet) + + self._definite_form(len(self.payload)) + + self.payload) + + return (bchr(self._tag_octet) + + self._definite_form(len(output_payload)) + + output_payload) + + def _decodeLen(self, s): + """Decode DER length octets from a file.""" + + length = s.read_byte() + + if length > 127: + encoded_length = s.read(length & 0x7F) + if bord(encoded_length[0]) == 0: + raise ValueError("Invalid DER: length has leading zero") + length = bytes_to_long(encoded_length) + if length <= 127: + raise ValueError("Invalid DER: length in long form but smaller than 128") + + return length + + def decode(self, der_encoded, strict=False): + """Decode a complete DER element, and re-initializes this + object with it. + + Args: + der_encoded (byte string): A complete DER element. + + Raises: + ValueError: in case of parsing errors. 
+ """ + + if not byte_string(der_encoded): + raise ValueError("Input is not a byte string") + + s = BytesIO_EOF(der_encoded) + self._decodeFromStream(s, strict) + + # There shouldn't be other bytes left + if s.remaining_data() > 0: + raise ValueError("Unexpected extra data after the DER structure") + + return self + + def _decodeFromStream(self, s, strict): + """Decode a complete DER element from a file.""" + + idOctet = s.read_byte() + if self._tag_octet is not None: + if idOctet != self._tag_octet: + raise ValueError("Unexpected DER tag") + else: + self._tag_octet = idOctet + length = self._decodeLen(s) + self.payload = s.read(length) + + # In case of an EXTERNAL tag, further decode the inner + # element. + if hasattr(self, "_inner_tag_octet"): + p = BytesIO_EOF(self.payload) + inner_octet = p.read_byte() + if inner_octet != self._inner_tag_octet: + raise ValueError("Unexpected internal DER tag") + length = self._decodeLen(p) + self.payload = p.read(length) + + # There shouldn't be other bytes left + if p.remaining_data() > 0: + raise ValueError("Unexpected extra data after the DER structure") + + +class DerInteger(DerObject): + """Class to model a DER INTEGER. + + An example of encoding is:: + + >>> from Cryptodome.Util.asn1 import DerInteger + >>> from binascii import hexlify, unhexlify + >>> int_der = DerInteger(9) + >>> print hexlify(int_der.encode()) + + which will show ``020109``, the DER encoding of 9. + + And for decoding:: + + >>> s = unhexlify(b'020109') + >>> try: + >>> int_der = DerInteger() + >>> int_der.decode(s) + >>> print int_der.value + >>> except ValueError: + >>> print "Not a valid DER INTEGER" + + the output will be ``9``. + + :ivar value: The integer value + :vartype value: integer + """ + + def __init__(self, value=0, implicit=None, explicit=None): + """Initialize the DER object as an INTEGER. + + :Parameters: + value : integer + The value of the integer. + + implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for INTEGER (2). + """ + + DerObject.__init__(self, 0x02, b'', implicit, + False, explicit) + self.value = value # The integer value + + def encode(self): + """Return the DER INTEGER, fully encoded as a + binary string.""" + + number = self.value + self.payload = b'' + while True: + self.payload = bchr(int(number & 255)) + self.payload + if 128 <= number <= 255: + self.payload = bchr(0x00) + self.payload + if -128 <= number <= 255: + break + number >>= 8 + return DerObject.encode(self) + + def decode(self, der_encoded, strict=False): + """Decode a DER-encoded INTEGER, and re-initializes this + object with it. + + Args: + der_encoded (byte string): A complete INTEGER DER element. + + Raises: + ValueError: in case of parsing errors. 
+ """ + + return DerObject.decode(self, der_encoded, strict=strict) + + def _decodeFromStream(self, s, strict): + """Decode a complete DER INTEGER from a file.""" + + # Fill up self.payload + DerObject._decodeFromStream(self, s, strict) + + if strict: + if len(self.payload) == 0: + raise ValueError("Invalid encoding for DER INTEGER: empty payload") + if len(self.payload) >= 2 and struct.unpack('>H', self.payload[:2])[0] < 0x80: + raise ValueError("Invalid encoding for DER INTEGER: leading zero") + + # Derive self.value from self.payload + self.value = 0 + bits = 1 + for i in self.payload: + self.value *= 256 + self.value += bord(i) + bits <<= 8 + if self.payload and bord(self.payload[0]) & 0x80: + self.value -= bits + + +class DerBoolean(DerObject): + """Class to model a DER-encoded BOOLEAN. + + An example of encoding is:: + + >>> from Cryptodome.Util.asn1 import DerBoolean + >>> bool_der = DerBoolean(True) + >>> print(bool_der.encode().hex()) + + which will show ``0101ff``, the DER encoding of True. + + And for decoding:: + + >>> s = bytes.fromhex('0101ff') + >>> try: + >>> bool_der = DerBoolean() + >>> bool_der.decode(s) + >>> print(bool_der.value) + >>> except ValueError: + >>> print "Not a valid DER BOOLEAN" + + the output will be ``True``. + + :ivar value: The boolean value + :vartype value: boolean + """ + def __init__(self, value=False, implicit=None, explicit=None): + """Initialize the DER object as a BOOLEAN. + + Args: + value (boolean): + The value of the boolean. Default is False. + + implicit (integer or byte): + The IMPLICIT tag number (< 0x1F) to use for the encoded object. + It overrides the universal tag for BOOLEAN (1). + It cannot be combined with the ``explicit`` parameter. + By default, there is no IMPLICIT tag. + + explicit (integer or byte): + The EXPLICIT tag number (< 0x1F) to use for the encoded object. + It cannot be combined with the ``implicit`` parameter. + By default, there is no EXPLICIT tag. + """ + + DerObject.__init__(self, 0x01, b'', implicit, False, explicit) + self.value = value # The boolean value + + def encode(self): + """Return the DER BOOLEAN, fully encoded as a binary string.""" + + self.payload = b'\xFF' if self.value else b'\x00' + return DerObject.encode(self) + + def decode(self, der_encoded, strict=False): + """Decode a DER-encoded BOOLEAN, and re-initializes this object with it. + + Args: + der_encoded (byte string): A DER-encoded BOOLEAN. + + Raises: + ValueError: in case of parsing errors. + """ + + return DerObject.decode(self, der_encoded, strict) + + def _decodeFromStream(self, s, strict): + """Decode a DER-encoded BOOLEAN from a file.""" + + # Fill up self.payload + DerObject._decodeFromStream(self, s, strict) + + if len(self.payload) != 1: + raise ValueError("Invalid encoding for DER BOOLEAN: payload is not 1 byte") + + if bord(self.payload[0]) == 0: + self.value = False + elif bord(self.payload[0]) == 0xFF: + self.value = True + else: + raise ValueError("Invalid payload for DER BOOLEAN") + + +class DerSequence(DerObject): + """Class to model a DER SEQUENCE. + + This object behaves like a dynamic Python sequence. + + Sub-elements that are INTEGERs behave like Python integers. + + Any other sub-element is a binary string encoded as a complete DER + sub-element (TLV). 
+ + An example of encoding is: + + >>> from Cryptodome.Util.asn1 import DerSequence, DerInteger + >>> from binascii import hexlify, unhexlify + >>> obj_der = unhexlify('070102') + >>> seq_der = DerSequence([4]) + >>> seq_der.append(9) + >>> seq_der.append(obj_der) + >>> print hexlify(seq_der.encode()) + + which will show ``3009020104020109070102``, the DER encoding of the + sequence containing ``4``, ``9``, and the object with payload ``02``. + + For decoding: + + >>> s = unhexlify(b'3009020104020109070102') + >>> try: + >>> seq_der = DerSequence() + >>> seq_der.decode(s) + >>> print len(seq_der) + >>> print seq_der[0] + >>> print seq_der[:] + >>> except ValueError: + >>> print "Not a valid DER SEQUENCE" + + the output will be:: + + 3 + 4 + [4, 9, b'\x07\x01\x02'] + + """ + + def __init__(self, startSeq=None, implicit=None, explicit=None): + """Initialize the DER object as a SEQUENCE. + + :Parameters: + startSeq : Python sequence + A sequence whose elements are either integers or + other DER objects. + + implicit : integer or byte + The IMPLICIT tag number (< 0x1F) to use for the encoded object. + It overrides the universal tag for SEQUENCE (16). + It cannot be combined with the ``explicit`` parameter. + By default, there is no IMPLICIT tag. + + explicit : integer or byte + The EXPLICIT tag number (< 0x1F) to use for the encoded object. + It cannot be combined with the ``implicit`` parameter. + By default, there is no EXPLICIT tag. + """ + + DerObject.__init__(self, 0x10, b'', implicit, True, explicit) + if startSeq is None: + self._seq = [] + else: + self._seq = startSeq + + # A few methods to make it behave like a python sequence + + def __delitem__(self, n): + del self._seq[n] + + def __getitem__(self, n): + return self._seq[n] + + def __setitem__(self, key, value): + self._seq[key] = value + + def __setslice__(self, i, j, sequence): + self._seq[i:j] = sequence + + def __delslice__(self, i, j): + del self._seq[i:j] + + def __getslice__(self, i, j): + return self._seq[max(0, i):max(0, j)] + + def __len__(self): + return len(self._seq) + + def __iadd__(self, item): + self._seq.append(item) + return self + + def append(self, item): + self._seq.append(item) + return self + + def insert(self, index, item): + self._seq.insert(index, item) + return self + + def hasInts(self, only_non_negative=True): + """Return the number of items in this sequence that are + integers. + + Args: + only_non_negative (boolean): + If ``True``, negative integers are not counted in. + """ + + items = [x for x in self._seq if _is_number(x, only_non_negative)] + return len(items) + + def hasOnlyInts(self, only_non_negative=True): + """Return ``True`` if all items in this sequence are integers + or non-negative integers. + + This function returns False if the sequence is empty, + or at least one member is not an integer. + + Args: + only_non_negative (boolean): + If ``True``, the presence of negative integers + causes the method to return ``False``.""" + return self._seq and self.hasInts(only_non_negative) == len(self._seq) + + def encode(self): + """Return this DER SEQUENCE, fully encoded as a + binary string. + + Raises: + ValueError: if some elements in the sequence are neither integers + nor byte strings.
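+ + Example (illustrative):: + + >>> DerSequence([1, 2]).encode().hex() + '3006020101020102'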
+ """ + self.payload = b'' + for item in self._seq: + if byte_string(item): + self.payload += item + elif _is_number(item): + self.payload += DerInteger(item).encode() + else: + self.payload += item.encode() + return DerObject.encode(self) + + def decode(self, der_encoded, strict=False, nr_elements=None, only_ints_expected=False): + """Decode a complete DER SEQUENCE, and re-initializes this + object with it. + + Args: + der_encoded (byte string): + A complete SEQUENCE DER element. + nr_elements (None or integer or list of integers): + The number of members the SEQUENCE can have + only_ints_expected (boolean): + Whether the SEQUENCE is expected to contain only integers. + strict (boolean): + Whether decoding must check for strict DER compliancy. + + Raises: + ValueError: in case of parsing errors. + + DER INTEGERs are decoded into Python integers. Any other DER + element is not decoded. Its validity is not checked. + """ + + self._nr_elements = nr_elements + result = DerObject.decode(self, der_encoded, strict=strict) + + if only_ints_expected and not self.hasOnlyInts(): + raise ValueError("Some members are not INTEGERs") + + return result + + def _decodeFromStream(self, s, strict): + """Decode a complete DER SEQUENCE from a file.""" + + self._seq = [] + + # Fill up self.payload + DerObject._decodeFromStream(self, s, strict) + + # Add one item at a time to self.seq, by scanning self.payload + p = BytesIO_EOF(self.payload) + while p.remaining_data() > 0: + p.set_bookmark() + + der = DerObject() + der._decodeFromStream(p, strict) + + # Parse INTEGERs differently + if der._tag_octet != 0x02: + self._seq.append(p.data_since_bookmark()) + else: + derInt = DerInteger() + data = p.data_since_bookmark() + derInt.decode(data, strict=strict) + self._seq.append(derInt.value) + + ok = True + if self._nr_elements is not None: + try: + ok = len(self._seq) in self._nr_elements + except TypeError: + ok = len(self._seq) == self._nr_elements + + if not ok: + raise ValueError("Unexpected number of members (%d)" + " in the sequence" % len(self._seq)) + + +class DerOctetString(DerObject): + """Class to model a DER OCTET STRING. + + An example of encoding is: + + >>> from Cryptodome.Util.asn1 import DerOctetString + >>> from binascii import hexlify, unhexlify + >>> os_der = DerOctetString(b'\\xaa') + >>> os_der.payload += b'\\xbb' + >>> print hexlify(os_der.encode()) + + which will show ``0402aabb``, the DER encoding for the byte string + ``b'\\xAA\\xBB'``. + + For decoding: + + >>> s = unhexlify(b'0402aabb') + >>> try: + >>> os_der = DerOctetString() + >>> os_der.decode(s) + >>> print hexlify(os_der.payload) + >>> except ValueError: + >>> print "Not a valid DER OCTET STRING" + + the output will be ``aabb``. + + :ivar payload: The content of the string + :vartype payload: byte string + """ + + def __init__(self, value=b'', implicit=None): + """Initialize the DER object as an OCTET STRING. + + :Parameters: + value : byte string + The initial payload of the object. + If not specified, the payload is empty. + + implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for OCTET STRING (4). + """ + DerObject.__init__(self, 0x04, value, implicit, False) + + +class DerNull(DerObject): + """Class to model a DER NULL element.""" + + def __init__(self): + """Initialize the DER object as a NULL.""" + + DerObject.__init__(self, 0x05, b'', None, False) + + +class DerObjectId(DerObject): + """Class to model a DER OBJECT ID. 
+ + An example of encoding is: + + >>> from Cryptodome.Util.asn1 import DerObjectId + >>> from binascii import hexlify, unhexlify + >>> oid_der = DerObjectId("1.2") + >>> oid_der.value += ".840.113549.1.1.1" + >>> print hexlify(oid_der.encode()) + + which will show ``06092a864886f70d010101``, the DER encoding for the + RSA Object Identifier ``1.2.840.113549.1.1.1``. + + For decoding: + + >>> s = unhexlify(b'06092a864886f70d010101') + >>> try: + >>> oid_der = DerObjectId() + >>> oid_der.decode(s) + >>> print oid_der.value + >>> except ValueError: + >>> print "Not a valid DER OBJECT ID" + + the output will be ``1.2.840.113549.1.1.1``. + + :ivar value: The Object ID (OID), a dot separated list of integers + :vartype value: string + """ + + def __init__(self, value='', implicit=None, explicit=None): + """Initialize the DER object as an OBJECT ID. + + :Parameters: + value : string + The initial Object Identifier (e.g. "1.2.0.0.6.2"). + implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for OBJECT ID (6). + explicit : integer + The EXPLICIT tag to use for the encoded object. + """ + DerObject.__init__(self, 0x06, b'', implicit, False, explicit) + self.value = value + + def encode(self): + """Return the DER OBJECT ID, fully encoded as a + binary string.""" + + comps = [int(x) for x in self.value.split(".")] + + if len(comps) < 2: + raise ValueError("Not a valid Object Identifier string") + if comps[0] > 2: + raise ValueError("First component must be 0, 1 or 2") + if comps[0] < 2 and comps[1] > 39: + raise ValueError("Second component must be 39 at most") + + subcomps = [40 * comps[0] + comps[1]] + comps[2:] + + encoding = [] + for v in reversed(subcomps): + encoding.append(v & 0x7F) + v >>= 7 + while v: + encoding.append((v & 0x7F) | 0x80) + v >>= 7 + + self.payload = b''.join([bchr(x) for x in reversed(encoding)]) + return DerObject.encode(self) + + def decode(self, der_encoded, strict=False): + """Decode a complete DER OBJECT ID, and re-initializes this + object with it. + + Args: + der_encoded (byte string): + A complete DER OBJECT ID. + strict (boolean): + Whether decoding must check for strict DER compliancy. + + Raises: + ValueError: in case of parsing errors. + """ + + return DerObject.decode(self, der_encoded, strict) + + def _decodeFromStream(self, s, strict): + """Decode a complete DER OBJECT ID from a file.""" + + # Fill up self.payload + DerObject._decodeFromStream(self, s, strict) + + # Derive self.value from self.payload + p = BytesIO_EOF(self.payload) + + subcomps = [] + v = 0 + while p.remaining_data(): + c = p.read_byte() + v = (v << 7) + (c & 0x7F) + if not (c & 0x80): + subcomps.append(v) + v = 0 + + if len(subcomps) == 0: + raise ValueError("Empty payload") + + if subcomps[0] < 40: + subcomps[:1] = [0, subcomps[0]] + elif subcomps[0] < 80: + subcomps[:1] = [1, subcomps[0] - 40] + else: + subcomps[:1] = [2, subcomps[0] - 80] + + self.value = ".".join([str(x) for x in subcomps]) + + +class DerBitString(DerObject): + """Class to model a DER BIT STRING. + + An example of encoding is: + + >>> from Cryptodome.Util.asn1 import DerBitString + >>> bs_der = DerBitString(b'\\xAA') + >>> bs_der.value += b'\\xBB' + >>> print(bs_der.encode().hex()) + + which will show ``030300aabb``, the DER encoding for the bit string + ``b'\\xAA\\xBB'``. 
+ + For decoding: + + >>> s = bytes.fromhex('030300aabb') + >>> try: + >>> bs_der = DerBitString() + >>> bs_der.decode(s) + >>> print(bs_der.value.hex()) + >>> except ValueError: + >>> print("Not a valid DER BIT STRING") + + the output will be ``aabb``. + + :ivar value: The content of the string + :vartype value: byte string + """ + + def __init__(self, value=b'', implicit=None, explicit=None): + """Initialize the DER object as a BIT STRING. + + :Parameters: + value : byte string or DER object + The initial, packed bit string. + If not specified, the bit string is empty. + implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for BIT STRING (3). + explicit : integer + The EXPLICIT tag to use for the encoded object. + """ + DerObject.__init__(self, 0x03, b'', implicit, False, explicit) + + # The bitstring value (packed) + if isinstance(value, DerObject): + self.value = value.encode() + else: + self.value = value + + def encode(self): + """Return the DER BIT STRING, fully encoded as a + byte string.""" + + # Add padding count byte + self.payload = b'\x00' + self.value + return DerObject.encode(self) + + def decode(self, der_encoded, strict=False): + """Decode a complete DER BIT STRING, and re-initializes this + object with it. + + Args: + der_encoded (byte string): a complete DER BIT STRING. + strict (boolean): + Whether decoding must check for strict DER compliancy. + + Raises: + ValueError: in case of parsing errors. + """ + + return DerObject.decode(self, der_encoded, strict) + + def _decodeFromStream(self, s, strict): + """Decode a complete DER BIT STRING from a file.""" + + # Fill-up self.payload + DerObject._decodeFromStream(self, s, strict) + + if self.payload and bord(self.payload[0]) != 0: + raise ValueError("Not a valid BIT STRING") + + # Fill-up self.value + self.value = b'' + # Remove padding count byte + if self.payload: + self.value = self.payload[1:] + + +class DerSetOf(DerObject): + """Class to model a DER SET OF. + + An example of encoding is: + + >>> from Cryptodome.Util.asn1 import DerSetOf + >>> from binascii import hexlify, unhexlify + >>> so_der = DerSetOf([4,5]) + >>> so_der.add(6) + >>> print hexlify(so_der.encode()) + + which will show ``3109020104020105020106``, the DER encoding + of a SET OF with items 4,5, and 6. + + For decoding: + + >>> s = unhexlify(b'3109020104020105020106') + >>> try: + >>> so_der = DerSetOf() + >>> so_der.decode(s) + >>> print [x for x in so_der] + >>> except ValueError: + >>> print "Not a valid DER SET OF" + + the output will be ``[4, 5, 6]``. + """ + + def __init__(self, startSet=None, implicit=None): + """Initialize the DER object as a SET OF. + + :Parameters: + startSet : container + The initial set of integers or DER encoded objects. + implicit : integer + The IMPLICIT tag to use for the encoded object. + It overrides the universal tag for SET OF (17). + """ + DerObject.__init__(self, 0x11, b'', implicit, True) + self._seq = [] + + # All elements must be of the same type (and therefore have the + # same leading octet) + self._elemOctet = None + + if startSet: + for e in startSet: + self.add(e) + + def __getitem__(self, n): + return self._seq[n] + + def __iter__(self): + return iter(self._seq) + + def __len__(self): + return len(self._seq) + + def add(self, elem): + """Add an element to the set. + + Args: + elem (byte string or integer): + An element of the same type of objects already in the set. + It can be an integer or a DER encoded object.
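+ + Example (illustrative):: + + >>> so = DerSetOf() + >>> so.add(9) + >>> so.encode().hex() + '3103020109'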
+ """ + + if _is_number(elem): + eo = 0x02 + elif isinstance(elem, DerObject): + eo = self._tag_octet + else: + eo = bord(elem[0]) + + if self._elemOctet != eo: + if self._elemOctet is not None: + raise ValueError("New element does not belong to the set") + self._elemOctet = eo + + if elem not in self._seq: + self._seq.append(elem) + + def decode(self, der_encoded, strict=False): + """Decode a complete SET OF DER element, and re-initializes this + object with it. + + DER INTEGERs are decoded into Python integers. Any other DER + element is left undecoded; its validity is not checked. + + Args: + der_encoded (byte string): a complete DER BIT SET OF. + strict (boolean): + Whether decoding must check for strict DER compliancy. + + Raises: + ValueError: in case of parsing errors. + """ + + return DerObject.decode(self, der_encoded, strict) + + def _decodeFromStream(self, s, strict): + """Decode a complete DER SET OF from a file.""" + + self._seq = [] + + # Fill up self.payload + DerObject._decodeFromStream(self, s, strict) + + # Add one item at a time to self.seq, by scanning self.payload + p = BytesIO_EOF(self.payload) + setIdOctet = -1 + while p.remaining_data() > 0: + p.set_bookmark() + + der = DerObject() + der._decodeFromStream(p, strict) + + # Verify that all members are of the same type + if setIdOctet < 0: + setIdOctet = der._tag_octet + else: + if setIdOctet != der._tag_octet: + raise ValueError("Not all elements are of the same DER type") + + # Parse INTEGERs differently + if setIdOctet != 0x02: + self._seq.append(p.data_since_bookmark()) + else: + derInt = DerInteger() + derInt.decode(p.data_since_bookmark(), strict) + self._seq.append(derInt.value) + # end + + def encode(self): + """Return this SET OF DER element, fully encoded as a + binary string. + """ + + # Elements in the set must be ordered in lexicographic order + ordered = [] + for item in self._seq: + if _is_number(item): + bys = DerInteger(item).encode() + elif isinstance(item, DerObject): + bys = item.encode() + else: + bys = item + ordered.append(bys) + ordered.sort() + self.payload = b''.join(ordered) + return DerObject.encode(self) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/asn1.pyi b/python/lib/python3.11/site-packages/Cryptodome/Util/asn1.pyi new file mode 100644 index 0000000..ee4891c --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/asn1.pyi @@ -0,0 +1,80 @@ +from typing import Optional, Sequence, Union, Set, Iterable + +__all__ = ['DerObject', 'DerInteger', 'DerOctetString', 'DerNull', + 'DerSequence', 'DerObjectId', 'DerBitString', 'DerSetOf'] + +# TODO: Make the encoded DerObjects their own type, so that DerSequence and +# DerSetOf can check their contents better + +class BytesIO_EOF: + def __init__(self, initial_bytes: bytes) -> None: ... + def set_bookmark(self) -> None: ... + def data_since_bookmark(self) -> bytes: ... + def remaining_data(self) -> int: ... + def read(self, length: int) -> bytes: ... + def read_byte(self) -> bytes: ... + +class DerObject: + payload: bytes + def __init__(self, asn1Id: Optional[int]=None, payload: Optional[bytes]=..., implicit: Optional[int]=None, + constructed: Optional[bool]=False, explicit: Optional[int]=None) -> None: ... + def encode(self) -> bytes: ... + def decode(self, der_encoded: bytes, strict: bool=...) -> DerObject: ... + +class DerInteger(DerObject): + value: int + def __init__(self, value: Optional[int]= 0, implicit: Optional[int]=None, explicit: Optional[int]=None) -> None: ... + def encode(self) -> bytes: ... 
+ def decode(self, der_encoded: bytes, strict: bool=...) -> DerInteger: ... + +class DerBoolean(DerObject): + value: bool + def __init__(self, value: bool=..., implicit: Optional[Union[int, bytes]]=..., explicit: Optional[Union[int, bytes]]=...) -> None: ... + def encode(self) -> bytes: ... + def decode(self, der_encoded: bytes, strict: bool=...) -> DerBoolean: ... + +class DerSequence(DerObject): + def __init__(self, startSeq: Optional[Sequence[Union[int, DerInteger, DerObject]]]=None, implicit: Optional[int]=None) -> None: ... + def __delitem__(self, n: int) -> None: ... + def __getitem__(self, n: int) -> None: ... + def __setitem__(self, key: int, value: DerObject) -> None: ... + def __setslice__(self, i: int, j: int, sequence: Sequence) -> None: ... + def __delslice__(self, i: int, j: int) -> None: ... + def __getslice__(self, i: int, j: int) -> DerSequence: ... + def __len__(self) -> int: ... + def __iadd__(self, item: DerObject) -> DerSequence: ... + def append(self, item: DerObject) -> DerSequence: ... + def hasInts(self, only_non_negative: Optional[bool]=True) -> int: ... + def hasOnlyInts(self, only_non_negative: Optional[bool]=True) -> bool: ... + def encode(self) -> bytes: ... + def decode(self, der_encoded: bytes, strict: bool=..., nr_elements: Optional[int]=None, only_ints_expected: Optional[bool]=False) -> DerSequence: ... + +class DerOctetString(DerObject): + payload: bytes + def __init__(self, value: Optional[bytes]=..., implicit: Optional[int]=None) -> None: ... + +class DerNull(DerObject): + def __init__(self) -> None: ... + +class DerObjectId(DerObject): + value: str + def __init__(self, value: Optional[str]=..., implicit: Optional[int]=None, explicit: Optional[int]=None) -> None: ... + def encode(self) -> bytes: ... + def decode(self, der_encoded: bytes, strict: bool=...) -> DerObjectId: ... + +class DerBitString(DerObject): + value: bytes + def __init__(self, value: Optional[bytes]=..., implicit: Optional[int]=None, explicit: Optional[int]=None) -> None: ... + def encode(self) -> bytes: ... + def decode(self, der_encoded: bytes, strict: bool=...) -> DerBitString: ... + +DerSetElement = Union[bytes, int] + +class DerSetOf(DerObject): + def __init__(self, startSet: Optional[Set[DerSetElement]]=None, implicit: Optional[int]=None) -> None: ... + def __getitem__(self, n: int) -> DerSetElement: ... + def __iter__(self) -> Iterable: ... + def __len__(self) -> int: ... + def add(self, elem: DerSetElement) -> None: ... + def decode(self, der_encoded: bytes, strict: bool=...) -> DerObject: ... + def encode(self) -> bytes: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/number.py b/python/lib/python3.11/site-packages/Cryptodome/Util/number.py new file mode 100644 index 0000000..6d59fd9 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/number.py @@ -0,0 +1,1525 @@ +# +# number.py : Number-theoretic functions +# +# Part of the Python Cryptography Toolkit +# +# Written by Andrew M. Kuchling, Barry A. Warsaw, and others +# +# =================================================================== +# The contents of this file are dedicated to the public domain. To +# the extent that dedication to the public domain is not available, +# everyone is granted a worldwide, perpetual, royalty-free, +# non-exclusive license to exercise all rights associated with the +# contents of this file for any purpose whatsoever. +# No rights are reserved. 
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# =================================================================== +# + +import math +import sys +import struct +from Cryptodome import Random +from Cryptodome.Util.py3compat import iter_range + +# Backward compatibility +_fastmath = None + + +def ceil_div(n, d): + """Return ceil(n/d), that is, the smallest integer r such that r*d >= n""" + + if d == 0: + raise ZeroDivisionError() + if (n < 0) or (d < 0): + raise ValueError("Non positive values") + r, q = divmod(n, d) + if (n != 0) and (q != 0): + r += 1 + return r + + +def size (N): + """Returns the size of the number N in bits.""" + + if N < 0: + raise ValueError("Size in bits only available for non-negative numbers") + return N.bit_length() + + +def getRandomInteger(N, randfunc=None): + """Return a random number at most N bits long. + + If :data:`randfunc` is omitted, then :meth:`Random.get_random_bytes` is used. + + .. deprecated:: 3.0 + This function is for internal use only and may be renamed or removed in + the future. Use :func:`Cryptodome.Random.random.getrandbits` instead. + """ + + if randfunc is None: + randfunc = Random.get_random_bytes + + S = randfunc(N>>3) + odd_bits = N % 8 + if odd_bits != 0: + rand_bits = ord(randfunc(1)) >> (8-odd_bits) + S = struct.pack('B', rand_bits) + S + value = bytes_to_long(S) + return value + +def getRandomRange(a, b, randfunc=None): + """Return a random number *n* so that *a <= n < b*. + + If :data:`randfunc` is omitted, then :meth:`Random.get_random_bytes` is used. + + .. deprecated:: 3.0 + This function is for internal use only and may be renamed or removed in + the future. Use :func:`Cryptodome.Random.random.randrange` instead. + """ + + range_ = b - a - 1 + bits = size(range_) + value = getRandomInteger(bits, randfunc) + while value > range_: + value = getRandomInteger(bits, randfunc) + return a + value + +def getRandomNBitInteger(N, randfunc=None): + """Return a random number with exactly N-bits, + i.e. a random number between 2**(N-1) and (2**N)-1. + + If :data:`randfunc` is omitted, then :meth:`Random.get_random_bytes` is used. + + .. deprecated:: 3.0 + This function is for internal use only and may be renamed or removed in + the future. + """ + + value = getRandomInteger (N-1, randfunc) + value |= 2 ** (N-1) # Ensure high bit is set + assert size(value) >= N + return value + + +if sys.version_info[:2] >= (3, 5): + + GCD = math.gcd + +else: + + def GCD(x,y): + """Greatest Common Divisor of :data:`x` and :data:`y`.
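+ + For example, ``GCD(12, 18)`` is ``6``.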
+ """ + + x = abs(x) ; y = abs(y) + while x > 0: + x, y = y % x, x + return y + + +if sys.version_info[:2] >= (3, 8): + + def inverse(u, v): + """The inverse of :data:`u` *mod* :data:`v`.""" + + if v == 0: + raise ZeroDivisionError("Modulus cannot be zero") + if v < 0: + raise ValueError("Modulus cannot be negative") + + return pow(u, -1, v) + +else: + + def inverse(u, v): + """The inverse of :data:`u` *mod* :data:`v`.""" + + if v == 0: + raise ZeroDivisionError("Modulus cannot be zero") + if v < 0: + raise ValueError("Modulus cannot be negative") + + u3, v3 = u, v + u1, v1 = 1, 0 + while v3 > 0: + q = u3 // v3 + u1, v1 = v1, u1 - v1*q + u3, v3 = v3, u3 - v3*q + if u3 != 1: + raise ValueError("No inverse value can be computed") + while u1<0: + u1 = u1 + v + return u1 + +# Given a number of bits to generate and a random generation function, +# find a prime number of the appropriate size. + +def getPrime(N, randfunc=None): + """Return a random N-bit prime number. + + N must be an integer larger than 1. + If randfunc is omitted, then :meth:`Random.get_random_bytes` is used. + """ + if randfunc is None: + randfunc = Random.get_random_bytes + + if N < 2: + raise ValueError("N must be larger than 1") + + while True: + number = getRandomNBitInteger(N, randfunc) | 1 + if isPrime(number, randfunc=randfunc): + break + return number + + +def _rabinMillerTest(n, rounds, randfunc=None): + """_rabinMillerTest(n:long, rounds:int, randfunc:callable):int + Tests if n is prime. + Returns 0 when n is definitely composite. + Returns 1 when n is probably prime. + Returns 2 when n is definitely prime. + + If randfunc is omitted, then Random.new().read is used. + + This function is for internal use only and may be renamed or removed in + the future. + """ + # check special cases (n==2, n even, n < 2) + if n < 3 or (n & 1) == 0: + return n == 2 + # n might be very large so it might be beneficial to precalculate n-1 + n_1 = n - 1 + # determine m and b so that 2**b * m = n - 1 and b maximal + b = 0 + m = n_1 + while (m & 1) == 0: + b += 1 + m >>= 1 + + tested = [] + # we need to do at most n-2 rounds. + for i in iter_range (min (rounds, n-2)): + # randomly choose a < n and make sure it hasn't been tested yet + a = getRandomRange (2, n, randfunc) + while a in tested: + a = getRandomRange (2, n, randfunc) + tested.append (a) + # do the rabin-miller test + z = pow (a, m, n) # (a**m) % n + if z == 1 or z == n_1: + continue + composite = 1 + for r in iter_range(b): + z = (z * z) % n + if z == 1: + return 0 + elif z == n_1: + composite = 0 + break + if composite: + return 0 + return 1 + +def getStrongPrime(N, e=0, false_positive_prob=1e-6, randfunc=None): + r""" + Return a random strong *N*-bit prime number. + In this context, *p* is a strong prime if *p-1* and *p+1* have at + least one large prime factor. + + Args: + N (integer): the exact length of the strong prime. + It must be a multiple of 128 and > 512. + e (integer): if provided, the returned prime (minus 1) + will be coprime to *e* and thus suitable for RSA where + *e* is the public exponent. + false_positive_prob (float): + The statistical probability for the result not to be actually a + prime. It defaults to 10\ :sup:`-6`. + Note that the real probability of a false-positive is far less. This is + just the mathematically provable limit. + randfunc (callable): + A function that takes a parameter *N* and that returns + a random byte string of such length. + If omitted, :func:`Cryptodome.Random.get_random_bytes` is used. + Return: + The new strong prime. + + .. 
deprecated:: 3.0 + This function is for internal use only and may be renamed or removed in + the future. + """ + + # This function was implemented following the + # instructions found in the paper: + # "FAST GENERATION OF RANDOM, STRONG RSA PRIMES" + # by Robert D. Silverman + # RSA Laboratories + # May 17, 1997 + # which by the time of writing could be freely downloaded here: + # http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.2713&rep=rep1&type=pdf + + if randfunc is None: + randfunc = Random.get_random_bytes + + # Use the accelerator if available + if _fastmath is not None: + return _fastmath.getStrongPrime(long(N), long(e), false_positive_prob, + randfunc) + + if (N < 512) or ((N % 128) != 0): + raise ValueError ("bits must be multiple of 128 and > 512") + + rabin_miller_rounds = int(math.ceil(-math.log(false_positive_prob)/math.log(4))) + + # calculate range for X + # lower_bound = sqrt(2) * 2^{511 + 128*x} + # upper_bound = 2^{512 + 128*x} - 1 + x = (N - 512) >> 7 + # We need to approximate the sqrt(2) in the lower_bound by an integer + # expression because floating point math overflows with these numbers + lower_bound = (14142135623730950489 * (2 ** (511 + 128*x))) // 10000000000000000000 + upper_bound = (1 << (512 + 128*x)) - 1 + # Randomly choose X in calculated range + X = getRandomRange (lower_bound, upper_bound, randfunc) + + # generate p1 and p2 + p = [0, 0] + for i in (0, 1): + # randomly choose 101-bit y + y = getRandomNBitInteger (101, randfunc) + # initialize the field for sieving + field = [0] * 5 * len (sieve_base) + # sieve the field + for prime in sieve_base: + offset = y % prime + for j in iter_range((prime - offset) % prime, len (field), prime): + field[j] = 1 + + # look for suitable p[i] starting at y + result = 0 + for j in range(len(field)): + composite = field[j] + # look for next candidate + if composite: + continue + tmp = y + j + result = _rabinMillerTest (tmp, rabin_miller_rounds) + if result > 0: + p[i] = tmp + break + if result == 0: + raise RuntimeError ("Couldn't find prime in field. " + "Developer: Increase field_size") + + # Calculate R + # R = (p2^{-1} mod p1) * p2 - (p1^{-1} mod p2) * p1 + tmp1 = inverse (p[1], p[0]) * p[1] # (p2^-1 mod p1)*p2 + tmp2 = inverse (p[0], p[1]) * p[0] # (p1^-1 mod p2)*p1 + R = tmp1 - tmp2 # (p2^-1 mod p1)*p2 - (p1^-1 mod p2)*p1 + + # search for final prime number starting at Y0 + # Y0 = X + (R - X mod p1p2) + increment = p[0] * p[1] + X = X + (R - (X % increment)) + while 1: + is_possible_prime = 1 + # first check candidate against sieve_base + for prime in sieve_base: + if (X % prime) == 0: + is_possible_prime = 0 + break + # if e is given make sure that e and X-1 are coprime + # this is not necessarily a strong prime criterion but useful when + # creating them for RSA where the p-1 and q-1 should be coprime to + # the public exponent e + if e and is_possible_prime: + if e & 1: + if GCD(e, X-1) != 1: + is_possible_prime = 0 + else: + if GCD(e, (X-1) // 2) != 1: + is_possible_prime = 0 + + # do some Rabin-Miller-Tests + if is_possible_prime: + result = _rabinMillerTest (X, rabin_miller_rounds) + if result > 0: + break + X += increment + # abort when X has more bits than requested + # TODO: maybe we shouldn't abort but rather start over. + if X >= 1 << N: + raise RuntimeError ("Couldn't find prime in field. " + "Developer: Increase field_size") + return X + +def isPrime(N, false_positive_prob=1e-6, randfunc=None): + r"""Test if a number *N* is a prime.
+def isPrime(N, false_positive_prob=1e-6, randfunc=None):
+    r"""Test if a number *N* is a prime.
+
+    Args:
+        false_positive_prob (float):
+          The statistical probability for the result not to be actually a
+          prime. It defaults to 10\ :sup:`-6`.
+          Note that the real probability of a false-positive is far less.
+          This is just the mathematically provable limit.
+        randfunc (callable):
+          A function that takes a parameter *N* and that returns
+          a random byte string of such length.
+          If omitted, :func:`Cryptodome.Random.get_random_bytes` is used.
+
+    Return:
+        `True` if the input is indeed prime.
+    """
+
+    if randfunc is None:
+        randfunc = Random.get_random_bytes
+
+    if _fastmath is not None:
+        return _fastmath.isPrime(long(N), false_positive_prob, randfunc)
+
+    if N < 3 or N & 1 == 0:
+        return N == 2
+    for p in sieve_base:
+        if N == p:
+            return True
+        if N % p == 0:
+            return False
+
+    rounds = int(math.ceil(-math.log(false_positive_prob)/math.log(4)))
+    return bool(_rabinMillerTest(N, rounds, randfunc))
+
+
+# Improved conversion functions contributed by Barry Warsaw, after
+# careful benchmarking
+
+import struct
+
+def long_to_bytes(n, blocksize=0):
+    """Convert a positive integer to a byte string using big endian encoding.
+
+    If :data:`blocksize` is absent or zero, the byte string will
+    be of minimal length.
+
+    Otherwise, the length of the byte string is guaranteed to be a multiple
+    of :data:`blocksize`. If necessary, zeroes (``\\x00``) are added at the left.
+
+    .. note::
+        In Python 3, if you are sure that :data:`n` can fit into
+        :data:`blocksize` bytes, you can simply use the native method instead::
+
+            >>> n.to_bytes(blocksize, 'big')
+
+        For instance::
+
+            >>> n = 80
+            >>> n.to_bytes(2, 'big')
+            b'\\x00P'
+
+        However, and unlike this ``long_to_bytes()`` function,
+        an ``OverflowError`` exception is raised if :data:`n` does not fit.
+    """
+
+    if n < 0 or blocksize < 0:
+        raise ValueError("Values must be non-negative")
+
+    result = []
+    pack = struct.pack
+
+    # Fill the first block independently from the value of n
+    bsr = blocksize
+    while bsr >= 8:
+        result.insert(0, pack('>Q', n & 0xFFFFFFFFFFFFFFFF))
+        n = n >> 64
+        bsr -= 8
+
+    while bsr >= 4:
+        result.insert(0, pack('>I', n & 0xFFFFFFFF))
+        n = n >> 32
+        bsr -= 4
+
+    while bsr > 0:
+        result.insert(0, pack('>B', n & 0xFF))
+        n = n >> 8
+        bsr -= 1
+
+    if n == 0:
+        if len(result) == 0:
+            bresult = b'\x00'
+        else:
+            bresult = b''.join(result)
+    else:
+        # The encoded number exceeds the block size
+        while n > 0:
+            result.insert(0, pack('>Q', n & 0xFFFFFFFFFFFFFFFF))
+            n = n >> 64
+        result[0] = result[0].lstrip(b'\x00')
+        bresult = b''.join(result)
+    # bresult has minimum length here
+    if blocksize > 0:
+        target_len = ((len(bresult) - 1) // blocksize + 1) * blocksize
+        bresult = b'\x00' * (target_len - len(bresult)) + bresult
+
+    return bresult
+
+
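+# [Editor's note: the example below is an illustrative addition, not part of
+# the upstream module.]
+# long_to_bytes() left-pads with NUL bytes up to a multiple of blocksize, and
+# bytes_to_long() below reverses the encoding:
+#
+#     >>> long_to_bytes(80)
+#     b'P'
+#     >>> long_to_bytes(80, 2)
+#     b'\x00P'
+#     >>> bytes_to_long(long_to_bytes(80, 16)) == 80
+#     True
+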
+ """ + acc = 0 + + unpack = struct.unpack + + # Up to Python 2.7.4, struct.unpack can't work with bytearrays nor + # memoryviews + if sys.version_info[0:3] < (2, 7, 4): + if isinstance(s, bytearray): + s = bytes(s) + elif isinstance(s, memoryview): + s = s.tobytes() + + length = len(s) + if length % 4: + extra = (4 - length % 4) + s = b'\x00' * extra + s + length = length + extra + for i in range(0, length, 4): + acc = (acc << 32) + unpack('>I', s[i:i+4])[0] + return acc + + +# For backwards compatibility... +import warnings +def long2str(n, blocksize=0): + warnings.warn("long2str() has been replaced by long_to_bytes()") + return long_to_bytes(n, blocksize) +def str2long(s): + warnings.warn("str2long() has been replaced by bytes_to_long()") + return bytes_to_long(s) + + +# The first 10000 primes used for checking primality. +# This should be enough to eliminate most of the odd +# numbers before needing to do a Rabin-Miller test at all. +sieve_base = ( + 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, + 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, + 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, + 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, + 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, + 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, + 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, + 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, + 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, + 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, + 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, + 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, + 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, + 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, + 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, + 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, + 947, 953, 967, 971, 977, 983, 991, 997, 1009, 1013, + 1019, 1021, 1031, 1033, 1039, 1049, 1051, 1061, 1063, 1069, + 1087, 1091, 1093, 1097, 1103, 1109, 1117, 1123, 1129, 1151, + 1153, 1163, 1171, 1181, 1187, 1193, 1201, 1213, 1217, 1223, + 1229, 1231, 1237, 1249, 1259, 1277, 1279, 1283, 1289, 1291, + 1297, 1301, 1303, 1307, 1319, 1321, 1327, 1361, 1367, 1373, + 1381, 1399, 1409, 1423, 1427, 1429, 1433, 1439, 1447, 1451, + 1453, 1459, 1471, 1481, 1483, 1487, 1489, 1493, 1499, 1511, + 1523, 1531, 1543, 1549, 1553, 1559, 1567, 1571, 1579, 1583, + 1597, 1601, 1607, 1609, 1613, 1619, 1621, 1627, 1637, 1657, + 1663, 1667, 1669, 1693, 1697, 1699, 1709, 1721, 1723, 1733, + 1741, 1747, 1753, 1759, 1777, 1783, 1787, 1789, 1801, 1811, + 1823, 1831, 1847, 1861, 1867, 1871, 1873, 1877, 1879, 1889, + 1901, 1907, 1913, 1931, 1933, 1949, 1951, 1973, 1979, 1987, + 1993, 1997, 1999, 2003, 2011, 2017, 2027, 2029, 2039, 2053, + 2063, 2069, 2081, 2083, 2087, 2089, 2099, 2111, 2113, 2129, + 2131, 2137, 2141, 2143, 2153, 2161, 2179, 2203, 2207, 2213, + 2221, 2237, 2239, 2243, 2251, 2267, 2269, 2273, 2281, 2287, + 2293, 2297, 2309, 2311, 2333, 2339, 2341, 2347, 2351, 2357, + 2371, 2377, 2381, 2383, 2389, 2393, 2399, 2411, 2417, 2423, + 2437, 2441, 2447, 2459, 2467, 2473, 2477, 2503, 2521, 2531, + 2539, 2543, 2549, 2551, 2557, 2579, 2591, 2593, 2609, 2617, + 2621, 2633, 2647, 2657, 2659, 2663, 2671, 2677, 2683, 2687, + 2689, 2693, 2699, 2707, 2711, 2713, 2719, 2729, 2731, 2741, + 2749, 2753, 2767, 2777, 2789, 2791, 2797, 2801, 2803, 2819, + 2833, 2837, 2843, 2851, 2857, 2861, 2879, 2887, 2897, 2903, + 2909, 2917, 2927, 2939, 2953, 2957, 2963, 2969, 2971, 2999, + 3001, 3011, 3019, 3023, 3037, 3041, 3049, 3061, 3067, 3079, + 3083, 3089, 3109, 3119, 3121, 3137, 3163, 3167, 3169, 
3181, + 3187, 3191, 3203, 3209, 3217, 3221, 3229, 3251, 3253, 3257, + 3259, 3271, 3299, 3301, 3307, 3313, 3319, 3323, 3329, 3331, + 3343, 3347, 3359, 3361, 3371, 3373, 3389, 3391, 3407, 3413, + 3433, 3449, 3457, 3461, 3463, 3467, 3469, 3491, 3499, 3511, + 3517, 3527, 3529, 3533, 3539, 3541, 3547, 3557, 3559, 3571, + 3581, 3583, 3593, 3607, 3613, 3617, 3623, 3631, 3637, 3643, + 3659, 3671, 3673, 3677, 3691, 3697, 3701, 3709, 3719, 3727, + 3733, 3739, 3761, 3767, 3769, 3779, 3793, 3797, 3803, 3821, + 3823, 3833, 3847, 3851, 3853, 3863, 3877, 3881, 3889, 3907, + 3911, 3917, 3919, 3923, 3929, 3931, 3943, 3947, 3967, 3989, + 4001, 4003, 4007, 4013, 4019, 4021, 4027, 4049, 4051, 4057, + 4073, 4079, 4091, 4093, 4099, 4111, 4127, 4129, 4133, 4139, + 4153, 4157, 4159, 4177, 4201, 4211, 4217, 4219, 4229, 4231, + 4241, 4243, 4253, 4259, 4261, 4271, 4273, 4283, 4289, 4297, + 4327, 4337, 4339, 4349, 4357, 4363, 4373, 4391, 4397, 4409, + 4421, 4423, 4441, 4447, 4451, 4457, 4463, 4481, 4483, 4493, + 4507, 4513, 4517, 4519, 4523, 4547, 4549, 4561, 4567, 4583, + 4591, 4597, 4603, 4621, 4637, 4639, 4643, 4649, 4651, 4657, + 4663, 4673, 4679, 4691, 4703, 4721, 4723, 4729, 4733, 4751, + 4759, 4783, 4787, 4789, 4793, 4799, 4801, 4813, 4817, 4831, + 4861, 4871, 4877, 4889, 4903, 4909, 4919, 4931, 4933, 4937, + 4943, 4951, 4957, 4967, 4969, 4973, 4987, 4993, 4999, 5003, + 5009, 5011, 5021, 5023, 5039, 5051, 5059, 5077, 5081, 5087, + 5099, 5101, 5107, 5113, 5119, 5147, 5153, 5167, 5171, 5179, + 5189, 5197, 5209, 5227, 5231, 5233, 5237, 5261, 5273, 5279, + 5281, 5297, 5303, 5309, 5323, 5333, 5347, 5351, 5381, 5387, + 5393, 5399, 5407, 5413, 5417, 5419, 5431, 5437, 5441, 5443, + 5449, 5471, 5477, 5479, 5483, 5501, 5503, 5507, 5519, 5521, + 5527, 5531, 5557, 5563, 5569, 5573, 5581, 5591, 5623, 5639, + 5641, 5647, 5651, 5653, 5657, 5659, 5669, 5683, 5689, 5693, + 5701, 5711, 5717, 5737, 5741, 5743, 5749, 5779, 5783, 5791, + 5801, 5807, 5813, 5821, 5827, 5839, 5843, 5849, 5851, 5857, + 5861, 5867, 5869, 5879, 5881, 5897, 5903, 5923, 5927, 5939, + 5953, 5981, 5987, 6007, 6011, 6029, 6037, 6043, 6047, 6053, + 6067, 6073, 6079, 6089, 6091, 6101, 6113, 6121, 6131, 6133, + 6143, 6151, 6163, 6173, 6197, 6199, 6203, 6211, 6217, 6221, + 6229, 6247, 6257, 6263, 6269, 6271, 6277, 6287, 6299, 6301, + 6311, 6317, 6323, 6329, 6337, 6343, 6353, 6359, 6361, 6367, + 6373, 6379, 6389, 6397, 6421, 6427, 6449, 6451, 6469, 6473, + 6481, 6491, 6521, 6529, 6547, 6551, 6553, 6563, 6569, 6571, + 6577, 6581, 6599, 6607, 6619, 6637, 6653, 6659, 6661, 6673, + 6679, 6689, 6691, 6701, 6703, 6709, 6719, 6733, 6737, 6761, + 6763, 6779, 6781, 6791, 6793, 6803, 6823, 6827, 6829, 6833, + 6841, 6857, 6863, 6869, 6871, 6883, 6899, 6907, 6911, 6917, + 6947, 6949, 6959, 6961, 6967, 6971, 6977, 6983, 6991, 6997, + 7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, + 7109, 7121, 7127, 7129, 7151, 7159, 7177, 7187, 7193, 7207, + 7211, 7213, 7219, 7229, 7237, 7243, 7247, 7253, 7283, 7297, + 7307, 7309, 7321, 7331, 7333, 7349, 7351, 7369, 7393, 7411, + 7417, 7433, 7451, 7457, 7459, 7477, 7481, 7487, 7489, 7499, + 7507, 7517, 7523, 7529, 7537, 7541, 7547, 7549, 7559, 7561, + 7573, 7577, 7583, 7589, 7591, 7603, 7607, 7621, 7639, 7643, + 7649, 7669, 7673, 7681, 7687, 7691, 7699, 7703, 7717, 7723, + 7727, 7741, 7753, 7757, 7759, 7789, 7793, 7817, 7823, 7829, + 7841, 7853, 7867, 7873, 7877, 7879, 7883, 7901, 7907, 7919, + 7927, 7933, 7937, 7949, 7951, 7963, 7993, 8009, 8011, 8017, + 8039, 8053, 8059, 8069, 8081, 8087, 8089, 8093, 8101, 8111, + 8117, 8123, 
8147, 8161, 8167, 8171, 8179, 8191, 8209, 8219, + 8221, 8231, 8233, 8237, 8243, 8263, 8269, 8273, 8287, 8291, + 8293, 8297, 8311, 8317, 8329, 8353, 8363, 8369, 8377, 8387, + 8389, 8419, 8423, 8429, 8431, 8443, 8447, 8461, 8467, 8501, + 8513, 8521, 8527, 8537, 8539, 8543, 8563, 8573, 8581, 8597, + 8599, 8609, 8623, 8627, 8629, 8641, 8647, 8663, 8669, 8677, + 8681, 8689, 8693, 8699, 8707, 8713, 8719, 8731, 8737, 8741, + 8747, 8753, 8761, 8779, 8783, 8803, 8807, 8819, 8821, 8831, + 8837, 8839, 8849, 8861, 8863, 8867, 8887, 8893, 8923, 8929, + 8933, 8941, 8951, 8963, 8969, 8971, 8999, 9001, 9007, 9011, + 9013, 9029, 9041, 9043, 9049, 9059, 9067, 9091, 9103, 9109, + 9127, 9133, 9137, 9151, 9157, 9161, 9173, 9181, 9187, 9199, + 9203, 9209, 9221, 9227, 9239, 9241, 9257, 9277, 9281, 9283, + 9293, 9311, 9319, 9323, 9337, 9341, 9343, 9349, 9371, 9377, + 9391, 9397, 9403, 9413, 9419, 9421, 9431, 9433, 9437, 9439, + 9461, 9463, 9467, 9473, 9479, 9491, 9497, 9511, 9521, 9533, + 9539, 9547, 9551, 9587, 9601, 9613, 9619, 9623, 9629, 9631, + 9643, 9649, 9661, 9677, 9679, 9689, 9697, 9719, 9721, 9733, + 9739, 9743, 9749, 9767, 9769, 9781, 9787, 9791, 9803, 9811, + 9817, 9829, 9833, 9839, 9851, 9857, 9859, 9871, 9883, 9887, + 9901, 9907, 9923, 9929, 9931, 9941, 9949, 9967, 9973, 10007, + 10009, 10037, 10039, 10061, 10067, 10069, 10079, 10091, 10093, 10099, + 10103, 10111, 10133, 10139, 10141, 10151, 10159, 10163, 10169, 10177, + 10181, 10193, 10211, 10223, 10243, 10247, 10253, 10259, 10267, 10271, + 10273, 10289, 10301, 10303, 10313, 10321, 10331, 10333, 10337, 10343, + 10357, 10369, 10391, 10399, 10427, 10429, 10433, 10453, 10457, 10459, + 10463, 10477, 10487, 10499, 10501, 10513, 10529, 10531, 10559, 10567, + 10589, 10597, 10601, 10607, 10613, 10627, 10631, 10639, 10651, 10657, + 10663, 10667, 10687, 10691, 10709, 10711, 10723, 10729, 10733, 10739, + 10753, 10771, 10781, 10789, 10799, 10831, 10837, 10847, 10853, 10859, + 10861, 10867, 10883, 10889, 10891, 10903, 10909, 10937, 10939, 10949, + 10957, 10973, 10979, 10987, 10993, 11003, 11027, 11047, 11057, 11059, + 11069, 11071, 11083, 11087, 11093, 11113, 11117, 11119, 11131, 11149, + 11159, 11161, 11171, 11173, 11177, 11197, 11213, 11239, 11243, 11251, + 11257, 11261, 11273, 11279, 11287, 11299, 11311, 11317, 11321, 11329, + 11351, 11353, 11369, 11383, 11393, 11399, 11411, 11423, 11437, 11443, + 11447, 11467, 11471, 11483, 11489, 11491, 11497, 11503, 11519, 11527, + 11549, 11551, 11579, 11587, 11593, 11597, 11617, 11621, 11633, 11657, + 11677, 11681, 11689, 11699, 11701, 11717, 11719, 11731, 11743, 11777, + 11779, 11783, 11789, 11801, 11807, 11813, 11821, 11827, 11831, 11833, + 11839, 11863, 11867, 11887, 11897, 11903, 11909, 11923, 11927, 11933, + 11939, 11941, 11953, 11959, 11969, 11971, 11981, 11987, 12007, 12011, + 12037, 12041, 12043, 12049, 12071, 12073, 12097, 12101, 12107, 12109, + 12113, 12119, 12143, 12149, 12157, 12161, 12163, 12197, 12203, 12211, + 12227, 12239, 12241, 12251, 12253, 12263, 12269, 12277, 12281, 12289, + 12301, 12323, 12329, 12343, 12347, 12373, 12377, 12379, 12391, 12401, + 12409, 12413, 12421, 12433, 12437, 12451, 12457, 12473, 12479, 12487, + 12491, 12497, 12503, 12511, 12517, 12527, 12539, 12541, 12547, 12553, + 12569, 12577, 12583, 12589, 12601, 12611, 12613, 12619, 12637, 12641, + 12647, 12653, 12659, 12671, 12689, 12697, 12703, 12713, 12721, 12739, + 12743, 12757, 12763, 12781, 12791, 12799, 12809, 12821, 12823, 12829, + 12841, 12853, 12889, 12893, 12899, 12907, 12911, 12917, 12919, 12923, + 12941, 12953, 12959, 12967, 
12973, 12979, 12983, 13001, 13003, 13007, + 13009, 13033, 13037, 13043, 13049, 13063, 13093, 13099, 13103, 13109, + 13121, 13127, 13147, 13151, 13159, 13163, 13171, 13177, 13183, 13187, + 13217, 13219, 13229, 13241, 13249, 13259, 13267, 13291, 13297, 13309, + 13313, 13327, 13331, 13337, 13339, 13367, 13381, 13397, 13399, 13411, + 13417, 13421, 13441, 13451, 13457, 13463, 13469, 13477, 13487, 13499, + 13513, 13523, 13537, 13553, 13567, 13577, 13591, 13597, 13613, 13619, + 13627, 13633, 13649, 13669, 13679, 13681, 13687, 13691, 13693, 13697, + 13709, 13711, 13721, 13723, 13729, 13751, 13757, 13759, 13763, 13781, + 13789, 13799, 13807, 13829, 13831, 13841, 13859, 13873, 13877, 13879, + 13883, 13901, 13903, 13907, 13913, 13921, 13931, 13933, 13963, 13967, + 13997, 13999, 14009, 14011, 14029, 14033, 14051, 14057, 14071, 14081, + 14083, 14087, 14107, 14143, 14149, 14153, 14159, 14173, 14177, 14197, + 14207, 14221, 14243, 14249, 14251, 14281, 14293, 14303, 14321, 14323, + 14327, 14341, 14347, 14369, 14387, 14389, 14401, 14407, 14411, 14419, + 14423, 14431, 14437, 14447, 14449, 14461, 14479, 14489, 14503, 14519, + 14533, 14537, 14543, 14549, 14551, 14557, 14561, 14563, 14591, 14593, + 14621, 14627, 14629, 14633, 14639, 14653, 14657, 14669, 14683, 14699, + 14713, 14717, 14723, 14731, 14737, 14741, 14747, 14753, 14759, 14767, + 14771, 14779, 14783, 14797, 14813, 14821, 14827, 14831, 14843, 14851, + 14867, 14869, 14879, 14887, 14891, 14897, 14923, 14929, 14939, 14947, + 14951, 14957, 14969, 14983, 15013, 15017, 15031, 15053, 15061, 15073, + 15077, 15083, 15091, 15101, 15107, 15121, 15131, 15137, 15139, 15149, + 15161, 15173, 15187, 15193, 15199, 15217, 15227, 15233, 15241, 15259, + 15263, 15269, 15271, 15277, 15287, 15289, 15299, 15307, 15313, 15319, + 15329, 15331, 15349, 15359, 15361, 15373, 15377, 15383, 15391, 15401, + 15413, 15427, 15439, 15443, 15451, 15461, 15467, 15473, 15493, 15497, + 15511, 15527, 15541, 15551, 15559, 15569, 15581, 15583, 15601, 15607, + 15619, 15629, 15641, 15643, 15647, 15649, 15661, 15667, 15671, 15679, + 15683, 15727, 15731, 15733, 15737, 15739, 15749, 15761, 15767, 15773, + 15787, 15791, 15797, 15803, 15809, 15817, 15823, 15859, 15877, 15881, + 15887, 15889, 15901, 15907, 15913, 15919, 15923, 15937, 15959, 15971, + 15973, 15991, 16001, 16007, 16033, 16057, 16061, 16063, 16067, 16069, + 16073, 16087, 16091, 16097, 16103, 16111, 16127, 16139, 16141, 16183, + 16187, 16189, 16193, 16217, 16223, 16229, 16231, 16249, 16253, 16267, + 16273, 16301, 16319, 16333, 16339, 16349, 16361, 16363, 16369, 16381, + 16411, 16417, 16421, 16427, 16433, 16447, 16451, 16453, 16477, 16481, + 16487, 16493, 16519, 16529, 16547, 16553, 16561, 16567, 16573, 16603, + 16607, 16619, 16631, 16633, 16649, 16651, 16657, 16661, 16673, 16691, + 16693, 16699, 16703, 16729, 16741, 16747, 16759, 16763, 16787, 16811, + 16823, 16829, 16831, 16843, 16871, 16879, 16883, 16889, 16901, 16903, + 16921, 16927, 16931, 16937, 16943, 16963, 16979, 16981, 16987, 16993, + 17011, 17021, 17027, 17029, 17033, 17041, 17047, 17053, 17077, 17093, + 17099, 17107, 17117, 17123, 17137, 17159, 17167, 17183, 17189, 17191, + 17203, 17207, 17209, 17231, 17239, 17257, 17291, 17293, 17299, 17317, + 17321, 17327, 17333, 17341, 17351, 17359, 17377, 17383, 17387, 17389, + 17393, 17401, 17417, 17419, 17431, 17443, 17449, 17467, 17471, 17477, + 17483, 17489, 17491, 17497, 17509, 17519, 17539, 17551, 17569, 17573, + 17579, 17581, 17597, 17599, 17609, 17623, 17627, 17657, 17659, 17669, + 17681, 17683, 17707, 17713, 17729, 17737, 17747, 
17749, 17761, 17783, + 17789, 17791, 17807, 17827, 17837, 17839, 17851, 17863, 17881, 17891, + 17903, 17909, 17911, 17921, 17923, 17929, 17939, 17957, 17959, 17971, + 17977, 17981, 17987, 17989, 18013, 18041, 18043, 18047, 18049, 18059, + 18061, 18077, 18089, 18097, 18119, 18121, 18127, 18131, 18133, 18143, + 18149, 18169, 18181, 18191, 18199, 18211, 18217, 18223, 18229, 18233, + 18251, 18253, 18257, 18269, 18287, 18289, 18301, 18307, 18311, 18313, + 18329, 18341, 18353, 18367, 18371, 18379, 18397, 18401, 18413, 18427, + 18433, 18439, 18443, 18451, 18457, 18461, 18481, 18493, 18503, 18517, + 18521, 18523, 18539, 18541, 18553, 18583, 18587, 18593, 18617, 18637, + 18661, 18671, 18679, 18691, 18701, 18713, 18719, 18731, 18743, 18749, + 18757, 18773, 18787, 18793, 18797, 18803, 18839, 18859, 18869, 18899, + 18911, 18913, 18917, 18919, 18947, 18959, 18973, 18979, 19001, 19009, + 19013, 19031, 19037, 19051, 19069, 19073, 19079, 19081, 19087, 19121, + 19139, 19141, 19157, 19163, 19181, 19183, 19207, 19211, 19213, 19219, + 19231, 19237, 19249, 19259, 19267, 19273, 19289, 19301, 19309, 19319, + 19333, 19373, 19379, 19381, 19387, 19391, 19403, 19417, 19421, 19423, + 19427, 19429, 19433, 19441, 19447, 19457, 19463, 19469, 19471, 19477, + 19483, 19489, 19501, 19507, 19531, 19541, 19543, 19553, 19559, 19571, + 19577, 19583, 19597, 19603, 19609, 19661, 19681, 19687, 19697, 19699, + 19709, 19717, 19727, 19739, 19751, 19753, 19759, 19763, 19777, 19793, + 19801, 19813, 19819, 19841, 19843, 19853, 19861, 19867, 19889, 19891, + 19913, 19919, 19927, 19937, 19949, 19961, 19963, 19973, 19979, 19991, + 19993, 19997, 20011, 20021, 20023, 20029, 20047, 20051, 20063, 20071, + 20089, 20101, 20107, 20113, 20117, 20123, 20129, 20143, 20147, 20149, + 20161, 20173, 20177, 20183, 20201, 20219, 20231, 20233, 20249, 20261, + 20269, 20287, 20297, 20323, 20327, 20333, 20341, 20347, 20353, 20357, + 20359, 20369, 20389, 20393, 20399, 20407, 20411, 20431, 20441, 20443, + 20477, 20479, 20483, 20507, 20509, 20521, 20533, 20543, 20549, 20551, + 20563, 20593, 20599, 20611, 20627, 20639, 20641, 20663, 20681, 20693, + 20707, 20717, 20719, 20731, 20743, 20747, 20749, 20753, 20759, 20771, + 20773, 20789, 20807, 20809, 20849, 20857, 20873, 20879, 20887, 20897, + 20899, 20903, 20921, 20929, 20939, 20947, 20959, 20963, 20981, 20983, + 21001, 21011, 21013, 21017, 21019, 21023, 21031, 21059, 21061, 21067, + 21089, 21101, 21107, 21121, 21139, 21143, 21149, 21157, 21163, 21169, + 21179, 21187, 21191, 21193, 21211, 21221, 21227, 21247, 21269, 21277, + 21283, 21313, 21317, 21319, 21323, 21341, 21347, 21377, 21379, 21383, + 21391, 21397, 21401, 21407, 21419, 21433, 21467, 21481, 21487, 21491, + 21493, 21499, 21503, 21517, 21521, 21523, 21529, 21557, 21559, 21563, + 21569, 21577, 21587, 21589, 21599, 21601, 21611, 21613, 21617, 21647, + 21649, 21661, 21673, 21683, 21701, 21713, 21727, 21737, 21739, 21751, + 21757, 21767, 21773, 21787, 21799, 21803, 21817, 21821, 21839, 21841, + 21851, 21859, 21863, 21871, 21881, 21893, 21911, 21929, 21937, 21943, + 21961, 21977, 21991, 21997, 22003, 22013, 22027, 22031, 22037, 22039, + 22051, 22063, 22067, 22073, 22079, 22091, 22093, 22109, 22111, 22123, + 22129, 22133, 22147, 22153, 22157, 22159, 22171, 22189, 22193, 22229, + 22247, 22259, 22271, 22273, 22277, 22279, 22283, 22291, 22303, 22307, + 22343, 22349, 22367, 22369, 22381, 22391, 22397, 22409, 22433, 22441, + 22447, 22453, 22469, 22481, 22483, 22501, 22511, 22531, 22541, 22543, + 22549, 22567, 22571, 22573, 22613, 22619, 22621, 22637, 22639, 22643, + 
22651, 22669, 22679, 22691, 22697, 22699, 22709, 22717, 22721, 22727, + 22739, 22741, 22751, 22769, 22777, 22783, 22787, 22807, 22811, 22817, + 22853, 22859, 22861, 22871, 22877, 22901, 22907, 22921, 22937, 22943, + 22961, 22963, 22973, 22993, 23003, 23011, 23017, 23021, 23027, 23029, + 23039, 23041, 23053, 23057, 23059, 23063, 23071, 23081, 23087, 23099, + 23117, 23131, 23143, 23159, 23167, 23173, 23189, 23197, 23201, 23203, + 23209, 23227, 23251, 23269, 23279, 23291, 23293, 23297, 23311, 23321, + 23327, 23333, 23339, 23357, 23369, 23371, 23399, 23417, 23431, 23447, + 23459, 23473, 23497, 23509, 23531, 23537, 23539, 23549, 23557, 23561, + 23563, 23567, 23581, 23593, 23599, 23603, 23609, 23623, 23627, 23629, + 23633, 23663, 23669, 23671, 23677, 23687, 23689, 23719, 23741, 23743, + 23747, 23753, 23761, 23767, 23773, 23789, 23801, 23813, 23819, 23827, + 23831, 23833, 23857, 23869, 23873, 23879, 23887, 23893, 23899, 23909, + 23911, 23917, 23929, 23957, 23971, 23977, 23981, 23993, 24001, 24007, + 24019, 24023, 24029, 24043, 24049, 24061, 24071, 24077, 24083, 24091, + 24097, 24103, 24107, 24109, 24113, 24121, 24133, 24137, 24151, 24169, + 24179, 24181, 24197, 24203, 24223, 24229, 24239, 24247, 24251, 24281, + 24317, 24329, 24337, 24359, 24371, 24373, 24379, 24391, 24407, 24413, + 24419, 24421, 24439, 24443, 24469, 24473, 24481, 24499, 24509, 24517, + 24527, 24533, 24547, 24551, 24571, 24593, 24611, 24623, 24631, 24659, + 24671, 24677, 24683, 24691, 24697, 24709, 24733, 24749, 24763, 24767, + 24781, 24793, 24799, 24809, 24821, 24841, 24847, 24851, 24859, 24877, + 24889, 24907, 24917, 24919, 24923, 24943, 24953, 24967, 24971, 24977, + 24979, 24989, 25013, 25031, 25033, 25037, 25057, 25073, 25087, 25097, + 25111, 25117, 25121, 25127, 25147, 25153, 25163, 25169, 25171, 25183, + 25189, 25219, 25229, 25237, 25243, 25247, 25253, 25261, 25301, 25303, + 25307, 25309, 25321, 25339, 25343, 25349, 25357, 25367, 25373, 25391, + 25409, 25411, 25423, 25439, 25447, 25453, 25457, 25463, 25469, 25471, + 25523, 25537, 25541, 25561, 25577, 25579, 25583, 25589, 25601, 25603, + 25609, 25621, 25633, 25639, 25643, 25657, 25667, 25673, 25679, 25693, + 25703, 25717, 25733, 25741, 25747, 25759, 25763, 25771, 25793, 25799, + 25801, 25819, 25841, 25847, 25849, 25867, 25873, 25889, 25903, 25913, + 25919, 25931, 25933, 25939, 25943, 25951, 25969, 25981, 25997, 25999, + 26003, 26017, 26021, 26029, 26041, 26053, 26083, 26099, 26107, 26111, + 26113, 26119, 26141, 26153, 26161, 26171, 26177, 26183, 26189, 26203, + 26209, 26227, 26237, 26249, 26251, 26261, 26263, 26267, 26293, 26297, + 26309, 26317, 26321, 26339, 26347, 26357, 26371, 26387, 26393, 26399, + 26407, 26417, 26423, 26431, 26437, 26449, 26459, 26479, 26489, 26497, + 26501, 26513, 26539, 26557, 26561, 26573, 26591, 26597, 26627, 26633, + 26641, 26647, 26669, 26681, 26683, 26687, 26693, 26699, 26701, 26711, + 26713, 26717, 26723, 26729, 26731, 26737, 26759, 26777, 26783, 26801, + 26813, 26821, 26833, 26839, 26849, 26861, 26863, 26879, 26881, 26891, + 26893, 26903, 26921, 26927, 26947, 26951, 26953, 26959, 26981, 26987, + 26993, 27011, 27017, 27031, 27043, 27059, 27061, 27067, 27073, 27077, + 27091, 27103, 27107, 27109, 27127, 27143, 27179, 27191, 27197, 27211, + 27239, 27241, 27253, 27259, 27271, 27277, 27281, 27283, 27299, 27329, + 27337, 27361, 27367, 27397, 27407, 27409, 27427, 27431, 27437, 27449, + 27457, 27479, 27481, 27487, 27509, 27527, 27529, 27539, 27541, 27551, + 27581, 27583, 27611, 27617, 27631, 27647, 27653, 27673, 27689, 27691, + 27697, 27701, 27733, 
27737, 27739, 27743, 27749, 27751, 27763, 27767, + 27773, 27779, 27791, 27793, 27799, 27803, 27809, 27817, 27823, 27827, + 27847, 27851, 27883, 27893, 27901, 27917, 27919, 27941, 27943, 27947, + 27953, 27961, 27967, 27983, 27997, 28001, 28019, 28027, 28031, 28051, + 28057, 28069, 28081, 28087, 28097, 28099, 28109, 28111, 28123, 28151, + 28163, 28181, 28183, 28201, 28211, 28219, 28229, 28277, 28279, 28283, + 28289, 28297, 28307, 28309, 28319, 28349, 28351, 28387, 28393, 28403, + 28409, 28411, 28429, 28433, 28439, 28447, 28463, 28477, 28493, 28499, + 28513, 28517, 28537, 28541, 28547, 28549, 28559, 28571, 28573, 28579, + 28591, 28597, 28603, 28607, 28619, 28621, 28627, 28631, 28643, 28649, + 28657, 28661, 28663, 28669, 28687, 28697, 28703, 28711, 28723, 28729, + 28751, 28753, 28759, 28771, 28789, 28793, 28807, 28813, 28817, 28837, + 28843, 28859, 28867, 28871, 28879, 28901, 28909, 28921, 28927, 28933, + 28949, 28961, 28979, 29009, 29017, 29021, 29023, 29027, 29033, 29059, + 29063, 29077, 29101, 29123, 29129, 29131, 29137, 29147, 29153, 29167, + 29173, 29179, 29191, 29201, 29207, 29209, 29221, 29231, 29243, 29251, + 29269, 29287, 29297, 29303, 29311, 29327, 29333, 29339, 29347, 29363, + 29383, 29387, 29389, 29399, 29401, 29411, 29423, 29429, 29437, 29443, + 29453, 29473, 29483, 29501, 29527, 29531, 29537, 29567, 29569, 29573, + 29581, 29587, 29599, 29611, 29629, 29633, 29641, 29663, 29669, 29671, + 29683, 29717, 29723, 29741, 29753, 29759, 29761, 29789, 29803, 29819, + 29833, 29837, 29851, 29863, 29867, 29873, 29879, 29881, 29917, 29921, + 29927, 29947, 29959, 29983, 29989, 30011, 30013, 30029, 30047, 30059, + 30071, 30089, 30091, 30097, 30103, 30109, 30113, 30119, 30133, 30137, + 30139, 30161, 30169, 30181, 30187, 30197, 30203, 30211, 30223, 30241, + 30253, 30259, 30269, 30271, 30293, 30307, 30313, 30319, 30323, 30341, + 30347, 30367, 30389, 30391, 30403, 30427, 30431, 30449, 30467, 30469, + 30491, 30493, 30497, 30509, 30517, 30529, 30539, 30553, 30557, 30559, + 30577, 30593, 30631, 30637, 30643, 30649, 30661, 30671, 30677, 30689, + 30697, 30703, 30707, 30713, 30727, 30757, 30763, 30773, 30781, 30803, + 30809, 30817, 30829, 30839, 30841, 30851, 30853, 30859, 30869, 30871, + 30881, 30893, 30911, 30931, 30937, 30941, 30949, 30971, 30977, 30983, + 31013, 31019, 31033, 31039, 31051, 31063, 31069, 31079, 31081, 31091, + 31121, 31123, 31139, 31147, 31151, 31153, 31159, 31177, 31181, 31183, + 31189, 31193, 31219, 31223, 31231, 31237, 31247, 31249, 31253, 31259, + 31267, 31271, 31277, 31307, 31319, 31321, 31327, 31333, 31337, 31357, + 31379, 31387, 31391, 31393, 31397, 31469, 31477, 31481, 31489, 31511, + 31513, 31517, 31531, 31541, 31543, 31547, 31567, 31573, 31583, 31601, + 31607, 31627, 31643, 31649, 31657, 31663, 31667, 31687, 31699, 31721, + 31723, 31727, 31729, 31741, 31751, 31769, 31771, 31793, 31799, 31817, + 31847, 31849, 31859, 31873, 31883, 31891, 31907, 31957, 31963, 31973, + 31981, 31991, 32003, 32009, 32027, 32029, 32051, 32057, 32059, 32063, + 32069, 32077, 32083, 32089, 32099, 32117, 32119, 32141, 32143, 32159, + 32173, 32183, 32189, 32191, 32203, 32213, 32233, 32237, 32251, 32257, + 32261, 32297, 32299, 32303, 32309, 32321, 32323, 32327, 32341, 32353, + 32359, 32363, 32369, 32371, 32377, 32381, 32401, 32411, 32413, 32423, + 32429, 32441, 32443, 32467, 32479, 32491, 32497, 32503, 32507, 32531, + 32533, 32537, 32561, 32563, 32569, 32573, 32579, 32587, 32603, 32609, + 32611, 32621, 32633, 32647, 32653, 32687, 32693, 32707, 32713, 32717, + 32719, 32749, 32771, 32779, 32783, 32789, 
32797, 32801, 32803, 32831, + 32833, 32839, 32843, 32869, 32887, 32909, 32911, 32917, 32933, 32939, + 32941, 32957, 32969, 32971, 32983, 32987, 32993, 32999, 33013, 33023, + 33029, 33037, 33049, 33053, 33071, 33073, 33083, 33091, 33107, 33113, + 33119, 33149, 33151, 33161, 33179, 33181, 33191, 33199, 33203, 33211, + 33223, 33247, 33287, 33289, 33301, 33311, 33317, 33329, 33331, 33343, + 33347, 33349, 33353, 33359, 33377, 33391, 33403, 33409, 33413, 33427, + 33457, 33461, 33469, 33479, 33487, 33493, 33503, 33521, 33529, 33533, + 33547, 33563, 33569, 33577, 33581, 33587, 33589, 33599, 33601, 33613, + 33617, 33619, 33623, 33629, 33637, 33641, 33647, 33679, 33703, 33713, + 33721, 33739, 33749, 33751, 33757, 33767, 33769, 33773, 33791, 33797, + 33809, 33811, 33827, 33829, 33851, 33857, 33863, 33871, 33889, 33893, + 33911, 33923, 33931, 33937, 33941, 33961, 33967, 33997, 34019, 34031, + 34033, 34039, 34057, 34061, 34123, 34127, 34129, 34141, 34147, 34157, + 34159, 34171, 34183, 34211, 34213, 34217, 34231, 34253, 34259, 34261, + 34267, 34273, 34283, 34297, 34301, 34303, 34313, 34319, 34327, 34337, + 34351, 34361, 34367, 34369, 34381, 34403, 34421, 34429, 34439, 34457, + 34469, 34471, 34483, 34487, 34499, 34501, 34511, 34513, 34519, 34537, + 34543, 34549, 34583, 34589, 34591, 34603, 34607, 34613, 34631, 34649, + 34651, 34667, 34673, 34679, 34687, 34693, 34703, 34721, 34729, 34739, + 34747, 34757, 34759, 34763, 34781, 34807, 34819, 34841, 34843, 34847, + 34849, 34871, 34877, 34883, 34897, 34913, 34919, 34939, 34949, 34961, + 34963, 34981, 35023, 35027, 35051, 35053, 35059, 35069, 35081, 35083, + 35089, 35099, 35107, 35111, 35117, 35129, 35141, 35149, 35153, 35159, + 35171, 35201, 35221, 35227, 35251, 35257, 35267, 35279, 35281, 35291, + 35311, 35317, 35323, 35327, 35339, 35353, 35363, 35381, 35393, 35401, + 35407, 35419, 35423, 35437, 35447, 35449, 35461, 35491, 35507, 35509, + 35521, 35527, 35531, 35533, 35537, 35543, 35569, 35573, 35591, 35593, + 35597, 35603, 35617, 35671, 35677, 35729, 35731, 35747, 35753, 35759, + 35771, 35797, 35801, 35803, 35809, 35831, 35837, 35839, 35851, 35863, + 35869, 35879, 35897, 35899, 35911, 35923, 35933, 35951, 35963, 35969, + 35977, 35983, 35993, 35999, 36007, 36011, 36013, 36017, 36037, 36061, + 36067, 36073, 36083, 36097, 36107, 36109, 36131, 36137, 36151, 36161, + 36187, 36191, 36209, 36217, 36229, 36241, 36251, 36263, 36269, 36277, + 36293, 36299, 36307, 36313, 36319, 36341, 36343, 36353, 36373, 36383, + 36389, 36433, 36451, 36457, 36467, 36469, 36473, 36479, 36493, 36497, + 36523, 36527, 36529, 36541, 36551, 36559, 36563, 36571, 36583, 36587, + 36599, 36607, 36629, 36637, 36643, 36653, 36671, 36677, 36683, 36691, + 36697, 36709, 36713, 36721, 36739, 36749, 36761, 36767, 36779, 36781, + 36787, 36791, 36793, 36809, 36821, 36833, 36847, 36857, 36871, 36877, + 36887, 36899, 36901, 36913, 36919, 36923, 36929, 36931, 36943, 36947, + 36973, 36979, 36997, 37003, 37013, 37019, 37021, 37039, 37049, 37057, + 37061, 37087, 37097, 37117, 37123, 37139, 37159, 37171, 37181, 37189, + 37199, 37201, 37217, 37223, 37243, 37253, 37273, 37277, 37307, 37309, + 37313, 37321, 37337, 37339, 37357, 37361, 37363, 37369, 37379, 37397, + 37409, 37423, 37441, 37447, 37463, 37483, 37489, 37493, 37501, 37507, + 37511, 37517, 37529, 37537, 37547, 37549, 37561, 37567, 37571, 37573, + 37579, 37589, 37591, 37607, 37619, 37633, 37643, 37649, 37657, 37663, + 37691, 37693, 37699, 37717, 37747, 37781, 37783, 37799, 37811, 37813, + 37831, 37847, 37853, 37861, 37871, 37879, 37889, 37897, 37907, 
37951, + 37957, 37963, 37967, 37987, 37991, 37993, 37997, 38011, 38039, 38047, + 38053, 38069, 38083, 38113, 38119, 38149, 38153, 38167, 38177, 38183, + 38189, 38197, 38201, 38219, 38231, 38237, 38239, 38261, 38273, 38281, + 38287, 38299, 38303, 38317, 38321, 38327, 38329, 38333, 38351, 38371, + 38377, 38393, 38431, 38447, 38449, 38453, 38459, 38461, 38501, 38543, + 38557, 38561, 38567, 38569, 38593, 38603, 38609, 38611, 38629, 38639, + 38651, 38653, 38669, 38671, 38677, 38693, 38699, 38707, 38711, 38713, + 38723, 38729, 38737, 38747, 38749, 38767, 38783, 38791, 38803, 38821, + 38833, 38839, 38851, 38861, 38867, 38873, 38891, 38903, 38917, 38921, + 38923, 38933, 38953, 38959, 38971, 38977, 38993, 39019, 39023, 39041, + 39043, 39047, 39079, 39089, 39097, 39103, 39107, 39113, 39119, 39133, + 39139, 39157, 39161, 39163, 39181, 39191, 39199, 39209, 39217, 39227, + 39229, 39233, 39239, 39241, 39251, 39293, 39301, 39313, 39317, 39323, + 39341, 39343, 39359, 39367, 39371, 39373, 39383, 39397, 39409, 39419, + 39439, 39443, 39451, 39461, 39499, 39503, 39509, 39511, 39521, 39541, + 39551, 39563, 39569, 39581, 39607, 39619, 39623, 39631, 39659, 39667, + 39671, 39679, 39703, 39709, 39719, 39727, 39733, 39749, 39761, 39769, + 39779, 39791, 39799, 39821, 39827, 39829, 39839, 39841, 39847, 39857, + 39863, 39869, 39877, 39883, 39887, 39901, 39929, 39937, 39953, 39971, + 39979, 39983, 39989, 40009, 40013, 40031, 40037, 40039, 40063, 40087, + 40093, 40099, 40111, 40123, 40127, 40129, 40151, 40153, 40163, 40169, + 40177, 40189, 40193, 40213, 40231, 40237, 40241, 40253, 40277, 40283, + 40289, 40343, 40351, 40357, 40361, 40387, 40423, 40427, 40429, 40433, + 40459, 40471, 40483, 40487, 40493, 40499, 40507, 40519, 40529, 40531, + 40543, 40559, 40577, 40583, 40591, 40597, 40609, 40627, 40637, 40639, + 40693, 40697, 40699, 40709, 40739, 40751, 40759, 40763, 40771, 40787, + 40801, 40813, 40819, 40823, 40829, 40841, 40847, 40849, 40853, 40867, + 40879, 40883, 40897, 40903, 40927, 40933, 40939, 40949, 40961, 40973, + 40993, 41011, 41017, 41023, 41039, 41047, 41051, 41057, 41077, 41081, + 41113, 41117, 41131, 41141, 41143, 41149, 41161, 41177, 41179, 41183, + 41189, 41201, 41203, 41213, 41221, 41227, 41231, 41233, 41243, 41257, + 41263, 41269, 41281, 41299, 41333, 41341, 41351, 41357, 41381, 41387, + 41389, 41399, 41411, 41413, 41443, 41453, 41467, 41479, 41491, 41507, + 41513, 41519, 41521, 41539, 41543, 41549, 41579, 41593, 41597, 41603, + 41609, 41611, 41617, 41621, 41627, 41641, 41647, 41651, 41659, 41669, + 41681, 41687, 41719, 41729, 41737, 41759, 41761, 41771, 41777, 41801, + 41809, 41813, 41843, 41849, 41851, 41863, 41879, 41887, 41893, 41897, + 41903, 41911, 41927, 41941, 41947, 41953, 41957, 41959, 41969, 41981, + 41983, 41999, 42013, 42017, 42019, 42023, 42043, 42061, 42071, 42073, + 42083, 42089, 42101, 42131, 42139, 42157, 42169, 42179, 42181, 42187, + 42193, 42197, 42209, 42221, 42223, 42227, 42239, 42257, 42281, 42283, + 42293, 42299, 42307, 42323, 42331, 42337, 42349, 42359, 42373, 42379, + 42391, 42397, 42403, 42407, 42409, 42433, 42437, 42443, 42451, 42457, + 42461, 42463, 42467, 42473, 42487, 42491, 42499, 42509, 42533, 42557, + 42569, 42571, 42577, 42589, 42611, 42641, 42643, 42649, 42667, 42677, + 42683, 42689, 42697, 42701, 42703, 42709, 42719, 42727, 42737, 42743, + 42751, 42767, 42773, 42787, 42793, 42797, 42821, 42829, 42839, 42841, + 42853, 42859, 42863, 42899, 42901, 42923, 42929, 42937, 42943, 42953, + 42961, 42967, 42979, 42989, 43003, 43013, 43019, 43037, 43049, 43051, + 43063, 43067, 
43093, 43103, 43117, 43133, 43151, 43159, 43177, 43189, + 43201, 43207, 43223, 43237, 43261, 43271, 43283, 43291, 43313, 43319, + 43321, 43331, 43391, 43397, 43399, 43403, 43411, 43427, 43441, 43451, + 43457, 43481, 43487, 43499, 43517, 43541, 43543, 43573, 43577, 43579, + 43591, 43597, 43607, 43609, 43613, 43627, 43633, 43649, 43651, 43661, + 43669, 43691, 43711, 43717, 43721, 43753, 43759, 43777, 43781, 43783, + 43787, 43789, 43793, 43801, 43853, 43867, 43889, 43891, 43913, 43933, + 43943, 43951, 43961, 43963, 43969, 43973, 43987, 43991, 43997, 44017, + 44021, 44027, 44029, 44041, 44053, 44059, 44071, 44087, 44089, 44101, + 44111, 44119, 44123, 44129, 44131, 44159, 44171, 44179, 44189, 44201, + 44203, 44207, 44221, 44249, 44257, 44263, 44267, 44269, 44273, 44279, + 44281, 44293, 44351, 44357, 44371, 44381, 44383, 44389, 44417, 44449, + 44453, 44483, 44491, 44497, 44501, 44507, 44519, 44531, 44533, 44537, + 44543, 44549, 44563, 44579, 44587, 44617, 44621, 44623, 44633, 44641, + 44647, 44651, 44657, 44683, 44687, 44699, 44701, 44711, 44729, 44741, + 44753, 44771, 44773, 44777, 44789, 44797, 44809, 44819, 44839, 44843, + 44851, 44867, 44879, 44887, 44893, 44909, 44917, 44927, 44939, 44953, + 44959, 44963, 44971, 44983, 44987, 45007, 45013, 45053, 45061, 45077, + 45083, 45119, 45121, 45127, 45131, 45137, 45139, 45161, 45179, 45181, + 45191, 45197, 45233, 45247, 45259, 45263, 45281, 45289, 45293, 45307, + 45317, 45319, 45329, 45337, 45341, 45343, 45361, 45377, 45389, 45403, + 45413, 45427, 45433, 45439, 45481, 45491, 45497, 45503, 45523, 45533, + 45541, 45553, 45557, 45569, 45587, 45589, 45599, 45613, 45631, 45641, + 45659, 45667, 45673, 45677, 45691, 45697, 45707, 45737, 45751, 45757, + 45763, 45767, 45779, 45817, 45821, 45823, 45827, 45833, 45841, 45853, + 45863, 45869, 45887, 45893, 45943, 45949, 45953, 45959, 45971, 45979, + 45989, 46021, 46027, 46049, 46051, 46061, 46073, 46091, 46093, 46099, + 46103, 46133, 46141, 46147, 46153, 46171, 46181, 46183, 46187, 46199, + 46219, 46229, 46237, 46261, 46271, 46273, 46279, 46301, 46307, 46309, + 46327, 46337, 46349, 46351, 46381, 46399, 46411, 46439, 46441, 46447, + 46451, 46457, 46471, 46477, 46489, 46499, 46507, 46511, 46523, 46549, + 46559, 46567, 46573, 46589, 46591, 46601, 46619, 46633, 46639, 46643, + 46649, 46663, 46679, 46681, 46687, 46691, 46703, 46723, 46727, 46747, + 46751, 46757, 46769, 46771, 46807, 46811, 46817, 46819, 46829, 46831, + 46853, 46861, 46867, 46877, 46889, 46901, 46919, 46933, 46957, 46993, + 46997, 47017, 47041, 47051, 47057, 47059, 47087, 47093, 47111, 47119, + 47123, 47129, 47137, 47143, 47147, 47149, 47161, 47189, 47207, 47221, + 47237, 47251, 47269, 47279, 47287, 47293, 47297, 47303, 47309, 47317, + 47339, 47351, 47353, 47363, 47381, 47387, 47389, 47407, 47417, 47419, + 47431, 47441, 47459, 47491, 47497, 47501, 47507, 47513, 47521, 47527, + 47533, 47543, 47563, 47569, 47581, 47591, 47599, 47609, 47623, 47629, + 47639, 47653, 47657, 47659, 47681, 47699, 47701, 47711, 47713, 47717, + 47737, 47741, 47743, 47777, 47779, 47791, 47797, 47807, 47809, 47819, + 47837, 47843, 47857, 47869, 47881, 47903, 47911, 47917, 47933, 47939, + 47947, 47951, 47963, 47969, 47977, 47981, 48017, 48023, 48029, 48049, + 48073, 48079, 48091, 48109, 48119, 48121, 48131, 48157, 48163, 48179, + 48187, 48193, 48197, 48221, 48239, 48247, 48259, 48271, 48281, 48299, + 48311, 48313, 48337, 48341, 48353, 48371, 48383, 48397, 48407, 48409, + 48413, 48437, 48449, 48463, 48473, 48479, 48481, 48487, 48491, 48497, + 48523, 48527, 48533, 48539, 48541, 
48563, 48571, 48589, 48593, 48611, + 48619, 48623, 48647, 48649, 48661, 48673, 48677, 48679, 48731, 48733, + 48751, 48757, 48761, 48767, 48779, 48781, 48787, 48799, 48809, 48817, + 48821, 48823, 48847, 48857, 48859, 48869, 48871, 48883, 48889, 48907, + 48947, 48953, 48973, 48989, 48991, 49003, 49009, 49019, 49031, 49033, + 49037, 49043, 49057, 49069, 49081, 49103, 49109, 49117, 49121, 49123, + 49139, 49157, 49169, 49171, 49177, 49193, 49199, 49201, 49207, 49211, + 49223, 49253, 49261, 49277, 49279, 49297, 49307, 49331, 49333, 49339, + 49363, 49367, 49369, 49391, 49393, 49409, 49411, 49417, 49429, 49433, + 49451, 49459, 49463, 49477, 49481, 49499, 49523, 49529, 49531, 49537, + 49547, 49549, 49559, 49597, 49603, 49613, 49627, 49633, 49639, 49663, + 49667, 49669, 49681, 49697, 49711, 49727, 49739, 49741, 49747, 49757, + 49783, 49787, 49789, 49801, 49807, 49811, 49823, 49831, 49843, 49853, + 49871, 49877, 49891, 49919, 49921, 49927, 49937, 49939, 49943, 49957, + 49991, 49993, 49999, 50021, 50023, 50033, 50047, 50051, 50053, 50069, + 50077, 50087, 50093, 50101, 50111, 50119, 50123, 50129, 50131, 50147, + 50153, 50159, 50177, 50207, 50221, 50227, 50231, 50261, 50263, 50273, + 50287, 50291, 50311, 50321, 50329, 50333, 50341, 50359, 50363, 50377, + 50383, 50387, 50411, 50417, 50423, 50441, 50459, 50461, 50497, 50503, + 50513, 50527, 50539, 50543, 50549, 50551, 50581, 50587, 50591, 50593, + 50599, 50627, 50647, 50651, 50671, 50683, 50707, 50723, 50741, 50753, + 50767, 50773, 50777, 50789, 50821, 50833, 50839, 50849, 50857, 50867, + 50873, 50891, 50893, 50909, 50923, 50929, 50951, 50957, 50969, 50971, + 50989, 50993, 51001, 51031, 51043, 51047, 51059, 51061, 51071, 51109, + 51131, 51133, 51137, 51151, 51157, 51169, 51193, 51197, 51199, 51203, + 51217, 51229, 51239, 51241, 51257, 51263, 51283, 51287, 51307, 51329, + 51341, 51343, 51347, 51349, 51361, 51383, 51407, 51413, 51419, 51421, + 51427, 51431, 51437, 51439, 51449, 51461, 51473, 51479, 51481, 51487, + 51503, 51511, 51517, 51521, 51539, 51551, 51563, 51577, 51581, 51593, + 51599, 51607, 51613, 51631, 51637, 51647, 51659, 51673, 51679, 51683, + 51691, 51713, 51719, 51721, 51749, 51767, 51769, 51787, 51797, 51803, + 51817, 51827, 51829, 51839, 51853, 51859, 51869, 51871, 51893, 51899, + 51907, 51913, 51929, 51941, 51949, 51971, 51973, 51977, 51991, 52009, + 52021, 52027, 52051, 52057, 52067, 52069, 52081, 52103, 52121, 52127, + 52147, 52153, 52163, 52177, 52181, 52183, 52189, 52201, 52223, 52237, + 52249, 52253, 52259, 52267, 52289, 52291, 52301, 52313, 52321, 52361, + 52363, 52369, 52379, 52387, 52391, 52433, 52453, 52457, 52489, 52501, + 52511, 52517, 52529, 52541, 52543, 52553, 52561, 52567, 52571, 52579, + 52583, 52609, 52627, 52631, 52639, 52667, 52673, 52691, 52697, 52709, + 52711, 52721, 52727, 52733, 52747, 52757, 52769, 52783, 52807, 52813, + 52817, 52837, 52859, 52861, 52879, 52883, 52889, 52901, 52903, 52919, + 52937, 52951, 52957, 52963, 52967, 52973, 52981, 52999, 53003, 53017, + 53047, 53051, 53069, 53077, 53087, 53089, 53093, 53101, 53113, 53117, + 53129, 53147, 53149, 53161, 53171, 53173, 53189, 53197, 53201, 53231, + 53233, 53239, 53267, 53269, 53279, 53281, 53299, 53309, 53323, 53327, + 53353, 53359, 53377, 53381, 53401, 53407, 53411, 53419, 53437, 53441, + 53453, 53479, 53503, 53507, 53527, 53549, 53551, 53569, 53591, 53593, + 53597, 53609, 53611, 53617, 53623, 53629, 53633, 53639, 53653, 53657, + 53681, 53693, 53699, 53717, 53719, 53731, 53759, 53773, 53777, 53783, + 53791, 53813, 53819, 53831, 53849, 53857, 53861, 53881, 
53887, 53891, + 53897, 53899, 53917, 53923, 53927, 53939, 53951, 53959, 53987, 53993, + 54001, 54011, 54013, 54037, 54049, 54059, 54083, 54091, 54101, 54121, + 54133, 54139, 54151, 54163, 54167, 54181, 54193, 54217, 54251, 54269, + 54277, 54287, 54293, 54311, 54319, 54323, 54331, 54347, 54361, 54367, + 54371, 54377, 54401, 54403, 54409, 54413, 54419, 54421, 54437, 54443, + 54449, 54469, 54493, 54497, 54499, 54503, 54517, 54521, 54539, 54541, + 54547, 54559, 54563, 54577, 54581, 54583, 54601, 54617, 54623, 54629, + 54631, 54647, 54667, 54673, 54679, 54709, 54713, 54721, 54727, 54751, + 54767, 54773, 54779, 54787, 54799, 54829, 54833, 54851, 54869, 54877, + 54881, 54907, 54917, 54919, 54941, 54949, 54959, 54973, 54979, 54983, + 55001, 55009, 55021, 55049, 55051, 55057, 55061, 55073, 55079, 55103, + 55109, 55117, 55127, 55147, 55163, 55171, 55201, 55207, 55213, 55217, + 55219, 55229, 55243, 55249, 55259, 55291, 55313, 55331, 55333, 55337, + 55339, 55343, 55351, 55373, 55381, 55399, 55411, 55439, 55441, 55457, + 55469, 55487, 55501, 55511, 55529, 55541, 55547, 55579, 55589, 55603, + 55609, 55619, 55621, 55631, 55633, 55639, 55661, 55663, 55667, 55673, + 55681, 55691, 55697, 55711, 55717, 55721, 55733, 55763, 55787, 55793, + 55799, 55807, 55813, 55817, 55819, 55823, 55829, 55837, 55843, 55849, + 55871, 55889, 55897, 55901, 55903, 55921, 55927, 55931, 55933, 55949, + 55967, 55987, 55997, 56003, 56009, 56039, 56041, 56053, 56081, 56087, + 56093, 56099, 56101, 56113, 56123, 56131, 56149, 56167, 56171, 56179, + 56197, 56207, 56209, 56237, 56239, 56249, 56263, 56267, 56269, 56299, + 56311, 56333, 56359, 56369, 56377, 56383, 56393, 56401, 56417, 56431, + 56437, 56443, 56453, 56467, 56473, 56477, 56479, 56489, 56501, 56503, + 56509, 56519, 56527, 56531, 56533, 56543, 56569, 56591, 56597, 56599, + 56611, 56629, 56633, 56659, 56663, 56671, 56681, 56687, 56701, 56711, + 56713, 56731, 56737, 56747, 56767, 56773, 56779, 56783, 56807, 56809, + 56813, 56821, 56827, 56843, 56857, 56873, 56891, 56893, 56897, 56909, + 56911, 56921, 56923, 56929, 56941, 56951, 56957, 56963, 56983, 56989, + 56993, 56999, 57037, 57041, 57047, 57059, 57073, 57077, 57089, 57097, + 57107, 57119, 57131, 57139, 57143, 57149, 57163, 57173, 57179, 57191, + 57193, 57203, 57221, 57223, 57241, 57251, 57259, 57269, 57271, 57283, + 57287, 57301, 57329, 57331, 57347, 57349, 57367, 57373, 57383, 57389, + 57397, 57413, 57427, 57457, 57467, 57487, 57493, 57503, 57527, 57529, + 57557, 57559, 57571, 57587, 57593, 57601, 57637, 57641, 57649, 57653, + 57667, 57679, 57689, 57697, 57709, 57713, 57719, 57727, 57731, 57737, + 57751, 57773, 57781, 57787, 57791, 57793, 57803, 57809, 57829, 57839, + 57847, 57853, 57859, 57881, 57899, 57901, 57917, 57923, 57943, 57947, + 57973, 57977, 57991, 58013, 58027, 58031, 58043, 58049, 58057, 58061, + 58067, 58073, 58099, 58109, 58111, 58129, 58147, 58151, 58153, 58169, + 58171, 58189, 58193, 58199, 58207, 58211, 58217, 58229, 58231, 58237, + 58243, 58271, 58309, 58313, 58321, 58337, 58363, 58367, 58369, 58379, + 58391, 58393, 58403, 58411, 58417, 58427, 58439, 58441, 58451, 58453, + 58477, 58481, 58511, 58537, 58543, 58549, 58567, 58573, 58579, 58601, + 58603, 58613, 58631, 58657, 58661, 58679, 58687, 58693, 58699, 58711, + 58727, 58733, 58741, 58757, 58763, 58771, 58787, 58789, 58831, 58889, + 58897, 58901, 58907, 58909, 58913, 58921, 58937, 58943, 58963, 58967, + 58979, 58991, 58997, 59009, 59011, 59021, 59023, 59029, 59051, 59053, + 59063, 59069, 59077, 59083, 59093, 59107, 59113, 59119, 59123, 59141, + 59149, 
59159, 59167, 59183, 59197, 59207, 59209, 59219, 59221, 59233, + 59239, 59243, 59263, 59273, 59281, 59333, 59341, 59351, 59357, 59359, + 59369, 59377, 59387, 59393, 59399, 59407, 59417, 59419, 59441, 59443, + 59447, 59453, 59467, 59471, 59473, 59497, 59509, 59513, 59539, 59557, + 59561, 59567, 59581, 59611, 59617, 59621, 59627, 59629, 59651, 59659, + 59663, 59669, 59671, 59693, 59699, 59707, 59723, 59729, 59743, 59747, + 59753, 59771, 59779, 59791, 59797, 59809, 59833, 59863, 59879, 59887, + 59921, 59929, 59951, 59957, 59971, 59981, 59999, 60013, 60017, 60029, + 60037, 60041, 60077, 60083, 60089, 60091, 60101, 60103, 60107, 60127, + 60133, 60139, 60149, 60161, 60167, 60169, 60209, 60217, 60223, 60251, + 60257, 60259, 60271, 60289, 60293, 60317, 60331, 60337, 60343, 60353, + 60373, 60383, 60397, 60413, 60427, 60443, 60449, 60457, 60493, 60497, + 60509, 60521, 60527, 60539, 60589, 60601, 60607, 60611, 60617, 60623, + 60631, 60637, 60647, 60649, 60659, 60661, 60679, 60689, 60703, 60719, + 60727, 60733, 60737, 60757, 60761, 60763, 60773, 60779, 60793, 60811, + 60821, 60859, 60869, 60887, 60889, 60899, 60901, 60913, 60917, 60919, + 60923, 60937, 60943, 60953, 60961, 61001, 61007, 61027, 61031, 61043, + 61051, 61057, 61091, 61099, 61121, 61129, 61141, 61151, 61153, 61169, + 61211, 61223, 61231, 61253, 61261, 61283, 61291, 61297, 61331, 61333, + 61339, 61343, 61357, 61363, 61379, 61381, 61403, 61409, 61417, 61441, + 61463, 61469, 61471, 61483, 61487, 61493, 61507, 61511, 61519, 61543, + 61547, 61553, 61559, 61561, 61583, 61603, 61609, 61613, 61627, 61631, + 61637, 61643, 61651, 61657, 61667, 61673, 61681, 61687, 61703, 61717, + 61723, 61729, 61751, 61757, 61781, 61813, 61819, 61837, 61843, 61861, + 61871, 61879, 61909, 61927, 61933, 61949, 61961, 61967, 61979, 61981, + 61987, 61991, 62003, 62011, 62017, 62039, 62047, 62053, 62057, 62071, + 62081, 62099, 62119, 62129, 62131, 62137, 62141, 62143, 62171, 62189, + 62191, 62201, 62207, 62213, 62219, 62233, 62273, 62297, 62299, 62303, + 62311, 62323, 62327, 62347, 62351, 62383, 62401, 62417, 62423, 62459, + 62467, 62473, 62477, 62483, 62497, 62501, 62507, 62533, 62539, 62549, + 62563, 62581, 62591, 62597, 62603, 62617, 62627, 62633, 62639, 62653, + 62659, 62683, 62687, 62701, 62723, 62731, 62743, 62753, 62761, 62773, + 62791, 62801, 62819, 62827, 62851, 62861, 62869, 62873, 62897, 62903, + 62921, 62927, 62929, 62939, 62969, 62971, 62981, 62983, 62987, 62989, + 63029, 63031, 63059, 63067, 63073, 63079, 63097, 63103, 63113, 63127, + 63131, 63149, 63179, 63197, 63199, 63211, 63241, 63247, 63277, 63281, + 63299, 63311, 63313, 63317, 63331, 63337, 63347, 63353, 63361, 63367, + 63377, 63389, 63391, 63397, 63409, 63419, 63421, 63439, 63443, 63463, + 63467, 63473, 63487, 63493, 63499, 63521, 63527, 63533, 63541, 63559, + 63577, 63587, 63589, 63599, 63601, 63607, 63611, 63617, 63629, 63647, + 63649, 63659, 63667, 63671, 63689, 63691, 63697, 63703, 63709, 63719, + 63727, 63737, 63743, 63761, 63773, 63781, 63793, 63799, 63803, 63809, + 63823, 63839, 63841, 63853, 63857, 63863, 63901, 63907, 63913, 63929, + 63949, 63977, 63997, 64007, 64013, 64019, 64033, 64037, 64063, 64067, + 64081, 64091, 64109, 64123, 64151, 64153, 64157, 64171, 64187, 64189, + 64217, 64223, 64231, 64237, 64271, 64279, 64283, 64301, 64303, 64319, + 64327, 64333, 64373, 64381, 64399, 64403, 64433, 64439, 64451, 64453, + 64483, 64489, 64499, 64513, 64553, 64567, 64577, 64579, 64591, 64601, + 64609, 64613, 64621, 64627, 64633, 64661, 64663, 64667, 64679, 64693, + 64709, 64717, 64747, 64763, 
64781, 64783, 64793, 64811, 64817, 64849, + 64853, 64871, 64877, 64879, 64891, 64901, 64919, 64921, 64927, 64937, + 64951, 64969, 64997, 65003, 65011, 65027, 65029, 65033, 65053, 65063, + 65071, 65089, 65099, 65101, 65111, 65119, 65123, 65129, 65141, 65147, + 65167, 65171, 65173, 65179, 65183, 65203, 65213, 65239, 65257, 65267, + 65269, 65287, 65293, 65309, 65323, 65327, 65353, 65357, 65371, 65381, + 65393, 65407, 65413, 65419, 65423, 65437, 65447, 65449, 65479, 65497, + 65519, 65521, 65537, 65539, 65543, 65551, 65557, 65563, 65579, 65581, + 65587, 65599, 65609, 65617, 65629, 65633, 65647, 65651, 65657, 65677, + 65687, 65699, 65701, 65707, 65713, 65717, 65719, 65729, 65731, 65761, + 65777, 65789, 65809, 65827, 65831, 65837, 65839, 65843, 65851, 65867, + 65881, 65899, 65921, 65927, 65929, 65951, 65957, 65963, 65981, 65983, + 65993, 66029, 66037, 66041, 66047, 66067, 66071, 66083, 66089, 66103, + 66107, 66109, 66137, 66161, 66169, 66173, 66179, 66191, 66221, 66239, + 66271, 66293, 66301, 66337, 66343, 66347, 66359, 66361, 66373, 66377, + 66383, 66403, 66413, 66431, 66449, 66457, 66463, 66467, 66491, 66499, + 66509, 66523, 66529, 66533, 66541, 66553, 66569, 66571, 66587, 66593, + 66601, 66617, 66629, 66643, 66653, 66683, 66697, 66701, 66713, 66721, + 66733, 66739, 66749, 66751, 66763, 66791, 66797, 66809, 66821, 66841, + 66851, 66853, 66863, 66877, 66883, 66889, 66919, 66923, 66931, 66943, + 66947, 66949, 66959, 66973, 66977, 67003, 67021, 67033, 67043, 67049, + 67057, 67061, 67073, 67079, 67103, 67121, 67129, 67139, 67141, 67153, + 67157, 67169, 67181, 67187, 67189, 67211, 67213, 67217, 67219, 67231, + 67247, 67261, 67271, 67273, 67289, 67307, 67339, 67343, 67349, 67369, + 67391, 67399, 67409, 67411, 67421, 67427, 67429, 67433, 67447, 67453, + 67477, 67481, 67489, 67493, 67499, 67511, 67523, 67531, 67537, 67547, + 67559, 67567, 67577, 67579, 67589, 67601, 67607, 67619, 67631, 67651, + 67679, 67699, 67709, 67723, 67733, 67741, 67751, 67757, 67759, 67763, + 67777, 67783, 67789, 67801, 67807, 67819, 67829, 67843, 67853, 67867, + 67883, 67891, 67901, 67927, 67931, 67933, 67939, 67943, 67957, 67961, + 67967, 67979, 67987, 67993, 68023, 68041, 68053, 68059, 68071, 68087, + 68099, 68111, 68113, 68141, 68147, 68161, 68171, 68207, 68209, 68213, + 68219, 68227, 68239, 68261, 68279, 68281, 68311, 68329, 68351, 68371, + 68389, 68399, 68437, 68443, 68447, 68449, 68473, 68477, 68483, 68489, + 68491, 68501, 68507, 68521, 68531, 68539, 68543, 68567, 68581, 68597, + 68611, 68633, 68639, 68659, 68669, 68683, 68687, 68699, 68711, 68713, + 68729, 68737, 68743, 68749, 68767, 68771, 68777, 68791, 68813, 68819, + 68821, 68863, 68879, 68881, 68891, 68897, 68899, 68903, 68909, 68917, + 68927, 68947, 68963, 68993, 69001, 69011, 69019, 69029, 69031, 69061, + 69067, 69073, 69109, 69119, 69127, 69143, 69149, 69151, 69163, 69191, + 69193, 69197, 69203, 69221, 69233, 69239, 69247, 69257, 69259, 69263, + 69313, 69317, 69337, 69341, 69371, 69379, 69383, 69389, 69401, 69403, + 69427, 69431, 69439, 69457, 69463, 69467, 69473, 69481, 69491, 69493, + 69497, 69499, 69539, 69557, 69593, 69623, 69653, 69661, 69677, 69691, + 69697, 69709, 69737, 69739, 69761, 69763, 69767, 69779, 69809, 69821, + 69827, 69829, 69833, 69847, 69857, 69859, 69877, 69899, 69911, 69929, + 69931, 69941, 69959, 69991, 69997, 70001, 70003, 70009, 70019, 70039, + 70051, 70061, 70067, 70079, 70099, 70111, 70117, 70121, 70123, 70139, + 70141, 70157, 70163, 70177, 70181, 70183, 70199, 70201, 70207, 70223, + 70229, 70237, 70241, 70249, 70271, 70289, 70297, 
70309, 70313, 70321, + 70327, 70351, 70373, 70379, 70381, 70393, 70423, 70429, 70439, 70451, + 70457, 70459, 70481, 70487, 70489, 70501, 70507, 70529, 70537, 70549, + 70571, 70573, 70583, 70589, 70607, 70619, 70621, 70627, 70639, 70657, + 70663, 70667, 70687, 70709, 70717, 70729, 70753, 70769, 70783, 70793, + 70823, 70841, 70843, 70849, 70853, 70867, 70877, 70879, 70891, 70901, + 70913, 70919, 70921, 70937, 70949, 70951, 70957, 70969, 70979, 70981, + 70991, 70997, 70999, 71011, 71023, 71039, 71059, 71069, 71081, 71089, + 71119, 71129, 71143, 71147, 71153, 71161, 71167, 71171, 71191, 71209, + 71233, 71237, 71249, 71257, 71261, 71263, 71287, 71293, 71317, 71327, + 71329, 71333, 71339, 71341, 71347, 71353, 71359, 71363, 71387, 71389, + 71399, 71411, 71413, 71419, 71429, 71437, 71443, 71453, 71471, 71473, + 71479, 71483, 71503, 71527, 71537, 71549, 71551, 71563, 71569, 71593, + 71597, 71633, 71647, 71663, 71671, 71693, 71699, 71707, 71711, 71713, + 71719, 71741, 71761, 71777, 71789, 71807, 71809, 71821, 71837, 71843, + 71849, 71861, 71867, 71879, 71881, 71887, 71899, 71909, 71917, 71933, + 71941, 71947, 71963, 71971, 71983, 71987, 71993, 71999, 72019, 72031, + 72043, 72047, 72053, 72073, 72077, 72089, 72091, 72101, 72103, 72109, + 72139, 72161, 72167, 72169, 72173, 72211, 72221, 72223, 72227, 72229, + 72251, 72253, 72269, 72271, 72277, 72287, 72307, 72313, 72337, 72341, + 72353, 72367, 72379, 72383, 72421, 72431, 72461, 72467, 72469, 72481, + 72493, 72497, 72503, 72533, 72547, 72551, 72559, 72577, 72613, 72617, + 72623, 72643, 72647, 72649, 72661, 72671, 72673, 72679, 72689, 72701, + 72707, 72719, 72727, 72733, 72739, 72763, 72767, 72797, 72817, 72823, + 72859, 72869, 72871, 72883, 72889, 72893, 72901, 72907, 72911, 72923, + 72931, 72937, 72949, 72953, 72959, 72973, 72977, 72997, 73009, 73013, + 73019, 73037, 73039, 73043, 73061, 73063, 73079, 73091, 73121, 73127, + 73133, 73141, 73181, 73189, 73237, 73243, 73259, 73277, 73291, 73303, + 73309, 73327, 73331, 73351, 73361, 73363, 73369, 73379, 73387, 73417, + 73421, 73433, 73453, 73459, 73471, 73477, 73483, 73517, 73523, 73529, + 73547, 73553, 73561, 73571, 73583, 73589, 73597, 73607, 73609, 73613, + 73637, 73643, 73651, 73673, 73679, 73681, 73693, 73699, 73709, 73721, + 73727, 73751, 73757, 73771, 73783, 73819, 73823, 73847, 73849, 73859, + 73867, 73877, 73883, 73897, 73907, 73939, 73943, 73951, 73961, 73973, + 73999, 74017, 74021, 74027, 74047, 74051, 74071, 74077, 74093, 74099, + 74101, 74131, 74143, 74149, 74159, 74161, 74167, 74177, 74189, 74197, + 74201, 74203, 74209, 74219, 74231, 74257, 74279, 74287, 74293, 74297, + 74311, 74317, 74323, 74353, 74357, 74363, 74377, 74381, 74383, 74411, + 74413, 74419, 74441, 74449, 74453, 74471, 74489, 74507, 74509, 74521, + 74527, 74531, 74551, 74561, 74567, 74573, 74587, 74597, 74609, 74611, + 74623, 74653, 74687, 74699, 74707, 74713, 74717, 74719, 74729, 74731, + 74747, 74759, 74761, 74771, 74779, 74797, 74821, 74827, 74831, 74843, + 74857, 74861, 74869, 74873, 74887, 74891, 74897, 74903, 74923, 74929, + 74933, 74941, 74959, 75011, 75013, 75017, 75029, 75037, 75041, 75079, + 75083, 75109, 75133, 75149, 75161, 75167, 75169, 75181, 75193, 75209, + 75211, 75217, 75223, 75227, 75239, 75253, 75269, 75277, 75289, 75307, + 75323, 75329, 75337, 75347, 75353, 75367, 75377, 75389, 75391, 75401, + 75403, 75407, 75431, 75437, 75479, 75503, 75511, 75521, 75527, 75533, + 75539, 75541, 75553, 75557, 75571, 75577, 75583, 75611, 75617, 75619, + 75629, 75641, 75653, 75659, 75679, 75683, 75689, 75703, 75707, 75709, + 
75721, 75731, 75743, 75767, 75773, 75781, 75787, 75793, 75797, 75821, + 75833, 75853, 75869, 75883, 75913, 75931, 75937, 75941, 75967, 75979, + 75983, 75989, 75991, 75997, 76001, 76003, 76031, 76039, 76079, 76081, + 76091, 76099, 76103, 76123, 76129, 76147, 76157, 76159, 76163, 76207, + 76213, 76231, 76243, 76249, 76253, 76259, 76261, 76283, 76289, 76303, + 76333, 76343, 76367, 76369, 76379, 76387, 76403, 76421, 76423, 76441, + 76463, 76471, 76481, 76487, 76493, 76507, 76511, 76519, 76537, 76541, + 76543, 76561, 76579, 76597, 76603, 76607, 76631, 76649, 76651, 76667, + 76673, 76679, 76697, 76717, 76733, 76753, 76757, 76771, 76777, 76781, + 76801, 76819, 76829, 76831, 76837, 76847, 76871, 76873, 76883, 76907, + 76913, 76919, 76943, 76949, 76961, 76963, 76991, 77003, 77017, 77023, + 77029, 77041, 77047, 77069, 77081, 77093, 77101, 77137, 77141, 77153, + 77167, 77171, 77191, 77201, 77213, 77237, 77239, 77243, 77249, 77261, + 77263, 77267, 77269, 77279, 77291, 77317, 77323, 77339, 77347, 77351, + 77359, 77369, 77377, 77383, 77417, 77419, 77431, 77447, 77471, 77477, + 77479, 77489, 77491, 77509, 77513, 77521, 77527, 77543, 77549, 77551, + 77557, 77563, 77569, 77573, 77587, 77591, 77611, 77617, 77621, 77641, + 77647, 77659, 77681, 77687, 77689, 77699, 77711, 77713, 77719, 77723, + 77731, 77743, 77747, 77761, 77773, 77783, 77797, 77801, 77813, 77839, + 77849, 77863, 77867, 77893, 77899, 77929, 77933, 77951, 77969, 77977, + 77983, 77999, 78007, 78017, 78031, 78041, 78049, 78059, 78079, 78101, + 78121, 78137, 78139, 78157, 78163, 78167, 78173, 78179, 78191, 78193, + 78203, 78229, 78233, 78241, 78259, 78277, 78283, 78301, 78307, 78311, + 78317, 78341, 78347, 78367, 78401, 78427, 78437, 78439, 78467, 78479, + 78487, 78497, 78509, 78511, 78517, 78539, 78541, 78553, 78569, 78571, + 78577, 78583, 78593, 78607, 78623, 78643, 78649, 78653, 78691, 78697, + 78707, 78713, 78721, 78737, 78779, 78781, 78787, 78791, 78797, 78803, + 78809, 78823, 78839, 78853, 78857, 78877, 78887, 78889, 78893, 78901, + 78919, 78929, 78941, 78977, 78979, 78989, 79031, 79039, 79043, 79063, + 79087, 79103, 79111, 79133, 79139, 79147, 79151, 79153, 79159, 79181, + 79187, 79193, 79201, 79229, 79231, 79241, 79259, 79273, 79279, 79283, + 79301, 79309, 79319, 79333, 79337, 79349, 79357, 79367, 79379, 79393, + 79397, 79399, 79411, 79423, 79427, 79433, 79451, 79481, 79493, 79531, + 79537, 79549, 79559, 79561, 79579, 79589, 79601, 79609, 79613, 79621, + 79627, 79631, 79633, 79657, 79669, 79687, 79691, 79693, 79697, 79699, + 79757, 79769, 79777, 79801, 79811, 79813, 79817, 79823, 79829, 79841, + 79843, 79847, 79861, 79867, 79873, 79889, 79901, 79903, 79907, 79939, + 79943, 79967, 79973, 79979, 79987, 79997, 79999, 80021, 80039, 80051, + 80071, 80077, 80107, 80111, 80141, 80147, 80149, 80153, 80167, 80173, + 80177, 80191, 80207, 80209, 80221, 80231, 80233, 80239, 80251, 80263, + 80273, 80279, 80287, 80309, 80317, 80329, 80341, 80347, 80363, 80369, + 80387, 80407, 80429, 80447, 80449, 80471, 80473, 80489, 80491, 80513, + 80527, 80537, 80557, 80567, 80599, 80603, 80611, 80621, 80627, 80629, + 80651, 80657, 80669, 80671, 80677, 80681, 80683, 80687, 80701, 80713, + 80737, 80747, 80749, 80761, 80777, 80779, 80783, 80789, 80803, 80809, + 80819, 80831, 80833, 80849, 80863, 80897, 80909, 80911, 80917, 80923, + 80929, 80933, 80953, 80963, 80989, 81001, 81013, 81017, 81019, 81023, + 81031, 81041, 81043, 81047, 81049, 81071, 81077, 81083, 81097, 81101, + 81119, 81131, 81157, 81163, 81173, 81181, 81197, 81199, 81203, 81223, + 81233, 81239, 81281, 
81283, 81293, 81299, 81307, 81331, 81343, 81349, + 81353, 81359, 81371, 81373, 81401, 81409, 81421, 81439, 81457, 81463, + 81509, 81517, 81527, 81533, 81547, 81551, 81553, 81559, 81563, 81569, + 81611, 81619, 81629, 81637, 81647, 81649, 81667, 81671, 81677, 81689, + 81701, 81703, 81707, 81727, 81737, 81749, 81761, 81769, 81773, 81799, + 81817, 81839, 81847, 81853, 81869, 81883, 81899, 81901, 81919, 81929, + 81931, 81937, 81943, 81953, 81967, 81971, 81973, 82003, 82007, 82009, + 82013, 82021, 82031, 82037, 82039, 82051, 82067, 82073, 82129, 82139, + 82141, 82153, 82163, 82171, 82183, 82189, 82193, 82207, 82217, 82219, + 82223, 82231, 82237, 82241, 82261, 82267, 82279, 82301, 82307, 82339, + 82349, 82351, 82361, 82373, 82387, 82393, 82421, 82457, 82463, 82469, + 82471, 82483, 82487, 82493, 82499, 82507, 82529, 82531, 82549, 82559, + 82561, 82567, 82571, 82591, 82601, 82609, 82613, 82619, 82633, 82651, + 82657, 82699, 82721, 82723, 82727, 82729, 82757, 82759, 82763, 82781, + 82787, 82793, 82799, 82811, 82813, 82837, 82847, 82883, 82889, 82891, + 82903, 82913, 82939, 82963, 82981, 82997, 83003, 83009, 83023, 83047, + 83059, 83063, 83071, 83077, 83089, 83093, 83101, 83117, 83137, 83177, + 83203, 83207, 83219, 83221, 83227, 83231, 83233, 83243, 83257, 83267, + 83269, 83273, 83299, 83311, 83339, 83341, 83357, 83383, 83389, 83399, + 83401, 83407, 83417, 83423, 83431, 83437, 83443, 83449, 83459, 83471, + 83477, 83497, 83537, 83557, 83561, 83563, 83579, 83591, 83597, 83609, + 83617, 83621, 83639, 83641, 83653, 83663, 83689, 83701, 83717, 83719, + 83737, 83761, 83773, 83777, 83791, 83813, 83833, 83843, 83857, 83869, + 83873, 83891, 83903, 83911, 83921, 83933, 83939, 83969, 83983, 83987, + 84011, 84017, 84047, 84053, 84059, 84061, 84067, 84089, 84121, 84127, + 84131, 84137, 84143, 84163, 84179, 84181, 84191, 84199, 84211, 84221, + 84223, 84229, 84239, 84247, 84263, 84299, 84307, 84313, 84317, 84319, + 84347, 84349, 84377, 84389, 84391, 84401, 84407, 84421, 84431, 84437, + 84443, 84449, 84457, 84463, 84467, 84481, 84499, 84503, 84509, 84521, + 84523, 84533, 84551, 84559, 84589, 84629, 84631, 84649, 84653, 84659, + 84673, 84691, 84697, 84701, 84713, 84719, 84731, 84737, 84751, 84761, + 84787, 84793, 84809, 84811, 84827, 84857, 84859, 84869, 84871, 84913, + 84919, 84947, 84961, 84967, 84977, 84979, 84991, 85009, 85021, 85027, + 85037, 85049, 85061, 85081, 85087, 85091, 85093, 85103, 85109, 85121, + 85133, 85147, 85159, 85193, 85199, 85201, 85213, 85223, 85229, 85237, + 85243, 85247, 85259, 85297, 85303, 85313, 85331, 85333, 85361, 85363, + 85369, 85381, 85411, 85427, 85429, 85439, 85447, 85451, 85453, 85469, + 85487, 85513, 85517, 85523, 85531, 85549, 85571, 85577, 85597, 85601, + 85607, 85619, 85621, 85627, 85639, 85643, 85661, 85667, 85669, 85691, + 85703, 85711, 85717, 85733, 85751, 85781, 85793, 85817, 85819, 85829, + 85831, 85837, 85843, 85847, 85853, 85889, 85903, 85909, 85931, 85933, + 85991, 85999, 86011, 86017, 86027, 86029, 86069, 86077, 86083, 86111, + 86113, 86117, 86131, 86137, 86143, 86161, 86171, 86179, 86183, 86197, + 86201, 86209, 86239, 86243, 86249, 86257, 86263, 86269, 86287, 86291, + 86293, 86297, 86311, 86323, 86341, 86351, 86353, 86357, 86369, 86371, + 86381, 86389, 86399, 86413, 86423, 86441, 86453, 86461, 86467, 86477, + 86491, 86501, 86509, 86531, 86533, 86539, 86561, 86573, 86579, 86587, + 86599, 86627, 86629, 86677, 86689, 86693, 86711, 86719, 86729, 86743, + 86753, 86767, 86771, 86783, 86813, 86837, 86843, 86851, 86857, 86861, + 86869, 86923, 86927, 86929, 86939, 86951, 
86959, 86969, 86981, 86993, + 87011, 87013, 87037, 87041, 87049, 87071, 87083, 87103, 87107, 87119, + 87121, 87133, 87149, 87151, 87179, 87181, 87187, 87211, 87221, 87223, + 87251, 87253, 87257, 87277, 87281, 87293, 87299, 87313, 87317, 87323, + 87337, 87359, 87383, 87403, 87407, 87421, 87427, 87433, 87443, 87473, + 87481, 87491, 87509, 87511, 87517, 87523, 87539, 87541, 87547, 87553, + 87557, 87559, 87583, 87587, 87589, 87613, 87623, 87629, 87631, 87641, + 87643, 87649, 87671, 87679, 87683, 87691, 87697, 87701, 87719, 87721, + 87739, 87743, 87751, 87767, 87793, 87797, 87803, 87811, 87833, 87853, + 87869, 87877, 87881, 87887, 87911, 87917, 87931, 87943, 87959, 87961, + 87973, 87977, 87991, 88001, 88003, 88007, 88019, 88037, 88069, 88079, + 88093, 88117, 88129, 88169, 88177, 88211, 88223, 88237, 88241, 88259, + 88261, 88289, 88301, 88321, 88327, 88337, 88339, 88379, 88397, 88411, + 88423, 88427, 88463, 88469, 88471, 88493, 88499, 88513, 88523, 88547, + 88589, 88591, 88607, 88609, 88643, 88651, 88657, 88661, 88663, 88667, + 88681, 88721, 88729, 88741, 88747, 88771, 88789, 88793, 88799, 88801, + 88807, 88811, 88813, 88817, 88819, 88843, 88853, 88861, 88867, 88873, + 88883, 88897, 88903, 88919, 88937, 88951, 88969, 88993, 88997, 89003, + 89009, 89017, 89021, 89041, 89051, 89057, 89069, 89071, 89083, 89087, + 89101, 89107, 89113, 89119, 89123, 89137, 89153, 89189, 89203, 89209, + 89213, 89227, 89231, 89237, 89261, 89269, 89273, 89293, 89303, 89317, + 89329, 89363, 89371, 89381, 89387, 89393, 89399, 89413, 89417, 89431, + 89443, 89449, 89459, 89477, 89491, 89501, 89513, 89519, 89521, 89527, + 89533, 89561, 89563, 89567, 89591, 89597, 89599, 89603, 89611, 89627, + 89633, 89653, 89657, 89659, 89669, 89671, 89681, 89689, 89753, 89759, + 89767, 89779, 89783, 89797, 89809, 89819, 89821, 89833, 89839, 89849, + 89867, 89891, 89897, 89899, 89909, 89917, 89923, 89939, 89959, 89963, + 89977, 89983, 89989, 90001, 90007, 90011, 90017, 90019, 90023, 90031, + 90053, 90059, 90067, 90071, 90073, 90089, 90107, 90121, 90127, 90149, + 90163, 90173, 90187, 90191, 90197, 90199, 90203, 90217, 90227, 90239, + 90247, 90263, 90271, 90281, 90289, 90313, 90353, 90359, 90371, 90373, + 90379, 90397, 90401, 90403, 90407, 90437, 90439, 90469, 90473, 90481, + 90499, 90511, 90523, 90527, 90529, 90533, 90547, 90583, 90599, 90617, + 90619, 90631, 90641, 90647, 90659, 90677, 90679, 90697, 90703, 90709, + 90731, 90749, 90787, 90793, 90803, 90821, 90823, 90833, 90841, 90847, + 90863, 90887, 90901, 90907, 90911, 90917, 90931, 90947, 90971, 90977, + 90989, 90997, 91009, 91019, 91033, 91079, 91081, 91097, 91099, 91121, + 91127, 91129, 91139, 91141, 91151, 91153, 91159, 91163, 91183, 91193, + 91199, 91229, 91237, 91243, 91249, 91253, 91283, 91291, 91297, 91303, + 91309, 91331, 91367, 91369, 91373, 91381, 91387, 91393, 91397, 91411, + 91423, 91433, 91453, 91457, 91459, 91463, 91493, 91499, 91513, 91529, + 91541, 91571, 91573, 91577, 91583, 91591, 91621, 91631, 91639, 91673, + 91691, 91703, 91711, 91733, 91753, 91757, 91771, 91781, 91801, 91807, + 91811, 91813, 91823, 91837, 91841, 91867, 91873, 91909, 91921, 91939, + 91943, 91951, 91957, 91961, 91967, 91969, 91997, 92003, 92009, 92033, + 92041, 92051, 92077, 92083, 92107, 92111, 92119, 92143, 92153, 92173, + 92177, 92179, 92189, 92203, 92219, 92221, 92227, 92233, 92237, 92243, + 92251, 92269, 92297, 92311, 92317, 92333, 92347, 92353, 92357, 92363, + 92369, 92377, 92381, 92383, 92387, 92399, 92401, 92413, 92419, 92431, + 92459, 92461, 92467, 92479, 92489, 92503, 92507, 92551, 92557, 
92567, + 92569, 92581, 92593, 92623, 92627, 92639, 92641, 92647, 92657, 92669, + 92671, 92681, 92683, 92693, 92699, 92707, 92717, 92723, 92737, 92753, + 92761, 92767, 92779, 92789, 92791, 92801, 92809, 92821, 92831, 92849, + 92857, 92861, 92863, 92867, 92893, 92899, 92921, 92927, 92941, 92951, + 92957, 92959, 92987, 92993, 93001, 93047, 93053, 93059, 93077, 93083, + 93089, 93097, 93103, 93113, 93131, 93133, 93139, 93151, 93169, 93179, + 93187, 93199, 93229, 93239, 93241, 93251, 93253, 93257, 93263, 93281, + 93283, 93287, 93307, 93319, 93323, 93329, 93337, 93371, 93377, 93383, + 93407, 93419, 93427, 93463, 93479, 93481, 93487, 93491, 93493, 93497, + 93503, 93523, 93529, 93553, 93557, 93559, 93563, 93581, 93601, 93607, + 93629, 93637, 93683, 93701, 93703, 93719, 93739, 93761, 93763, 93787, + 93809, 93811, 93827, 93851, 93871, 93887, 93889, 93893, 93901, 93911, + 93913, 93923, 93937, 93941, 93949, 93967, 93971, 93979, 93983, 93997, + 94007, 94009, 94033, 94049, 94057, 94063, 94079, 94099, 94109, 94111, + 94117, 94121, 94151, 94153, 94169, 94201, 94207, 94219, 94229, 94253, + 94261, 94273, 94291, 94307, 94309, 94321, 94327, 94331, 94343, 94349, + 94351, 94379, 94397, 94399, 94421, 94427, 94433, 94439, 94441, 94447, + 94463, 94477, 94483, 94513, 94529, 94531, 94541, 94543, 94547, 94559, + 94561, 94573, 94583, 94597, 94603, 94613, 94621, 94649, 94651, 94687, + 94693, 94709, 94723, 94727, 94747, 94771, 94777, 94781, 94789, 94793, + 94811, 94819, 94823, 94837, 94841, 94847, 94849, 94873, 94889, 94903, + 94907, 94933, 94949, 94951, 94961, 94993, 94999, 95003, 95009, 95021, + 95027, 95063, 95071, 95083, 95087, 95089, 95093, 95101, 95107, 95111, + 95131, 95143, 95153, 95177, 95189, 95191, 95203, 95213, 95219, 95231, + 95233, 95239, 95257, 95261, 95267, 95273, 95279, 95287, 95311, 95317, + 95327, 95339, 95369, 95383, 95393, 95401, 95413, 95419, 95429, 95441, + 95443, 95461, 95467, 95471, 95479, 95483, 95507, 95527, 95531, 95539, + 95549, 95561, 95569, 95581, 95597, 95603, 95617, 95621, 95629, 95633, + 95651, 95701, 95707, 95713, 95717, 95723, 95731, 95737, 95747, 95773, + 95783, 95789, 95791, 95801, 95803, 95813, 95819, 95857, 95869, 95873, + 95881, 95891, 95911, 95917, 95923, 95929, 95947, 95957, 95959, 95971, + 95987, 95989, 96001, 96013, 96017, 96043, 96053, 96059, 96079, 96097, + 96137, 96149, 96157, 96167, 96179, 96181, 96199, 96211, 96221, 96223, + 96233, 96259, 96263, 96269, 96281, 96289, 96293, 96323, 96329, 96331, + 96337, 96353, 96377, 96401, 96419, 96431, 96443, 96451, 96457, 96461, + 96469, 96479, 96487, 96493, 96497, 96517, 96527, 96553, 96557, 96581, + 96587, 96589, 96601, 96643, 96661, 96667, 96671, 96697, 96703, 96731, + 96737, 96739, 96749, 96757, 96763, 96769, 96779, 96787, 96797, 96799, + 96821, 96823, 96827, 96847, 96851, 96857, 96893, 96907, 96911, 96931, + 96953, 96959, 96973, 96979, 96989, 96997, 97001, 97003, 97007, 97021, + 97039, 97073, 97081, 97103, 97117, 97127, 97151, 97157, 97159, 97169, + 97171, 97177, 97187, 97213, 97231, 97241, 97259, 97283, 97301, 97303, + 97327, 97367, 97369, 97373, 97379, 97381, 97387, 97397, 97423, 97429, + 97441, 97453, 97459, 97463, 97499, 97501, 97511, 97523, 97547, 97549, + 97553, 97561, 97571, 97577, 97579, 97583, 97607, 97609, 97613, 97649, + 97651, 97673, 97687, 97711, 97729, 97771, 97777, 97787, 97789, 97813, + 97829, 97841, 97843, 97847, 97849, 97859, 97861, 97871, 97879, 97883, + 97919, 97927, 97931, 97943, 97961, 97967, 97973, 97987, 98009, 98011, + 98017, 98041, 98047, 98057, 98081, 98101, 98123, 98129, 98143, 98179, + 98207, 98213, 
98221, 98227, 98251, 98257, 98269, 98297, 98299, 98317, + 98321, 98323, 98327, 98347, 98369, 98377, 98387, 98389, 98407, 98411, + 98419, 98429, 98443, 98453, 98459, 98467, 98473, 98479, 98491, 98507, + 98519, 98533, 98543, 98561, 98563, 98573, 98597, 98621, 98627, 98639, + 98641, 98663, 98669, 98689, 98711, 98713, 98717, 98729, 98731, 98737, + 98773, 98779, 98801, 98807, 98809, 98837, 98849, 98867, 98869, 98873, + 98887, 98893, 98897, 98899, 98909, 98911, 98927, 98929, 98939, 98947, + 98953, 98963, 98981, 98993, 98999, 99013, 99017, 99023, 99041, 99053, + 99079, 99083, 99089, 99103, 99109, 99119, 99131, 99133, 99137, 99139, + 99149, 99173, 99181, 99191, 99223, 99233, 99241, 99251, 99257, 99259, + 99277, 99289, 99317, 99347, 99349, 99367, 99371, 99377, 99391, 99397, + 99401, 99409, 99431, 99439, 99469, 99487, 99497, 99523, 99527, 99529, + 99551, 99559, 99563, 99571, 99577, 99581, 99607, 99611, 99623, 99643, + 99661, 99667, 99679, 99689, 99707, 99709, 99713, 99719, 99721, 99733, + 99761, 99767, 99787, 99793, 99809, 99817, 99823, 99829, 99833, 99839, + 99859, 99871, 99877, 99881, 99901, 99907, 99923, 99929, 99961, 99971, + 99989, 99991, 100003, 100019, 100043, 100049, 100057, 100069, 100103, 100109, +100129, 100151, 100153, 100169, 100183, 100189, 100193, 100207, 100213, 100237, +100267, 100271, 100279, 100291, 100297, 100313, 100333, 100343, 100357, 100361, +100363, 100379, 100391, 100393, 100403, 100411, 100417, 100447, 100459, 100469, +100483, 100493, 100501, 100511, 100517, 100519, 100523, 100537, 100547, 100549, +100559, 100591, 100609, 100613, 100621, 100649, 100669, 100673, 100693, 100699, +100703, 100733, 100741, 100747, 100769, 100787, 100799, 100801, 100811, 100823, +100829, 100847, 100853, 100907, 100913, 100927, 100931, 100937, 100943, 100957, +100981, 100987, 100999, 101009, 101021, 101027, 101051, 101063, 101081, 101089, +101107, 101111, 101113, 101117, 101119, 101141, 101149, 101159, 101161, 101173, +101183, 101197, 101203, 101207, 101209, 101221, 101267, 101273, 101279, 101281, +101287, 101293, 101323, 101333, 101341, 101347, 101359, 101363, 101377, 101383, +101399, 101411, 101419, 101429, 101449, 101467, 101477, 101483, 101489, 101501, +101503, 101513, 101527, 101531, 101533, 101537, 101561, 101573, 101581, 101599, +101603, 101611, 101627, 101641, 101653, 101663, 101681, 101693, 101701, 101719, +101723, 101737, 101741, 101747, 101749, 101771, 101789, 101797, 101807, 101833, +101837, 101839, 101863, 101869, 101873, 101879, 101891, 101917, 101921, 101929, +101939, 101957, 101963, 101977, 101987, 101999, 102001, 102013, 102019, 102023, +102031, 102043, 102059, 102061, 102071, 102077, 102079, 102101, 102103, 102107, +102121, 102139, 102149, 102161, 102181, 102191, 102197, 102199, 102203, 102217, +102229, 102233, 102241, 102251, 102253, 102259, 102293, 102299, 102301, 102317, +102329, 102337, 102359, 102367, 102397, 102407, 102409, 102433, 102437, 102451, +102461, 102481, 102497, 102499, 102503, 102523, 102533, 102539, 102547, 102551, +102559, 102563, 102587, 102593, 102607, 102611, 102643, 102647, 102653, 102667, +102673, 102677, 102679, 102701, 102761, 102763, 102769, 102793, 102797, 102811, +102829, 102841, 102859, 102871, 102877, 102881, 102911, 102913, 102929, 102931, +102953, 102967, 102983, 103001, 103007, 103043, 103049, 103067, 103069, 103079, +103087, 103091, 103093, 103099, 103123, 103141, 103171, 103177, 103183, 103217, +103231, 103237, 103289, 103291, 103307, 103319, 103333, 103349, 103357, 103387, +103391, 103393, 103399, 103409, 103421, 103423, 103451, 103457, 
103471, 103483,
+103511, 103529, 103549, 103553, 103561, 103567, 103573, 103577, 103583, 103591,
+103613, 103619, 103643, 103651, 103657, 103669, 103681, 103687, 103699, 103703,
+103723, 103769, 103787, 103801, 103811, 103813, 103837, 103841, 103843, 103867,
+103889, 103903, 103913, 103919, 103951, 103963, 103967, 103969, 103979, 103981,
+103991, 103993, 103997, 104003, 104009, 104021, 104033, 104047, 104053, 104059,
+104087, 104089, 104107, 104113, 104119, 104123, 104147, 104149, 104161, 104173,
+104179, 104183, 104207, 104231, 104233, 104239, 104243, 104281, 104287, 104297,
+104309, 104311, 104323, 104327, 104347, 104369, 104381, 104383, 104393, 104399,
+104417, 104459, 104471, 104473, 104479, 104491, 104513, 104527, 104537, 104543,
+104549, 104551, 104561, 104579, 104593, 104597, 104623, 104639, 104651, 104659,
+104677, 104681, 104683, 104693, 104701, 104707, 104711, 104717, 104723, 104729,
+)
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/number.pyi b/python/lib/python3.11/site-packages/Cryptodome/Util/number.pyi
new file mode 100644
index 0000000..f8680bf
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/Util/number.pyi
@@ -0,0 +1,19 @@
+from typing import List, Optional, Callable
+
+
+def ceil_div(n: int, d: int) -> int: ...
+def size(N: int) -> int: ...
+def getRandomInteger(N: int, randfunc: Optional[Callable]=None) -> int: ...
+def getRandomRange(a: int, b: int, randfunc: Optional[Callable]=None) -> int: ...
+def getRandomNBitInteger(N: int, randfunc: Optional[Callable]=None) -> int: ...
+def GCD(x: int, y: int) -> int: ...
+def inverse(u: int, v: int) -> int: ...
+def getPrime(N: int, randfunc: Optional[Callable]=None) -> int: ...
+def getStrongPrime(N: int, e: Optional[int]=0, false_positive_prob: Optional[float]=1e-6, randfunc: Optional[Callable]=None) -> int: ...
+def isPrime(N: int, false_positive_prob: Optional[float]=1e-6, randfunc: Optional[Callable]=None) -> bool: ...
+def long_to_bytes(n: int, blocksize: Optional[int]=0) -> bytes: ...
+def bytes_to_long(s: bytes) -> int: ...
+def long2str(n: int, blocksize: Optional[int]=0) -> bytes: ...
+def str2long(s: bytes) -> int: ...
+
+sieve_base: List[int]
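The ``number.pyi`` stub above types PyCryptodome's classic integer helpers. A minimal sketch of how they compose (the values are illustrative, not taken from this diff):

.. code-block:: python

    from Cryptodome.Util.number import bytes_to_long, long_to_bytes, getPrime, inverse

    n = bytes_to_long(b"secret")          # big-endian bytes -> int
    assert long_to_bytes(n) == b"secret"  # ... and back again

    p = getPrime(256)                     # random 256-bit prime
    x = inverse(3, p)                     # modular inverse: (3 * x) % p == 1
    assert (3 * x) % p == 1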
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/py3compat.py b/python/lib/python3.11/site-packages/Cryptodome/Util/py3compat.py
new file mode 100644
index 0000000..9a982e9
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/Util/py3compat.py
@@ -0,0 +1,174 @@
+# -*- coding: utf-8 -*-
+#
+# Util/py3compat.py : Compatibility code for handling Py3k / Python 2.x
+#
+# Written in 2010 by Thorsten Behrens
+#
+# ===================================================================
+# The contents of this file are dedicated to the public domain. To
+# the extent that dedication to the public domain is not available,
+# everyone is granted a worldwide, perpetual, royalty-free,
+# non-exclusive license to exercise all rights associated with the
+# contents of this file for any purpose whatsoever.
+# No rights are reserved.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+# ===================================================================
+
+"""Compatibility code for handling string/bytes changes from Python 2.x to Py3k
+
+In Python 2.x, strings (of type ''str'') contain binary data, including encoded
+Unicode text (e.g. UTF-8). The separate type ''unicode'' holds Unicode text.
+Unicode literals are specified via the u'...' prefix. Indexing or slicing
+either type always produces a string of the same type as the original.
+Data read from a file is always of ''str'' type.
+
+In Python 3.x, strings (type ''str'') may only contain Unicode text. The u'...'
+prefix and the ''unicode'' type are now redundant. A new type (called
+''bytes'') has to be used for binary data (including any particular
+''encoding'' of a string). The b'...' prefix allows one to specify a binary
+literal. Indexing or slicing a string produces another string. Slicing a byte
+string produces another byte string, but the indexing operation produces an
+integer. Data read from a file is of ''str'' type if the file was opened in
+text mode, or of ''bytes'' type otherwise.
+
+Since PyCryptodome aims to support both Python 2.x and 3.x, the following helper
+functions are used to keep the rest of the library as independent as possible
+from the actual Python version.
+
+In general, the code should always deal with binary strings, and use integers
+instead of 1-byte character strings.
+
+b(s)
+    Take a text string literal (with no prefix or with u'...' prefix) and
+    make a byte string.
+bchr(c)
+    Take an integer and make a 1-character byte string.
+bord(c)
+    Take the result of indexing on a byte string and make an integer.
+tobytes(s)
+    Take a text string, a byte string, or a sequence of characters taken from
+    a byte string and make a byte string.
+"""
+
+import sys
+import abc
+
+
+if sys.version_info[0] == 2:
+    def b(s):
+        return s
+    def bchr(s):
+        return chr(s)
+    def bstr(s):
+        return str(s)
+    def bord(s):
+        return ord(s)
+    def tobytes(s, encoding="latin-1"):
+        if isinstance(s, unicode):
+            return s.encode(encoding)
+        elif isinstance(s, str):
+            return s
+        elif isinstance(s, bytearray):
+            return bytes(s)
+        elif isinstance(s, memoryview):
+            return s.tobytes()
+        else:
+            return ''.join(s)
+    def tostr(bs):
+        return bs
+    def byte_string(s):
+        return isinstance(s, str)
+
+    from StringIO import StringIO
+    BytesIO = StringIO
+
+    from sys import maxint
+
+    iter_range = xrange
+
+    def is_native_int(x):
+        return isinstance(x, (int, long))
+
+    def is_string(x):
+        return isinstance(x, basestring)
+
+    def is_bytes(x):
+        return isinstance(x, str) or \
+               isinstance(x, bytearray) or \
+               isinstance(x, memoryview)
+
+    ABC = abc.ABCMeta('ABC', (object,), {'__slots__': ()})
+
+    FileNotFoundError = IOError
+
+else:
+    def b(s):
+        return s.encode("latin-1")  # utf-8 would cause some side-effects we don't want
+    def bchr(s):
+        return bytes([s])
+    def bstr(s):
+        if isinstance(s, str):
+            return bytes(s, "latin-1")
+        else:
+            return bytes(s)
+    def bord(s):
+        return s
+    def tobytes(s, encoding="latin-1"):
+        if isinstance(s, bytes):
+            return s
+        elif isinstance(s, bytearray):
+            return bytes(s)
+        elif isinstance(s, str):
+            return s.encode(encoding)
+        elif isinstance(s, memoryview):
+            return s.tobytes()
+        else:
+            return bytes([s])
+    def tostr(bs):
+        return bs.decode("latin-1")
+    def byte_string(s):
+        return isinstance(s, bytes)
+
+    from io import BytesIO
+    from io import StringIO
+    from sys import maxsize as maxint
+
+    iter_range = range
+
+    def is_native_int(x):
+        return isinstance(x, int)
+
+    def is_string(x):
+        return isinstance(x, str)
+
+    def is_bytes(x):
+        return isinstance(x, bytes) or \
+               isinstance(x, bytearray) or \
+               isinstance(x, memoryview)
+
+    from abc import ABC
+
+    FileNotFoundError = FileNotFoundError
+
+
+def _copy_bytes(start, end, seq):
+    """Return an immutable copy of a sequence (byte string, byte array, memoryview)
+    in a certain interval [start:end]"""
+
+    if isinstance(seq, memoryview):
+        return seq[start:end].tobytes()
+    elif isinstance(seq, bytearray):
+        return bytes(seq[start:end])
+    else:
+        return seq[start:end]
+
+del sys
+del abc
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/py3compat.pyi b/python/lib/python3.11/site-packages/Cryptodome/Util/py3compat.pyi
new file mode 100644
index 0000000..74e04a2
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/Util/py3compat.pyi
@@ -0,0 +1,33 @@
+from typing import Union, Any, Optional, IO
+
+Buffer = Union[bytes, bytearray, memoryview]
+
+import sys
+
+def b(s: str) -> bytes: ...
+def bchr(s: int) -> bytes: ...
+def bord(s: bytes) -> int: ...
+def tobytes(s: Union[bytes, str]) -> bytes: ...
+def tostr(b: bytes) -> str: ...
+def bytestring(x: Any) -> bool: ...
+
+def is_native_int(s: Any) -> bool: ...
+def is_string(x: Any) -> bool: ...
+def is_bytes(x: Any) -> bool: ...
+
+def BytesIO(b: bytes) -> IO[bytes]: ...
+def StringIO(s: str) -> IO[str]: ...
+
+if sys.version_info[0] == 2:
+    from sys import maxint
+    iter_range = xrange
+
+else:
+    from sys import maxsize as maxint
+    iter_range = range
+
+class FileNotFoundError:
+    def __init__(self, err: int, msg: str, filename: str) -> None:
+        pass
+
+def _copy_bytes(start: Optional[int], end: Optional[int], seq: Buffer) -> bytes: ...
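The ``py3compat`` shims above give the rest of the library a single bytes-oriented vocabulary across interpreter versions. A minimal sketch of the Python 3 behaviour, using only the helpers defined above:

.. code-block:: python

    from Cryptodome.Util.py3compat import b, bchr, bord, tobytes, tostr

    assert b("abc") == b"abc"        # text literal -> latin-1 encoded bytes
    assert bchr(65) == b"A"          # int -> 1-byte string
    assert bord(b"A"[0]) == 65       # indexing bytes already yields an int on Py3
    assert tobytes("abc") == b"abc"  # str/bytes/bytearray/memoryview -> bytes
    assert tostr(b"abc") == "abc"    # bytes -> str (latin-1)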
diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/strxor.py b/python/lib/python3.11/site-packages/Cryptodome/Util/strxor.py
new file mode 100644
index 0000000..6b16155
--- /dev/null
+++ b/python/lib/python3.11/site-packages/Cryptodome/Util/strxor.py
@@ -0,0 +1,146 @@
+# ===================================================================
+#
+# Copyright (c) 2014, Legrandin <helderijs@gmail.com>
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in
+#    the documentation and/or other materials provided with the
+#    distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+# ===================================================================
+
+from Cryptodome.Util._raw_api import (load_pycryptodome_raw_lib, c_size_t,
+                                      create_string_buffer, get_raw_buffer,
+                                      c_uint8_ptr, is_writeable_buffer)
+
+_raw_strxor = load_pycryptodome_raw_lib(
+                    "Cryptodome.Util._strxor",
+                    """
+                    void strxor(const uint8_t *in1,
+                                const uint8_t *in2,
+                                uint8_t *out, size_t len);
+                    void strxor_c(const uint8_t *in,
+                                  uint8_t c,
+                                  uint8_t *out,
+                                  size_t len);
+                    """)
+
+
+def strxor(term1, term2, output=None):
+    """From two byte strings of equal length,
+    create a third one which is the byte-by-byte XOR of the two.
+
+    Args:
+      term1 (bytes/bytearray/memoryview):
+        The first byte string to XOR.
+      term2 (bytes/bytearray/memoryview):
+        The second byte string to XOR.
+      output (bytearray/memoryview):
+        The location where the result will be written to.
+        It must have the same length as ``term1`` and ``term2``.
+        If ``None``, the result is returned.
+
+    Return:
+        If ``output`` is ``None``, a new byte string with the result.
+        Otherwise ``None``.
+
+    .. note::
+        ``term1`` and ``term2`` must have the same length.
+    """
+
+    if len(term1) != len(term2):
+        raise ValueError("Only byte strings of equal length can be xored")
+
+    if output is None:
+        result = create_string_buffer(len(term1))
+    else:
+        # Note: output may overlap with either input
+        result = output
+
+        if not is_writeable_buffer(output):
+            raise TypeError("output must be a bytearray or a writeable memoryview")
+
+        if len(term1) != len(output):
+            raise ValueError("output must have the same length as the input"
+                             " (%d bytes)" % len(term1))
+
+    _raw_strxor.strxor(c_uint8_ptr(term1),
+                       c_uint8_ptr(term2),
+                       c_uint8_ptr(result),
+                       c_size_t(len(term1)))
+
+    if output is None:
+        return get_raw_buffer(result)
+    else:
+        return None
+
+
+def strxor_c(term, c, output=None):
+    """From a byte string, create a second one of equal length
+    where each byte is XOR-ed with the same value.
+
+    Args:
+      term (bytes/bytearray/memoryview):
+        The byte string to XOR.
+      c (int):
+        Every byte in the string will be XOR-ed with this value.
+        It must be between 0 and 255 (inclusive).
+      output (None or bytearray/memoryview):
+        The location where the result will be written to.
+        It must have the same length as ``term``.
+        If ``None``, the result is returned.
+
+    Return:
+        If ``output`` is ``None``, a new ``bytes`` string with the result.
+        Otherwise ``None``.
+ """ + + if not 0 <= c < 256: + raise ValueError("c must be in range(256)") + + if output is None: + result = create_string_buffer(len(term)) + else: + # Note: output may overlap with either input + result = output + + if not is_writeable_buffer(output): + raise TypeError("output must be a bytearray or a writeable memoryview") + + if len(term) != len(output): + raise ValueError("output must have the same length as the input" + " (%d bytes)" % len(term)) + + _raw_strxor.strxor_c(c_uint8_ptr(term), + c, + c_uint8_ptr(result), + c_size_t(len(term)) + ) + + if output is None: + return get_raw_buffer(result) + else: + return None + + +def _strxor_direct(term1, term2, result): + """Very fast XOR - check conditions!""" + _raw_strxor.strxor(term1, term2, result, c_size_t(len(term1))) diff --git a/python/lib/python3.11/site-packages/Cryptodome/Util/strxor.pyi b/python/lib/python3.11/site-packages/Cryptodome/Util/strxor.pyi new file mode 100644 index 0000000..ca896f3 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/Util/strxor.pyi @@ -0,0 +1,6 @@ +from typing import Union, Optional + +Buffer = Union[bytes, bytearray, memoryview] + +def strxor(term1: bytes, term2: bytes, output: Optional[Buffer]=...) -> bytes: ... +def strxor_c(term: bytes, c: int, output: Optional[Buffer]=...) -> bytes: ... diff --git a/python/lib/python3.11/site-packages/Cryptodome/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/__init__.py new file mode 100644 index 0000000..bfd02cd --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/__init__.py @@ -0,0 +1,6 @@ +__all__ = ['Cipher', 'Hash', 'Protocol', 'PublicKey', 'Util', 'Signature', + 'IO', 'Math'] + +version_info = (3, 18, '0') + +__version__ = ".".join([str(x) for x in version_info]) diff --git a/python/lib/python3.11/site-packages/Cryptodome/__init__.pyi b/python/lib/python3.11/site-packages/Cryptodome/__init__.pyi new file mode 100644 index 0000000..bc73446 --- /dev/null +++ b/python/lib/python3.11/site-packages/Cryptodome/__init__.pyi @@ -0,0 +1,4 @@ +from typing import Tuple, Union + +version_info : Tuple[int, int, Union[int, str]] +__version__ : str diff --git a/lib/site-packages/pip/_vendor/chardet/metadata/__init__.py b/python/lib/python3.11/site-packages/Cryptodome/py.typed similarity index 100% rename from lib/site-packages/pip/_vendor/chardet/metadata/__init__.py rename to python/lib/python3.11/site-packages/Cryptodome/py.typed diff --git a/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/INSTALLER b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/INSTALLER new file mode 100644 index 0000000..a1b589e --- /dev/null +++ b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/LICENSE.rst b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/LICENSE.rst new file mode 100644 index 0000000..9d227a0 --- /dev/null +++ b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/LICENSE.rst @@ -0,0 +1,28 @@ +Copyright 2010 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. 
Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/METADATA b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/METADATA new file mode 100644 index 0000000..2526be4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/METADATA @@ -0,0 +1,123 @@ +Metadata-Version: 2.1 +Name: Flask +Version: 2.2.5 +Summary: A simple framework for building complex web applications. +Home-page: https://palletsprojects.com/p/flask +Author: Armin Ronacher +Author-email: armin.ronacher@active-4.com +Maintainer: Pallets +Maintainer-email: contact@palletsprojects.com +License: BSD-3-Clause +Project-URL: Donate, https://palletsprojects.com/donate +Project-URL: Documentation, https://flask.palletsprojects.com/ +Project-URL: Changes, https://flask.palletsprojects.com/changes/ +Project-URL: Source Code, https://github.com/pallets/flask/ +Project-URL: Issue Tracker, https://github.com/pallets/flask/issues/ +Project-URL: Twitter, https://twitter.com/PalletsTeam +Project-URL: Chat, https://discord.gg/pallets +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Web Environment +Classifier: Framework :: Flask +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content +Classifier: Topic :: Internet :: WWW/HTTP :: WSGI +Classifier: Topic :: Internet :: WWW/HTTP :: WSGI :: Application +Classifier: Topic :: Software Development :: Libraries :: Application Frameworks +Requires-Python: >=3.7 +Description-Content-Type: text/x-rst +License-File: LICENSE.rst +Requires-Dist: Werkzeug (>=2.2.2) +Requires-Dist: Jinja2 (>=3.0) +Requires-Dist: itsdangerous (>=2.0) +Requires-Dist: click (>=8.0) +Requires-Dist: importlib-metadata (>=3.6.0) ; python_version < "3.10" +Provides-Extra: async +Requires-Dist: asgiref (>=3.2) ; extra == 'async' +Provides-Extra: dotenv +Requires-Dist: python-dotenv ; extra == 'dotenv' + +Flask +===== + +Flask is a lightweight `WSGI`_ web application framework. It is designed +to make getting started quick and easy, with the ability to scale up to +complex applications. 
It began as a simple wrapper around `Werkzeug`_ +and `Jinja`_ and has become one of the most popular Python web +application frameworks. + +Flask offers suggestions, but doesn't enforce any dependencies or +project layout. It is up to the developer to choose the tools and +libraries they want to use. There are many extensions provided by the +community that make adding new functionality easy. + +.. _WSGI: https://wsgi.readthedocs.io/ +.. _Werkzeug: https://werkzeug.palletsprojects.com/ +.. _Jinja: https://jinja.palletsprojects.com/ + + +Installing +---------- + +Install and update using `pip`_: + +.. code-block:: text + + $ pip install -U Flask + +.. _pip: https://pip.pypa.io/en/stable/getting-started/ + + +A Simple Example +---------------- + +.. code-block:: python + + # save this as app.py + from flask import Flask + + app = Flask(__name__) + + @app.route("/") + def hello(): + return "Hello, World!" + +.. code-block:: text + + $ flask run + * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) + + +Contributing +------------ + +For guidance on setting up a development environment and how to make a +contribution to Flask, see the `contributing guidelines`_. + +.. _contributing guidelines: https://github.com/pallets/flask/blob/main/CONTRIBUTING.rst + + +Donate +------ + +The Pallets organization develops and supports Flask and the libraries +it uses. In order to grow the community of contributors and users, and +allow the maintainers to devote more time to the projects, `please +donate today`_. + +.. _please donate today: https://palletsprojects.com/donate + + +Links +----- + +- Documentation: https://flask.palletsprojects.com/ +- Changes: https://flask.palletsprojects.com/changes/ +- PyPI Releases: https://pypi.org/project/Flask/ +- Source Code: https://github.com/pallets/flask/ +- Issue Tracker: https://github.com/pallets/flask/issues/ +- Website: https://palletsprojects.com/p/flask/ +- Twitter: https://twitter.com/PalletsTeam +- Chat: https://discord.gg/pallets diff --git a/lib/python3.11/site-packages/Flask-2.2.5.dist-info/RECORD b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/RECORD similarity index 100% rename from lib/python3.11/site-packages/Flask-2.2.5.dist-info/RECORD rename to python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/RECORD diff --git a/lib/site-packages/pip/_vendor/resolvelib/compat/__init__.py b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/REQUESTED similarity index 100% rename from lib/site-packages/pip/_vendor/resolvelib/compat/__init__.py rename to python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/REQUESTED diff --git a/lib/site-packages/pip-23.2.1.dist-info/WHEEL b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/WHEEL similarity index 100% rename from lib/site-packages/pip-23.2.1.dist-info/WHEEL rename to python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/WHEEL diff --git a/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/entry_points.txt b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/entry_points.txt new file mode 100644 index 0000000..137232d --- /dev/null +++ b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/entry_points.txt @@ -0,0 +1,2 @@ +[console_scripts] +flask = flask.cli:main diff --git a/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/top_level.txt b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/top_level.txt new file mode 100644 index 0000000..7e10602 --- /dev/null +++ 
b/python/lib/python3.11/site-packages/Flask-2.2.5.dist-info/top_level.txt @@ -0,0 +1 @@ +flask diff --git a/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/INSTALLER b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/INSTALLER new file mode 100644 index 0000000..a1b589e --- /dev/null +++ b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/LICENSE.rst b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/LICENSE.rst new file mode 100644 index 0000000..c37cae4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/LICENSE.rst @@ -0,0 +1,28 @@ +Copyright 2007 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/METADATA b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/METADATA new file mode 100644 index 0000000..f54bb5c --- /dev/null +++ b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/METADATA @@ -0,0 +1,113 @@ +Metadata-Version: 2.1 +Name: Jinja2 +Version: 3.1.2 +Summary: A very fast and expressive template engine. 
+Home-page: https://palletsprojects.com/p/jinja/ +Author: Armin Ronacher +Author-email: armin.ronacher@active-4.com +Maintainer: Pallets +Maintainer-email: contact@palletsprojects.com +License: BSD-3-Clause +Project-URL: Donate, https://palletsprojects.com/donate +Project-URL: Documentation, https://jinja.palletsprojects.com/ +Project-URL: Changes, https://jinja.palletsprojects.com/changes/ +Project-URL: Source Code, https://github.com/pallets/jinja/ +Project-URL: Issue Tracker, https://github.com/pallets/jinja/issues/ +Project-URL: Twitter, https://twitter.com/PalletsTeam +Project-URL: Chat, https://discord.gg/pallets +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Web Environment +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content +Classifier: Topic :: Text Processing :: Markup :: HTML +Requires-Python: >=3.7 +Description-Content-Type: text/x-rst +License-File: LICENSE.rst +Requires-Dist: MarkupSafe (>=2.0) +Provides-Extra: i18n +Requires-Dist: Babel (>=2.7) ; extra == 'i18n' + +Jinja +===== + +Jinja is a fast, expressive, extensible templating engine. Special +placeholders in the template allow writing code similar to Python +syntax. Then the template is passed data to render the final document. + +It includes: + +- Template inheritance and inclusion. +- Define and import macros within templates. +- HTML templates can use autoescaping to prevent XSS from untrusted + user input. +- A sandboxed environment can safely render untrusted templates. +- AsyncIO support for generating templates and calling async + functions. +- I18N support with Babel. +- Templates are compiled to optimized Python code just-in-time and + cached, or can be compiled ahead-of-time. +- Exceptions point to the correct line in templates to make debugging + easier. +- Extensible filters, tests, functions, and even syntax. + +Jinja's philosophy is that while application logic belongs in Python if +possible, it shouldn't make the template designer's job difficult by +restricting functionality too much. + + +Installing +---------- + +Install and update using `pip`_: + +.. code-block:: text + + $ pip install -U Jinja2 + +.. _pip: https://pip.pypa.io/en/stable/getting-started/ + + +In A Nutshell +------------- + +.. code-block:: jinja + + {% extends "base.html" %} + {% block title %}Members{% endblock %} + {% block content %} + <ul> + {% for user in users %} + <li><a href="{{ user.url }}">{{ user.username }}</a></li> + {% endfor %} + </ul> + {% endblock %} + + +Donate +------ + +The Pallets organization develops and supports Jinja and other popular +packages. In order to grow the community of contributors and users, and +allow the maintainers to devote more time to the projects, `please +donate today`_. + +.. 
_please donate today: https://palletsprojects.com/donate + + +Links +----- + +- Documentation: https://jinja.palletsprojects.com/ +- Changes: https://jinja.palletsprojects.com/changes/ +- PyPI Releases: https://pypi.org/project/Jinja2/ +- Source Code: https://github.com/pallets/jinja/ +- Issue Tracker: https://github.com/pallets/jinja/issues/ +- Website: https://palletsprojects.com/p/jinja/ +- Twitter: https://twitter.com/PalletsTeam +- Chat: https://discord.gg/pallets + + diff --git a/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/RECORD b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/RECORD similarity index 100% rename from lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/RECORD rename to python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/RECORD diff --git a/lib/site-packages/pip/_vendor/urllib3/contrib/__init__.py b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/REQUESTED similarity index 100% rename from lib/site-packages/pip/_vendor/urllib3/contrib/__init__.py rename to python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/REQUESTED diff --git a/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/WHEEL b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/WHEEL new file mode 100644 index 0000000..becc9a6 --- /dev/null +++ b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/entry_points.txt b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/entry_points.txt new file mode 100644 index 0000000..7b9666c --- /dev/null +++ b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/entry_points.txt @@ -0,0 +1,2 @@ +[babel.extractors] +jinja2 = jinja2.ext:babel_extract[i18n] diff --git a/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/top_level.txt b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/top_level.txt new file mode 100644 index 0000000..7f7afbf --- /dev/null +++ b/python/lib/python3.11/site-packages/Jinja2-3.1.2.dist-info/top_level.txt @@ -0,0 +1 @@ +jinja2 diff --git a/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/INSTALLER b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/INSTALLER new file mode 100644 index 0000000..a1b589e --- /dev/null +++ b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/LICENSE.rst b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/LICENSE.rst new file mode 100644 index 0000000..9d227a0 --- /dev/null +++ b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/LICENSE.rst @@ -0,0 +1,28 @@ +Copyright 2010 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
+TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/METADATA b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/METADATA
new file mode 100644
index 0000000..bced165
--- /dev/null
+++ b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/METADATA
@@ -0,0 +1,93 @@
+Metadata-Version: 2.1
+Name: MarkupSafe
+Version: 2.1.3
+Summary: Safely add untrusted strings to HTML/XML markup.
+Home-page: https://palletsprojects.com/p/markupsafe/
+Maintainer: Pallets
+Maintainer-email: contact@palletsprojects.com
+License: BSD-3-Clause
+Project-URL: Donate, https://palletsprojects.com/donate
+Project-URL: Documentation, https://markupsafe.palletsprojects.com/
+Project-URL: Changes, https://markupsafe.palletsprojects.com/changes/
+Project-URL: Source Code, https://github.com/pallets/markupsafe/
+Project-URL: Issue Tracker, https://github.com/pallets/markupsafe/issues/
+Project-URL: Chat, https://discord.gg/pallets
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Environment :: Web Environment
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content
+Classifier: Topic :: Text Processing :: Markup :: HTML
+Requires-Python: >=3.7
+Description-Content-Type: text/x-rst
+License-File: LICENSE.rst
+
+MarkupSafe
+==========
+
+MarkupSafe implements a text object that escapes characters so it is
+safe to use in HTML and XML. Characters that have special meanings are
+replaced so that they display as the actual characters. This mitigates
+injection attacks, meaning untrusted user input can safely be displayed
+on a page.
+
+
+Installing
+----------
+
+Install and update using `pip`_:
+
+.. code-block:: text
+
+    pip install -U MarkupSafe
+
+.. _pip: https://pip.pypa.io/en/stable/getting-started/
+
+
+Examples
+--------
+
+.. code-block:: pycon
+
+    >>> from markupsafe import Markup, escape
+
+    >>> # escape replaces special characters and wraps in Markup
+    >>> escape("<script>alert(document.cookie);</script>")
+    Markup('&lt;script&gt;alert(document.cookie);&lt;/script&gt;')
+
+    >>> # wrap in Markup to mark text "safe" and prevent escaping
+    >>> Markup("<strong>Hello</strong>")
+    Markup('<strong>Hello</strong>')
+
+    >>> escape(Markup("<strong>Hello</strong>"))
+    Markup('<strong>Hello</strong>')
+
+    >>> # Markup is a str subclass
+    >>> # methods and operators escape their arguments
+    >>> template = Markup("Hello <em>{name}</em>")
+    >>> template.format(name='"World"')
+    Markup('Hello <em>&#34;World&#34;</em>')
+
+
+Donate
+------
+
+The Pallets organization develops and supports MarkupSafe and other
+popular packages.
In order to grow the community of contributors and +users, and allow the maintainers to devote more time to the projects, +`please donate today`_. + +.. _please donate today: https://palletsprojects.com/donate + + +Links +----- + +- Documentation: https://markupsafe.palletsprojects.com/ +- Changes: https://markupsafe.palletsprojects.com/changes/ +- PyPI Releases: https://pypi.org/project/MarkupSafe/ +- Source Code: https://github.com/pallets/markupsafe/ +- Issue Tracker: https://github.com/pallets/markupsafe/issues/ +- Chat: https://discord.gg/pallets diff --git a/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/RECORD b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/RECORD similarity index 100% rename from lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/RECORD rename to python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/RECORD diff --git a/lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/__init__.py b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/REQUESTED similarity index 100% rename from lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/__init__.py rename to python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/WHEEL b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/WHEEL rename to python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/WHEEL diff --git a/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/top_level.txt b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/top_level.txt new file mode 100644 index 0000000..75bf729 --- /dev/null +++ b/python/lib/python3.11/site-packages/MarkupSafe-2.1.3.dist-info/top_level.txt @@ -0,0 +1 @@ +markupsafe diff --git a/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/INSTALLER b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/INSTALLER new file mode 100644 index 0000000..a1b589e --- /dev/null +++ b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/LICENSE.rst b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/LICENSE.rst new file mode 100644 index 0000000..c37cae4 --- /dev/null +++ b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/LICENSE.rst @@ -0,0 +1,28 @@ +Copyright 2007 Pallets + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/METADATA b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/METADATA new file mode 100644 index 0000000..647bfc8 --- /dev/null +++ b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/METADATA @@ -0,0 +1,126 @@ +Metadata-Version: 2.1 +Name: Werkzeug +Version: 2.2.3 +Summary: The comprehensive WSGI web application library. +Home-page: https://palletsprojects.com/p/werkzeug/ +Author: Armin Ronacher +Author-email: armin.ronacher@active-4.com +Maintainer: Pallets +Maintainer-email: contact@palletsprojects.com +License: BSD-3-Clause +Project-URL: Donate, https://palletsprojects.com/donate +Project-URL: Documentation, https://werkzeug.palletsprojects.com/ +Project-URL: Changes, https://werkzeug.palletsprojects.com/changes/ +Project-URL: Source Code, https://github.com/pallets/werkzeug/ +Project-URL: Issue Tracker, https://github.com/pallets/werkzeug/issues/ +Project-URL: Twitter, https://twitter.com/PalletsTeam +Project-URL: Chat, https://discord.gg/pallets +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Web Environment +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content +Classifier: Topic :: Internet :: WWW/HTTP :: WSGI +Classifier: Topic :: Internet :: WWW/HTTP :: WSGI :: Application +Classifier: Topic :: Internet :: WWW/HTTP :: WSGI :: Middleware +Classifier: Topic :: Software Development :: Libraries :: Application Frameworks +Requires-Python: >=3.7 +Description-Content-Type: text/x-rst +License-File: LICENSE.rst +Requires-Dist: MarkupSafe (>=2.1.1) +Provides-Extra: watchdog +Requires-Dist: watchdog ; extra == 'watchdog' + +Werkzeug +======== + +*werkzeug* German noun: "tool". Etymology: *werk* ("work"), *zeug* ("stuff") + +Werkzeug is a comprehensive `WSGI`_ web application library. It began as +a simple collection of various utilities for WSGI applications and has +become one of the most advanced WSGI utility libraries. + +It includes: + +- An interactive debugger that allows inspecting stack traces and + source code in the browser with an interactive interpreter for any + frame in the stack. +- A full-featured request object with objects to interact with + headers, query args, form data, files, and cookies. +- A response object that can wrap other WSGI applications and handle + streaming data. +- A routing system for matching URLs to endpoints and generating URLs + for endpoints, with an extensible system for capturing variables + from URLs. +- HTTP utilities to handle entity tags, cache control, dates, user + agents, cookies, files, and more. +- A threaded WSGI server for use while developing applications + locally. +- A test client for simulating HTTP requests during testing without + requiring running a server. 
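The routing system mentioned in the list above can also be exercised on its own; a minimal sketch (the host name and rule are illustrative):

.. code-block:: python

    from werkzeug.routing import Map, Rule

    url_map = Map([Rule("/user/<int:user_id>", endpoint="user")])
    urls = url_map.bind("example.com")   # bind to a host to get a MapAdapter

    assert urls.match("/user/42") == ("user", {"user_id": 42})
    assert urls.build("user", {"user_id": 7}) == "/user/7"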
+ +Werkzeug doesn't enforce any dependencies. It is up to the developer to +choose a template engine, database adapter, and even how to handle +requests. It can be used to build all sorts of end user applications +such as blogs, wikis, or bulletin boards. + +`Flask`_ wraps Werkzeug, using it to handle the details of WSGI while +providing more structure and patterns for defining powerful +applications. + +.. _WSGI: https://wsgi.readthedocs.io/en/latest/ +.. _Flask: https://www.palletsprojects.com/p/flask/ + + +Installing +---------- + +Install and update using `pip`_: + +.. code-block:: text + + pip install -U Werkzeug + +.. _pip: https://pip.pypa.io/en/stable/getting-started/ + + +A Simple Example +---------------- + +.. code-block:: python + + from werkzeug.wrappers import Request, Response + + @Request.application + def application(request): + return Response('Hello, World!') + + if __name__ == '__main__': + from werkzeug.serving import run_simple + run_simple('localhost', 4000, application) + + +Donate +------ + +The Pallets organization develops and supports Werkzeug and other +popular packages. In order to grow the community of contributors and +users, and allow the maintainers to devote more time to the projects, +`please donate today`_. + +.. _please donate today: https://palletsprojects.com/donate + + +Links +----- + +- Documentation: https://werkzeug.palletsprojects.com/ +- Changes: https://werkzeug.palletsprojects.com/changes/ +- PyPI Releases: https://pypi.org/project/Werkzeug/ +- Source Code: https://github.com/pallets/werkzeug/ +- Issue Tracker: https://github.com/pallets/werkzeug/issues/ +- Website: https://palletsprojects.com/p/werkzeug/ +- Twitter: https://twitter.com/PalletsTeam +- Chat: https://discord.gg/pallets diff --git a/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/RECORD b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/RECORD similarity index 100% rename from lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/RECORD rename to python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/RECORD diff --git a/lib/site-packages/pip/_vendor/urllib3/packages/__init__.py b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/REQUESTED similarity index 100% rename from lib/site-packages/pip/_vendor/urllib3/packages/__init__.py rename to python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/REQUESTED diff --git a/lib/python3.11/site-packages/zipp-3.15.0.dist-info/WHEEL b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/WHEEL similarity index 100% rename from lib/python3.11/site-packages/zipp-3.15.0.dist-info/WHEEL rename to python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/WHEEL diff --git a/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/top_level.txt b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/top_level.txt new file mode 100644 index 0000000..6fe8da8 --- /dev/null +++ b/python/lib/python3.11/site-packages/Werkzeug-2.2.3.dist-info/top_level.txt @@ -0,0 +1 @@ +werkzeug diff --git a/lib/python3.11/site-packages/_brotli.cpython-311-x86_64-linux-gnu.so b/python/lib/python3.11/site-packages/_brotli.cpython-311-x86_64-linux-gnu.so similarity index 100% rename from lib/python3.11/site-packages/_brotli.cpython-311-x86_64-linux-gnu.so rename to python/lib/python3.11/site-packages/_brotli.cpython-311-x86_64-linux-gnu.so diff --git a/lib/python3.11/site-packages/_distutils_hack/__init__.py b/python/lib/python3.11/site-packages/_distutils_hack/__init__.py similarity index 100% rename from 
lib/python3.11/site-packages/_distutils_hack/__init__.py rename to python/lib/python3.11/site-packages/_distutils_hack/__init__.py diff --git a/python/lib/python3.11/site-packages/_distutils_hack/override.py b/python/lib/python3.11/site-packages/_distutils_hack/override.py new file mode 100644 index 0000000..2cc433a --- /dev/null +++ b/python/lib/python3.11/site-packages/_distutils_hack/override.py @@ -0,0 +1 @@ +__import__('_distutils_hack').do_override() diff --git a/python/lib/python3.11/site-packages/apscheduler/__init__.py b/python/lib/python3.11/site-packages/apscheduler/__init__.py new file mode 100644 index 0000000..968169a --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/__init__.py @@ -0,0 +1,10 @@ +from pkg_resources import get_distribution, DistributionNotFound + +try: + release = get_distribution('APScheduler').version.split('-')[0] +except DistributionNotFound: + release = '3.5.0' + +version_info = tuple(int(x) if x.isdigit() else x for x in release.split('.')) +version = __version__ = '.'.join(str(x) for x in version_info[:3]) +del get_distribution, DistributionNotFound diff --git a/python/lib/python3.11/site-packages/apscheduler/events.py b/python/lib/python3.11/site-packages/apscheduler/events.py new file mode 100644 index 0000000..016da03 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/events.py @@ -0,0 +1,94 @@ +__all__ = ('EVENT_SCHEDULER_STARTED', 'EVENT_SCHEDULER_SHUTDOWN', 'EVENT_SCHEDULER_PAUSED', + 'EVENT_SCHEDULER_RESUMED', 'EVENT_EXECUTOR_ADDED', 'EVENT_EXECUTOR_REMOVED', + 'EVENT_JOBSTORE_ADDED', 'EVENT_JOBSTORE_REMOVED', 'EVENT_ALL_JOBS_REMOVED', + 'EVENT_JOB_ADDED', 'EVENT_JOB_REMOVED', 'EVENT_JOB_MODIFIED', 'EVENT_JOB_EXECUTED', + 'EVENT_JOB_ERROR', 'EVENT_JOB_MISSED', 'EVENT_JOB_SUBMITTED', 'EVENT_JOB_MAX_INSTANCES', + 'SchedulerEvent', 'JobEvent', 'JobExecutionEvent', 'JobSubmissionEvent') + + +EVENT_SCHEDULER_STARTED = EVENT_SCHEDULER_START = 2 ** 0 +EVENT_SCHEDULER_SHUTDOWN = 2 ** 1 +EVENT_SCHEDULER_PAUSED = 2 ** 2 +EVENT_SCHEDULER_RESUMED = 2 ** 3 +EVENT_EXECUTOR_ADDED = 2 ** 4 +EVENT_EXECUTOR_REMOVED = 2 ** 5 +EVENT_JOBSTORE_ADDED = 2 ** 6 +EVENT_JOBSTORE_REMOVED = 2 ** 7 +EVENT_ALL_JOBS_REMOVED = 2 ** 8 +EVENT_JOB_ADDED = 2 ** 9 +EVENT_JOB_REMOVED = 2 ** 10 +EVENT_JOB_MODIFIED = 2 ** 11 +EVENT_JOB_EXECUTED = 2 ** 12 +EVENT_JOB_ERROR = 2 ** 13 +EVENT_JOB_MISSED = 2 ** 14 +EVENT_JOB_SUBMITTED = 2 ** 15 +EVENT_JOB_MAX_INSTANCES = 2 ** 16 +EVENT_ALL = (EVENT_SCHEDULER_STARTED | EVENT_SCHEDULER_SHUTDOWN | EVENT_SCHEDULER_PAUSED | + EVENT_SCHEDULER_RESUMED | EVENT_EXECUTOR_ADDED | EVENT_EXECUTOR_REMOVED | + EVENT_JOBSTORE_ADDED | EVENT_JOBSTORE_REMOVED | EVENT_ALL_JOBS_REMOVED | + EVENT_JOB_ADDED | EVENT_JOB_REMOVED | EVENT_JOB_MODIFIED | EVENT_JOB_EXECUTED | + EVENT_JOB_ERROR | EVENT_JOB_MISSED | EVENT_JOB_SUBMITTED | EVENT_JOB_MAX_INSTANCES) + + +class SchedulerEvent(object): + """ + An event that concerns the scheduler itself. + + :ivar code: the type code of this event + :ivar alias: alias of the job store or executor that was added or removed (if applicable) + """ + + def __init__(self, code, alias=None): + super(SchedulerEvent, self).__init__() + self.code = code + self.alias = alias + + def __repr__(self): + return '<%s (code=%d)>' % (self.__class__.__name__, self.code) + + +class JobEvent(SchedulerEvent): + """ + An event that concerns a job. 
+ + :ivar code: the type code of this event + :ivar job_id: identifier of the job in question + :ivar jobstore: alias of the job store containing the job in question + """ + + def __init__(self, code, job_id, jobstore): + super(JobEvent, self).__init__(code) + self.code = code + self.job_id = job_id + self.jobstore = jobstore + + +class JobSubmissionEvent(JobEvent): + """ + An event that concerns the submission of a job to its executor. + + :ivar scheduled_run_times: a list of datetimes when the job was intended to run + """ + + def __init__(self, code, job_id, jobstore, scheduled_run_times): + super(JobSubmissionEvent, self).__init__(code, job_id, jobstore) + self.scheduled_run_times = scheduled_run_times + + +class JobExecutionEvent(JobEvent): + """ + An event that concerns the running of a job within its executor. + + :ivar scheduled_run_time: the time when the job was scheduled to be run + :ivar retval: the return value of the successfully executed job + :ivar exception: the exception raised by the job + :ivar traceback: a formatted traceback for the exception + """ + + def __init__(self, code, job_id, jobstore, scheduled_run_time, retval=None, exception=None, + traceback=None): + super(JobExecutionEvent, self).__init__(code, job_id, jobstore) + self.scheduled_run_time = scheduled_run_time + self.retval = retval + self.exception = exception + self.traceback = traceback diff --git a/lib/site-packages/pip/_vendor/urllib3/packages/backports/__init__.py b/python/lib/python3.11/site-packages/apscheduler/executors/__init__.py similarity index 100% rename from lib/site-packages/pip/_vendor/urllib3/packages/backports/__init__.py rename to python/lib/python3.11/site-packages/apscheduler/executors/__init__.py diff --git a/python/lib/python3.11/site-packages/apscheduler/executors/asyncio.py b/python/lib/python3.11/site-packages/apscheduler/executors/asyncio.py new file mode 100644 index 0000000..7d45d6c --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/executors/asyncio.py @@ -0,0 +1,52 @@ +from __future__ import absolute_import + +import sys + +from apscheduler.executors.base import BaseExecutor, run_job +from apscheduler.executors.base_py3 import run_coroutine_job +from apscheduler.util import iscoroutinefunction_partial + + +class AsyncIOExecutor(BaseExecutor): + """ + Runs jobs in the default executor of the event loop. + + If the job function is a native coroutine function, it is scheduled to be run directly in the + event loop as soon as possible. All other functions are run in the event loop's default + executor which is usually a thread pool. 
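The coroutine-versus-thread-pool split described above is easiest to see from the scheduler side. A minimal sketch, assuming APScheduler's AsyncIOScheduler (defined elsewhere in this package) and an invented job function:

    import asyncio

    from apscheduler.schedulers.asyncio import AsyncIOScheduler

    async def tick():
        print('tick')  # a native coroutine: run directly on the event loop

    scheduler = AsyncIOScheduler()
    # Were tick a plain function, the executor would hand it to the event
    # loop's default (thread pool) executor instead.
    scheduler.add_job(tick, 'interval', seconds=3)
    scheduler.start()
    asyncio.get_event_loop().run_forever()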
+ + Plugin alias: ``asyncio`` + """ + + def start(self, scheduler, alias): + super(AsyncIOExecutor, self).start(scheduler, alias) + self._eventloop = scheduler._eventloop + self._pending_futures = set() + + def shutdown(self, wait=True): + # There is no way to honor wait=True without converting this method into a coroutine method + for f in self._pending_futures: + if not f.done(): + f.cancel() + + self._pending_futures.clear() + + def _do_submit_job(self, job, run_times): + def callback(f): + self._pending_futures.discard(f) + try: + events = f.result() + except BaseException: + self._run_job_error(job.id, *sys.exc_info()[1:]) + else: + self._run_job_success(job.id, events) + + if iscoroutinefunction_partial(job.func): + coro = run_coroutine_job(job, job._jobstore_alias, run_times, self._logger.name) + f = self._eventloop.create_task(coro) + else: + f = self._eventloop.run_in_executor(None, run_job, job, job._jobstore_alias, run_times, + self._logger.name) + + f.add_done_callback(callback) + self._pending_futures.add(f) diff --git a/python/lib/python3.11/site-packages/apscheduler/executors/base.py b/python/lib/python3.11/site-packages/apscheduler/executors/base.py new file mode 100644 index 0000000..4c09fc1 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/executors/base.py @@ -0,0 +1,146 @@ +from abc import ABCMeta, abstractmethod +from collections import defaultdict +from datetime import datetime, timedelta +from traceback import format_tb +import logging +import sys + +from pytz import utc +import six + +from apscheduler.events import ( + JobExecutionEvent, EVENT_JOB_MISSED, EVENT_JOB_ERROR, EVENT_JOB_EXECUTED) + + +class MaxInstancesReachedError(Exception): + def __init__(self, job): + super(MaxInstancesReachedError, self).__init__( + 'Job "%s" has already reached its maximum number of instances (%d)' % + (job.id, job.max_instances)) + + +class BaseExecutor(six.with_metaclass(ABCMeta, object)): + """Abstract base class that defines the interface that every executor must implement.""" + + _scheduler = None + _lock = None + _logger = logging.getLogger('apscheduler.executors') + + def __init__(self): + super(BaseExecutor, self).__init__() + self._instances = defaultdict(lambda: 0) + + def start(self, scheduler, alias): + """ + Called by the scheduler when the scheduler is being started or when the executor is being + added to an already running scheduler. + + :param apscheduler.schedulers.base.BaseScheduler scheduler: the scheduler that is starting + this executor + :param str|unicode alias: alias of this executor as it was assigned to the scheduler + + """ + self._scheduler = scheduler + self._lock = scheduler._create_lock() + self._logger = logging.getLogger('apscheduler.executors.%s' % alias) + + def shutdown(self, wait=True): + """ + Shuts down this executor. + + :param bool wait: ``True`` to wait until all submitted jobs + have been executed + """ + + def submit_job(self, job, run_times): + """ + Submits job for execution. 
+ + :param Job job: job to execute + :param list[datetime] run_times: list of datetimes specifying + when the job should have been run + :raises MaxInstancesReachedError: if the maximum number of + allowed instances for this job has been reached + + """ + assert self._lock is not None, 'This executor has not been started yet' + with self._lock: + if self._instances[job.id] >= job.max_instances: + raise MaxInstancesReachedError(job) + + self._do_submit_job(job, run_times) + self._instances[job.id] += 1 + + @abstractmethod + def _do_submit_job(self, job, run_times): + """Performs the actual task of scheduling `run_job` to be called.""" + + def _run_job_success(self, job_id, events): + """ + Called by the executor with the list of generated events when :func:`run_job` has been + successfully called. + + """ + with self._lock: + self._instances[job_id] -= 1 + if self._instances[job_id] == 0: + del self._instances[job_id] + + for event in events: + self._scheduler._dispatch_event(event) + + def _run_job_error(self, job_id, exc, traceback=None): + """Called by the executor with the exception if there is an error calling `run_job`.""" + with self._lock: + self._instances[job_id] -= 1 + if self._instances[job_id] == 0: + del self._instances[job_id] + + exc_info = (exc.__class__, exc, traceback) + self._logger.error('Error running job %s', job_id, exc_info=exc_info) + + +def run_job(job, jobstore_alias, run_times, logger_name): + """ + Called by executors to run the job. Returns a list of scheduler events to be dispatched by the + scheduler. + + """ + events = [] + logger = logging.getLogger(logger_name) + for run_time in run_times: + # See if the job missed its run time window, and handle + # possible misfires accordingly + if job.misfire_grace_time is not None: + difference = datetime.now(utc) - run_time + grace_time = timedelta(seconds=job.misfire_grace_time) + if difference > grace_time: + events.append(JobExecutionEvent(EVENT_JOB_MISSED, job.id, jobstore_alias, + run_time)) + logger.warning('Run time of job "%s" was missed by %s', job, difference) + continue + + logger.info('Running job "%s" (scheduled at %s)', job, run_time) + try: + retval = job.func(*job.args, **job.kwargs) + except BaseException: + exc, tb = sys.exc_info()[1:] + formatted_tb = ''.join(format_tb(tb)) + events.append(JobExecutionEvent(EVENT_JOB_ERROR, job.id, jobstore_alias, run_time, + exception=exc, traceback=formatted_tb)) + logger.exception('Job "%s" raised an exception', job) + + # This is to prevent cyclic references that would lead to memory leaks + if six.PY2: + sys.exc_clear() + del tb + else: + import traceback + traceback.clear_frames(tb) + del tb + else: + events.append(JobExecutionEvent(EVENT_JOB_EXECUTED, job.id, jobstore_alias, run_time, + retval=retval)) + logger.info('Job "%s" executed successfully', job) + + return events diff --git a/python/lib/python3.11/site-packages/apscheduler/executors/base_py3.py b/python/lib/python3.11/site-packages/apscheduler/executors/base_py3.py new file mode 100644 index 0000000..7111d2a --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/executors/base_py3.py @@ -0,0 +1,43 @@ +import logging +import sys +import traceback +from datetime import datetime, timedelta +from traceback import format_tb + +from pytz import utc + +from apscheduler.events import ( + JobExecutionEvent, EVENT_JOB_MISSED, EVENT_JOB_ERROR, EVENT_JOB_EXECUTED) + + +async def run_coroutine_job(job, jobstore_alias, run_times, logger_name): + """Coroutine version of run_job().""" + events = [] + 
logger = logging.getLogger(logger_name) + for run_time in run_times: + # See if the job missed its run time window, and handle possible misfires accordingly + if job.misfire_grace_time is not None: + difference = datetime.now(utc) - run_time + grace_time = timedelta(seconds=job.misfire_grace_time) + if difference > grace_time: + events.append(JobExecutionEvent(EVENT_JOB_MISSED, job.id, jobstore_alias, + run_time)) + logger.warning('Run time of job "%s" was missed by %s', job, difference) + continue + + logger.info('Running job "%s" (scheduled at %s)', job, run_time) + try: + retval = await job.func(*job.args, **job.kwargs) + except BaseException: + exc, tb = sys.exc_info()[1:] + formatted_tb = ''.join(format_tb(tb)) + events.append(JobExecutionEvent(EVENT_JOB_ERROR, job.id, jobstore_alias, run_time, + exception=exc, traceback=formatted_tb)) + logger.exception('Job "%s" raised an exception', job) + traceback.clear_frames(tb) + else: + events.append(JobExecutionEvent(EVENT_JOB_EXECUTED, job.id, jobstore_alias, run_time, + retval=retval)) + logger.info('Job "%s" executed successfully', job) + + return events diff --git a/python/lib/python3.11/site-packages/apscheduler/executors/debug.py b/python/lib/python3.11/site-packages/apscheduler/executors/debug.py new file mode 100644 index 0000000..ac739ae --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/executors/debug.py @@ -0,0 +1,20 @@ +import sys + +from apscheduler.executors.base import BaseExecutor, run_job + + +class DebugExecutor(BaseExecutor): + """ + A special executor that executes the target callable directly instead of deferring it to a + thread or process. + + Plugin alias: ``debug`` + """ + + def _do_submit_job(self, job, run_times): + try: + events = run_job(job, job._jobstore_alias, run_times, self._logger.name) + except BaseException: + self._run_job_error(job.id, *sys.exc_info()[1:]) + else: + self._run_job_success(job.id, events) diff --git a/python/lib/python3.11/site-packages/apscheduler/executors/gevent.py b/python/lib/python3.11/site-packages/apscheduler/executors/gevent.py new file mode 100644 index 0000000..1235bb6 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/executors/gevent.py @@ -0,0 +1,30 @@ +from __future__ import absolute_import +import sys + +from apscheduler.executors.base import BaseExecutor, run_job + + +try: + import gevent +except ImportError: # pragma: nocover + raise ImportError('GeventExecutor requires gevent installed') + + +class GeventExecutor(BaseExecutor): + """ + Runs jobs as greenlets. 
+ + Plugin alias: ``gevent`` + """ + + def _do_submit_job(self, job, run_times): + def callback(greenlet): + try: + events = greenlet.get() + except BaseException: + self._run_job_error(job.id, *sys.exc_info()[1:]) + else: + self._run_job_success(job.id, events) + + gevent.spawn(run_job, job, job._jobstore_alias, run_times, self._logger.name).\ + link(callback) diff --git a/python/lib/python3.11/site-packages/apscheduler/executors/pool.py b/python/lib/python3.11/site-packages/apscheduler/executors/pool.py new file mode 100644 index 0000000..c85896e --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/executors/pool.py @@ -0,0 +1,71 @@ +from abc import abstractmethod +import concurrent.futures + +from apscheduler.executors.base import BaseExecutor, run_job + +try: + from concurrent.futures.process import BrokenProcessPool +except ImportError: + BrokenProcessPool = None + + +class BasePoolExecutor(BaseExecutor): + @abstractmethod + def __init__(self, pool): + super(BasePoolExecutor, self).__init__() + self._pool = pool + + def _do_submit_job(self, job, run_times): + def callback(f): + exc, tb = (f.exception_info() if hasattr(f, 'exception_info') else + (f.exception(), getattr(f.exception(), '__traceback__', None))) + if exc: + self._run_job_error(job.id, exc, tb) + else: + self._run_job_success(job.id, f.result()) + + try: + f = self._pool.submit(run_job, job, job._jobstore_alias, run_times, self._logger.name) + except BrokenProcessPool: + self._logger.warning('Process pool is broken; replacing pool with a fresh instance') + self._pool = self._pool.__class__(self._pool._max_workers) + f = self._pool.submit(run_job, job, job._jobstore_alias, run_times, self._logger.name) + + f.add_done_callback(callback) + + def shutdown(self, wait=True): + self._pool.shutdown(wait) + + +class ThreadPoolExecutor(BasePoolExecutor): + """ + An executor that runs jobs in a concurrent.futures thread pool. + + Plugin alias: ``threadpool`` + + :param max_workers: the maximum number of spawned threads. + :param pool_kwargs: dict of keyword arguments to pass to the underlying + ThreadPoolExecutor constructor + """ + + def __init__(self, max_workers=10, pool_kwargs=None): + pool_kwargs = pool_kwargs or {} + pool = concurrent.futures.ThreadPoolExecutor(int(max_workers), **pool_kwargs) + super(ThreadPoolExecutor, self).__init__(pool) + + +class ProcessPoolExecutor(BasePoolExecutor): + """ + An executor that runs jobs in a concurrent.futures process pool. + + Plugin alias: ``processpool`` + + :param max_workers: the maximum number of spawned processes. 
+ :param pool_kwargs: dict of keyword arguments to pass to the underlying + ProcessPoolExecutor constructor + """ + + def __init__(self, max_workers=10, pool_kwargs=None): + pool_kwargs = pool_kwargs or {} + pool = concurrent.futures.ProcessPoolExecutor(int(max_workers), **pool_kwargs) + super(ProcessPoolExecutor, self).__init__(pool) diff --git a/python/lib/python3.11/site-packages/apscheduler/executors/tornado.py b/python/lib/python3.11/site-packages/apscheduler/executors/tornado.py new file mode 100644 index 0000000..3b97eec --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/executors/tornado.py @@ -0,0 +1,54 @@ +from __future__ import absolute_import + +import sys +from concurrent.futures import ThreadPoolExecutor + +from tornado.gen import convert_yielded + +from apscheduler.executors.base import BaseExecutor, run_job + +try: + from apscheduler.executors.base_py3 import run_coroutine_job + from apscheduler.util import iscoroutinefunction_partial +except ImportError: + def iscoroutinefunction_partial(func): + return False + + +class TornadoExecutor(BaseExecutor): + """ + Runs jobs either in a thread pool or directly on the I/O loop. + + If the job function is a native coroutine function, it is scheduled to be run directly in the + I/O loop as soon as possible. All other functions are run in a thread pool. + + Plugin alias: ``tornado`` + + :param int max_workers: maximum number of worker threads in the thread pool + """ + + def __init__(self, max_workers=10): + super(TornadoExecutor, self).__init__() + self.executor = ThreadPoolExecutor(max_workers) + + def start(self, scheduler, alias): + super(TornadoExecutor, self).start(scheduler, alias) + self._ioloop = scheduler._ioloop + + def _do_submit_job(self, job, run_times): + def callback(f): + try: + events = f.result() + except BaseException: + self._run_job_error(job.id, *sys.exc_info()[1:]) + else: + self._run_job_success(job.id, events) + + if iscoroutinefunction_partial(job.func): + f = run_coroutine_job(job, job._jobstore_alias, run_times, self._logger.name) + else: + f = self.executor.submit(run_job, job, job._jobstore_alias, run_times, + self._logger.name) + + f = convert_yielded(f) + f.add_done_callback(callback) diff --git a/python/lib/python3.11/site-packages/apscheduler/executors/twisted.py b/python/lib/python3.11/site-packages/apscheduler/executors/twisted.py new file mode 100644 index 0000000..c7bcf64 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/executors/twisted.py @@ -0,0 +1,25 @@ +from __future__ import absolute_import + +from apscheduler.executors.base import BaseExecutor, run_job + + +class TwistedExecutor(BaseExecutor): + """ + Runs jobs in the reactor's thread pool. 
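Executor choice is wired up by alias when a scheduler is configured, which is how the pool executors above (and the asyncio, gevent, Tornado, and Twisted ones nearby) come into play. A hedged configuration sketch, assuming APScheduler's BackgroundScheduler and an invented job function:

    from apscheduler.executors.pool import ProcessPoolExecutor, ThreadPoolExecutor
    from apscheduler.schedulers.background import BackgroundScheduler

    def crunch_numbers():
        pass  # stand-in for CPU-bound work; must be a module-level (picklable) function

    executors = {
        'default': ThreadPoolExecutor(max_workers=20),      # I/O-bound jobs
        'processpool': ProcessPoolExecutor(max_workers=4),  # CPU-bound jobs
    }
    scheduler = BackgroundScheduler(executors=executors)

    # The executor= alias routes this job to the process pool.
    scheduler.add_job(crunch_numbers, 'interval', minutes=5, executor='processpool')
    scheduler.start()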
+ + Plugin alias: ``twisted`` + """ + + def start(self, scheduler, alias): + super(TwistedExecutor, self).start(scheduler, alias) + self._reactor = scheduler._reactor + + def _do_submit_job(self, job, run_times): + def callback(success, result): + if success: + self._run_job_success(job.id, result) + else: + self._run_job_error(job.id, result.value, result.tb) + + self._reactor.getThreadPool().callInThreadWithCallback( + callback, run_job, job, job._jobstore_alias, run_times, self._logger.name) diff --git a/python/lib/python3.11/site-packages/apscheduler/job.py b/python/lib/python3.11/site-packages/apscheduler/job.py new file mode 100644 index 0000000..445d9a8 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/job.py @@ -0,0 +1,302 @@ +from inspect import ismethod, isclass +from uuid import uuid4 + +import six + +from apscheduler.triggers.base import BaseTrigger +from apscheduler.util import ( + ref_to_obj, obj_to_ref, datetime_repr, repr_escape, get_callable_name, check_callable_args, + convert_to_datetime) + +try: + from collections.abc import Iterable, Mapping +except ImportError: + from collections import Iterable, Mapping + + +class Job(object): + """ + Contains the options given when scheduling callables and its current schedule and other state. + This class should never be instantiated by the user. + + :var str id: the unique identifier of this job + :var str name: the description of this job + :var func: the callable to execute + :var tuple|list args: positional arguments to the callable + :var dict kwargs: keyword arguments to the callable + :var bool coalesce: whether to only run the job once when several run times are due + :var trigger: the trigger object that controls the schedule of this job + :var str executor: the name of the executor that will run this job + :var int misfire_grace_time: the time (in seconds) how much this job's execution is allowed to + be late (``None`` means "allow the job to run no matter how late it is") + :var int max_instances: the maximum number of concurrently executing instances allowed for this + job + :var datetime.datetime next_run_time: the next scheduled run time of this job + + .. note:: + The ``misfire_grace_time`` has some non-obvious effects on job execution. See the + :ref:`missed-job-executions` section in the documentation for an in-depth explanation. + """ + + __slots__ = ('_scheduler', '_jobstore_alias', 'id', 'trigger', 'executor', 'func', 'func_ref', + 'args', 'kwargs', 'name', 'misfire_grace_time', 'coalesce', 'max_instances', + 'next_run_time', '__weakref__') + + def __init__(self, scheduler, id=None, **kwargs): + super(Job, self).__init__() + self._scheduler = scheduler + self._jobstore_alias = None + self._modify(id=id or uuid4().hex, **kwargs) + + def modify(self, **changes): + """ + Makes the given changes to this job and saves it in the associated job store. + + Accepted keyword arguments are the same as the variables on this class. + + .. seealso:: :meth:`~apscheduler.schedulers.base.BaseScheduler.modify_job` + + :return Job: this job instance + + """ + self._scheduler.modify_job(self.id, self._jobstore_alias, **changes) + return self + + def reschedule(self, trigger, **trigger_args): + """ + Shortcut for switching the trigger on this job. + + .. 
seealso:: :meth:`~apscheduler.schedulers.base.BaseScheduler.reschedule_job` + + :return Job: this job instance + + """ + self._scheduler.reschedule_job(self.id, self._jobstore_alias, trigger, **trigger_args) + return self + + def pause(self): + """ + Temporarily suspend the execution of this job. + + .. seealso:: :meth:`~apscheduler.schedulers.base.BaseScheduler.pause_job` + + :return Job: this job instance + + """ + self._scheduler.pause_job(self.id, self._jobstore_alias) + return self + + def resume(self): + """ + Resume the schedule of this job if previously paused. + + .. seealso:: :meth:`~apscheduler.schedulers.base.BaseScheduler.resume_job` + + :return Job: this job instance + + """ + self._scheduler.resume_job(self.id, self._jobstore_alias) + return self + + def remove(self): + """ + Unschedules this job and removes it from its associated job store. + + .. seealso:: :meth:`~apscheduler.schedulers.base.BaseScheduler.remove_job` + + """ + self._scheduler.remove_job(self.id, self._jobstore_alias) + + @property + def pending(self): + """ + Returns ``True`` if the referenced job is still waiting to be added to its designated job + store. + + """ + return self._jobstore_alias is None + + # + # Private API + # + + def _get_run_times(self, now): + """ + Computes the scheduled run times between ``next_run_time`` and ``now`` (inclusive). + + :type now: datetime.datetime + :rtype: list[datetime.datetime] + + """ + run_times = [] + next_run_time = self.next_run_time + while next_run_time and next_run_time <= now: + run_times.append(next_run_time) + next_run_time = self.trigger.get_next_fire_time(next_run_time, now) + + return run_times + + def _modify(self, **changes): + """ + Validates the changes to the Job and makes the modifications if and only if all of them + validate. 
+ + """ + approved = {} + + if 'id' in changes: + value = changes.pop('id') + if not isinstance(value, six.string_types): + raise TypeError("id must be a nonempty string") + if hasattr(self, 'id'): + raise ValueError('The job ID may not be changed') + approved['id'] = value + + if 'func' in changes or 'args' in changes or 'kwargs' in changes: + func = changes.pop('func') if 'func' in changes else self.func + args = changes.pop('args') if 'args' in changes else self.args + kwargs = changes.pop('kwargs') if 'kwargs' in changes else self.kwargs + + if isinstance(func, six.string_types): + func_ref = func + func = ref_to_obj(func) + elif callable(func): + try: + func_ref = obj_to_ref(func) + except ValueError: + # If this happens, this Job won't be serializable + func_ref = None + else: + raise TypeError('func must be a callable or a textual reference to one') + + if not hasattr(self, 'name') and changes.get('name', None) is None: + changes['name'] = get_callable_name(func) + + if isinstance(args, six.string_types) or not isinstance(args, Iterable): + raise TypeError('args must be a non-string iterable') + if isinstance(kwargs, six.string_types) or not isinstance(kwargs, Mapping): + raise TypeError('kwargs must be a dict-like object') + + check_callable_args(func, args, kwargs) + + approved['func'] = func + approved['func_ref'] = func_ref + approved['args'] = args + approved['kwargs'] = kwargs + + if 'name' in changes: + value = changes.pop('name') + if not value or not isinstance(value, six.string_types): + raise TypeError("name must be a nonempty string") + approved['name'] = value + + if 'misfire_grace_time' in changes: + value = changes.pop('misfire_grace_time') + if value is not None and (not isinstance(value, six.integer_types) or value <= 0): + raise TypeError('misfire_grace_time must be either None or a positive integer') + approved['misfire_grace_time'] = value + + if 'coalesce' in changes: + value = bool(changes.pop('coalesce')) + approved['coalesce'] = value + + if 'max_instances' in changes: + value = changes.pop('max_instances') + if not isinstance(value, six.integer_types) or value <= 0: + raise TypeError('max_instances must be a positive integer') + approved['max_instances'] = value + + if 'trigger' in changes: + trigger = changes.pop('trigger') + if not isinstance(trigger, BaseTrigger): + raise TypeError('Expected a trigger instance, got %s instead' % + trigger.__class__.__name__) + + approved['trigger'] = trigger + + if 'executor' in changes: + value = changes.pop('executor') + if not isinstance(value, six.string_types): + raise TypeError('executor must be a string') + approved['executor'] = value + + if 'next_run_time' in changes: + value = changes.pop('next_run_time') + approved['next_run_time'] = convert_to_datetime(value, self._scheduler.timezone, + 'next_run_time') + + if changes: + raise AttributeError('The following are not modifiable attributes of Job: %s' % + ', '.join(changes)) + + for key, value in six.iteritems(approved): + setattr(self, key, value) + + def __getstate__(self): + # Don't allow this Job to be serialized if the function reference could not be determined + if not self.func_ref: + raise ValueError( + 'This Job cannot be serialized since the reference to its callable (%r) could not ' + 'be determined. Consider giving a textual reference (module:function name) ' + 'instead.' 
% (self.func,)) + + # Instance methods cannot survive serialization as-is, so store the "self" argument + # explicitly + func = self.func + if ismethod(func) and not isclass(func.__self__) and obj_to_ref(func) == self.func_ref: + args = (func.__self__,) + tuple(self.args) + else: + args = self.args + + return { + 'version': 1, + 'id': self.id, + 'func': self.func_ref, + 'trigger': self.trigger, + 'executor': self.executor, + 'args': args, + 'kwargs': self.kwargs, + 'name': self.name, + 'misfire_grace_time': self.misfire_grace_time, + 'coalesce': self.coalesce, + 'max_instances': self.max_instances, + 'next_run_time': self.next_run_time + } + + def __setstate__(self, state): + if state.get('version', 1) > 1: + raise ValueError('Job has version %s, but only version 1 can be handled' % + state['version']) + + self.id = state['id'] + self.func_ref = state['func'] + self.func = ref_to_obj(self.func_ref) + self.trigger = state['trigger'] + self.executor = state['executor'] + self.args = state['args'] + self.kwargs = state['kwargs'] + self.name = state['name'] + self.misfire_grace_time = state['misfire_grace_time'] + self.coalesce = state['coalesce'] + self.max_instances = state['max_instances'] + self.next_run_time = state['next_run_time'] + + def __eq__(self, other): + if isinstance(other, Job): + return self.id == other.id + return NotImplemented + + def __repr__(self): + return '<Job (id=%s name=%s)>' % (repr_escape(self.id), repr_escape(self.name)) + + def __str__(self): + return repr_escape(self.__unicode__()) + + def __unicode__(self): + if hasattr(self, 'next_run_time'): + status = ('next run at: ' + datetime_repr(self.next_run_time) if + self.next_run_time else 'paused') + else: + status = 'pending' + + return u'%s (trigger: %s, %s)' % (self.name, self.trigger, status) diff --git a/python/lib/python3.11/site-packages/apscheduler/jobstores/__init__.py b/python/lib/python3.11/site-packages/apscheduler/jobstores/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/python/lib/python3.11/site-packages/apscheduler/jobstores/base.py b/python/lib/python3.11/site-packages/apscheduler/jobstores/base.py new file mode 100644 index 0000000..9cff66c --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/jobstores/base.py @@ -0,0 +1,143 @@ +from abc import ABCMeta, abstractmethod +import logging + +import six + + +class JobLookupError(KeyError): + """Raised when the job store cannot find a job for update or removal.""" + + def __init__(self, job_id): + super(JobLookupError, self).__init__(u'No job by the id of %s was found' % job_id) + + +class ConflictingIdError(KeyError): + """Raised when the uniqueness of job IDs is being violated.""" + + def __init__(self, job_id): + super(ConflictingIdError, self).__init__( + u'Job identifier (%s) conflicts with an existing job' % job_id) + + +class TransientJobError(ValueError): + """ + Raised when an attempt to add transient (with no func_ref) job to a persistent job store is + detected. + """ + + def __init__(self, job_id): + super(TransientJobError, self).__init__( + u'Job (%s) cannot be added to this job store because a reference to the callable ' + u'could not be determined.' 
% job_id) + + +class BaseJobStore(six.with_metaclass(ABCMeta)): + """Abstract base class that defines the interface that every job store must implement.""" + + _scheduler = None + _alias = None + _logger = logging.getLogger('apscheduler.jobstores') + + def start(self, scheduler, alias): + """ + Called by the scheduler when the scheduler is being started or when the job store is being + added to an already running scheduler. + + :param apscheduler.schedulers.base.BaseScheduler scheduler: the scheduler that is starting + this job store + :param str|unicode alias: alias of this job store as it was assigned to the scheduler + """ + + self._scheduler = scheduler + self._alias = alias + self._logger = logging.getLogger('apscheduler.jobstores.%s' % alias) + + def shutdown(self): + """Frees any resources still bound to this job store.""" + + def _fix_paused_jobs_sorting(self, jobs): + for i, job in enumerate(jobs): + if job.next_run_time is not None: + if i > 0: + paused_jobs = jobs[:i] + del jobs[:i] + jobs.extend(paused_jobs) + break + + @abstractmethod + def lookup_job(self, job_id): + """ + Returns a specific job, or ``None`` if it isn't found. + + The job store is responsible for setting the ``scheduler`` and ``jobstore`` attributes of + the returned job to point to the scheduler and itself, respectively. + + :param str|unicode job_id: identifier of the job + :rtype: Job + """ + + @abstractmethod + def get_due_jobs(self, now): + """ + Returns the list of jobs that have ``next_run_time`` earlier than or equal to ``now``. + The returned jobs must be sorted by next run time (ascending). + + :param datetime.datetime now: the current (timezone-aware) datetime + :rtype: list[Job] + """ + + @abstractmethod + def get_next_run_time(self): + """ + Returns the earliest run time of all the jobs stored in this job store, or ``None`` if + there are no active jobs. + + :rtype: datetime.datetime + """ + + @abstractmethod + def get_all_jobs(self): + """ + Returns a list of all jobs in this job store. + The returned jobs should be sorted by next run time (ascending). + Paused jobs (next_run_time == None) should be sorted last. + + The job store is responsible for setting the ``scheduler`` and ``jobstore`` attributes of + the returned jobs to point to the scheduler and itself, respectively. + + :rtype: list[Job] + """ + + @abstractmethod + def add_job(self, job): + """ + Adds the given job to this store. + + :param Job job: the job to add + :raises ConflictingIdError: if there is another job in this store with the same ID + """ + + @abstractmethod + def update_job(self, job): + """ + Replaces the job in the store with the given newer version. + + :param Job job: the job to update + :raises JobLookupError: if the job does not exist + """ + + @abstractmethod + def remove_job(self, job_id): + """ + Removes the given job from this store.
+ + :param str|unicode job_id: identifier of the job + :raises JobLookupError: if the job does not exist + """ + + @abstractmethod + def remove_all_jobs(self): + """Removes all jobs from this store.""" + + def __repr__(self): + return '<%s>' % self.__class__.__name__ diff --git a/python/lib/python3.11/site-packages/apscheduler/jobstores/memory.py b/python/lib/python3.11/site-packages/apscheduler/jobstores/memory.py new file mode 100644 index 0000000..abfe7c6 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/jobstores/memory.py @@ -0,0 +1,108 @@ +from __future__ import absolute_import + +from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError +from apscheduler.util import datetime_to_utc_timestamp + + +class MemoryJobStore(BaseJobStore): + """ + Stores jobs in an array in RAM. Provides no persistence support. + + Plugin alias: ``memory`` + """ + + def __init__(self): + super(MemoryJobStore, self).__init__() + # list of (job, timestamp), sorted by next_run_time and job id (ascending) + self._jobs = [] + self._jobs_index = {} # id -> (job, timestamp) lookup table + + def lookup_job(self, job_id): + return self._jobs_index.get(job_id, (None, None))[0] + + def get_due_jobs(self, now): + now_timestamp = datetime_to_utc_timestamp(now) + pending = [] + for job, timestamp in self._jobs: + if timestamp is None or timestamp > now_timestamp: + break + pending.append(job) + + return pending + + def get_next_run_time(self): + return self._jobs[0][0].next_run_time if self._jobs else None + + def get_all_jobs(self): + return [j[0] for j in self._jobs] + + def add_job(self, job): + if job.id in self._jobs_index: + raise ConflictingIdError(job.id) + + timestamp = datetime_to_utc_timestamp(job.next_run_time) + index = self._get_job_index(timestamp, job.id) + self._jobs.insert(index, (job, timestamp)) + self._jobs_index[job.id] = (job, timestamp) + + def update_job(self, job): + old_job, old_timestamp = self._jobs_index.get(job.id, (None, None)) + if old_job is None: + raise JobLookupError(job.id) + + # If the next run time has not changed, simply replace the job in its present index. + # Otherwise, reinsert the job to the list to preserve the ordering. + old_index = self._get_job_index(old_timestamp, old_job.id) + new_timestamp = datetime_to_utc_timestamp(job.next_run_time) + if old_timestamp == new_timestamp: + self._jobs[old_index] = (job, new_timestamp) + else: + del self._jobs[old_index] + new_index = self._get_job_index(new_timestamp, job.id) + self._jobs.insert(new_index, (job, new_timestamp)) + + self._jobs_index[old_job.id] = (job, new_timestamp) + + def remove_job(self, job_id): + job, timestamp = self._jobs_index.get(job_id, (None, None)) + if job is None: + raise JobLookupError(job_id) + + index = self._get_job_index(timestamp, job_id) + del self._jobs[index] + del self._jobs_index[job.id] + + def remove_all_jobs(self): + self._jobs = [] + self._jobs_index = {} + + def shutdown(self): + self.remove_all_jobs() + + def _get_job_index(self, timestamp, job_id): + """ + Returns the index of the given job, or if it's not found, the index where the job should be + inserted based on the given timestamp. 
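For intuition, the binary search implemented just below behaves like Python's bisect over (timestamp, job id) keys, with a None timestamp treated as +infinity so that paused jobs sort last. A small illustration with invented values:

    from bisect import bisect_left

    keys = [(1.0, 'a'), (2.0, 'b'), (float('inf'), 'paused')]
    print(bisect_left(keys, (2.0, 'b')))    # 1: the existing entry is at index 1
    print(bisect_left(keys, (1.5, 'zzz')))  # 1: a new job with timestamp 1.5 inserts here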
+ + :type timestamp: int + :type job_id: str + + """ + lo, hi = 0, len(self._jobs) + timestamp = float('inf') if timestamp is None else timestamp + while lo < hi: + mid = (lo + hi) // 2 + mid_job, mid_timestamp = self._jobs[mid] + mid_timestamp = float('inf') if mid_timestamp is None else mid_timestamp + if mid_timestamp > timestamp: + hi = mid + elif mid_timestamp < timestamp: + lo = mid + 1 + elif mid_job.id > job_id: + hi = mid + elif mid_job.id < job_id: + lo = mid + 1 + else: + return mid + + return lo diff --git a/python/lib/python3.11/site-packages/apscheduler/jobstores/mongodb.py b/python/lib/python3.11/site-packages/apscheduler/jobstores/mongodb.py new file mode 100644 index 0000000..5a00f94 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/jobstores/mongodb.py @@ -0,0 +1,141 @@ +from __future__ import absolute_import +import warnings + +from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError +from apscheduler.util import maybe_ref, datetime_to_utc_timestamp, utc_timestamp_to_datetime +from apscheduler.job import Job + +try: + import cPickle as pickle +except ImportError: # pragma: nocover + import pickle + +try: + from bson.binary import Binary + from pymongo.errors import DuplicateKeyError + from pymongo import MongoClient, ASCENDING +except ImportError: # pragma: nocover + raise ImportError('MongoDBJobStore requires PyMongo installed') + + +class MongoDBJobStore(BaseJobStore): + """ + Stores jobs in a MongoDB database. Any leftover keyword arguments are directly passed to + pymongo's `MongoClient + <http://api.mongodb.org/python/current/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient>`_. + + Plugin alias: ``mongodb`` + + :param str database: database to store jobs in + :param str collection: collection to store jobs in + :param client: a :class:`~pymongo.mongo_client.MongoClient` instance to use instead of + providing connection arguments + :param int pickle_protocol: pickle protocol level to use (for serialization), defaults to the + highest available + """ + + def __init__(self, database='apscheduler', collection='jobs', client=None, + pickle_protocol=pickle.HIGHEST_PROTOCOL, **connect_args): + super(MongoDBJobStore, self).__init__() + self.pickle_protocol = pickle_protocol + + if not database: + raise ValueError('The "database" parameter must not be empty') + if not collection: + raise ValueError('The "collection" parameter must not be empty') + + if client: + self.client = maybe_ref(client) + else: + connect_args.setdefault('w', 1) + self.client = MongoClient(**connect_args) + + self.collection = self.client[database][collection] + + def start(self, scheduler, alias): + super(MongoDBJobStore, self).start(scheduler, alias) + self.collection.create_index('next_run_time', sparse=True) + + @property + def connection(self): + warnings.warn('The "connection" member is deprecated -- use "client" instead', + DeprecationWarning) + return self.client + + def lookup_job(self, job_id): + document = self.collection.find_one(job_id, ['job_state']) + return self._reconstitute_job(document['job_state']) if document else None + + def get_due_jobs(self, now): + timestamp = datetime_to_utc_timestamp(now) + return self._get_jobs({'next_run_time': {'$lte': timestamp}}) + + def get_next_run_time(self): + document = self.collection.find_one({'next_run_time': {'$ne': None}}, + projection=['next_run_time'], + sort=[('next_run_time', ASCENDING)]) + return utc_timestamp_to_datetime(document['next_run_time']) if document else None + + 
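Stepping outside the store internals for a moment: this class is normally not called directly but handed to a scheduler by alias. A hedged sketch, assuming APScheduler's BackgroundScheduler and a MongoDB server on its default port:

    from apscheduler.jobstores.mongodb import MongoDBJobStore
    from apscheduler.schedulers.background import BackgroundScheduler

    # database/collection mirror the defaults in __init__ above; any extra
    # keyword arguments would be forwarded to pymongo's MongoClient.
    jobstores = {'default': MongoDBJobStore(database='apscheduler', collection='jobs')}
    scheduler = BackgroundScheduler(jobstores=jobstores)
    scheduler.start()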
def get_all_jobs(self): + jobs = self._get_jobs({}) + self._fix_paused_jobs_sorting(jobs) + return jobs + + def add_job(self, job): + try: + self.collection.insert_one({ + '_id': job.id, + 'next_run_time': datetime_to_utc_timestamp(job.next_run_time), + 'job_state': Binary(pickle.dumps(job.__getstate__(), self.pickle_protocol)) + }) + except DuplicateKeyError: + raise ConflictingIdError(job.id) + + def update_job(self, job): + changes = { + 'next_run_time': datetime_to_utc_timestamp(job.next_run_time), + 'job_state': Binary(pickle.dumps(job.__getstate__(), self.pickle_protocol)) + } + result = self.collection.update_one({'_id': job.id}, {'$set': changes}) + if result and result.matched_count == 0: + raise JobLookupError(job.id) + + def remove_job(self, job_id): + result = self.collection.delete_one({'_id': job_id}) + if result and result.deleted_count == 0: + raise JobLookupError(job_id) + + def remove_all_jobs(self): + self.collection.delete_many({}) + + def shutdown(self): + self.client.close() + + def _reconstitute_job(self, job_state): + job_state = pickle.loads(job_state) + job = Job.__new__(Job) + job.__setstate__(job_state) + job._scheduler = self._scheduler + job._jobstore_alias = self._alias + return job + + def _get_jobs(self, conditions): + jobs = [] + failed_job_ids = [] + for document in self.collection.find(conditions, ['_id', 'job_state'], + sort=[('next_run_time', ASCENDING)]): + try: + jobs.append(self._reconstitute_job(document['job_state'])) + except BaseException: + self._logger.exception('Unable to restore job "%s" -- removing it', + document['_id']) + failed_job_ids.append(document['_id']) + + # Remove all the jobs we failed to restore + if failed_job_ids: + self.collection.delete_many({'_id': {'$in': failed_job_ids}}) + + return jobs + + def __repr__(self): + return '<%s (client=%s)>' % (self.__class__.__name__, self.client) diff --git a/python/lib/python3.11/site-packages/apscheduler/jobstores/redis.py b/python/lib/python3.11/site-packages/apscheduler/jobstores/redis.py new file mode 100644 index 0000000..5bb69d6 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/jobstores/redis.py @@ -0,0 +1,150 @@ +from __future__ import absolute_import +from datetime import datetime + +from pytz import utc +import six + +from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError +from apscheduler.util import datetime_to_utc_timestamp, utc_timestamp_to_datetime +from apscheduler.job import Job + +try: + import cPickle as pickle +except ImportError: # pragma: nocover + import pickle + +try: + from redis import Redis +except ImportError: # pragma: nocover + raise ImportError('RedisJobStore requires redis installed') + + +class RedisJobStore(BaseJobStore): + """ + Stores jobs in a Redis database. Any leftover keyword arguments are directly passed to redis's + :class:`~redis.StrictRedis`. 
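As with the other persistent stores, this one is usually configured by alias rather than instantiated for direct calls. A minimal sketch, assuming APScheduler's BackgroundScheduler and a local Redis server (the host and port shown are the usual defaults, passed through to redis.Redis):

    from apscheduler.jobstores.redis import RedisJobStore
    from apscheduler.schedulers.background import BackgroundScheduler

    jobstores = {'default': RedisJobStore(db=1, host='localhost', port=6379)}
    scheduler = BackgroundScheduler(jobstores=jobstores)
    scheduler.start()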
+ + Plugin alias: ``redis`` + + :param int db: the database number to store jobs in + :param str jobs_key: key to store jobs in + :param str run_times_key: key to store the jobs' run times in + :param int pickle_protocol: pickle protocol level to use (for serialization), defaults to the + highest available + """ + + def __init__(self, db=0, jobs_key='apscheduler.jobs', run_times_key='apscheduler.run_times', + pickle_protocol=pickle.HIGHEST_PROTOCOL, **connect_args): + super(RedisJobStore, self).__init__() + + if db is None: + raise ValueError('The "db" parameter must not be empty') + if not jobs_key: + raise ValueError('The "jobs_key" parameter must not be empty') + if not run_times_key: + raise ValueError('The "run_times_key" parameter must not be empty') + + self.pickle_protocol = pickle_protocol + self.jobs_key = jobs_key + self.run_times_key = run_times_key + self.redis = Redis(db=int(db), **connect_args) + + def lookup_job(self, job_id): + job_state = self.redis.hget(self.jobs_key, job_id) + return self._reconstitute_job(job_state) if job_state else None + + def get_due_jobs(self, now): + timestamp = datetime_to_utc_timestamp(now) + job_ids = self.redis.zrangebyscore(self.run_times_key, 0, timestamp) + if job_ids: + job_states = self.redis.hmget(self.jobs_key, *job_ids) + return self._reconstitute_jobs(six.moves.zip(job_ids, job_states)) + return [] + + def get_next_run_time(self): + next_run_time = self.redis.zrange(self.run_times_key, 0, 0, withscores=True) + if next_run_time: + return utc_timestamp_to_datetime(next_run_time[0][1]) + + def get_all_jobs(self): + job_states = self.redis.hgetall(self.jobs_key) + jobs = self._reconstitute_jobs(six.iteritems(job_states)) + paused_sort_key = datetime(9999, 12, 31, tzinfo=utc) + return sorted(jobs, key=lambda job: job.next_run_time or paused_sort_key) + + def add_job(self, job): + if self.redis.hexists(self.jobs_key, job.id): + raise ConflictingIdError(job.id) + + with self.redis.pipeline() as pipe: + pipe.multi() + pipe.hset(self.jobs_key, job.id, pickle.dumps(job.__getstate__(), + self.pickle_protocol)) + if job.next_run_time: + pipe.zadd(self.run_times_key, + {job.id: datetime_to_utc_timestamp(job.next_run_time)}) + + pipe.execute() + + def update_job(self, job): + if not self.redis.hexists(self.jobs_key, job.id): + raise JobLookupError(job.id) + + with self.redis.pipeline() as pipe: + pipe.hset(self.jobs_key, job.id, pickle.dumps(job.__getstate__(), + self.pickle_protocol)) + if job.next_run_time: + pipe.zadd(self.run_times_key, + {job.id: datetime_to_utc_timestamp(job.next_run_time)}) + else: + pipe.zrem(self.run_times_key, job.id) + + pipe.execute() + + def remove_job(self, job_id): + if not self.redis.hexists(self.jobs_key, job_id): + raise JobLookupError(job_id) + + with self.redis.pipeline() as pipe: + pipe.hdel(self.jobs_key, job_id) + pipe.zrem(self.run_times_key, job_id) + pipe.execute() + + def remove_all_jobs(self): + with self.redis.pipeline() as pipe: + pipe.delete(self.jobs_key) + pipe.delete(self.run_times_key) + pipe.execute() + + def shutdown(self): + self.redis.connection_pool.disconnect() + + def _reconstitute_job(self, job_state): + job_state = pickle.loads(job_state) + job = Job.__new__(Job) + job.__setstate__(job_state) + job._scheduler = self._scheduler + job._jobstore_alias = self._alias + return job + + def _reconstitute_jobs(self, job_states): + jobs = [] + failed_job_ids = [] + for job_id, job_state in job_states: + try: + jobs.append(self._reconstitute_job(job_state)) + except BaseException: + 
self._logger.exception('Unable to restore job "%s" -- removing it', job_id) + failed_job_ids.append(job_id) + + # Remove all the jobs we failed to restore + if failed_job_ids: + with self.redis.pipeline() as pipe: + pipe.hdel(self.jobs_key, *failed_job_ids) + pipe.zrem(self.run_times_key, *failed_job_ids) + pipe.execute() + + return jobs + + def __repr__(self): + return '<%s>' % self.__class__.__name__ diff --git a/python/lib/python3.11/site-packages/apscheduler/jobstores/rethinkdb.py b/python/lib/python3.11/site-packages/apscheduler/jobstores/rethinkdb.py new file mode 100644 index 0000000..d8a78cd --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/jobstores/rethinkdb.py @@ -0,0 +1,155 @@ +from __future__ import absolute_import + +from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError +from apscheduler.util import maybe_ref, datetime_to_utc_timestamp, utc_timestamp_to_datetime +from apscheduler.job import Job + +try: + import cPickle as pickle +except ImportError: # pragma: nocover + import pickle + +try: + from rethinkdb import RethinkDB +except ImportError: # pragma: nocover + raise ImportError('RethinkDBJobStore requires rethinkdb installed') + + +class RethinkDBJobStore(BaseJobStore): + """ + Stores jobs in a RethinkDB database. Any leftover keyword arguments are directly passed to + rethinkdb's `RethinkdbClient <http://www.rethinkdb.com/api/#connect>`_. + + Plugin alias: ``rethinkdb`` + + :param str database: database to store jobs in + :param str collection: collection to store jobs in + :param client: a :class:`rethinkdb.net.Connection` instance to use instead of providing + connection arguments + :param int pickle_protocol: pickle protocol level to use (for serialization), defaults to the + highest available + """ + + def __init__(self, database='apscheduler', table='jobs', client=None, + pickle_protocol=pickle.HIGHEST_PROTOCOL, **connect_args): + super(RethinkDBJobStore, self).__init__() + + if not database: + raise ValueError('The "database" parameter must not be empty') + if not table: + raise ValueError('The "table" parameter must not be empty') + + self.database = database + self.table_name = table + self.table = None + self.client = client + self.pickle_protocol = pickle_protocol + self.connect_args = connect_args + self.r = RethinkDB() + self.conn = None + + def start(self, scheduler, alias): + super(RethinkDBJobStore, self).start(scheduler, alias) + + if self.client: + self.conn = maybe_ref(self.client) + else: + self.conn = self.r.connect(db=self.database, **self.connect_args) + + if self.database not in self.r.db_list().run(self.conn): + self.r.db_create(self.database).run(self.conn) + + if self.table_name not in self.r.table_list().run(self.conn): + self.r.table_create(self.table_name).run(self.conn) + + if 'next_run_time' not in self.r.table(self.table_name).index_list().run(self.conn): + self.r.table(self.table_name).index_create('next_run_time').run(self.conn) + + self.table = self.r.db(self.database).table(self.table_name) + + def lookup_job(self, job_id): + results = list(self.table.get_all(job_id).pluck('job_state').run(self.conn)) + return self._reconstitute_job(results[0]['job_state']) if results else None + + def get_due_jobs(self, now): + return self._get_jobs(self.r.row['next_run_time'] <= datetime_to_utc_timestamp(now)) + + def get_next_run_time(self): + results = list( + self.table + .filter(self.r.row['next_run_time'] != None) # noqa + .order_by(self.r.asc('next_run_time')) + .map(lambda x: 
x['next_run_time']) + .limit(1) + .run(self.conn) + ) + return utc_timestamp_to_datetime(results[0]) if results else None + + def get_all_jobs(self): + jobs = self._get_jobs() + self._fix_paused_jobs_sorting(jobs) + return jobs + + def add_job(self, job): + job_dict = { + 'id': job.id, + 'next_run_time': datetime_to_utc_timestamp(job.next_run_time), + 'job_state': self.r.binary(pickle.dumps(job.__getstate__(), self.pickle_protocol)) + } + results = self.table.insert(job_dict).run(self.conn) + if results['errors'] > 0: + raise ConflictingIdError(job.id) + + def update_job(self, job): + changes = { + 'next_run_time': datetime_to_utc_timestamp(job.next_run_time), + 'job_state': self.r.binary(pickle.dumps(job.__getstate__(), self.pickle_protocol)) + } + results = self.table.get_all(job.id).update(changes).run(self.conn) + skipped = False in map(lambda x: results[x] == 0, results.keys()) + if results['skipped'] > 0 or results['errors'] > 0 or not skipped: + raise JobLookupError(job.id) + + def remove_job(self, job_id): + results = self.table.get_all(job_id).delete().run(self.conn) + if results['deleted'] + results['skipped'] != 1: + raise JobLookupError(job_id) + + def remove_all_jobs(self): + self.table.delete().run(self.conn) + + def shutdown(self): + self.conn.close() + + def _reconstitute_job(self, job_state): + job_state = pickle.loads(job_state) + job = Job.__new__(Job) + job.__setstate__(job_state) + job._scheduler = self._scheduler + job._jobstore_alias = self._alias + return job + + def _get_jobs(self, predicate=None): + jobs = [] + failed_job_ids = [] + query = (self.table.filter(self.r.row['next_run_time'] != None).filter(predicate) # noqa + if predicate else self.table) + query = query.order_by('next_run_time', 'id').pluck('id', 'job_state') + + for document in query.run(self.conn): + try: + jobs.append(self._reconstitute_job(document['job_state'])) + except Exception: + self._logger.exception('Unable to restore job "%s" -- removing it', document['id']) + failed_job_ids.append(document['id']) + + # Remove all the jobs we failed to restore + if failed_job_ids: + self.r.expr(failed_job_ids).for_each( + lambda job_id: self.table.get_all(job_id).delete()).run(self.conn) + + return jobs + + def __repr__(self): + connection = self.conn + return '<%s (connection=%s)>' % (self.__class__.__name__, connection) diff --git a/python/lib/python3.11/site-packages/apscheduler/jobstores/sqlalchemy.py b/python/lib/python3.11/site-packages/apscheduler/jobstores/sqlalchemy.py new file mode 100644 index 0000000..716549b --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/jobstores/sqlalchemy.py @@ -0,0 +1,161 @@ +from __future__ import absolute_import + +from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError +from apscheduler.util import maybe_ref, datetime_to_utc_timestamp, utc_timestamp_to_datetime +from apscheduler.job import Job + +try: + import cPickle as pickle +except ImportError: # pragma: nocover + import pickle + +try: + from sqlalchemy import ( + create_engine, Table, Column, MetaData, Unicode, Float, LargeBinary, select, and_) + from sqlalchemy.exc import IntegrityError + from sqlalchemy.sql.expression import null +except ImportError: # pragma: nocover + raise ImportError('SQLAlchemyJobStore requires SQLAlchemy installed') + + +class SQLAlchemyJobStore(BaseJobStore): + """ + Stores jobs in a database table using SQLAlchemy. + The table will be created if it doesn't exist in the database. 
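A minimal usage sketch, assuming APScheduler's BackgroundScheduler; the SQLite URL keeps the example self-contained, but any SQLAlchemy URL works:

    from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
    from apscheduler.schedulers.background import BackgroundScheduler

    jobstores = {'default': SQLAlchemyJobStore(url='sqlite:///jobs.sqlite')}
    scheduler = BackgroundScheduler(jobstores=jobstores)
    scheduler.start()  # creates the jobs table on first start if it is missing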
+ + Plugin alias: ``sqlalchemy`` + + :param str url: connection string (see + :ref:`SQLAlchemy documentation <sqlalchemy:database_urls>` on this) + :param engine: an SQLAlchemy :class:`~sqlalchemy.engine.Engine` to use instead of creating a + new one based on ``url`` + :param str tablename: name of the table to store jobs in + :param metadata: a :class:`~sqlalchemy.schema.MetaData` instance to use instead of creating a + new one + :param int pickle_protocol: pickle protocol level to use (for serialization), defaults to the + highest available + :param str tableschema: name of the (existing) schema in the target database where the table + should be + :param dict engine_options: keyword arguments to :func:`~sqlalchemy.create_engine` + (ignored if ``engine`` is given) + """ + + def __init__(self, url=None, engine=None, tablename='apscheduler_jobs', metadata=None, + pickle_protocol=pickle.HIGHEST_PROTOCOL, tableschema=None, engine_options=None): + super(SQLAlchemyJobStore, self).__init__() + self.pickle_protocol = pickle_protocol + metadata = maybe_ref(metadata) or MetaData() + + if engine: + self.engine = maybe_ref(engine) + elif url: + self.engine = create_engine(url, **(engine_options or {})) + else: + raise ValueError('Need either "engine" or "url" defined') + + # 191 = max key length in MySQL for InnoDB/utf8mb4 tables, + # 25 = precision that translates to an 8-byte float + self.jobs_t = Table( + tablename, metadata, + Column('id', Unicode(191), primary_key=True), + Column('next_run_time', Float(25), index=True), + Column('job_state', LargeBinary, nullable=False), + schema=tableschema + ) + + def start(self, scheduler, alias): + super(SQLAlchemyJobStore, self).start(scheduler, alias) + self.jobs_t.create(self.engine, True) + + def lookup_job(self, job_id): + selectable = select(self.jobs_t.c.job_state).where(self.jobs_t.c.id == job_id) + with self.engine.begin() as connection: + job_state = connection.execute(selectable).scalar() + return self._reconstitute_job(job_state) if job_state else None + + def get_due_jobs(self, now): + timestamp = datetime_to_utc_timestamp(now) + return self._get_jobs(self.jobs_t.c.next_run_time <= timestamp) + + def get_next_run_time(self): + selectable = select(self.jobs_t.c.next_run_time).\ + where(self.jobs_t.c.next_run_time != null()).\ + order_by(self.jobs_t.c.next_run_time).limit(1) + with self.engine.begin() as connection: + next_run_time = connection.execute(selectable).scalar() + return utc_timestamp_to_datetime(next_run_time) + + def get_all_jobs(self): + jobs = self._get_jobs() + self._fix_paused_jobs_sorting(jobs) + return jobs + + def add_job(self, job): + insert = self.jobs_t.insert().values(**{ + 'id': job.id, + 'next_run_time': datetime_to_utc_timestamp(job.next_run_time), + 'job_state': pickle.dumps(job.__getstate__(), self.pickle_protocol) + }) + with self.engine.begin() as connection: + try: + connection.execute(insert) + except IntegrityError: + raise ConflictingIdError(job.id) + + def update_job(self, job): + update = self.jobs_t.update().values(**{ + 'next_run_time': datetime_to_utc_timestamp(job.next_run_time), + 'job_state': pickle.dumps(job.__getstate__(), self.pickle_protocol) + }).where(self.jobs_t.c.id == job.id) + with self.engine.begin() as connection: + result = connection.execute(update) + if result.rowcount == 0: + raise JobLookupError(job.id) + + def remove_job(self, job_id): + delete = self.jobs_t.delete().where(self.jobs_t.c.id == job_id) + with self.engine.begin() as connection: + result = connection.execute(delete) + if 
result.rowcount == 0: + raise JobLookupError(job_id) + + def remove_all_jobs(self): + delete = self.jobs_t.delete() + with self.engine.begin() as connection: + connection.execute(delete) + + def shutdown(self): + self.engine.dispose() + + def _reconstitute_job(self, job_state): + job_state = pickle.loads(job_state) + job_state['jobstore'] = self + job = Job.__new__(Job) + job.__setstate__(job_state) + job._scheduler = self._scheduler + job._jobstore_alias = self._alias + return job + + def _get_jobs(self, *conditions): + jobs = [] + selectable = select(self.jobs_t.c.id, self.jobs_t.c.job_state).\ + order_by(self.jobs_t.c.next_run_time) + selectable = selectable.where(and_(*conditions)) if conditions else selectable + failed_job_ids = set() + with self.engine.begin() as connection: + for row in connection.execute(selectable): + try: + jobs.append(self._reconstitute_job(row.job_state)) + except BaseException: + self._logger.exception('Unable to restore job "%s" -- removing it', row.id) + failed_job_ids.add(row.id) + + # Remove all the jobs we failed to restore + if failed_job_ids: + delete = self.jobs_t.delete().where(self.jobs_t.c.id.in_(failed_job_ids)) + connection.execute(delete) + + return jobs + + def __repr__(self): + return '<%s (url=%s)>' % (self.__class__.__name__, self.engine.url) diff --git a/python/lib/python3.11/site-packages/apscheduler/jobstores/zookeeper.py b/python/lib/python3.11/site-packages/apscheduler/jobstores/zookeeper.py new file mode 100644 index 0000000..5253069 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/jobstores/zookeeper.py @@ -0,0 +1,178 @@ +from __future__ import absolute_import + +from datetime import datetime + +from pytz import utc +from kazoo.exceptions import NoNodeError, NodeExistsError + +from apscheduler.jobstores.base import BaseJobStore, JobLookupError, ConflictingIdError +from apscheduler.util import maybe_ref, datetime_to_utc_timestamp, utc_timestamp_to_datetime +from apscheduler.job import Job + +try: + import cPickle as pickle +except ImportError: # pragma: nocover + import pickle + +try: + from kazoo.client import KazooClient +except ImportError: # pragma: nocover + raise ImportError('ZooKeeperJobStore requires Kazoo installed') + + +class ZooKeeperJobStore(BaseJobStore): + """ + Stores jobs in a ZooKeeper tree. Any leftover keyword arguments are directly passed to + kazoo's `KazooClient + <http://kazoo.readthedocs.io/en/latest/api/client.html>`_. 
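# Illustrative sketch: using the ZooKeeper store. The "localhost:2181" address
# is an assumption for the example; extra connect_args such as ``hosts`` are
# passed straight through to kazoo's KazooClient.
from apscheduler.jobstores.zookeeper import ZooKeeperJobStore
from apscheduler.schedulers.background import BackgroundScheduler

store = ZooKeeperJobStore(path='/apscheduler', hosts='localhost:2181')
scheduler = BackgroundScheduler(jobstores={'default': store})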
+ + Plugin alias: ``zookeeper`` + + :param str path: path to store jobs in + :param client: a :class:`~kazoo.client.KazooClient` instance to use instead of + providing connection arguments + :param int pickle_protocol: pickle protocol level to use (for serialization), defaults to the + highest available + """ + + def __init__(self, path='/apscheduler', client=None, close_connection_on_exit=False, + pickle_protocol=pickle.HIGHEST_PROTOCOL, **connect_args): + super(ZooKeeperJobStore, self).__init__() + self.pickle_protocol = pickle_protocol + self.close_connection_on_exit = close_connection_on_exit + + if not path: + raise ValueError('The "path" parameter must not be empty') + + self.path = path + + if client: + self.client = maybe_ref(client) + else: + self.client = KazooClient(**connect_args) + self._ensured_path = False + + def _ensure_paths(self): + if not self._ensured_path: + self.client.ensure_path(self.path) + self._ensured_path = True + + def start(self, scheduler, alias): + super(ZooKeeperJobStore, self).start(scheduler, alias) + if not self.client.connected: + self.client.start() + + def lookup_job(self, job_id): + self._ensure_paths() + node_path = self.path + "/" + str(job_id) + try: + content, _ = self.client.get(node_path) + doc = pickle.loads(content) + job = self._reconstitute_job(doc['job_state']) + return job + except BaseException: + return None + + def get_due_jobs(self, now): + timestamp = datetime_to_utc_timestamp(now) + jobs = [job_def['job'] for job_def in self._get_jobs() + if job_def['next_run_time'] is not None and job_def['next_run_time'] <= timestamp] + return jobs + + def get_next_run_time(self): + next_runs = [job_def['next_run_time'] for job_def in self._get_jobs() + if job_def['next_run_time'] is not None] + return utc_timestamp_to_datetime(min(next_runs)) if len(next_runs) > 0 else None + + def get_all_jobs(self): + jobs = [job_def['job'] for job_def in self._get_jobs()] + self._fix_paused_jobs_sorting(jobs) + return jobs + + def add_job(self, job): + self._ensure_paths() + node_path = self.path + "/" + str(job.id) + value = { + 'next_run_time': datetime_to_utc_timestamp(job.next_run_time), + 'job_state': job.__getstate__() + } + data = pickle.dumps(value, self.pickle_protocol) + try: + self.client.create(node_path, value=data) + except NodeExistsError: + raise ConflictingIdError(job.id) + + def update_job(self, job): + self._ensure_paths() + node_path = self.path + "/" + str(job.id) + changes = { + 'next_run_time': datetime_to_utc_timestamp(job.next_run_time), + 'job_state': job.__getstate__() + } + data = pickle.dumps(changes, self.pickle_protocol) + try: + self.client.set(node_path, value=data) + except NoNodeError: + raise JobLookupError(job.id) + + def remove_job(self, job_id): + self._ensure_paths() + node_path = self.path + "/" + str(job_id) + try: + self.client.delete(node_path) + except NoNodeError: + raise JobLookupError(job_id) + + def remove_all_jobs(self): + try: + self.client.delete(self.path, recursive=True) + except NoNodeError: + pass + self._ensured_path = False + + def shutdown(self): + if self.close_connection_on_exit: + self.client.stop() + self.client.close() + + def _reconstitute_job(self, job_state): + job_state = job_state + job = Job.__new__(Job) + job.__setstate__(job_state) + job._scheduler = self._scheduler + job._jobstore_alias = self._alias + return job + + def _get_jobs(self): + self._ensure_paths() + jobs = [] + failed_job_ids = [] + all_ids = self.client.get_children(self.path) + for node_name in all_ids: + try: + node_path 
= self.path + "/" + node_name + content, _ = self.client.get(node_path) + doc = pickle.loads(content) + job_def = { + 'job_id': node_name, + 'next_run_time': doc['next_run_time'] if doc['next_run_time'] else None, + 'job_state': doc['job_state'], + 'job': self._reconstitute_job(doc['job_state']), + 'creation_time': _.ctime + } + jobs.append(job_def) + except BaseException: + self._logger.exception('Unable to restore job "%s" -- removing it', node_name) + failed_job_ids.append(node_name) + + # Remove all the jobs we failed to restore + if failed_job_ids: + for failed_id in failed_job_ids: + self.remove_job(failed_id) + paused_sort_key = datetime(9999, 12, 31, tzinfo=utc) + return sorted(jobs, key=lambda job_def: (job_def['job'].next_run_time or paused_sort_key, + job_def['creation_time'])) + + def __repr__(self): + return '<%s (client=%s)>' % (self.__class__.__name__, self.client) diff --git a/python/lib/python3.11/site-packages/apscheduler/schedulers/__init__.py b/python/lib/python3.11/site-packages/apscheduler/schedulers/__init__.py new file mode 100644 index 0000000..bd8a790 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/schedulers/__init__.py @@ -0,0 +1,12 @@ +class SchedulerAlreadyRunningError(Exception): + """Raised when attempting to start or configure the scheduler when it's already running.""" + + def __str__(self): + return 'Scheduler is already running' + + +class SchedulerNotRunningError(Exception): + """Raised when attempting to shutdown the scheduler when it's not running.""" + + def __str__(self): + return 'Scheduler is not running' diff --git a/python/lib/python3.11/site-packages/apscheduler/schedulers/asyncio.py b/python/lib/python3.11/site-packages/apscheduler/schedulers/asyncio.py new file mode 100644 index 0000000..8bcdfda --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/schedulers/asyncio.py @@ -0,0 +1,66 @@ +from __future__ import absolute_import +import asyncio +from functools import wraps, partial + +from apscheduler.schedulers.base import BaseScheduler +from apscheduler.util import maybe_ref + + +def run_in_event_loop(func): + @wraps(func) + def wrapper(self, *args, **kwargs): + wrapped = partial(func, self, *args, **kwargs) + self._eventloop.call_soon_threadsafe(wrapped) + return wrapper + + +class AsyncIOScheduler(BaseScheduler): + """ + A scheduler that runs on an asyncio (:pep:`3156`) event loop. + + The default executor can run jobs based on native coroutines (``async def``).
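# Illustrative sketch: scheduling a native coroutine on AsyncIOScheduler with
# an interval trigger; the event loop must be running for jobs to fire.
import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler

async def tick():
    print('tick')

scheduler = AsyncIOScheduler()
scheduler.add_job(tick, 'interval', seconds=3)
scheduler.start()
asyncio.get_event_loop().run_forever()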
+ + Extra options: + + ============== ============================================================= + ``event_loop`` AsyncIO event loop to use (defaults to the global event loop) + ============== ============================================================= + """ + + _eventloop = None + _timeout = None + + def start(self, paused=False): + if not self._eventloop: + self._eventloop = asyncio.get_event_loop() + + super(AsyncIOScheduler, self).start(paused) + + @run_in_event_loop + def shutdown(self, wait=True): + super(AsyncIOScheduler, self).shutdown(wait) + self._stop_timer() + + def _configure(self, config): + self._eventloop = maybe_ref(config.pop('event_loop', None)) + super(AsyncIOScheduler, self)._configure(config) + + def _start_timer(self, wait_seconds): + self._stop_timer() + if wait_seconds is not None: + self._timeout = self._eventloop.call_later(wait_seconds, self.wakeup) + + def _stop_timer(self): + if self._timeout: + self._timeout.cancel() + del self._timeout + + @run_in_event_loop + def wakeup(self): + self._stop_timer() + wait_seconds = self._process_jobs() + self._start_timer(wait_seconds) + + def _create_default_executor(self): + from apscheduler.executors.asyncio import AsyncIOExecutor + return AsyncIOExecutor() diff --git a/python/lib/python3.11/site-packages/apscheduler/schedulers/background.py b/python/lib/python3.11/site-packages/apscheduler/schedulers/background.py new file mode 100644 index 0000000..bb8f77d --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/schedulers/background.py @@ -0,0 +1,43 @@ +from __future__ import absolute_import + +from threading import Thread, Event + +from apscheduler.schedulers.base import BaseScheduler +from apscheduler.schedulers.blocking import BlockingScheduler +from apscheduler.util import asbool + + +class BackgroundScheduler(BlockingScheduler): + """ + A scheduler that runs in the background using a separate thread + (:meth:`~apscheduler.schedulers.base.BaseScheduler.start` will return immediately). 
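# Illustrative sketch: the background scheduler hands control back right after
# start(), so the main thread must be kept alive (here with a sleep loop).
import time
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.add_job(lambda: print('tick'), 'interval', seconds=5)
scheduler.start()
try:
    while True:
        time.sleep(1)
except (KeyboardInterrupt, SystemExit):
    scheduler.shutdown()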
+ + Extra options: + + ========== ============================================================================= + ``daemon`` Set the ``daemon`` option in the background thread (defaults to ``True``, see + `the documentation + <https://docs.python.org/3.4/library/threading.html#thread-objects>`_ + for further details) + ========== ============================================================================= + """ + + _thread = None + + def _configure(self, config): + self._daemon = asbool(config.pop('daemon', True)) + super(BackgroundScheduler, self)._configure(config) + + def start(self, *args, **kwargs): + if self._event is None or self._event.is_set(): + self._event = Event() + + BaseScheduler.start(self, *args, **kwargs) + self._thread = Thread(target=self._main_loop, name='APScheduler') + self._thread.daemon = self._daemon + self._thread.start() + + def shutdown(self, *args, **kwargs): + super(BackgroundScheduler, self).shutdown(*args, **kwargs) + self._thread.join() + del self._thread diff --git a/python/lib/python3.11/site-packages/apscheduler/schedulers/base.py b/python/lib/python3.11/site-packages/apscheduler/schedulers/base.py new file mode 100644 index 0000000..444de8e --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/schedulers/base.py @@ -0,0 +1,1026 @@ +from __future__ import print_function + +from abc import ABCMeta, abstractmethod +from threading import RLock +from datetime import datetime, timedelta +from logging import getLogger +import warnings +import sys + +from pkg_resources import iter_entry_points +from tzlocal import get_localzone +import six + +from apscheduler.schedulers import SchedulerAlreadyRunningError, SchedulerNotRunningError +from apscheduler.executors.base import MaxInstancesReachedError, BaseExecutor +from apscheduler.executors.pool import ThreadPoolExecutor +from apscheduler.jobstores.base import ConflictingIdError, JobLookupError, BaseJobStore +from apscheduler.jobstores.memory import MemoryJobStore +from apscheduler.job import Job +from apscheduler.triggers.base import BaseTrigger +from apscheduler.util import ( + asbool, asint, astimezone, maybe_ref, timedelta_seconds, undefined, TIMEOUT_MAX) +from apscheduler.events import ( + SchedulerEvent, JobEvent, JobSubmissionEvent, EVENT_SCHEDULER_START, EVENT_SCHEDULER_SHUTDOWN, + EVENT_JOBSTORE_ADDED, EVENT_JOBSTORE_REMOVED, EVENT_ALL, EVENT_JOB_MODIFIED, EVENT_JOB_REMOVED, + EVENT_JOB_ADDED, EVENT_EXECUTOR_ADDED, EVENT_EXECUTOR_REMOVED, EVENT_ALL_JOBS_REMOVED, + EVENT_JOB_SUBMITTED, EVENT_JOB_MAX_INSTANCES, EVENT_SCHEDULER_RESUMED, EVENT_SCHEDULER_PAUSED) + +try: + from collections.abc import MutableMapping +except ImportError: + from collections import MutableMapping + +#: constant indicating a scheduler's stopped state +STATE_STOPPED = 0 +#: constant indicating a scheduler's running state (started and processing jobs) +STATE_RUNNING = 1 +#: constant indicating a scheduler's paused state (started but not processing jobs) +STATE_PAUSED = 2 + + +class BaseScheduler(six.with_metaclass(ABCMeta)): + """ + Abstract base class for all schedulers. 
+ + Takes the following keyword arguments: + + :param str|logging.Logger logger: logger to use for the scheduler's logging (defaults to + apscheduler.scheduler) + :param str|datetime.tzinfo timezone: the default time zone (defaults to the local timezone) + :param int|float jobstore_retry_interval: the minimum number of seconds to wait between + retries in the scheduler's main loop if the job store raises an exception when getting + the list of due jobs + :param dict job_defaults: default values for newly added jobs + :param dict jobstores: a dictionary of job store alias -> job store instance or configuration + dict + :param dict executors: a dictionary of executor alias -> executor instance or configuration + dict + + :ivar int state: current running state of the scheduler (one of the following constants from + ``apscheduler.schedulers.base``: ``STATE_STOPPED``, ``STATE_RUNNING``, ``STATE_PAUSED``) + + .. seealso:: :ref:`scheduler-config` + """ + + _trigger_plugins = dict((ep.name, ep) for ep in iter_entry_points('apscheduler.triggers')) + _trigger_classes = {} + _executor_plugins = dict((ep.name, ep) for ep in iter_entry_points('apscheduler.executors')) + _executor_classes = {} + _jobstore_plugins = dict((ep.name, ep) for ep in iter_entry_points('apscheduler.jobstores')) + _jobstore_classes = {} + + # + # Public API + # + + def __init__(self, gconfig={}, **options): + super(BaseScheduler, self).__init__() + self._executors = {} + self._executors_lock = self._create_lock() + self._jobstores = {} + self._jobstores_lock = self._create_lock() + self._listeners = [] + self._listeners_lock = self._create_lock() + self._pending_jobs = [] + self.state = STATE_STOPPED + self.configure(gconfig, **options) + + def __getstate__(self): + raise TypeError("Schedulers cannot be serialized. Ensure that you are not passing a " + "scheduler instance as an argument to a job, or scheduling an instance " + "method where the instance contains a scheduler as an attribute.") + + def configure(self, gconfig={}, prefix='apscheduler.', **options): + """ + Reconfigures the scheduler with the given options. + + Can only be done when the scheduler isn't running. + + :param dict gconfig: a "global" configuration dictionary whose values can be overridden by + keyword arguments to this method + :param str|unicode prefix: pick only those keys from ``gconfig`` that are prefixed with + this string (pass an empty string or ``None`` to use all keys) + :raises SchedulerAlreadyRunningError: if the scheduler is already running + + """ + if self.state != STATE_STOPPED: + raise SchedulerAlreadyRunningError + + # If a non-empty prefix was given, strip it from the keys in the + # global configuration dict + if prefix: + prefixlen = len(prefix) + gconfig = dict((key[prefixlen:], value) for key, value in six.iteritems(gconfig) + if key.startswith(prefix)) + + # Create a structure from the dotted options + # (e.g. "a.b.c = d" -> {'a': {'b': {'c': 'd'}}}) + config = {} + for key, value in six.iteritems(gconfig): + parts = key.split('.') + parent = config + key = parts.pop(0) + while parts: + parent = parent.setdefault(key, {}) + key = parts.pop(0) + parent[key] = value + + # Override any options with explicit keyword arguments + config.update(options) + self._configure(config) + + def start(self, paused=False): + """ + Start the configured executors and job stores and begin processing scheduled jobs. 
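# Illustrative sketch of the dotted-key expansion performed by configure():
# keys carrying the "apscheduler." prefix are stripped and nested, so the dict
# below yields {'executors': {'default': {...}}, 'job_defaults': {...}}. The
# option values are arbitrary example choices.
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.configure({
    'apscheduler.timezone': 'UTC',
    'apscheduler.job_defaults.max_instances': '2',
    'apscheduler.executors.default.type': 'threadpool',
    'apscheduler.executors.default.max_workers': '20',
})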
+ + :param bool paused: if ``True``, don't start job processing until :meth:`resume` is called + :raises SchedulerAlreadyRunningError: if the scheduler is already running + :raises RuntimeError: if running under uWSGI with threads disabled + + """ + if self.state != STATE_STOPPED: + raise SchedulerAlreadyRunningError + + self._check_uwsgi() + + with self._executors_lock: + # Create a default executor if nothing else is configured + if 'default' not in self._executors: + self.add_executor(self._create_default_executor(), 'default') + + # Start all the executors + for alias, executor in six.iteritems(self._executors): + executor.start(self, alias) + + with self._jobstores_lock: + # Create a default job store if nothing else is configured + if 'default' not in self._jobstores: + self.add_jobstore(self._create_default_jobstore(), 'default') + + # Start all the job stores + for alias, store in six.iteritems(self._jobstores): + store.start(self, alias) + + # Schedule all pending jobs + for job, jobstore_alias, replace_existing in self._pending_jobs: + self._real_add_job(job, jobstore_alias, replace_existing) + del self._pending_jobs[:] + + self.state = STATE_PAUSED if paused else STATE_RUNNING + self._logger.info('Scheduler started') + self._dispatch_event(SchedulerEvent(EVENT_SCHEDULER_START)) + + if not paused: + self.wakeup() + + @abstractmethod + def shutdown(self, wait=True): + """ + Shuts down the scheduler, along with its executors and job stores. + + Does not interrupt any currently running jobs. + + :param bool wait: ``True`` to wait until all currently executing jobs have finished + :raises SchedulerNotRunningError: if the scheduler has not been started yet + + """ + if self.state == STATE_STOPPED: + raise SchedulerNotRunningError + + self.state = STATE_STOPPED + + # Shut down all executors + with self._executors_lock, self._jobstores_lock: + for executor in six.itervalues(self._executors): + executor.shutdown(wait) + + # Shut down all job stores + for jobstore in six.itervalues(self._jobstores): + jobstore.shutdown() + + self._logger.info('Scheduler has been shut down') + self._dispatch_event(SchedulerEvent(EVENT_SCHEDULER_SHUTDOWN)) + + def pause(self): + """ + Pause job processing in the scheduler. + + This will prevent the scheduler from waking up to do job processing until :meth:`resume` + is called. It will not however stop any already running job processing. + + """ + if self.state == STATE_STOPPED: + raise SchedulerNotRunningError + elif self.state == STATE_RUNNING: + self.state = STATE_PAUSED + self._logger.info('Paused scheduler job processing') + self._dispatch_event(SchedulerEvent(EVENT_SCHEDULER_PAUSED)) + + def resume(self): + """Resume job processing in the scheduler.""" + if self.state == STATE_STOPPED: + raise SchedulerNotRunningError + elif self.state == STATE_PAUSED: + self.state = STATE_RUNNING + self._logger.info('Resumed scheduler job processing') + self._dispatch_event(SchedulerEvent(EVENT_SCHEDULER_RESUMED)) + self.wakeup() + + @property + def running(self): + """ + Return ``True`` if the scheduler has been started. + + This is a shortcut for ``scheduler.state != STATE_STOPPED``. + + """ + return self.state != STATE_STOPPED + + def add_executor(self, executor, alias='default', **executor_opts): + """ + Adds an executor to this scheduler. + + Any extra keyword arguments will be passed to the executor plugin's constructor, assuming + that the first argument is the name of an executor plugin. 
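# Illustrative sketch of add_executor(): registering by plugin alias (extra
# kwargs go to the plugin constructor) and by passing an instance directly.
# The aliases 'workers' and 'processes' are arbitrary names for the example.
from apscheduler.executors.pool import ProcessPoolExecutor
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.add_executor('threadpool', alias='workers', max_workers=10)
scheduler.add_executor(ProcessPoolExecutor(5), alias='processes')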
+ + :param str|unicode|apscheduler.executors.base.BaseExecutor executor: either an executor + instance or the name of an executor plugin + :param str|unicode alias: alias for the scheduler + :raises ValueError: if there is already an executor by the given alias + + """ + with self._executors_lock: + if alias in self._executors: + raise ValueError('This scheduler already has an executor by the alias of "%s"' % + alias) + + if isinstance(executor, BaseExecutor): + self._executors[alias] = executor + elif isinstance(executor, six.string_types): + self._executors[alias] = executor = self._create_plugin_instance( + 'executor', executor, executor_opts) + else: + raise TypeError('Expected an executor instance or a string, got %s instead' % + executor.__class__.__name__) + + # Start the executor right away if the scheduler is running + if self.state != STATE_STOPPED: + executor.start(self, alias) + + self._dispatch_event(SchedulerEvent(EVENT_EXECUTOR_ADDED, alias)) + + def remove_executor(self, alias, shutdown=True): + """ + Removes the executor by the given alias from this scheduler. + + :param str|unicode alias: alias of the executor + :param bool shutdown: ``True`` to shut down the executor after + removing it + + """ + with self._executors_lock: + executor = self._lookup_executor(alias) + del self._executors[alias] + + if shutdown: + executor.shutdown() + + self._dispatch_event(SchedulerEvent(EVENT_EXECUTOR_REMOVED, alias)) + + def add_jobstore(self, jobstore, alias='default', **jobstore_opts): + """ + Adds a job store to this scheduler. + + Any extra keyword arguments will be passed to the job store plugin's constructor, assuming + that the first argument is the name of a job store plugin. + + :param str|unicode|apscheduler.jobstores.base.BaseJobStore jobstore: job store to be added + :param str|unicode alias: alias for the job store + :raises ValueError: if there is already a job store by the given alias + + """ + with self._jobstores_lock: + if alias in self._jobstores: + raise ValueError('This scheduler already has a job store by the alias of "%s"' % + alias) + + if isinstance(jobstore, BaseJobStore): + self._jobstores[alias] = jobstore + elif isinstance(jobstore, six.string_types): + self._jobstores[alias] = jobstore = self._create_plugin_instance( + 'jobstore', jobstore, jobstore_opts) + else: + raise TypeError('Expected a job store instance or a string, got %s instead' % + jobstore.__class__.__name__) + + # Start the job store right away if the scheduler isn't stopped + if self.state != STATE_STOPPED: + jobstore.start(self, alias) + + # Notify listeners that a new job store has been added + self._dispatch_event(SchedulerEvent(EVENT_JOBSTORE_ADDED, alias)) + + # Notify the scheduler so it can scan the new job store for jobs + if self.state != STATE_STOPPED: + self.wakeup() + + def remove_jobstore(self, alias, shutdown=True): + """ + Removes the job store by the given alias from this scheduler. + + :param str|unicode alias: alias of the job store + :param bool shutdown: ``True`` to shut down the job store after removing it + + """ + with self._jobstores_lock: + jobstore = self._lookup_jobstore(alias) + del self._jobstores[alias] + + if shutdown: + jobstore.shutdown() + + self._dispatch_event(SchedulerEvent(EVENT_JOBSTORE_REMOVED, alias)) + + def add_listener(self, callback, mask=EVENT_ALL): + """ + add_listener(callback, mask=EVENT_ALL) + + Adds a listener for scheduler events. + + When a matching event occurs, ``callback`` is executed with the event object as its + sole argument. 
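# Illustrative sketch of add_listener() with an event mask; EVENT_JOB_EXECUTED
# and EVENT_JOB_ERROR come from apscheduler.events, and the listener receives
# a JobExecutionEvent carrying job_id and any raised exception.
from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_EXECUTED
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

def on_job_done(event):
    if event.exception:
        print('job %s raised' % event.job_id)
    else:
        print('job %s ran' % event.job_id)

scheduler.add_listener(on_job_done, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)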
If the ``mask`` parameter is not provided, the callback will receive events + of all types. + + :param callback: any callable that takes one argument + :param int mask: bitmask that indicates which events should be + listened to + + .. seealso:: :mod:`apscheduler.events` + .. seealso:: :ref:`scheduler-events` + + """ + with self._listeners_lock: + self._listeners.append((callback, mask)) + + def remove_listener(self, callback): + """Removes a previously added event listener.""" + + with self._listeners_lock: + for i, (cb, _) in enumerate(self._listeners): + if callback == cb: + del self._listeners[i] + + def add_job(self, func, trigger=None, args=None, kwargs=None, id=None, name=None, + misfire_grace_time=undefined, coalesce=undefined, max_instances=undefined, + next_run_time=undefined, jobstore='default', executor='default', + replace_existing=False, **trigger_args): + """ + add_job(func, trigger=None, args=None, kwargs=None, id=None, \ + name=None, misfire_grace_time=undefined, coalesce=undefined, \ + max_instances=undefined, next_run_time=undefined, \ + jobstore='default', executor='default', \ + replace_existing=False, **trigger_args) + + Adds the given job to the job list and wakes up the scheduler if it's already running. + + Any option that defaults to ``undefined`` will be replaced with the corresponding default + value when the job is scheduled (which happens when the scheduler is started, or + immediately if the scheduler is already running). + + The ``func`` argument can be given either as a callable object or a textual reference in + the ``package.module:some.object`` format, where the first half (separated by ``:``) is an + importable module and the second half is a reference to the callable object, relative to + the module. + + The ``trigger`` argument can either be: + #. the alias name of the trigger (e.g. ``date``, ``interval`` or ``cron``), in which case + any extra keyword arguments to this method are passed on to the trigger's constructor + #. 
an instance of a trigger class + + :param func: callable (or a textual reference to one) to run at the given time + :param str|apscheduler.triggers.base.BaseTrigger trigger: trigger that determines when + ``func`` is called + :param list|tuple args: list of positional arguments to call func with + :param dict kwargs: dict of keyword arguments to call func with + :param str|unicode id: explicit identifier for the job (for modifying it later) + :param str|unicode name: textual description of the job + :param int misfire_grace_time: seconds after the designated runtime that the job is still + allowed to be run (or ``None`` to allow the job to run no matter how late it is) + :param bool coalesce: run once instead of many times if the scheduler determines that the + job should be run more than once in succession + :param int max_instances: maximum number of concurrently running instances allowed for this + job + :param datetime next_run_time: when to first run the job, regardless of the trigger (pass + ``None`` to add the job as paused) + :param str|unicode jobstore: alias of the job store to store the job in + :param str|unicode executor: alias of the executor to run the job with + :param bool replace_existing: ``True`` to replace an existing job with the same ``id`` + (but retain the number of runs from the existing one) + :rtype: Job + + """ + job_kwargs = { + 'trigger': self._create_trigger(trigger, trigger_args), + 'executor': executor, + 'func': func, + 'args': tuple(args) if args is not None else (), + 'kwargs': dict(kwargs) if kwargs is not None else {}, + 'id': id, + 'name': name, + 'misfire_grace_time': misfire_grace_time, + 'coalesce': coalesce, + 'max_instances': max_instances, + 'next_run_time': next_run_time + } + job_kwargs = dict((key, value) for key, value in six.iteritems(job_kwargs) if + value is not undefined) + job = Job(self, **job_kwargs) + + # Don't really add jobs to job stores before the scheduler is up and running + with self._jobstores_lock: + if self.state == STATE_STOPPED: + self._pending_jobs.append((job, jobstore, replace_existing)) + self._logger.info('Adding job tentatively -- it will be properly scheduled when ' + 'the scheduler starts') + else: + self._real_add_job(job, jobstore, replace_existing) + + return job + + def scheduled_job(self, trigger, args=None, kwargs=None, id=None, name=None, + misfire_grace_time=undefined, coalesce=undefined, max_instances=undefined, + next_run_time=undefined, jobstore='default', executor='default', + **trigger_args): + """ + scheduled_job(trigger, args=None, kwargs=None, id=None, \ + name=None, misfire_grace_time=undefined, \ + coalesce=undefined, max_instances=undefined, \ + next_run_time=undefined, jobstore='default', \ + executor='default',**trigger_args) + + A decorator version of :meth:`add_job`, except that ``replace_existing`` is always + ``True``. + + .. important:: The ``id`` argument must be given if scheduling a job in a persistent job + store. The scheduler cannot, however, enforce this requirement. + + """ + def inner(func): + self.add_job(func, trigger, args, kwargs, id, name, misfire_grace_time, coalesce, + max_instances, next_run_time, jobstore, executor, True, **trigger_args) + return func + return inner + + def modify_job(self, job_id, jobstore=None, **changes): + """ + Modifies the properties of a single job. + + Modifications are passed to this method as extra keyword arguments. 
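# Illustrative sketch of the two scheduling styles documented above: trigger
# kwargs passed through add_job(), and the scheduled_job() decorator. The job
# ids 'nightly_backup' and 'heartbeat' are example names, not from this diff.
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

def backup():
    print('backing up')

scheduler.add_job(backup, 'cron', hour=3, minute=30, id='nightly_backup')

@scheduler.scheduled_job('interval', minutes=15, id='heartbeat')
def heartbeat():
    print('still alive')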
+ + :param str|unicode job_id: the identifier of the job + :param str|unicode jobstore: alias of the job store that contains the job + :return Job: the relevant job instance + + """ + with self._jobstores_lock: + job, jobstore = self._lookup_job(job_id, jobstore) + job._modify(**changes) + if jobstore: + self._lookup_jobstore(jobstore).update_job(job) + + self._dispatch_event(JobEvent(EVENT_JOB_MODIFIED, job_id, jobstore)) + + # Wake up the scheduler since the job's next run time may have been changed + if self.state == STATE_RUNNING: + self.wakeup() + + return job + + def reschedule_job(self, job_id, jobstore=None, trigger=None, **trigger_args): + """ + Constructs a new trigger for a job and updates its next run time. + + Extra keyword arguments are passed directly to the trigger's constructor. + + :param str|unicode job_id: the identifier of the job + :param str|unicode jobstore: alias of the job store that contains the job + :param trigger: alias of the trigger type or a trigger instance + :return Job: the relevant job instance + + """ + trigger = self._create_trigger(trigger, trigger_args) + now = datetime.now(self.timezone) + next_run_time = trigger.get_next_fire_time(None, now) + return self.modify_job(job_id, jobstore, trigger=trigger, next_run_time=next_run_time) + + def pause_job(self, job_id, jobstore=None): + """ + Causes the given job not to be executed until it is explicitly resumed. + + :param str|unicode job_id: the identifier of the job + :param str|unicode jobstore: alias of the job store that contains the job + :return Job: the relevant job instance + + """ + return self.modify_job(job_id, jobstore, next_run_time=None) + + def resume_job(self, job_id, jobstore=None): + """ + Resumes the schedule of the given job, or removes the job if its schedule is finished. + + :param str|unicode job_id: the identifier of the job + :param str|unicode jobstore: alias of the job store that contains the job + :return Job|None: the relevant job instance if the job was rescheduled, or ``None`` if no + next run time could be calculated and the job was removed + + """ + with self._jobstores_lock: + job, jobstore = self._lookup_job(job_id, jobstore) + now = datetime.now(self.timezone) + next_run_time = job.trigger.get_next_fire_time(None, now) + if next_run_time: + return self.modify_job(job_id, jobstore, next_run_time=next_run_time) + else: + self.remove_job(job.id, jobstore) + + def get_jobs(self, jobstore=None, pending=None): + """ + Returns a list of pending jobs (if the scheduler hasn't been started yet) and scheduled + jobs, either from a specific job store or from all of them. + + If the scheduler has not been started yet, only pending jobs can be returned because the + job stores haven't been started yet either. 
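# Illustrative sketch of the job-control calls above, on a throwaway example
# job id ('nightly_backup' is an assumption, as before):
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.add_job(print, 'cron', hour=3, args=['backup'], id='nightly_backup')

scheduler.reschedule_job('nightly_backup', trigger='cron', hour=1)
scheduler.pause_job('nightly_backup')   # clears next_run_time
scheduler.resume_job('nightly_backup')  # recomputes it from the trigger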
+ + :param str|unicode jobstore: alias of the job store + :param bool pending: **DEPRECATED** + :rtype: list[Job] + + """ + if pending is not None: + warnings.warn('The "pending" option is deprecated -- get_jobs() always returns ' + 'scheduled jobs if the scheduler has been started and pending jobs ' + 'otherwise', DeprecationWarning) + + with self._jobstores_lock: + jobs = [] + if self.state == STATE_STOPPED: + for job, alias, replace_existing in self._pending_jobs: + if jobstore is None or alias == jobstore: + jobs.append(job) + else: + for alias, store in six.iteritems(self._jobstores): + if jobstore is None or alias == jobstore: + jobs.extend(store.get_all_jobs()) + + return jobs + + def get_job(self, job_id, jobstore=None): + """ + Returns the Job that matches the given ``job_id``. + + :param str|unicode job_id: the identifier of the job + :param str|unicode jobstore: alias of the job store that most likely contains the job + :return: the Job by the given ID, or ``None`` if it wasn't found + :rtype: Job + + """ + with self._jobstores_lock: + try: + return self._lookup_job(job_id, jobstore)[0] + except JobLookupError: + return + + def remove_job(self, job_id, jobstore=None): + """ + Removes a job, preventing it from being run any more. + + :param str|unicode job_id: the identifier of the job + :param str|unicode jobstore: alias of the job store that contains the job + :raises JobLookupError: if the job was not found + + """ + jobstore_alias = None + with self._jobstores_lock: + # Check if the job is among the pending jobs + if self.state == STATE_STOPPED: + for i, (job, alias, replace_existing) in enumerate(self._pending_jobs): + if job.id == job_id and jobstore in (None, alias): + del self._pending_jobs[i] + jobstore_alias = alias + break + else: + # Otherwise, try to remove it from each store until it succeeds or we run out of + # stores to check + for alias, store in six.iteritems(self._jobstores): + if jobstore in (None, alias): + try: + store.remove_job(job_id) + jobstore_alias = alias + break + except JobLookupError: + continue + + if jobstore_alias is None: + raise JobLookupError(job_id) + + # Notify listeners that a job has been removed + event = JobEvent(EVENT_JOB_REMOVED, job_id, jobstore_alias) + self._dispatch_event(event) + + self._logger.info('Removed job %s', job_id) + + def remove_all_jobs(self, jobstore=None): + """ + Removes all jobs from the specified job store, or all job stores if none is given. + + :param str|unicode jobstore: alias of the job store + + """ + with self._jobstores_lock: + if self.state == STATE_STOPPED: + if jobstore: + self._pending_jobs = [pending for pending in self._pending_jobs if + pending[1] != jobstore] + else: + self._pending_jobs = [] + else: + for alias, store in six.iteritems(self._jobstores): + if jobstore in (None, alias): + store.remove_all_jobs() + + self._dispatch_event(SchedulerEvent(EVENT_ALL_JOBS_REMOVED, jobstore)) + + def print_jobs(self, jobstore=None, out=None): + """ + print_jobs(jobstore=None, out=sys.stdout) + + Prints out a textual listing of all jobs currently scheduled on either all job stores or + just a specific one. 
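# Illustrative sketch of the inspection helpers; before start() these report
# the pending jobs, afterwards the jobs held by the job stores.
import sys
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.add_job(print, 'interval', minutes=5, args=['tick'], id='ticker')
scheduler.print_jobs(out=sys.stdout)
for job in scheduler.get_jobs():
    print(job.id, job.trigger)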
+ + :param str|unicode jobstore: alias of the job store, ``None`` to list jobs from all stores + :param file out: a file-like object to print to (defaults to **sys.stdout** if nothing is + given) + + """ + out = out or sys.stdout + with self._jobstores_lock: + if self.state == STATE_STOPPED: + print(u'Pending jobs:', file=out) + if self._pending_jobs: + for job, jobstore_alias, replace_existing in self._pending_jobs: + if jobstore in (None, jobstore_alias): + print(u' %s' % job, file=out) + else: + print(u' No pending jobs', file=out) + else: + for alias, store in sorted(six.iteritems(self._jobstores)): + if jobstore in (None, alias): + print(u'Jobstore %s:' % alias, file=out) + jobs = store.get_all_jobs() + if jobs: + for job in jobs: + print(u' %s' % job, file=out) + else: + print(u' No scheduled jobs', file=out) + + @abstractmethod + def wakeup(self): + """ + Notifies the scheduler that there may be jobs due for execution. + Triggers :meth:`_process_jobs` to be run in an implementation specific manner. + """ + + # + # Private API + # + + def _configure(self, config): + # Set general options + self._logger = maybe_ref(config.pop('logger', None)) or getLogger('apscheduler.scheduler') + self.timezone = astimezone(config.pop('timezone', None)) or get_localzone() + self.jobstore_retry_interval = float(config.pop('jobstore_retry_interval', 10)) + + # Set the job defaults + job_defaults = config.get('job_defaults', {}) + self._job_defaults = { + 'misfire_grace_time': asint(job_defaults.get('misfire_grace_time', 1)), + 'coalesce': asbool(job_defaults.get('coalesce', True)), + 'max_instances': asint(job_defaults.get('max_instances', 1)) + } + + # Configure executors + self._executors.clear() + for alias, value in six.iteritems(config.get('executors', {})): + if isinstance(value, BaseExecutor): + self.add_executor(value, alias) + elif isinstance(value, MutableMapping): + executor_class = value.pop('class', None) + plugin = value.pop('type', None) + if plugin: + executor = self._create_plugin_instance('executor', plugin, value) + elif executor_class: + cls = maybe_ref(executor_class) + executor = cls(**value) + else: + raise ValueError( + 'Cannot create executor "%s" -- either "type" or "class" must be defined' % + alias) + + self.add_executor(executor, alias) + else: + raise TypeError( + "Expected executor instance or dict for executors['%s'], got %s instead" % + (alias, value.__class__.__name__)) + + # Configure job stores + self._jobstores.clear() + for alias, value in six.iteritems(config.get('jobstores', {})): + if isinstance(value, BaseJobStore): + self.add_jobstore(value, alias) + elif isinstance(value, MutableMapping): + jobstore_class = value.pop('class', None) + plugin = value.pop('type', None) + if plugin: + jobstore = self._create_plugin_instance('jobstore', plugin, value) + elif jobstore_class: + cls = maybe_ref(jobstore_class) + jobstore = cls(**value) + else: + raise ValueError( + 'Cannot create job store "%s" -- either "type" or "class" must be ' + 'defined' % alias) + + self.add_jobstore(jobstore, alias) + else: + raise TypeError( + "Expected job store instance or dict for jobstores['%s'], got %s instead" % + (alias, value.__class__.__name__)) + + def _create_default_executor(self): + """Creates a default executor store, specific to the particular scheduler type.""" + return ThreadPoolExecutor() + + def _create_default_jobstore(self): + """Creates a default job store, specific to the particular scheduler type.""" + return MemoryJobStore() + + def _lookup_executor(self, alias): + 
""" + Returns the executor instance by the given name from the list of executors that were added + to this scheduler. + + :type alias: str + :raises KeyError: if no executor by the given alias is not found + + """ + try: + return self._executors[alias] + except KeyError: + raise KeyError('No such executor: %s' % alias) + + def _lookup_jobstore(self, alias): + """ + Returns the job store instance by the given name from the list of job stores that were + added to this scheduler. + + :type alias: str + :raises KeyError: if no job store by the given alias is not found + + """ + try: + return self._jobstores[alias] + except KeyError: + raise KeyError('No such job store: %s' % alias) + + def _lookup_job(self, job_id, jobstore_alias): + """ + Finds a job by its ID. + + :type job_id: str + :param str jobstore_alias: alias of a job store to look in + :return tuple[Job, str]: a tuple of job, jobstore alias (jobstore alias is None in case of + a pending job) + :raises JobLookupError: if no job by the given ID is found. + + """ + if self.state == STATE_STOPPED: + # Check if the job is among the pending jobs + for job, alias, replace_existing in self._pending_jobs: + if job.id == job_id: + return job, None + else: + # Look in all job stores + for alias, store in six.iteritems(self._jobstores): + if jobstore_alias in (None, alias): + job = store.lookup_job(job_id) + if job is not None: + return job, alias + + raise JobLookupError(job_id) + + def _dispatch_event(self, event): + """ + Dispatches the given event to interested listeners. + + :param SchedulerEvent event: the event to send + + """ + with self._listeners_lock: + listeners = tuple(self._listeners) + + for cb, mask in listeners: + if event.code & mask: + try: + cb(event) + except BaseException: + self._logger.exception('Error notifying listener') + + def _check_uwsgi(self): + """Check if we're running under uWSGI with threads disabled.""" + uwsgi_module = sys.modules.get('uwsgi') + if not getattr(uwsgi_module, 'has_threads', True): + raise RuntimeError('The scheduler seems to be running under uWSGI, but threads have ' + 'been disabled. 
You must run uWSGI with the --enable-threads ' + 'option for the scheduler to work.') + + def _real_add_job(self, job, jobstore_alias, replace_existing): + """ + :param Job job: the job to add + :param bool replace_existing: ``True`` to use update_job() in case the job already exists + in the store + + """ + # Fill in undefined values with defaults + replacements = {} + for key, value in six.iteritems(self._job_defaults): + if not hasattr(job, key): + replacements[key] = value + + # Calculate the next run time if there is none defined + if not hasattr(job, 'next_run_time'): + now = datetime.now(self.timezone) + replacements['next_run_time'] = job.trigger.get_next_fire_time(None, now) + + # Apply any replacements + job._modify(**replacements) + + # Add the job to the given job store + store = self._lookup_jobstore(jobstore_alias) + try: + store.add_job(job) + except ConflictingIdError: + if replace_existing: + store.update_job(job) + else: + raise + + # Mark the job as no longer pending + job._jobstore_alias = jobstore_alias + + # Notify listeners that a new job has been added + event = JobEvent(EVENT_JOB_ADDED, job.id, jobstore_alias) + self._dispatch_event(event) + + self._logger.info('Added job "%s" to job store "%s"', job.name, jobstore_alias) + + # Notify the scheduler about the new job + if self.state == STATE_RUNNING: + self.wakeup() + + def _create_plugin_instance(self, type_, alias, constructor_kwargs): + """Creates an instance of the given plugin type, loading the plugin first if necessary.""" + plugin_container, class_container, base_class = { + 'trigger': (self._trigger_plugins, self._trigger_classes, BaseTrigger), + 'jobstore': (self._jobstore_plugins, self._jobstore_classes, BaseJobStore), + 'executor': (self._executor_plugins, self._executor_classes, BaseExecutor) + }[type_] + + try: + plugin_cls = class_container[alias] + except KeyError: + if alias in plugin_container: + plugin_cls = class_container[alias] = plugin_container[alias].load() + if not issubclass(plugin_cls, base_class): + raise TypeError('The {0} entry point does not point to a {0} class'. + format(type_)) + else: + raise LookupError('No {0} by the name "{1}" was found'.format(type_, alias)) + + return plugin_cls(**constructor_kwargs) + + def _create_trigger(self, trigger, trigger_args): + if isinstance(trigger, BaseTrigger): + return trigger + elif trigger is None: + trigger = 'date' + elif not isinstance(trigger, six.string_types): + raise TypeError('Expected a trigger instance or string, got %s instead' % + trigger.__class__.__name__) + + # Use the scheduler's time zone if nothing else is specified + trigger_args.setdefault('timezone', self.timezone) + + # Instantiate the trigger class + return self._create_plugin_instance('trigger', trigger, trigger_args) + + def _create_lock(self): + """Creates a reentrant lock object.""" + return RLock() + + def _process_jobs(self): + """ + Iterates through jobs in every jobstore, starts jobs that are due and figures out how long + to wait for the next round. + + If the ``get_due_jobs()`` call raises an exception, a new wakeup is scheduled in at least + ``jobstore_retry_interval`` seconds. 
+ + """ + if self.state == STATE_PAUSED: + self._logger.debug('Scheduler is paused -- not processing jobs') + return None + + self._logger.debug('Looking for jobs to run') + now = datetime.now(self.timezone) + next_wakeup_time = None + events = [] + + with self._jobstores_lock: + for jobstore_alias, jobstore in six.iteritems(self._jobstores): + try: + due_jobs = jobstore.get_due_jobs(now) + except Exception as e: + # Schedule a wakeup at least in jobstore_retry_interval seconds + self._logger.warning('Error getting due jobs from job store %r: %s', + jobstore_alias, e) + retry_wakeup_time = now + timedelta(seconds=self.jobstore_retry_interval) + if not next_wakeup_time or next_wakeup_time > retry_wakeup_time: + next_wakeup_time = retry_wakeup_time + + continue + + for job in due_jobs: + # Look up the job's executor + try: + executor = self._lookup_executor(job.executor) + except BaseException: + self._logger.error( + 'Executor lookup ("%s") failed for job "%s" -- removing it from the ' + 'job store', job.executor, job) + self.remove_job(job.id, jobstore_alias) + continue + + run_times = job._get_run_times(now) + run_times = run_times[-1:] if run_times and job.coalesce else run_times + if run_times: + try: + executor.submit_job(job, run_times) + except MaxInstancesReachedError: + self._logger.warning( + 'Execution of job "%s" skipped: maximum number of running ' + 'instances reached (%d)', job, job.max_instances) + event = JobSubmissionEvent(EVENT_JOB_MAX_INSTANCES, job.id, + jobstore_alias, run_times) + events.append(event) + except BaseException: + self._logger.exception('Error submitting job "%s" to executor "%s"', + job, job.executor) + else: + event = JobSubmissionEvent(EVENT_JOB_SUBMITTED, job.id, jobstore_alias, + run_times) + events.append(event) + + # Update the job if it has a next execution time. + # Otherwise remove it from the job store. 
+ job_next_run = job.trigger.get_next_fire_time(run_times[-1], now) + if job_next_run: + job._modify(next_run_time=job_next_run) + jobstore.update_job(job) + else: + self.remove_job(job.id, jobstore_alias) + + # Set a new next wakeup time if there isn't one yet or + # the jobstore has an even earlier one + jobstore_next_run_time = jobstore.get_next_run_time() + if jobstore_next_run_time and (next_wakeup_time is None or + jobstore_next_run_time < next_wakeup_time): + next_wakeup_time = jobstore_next_run_time.astimezone(self.timezone) + + # Dispatch collected events + for event in events: + self._dispatch_event(event) + + # Determine the delay until this method should be called again + if self.state == STATE_PAUSED: + wait_seconds = None + self._logger.debug('Scheduler is paused; waiting until resume() is called') + elif next_wakeup_time is None: + wait_seconds = None + self._logger.debug('No jobs; waiting until a job is added') + else: + wait_seconds = min(max(timedelta_seconds(next_wakeup_time - now), 0), TIMEOUT_MAX) + self._logger.debug('Next wakeup is due at %s (in %f seconds)', next_wakeup_time, + wait_seconds) + + return wait_seconds diff --git a/python/lib/python3.11/site-packages/apscheduler/schedulers/blocking.py b/python/lib/python3.11/site-packages/apscheduler/schedulers/blocking.py new file mode 100644 index 0000000..4ecc9f6 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/schedulers/blocking.py @@ -0,0 +1,35 @@ +from __future__ import absolute_import + +from threading import Event + +from apscheduler.schedulers.base import BaseScheduler, STATE_STOPPED +from apscheduler.util import TIMEOUT_MAX + + +class BlockingScheduler(BaseScheduler): + """ + A scheduler that runs in the foreground + (:meth:`~apscheduler.schedulers.base.BaseScheduler.start` will block). 
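# Illustrative sketch: start() blocks in _main_loop() until shutdown() is
# called from a job, a signal handler, or another thread.
from apscheduler.schedulers.blocking import BlockingScheduler

scheduler = BlockingScheduler()
scheduler.add_job(lambda: print('tick'), 'interval', seconds=10)
try:
    scheduler.start()  # does not return until shutdown()
except (KeyboardInterrupt, SystemExit):
    pass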
+ """ + _event = None + + def start(self, *args, **kwargs): + if self._event is None or self._event.is_set(): + self._event = Event() + + super(BlockingScheduler, self).start(*args, **kwargs) + self._main_loop() + + def shutdown(self, wait=True): + super(BlockingScheduler, self).shutdown(wait) + self._event.set() + + def _main_loop(self): + wait_seconds = TIMEOUT_MAX + while self.state != STATE_STOPPED: + self._event.wait(wait_seconds) + self._event.clear() + wait_seconds = self._process_jobs() + + def wakeup(self): + self._event.set() diff --git a/python/lib/python3.11/site-packages/apscheduler/schedulers/gevent.py b/python/lib/python3.11/site-packages/apscheduler/schedulers/gevent.py new file mode 100644 index 0000000..d48ed74 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/schedulers/gevent.py @@ -0,0 +1,35 @@ +from __future__ import absolute_import + +from apscheduler.schedulers.blocking import BlockingScheduler +from apscheduler.schedulers.base import BaseScheduler + +try: + from gevent.event import Event + from gevent.lock import RLock + import gevent +except ImportError: # pragma: nocover + raise ImportError('GeventScheduler requires gevent installed') + + +class GeventScheduler(BlockingScheduler): + """A scheduler that runs as a Gevent greenlet.""" + + _greenlet = None + + def start(self, *args, **kwargs): + self._event = Event() + BaseScheduler.start(self, *args, **kwargs) + self._greenlet = gevent.spawn(self._main_loop) + return self._greenlet + + def shutdown(self, *args, **kwargs): + super(GeventScheduler, self).shutdown(*args, **kwargs) + self._greenlet.join() + del self._greenlet + + def _create_lock(self): + return RLock() + + def _create_default_executor(self): + from apscheduler.executors.gevent import GeventExecutor + return GeventExecutor() diff --git a/python/lib/python3.11/site-packages/apscheduler/schedulers/qt.py b/python/lib/python3.11/site-packages/apscheduler/schedulers/qt.py new file mode 100644 index 0000000..890a44a --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/schedulers/qt.py @@ -0,0 +1,50 @@ +from __future__ import absolute_import + +from apscheduler.schedulers.base import BaseScheduler + +try: + from PyQt5.QtCore import QObject, QTimer +except (ImportError, RuntimeError): # pragma: nocover + try: + from PyQt4.QtCore import QObject, QTimer + except ImportError: + try: + from PySide6.QtCore import QObject, QTimer # noqa + except ImportError: + try: + from PySide2.QtCore import QObject, QTimer # noqa + except ImportError: + try: + from PySide.QtCore import QObject, QTimer # noqa + except ImportError: + raise ImportError('QtScheduler requires either PyQt5, PyQt4, PySide6, PySide2 ' + 'or PySide installed') + + +class QtScheduler(BaseScheduler): + """A scheduler that runs in a Qt event loop.""" + + _timer = None + + def shutdown(self, *args, **kwargs): + super(QtScheduler, self).shutdown(*args, **kwargs) + self._stop_timer() + + def _start_timer(self, wait_seconds): + self._stop_timer() + if wait_seconds is not None: + wait_time = min(int(wait_seconds * 1000), 2147483647) + self._timer = QTimer.singleShot(wait_time, self._process_jobs) + + def _stop_timer(self): + if self._timer: + if self._timer.isActive(): + self._timer.stop() + del self._timer + + def wakeup(self): + self._start_timer(0) + + def _process_jobs(self): + wait_seconds = super(QtScheduler, self)._process_jobs() + self._start_timer(wait_seconds) diff --git a/python/lib/python3.11/site-packages/apscheduler/schedulers/tornado.py 
b/python/lib/python3.11/site-packages/apscheduler/schedulers/tornado.py new file mode 100644 index 0000000..0a9171f --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/schedulers/tornado.py @@ -0,0 +1,63 @@ +from __future__ import absolute_import + +from datetime import timedelta +from functools import wraps + +from apscheduler.schedulers.base import BaseScheduler +from apscheduler.util import maybe_ref + +try: + from tornado.ioloop import IOLoop +except ImportError: # pragma: nocover + raise ImportError('TornadoScheduler requires tornado installed') + + +def run_in_ioloop(func): + @wraps(func) + def wrapper(self, *args, **kwargs): + self._ioloop.add_callback(func, self, *args, **kwargs) + return wrapper + + +class TornadoScheduler(BaseScheduler): + """ + A scheduler that runs on a Tornado IOLoop. + + The default executor can run jobs based on native coroutines (``async def``). + + =========== =============================================================== + ``io_loop`` Tornado IOLoop instance to use (defaults to the global IO loop) + =========== =============================================================== + """ + + _ioloop = None + _timeout = None + + @run_in_ioloop + def shutdown(self, wait=True): + super(TornadoScheduler, self).shutdown(wait) + self._stop_timer() + + def _configure(self, config): + self._ioloop = maybe_ref(config.pop('io_loop', None)) or IOLoop.current() + super(TornadoScheduler, self)._configure(config) + + def _start_timer(self, wait_seconds): + self._stop_timer() + if wait_seconds is not None: + self._timeout = self._ioloop.add_timeout(timedelta(seconds=wait_seconds), self.wakeup) + + def _stop_timer(self): + if self._timeout: + self._ioloop.remove_timeout(self._timeout) + del self._timeout + + def _create_default_executor(self): + from apscheduler.executors.tornado import TornadoExecutor + return TornadoExecutor() + + @run_in_ioloop + def wakeup(self): + self._stop_timer() + wait_seconds = self._process_jobs() + self._start_timer(wait_seconds) diff --git a/python/lib/python3.11/site-packages/apscheduler/schedulers/twisted.py b/python/lib/python3.11/site-packages/apscheduler/schedulers/twisted.py new file mode 100644 index 0000000..6b43a84 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/schedulers/twisted.py @@ -0,0 +1,62 @@ +from __future__ import absolute_import + +from functools import wraps + +from apscheduler.schedulers.base import BaseScheduler +from apscheduler.util import maybe_ref + +try: + from twisted.internet import reactor as default_reactor +except ImportError: # pragma: nocover + raise ImportError('TwistedScheduler requires Twisted installed') + + +def run_in_reactor(func): + @wraps(func) + def wrapper(self, *args, **kwargs): + self._reactor.callFromThread(func, self, *args, **kwargs) + return wrapper + + +class TwistedScheduler(BaseScheduler): + """ + A scheduler that runs on a Twisted reactor. 
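# Illustrative sketch for the Twisted variant: wakeups are scheduled with
# reactor.callLater(), so the reactor must be running for jobs to fire.
from twisted.internet import reactor
from apscheduler.schedulers.twisted import TwistedScheduler

scheduler = TwistedScheduler()
scheduler.add_job(lambda: print('tick'), 'interval', seconds=5)
scheduler.start()
reactor.run()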
+
+    Extra options:
+
+    =========== ========================================================
+    ``reactor`` Reactor instance to use (defaults to the global reactor)
+    =========== ========================================================
+    """
+
+    _reactor = None
+    _delayedcall = None
+
+    def _configure(self, config):
+        self._reactor = maybe_ref(config.pop('reactor', default_reactor))
+        super(TwistedScheduler, self)._configure(config)
+
+    @run_in_reactor
+    def shutdown(self, wait=True):
+        super(TwistedScheduler, self).shutdown(wait)
+        self._stop_timer()
+
+    def _start_timer(self, wait_seconds):
+        self._stop_timer()
+        if wait_seconds is not None:
+            self._delayedcall = self._reactor.callLater(wait_seconds, self.wakeup)
+
+    def _stop_timer(self):
+        if self._delayedcall and self._delayedcall.active():
+            self._delayedcall.cancel()
+            del self._delayedcall
+
+    @run_in_reactor
+    def wakeup(self):
+        self._stop_timer()
+        wait_seconds = self._process_jobs()
+        self._start_timer(wait_seconds)
+
+    def _create_default_executor(self):
+        from apscheduler.executors.twisted import TwistedExecutor
+        return TwistedExecutor()
diff --git a/python/lib/python3.11/site-packages/apscheduler/triggers/__init__.py b/python/lib/python3.11/site-packages/apscheduler/triggers/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/python/lib/python3.11/site-packages/apscheduler/triggers/base.py b/python/lib/python3.11/site-packages/apscheduler/triggers/base.py
new file mode 100644
index 0000000..55d010d
--- /dev/null
+++ b/python/lib/python3.11/site-packages/apscheduler/triggers/base.py
@@ -0,0 +1,37 @@
+from abc import ABCMeta, abstractmethod
+from datetime import timedelta
+import random
+
+import six
+
+
+class BaseTrigger(six.with_metaclass(ABCMeta)):
+    """Abstract base class that defines the interface that every trigger must implement."""
+
+    __slots__ = ()
+
+    @abstractmethod
+    def get_next_fire_time(self, previous_fire_time, now):
+        """
+        Returns the next datetime to fire on. If no such datetime can be calculated, returns
+        ``None``.
+
+        :param datetime.datetime previous_fire_time: the previous time the trigger was fired
+        :param datetime.datetime now: current datetime
+        """
+
+    def _apply_jitter(self, next_fire_time, jitter, now):
+        """
+        Randomize ``next_fire_time`` by adding a random value (the jitter).
+
+        :param datetime.datetime|None next_fire_time: next fire time without jitter applied. If
+            ``None``, returns ``None``.
+        :param int|None jitter: maximum number of seconds to add to ``next_fire_time``
+            (if ``None`` or ``0``, returns ``next_fire_time``)
+        :param datetime.datetime now: current datetime
+        :return datetime.datetime|None: next fire time with a jitter.
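+
+        A rough editorial sketch of the effect (not vendored code): with ``jitter=120``,
+        each computed fire time is delayed by up to two minutes::
+
+            next_fire_time + timedelta(seconds=random.uniform(0, 120))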
+ """ + if next_fire_time is None or not jitter: + return next_fire_time + + return next_fire_time + timedelta(seconds=random.uniform(0, jitter)) diff --git a/python/lib/python3.11/site-packages/apscheduler/triggers/combining.py b/python/lib/python3.11/site-packages/apscheduler/triggers/combining.py new file mode 100644 index 0000000..bb90006 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/triggers/combining.py @@ -0,0 +1,95 @@ +from apscheduler.triggers.base import BaseTrigger +from apscheduler.util import obj_to_ref, ref_to_obj + + +class BaseCombiningTrigger(BaseTrigger): + __slots__ = ('triggers', 'jitter') + + def __init__(self, triggers, jitter=None): + self.triggers = triggers + self.jitter = jitter + + def __getstate__(self): + return { + 'version': 1, + 'triggers': [(obj_to_ref(trigger.__class__), trigger.__getstate__()) + for trigger in self.triggers], + 'jitter': self.jitter + } + + def __setstate__(self, state): + if state.get('version', 1) > 1: + raise ValueError( + 'Got serialized data for version %s of %s, but only versions up to 1 can be ' + 'handled' % (state['version'], self.__class__.__name__)) + + self.jitter = state['jitter'] + self.triggers = [] + for clsref, state in state['triggers']: + cls = ref_to_obj(clsref) + trigger = cls.__new__(cls) + trigger.__setstate__(state) + self.triggers.append(trigger) + + def __repr__(self): + return '<{}({}{})>'.format(self.__class__.__name__, self.triggers, + ', jitter={}'.format(self.jitter) if self.jitter else '') + + +class AndTrigger(BaseCombiningTrigger): + """ + Always returns the earliest next fire time that all the given triggers can agree on. + The trigger is considered to be finished when any of the given triggers has finished its + schedule. + + Trigger alias: ``and`` + + :param list triggers: triggers to combine + :param int|None jitter: delay the job execution by ``jitter`` seconds at most + """ + + __slots__ = () + + def get_next_fire_time(self, previous_fire_time, now): + while True: + fire_times = [trigger.get_next_fire_time(previous_fire_time, now) + for trigger in self.triggers] + if None in fire_times: + return None + elif min(fire_times) == max(fire_times): + return self._apply_jitter(fire_times[0], self.jitter, now) + else: + now = max(fire_times) + + def __str__(self): + return 'and[{}]'.format(', '.join(str(trigger) for trigger in self.triggers)) + + +class OrTrigger(BaseCombiningTrigger): + """ + Always returns the earliest next fire time produced by any of the given triggers. + The trigger is considered finished when all the given triggers have finished their schedules. + + Trigger alias: ``or`` + + :param list triggers: triggers to combine + :param int|None jitter: delay the job execution by ``jitter`` seconds at most + + .. note:: Triggers that depends on the previous fire time, such as the interval trigger, may + seem to behave strangely since they are always passed the previous fire time produced by + any of the given triggers. 
+ """ + + __slots__ = () + + def get_next_fire_time(self, previous_fire_time, now): + fire_times = [trigger.get_next_fire_time(previous_fire_time, now) + for trigger in self.triggers] + fire_times = [fire_time for fire_time in fire_times if fire_time is not None] + if fire_times: + return self._apply_jitter(min(fire_times), self.jitter, now) + else: + return None + + def __str__(self): + return 'or[{}]'.format(', '.join(str(trigger) for trigger in self.triggers)) diff --git a/python/lib/python3.11/site-packages/apscheduler/triggers/cron/__init__.py b/python/lib/python3.11/site-packages/apscheduler/triggers/cron/__init__.py new file mode 100644 index 0000000..b5389dd --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/triggers/cron/__init__.py @@ -0,0 +1,239 @@ +from datetime import datetime, timedelta + +from tzlocal import get_localzone +import six + +from apscheduler.triggers.base import BaseTrigger +from apscheduler.triggers.cron.fields import ( + BaseField, MonthField, WeekField, DayOfMonthField, DayOfWeekField, DEFAULT_VALUES) +from apscheduler.util import ( + datetime_ceil, convert_to_datetime, datetime_repr, astimezone, localize, normalize) + + +class CronTrigger(BaseTrigger): + """ + Triggers when current time matches all specified time constraints, + similarly to how the UNIX cron scheduler works. + + :param int|str year: 4-digit year + :param int|str month: month (1-12) + :param int|str day: day of month (1-31) + :param int|str week: ISO week (1-53) + :param int|str day_of_week: number or name of weekday (0-6 or mon,tue,wed,thu,fri,sat,sun) + :param int|str hour: hour (0-23) + :param int|str minute: minute (0-59) + :param int|str second: second (0-59) + :param datetime|str start_date: earliest possible date/time to trigger on (inclusive) + :param datetime|str end_date: latest possible date/time to trigger on (inclusive) + :param datetime.tzinfo|str timezone: time zone to use for the date/time calculations (defaults + to scheduler timezone) + :param int|None jitter: delay the job execution by ``jitter`` seconds at most + + .. note:: The first weekday is always **monday**. 
+ """ + + FIELD_NAMES = ('year', 'month', 'day', 'week', 'day_of_week', 'hour', 'minute', 'second') + FIELDS_MAP = { + 'year': BaseField, + 'month': MonthField, + 'week': WeekField, + 'day': DayOfMonthField, + 'day_of_week': DayOfWeekField, + 'hour': BaseField, + 'minute': BaseField, + 'second': BaseField + } + + __slots__ = 'timezone', 'start_date', 'end_date', 'fields', 'jitter' + + def __init__(self, year=None, month=None, day=None, week=None, day_of_week=None, hour=None, + minute=None, second=None, start_date=None, end_date=None, timezone=None, + jitter=None): + if timezone: + self.timezone = astimezone(timezone) + elif isinstance(start_date, datetime) and start_date.tzinfo: + self.timezone = start_date.tzinfo + elif isinstance(end_date, datetime) and end_date.tzinfo: + self.timezone = end_date.tzinfo + else: + self.timezone = get_localzone() + + self.start_date = convert_to_datetime(start_date, self.timezone, 'start_date') + self.end_date = convert_to_datetime(end_date, self.timezone, 'end_date') + + self.jitter = jitter + + values = dict((key, value) for (key, value) in six.iteritems(locals()) + if key in self.FIELD_NAMES and value is not None) + self.fields = [] + assign_defaults = False + for field_name in self.FIELD_NAMES: + if field_name in values: + exprs = values.pop(field_name) + is_default = False + assign_defaults = not values + elif assign_defaults: + exprs = DEFAULT_VALUES[field_name] + is_default = True + else: + exprs = '*' + is_default = True + + field_class = self.FIELDS_MAP[field_name] + field = field_class(field_name, exprs, is_default) + self.fields.append(field) + + @classmethod + def from_crontab(cls, expr, timezone=None): + """ + Create a :class:`~CronTrigger` from a standard crontab expression. + + See https://en.wikipedia.org/wiki/Cron for more information on the format accepted here. + + :param expr: minute, hour, day of month, month, day of week + :param datetime.tzinfo|str timezone: time zone to use for the date/time calculations ( + defaults to scheduler timezone) + :return: a :class:`~CronTrigger` instance + + """ + values = expr.split() + if len(values) != 5: + raise ValueError('Wrong number of fields; got {}, expected 5'.format(len(values))) + + return cls(minute=values[0], hour=values[1], day=values[2], month=values[3], + day_of_week=values[4], timezone=timezone) + + def _increment_field_value(self, dateval, fieldnum): + """ + Increments the designated field and resets all less significant fields to their minimum + values. 
+ + :type dateval: datetime + :type fieldnum: int + :return: a tuple containing the new date, and the number of the field that was actually + incremented + :rtype: tuple + """ + + values = {} + i = 0 + while i < len(self.fields): + field = self.fields[i] + if not field.REAL: + if i == fieldnum: + fieldnum -= 1 + i -= 1 + else: + i += 1 + continue + + if i < fieldnum: + values[field.name] = field.get_value(dateval) + i += 1 + elif i > fieldnum: + values[field.name] = field.get_min(dateval) + i += 1 + else: + value = field.get_value(dateval) + maxval = field.get_max(dateval) + if value == maxval: + fieldnum -= 1 + i -= 1 + else: + values[field.name] = value + 1 + i += 1 + + difference = datetime(**values) - dateval.replace(tzinfo=None) + return normalize(dateval + difference), fieldnum + + def _set_field_value(self, dateval, fieldnum, new_value): + values = {} + for i, field in enumerate(self.fields): + if field.REAL: + if i < fieldnum: + values[field.name] = field.get_value(dateval) + elif i > fieldnum: + values[field.name] = field.get_min(dateval) + else: + values[field.name] = new_value + + return localize(datetime(**values), self.timezone) + + def get_next_fire_time(self, previous_fire_time, now): + if previous_fire_time: + start_date = min(now, previous_fire_time + timedelta(microseconds=1)) + if start_date == previous_fire_time: + start_date += timedelta(microseconds=1) + else: + start_date = max(now, self.start_date) if self.start_date else now + + fieldnum = 0 + next_date = datetime_ceil(start_date).astimezone(self.timezone) + while 0 <= fieldnum < len(self.fields): + field = self.fields[fieldnum] + curr_value = field.get_value(next_date) + next_value = field.get_next_value(next_date) + + if next_value is None: + # No valid value was found + next_date, fieldnum = self._increment_field_value(next_date, fieldnum - 1) + elif next_value > curr_value: + # A valid, but higher than the starting value, was found + if field.REAL: + next_date = self._set_field_value(next_date, fieldnum, next_value) + fieldnum += 1 + else: + next_date, fieldnum = self._increment_field_value(next_date, fieldnum) + else: + # A valid value was found, no changes necessary + fieldnum += 1 + + # Return if the date has rolled past the end date + if self.end_date and next_date > self.end_date: + return None + + if fieldnum >= 0: + next_date = self._apply_jitter(next_date, self.jitter, now) + return min(next_date, self.end_date) if self.end_date else next_date + + def __getstate__(self): + return { + 'version': 2, + 'timezone': self.timezone, + 'start_date': self.start_date, + 'end_date': self.end_date, + 'fields': self.fields, + 'jitter': self.jitter, + } + + def __setstate__(self, state): + # This is for compatibility with APScheduler 3.0.x + if isinstance(state, tuple): + state = state[1] + + if state.get('version', 1) > 2: + raise ValueError( + 'Got serialized data for version %s of %s, but only versions up to 2 can be ' + 'handled' % (state['version'], self.__class__.__name__)) + + self.timezone = state['timezone'] + self.start_date = state['start_date'] + self.end_date = state['end_date'] + self.fields = state['fields'] + self.jitter = state.get('jitter') + + def __str__(self): + options = ["%s='%s'" % (f.name, f) for f in self.fields if not f.is_default] + return 'cron[%s]' % (', '.join(options)) + + def __repr__(self): + options = ["%s='%s'" % (f.name, f) for f in self.fields if not f.is_default] + if self.start_date: + options.append("start_date=%r" % datetime_repr(self.start_date)) + if self.end_date: + 
options.append("end_date=%r" % datetime_repr(self.end_date)) + if self.jitter: + options.append('jitter=%s' % self.jitter) + + return "<%s (%s, timezone='%s')>" % ( + self.__class__.__name__, ', '.join(options), self.timezone) diff --git a/python/lib/python3.11/site-packages/apscheduler/triggers/cron/expressions.py b/python/lib/python3.11/site-packages/apscheduler/triggers/cron/expressions.py new file mode 100644 index 0000000..55a3716 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/triggers/cron/expressions.py @@ -0,0 +1,251 @@ +"""This module contains the expressions applicable for CronTrigger's fields.""" + +from calendar import monthrange +import re + +from apscheduler.util import asint + +__all__ = ('AllExpression', 'RangeExpression', 'WeekdayRangeExpression', + 'WeekdayPositionExpression', 'LastDayOfMonthExpression') + + +WEEKDAYS = ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun'] +MONTHS = ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec'] + + +class AllExpression(object): + value_re = re.compile(r'\*(?:/(?P<step>\d+))?$') + + def __init__(self, step=None): + self.step = asint(step) + if self.step == 0: + raise ValueError('Increment must be higher than 0') + + def validate_range(self, field_name): + from apscheduler.triggers.cron.fields import MIN_VALUES, MAX_VALUES + + value_range = MAX_VALUES[field_name] - MIN_VALUES[field_name] + if self.step and self.step > value_range: + raise ValueError('the step value ({}) is higher than the total range of the ' + 'expression ({})'.format(self.step, value_range)) + + def get_next_value(self, date, field): + start = field.get_value(date) + minval = field.get_min(date) + maxval = field.get_max(date) + start = max(start, minval) + + if not self.step: + next = start + else: + distance_to_next = (self.step - (start - minval)) % self.step + next = start + distance_to_next + + if next <= maxval: + return next + + def __eq__(self, other): + return isinstance(other, self.__class__) and self.step == other.step + + def __str__(self): + if self.step: + return '*/%d' % self.step + return '*' + + def __repr__(self): + return "%s(%s)" % (self.__class__.__name__, self.step) + + +class RangeExpression(AllExpression): + value_re = re.compile( + r'(?P<first>\d+)(?:-(?P<last>\d+))?(?:/(?P<step>\d+))?$') + + def __init__(self, first, last=None, step=None): + super(RangeExpression, self).__init__(step) + first = asint(first) + last = asint(last) + if last is None and step is None: + last = first + if last is not None and first > last: + raise ValueError('The minimum value in a range must not be higher than the maximum') + self.first = first + self.last = last + + def validate_range(self, field_name): + from apscheduler.triggers.cron.fields import MIN_VALUES, MAX_VALUES + + super(RangeExpression, self).validate_range(field_name) + if self.first < MIN_VALUES[field_name]: + raise ValueError('the first value ({}) is lower than the minimum value ({})' + .format(self.first, MIN_VALUES[field_name])) + if self.last is not None and self.last > MAX_VALUES[field_name]: + raise ValueError('the last value ({}) is higher than the maximum value ({})' + .format(self.last, MAX_VALUES[field_name])) + value_range = (self.last or MAX_VALUES[field_name]) - self.first + if self.step and self.step > value_range: + raise ValueError('the step value ({}) is higher than the total range of the ' + 'expression ({})'.format(self.step, value_range)) + + def get_next_value(self, date, field): + startval = field.get_value(date) + minval = 
field.get_min(date) + maxval = field.get_max(date) + + # Apply range limits + minval = max(minval, self.first) + maxval = min(maxval, self.last) if self.last is not None else maxval + nextval = max(minval, startval) + + # Apply the step if defined + if self.step: + distance_to_next = (self.step - (nextval - minval)) % self.step + nextval += distance_to_next + + return nextval if nextval <= maxval else None + + def __eq__(self, other): + return (isinstance(other, self.__class__) and self.first == other.first and + self.last == other.last) + + def __str__(self): + if self.last != self.first and self.last is not None: + range = '%d-%d' % (self.first, self.last) + else: + range = str(self.first) + + if self.step: + return '%s/%d' % (range, self.step) + return range + + def __repr__(self): + args = [str(self.first)] + if self.last != self.first and self.last is not None or self.step: + args.append(str(self.last)) + if self.step: + args.append(str(self.step)) + return "%s(%s)" % (self.__class__.__name__, ', '.join(args)) + + +class MonthRangeExpression(RangeExpression): + value_re = re.compile(r'(?P<first>[a-z]+)(?:-(?P<last>[a-z]+))?', re.IGNORECASE) + + def __init__(self, first, last=None): + try: + first_num = MONTHS.index(first.lower()) + 1 + except ValueError: + raise ValueError('Invalid month name "%s"' % first) + + if last: + try: + last_num = MONTHS.index(last.lower()) + 1 + except ValueError: + raise ValueError('Invalid month name "%s"' % last) + else: + last_num = None + + super(MonthRangeExpression, self).__init__(first_num, last_num) + + def __str__(self): + if self.last != self.first and self.last is not None: + return '%s-%s' % (MONTHS[self.first - 1], MONTHS[self.last - 1]) + return MONTHS[self.first - 1] + + def __repr__(self): + args = ["'%s'" % MONTHS[self.first]] + if self.last != self.first and self.last is not None: + args.append("'%s'" % MONTHS[self.last - 1]) + return "%s(%s)" % (self.__class__.__name__, ', '.join(args)) + + +class WeekdayRangeExpression(RangeExpression): + value_re = re.compile(r'(?P<first>[a-z]+)(?:-(?P<last>[a-z]+))?', re.IGNORECASE) + + def __init__(self, first, last=None): + try: + first_num = WEEKDAYS.index(first.lower()) + except ValueError: + raise ValueError('Invalid weekday name "%s"' % first) + + if last: + try: + last_num = WEEKDAYS.index(last.lower()) + except ValueError: + raise ValueError('Invalid weekday name "%s"' % last) + else: + last_num = None + + super(WeekdayRangeExpression, self).__init__(first_num, last_num) + + def __str__(self): + if self.last != self.first and self.last is not None: + return '%s-%s' % (WEEKDAYS[self.first], WEEKDAYS[self.last]) + return WEEKDAYS[self.first] + + def __repr__(self): + args = ["'%s'" % WEEKDAYS[self.first]] + if self.last != self.first and self.last is not None: + args.append("'%s'" % WEEKDAYS[self.last]) + return "%s(%s)" % (self.__class__.__name__, ', '.join(args)) + + +class WeekdayPositionExpression(AllExpression): + options = ['1st', '2nd', '3rd', '4th', '5th', 'last'] + value_re = re.compile(r'(?P<option_name>%s) +(?P<weekday_name>(?:\d+|\w+))' % + '|'.join(options), re.IGNORECASE) + + def __init__(self, option_name, weekday_name): + super(WeekdayPositionExpression, self).__init__(None) + try: + self.option_num = self.options.index(option_name.lower()) + except ValueError: + raise ValueError('Invalid weekday position "%s"' % option_name) + + try: + self.weekday = WEEKDAYS.index(weekday_name.lower()) + except ValueError: + raise ValueError('Invalid weekday name "%s"' % weekday_name) + + def 
get_next_value(self, date, field): + # Figure out the weekday of the month's first day and the number of days in that month + first_day_wday, last_day = monthrange(date.year, date.month) + + # Calculate which day of the month is the first of the target weekdays + first_hit_day = self.weekday - first_day_wday + 1 + if first_hit_day <= 0: + first_hit_day += 7 + + # Calculate what day of the month the target weekday would be + if self.option_num < 5: + target_day = first_hit_day + self.option_num * 7 + else: + target_day = first_hit_day + ((last_day - first_hit_day) // 7) * 7 + + if target_day <= last_day and target_day >= date.day: + return target_day + + def __eq__(self, other): + return (super(WeekdayPositionExpression, self).__eq__(other) and + self.option_num == other.option_num and self.weekday == other.weekday) + + def __str__(self): + return '%s %s' % (self.options[self.option_num], WEEKDAYS[self.weekday]) + + def __repr__(self): + return "%s('%s', '%s')" % (self.__class__.__name__, self.options[self.option_num], + WEEKDAYS[self.weekday]) + + +class LastDayOfMonthExpression(AllExpression): + value_re = re.compile(r'last', re.IGNORECASE) + + def __init__(self): + super(LastDayOfMonthExpression, self).__init__(None) + + def get_next_value(self, date, field): + return monthrange(date.year, date.month)[1] + + def __str__(self): + return 'last' + + def __repr__(self): + return "%s()" % self.__class__.__name__ diff --git a/python/lib/python3.11/site-packages/apscheduler/triggers/cron/fields.py b/python/lib/python3.11/site-packages/apscheduler/triggers/cron/fields.py new file mode 100644 index 0000000..86d620c --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/triggers/cron/fields.py @@ -0,0 +1,111 @@ +"""Fields represent CronTrigger options which map to :class:`~datetime.datetime` fields.""" + +from calendar import monthrange +import re + +import six + +from apscheduler.triggers.cron.expressions import ( + AllExpression, RangeExpression, WeekdayPositionExpression, LastDayOfMonthExpression, + WeekdayRangeExpression, MonthRangeExpression) + + +__all__ = ('MIN_VALUES', 'MAX_VALUES', 'DEFAULT_VALUES', 'BaseField', 'WeekField', + 'DayOfMonthField', 'DayOfWeekField') + + +MIN_VALUES = {'year': 1970, 'month': 1, 'day': 1, 'week': 1, 'day_of_week': 0, 'hour': 0, + 'minute': 0, 'second': 0} +MAX_VALUES = {'year': 9999, 'month': 12, 'day': 31, 'week': 53, 'day_of_week': 6, 'hour': 23, + 'minute': 59, 'second': 59} +DEFAULT_VALUES = {'year': '*', 'month': 1, 'day': 1, 'week': '*', 'day_of_week': '*', 'hour': 0, + 'minute': 0, 'second': 0} +SEPARATOR = re.compile(' *, *') + + +class BaseField(object): + REAL = True + COMPILERS = [AllExpression, RangeExpression] + + def __init__(self, name, exprs, is_default=False): + self.name = name + self.is_default = is_default + self.compile_expressions(exprs) + + def get_min(self, dateval): + return MIN_VALUES[self.name] + + def get_max(self, dateval): + return MAX_VALUES[self.name] + + def get_value(self, dateval): + return getattr(dateval, self.name) + + def get_next_value(self, dateval): + smallest = None + for expr in self.expressions: + value = expr.get_next_value(dateval, self) + if smallest is None or (value is not None and value < smallest): + smallest = value + + return smallest + + def compile_expressions(self, exprs): + self.expressions = [] + + # Split a comma-separated expression list, if any + for expr in SEPARATOR.split(str(exprs).strip()): + self.compile_expression(expr) + + def compile_expression(self, expr): + for compiler in 
self.COMPILERS: + match = compiler.value_re.match(expr) + if match: + compiled_expr = compiler(**match.groupdict()) + + try: + compiled_expr.validate_range(self.name) + except ValueError as e: + exc = ValueError('Error validating expression {!r}: {}'.format(expr, e)) + six.raise_from(exc, None) + + self.expressions.append(compiled_expr) + return + + raise ValueError('Unrecognized expression "%s" for field "%s"' % (expr, self.name)) + + def __eq__(self, other): + return isinstance(self, self.__class__) and self.expressions == other.expressions + + def __str__(self): + expr_strings = (str(e) for e in self.expressions) + return ','.join(expr_strings) + + def __repr__(self): + return "%s('%s', '%s')" % (self.__class__.__name__, self.name, self) + + +class WeekField(BaseField): + REAL = False + + def get_value(self, dateval): + return dateval.isocalendar()[1] + + +class DayOfMonthField(BaseField): + COMPILERS = BaseField.COMPILERS + [WeekdayPositionExpression, LastDayOfMonthExpression] + + def get_max(self, dateval): + return monthrange(dateval.year, dateval.month)[1] + + +class DayOfWeekField(BaseField): + REAL = False + COMPILERS = BaseField.COMPILERS + [WeekdayRangeExpression] + + def get_value(self, dateval): + return dateval.weekday() + + +class MonthField(BaseField): + COMPILERS = BaseField.COMPILERS + [MonthRangeExpression] diff --git a/python/lib/python3.11/site-packages/apscheduler/triggers/date.py b/python/lib/python3.11/site-packages/apscheduler/triggers/date.py new file mode 100644 index 0000000..0768100 --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/triggers/date.py @@ -0,0 +1,51 @@ +from datetime import datetime + +from tzlocal import get_localzone + +from apscheduler.triggers.base import BaseTrigger +from apscheduler.util import convert_to_datetime, datetime_repr, astimezone + + +class DateTrigger(BaseTrigger): + """ + Triggers once on the given datetime. If ``run_date`` is left empty, current time is used. 
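+
+    An editorial example (not part of the vendored file; the parameters follow below)::
+
+        from apscheduler.triggers.date import DateTrigger
+
+        # fires exactly once, at the given wall-clock time in the given zone
+        trigger = DateTrigger(run_date='2009-11-06 16:30:05', timezone='UTC')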
+ + :param datetime|str run_date: the date/time to run the job at + :param datetime.tzinfo|str timezone: time zone for ``run_date`` if it doesn't have one already + """ + + __slots__ = 'run_date' + + def __init__(self, run_date=None, timezone=None): + timezone = astimezone(timezone) or get_localzone() + if run_date is not None: + self.run_date = convert_to_datetime(run_date, timezone, 'run_date') + else: + self.run_date = datetime.now(timezone) + + def get_next_fire_time(self, previous_fire_time, now): + return self.run_date if previous_fire_time is None else None + + def __getstate__(self): + return { + 'version': 1, + 'run_date': self.run_date + } + + def __setstate__(self, state): + # This is for compatibility with APScheduler 3.0.x + if isinstance(state, tuple): + state = state[1] + + if state.get('version', 1) > 1: + raise ValueError( + 'Got serialized data for version %s of %s, but only version 1 can be handled' % + (state['version'], self.__class__.__name__)) + + self.run_date = state['run_date'] + + def __str__(self): + return 'date[%s]' % datetime_repr(self.run_date) + + def __repr__(self): + return "<%s (run_date='%s')>" % (self.__class__.__name__, datetime_repr(self.run_date)) diff --git a/python/lib/python3.11/site-packages/apscheduler/triggers/interval.py b/python/lib/python3.11/site-packages/apscheduler/triggers/interval.py new file mode 100644 index 0000000..b0e2dbd --- /dev/null +++ b/python/lib/python3.11/site-packages/apscheduler/triggers/interval.py @@ -0,0 +1,108 @@ +from datetime import timedelta, datetime +from math import ceil + +from tzlocal import get_localzone + +from apscheduler.triggers.base import BaseTrigger +from apscheduler.util import ( + convert_to_datetime, normalize, timedelta_seconds, datetime_repr, + astimezone) + + +class IntervalTrigger(BaseTrigger): + """ + Triggers on specified intervals, starting on ``start_date`` if specified, ``datetime.now()`` + + interval otherwise. 
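+
+    An editorial example (not part of the vendored file; the parameters follow below)::
+
+        from apscheduler.triggers.interval import IntervalTrigger
+
+        # every 2 hours from the given start date, delayed by up to 120 s of jitter
+        trigger = IntervalTrigger(hours=2, start_date='2024-01-01 00:00:00',
+                                  timezone='UTC', jitter=120)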
+ + :param int weeks: number of weeks to wait + :param int days: number of days to wait + :param int hours: number of hours to wait + :param int minutes: number of minutes to wait + :param int seconds: number of seconds to wait + :param datetime|str start_date: starting point for the interval calculation + :param datetime|str end_date: latest possible date/time to trigger on + :param datetime.tzinfo|str timezone: time zone to use for the date/time calculations + :param int|None jitter: delay the job execution by ``jitter`` seconds at most + """ + + __slots__ = 'timezone', 'start_date', 'end_date', 'interval', 'interval_length', 'jitter' + + def __init__(self, weeks=0, days=0, hours=0, minutes=0, seconds=0, start_date=None, + end_date=None, timezone=None, jitter=None): + self.interval = timedelta(weeks=weeks, days=days, hours=hours, minutes=minutes, + seconds=seconds) + self.interval_length = timedelta_seconds(self.interval) + if self.interval_length == 0: + self.interval = timedelta(seconds=1) + self.interval_length = 1 + + if timezone: + self.timezone = astimezone(timezone) + elif isinstance(start_date, datetime) and start_date.tzinfo: + self.timezone = start_date.tzinfo + elif isinstance(end_date, datetime) and end_date.tzinfo: + self.timezone = end_date.tzinfo + else: + self.timezone = get_localzone() + + start_date = start_date or (datetime.now(self.timezone) + self.interval) + self.start_date = convert_to_datetime(start_date, self.timezone, 'start_date') + self.end_date = convert_to_datetime(end_date, self.timezone, 'end_date') + + self.jitter = jitter + + def get_next_fire_time(self, previous_fire_time, now): + if previous_fire_time: + next_fire_time = previous_fire_time + self.interval + elif self.start_date > now: + next_fire_time = self.start_date + else: + timediff_seconds = timedelta_seconds(now - self.start_date) + next_interval_num = int(ceil(timediff_seconds / self.interval_length)) + next_fire_time = self.start_date + self.interval * next_interval_num + + if self.jitter is not None: + next_fire_time = self._apply_jitter(next_fire_time, self.jitter, now) + + if not self.end_date or next_fire_time <= self.end_date: + return normalize(next_fire_time) + + def __getstate__(self): + return { + 'version': 2, + 'timezone': self.timezone, + 'start_date': self.start_date, + 'end_date': self.end_date, + 'interval': self.interval, + 'jitter': self.jitter, + } + + def __setstate__(self, state): + # This is for compatibility with APScheduler 3.0.x + if isinstance(state, tuple): + state = state[1] + + if state.get('version', 1) > 2: + raise ValueError( + 'Got serialized data for version %s of %s, but only versions up to 2 can be ' + 'handled' % (state['version'], self.__class__.__name__)) + + self.timezone = state['timezone'] + self.start_date = state['start_date'] + self.end_date = state['end_date'] + self.interval = state['interval'] + self.interval_length = timedelta_seconds(self.interval) + self.jitter = state.get('jitter') + + def __str__(self): + return 'interval[%s]' % str(self.interval) + + def __repr__(self): + options = ['interval=%r' % self.interval, 'start_date=%r' % datetime_repr(self.start_date)] + if self.end_date: + options.append("end_date=%r" % datetime_repr(self.end_date)) + if self.jitter: + options.append('jitter=%s' % self.jitter) + + return "<%s (%s, timezone='%s')>" % ( + self.__class__.__name__, ', '.join(options), self.timezone) diff --git a/python/lib/python3.11/site-packages/apscheduler/util.py b/python/lib/python3.11/site-packages/apscheduler/util.py new file 
mode 100644
index 0000000..64b27d7
--- /dev/null
+++ b/python/lib/python3.11/site-packages/apscheduler/util.py
@@ -0,0 +1,430 @@
+"""This module contains several handy functions primarily meant for internal use."""
+
+from __future__ import division
+
+from asyncio import iscoroutinefunction
+from datetime import date, datetime, time, timedelta, tzinfo
+from calendar import timegm
+from functools import partial
+from inspect import isclass, ismethod
+import re
+import sys
+
+from pytz import timezone, utc, FixedOffset
+import six
+
+try:
+    from inspect import signature
+except ImportError:  # pragma: nocover
+    from funcsigs import signature
+
+try:
+    from threading import TIMEOUT_MAX
+except ImportError:
+    TIMEOUT_MAX = 4294967  # Maximum value accepted by Event.wait() on Windows
+
+__all__ = ('asint', 'asbool', 'astimezone', 'convert_to_datetime', 'datetime_to_utc_timestamp',
+           'utc_timestamp_to_datetime', 'timedelta_seconds', 'datetime_ceil', 'get_callable_name',
+           'obj_to_ref', 'ref_to_obj', 'maybe_ref', 'repr_escape', 'check_callable_args',
+           'normalize', 'localize', 'TIMEOUT_MAX')
+
+
+class _Undefined(object):
+    def __nonzero__(self):
+        return False
+
+    def __bool__(self):
+        return False
+
+    def __repr__(self):
+        return '<undefined>'
+
+
+undefined = _Undefined()  #: a unique object that only signifies that no value is defined
+
+
+def asint(text):
+    """
+    Safely converts a string to an integer, returning ``None`` if the string is ``None``.
+
+    :type text: str
+    :rtype: int
+
+    """
+    if text is not None:
+        return int(text)
+
+
+def asbool(obj):
+    """
+    Interprets an object as a boolean value.
+
+    :rtype: bool
+
+    """
+    if isinstance(obj, str):
+        obj = obj.strip().lower()
+        if obj in ('true', 'yes', 'on', 'y', 't', '1'):
+            return True
+        if obj in ('false', 'no', 'off', 'n', 'f', '0'):
+            return False
+        raise ValueError('Unable to interpret value "%s" as boolean' % obj)
+    return bool(obj)
+
+
+def astimezone(obj):
+    """
+    Interprets an object as a timezone.
+
+    :rtype: tzinfo
+
+    """
+    if isinstance(obj, six.string_types):
+        return timezone(obj)
+    if isinstance(obj, tzinfo):
+        if obj.tzname(None) == 'local':
+            raise ValueError(
+                'Unable to determine the name of the local timezone -- you must explicitly '
+                'specify the name of the local timezone. Please refrain from using timezones like '
+                'EST to prevent problems with daylight saving time. Instead, use a locale based '
+                'timezone name (such as Europe/Helsinki).')
+        return obj
+    if obj is not None:
+        raise TypeError('Expected tzinfo, got %s instead' % obj.__class__.__name__)
+
+
+_DATE_REGEX = re.compile(
+    r'(?P<year>\d{4})-(?P<month>\d{1,2})-(?P<day>\d{1,2})'
+    r'(?:[ T](?P<hour>\d{1,2}):(?P<minute>\d{1,2}):(?P<second>\d{1,2})'
+    r'(?:\.(?P<microsecond>\d{1,6}))?'
+    r'(?P<timezone>Z|[+-]\d\d:\d\d)?)?$')
+
+
+def convert_to_datetime(input, tz, arg_name):
+    """
+    Converts the given object to a timezone aware datetime object.
+
+    If a timezone aware datetime object is passed, it is returned unmodified.
+    If a naive datetime object is passed, it is given the specified timezone.
+    If the input is a string, it is parsed as a datetime with the given timezone.
+
+    Date strings are accepted in three different forms: date only (Y-m-d), date with time
+    (Y-m-d H:M:S) or with date+time with microseconds (Y-m-d H:M:S.micro). Additionally you can
+    override the time zone by giving a specific offset in the format specified by ISO 8601:
+    Z (UTC), +HH:MM or -HH:MM.
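+
+    Editorial examples of accepted string inputs (not part of the vendored file)::
+
+        convert_to_datetime('2024-05-01', utc, 'start_date')
+        convert_to_datetime('2024-05-01 12:00:00.500', utc, 'start_date')
+        convert_to_datetime('2024-05-01 12:00:00+02:00', utc, 'start_date')  # offset overrides tz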
+ + :param str|datetime input: the datetime or string to convert to a timezone aware datetime + :param datetime.tzinfo tz: timezone to interpret ``input`` in + :param str arg_name: the name of the argument (used in an error message) + :rtype: datetime + + """ + if input is None: + return + elif isinstance(input, datetime): + datetime_ = input + elif isinstance(input, date): + datetime_ = datetime.combine(input, time()) + elif isinstance(input, six.string_types): + m = _DATE_REGEX.match(input) + if not m: + raise ValueError('Invalid date string') + + values = m.groupdict() + tzname = values.pop('timezone') + if tzname == 'Z': + tz = utc + elif tzname: + hours, minutes = (int(x) for x in tzname[1:].split(':')) + sign = 1 if tzname[0] == '+' else -1 + tz = FixedOffset(sign * (hours * 60 + minutes)) + + values = {k: int(v or 0) for k, v in values.items()} + datetime_ = datetime(**values) + else: + raise TypeError('Unsupported type for %s: %s' % (arg_name, input.__class__.__name__)) + + if datetime_.tzinfo is not None: + return datetime_ + if tz is None: + raise ValueError( + 'The "tz" argument must be specified if %s has no timezone information' % arg_name) + if isinstance(tz, six.string_types): + tz = timezone(tz) + + return localize(datetime_, tz) + + +def datetime_to_utc_timestamp(timeval): + """ + Converts a datetime instance to a timestamp. + + :type timeval: datetime + :rtype: float + + """ + if timeval is not None: + return timegm(timeval.utctimetuple()) + timeval.microsecond / 1000000 + + +def utc_timestamp_to_datetime(timestamp): + """ + Converts the given timestamp to a datetime instance. + + :type timestamp: float + :rtype: datetime + + """ + if timestamp is not None: + return datetime.fromtimestamp(timestamp, utc) + + +def timedelta_seconds(delta): + """ + Converts the given timedelta to seconds. + + :type delta: timedelta + :rtype: float + + """ + return delta.days * 24 * 60 * 60 + delta.seconds + \ + delta.microseconds / 1000000.0 + + +def datetime_ceil(dateval): + """ + Rounds the given datetime object upwards. + + :type dateval: datetime + + """ + if dateval.microsecond > 0: + return dateval + timedelta(seconds=1, microseconds=-dateval.microsecond) + return dateval + + +def datetime_repr(dateval): + return dateval.strftime('%Y-%m-%d %H:%M:%S %Z') if dateval else 'None' + + +def get_callable_name(func): + """ + Returns the best available display name for the given function/callable. + + :rtype: str + + """ + # the easy case (on Python 3.3+) + if hasattr(func, '__qualname__'): + return func.__qualname__ + + # class methods, bound and unbound methods + f_self = getattr(func, '__self__', None) or getattr(func, 'im_self', None) + if f_self and hasattr(func, '__name__'): + f_class = f_self if isclass(f_self) else f_self.__class__ + else: + f_class = getattr(func, 'im_class', None) + + if f_class and hasattr(func, '__name__'): + return '%s.%s' % (f_class.__name__, func.__name__) + + # class or class instance + if hasattr(func, '__call__'): + # class + if hasattr(func, '__name__'): + return func.__name__ + + # instance of a class with a __call__ method + return func.__class__.__name__ + + raise TypeError('Unable to determine a name for %r -- maybe it is not a callable?' % func) + + +def obj_to_ref(obj): + """ + Returns the path to the given callable. 
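+
+    An editorial round-trip example (not part of the vendored file)::
+
+        import json
+        ref = obj_to_ref(json.dumps)         # 'json:dumps'
+        assert ref_to_obj(ref) is json.dumps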
+ + :rtype: str + :raises TypeError: if the given object is not callable + :raises ValueError: if the given object is a :class:`~functools.partial`, lambda or a nested + function + + """ + if isinstance(obj, partial): + raise ValueError('Cannot create a reference to a partial()') + + name = get_callable_name(obj) + if '<lambda>' in name: + raise ValueError('Cannot create a reference to a lambda') + if '<locals>' in name: + raise ValueError('Cannot create a reference to a nested function') + + if ismethod(obj): + if hasattr(obj, 'im_self') and obj.im_self: + # bound method + module = obj.im_self.__module__ + elif hasattr(obj, 'im_class') and obj.im_class: + # unbound method + module = obj.im_class.__module__ + else: + module = obj.__module__ + else: + module = obj.__module__ + return '%s:%s' % (module, name) + + +def ref_to_obj(ref): + """ + Returns the object pointed to by ``ref``. + + :type ref: str + + """ + if not isinstance(ref, six.string_types): + raise TypeError('References must be strings') + if ':' not in ref: + raise ValueError('Invalid reference') + + modulename, rest = ref.split(':', 1) + try: + obj = __import__(modulename, fromlist=[rest]) + except ImportError: + raise LookupError('Error resolving reference %s: could not import module' % ref) + + try: + for name in rest.split('.'): + obj = getattr(obj, name) + return obj + except Exception: + raise LookupError('Error resolving reference %s: error looking up object' % ref) + + +def maybe_ref(ref): + """ + Returns the object that the given reference points to, if it is indeed a reference. + If it is not a reference, the object is returned as-is. + + """ + if not isinstance(ref, str): + return ref + return ref_to_obj(ref) + + +if six.PY2: + def repr_escape(string): + if isinstance(string, six.text_type): + return string.encode('ascii', 'backslashreplace') + return string +else: + def repr_escape(string): + return string + + +def check_callable_args(func, args, kwargs): + """ + Ensures that the given callable can be called with the given arguments. 
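+
+    An editorial failure example (not part of the vendored file)::
+
+        def job(a, b=1):
+            pass
+
+        # raises ValueError: the target callable does not accept
+        # the keyword argument 'c'
+        check_callable_args(job, (1,), {'c': 2})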
+ + :type args: tuple + :type kwargs: dict + + """ + pos_kwargs_conflicts = [] # parameters that have a match in both args and kwargs + positional_only_kwargs = [] # positional-only parameters that have a match in kwargs + unsatisfied_args = [] # parameters in signature that don't have a match in args or kwargs + unsatisfied_kwargs = [] # keyword-only arguments that don't have a match in kwargs + unmatched_args = list(args) # args that didn't match any of the parameters in the signature + # kwargs that didn't match any of the parameters in the signature + unmatched_kwargs = list(kwargs) + # indicates if the signature defines *args and **kwargs respectively + has_varargs = has_var_kwargs = False + + try: + if sys.version_info >= (3, 5): + sig = signature(func, follow_wrapped=False) + else: + sig = signature(func) + except ValueError: + # signature() doesn't work against every kind of callable + return + + for param in six.itervalues(sig.parameters): + if param.kind == param.POSITIONAL_OR_KEYWORD: + if param.name in unmatched_kwargs and unmatched_args: + pos_kwargs_conflicts.append(param.name) + elif unmatched_args: + del unmatched_args[0] + elif param.name in unmatched_kwargs: + unmatched_kwargs.remove(param.name) + elif param.default is param.empty: + unsatisfied_args.append(param.name) + elif param.kind == param.POSITIONAL_ONLY: + if unmatched_args: + del unmatched_args[0] + elif param.name in unmatched_kwargs: + unmatched_kwargs.remove(param.name) + positional_only_kwargs.append(param.name) + elif param.default is param.empty: + unsatisfied_args.append(param.name) + elif param.kind == param.KEYWORD_ONLY: + if param.name in unmatched_kwargs: + unmatched_kwargs.remove(param.name) + elif param.default is param.empty: + unsatisfied_kwargs.append(param.name) + elif param.kind == param.VAR_POSITIONAL: + has_varargs = True + elif param.kind == param.VAR_KEYWORD: + has_var_kwargs = True + + # Make sure there are no conflicts between args and kwargs + if pos_kwargs_conflicts: + raise ValueError('The following arguments are supplied in both args and kwargs: %s' % + ', '.join(pos_kwargs_conflicts)) + + # Check if keyword arguments are being fed to positional-only parameters + if positional_only_kwargs: + raise ValueError('The following arguments cannot be given as keyword arguments: %s' % + ', '.join(positional_only_kwargs)) + + # Check that the number of positional arguments minus the number of matched kwargs matches the + # argspec + if unsatisfied_args: + raise ValueError('The following arguments have not been supplied: %s' % + ', '.join(unsatisfied_args)) + + # Check that all keyword-only arguments have been supplied + if unsatisfied_kwargs: + raise ValueError( + 'The following keyword-only arguments have not been supplied in kwargs: %s' % + ', '.join(unsatisfied_kwargs)) + + # Check that the callable can accept the given number of positional arguments + if not has_varargs and unmatched_args: + raise ValueError( + 'The list of positional arguments is longer than the target callable can handle ' + '(allowed: %d, given in args: %d)' % (len(args) - len(unmatched_args), len(args))) + + # Check that the callable can accept the given keyword arguments + if not has_var_kwargs and unmatched_kwargs: + raise ValueError( + 'The target callable does not accept the following keyword arguments: %s' % + ', '.join(unmatched_kwargs)) + + +def iscoroutinefunction_partial(f): + while isinstance(f, partial): + f = f.func + + # The asyncio version of iscoroutinefunction includes testing for @coroutine + # 
decorations vs. the inspect version which does not. + return iscoroutinefunction(f) + + +def normalize(dt): + return datetime.fromtimestamp(dt.timestamp(), dt.tzinfo) + + +def localize(dt, tzinfo): + if hasattr(tzinfo, 'localize'): + return tzinfo.localize(dt) + + return normalize(dt.replace(tzinfo=tzinfo)) diff --git a/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/INSTALLER b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/INSTALLER new file mode 100644 index 0000000..a1b589e --- /dev/null +++ b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/LICENSE b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/LICENSE new file mode 100644 index 0000000..033c86b --- /dev/null +++ b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/LICENSE @@ -0,0 +1,13 @@ +Copyright 2016-2020 aio-libs collaboration. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/METADATA b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/METADATA new file mode 100644 index 0000000..e68d234 --- /dev/null +++ b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/METADATA @@ -0,0 +1,133 @@ +Metadata-Version: 2.1 +Name: async-timeout +Version: 4.0.2 +Summary: Timeout context manager for asyncio programs +Home-page: https://github.com/aio-libs/async-timeout +Author: Andrew Svetlov <andrew.svetlov@gmail.com> +Author-email: andrew.svetlov@gmail.com +License: Apache 2 +Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby +Project-URL: CI: GitHub Actions, https://github.com/aio-libs/async-timeout/actions +Project-URL: Coverage: codecov, https://codecov.io/github/aio-libs/async-timeout +Project-URL: GitHub: issues, https://github.com/aio-libs/async-timeout/issues +Project-URL: GitHub: repo, https://github.com/aio-libs/async-timeout +Platform: UNKNOWN +Classifier: Development Status :: 5 - Production/Stable +Classifier: Topic :: Software Development :: Libraries +Classifier: Framework :: AsyncIO +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Requires-Python: >=3.6 +Description-Content-Type: text/x-rst +License-File: LICENSE +Requires-Dist: typing-extensions (>=3.6.5) ; python_version < "3.8" + +async-timeout +============= +.. image:: https://travis-ci.com/aio-libs/async-timeout.svg?branch=master + :target: https://travis-ci.com/aio-libs/async-timeout +.. 
image:: https://codecov.io/gh/aio-libs/async-timeout/branch/master/graph/badge.svg
+   :target: https://codecov.io/gh/aio-libs/async-timeout
+.. image:: https://img.shields.io/pypi/v/async-timeout.svg
+   :target: https://pypi.python.org/pypi/async-timeout
+.. image:: https://badges.gitter.im/Join%20Chat.svg
+   :target: https://gitter.im/aio-libs/Lobby
+   :alt: Chat on Gitter
+
+asyncio-compatible timeout context manager.
+
+
+Usage example
+-------------
+
+
+The context manager is useful when you want to apply timeout logic around a
+block of code, or when ``asyncio.wait_for()`` is not suitable. It is also much
+faster than ``asyncio.wait_for()``, because ``timeout`` doesn't create a new task.
+
+The ``timeout(delay)`` call returns a context manager
+that cancels a block on *timeout* expiring::
+
+    async with timeout(1.5):
+        await inner()
+
+1. If ``inner()`` finishes in under ``1.5`` seconds, nothing
+   happens.
+2. Otherwise ``inner()`` is cancelled internally by sending
+   ``asyncio.CancelledError`` into it, but ``asyncio.TimeoutError`` is
+   raised outside of the context manager scope.
+
+The *timeout* parameter may be ``None`` to skip the timeout functionality.
+
+
+Alternatively, ``timeout_at(when)`` can be used for scheduling
+at the absolute time::
+
+    loop = asyncio.get_event_loop()
+    now = loop.time()
+
+    async with timeout_at(now + 1.5):
+        await inner()
+
+
+Please note: it is not POSIX time but a time with
+undefined starting base, e.g. the time of the system power on.
+
+
+The context manager has an ``.expired`` property for checking whether the
+timeout happened inside the context manager::
+
+    async with timeout(1.5) as cm:
+        await inner()
+    print(cm.expired)
+
+The property is ``True`` if ``inner()`` execution is cancelled by
+the timeout context manager.
+
+If the ``inner()`` call explicitly raises ``TimeoutError``, ``cm.expired``
+is ``False``.
+
+The scheduled deadline time is available as the ``.deadline`` property::
+
+    async with timeout(1.5) as cm:
+        cm.deadline
+
+A timeout that has not expired yet can be rescheduled with the ``shift()``
+or ``update()`` methods::
+
+    async with timeout(1.5) as cm:
+        cm.shift(1)  # add another second of waiting
+        cm.update(loop.time() + 5)  # reschedule to now + 5 seconds
+
+Rescheduling is forbidden once the timeout has expired or after exiting the
+``async with`` block.
+
+
+Installation
+------------
+
+::
+
+    $ pip install async-timeout
+
+The library is Python 3 only!
+
+
+
+Authors and License
+-------------------
+
+The module is written by Andrew Svetlov.
+
+It's *Apache 2* licensed and freely available.
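+
+Editorial addendum (not part of the upstream README): a self-contained variant
+of the snippets above, runnable as-is; ``inner`` stands in for any slow coroutine::
+
+    import asyncio
+    from async_timeout import timeout
+
+    async def inner():
+        await asyncio.sleep(10)
+
+    async def main():
+        try:
+            async with timeout(1.5) as cm:
+                await inner()
+        except asyncio.TimeoutError:
+            print('timed out; cm.expired =', cm.expired)  # True
+
+    asyncio.run(main())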
+ + diff --git a/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/RECORD b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/RECORD similarity index 100% rename from lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/RECORD rename to python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/RECORD diff --git a/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/REQUESTED b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/REQUESTED new file mode 100644 index 0000000..e69de29 diff --git a/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/WHEEL b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/WHEEL new file mode 100644 index 0000000..5bad85f --- /dev/null +++ b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.0) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/top_level.txt b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/top_level.txt new file mode 100644 index 0000000..ad29955 --- /dev/null +++ b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/top_level.txt @@ -0,0 +1 @@ +async_timeout diff --git a/lib/python3.11/site-packages/tzlocal-5.0.1.dist-info/zip-safe b/python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/zip-safe similarity index 100% rename from lib/python3.11/site-packages/tzlocal-5.0.1.dist-info/zip-safe rename to python/lib/python3.11/site-packages/async_timeout-4.0.2.dist-info/zip-safe diff --git a/python/lib/python3.11/site-packages/async_timeout/__init__.py b/python/lib/python3.11/site-packages/async_timeout/__init__.py new file mode 100644 index 0000000..179d1b0 --- /dev/null +++ b/python/lib/python3.11/site-packages/async_timeout/__init__.py @@ -0,0 +1,247 @@ +import asyncio +import enum +import sys +import warnings +from types import TracebackType +from typing import Any, Optional, Type + + +if sys.version_info >= (3, 8): + from typing import final +else: + from typing_extensions import final + + +__version__ = "4.0.2" + + +__all__ = ("timeout", "timeout_at", "Timeout") + + +def timeout(delay: Optional[float]) -> "Timeout": + """timeout context manager. + + Useful in cases when you want to apply timeout logic around block + of code or in cases when asyncio.wait_for is not suitable. For example: + + >>> async with timeout(0.001): + ... async with aiohttp.get('https://github.com') as r: + ... await r.text() + + + delay - value in seconds or None to disable timeout logic + """ + loop = _get_running_loop() + if delay is not None: + deadline = loop.time() + delay # type: Optional[float] + else: + deadline = None + return Timeout(deadline, loop) + + +def timeout_at(deadline: Optional[float]) -> "Timeout": + """Schedule the timeout at absolute time. + + deadline argument points on the time in the same clock system + as loop.time(). + + Please note: it is not POSIX time but a time with + undefined starting base, e.g. the time of the system power on. + + >>> async with timeout_at(loop.time() + 10): + ... async with aiohttp.get('https://github.com') as r: + ... 
await r.text()
+
+
+    """
+    loop = _get_running_loop()
+    return Timeout(deadline, loop)
+
+
+class _State(enum.Enum):
+    INIT = "INIT"
+    ENTER = "ENTER"
+    TIMEOUT = "TIMEOUT"
+    EXIT = "EXIT"
+
+
+@final
+class Timeout:
+    # Internal class, please don't instantiate it directly
+    # Use timeout() and timeout_at() public factories instead.
+    #
+    # Implementation note: `async with timeout()` is preferred
+    # over `with timeout()`.
+    # While technically the Timeout class implementation
+    # doesn't need to be async at all,
+    # the `async with` statement explicitly signals that
+    # the context manager should be used from async function context.
+    #
+    # This design helps to avoid many silly misuses.
+    #
+    # TimeoutError is raised immediately when scheduled
+    # if the deadline is passed.
+    # The purpose is to time out as soon as possible
+    # without waiting for the next await expression.
+
+    __slots__ = ("_deadline", "_loop", "_state", "_timeout_handler")
+
+    def __init__(
+        self, deadline: Optional[float], loop: asyncio.AbstractEventLoop
+    ) -> None:
+        self._loop = loop
+        self._state = _State.INIT
+
+        self._timeout_handler = None  # type: Optional[asyncio.Handle]
+        if deadline is None:
+            self._deadline = None  # type: Optional[float]
+        else:
+            self.update(deadline)
+
+    def __enter__(self) -> "Timeout":
+        warnings.warn(
+            "with timeout() is deprecated, use async with timeout() instead",
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        self._do_enter()
+        return self
+
+    def __exit__(
+        self,
+        exc_type: Optional[Type[BaseException]],
+        exc_val: Optional[BaseException],
+        exc_tb: Optional[TracebackType],
+    ) -> Optional[bool]:
+        self._do_exit(exc_type)
+        return None
+
+    async def __aenter__(self) -> "Timeout":
+        self._do_enter()
+        return self
+
+    async def __aexit__(
+        self,
+        exc_type: Optional[Type[BaseException]],
+        exc_val: Optional[BaseException],
+        exc_tb: Optional[TracebackType],
+    ) -> Optional[bool]:
+        self._do_exit(exc_type)
+        return None
+
+    @property
+    def expired(self) -> bool:
+        """Is timeout expired during execution?"""
+        return self._state == _State.TIMEOUT
+
+    @property
+    def deadline(self) -> Optional[float]:
+        return self._deadline
+
+    def reject(self) -> None:
+        """Reject scheduled timeout if any."""
+        # cancel is maybe better name but
+        # task.cancel() raises CancelledError in asyncio world.
+        if self._state not in (_State.INIT, _State.ENTER):
+            raise RuntimeError(f"invalid state {self._state.value}")
+        self._reject()
+
+    def _reject(self) -> None:
+        if self._timeout_handler is not None:
+            self._timeout_handler.cancel()
+            self._timeout_handler = None
+
+    def shift(self, delay: float) -> None:
+        """Advance the timeout by *delay* seconds.
+
+        The delay can be negative.
+
+        Raise RuntimeError if shift is called when the deadline is not scheduled.
+        """
+        deadline = self._deadline
+        if deadline is None:
+            raise RuntimeError("cannot shift timeout if deadline is not scheduled")
+        self.update(deadline + delay)
+
+    def update(self, deadline: float) -> None:
+        """Set deadline to absolute value.
+
+        deadline argument points on the time in the same clock system
+        as loop.time().
+
+        If the new deadline is in the past, the timeout is raised immediately.
+
+        Please note: it is not POSIX time but a time with
+        undefined starting base, e.g. the time of the system power on.
+ """ + if self._state == _State.EXIT: + raise RuntimeError("cannot reschedule after exit from context manager") + if self._state == _State.TIMEOUT: + raise RuntimeError("cannot reschedule expired timeout") + if self._timeout_handler is not None: + self._timeout_handler.cancel() + self._deadline = deadline + if self._state != _State.INIT: + self._reschedule() + + def _reschedule(self) -> None: + assert self._state == _State.ENTER + deadline = self._deadline + if deadline is None: + return + + now = self._loop.time() + if self._timeout_handler is not None: + self._timeout_handler.cancel() + + task = _current_task(self._loop) + if deadline <= now: + self._timeout_handler = self._loop.call_soon(self._on_timeout, task) + else: + self._timeout_handler = self._loop.call_at(deadline, self._on_timeout, task) + + def _do_enter(self) -> None: + if self._state != _State.INIT: + raise RuntimeError(f"invalid state {self._state.value}") + self._state = _State.ENTER + self._reschedule() + + def _do_exit(self, exc_type: Optional[Type[BaseException]]) -> None: + if exc_type is asyncio.CancelledError and self._state == _State.TIMEOUT: + self._timeout_handler = None + raise asyncio.TimeoutError + # timeout has not expired + self._state = _State.EXIT + self._reject() + return None + + def _on_timeout(self, task: "asyncio.Task[None]") -> None: + task.cancel() + self._state = _State.TIMEOUT + # drop the reference early + self._timeout_handler = None + + +if sys.version_info >= (3, 7): + + def _current_task(loop: asyncio.AbstractEventLoop) -> "Optional[asyncio.Task[Any]]": + return asyncio.current_task(loop=loop) + +else: + + def _current_task(loop: asyncio.AbstractEventLoop) -> "Optional[asyncio.Task[Any]]": + return asyncio.Task.current_task(loop=loop) + + +if sys.version_info >= (3, 7): + + def _get_running_loop() -> asyncio.AbstractEventLoop: + return asyncio.get_running_loop() + +else: + + def _get_running_loop() -> asyncio.AbstractEventLoop: + loop = asyncio.get_event_loop() + if not loop.is_running(): + raise RuntimeError("no running event loop") + return loop diff --git a/python/lib/python3.11/site-packages/async_timeout/py.typed b/python/lib/python3.11/site-packages/async_timeout/py.typed new file mode 100644 index 0000000..3b94f91 --- /dev/null +++ b/python/lib/python3.11/site-packages/async_timeout/py.typed @@ -0,0 +1 @@ +Placeholder diff --git a/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/INSTALLER b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/INSTALLER new file mode 100644 index 0000000..a1b589e --- /dev/null +++ b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/METADATA b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/METADATA new file mode 100644 index 0000000..f1d7204 --- /dev/null +++ b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/METADATA @@ -0,0 +1,116 @@ +Metadata-Version: 2.1 +Name: beautifulsoup4 +Version: 4.12.2 +Summary: Screen-scraping library +Project-URL: Download, https://www.crummy.com/software/BeautifulSoup/bs4/download/ +Project-URL: Homepage, https://www.crummy.com/software/BeautifulSoup/bs4/ +Author-email: Leonard Richardson <leonardr@segfault.org> +License-Expression: MIT +License-File: AUTHORS +License-File: LICENSE +Keywords: HTML,XML,parse,soup +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: 
License :: OSI Approved :: MIT License +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Topic :: Text Processing :: Markup :: HTML +Classifier: Topic :: Text Processing :: Markup :: SGML +Classifier: Topic :: Text Processing :: Markup :: XML +Requires-Python: >=3.6.0 +Requires-Dist: soupsieve>1.2 +Provides-Extra: html5lib +Requires-Dist: html5lib; extra == 'html5lib' +Provides-Extra: lxml +Requires-Dist: lxml; extra == 'lxml' +Description-Content-Type: text/markdown + +Beautiful Soup is a library that makes it easy to scrape information +from web pages. It sits atop an HTML or XML parser, providing Pythonic +idioms for iterating, searching, and modifying the parse tree. + +# Quick start + +``` +>>> from bs4 import BeautifulSoup +>>> soup = BeautifulSoup("<p>Some<b>bad<i>HTML") +>>> print(soup.prettify()) +<html> + <body> + <p> + Some + <b> + bad + <i> + HTML + </i> + </b> + </p> + </body> +</html> +>>> soup.find(text="bad") +'bad' +>>> soup.i +<i>HTML</i> +# +>>> soup = BeautifulSoup("<tag1>Some<tag2/>bad<tag3>XML", "xml") +# +>>> print(soup.prettify()) +<?xml version="1.0" encoding="utf-8"?> +<tag1> + Some + <tag2/> + bad + <tag3> + XML + </tag3> +</tag1> +``` + +To go beyond the basics, [comprehensive documentation is available](https://www.crummy.com/software/BeautifulSoup/bs4/doc/). + +# Links + +* [Homepage](https://www.crummy.com/software/BeautifulSoup/bs4/) +* [Documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) +* [Discussion group](https://groups.google.com/group/beautifulsoup/) +* [Development](https://code.launchpad.net/beautifulsoup/) +* [Bug tracker](https://bugs.launchpad.net/beautifulsoup/) +* [Complete changelog](https://bazaar.launchpad.net/~leonardr/beautifulsoup/bs4/view/head:/CHANGELOG) + +# Note on Python 2 sunsetting + +Beautiful Soup's support for Python 2 was discontinued on December 31, +2020: one year after the sunset date for Python 2 itself. From this +point onward, new Beautiful Soup development will exclusively target +Python 3. The final release of Beautiful Soup 4 to support Python 2 +was 4.9.3. + +# Supporting the project + +If you use Beautiful Soup as part of your professional work, please consider a +[Tidelift subscription](https://tidelift.com/subscription/pkg/pypi-beautifulsoup4?utm_source=pypi-beautifulsoup4&utm_medium=referral&utm_campaign=readme). +This will support many of the free software projects your organization +depends on, not just Beautiful Soup. + +If you use Beautiful Soup for personal projects, the best way to say +thank you is to read +[Tool Safety](https://www.crummy.com/software/BeautifulSoup/zine/), a zine I +wrote about what Beautiful Soup has taught me about software +development. + +# Building the documentation + +The bs4/doc/ directory contains full documentation in Sphinx +format. Run `make html` in that directory to create HTML +documentation. 
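The quick start above leaves the parser choice implicit; the `bs4/__init__.py` added later in this diff issues a `GuessedAtParserWarning` whenever it has to guess. A minimal usage sketch, assuming the vendored packages are on `sys.path`, that names the stdlib parser explicitly:

```
# Sketch only: name a parser explicitly so results are stable across
# environments. "html.parser" ships with the standard library; "lxml"
# or "html5lib" can be named instead if those optional packages are
# installed.
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>Some<b>bad<i>HTML", features="html.parser")
print(soup.p.b.i.string)  # -> HTML
print(soup.get_text())    # -> SomebadHTML
```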
+ +# Running the unit tests + +Beautiful Soup supports unit test discovery using Pytest: + +``` +$ pytest +``` + diff --git a/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/RECORD b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/RECORD similarity index 100% rename from lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/RECORD rename to python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/RECORD diff --git a/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/REQUESTED b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/REQUESTED new file mode 100644 index 0000000..e69de29 diff --git a/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/WHEEL b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/WHEEL new file mode 100644 index 0000000..9d72767 --- /dev/null +++ b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: hatchling 1.13.0 +Root-Is-Purelib: true +Tag: py3-none-any diff --git a/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/AUTHORS b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/AUTHORS new file mode 100644 index 0000000..1f14fe0 --- /dev/null +++ b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/AUTHORS @@ -0,0 +1,49 @@ +Behold, mortal, the origins of Beautiful Soup... +================================================ + +Leonard Richardson is the primary maintainer. + +Aaron DeVore and Isaac Muse have made significant contributions to the +code base. + +Mark Pilgrim provided the encoding detection code that forms the base +of UnicodeDammit. + +Thomas Kluyver and Ezio Melotti finished the work of getting Beautiful +Soup 4 working under Python 3. + +Simon Willison wrote soupselect, which was used to make Beautiful Soup +support CSS selectors. Isaac Muse wrote SoupSieve, which made it +possible to _remove_ the CSS selector code from Beautiful Soup. + +Sam Ruby helped with a lot of edge cases. + +Jonathan Ellis was awarded the prestigious Beau Potage D'Or for his +work in solving the nestable tags conundrum. 
+ +An incomplete list of people who have contributed patches to Beautiful +Soup: + + Istvan Albert, Andrew Lin, Anthony Baxter, Oliver Beattie, Andrew +Boyko, Tony Chang, Francisco Canas, "Delong", Zephyr Fang, Fuzzy, +Roman Gaufman, Yoni Gilad, Richie Hindle, Toshihiro Kamiya, Peteris +Krumins, Kent Johnson, Marek Kapolka, Andreas Kostyrka, Roel Kramer, +Ben Last, Robert Leftwich, Stefaan Lippens, "liquider", Staffan +Malmgren, Ksenia Marasanova, JP Moins, Adam Monsen, John Nagle, "Jon", +Ed Oskiewicz, Martijn Peters, Greg Phillips, Giles Radford, Stefano +Revera, Arthur Rudolph, Marko Samastur, James Salter, Jouni Seppänen, +Alexander Schmolck, Tim Shirley, Geoffrey Sneddon, Ville Skyttä, +"Vikas", Jens Svalgaard, Andy Theyers, Eric Weiser, Glyn Webster, John +Wiseman, Paul Wright, Danny Yoo + +An incomplete list of people who made suggestions or found bugs or +found ways to break Beautiful Soup: + + Hanno Böck, Matteo Bertini, Chris Curvey, Simon Cusack, Bruce Eckel, + Matt Ernst, Michael Foord, Tom Harris, Bill de hOra, Donald Howes, + Matt Patterson, Scott Roberts, Steve Strassmann, Mike Williams, + warchild at redho dot com, Sami Kuisma, Carlos Rocha, Bob Hutchison, + Joren Mc, Michal Migurski, John Kleven, Tim Heaney, Tripp Lilley, Ed + Summers, Dennis Sutch, Chris Smith, Aaron Swartz, Stuart + Turner, Greg Edwards, Kevin J Kalupson, Nikos Kouremenos, Artur de + Sousa Rocha, Yichun Wei, Per Vognsen diff --git a/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/LICENSE b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/LICENSE new file mode 100644 index 0000000..08e3a9c --- /dev/null +++ b/python/lib/python3.11/site-packages/beautifulsoup4-4.12.2.dist-info/licenses/LICENSE @@ -0,0 +1,31 @@ +Beautiful Soup is made available under the MIT license: + + Copyright (c) Leonard Richardson + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + "Software"), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be + included in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS + BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN + ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN + CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + SOFTWARE. + +Beautiful Soup incorporates code from the html5lib library, which is +also made available under the MIT license. Copyright (c) James Graham +and other contributors + +Beautiful Soup has an optional dependency on the soupsieve library, +which is also made available under the MIT license. Copyright (c) +Isaac Muse diff --git a/python/lib/python3.11/site-packages/brotli.py b/python/lib/python3.11/site-packages/brotli.py new file mode 100644 index 0000000..d66966b --- /dev/null +++ b/python/lib/python3.11/site-packages/brotli.py @@ -0,0 +1,56 @@ +# Copyright 2016 The Brotli Authors. All rights reserved.
+# +# Distributed under MIT license. +# See file LICENSE for detail or copy at https://opensource.org/licenses/MIT + +"""Functions to compress and decompress data using the Brotli library.""" + +import _brotli + + +# The library version. +__version__ = _brotli.__version__ + +# The compression mode. +MODE_GENERIC = _brotli.MODE_GENERIC +MODE_TEXT = _brotli.MODE_TEXT +MODE_FONT = _brotli.MODE_FONT + +# The Compressor object. +Compressor = _brotli.Compressor + +# The Decompressor object. +Decompressor = _brotli.Decompressor + +# Compress a byte string. +def compress(string, mode=MODE_GENERIC, quality=11, lgwin=22, lgblock=0): + """Compress a byte string. + + Args: + string (bytes): The input data. + mode (int, optional): The compression mode can be MODE_GENERIC (default), + MODE_TEXT (for UTF-8 format text input) or MODE_FONT (for WOFF 2.0). + quality (int, optional): Controls the compression-speed vs compression- + density tradeoff. The higher the quality, the slower the compression. + Range is 0 to 11. Defaults to 11. + lgwin (int, optional): Base 2 logarithm of the sliding window size. Range + is 10 to 24. Defaults to 22. + lgblock (int, optional): Base 2 logarithm of the maximum input block size. + Range is 16 to 24. If set to 0, the value will be set based on the + quality. Defaults to 0. + + Returns: + The compressed byte string. + + Raises: + brotli.error: If arguments are invalid, or compressor fails. + """ + compressor = Compressor(mode=mode, quality=quality, lgwin=lgwin, + lgblock=lgblock) + return compressor.process(string) + compressor.finish() + +# Decompress a compressed byte string. +decompress = _brotli.decompress + +# Raised if compression or decompression fails. +error = _brotli.error diff --git a/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/PKG-INFO b/python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/PKG-INFO similarity index 100% rename from lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/PKG-INFO rename to python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/PKG-INFO diff --git a/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/SOURCES.txt b/python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/SOURCES.txt similarity index 100% rename from lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/SOURCES.txt rename to python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/SOURCES.txt diff --git a/lib/python3.11/site-packages/selenium_requests-1.3-py3.11.egg-info/dependency_links.txt b/python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/dependency_links.txt similarity index 100% rename from lib/python3.11/site-packages/selenium_requests-1.3-py3.11.egg-info/dependency_links.txt rename to python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/dependency_links.txt diff --git a/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/installed-files.txt b/python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/installed-files.txt similarity index 100% rename from lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/installed-files.txt rename to python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/installed-files.txt diff --git a/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/requires.txt b/python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/requires.txt similarity index 100% rename from lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/requires.txt rename to python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/requires.txt diff --git 
a/lib/python3.11/site-packages/wget-3.2-py3.11.egg-info/dependency_links.txt b/python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/top_level.txt similarity index 100% rename from lib/python3.11/site-packages/wget-3.2-py3.11.egg-info/dependency_links.txt rename to python/lib/python3.11/site-packages/bs4-0.0.1-py3.11.egg-info/top_level.txt diff --git a/python/lib/python3.11/site-packages/bs4/__init__.py b/python/lib/python3.11/site-packages/bs4/__init__.py new file mode 100644 index 0000000..3d2ab09 --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/__init__.py @@ -0,0 +1,840 @@ +"""Beautiful Soup Elixir and Tonic - "The Screen-Scraper's Friend". + +http://www.crummy.com/software/BeautifulSoup/ + +Beautiful Soup uses a pluggable XML or HTML parser to parse a +(possibly invalid) document into a tree representation. Beautiful Soup +provides methods and Pythonic idioms that make it easy to navigate, +search, and modify the parse tree. + +Beautiful Soup works with Python 3.6 and up. It works better if lxml +and/or html5lib is installed. + +For more than you ever wanted to know about Beautiful Soup, see the +documentation: http://www.crummy.com/software/BeautifulSoup/bs4/doc/ +""" + +__author__ = "Leonard Richardson (leonardr@segfault.org)" +__version__ = "4.12.2" +__copyright__ = "Copyright (c) 2004-2023 Leonard Richardson" +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +__all__ = ['BeautifulSoup'] + +from collections import Counter +import os +import re +import sys +import traceback +import warnings + +# The very first thing we do is give a useful error if someone is +# running this code under Python 2. +if sys.version_info.major < 3: + raise ImportError('You are trying to use a Python 3-specific version of Beautiful Soup under Python 2. This will not work. The final version of Beautiful Soup to support Python 2 was 4.9.3.') + +from .builder import ( + builder_registry, + ParserRejectedMarkup, + XMLParsedAsHTMLWarning, + HTMLParserTreeBuilder +) +from .dammit import UnicodeDammit +from .element import ( + CData, + Comment, + CSS, + DEFAULT_OUTPUT_ENCODING, + Declaration, + Doctype, + NavigableString, + PageElement, + ProcessingInstruction, + PYTHON_SPECIFIC_ENCODINGS, + ResultSet, + Script, + Stylesheet, + SoupStrainer, + Tag, + TemplateString, + ) + +# Define some custom warnings. +class GuessedAtParserWarning(UserWarning): + """The warning issued when BeautifulSoup has to guess what parser to + use -- probably because no parser was specified in the constructor. + """ + +class MarkupResemblesLocatorWarning(UserWarning): + """The warning issued when BeautifulSoup is given 'markup' that + actually looks like a resource locator -- a URL or a path to a file + on disk. + """ + + +class BeautifulSoup(Tag): + """A data structure representing a parsed HTML or XML document. + + Most of the methods you'll call on a BeautifulSoup object are inherited from + PageElement or Tag. + + Internally, this class defines the basic interface called by the + tree builders when converting an HTML/XML document into a data + structure. The interface abstracts away the differences between + parsers. To write a new tree builder, you'll need to understand + these methods as a whole. 
+ + These methods will be called by the BeautifulSoup constructor: + * reset() + * feed(markup) + + The tree builder may call these methods from its feed() implementation: + * handle_starttag(name, attrs) # See note about return value + * handle_endtag(name) + * handle_data(data) # Appends to the current data node + * endData(containerClass) # Ends the current data node + + No matter how complicated the underlying parser is, you should be + able to build a tree using 'start tag' events, 'end tag' events, + 'data' events, and "done with data" events. + + If you encounter an empty-element tag (aka a self-closing tag, + like HTML's <br> tag), call handle_starttag and then + handle_endtag. + """ + + # Since BeautifulSoup subclasses Tag, it's possible to treat it as + # a Tag with a .name. This name makes it clear the BeautifulSoup + # object isn't a real markup tag. + ROOT_TAG_NAME = '[document]' + + # If the end-user gives no indication which tree builder they + # want, look for one with these features. + DEFAULT_BUILDER_FEATURES = ['html', 'fast'] + + # A string containing all ASCII whitespace characters, used in + # endData() to detect data chunks that seem 'empty'. + ASCII_SPACES = '\x20\x0a\x09\x0c\x0d' + + NO_PARSER_SPECIFIED_WARNING = "No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\"%(parser)s\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\n\nThe code that caused this warning is on line %(line_number)s of the file %(filename)s. To get rid of this warning, pass the additional argument 'features=\"%(parser)s\"' to the BeautifulSoup constructor.\n" + + def __init__(self, markup="", features=None, builder=None, + parse_only=None, from_encoding=None, exclude_encodings=None, + element_classes=None, **kwargs): + """Constructor. + + :param markup: A string or a file-like object representing + markup to be parsed. + + :param features: Desirable features of the parser to be + used. This may be the name of a specific parser ("lxml", + "lxml-xml", "html.parser", or "html5lib") or it may be the + type of markup to be used ("html", "html5", "xml"). It's + recommended that you name a specific parser, so that + Beautiful Soup gives you the same results across platforms + and virtual environments. + + :param builder: A TreeBuilder subclass to instantiate (or + instance to use) instead of looking one up based on + `features`. You only need to use this if you've implemented a + custom TreeBuilder. + + :param parse_only: A SoupStrainer. Only parts of the document + matching the SoupStrainer will be considered. This is useful + when parsing part of a document that would otherwise be too + large to fit into memory. + + :param from_encoding: A string indicating the encoding of the + document to be parsed. Pass this in if Beautiful Soup is + guessing wrongly about the document's encoding. + + :param exclude_encodings: A list of strings indicating + encodings known to be wrong. Pass this in if you don't know + the document's encoding but you know Beautiful Soup's guess is + wrong. + + :param element_classes: A dictionary mapping BeautifulSoup + classes like Tag and NavigableString, to other classes you'd + like to be instantiated instead as the parse tree is + built. This is useful for subclassing Tag or NavigableString + to modify default behavior. 
+ + :param kwargs: For backwards compatibility purposes, the + constructor accepts certain keyword arguments used in + Beautiful Soup 3. None of these arguments do anything in + Beautiful Soup 4; they will result in a warning and then be + ignored. + + Apart from this, any keyword arguments passed into the + BeautifulSoup constructor are propagated to the TreeBuilder + constructor. This makes it possible to configure a + TreeBuilder by passing in arguments, not just by saying which + one to use. + """ + if 'convertEntities' in kwargs: + del kwargs['convertEntities'] + warnings.warn( + "BS4 does not respect the convertEntities argument to the " + "BeautifulSoup constructor. Entities are always converted " + "to Unicode characters.") + + if 'markupMassage' in kwargs: + del kwargs['markupMassage'] + warnings.warn( + "BS4 does not respect the markupMassage argument to the " + "BeautifulSoup constructor. The tree builder is responsible " + "for any necessary markup massage.") + + if 'smartQuotesTo' in kwargs: + del kwargs['smartQuotesTo'] + warnings.warn( + "BS4 does not respect the smartQuotesTo argument to the " + "BeautifulSoup constructor. Smart quotes are always converted " + "to Unicode characters.") + + if 'selfClosingTags' in kwargs: + del kwargs['selfClosingTags'] + warnings.warn( + "BS4 does not respect the selfClosingTags argument to the " + "BeautifulSoup constructor. The tree builder is responsible " + "for understanding self-closing tags.") + + if 'isHTML' in kwargs: + del kwargs['isHTML'] + warnings.warn( + "BS4 does not respect the isHTML argument to the " + "BeautifulSoup constructor. Suggest you use " + "features='lxml' for HTML and features='lxml-xml' for " + "XML.") + + def deprecated_argument(old_name, new_name): + if old_name in kwargs: + warnings.warn( + 'The "%s" argument to the BeautifulSoup constructor ' + 'has been renamed to "%s."' % (old_name, new_name), + DeprecationWarning, stacklevel=3 + ) + return kwargs.pop(old_name) + return None + + parse_only = parse_only or deprecated_argument( + "parseOnlyThese", "parse_only") + + from_encoding = from_encoding or deprecated_argument( + "fromEncoding", "from_encoding") + + if from_encoding and isinstance(markup, str): + warnings.warn("You provided Unicode markup but also provided a value for from_encoding. Your from_encoding will be ignored.") + from_encoding = None + + self.element_classes = element_classes or dict() + + # We need this information to track whether or not the builder + # was specified well enough that we can omit the 'you need to + # specify a parser' warning. + original_builder = builder + original_features = features + + if isinstance(builder, type): + # A builder class was passed in; it needs to be instantiated. + builder_class = builder + builder = None + elif builder is None: + if isinstance(features, str): + features = [features] + if features is None or len(features) == 0: + features = self.DEFAULT_BUILDER_FEATURES + builder_class = builder_registry.lookup(*features) + if builder_class is None: + raise FeatureNotFound( + "Couldn't find a tree builder with the features you " + "requested: %s. Do you need to install a parser library?" + % ",".join(features)) + + # At this point either we have a TreeBuilder instance in + # builder, or we have a builder_class that we can instantiate + # with the remaining **kwargs. 
+ if builder is None: + builder = builder_class(**kwargs) + if not original_builder and not ( + original_features == builder.NAME or + original_features in builder.ALTERNATE_NAMES + ) and markup: + # The user did not tell us which TreeBuilder to use, + # and we had to guess. Issue a warning. + if builder.is_xml: + markup_type = "XML" + else: + markup_type = "HTML" + + # This code adapted from warnings.py so that we get the same line + # of code as our warnings.warn() call gets, even if the answer is wrong + # (as it may be in a multithreading situation). + caller = None + try: + caller = sys._getframe(1) + except ValueError: + pass + if caller: + globals = caller.f_globals + line_number = caller.f_lineno + else: + globals = sys.__dict__ + line_number= 1 + filename = globals.get('__file__') + if filename: + fnl = filename.lower() + if fnl.endswith((".pyc", ".pyo")): + filename = filename[:-1] + if filename: + # If there is no filename at all, the user is most likely in a REPL, + # and the warning is not necessary. + values = dict( + filename=filename, + line_number=line_number, + parser=builder.NAME, + markup_type=markup_type + ) + warnings.warn( + self.NO_PARSER_SPECIFIED_WARNING % values, + GuessedAtParserWarning, stacklevel=2 + ) + else: + if kwargs: + warnings.warn("Keyword arguments to the BeautifulSoup constructor will be ignored. These would normally be passed into the TreeBuilder constructor, but a TreeBuilder instance was passed in as `builder`.") + + self.builder = builder + self.is_xml = builder.is_xml + self.known_xml = self.is_xml + self._namespaces = dict() + self.parse_only = parse_only + + if hasattr(markup, 'read'): # It's a file-type object. + markup = markup.read() + elif len(markup) <= 256 and ( + (isinstance(markup, bytes) and not b'<' in markup) + or (isinstance(markup, str) and not '<' in markup) + ): + # Issue warnings for a couple beginner problems + # involving passing non-markup to Beautiful Soup. + # Beautiful Soup will still parse the input as markup, + # since that is sometimes the intended behavior. + if not self._markup_is_url(markup): + self._markup_resembles_filename(markup) + + rejections = [] + success = False + for (self.markup, self.original_encoding, self.declared_html_encoding, + self.contains_replacement_characters) in ( + self.builder.prepare_markup( + markup, from_encoding, exclude_encodings=exclude_encodings)): + self.reset() + self.builder.initialize_soup(self) + try: + self._feed() + success = True + break + except ParserRejectedMarkup as e: + rejections.append(e) + pass + + if not success: + other_exceptions = [str(e) for e in rejections] + raise ParserRejectedMarkup( + "The markup you provided was rejected by the parser. Trying a different parser or a different encoding may help.\n\nOriginal exception(s) from parser:\n " + "\n ".join(other_exceptions) + ) + + # Clear out the markup and remove the builder's circular + # reference to this object. + self.markup = None + self.builder.soup = None + + def _clone(self): + """Create a new BeautifulSoup object with the same TreeBuilder, + but not associated with any markup. + + This is the first step of the deepcopy process. + """ + clone = type(self)("", None, self.builder) + + # Keep track of the encoding of the original document, + # since we won't be parsing it again. + clone.original_encoding = self.original_encoding + return clone + + def __getstate__(self): + # Frequently a tree builder can't be pickled. 
+ d = dict(self.__dict__) + if 'builder' in d and d['builder'] is not None and not self.builder.picklable: + d['builder'] = type(self.builder) + # Store the contents as a Unicode string. + d['contents'] = [] + d['markup'] = self.decode() + + # If _most_recent_element is present, it's a Tag object left + # over from initial parse. It might not be picklable and we + # don't need it. + if '_most_recent_element' in d: + del d['_most_recent_element'] + return d + + def __setstate__(self, state): + # If necessary, restore the TreeBuilder by looking it up. + self.__dict__ = state + if isinstance(self.builder, type): + self.builder = self.builder() + elif not self.builder: + # We don't know which builder was used to build this + # parse tree, so use a default we know is always available. + self.builder = HTMLParserTreeBuilder() + self.builder.soup = self + self.reset() + self._feed() + return state + + + @classmethod + def _decode_markup(cls, markup): + """Ensure `markup` is a Unicode string, so it's safe to send into warnings.warn. + + TODO: warnings.warn had this problem back in 2010 but it might not + anymore. + """ + if isinstance(markup, bytes): + decoded = markup.decode('utf-8', 'replace') + else: + decoded = markup + return decoded + + @classmethod + def _markup_is_url(cls, markup): + """Error-handling method to issue a warning if incoming markup looks + like a URL. + + :param markup: A string. + :return: Whether or not the markup resembles a URL + closely enough to justify a warning. + """ + if isinstance(markup, bytes): + space = b' ' + cant_start_with = (b"http:", b"https:") + elif isinstance(markup, str): + space = ' ' + cant_start_with = ("http:", "https:") + else: + return False + + if any(markup.startswith(prefix) for prefix in cant_start_with): + if space not in markup: + warnings.warn( + 'The input looks more like a URL than markup. You may want to use' + ' an HTTP client like requests to get the document behind' + ' the URL, and feed that document to Beautiful Soup.', + MarkupResemblesLocatorWarning, + stacklevel=3 + ) + return True + return False + + @classmethod + def _markup_resembles_filename(cls, markup): + """Error-handling method to issue a warning if incoming markup + resembles a filename. + + :param markup: A bytestring or string. + :return: Whether or not the markup resembles a filename + closely enough to justify a warning. + """ + path_characters = '/\\' + extensions = ['.html', '.htm', '.xml', '.xhtml', '.txt'] + if isinstance(markup, bytes): + path_characters = path_characters.encode("utf8") + extensions = [x.encode('utf8') for x in extensions] + filelike = False + if any(x in markup for x in path_characters): + filelike = True + else: + lower = markup.lower() + if any(lower.endswith(ext) for ext in extensions): + filelike = True + if filelike: + warnings.warn( + 'The input looks more like a filename than markup. You may' + ' want to open this file and pass the filehandle into' + ' Beautiful Soup.', + MarkupResemblesLocatorWarning, stacklevel=3 + ) + return True + return False + + def _feed(self): + """Internal method that parses previously set markup, creating a large + number of Tag and NavigableString objects. + """ + # Convert the document to Unicode. + self.builder.reset() + + self.builder.feed(self.markup) + # Close out any unfinished strings and close all the open tags. + self.endData() + while self.currentTag.name != self.ROOT_TAG_NAME: + self.popTag() + + def reset(self): + """Reset this object to a state as though it had never parsed any + markup.
+ """ + Tag.__init__(self, self, self.builder, self.ROOT_TAG_NAME) + self.hidden = 1 + self.builder.reset() + self.current_data = [] + self.currentTag = None + self.tagStack = [] + self.open_tag_counter = Counter() + self.preserve_whitespace_tag_stack = [] + self.string_container_stack = [] + self._most_recent_element = None + self.pushTag(self) + + def new_tag(self, name, namespace=None, nsprefix=None, attrs={}, + sourceline=None, sourcepos=None, **kwattrs): + """Create a new Tag associated with this BeautifulSoup object. + + :param name: The name of the new Tag. + :param namespace: The URI of the new Tag's XML namespace, if any. + :param prefix: The prefix for the new Tag's XML namespace, if any. + :param attrs: A dictionary of this Tag's attribute values; can + be used instead of `kwattrs` for attributes like 'class' + that are reserved words in Python. + :param sourceline: The line number where this tag was + (purportedly) found in its source document. + :param sourcepos: The character position within `sourceline` where this + tag was (purportedly) found. + :param kwattrs: Keyword arguments for the new Tag's attribute values. + + """ + kwattrs.update(attrs) + return self.element_classes.get(Tag, Tag)( + None, self.builder, name, namespace, nsprefix, kwattrs, + sourceline=sourceline, sourcepos=sourcepos + ) + + def string_container(self, base_class=None): + container = base_class or NavigableString + + # There may be a general override of NavigableString. + container = self.element_classes.get( + container, container + ) + + # On top of that, we may be inside a tag that needs a special + # container class. + if self.string_container_stack and container is NavigableString: + container = self.builder.string_containers.get( + self.string_container_stack[-1].name, container + ) + return container + + def new_string(self, s, subclass=None): + """Create a new NavigableString associated with this BeautifulSoup + object. + """ + container = self.string_container(subclass) + return container(s) + + def insert_before(self, *args): + """This method is part of the PageElement API, but `BeautifulSoup` doesn't implement + it because there is nothing before or after it in the parse tree. + """ + raise NotImplementedError("BeautifulSoup objects don't support insert_before().") + + def insert_after(self, *args): + """This method is part of the PageElement API, but `BeautifulSoup` doesn't implement + it because there is nothing before or after it in the parse tree. 
+ """ + raise NotImplementedError("BeautifulSoup objects don't support insert_after().") + + def popTag(self): + """Internal method called by _popToTag when a tag is closed.""" + tag = self.tagStack.pop() + if tag.name in self.open_tag_counter: + self.open_tag_counter[tag.name] -= 1 + if self.preserve_whitespace_tag_stack and tag == self.preserve_whitespace_tag_stack[-1]: + self.preserve_whitespace_tag_stack.pop() + if self.string_container_stack and tag == self.string_container_stack[-1]: + self.string_container_stack.pop() + #print("Pop", tag.name) + if self.tagStack: + self.currentTag = self.tagStack[-1] + return self.currentTag + + def pushTag(self, tag): + """Internal method called by handle_starttag when a tag is opened.""" + #print("Push", tag.name) + if self.currentTag is not None: + self.currentTag.contents.append(tag) + self.tagStack.append(tag) + self.currentTag = self.tagStack[-1] + if tag.name != self.ROOT_TAG_NAME: + self.open_tag_counter[tag.name] += 1 + if tag.name in self.builder.preserve_whitespace_tags: + self.preserve_whitespace_tag_stack.append(tag) + if tag.name in self.builder.string_containers: + self.string_container_stack.append(tag) + + def endData(self, containerClass=None): + """Method called by the TreeBuilder when the end of a data segment + occurs. + """ + if self.current_data: + current_data = ''.join(self.current_data) + # If whitespace is not preserved, and this string contains + # nothing but ASCII spaces, replace it with a single space + # or newline. + if not self.preserve_whitespace_tag_stack: + strippable = True + for i in current_data: + if i not in self.ASCII_SPACES: + strippable = False + break + if strippable: + if '\n' in current_data: + current_data = '\n' + else: + current_data = ' ' + + # Reset the data collector. + self.current_data = [] + + # Should we add this string to the tree at all? + if self.parse_only and len(self.tagStack) <= 1 and \ + (not self.parse_only.text or \ + not self.parse_only.search(current_data)): + return + + containerClass = self.string_container(containerClass) + o = containerClass(current_data) + self.object_was_parsed(o) + + def object_was_parsed(self, o, parent=None, most_recent_element=None): + """Method called by the TreeBuilder to integrate an object into the parse tree.""" + if parent is None: + parent = self.currentTag + if most_recent_element is not None: + previous_element = most_recent_element + else: + previous_element = self._most_recent_element + + next_element = previous_sibling = next_sibling = None + if isinstance(o, Tag): + next_element = o.next_element + next_sibling = o.next_sibling + previous_sibling = o.previous_sibling + if previous_element is None: + previous_element = o.previous_element + + fix = parent.next_element is not None + + o.setup(parent, previous_element, next_element, previous_sibling, next_sibling) + + self._most_recent_element = o + parent.contents.append(o) + + # Check if we are inserting into an already parsed node. + if fix: + self._linkage_fixer(parent) + + def _linkage_fixer(self, el): + """Make sure linkage of this fragment is sound.""" + + first = el.contents[0] + child = el.contents[-1] + descendant = child + + if child is first and el.parent is not None: + # Parent should be linked to first child + el.next_element = child + # We are no longer linked to whatever this element is + prev_el = child.previous_element + if prev_el is not None and prev_el is not el: + prev_el.next_element = None + # First child should be linked to the parent, and no previous siblings. 
+ child.previous_element = el + child.previous_sibling = None + + # We have no sibling as we've been appended as the last. + child.next_sibling = None + + # This index is a tag, dig deeper for a "last descendant" + if isinstance(child, Tag) and child.contents: + descendant = child._last_descendant(False) + + # As the final step, link last descendant. It should be linked + # to the parent's next sibling (if found), else walk up the chain + # and find a parent with a sibling. It should have no next sibling. + descendant.next_element = None + descendant.next_sibling = None + target = el + while True: + if target is None: + break + elif target.next_sibling is not None: + descendant.next_element = target.next_sibling + target.next_sibling.previous_element = child + break + target = target.parent + + def _popToTag(self, name, nsprefix=None, inclusivePop=True): + """Pops the tag stack up to and including the most recent + instance of the given tag. + + If there are no open tags with the given name, nothing will be + popped. + + :param name: Pop up to the most recent tag with this name. + :param nsprefix: The namespace prefix that goes with `name`. + :param inclusivePop: If this is false, pops the tag stack up + to but *not* including the most recent instance of the + given tag. + + """ + #print("Popping to %s" % name) + if name == self.ROOT_TAG_NAME: + # The BeautifulSoup object itself can never be popped. + return + + most_recently_popped = None + + stack_size = len(self.tagStack) + for i in range(stack_size - 1, 0, -1): + if not self.open_tag_counter.get(name): + break + t = self.tagStack[i] + if (name == t.name and nsprefix == t.prefix): + if inclusivePop: + most_recently_popped = self.popTag() + break + most_recently_popped = self.popTag() + + return most_recently_popped + + def handle_starttag(self, name, namespace, nsprefix, attrs, sourceline=None, + sourcepos=None, namespaces=None): + """Called by the tree builder when a new tag is encountered. + + :param name: Name of the tag. + :param nsprefix: Namespace prefix for the tag. + :param attrs: A dictionary of attribute values. + :param sourceline: The line number where this tag was found in its + source document. + :param sourcepos: The character position within `sourceline` where this + tag was found. + :param namespaces: A dictionary of all namespace prefix mappings + currently in scope in the document. + + If this method returns None, the tag was rejected by an active + SoupStrainer. You should proceed as if the tag had not occurred + in the document. For instance, if this was a self-closing tag, + don't call handle_endtag. + """ + # print("Start tag %s: %s" % (name, attrs)) + self.endData() + + if (self.parse_only and len(self.tagStack) <= 1 + and (self.parse_only.text + or not self.parse_only.search_tag(name, attrs))): + return None + + tag = self.element_classes.get(Tag, Tag)( + self, self.builder, name, namespace, nsprefix, attrs, + self.currentTag, self._most_recent_element, + sourceline=sourceline, sourcepos=sourcepos, + namespaces=namespaces + ) + if tag is None: + return tag + if self._most_recent_element is not None: + self._most_recent_element.next_element = tag + self._most_recent_element = tag + self.pushTag(tag) + return tag + + def handle_endtag(self, name, nsprefix=None): + """Called by the tree builder when an ending tag is encountered. + + :param name: Name of the tag. + :param nsprefix: Namespace prefix for the tag.
+ """ + #print("End tag: " + name) + self.endData() + self._popToTag(name, nsprefix) + + def handle_data(self, data): + """Called by the tree builder when a chunk of textual data is encountered.""" + self.current_data.append(data) + + def decode(self, pretty_print=False, + eventual_encoding=DEFAULT_OUTPUT_ENCODING, + formatter="minimal", iterator=None): + """Returns a string or Unicode representation of the parse tree + as an HTML or XML document. + + :param pretty_print: If this is True, indentation will be used to + make the document more readable. + :param eventual_encoding: The encoding of the final document. + If this is None, the document will be a Unicode string. + """ + if self.is_xml: + # Print the XML declaration + encoding_part = '' + if eventual_encoding in PYTHON_SPECIFIC_ENCODINGS: + # This is a special Python encoding; it can't actually + # go into an XML document because it means nothing + # outside of Python. + eventual_encoding = None + if eventual_encoding != None: + encoding_part = ' encoding="%s"' % eventual_encoding + prefix = '<?xml version="1.0"%s?>\n' % encoding_part + else: + prefix = '' + if not pretty_print: + indent_level = None + else: + indent_level = 0 + return prefix + super(BeautifulSoup, self).decode( + indent_level, eventual_encoding, formatter, iterator) + +# Aliases to make it easier to get started quickly, e.g. 'from bs4 import _soup' +_s = BeautifulSoup +_soup = BeautifulSoup + +class BeautifulStoneSoup(BeautifulSoup): + """Deprecated interface to an XML parser.""" + + def __init__(self, *args, **kwargs): + kwargs['features'] = 'xml' + warnings.warn( + 'The BeautifulStoneSoup class is deprecated. Instead of using ' + 'it, pass features="xml" into the BeautifulSoup constructor.', + DeprecationWarning, stacklevel=2 + ) + super(BeautifulStoneSoup, self).__init__(*args, **kwargs) + + +class StopParsing(Exception): + """Exception raised by a TreeBuilder if it's unable to continue parsing.""" + pass + +class FeatureNotFound(ValueError): + """Exception raised by the BeautifulSoup constructor if no parser with the + requested features is found. + """ + pass + + +#If this file is run as a script, act as an HTML pretty-printer. +if __name__ == '__main__': + import sys + soup = BeautifulSoup(sys.stdin) + print((soup.prettify())) diff --git a/python/lib/python3.11/site-packages/bs4/builder/__init__.py b/python/lib/python3.11/site-packages/bs4/builder/__init__.py new file mode 100644 index 0000000..2e39745 --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/builder/__init__.py @@ -0,0 +1,631 @@ +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +from collections import defaultdict +import itertools +import re +import warnings +import sys +from bs4.element import ( + CharsetMetaAttributeValue, + ContentMetaAttributeValue, + RubyParenthesisString, + RubyTextString, + Stylesheet, + Script, + TemplateString, + nonwhitespace_re +) + +__all__ = [ + 'HTMLTreeBuilder', + 'SAXTreeBuilder', + 'TreeBuilder', + 'TreeBuilderRegistry', + ] + +# Some useful features for a TreeBuilder to have. +FAST = 'fast' +PERMISSIVE = 'permissive' +STRICT = 'strict' +XML = 'xml' +HTML = 'html' +HTML_5 = 'html5' + +class XMLParsedAsHTMLWarning(UserWarning): + """The warning issued when an HTML parser is used to parse + XML that is not XHTML. + """ + MESSAGE = """It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. 
If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor.""" + + +class TreeBuilderRegistry(object): + """A way of looking up TreeBuilder subclasses by their name or by desired + features. + """ + + def __init__(self): + self.builders_for_feature = defaultdict(list) + self.builders = [] + + def register(self, treebuilder_class): + """Register a treebuilder based on its advertised features. + + :param treebuilder_class: A subclass of Treebuilder. its .features + attribute should list its features. + """ + for feature in treebuilder_class.features: + self.builders_for_feature[feature].insert(0, treebuilder_class) + self.builders.insert(0, treebuilder_class) + + def lookup(self, *features): + """Look up a TreeBuilder subclass with the desired features. + + :param features: A list of features to look for. If none are + provided, the most recently registered TreeBuilder subclass + will be used. + :return: A TreeBuilder subclass, or None if there's no + registered subclass with all the requested features. + """ + if len(self.builders) == 0: + # There are no builders at all. + return None + + if len(features) == 0: + # They didn't ask for any features. Give them the most + # recently registered builder. + return self.builders[0] + + # Go down the list of features in order, and eliminate any builders + # that don't match every feature. + features = list(features) + features.reverse() + candidates = None + candidate_set = None + while len(features) > 0: + feature = features.pop() + we_have_the_feature = self.builders_for_feature.get(feature, []) + if len(we_have_the_feature) > 0: + if candidates is None: + candidates = we_have_the_feature + candidate_set = set(candidates) + else: + # Eliminate any candidates that don't have this feature. + candidate_set = candidate_set.intersection( + set(we_have_the_feature)) + + # The only valid candidates are the ones in candidate_set. + # Go through the original list of candidates and pick the first one + # that's in candidate_set. + if candidate_set is None: + return None + for candidate in candidates: + if candidate in candidate_set: + return candidate + return None + +# The BeautifulSoup class will take feature lists from developers and use them +# to look up builders in this registry. +builder_registry = TreeBuilderRegistry() + +class TreeBuilder(object): + """Turn a textual document into a Beautiful Soup object tree.""" + + NAME = "[Unknown tree builder]" + ALTERNATE_NAMES = [] + features = [] + + is_xml = False + picklable = False + empty_element_tags = None # A tag will be considered an empty-element + # tag when and only when it has no contents. + + # A value for these tag/attribute combinations is a space- or + # comma-separated list of CDATA, rather than a single CDATA. + DEFAULT_CDATA_LIST_ATTRIBUTES = defaultdict(list) + + # Whitespace should be preserved inside these tags. + DEFAULT_PRESERVE_WHITESPACE_TAGS = set() + + # The textual contents of tags with these names should be + # instantiated with some class other than NavigableString. + DEFAULT_STRING_CONTAINERS = {} + + USE_DEFAULT = object() + + # Most parsers don't keep track of line numbers. + TRACKS_LINE_NUMBERS = False + + def __init__(self, multi_valued_attributes=USE_DEFAULT, + preserve_whitespace_tags=USE_DEFAULT, + store_line_numbers=USE_DEFAULT, + string_containers=USE_DEFAULT, + ): + """Constructor. 
+ + :param multi_valued_attributes: If this is set to None, the + TreeBuilder will not turn any values for attributes like + 'class' into lists. Setting this to a dictionary will + customize this behavior; look at DEFAULT_CDATA_LIST_ATTRIBUTES + for an example. + + Internally, these are called "CDATA list attributes", but that + probably doesn't make sense to an end-user, so the argument name + is `multi_valued_attributes`. + + :param preserve_whitespace_tags: A list of tags to treat + the way <pre> tags are treated in HTML. Tags in this list + are immune from pretty-printing; their contents will always be + output as-is. + + :param string_containers: A dictionary mapping tag names to + the classes that should be instantiated to contain the textual + contents of those tags. The default is to use NavigableString + for every tag, no matter what the name. You can override the + default by changing DEFAULT_STRING_CONTAINERS. + + :param store_line_numbers: If the parser keeps track of the + line numbers and positions of the original markup, that + information will, by default, be stored in each corresponding + `Tag` object. You can turn this off by passing + store_line_numbers=False. If the parser you're using doesn't + keep track of this information, then setting store_line_numbers=True + will do nothing. + """ + self.soup = None + if multi_valued_attributes is self.USE_DEFAULT: + multi_valued_attributes = self.DEFAULT_CDATA_LIST_ATTRIBUTES + self.cdata_list_attributes = multi_valued_attributes + if preserve_whitespace_tags is self.USE_DEFAULT: + preserve_whitespace_tags = self.DEFAULT_PRESERVE_WHITESPACE_TAGS + self.preserve_whitespace_tags = preserve_whitespace_tags + if store_line_numbers == self.USE_DEFAULT: + store_line_numbers = self.TRACKS_LINE_NUMBERS + self.store_line_numbers = store_line_numbers + if string_containers == self.USE_DEFAULT: + string_containers = self.DEFAULT_STRING_CONTAINERS + self.string_containers = string_containers + + def initialize_soup(self, soup): + """The BeautifulSoup object has been initialized and is now + being associated with the TreeBuilder. + + :param soup: A BeautifulSoup object. + """ + self.soup = soup + + def reset(self): + """Do any work necessary to reset the underlying parser + for a new document. + + By default, this does nothing. + """ + pass + + def can_be_empty_element(self, tag_name): + """Might a tag with this name be an empty-element tag? + + The final markup may or may not actually present this tag as + self-closing. + + For instance: an HTMLBuilder does not consider a <p> tag to be + an empty-element tag (it's not in + HTMLBuilder.empty_element_tags). This means an empty <p> tag + will be presented as "<p></p>", not "<p/>" or "<p>". + + The default implementation has no opinion about which tags are + empty-element tags, so a tag will be presented as an + empty-element tag if and only if it has no children. + "<foo></foo>" will become "<foo/>", and "<foo>bar</foo>" will + be left alone. + + :param tag_name: The name of a markup tag. + """ + if self.empty_element_tags is None: + return True + return tag_name in self.empty_element_tags + + def feed(self, markup): + """Run some incoming markup through some parsing process, + populating the `BeautifulSoup` object in self.soup. + + This method is not implemented in TreeBuilder; it must be + implemented in subclasses. + + :return: None. 
+ """ + raise NotImplementedError() + + def prepare_markup(self, markup, user_specified_encoding=None, + document_declared_encoding=None, exclude_encodings=None): + """Run any preliminary steps necessary to make incoming markup + acceptable to the parser. + + :param markup: Some markup -- probably a bytestring. + :param user_specified_encoding: The user asked to try this encoding. + :param document_declared_encoding: The markup itself claims to be + in this encoding. NOTE: This argument is not used by the + calling code and can probably be removed. + :param exclude_encodings: The user asked _not_ to try any of + these encodings. + + :yield: A series of 4-tuples: + (markup, encoding, declared encoding, + has undergone character replacement) + + Each 4-tuple represents a strategy for converting the + document to Unicode and parsing it. Each strategy will be tried + in turn. + + By default, the only strategy is to parse the markup + as-is. See `LXMLTreeBuilderForXML` and + `HTMLParserTreeBuilder` for implementations that take into + account the quirks of particular parsers. + """ + yield markup, None, None, False + + def test_fragment_to_document(self, fragment): + """Wrap an HTML fragment to make it look like a document. + + Different parsers do this differently. For instance, lxml + introduces an empty <head> tag, and html5lib + doesn't. Abstracting this away lets us write simple tests + which run HTML fragments through the parser and compare the + results against other HTML fragments. + + This method should not be used outside of tests. + + :param fragment: A string -- fragment of HTML. + :return: A string -- a full HTML document. + """ + return fragment + + def set_up_substitutions(self, tag): + """Set up any substitutions that will need to be performed on + a `Tag` when it's output as a string. + + By default, this does nothing. See `HTMLTreeBuilder` for a + case where this is used. + + :param tag: A `Tag` + :return: Whether or not a substitution was performed. + """ + return False + + def _replace_cdata_list_attribute_values(self, tag_name, attrs): + """When an attribute value is associated with a tag that can + have multiple values for that attribute, convert the string + value to a list of strings. + + Basically, replaces class="foo bar" with class=["foo", "bar"] + + NOTE: This method modifies its input in place. + + :param tag_name: The name of a tag. + :param attrs: A dictionary containing the tag's attributes. + Any appropriate attribute values will be modified in place. + """ + if not attrs: + return attrs + if self.cdata_list_attributes: + universal = self.cdata_list_attributes.get('*', []) + tag_specific = self.cdata_list_attributes.get( + tag_name.lower(), None) + for attr in list(attrs.keys()): + if attr in universal or (tag_specific and attr in tag_specific): + # We have a "class"-type attribute whose string + # value is a whitespace-separated list of + # values. Split it into a list. + value = attrs[attr] + if isinstance(value, str): + values = nonwhitespace_re.findall(value) + else: + # html5lib sometimes calls setAttributes twice + # for the same tag when rearranging the parse + # tree. On the second call the attribute value + # here is already a list. If this happens, + # leave the value alone rather than trying to + # split it again. + values = value + attrs[attr] = values + return attrs + +class SAXTreeBuilder(TreeBuilder): + """A Beautiful Soup treebuilder that listens for SAX events. 
+ + This is not currently used for anything, but it demonstrates + how a simple TreeBuilder would work. + """ + + def feed(self, markup): + raise NotImplementedError() + + def close(self): + pass + + def startElement(self, name, attrs): + attrs = dict((key[1], value) for key, value in list(attrs.items())) + #print("Start %s, %r" % (name, attrs)) + self.soup.handle_starttag(name, attrs) + + def endElement(self, name): + #print("End %s" % name) + self.soup.handle_endtag(name) + + def startElementNS(self, nsTuple, nodeName, attrs): + # Throw away (ns, nodeName) for now. + self.startElement(nodeName, attrs) + + def endElementNS(self, nsTuple, nodeName): + # Throw away (ns, nodeName) for now. + self.endElement(nodeName) + #handler.endElementNS((ns, node.nodeName), node.nodeName) + + def startPrefixMapping(self, prefix, nodeValue): + # Ignore the prefix for now. + pass + + def endPrefixMapping(self, prefix): + # Ignore the prefix for now. + # handler.endPrefixMapping(prefix) + pass + + def characters(self, content): + self.soup.handle_data(content) + + def startDocument(self): + pass + + def endDocument(self): + pass + + +class HTMLTreeBuilder(TreeBuilder): + """This TreeBuilder knows facts about HTML. + + Such as which tags are empty-element tags. + """ + + empty_element_tags = set([ + # These are from HTML5. + 'area', 'base', 'br', 'col', 'embed', 'hr', 'img', 'input', 'keygen', 'link', 'menuitem', 'meta', 'param', 'source', 'track', 'wbr', + + # These are from earlier versions of HTML and are removed in HTML5. + 'basefont', 'bgsound', 'command', 'frame', 'image', 'isindex', 'nextid', 'spacer' + ]) + + # The HTML standard defines these as block-level elements. Beautiful + # Soup does not treat these elements differently from other elements, + # but it may do so eventually, and this information is available if + # you need to use it. + block_elements = set(["address", "article", "aside", "blockquote", "canvas", "dd", "div", "dl", "dt", "fieldset", "figcaption", "figure", "footer", "form", "h1", "h2", "h3", "h4", "h5", "h6", "header", "hr", "li", "main", "nav", "noscript", "ol", "output", "p", "pre", "section", "table", "tfoot", "ul", "video"]) + + # These HTML tags need special treatment so they can be + # represented by a string class other than NavigableString. + # + # For some of these tags, it's because the HTML standard defines + # an unusual content model for them. I made this list by going + # through the HTML spec + # (https://html.spec.whatwg.org/#metadata-content) and looking for + # "metadata content" elements that can contain strings. + # + # The Ruby tags (<rt> and <rp>) are here despite being normal + # "phrasing content" tags, because the content they contain is + # qualitatively different from other text in the document, and it + # can be useful to be able to distinguish it. + # + # TODO: Arguably <noscript> could go here but it seems + # qualitatively different from the other tags. + DEFAULT_STRING_CONTAINERS = { + 'rt' : RubyTextString, + 'rp' : RubyParenthesisString, + 'style': Stylesheet, + 'script': Script, + 'template': TemplateString, + } + + # The HTML standard defines these attributes as containing a + # space-separated list of values, not a single value. That is, + # class="foo bar" means that the 'class' attribute has two values, + # 'foo' and 'bar', not the single value 'foo bar'. When we + # encounter one of these attributes, we will parse its value into + # a list of values if possible. Upon output, the list will be + # converted back into a string. 
+ DEFAULT_CDATA_LIST_ATTRIBUTES = { + "*" : ['class', 'accesskey', 'dropzone'], + "a" : ['rel', 'rev'], + "link" : ['rel', 'rev'], + "td" : ["headers"], + "th" : ["headers"], + "form" : ["accept-charset"], + "object" : ["archive"], + + # These are HTML5 specific, as are *.accesskey and *.dropzone above. + "area" : ["rel"], + "icon" : ["sizes"], + "iframe" : ["sandbox"], + "output" : ["for"], + } + + DEFAULT_PRESERVE_WHITESPACE_TAGS = set(['pre', 'textarea']) + + def set_up_substitutions(self, tag): + """Replace the declared encoding in a <meta> tag with a placeholder, + to be substituted when the tag is output to a string. + + An HTML document may come in to Beautiful Soup as one + encoding, but exit in a different encoding, and the <meta> tag + needs to be changed to reflect this. + + :param tag: A `Tag` + :return: Whether or not a substitution was performed. + """ + # We are only interested in <meta> tags + if tag.name != 'meta': + return False + + http_equiv = tag.get('http-equiv') + content = tag.get('content') + charset = tag.get('charset') + + # We are interested in <meta> tags that say what encoding the + # document was originally in. This means HTML 5-style <meta> + # tags that provide the "charset" attribute. It also means + # HTML 4-style <meta> tags that provide the "content" + # attribute and have "http-equiv" set to "content-type". + # + # In both cases we will replace the value of the appropriate + # attribute with a stand-in object that can take on any + # encoding. + meta_encoding = None + if charset is not None: + # HTML 5 style: + # <meta charset="utf8"> + meta_encoding = charset + tag['charset'] = CharsetMetaAttributeValue(charset) + + elif (content is not None and http_equiv is not None + and http_equiv.lower() == 'content-type'): + # HTML 4 style: + # <meta http-equiv="content-type" content="text/html; charset=utf8"> + tag['content'] = ContentMetaAttributeValue(content) + + return (meta_encoding is not None) + +class DetectsXMLParsedAsHTML(object): + """A mixin class for any class (a TreeBuilder, or some class used by a + TreeBuilder) that's in a position to detect whether an XML + document is being incorrectly parsed as HTML, and issue an + appropriate warning. + + This requires being able to observe an incoming processing + instruction that might be an XML declaration, and also able to + observe tags as they're opened. If you can't do that for a given + TreeBuilder, there's a less reliable implementation based on + examining the raw markup. + """ + + # Regular expression for seeing if markup has an <html> tag. + LOOKS_LIKE_HTML = re.compile("<[^ +]html", re.I) + LOOKS_LIKE_HTML_B = re.compile(b"<[^ +]html", re.I) + + XML_PREFIX = '<?xml' + XML_PREFIX_B = b'<?xml' + + @classmethod + def warn_if_markup_looks_like_xml(cls, markup): + """Perform a check on some markup to see if it looks like XML + that's not XHTML. If so, issue a warning. + + This is much less reliable than doing the check while parsing, + but some of the tree builders can't do that. + + :return: True if the markup looks like non-XHTML XML, False + otherwise.
+ """ + if isinstance(markup, bytes): + prefix = cls.XML_PREFIX_B + looks_like_html = cls.LOOKS_LIKE_HTML_B + else: + prefix = cls.XML_PREFIX + looks_like_html = cls.LOOKS_LIKE_HTML + + if (markup is not None + and markup.startswith(prefix) + and not looks_like_html.search(markup[:500]) + ): + cls._warn() + return True + return False + + @classmethod + def _warn(cls): + """Issue a warning about XML being parsed as HTML.""" + warnings.warn( + XMLParsedAsHTMLWarning.MESSAGE, XMLParsedAsHTMLWarning + ) + + def _initialize_xml_detector(self): + """Call this method before parsing a document.""" + self._first_processing_instruction = None + self._root_tag = None + + def _document_might_be_xml(self, processing_instruction): + """Call this method when encountering an XML declaration, or a + "processing instruction" that might be an XML declaration. + """ + if (self._first_processing_instruction is not None + or self._root_tag is not None): + # The document has already started. Don't bother checking + # anymore. + return + + self._first_processing_instruction = processing_instruction + + # We won't know until we encounter the first tag whether or + # not this is actually a problem. + + def _root_tag_encountered(self, name): + """Call this when you encounter the document's root tag. + + This is where we actually check whether an XML document is + being incorrectly parsed as HTML, and issue the warning. + """ + if self._root_tag is not None: + # This method was incorrectly called multiple times. Do + # nothing. + return + + self._root_tag = name + if (name != 'html' and self._first_processing_instruction is not None + and self._first_processing_instruction.lower().startswith('xml ')): + # We encountered an XML declaration and then a tag other + # than 'html'. This is a reliable indicator that a + # non-XHTML document is being parsed as XML. + self._warn() + + +def register_treebuilders_from(module): + """Copy TreeBuilders from the given module into this module.""" + this_module = sys.modules[__name__] + for name in module.__all__: + obj = getattr(module, name) + + if issubclass(obj, TreeBuilder): + setattr(this_module, name, obj) + this_module.__all__.append(name) + # Register the builder while we're at it. + this_module.builder_registry.register(obj) + +class ParserRejectedMarkup(Exception): + """An Exception to be raised when the underlying parser simply + refuses to parse the given markup. + """ + def __init__(self, message_or_exception): + """Explain why the parser rejected the given markup, either + with a textual explanation or another exception. + """ + if isinstance(message_or_exception, Exception): + e = message_or_exception + message_or_exception = "%s: %s" % (e.__class__.__name__, str(e)) + super(ParserRejectedMarkup, self).__init__(message_or_exception) + +# Builders are registered in reverse order of priority, so that custom +# builder registrations will take precedence. In general, we want lxml +# to take precedence over html5lib, because it's faster. And we only +# want to use HTMLParser as a last resort. +from . import _htmlparser +register_treebuilders_from(_htmlparser) +try: + from . import _html5lib + register_treebuilders_from(_html5lib) +except ImportError: + # They don't have html5lib installed. + pass +try: + from . import _lxml + register_treebuilders_from(_lxml) +except ImportError: + # They don't have lxml installed. 
+ pass diff --git a/python/lib/python3.11/site-packages/bs4/builder/_html5lib.py b/python/lib/python3.11/site-packages/bs4/builder/_html5lib.py new file mode 100644 index 0000000..dac2173 --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/builder/_html5lib.py @@ -0,0 +1,479 @@ +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +__all__ = [ + 'HTML5TreeBuilder', + ] + +import warnings +import re +from bs4.builder import ( + DetectsXMLParsedAsHTML, + PERMISSIVE, + HTML, + HTML_5, + HTMLTreeBuilder, + ) +from bs4.element import ( + NamespacedAttribute, + nonwhitespace_re, +) +import html5lib +from html5lib.constants import ( + namespaces, + prefixes, + ) +from bs4.element import ( + Comment, + Doctype, + NavigableString, + Tag, + ) + +try: + # Pre-0.99999999 + from html5lib.treebuilders import _base as treebuilder_base + new_html5lib = False +except ImportError as e: + # 0.99999999 and up + from html5lib.treebuilders import base as treebuilder_base + new_html5lib = True + +class HTML5TreeBuilder(HTMLTreeBuilder): + """Use html5lib to build a tree. + + Note that this TreeBuilder does not support some features common + to HTML TreeBuilders. Some of these features could theoretically + be implemented, but at the very least it's quite difficult, + because html5lib moves the parse tree around as it's being built. + + * This TreeBuilder doesn't use different subclasses of NavigableString + based on the name of the tag in which the string was found. + + * You can't use a SoupStrainer to parse only part of a document. + """ + + NAME = "html5lib" + + features = [NAME, PERMISSIVE, HTML_5, HTML] + + # html5lib can tell us which line number and position in the + # original file is the source of an element. + TRACKS_LINE_NUMBERS = True + + def prepare_markup(self, markup, user_specified_encoding, + document_declared_encoding=None, exclude_encodings=None): + # Store the user-specified encoding for use later on. + self.user_specified_encoding = user_specified_encoding + + # document_declared_encoding and exclude_encodings aren't used + # ATM because the html5lib TreeBuilder doesn't use + # UnicodeDammit. + if exclude_encodings: + warnings.warn( + "You provided a value for exclude_encoding, but the html5lib tree builder doesn't support exclude_encoding.", + stacklevel=3 + ) + + # html5lib only parses HTML, so if it's given XML that's worth + # noting. + DetectsXMLParsedAsHTML.warn_if_markup_looks_like_xml(markup) + + yield (markup, None, None, False) + + # These methods are defined by Beautiful Soup. + def feed(self, markup): + if self.soup.parse_only is not None: + warnings.warn( + "You provided a value for parse_only, but the html5lib tree builder doesn't support parse_only. The entire document will be parsed.", + stacklevel=4 + ) + parser = html5lib.HTMLParser(tree=self.create_treebuilder) + self.underlying_builder.parser = parser + extra_kwargs = dict() + if not isinstance(markup, str): + if new_html5lib: + extra_kwargs['override_encoding'] = self.user_specified_encoding + else: + extra_kwargs['encoding'] = self.user_specified_encoding + doc = parser.parse(markup, **extra_kwargs) + + # Set the character encoding detected by the tokenizer. + if isinstance(markup, str): + # We need to special-case this because html5lib sets + # charEncoding to UTF-8 if it gets Unicode input. 
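The special-casing described in the comment above is observable from the public API. A hedged sketch (assuming the html5lib package is installed; exact sniffed encodings may vary):

```python
from bs4 import BeautifulSoup

# Unicode input: html5lib would report utf-8 no matter what, so
# Beautiful Soup records no original encoding at all.
print(BeautifulSoup("<p>hi</p>", "html5lib").original_encoding)   # None

# Byte input: the encoding sniffed by html5lib's tokenizer is kept
# (typically 'windows-1252' for bare ASCII bytes).
print(BeautifulSoup(b"<p>hi</p>", "html5lib").original_encoding)
```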
+ doc.original_encoding = None + else: + original_encoding = parser.tokenizer.stream.charEncoding[0] + if not isinstance(original_encoding, str): + # In 0.99999999 and up, the encoding is an html5lib + # Encoding object. We want to use a string for compatibility + # with other tree builders. + original_encoding = original_encoding.name + doc.original_encoding = original_encoding + self.underlying_builder.parser = None + + def create_treebuilder(self, namespaceHTMLElements): + self.underlying_builder = TreeBuilderForHtml5lib( + namespaceHTMLElements, self.soup, + store_line_numbers=self.store_line_numbers + ) + return self.underlying_builder + + def test_fragment_to_document(self, fragment): + """See `TreeBuilder`.""" + return '<html><head></head><body>%s</body></html>' % fragment + + +class TreeBuilderForHtml5lib(treebuilder_base.TreeBuilder): + + def __init__(self, namespaceHTMLElements, soup=None, + store_line_numbers=True, **kwargs): + if soup: + self.soup = soup + else: + from bs4 import BeautifulSoup + # TODO: Why is the parser 'html.parser' here? To avoid an + # infinite loop? + self.soup = BeautifulSoup( + "", "html.parser", store_line_numbers=store_line_numbers, + **kwargs + ) + # TODO: What are **kwargs exactly? Should they be passed in + # here in addition to/instead of being passed to the BeautifulSoup + # constructor? + super(TreeBuilderForHtml5lib, self).__init__(namespaceHTMLElements) + + # This will be set later to an html5lib.html5parser.HTMLParser + # object, which we can use to track the current line number. + self.parser = None + self.store_line_numbers = store_line_numbers + + def documentClass(self): + self.soup.reset() + return Element(self.soup, self.soup, None) + + def insertDoctype(self, token): + name = token["name"] + publicId = token["publicId"] + systemId = token["systemId"] + + doctype = Doctype.for_name_and_ids(name, publicId, systemId) + self.soup.object_was_parsed(doctype) + + def elementClass(self, name, namespace): + kwargs = {} + if self.parser and self.store_line_numbers: + # This represents the point immediately after the end of the + # tag. We don't know when the tag started, but we do know + # where it ended -- the character just before this one. + sourceline, sourcepos = self.parser.tokenizer.stream.position() + kwargs['sourceline'] = sourceline + kwargs['sourcepos'] = sourcepos-1 + tag = self.soup.new_tag(name, namespace, **kwargs) + + return Element(tag, self.soup, namespace) + + def commentClass(self, data): + return TextNode(Comment(data), self.soup) + + def fragmentClass(self): + from bs4 import BeautifulSoup + # TODO: Why is the parser 'html.parser' here? To avoid an + # infinite loop? + self.soup = BeautifulSoup("", "html.parser") + self.soup.name = "[document_fragment]" + return Element(self.soup, self.soup, None) + + def appendChild(self, node): + # XXX This code is not covered by the BS4 tests. 
+ self.soup.append(node.element) + + def getDocument(self): + return self.soup + + def getFragment(self): + return treebuilder_base.TreeBuilder.getFragment(self).element + + def testSerializer(self, element): + from bs4 import BeautifulSoup + rv = [] + doctype_re = re.compile(r'^(.*?)(?: PUBLIC "(.*?)"(?: "(.*?)")?| SYSTEM "(.*?)")?$') + + def serializeElement(element, indent=0): + if isinstance(element, BeautifulSoup): + pass + if isinstance(element, Doctype): + m = doctype_re.match(element) + if m: + name = m.group(1) + if m.lastindex > 1: + publicId = m.group(2) or "" + systemId = m.group(3) or m.group(4) or "" + rv.append("""|%s<!DOCTYPE %s "%s" "%s">""" % + (' ' * indent, name, publicId, systemId)) + else: + rv.append("|%s<!DOCTYPE %s>" % (' ' * indent, name)) + else: + rv.append("|%s<!DOCTYPE >" % (' ' * indent,)) + elif isinstance(element, Comment): + rv.append("|%s<!-- %s -->" % (' ' * indent, element)) + elif isinstance(element, NavigableString): + rv.append("|%s\"%s\"" % (' ' * indent, element)) + else: + if element.namespace: + name = "%s %s" % (prefixes[element.namespace], + element.name) + else: + name = element.name + rv.append("|%s<%s>" % (' ' * indent, name)) + if element.attrs: + attributes = [] + for name, value in list(element.attrs.items()): + if isinstance(name, NamespacedAttribute): + name = "%s %s" % (prefixes[name.namespace], name.name) + if isinstance(value, list): + value = " ".join(value) + attributes.append((name, value)) + + for name, value in sorted(attributes): + rv.append('|%s%s="%s"' % (' ' * (indent + 2), name, value)) + indent += 2 + for child in element.children: + serializeElement(child, indent) + serializeElement(element, 0) + + return "\n".join(rv) + +class AttrList(object): + def __init__(self, element): + self.element = element + self.attrs = dict(self.element.attrs) + def __iter__(self): + return list(self.attrs.items()).__iter__() + def __setitem__(self, name, value): + # If this attribute is a multi-valued attribute for this element, + # turn its value into a list. + list_attr = self.element.cdata_list_attributes or {} + if (name in list_attr.get('*', []) + or (self.element.name in list_attr + and name in list_attr.get(self.element.name, []))): + # A node that is being cloned may have already undergone + # this procedure. + if not isinstance(value, list): + value = nonwhitespace_re.findall(value) + self.element[name] = value + def items(self): + return list(self.attrs.items()) + def keys(self): + return list(self.attrs.keys()) + def __len__(self): + return len(self.attrs) + def __getitem__(self, name): + return self.attrs[name] + def __contains__(self, name): + return name in list(self.attrs.keys()) + + +class Element(treebuilder_base.Node): + def __init__(self, element, soup, namespace): + treebuilder_base.Node.__init__(self, element.name) + self.element = element + self.soup = soup + self.namespace = namespace + + def appendChild(self, node): + string_child = child = None + if isinstance(node, str): + # Some other piece of code decided to pass in a string + # instead of creating a TextElement object to contain the + # string. + string_child = child = node + elif isinstance(node, Tag): + # Some other piece of code decided to pass in a Tag + # instead of creating an Element object to contain the + # Tag. 
+ child = node + elif node.element.__class__ == NavigableString: + string_child = child = node.element + node.parent = self + else: + child = node.element + node.parent = self + + if not isinstance(child, str) and child.parent is not None: + node.element.extract() + + if (string_child is not None and self.element.contents + and self.element.contents[-1].__class__ == NavigableString): + # We are appending a string onto another string. + # TODO This has O(n^2) performance, for input like + # "a</a>a</a>a</a>..." + old_element = self.element.contents[-1] + new_element = self.soup.new_string(old_element + string_child) + old_element.replace_with(new_element) + self.soup._most_recent_element = new_element + else: + if isinstance(node, str): + # Create a brand new NavigableString from this string. + child = self.soup.new_string(node) + + # Tell Beautiful Soup to act as if it parsed this element + # immediately after the parent's last descendant. (Or + # immediately after the parent, if it has no children.) + if self.element.contents: + most_recent_element = self.element._last_descendant(False) + elif self.element.next_element is not None: + # Something from further ahead in the parse tree is + # being inserted into this earlier element. This is + # very annoying because it means an expensive search + # for the last element in the tree. + most_recent_element = self.soup._last_descendant() + else: + most_recent_element = self.element + + self.soup.object_was_parsed( + child, parent=self.element, + most_recent_element=most_recent_element) + + def getAttributes(self): + if isinstance(self.element, Comment): + return {} + return AttrList(self.element) + + def setAttributes(self, attributes): + if attributes is not None and len(attributes) > 0: + for name, value in list(attributes.items()): + if isinstance(name, tuple): + new_name = NamespacedAttribute(*name) + del attributes[name] + attributes[new_name] = value + + self.soup.builder._replace_cdata_list_attribute_values( + self.name, attributes) + for name, value in list(attributes.items()): + self.element[name] = value + + # The attributes may contain variables that need substitution. + # Call set_up_substitutions manually. + # + # The Tag constructor called this method when the Tag was created, + # but we just set/changed the attributes, so call it again.
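A hedged illustration of the substitution machinery being re-triggered here, using only the public API (this re-encoding behavior is what set_up_substitutions exists for):

```python
from bs4 import BeautifulSoup

doc = '<html><head><meta charset="utf-8"></head><body>caf\u00e9</body></html>'
soup = BeautifulSoup(doc, 'html.parser')

# The charset attribute was replaced with a CharsetMetaAttributeValue,
# so the declared encoding tracks whatever encoding the tree is output in.
print(soup.encode('latin-1'))   # ... <meta charset="latin-1"/> ...
print(soup.encode('utf-8'))     # ... <meta charset="utf-8"/> ...
```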
+ self.soup.builder.set_up_substitutions(self.element) + attributes = property(getAttributes, setAttributes) + + def insertText(self, data, insertBefore=None): + text = TextNode(self.soup.new_string(data), self.soup) + if insertBefore: + self.insertBefore(text, insertBefore) + else: + self.appendChild(text) + + def insertBefore(self, node, refNode): + index = self.element.index(refNode.element) + if (node.element.__class__ == NavigableString and self.element.contents + and self.element.contents[index-1].__class__ == NavigableString): + # (See comments in appendChild) + old_node = self.element.contents[index-1] + new_str = self.soup.new_string(old_node + node.element) + old_node.replace_with(new_str) + else: + self.element.insert(index, node.element) + node.parent = self + + def removeChild(self, node): + node.element.extract() + + def reparentChildren(self, new_parent): + """Move all of this tag's children into another tag.""" + # print("MOVE", self.element.contents) + # print("FROM", self.element) + # print("TO", new_parent.element) + + element = self.element + new_parent_element = new_parent.element + # Determine what this tag's next_element will be once all the children + # are removed. + final_next_element = element.next_sibling + + new_parents_last_descendant = new_parent_element._last_descendant(False, False) + if len(new_parent_element.contents) > 0: + # The new parent already contains children. We will be + # appending this tag's children to the end. + new_parents_last_child = new_parent_element.contents[-1] + new_parents_last_descendant_next_element = new_parents_last_descendant.next_element + else: + # The new parent contains no children. + new_parents_last_child = None + new_parents_last_descendant_next_element = new_parent_element.next_element + + to_append = element.contents + if len(to_append) > 0: + # Set the first child's previous_element and previous_sibling + # to elements within the new parent + first_child = to_append[0] + if new_parents_last_descendant is not None: + first_child.previous_element = new_parents_last_descendant + else: + first_child.previous_element = new_parent_element + first_child.previous_sibling = new_parents_last_child + if new_parents_last_descendant is not None: + new_parents_last_descendant.next_element = first_child + else: + new_parent_element.next_element = first_child + if new_parents_last_child is not None: + new_parents_last_child.next_sibling = first_child + + # Find the very last element being moved. It is now the + # parent's last descendant. It has no .next_sibling and + # its .next_element is whatever the previous last + # descendant had. + last_childs_last_descendant = to_append[-1]._last_descendant(False, True) + + last_childs_last_descendant.next_element = new_parents_last_descendant_next_element + if new_parents_last_descendant_next_element is not None: + # TODO: This code has no test coverage and I'm not sure + # how to get html5lib to go through this path, but it's + # just the other side of the previous line. + new_parents_last_descendant_next_element.previous_element = last_childs_last_descendant + last_childs_last_descendant.next_sibling = None + + for child in to_append: + child.parent = new_parent_element + new_parent_element.contents.append(child) + + # Now that this element has no children, change its .next_element. 
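The pointer surgery above exists because html5lib rearranges the tree mid-parse. A minimal, hedged demonstration of markup that exercises it (assuming html5lib is installed):

```python
from bs4 import BeautifulSoup

# The HTML5 adoption agency algorithm repairs the overlapping <b>/<i>
# by moving children to a new parent, which is what reparentChildren()
# above supports.
print(BeautifulSoup("<b>1<i>2</b>3</i>", "html5lib").body)
# Expected: <body><b>1<i>2</i></b><i>3</i></body>
```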
+ element.contents = [] + element.next_element = final_next_element + + # print("DONE WITH MOVE") + # print("FROM", self.element) + # print("TO", new_parent_element) + + def cloneNode(self): + tag = self.soup.new_tag(self.element.name, self.namespace) + node = Element(tag, self.soup, self.namespace) + for key, value in self.attributes: + node.attributes[key] = value + return node + + def hasContent(self): + return self.element.contents + + def getNameTuple(self): + if self.namespace is None: + return namespaces["html"], self.name + else: + return self.namespace, self.name + + nameTuple = property(getNameTuple) + +class TextNode(Element): + def __init__(self, element, soup): + treebuilder_base.Node.__init__(self, None) + self.element = element + self.soup = soup + + def cloneNode(self): + raise NotImplementedError diff --git a/python/lib/python3.11/site-packages/bs4/builder/_htmlparser.py b/python/lib/python3.11/site-packages/bs4/builder/_htmlparser.py new file mode 100644 index 0000000..e065096 --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/builder/_htmlparser.py @@ -0,0 +1,387 @@ +# encoding: utf-8 +"""Use the HTMLParser library to parse HTML files that aren't too bad.""" + +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +__all__ = [ + 'HTMLParserTreeBuilder', + ] + +from html.parser import HTMLParser + +import sys +import warnings + +from bs4.element import ( + CData, + Comment, + Declaration, + Doctype, + ProcessingInstruction, + ) +from bs4.dammit import EntitySubstitution, UnicodeDammit + +from bs4.builder import ( + DetectsXMLParsedAsHTML, + ParserRejectedMarkup, + HTML, + HTMLTreeBuilder, + STRICT, + ) + + +HTMLPARSER = 'html.parser' + +class BeautifulSoupHTMLParser(HTMLParser, DetectsXMLParsedAsHTML): + """A subclass of the Python standard library's HTMLParser class, which + listens for HTMLParser events and translates them into calls + to Beautiful Soup's tree construction API. + """ + + # Strategies for handling duplicate attributes + IGNORE = 'ignore' + REPLACE = 'replace' + + def __init__(self, *args, **kwargs): + """Constructor. + + :param on_duplicate_attribute: A strategy for what to do if a + tag includes the same attribute more than once. Accepted + values are: REPLACE (replace earlier values with later + ones, the default), IGNORE (keep the earliest value + encountered), or a callable. A callable must take three + arguments: the dictionary of attributes already processed, + the name of the duplicate attribute, and the most recent value + encountered. + """ + self.on_duplicate_attribute = kwargs.pop( + 'on_duplicate_attribute', self.REPLACE + ) + HTMLParser.__init__(self, *args, **kwargs) + + # Keep a list of empty-element tags that were encountered + # without an explicit closing tag. If we encounter a closing tag + # of this type, we'll associate it with one of those entries. + # + # This isn't a stack because we don't care about the + # order. It's a list of closing tags we've already handled and + # will ignore, assuming they ever show up. + self.already_closed_empty_element = [] + + self._initialize_xml_detector() + + def error(self, message): + # NOTE: This method is required so long as Python 3.9 is + # supported. The corresponding code is removed from HTMLParser + # in 3.5, but not removed from ParserBase until 3.10.
+ # https://github.com/python/cpython/issues/76025 + # + # The original implementation turned the error into a warning, + # but in every case I discovered, this made HTMLParser + # immediately crash with an error message that was less + # helpful than the warning. The new implementation makes it + # more clear that html.parser just can't parse this + # markup. The 3.10 implementation does the same, though it + # raises AssertionError rather than calling a method. (We + # catch this error and wrap it in a ParserRejectedMarkup.) + raise ParserRejectedMarkup(message) + + def handle_startendtag(self, name, attrs): + """Handle an incoming empty-element tag. + + This is only called when the markup looks like <tag/>. + + :param name: Name of the tag. + :param attrs: Dictionary of the tag's attributes. + """ + # is_startend() tells handle_starttag not to close the tag + # just because its name matches a known empty-element tag. We + # know that this is an empty-element tag and we want to call + # handle_endtag ourselves. + tag = self.handle_starttag(name, attrs, handle_empty_element=False) + self.handle_endtag(name) + + def handle_starttag(self, name, attrs, handle_empty_element=True): + """Handle an opening tag, e.g. '<tag>' + + :param name: Name of the tag. + :param attrs: Dictionary of the tag's attributes. + :param handle_empty_element: True if this tag is known to be + an empty-element tag (i.e. there is not expected to be any + closing tag). + """ + # XXX namespace + attr_dict = {} + for key, value in attrs: + # Change None attribute values to the empty string + # for consistency with the other tree builders. + if value is None: + value = '' + if key in attr_dict: + # A single attribute shows up multiple times in this + # tag. How to handle it depends on the + # on_duplicate_attribute setting. + on_dupe = self.on_duplicate_attribute + if on_dupe == self.IGNORE: + pass + elif on_dupe in (None, self.REPLACE): + attr_dict[key] = value + else: + on_dupe(attr_dict, key, value) + else: + attr_dict[key] = value + #print("START", name) + sourceline, sourcepos = self.getpos() + tag = self.soup.handle_starttag( + name, None, None, attr_dict, sourceline=sourceline, + sourcepos=sourcepos + ) + if tag and tag.is_empty_element and handle_empty_element: + # Unlike other parsers, html.parser doesn't send separate end tag + # events for empty-element tags. (It's handled in + # handle_startendtag, but only if the original markup looked like + # <tag/>.) + # + # So we need to call handle_endtag() ourselves. Since we + # know the start event is identical to the end event, we + # don't want handle_endtag() to cross off any previous end + # events for tags of this name. + self.handle_endtag(name, check_already_closed=False) + + # But we might encounter an explicit closing tag for this tag + # later on. If so, we want to ignore it. + self.already_closed_empty_element.append(name) + + if self._root_tag is None: + self._root_tag_encountered(name) + + def handle_endtag(self, name, check_already_closed=True): + """Handle a closing tag, e.g. '</tag>' + + :param name: A tag name. + :param check_already_closed: True if this tag is expected to + be the closing portion of an empty-element tag, + e.g. '<tag></tag>'. + """ + #print("END", name) + if check_already_closed and name in self.already_closed_empty_element: + # This is a redundant end tag for an empty-element tag. + # We've already called handle_endtag() for it, so just + # check it off the list.
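A hedged sketch of the duplicate-attribute strategies described above, exercised through the public constructor:

```python
from bs4 import BeautifulSoup

markup = '<a class="first" class="second">x</a>'

# REPLACE is the default: the later value wins.
print(BeautifulSoup(markup, 'html.parser').a['class'])   # ['second']

# IGNORE keeps the earliest value encountered.
soup = BeautifulSoup(markup, 'html.parser', on_duplicate_attribute='ignore')
print(soup.a['class'])                                   # ['first']
```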
+ #print("ALREADY CLOSED", name) + self.already_closed_empty_element.remove(name) + else: + self.soup.handle_endtag(name) + + def handle_data(self, data): + """Handle some textual data that shows up between tags.""" + self.soup.handle_data(data) + + def handle_charref(self, name): + """Handle a numeric character reference by converting it to the + corresponding Unicode character and treating it as textual + data. + + :param name: Character number, possibly in hexadecimal. + """ + # TODO: This was originally a workaround for a bug in + # HTMLParser. (http://bugs.python.org/issue13633) The bug has + # been fixed, but removing this code still makes some + # Beautiful Soup tests fail. This needs investigation. + if name.startswith('x'): + real_name = int(name.lstrip('x'), 16) + elif name.startswith('X'): + real_name = int(name.lstrip('X'), 16) + else: + real_name = int(name) + + data = None + if real_name < 256: + # HTML numeric entities are supposed to reference Unicode + # code points, but sometimes they reference code points in + # some other encoding (ahem, Windows-1252). E.g. “ + # instead of É for LEFT DOUBLE QUOTATION MARK. This + # code tries to detect this situation and compensate. + for encoding in (self.soup.original_encoding, 'windows-1252'): + if not encoding: + continue + try: + data = bytearray([real_name]).decode(encoding) + except UnicodeDecodeError as e: + pass + if not data: + try: + data = chr(real_name) + except (ValueError, OverflowError) as e: + pass + data = data or "\N{REPLACEMENT CHARACTER}" + self.handle_data(data) + + def handle_entityref(self, name): + """Handle a named entity reference by converting it to the + corresponding Unicode character(s) and treating it as textual + data. + + :param name: Name of the entity reference. + """ + character = EntitySubstitution.HTML_ENTITY_TO_CHARACTER.get(name) + if character is not None: + data = character + else: + # If this were XML, it would be ambiguous whether "&foo" + # was an character entity reference with a missing + # semicolon or the literal string "&foo". Since this is + # HTML, we have a complete list of all character entity references, + # and this one wasn't found, so assume it's the literal string "&foo". + data = "&%s" % name + self.handle_data(data) + + def handle_comment(self, data): + """Handle an HTML comment. + + :param data: The text of the comment. + """ + self.soup.endData() + self.soup.handle_data(data) + self.soup.endData(Comment) + + def handle_decl(self, data): + """Handle a DOCTYPE declaration. + + :param data: The text of the declaration. + """ + self.soup.endData() + data = data[len("DOCTYPE "):] + self.soup.handle_data(data) + self.soup.endData(Doctype) + + def unknown_decl(self, data): + """Handle a declaration of unknown type -- probably a CDATA block. + + :param data: The text of the declaration. + """ + if data.upper().startswith('CDATA['): + cls = CData + data = data[len('CDATA['):] + else: + cls = Declaration + self.soup.endData() + self.soup.handle_data(data) + self.soup.endData(cls) + + def handle_pi(self, data): + """Handle a processing instruction. + + :param data: The text of the instruction. + """ + self.soup.endData() + self.soup.handle_data(data) + self._document_might_be_xml(data) + self.soup.endData(ProcessingInstruction) + + +class HTMLParserTreeBuilder(HTMLTreeBuilder): + """A Beautiful soup `TreeBuilder` that uses the `HTMLParser` parser, + found in the Python standard library. 
+ """ + is_xml = False + picklable = True + NAME = HTMLPARSER + features = [NAME, HTML, STRICT] + + # The html.parser knows which line number and position in the + # original file is the source of an element. + TRACKS_LINE_NUMBERS = True + + def __init__(self, parser_args=None, parser_kwargs=None, **kwargs): + """Constructor. + + :param parser_args: Positional arguments to pass into + the BeautifulSoupHTMLParser constructor, once it's + invoked. + :param parser_kwargs: Keyword arguments to pass into + the BeautifulSoupHTMLParser constructor, once it's + invoked. + :param kwargs: Keyword arguments for the superclass constructor. + """ + # Some keyword arguments will be pulled out of kwargs and placed + # into parser_kwargs. + extra_parser_kwargs = dict() + for arg in ('on_duplicate_attribute',): + if arg in kwargs: + value = kwargs.pop(arg) + extra_parser_kwargs[arg] = value + super(HTMLParserTreeBuilder, self).__init__(**kwargs) + parser_args = parser_args or [] + parser_kwargs = parser_kwargs or {} + parser_kwargs.update(extra_parser_kwargs) + parser_kwargs['convert_charrefs'] = False + self.parser_args = (parser_args, parser_kwargs) + + def prepare_markup(self, markup, user_specified_encoding=None, + document_declared_encoding=None, exclude_encodings=None): + + """Run any preliminary steps necessary to make incoming markup + acceptable to the parser. + + :param markup: Some markup -- probably a bytestring. + :param user_specified_encoding: The user asked to try this encoding. + :param document_declared_encoding: The markup itself claims to be + in this encoding. + :param exclude_encodings: The user asked _not_ to try any of + these encodings. + + :yield: A series of 4-tuples: + (markup, encoding, declared encoding, + has undergone character replacement) + + Each 4-tuple represents a strategy for converting the + document to Unicode and parsing it. Each strategy will be tried + in turn. + """ + if isinstance(markup, str): + # Parse Unicode as-is. + yield (markup, None, None, False) + return + + # Ask UnicodeDammit to sniff the most likely encoding. + + # This was provided by the end-user; treat it as a known + # definite encoding per the algorithm laid out in the HTML5 + # spec. (See the EncodingDetector class for details.) + known_definite_encodings = [user_specified_encoding] + + # This was found in the document; treat it as a slightly lower-priority + # user encoding. + user_encodings = [document_declared_encoding] + + try_encodings = [user_specified_encoding, document_declared_encoding] + dammit = UnicodeDammit( + markup, + known_definite_encodings=known_definite_encodings, + user_encodings=user_encodings, + is_html=True, + exclude_encodings=exclude_encodings + ) + yield (dammit.markup, dammit.original_encoding, + dammit.declared_html_encoding, + dammit.contains_replacement_characters) + + def feed(self, markup): + """Run some incoming markup through some parsing process, + populating the `BeautifulSoup` object in self.soup. + """ + args, kwargs = self.parser_args + parser = BeautifulSoupHTMLParser(*args, **kwargs) + parser.soup = self.soup + try: + parser.feed(markup) + except AssertionError as e: + # html.parser raises AssertionError in rare cases to + # indicate a fatal problem with the markup, especially + # when there's an error in the doctype declaration. 
+ raise ParserRejectedMarkup(e) + parser.close() + parser.already_closed_empty_element = [] diff --git a/python/lib/python3.11/site-packages/bs4/builder/_lxml.py b/python/lib/python3.11/site-packages/bs4/builder/_lxml.py new file mode 100644 index 0000000..971c81e --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/builder/_lxml.py @@ -0,0 +1,386 @@ +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +__all__ = [ + 'LXMLTreeBuilderForXML', + 'LXMLTreeBuilder', + ] + +try: + from collections.abc import Callable # Python 3.6 +except ImportError as e: + from collections import Callable + +from io import BytesIO +from io import StringIO +from lxml import etree +from bs4.element import ( + Comment, + Doctype, + NamespacedAttribute, + ProcessingInstruction, + XMLProcessingInstruction, +) +from bs4.builder import ( + DetectsXMLParsedAsHTML, + FAST, + HTML, + HTMLTreeBuilder, + PERMISSIVE, + ParserRejectedMarkup, + TreeBuilder, + XML) +from bs4.dammit import EncodingDetector + +LXML = 'lxml' + +def _invert(d): + "Invert a dictionary." + return dict((v, k) for k, v in list(d.items())) + +class LXMLTreeBuilderForXML(TreeBuilder): + DEFAULT_PARSER_CLASS = etree.XMLParser + + is_xml = True + processing_instruction_class = XMLProcessingInstruction + + NAME = "lxml-xml" + ALTERNATE_NAMES = ["xml"] + + # Well, it's permissive by XML parser standards. + features = [NAME, LXML, XML, FAST, PERMISSIVE] + + CHUNK_SIZE = 512 + + # This namespace mapping is specified in the XML Namespace + # standard. + DEFAULT_NSMAPS = dict(xml='http://www.w3.org/XML/1998/namespace') + + DEFAULT_NSMAPS_INVERTED = _invert(DEFAULT_NSMAPS) + + # NOTE: If we parsed Element objects and looked at .sourceline, + # we'd be able to see the line numbers from the original document. + # But instead we build an XMLParser or HTMLParser object to serve + # as the target of parse messages, and those messages don't include + # line numbers. + # See: https://bugs.launchpad.net/lxml/+bug/1846906 + + def initialize_soup(self, soup): + """Let the BeautifulSoup object know about the standard namespace + mapping. + + :param soup: A `BeautifulSoup`. + """ + super(LXMLTreeBuilderForXML, self).initialize_soup(soup) + self._register_namespaces(self.DEFAULT_NSMAPS) + + def _register_namespaces(self, mapping): + """Let the BeautifulSoup object know about namespaces encountered + while parsing the document. + + This might be useful later on when creating CSS selectors. + + This will track (almost) all namespaces, even ones that were + only in scope for part of the document. If two namespaces have + the same prefix, only the first one encountered will be + tracked. Un-prefixed namespaces are not tracked. + + :param mapping: A dictionary mapping namespace prefixes to URIs. + """ + for key, value in list(mapping.items()): + # This is 'if key' and not 'if key is not None' because we + # don't track un-prefixed namespaces. soupsieve will + # treat an un-prefixed namespace as the default, which + # causes confusion in some cases. + if key and key not in self.soup._namespaces: + # Let the BeautifulSoup object know about a new namespace. + # If there are multiple namespaces defined with the same + # prefix, the first one in the document takes precedence. + self.soup._namespaces[key] = value + + def default_parser(self, encoding): + """Find the default parser for the given encoding. + + :param encoding: A string. + :return: Either a parser object or a class, which + will be instantiated with default arguments.
+ """ + if self._default_parser is not None: + return self._default_parser + return etree.XMLParser( + target=self, strip_cdata=False, recover=True, encoding=encoding) + + def parser_for(self, encoding): + """Instantiate an appropriate parser for the given encoding. + + :param encoding: A string. + :return: A parser object such as an `etree.XMLParser`. + """ + # Use the default parser. + parser = self.default_parser(encoding) + + if isinstance(parser, Callable): + # Instantiate the parser with default arguments + parser = parser( + target=self, strip_cdata=False, recover=True, encoding=encoding + ) + return parser + + def __init__(self, parser=None, empty_element_tags=None, **kwargs): + # TODO: Issue a warning if parser is present but not a + # callable, since that means there's no way to create new + # parsers for different encodings. + self._default_parser = parser + if empty_element_tags is not None: + self.empty_element_tags = set(empty_element_tags) + self.soup = None + self.nsmaps = [self.DEFAULT_NSMAPS_INVERTED] + self.active_namespace_prefixes = [dict(self.DEFAULT_NSMAPS)] + super(LXMLTreeBuilderForXML, self).__init__(**kwargs) + + def _getNsTag(self, tag): + # Split the namespace URL out of a fully-qualified lxml tag + # name. Copied from lxml's src/lxml/sax.py. + if tag[0] == '{': + return tuple(tag[1:].split('}', 1)) + else: + return (None, tag) + + def prepare_markup(self, markup, user_specified_encoding=None, + exclude_encodings=None, + document_declared_encoding=None): + """Run any preliminary steps necessary to make incoming markup + acceptable to the parser. + + lxml really wants to get a bytestring and convert it to + Unicode itself. So instead of using UnicodeDammit to convert + the bytestring to Unicode using different encodings, this + implementation uses EncodingDetector to iterate over the + encodings, and tell lxml to try to parse the document as each + one in turn. + + :param markup: Some markup -- hopefully a bytestring. + :param user_specified_encoding: The user asked to try this encoding. + :param document_declared_encoding: The markup itself claims to be + in this encoding. + :param exclude_encodings: The user asked _not_ to try any of + these encodings. + + :yield: A series of 4-tuples: + (markup, encoding, declared encoding, + has undergone character replacement) + + Each 4-tuple represents a strategy for converting the + document to Unicode and parsing it. Each strategy will be tried + in turn. + """ + is_html = not self.is_xml + if is_html: + self.processing_instruction_class = ProcessingInstruction + # We're in HTML mode, so if we're given XML, that's worth + # noting. + DetectsXMLParsedAsHTML.warn_if_markup_looks_like_xml(markup) + else: + self.processing_instruction_class = XMLProcessingInstruction + + if isinstance(markup, str): + # We were given Unicode. Maybe lxml can parse Unicode on + # this system? + + # TODO: This is a workaround for + # https://bugs.launchpad.net/lxml/+bug/1948551. + # We can remove it once the upstream issue is fixed. + if len(markup) > 0 and markup[0] == u'\N{BYTE ORDER MARK}': + markup = markup[1:] + yield markup, None, document_declared_encoding, False + + if isinstance(markup, str): + # No, apparently not. Convert the Unicode to UTF-8 and + # tell lxml to parse it as UTF-8. + yield (markup.encode("utf8"), "utf8", + document_declared_encoding, False) + + # This was provided by the end-user; treat it as a known + # definite encoding per the algorithm laid out in the HTML5 + # spec. (See the EncodingDetector class for details.) 
+ known_definite_encodings = [user_specified_encoding] + + # This was found in the document; treat it as a slightly lower-priority + # user encoding. + user_encodings = [document_declared_encoding] + detector = EncodingDetector( + markup, known_definite_encodings=known_definite_encodings, + user_encodings=user_encodings, is_html=is_html, + exclude_encodings=exclude_encodings + ) + for encoding in detector.encodings: + yield (detector.markup, encoding, document_declared_encoding, False) + + def feed(self, markup): + if isinstance(markup, bytes): + markup = BytesIO(markup) + elif isinstance(markup, str): + markup = StringIO(markup) + + # Call feed() at least once, even if the markup is empty, + # or the parser won't be initialized. + data = markup.read(self.CHUNK_SIZE) + try: + self.parser = self.parser_for(self.soup.original_encoding) + self.parser.feed(data) + while len(data) != 0: + # Now call feed() on the rest of the data, chunk by chunk. + data = markup.read(self.CHUNK_SIZE) + if len(data) != 0: + self.parser.feed(data) + self.parser.close() + except (UnicodeDecodeError, LookupError, etree.ParserError) as e: + raise ParserRejectedMarkup(e) + + def close(self): + self.nsmaps = [self.DEFAULT_NSMAPS_INVERTED] + + def start(self, name, attrs, nsmap={}): + # Make sure attrs is a mutable dict--lxml may send an immutable dictproxy. + attrs = dict(attrs) + nsprefix = None + # Invert each namespace map as it comes in. + if len(nsmap) == 0 and len(self.nsmaps) > 1: + # There are no new namespaces for this tag, but + # non-default namespaces are in play, so we need a + # separate tag stack to know when they end. + self.nsmaps.append(None) + elif len(nsmap) > 0: + # A new namespace mapping has come into play. + + # First, let the BeautifulSoup object know about it. + self._register_namespaces(nsmap) + + # Then, add it to our running list of inverted namespace + # mappings. + self.nsmaps.append(_invert(nsmap)) + + # The currently active namespace prefixes have + # changed. Calculate the new mapping so it can be stored + # with all Tag objects created while these prefixes are in + # scope. + current_mapping = dict(self.active_namespace_prefixes[-1]) + current_mapping.update(nsmap) + + # We should not track un-prefixed namespaces as we can only hold one + # and it will be recognized as the default namespace by soupsieve, + # which may be confusing in some situations. + if '' in current_mapping: + del current_mapping[''] + self.active_namespace_prefixes.append(current_mapping) + + # Also treat the namespace mapping as a set of attributes on the + # tag, so we can recreate it later. + attrs = attrs.copy() + for prefix, namespace in list(nsmap.items()): + attribute = NamespacedAttribute( + "xmlns", prefix, "http://www.w3.org/2000/xmlns/") + attrs[attribute] = namespace + + # Namespaces are in play. Find any attributes that came in + # from lxml with namespaces attached to their names, and + # turn them into NamespacedAttribute objects.
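A hedged sketch of why the prefix registration above matters: prefixes recorded during parsing become usable in CSS selectors later, without an explicit namespaces argument (assuming lxml and soupsieve are installed):

```python
from bs4 import BeautifulSoup

xml = '<root xmlns:a="http://example.com/a"><a:item>1</a:item></root>'
soup = BeautifulSoup(xml, 'lxml-xml')

# The 'a' prefix was registered by _register_namespaces() during the
# parse, so soupsieve can resolve it here.
print(soup.select('a|item'))    # expected: [<a:item>1</a:item>]
```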
+ new_attrs = {} + for attr, value in list(attrs.items()): + namespace, attr = self._getNsTag(attr) + if namespace is None: + new_attrs[attr] = value + else: + nsprefix = self._prefix_for_namespace(namespace) + attr = NamespacedAttribute(nsprefix, attr, namespace) + new_attrs[attr] = value + attrs = new_attrs + + namespace, name = self._getNsTag(name) + nsprefix = self._prefix_for_namespace(namespace) + self.soup.handle_starttag( + name, namespace, nsprefix, attrs, + namespaces=self.active_namespace_prefixes[-1] + ) + + def _prefix_for_namespace(self, namespace): + """Find the currently active prefix for the given namespace.""" + if namespace is None: + return None + for inverted_nsmap in reversed(self.nsmaps): + if inverted_nsmap is not None and namespace in inverted_nsmap: + return inverted_nsmap[namespace] + return None + + def end(self, name): + self.soup.endData() + completed_tag = self.soup.tagStack[-1] + namespace, name = self._getNsTag(name) + nsprefix = None + if namespace is not None: + for inverted_nsmap in reversed(self.nsmaps): + if inverted_nsmap is not None and namespace in inverted_nsmap: + nsprefix = inverted_nsmap[namespace] + break + self.soup.handle_endtag(name, nsprefix) + if len(self.nsmaps) > 1: + # This tag, or one of its parents, introduced a namespace + # mapping, so pop it off the stack. + out_of_scope_nsmap = self.nsmaps.pop() + + if out_of_scope_nsmap is not None: + # This tag introduced a namespace mapping which is no + # longer in scope. Recalculate the currently active + # namespace prefixes. + self.active_namespace_prefixes.pop() + + def pi(self, target, data): + self.soup.endData() + data = target + ' ' + data + self.soup.handle_data(data) + self.soup.endData(self.processing_instruction_class) + + def data(self, content): + self.soup.handle_data(content) + + def doctype(self, name, pubid, system): + self.soup.endData() + doctype = Doctype.for_name_and_ids(name, pubid, system) + self.soup.object_was_parsed(doctype) + + def comment(self, content): + "Handle comments as Comment objects." + self.soup.endData() + self.soup.handle_data(content) + self.soup.endData(Comment) + + def test_fragment_to_document(self, fragment): + """See `TreeBuilder`.""" + return '<?xml version="1.0" encoding="utf-8"?>\n%s' % fragment + + +class LXMLTreeBuilder(HTMLTreeBuilder, LXMLTreeBuilderForXML): + + NAME = LXML + ALTERNATE_NAMES = ["lxml-html"] + + features = ALTERNATE_NAMES + [NAME, HTML, FAST, PERMISSIVE] + is_xml = False + processing_instruction_class = ProcessingInstruction + + def default_parser(self, encoding): + return etree.HTMLParser + + def feed(self, markup): + encoding = self.soup.original_encoding + try: + self.parser = self.parser_for(encoding) + self.parser.feed(markup) + self.parser.close() + except (UnicodeDecodeError, LookupError, etree.ParserError) as e: + raise ParserRejectedMarkup(e) + + + def test_fragment_to_document(self, fragment): + """See `TreeBuilder`.""" + return '<html><body>%s</body></html>' % fragment diff --git a/python/lib/python3.11/site-packages/bs4/css.py b/python/lib/python3.11/site-packages/bs4/css.py new file mode 100644 index 0000000..245ac60 --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/css.py @@ -0,0 +1,280 @@ +"""Integration code for CSS selectors using Soup Sieve (pypi: soupsieve).""" + +import warnings +try: + import soupsieve +except ImportError as e: + soupsieve = None + warnings.warn( + 'The soupsieve package is not installed. CSS selectors cannot be used.' 
+ ) + + +class CSS(object): + """A proxy object against the soupsieve library, to simplify its + CSS selector API. + + Acquire this object through the .css attribute on the + BeautifulSoup object, or on the Tag you want to use as the + starting point for a CSS selector. + + The main advantage of doing this is that the tag to be selected + against doesn't need to be explicitly specified in the function + calls, since it's already scoped to a tag. + """ + + def __init__(self, tag, api=soupsieve): + """Constructor. + + You don't need to instantiate this class yourself; instead, + access the .css attribute on the BeautifulSoup object, or on + the Tag you want to use as the starting point for your CSS + selector. + + :param tag: All CSS selectors will use this as their starting + point. + + :param api: A plug-in replacement for the soupsieve module, + designed mainly for use in tests. + """ + if api is None: + raise NotImplementedError( + "Cannot execute CSS selectors because the soupsieve package is not installed." + ) + self.api = api + self.tag = tag + + def escape(self, ident): + """Escape a CSS identifier. + + This is a simple wrapper around soupsieve.escape(). See the + documentation for that function for more information. + """ + if soupsieve is None: + raise NotImplementedError( + "Cannot escape CSS identifiers because the soupsieve package is not installed." + ) + return self.api.escape(ident) + + def _ns(self, ns, select): + """Normalize a dictionary of namespaces.""" + if not isinstance(select, self.api.SoupSieve) and ns is None: + # If the selector is a precompiled pattern, it already has + # a namespace context compiled in, which cannot be + # replaced. + ns = self.tag._namespaces + return ns + + def _rs(self, results): + """Normalize a list of results to a ResultSet. + + A ResultSet is more consistent with the rest of Beautiful + Soup's API, and ResultSet.__getattr__ has a helpful error + message if you try to treat a list of results as a single + result (a common mistake). + """ + # Import here to avoid circular import + from bs4.element import ResultSet + return ResultSet(None, results) + + def compile(self, select, namespaces=None, flags=0, **kwargs): + """Pre-compile a selector and return the compiled object. + + :param select: A CSS selector. + + :param namespaces: A dictionary mapping namespace prefixes + used in the CSS selector to namespace URIs. By default, + Beautiful Soup will use the prefixes it encountered while + parsing the document. + + :param flags: Flags to be passed into Soup Sieve's + soupsieve.compile() method. + + :param kwargs: Keyword arguments to be passed into SoupSieve's + soupsieve.compile() method. + + :return: A precompiled selector object. + :rtype: soupsieve.SoupSieve + """ + return self.api.compile( + select, self._ns(namespaces, select), flags, **kwargs + ) + + def select_one(self, select, namespaces=None, flags=0, **kwargs): + """Perform a CSS selection operation on the current Tag and return the + first result. + + This uses the Soup Sieve library. For more information, see + that library's documentation for the soupsieve.select_one() + method. + + :param select: A CSS selector. + + :param namespaces: A dictionary mapping namespace prefixes + used in the CSS selector to namespace URIs. By default, + Beautiful Soup will use the prefixes it encountered while + parsing the document. + + :param flags: Flags to be passed into Soup Sieve's + soupsieve.select_one() method.
+ + :param kwargs: Keyword arguments to be passed into SoupSieve's + soupsieve.select_one() method. + + :return: A Tag, or None if the selector has no match. + :rtype: bs4.element.Tag + + """ + return self.api.select_one( + select, self.tag, self._ns(namespaces, select), flags, **kwargs + ) + + def select(self, select, namespaces=None, limit=0, flags=0, **kwargs): + """Perform a CSS selection operation on the current Tag. + + This uses the Soup Sieve library. For more information, see + that library's documentation for the soupsieve.select() + method. + + :param select: A string containing a CSS selector. + + :param namespaces: A dictionary mapping namespace prefixes + used in the CSS selector to namespace URIs. By default, + Beautiful Soup will pass in the prefixes it encountered while + parsing the document. + + :param limit: After finding this number of results, stop looking. + + :param flags: Flags to be passed into Soup Sieve's + soupsieve.select() method. + + :param kwargs: Keyword arguments to be passed into SoupSieve's + soupsieve.select() method. + + :return: A ResultSet of Tag objects. + :rtype: bs4.element.ResultSet + + """ + if limit is None: + limit = 0 + + return self._rs( + self.api.select( + select, self.tag, self._ns(namespaces, select), limit, flags, + **kwargs + ) + ) + + def iselect(self, select, namespaces=None, limit=0, flags=0, **kwargs): + """Perform a CSS selection operation on the current Tag. + + This uses the Soup Sieve library. For more information, see + that library's documentation for the soupsieve.iselect() + method. It is the same as select(), but it returns a generator + instead of a list. + + :param select: A string containing a CSS selector. + + :param namespaces: A dictionary mapping namespace prefixes + used in the CSS selector to namespace URIs. By default, + Beautiful Soup will pass in the prefixes it encountered while + parsing the document. + + :param limit: After finding this number of results, stop looking. + + :param flags: Flags to be passed into Soup Sieve's + soupsieve.iselect() method. + + :param kwargs: Keyword arguments to be passed into SoupSieve's + soupsieve.iselect() method. + + :return: A generator + :rtype: types.GeneratorType + """ + return self.api.iselect( + select, self.tag, self._ns(namespaces, select), limit, flags, **kwargs + ) + + def closest(self, select, namespaces=None, flags=0, **kwargs): + """Find the Tag closest to this one that matches the given selector. + + This uses the Soup Sieve library. For more information, see + that library's documentation for the soupsieve.closest() + method. + + :param select: A string containing a CSS selector. + + :param namespaces: A dictionary mapping namespace prefixes + used in the CSS selector to namespace URIs. By default, + Beautiful Soup will pass in the prefixes it encountered while + parsing the document. + + :param flags: Flags to be passed into Soup Sieve's + soupsieve.closest() method. + + :param kwargs: Keyword arguments to be passed into SoupSieve's + soupsieve.closest() method. + + :return: A Tag, or None if there is no match. + :rtype: bs4.Tag + + """ + return self.api.closest( + select, self.tag, self._ns(namespaces, select), flags, **kwargs + ) + + def match(self, select, namespaces=None, flags=0, **kwargs): + """Check whether this Tag matches the given CSS selector. + + This uses the Soup Sieve library. For more information, see + that library's documentation for the soupsieve.match() + method. + + :param select: A CSS selector.
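(The remaining parameter documentation for match() continues below.) In the meantime, a minimal, hedged sketch of the proxy in everyday use, assuming soupsieve is installed:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<ul><li id="a">x</li><li>y</li></ul>', 'html.parser')

print(soup.css.select('li'))       # ResultSet containing both <li> tags
print(soup.css.select_one('#a'))   # <li id="a">x</li>
print(soup.li.css.match('#a'))     # True: the first <li> has id="a"
```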
+ + :param namespaces: A dictionary mapping namespace prefixes + used in the CSS selector to namespace URIs. By default, + Beautiful Soup will pass in the prefixes it encountered while + parsing the document. + + :param flags: Flags to be passed into Soup Sieve's + soupsieve.match() method. + + :param kwargs: Keyword arguments to be passed into SoupSieve's + soupsieve.match() method. + + :return: True if this Tag matches the selector; False otherwise. + :rtype: bool + """ + return self.api.match( + select, self.tag, self._ns(namespaces, select), flags, **kwargs + ) + + def filter(self, select, namespaces=None, flags=0, **kwargs): + """Filter this Tag's direct children based on the given CSS selector. + + This uses the Soup Sieve library. It works the same way as + passing this Tag into that library's soupsieve.filter() + method. For more information, see the documentation for + soupsieve.filter(). + + :param select: A string containing a CSS selector. + + :param namespaces: A dictionary mapping namespace prefixes + used in the CSS selector to namespace URIs. By default, + Beautiful Soup will pass in the prefixes it encountered while + parsing the document. + + :param flags: Flags to be passed into Soup Sieve's + soupsieve.filter() method. + + :param kwargs: Keyword arguments to be passed into SoupSieve's + soupsieve.filter() method. + + :return: A ResultSet of Tag objects. + :rtype: bs4.element.ResultSet + + """ + return self._rs( + self.api.filter( + select, self.tag, self._ns(namespaces, select), flags, **kwargs + ) + ) diff --git a/python/lib/python3.11/site-packages/bs4/dammit.py b/python/lib/python3.11/site-packages/bs4/dammit.py new file mode 100644 index 0000000..692433c --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/dammit.py @@ -0,0 +1,1095 @@ +# -*- coding: utf-8 -*- +"""Beautiful Soup bonus library: Unicode, Dammit + +This library converts a bytestream to Unicode through any means +necessary. It is heavily based on code from Mark Pilgrim's Universal +Feed Parser. It works best on XML and HTML, but it does not rewrite the +XML or HTML to reflect a new encoding; that's the tree builder's job. +""" +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +from html.entities import codepoint2name +from collections import defaultdict +import codecs +import re +import logging +import string + +# Import a library to autodetect character encodings. We'll support +# any of a number of libraries that all support the same API: +# +# * cchardet +# * chardet +# * charset-normalizer +chardet_module = None +try: + # PyPI package: cchardet + import cchardet as chardet_module +except ImportError: + try: + # Debian package: python-chardet + # PyPI package: chardet + import chardet as chardet_module + except ImportError: + try: + # PyPI package: charset-normalizer + import charset_normalizer as chardet_module + except ImportError: + # No chardet available. + chardet_module = None + +if chardet_module: + def chardet_dammit(s): + if isinstance(s, str): + return None + return chardet_module.detect(s)['encoding'] +else: + def chardet_dammit(s): + return None + +# Build bytestring and Unicode versions of regular expressions for finding +# a declared encoding inside an XML or HTML document.
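A hedged sketch of what the regular expressions defined just below are used for, via the module's public helpers (illustrative only; detection results can vary with installed chardet libraries):

```python
from bs4.dammit import EncodingDetector, UnicodeDammit

# The 'xml' pattern extracts the encoding declared in an XML prolog.
print(EncodingDetector.find_declared_encoding(
    b'<?xml version="1.0" encoding="ISO-8859-1"?><doc/>', is_html=False))
# expected: 'iso-8859-1'

# UnicodeDammit leans on the same machinery when converting bytes.
dammit = UnicodeDammit(b"Sacr\xe9 bleu!", known_definite_encodings=["latin-1"])
print(dammit.unicode_markup, dammit.original_encoding)  # Sacré bleu! latin-1
```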
+xml_encoding = '^\\s*<\\?.*encoding=[\'"](.*?)[\'"].*\\?>'
+html_meta = '<\\s*meta[^>]+charset\\s*=\\s*["\']?([^>]*?)[ /;\'">]'
+encoding_res = dict()
+encoding_res[bytes] = {
+    'html' : re.compile(html_meta.encode("ascii"), re.I),
+    'xml' : re.compile(xml_encoding.encode("ascii"), re.I),
+}
+encoding_res[str] = {
+    'html' : re.compile(html_meta, re.I),
+    'xml' : re.compile(xml_encoding, re.I)
+}
+
+from html.entities import html5
+
+class EntitySubstitution(object):
+    """The ability to substitute XML or HTML entities for certain characters."""
+
+    def _populate_class_variables():
+        """Initialize variables used by this class to manage the plethora of
+        HTML5 named entities.
+
+        This function returns a 3-tuple containing two dictionaries
+        and a regular expression:
+
+        unicode_to_name - A mapping of Unicode strings like "⦨" to
+        entity names like "angmsdaa". When a single Unicode string has
+        multiple entity names, we try to choose the most commonly-used
+        name.
+
+        name_to_unicode - A mapping of entity names like "angmsdaa" to
+        Unicode strings like "⦨".
+
+        named_entity_re - A regular expression matching (almost) any
+        Unicode string that corresponds to an HTML5 named entity.
+        """
+        unicode_to_name = {}
+        name_to_unicode = {}
+
+        short_entities = set()
+        long_entities_by_first_character = defaultdict(set)
+
+        for name_with_semicolon, character in sorted(html5.items()):
+            # "It is intentional, for legacy compatibility, that many
+            # code points have multiple character reference names. For
+            # example, some appear both with and without the trailing
+            # semicolon, or with different capitalizations."
+            # - https://html.spec.whatwg.org/multipage/named-characters.html#named-character-references
+            #
+            # The parsers are in charge of handling (or not) character
+            # references with no trailing semicolon, so we remove the
+            # semicolon whenever it appears.
+            if name_with_semicolon.endswith(';'):
+                name = name_with_semicolon[:-1]
+            else:
+                name = name_with_semicolon
+
+            # When parsing HTML, we want to recognize any known named
+            # entity and convert it to a sequence of Unicode
+            # characters.
+            if name not in name_to_unicode:
+                name_to_unicode[name] = character
+
+            # When _generating_ HTML, we want to recognize special
+            # character sequences that _could_ be converted to named
+            # entities.
+            unicode_to_name[character] = name
+
+            # We also need to build a regular expression that lets us
+            # _find_ those characters in output strings so we can
+            # replace them.
+            #
+            # This is tricky, for two reasons.
+
+            if (len(character) == 1 and ord(character) < 128
+                and character not in '<>&'):
+                # First, it would be annoying to turn single ASCII
+                # characters like | into named entities like
+                # &verbar;. The exceptions are <>&, which we _must_
+                # turn into named entities to produce valid HTML.
+                continue
+
+            if len(character) > 1 and all(ord(x) < 128 for x in character):
+                # We also do not want to turn _combinations_ of ASCII
+                # characters like 'fj' into named entities like
+                # '&fjlig;', though that's more debatable.
+                continue
+
+            # Second, some named entities have a Unicode value that's
+            # a subset of the Unicode value for some _other_ named
+            # entity. As an example, '\u2267' is ≧,
+            # but '\u2267\u0338' is ≧̸. Our regular
+            # expression needs to match the first two characters of
+            # "\u2267\u0338foo", but only the first character of
+            # "\u2267foo".
+            #
+            # In this step, we build two sets of characters that
+            # _eventually_ need to go into the regular expression.
+            # But we won't know exactly what the regular expression
+            # needs to look like until we've gone through the entire
+            # list of named entities.
+            if len(character) == 1:
+                short_entities.add(character)
+            else:
+                long_entities_by_first_character[character[0]].add(character)
+
+        # Now that we've been through the entire list of entities, we
+        # can create a regular expression that matches any of them.
+        particles = set()
+        for short in short_entities:
+            long_versions = long_entities_by_first_character[short]
+            if not long_versions:
+                particles.add(short)
+            else:
+                ignore = "".join([x[1] for x in long_versions])
+                # This finds, e.g. \u2267 but only if it is _not_
+                # followed by \u0338.
+                particles.add("%s(?![%s])" % (short, ignore))
+
+        for long_entities in list(long_entities_by_first_character.values()):
+            for long_entity in long_entities:
+                particles.add(long_entity)
+
+        re_definition = "(%s)" % "|".join(particles)
+
+        # If an entity shows up in both html5 and codepoint2name, it's
+        # likely that HTML5 gives it several different names, such as
+        # 'rsquo' and 'rsquor'. When converting Unicode characters to
+        # named entities, the codepoint2name name should take
+        # precedence where possible, since that's the more easily
+        # recognizable one.
+        for codepoint, name in list(codepoint2name.items()):
+            character = chr(codepoint)
+            unicode_to_name[character] = name
+
+        return unicode_to_name, name_to_unicode, re.compile(re_definition)
+    (CHARACTER_TO_HTML_ENTITY, HTML_ENTITY_TO_CHARACTER,
+     CHARACTER_TO_HTML_ENTITY_RE) = _populate_class_variables()
+
+    CHARACTER_TO_XML_ENTITY = {
+        "'": "apos",
+        '"': "quot",
+        "&": "amp",
+        "<": "lt",
+        ">": "gt",
+    }
+
+    BARE_AMPERSAND_OR_BRACKET = re.compile("([<>]|"
+                                           "&(?!#\\d+;|#x[0-9a-fA-F]+;|\\w+;)"
+                                           ")")
+
+    AMPERSAND_OR_BRACKET = re.compile("([<>&])")
+
+    @classmethod
+    def _substitute_html_entity(cls, matchobj):
+        """Used with a regular expression to substitute the
+        appropriate HTML entity for a special character string."""
+        entity = cls.CHARACTER_TO_HTML_ENTITY.get(matchobj.group(0))
+        return "&%s;" % entity
+
+    @classmethod
+    def _substitute_xml_entity(cls, matchobj):
+        """Used with a regular expression to substitute the
+        appropriate XML entity for a special character string."""
+        entity = cls.CHARACTER_TO_XML_ENTITY[matchobj.group(0)]
+        return "&%s;" % entity
+
+    @classmethod
+    def quoted_attribute_value(cls, value):
+        """Make a value into a quoted XML attribute, possibly escaping it.
+
+        Most strings will be quoted using double quotes.
+
+         Bob's Bar -> "Bob's Bar"
+
+        If a string contains double quotes, it will be quoted using
+        single quotes.
+
+         Welcome to "my bar" -> 'Welcome to "my bar"'
+
+        If a string contains both single and double quotes, the
+        double quotes will be escaped, and the string will be quoted
+        using double quotes.
+
+         Welcome to "Bob's Bar" -> "Welcome to &quot;Bob's Bar&quot;"
+        """
+        quote_with = '"'
+        if '"' in value:
+            if "'" in value:
+                # The string contains both single and double
+                # quotes. Turn the double quotes into
+                # entities. We quote the double quotes rather than
+                # the single quotes because the entity name is
+                # "&quot;" whether this is HTML or XML. If we
+                # quoted the single quotes, we'd have to decide
+                # between &apos; and &squot;.
+                replace_with = "&quot;"
+                value = value.replace('"', replace_with)
+            else:
+                # There are double quotes but no single quotes.
+                # We can use single quotes to quote the attribute.
+                quote_with = "'"
+        return quote_with + value + quote_with
+
+    @classmethod
+    def substitute_xml(cls, value, make_quoted_attribute=False):
+        """Substitute XML entities for special XML characters.
+
+        :param value: A string to be substituted. The less-than sign
+          will become &lt;, the greater-than sign will become &gt;,
+          and any ampersands will become &amp;. If you want ampersands
+          that appear to be part of an entity definition to be left
+          alone, use substitute_xml_containing_entities() instead.
+
+        :param make_quoted_attribute: If True, then the string will be
+          quoted, as befits an attribute value.
+        """
+        # Escape angle brackets and ampersands.
+        value = cls.AMPERSAND_OR_BRACKET.sub(
+            cls._substitute_xml_entity, value)
+
+        if make_quoted_attribute:
+            value = cls.quoted_attribute_value(value)
+        return value
+
+    @classmethod
+    def substitute_xml_containing_entities(
+        cls, value, make_quoted_attribute=False):
+        """Substitute XML entities for special XML characters.
+
+        :param value: A string to be substituted. The less-than sign will
+          become &lt;, the greater-than sign will become &gt;, and any
+          ampersands that are not part of an entity definition will
+          become &amp;.
+
+        :param make_quoted_attribute: If True, then the string will be
+          quoted, as befits an attribute value.
+        """
+        # Escape angle brackets, and ampersands that aren't part of
+        # entities.
+        value = cls.BARE_AMPERSAND_OR_BRACKET.sub(
+            cls._substitute_xml_entity, value)
+
+        if make_quoted_attribute:
+            value = cls.quoted_attribute_value(value)
+        return value
+
+    @classmethod
+    def substitute_html(cls, s):
+        """Replace certain Unicode characters with named HTML entities.
+
+        This differs from data.encode(encoding, 'xmlcharrefreplace')
+        in that the goal is to make the result more readable (to those
+        with ASCII displays) rather than to recover from
+        errors. There's absolutely nothing wrong with a UTF-8 string
+        containing a LATIN SMALL LETTER E WITH ACUTE, but replacing that
+        character with "&eacute;" will make it more readable to some
+        people.
+
+        :param s: A Unicode string.
+        """
+        return cls.CHARACTER_TO_HTML_ENTITY_RE.sub(
+            cls._substitute_html_entity, s)
+
+
+class EncodingDetector:
+    """Suggests a number of possible encodings for a bytestring.
+
+    Order of precedence:
+
+    1. Encodings you specifically tell EncodingDetector to try first
+    (the known_definite_encodings argument to the constructor).
+
+    2. An encoding determined by sniffing the document's byte-order mark.
+
+    3. Encodings you specifically tell EncodingDetector to try if
+    byte-order mark sniffing fails (the user_encodings argument to the
+    constructor).
+
+    4. An encoding declared within the bytestring itself, either in an
+    XML declaration (if the bytestring is to be interpreted as an XML
+    document), or in a <meta> tag (if the bytestring is to be
+    interpreted as an HTML document.)
+
+    5. An encoding detected through textual analysis by chardet,
+    cchardet, or a similar external library.
+
+    6. UTF-8.
+
+    7. Windows-1252.
+
+    """
+    def __init__(self, markup, known_definite_encodings=None,
+                 is_html=False, exclude_encodings=None,
+                 user_encodings=None, override_encodings=None):
+        """Constructor.
+
+        :param markup: Some markup in an unknown encoding.
+
+        :param known_definite_encodings: When determining the encoding
+            of `markup`, these encodings will be tried first, in
+            order.
In HTML terms, this corresponds to the "known + definite encoding" step defined here: + https://html.spec.whatwg.org/multipage/parsing.html#parsing-with-a-known-character-encoding + + :param user_encodings: These encodings will be tried after the + `known_definite_encodings` have been tried and failed, and + after an attempt to sniff the encoding by looking at a + byte order mark has failed. In HTML terms, this + corresponds to the step "user has explicitly instructed + the user agent to override the document's character + encoding", defined here: + https://html.spec.whatwg.org/multipage/parsing.html#determining-the-character-encoding + + :param override_encodings: A deprecated alias for + known_definite_encodings. Any encodings here will be tried + immediately after the encodings in + known_definite_encodings. + + :param is_html: If True, this markup is considered to be + HTML. Otherwise it's assumed to be XML. + + :param exclude_encodings: These encodings will not be tried, + even if they otherwise would be. + + """ + self.known_definite_encodings = list(known_definite_encodings or []) + if override_encodings: + self.known_definite_encodings += override_encodings + self.user_encodings = user_encodings or [] + exclude_encodings = exclude_encodings or [] + self.exclude_encodings = set([x.lower() for x in exclude_encodings]) + self.chardet_encoding = None + self.is_html = is_html + self.declared_encoding = None + + # First order of business: strip a byte-order mark. + self.markup, self.sniffed_encoding = self.strip_byte_order_mark(markup) + + def _usable(self, encoding, tried): + """Should we even bother to try this encoding? + + :param encoding: Name of an encoding. + :param tried: Encodings that have already been tried. This will be modified + as a side effect. + """ + if encoding is not None: + encoding = encoding.lower() + if encoding in self.exclude_encodings: + return False + if encoding not in tried: + tried.add(encoding) + return True + return False + + @property + def encodings(self): + """Yield a number of encodings that might work for this markup. + + :yield: A sequence of strings. + """ + tried = set() + + # First, try the known definite encodings + for e in self.known_definite_encodings: + if self._usable(e, tried): + yield e + + # Did the document originally start with a byte-order mark + # that indicated its encoding? + if self._usable(self.sniffed_encoding, tried): + yield self.sniffed_encoding + + # Sniffing the byte-order mark did nothing; try the user + # encodings. + for e in self.user_encodings: + if self._usable(e, tried): + yield e + + # Look within the document for an XML or HTML encoding + # declaration. + if self.declared_encoding is None: + self.declared_encoding = self.find_declared_encoding( + self.markup, self.is_html) + if self._usable(self.declared_encoding, tried): + yield self.declared_encoding + + # Use third-party character set detection to guess at the + # encoding. + if self.chardet_encoding is None: + self.chardet_encoding = chardet_dammit(self.markup) + if self._usable(self.chardet_encoding, tried): + yield self.chardet_encoding + + # As a last-ditch effort, try utf-8 and windows-1252. + for e in ('utf-8', 'windows-1252'): + if self._usable(e, tried): + yield e + + @classmethod + def strip_byte_order_mark(cls, data): + """If a byte-order mark is present, strip it and return the encoding it implies. + + :param data: Some markup. 
+        :return: A 2-tuple (modified data, implied encoding)
+        """
+        encoding = None
+        if isinstance(data, str):
+            # Unicode data cannot have a byte-order mark.
+            return data, encoding
+        if (len(data) >= 4) and (data[:2] == b'\xfe\xff') \
+               and (data[2:4] != b'\x00\x00'):
+            encoding = 'utf-16be'
+            data = data[2:]
+        elif (len(data) >= 4) and (data[:2] == b'\xff\xfe') \
+                 and (data[2:4] != b'\x00\x00'):
+            encoding = 'utf-16le'
+            data = data[2:]
+        elif data[:3] == b'\xef\xbb\xbf':
+            encoding = 'utf-8'
+            data = data[3:]
+        elif data[:4] == b'\x00\x00\xfe\xff':
+            encoding = 'utf-32be'
+            data = data[4:]
+        elif data[:4] == b'\xff\xfe\x00\x00':
+            encoding = 'utf-32le'
+            data = data[4:]
+        return data, encoding
+
+    @classmethod
+    def find_declared_encoding(cls, markup, is_html=False, search_entire_document=False):
+        """Given a document, tries to find its declared encoding.
+
+        An XML encoding is declared at the beginning of the document.
+
+        An HTML encoding is declared in a <meta> tag, hopefully near the
+        beginning of the document.
+
+        :param markup: Some markup.
+        :param is_html: If True, this markup is considered to be HTML. Otherwise
+            it's assumed to be XML.
+        :param search_entire_document: Since an encoding is supposed to be declared near the beginning
+            of the document, most of the time it's only necessary to search a few kilobytes of data.
+            Set this to True to force this method to search the entire document.
+        """
+        if search_entire_document:
+            xml_endpos = html_endpos = len(markup)
+        else:
+            xml_endpos = 1024
+            html_endpos = max(2048, int(len(markup) * 0.05))
+
+        if isinstance(markup, bytes):
+            res = encoding_res[bytes]
+        else:
+            res = encoding_res[str]
+
+        xml_re = res['xml']
+        html_re = res['html']
+        declared_encoding = None
+        declared_encoding_match = xml_re.search(markup, endpos=xml_endpos)
+        if not declared_encoding_match and is_html:
+            declared_encoding_match = html_re.search(markup, endpos=html_endpos)
+        if declared_encoding_match is not None:
+            declared_encoding = declared_encoding_match.groups()[0]
+        if declared_encoding:
+            if isinstance(declared_encoding, bytes):
+                declared_encoding = declared_encoding.decode('ascii', 'replace')
+            return declared_encoding.lower()
+        return None
+
+class UnicodeDammit:
+    """A class for detecting the encoding of a *ML document and
+    converting it to a Unicode string. If the source encoding is
+    windows-1252, can replace MS smart quotes with their HTML or XML
+    equivalents."""
+
+    # This dictionary maps commonly seen values for "charset" in HTML
+    # meta tags to the corresponding Python codec names. It only covers
+    # values that aren't in Python's aliases and can't be determined
+    # by the heuristics in find_codec.
+    CHARSET_ALIASES = {"macintosh": "mac-roman",
+                       "x-sjis": "shift-jis"}
+
+    ENCODINGS_WITH_SMART_QUOTES = [
+        "windows-1252",
+        "iso-8859-1",
+        "iso-8859-2",
+    ]
+
+    def __init__(self, markup, known_definite_encodings=[],
+                 smart_quotes_to=None, is_html=False, exclude_encodings=[],
+                 user_encodings=None, override_encodings=None
+    ):
+        """Constructor.
+
+        :param markup: A bytestring representing markup in an unknown encoding.
+
+        :param known_definite_encodings: When determining the encoding
+            of `markup`, these encodings will be tried first, in
+            order.
In HTML terms, this corresponds to the "known + definite encoding" step defined here: + https://html.spec.whatwg.org/multipage/parsing.html#parsing-with-a-known-character-encoding + + :param user_encodings: These encodings will be tried after the + `known_definite_encodings` have been tried and failed, and + after an attempt to sniff the encoding by looking at a + byte order mark has failed. In HTML terms, this + corresponds to the step "user has explicitly instructed + the user agent to override the document's character + encoding", defined here: + https://html.spec.whatwg.org/multipage/parsing.html#determining-the-character-encoding + + :param override_encodings: A deprecated alias for + known_definite_encodings. Any encodings here will be tried + immediately after the encodings in + known_definite_encodings. + + :param smart_quotes_to: By default, Microsoft smart quotes will, like all other characters, be converted + to Unicode characters. Setting this to 'ascii' will convert them to ASCII quotes instead. + Setting it to 'xml' will convert them to XML entity references, and setting it to 'html' + will convert them to HTML entity references. + :param is_html: If True, this markup is considered to be HTML. Otherwise + it's assumed to be XML. + :param exclude_encodings: These encodings will not be considered, even + if the sniffing code thinks they might make sense. + + """ + self.smart_quotes_to = smart_quotes_to + self.tried_encodings = [] + self.contains_replacement_characters = False + self.is_html = is_html + self.log = logging.getLogger(__name__) + self.detector = EncodingDetector( + markup, known_definite_encodings, is_html, exclude_encodings, + user_encodings, override_encodings + ) + + # Short-circuit if the data is in Unicode to begin with. + if isinstance(markup, str) or markup == '': + self.markup = markup + self.unicode_markup = str(markup) + self.original_encoding = None + return + + # The encoding detector may have stripped a byte-order mark. + # Use the stripped markup from this point on. + self.markup = self.detector.markup + + u = None + for encoding in self.detector.encodings: + markup = self.detector.markup + u = self._convert_from(encoding) + if u is not None: + break + + if not u: + # None of the encodings worked. As an absolute last resort, + # try them again with character replacement. + + for encoding in self.detector.encodings: + if encoding != "ascii": + u = self._convert_from(encoding, "replace") + if u is not None: + self.log.warning( + "Some characters could not be decoded, and were " + "replaced with REPLACEMENT CHARACTER." + ) + self.contains_replacement_characters = True + break + + # If none of that worked, we could at this point force it to + # ASCII, but that would destroy so much data that I think + # giving up is better. + self.unicode_markup = u + if not u: + self.original_encoding = None + + def _sub_ms_char(self, match): + """Changes a MS smart quote character to an XML or HTML + entity, or an ASCII character.""" + orig = match.group(1) + if self.smart_quotes_to == 'ascii': + sub = self.MS_CHARS_TO_ASCII.get(orig).encode() + else: + sub = self.MS_CHARS.get(orig) + if type(sub) == tuple: + if self.smart_quotes_to == 'xml': + sub = '&#x'.encode() + sub[1].encode() + ';'.encode() + else: + sub = '&'.encode() + sub[0].encode() + ';'.encode() + else: + sub = sub.encode() + return sub + + def _convert_from(self, proposed, errors="strict"): + """Attempt to convert the markup to the proposed encoding. + + :param proposed: The name of a character encoding. 
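+
+        For example (illustrative only): _convert_from('UTF8') resolves
+        the name through find_codec(), optionally rewrites smart quotes
+        while the markup is still a bytestring, and on success sets
+        self.original_encoding to 'utf8'.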
+ """ + proposed = self.find_codec(proposed) + if not proposed or (proposed, errors) in self.tried_encodings: + return None + self.tried_encodings.append((proposed, errors)) + markup = self.markup + # Convert smart quotes to HTML if coming from an encoding + # that might have them. + if (self.smart_quotes_to is not None + and proposed in self.ENCODINGS_WITH_SMART_QUOTES): + smart_quotes_re = b"([\x80-\x9f])" + smart_quotes_compiled = re.compile(smart_quotes_re) + markup = smart_quotes_compiled.sub(self._sub_ms_char, markup) + + try: + #print("Trying to convert document to %s (errors=%s)" % ( + # proposed, errors)) + u = self._to_unicode(markup, proposed, errors) + self.markup = u + self.original_encoding = proposed + except Exception as e: + #print("That didn't work!") + #print(e) + return None + #print("Correct encoding: %s" % proposed) + return self.markup + + def _to_unicode(self, data, encoding, errors="strict"): + """Given a string and its encoding, decodes the string into Unicode. + + :param encoding: The name of an encoding. + """ + return str(data, encoding, errors) + + @property + def declared_html_encoding(self): + """If the markup is an HTML document, returns the encoding declared _within_ + the document. + """ + if not self.is_html: + return None + return self.detector.declared_encoding + + def find_codec(self, charset): + """Convert the name of a character set to a codec name. + + :param charset: The name of a character set. + :return: The name of a codec. + """ + value = (self._codec(self.CHARSET_ALIASES.get(charset, charset)) + or (charset and self._codec(charset.replace("-", ""))) + or (charset and self._codec(charset.replace("-", "_"))) + or (charset and charset.lower()) + or charset + ) + if value: + return value.lower() + return None + + def _codec(self, charset): + if not charset: + return charset + codec = None + try: + codecs.lookup(charset) + codec = charset + except (LookupError, ValueError): + pass + return codec + + + # A partial mapping of ISO-Latin-1 to HTML entities/XML numeric entities. + MS_CHARS = {b'\x80': ('euro', '20AC'), + b'\x81': ' ', + b'\x82': ('sbquo', '201A'), + b'\x83': ('fnof', '192'), + b'\x84': ('bdquo', '201E'), + b'\x85': ('hellip', '2026'), + b'\x86': ('dagger', '2020'), + b'\x87': ('Dagger', '2021'), + b'\x88': ('circ', '2C6'), + b'\x89': ('permil', '2030'), + b'\x8A': ('Scaron', '160'), + b'\x8B': ('lsaquo', '2039'), + b'\x8C': ('OElig', '152'), + b'\x8D': '?', + b'\x8E': ('#x17D', '17D'), + b'\x8F': '?', + b'\x90': '?', + b'\x91': ('lsquo', '2018'), + b'\x92': ('rsquo', '2019'), + b'\x93': ('ldquo', '201C'), + b'\x94': ('rdquo', '201D'), + b'\x95': ('bull', '2022'), + b'\x96': ('ndash', '2013'), + b'\x97': ('mdash', '2014'), + b'\x98': ('tilde', '2DC'), + b'\x99': ('trade', '2122'), + b'\x9a': ('scaron', '161'), + b'\x9b': ('rsaquo', '203A'), + b'\x9c': ('oelig', '153'), + b'\x9d': '?', + b'\x9e': ('#x17E', '17E'), + b'\x9f': ('Yuml', ''),} + + # A parochial partial mapping of ISO-Latin-1 to ASCII. Contains + # horrors like stripping diacritical marks to turn á into a, but also + # contains non-horrors like turning “ into ". 
+ MS_CHARS_TO_ASCII = { + b'\x80' : 'EUR', + b'\x81' : ' ', + b'\x82' : ',', + b'\x83' : 'f', + b'\x84' : ',,', + b'\x85' : '...', + b'\x86' : '+', + b'\x87' : '++', + b'\x88' : '^', + b'\x89' : '%', + b'\x8a' : 'S', + b'\x8b' : '<', + b'\x8c' : 'OE', + b'\x8d' : '?', + b'\x8e' : 'Z', + b'\x8f' : '?', + b'\x90' : '?', + b'\x91' : "'", + b'\x92' : "'", + b'\x93' : '"', + b'\x94' : '"', + b'\x95' : '*', + b'\x96' : '-', + b'\x97' : '--', + b'\x98' : '~', + b'\x99' : '(TM)', + b'\x9a' : 's', + b'\x9b' : '>', + b'\x9c' : 'oe', + b'\x9d' : '?', + b'\x9e' : 'z', + b'\x9f' : 'Y', + b'\xa0' : ' ', + b'\xa1' : '!', + b'\xa2' : 'c', + b'\xa3' : 'GBP', + b'\xa4' : '$', #This approximation is especially parochial--this is the + #generic currency symbol. + b'\xa5' : 'YEN', + b'\xa6' : '|', + b'\xa7' : 'S', + b'\xa8' : '..', + b'\xa9' : '', + b'\xaa' : '(th)', + b'\xab' : '<<', + b'\xac' : '!', + b'\xad' : ' ', + b'\xae' : '(R)', + b'\xaf' : '-', + b'\xb0' : 'o', + b'\xb1' : '+-', + b'\xb2' : '2', + b'\xb3' : '3', + b'\xb4' : ("'", 'acute'), + b'\xb5' : 'u', + b'\xb6' : 'P', + b'\xb7' : '*', + b'\xb8' : ',', + b'\xb9' : '1', + b'\xba' : '(th)', + b'\xbb' : '>>', + b'\xbc' : '1/4', + b'\xbd' : '1/2', + b'\xbe' : '3/4', + b'\xbf' : '?', + b'\xc0' : 'A', + b'\xc1' : 'A', + b'\xc2' : 'A', + b'\xc3' : 'A', + b'\xc4' : 'A', + b'\xc5' : 'A', + b'\xc6' : 'AE', + b'\xc7' : 'C', + b'\xc8' : 'E', + b'\xc9' : 'E', + b'\xca' : 'E', + b'\xcb' : 'E', + b'\xcc' : 'I', + b'\xcd' : 'I', + b'\xce' : 'I', + b'\xcf' : 'I', + b'\xd0' : 'D', + b'\xd1' : 'N', + b'\xd2' : 'O', + b'\xd3' : 'O', + b'\xd4' : 'O', + b'\xd5' : 'O', + b'\xd6' : 'O', + b'\xd7' : '*', + b'\xd8' : 'O', + b'\xd9' : 'U', + b'\xda' : 'U', + b'\xdb' : 'U', + b'\xdc' : 'U', + b'\xdd' : 'Y', + b'\xde' : 'b', + b'\xdf' : 'B', + b'\xe0' : 'a', + b'\xe1' : 'a', + b'\xe2' : 'a', + b'\xe3' : 'a', + b'\xe4' : 'a', + b'\xe5' : 'a', + b'\xe6' : 'ae', + b'\xe7' : 'c', + b'\xe8' : 'e', + b'\xe9' : 'e', + b'\xea' : 'e', + b'\xeb' : 'e', + b'\xec' : 'i', + b'\xed' : 'i', + b'\xee' : 'i', + b'\xef' : 'i', + b'\xf0' : 'o', + b'\xf1' : 'n', + b'\xf2' : 'o', + b'\xf3' : 'o', + b'\xf4' : 'o', + b'\xf5' : 'o', + b'\xf6' : 'o', + b'\xf7' : '/', + b'\xf8' : 'o', + b'\xf9' : 'u', + b'\xfa' : 'u', + b'\xfb' : 'u', + b'\xfc' : 'u', + b'\xfd' : 'y', + b'\xfe' : 'b', + b'\xff' : 'y', + } + + # A map used when removing rogue Windows-1252/ISO-8859-1 + # characters in otherwise UTF-8 documents. + # + # Note that \x81, \x8d, \x8f, \x90, and \x9d are undefined in + # Windows-1252. 
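+    # A hypothetical round-trip through detwingle(), which consults
+    # this table (see detwingle() below):
+    #
+    #   doc = "café".encode("utf8") + b" \x93mixed\x94"
+    #   UnicodeDammit.detwingle(doc).decode("utf8")
+    #   # -> 'café “mixed”'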
+    WINDOWS_1252_TO_UTF8 = {
+        0x80 : b'\xe2\x82\xac', # €
+        0x82 : b'\xe2\x80\x9a', # ‚
+        0x83 : b'\xc6\x92',     # ƒ
+        0x84 : b'\xe2\x80\x9e', # „
+        0x85 : b'\xe2\x80\xa6', # …
+        0x86 : b'\xe2\x80\xa0', # †
+        0x87 : b'\xe2\x80\xa1', # ‡
+        0x88 : b'\xcb\x86',     # ˆ
+        0x89 : b'\xe2\x80\xb0', # ‰
+        0x8a : b'\xc5\xa0',     # Š
+        0x8b : b'\xe2\x80\xb9', # ‹
+        0x8c : b'\xc5\x92',     # Œ
+        0x8e : b'\xc5\xbd',     # Ž
+        0x91 : b'\xe2\x80\x98', # ‘
+        0x92 : b'\xe2\x80\x99', # ’
+        0x93 : b'\xe2\x80\x9c', # “
+        0x94 : b'\xe2\x80\x9d', # ”
+        0x95 : b'\xe2\x80\xa2', # •
+        0x96 : b'\xe2\x80\x93', # –
+        0x97 : b'\xe2\x80\x94', # —
+        0x98 : b'\xcb\x9c',     # ˜
+        0x99 : b'\xe2\x84\xa2', # ™
+        0x9a : b'\xc5\xa1',     # š
+        0x9b : b'\xe2\x80\xba', # ›
+        0x9c : b'\xc5\x93',     # œ
+        0x9e : b'\xc5\xbe',     # ž
+        0x9f : b'\xc5\xb8',     # Ÿ
+        0xa0 : b'\xc2\xa0',     # non-breaking space
+        0xa1 : b'\xc2\xa1',     # ¡
+        0xa2 : b'\xc2\xa2',     # ¢
+        0xa3 : b'\xc2\xa3',     # £
+        0xa4 : b'\xc2\xa4',     # ¤
+        0xa5 : b'\xc2\xa5',     # ¥
+        0xa6 : b'\xc2\xa6',     # ¦
+        0xa7 : b'\xc2\xa7',     # §
+        0xa8 : b'\xc2\xa8',     # ¨
+        0xa9 : b'\xc2\xa9',     # ©
+        0xaa : b'\xc2\xaa',     # ª
+        0xab : b'\xc2\xab',     # «
+        0xac : b'\xc2\xac',     # ¬
+        0xad : b'\xc2\xad',     # soft hyphen (U+00AD)
+        0xae : b'\xc2\xae',     # ®
+        0xaf : b'\xc2\xaf',     # ¯
+        0xb0 : b'\xc2\xb0',     # °
+        0xb1 : b'\xc2\xb1',     # ±
+        0xb2 : b'\xc2\xb2',     # ²
+        0xb3 : b'\xc2\xb3',     # ³
+        0xb4 : b'\xc2\xb4',     # ´
+        0xb5 : b'\xc2\xb5',     # µ
+        0xb6 : b'\xc2\xb6',     # ¶
+        0xb7 : b'\xc2\xb7',     # ·
+        0xb8 : b'\xc2\xb8',     # ¸
+        0xb9 : b'\xc2\xb9',     # ¹
+        0xba : b'\xc2\xba',     # º
+        0xbb : b'\xc2\xbb',     # »
+        0xbc : b'\xc2\xbc',     # ¼
+        0xbd : b'\xc2\xbd',     # ½
+        0xbe : b'\xc2\xbe',     # ¾
+        0xbf : b'\xc2\xbf',     # ¿
+        0xc0 : b'\xc3\x80',     # À
+        0xc1 : b'\xc3\x81',     # Á
+        0xc2 : b'\xc3\x82',     # Â
+        0xc3 : b'\xc3\x83',     # Ã
+        0xc4 : b'\xc3\x84',     # Ä
+        0xc5 : b'\xc3\x85',     # Å
+        0xc6 : b'\xc3\x86',     # Æ
+        0xc7 : b'\xc3\x87',     # Ç
+        0xc8 : b'\xc3\x88',     # È
+        0xc9 : b'\xc3\x89',     # É
+        0xca : b'\xc3\x8a',     # Ê
+        0xcb : b'\xc3\x8b',     # Ë
+        0xcc : b'\xc3\x8c',     # Ì
+        0xcd : b'\xc3\x8d',     # Í
+        0xce : b'\xc3\x8e',     # Î
+        0xcf : b'\xc3\x8f',     # Ï
+        0xd0 : b'\xc3\x90',     # Ð
+        0xd1 : b'\xc3\x91',     # Ñ
+        0xd2 : b'\xc3\x92',     # Ò
+        0xd3 : b'\xc3\x93',     # Ó
+        0xd4 : b'\xc3\x94',     # Ô
+        0xd5 : b'\xc3\x95',     # Õ
+        0xd6 : b'\xc3\x96',     # Ö
+        0xd7 : b'\xc3\x97',     # ×
+        0xd8 : b'\xc3\x98',     # Ø
+        0xd9 : b'\xc3\x99',     # Ù
+        0xda : b'\xc3\x9a',     # Ú
+        0xdb : b'\xc3\x9b',     # Û
+        0xdc : b'\xc3\x9c',     # Ü
+        0xdd : b'\xc3\x9d',     # Ý
+        0xde : b'\xc3\x9e',     # Þ
+        0xdf : b'\xc3\x9f',     # ß
+        0xe0 : b'\xc3\xa0',     # à
+        0xe1 : b'\xc3\xa1',     # á (the original single byte b'\xa1' was not valid UTF-8)
+        0xe2 : b'\xc3\xa2',     # â
+        0xe3 : b'\xc3\xa3',     # ã
+        0xe4 : b'\xc3\xa4',     # ä
+        0xe5 : b'\xc3\xa5',     # å
+        0xe6 : b'\xc3\xa6',     # æ
+        0xe7 : b'\xc3\xa7',     # ç
+        0xe8 : b'\xc3\xa8',     # è
+        0xe9 : b'\xc3\xa9',     # é
+        0xea : b'\xc3\xaa',     # ê
+        0xeb : b'\xc3\xab',     # ë
+        0xec : b'\xc3\xac',     # ì
+        0xed : b'\xc3\xad',     # í
+        0xee : b'\xc3\xae',     # î
+        0xef : b'\xc3\xaf',     # ï
+        0xf0 : b'\xc3\xb0',     # ð
+        0xf1 : b'\xc3\xb1',     # ñ
+        0xf2 : b'\xc3\xb2',     # ò
+        0xf3 : b'\xc3\xb3',     # ó
+        0xf4 : b'\xc3\xb4',     # ô
+        0xf5 : b'\xc3\xb5',     # õ
+        0xf6 : b'\xc3\xb6',     # ö
+        0xf7 : b'\xc3\xb7',     # ÷
+        0xf8 : b'\xc3\xb8',     # ø
+        0xf9 : b'\xc3\xb9',     # ù
+        0xfa : b'\xc3\xba',     # ú
+        0xfb : b'\xc3\xbb',     # û
+        0xfc : b'\xc3\xbc',     # ü
+        0xfd : b'\xc3\xbd',     # ý
+        0xfe : b'\xc3\xbe',     # þ
+        }
+
+    MULTIBYTE_MARKERS_AND_SIZES = [
+        (0xc2, 0xdf, 2), # 2-byte characters start with a byte C2-DF
+        (0xe0, 0xef, 3), # 3-byte characters start with E0-EF
+        (0xf0, 0xf4, 4), # 4-byte characters start with F0-F4
+    ]
+
+    FIRST_MULTIBYTE_MARKER = 
MULTIBYTE_MARKERS_AND_SIZES[0][0] + LAST_MULTIBYTE_MARKER = MULTIBYTE_MARKERS_AND_SIZES[-1][1] + + @classmethod + def detwingle(cls, in_bytes, main_encoding="utf8", + embedded_encoding="windows-1252"): + """Fix characters from one encoding embedded in some other encoding. + + Currently the only situation supported is Windows-1252 (or its + subset ISO-8859-1), embedded in UTF-8. + + :param in_bytes: A bytestring that you suspect contains + characters from multiple encodings. Note that this _must_ + be a bytestring. If you've already converted the document + to Unicode, you're too late. + :param main_encoding: The primary encoding of `in_bytes`. + :param embedded_encoding: The encoding that was used to embed characters + in the main document. + :return: A bytestring in which `embedded_encoding` + characters have been converted to their `main_encoding` + equivalents. + """ + if embedded_encoding.replace('_', '-').lower() not in ( + 'windows-1252', 'windows_1252'): + raise NotImplementedError( + "Windows-1252 and ISO-8859-1 are the only currently supported " + "embedded encodings.") + + if main_encoding.lower() not in ('utf8', 'utf-8'): + raise NotImplementedError( + "UTF-8 is the only currently supported main encoding.") + + byte_chunks = [] + + chunk_start = 0 + pos = 0 + while pos < len(in_bytes): + byte = in_bytes[pos] + if not isinstance(byte, int): + # Python 2.x + byte = ord(byte) + if (byte >= cls.FIRST_MULTIBYTE_MARKER + and byte <= cls.LAST_MULTIBYTE_MARKER): + # This is the start of a UTF-8 multibyte character. Skip + # to the end. + for start, end, size in cls.MULTIBYTE_MARKERS_AND_SIZES: + if byte >= start and byte <= end: + pos += size + break + elif byte >= 0x80 and byte in cls.WINDOWS_1252_TO_UTF8: + # We found a Windows-1252 character! + # Save the string up to this point as a chunk. + byte_chunks.append(in_bytes[chunk_start:pos]) + + # Now translate the Windows-1252 character into UTF-8 + # and add it as another, one-byte chunk. + byte_chunks.append(cls.WINDOWS_1252_TO_UTF8[byte]) + pos += 1 + chunk_start = pos + else: + # Go on to the next character. + pos += 1 + if chunk_start == 0: + # The string is unchanged. + return in_bytes + else: + # Store the final chunk. + byte_chunks.append(in_bytes[chunk_start:]) + return b''.join(byte_chunks) + diff --git a/python/lib/python3.11/site-packages/bs4/diagnose.py b/python/lib/python3.11/site-packages/bs4/diagnose.py new file mode 100644 index 0000000..e079772 --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/diagnose.py @@ -0,0 +1,233 @@ +"""Diagnostic functions, mainly for use when doing tech support.""" + +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +import cProfile +from io import BytesIO +from html.parser import HTMLParser +import bs4 +from bs4 import BeautifulSoup, __version__ +from bs4.builder import builder_registry + +import os +import pstats +import random +import tempfile +import time +import traceback +import sys +import cProfile + +def diagnose(data): + """Diagnostic suite for isolating common problems. + + :param data: A string containing markup that needs to be explained. + :return: None; diagnostics are printed to standard output. 
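+
+    A typical invocation (illustrative)::
+
+        from bs4.diagnose import diagnose
+        diagnose("<p>Some <unclosed markup")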
+ """ + print(("Diagnostic running on Beautiful Soup %s" % __version__)) + print(("Python version %s" % sys.version)) + + basic_parsers = ["html.parser", "html5lib", "lxml"] + for name in basic_parsers: + for builder in builder_registry.builders: + if name in builder.features: + break + else: + basic_parsers.remove(name) + print(( + "I noticed that %s is not installed. Installing it may help." % + name)) + + if 'lxml' in basic_parsers: + basic_parsers.append("lxml-xml") + try: + from lxml import etree + print(("Found lxml version %s" % ".".join(map(str,etree.LXML_VERSION)))) + except ImportError as e: + print( + "lxml is not installed or couldn't be imported.") + + + if 'html5lib' in basic_parsers: + try: + import html5lib + print(("Found html5lib version %s" % html5lib.__version__)) + except ImportError as e: + print( + "html5lib is not installed or couldn't be imported.") + + if hasattr(data, 'read'): + data = data.read() + + for parser in basic_parsers: + print(("Trying to parse your markup with %s" % parser)) + success = False + try: + soup = BeautifulSoup(data, features=parser) + success = True + except Exception as e: + print(("%s could not parse the markup." % parser)) + traceback.print_exc() + if success: + print(("Here's what %s did with the markup:" % parser)) + print((soup.prettify())) + + print(("-" * 80)) + +def lxml_trace(data, html=True, **kwargs): + """Print out the lxml events that occur during parsing. + + This lets you see how lxml parses a document when no Beautiful + Soup code is running. You can use this to determine whether + an lxml-specific problem is in Beautiful Soup's lxml tree builders + or in lxml itself. + + :param data: Some markup. + :param html: If True, markup will be parsed with lxml's HTML parser. + if False, lxml's XML parser will be used. + """ + from lxml import etree + recover = kwargs.pop('recover', True) + if isinstance(data, str): + data = data.encode("utf8") + reader = BytesIO(data) + for event, element in etree.iterparse( + reader, html=html, recover=recover, **kwargs + ): + print(("%s, %4s, %s" % (event, element.tag, element.text))) + +class AnnouncingParser(HTMLParser): + """Subclass of HTMLParser that announces parse events, without doing + anything else. + + You can use this to get a picture of how html.parser sees a given + document. The easiest way to do this is to call `htmlparser_trace`. + """ + + def _p(self, s): + print(s) + + def handle_starttag(self, name, attrs): + self._p("%s START" % name) + + def handle_endtag(self, name): + self._p("%s END" % name) + + def handle_data(self, data): + self._p("%s DATA" % data) + + def handle_charref(self, name): + self._p("%s CHARREF" % name) + + def handle_entityref(self, name): + self._p("%s ENTITYREF" % name) + + def handle_comment(self, data): + self._p("%s COMMENT" % data) + + def handle_decl(self, data): + self._p("%s DECL" % data) + + def unknown_decl(self, data): + self._p("%s UNKNOWN-DECL" % data) + + def handle_pi(self, data): + self._p("%s PI" % data) + +def htmlparser_trace(data): + """Print out the HTMLParser events that occur during parsing. + + This lets you see how HTMLParser parses a document when no + Beautiful Soup code is running. + + :param data: Some markup. + """ + parser = AnnouncingParser() + parser.feed(data) + +_vowels = "aeiou" +_consonants = "bcdfghjklmnpqrstvwxyz" + +def rword(length=5): + "Generate a random word-like string." 
+ s = '' + for i in range(length): + if i % 2 == 0: + t = _consonants + else: + t = _vowels + s += random.choice(t) + return s + +def rsentence(length=4): + "Generate a random sentence-like string." + return " ".join(rword(random.randint(4,9)) for i in range(length)) + +def rdoc(num_elements=1000): + """Randomly generate an invalid HTML document.""" + tag_names = ['p', 'div', 'span', 'i', 'b', 'script', 'table'] + elements = [] + for i in range(num_elements): + choice = random.randint(0,3) + if choice == 0: + # New tag. + tag_name = random.choice(tag_names) + elements.append("<%s>" % tag_name) + elif choice == 1: + elements.append(rsentence(random.randint(1,4))) + elif choice == 2: + # Close a tag. + tag_name = random.choice(tag_names) + elements.append("</%s>" % tag_name) + return "<html>" + "\n".join(elements) + "</html>" + +def benchmark_parsers(num_elements=100000): + """Very basic head-to-head performance benchmark.""" + print(("Comparative parser benchmark on Beautiful Soup %s" % __version__)) + data = rdoc(num_elements) + print(("Generated a large invalid HTML document (%d bytes)." % len(data))) + + for parser in ["lxml", ["lxml", "html"], "html5lib", "html.parser"]: + success = False + try: + a = time.time() + soup = BeautifulSoup(data, parser) + b = time.time() + success = True + except Exception as e: + print(("%s could not parse the markup." % parser)) + traceback.print_exc() + if success: + print(("BS4+%s parsed the markup in %.2fs." % (parser, b-a))) + + from lxml import etree + a = time.time() + etree.HTML(data) + b = time.time() + print(("Raw lxml parsed the markup in %.2fs." % (b-a))) + + import html5lib + parser = html5lib.HTMLParser() + a = time.time() + parser.parse(data) + b = time.time() + print(("Raw html5lib parsed the markup in %.2fs." % (b-a))) + +def profile(num_elements=100000, parser="lxml"): + """Use Python's profiler on a randomly generated document.""" + filehandle = tempfile.NamedTemporaryFile() + filename = filehandle.name + + data = rdoc(num_elements) + vars = dict(bs4=bs4, data=data, parser=parser) + cProfile.runctx('bs4.BeautifulSoup(data, parser)' , vars, vars, filename) + + stats = pstats.Stats(filename) + # stats.strip_dirs() + stats.sort_stats("cumulative") + stats.print_stats('_html5lib|bs4', 50) + +# If this file is run as a script, standard input is diagnosed. +if __name__ == '__main__': + diagnose(sys.stdin.read()) diff --git a/python/lib/python3.11/site-packages/bs4/element.py b/python/lib/python3.11/site-packages/bs4/element.py new file mode 100644 index 0000000..9c73957 --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/element.py @@ -0,0 +1,2430 @@ +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +try: + from collections.abc import Callable # Python 3.6 +except ImportError as e: + from collections import Callable +import re +import sys +import warnings + +from bs4.css import CSS +from bs4.formatter import ( + Formatter, + HTMLFormatter, + XMLFormatter, +) + +DEFAULT_OUTPUT_ENCODING = "utf-8" + +nonwhitespace_re = re.compile(r"\S+") + +# NOTE: This isn't used as of 4.7.0. I'm leaving it for a little bit on +# the off chance someone imported it for their own use. 
+whitespace_re = re.compile(r"\s+")
+
+def _alias(attr):
+    """Alias one attribute name to another for backward compatibility"""
+    @property
+    def alias(self):
+        return getattr(self, attr)
+
+    @alias.setter
+    def alias(self, value):
+        # The setter must accept the assigned value and forward it on.
+        return setattr(self, attr, value)
+    return alias
+
+
+# These encodings are recognized by Python (so PageElement.encode
+# could theoretically support them) but XML and HTML don't recognize
+# them (so they should not show up in an XML or HTML document as that
+# document's encoding).
+#
+# If an XML document is encoded in one of these encodings, no encoding
+# will be mentioned in the XML declaration. If an HTML document is
+# encoded in one of these encodings, and the HTML document has a
+# <meta> tag that mentions an encoding, the encoding will be given as
+# the empty string.
+#
+# Source:
+# https://docs.python.org/3/library/codecs.html#python-specific-encodings
+PYTHON_SPECIFIC_ENCODINGS = set([
+    "idna",
+    "mbcs",
+    "oem",
+    "palmos",
+    "punycode",
+    "raw_unicode_escape",
+    "undefined",
+    "unicode_escape",
+    "raw-unicode-escape",
+    "unicode-escape",
+    "string-escape",
+    "string_escape",
+])
+
+
+class NamespacedAttribute(str):
+    """A namespaced string (e.g. 'xml:lang') that remembers the namespace
+    ('xml') and the name ('lang') that were used to create it.
+    """
+
+    def __new__(cls, prefix, name=None, namespace=None):
+        if not name:
+            # This is the default namespace. Its name "has no value"
+            # per https://www.w3.org/TR/xml-names/#defaulting
+            name = None
+
+        if not name:
+            obj = str.__new__(cls, prefix)
+        elif not prefix:
+            # Not really namespaced.
+            obj = str.__new__(cls, name)
+        else:
+            obj = str.__new__(cls, prefix + ":" + name)
+        obj.prefix = prefix
+        obj.name = name
+        obj.namespace = namespace
+        return obj
+
+class AttributeValueWithCharsetSubstitution(str):
+    """A stand-in object for a character encoding specified in HTML."""
+
+class CharsetMetaAttributeValue(AttributeValueWithCharsetSubstitution):
+    """A generic stand-in for the value of a meta tag's 'charset' attribute.
+
+    When Beautiful Soup parses the markup '<meta charset="utf8">', the
+    value of the 'charset' attribute will be one of these objects.
+    """
+
+    def __new__(cls, original_value):
+        obj = str.__new__(cls, original_value)
+        obj.original_value = original_value
+        return obj
+
+    def encode(self, encoding):
+        """When an HTML document is being encoded to a given encoding, the
+        value of a meta tag's 'charset' is the name of the encoding.
+        """
+        if encoding in PYTHON_SPECIFIC_ENCODINGS:
+            return ''
+        return encoding
+
+
+class ContentMetaAttributeValue(AttributeValueWithCharsetSubstitution):
+    """A generic stand-in for the value of a meta tag's 'content' attribute.
+
+    When Beautiful Soup parses the markup:
+     <meta http-equiv="content-type" content="text/html; charset=utf8">
+
+    The value of the 'content' attribute will be one of these objects.
+    """
+
+    CHARSET_RE = re.compile(r"((^|;)\s*charset=)([^;]*)", re.M)
+
+    def __new__(cls, original_value):
+        match = cls.CHARSET_RE.search(original_value)
+        if match is None:
+            # No substitution necessary.
+            return str.__new__(str, original_value)
+
+        obj = str.__new__(cls, original_value)
+        obj.original_value = original_value
+        return obj
+
+    def encode(self, encoding):
+        if encoding in PYTHON_SPECIFIC_ENCODINGS:
+            return ''
+        def rewrite(match):
+            return match.group(1) + encoding
+        return self.CHARSET_RE.sub(rewrite, self.original_value)
+
+
+class PageElement(object):
+    """Contains the navigational information for some part of the page:
+    that is, its current location in the parse tree.
+
+    NavigableString, Tag, etc. are all subclasses of PageElement.
+    """
+
+    # In general, we can't tell just by looking at an element whether
+    # it's contained in an XML document or an HTML document. But for
+    # Tags (q.v.) we can store this information at parse time.
+    known_xml = None
+
+    def setup(self, parent=None, previous_element=None, next_element=None,
+              previous_sibling=None, next_sibling=None):
+        """Sets up the initial relations between this element and
+        other elements.
+
+        :param parent: The parent of this element.
+
+        :param previous_element: The element parsed immediately before
+            this one.
+
+        :param next_element: The element parsed immediately after
+            this one.
+
+        :param previous_sibling: The most recently encountered element
+            on the same level of the parse tree as this one.
+
+        :param next_sibling: The next element to be encountered
+            on the same level of the parse tree as this one.
+        """
+        self.parent = parent
+
+        self.previous_element = previous_element
+        if previous_element is not None:
+            self.previous_element.next_element = self
+
+        self.next_element = next_element
+        if self.next_element is not None:
+            self.next_element.previous_element = self
+
+        self.next_sibling = next_sibling
+        if self.next_sibling is not None:
+            self.next_sibling.previous_sibling = self
+
+        if (previous_sibling is None
+            and self.parent is not None and self.parent.contents):
+            previous_sibling = self.parent.contents[-1]
+
+        self.previous_sibling = previous_sibling
+        if previous_sibling is not None:
+            self.previous_sibling.next_sibling = self
+
+    def format_string(self, s, formatter):
+        """Format the given string using the given formatter.
+
+        :param s: A string.
+        :param formatter: A Formatter object, or a string naming one of the standard formatters.
+        """
+        if formatter is None:
+            return s
+        if not isinstance(formatter, Formatter):
+            formatter = self.formatter_for_name(formatter)
+        output = formatter.substitute(s)
+        return output
+
+    def formatter_for_name(self, formatter):
+        """Look up or create a Formatter for the given identifier,
+        if necessary.
+
+        :param formatter: Can be a Formatter object (used as-is), a
+            function (used as the entity substitution hook for an
+            XMLFormatter or HTMLFormatter), or a string (used to look
+            up an XMLFormatter or HTMLFormatter in the appropriate
+            registry).
+        """
+        if isinstance(formatter, Formatter):
+            return formatter
+        if self._is_xml:
+            c = XMLFormatter
+        else:
+            c = HTMLFormatter
+        if isinstance(formatter, Callable):
+            return c(entity_substitution=formatter)
+        return c.REGISTRY[formatter]
+
+    @property
+    def _is_xml(self):
+        """Is this element part of an XML tree or an HTML tree?
+
+        This is used in formatter_for_name, when deciding whether an
+        XMLFormatter or HTMLFormatter is more appropriate. It can be
+        inefficient, but it should be called very rarely.
+        """
+        if self.known_xml is not None:
+            # Most of the time we will have determined this when the
+            # document is parsed.
+            return self.known_xml
+
+        # Otherwise, it's likely that this element was created by
+        # direct invocation of the constructor from within the user's
+        # Python code.
+        if self.parent is None:
+            # This is the top-level object. It should have .known_xml set
+            # from tree creation. If not, take a guess--BS is usually
+            # used on HTML markup.
+            return getattr(self, 'is_xml', False)
+        return self.parent._is_xml
+
+    nextSibling = _alias("next_sibling")  # BS3
+    previousSibling = _alias("previous_sibling")  # BS3
+
+    default = object()
+    def _all_strings(self, strip=False, types=default):
+        """Yield all strings of certain classes, possibly stripping them.
+
+        This is implemented differently in Tag and NavigableString.
+        """
+        raise NotImplementedError()
+
+    @property
+    def stripped_strings(self):
+        """Yield all strings in this PageElement, stripping them first.
+
+        :yield: A sequence of stripped strings.
+        """
+        for string in self._all_strings(True):
+            yield string
+
+    def get_text(self, separator="", strip=False,
+                 types=default):
+        """Get all child strings of this PageElement, concatenated using the
+        given separator.
+
+        :param separator: Strings will be concatenated using this separator.
+
+        :param strip: If True, strings will be stripped before being
+            concatenated.
+
+        :param types: A tuple of NavigableString subclasses. Any
+            strings of a subclass not found in this list will be
+            ignored. Although there are exceptions, the default
+            behavior in most cases is to consider only NavigableString
+            and CData objects. That means no comments, processing
+            instructions, etc.
+
+        :return: A string.
+        """
+        return separator.join([s for s in self._all_strings(
+                    strip, types=types)])
+    getText = get_text
+    text = property(get_text)
+
+    def replace_with(self, *args):
+        """Replace this PageElement with one or more PageElements, keeping the
+        rest of the tree the same.
+
+        :param args: One or more PageElements.
+        :return: `self`, no longer part of the tree.
+        """
+        if self.parent is None:
+            raise ValueError(
+                "Cannot replace one element with another when the "
+                "element to be replaced is not part of a tree.")
+        if len(args) == 1 and args[0] is self:
+            return
+        if any(x is self.parent for x in args):
+            raise ValueError("Cannot replace a Tag with its parent.")
+        old_parent = self.parent
+        my_index = self.parent.index(self)
+        self.extract(_self_index=my_index)
+        for idx, replace_with in enumerate(args, start=my_index):
+            old_parent.insert(idx, replace_with)
+        return self
+    replaceWith = replace_with  # BS3
+
+    def unwrap(self):
+        """Replace this PageElement with its contents.
+
+        :return: `self`, no longer part of the tree.
+        """
+        my_parent = self.parent
+        if self.parent is None:
+            raise ValueError(
+                "Cannot replace an element with its contents when that "
+                "element is not part of a tree.")
+        my_index = self.parent.index(self)
+        self.extract(_self_index=my_index)
+        for child in reversed(self.contents[:]):
+            my_parent.insert(my_index, child)
+        return self
+    replace_with_children = unwrap
+    replaceWithChildren = unwrap  # BS3
+
+    def wrap(self, wrap_inside):
+        """Wrap this PageElement inside another one.
+
+        :param wrap_inside: A PageElement.
+        :return: `wrap_inside`, occupying the position in the tree that used
+            to be occupied by `self`, and with `self` inside it.
+        """
+        me = self.replace_with(wrap_inside)
+        wrap_inside.append(me)
+        return wrap_inside
+
+    def extract(self, _self_index=None):
+        """Destructively rips this element out of the tree.
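+
+        (Illustrative use: `tag.extract()` detaches `tag`; it can then
+        be re-inserted elsewhere with insert()/append() or discarded.)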
+ + :param _self_index: The location of this element in its parent's + .contents, if known. Passing this in allows for a performance + optimization. + + :return: `self`, no longer part of the tree. + """ + if self.parent is not None: + if _self_index is None: + _self_index = self.parent.index(self) + del self.parent.contents[_self_index] + + #Find the two elements that would be next to each other if + #this element (and any children) hadn't been parsed. Connect + #the two. + last_child = self._last_descendant() + next_element = last_child.next_element + + if (self.previous_element is not None and + self.previous_element is not next_element): + self.previous_element.next_element = next_element + if next_element is not None and next_element is not self.previous_element: + next_element.previous_element = self.previous_element + self.previous_element = None + last_child.next_element = None + + self.parent = None + if (self.previous_sibling is not None + and self.previous_sibling is not self.next_sibling): + self.previous_sibling.next_sibling = self.next_sibling + if (self.next_sibling is not None + and self.next_sibling is not self.previous_sibling): + self.next_sibling.previous_sibling = self.previous_sibling + self.previous_sibling = self.next_sibling = None + return self + + def _last_descendant(self, is_initialized=True, accept_self=True): + """Finds the last element beneath this object to be parsed. + + :param is_initialized: Has `setup` been called on this PageElement + yet? + :param accept_self: Is `self` an acceptable answer to the question? + """ + if is_initialized and self.next_sibling is not None: + last_child = self.next_sibling.previous_element + else: + last_child = self + while isinstance(last_child, Tag) and last_child.contents: + last_child = last_child.contents[-1] + if not accept_self and last_child is self: + last_child = None + return last_child + # BS3: Not part of the API! + _lastRecursiveChild = _last_descendant + + def insert(self, position, new_child): + """Insert a new PageElement in the list of this PageElement's children. + + This works the same way as `list.insert`. + + :param position: The numeric position that should be occupied + in `self.children` by the new PageElement. + :param new_child: A PageElement. + """ + if new_child is None: + raise ValueError("Cannot insert None into a tag.") + if new_child is self: + raise ValueError("Cannot insert a tag into itself.") + if (isinstance(new_child, str) + and not isinstance(new_child, NavigableString)): + new_child = NavigableString(new_child) + + from bs4 import BeautifulSoup + if isinstance(new_child, BeautifulSoup): + # We don't want to end up with a situation where one BeautifulSoup + # object contains another. Insert the children one at a time. + for subchild in list(new_child.contents): + self.insert(position, subchild) + position += 1 + return + position = min(position, len(self.contents)) + if hasattr(new_child, 'parent') and new_child.parent is not None: + # We're 'inserting' an element that's already one + # of this object's children. + if new_child.parent is self: + current_index = self.index(new_child) + if current_index < position: + # We're moving this element further down the list + # of this object's children. That means that when + # we extract this element, our target index will + # jump down one. 
+ position -= 1 + new_child.extract() + + new_child.parent = self + previous_child = None + if position == 0: + new_child.previous_sibling = None + new_child.previous_element = self + else: + previous_child = self.contents[position - 1] + new_child.previous_sibling = previous_child + new_child.previous_sibling.next_sibling = new_child + new_child.previous_element = previous_child._last_descendant(False) + if new_child.previous_element is not None: + new_child.previous_element.next_element = new_child + + new_childs_last_element = new_child._last_descendant(False) + + if position >= len(self.contents): + new_child.next_sibling = None + + parent = self + parents_next_sibling = None + while parents_next_sibling is None and parent is not None: + parents_next_sibling = parent.next_sibling + parent = parent.parent + if parents_next_sibling is not None: + # We found the element that comes next in the document. + break + if parents_next_sibling is not None: + new_childs_last_element.next_element = parents_next_sibling + else: + # The last element of this tag is the last element in + # the document. + new_childs_last_element.next_element = None + else: + next_child = self.contents[position] + new_child.next_sibling = next_child + if new_child.next_sibling is not None: + new_child.next_sibling.previous_sibling = new_child + new_childs_last_element.next_element = next_child + + if new_childs_last_element.next_element is not None: + new_childs_last_element.next_element.previous_element = new_childs_last_element + self.contents.insert(position, new_child) + + def append(self, tag): + """Appends the given PageElement to the contents of this one. + + :param tag: A PageElement. + """ + self.insert(len(self.contents), tag) + + def extend(self, tags): + """Appends the given PageElements to this one's contents. + + :param tags: A list of PageElements. If a single Tag is + provided instead, this PageElement's contents will be extended + with that Tag's contents. + """ + if isinstance(tags, Tag): + tags = tags.contents + if isinstance(tags, list): + # Moving items around the tree may change their position in + # the original list. Make a list that won't change. + tags = list(tags) + for tag in tags: + self.append(tag) + + def insert_before(self, *args): + """Makes the given element(s) the immediate predecessor of this one. + + All the elements will have the same parent, and the given elements + will be immediately before this one. + + :param args: One or more PageElements. + """ + parent = self.parent + if parent is None: + raise ValueError( + "Element has no parent, so 'before' has no meaning.") + if any(x is self for x in args): + raise ValueError("Can't insert an element before itself.") + for predecessor in args: + # Extract first so that the index won't be screwed up if they + # are siblings. + if isinstance(predecessor, PageElement): + predecessor.extract() + index = parent.index(self) + parent.insert(index, predecessor) + + def insert_after(self, *args): + """Makes the given element(s) the immediate successor of this one. + + The elements will have the same parent, and the given elements + will be immediately after this one. + + :param args: One or more PageElements. + """ + # Do all error checking before modifying the tree. 
+ parent = self.parent + if parent is None: + raise ValueError( + "Element has no parent, so 'after' has no meaning.") + if any(x is self for x in args): + raise ValueError("Can't insert an element after itself.") + + offset = 0 + for successor in args: + # Extract first so that the index won't be screwed up if they + # are siblings. + if isinstance(successor, PageElement): + successor.extract() + index = parent.index(self) + parent.insert(index+1+offset, successor) + offset += 1 + + def find_next(self, name=None, attrs={}, string=None, **kwargs): + """Find the first PageElement that matches the given criteria and + appears later in the document than this PageElement. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param string: A filter for a NavigableString with specific text. + :kwargs: A dictionary of filters on attribute values. + :return: A PageElement. + :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self._find_one(self.find_all_next, name, attrs, string, **kwargs) + findNext = find_next # BS3 + + def find_all_next(self, name=None, attrs={}, string=None, limit=None, + **kwargs): + """Find all PageElements that match the given criteria and appear + later in the document than this PageElement. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param string: A filter for a NavigableString with specific text. + :param limit: Stop looking after finding this many results. + :kwargs: A dictionary of filters on attribute values. + :return: A ResultSet containing PageElements. + """ + _stacklevel = kwargs.pop('_stacklevel', 2) + return self._find_all(name, attrs, string, limit, self.next_elements, + _stacklevel=_stacklevel+1, **kwargs) + findAllNext = find_all_next # BS3 + + def find_next_sibling(self, name=None, attrs={}, string=None, **kwargs): + """Find the closest sibling to this PageElement that matches the + given criteria and appears later in the document. + + All find_* methods take a common set of arguments. See the + online documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param string: A filter for a NavigableString with specific text. + :kwargs: A dictionary of filters on attribute values. + :return: A PageElement. + :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self._find_one(self.find_next_siblings, name, attrs, string, + **kwargs) + findNextSibling = find_next_sibling # BS3 + + def find_next_siblings(self, name=None, attrs={}, string=None, limit=None, + **kwargs): + """Find all siblings of this PageElement that match the given criteria + and appear later in the document. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param string: A filter for a NavigableString with specific text. + :param limit: Stop looking after finding this many results. + :kwargs: A dictionary of filters on attribute values. + :return: A ResultSet of PageElements. 
+ :rtype: bs4.element.ResultSet + """ + _stacklevel = kwargs.pop('_stacklevel', 2) + return self._find_all( + name, attrs, string, limit, + self.next_siblings, _stacklevel=_stacklevel+1, **kwargs + ) + findNextSiblings = find_next_siblings # BS3 + fetchNextSiblings = find_next_siblings # BS2 + + def find_previous(self, name=None, attrs={}, string=None, **kwargs): + """Look backwards in the document from this PageElement and find the + first PageElement that matches the given criteria. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param string: A filter for a NavigableString with specific text. + :kwargs: A dictionary of filters on attribute values. + :return: A PageElement. + :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self._find_one( + self.find_all_previous, name, attrs, string, **kwargs) + findPrevious = find_previous # BS3 + + def find_all_previous(self, name=None, attrs={}, string=None, limit=None, + **kwargs): + """Look backwards in the document from this PageElement and find all + PageElements that match the given criteria. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param string: A filter for a NavigableString with specific text. + :param limit: Stop looking after finding this many results. + :kwargs: A dictionary of filters on attribute values. + :return: A ResultSet of PageElements. + :rtype: bs4.element.ResultSet + """ + _stacklevel = kwargs.pop('_stacklevel', 2) + return self._find_all( + name, attrs, string, limit, self.previous_elements, + _stacklevel=_stacklevel+1, **kwargs + ) + findAllPrevious = find_all_previous # BS3 + fetchPrevious = find_all_previous # BS2 + + def find_previous_sibling(self, name=None, attrs={}, string=None, **kwargs): + """Returns the closest sibling to this PageElement that matches the + given criteria and appears earlier in the document. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param string: A filter for a NavigableString with specific text. + :kwargs: A dictionary of filters on attribute values. + :return: A PageElement. + :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self._find_one(self.find_previous_siblings, name, attrs, string, + **kwargs) + findPreviousSibling = find_previous_sibling # BS3 + + def find_previous_siblings(self, name=None, attrs={}, string=None, + limit=None, **kwargs): + """Returns all siblings to this PageElement that match the + given criteria and appear earlier in the document. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param string: A filter for a NavigableString with specific text. + :param limit: Stop looking after finding this many results. + :kwargs: A dictionary of filters on attribute values. + :return: A ResultSet of PageElements. 
+ :rtype: bs4.element.ResultSet
+ """
+ _stacklevel = kwargs.pop('_stacklevel', 2)
+ return self._find_all(
+ name, attrs, string, limit,
+ self.previous_siblings, _stacklevel=_stacklevel+1, **kwargs
+ )
+ findPreviousSiblings = find_previous_siblings # BS3
+ fetchPreviousSiblings = find_previous_siblings # BS2
+
+ def find_parent(self, name=None, attrs={}, **kwargs):
+ """Find the closest parent of this PageElement that matches the given
+ criteria.
+
+ All find_* methods take a common set of arguments. See the online
+ documentation for detailed explanations.
+
+ :param name: A filter on tag name.
+ :param attrs: A dictionary of filters on attribute values.
+ :kwargs: A dictionary of filters on attribute values.
+
+ :return: A PageElement.
+ :rtype: bs4.element.Tag | bs4.element.NavigableString
+ """
+ # NOTE: We can't use _find_one because findParents takes a different
+ # set of arguments.
+ r = None
+ l = self.find_parents(name, attrs, 1, _stacklevel=3, **kwargs)
+ if l:
+ r = l[0]
+ return r
+ findParent = find_parent # BS3
+
+ def find_parents(self, name=None, attrs={}, limit=None, **kwargs):
+ """Find all parents of this PageElement that match the given criteria.
+
+ All find_* methods take a common set of arguments. See the online
+ documentation for detailed explanations.
+
+ :param name: A filter on tag name.
+ :param attrs: A dictionary of filters on attribute values.
+ :param limit: Stop looking after finding this many results.
+ :kwargs: A dictionary of filters on attribute values.
+
+ :return: A ResultSet of PageElements.
+ :rtype: bs4.element.ResultSet
+ """
+ _stacklevel = kwargs.pop('_stacklevel', 2)
+ return self._find_all(name, attrs, None, limit, self.parents,
+ _stacklevel=_stacklevel+1, **kwargs)
+ findParents = find_parents # BS3
+ fetchParents = find_parents # BS2
+
+ @property
+ def next(self):
+ """The PageElement, if any, that was parsed just after this one.
+
+ :return: A PageElement.
+ :rtype: bs4.element.Tag | bs4.element.NavigableString
+ """
+ return self.next_element
+
+ @property
+ def previous(self):
+ """The PageElement, if any, that was parsed just before this one.
+
+ :return: A PageElement.
+ :rtype: bs4.element.Tag | bs4.element.NavigableString
+ """
+ return self.previous_element
+
+ #These methods do the real heavy lifting.
+
+ def _find_one(self, method, name, attrs, string, **kwargs):
+ r = None
+ l = method(name, attrs, string, 1, _stacklevel=4, **kwargs)
+ if l:
+ r = l[0]
+ return r
+
+ def _find_all(self, name, attrs, string, limit, generator, **kwargs):
+ "Iterates over a generator looking for things that match."
+ _stacklevel = kwargs.pop('_stacklevel', 3)
+
+ if string is None and 'text' in kwargs:
+ string = kwargs.pop('text')
+ warnings.warn(
+ "The 'text' argument to find()-type methods is deprecated. Use 'string' instead.",
+ DeprecationWarning, stacklevel=_stacklevel
+ )
+
+ if isinstance(name, SoupStrainer):
+ strainer = name
+ else:
+ strainer = SoupStrainer(name, attrs, string, **kwargs)
+
+ if string is None and not limit and not attrs and not kwargs:
+ if name is True or name is None:
+ # Optimization to find all tags.
+ result = (element for element in generator
+ if isinstance(element, Tag))
+ return ResultSet(strainer, result)
+ elif isinstance(name, str):
+ # Optimization to find all tags with a given name.
+ if name.count(':') == 1:
+ # This is a name with a prefix. If this is a namespace-aware document,
+ # we need to match the local name against tag.name.
If not, + # we need to match the fully-qualified name against tag.name. + prefix, local_name = name.split(':', 1) + else: + prefix = None + local_name = name + result = (element for element in generator + if isinstance(element, Tag) + and ( + element.name == name + ) or ( + element.name == local_name + and (prefix is None or element.prefix == prefix) + ) + ) + return ResultSet(strainer, result) + results = ResultSet(strainer) + while True: + try: + i = next(generator) + except StopIteration: + break + if i: + found = strainer.search(i) + if found: + results.append(found) + if limit and len(results) >= limit: + break + return results + + #These generators can be used to navigate starting from both + #NavigableStrings and Tags. + @property + def next_elements(self): + """All PageElements that were parsed after this one. + + :yield: A sequence of PageElements. + """ + i = self.next_element + while i is not None: + yield i + i = i.next_element + + @property + def next_siblings(self): + """All PageElements that are siblings of this one but were parsed + later. + + :yield: A sequence of PageElements. + """ + i = self.next_sibling + while i is not None: + yield i + i = i.next_sibling + + @property + def previous_elements(self): + """All PageElements that were parsed before this one. + + :yield: A sequence of PageElements. + """ + i = self.previous_element + while i is not None: + yield i + i = i.previous_element + + @property + def previous_siblings(self): + """All PageElements that are siblings of this one but were parsed + earlier. + + :yield: A sequence of PageElements. + """ + i = self.previous_sibling + while i is not None: + yield i + i = i.previous_sibling + + @property + def parents(self): + """All PageElements that are parents of this PageElement. + + :yield: A sequence of PageElements. + """ + i = self.parent + while i is not None: + yield i + i = i.parent + + @property + def decomposed(self): + """Check whether a PageElement has been decomposed. + + :rtype: bool + """ + return getattr(self, '_decomposed', False) or False + + # Old non-property versions of the generators, for backwards + # compatibility with BS3. + def nextGenerator(self): + return self.next_elements + + def nextSiblingGenerator(self): + return self.next_siblings + + def previousGenerator(self): + return self.previous_elements + + def previousSiblingGenerator(self): + return self.previous_siblings + + def parentGenerator(self): + return self.parents + + +class NavigableString(str, PageElement): + """A Python Unicode string that is part of a parse tree. + + When Beautiful Soup parses the markup <b>penguin</b>, it will + create a NavigableString for the string "penguin". + """ + + PREFIX = '' + SUFFIX = '' + + def __new__(cls, value): + """Create a new NavigableString. + + When unpickling a NavigableString, this method is called with + the string in DEFAULT_OUTPUT_ENCODING. That encoding needs to be + passed in to the superclass's __new__ or the superclass won't know + how to handle non-ASCII characters. + """ + if isinstance(value, str): + u = str.__new__(cls, value) + else: + u = str.__new__(cls, value, DEFAULT_OUTPUT_ENCODING) + u.setup() + return u + + def __deepcopy__(self, memo, recursive=False): + """A copy of a NavigableString has the same contents and class + as the original, but it is not connected to the parse tree. + + :param recursive: This parameter is ignored; it's only defined + so that NavigableString.__deepcopy__ implements the same + signature as Tag.__deepcopy__. 
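+
+ For example (an illustrative call, assuming a parsed document
+ in `soup`), copy.copy(soup.b.string) returns an equal
+ NavigableString whose .parent is None.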
+ """ + return type(self)(self) + + def __copy__(self): + """A copy of a NavigableString can only be a deep copy, because + only one PageElement can occupy a given place in a parse tree. + """ + return self.__deepcopy__({}) + + def __getnewargs__(self): + return (str(self),) + + def __getattr__(self, attr): + """text.string gives you text. This is for backwards + compatibility for Navigable*String, but for CData* it lets you + get the string without the CData wrapper.""" + if attr == 'string': + return self + else: + raise AttributeError( + "'%s' object has no attribute '%s'" % ( + self.__class__.__name__, attr)) + + def output_ready(self, formatter="minimal"): + """Run the string through the provided formatter. + + :param formatter: A Formatter object, or a string naming one of the standard formatters. + """ + output = self.format_string(self, formatter) + return self.PREFIX + output + self.SUFFIX + + @property + def name(self): + """Since a NavigableString is not a Tag, it has no .name. + + This property is implemented so that code like this doesn't crash + when run on a mixture of Tag and NavigableString objects: + [x.name for x in tag.children] + """ + return None + + @name.setter + def name(self, name): + """Prevent NavigableString.name from ever being set.""" + raise AttributeError("A NavigableString cannot be given a name.") + + def _all_strings(self, strip=False, types=PageElement.default): + """Yield all strings of certain classes, possibly stripping them. + + This makes it easy for NavigableString to implement methods + like get_text() as conveniences, creating a consistent + text-extraction API across all PageElements. + + :param strip: If True, all strings will be stripped before being + yielded. + + :param types: A tuple of NavigableString subclasses. If this + NavigableString isn't one of those subclasses, the + sequence will be empty. By default, the subclasses + considered are NavigableString and CData objects. That + means no comments, processing instructions, etc. + + :yield: A sequence that either contains this string, or is empty. + + """ + if types is self.default: + # This is kept in Tag because it's full of subclasses of + # this class, which aren't defined until later in the file. + types = Tag.DEFAULT_INTERESTING_STRING_TYPES + + # Do nothing if the caller is looking for specific types of + # string, and we're of a different type. + # + # We check specific types instead of using isinstance(self, + # types) because all of these classes subclass + # NavigableString. Anyone who's using this feature probably + # wants generic NavigableStrings but not other stuff. + my_type = type(self) + if types is not None: + if isinstance(types, type): + # Looking for a single type. + if my_type is not types: + return + elif my_type not in types: + # Looking for one of a list of types. + return + + value = self + if strip: + value = value.strip() + if len(value) > 0: + yield value + strings = property(_all_strings) + +class PreformattedString(NavigableString): + """A NavigableString not subject to the normal formatting rules. + + This is an abstract class used for special kinds of strings such + as comments (the Comment class) and CDATA blocks (the CData + class). + """ + + PREFIX = '' + SUFFIX = '' + + def output_ready(self, formatter=None): + """Make this string ready for output by adding any subclass-specific + prefix or suffix. + + :param formatter: A Formatter object, or a string naming one + of the standard formatters. 
The string will be passed into the
+ Formatter, but only to trigger any side effects: the return
+ value is ignored.
+
+ :return: The string, with any subclass-specific prefix and
+ suffix added on.
+ """
+ if formatter is not None:
+ ignore = self.format_string(self, formatter)
+ return self.PREFIX + self + self.SUFFIX
+
+class CData(PreformattedString):
+ """A CDATA block."""
+ PREFIX = '<![CDATA['
+ SUFFIX = ']]>'
+
+class ProcessingInstruction(PreformattedString):
+ """An SGML processing instruction."""
+
+ PREFIX = '<?'
+ SUFFIX = '>'
+
+class XMLProcessingInstruction(ProcessingInstruction):
+ """An XML processing instruction."""
+ PREFIX = '<?'
+ SUFFIX = '?>'
+
+class Comment(PreformattedString):
+ """An HTML or XML comment."""
+ PREFIX = '<!--'
+ SUFFIX = '-->'
+
+
+class Declaration(PreformattedString):
+ """An XML declaration."""
+ PREFIX = '<?'
+ SUFFIX = '?>'
+
+
+class Doctype(PreformattedString):
+ """A document type declaration."""
+ @classmethod
+ def for_name_and_ids(cls, name, pub_id, system_id):
+ """Generate an appropriate document type declaration for a given
+ public ID and system ID.
+
+ :param name: The name of the document's root element, e.g. 'html'.
+ :param pub_id: The Formal Public Identifier for this document type,
+ e.g. '-//W3C//DTD XHTML 1.1//EN'
+ :param system_id: The system identifier for this document type,
+ e.g. 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd'
+
+ :return: A Doctype.
+ """
+ value = name or ''
+ if pub_id is not None:
+ value += ' PUBLIC "%s"' % pub_id
+ if system_id is not None:
+ value += ' "%s"' % system_id
+ elif system_id is not None:
+ value += ' SYSTEM "%s"' % system_id
+
+ return Doctype(value)
+
+ PREFIX = '<!DOCTYPE '
+ SUFFIX = '>\n'
+
+
+class Stylesheet(NavigableString):
+ """A NavigableString representing a stylesheet (probably
+ CSS).
+
+ Used to distinguish embedded stylesheets from textual content.
+ """
+ pass
+
+
+class Script(NavigableString):
+ """A NavigableString representing an executable script (probably
+ JavaScript).
+
+ Used to distinguish executable code from textual content.
+ """
+ pass
+
+
+class TemplateString(NavigableString):
+ """A NavigableString representing a string found inside an HTML
+ template embedded in a larger document.
+
+ Used to distinguish such strings from the main body of the document.
+ """
+ pass
+
+
+class RubyTextString(NavigableString):
+ """A NavigableString representing the contents of the <rt> HTML
+ element.
+
+ https://dev.w3.org/html5/spec-LC/text-level-semantics.html#the-rt-element
+
+ Can be used to distinguish such strings from the strings they're
+ annotating.
+ """
+ pass
+
+
+class RubyParenthesisString(NavigableString):
+ """A NavigableString representing the contents of the <rp> HTML
+ element.
+
+ https://dev.w3.org/html5/spec-LC/text-level-semantics.html#the-rp-element
+ """
+ pass
+
+
+class Tag(PageElement):
+ """Represents an HTML or XML tag that is part of a parse tree, along
+ with its attributes and contents.
+
+ When Beautiful Soup parses the markup <b>penguin</b>, it will
+ create a Tag object representing the <b> tag.
+ """
+
+ def __init__(self, parser=None, builder=None, name=None, namespace=None,
+ prefix=None, attrs=None, parent=None, previous=None,
+ is_xml=None, sourceline=None, sourcepos=None,
+ can_be_empty_element=None, cdata_list_attributes=None,
+ preserve_whitespace_tags=None,
+ interesting_string_types=None,
+ namespaces=None
+ ):
+ """Basic constructor.
+
+ :param parser: A BeautifulSoup object.
+ :param builder: A TreeBuilder.
+ :param name: The name of the tag. + :param namespace: The URI of this Tag's XML namespace, if any. + :param prefix: The prefix for this Tag's XML namespace, if any. + :param attrs: A dictionary of this Tag's attribute values. + :param parent: The PageElement to use as this Tag's parent. + :param previous: The PageElement that was parsed immediately before + this tag. + :param is_xml: If True, this is an XML tag. Otherwise, this is an + HTML tag. + :param sourceline: The line number where this tag was found in its + source document. + :param sourcepos: The character position within `sourceline` where this + tag was found. + :param can_be_empty_element: If True, this tag should be + represented as <tag/>. If False, this tag should be represented + as <tag></tag>. + :param cdata_list_attributes: A list of attributes whose values should + be treated as CDATA if they ever show up on this tag. + :param preserve_whitespace_tags: A list of tag names whose contents + should have their whitespace preserved. + :param interesting_string_types: This is a NavigableString + subclass or a tuple of them. When iterating over this + Tag's strings in methods like Tag.strings or Tag.get_text, + these are the types of strings that are interesting enough + to be considered. The default is to consider + NavigableString and CData the only interesting string + subtypes. + :param namespaces: A dictionary mapping currently active + namespace prefixes to URIs. This can be used later to + construct CSS selectors. + """ + if parser is None: + self.parser_class = None + else: + # We don't actually store the parser object: that lets extracted + # chunks be garbage-collected. + self.parser_class = parser.__class__ + if name is None: + raise ValueError("No value provided for new tag's name.") + self.name = name + self.namespace = namespace + self._namespaces = namespaces or {} + self.prefix = prefix + if ((not builder or builder.store_line_numbers) + and (sourceline is not None or sourcepos is not None)): + self.sourceline = sourceline + self.sourcepos = sourcepos + if attrs is None: + attrs = {} + elif attrs: + if builder is not None and builder.cdata_list_attributes: + attrs = builder._replace_cdata_list_attribute_values( + self.name, attrs) + else: + attrs = dict(attrs) + else: + attrs = dict(attrs) + + # If possible, determine ahead of time whether this tag is an + # XML tag. + if builder: + self.known_xml = builder.is_xml + else: + self.known_xml = is_xml + self.attrs = attrs + self.contents = [] + self.setup(parent, previous) + self.hidden = False + + if builder is None: + # In the absence of a TreeBuilder, use whatever values were + # passed in here. They're probably None, unless this is a copy of some + # other tag. + self.can_be_empty_element = can_be_empty_element + self.cdata_list_attributes = cdata_list_attributes + self.preserve_whitespace_tags = preserve_whitespace_tags + self.interesting_string_types = interesting_string_types + else: + # Set up any substitutions for this tag, such as the charset in a META tag. + builder.set_up_substitutions(self) + + # Ask the TreeBuilder whether this tag might be an empty-element tag. + self.can_be_empty_element = builder.can_be_empty_element(name) + + # Keep track of the list of attributes of this tag that + # might need to be treated as a list. + # + # For performance reasons, we store the whole data structure + # rather than asking the question of every tag. 
Asking would
+ # require building a new data structure every time, and
+ # (unlike can_be_empty_element), we almost never need
+ # to check this.
+ self.cdata_list_attributes = builder.cdata_list_attributes
+
+ # Keep track of the names that might cause this tag to be treated as a
+ # whitespace-preserved tag.
+ self.preserve_whitespace_tags = builder.preserve_whitespace_tags
+
+ if self.name in builder.string_containers:
+ # This sort of tag uses a special string container
+ # subclass for most of its strings. When we ask the
+ # tag for its strings, they'll come back as instances
+ # of that container class.
+ self.interesting_string_types = builder.string_containers[self.name]
+ else:
+ self.interesting_string_types = self.DEFAULT_INTERESTING_STRING_TYPES
+
+ parserClass = _alias("parser_class") # BS3
+
+ def __deepcopy__(self, memo, recursive=True):
+ """A deepcopy of a Tag is a new Tag, unconnected to the parse tree.
+ Its contents are a copy of the old Tag's contents.
+ """
+ clone = self._clone()
+
+ if recursive:
+ # Clone this tag's descendants recursively, but without
+ # making any recursive function calls.
+ tag_stack = [clone]
+ for event, element in self._event_stream(self.descendants):
+ if event is Tag.END_ELEMENT_EVENT:
+ # Stop appending incoming Tags to the Tag that was
+ # just closed.
+ tag_stack.pop()
+ else:
+ descendant_clone = element.__deepcopy__(
+ memo, recursive=False
+ )
+ # Add to its parent's .contents
+ tag_stack[-1].append(descendant_clone)
+
+ if event is Tag.START_ELEMENT_EVENT:
+ # Add the Tag itself to the stack so that its
+ # children will be .appended to it.
+ tag_stack.append(descendant_clone)
+ return clone
+
+ def __copy__(self):
+ """A copy of a Tag must always be a deep copy, because a Tag's
+ children can only have one parent at a time.
+ """
+ return self.__deepcopy__({})
+
+ def _clone(self):
+ """Create a new Tag just like this one, but with no
+ contents and unattached to any parse tree.
+
+ This is the first step in the deepcopy process.
+ """
+ clone = type(self)(
+ None, self.builder, self.name, self.namespace,
+ self.prefix, self.attrs, is_xml=self._is_xml,
+ sourceline=self.sourceline, sourcepos=self.sourcepos,
+ can_be_empty_element=self.can_be_empty_element,
+ cdata_list_attributes=self.cdata_list_attributes,
+ preserve_whitespace_tags=self.preserve_whitespace_tags,
+ interesting_string_types=self.interesting_string_types
+ )
+ for attr in ('can_be_empty_element', 'hidden'):
+ setattr(clone, attr, getattr(self, attr))
+ return clone
+
+ @property
+ def is_empty_element(self):
+ """Is this tag an empty-element tag? (aka a self-closing tag)
+
+ A tag that has contents is never an empty-element tag.
+
+ A tag that has no contents may or may not be an empty-element
+ tag. It depends on the builder used to create the tag. If the
+ builder has a designated list of empty-element tags, then only
+ a tag whose name shows up in that list is considered an
+ empty-element tag.
+
+ If the builder has no designated list of empty-element tags,
+ then any tag with no contents is an empty-element tag.
+ """
+ return len(self.contents) == 0 and self.can_be_empty_element
+ isSelfClosing = is_empty_element # BS3
+
+ @property
+ def string(self):
+ """Convenience property to get the single string within this
+ PageElement.
+
+ TODO It might make sense to have NavigableString.string return
+ itself.
+
+ :return: If this element has a single string child, return
+ value is that string. If this element has one child tag,
+ return value is the 'string' attribute of the child tag,
+ recursively.
If this element is itself a string, has no + children, or has more than one child, return value is None. + """ + if len(self.contents) != 1: + return None + child = self.contents[0] + if isinstance(child, NavigableString): + return child + return child.string + + @string.setter + def string(self, string): + """Replace this PageElement's contents with `string`.""" + self.clear() + self.append(string.__class__(string)) + + DEFAULT_INTERESTING_STRING_TYPES = (NavigableString, CData) + def _all_strings(self, strip=False, types=PageElement.default): + """Yield all strings of certain classes, possibly stripping them. + + :param strip: If True, all strings will be stripped before being + yielded. + + :param types: A tuple of NavigableString subclasses. Any strings of + a subclass not found in this list will be ignored. By + default, the subclasses considered are the ones found in + self.interesting_string_types. If that's not specified, + only NavigableString and CData objects will be + considered. That means no comments, processing + instructions, etc. + + :yield: A sequence of strings. + + """ + if types is self.default: + types = self.interesting_string_types + + for descendant in self.descendants: + if (types is None and not isinstance(descendant, NavigableString)): + continue + descendant_type = type(descendant) + if isinstance(types, type): + if descendant_type is not types: + # We're not interested in strings of this type. + continue + elif types is not None and descendant_type not in types: + # We're not interested in strings of this type. + continue + if strip: + descendant = descendant.strip() + if len(descendant) == 0: + continue + yield descendant + strings = property(_all_strings) + + def decompose(self): + """Recursively destroys this PageElement and its children. + + This element will be removed from the tree and wiped out; so + will everything beneath it. + + The behavior of a decomposed PageElement is undefined and you + should never use one for anything, but if you need to _check_ + whether an element has been decomposed, you can use the + `decomposed` property. + """ + self.extract() + i = self + while i is not None: + n = i.next_element + i.__dict__.clear() + i.contents = [] + i._decomposed = True + i = n + + def clear(self, decompose=False): + """Wipe out all children of this PageElement by calling extract() + on them. + + :param decompose: If this is True, decompose() (a more + destructive method) will be called instead of extract(). + """ + if decompose: + for element in self.contents[:]: + if isinstance(element, Tag): + element.decompose() + else: + element.extract() + else: + for element in self.contents[:]: + element.extract() + + def smooth(self): + """Smooth out this element's children by consolidating consecutive + strings. + + This makes pretty-printed output look more natural following a + lot of operations that modified the tree. + """ + # Mark the first position of every pair of children that need + # to be consolidated. Do this rather than making a copy of + # self.contents, since in most cases very few strings will be + # affected. + marked = [] + for i, a in enumerate(self.contents): + if isinstance(a, Tag): + # Recursively smooth children. + a.smooth() + if i == len(self.contents)-1: + # This is the last item in .contents, and it's not a + # tag. There's no chance it needs any work. 
+ continue + b = self.contents[i+1] + if (isinstance(a, NavigableString) + and isinstance(b, NavigableString) + and not isinstance(a, PreformattedString) + and not isinstance(b, PreformattedString) + ): + marked.append(i) + + # Go over the marked positions in reverse order, so that + # removing items from .contents won't affect the remaining + # positions. + for i in reversed(marked): + a = self.contents[i] + b = self.contents[i+1] + b.extract() + n = NavigableString(a+b) + a.replace_with(n) + + def index(self, element): + """Find the index of a child by identity, not value. + + Avoids issues with tag.contents.index(element) getting the + index of equal elements. + + :param element: Look for this PageElement in `self.contents`. + """ + for i, child in enumerate(self.contents): + if child is element: + return i + raise ValueError("Tag.index: element not in tag") + + def get(self, key, default=None): + """Returns the value of the 'key' attribute for the tag, or + the value given for 'default' if it doesn't have that + attribute.""" + return self.attrs.get(key, default) + + def get_attribute_list(self, key, default=None): + """The same as get(), but always returns a list. + + :param key: The attribute to look for. + :param default: Use this value if the attribute is not present + on this PageElement. + :return: A list of values, probably containing only a single + value. + """ + value = self.get(key, default) + if not isinstance(value, list): + value = [value] + return value + + def has_attr(self, key): + """Does this PageElement have an attribute with the given name?""" + return key in self.attrs + + def __hash__(self): + return str(self).__hash__() + + def __getitem__(self, key): + """tag[key] returns the value of the 'key' attribute for the Tag, + and throws an exception if it's not there.""" + return self.attrs[key] + + def __iter__(self): + "Iterating over a Tag iterates over its contents." + return iter(self.contents) + + def __len__(self): + "The length of a Tag is the length of its list of contents." + return len(self.contents) + + def __contains__(self, x): + return x in self.contents + + def __bool__(self): + "A tag is non-None even if it has no contents." + return True + + def __setitem__(self, key, value): + """Setting tag[key] sets the value of the 'key' attribute for the + tag.""" + self.attrs[key] = value + + def __delitem__(self, key): + "Deleting tag[key] deletes all 'key' attributes for the tag." + self.attrs.pop(key, None) + + def __call__(self, *args, **kwargs): + """Calling a Tag like a function is the same as calling its + find_all() method. Eg. tag('a') returns a list of all the A tags + found within this tag.""" + return self.find_all(*args, **kwargs) + + def __getattr__(self, tag): + """Calling tag.subtag is the same as calling tag.find(name="subtag")""" + #print("Getattr %s.%s" % (self.__class__, tag)) + if len(tag) > 3 and tag.endswith('Tag'): + # BS3: soup.aTag -> "soup.find("a") + tag_name = tag[:-3] + warnings.warn( + '.%(name)sTag is deprecated, use .find("%(name)s") instead. If you really were looking for a tag called %(name)sTag, use .find("%(name)sTag")' % dict( + name=tag_name + ), + DeprecationWarning, stacklevel=2 + ) + return self.find(tag_name) + # We special case contents to avoid recursion. 
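+ # (find() iterates over self.contents, so letting a missing
+ # "contents" attribute fall through to find() would recurse
+ # forever; dunder names are skipped so protocols like copy and
+ # pickle behave normally.)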
+ elif not tag.startswith("__") and not tag == "contents": + return self.find(tag) + raise AttributeError( + "'%s' object has no attribute '%s'" % (self.__class__, tag)) + + def __eq__(self, other): + """Returns true iff this Tag has the same name, the same attributes, + and the same contents (recursively) as `other`.""" + if self is other: + return True + if (not hasattr(other, 'name') or + not hasattr(other, 'attrs') or + not hasattr(other, 'contents') or + self.name != other.name or + self.attrs != other.attrs or + len(self) != len(other)): + return False + for i, my_child in enumerate(self.contents): + if my_child != other.contents[i]: + return False + return True + + def __ne__(self, other): + """Returns true iff this Tag is not identical to `other`, + as defined in __eq__.""" + return not self == other + + def __repr__(self, encoding="unicode-escape"): + """Renders this PageElement as a string. + + :param encoding: The encoding to use (Python 2 only). + TODO: This is now ignored and a warning should be issued + if a value is provided. + :return: A (Unicode) string. + """ + # "The return value must be a string object", i.e. Unicode + return self.decode() + + def __unicode__(self): + """Renders this PageElement as a Unicode string.""" + return self.decode() + + __str__ = __repr__ = __unicode__ + + def encode(self, encoding=DEFAULT_OUTPUT_ENCODING, + indent_level=None, formatter="minimal", + errors="xmlcharrefreplace"): + """Render a bytestring representation of this PageElement and its + contents. + + :param encoding: The destination encoding. + :param indent_level: Each line of the rendering will be + indented this many levels. (The formatter decides what a + 'level' means in terms of spaces or other characters + output.) Used internally in recursive calls while + pretty-printing. + :param formatter: A Formatter object, or a string naming one of + the standard formatters. + :param errors: An error handling strategy such as + 'xmlcharrefreplace'. This value is passed along into + encode() and its value should be one of the constants + defined by Python. + :return: A bytestring. + + """ + # Turn the data structure into Unicode, then encode the + # Unicode. + u = self.decode(indent_level, encoding, formatter) + return u.encode(encoding, errors) + + def decode(self, indent_level=None, + eventual_encoding=DEFAULT_OUTPUT_ENCODING, + formatter="minimal", + iterator=None): + pieces = [] + # First off, turn a non-Formatter `formatter` into a Formatter + # object. This will stop the lookup from happening over and + # over again. + if not isinstance(formatter, Formatter): + formatter = self.formatter_for_name(formatter) + + if indent_level is True: + indent_level = 0 + + # The currently active tag that put us into string literal + # mode. Until this element is closed, children will be treated + # as string literals and not pretty-printed. String literal + # mode is turned on immediately after this tag begins, and + # turned off immediately before it's closed. This means there + # will be whitespace before and after the tag itself. 
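+ # string_literal_tag tracks the tag that turned string literal
+ # mode on; while it is set, pretty-printing is suspended.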
+ string_literal_tag = None + + for event, element in self._event_stream(iterator): + if event in (Tag.START_ELEMENT_EVENT, Tag.EMPTY_ELEMENT_EVENT): + piece = element._format_tag( + eventual_encoding, formatter, opening=True + ) + elif event is Tag.END_ELEMENT_EVENT: + piece = element._format_tag( + eventual_encoding, formatter, opening=False + ) + if indent_level is not None: + indent_level -= 1 + else: + piece = element.output_ready(formatter) + + # Now we need to apply the 'prettiness' -- extra + # whitespace before and/or after this tag. This can get + # complicated because certain tags, like <pre> and + # <script>, can't be prettified, since adding whitespace would + # change the meaning of the content. + + # The default behavior is to add whitespace before and + # after an element when string literal mode is off, and to + # leave things as they are when string literal mode is on. + if string_literal_tag: + indent_before = indent_after = False + else: + indent_before = indent_after = True + + # The only time the behavior is more complex than that is + # when we encounter an opening or closing tag that might + # put us into or out of string literal mode. + if (event is Tag.START_ELEMENT_EVENT + and not string_literal_tag + and not element._should_pretty_print()): + # We are about to enter string literal mode. Add + # whitespace before this tag, but not after. We + # will stay in string literal mode until this tag + # is closed. + indent_before = True + indent_after = False + string_literal_tag = element + elif (event is Tag.END_ELEMENT_EVENT + and element is string_literal_tag): + # We are about to exit string literal mode by closing + # the tag that sent us into that mode. Add whitespace + # after this tag, but not before. + indent_before = False + indent_after = True + string_literal_tag = None + + # Now we know whether to add whitespace before and/or + # after this element. + if indent_level is not None: + if (indent_before or indent_after): + if isinstance(element, NavigableString): + piece = piece.strip() + if piece: + piece = self._indent_string( + piece, indent_level, formatter, + indent_before, indent_after + ) + if event == Tag.START_ELEMENT_EVENT: + indent_level += 1 + pieces.append(piece) + return "".join(pieces) + + # Names for the different events yielded by _event_stream + START_ELEMENT_EVENT = object() + END_ELEMENT_EVENT = object() + EMPTY_ELEMENT_EVENT = object() + STRING_ELEMENT_EVENT = object() + + def _event_stream(self, iterator=None): + """Yield a sequence of events that can be used to reconstruct the DOM + for this element. + + This lets us recreate the nested structure of this element + (e.g. when formatting it as a string) without using recursive + method calls. + + This is similar in concept to the SAX API, but it's a simpler + interface designed for internal use. The events are different + from SAX and the arguments associated with the events are Tags + and other Beautiful Soup objects. + + :param iterator: An alternate iterator to use when traversing + the tree. + """ + tag_stack = [] + + iterator = iterator or self.self_and_descendants + + for c in iterator: + # If the parent of the element we're about to yield is not + # the tag currently on the stack, it means that the tag on + # the stack closed before this element appeared. 
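+ # Pop each such tag and report that it closed before yielding
+ # the current element.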
+ while tag_stack and c.parent != tag_stack[-1]: + now_closed_tag = tag_stack.pop() + yield Tag.END_ELEMENT_EVENT, now_closed_tag + + if isinstance(c, Tag): + if c.is_empty_element: + yield Tag.EMPTY_ELEMENT_EVENT, c + else: + yield Tag.START_ELEMENT_EVENT, c + tag_stack.append(c) + continue + else: + yield Tag.STRING_ELEMENT_EVENT, c + + while tag_stack: + now_closed_tag = tag_stack.pop() + yield Tag.END_ELEMENT_EVENT, now_closed_tag + + def _indent_string(self, s, indent_level, formatter, + indent_before, indent_after): + """Add indentation whitespace before and/or after a string. + + :param s: The string to amend with whitespace. + :param indent_level: The indentation level; affects how much + whitespace goes before the string. + :param indent_before: Whether or not to add whitespace + before the string. + :param indent_after: Whether or not to add whitespace + (a newline) after the string. + """ + space_before = '' + if indent_before and indent_level: + space_before = (formatter.indent * indent_level) + + space_after = '' + if indent_after: + space_after = "\n" + + return space_before + s + space_after + + def _format_tag(self, eventual_encoding, formatter, opening): + # A tag starts with the < character (see below). + + # Then the / character, if this is a closing tag. + closing_slash = '' + if not opening: + closing_slash = '/' + + # Then an optional namespace prefix. + prefix = '' + if self.prefix: + prefix = self.prefix + ":" + + # Then a list of attribute values, if this is an opening tag. + attribute_string = '' + if opening: + attributes = formatter.attributes(self) + attrs = [] + for key, val in attributes: + if val is None: + decoded = key + else: + if isinstance(val, list) or isinstance(val, tuple): + val = ' '.join(val) + elif not isinstance(val, str): + val = str(val) + elif ( + isinstance(val, AttributeValueWithCharsetSubstitution) + and eventual_encoding is not None + ): + val = val.encode(eventual_encoding) + + text = formatter.attribute_value(val) + decoded = ( + str(key) + '=' + + formatter.quoted_attribute_value(text)) + attrs.append(decoded) + if attrs: + attribute_string = ' ' + ' '.join(attrs) + + # Then an optional closing slash (for a void element in an + # XML document). + void_element_closing_slash = '' + if self.is_empty_element: + void_element_closing_slash = formatter.void_element_close_prefix or '' + + # Put it all together. + return '<' + closing_slash + prefix + self.name + attribute_string + void_element_closing_slash + '>' + + def _should_pretty_print(self, indent_level=1): + """Should this tag be pretty-printed? + + Most of them should, but some (such as <pre> in HTML + documents) should not. + """ + return ( + indent_level is not None + and ( + not self.preserve_whitespace_tags + or self.name not in self.preserve_whitespace_tags + ) + ) + + def prettify(self, encoding=None, formatter="minimal"): + """Pretty-print this PageElement as a string. + + :param encoding: The eventual encoding of the string. If this is None, + a Unicode string will be returned. + :param formatter: A Formatter object, or a string naming one of + the standard formatters. + :return: A Unicode string (if encoding==None) or a bytestring + (otherwise). + """ + if encoding is None: + return self.decode(True, formatter=formatter) + else: + return self.encode(encoding, True, formatter=formatter) + + def decode_contents(self, indent_level=None, + eventual_encoding=DEFAULT_OUTPUT_ENCODING, + formatter="minimal"): + """Renders the contents of this tag as a Unicode string. 
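+
+ For example (illustrative, assuming a parsed document in `soup`
+ that has a <body> tag), soup.body.decode_contents() renders the
+ markup inside <body> but not the <body> tag itself.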
+
+ :param indent_level: Each line of the rendering will be
+ indented this many levels. (The formatter decides what a
+ 'level' means in terms of spaces or other characters
+ output.) Used internally in recursive calls while
+ pretty-printing.
+
+ :param eventual_encoding: The tag is destined to be
+ encoded into this encoding. decode_contents() is _not_
+ responsible for performing that encoding. This information
+ is passed in so that it can be substituted in if the
+ document contains a <META> tag that mentions the document's
+ encoding.
+
+ :param formatter: A Formatter object, or a string naming one of
+ the standard Formatters.
+
+ """
+ return self.decode(indent_level, eventual_encoding, formatter,
+ iterator=self.descendants)
+
+ def encode_contents(
+ self, indent_level=None, encoding=DEFAULT_OUTPUT_ENCODING,
+ formatter="minimal"):
+ """Renders the contents of this PageElement as a bytestring.
+
+ :param indent_level: Each line of the rendering will be
+ indented this many levels. (The formatter decides what a
+ 'level' means in terms of spaces or other characters
+ output.) Used internally in recursive calls while
+ pretty-printing.
+
+ :param encoding: The bytestring will be in this encoding.
+
+ :param formatter: A Formatter object, or a string naming one of
+ the standard Formatters.
+
+ :return: A bytestring.
+ """
+ contents = self.decode_contents(indent_level, encoding, formatter)
+ return contents.encode(encoding)
+
+ # Old method for BS3 compatibility
+ def renderContents(self, encoding=DEFAULT_OUTPUT_ENCODING,
+ prettyPrint=False, indentLevel=0):
+ """Deprecated method for BS3 compatibility."""
+ if not prettyPrint:
+ indentLevel = None
+ return self.encode_contents(
+ indent_level=indentLevel, encoding=encoding)
+
+ #Soup methods
+
+ def find(self, name=None, attrs={}, recursive=True, string=None,
+ **kwargs):
+ """Look in the children of this PageElement and find the first
+ PageElement that matches the given criteria.
+
+ All find_* methods take a common set of arguments. See the online
+ documentation for detailed explanations.
+
+ :param name: A filter on tag name.
+ :param attrs: A dictionary of filters on attribute values.
+ :param recursive: If this is True, find() will perform a
+ recursive search of this PageElement's children. Otherwise,
+ only the direct children will be considered.
+ :kwargs: A dictionary of filters on attribute values.
+ :return: A PageElement.
+ :rtype: bs4.element.Tag | bs4.element.NavigableString
+ """
+ r = None
+ l = self.find_all(name, attrs, recursive, string, 1, _stacklevel=3,
+ **kwargs)
+ if l:
+ r = l[0]
+ return r
+ findChild = find #BS2
+
+ def find_all(self, name=None, attrs={}, recursive=True, string=None,
+ limit=None, **kwargs):
+ """Look in the children of this PageElement and find all
+ PageElements that match the given criteria.
+
+ All find_* methods take a common set of arguments. See the online
+ documentation for detailed explanations.
+
+ :param name: A filter on tag name.
+ :param attrs: A dictionary of filters on attribute values.
+ :param recursive: If this is True, find_all() will perform a
+ recursive search of this PageElement's children. Otherwise,
+ only the direct children will be considered.
+ :param limit: Stop looking after finding this many results.
+ :kwargs: A dictionary of filters on attribute values.
+ :return: A ResultSet of PageElements.
+ :rtype: bs4.element.ResultSet
+ """
+ generator = self.descendants
+ if not recursive:
+ generator = self.children
+ _stacklevel = kwargs.pop('_stacklevel', 2)
+ return self._find_all(name, attrs, string, limit, generator,
+ _stacklevel=_stacklevel+1, **kwargs)
+ findAll = find_all # BS3
+ findChildren = find_all # BS2
+
+ #Generator methods
+ @property
+ def children(self):
+ """Iterate over all direct children of this PageElement.
+
+ :yield: A sequence of PageElements.
+ """
+ # return iter() to make the purpose of the method clear
+ return iter(self.contents) # XXX This seems to be untested.
+
+ @property
+ def self_and_descendants(self):
+ """Iterate over this PageElement and its children in a
+ depth-first sequence (document order).
+
+ :yield: A sequence of PageElements.
+ """
+ if not self.hidden:
+ yield self
+ for i in self.descendants:
+ yield i
+
+ @property
+ def descendants(self):
+ """Iterate over all children of this PageElement in a
+ depth-first sequence (document order).
+
+ :yield: A sequence of PageElements.
+ """
+ if not len(self.contents):
+ return
+ stopNode = self._last_descendant().next_element
+ current = self.contents[0]
+ while current is not stopNode:
+ yield current
+ current = current.next_element
+
+ # CSS selector code
+ def select_one(self, selector, namespaces=None, **kwargs):
+ """Perform a CSS selection operation on the current element.
+
+ :param selector: A CSS selector.
+
+ :param namespaces: A dictionary mapping namespace prefixes
+ used in the CSS selector to namespace URIs. By default,
+ Beautiful Soup will use the prefixes it encountered while
+ parsing the document.
+
+ :param kwargs: Keyword arguments to be passed into Soup Sieve's
+ soupsieve.select() method.
+
+ :return: A Tag.
+ :rtype: bs4.element.Tag
+ """
+ return self.css.select_one(selector, namespaces, **kwargs)
+
+ def select(self, selector, namespaces=None, limit=None, **kwargs):
+ """Perform a CSS selection operation on the current element.
+
+ This uses the SoupSieve library.
+
+ :param selector: A string containing a CSS selector.
+
+ :param namespaces: A dictionary mapping namespace prefixes
+ used in the CSS selector to namespace URIs. By default,
+ Beautiful Soup will use the prefixes it encountered while
+ parsing the document.
+
+ :param limit: After finding this number of results, stop looking.
+
+ :param kwargs: Keyword arguments to be passed into SoupSieve's
+ soupsieve.select() method.
+
+ :return: A ResultSet of Tags.
+ :rtype: bs4.element.ResultSet
+ """
+ return self.css.select(selector, namespaces, limit, **kwargs)
+
+ @property
+ def css(self):
+ """Return an interface to the CSS selector API."""
+ return CSS(self)
+
+ # Old names for backwards compatibility
+ def childGenerator(self):
+ """Deprecated generator."""
+ return self.children
+
+ def recursiveChildGenerator(self):
+ """Deprecated generator."""
+ return self.descendants
+
+ def has_key(self, key):
+ """Deprecated method. This was kind of misleading because has_key()
+ (attributes) was different from __in__ (contents).
+
+ has_key() is gone in Python 3, anyway.
+ """
+ warnings.warn(
+ 'has_key is deprecated. Use has_attr(key) instead.',
+ DeprecationWarning, stacklevel=2
+ )
+ return self.has_attr(key)
+
+# Next, a couple classes to represent queries and their results.
+class SoupStrainer(object):
+ """Encapsulates a number of ways of matching a markup element (tag or
+ string).
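+
+ For example (an illustrative strainer), SoupStrainer("a", href=True)
+ matches only <a> tags that have an href attribute.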
+ + This is primarily used to underpin the find_* methods, but you can + create one yourself and pass it in as `parse_only` to the + `BeautifulSoup` constructor, to parse a subset of a large + document. + """ + + def __init__(self, name=None, attrs={}, string=None, **kwargs): + """Constructor. + + The SoupStrainer constructor takes the same arguments passed + into the find_* methods. See the online documentation for + detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param string: A filter for a NavigableString with specific text. + :kwargs: A dictionary of filters on attribute values. + """ + if string is None and 'text' in kwargs: + string = kwargs.pop('text') + warnings.warn( + "The 'text' argument to the SoupStrainer constructor is deprecated. Use 'string' instead.", + DeprecationWarning, stacklevel=2 + ) + + self.name = self._normalize_search_value(name) + if not isinstance(attrs, dict): + # Treat a non-dict value for attrs as a search for the 'class' + # attribute. + kwargs['class'] = attrs + attrs = None + + if 'class_' in kwargs: + # Treat class_="foo" as a search for the 'class' + # attribute, overriding any non-dict value for attrs. + kwargs['class'] = kwargs['class_'] + del kwargs['class_'] + + if kwargs: + if attrs: + attrs = attrs.copy() + attrs.update(kwargs) + else: + attrs = kwargs + normalized_attrs = {} + for key, value in list(attrs.items()): + normalized_attrs[key] = self._normalize_search_value(value) + + self.attrs = normalized_attrs + self.string = self._normalize_search_value(string) + + # DEPRECATED but just in case someone is checking this. + self.text = self.string + + def _normalize_search_value(self, value): + # Leave it alone if it's a Unicode string, a callable, a + # regular expression, a boolean, or None. + if (isinstance(value, str) or isinstance(value, Callable) or hasattr(value, 'match') + or isinstance(value, bool) or value is None): + return value + + # If it's a bytestring, convert it to Unicode, treating it as UTF-8. + if isinstance(value, bytes): + return value.decode("utf8") + + # If it's listlike, convert it into a list of strings. + if hasattr(value, '__iter__'): + new_value = [] + for v in value: + if (hasattr(v, '__iter__') and not isinstance(v, bytes) + and not isinstance(v, str)): + # This is almost certainly the user's mistake. In the + # interests of avoiding infinite loops, we'll let + # it through as-is rather than doing a recursive call. + new_value.append(v) + else: + new_value.append(self._normalize_search_value(v)) + return new_value + + # Otherwise, convert it into a Unicode string. + # The unicode(str()) thing is so this will do the same thing on Python 2 + # and Python 3. + return str(str(value)) + + def __str__(self): + """A human-readable representation of this SoupStrainer.""" + if self.string: + return self.string + else: + return "%s|%s" % (self.name, self.attrs) + + def search_tag(self, markup_name=None, markup_attrs={}): + """Check whether a Tag with the given name and attributes would + match this SoupStrainer. + + Used prospectively to decide whether to even bother creating a Tag + object. + + :param markup_name: A tag name as found in some markup. + :param markup_attrs: A dictionary of attributes as found in some markup. + + :return: True if the prospective tag would match this SoupStrainer; + False otherwise. 
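+
+ For example (an illustrative call), SoupStrainer("a").search_tag("div")
+ returns a false value: a strainer built for <a> tags can never match
+ a prospective <div> tag.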
+ """ + found = None + markup = None + if isinstance(markup_name, Tag): + markup = markup_name + markup_attrs = markup + + if isinstance(self.name, str): + # Optimization for a very common case where the user is + # searching for a tag with one specific name, and we're + # looking at a tag with a different name. + if markup and not markup.prefix and self.name != markup.name: + return False + + call_function_with_tag_data = ( + isinstance(self.name, Callable) + and not isinstance(markup_name, Tag)) + + if ((not self.name) + or call_function_with_tag_data + or (markup and self._matches(markup, self.name)) + or (not markup and self._matches(markup_name, self.name))): + if call_function_with_tag_data: + match = self.name(markup_name, markup_attrs) + else: + match = True + markup_attr_map = None + for attr, match_against in list(self.attrs.items()): + if not markup_attr_map: + if hasattr(markup_attrs, 'get'): + markup_attr_map = markup_attrs + else: + markup_attr_map = {} + for k, v in markup_attrs: + markup_attr_map[k] = v + attr_value = markup_attr_map.get(attr) + if not self._matches(attr_value, match_against): + match = False + break + if match: + if markup: + found = markup + else: + found = markup_name + if found and self.string and not self._matches(found.string, self.string): + found = None + return found + + # For BS3 compatibility. + searchTag = search_tag + + def search(self, markup): + """Find all items in `markup` that match this SoupStrainer. + + Used by the core _find_all() method, which is ultimately + called by all find_* methods. + + :param markup: A PageElement or a list of them. + """ + # print('looking for %s in %s' % (self, markup)) + found = None + # If given a list of items, scan it for a text element that + # matches. + if hasattr(markup, '__iter__') and not isinstance(markup, (Tag, str)): + for element in markup: + if isinstance(element, NavigableString) \ + and self.search(element): + found = element + break + # If it's a Tag, make sure its name or attributes match. + # Don't bother with Tags if we're searching for text. + elif isinstance(markup, Tag): + if not self.string or self.name or self.attrs: + found = self.search_tag(markup) + # If it's text, make sure the text matches. + elif isinstance(markup, NavigableString) or \ + isinstance(markup, str): + if not self.name and not self.attrs and self._matches(markup, self.string): + found = markup + else: + raise Exception( + "I don't know how to match against a %s" % markup.__class__) + return found + + def _matches(self, markup, match_against, already_tried=None): + # print(u"Matching %s against %s" % (markup, match_against)) + result = False + if isinstance(markup, list) or isinstance(markup, tuple): + # This should only happen when searching a multi-valued attribute + # like 'class'. + for item in markup: + if self._matches(item, match_against): + return True + # We didn't match any particular value of the multivalue + # attribute, but maybe we match the attribute value when + # considered as a string. + if self._matches(' '.join(markup), match_against): + return True + return False + + if match_against is True: + # True matches any non-None value. + return markup is not None + + if isinstance(match_against, Callable): + return match_against(markup) + + # Custom callables take the tag as an argument, but all + # other ways of matching match the tag name as a string. + original_markup = markup + if isinstance(markup, Tag): + markup = markup.name + + # Ensure that `markup` is either a Unicode string, or None. 
+ markup = self._normalize_search_value(markup)
+
+ if markup is None:
+ # None matches None, False, an empty string, an empty list, and so on.
+ return not match_against
+
+ if (hasattr(match_against, '__iter__')
+ and not isinstance(match_against, str)):
+ # We're asked to match against an iterable of items.
+ # The markup must match at least one item in the
+ # iterable. We'll try each one in turn.
+ #
+ # To avoid infinite recursion we need to keep track of
+ # items we've already seen.
+ if not already_tried:
+ already_tried = set()
+ for item in match_against:
+ if item.__hash__:
+ key = item
+ else:
+ key = id(item)
+ if key in already_tried:
+ continue
+ else:
+ already_tried.add(key)
+ if self._matches(original_markup, item, already_tried):
+ return True
+ else:
+ return False
+
+ # Beyond this point we might need to run the test twice: once against
+ # the tag's name and once against its prefixed name.
+ match = False
+
+ if not match and isinstance(match_against, str):
+ # Exact string match
+ match = markup == match_against
+
+ if not match and hasattr(match_against, 'search'):
+ # Regexp match
+ return match_against.search(markup)
+
+ if (not match
+ and isinstance(original_markup, Tag)
+ and original_markup.prefix):
+ # Try the whole thing again with the prefixed tag name.
+ return self._matches(
+ original_markup.prefix + ':' + original_markup.name, match_against
+ )
+
+ return match
+
+
+class ResultSet(list):
+ """A ResultSet is just a list that keeps track of the SoupStrainer
+ that created it."""
+ def __init__(self, source, result=()):
+ """Constructor.
+
+ :param source: A SoupStrainer.
+ :param result: A list of PageElements.
+ """
+ super(ResultSet, self).__init__(result)
+ self.source = source
+
+ def __getattr__(self, key):
+ """Raise a helpful exception to explain a common code fix."""
+ raise AttributeError(
+ "ResultSet object has no attribute '%s'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?" % key
+ )
diff --git a/python/lib/python3.11/site-packages/bs4/formatter.py b/python/lib/python3.11/site-packages/bs4/formatter.py
new file mode 100644
index 0000000..c821318
--- /dev/null
+++ b/python/lib/python3.11/site-packages/bs4/formatter.py
@@ -0,0 +1,185 @@
+from bs4.dammit import EntitySubstitution
+
+class Formatter(EntitySubstitution):
+ """Describes a strategy to use when outputting a parse tree to a string.
+
+ Some parts of this strategy come from the distinction between
+ HTML4, HTML5, and XML. Others are configurable by the user.
+
+ Formatters are passed in as the `formatter` argument to methods
+ like `PageElement.encode`. Most people won't need to think about
+ formatters, and most people who need to think about them can pass
+ in one of these predefined strings as `formatter` rather than
+ making a new Formatter object:
+
+ For HTML documents:
+ * 'html' - HTML entity substitution for generic HTML documents. (default)
+ * 'html5' - HTML entity substitution for HTML5 documents, as
+ well as some optimizations in the way tags are rendered.
+ * 'minimal' - Only make the substitutions necessary to guarantee
+ valid HTML.
+ * None - Do not perform any substitution. This will be faster
+ but may result in invalid markup.
+
+ For XML documents:
+ * 'html' - Entity substitution for XHTML documents.
+ * 'minimal' - Only make the substitutions necessary to guarantee
+ valid XML. (default)
+ * None - Do not perform any substitution.
diff --git a/python/lib/python3.11/site-packages/bs4/formatter.py b/python/lib/python3.11/site-packages/bs4/formatter.py new file mode 100644 index 0000000..c821318 --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/formatter.py @@ -0,0 +1,185 @@
+from bs4.dammit import EntitySubstitution
+
+class Formatter(EntitySubstitution):
+    """Describes a strategy to use when outputting a parse tree to a string.
+
+    Some parts of this strategy come from the distinction between
+    HTML4, HTML5, and XML. Others are configurable by the user.
+
+    Formatters are passed in as the `formatter` argument to methods
+    like `PageElement.encode`. Most people won't need to think about
+    formatters, and most people who need to think about them can pass
+    in one of these predefined strings as `formatter` rather than
+    making a new Formatter object:
+
+    For HTML documents:
+     * 'html' - HTML entity substitution for generic HTML documents. (default)
+     * 'html5' - HTML entity substitution for HTML5 documents, as
+       well as some optimizations in the way tags are rendered.
+     * 'minimal' - Only make the substitutions necessary to guarantee
+       valid HTML.
+     * None - Do not perform any substitution. This will be faster
+       but may result in invalid markup.
+
+    For XML documents:
+     * 'html' - Entity substitution for XHTML documents.
+     * 'minimal' - Only make the substitutions necessary to guarantee
+       valid XML. (default)
+     * None - Do not perform any substitution. This will be faster
+       but may result in invalid markup.
+    """
+    # Registries of XML and HTML formatters.
+    XML_FORMATTERS = {}
+    HTML_FORMATTERS = {}
+
+    HTML = 'html'
+    XML = 'xml'
+
+    HTML_DEFAULTS = dict(
+        cdata_containing_tags=set(["script", "style"]),
+    )
+
+    def _default(self, language, value, kwarg):
+        if value is not None:
+            return value
+        if language == self.XML:
+            return set()
+        return self.HTML_DEFAULTS[kwarg]
+
+    def __init__(
+            self, language=None, entity_substitution=None,
+            void_element_close_prefix='/', cdata_containing_tags=None,
+            empty_attributes_are_booleans=False, indent=1,
+    ):
+        """Constructor.
+
+        :param language: This should be Formatter.XML if you are formatting
+           XML markup and Formatter.HTML if you are formatting HTML markup.
+
+        :param entity_substitution: A function to call to replace special
+           characters with XML/HTML entities. For examples, see
+           bs4.dammit.EntitySubstitution.substitute_html and substitute_xml.
+        :param void_element_close_prefix: By default, void elements
+           are represented as <tag/> (XML rules) rather than <tag>
+           (HTML rules). To get <tag>, pass in the empty string.
+        :param cdata_containing_tags: The list of tags that are defined
+           as containing CDATA in this dialect. For example, in HTML,
+           <script> and <style> tags are defined as containing CDATA,
+           and their contents should not be formatted.
+        :param empty_attributes_are_booleans: Render attributes whose value
+            is the empty string as HTML-style boolean attributes.
+            (Attributes whose value is None are always rendered this way.)
+
+        :param indent: If indent is a non-negative integer or string,
+            then the contents of elements will be indented
+            appropriately when pretty-printing. An indent level of 0,
+            negative, or "" will only insert newlines. Using a
+            positive integer indent indents that many spaces per
+            level. If indent is a string (such as "\t"), that string
+            is used to indent each level. The default behavior is to
+            indent one space per level.
+        """
+        self.language = language
+        self.entity_substitution = entity_substitution
+        self.void_element_close_prefix = void_element_close_prefix
+        self.cdata_containing_tags = self._default(
+            language, cdata_containing_tags, 'cdata_containing_tags'
+        )
+        self.empty_attributes_are_booleans = empty_attributes_are_booleans
+        if indent is None:
+            indent = 0
+        if isinstance(indent, int):
+            if indent < 0:
+                indent = 0
+            indent = ' ' * indent
+        elif isinstance(indent, str):
+            indent = indent
+        else:
+            indent = ' '
+        self.indent = indent
+
+    def substitute(self, ns):
+        """Process a string that needs to undergo entity substitution.
+        This may be a string encountered in an attribute value or as
+        text.
+
+        :param ns: A string.
+        :return: A string with certain characters replaced by named
+           or numeric entities.
+        """
+        if not self.entity_substitution:
+            return ns
+        from .element import NavigableString
+        if (isinstance(ns, NavigableString)
+            and ns.parent is not None
+            and ns.parent.name in self.cdata_containing_tags):
+            # Do nothing.
+            return ns
+        # Substitute.
+        return self.entity_substitution(ns)
+
+    def attribute_value(self, value):
+        """Process the value of an attribute.
+
+        :param value: A string.
+        :return: A string with certain characters replaced by named
+           or numeric entities.
+        """
+        return self.substitute(value)
+
+    def attributes(self, tag):
+        """Reorder a tag's attributes however you want.
+
+        By default, attributes are sorted alphabetically. This makes
+        behavior consistent between Python 2 and Python 3, and preserves
+        backwards compatibility with older versions of Beautiful Soup.
+
+        If `empty_attributes_are_booleans` is True, then attributes whose
+        values are set to the empty string will be treated as boolean
+        attributes.
+        """
+        if tag.attrs is None:
+            return []
+        return sorted(
+            (k, (None if self.empty_attributes_are_booleans and v == '' else v))
+            for k, v in list(tag.attrs.items())
+        )
+
+
+class HTMLFormatter(Formatter):
+    """A generic Formatter for HTML."""
+    REGISTRY = {}
+    def __init__(self, *args, **kwargs):
+        super(HTMLFormatter, self).__init__(self.HTML, *args, **kwargs)
+
+
+class XMLFormatter(Formatter):
+    """A generic Formatter for XML."""
+    REGISTRY = {}
+    def __init__(self, *args, **kwargs):
+        super(XMLFormatter, self).__init__(self.XML, *args, **kwargs)
+
+
+# Set up aliases for the default formatters.
+HTMLFormatter.REGISTRY['html'] = HTMLFormatter(
+    entity_substitution=EntitySubstitution.substitute_html
+)
+HTMLFormatter.REGISTRY["html5"] = HTMLFormatter(
+    entity_substitution=EntitySubstitution.substitute_html,
+    void_element_close_prefix=None,
+    empty_attributes_are_booleans=True,
+)
+HTMLFormatter.REGISTRY["minimal"] = HTMLFormatter(
+    entity_substitution=EntitySubstitution.substitute_xml
+)
+HTMLFormatter.REGISTRY[None] = HTMLFormatter(
+    entity_substitution=None
+)
+XMLFormatter.REGISTRY["html"] = XMLFormatter(
+    entity_substitution=EntitySubstitution.substitute_html
+)
+XMLFormatter.REGISTRY["minimal"] = XMLFormatter(
+    entity_substitution=EntitySubstitution.substitute_xml
+)
+XMLFormatter.REGISTRY[None] = XMLFormatter(
+    entity_substitution=None
+)
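The registry entries above are what the string values of the `formatter` argument resolve to, so passing the string or the Formatter object is equivalent. A small sketch, assuming html.parser (the markup is arbitrary):

    from bs4 import BeautifulSoup
    from bs4.formatter import HTMLFormatter

    soup = BeautifulSoup("<p>AT&amp;T &lt;rocks&gt;</p>", "html.parser")
    # Both calls produce '<p>AT&amp;T &lt;rocks&gt;</p>':
    print(soup.p.decode(formatter="minimal"))
    print(soup.p.decode(formatter=HTMLFormatter.REGISTRY["minimal"]))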
+try: + from soupsieve import SelectorSyntaxError + SOUP_SIEVE_PRESENT = True +except ImportError: + SOUP_SIEVE_PRESENT = False + +try: + import html5lib + HTML5LIB_PRESENT = True +except ImportError: + HTML5LIB_PRESENT = False + +try: + import lxml.etree + LXML_PRESENT = True + LXML_VERSION = lxml.etree.LXML_VERSION +except ImportError: + LXML_PRESENT = False + LXML_VERSION = (0,) + +BAD_DOCUMENT = """A bare string +<!DOCTYPE xsl:stylesheet SYSTEM "htmlent.dtd"> +<!DOCTYPE xsl:stylesheet PUBLIC "htmlent.dtd"> +<div><![CDATA[A CDATA section where it doesn't belong]]></div> +<div><svg><![CDATA[HTML5 does allow CDATA sections in SVG]]></svg></div> +<div>A <meta> tag</div> +<div>A <br> tag that supposedly has contents.</br></div> +<div>AT&T</div> +<div><textarea>Within a textarea, markup like <b> tags and <&<& should be treated as literal</textarea></div> +<div><script>if (i < 2) { alert("<b>Markup within script tags should be treated as literal.</b>"); }</script></div> +<div>This numeric entity is missing the final semicolon: <x t="piñata"></div> +<div><a href="http://example.com/</a> that attribute value never got closed</div> +<div><a href="foo</a>, </a><a href="bar">that attribute value was closed by the subsequent tag</a></div> +<! This document starts with a bogus declaration ><div>a</div> +<div>This document contains <!an incomplete declaration <div>(do you see it?)</div> +<div>This document ends with <!an incomplete declaration +<div><a style={height:21px;}>That attribute value was bogus</a></div> +<! DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">The doctype is invalid because it contains extra whitespace +<div><table><td nowrap>That boolean attribute had no value</td></table></div> +<div>Here's a nonexistent entity: &#foo; (do you see it?)</div> +<div>This document ends before the entity finishes: > +<div><p>Paragraphs shouldn't contain block display elements, but this one does: <dl><dt>you see?</dt></p> +<b b="20" a="1" b="10" a="2" a="3" a="4">Multiple values for the same attribute.</b> +<div><table><tr><td>Here's a table</td></tr></table></div> +<div><table id="1"><tr><td>Here's a nested table:<table id="2"><tr><td>foo</td></tr></table></td></div> +<div>This tag contains nothing but whitespace: <b> </b></div> +<div><blockquote><p><b>This p tag is cut off by</blockquote></p>the end of the blockquote tag</div> +<div><table><div>This table contains bare markup</div></table></div> +<div><div id="1">\n <a href="link1">This link is never closed.\n</div>\n<div id="2">\n <div id="3">\n <a href="link2">This link is closed.</a>\n </div>\n</div></div> +<div>This document contains a <!DOCTYPE surprise>surprise doctype</div> +<div><a><B><Cd><EFG>Mixed case tags are folded to lowercase</efg></CD></b></A></div> +<div><our\u2603>Tag name contains Unicode characters</our\u2603></div> +<div><a \u2603="snowman">Attribute name contains Unicode characters</a></div> +<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> +""" + + +class SoupTest(object): + + @property + def default_builder(self): + return default_builder + + def soup(self, markup, **kwargs): + """Build a Beautiful Soup object from markup.""" + builder = kwargs.pop('builder', self.default_builder) + return BeautifulSoup(markup, builder=builder, **kwargs) + + def document_for(self, markup, **kwargs): + """Turn an HTML fragment into a document. + + The details depend on the builder. 
+ """ + return self.default_builder(**kwargs).test_fragment_to_document(markup) + + def assert_soup(self, to_parse, compare_parsed_to=None): + """Parse some markup using Beautiful Soup and verify that + the output markup is as expected. + """ + builder = self.default_builder + obj = BeautifulSoup(to_parse, builder=builder) + if compare_parsed_to is None: + compare_parsed_to = to_parse + + # Verify that the documents come out the same. + assert obj.decode() == self.document_for(compare_parsed_to) + + # Also run some checks on the BeautifulSoup object itself: + + # Verify that every tag that was opened was eventually closed. + + # There are no tags in the open tag counter. + assert all(v==0 for v in list(obj.open_tag_counter.values())) + + # The only tag in the tag stack is the one for the root + # document. + assert [obj.ROOT_TAG_NAME] == [x.name for x in obj.tagStack] + + assertSoupEquals = assert_soup + + def assertConnectedness(self, element): + """Ensure that next_element and previous_element are properly + set for all descendants of the given element. + """ + earlier = None + for e in element.descendants: + if earlier: + assert e == earlier.next_element + assert earlier == e.previous_element + earlier = e + + def linkage_validator(self, el, _recursive_call=False): + """Ensure proper linkage throughout the document.""" + descendant = None + # Document element should have no previous element or previous sibling. + # It also shouldn't have a next sibling. + if el.parent is None: + assert el.previous_element is None,\ + "Bad previous_element\nNODE: {}\nPREV: {}\nEXPECTED: {}".format( + el, el.previous_element, None + ) + assert el.previous_sibling is None,\ + "Bad previous_sibling\nNODE: {}\nPREV: {}\nEXPECTED: {}".format( + el, el.previous_sibling, None + ) + assert el.next_sibling is None,\ + "Bad next_sibling\nNODE: {}\nNEXT: {}\nEXPECTED: {}".format( + el, el.next_sibling, None + ) + + idx = 0 + child = None + last_child = None + last_idx = len(el.contents) - 1 + for child in el.contents: + descendant = None + + # Parent should link next element to their first child + # That child should have no previous sibling + if idx == 0: + if el.parent is not None: + assert el.next_element is child,\ + "Bad next_element\nNODE: {}\nNEXT: {}\nEXPECTED: {}".format( + el, el.next_element, child + ) + assert child.previous_element is el,\ + "Bad previous_element\nNODE: {}\nPREV: {}\nEXPECTED: {}".format( + child, child.previous_element, el + ) + assert child.previous_sibling is None,\ + "Bad previous_sibling\nNODE: {}\nPREV {}\nEXPECTED: {}".format( + child, child.previous_sibling, None + ) + + # If not the first child, previous index should link as sibling to this index + # Previous element should match the last index or the last bubbled up descendant + else: + assert child.previous_sibling is el.contents[idx - 1],\ + "Bad previous_sibling\nNODE: {}\nPREV {}\nEXPECTED {}".format( + child, child.previous_sibling, el.contents[idx - 1] + ) + assert el.contents[idx - 1].next_sibling is child,\ + "Bad next_sibling\nNODE: {}\nNEXT {}\nEXPECTED {}".format( + el.contents[idx - 1], el.contents[idx - 1].next_sibling, child + ) + + if last_child is not None: + assert child.previous_element is last_child,\ + "Bad previous_element\nNODE: {}\nPREV {}\nEXPECTED {}\nCONTENTS {}".format( + child, child.previous_element, last_child, child.parent.contents + ) + assert last_child.next_element is child,\ + "Bad next_element\nNODE: {}\nNEXT {}\nEXPECTED {}".format( + last_child, last_child.next_element, child + ) + + if 
isinstance(child, Tag) and child.contents:
+                descendant = self.linkage_validator(child, True)
+                # A bubbled up descendant should have no next siblings
+                assert descendant.next_sibling is None,\
+                    "Bad next_sibling\nNODE: {}\nNEXT {}\nEXPECTED {}".format(
+                        descendant, descendant.next_sibling, None
+                    )
+
+            # Mark last child as either the bubbled up descendant or the current child
+            if descendant is not None:
+                last_child = descendant
+            else:
+                last_child = child
+
+            # If last child, there are no next siblings
+            if idx == last_idx:
+                assert child.next_sibling is None,\
+                    "Bad next_sibling\nNODE: {}\nNEXT {}\nEXPECTED {}".format(
+                        child, child.next_sibling, None
+                    )
+            idx += 1
+
+        child = descendant if descendant is not None else child
+        if child is None:
+            child = el
+
+        if not _recursive_call and child is not None:
+            target = el
+            while True:
+                if target is None:
+                    assert child.next_element is None, \
+                        "Bad next_element\nNODE: {}\nNEXT {}\nEXPECTED {}".format(
+                            child, child.next_element, None
+                        )
+                    break
+                elif target.next_sibling is not None:
+                    assert child.next_element is target.next_sibling, \
+                        "Bad next_element\nNODE: {}\nNEXT {}\nEXPECTED {}".format(
+                            child, child.next_element, target.next_sibling
+                        )
+                    break
+                target = target.parent
+
+            # We are done, so nothing to return
+            return None
+        else:
+            # Return the child to the recursive caller
+            return child
+
+    def assert_selects(self, tags, should_match):
+        """Make sure that the given tags have the correct text.
+
+        This is used in tests that define a bunch of tags, each
+        containing a single string, and then select certain strings by
+        some mechanism.
+        """
+        assert [tag.string for tag in tags] == should_match
+
+    def assert_selects_ids(self, tags, should_match):
+        """Make sure that the given tags have the correct IDs.
+
+        This is used in tests that define a bunch of tags, each
+        containing a single string, and then select certain strings by
+        some mechanism.
+        """
+        assert [tag['id'] for tag in tags] == should_match
+
+
+class TreeBuilderSmokeTest(object):
+    # Tests that are common to HTML and XML tree builders.
+
+    @pytest.mark.parametrize(
+        "multi_valued_attributes",
+        [None, {}, dict(b=['class']), {'*': ['notclass']}]
+    )
+    def test_attribute_not_multi_valued(self, multi_valued_attributes):
+        markup = '<html xmlns="http://www.w3.org/1999/xhtml"><a class="a b c"></html>'
+        soup = self.soup(markup, multi_valued_attributes=multi_valued_attributes)
+        assert soup.a['class'] == 'a b c'
+
+    @pytest.mark.parametrize(
+        "multi_valued_attributes", [dict(a=['class']), {'*': ['class']}]
+    )
+    def test_attribute_multi_valued(self, multi_valued_attributes):
+        markup = '<a class="a b c">'
+        soup = self.soup(
+            markup, multi_valued_attributes=multi_valued_attributes
+        )
+        assert soup.a['class'] == ['a', 'b', 'c']
+
+    def test_invalid_doctype(self):
+        markup = '<![if word]>content<![endif]>'
+        markup = '<!DOCTYPE html]ff>'
+        soup = self.soup(markup)
+
+class HTMLTreeBuilderSmokeTest(TreeBuilderSmokeTest):
+
+    """A basic test of a treebuilder's competence.
+
+    Any HTML treebuilder, present or future, should be able to pass
+    these tests. With invalid markup, there's room for interpretation,
+    and different parsers can handle it differently. But with the
+    markup in these tests, there's not much room for interpretation.
+    """
+
+    def test_empty_element_tags(self):
+        """Verify that all HTML4 and HTML5 empty element (aka void element) tags
+        are handled correctly.
+ """ + for name in [ + 'area', 'base', 'br', 'col', 'embed', 'hr', 'img', 'input', 'keygen', 'link', 'menuitem', 'meta', 'param', 'source', 'track', 'wbr', + 'spacer', 'frame' + ]: + soup = self.soup("") + new_tag = soup.new_tag(name) + assert new_tag.is_empty_element == True + + def test_special_string_containers(self): + soup = self.soup( + "<style>Some CSS</style><script>Some Javascript</script>" + ) + assert isinstance(soup.style.string, Stylesheet) + assert isinstance(soup.script.string, Script) + + soup = self.soup( + "<style><!--Some CSS--></style>" + ) + assert isinstance(soup.style.string, Stylesheet) + # The contents of the style tag resemble an HTML comment, but + # it's not treated as a comment. + assert soup.style.string == "<!--Some CSS-->" + assert isinstance(soup.style.string, Stylesheet) + + def test_pickle_and_unpickle_identity(self): + # Pickling a tree, then unpickling it, yields a tree identical + # to the original. + tree = self.soup("<a><b>foo</a>") + dumped = pickle.dumps(tree, 2) + loaded = pickle.loads(dumped) + assert loaded.__class__ == BeautifulSoup + assert loaded.decode() == tree.decode() + + def assertDoctypeHandled(self, doctype_fragment): + """Assert that a given doctype string is handled correctly.""" + doctype_str, soup = self._document_with_doctype(doctype_fragment) + + # Make sure a Doctype object was created. + doctype = soup.contents[0] + assert doctype.__class__ == Doctype + assert doctype == doctype_fragment + assert soup.encode("utf8")[:len(doctype_str)] == doctype_str + + # Make sure that the doctype was correctly associated with the + # parse tree and that the rest of the document parsed. + assert soup.p.contents[0] == 'foo' + + def _document_with_doctype(self, doctype_fragment, doctype_string="DOCTYPE"): + """Generate and parse a document with the given doctype.""" + doctype = '<!%s %s>' % (doctype_string, doctype_fragment) + markup = doctype + '\n<p>foo</p>' + soup = self.soup(markup) + return doctype.encode("utf8"), soup + + def test_normal_doctypes(self): + """Make sure normal, everyday HTML doctypes are handled correctly.""" + self.assertDoctypeHandled("html") + self.assertDoctypeHandled( + 'html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"') + + def test_empty_doctype(self): + soup = self.soup("<!DOCTYPE>") + doctype = soup.contents[0] + assert "" == doctype.strip() + + def test_mixed_case_doctype(self): + # A lowercase or mixed-case doctype becomes a Doctype. + for doctype_fragment in ("doctype", "DocType"): + doctype_str, soup = self._document_with_doctype( + "html", doctype_fragment + ) + + # Make sure a Doctype object was created and that the DOCTYPE + # is uppercase. + doctype = soup.contents[0] + assert doctype.__class__ == Doctype + assert doctype == "html" + assert soup.encode("utf8")[:len(doctype_str)] == b"<!DOCTYPE html>" + + # Make sure that the doctype was correctly associated with the + # parse tree and that the rest of the document parsed. + assert soup.p.contents[0] == 'foo' + + def test_public_doctype_with_url(self): + doctype = 'html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"' + self.assertDoctypeHandled(doctype) + + def test_system_doctype(self): + self.assertDoctypeHandled('foo SYSTEM "http://www.example.com/"') + + def test_namespaced_system_doctype(self): + # We can handle a namespaced doctype with a system ID. 
+        self.assertDoctypeHandled('xsl:stylesheet SYSTEM "htmlent.dtd"')
+
+    def test_namespaced_public_doctype(self):
+        # Test a namespaced doctype with a public id.
+        self.assertDoctypeHandled('xsl:stylesheet PUBLIC "htmlent.dtd"')
+
+    def test_real_xhtml_document(self):
+        """A real XHTML document should come out more or less the same as it went in."""
+        markup = b"""<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head><title>Hello.</title></head>
+<body>Goodbye.</body>
+</html>"""
+        with warnings.catch_warnings(record=True) as w:
+            soup = self.soup(markup)
+        assert soup.encode("utf-8").replace(b"\n", b"") == markup.replace(b"\n", b"")
+
+        # No warning was issued about parsing an XML document as HTML,
+        # because XHTML is both.
+        assert w == []
+
+    def test_namespaced_html(self):
+        # When a namespaced XML document is parsed as HTML it should
+        # be treated as HTML with weird tag names.
+        markup = b"""<ns1:foo>content</ns1:foo><ns1:foo/><ns2:foo/>"""
+        with warnings.catch_warnings(record=True) as w:
+            soup = self.soup(markup)
+
+        assert 2 == len(soup.find_all("ns1:foo"))
+
+        # n.b. no "you're parsing XML as HTML" warning was given
+        # because there was no XML declaration.
+        assert [] == w
+
+    def test_detect_xml_parsed_as_html(self):
+        # A warning is issued when parsing an XML document as HTML,
+        # but basic stuff should still work.
+        markup = b"""<?xml version="1.0" encoding="utf-8"?><tag>string</tag>"""
+        with warnings.catch_warnings(record=True) as w:
+            soup = self.soup(markup)
+        assert soup.tag.string == 'string'
+        [warning] = w
+        assert isinstance(warning.message, XMLParsedAsHTMLWarning)
+        assert str(warning.message) == XMLParsedAsHTMLWarning.MESSAGE
+
+        # NOTE: the warning is not issued if the document appears to
+        # be XHTML (tested with test_real_xhtml_document in the
+        # superclass) or if there is no XML declaration (tested with
+        # test_namespaced_html in the superclass).
+
+    def test_processing_instruction(self):
+        # We test both Unicode and bytestring to verify that
+        # process_markup correctly sets processing_instruction_class
+        # even when the markup is already Unicode and there is no
+        # need to process anything.
+        markup = """<?PITarget PIContent?>"""
+        soup = self.soup(markup)
+        assert markup == soup.decode()
+
+        markup = b"""<?PITarget PIContent?>"""
+        soup = self.soup(markup)
+        assert markup == soup.encode("utf8")
+
+    def test_deepcopy(self):
+        """Make sure you can copy the tree builder.
+
+        This is important because the builder is part of a
+        BeautifulSoup object, and we want to be able to copy that.
+        """
+        copy.deepcopy(self.default_builder)
+
+    def test_p_tag_is_never_empty_element(self):
+        """A <p> tag is never designated as an empty-element tag.
+
+        Even if the markup shows it as an empty-element tag, it
+        shouldn't be presented that way.
+        """
+        soup = self.soup("<p/>")
+        assert not soup.p.is_empty_element
+        assert str(soup.p) == "<p></p>"
+
+    def test_unclosed_tags_get_closed(self):
+        """A tag that's not closed by the end of the document should be closed.
+
+        This applies to all tags except empty-element tags.
+        """
+        self.assert_soup("<p>", "<p></p>")
+        self.assert_soup("<b>", "<b></b>")
+
+        self.assert_soup("<br>", "<br/>")
+    def test_br_is_always_empty_element_tag(self):
+        """A <br> tag is designated as an empty-element tag.
+
+        Some parsers treat <br></br> as one <br/> tag, some parsers as
+        two tags, but it should always be an empty-element tag.
+        """
+        soup = self.soup("<br></br>")
+        assert soup.br.is_empty_element
+        assert str(soup.br) == "<br/>"
+
+    def test_nested_formatting_elements(self):
+        self.assert_soup("<em><em></em></em>")
+
+    def test_double_head(self):
+        html = '''<!DOCTYPE html>
+<html>
+<head>
+<title>Ordinary HEAD element test</title>
+</head>
+<script type="text/javascript">
+alert("Help!")
+</script>
+<body>
+Hello, world!
+</body>
+</html>
+'''
+        soup = self.soup(html)
+        assert "text/javascript" == soup.find('script')['type']
+
+    def test_comment(self):
+        # Comments are represented as Comment objects.
+        markup = "<p>foo<!--foobar-->baz</p>"
+        self.assert_soup(markup)
+
+        soup = self.soup(markup)
+        comment = soup.find(string="foobar")
+        assert comment.__class__ == Comment
+
+        # The comment is properly integrated into the tree.
+        foo = soup.find(string="foo")
+        assert comment == foo.next_element
+        baz = soup.find(string="baz")
+        assert comment == baz.previous_element
+
+    def test_preserved_whitespace_in_pre_and_textarea(self):
+        """Whitespace must be preserved in <pre> and <textarea> tags,
+        even if that would mean not prettifying the markup.
+        """
+        pre_markup = "<pre>a   z</pre>\n"
+        textarea_markup = "<textarea> woo\nwoo  </textarea>\n"
+        self.assert_soup(pre_markup)
+        self.assert_soup(textarea_markup)
+
+        soup = self.soup(pre_markup)
+        assert soup.pre.prettify() == pre_markup
+
+        soup = self.soup(textarea_markup)
+        assert soup.textarea.prettify() == textarea_markup
+
+        soup = self.soup("<textarea></textarea>")
+        assert soup.textarea.prettify() == "<textarea></textarea>\n"
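prettify() normally re-indents every element, so the test above guards the one exception: text inside <pre> and <textarea> must round-trip byte for byte. A quick illustration, assuming html.parser:

    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<div><pre>a   z</pre></div>", "html.parser")
    print(repr(soup.pre.prettify()))   # '<pre>a   z</pre>\n' -- inner spaces kept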
+    def test_nested_inline_elements(self):
+        """Inline elements can be nested indefinitely."""
+        b_tag = "Inside a B tag"
+        self.assert_soup(b_tag)
+
+        nested_b_tag = "

A nested tag

" + self.assert_soup(nested_b_tag) + + double_nested_b_tag = "

A doubly nested tag

" + self.assert_soup(nested_b_tag) + + def test_nested_block_level_elements(self): + """Block elements can be nested.""" + soup = self.soup('

Foo

') + blockquote = soup.blockquote + assert blockquote.p.b.string == 'Foo' + assert blockquote.b.string == 'Foo' + + def test_correctly_nested_tables(self): + """One table can go inside another one.""" + markup = ('' + '' + "') + + self.assert_soup( + markup, + '
Here's another table:" + '' + '' + '
foo
Here\'s another table:' + '
foo
' + '
') + + self.assert_soup( + "" + "" + "
Foo
Bar
Baz
") + + def test_multivalued_attribute_with_whitespace(self): + # Whitespace separating the values of a multi-valued attribute + # should be ignored. + + markup = '
' + soup = self.soup(markup) + assert ['foo', 'bar'] == soup.div['class'] + + # If you search by the literal name of the class it's like the whitespace + # wasn't there. + assert soup.div == soup.find('div', class_="foo bar") + + def test_deeply_nested_multivalued_attribute(self): + # html5lib can set the attributes of the same tag many times + # as it rearranges the tree. This has caused problems with + # multivalued attributes. + markup = '
' + soup = self.soup(markup) + assert ["css"] == soup.div.div['class'] + + def test_multivalued_attribute_on_html(self): + # html5lib uses a different API to set the attributes ot the + # tag. This has caused problems with multivalued + # attributes. + markup = '' + soup = self.soup(markup) + assert ["a", "b"] == soup.html['class'] + + def test_angle_brackets_in_attribute_values_are_escaped(self): + self.assert_soup('', '') + + def test_strings_resembling_character_entity_references(self): + # "&T" and "&p" look like incomplete character entities, but they are + # not. + self.assert_soup( + "
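Because class is a multi-valued attribute in HTML, the parsed value is a list, yet a search by the full class string still matches -- exactly the behavior the tests above rely on. A sketch, assuming html.parser:

    from bs4 import BeautifulSoup

    soup = BeautifulSoup('<div class=" foo  bar "></div>', "html.parser")
    print(soup.div["class"])                    # ['foo', 'bar']
    print(soup.find("div", class_="foo bar"))   # still matches; whitespace is ignored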
+    def test_strings_resembling_character_entity_references(self):
+        # "&T" and "&p" look like incomplete character entities, but they are
+        # not.
+        self.assert_soup(
+            "<p>&bull; AT&T is in the s&p 500</p>",
+            "<p>\u2022 AT&amp;T is in the s&amp;p 500</p>"
+        )
+
+    def test_apos_entity(self):
+        self.assert_soup(
+            "<p>Bob&apos;s Bar</p>",
+            "<p>Bob's Bar</p>",
+        )
+
+    def test_entities_in_foreign_document_encoding(self):
+        # &#147; and &#148; are invalid numeric entities referencing
+        # Windows-1252 characters. &#45; references a character common
+        # to Windows-1252 and Unicode, and &#9731; references a
+        # character only found in Unicode.
+        #
+        # All of these entities should be converted to Unicode
+        # characters.
+        markup = "<p>&#147;Hello&#148; &#45;&#9731;</p>"
+        soup = self.soup(markup)
+        assert "“Hello” -☃" == soup.p.string
+
+    def test_entities_in_attributes_converted_to_unicode(self):
+        expect = '<p id="pi\N{LATIN SMALL LETTER N WITH TILDE}ata"></p>'
+        self.assert_soup('<p id="pi&#241;ata"></p>', expect)
+        self.assert_soup('<p id="pi&#xf1;ata"></p>', expect)
+        self.assert_soup('<p id="pi&#Xf1;ata"></p>', expect)
+        self.assert_soup('<p id="pi&ntilde;ata"></p>', expect)
+
+    def test_entities_in_text_converted_to_unicode(self):
+        expect = '<p>pi\N{LATIN SMALL LETTER N WITH TILDE}ata</p>'
+        self.assert_soup("<p>pi&#241;ata</p>", expect)
+        self.assert_soup("<p>pi&#xf1;ata</p>", expect)
+        self.assert_soup("<p>pi&#Xf1;ata</p>", expect)
+        self.assert_soup("<p>pi&ntilde;ata</p>", expect)
+
+    def test_quot_entity_converted_to_quotation_mark(self):
+        self.assert_soup("<p>I said &quot;good day!&quot;</p>",
+                         '<p>I said "good day!"</p>')
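All of the entity tests above reduce to one rule: entities are decoded to Unicode characters at parse time, and on output only the characters HTML itself requires are re-encoded. A sketch, assuming html.parser:

    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<p>pi&ntilde;ata &amp; AT&amp;T</p>", "html.parser")
    print(soup.p.string)   # 'piñata & AT&T' -- entities already decoded
    print(str(soup.p))     # '<p>piñata &amp; AT&amp;T</p>' -- only & re-escaped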
+    def test_out_of_range_entity(self):
+        expect = "\N{REPLACEMENT CHARACTER}"
+        self.assert_soup("&#10000000000000;", expect)
+        self.assert_soup("&#x10000000000000;", expect)
+        self.assert_soup("&#1000000000;", expect)
+
+    def test_multipart_strings(self):
+        "Mostly to prevent a recurrence of a bug in the html5lib treebuilder."
+        soup = self.soup("<html><h2>\nfoo</h2><p></p></html>")
+        assert "p" == soup.h2.string.next_element.name
+        assert "p" == soup.p.name
+        self.assertConnectedness(soup)
+
+    def test_empty_element_tags(self):
+        """Verify consistent handling of empty-element tags,
+        no matter how they come in through the markup.
+        """
+        self.assert_soup('<br/><br/><br/>', "<br/><br/><br/>")
+        self.assert_soup('<br /><br /><br />', "<br/><br/><br/>")
+
+    def test_head_tag_between_head_and_body(self):
+        "Prevent recurrence of a bug in the html5lib treebuilder."
+        content = """<html><head></head>
+  <link></link>
+  <body>foo</body>
+</html>
+"""
+        soup = self.soup(content)
+        assert soup.html.body is not None
+        self.assertConnectedness(soup)
+
+    def test_multiple_copies_of_a_tag(self):
+        "Prevent recurrence of a bug in the html5lib treebuilder."
+        content = """<!DOCTYPE html>
+<html>
+ <body>
+   <article id="a" >
+   <div><a href="1"></div>
+   <footer>
+     <a href="2"></a>
+   </footer>
+  </article>
+  </body>
+</html>
+"""
+        soup = self.soup(content)
+        self.assertConnectedness(soup.article)
+
+    def test_basic_namespaces(self):
+        """Parsers don't need to *understand* namespaces, but at the
+        very least they should not choke on namespaces or lose
+        data."""
+
+        markup = b'<html xmlns="http://www.w3.org/1999/xhtml" xmlns:mathml="http://www.w3.org/1998/Math/MathML" xmlns:svg="http://www.w3.org/2000/svg"><head></head><body><mathml:msqrt>4</mathml:msqrt><b svg:fill="red"></b></body></html>'
+        soup = self.soup(markup)
+        assert markup == soup.encode()
+        html = soup.html
+        assert 'http://www.w3.org/1999/xhtml' == soup.html['xmlns']
+        assert 'http://www.w3.org/1998/Math/MathML' == soup.html['xmlns:mathml']
+        assert 'http://www.w3.org/2000/svg' == soup.html['xmlns:svg']
+
+    def test_multivalued_attribute_value_becomes_list(self):
+        markup = b'<a class="foo bar">'
+        soup = self.soup(markup)
+        assert ['foo', 'bar'] == soup.a['class']
+
+    #
+    # Generally speaking, tests below this point are more tests of
+    # Beautiful Soup than tests of the tree builders. But parsers are
+    # weird, so we run these tests separately for every tree builder
+    # to detect any differences between them.
+    #
+
+    def test_can_parse_unicode_document(self):
+        # A seemingly innocuous document... but it's in Unicode! And
+        # it contains characters that can't be represented in the
+        # encoding found in the declaration! The horror!
+        markup = '<html><head><meta encoding="euc-jp"></head><body>Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!</body>'
+        soup = self.soup(markup)
+        assert 'Sacr\xe9 bleu!' == soup.body.string
+
+    def test_soupstrainer(self):
+        """Parsers should be able to work with SoupStrainers."""
+        strainer = SoupStrainer("b")
+        soup = self.soup("A <b>bold</b> statement",
+                         parse_only=strainer)
+        assert soup.decode() == "<b>bold</b>"
+    def test_single_quote_attribute_values_become_double_quotes(self):
+        self.assert_soup("<foo attr='bar'></foo>",
+                         '<foo attr="bar"></foo>')
+
+    def test_attribute_values_with_nested_quotes_are_left_alone(self):
+        text = """<foo attr='bar "brawls" happen'>a</foo>"""
+        self.assert_soup(text)
+
+    def test_attribute_values_with_double_nested_quotes_get_quoted(self):
+        text = """<foo attr='bar "brawls" happen'>a</foo>"""
+        soup = self.soup(text)
+        soup.foo['attr'] = 'Brawls happen at "Bob\'s Bar"'
+        self.assert_soup(
+            soup.foo.decode(),
+            """<foo attr="Brawls happen at &quot;Bob\'s Bar&quot;">a</foo>""")
+
+    def test_ampersand_in_attribute_value_gets_escaped(self):
+        self.assert_soup('<this is="really messed up & stuff"></this>',
+                         '<this is="really messed up &amp; stuff"></this>')
+
+        self.assert_soup(
+            '<a href="http://example.org?a=1&b=2;3">foo</a>',
+            '<a href="http://example.org?a=1&amp;b=2;3">foo</a>')
+
+    def test_escaped_ampersand_in_attribute_value_is_left_alone(self):
+        self.assert_soup('<a href="http://example.org?a=1&amp;b=2;3"></a>')
+
+    def test_entities_in_strings_converted_during_parsing(self):
+        # Both XML and HTML entities are converted to Unicode characters
+        # during parsing.
+        text = "<p>&lt;&lt;sacr&eacute;&#32;bleu!&gt;&gt;</p>"
+        expected = "<p>&lt;&lt;sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!&gt;&gt;</p>"
+        self.assert_soup(text, expected)
+
+    def test_smart_quotes_converted_on_the_way_in(self):
+        # Microsoft smart quotes are converted to Unicode characters during
+        # parsing.
+        quote = b"<p>\x91Foo\x92</p>"
+        soup = self.soup(quote)
+        assert soup.p.string == "\N{LEFT SINGLE QUOTATION MARK}Foo\N{RIGHT SINGLE QUOTATION MARK}"
+
+    def test_non_breaking_spaces_converted_on_the_way_in(self):
+        soup = self.soup("<a>&nbsp;&nbsp;</a>")
+        assert soup.a.string == "\N{NO-BREAK SPACE}" * 2
+
+    def test_entities_converted_on_the_way_out(self):
+        text = "<p>&lt;&lt;sacr&eacute;&#32;bleu!&gt;&gt;</p>"
+        expected = "<p>&lt;&lt;sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!&gt;&gt;</p>".encode("utf-8")
+        soup = self.soup(text)
+        assert soup.p.encode("utf-8") == expected
+
+    def test_real_iso_8859_document(self):
+        # Smoke test of interrelated functionality, using an
+        # easy-to-understand document.
+
+        # Here it is in Unicode. Note that it claims to be in ISO-8859-1.
+        unicode_html = '<html><head><meta content="text/html; charset=ISO-8859-1" http-equiv="Content-type"/></head><body><p>Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!</p></body></html>'
+
+        # That's because we're going to encode it into ISO-8859-1,
+        # and use that to test.
+        iso_latin_html = unicode_html.encode("iso-8859-1")
+
+        # Parse the ISO-8859-1 HTML.
+        soup = self.soup(iso_latin_html)
+
+        # Encode it to UTF-8.
+        result = soup.encode("utf-8")
+
+        # What do we expect the result to look like? Well, it would
+        # look like unicode_html, except that the META tag would say
+        # UTF-8 instead of ISO-8859-1.
+        expected = unicode_html.replace("ISO-8859-1", "utf-8")
+
+        # And, of course, it would be in UTF-8, not Unicode.
+        expected = expected.encode("utf-8")
+
+        # Ta-da!
+        assert result == expected
+
+    def test_real_shift_jis_document(self):
+        # Smoke test to make sure the parser can handle a document in
+        # Shift-JIS encoding, without choking.
+        shift_jis_html = (
+            b'<html><head></head><body><pre>'
+            b'\x82\xb1\x82\xea\x82\xcdShift-JIS\x82\xc5\x83R\x81[\x83f'
+            b'\x83B\x83\x93\x83O\x82\xb3\x82\xea\x82\xbd\x93\xfa\x96{\x8c'
+            b'\xea\x82\xcc\x83t\x83@\x83C\x83\x8b\x82\xc5\x82\xb7\x81B'
+            b'</pre></body></html>')
+        unicode_html = shift_jis_html.decode("shift-jis")
+        soup = self.soup(unicode_html)
+
+        # Make sure the parse tree is correctly encoded to various
+        # encodings.
+        assert soup.encode("utf-8") == unicode_html.encode("utf-8")
+        assert soup.encode("euc_jp") == unicode_html.encode("euc_jp")
+
+    def test_real_hebrew_document(self):
+        # A real-world test to make sure we can convert ISO-8859-8 (a
+        # Hebrew encoding) to UTF-8.
+        hebrew_document = b'<html><head><title>Hebrew (ISO 8859-8) in Visual Directionality</title></head><body><h1>Hebrew (ISO 8859-8) in Visual Directionality</h1>\xed\xe5\xec\xf9</body></html>'
+        soup = self.soup(
+            hebrew_document, from_encoding="iso8859-8")
+        # Some tree builders call it iso8859-8, others call it iso-8859-8.
+        # That's not a difference we really care about.
+        assert soup.original_encoding in ('iso8859-8', 'iso-8859-8')
+        assert soup.encode('utf-8') == (
+            hebrew_document.decode("iso8859-8").encode("utf-8")
+        )
+
+    def test_meta_tag_reflects_current_encoding(self):
+        # Here's the <meta> tag saying that a document is
+        # encoded in Shift-JIS.
+        meta_tag = ('<meta content="text/html; charset=x-sjis" http-equiv="Content-type"/>')
+
+        # Here's a document incorporating that meta tag.
+        shift_jis_html = (
+            '<html><head>\n%s\n'
+            '<meta http-equiv="Content-language" content="ja"/>'
+            '</head><body>Shift-JIS markup goes here.') % meta_tag
+        soup = self.soup(shift_jis_html)
+
+        # Parse the document, and the charset is seemingly unaffected.
+        parsed_meta = soup.find('meta', {'http-equiv': 'Content-type'})
+        content = parsed_meta['content']
+        assert 'text/html; charset=x-sjis' == content
+
+        # But that value is actually a ContentMetaAttributeValue object.
+        assert isinstance(content, ContentMetaAttributeValue)
+
+        # And it will take on a value that reflects its current
+        # encoding.
+        assert 'text/html; charset=utf8' == content.encode("utf8")
+
+        # For the rest of the story, see TestSubstitutions in
+        # test_tree.py.
+
+    def test_html5_style_meta_tag_reflects_current_encoding(self):
+        # Here's the <meta> tag saying that a document is
+        # encoded in Shift-JIS.
+        meta_tag = ('<meta id="encoding" charset="x-sjis" />')
+
+        # Here's a document incorporating that meta tag.
+        shift_jis_html = (
+            '<html><head>\n%s\n'
+            '<meta http-equiv="Content-language" content="ja"/>'
+            '</head><body>Shift-JIS markup goes here.') % meta_tag
+        soup = self.soup(shift_jis_html)
+
+        # Parse the document, and the charset is seemingly unaffected.
+        parsed_meta = soup.find('meta', id="encoding")
+        charset = parsed_meta['charset']
+        assert 'x-sjis' == charset
+
+        # But that value is actually a CharsetMetaAttributeValue object.
+        assert isinstance(charset, CharsetMetaAttributeValue)
+
+        # And it will take on a value that reflects its current
+        # encoding.
+        assert 'utf8' == charset.encode("utf8")
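The two meta tests above work because bs4 stores these attribute values as CharsetMetaAttributeValue/ContentMetaAttributeValue objects, which rewrite themselves to match whatever encoding the document is serialized to. Roughly, assuming html.parser:

    from bs4 import BeautifulSoup

    soup = BeautifulSoup('<meta charset="x-sjis"><p>foo</p>', "html.parser")
    print(soup.meta["charset"])   # 'x-sjis' -- the value as parsed
    print(soup.encode("utf8"))    # ...charset="utf8"... -- rewritten on output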
+    def test_python_specific_encodings_not_used_in_charset(self):
+        # You can encode an HTML document using a Python-specific
+        # encoding, but that encoding won't be mentioned _inside_ the
+        # resulting document. Instead, the document will appear to
+        # have no encoding.
+        for markup in [
+            b'<meta charset="utf8"></head>'
+            b'<meta id="encoding" charset="utf8" />'
+        ]:
+            soup = self.soup(markup)
+            for encoding in PYTHON_SPECIFIC_ENCODINGS:
+                if encoding in (
+                    'idna', 'mbcs', 'oem', 'undefined',
+                    'string_escape', 'string-escape'
+                ):
+                    # For one reason or another, these will raise an
+                    # exception if we actually try to use them, so don't
+                    # bother.
+                    continue
+                encoded = soup.encode(encoding)
+                assert b'meta charset=""' in encoded
+                assert encoding.encode("ascii") not in encoded
+
+    def test_tag_with_no_attributes_can_have_attributes_added(self):
+        data = self.soup("<a>text</a>")
+        data.a['foo'] = 'bar'
+        assert '<a foo="bar">text</a>' == data.a.decode()
+
+    def test_closing_tag_with_no_opening_tag(self):
+        # Without BeautifulSoup.open_tag_counter, the </span> tag will
+        # cause _popToTag to be called over and over again as we look
+        # for a <span> tag that wasn't there. The result is that 'text2'
+        # will show up outside the body of the document.
+        soup = self.soup("<body><div><p>text1</p></span>text2</div></body>")
+        assert "<body><div><p>text1</p>text2</div></body>" == soup.body.decode()
+
+    def test_worst_case(self):
+        """Test the worst case (currently) for linking issues."""
+
+        soup = self.soup(BAD_DOCUMENT)
+        self.linkage_validator(soup)
+
+
+class XMLTreeBuilderSmokeTest(TreeBuilderSmokeTest):
+
+    def test_pickle_and_unpickle_identity(self):
+        # Pickling a tree, then unpickling it, yields a tree identical
+        # to the original.
+        tree = self.soup("<a><b>foo</a>")
+        dumped = pickle.dumps(tree, 2)
+        loaded = pickle.loads(dumped)
+        assert loaded.__class__ == BeautifulSoup
+        assert loaded.decode() == tree.decode()
+
+    def test_docstring_generated(self):
+        soup = self.soup("<root/>")
+        assert soup.encode() == b'<?xml version="1.0" encoding="utf-8"?>\n<root/>'
+
+    def test_xml_declaration(self):
+        markup = b"""<?xml version="1.0" encoding="utf8"?>\n<foo/>"""
+        soup = self.soup(markup)
+        assert markup == soup.encode("utf8")
+
+    def test_python_specific_encodings_not_used_in_xml_declaration(self):
+        # You can encode an XML document using a Python-specific
+        # encoding, but that encoding won't be mentioned _inside_ the
+        # resulting document.
+        markup = b"""<?xml version="1.0"?>\n<foo/>"""
+        soup = self.soup(markup)
+        for encoding in PYTHON_SPECIFIC_ENCODINGS:
+            if encoding in (
+                'idna', 'mbcs', 'oem', 'undefined',
+                'string_escape', 'string-escape'
+            ):
+                # For one reason or another, these will raise an
+                # exception if we actually try to use them, so don't
+                # bother.
+                continue
+            encoded = soup.encode(encoding)
+            assert b'<?xml version="1.0"?>' in encoded
+            assert encoding.encode("ascii") not in encoded
+
+    def test_processing_instruction(self):
+        markup = b"""<?xml version="1.0" encoding="utf8"?>\n<?PITarget PIContent?>"""
+        soup = self.soup(markup)
+        assert markup == soup.encode("utf8")
+
+    def test_real_xhtml_document(self):
+        """A real XHTML document should come out *exactly* the same as it went in."""
+        markup = b"""<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head><title>Hello.</title></head>
+<body>Goodbye.</body>
+</html>"""
+        soup = self.soup(markup)
+        assert soup.encode("utf-8") == markup
+
+    def test_nested_namespaces(self):
+        doc = b"""<?xml version="1.0" encoding="utf-8"?>
+<parent xmlns="http://ns1/">
+<child xmlns="http://ns2/" xmlns:ns3="http://ns3/">
+<grandchild ns3:attr="value" xmlns="http://ns4/"/>
+</child>
+</parent>"""
+        soup = self.soup(doc)
+        assert doc == soup.encode()
+
+    def test_formatter_processes_script_tag_for_xml_documents(self):
+        doc = """
+  <script type="text/javascript">
+  </script>
+"""
+        soup = BeautifulSoup(doc, "lxml-xml")
+        # lxml would have stripped this while parsing, but we can add
+        # it later.
+        soup.script.string = 'console.log("< < hey > > ");'
+        encoded = soup.encode()
+        assert b"&lt; &lt; hey &gt; &gt;" in encoded
+
+    def test_can_parse_unicode_document(self):
+        markup = '<?xml version="1.0" encoding="euc-jp"><root>Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!</root>'
+        soup = self.soup(markup)
+        assert 'Sacr\xe9 bleu!' == soup.root.string
+
+    def test_can_parse_unicode_document_begining_with_bom(self):
+        markup = '\N{BYTE ORDER MARK}<?xml version="1.0" encoding="euc-jp"><root>Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!</root>'
+        soup = self.soup(markup)
+        assert 'Sacr\xe9 bleu!' == soup.root.string
+
+    def test_popping_namespaced_tag(self):
+        markup = '<rss xmlns:dc="foo"><dc:creator>b</dc:creator><dc:date>2012-07-02T20:33:42Z</dc:date><dc:rights>c</dc:rights><image>d</image></rss>'
+        soup = self.soup(markup)
+        assert str(soup.rss) == markup
+
+    def test_docstring_includes_correct_encoding(self):
+        soup = self.soup("<root/>")
+        assert soup.encode("latin1") == b'<?xml version="1.0" encoding="latin1"?>\n<root/>'
+
+    def test_large_xml_document(self):
+        """A large XML document should come out the same as it went in."""
+        markup = (b'<?xml version="1.0" encoding="utf-8"?>\n<root>'
+                  + b'0' * (2**12)
+                  + b'</root>')
+        soup = self.soup(markup)
+        assert soup.encode("utf-8") == markup
+    def test_tags_are_empty_element_if_and_only_if_they_are_empty(self):
+        self.assert_soup("<p>", "<p/>")
+        self.assert_soup("<p>foo</p>")
+
+    def test_namespaces_are_preserved(self):
+        markup = '<root xmlns:a="http://example.com/" xmlns:b="http://example.net/"><a:foo>This tag is in the a namespace</a:foo><b:foo>This tag is in the b namespace</b:foo></root>'
+        soup = self.soup(markup)
+        root = soup.root
+        assert "http://example.com/" == root['xmlns:a']
+        assert "http://example.net/" == root['xmlns:b']
+
+    def test_closing_namespaced_tag(self):
+        markup = '<p xmlns:dc="http://purl.org/dc/elements/1.1/"><dc:date>20010504</dc:date></p>'
+        soup = self.soup(markup)
+        assert str(soup.p) == markup
+
+    def test_namespaced_attributes(self):
+        markup = '<foo xsi:method="m"/>'
+        soup = self.soup(markup)
+        assert str(soup.foo) == markup
+
+    def test_namespaced_attributes_xml_namespace(self):
+        markup = '<foo xml:lang="fr">bar</foo>'
+        soup = self.soup(markup)
+        assert str(soup.foo) == markup
+
+    def test_find_by_prefixed_name(self):
+        doc = """<?xml version="1.0" encoding="utf-8"?>
+<Document xmlns="http://example.com/ns0"
+    xmlns:ns1="http://example.com/ns1"
+    xmlns:ns2="http://example.com/ns2">
+    <ns1:tag>foo</ns1:tag>
+    <ns1:tag>bar</ns1:tag>
+    <ns2:tag key="value">baz</ns2:tag>
+</Document>
+"""
+        soup = self.soup(doc)
+
+        # There are three <tag> tags.
+        assert 3 == len(soup.find_all('tag'))
+
+        # But two of them are ns1:tag and one of them is ns2:tag.
+        assert 2 == len(soup.find_all('ns1:tag'))
+        assert 1 == len(soup.find_all('ns2:tag'))
+
+        assert 1 == len(soup.find_all('ns2:tag', key='value'))
+        assert 3 == len(soup.find_all(['ns1:tag', 'ns2:tag']))
+
+    def test_copy_tag_preserves_namespace(self):
+        xml = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<w:document xmlns:w="http://example.com/ns0"/>"""
+
+        soup = self.soup(xml)
+        tag = soup.document
+        duplicate = copy.copy(tag)
+
+        # The two tags have the same namespace prefix.
+        assert tag.prefix == duplicate.prefix
+
+    def test_worst_case(self):
+        """Test the worst case (currently) for linking issues."""
+
+        soup = self.soup(BAD_DOCUMENT)
+        self.linkage_validator(soup)
+
+
+class HTML5TreeBuilderSmokeTest(HTMLTreeBuilderSmokeTest):
+    """Smoke test for a tree builder that supports HTML5."""
+
+    def test_real_xhtml_document(self):
+        # Since XHTML is not HTML5, HTML5 parsers are not tested to handle
+        # XHTML documents in any particular way.
+        pass
+
+    def test_html_tags_have_namespace(self):
+        markup = "<a>"
+        soup = self.soup(markup)
+        assert "http://www.w3.org/1999/xhtml" == soup.a.namespace
+
+    def test_svg_tags_have_namespace(self):
+        markup = '<svg><circle/></svg>'
+        soup = self.soup(markup)
+        namespace = "http://www.w3.org/2000/svg"
+        assert namespace == soup.svg.namespace
+        assert namespace == soup.circle.namespace
+
+    def test_mathml_tags_have_namespace(self):
+        markup = '<math><msqrt>5</msqrt></math>'
+        soup = self.soup(markup)
+        namespace = 'http://www.w3.org/1998/Math/MathML'
+        assert namespace == soup.math.namespace
+        assert namespace == soup.msqrt.namespace
+
+    def test_xml_declaration_becomes_comment(self):
+        markup = '<?xml version="1.0" encoding="utf-8"?><html></html>'
+        soup = self.soup(markup)
+        assert isinstance(soup.contents[0], Comment)
+        assert soup.contents[0] == '?xml version="1.0" encoding="utf-8"?'
+        assert "html" == soup.contents[0].next_element.name
diff --git a/python/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase b/python/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase new file mode 100644 index 0000000..b34be8b --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-4818336571064320.testcase @@ -0,0 +1 @@ +ÿ

\ No newline at end of file diff --git a/python/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase b/python/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase new file mode 100644 index 0000000..0fe66dd Binary files /dev/null and b/python/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5167584867909632.testcase differ diff --git a/python/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase b/python/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase new file mode 100644 index 0000000..367106c --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/tests/fuzz/clusterfuzz-testcase-minimized-bs4_fuzzer-5703933063462912.testcase @@ -0,0 +1,2 @@ + +' + "clusterfuzz-testcase-minimized-bs4_fuzzer-5843991618256896", + + # b'ñ' + "clusterfuzz-testcase-minimized-bs4_fuzzer-6241471367348224", + + #
<table>, some ^@ characters, some <table> tags.
+            "clusterfuzz-testcase-minimized-bs4_fuzzer-6600557255327744",
+
+            # Nested table
+            "crash-0d306a50c8ed8bcd0785b67000fcd5dea1d33f08"
+        ]
+    )
+    def test_html5lib_parse_errors(self, filename):
+        markup = self.__markup(filename)
+        print(BeautifulSoup(markup, 'html5lib').encode())
+
+    def __markup(self, filename):
+        if not filename.endswith(self.TESTCASE_SUFFIX):
+            filename += self.TESTCASE_SUFFIX
+        this_dir = os.path.split(__file__)[0]
+        path = os.path.join(this_dir, 'fuzz', filename)
+        return open(path, 'rb').read()
diff --git a/python/lib/python3.11/site-packages/bs4/tests/test_html5lib.py b/python/lib/python3.11/site-packages/bs4/tests/test_html5lib.py new file mode 100644 index 0000000..4197720 --- /dev/null +++ b/python/lib/python3.11/site-packages/bs4/tests/test_html5lib.py @@ -0,0 +1,224 @@
+"""Tests to ensure that the html5lib tree builder generates good trees."""
+
+import pytest
+import warnings
+
+from bs4 import BeautifulSoup
+from bs4.element import SoupStrainer
+from . import (
+    HTML5LIB_PRESENT,
+    HTML5TreeBuilderSmokeTest,
+    SoupTest,
+)
+
+@pytest.mark.skipif(
+    not HTML5LIB_PRESENT,
+    reason="html5lib seems not to be present, not testing its tree builder."
+)
+class TestHTML5LibBuilder(SoupTest, HTML5TreeBuilderSmokeTest):
+    """See ``HTML5TreeBuilderSmokeTest``."""
+
+    @property
+    def default_builder(self):
+        from bs4.builder import HTML5TreeBuilder
+        return HTML5TreeBuilder
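TestHTML5LibBuilder runs the whole smoke-test suite against the html5lib builder; application code selects the same builder simply by name. A minimal sketch, assuming html5lib is installed:

    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<a><b>foo</a>", "html5lib")
    # html5lib builds a full document skeleton, as a browser would:
    print(soup.decode())   # <html><head></head><body><a><b>foo</b></a></body></html>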
+    def test_soupstrainer(self):
+        # The html5lib tree builder does not support SoupStrainers.
+        strainer = SoupStrainer("b")
+        markup = "<p>A <b>bold</b> statement.</p>"
+        with warnings.catch_warnings(record=True) as w:
+            soup = BeautifulSoup(markup, "html5lib", parse_only=strainer)
+        assert soup.decode() == self.document_for(markup)
+
+        [warning] = w
+        assert warning.filename == __file__
+        assert "the html5lib tree builder doesn't support parse_only" in str(warning.message)
+
+    def test_correctly_nested_tables(self):
+        """html5lib inserts <tbody> tags where other parsers don't."""
+        markup = ('<table id="1">'
+                  '<tr>'
+                  "<td>Here's another table:"
+                  '<table id="2">'
+                  '<tr><td>foo</td></tr>'
+                  '</table></td>')
+
+        self.assert_soup(
+            markup,
+            '<table id="1"><tbody><tr><td>Here\'s another table:'
+            '<table id="2"><tbody><tr><td>foo</td></tr></tbody></table>'
+            '</td></tr></tbody></table>')
+
+        self.assert_soup(
+            "<table><thead><tr><td>Foo</td></tr></thead>"
+            "<tbody><tr><td>Bar</td></tr></tbody>"
+            "<tfoot><tr><td>Baz</td></tr></tfoot></table>")
+
+    def test_xml_declaration_followed_by_doctype(self):
+        markup = '''<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE html>
+<html>
+  <head>
+  </head>
+  <body>
+   <p>foo</p>
+  </body>
+</html>'''
+        soup = self.soup(markup)
+        # Verify that we can reach the <p> tag; this means the tree is connected.
+        assert b"<p>foo</p>" == soup.p.encode()
+    def test_reparented_markup(self):
+        markup = '<p><em>foo</p>\n<p>bar<a></a></em></p>'
+        soup = self.soup(markup)
+        assert "<body><p><em>foo</em></p><em>\n</em><p><em>bar<a></a></em></p></body>" == soup.body.decode()
+        assert 2 == len(soup.find_all('p'))
+
+    def test_reparented_markup_ends_with_whitespace(self):
+        markup = '<p><em>foo</p>\n<p>bar<a></a></em></p>\n'
+        soup = self.soup(markup)
+        assert "<body><p><em>foo</em></p><em>\n</em><p><em>bar<a></a></em></p>\n</body>" == soup.body.decode()
+        assert 2 == len(soup.find_all('p'))
+
+    def test_reparented_markup_containing_identical_whitespace_nodes(self):
+        """Verify that we keep the two whitespace nodes in this
+        document distinct when reparenting the adjacent <tbody> tags.
+        """
+        markup = '<table> <tbody><table> <tbody><tbody></tbody> </table> </table>'
+        soup = self.soup(markup)
+        space1, space2 = soup.find_all(string=' ')
+        tbody1, tbody2 = soup.find_all('tbody')
+        assert space1.next_element is tbody1
+        assert tbody2.next_element is space2
+
+    def test_reparented_markup_containing_children(self):
+        markup = '<div><a>aftermath<p><noscript>target</noscript>aftermath</a></p></div>'
+        soup = self.soup(markup)
+        noscript = soup.noscript
+        assert "target" == noscript.next_element
+        target = soup.find(string='target')
+
+        # The 'aftermath' string was duplicated; we want the second one.
+        final_aftermath = soup.find_all(string='aftermath')[-1]
+
+        # The